Wikipedia:Village pump (idea lab)

The idea lab section of the village pump is a place where new ideas or suggestions on general Wikipedia issues can be incubated, for later submission for consensus discussion at Village pump (proposals). Try to be creative and positive when commenting on ideas.
Before commenting, note:

  • This page is not for consensus polling. Stalwart "Oppose" and "Support" comments generally have no place here. Instead, discuss ideas and suggest variations on them.
  • Wondering whether someone already had this idea? Search the archives below, and look through Wikipedia:Perennial proposals.

Discussions are automatically archived after remaining inactive for 10 days.


Add a bot/policy that bans AI edits from non-extended confirmed users

I saw this thread yesterday and I wanted to chime in with this idea I had, but I waited too long to act on it and now it's archived. So I guess I'll have to make a new thread.

It's clear that lots of new editors struggle to make good content with AI assistance, and something has to be done. WP:G15 is already a good start, but I think restrictions can be extended further. Extended confirmation on Wikipedia is already somewhat of a benchmark to qualify editors to edit contentious articles, and I think the same criterion would do well to stop the worst AI slop from infecting mainspace. As for how this would be implemented, I'm not sure - a policy would allow human intervention, but a bot designed like ClueBot NG might automate the process if someone knows how to build one. Koopinator (talk) 10:50, 18 October 2025 (UTC)[reply]

I don't see a practical way to enforce that. I also don't think that people's skill level with AI transfers to an assessment of their skill level on Wikipedia. —TheDJ (talkcontribs) 11:31, 18 October 2025 (UTC)[reply]
Regarding enforcement, I would suggest:
1. Looking at whatever process ClueBot uses to detect and evaluate new edits, and adding an "extended confirmed/non-ec" clause.
1.1. I will admit I'm not entirely sure of how this would work on a technical level, which is why I posted this idea in the idea lab.
2. Looking to word frequency, as in User:Gnomingstuff/AI experiment, to distinguish AI from non-AI edits. Koopinator (talk) 15:32, 18 October 2025 (UTC)[reply]
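
A minimal sketch of what such a word-frequency check might look like, assuming a hand-picked list of marker words; the list and threshold here are illustrative guesses, not the actual method of the experiment linked above:

```python
import re
from collections import Counter

# Illustrative list of words often over-represented in LLM output relative
# to typical Wikipedia prose; a real heuristic would derive this from data.
AI_MARKER_WORDS = {"delve", "tapestry", "showcase", "pivotal", "fostered",
                   "underscores", "testament", "vibrant", "boasts"}

def ai_word_score(text: str) -> float:
    """Return the fraction of words in `text` that are AI 'marker' words."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    return sum(counts[w] for w in AI_MARKER_WORDS) / len(words)

# An edit might be flagged for human review (never auto-reverted) when the
# score crosses some tuned threshold, e.g. ai_word_score(added_text) > 0.01.
```
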
please don't use this in any kind of blocking enforcement capacity, it is not remotely ready for anything like that Gnomingstuff (talk) 17:41, 20 October 2025 (UTC)[reply]
A person's willingness to use AI on Wikipedia is an immediate and absolute WP:NOTHERE, in my opinion. TooManyFingers (talk) 05:50, 4 November 2025 (UTC)[reply]
Too sweeping an opinion, in my opinion. First, you would have to be talking specifically about using unsupervised AI to write articles. Secondly, I think it would be "insistence" rather than "willingness". And thirdly, it could well be a WP:CIR or user-education issue rather than a NOTHERE one. All the best: Rich Farmbrough 18:03, 6 November 2025 (UTC).[reply]
Do you have any evidence that extended confirmed users create any better edits with AI than users who are not extended confirmed? Phil Bridger (talk) 14:33, 18 October 2025 (UTC)[reply]
I would say it's a reasonable inference. Here's what I can say:
  • We can expect that extended-confirmed users are more likely to be familiar with Wikipedia's policies and guidelines, by virtue of having been here longer.
  • Some anecdotal evidence:
    • [1] LLM edit with no sources, which survived for almost 2 months. It was created by an editor who was neither confirmed nor extended confirmed.
    • [2] A personal project by yours truly; AI assistance was used, with careful review of the text-source integrity of every sentence as I constructed the page in my sandbox over the course of 59 days before publishing it.
  • I admit none of this is hard evidence.
I do feel LLMs have their place on the site (otherwise I wouldn't have used ChatGPT assistance in constructing a page), but if they're allowed, the barrier for usage really should be raised. Wikipedia's content translation tool is also restricted to extended-confirmed users.
Koopinator (talk) 15:25, 18 October 2025 (UTC)[reply]
The issue is raising the bar to prevent bots from editing Wikipedia using LLMs. LDW5432 (talk) 19:57, 27 October 2025 (UTC)[reply]
LLM detection for text is very hard and has far, far too many false positives, especially for non-native speakers and certain wavelengths of autism. Aaron Liu (talk) 16:41, 18 October 2025 (UTC)[reply]
^ This is my experience. Also, a lot of edits are too brief for the already-dodgy AI "detectors" to evaluate reliably.
@Koopinator, you've made around 2,000 mainspace edits in the last ~2 years. Here's a complete list of all your edits in which the visual editor could detect more than a handful of words being added.[3] It's 78 edits (4% of your edits) – less than once a week on average. And I'd guess that half of your content additions are too short to have any chance of using an anti-AI tool on, so the anti-AI tool would check your edits two or three times a month. Why build something, if it could only be useful so rarely? WhatamIdoing (talk) 00:58, 19 October 2025 (UTC)[reply]
Well, how would that tool's frequency scale across the entire Wikipedia community? I'd imagine it'd be used at least a little bit more often then. (or, I imagine, multiple orders of magnitude) Koopinator (talk) 05:55, 19 October 2025 (UTC)[reply]
For brand-new editors, it might capture something on the order of half of mainspace edits. High-volume editors are much more likely to edit without adding any content, so it'd be much less useful for that group. WhatamIdoing (talk) 19:54, 23 October 2025 (UTC)[reply]
We could at least use a flagging system for vandalism review. LDW5432 (talk) 14:05, 6 November 2025 (UTC)[reply]
It should be possible to detect low-hanging-fruit AI text, based on certain common features. Raw AI inference cut and pasted from a chatbot is going to be easier to detect. I agree that the type of user doing this probably has no reputation at stake, doesn't care very much, and is more likely to be a newbie and/or a non-native speaker from another wiki. I don't know about policy, but a bot that sends a talk page notice, or flags the edit summary with a "[possible ai]" tag, could work. Is no one already working on this? -- GreenC 17:10, 18 October 2025 (UTC)[reply]
mw:Edit check/Tone Check uses a Small language model to detect promotionalism. (See tagged edits.) I'd guess that it would be possible to add an AI detector to that, though the volume involved would mean the WMF would need to host their own or pay for a corporate license and address the privacy problems.
mw:Edit check/Paste Check is probably more efficient, though, as anyone copying from a chatbot is going to be pasting it into the article, and detecting a big paste is easier than checking the words that were pasted in. WhatamIdoing (talk) 01:04, 19 October 2025 (UTC)[reply]
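
The paste-size signal is simple enough to sketch; here is a rough illustration using Python's standard difflib (the 1,500-character threshold is an invented example, not anything Edit Check actually uses, and a real Paste Check runs client-side at edit time):

```python
import difflib

def largest_insertion(old_text: str, new_text: str) -> int:
    """Length in characters of the largest contiguous block of text
    inserted between two revisions."""
    matcher = difflib.SequenceMatcher(None, old_text, new_text)
    return max(
        (j2 - j1 for tag, i1, i2, j1, j2 in matcher.get_opcodes()
         if tag in ("insert", "replace")),
        default=0,
    )

PASTE_THRESHOLD = 1500  # characters; an arbitrary example value

def looks_like_big_paste(old_text: str, new_text: str) -> bool:
    """Flag for human review (never auto-revert) when one block is huge."""
    return largest_insertion(old_text, new_text) >= PASTE_THRESHOLD
```
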
I think AI edits should be mandatory for everyone to disclose, both in articles and talk pages. There could be a box where you check it if your content comes from AI or is mostly AI, similar to how you can check minor edits. Bogazicili (talk) 18:40, 21 October 2025 (UTC)[reply]
Having a UI element like that would work towards legitimizing LLM use in creating text for Wikipedia. Merko (talk) 00:41, 22 October 2025 (UTC)[reply]
I agree: Either it will allow the material to be posted and thus legitimize LLM use, or it won't allow the material to be posted and cause people to tell lies so they can get it posted. WhatamIdoing (talk) 02:18, 22 October 2025 (UTC)[reply]
Do we currently have a policy on LLM usage? This one seems to have failed: Wikipedia:Large language model policy.
My position is that if it's not banned, it should be declared. Bogazicili (talk) 10:45, 23 October 2025 (UTC)[reply]
I thought the failed policy proposal was supposed to require people to declare it. WhatamIdoing (talk) 20:00, 23 October 2025 (UTC)[reply]
Almost 2 years ago. Merko (talk) 22:09, 23 October 2025 (UTC)[reply]
LLM-generated content is a cancer on Wikipedia, and it will only get worse. "AI detectors" have many false positives, as do checks made by editors themselves, but just because we can't reliably detect something today doesn't mean we shouldn't implement a policy against it. I support mandating the disclosure of LLM-generated contributions by all users. We don't treat WP:GNG differently on articles created by extended-confirmed users versus others, so we shouldn't do it here either. Merko (talk) 22:21, 21 October 2025 (UTC)[reply]
If you think original content generated by a program is a negative to that extent, then I don't think requiring disclosure is the appropriate approach, since that would only be a prelude to removal. We should skip straight to requiring editors not to use programs to generate original content. isaacl (talk) 04:38, 22 October 2025 (UTC)[reply]
Wikipedia should first address LLM content from anonymous IPs. LDW5432 (talk) 19:56, 27 October 2025 (UTC)[reply]
IP editing actually isn't that much of a problem here -- in my experience almost all AI text I find came from someone with a registered account. Off the top of my head I'd say less than 10% of it comes from IPs.
This may change with temporary accounts in a few days though, who knows. Gnomingstuff (talk) 20:56, 30 October 2025 (UTC)[reply]
I came here to propose pretty much the same thing (policy, not bot). Having a blanket rule would be hugely helpful in dealing with editors, since it can get very tedious explaining why each AI edit they claim to have checked is in fact problematic. I might even go so far as to propose a separate user right (or pseudo-right?) called something like LLM user, for editors who can demonstrate they are sufficiently competent with content policies and have a legitimate use case. I don't think such a right should convey any actual abilities, but users found to be using LLMs without it could then be much more easily censured and guided towards other forms of editing. Applying exactly the same system but tying it to extended confirmation seems like it minimizes potential rule creep, but it's a blunter filter which might not be as effective, since I'm sure there are plenty of extended confirmed users who lack the requisite understanding of policy. lp0 on fire () 21:03, 10 November 2025 (UTC)[reply]
That is probably a good idea, but I don't see any way to enforce it automatically and also do it well, as it would not be good if someone got flagged for using AI when they did not, and Wikipedia is so large that it would happen a lot. I believe that AI should be used extremely rarely on Wikipedia, as it is known to hallucinate misinformation and drag on and on about things that don't matter (see: Grokipedia, or search up AI hallucinations). It has many chances to cause things to go awry, and should not be made mainstream as a way to enhance/speed up editing. I suggest this be done by humans: if a new user joins Wikipedia and is flagged or seen on talk pages, maybe give their edits a look, just to make sure they're doing good work. Some ways to spot AI writing are looking for constant groups of three (like, LOTS, basically every sentence); unusual use of em dashes (which look like a bigger hyphen: — vs. -), as they are not on a normal keyboard and take either a copy-and-paste or a very unusual keyboard shortcut to type; and repeated info or full paragraphs that don't really say/mean anything. A lot of these are hard to give examples for, and you just have to see them for the first time to start noticing. Overall, I agree that there should be restrictions on AI edits. Oak lod (talk) 15:49, 20 November 2025 (UTC)[reply]
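
For what it's worth, surface features like these are easy to count mechanically; a toy sketch (the patterns are rough approximations, and none of this proves AI authorship, since humans use em dashes and triads too):

```python
import re

def surface_features(text: str) -> dict:
    """Count a few surface features often associated with LLM prose."""
    return {
        # "Groups of three": comma-separated triads like
        # "clear, concise, and compelling".
        "triads": len(re.findall(r"\w+, \w+,? and \w+", text)),
        # Em dashes (U+2014), awkward to type on a standard keyboard.
        "em_dashes": text.count("\u2014"),
        # Sentence count, for normalizing the figures above by length.
        "sentences": max(1, len(re.findall(r"[.!?](?:\s|$)", text))),
    }

# A high triads-per-sentence ratio might merit a closer human look, nothing more.
```
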
I strongly support the suggestion and would even go as far as suggesting a new flag. AI as a tool is similar to WP:AWB: in unskilled or malicious hands it can do a lot of damage in a short amount of time. Correspondingly, use of AWB is not allowed for drive-by accounts. Similar logic applies to AI, IMHO. For the avoidance of doubt, I think that proper use of AI improves articles, so I think that we should regulate the use of AI, not prohibit it. Fear of outright hallucination is overblown, as far as I can tell: as long as the input was explicitly restricted to correct sources (either a foreign-language Wikipedia article or manually selected WP:RS), there were no hallucinations. Note that the texts of the reliable sources you are planning to use for the article should be fed to the engine first in their entirety; for some reason the AI engines are really shy when it comes to actually fetching information off the Web (I suspect there are legal reasons in play here), so if you just point to the sources, the AI will start generating ideas of its own instead of summarizing the WP:RS as it should. Викидим (talk) 00:14, 24 November 2025 (UTC)[reply]
What if we made a box that allows people to flag their own edits as AI-assisted, plus a warning letting people know that fully AI-generated content will be taken down in accordance with a policy, and that partially AI-assisted content must be marked so that humans can review it, or it will be taken down if not marked? (If there's not already a policy banning unreviewed AI text, make one.) Then we make a bot like ClueBot to detect AI slop, revert it, and leave a warning, but we have it set to be very cautious so it minimizes false positives. I think this would solve the problem, and it neatly combines all the ideas I saw above. RBarr-12@wiki:~/user/talk/contribs 20:07, 2 December 2025 (UTC)[reply]
That's probably the best solution. Good idea.
Oak lod (talk) 20:14, 2 December 2025 (UTC)[reply]
It won't work.
First, Wikipedia:Nobody reads the directions. Then, if someone does manage to see the checkbox, they'll check it... and check back, and if their edit has been reverted, they will never check it again. We have evidence of this in the in-editor image-uploading tools. If people believe it's reasonable to upload a corporate logo (or some other common type of image), then they'll tick whatever box you require. Sure, I own the copyright to the McDonald's logo. Sure, I wrote all that myself. Sure, I'll give my firstborn to Rumpelstiltskin. Whatever is necessary to do the task, people will claim they've done. WhatamIdoing (talk) 02:02, 5 December 2025 (UTC)[reply]
Man. I guess the simplest, easiest, and first solution you think of really is never the best solution.
Oak lod (talk) 15:54, 5 December 2025 (UTC)[reply]
Good point. Maybe simplify to just the bot checking for AI content, warning editors. Basically, a Cluebot clone for AI detection. RBarr-12@wiki:~/user/talk/contribs 17:32, 5 December 2025 (UTC)[reply]
The problem is that "AI content" is nebulous and hard to define; any automated tagging will either include false positives or so many false negatives as to be useless (or both). Any edits flagged for being AI will include some that have no problems at all, some that have issues that can be trivially fixed, and some that have genuinely serious issues. These issues will be a mix of all types, making them harder to fix (e.g. it will mix non-existent references caused by minor errors in with hallucinated references, text that includes prompts for the user, and other problems). Thryduulf (talk) 18:45, 5 December 2025 (UTC)[reply]
Honestly, the best way to find and fix issues with AI content would most likely be to have no specific bot for catching these edits. It might be best to just use the hundreds of thousands of editors already looking for errors in pages instead, as the best detector of non-human content is a human. Oak lod (talk) 15:52, 8 December 2025 (UTC)[reply]
Which is why, when someone finds an error they can't (for whatever reason) fix themselves there and then, they should be encouraged to tag it with the specific problem (failed verification, inappropriate tone, etc.) rather than a generic AI tag. Being specific about what the problem is means others don't have to spend time figuring it out (what's obvious to one person isn't necessarily obvious to someone else), and those editors who are looking to fix problems of a given type know that such a problem exists in that article. Thryduulf (talk) 16:18, 8 December 2025 (UTC)[reply]
The problem is that there are too many people willing to use AI, and AI can compose long stretches of almost-plausible ideas. More inaccuracies are being produced than people are willing to spend their time reverting, rather than working on their new draft paper. We can't control the people making the edits until after they've made those edits, so we either need more editors or some assistance to make the editors we do have more efficient somehow. I don't know what that assistance would look like though. RBarr-12@wiki:~/user/talk/contribs 17:44, 8 December 2025 (UTC)[reply]
I'm not understanding how that follows on from my comment? Thryduulf (talk) 18:01, 8 December 2025 (UTC)[reply]
Accidental reply to your comment, meant to reply to Oak Iod's comment above. RBarr-12@wiki:~/user/talk/contribs 18:52, 8 December 2025 (UTC)[reply]
We could use those specific tags to train AI-detecting models. The edits with those tags could be validated if plausible, or just straight-up added to the training data. After the model has been trained enough and proves to be accurate, we could do a test run on Wikipedia, with the bot only tagging edits and not reverting them. People could then remove the tag to indicate a false positive, and the numbers of bot-applied tags that were removed and that stayed could be counted up. If there are too many false positives, then the bot is scrapped or retrained. This could be repeated as many times as found necessary. The bot could also be built upon already-existing bots. This is a little far-fetched, as the issue is not that large, Wikipedia might not be built for a system like that, it could be very expensive, and Wikipedia hasn't done anything like this before as far as I am aware. This also might raise issues similar to other solutions.
I also think that RBarr-12 intended to reply to my comment. Oak lod (talk) 18:14, 8 December 2025 (UTC)[reply]
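
If such a trial run were held, the bookkeeping described above reduces to a simple precision estimate; a sketch (the 0.90 bar is an invented example, and kept tags may simply be unreviewed, so this overestimates accuracy):

```python
def trial_precision(tags_kept: int, tags_removed: int) -> float:
    """Precision of the tagging bot during a trial, treating each
    human-removed tag as a false positive and each kept tag as correct."""
    total = tags_kept + tags_removed
    return tags_kept / total if total else 0.0

# Example: 940 kept and 60 removed gives 0.94; whether that clears a bar
# such as 0.90 would be for the community to decide.
```
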
IDK about the technical feasibility of scanning all edits with a bot, but the policy side of this is just WP:LLMDISCLOSE. -- LWG talk 20:42, 2 December 2025 (UTC)[reply]
Here's yet another AI edit that survived for years: Special:Diff/1145967029. I only noticed this one when some user decided to add an {{AI-generated}} tag to it, in 2025, and the edit was made in 2023. The user, by the way, had 144 edits and a history of creating AI-looking pages like this one. Somepinkdude (talk) 14:47, 13 December 2025 (UTC)[reply]
I feel that those who have minimal experience in editing, like me, should not be allowed to use AI to make edits. This is something that needs to be considered: AI cannot help you unless you already know what you are doing. I can't have AI write code for me if I don't have the skills necessary to interpret it, give the correct input to the AI, and know when the AI is lying. Sources must also be considered. Most of the content on here is facts that must be backed up by sources. It should be mandatory that research is done and that facts never originate from AI; rather, AI should only be allowed as a phrasing tool. Because I get it: it's quite annoying to write super long sections. You have to go look stuff up, then cite, then you go back to writing for a sentence, and then it loops, and by the end you are just left with an incomprehensible, dense-as-hell paragraph that is useless because people can't understand it. But if the facts that the writer already collected are then phrased by an AI assistant, that should be allowed; it would aid comprehensibility. CatLove989 (talk) 16:59, 14 December 2025 (UTC)[reply]
I agree with you. RituPunMagar (talk) 13:13, 15 December 2025 (UTC)[reply]
Pinging the operators of Cluebot, @DamianZaremba: and @Rich Smith:, to see if they have possible implementation ideas. Koopinator (talk) 17:56, 16 December 2025 (UTC)[reply]

Idea for International Mentoring Day '26 & beyond

Recently I have learned that there is an International Mentoring Day on 17 January. The UK and the US also have national commemorations to celebrate mentoring and thank mentors of all sorts (e.g. in corporate mentoring programmes, adult-led youth groups, and teaching). In the UK, this is 27 October; in the US, the entire month of January.

With this in mind, I would like to propose that Wikipedia:

  • Start an annual commemoration on January 17 of this coming year with notification about the day somewhat in advance, and encouragement to all editors to take a few minutes to thank their mentors whether current or past, as well as those who offer guidance as Teahouse, Help Desk, and Village Pump staff;
  • Share stories about how mentoring helped; and
  • Offer "Did You Know?" tidbits around and on January 17 about how the commemorations came about in the UK and the US.

As we are a little over 9 weeks away from January 17, there would be adequate time to plan for its commemoration on Wikipedia if the decision is taken to carry this idea forward. ~2025-33078-41 (talk) 17:52, 12 November 2025 (UTC)[reply]

The problem with days of X is that anyone can declare any day the day of X and these things die after a year or two when a few people forget about them.
Also I haven't really seen much active mentoring on Wikipedia, but that can be my fault because it is not the kinda thing I would notice. Polygnotus (talk) 03:42, 20 November 2025 (UTC)[reply]
There really is an International Mentoring Day on 17 January. It was started as an extension of the US National Mentoring Month (held throughout the month of January), but is now encouraged worldwide.
Because mentorship is an important part of Wikipedia for many editors, it just seems like promoting the day would be a wonderful way to honor those who serve in this way.
Do you have any idea where else in the world of Wikipedia this suggestion could be raised with a greater likelihood of taking it further? ~2025-36716-26 (talk) 10:13, 27 November 2025 (UTC)[reply]
No clue, sorry. Polygnotus (talk) 10:32, 27 November 2025 (UTC)[reply]
I think I have just found what seems a good step to move forward with this idea: to make a "Central Notice banner request." ~2025-37075-42 (talk) 16:54, 28 November 2025 (UTC)[reply]
Central Notice banners are rarely used, and then only for fully fleshed-out ideas with consensus behind them that have already been implemented.
So far you've reached one person, and they were not enthusiastic about the idea.
Is there a reason you would like to push this? That could include, but is not limited to, being involved with the people or organization that decided to give that day that label, or that joined the initiative. Polygnotus (talk) 17:07, 28 November 2025 (UTC)[reply]
The only reason I would like to "push this," Polygnotus, is because of the wonderful guidance I've received from my own mentor, as well as from many other knowledgeable editors who staff Wikipedia help venues ... and the immense appreciation I've come to feel for their volunteering of time and effort.
No, I'm not at all involved with any of the people or organizations who created or joined the International Mentoring Day initiative. It was only at some point this year that I even heard of such a day. ~2025-39632-68 (talk) 11:59, 9 December 2025 (UTC)[reply]
Maybe try the Teahouse? Polygnotus (talk) 12:00, 9 December 2025 (UTC)[reply]

IP talk page blanking bots, now that we have temporary accounts

Three years ago, an editor got consensus to create a bot to blank all stale IP talk pages (Wikipedia:Village pump (proposals)/Archive 190#RfC: Bot to blank old IP talkpages). The main reason for this was that "Stale warnings and other messages will confuse legitimate new editors editing from that IP seeing it apparently directed at them".

Fast forward to 2025, and we have temporary accounts; new editors will never be directed to IP talk pages. So we don't need to worry about scaring them off.

Given that, I would like to see what the community's attitude is toward this problem now.

Personally, I made this post because I'm trying to track down a Mississippi IP editor who inserted copyright violations into articles about American TV soaps, so I can remove the copyvios. Having their talk pages easily accessible, for searching and whatnot, would be very helpful. Speaking more generally in terms of my CCI work, non-obscured, accessible talk pages allow me to more easily link to previous warnings, track copyright violations that were spotted at the time, and track older socks[4][5][6][7], especially if they were duck-blocked at the time but not recorded at SPI. I also only have 24 hours in each day; time spent going back to previous revisions is time I'm not spending removing problematic content. GreenLipstickLesbian💌🧸 09:35, 23 November 2025 (UTC)[reply]

I support stopping the bot. It has served its purpose. Toadspike [Talk] 09:42, 23 November 2025 (UTC)[reply]
I do too. Thryduulf (talk) 11:00, 23 November 2025 (UTC)[reply]
+1 ~/Bunnypranav:<ping> 12:25, 23 November 2025 (UTC)[reply]
I'd support stopping this. I looked quickly, but maybe it's faster (I'm not sure of the best way to find this) to just ask: is any non-blocked bot currently performing this task? Skynxnex (talk) 12:33, 23 November 2025 (UTC)[reply]
The task was inherited by User:VulpesBot (run sporadically by Dr vulpes, but they've said they plan to run it again, I believe?), but I know some editors do large AWB runs to indiscriminately blank the old IP talk pages. GreenLipstickLesbian💌🧸 20:34, 23 November 2025 (UTC)[reply]
Ah, thanks. Still agree we should stop blanking them at this point. (And earlier maybe would have been better.) Skynxnex (talk) 21:36, 23 November 2025 (UTC)[reply]
  • Just to clarify, are we talking about stopping the bot with respect to temporary accounts? Because the bot is set to only blank pages for IPs who have not edited in over five years, there are still tens of thousands of IP talk pages identifying IP addresses. If you look at, for example, User talk pages that link to "Blueberry", there are dozens of them just on that list. BD2412 T 18:50, 23 November 2025 (UTC)[reply]
    No, it is for IP talk pages only, per what I understood from GLL's example above. ~/Bunnypranav:<ping> 18:53, 23 November 2025 (UTC)[reply]
    No, it's stopping it for the talk pages of IPs. There are benefits to not blanking these IP talk pages (detailed in GLL's first post), and given that no new editors will be assigned these talk pages in the future, there remain almost no benefits to blanking them.
    Whether talk pages of temporary accounts should be blanked after the account expires is not something I can recall seeing anywhere and is not part of this proposal, but given that they will not be reused I can't immediately see any benefits to doing so. Thryduulf (talk) 19:41, 23 November 2025 (UTC)[reply]
    I agree with Thryduulf that I see no benefit to blanking them. I do see potential harm, however, for much the same reason. I often use the What Links Here tool to investigate, and if TA talk pages get blanked, then just like with old IPs, I am no longer able to do that. GreenLipstickLesbian💌🧸 20:42, 23 November 2025 (UTC)[reply]
    I would think your use of "What Links Here" is hampered by an excess of links to IP talk pages from which no edits have come in many years, even decades. Wikipedia's purpose is not to serve as a permanent host for long-irrelevant IP talk page messages. That should be even less so when the IP talk pages no longer reflect any current account usage due to the changeover. BD2412 T 20:57, 23 November 2025 (UTC)[reply]
    Interestingly enough, it is not - generally, if there are enough links to IP talk pages to become unusable, then there are enough links to registered account talk pages to be unusable. Removing IP talk pages just hampers my ability to look for historic disruption on lower-trafficked pages, and also stops me from being able to use the search tool as effectively. GreenLipstickLesbian💌🧸 21:03, 23 November 2025 (UTC)[reply]
    To be perfectly clear, the typical ancient IP talk page message has been where the IP did something like randomly add "poop" to an article once or twice in, say, 2012, got reverted with a warning, and no other edits ever came from that IP address (although I grant that most of those have already been blanked). I think we can refine the model to maintain pages where there is a possibility of copyvio involvement or the like, but I am at least dubious about the long term value of maintaining those pages. BD2412 T 21:47, 23 November 2025 (UTC)[reply]
    A lot of these old accounts don't always get reverted for copyvio; they get reverted with anti-spam, anti-unsourced-content, and page-hijacking warnings - really, pretty much every warning under the sun. Knowing at a glance that an account was editing disruptively in a topic area is still very useful. See User talk:70.49.196.202 or User talk:62.28.161.202 for examples - I just reverted a bot blanking on the first, and the other was saved because the IP got notified of an AfD late last year. Both of these editors have still-open CCIs which either have been or will need to be expanded to include IP edits.
    If somebody sees an IP where the IP only made one vandal edit, got warned, and would rather blank the talkpage than fix whatever lint error they found, they could still do so manually. GreenLipstickLesbian💌🧸 22:04, 23 November 2025 (UTC)[reply]
    @BD2412 VulpesBot is exclusion compliant so you can just stick {{nobots}} on User talk:70.49.196.202 if you want. Polygnotus (talk) 00:00, 24 November 2025 (UTC)[reply]
    That was for me. I do a lot of IP talk page blanking outside of VulpesBot's strictures. BD2412 T 00:02, 24 November 2025 (UTC)[reply]
    I agree that there's no need to hide the content of these pages, and since temp accounts only last for 90 days (under the current configuration), there's no need to ever blank those. WhatamIdoing (talk) 21:18, 23 November 2025 (UTC)[reply]
Support. The bots could be doing something else, so they're wasting resources and/or time. As noted by GLL, it makes work harder for those trying to check for violations that the IP has committed, so it is bad for that reason as well. I think the case made for blanking IP user talk pages isn't strong enough, because it doesn't fix the linter errors (the revisions still show errors on that talk page); it only ignores them. As noted by GLL (again), the bots could be made to ignore old IP talk pages, and they could be configured to stop showing up in reports. An argument against this was that this approach would be sweeping errors under the rug, but (as noted by GLL, again) so is blanking the talk page. User:Easternsaharareview this 19:21, 15 December 2025 (UTC)[reply]
The above comment shows a misunderstanding of how the Linter error pages are generated and maintained by the MediaWiki software. The misunderstanding makes sense: it is clear to me from this discussion that the group of people who monitor the Linter reports and make other maintenance edits to pages do not overlap much, if at all, with the group of people who check for misbehavior by unregistered editors. Also, blanking does remove the Linter errors from the affected pages. Yes, previous revisions often contain errors of many types, but there are no reports or automatic MediaWiki tracking that look at old revisions, as far as I know. – Jonesey95 (talk) 19:39, 15 December 2025 (UTC)[reply]

Please see WP:BOTAPPEAL for instructions on how to start a discussion about reexamination of approved bot tasks. – Jonesey95 (talk) 00:15, 5 December 2025 (UTC)[reply]

Note there's probably not an approval to review in this case. Wikipedia:Bots/Requests for approval/VulpesBot was approved as a one-time run ("will return six months after run is complete to request a rerun", which didn't happen), while the operator of Wikipedia:Bots/Requests for approval/MalnadachBot 13 is sock-blocked. Also note that establishing that consensus has changed would be a necessary part of a review, so a Village pump discussion would still be useful to establish that. Anomie 00:28, 5 December 2025 (UTC)[reply]

One benefit of blanking IP talk pages

(copied and expanded from Wikipedia:Village pump (proposals)): Multiple editors above have said that they see no benefit in blanking IP talk pages. Here's a counterpoint. Most of them are not harmful, but I recently found User talk:144.160.98.31 on a report of Linter errors. Its only edits in the last twelve years had been seven edits by bots to perform various cleanup tasks, and when I visited, there were still 18 Linter errors on the page, meaning that someone was going to edit that page in the future to clean it up. I replaced its content with {{blanked IP talk}}. If someone had done that years ago, those seven bot edits would have been unnecessary. It made me wonder if there was any point in maintaining any of the IP editor talk pages, since there are (in my understanding) no more IP editors. Can we just blank them all, or at least blank the ones that have errors so that they don't clog up error reports? Is it really useful to maintain a live page with IP editor communication messages that are more than five years old? Editors investigating a particular IP can easily look at the pre-blanked page in the history. – Jonesey95 (talk) 22:04, 4 December 2025 (UTC)[reply]
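
For reference, the blanking edit itself is trivial to script; a minimal Pywikibot sketch (assuming a configured bot account, and of course bot approval before any mass run):

```python
import pywikibot

site = pywikibot.Site("en", "wikipedia")
page = pywikibot.Page(site, "User talk:144.160.98.31")

# Replace the stale content with the standard notice; the old messages
# stay available in the page history for anyone investigating the IP.
page.text = "{{blanked IP talk}}"
page.save(summary="Blanking stale IP talk page", minor=True)
```
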

And lest anyone think that the page linked above is an edge case, here's a link to thousands of IP User talk pages with Linter errors. – Jonesey95 (talk) 22:43, 4 December 2025 (UTC)[reply]
Jonesey95, why do you fix linter errors on those pages? GreenLipstickLesbian💌🧸 22:45, 4 December 2025 (UTC)[reply]
For the same reasons that the MediaWiki developers tagged them. See mw:Help:Extension:Linter § Why and what to fix for details. Note that stale IP User talk pages are not just an attractive nuisance due to Linter errors. They can also contain templates that are being deleted, categories that are being moved, code that has become obsolete, and other required maintenance needs that cause bots or humans to visit them. – Jonesey95 (talk) 22:49, 4 December 2025 (UTC)[reply]
Correct me if I'm wrong, but it looks a lot like it's to keep pages readable as support for various tags changes. (Also, sorry, we should have edit-conflicted when I made my post.) GreenLipstickLesbian💌🧸 22:53, 4 December 2025 (UTC)[reply]
Yes, or to display what the original editor intended without mis-rendering their or anyone else's contributions to the page. – Jonesey95 (talk) 22:58, 4 December 2025 (UTC)[reply]
Do you think blanking the page makes it more readable? GreenLipstickLesbian💌🧸 23:00, 4 December 2025 (UTC)[reply]
Blanking the page means fewer unnecessary bot and human edits while preserving the page history for those who need to see it. – Jonesey95 (talk) 23:04, 4 December 2025 (UTC)[reply]
Or we could just program the bots and tell the humans not to edit the disused pages, and it would have the same impact, right? Sorry if there's something I'm missing, but the lint errors, broken templates, and deleted categories don't suddenly become less broken, deleted, or errorful when you have to look at an old revision, right? GreenLipstickLesbian💌🧸 23:08, 4 December 2025 (UTC)[reply]
Re tell the humans not the edit the disused pages: Pages with errors that show up on reports, lists, or error categories but should be ignored make those reports/lists/categories less manageable, because other pages with problems become less visible. I have not found ignoring some pages on reports to be a useful strategy in my years of gnoming dozens of error reports and categories. Do you regularly monitor reports/lists/categories that have a subset of pages to be ignored? – Jonesey95 (talk) 23:50, 4 December 2025 (UTC)[reply]
In this case I have to back @Jonesey95 up, it is very annoying and complicated when gnoming to keep a blacklist, and gnoming often leads to the discovery of thousands of minor problems, but also a bunch of big problems. Polygnotus (talk) 23:53, 4 December 2025 (UTC)[reply]
Could you set up the report to not include IP talk pages? Or ask the person responsible for the report to remove all IP talk pages? Or just... fix the lint error so that the page remains readable? GreenLipstickLesbian💌🧸 23:54, 4 December 2025 (UTC)[reply]
No, no, and not easily. The reports (I linked to a subset of one above) are generated by the MediaWiki software. The word "just" is doing a lot of work in the last sentence; there are over 7,000 IP user talk pages with Linter errors, with a wide variety of errors. – Jonesey95 (talk) 00:01, 5 December 2025 (UTC)[reply]
Okay, then could you talk to the folks who generate the MediaWiki software? It does look like they're sortable, to some extent - Wikipedia:Linter/reports/IP user talk pages by Lint Errors, for example, only has old IP talk pages. Couldn't you just ignore that page, rather than updating it?
Or, at the very least, if you'd like to blank a user page - could you go through every single one of the IP's contributions, check them for PAG compliance, and do an exhaustive search for any unattributed plagiarism, text-source integrity problems, hoax material, BLP violations, and NPOV issues? And repeat it for any neighboring IPs (like others on the /64) before you hide evidence that those problems existed?
Because that's what I'm trying to do. GreenLipstickLesbian💌🧸 00:06, 5 December 2025 (UTC)[reply]
We can't sweep the errors under the rug, that defeats the whole point of them being reported in the first place. Tenshi! (Talk page) 00:11, 5 December 2025 (UTC)[reply]
Sorry if I'm misreading you, but is blanking them not sweeping them under the rug? GreenLipstickLesbian💌🧸 00:13, 5 December 2025 (UTC)[reply]
It would mean that the lint errors would not be reported, though it doesn't address the issue for anyone looking back at the history before the page was blanked. Tenshi! (Talk page) 00:18, 5 December 2025 (UTC)[reply]
Wikipedians love to debate everything but this proposal is an obvious yes. In the past, stale IP talk pages were routinely blanked to reduce confusion if someone new used the same IP years later. That reason no longer applies. Routine blanking of stale IP pages should not occur now because it would be pointless churn and would hide possibly useful information when searching for old copy-vios or spam. By contrast, stale pages with WP:LINT errors should be cleaned up. Removal of weird wikitext that generates such errors is often best because wasting time polishing stale comments would not be helpful. Simply blanking a stale page with linter errors gives a clue about what happened to anyone investigating the history. Painfully fixing or removing multiple errors on a stale page would obfuscate history and not have any benefit. Johnuniq (talk) 03:58, 6 December 2025 (UTC)[reply]

Instead of showing UTC time, show the time the user is in

On edits, diffs, and posts, the timestamp is always in UTC. Discord has a feature where, when you copy/view a timestamp, it displays the time according to the viewer’s local timezone. For example, if you report a post that occurred at a specific time in your timezone, another user will see the corresponding time in their own timezone, which helps avoid confusion. I believe adopting a similar feature would support the modernization of Wikipedia. Rc2barrington (talk) 02:46, 24 November 2025 (UTC)[reply]

You can have that with User:Mxn/CommentsInLocalTime or WP:LOCO.
This somewhat used to be a built-in feature (m:Help:Date formatting and linking): every date was linked everywhere to automatically convert the timezone according to the user's preference at Special:Preferences#ooui-23. However, various things resulted in the feature being disabled and then removed: Wikipedia:Manual of Style/Dates and numbers#cite_ref-5. Aaron Liu (talk) 03:22, 24 November 2025 (UTC)[reply]
That feature converted the format, but not the time zone. Also, if we wanted, there's a #dateformat parser function that could be used to format dates according to the user preference. But we've never wanted. Anomie 04:05, 24 November 2025 (UTC)[reply]
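
Scripts like the ones linked above presumably parse each signature timestamp and re-render it in the reader's zone; a sketch of that conversion in Python (the target zone here is only an example):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def localize(sig: str, zone: str = "America/New_York") -> str:
    """Convert a signature timestamp such as
    '02:46, 24 November 2025 (UTC)' into the given IANA time zone."""
    dt = datetime.strptime(sig, "%H:%M, %d %B %Y (UTC)")
    dt = dt.replace(tzinfo=timezone.utc).astimezone(ZoneInfo(zone))
    return dt.strftime("%H:%M, %d %B %Y (%Z)")

print(localize("02:46, 24 November 2025 (UTC)"))
# -> 21:46, 23 November 2025 (EST)
```
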

I know this is the idea lab and we're not supposed to just support or oppose, but I can't really find a "yes and" here. I'm generally skeptical of attempts to make users see something different from what was written, even with an opt-in. Fonts and dark mode, OK, I guess, but not actually changing the text. I think that was a mistake from the beginning. --Trovatore (talk) 03:39, 24 November 2025 (UTC)[reply]
The perks of living in England are that UTC is just the current time for me. (outside of summer) GarethBaloney (talk) 11:37, 24 November 2025 (UTC)[reply]
For myself, I have my preferences set so that everything is set to my time zone automatically. The only things that don't get converted are dates and times when I am editing the source.
Converting the time and date when I need to is a bit of a pain, but it is better for me as I can see at a glance on talk pages how long ago the last replies were, which is the most common thing I see related to time on Wikipedia.
In short, I think that what we have works. --Super Goku V (talk) 05:50, 24 November 2025 (UTC)[reply]
DiscussionTools puts "Latest comment: 41 minutes ago" at the top of every talk page and each ==Section==, so you should be able to see at a glance on talk pages how long ago the last replies were no matter what your timezone settings are.
I used to set my local time at Special:Preferences#mw-prefsection-rendering-timeoffset but eventually it became too much of a hassle to keep straight which timestamp on the talk page corresponded to which edit in the page history. I find it much simpler to have the whole thing in UTC. The UTC clock gadget in Special:Preferences#mw-prefsection-gadgets-gadget-section-appearance may be helpful, if you are trying to figure out what time it is in UTC right now. (I turned that off with Vector 2022, though.) WhatamIdoing (talk) 07:18, 24 November 2025 (UTC)[reply]
So, as seen in this image, I just really think it would be better to show the time I AM IN, not the standardized UTC time. Rc2barrington (talk) 01:26, 25 November 2025 (UTC)[reply]
Try the scripts I linked above. Aaron Liu (talk) 01:38, 25 November 2025 (UTC)[reply]
Apparently I don't use DiscussionTools on Wikipedia, but I recall seeing something like that on other Wikis. Still I feel more comfortable seeing the exact time people made their replies rather than seeing the UTC time of when they made their comments. Besides, I don't need to convert the date and time enough to where that would be the bigger hassle. (And yes, I have the UTC clock in the upper-right corner just to keep myself aware of it.) --Super Goku V (talk) 05:56, 30 November 2025 (UTC)[reply]
It seems to be the case that at least some language Wikipedias have adopted the time zone of where most speakers of that language reside. For example, French Wikipedia seems to use CET/CEST. English Wikipedia could have adopted ET (the time zone where c. half of Americans/Canadians live), or GMT/BST (the time zone used in the UK). But UTC is a compromise, not only because English speakers live across the globe, but also because it's the time zone used for computers, aviation, the ISS, etc. If anyone wants to ensure that comments are output in the local time zone, WP:Comments in Local Time should be of help.
As for me, I live in a place that's at UTC in winter and UTC+1 in summer, so I just remember to subtract 1 in summer. JuniperChill (talk) 20:08, 10 December 2025 (UTC)[reply]

Mass-reverting AI serial abusers

If someone has repeatedly used an LLM without adequate verification of its output, I think we should be able to mass-revert their edits. I envisage a system whereby we only have to glance over each edit and check it is AI-generated, rather than the much higher bar of reverting only the cases where the AI has caused a definite problem. My rationale is that if someone has repeatedly failed to use AI responsibly, then their other uses can be assumed to be irresponsible as well. Roughly speaking, I imagine the level of abuse required would be about the current threshold for getting a dedicated subpage of the AI cleanup noticeboard. It has been remarked on numerous occasions that checking whether AI output is inclusion-worthy is about as hard as writing the material from scratch, so I think requiring other users to perform this level of checking before reverting AI edits is not reasonable. What do people think? lp0 on fire () 22:03, 26 November 2025 (UTC)[reply]

Are we talking about a blocked user? Was there a discussion about their behavior? I could imagine forming a consensus to Wikipedia:Rollback all of an individual's edits, but I'm not sure that I'd recommend that an individual editor unilaterally declare that everything you did in the mainspace is definitely AI and should all be reverted.
Also, outside the mainspace, it's a bit more complicated. If an AI-generated comment on a talk page received a reply, it probably shouldn't be reverted. WhatamIdoing (talk) 23:42, 26 November 2025 (UTC)[reply]
IDK if a tool like this is a good idea, but if it did exist I'd envision it being used for blocked editors (look up the user whirlingmerc for an example that wasted hours of my time). For editors who have not been blocked, it's appropriate to ask them to clean up their own mess by self-reverting all the problematic contributions. -- LWG talk 01:09, 27 November 2025 (UTC)[reply]
I think it certainly applies to talk pages, per wall of text issues. All AI edits should be deleted, per my comment below. Yesterday, all my dreams... (talk) 14:54, 27 November 2025 (UTC)[reply]
I agree that if an editor has been blocked for using AI, reverting any of their edits that look like AI output should be allowed. This sounds like presumptive deletion in copyright cleanup. I don't think we need a special tool for this though. Toadspike [Talk] 07:23, 27 November 2025 (UTC)[reply]
That presumptive deletion is exactly the idea I was going for. I wasn't suggesting a special tool, but I think mirroring the wording there pretty much exactly could save a lot of time (i.e. not requiring that the user be blocked). If someone does a long spree of AI additions but leaves the project before anyone notices, there's no need to block them, but being allowed to mass-revert their mainspace edits would still be helpful. lp0 on fire () 07:45, 27 November 2025 (UTC)[reply]
I agree, and think to succeed you need to invent a name for it, say "vagabond AI editor" reverts. I think this is important because the trend is the increase in AI edits. And I think it should also apply to talk pages given wall of text issues. AI edits are the termite that can ruin Wikipedia. Yesterday, all my dreams... (talk) 14:50, 27 November 2025 (UTC)[reply]
I don't see why we can't just call it presumptive deletion. For talk pages, we have {{aitop}}/{{aibottom}} already and I think that's enough. lp0 on fire () 15:12, 27 November 2025 (UTC)[reply]
Or we could make something similar to Template:Single-purpose account, except instead of saying:

Example (talkcontribs) has made few or no other edits outside this topic.

for AI use, it could say something like:

WhatamIdoing believes that this comment was written by generative AI instead of by Example (talkcontribs).

WhatamIdoing (talk) 20:49, 27 November 2025 (UTC)[reply]
Yesterday, I'm not convinced by your view. In fact, you're rapidly making me less supportive of this whole idea. It begins to feel like this:
  • We should revert everything.
    • Maybe not talk page comments, if someone's already replied.
  • No, really, everything, because it's a Wikipedia:Wall of text.
    • Even if it's just a short reply?
  • Really, everything, because everything is a Wikipedia:Wall of text.
You obviously loathe AI use, which is fine. But what if the comment is not a wall of text? Would you seriously recommend reverting a one-word reply because a single word is "a wall of text"? How would you even know whether such a short comment used AI?
Would reverting a talk-page comment actually help anyone? WP:REDACT says usually no, particularly if someone's already replied. Would it be better than alternatives such as striking (like we do with socks), hatting (e.g., aitop/aibottom), labeling (like we do for WP:SPAs), or archiving? I doubt it.
I wonder whether your ham-fisted recommendation signals that you're getting burned out. If editing feels like a Sisyphean struggle against the forces of spam and stupidity, then you might try to find a way to contribute that feels fun and/or effective. WhatamIdoing (talk) 20:45, 27 November 2025 (UTC)[reply]
Well, you know that our agreement rate is pretty low. But that is the nature of free speech. As for "forces of spam and stupidity" being in full swing on many pages, we actually agree on that. And I assume you are also thinking of my talk comment on fuzzy concept. On that page, OR and stupidity are in full swing indeed. We cannot have a "respectable" encyclopedia with that type of content. Yesterday, all my dreams... (talk) 00:44, 28 November 2025 (UTC)[reply]
I have spent no time looking at your comments on talk pages, so no, I had no idea that you posted a comment there (that says nothing about AI use). WhatamIdoing (talk) 04:04, 28 November 2025 (UTC)[reply]
I've been thinking about this sort of thing as well. Regardless of the approach we end up taking, we do need to be more proactive in removing unverified AI content and quickly putting a stop to people who add it. Thebiguglyalien (talk) 🛸 04:57, 28 November 2025 (UTC)[reply]
Agreed. A quick look at the AI cleanup noticeboard will make it abundantly clear how serious a problem this is. As I see it, there are three levels of assuming good faith we could exercise when doing the cleanup (clarifying what I mean here because I think there was some confusion above; sorry in advance for the wall of text).
  1. If someone has repeatedly misused LLMs, we go through their contributions and delete anything that violates policy (weasel/peacock words, OR, hallucinations, &c.) but we can't revert anything until we've identified the problem. This might involve verifying sources and/or translations, might require specialised knowledge, and is about as difficult as writing the content from scratch. This is the current standard, and it makes cleaning up after LLM use unreasonably difficult, leading to a growing backlog of additions to Wikipedia that might be nonsense.
  2. Like copyright violations, any mainspace edits by an AI abuser can be reverted indiscriminately. This would make cleaning up after AI misuse very easy (although, given how easy it is to write content with AI, this might still not be enough).
  3. What I was originally suggesting was a middle ground: if someone has repeatedly misused LLMs, then any edit of theirs that looks AI-generated can be reverted without proof that the AI has hallucinated or otherwise violated policy, because they are presumed incompetent. This would still make cleanup much easier than it currently is, with reduced risk of undoing good contributions.
lp0 on fire () 07:41, 28 November 2025 (UTC)[reply]
Sockpuppet cleanup allows other users to restore sock edits if they are positive (every now and then some are, or partially are), without putting that burden on the cleanup. CMD (talk) 09:13, 28 November 2025 (UTC)[reply]
I don’t think it’s a matter of LLM or not LLM; it’s a matter of good editors and bad ones. There were plenty of bad editors who tried to push bad articles before LLM. The fairest way to approach low-quality articles is the same way it has always been done: with tags that can only be removed if an editor has done the necessary work to justify their removal.
We can’t allow LLM to become a reason for people to ban whoever they want, for whatever reason. Take a contentious subject, for example: an editor could be falsely accused of using an LLM in order to censor their vote on articles. Orlando Davis (talk) 15:53, 28 November 2025 (UTC)[reply]
Instead of deleting the articles, we can have a 3 strike policy where you get banned for 24 hours if you have 3 strikes, and are banned permanently after enough strikes without an attempt to change your behavior. Orlando Davis (talk) 16:29, 28 November 2025 (UTC)[reply]
The difference is that LLMs allow people to churn out huge amounts of bad content extremely quickly without first having to learn how Wikipedia works, which makes it significantly more disruptive than just "bad editors".
I don't think your worries about false accusations make sense. If anyone tried to censor someone by accusing them of using AI, then much like accusing someone of being a sock, that would be highly problematic and likely lead to the accuser being blocked (especially in a contentious topic); however, it's much easier to spot a bad-faith accusation of AI than a bad-faith accusation of sockpuppetry.
Your suggestion of "get banned if you have enough strikes" (I assume you mean blocked, not banned) doesn't sound substantially different from the standard system of "you get blocked if you keep doing stuff wrong after being warned", and indeed the templates {{uw-ai1}} through {{uw-ai4}} exist for this very purpose.
I think you may have misunderstood the purpose of this proposal: it's not for dealing with people who disrupt the project using AI but rather for cleaning up their edits, which otherwise demands an unreasonable amount of time from the users doing the cleanup. lp0 on fire () 16:43, 28 November 2025 (UTC)[reply]
Couldn’t a way to reduce backlog be to put a cap on how many articles and edits a user can perform per day, to give reviewers enough time to keep up? For example, a 1–2 article per day limit and a 100–200 edits per day limit. What do other editors think? Orlando Davis (talk) 17:09, 28 November 2025 (UTC)[reply]
That sounds way out of scope for this issue. Bear in mind that a lot of AI cleanup involves cleaning up after editors who stopped before (or when) they were noticed, so such a filter would have to apply to all users. I also note that 100 edits a day isn't very much for normal editing, but it's a huge amount of work to clean up after 100 edits of AI drivel. For example, see Wikipedia:WikiProject AI Cleanup/Noticeboard/2025-09-17 Thefallguy2025, which is from early September and still less than half done. lp0 on fire () 17:25, 28 November 2025 (UTC)[reply]
What about the cap on edits being applied more strictly to flagged users? Orlando Davis (talk) 17:41, 28 November 2025 (UTC)[reply]
Or to newbies. Very few brand-new accounts make even five edits on the first day. WhatamIdoing (talk) 01:24, 29 November 2025 (UTC)[reply]
To the extent that new accounts do, they're usually people who have made accounts before (sockpuppets, WP:CLEANSTART) Katzrockso (talk) 01:28, 29 November 2025 (UTC)[reply]
Or someone who couldn't figure out how to use the [Preview] button, so it took them five tries to fix the same sentence. WhatamIdoing (talk) 21:49, 14 December 2025 (UTC)[reply]
So, #3 is what we've been doing at WP:AINB since around August and it has been working just fine, albeit without any PAG to justify... we typically leave an edit summary like "LLM cleanup, as discussed at AINB and/or ANI". I personally have cleaned ~500 articles in this way and only on one of those articles did someone else complain, and I just reverted my deletion and asked that user to verify/fix the article, which they did. Also agreed with Toadspike that it would be a rare case where a tool would be helpful. In almost all cases this has to be done manually. NicheSports (talk) 19:45, 28 November 2025 (UTC)[reply]
Oh, that's encouraging I suppose. It would still be nice to formalize it in a guideline (or at minimum a WikiProject advice page), for the combination of legitimacy and clarity that we get from explicitly writing stuff down. lp0 on fire () 23:05, 28 November 2025 (UTC)[reply]
I feel like we can just use the general provisions of WP:CHALLENGE etc if it's the usual AI stuff and the sources don't verify. Alpha3031 (tc) 23:50, 28 November 2025 (UTC)[reply]
Also, WP:5P3 exists. I don't really know why this is even a discussion to be honest. Text can be added, changed, or removed at any time, that's the fundamental point of a wiki. Gnomingstuff (talk) 01:15, 30 November 2025 (UTC)[reply]
Good idea, any chance you want to give it a whirl? Maybe makes sense to start as an advice page at WP:AIC. Also pointing you to this, which is an idea I had with some support at AIC: WT:WikiProject AI Cleanup/Archive 4 § Guidance on handling article with mostly minor edits subsequent to LLM-rewrite. Maybe this could be incorporated? NicheSports (talk) 21:16, 29 November 2025 (UTC)[reply]
I fully agree with implementing a presumptive deletion-esque policy here. Take a look at this tracker on AINB for example - 74 pages by a chronic LLM user need to be reviewed. I've been doing some myself, and I've found that a lot of it is innocuous AI copyediting, but then on one or two edits, you'll see places where the AI accidentally combines two sentence clauses and changes the meaning, or does a thesaurus rewrite of a direct quotation from a real person; it requires an intense and time-consuming level of scrutiny to pick those out, but I can't simply in good faith revert everything without checking, because a lot of it is useful copyediting changing articles to a more formal tone.
It would be much easier to just go in and revert everything this person has substantively changed. Athanelar (talk) 18:16, 10 December 2025 (UTC)[reply]
I wouldn't necessarily want to require editors to revert everything, or to send a bot around, but for individual editors who have been specifically identified as causing problems, I think that it's reasonable to assume a problem unless you can prove otherwise at a glance. For example, @Athanelar, I looked at that editor's contributions to Georg Klein (composer). They might be fine. But I can't tell at a glance. And the editor is known to have problematic contributions. So I think that reverting that with a suitable edit summary would be justified. WhatamIdoing (talk) 18:47, 10 December 2025 (UTC)[reply]

I'm inclined to agree that the community is currently fairly vigorously contesting LLM-slop. There are even false positives, at least one case of something from 2010 getting tagged. Remember that LLMs are trained on Wikipedia. Nobody tagged me for this but I recently saw text I had written where I used "fostered" and "surpassed," two tagged vocab words, but on double-checking, both were used by the sources, so I was being faithful by also using them. Shlomo Lambroza (Wikidata) and Diana Dumitru probably didn't use an LLM; they used that vocab because they, with precise diction, decided that "surpassed" and "fostered" were the best way to express themselves at that moment. Not saying that the slop isn't a big problem but right now I think there is adequate control of it - thanks to a lot of volunteer work, time, energy. See, I did 3 things. But I remember someone telling me about the rule of 3 at least 5 years ago and it had nothing to do with LLMs. Andre🚐 02:08, 29 November 2025 (UTC)[reply]

To be clear, I'm not proposing that anyone can delete anything they personally think might have been written by an LLM, but in cases where a user has a long history of LLM misuse, it feels unlikely that they also just happen to write like an LLM. I don't necessarily agree with you that enough is being done to clean up after LLMs to avoid needing a measure like this, but even if that's true, such cleanup still wastes a huge amount of community time. The current wording of WP:ONUS means that if a source has been provided, it's the responsibility of the person removing information to check that verification fails. The thing about AI is it's very easy to make something that looks convincing, meaning one often can't tell at a glance whether the sources are okay. This creates a WP:TNT situation where it's easier to blow it up and start over than to fix the problems by manually checking each source, which can take a very long time. lp0 on fire () 13:01, 29 November 2025 (UTC)[reply]
That makes sense. But isn't it pretty easy to make something look convincing without AI? Shouldn't we use a system of cleaning up that isn't so confrontational? Couldn't erasing pages start edit wars? There have been very good alternative suggestions here. Orlando Davis (talk) 20:31, 29 November 2025 (UTC)[reply]
It's not true that WP:ONUS means that if a source has been provided, it's the responsibility of the person removing information to check that verification fails. WP:BURDEN means the other editor has to provide one source (but only one; you can't make them WP:FETCH an endless supply of sources). WP:ONUS says only that it's the other guy who has to organize a consensus to include the information.
One of the footnotes in BURDEN gives a partial list of reasons why one might be justified in removing cited content: removing editors "must articulate specific problems that would justify its exclusion from Wikipedia (e.g., why the source is unreliable; the source does not support the claim; undue emphasis; unencyclopedic content; etc.)". In practice, I suspect that an edit summary along the lines of "Presumptive removal of text from an editor since blocked for abusing AI tools" would be considered an entirely sufficient articulation of a specific problem. WhatamIdoing (talk) 21:46, 29 November 2025 (UTC)[reply]
That was my failure to read the footnote; thanks for clarifying. I still think it'd be helpful to formalize allowing such presumptive deletions. lp0 on fire () 22:09, 29 November 2025 (UTC)[reply]
It might be useful to have a short page on when and why a Wikipedia:Presumptive removal would be warranted. If it gets used and doesn't create a lot of problems, it would probably be easy to get an "Oh BTW there's this WP:PRESRM thing..." added to a guideline or policy somewhere. WhatamIdoing (talk) 23:25, 29 November 2025 (UTC)[reply]
To be clear, are you suggesting a single page that collates all the common kinds of presumptive removal (AI, socks, copyvios, banrevert, arbecp, maybe something else I haven't thought of)? lp0 on fire () 09:11, 30 November 2025 (UTC)[reply]
Yes.
I'm thinking of something that's more of a 'process description' page than a 'rulebook'. It could be a little bit similar to Wikipedia:Why was the page I created deleted? or Wikipedia:What is significant coverage? After someone reads it, they should know what presumptive removal is (mass removal of edits from known-problematic individuals), why we use it (efficiently protecting Wikipedia), and what to do (careful evaluation). WhatamIdoing (talk) 23:45, 30 November 2025 (UTC)[reply]
lp0 on fire, let's not forget about creating this documentation. WhatamIdoing (talk) 03:16, 5 December 2025 (UTC)[reply]
I might have time to do this today. Thanks for the reminder. lp0 on fire () 08:49, 5 December 2025 (UTC)[reply]
@WhatamIdoing: Very early stage beginnings of a draft created at User:lp0 on fire/Drafts/Presumptive removal; I'll work on this when I have some time but feel free to contribute. lp0 on fire () 16:36, 5 December 2025 (UTC)[reply]
It may be relevant to this discussion that Orlando Davis has been temp-blocked following an ANI report concerning disruptive editing and LLM use. fifteen thousand two hundred twenty four (talk) 02:34, 1 December 2025 (UTC)[reply]
A system of cleaning up that isn't so confrontational is easy to achieve, simply by getting the confrontation over with. Four warnings followed by a site ban. AI use is significantly more damaging than ordinary vandalism. TooManyFingers (talk) 01:49, 14 December 2025 (UTC)[reply]
I'm doubtful about your assertion that "AI use is significantly more damaging than ordinary vandalism". Maybe we have different ideas of what "ordinary vandalism" looks like? WhatamIdoing (talk) 02:16, 14 December 2025 (UTC)[reply]

Wikipedia app

[edit]

In the Wikipedia app, the English Wikipedia doesn't show whether an article is Good or Featured. For example, in the German Wikipedia—like this good article—this information appears at the bottom of the article in the app, and it even shows the date when the article was selected as Featured. I strongly suggest adding this feature—and the date of selection—to the English Wikipedia app as well. Vastmajority20025 (talk) 19:37, 28 November 2025 (UTC)[reply]

Last I heard, readers don't notice or care about those little icons, so why should we bother? WhatamIdoing (talk) 21:47, 29 November 2025 (UTC)[reply]
Yeah @WhatamIdoing, but it would make the English Wikipedia more accessible on phones and a better experience, like the German Wikipedia example. And it doesn't need to be an icon: in this article in the German Wikipedia, there is a section at the bottom giving the date the article became Good or Featured, and saying that it is Good or Featured. Vastmajority20025 (talk) 16:59, 4 December 2025 (UTC)[reply]
I don't think this requires much consensus. With enough time, try opening up the publicly-available code for the apps and implement it! Aaron Liu (talk) 20:54, 9 December 2025 (UTC)[reply]
@Aaron Liu, I am not familiar with programming, and even with the help of ChatGPT I don't know where to put it. If anybody familiar with programming knows what to do, it would be great if they could do this. Vastmajority20025 (talk) 13:47, 14 December 2025 (UTC)[reply]

Wikipedia as a human-written encyclopedia

[edit]

I'm opening this as a more general idea lab discussion since I don't have a specific proposal, but we've reached the point now where we really need to be looking into how we frame Wikipedia's relationship with AI, especially in public-facing areas. There's currently nothing public-facing, not even on the main page, emphasizing that Wikipedia is a human-written encyclopedia (or whatever term you want to use). As LLM content only becomes more common, the fact that Wikipedia is written by humans is going to become one of its defining characteristics and a major reason why it's a better alternative to other sites. Has anyone given thought to how we might incorporate this? Thebiguglyalien (talk) 🛸 02:57, 29 November 2025 (UTC)[reply]

I do think Wikipedia has always had a human and humanistic aspect, and I support the proposal in the abstract. Maybe we could have a contest for someone to design a banner or an interactive display to promote Wikipedia: The Free as in Libre, Human Encyclopedia. Like we used to do in the old days. Andre🚐 03:02, 29 November 2025 (UTC)[reply]
Awful suggestion. 1. Being human-written is not an important pillar of Wikipedia; it is rather the bare minimum for any respectable encyclopedia, book or news article. Hence it's a bad idea to emphasize this fact so prominently. 2. Wikipedia is not "human". That particular phrasing is confusing.
I don't object to including the fact that Wikipedia is human-written in some guidelines, essays or promotions. But it's not the central selling-point of Wikipedia – lots of other outlets are human-written too but inferior to Wikipedia in many ways (e.g. less reliable). Joe vom Titan (talk) 13:17, 29 November 2025 (UTC)[reply]
I have some bad news for you about the internet of the 2020s. Thebiguglyalien (talk) 🛸 15:41, 29 November 2025 (UTC)[reply]
What bad news? Has AI slop appeared on nytimes.com or home.cern yet? AI is neither the biggest problem in the world nor the biggest problem on the internet. For one, misinformation spread by oil companies, oligarchs and petrostates to serve their own interests is much more insidious. Joe vom Titan (talk) 13:44, 30 November 2025 (UTC)[reply]
nytimes.com almost certainly; can't speak for CERN though. The point is, if you google something, a good 75% of the time most of the first-page results will be SEO-infested, AI-generated spam that vaguely summarizes a topic instead of providing useful information. Wikipedia is fundamentally not that, and as more and more of what used to be considered "reliable" websites become infested with slop, I feel like it's worth highlighting the fact that we aren't doing that. mgjertson (talk) (contribs) 20:36, 8 December 2025 (UTC)[reply]
What's the AI slop on nytimes.com? Aaron Liu (talk) 20:55, 9 December 2025 (UTC)[reply]
Pretty sure it was Perplexity AI that copied NYT, not NYT posting AI slop.[8] Polygnotus (talk) 00:06, 11 December 2025 (UTC)[reply]
Even more bad news—The list (misinformation spread by oil companies, oligarchs and petrostates) includes states, x-archs... that have lots of cash they crave to grow—what better way to get richer than AI (restricted by very high subscription fees)? US$20/month is my limit. What's Bezos'? Oh, right, Amazon is one of the three largest investors in AI—looked at or listened to the A. website lately? — Neonorange (talk to Phil) (he, they) 03:32, 1 December 2025 (UTC)[reply]
  • I am quite keen on the idea of making a statement of principle like this. As for the implementation, I think there are a few possibilities. I can see something being incorporated into the Wikipedia:Five pillars. Another possibility is to add something into Wikipedia:What Wikipedia is not, e.g. 'Wikipedia is not written by machines'. The last possibility I can think of is to write a new one-line policy or guideline to the effect that 'Wikipedia is a human-written encyclopaedia', in a similar format to WP:IAR. Whatever is proposed will need wide community support to be adopted. Yours, &c. RGloucester 03:59, 29 November 2025 (UTC)[reply]
    Just today I was musing on writing a "Wikipedia is not Grokipedia" essay which stresses that the entire point of Wikipedia is to eliminate error by having different perspectives, opinions, editing approaches etc coming together to make consensus, and how using AI essentially centralises everything into coming from one authorial voice which fundamentally undermines the spirit and purpose of the project. Athanelar (talk) 18:24, 10 December 2025 (UTC)[reply]
  • It is actually difficult to know post-2022 if Wikipedia was written by human, machine, or machine-human combo. -- GreenC 04:25, 29 November 2025 (UTC)[reply]
    It's almost as if we should have some sort of clarification in public-facing areas stating that the purpose of Wikipedia is to be written by humans. Thebiguglyalien (talk) 🛸 15:37, 29 November 2025 (UTC)[reply]
    @Thebiguglyalien So we should just indef you and only allow humans to edit Wikipedia from now on? Polygnotus (talk) 00:08, 11 December 2025 (UTC)[reply]
  • I wonder how this would interact with m:Abstract Wikipedia. If I write code that produces a sentence – "$company-name is a business in $country-name that produces $product" – is that still "human-written"? WhatamIdoing (talk) 05:08, 29 November 2025 (UTC)[reply]
    The interaction is a distinguishing point between English Wikipedia and Abstract Wikipedia (is that the final name?). Auto-generated text is not human-written. CMD (talk) 05:39, 29 November 2025 (UTC)[reply]
    'Language-independent articles'? How has the world become so dystopic? Each language has its own mode of communication, its own mode of thinking. There is no one-to-one relationship between a concept in one language and a concept in any other. Even if we could modify language to allow for such things, this would destroy the organic diversity that is the body of human language. God knows I don't want to read an article that is written in a manner inconsistent with the thought process that is associated with the language in which it is written. I can only imagine the horrible damage this will do to languages other than English. Haven't we done enough harm with the likes of the Scots Wikipedia? Yours, &c. RGloucester 06:05, 29 November 2025 (UTC)[reply]
    On the other hand, there are quite a few articles that exist in fr, de, etc and nobody has created in en. Google Translate does ok, but affects ease of discovering information and browseability. So if we had a way to conceptualize a layer between factoids and prose, it could be useful to aid in translation or spreading knowledge further and sooner. At any rate, this is only theoretical. If and when it is accomplished, it may or may not even achieve critical mass. Andre🚐 06:12, 29 November 2025 (UTC)[reply]
    Our goal is not to have more articles for the sake of more articles, but to have articles that meet our quality standards. Usually, there is a reason why an article may exist on a non-English Wikipedia, but not on the English Wikipedia. The English Wikipedia has much higher standards in terms of referencing. Very often, articles found on other Wikipedias lack sources at all, or rely heavily on niche sources that would be insufficient to establish notability here. Additionally, they are frequently written from a perspective that is insufficiently global for the English Wikipedia. I have many times endeavoured to translate an article from one Wikipedia to another, in the languages that I know, only to be stymied by the poor quality of the content. It is often easier to start a new English Wikipedia article from scratch, using some of the sources from the other Wikipedia as a foundation. Yours, &c. RGloucester 06:19, 29 November 2025 (UTC)[reply]
    Not necessarily always the case. There are many good quality articles on fr or de that if I could snap my fingers to port over with an idiom-proof translation would be worthwhile in edifying readers, and have appropriate references. Andre🚐 06:28, 29 November 2025 (UTC)[reply]
    Ask a translator for assistance, there are plenty of volunteers willing to help. No translation can be 'idiom-proof', unless the fundamentals of language itself are to be destroyed. Yours, &c. RGloucester 07:21, 29 November 2025 (UTC)[reply]
    (I wouldn't use the German-language Wikipedia as an example of appropriately cited articles, as their standards are very different from ours.) WhatamIdoing (talk) 21:49, 29 November 2025 (UTC)[reply]
    I am aware that a human translation can't be idiom-proof, but that is the promise of an abstract Wikipedia, a syntactically complete database-frontend of facts that takes Wikidata beyond simply data and makes actual articles. I mean another way to do that would just be to feed Wikidata to an LLM that doesn't have other knowledge or the ability to call out to random tools and make things up, but simply weaves Wikidata into article form. That wouldn't work though without a lot more UX work and volunteer time on data entry. At any rate, I don't necessarily think the articles I'm personally interested in are the ones that translators need to work on, so it kind of feels like an imposition to dump my requests into that list. I'm sure there's a backlog. Instead, I'm dumping them into Wikiprojects that will potentially have a contributor write an English article while just consulting the other articles. But I do know that there are many many topics that are adequately covered in international Wikipedias. It seems silly to ignore the possible technological developments that will make reading content in other languages more accessible. Here's an example: Mikhail Kulisher (he; ru; uk). The articles seem fairly complete and are referenced. There is a whole pile of similar articles. Andre🚐 05:59, 1 December 2025 (UTC)[reply]
Your claim that There is no one-to-one relationship between a concept in one language and a concept in any other sounds a bit overstated. Simple facts (Angela Merkel was Chancellor of Germany; calculus is a type of mathematics; carrots are edible) seem to translate quite well between most languages. There are individual instances of non-translation (家は青い – the house is, um, blue or green or thereabouts), but it's not true that there are no concepts that map to the same concept in any other language. WhatamIdoing (talk) 22:10, 29 November 2025 (UTC)[reply]
    I said that there is no 'one-to-one' relationship, not that there was no relationship. The process of translation is a delicate one. What you call a 'simple fact' could potentially be translated tens of different ways. The meaning of 'edible' can be rendered many ways in English, and it is likewise true in most other languages. I could say 'can be eaten', 'able to be consumed', 'safe to eat', 'comestible', depending on context, register, &c. By creating an artificial one-to-one relationship between words, whereby 'edible' can only be rendered as one specific term in another language, you destroy the organic diversity of that language, and the naturalness of the text produced. It is very likely that whatever term is chosen may end up being inappropriate in the relevant context, because the person creating this artificial one-to-one relationship will not have a full grasp of the relevant language, and will rely on horrible dictionaries or computer code. The end result will be Scots or Greenlandic Wikipedia, redux. Yours, &c. RGloucester 07:51, 30 November 2025 (UTC)[reply]
    And yet, somehow, I think that if it offered me a sentence like "carrots are edible[source]", and I didn't think it was appropriate in the relevant context, had the wrong register, etc., then I could probably either reject it or re-write it without destroying either the organic diversity of the English language or the naturalness of the text in the Wikipedia article. WhatamIdoing (talk) 23:49, 30 November 2025 (UTC)[reply]
    Sure, if you're a speaker of English and a speaker of the source language, you will be able to evaluate whether the machine's output is suitable or not, though I don't see how this will save any time as compared with traditional translation. However, I expect that this 'abstract Wikipedia' will mainly be used for minor languages, with few available editors qualified to make such judgements. It is a recipe for disaster. Yours, &c. RGloucester 11:05, 1 December 2025 (UTC)[reply]
    I think it will get used in a variety of ways, many of which involve numbers that change in a more or less predictable fashion. For example: "According to $source, the current population of the world is estimated to be $world-population.[source]"
    Speaking of which, I frequently wish that the second sentence of World population had an up-to-date number in it. WhatamIdoing (talk) 03:20, 5 December 2025 (UTC)[reply]
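(To make the fill-in-the-blanks idea above concrete: a minimal sketch in Python. The source name and population figure are invented placeholders, and this is not how Abstract Wikipedia is actually implemented.)

# Toy sketch of the "$source / $world-population" sentence above.
# The data values are invented placeholders, not real figures.
TEMPLATE = ("According to {source}, the current population of the world "
            "is estimated to be {population:,}.")

facts = {
    "source": "Example Demographic Survey",  # hypothetical source name
    "population": 8_000_000_000,             # hypothetical figure
}

def render(template: str, data: dict) -> str:
    """Fill the template's named blanks from structured data."""
    return template.format(**data)

print(render(TEMPLATE, facts))
# According to Example Demographic Survey, the current population of the
# world is estimated to be 8,000,000,000.

The human still writes the sentence; the machine only substitutes values, which is what makes the "is it human-written?" question interesting.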
I'm a native Anglophone, and I wrote poetry in Hebrew that I had trouble translating. user:RGloucester is absolutely right that there are things that don't translate well. "Traduttore, traditore" ("translator, traitor") -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:30, 1 December 2025 (UTC)[reply]
    Of course there are things that don't translate well. I object to the overbroad statement that there is no one-to-one relationship between any part of one language and any part of any other language, for any statement. WhatamIdoing (talk) 03:22, 5 December 2025 (UTC)[reply]
    Indeed in closely related languages it is likely that there are very many concepts and phrasings that correspond 1:1 and while I haven't attempted to verify this I would be astonished if a phrase like "Thryduulf is a living person." could not be directly and accurately translated into the majority of the world's languages without any change of meaning or nuance. Note I explicitly don't say "all" as I'm sure there will be some exception somewhere, perhaps there is a language that mandates specifying whether this is direct, second hand or inferred knowledge or requires an explicit indication of gender. Thryduulf (talk) 03:53, 5 December 2025 (UTC)[reply]
    Angela Merkel was Chancellor of Germany "Okay, well, what's a "chancellor?" We don't have a word for that in Examplese, so we could keep it untranslated, but that might be confusing, so I'd rather try to pick an equivalent word in our language."
    Well, in the context of Germany, the chancellor is the executive leader of a federal republic; i.e., an electoral-democratic state divided into smaller polities with some degree of independence, which is governed by elected representatives in charge of each administrative subdivision, where the chancellor acts as the prima inter pares of the representatives, representing the whole federal state rather than an individual subdivision. Suddenly the Examplese-speaking editor has quite a lot more translating to do. Athanelar (talk) 18:30, 10 December 2025 (UTC)[reply]
    @Chipmunkdavis, please see m:Abstract Wikipedia/Abstract Wikipedia naming contest. I gather that the team would very much like to have a different name (though I don't have any insight into why). WhatamIdoing (talk) 21:52, 29 November 2025 (UTC)[reply]
    I was pretty sure that I had proposed Wikigenerator, but I guess great minds think alike. Andre🚐 21:58, 29 November 2025 (UTC)[reply]
  • It is fair to exclude LLM-written content from Wikipedia on the grounds that LLMs are currently not very competent at the task of writing an encyclopedia article, but I am opposed to any display of human or "humanistic" chauvinism, especially anywhere as prominent as the front page. It is also not practical to uphold this claim/promise, as it is basically impossible to be certain whether any text is "really human" or has had a partial/full LLM contribution behind it. TryKid[dubiousdiscuss] 14:48, 29 November 2025 (UTC)[reply]
    Seconded. The LLM text is more prevalent than some people realize, and certainly more than laypeople realize. Making such a claim after 2 years of having no AI policy or guidelines would be telling our readers a lie. Gnomingstuff (talk) 05:16, 30 November 2025 (UTC)[reply]
    I agree on all counts. LLM use is unsuitable for writing new articles, but it's also not outright banned by policy (at least not yet). Even if it were banned, there are still articles out there that have been written partially using LLMs.
    We could theoretically ban any LLM use, but that still wouldn't make the statement "Wikipedia is entirely human-written" true. – Epicgenius (talk) 23:53, 30 November 2025 (UTC)[reply]
    Gnomingstuff & Epicgenius, I don't know if you're referring to this or if you haven't seen it yet, but as of last week there is in fact a content guideline in regard to creating articles with LLMs, and there are ongoing discussions to decide how its scope will be expanded beyond simple article creation: Wikipedia:Writing articles with large language models. Thebiguglyalien (talk) 🛸 06:04, 1 December 2025 (UTC)[reply]
    @Thebiguglyalien, thanks for the ping. I did see this, but it doesn't apply retroactively, nor does it cover LLM-assisted expansions of existing articles. We'd need to ban LLM for at least the latter before we can claim that WP is human-written (and even then, people will try to sneak in LLM text constantly, so vigilance will be required). Epicgenius (talk) 06:44, 1 December 2025 (UTC)[reply]
    To clarify, when I said "2 years" I meant the prior 2+ years' worth of accumulated AI edits. (The guideline was approved just days before the 3-year anniversary of ChatGPT.) Gnomingstuff (talk) 13:47, 3 December 2025 (UTC)[reply]
  • First, let us ask if a bicycle is "human powered"? It is, but it provides more power than walking. Wikipedia can be human-powered but with bicycle-type tools; the human decides where the bicycle goes. Secondly, please let me introduce the concept of a closed-loop system to the discussion. The LLM nightmare is when other sources pick up half-baked content from AI-generated sources, and said sources pick it up again themselves. The term to use, then, is "jambalaya knowledge". Yesterday, all my dreams... (talk) 16:00, 29 November 2025 (UTC)[reply]
    This has nothing to do with whether bots are writing the content. No one said "human powered" until you did. Thebiguglyalien (talk) 🛸 16:06, 29 November 2025 (UTC)[reply]
    Yes, I used human powered, because I think the term should be considered. If you do not like it, do not use it. Others may consider it and it will linger in their minds. Yesterday, all my dreams... (talk) 16:22, 29 November 2025 (UTC)[reply]
  • Even ignoring all the AI-related issues, there are many articles (partially) written by bots - see for example the article about nearly any small town in the United States - so the statement isn't true. Thryduulf (talk) 00:00, 1 December 2025 (UTC)[reply]
agreed. i think wikipedia's greater value is its verification and its robust community that debates articles. unfortunately, that's not as pithy as "human-written" User:Bluethricecreamman (Talk·Contribs) 05:31, 1 December 2025 (UTC)[reply]
    Maybe it should be Wikipedia: The encyclopedia of human ideas and discussion? Surely we agree the ideas and discussion are human even if we can't, as Gnomingstuff and Thryduulf point out, actually claim the articles are all human-driven, aside from LLMs, due to Rambot and similar automation that has been around almost as long as the project. Andre🚐 05:47, 1 December 2025 (UTC)[reply]
The Wikimedia Foundation itself seems to be much less wary of generative AI, using it in some of their TikTok videos (one on Wicked (film), if I do recall) and advertising in their 25th anniversary video how Wikipedia trains AI. If there is a community consensus that Wikipedia and generative AI are not allies, should we address this with Foundation leaders so they can alter their messaging? ✨ΩmegaMantis✨blather 20:19, 1 December 2025 (UTC)[reply]
The foundation long since stopped even pretending to represent the consensus of Wikipedians (as evidenced by the temporary account rollout). mgjertson (talk) (contribs) 20:38, 8 December 2025 (UTC)[reply]
It certainly doesn't have to stay that way. The Foundation has relented to our demands involving AI in the past, like through halting the Simple Summaries feature. The temp account rollout seems to be prompted by a legal issue faced by the WMF. ✨ΩmegaMantis✨blather 00:11, 9 December 2025 (UTC)[reply]
Yes, I think the temp account thing was a band-aid solution to legal pressure on the WMF. I think if we can come up with something better that addresses the relevant legalities, we could probably get it implemented (I think I'm of the 'requiring registration' camp) Athanelar (talk) 18:31, 10 December 2025 (UTC)[reply]
There was some kind of "leaked" memo over at Meta regarding challenging the community to accept more AI features. If you have sysop rights there you'll find it by inspecting my deleted contribs. Children Will Listen (🐄 talk, 🫘 contribs) 00:23, 9 December 2025 (UTC)[reply]
I'm not even extended confirmed yet, so I definitely can't view it, but that is incredibly concerning. Since Wikimedia is inherently a movement driven by its contributors and community, it seems to be another dangerous step of the WMF to negate this mission by concentrating their own power. Perhaps it should be proposed on Meta's Wikimedia Forum to make some larger change involving greater community election of board members so the WMF is more Wikimedian and isn't trying to thwart its own community. ✨ΩmegaMantis✨blather 00:28, 9 December 2025 (UTC)[reply]
not surprising, given that they pushed the simple summaries feature through with the rationale of "editors will hate this but it's not for them" Gnomingstuff (talk) 03:37, 9 December 2025 (UTC)[reply]
I think "by humans, for humans" would be a great (if a bit cliché) tagline for Wikipedia to have somewhere. Athanelar (talk) 18:17, 10 December 2025 (UTC)[reply]
The slogan that you're looking for is that Wikipedia is the free encyclopedia that only humans can edit. The trouble is that we often don't know who's doing the editing and so can't verify such a claim. Andrew🐉(talk) 20:12, 12 December 2025 (UTC)[reply]

Okay, if for a moment we were to ignore the ideas where we welcome and accept AI content as part of Wikipedia's identity, what could we hypothetically do as a project to make it clear what separates reading Wikipedia from things like asking ChatGPT, searching Grokipedia, or using the Google AI Overview? Thebiguglyalien (talk) 🛸 05:59, 1 December 2025 (UTC)[reply]

Perhaps we can mention that it's "human-vetted" or "human-curated"? Even the AI-generated content is (usually) detected, and tagged or removed, rather quickly. However, Thryduulf also has a good point that many articles have at least some non-human input. – Epicgenius (talk) 15:50, 1 December 2025 (UTC)[reply]
Even the AI-generated content is (usually) detected, and tagged or removed – all we can say is that the problematic AI-generated content is usually tagged and/or removed. Any AI-generated content that is stylistically similar to a Wikipedia article and which contains no errors (e.g. incorrect statements, non-existent references, etc) will almost always not be flagged because doing so wouldn't benefit the encyclopaedia. Accordingly it is impossible to know whether there have been 1 or 1 million edits of this nature. Thryduulf (talk) 18:23, 1 December 2025 (UTC)[reply]
If an article contains no errors (e.g. incorrect statements, non-existent references, etc) do we actually care what wrote the text? Just asking on behalf of my human... — GhostInTheMachine talk to me 18:32, 12 December 2025 (UTC)[reply]
That's one of the major points of debate right now. Myself and some others would say yes, we absolutely do care.
My thesis is this: if we allow AI usage provided the output is Wiki-suitable, we will inevitably trend towards a higher and higher percentage of the encyclopedia being authored by AI, and I don't think that's desirable, in the same way it would be undesirable if any large percentage of the wiki were authored by a single human person. It is a good thing that we have such a wide variety of authorial voices. Athanelar (talk) 18:37, 12 December 2025 (UTC)[reply]
if we allow AI usage provided the output is Wiki-suitable, we will inevitably trend towards a higher and higher percentage of the encyclopedia being authored by AI – firstly, why? Secondly, if the output of AI is reviewed by humans to the point that it is of the same standard as directly human-authored content, why is that differently good (or bad) than content directly written by humans? Thryduulf (talk) 19:31, 12 December 2025 (UTC)[reply]
The 'why?' is the same reason it's happening to the rest of the internet. Just like it takes far less effort to pump out SEO-maximising AI listicles that are now dominating google search, AI-generated wikicontent would be much faster to produce than the human alternative. One needs only look at how widespread the actions of chronic AI-abusing individuals can get to see what I mean; the tracker for User:A Touch of Humanity still has some 60 articles that need review. User:Gnomingstuff has already hazarded a guess at the potential volume of AI text that might already be on Wikipedia, and it's concerning.
As for the second point, it's a philosophical more than practical thing. I don't think it'd be a good thing if, say, 20-30% of Wikipedia's text was authored by some individual human John Wikipedia, no matter how good the content actually was. We're a communal project; that is inherently undermined if much of the new content is coming from a single authorial source. Athanelar (talk) 19:55, 12 December 2025 (UTC)[reply]
If AI were producing Wiki-suitable output, why would a high rate be a problem? Levivich (talk) 19:57, 12 December 2025 (UTC)[reply]
I think @Athanelar's point is that it's because it contradicts the idea of Wikipedia. The whole reason why there was a Wikipedia in the first place (instead of simply accepting any normal peer-reviewed, low-interaction encyclopedia) is because it's a communal project: we believe in the strength of the commons, and any monopolization of edits from a single source goes against that mission and founding idea, making it inherently anti-Wikipedian. Then again, what do I know, I am not John Wikipedia ✨ΩmegaMantis✨❦blather | ☞spy on me 20:37, 12 December 2025 (UTC)[reply]
I'm John Wikipedia, and I endorse this message. Athanelar (talk) 20:49, 12 December 2025 (UTC)[reply]
Why is anti-Wikipedia a bad thing? Which is another way of asking: if a machine could do what Wikipedia can do, but faster, then why is that a bad thing? If we had a fast machine-written Wikipedia, and a slow human-written Wikipedia, and they both produce articles of the same quality, then what use is the latter? Levivich (talk) 20:50, 12 December 2025 (UTC)[reply]
What you're essentially asking is "why would it be a problem if Wikipedia were fundamentally transformed into a completely different thing?" Which, like, I see your thought experiment, but I'm beginning with the assumption that we generally want to maintain the overall ethos of the project, and we don't want "the free (as in libre) encyclopedia that anybody can edit" to turn into "the free (as in gratis) encyclopedia which is mostly edited by a content engine developed by OpenAI et al which anybody can double-check the output of." Athanelar (talk) 20:55, 12 December 2025 (UTC)[reply]
But you're not really answering the question, you're just restating the position: "Wikipedia is better than..." I'm asking why is it better? Why don't we want "free (as in gratis) encyclopedia which is mostly edited by a content engine developed by OpenAI et al which anybody can double-check the output of"? I think that's better, and I could tell you my reasons. Why do you think it's worse? Levivich (talk) 21:07, 12 December 2025 (UTC)[reply]
Wikipedia's popularity resulted in the major decline of traditional written encyclopedias. If we assume that this decline was a rational decision by the people because of the merits of the Wikipedia project in contrast to traditional encyclopedias, and that at least one of those merits was its diversity of voices (likely so, as Wikipedia branded itself as the encyclopedia anyone can edit), then that means that people like encyclopedias that are based off a diversity of voices. If we take @Athanelar's argument that AI destroys this diversity, then this means people won't like an AI-generated encyclopedia, and therefore it is not suitable for Wikipedia as it will cause us to lose readers. It's sort of an argumentum ad populum, but we certainly do depend on the populum for donations. ✨ΩmegaMantis✨❦blather | ☞spy on me 21:16, 12 December 2025 (UTC)[reply]
Well, and it's also an argument from the idea that I don't want any more of the world's information to be monopolised by tech corporations than already is. Athanelar (talk) 21:27, 12 December 2025 (UTC)[reply]
If we assume that this decline was a rational decision by the people because of the merits of the Wikipedia project... I think it's more likely that people just found it easier and quicker to look things up online than to go to a library to look in a traditional encyclopedia. (Sets of encyclopedias were not cheap and most homes didn't own a set.) Wikipedia is basically one-stop-shopping, the Amazon-equivalent of information. Schazjmd (talk) 22:27, 12 December 2025 (UTC)[reply]
Yeah, that's probably right. The "rational" Homo economicus model is a quite inaccurate one. Still, I think it's hard to wave away how much the collaborative wiki model mattered and influenced perceptions of Wikipedia -- and probably of knowledge as a whole. It certainly made it novel and unique, and even now it still mostly is. That has to count for something. ✨ΩmegaMantis✨❦blather | ☞spy on me 22:32, 12 December 2025 (UTC)[reply]
More to the point, Wikipedia is now actively telling said populum to give them money because it is human-written. Which, at this point, is a lie. Gnomingstuff (talk) 17:22, 15 December 2025 (UTC)[reply]
@OmegaMantis said: The whole reason why there was a Wikipedia in the first place...is because it's a communal project since we believe in the strength of the commons.
And here I thought Wikipedia was created because Jimmy Wales' dot-com venture wanted to attract more men by hosting factual information about sports, cars, pornography, etc., and the non-wiki version was moving too slowly to grab their intended audience. WhatamIdoing (talk) 03:01, 14 December 2025 (UTC)[reply]
Fair, fair. Still, the community seems to have generally been motivated to keep up the project, and start the Wikimedia movement, based off some sort of collaborative ideals that stem from free software and free culture. These ideals pair much better with our encyclopedia than top-down, knowledge monopolizing overuse of large language models -- even if the latter may be sexier to the commercial eye, at least to today's dot-com ventures. ✨ΩmegaMantis✨❦blather | ☞spy on me 03:22, 14 December 2025 (UTC)[reply]
Slightly relevant to this is today's donation appeal banner: December 4: Knowledge is human. We're sorry we've asked you a few times recently, but it's Thursday, December 4, and this fundraiser matters. We're nearing today's goal, but time's running out. If just 2% of our most loyal readers gave $2.75 today, we'd reach our goal quickly. Most people donate because Wikipedia is the internet we were promised: useful, free to use, and filled with reliable, human-created knowledge. If you agree, consider giving $25 or even just $2.75. So apparently the WMF is leaning into this kind of messaging. -- LWG talk 05:52, 5 December 2025 (UTC)[reply]
WMF leans into whatever they think will get them donations; if they thought AI hype would get them more, they'd mention that instead. mgjertson (talk) (contribs) 20:40, 8 December 2025 (UTC)[reply]
I think this is a good idea, and it seems already integrated into the banner campaigns. I don't think Wikipedia has much advertising, though, so it'd be difficult to adapt our message when we don't really have one. Aaron Liu (talk) 20:57, 9 December 2025 (UTC)[reply]
Based off @Athanelar's belief that Wikipedia is powerful as a content source not monopolized by one voice (as it would be if dominated by AI) I would advocate for the pitch of "Wikipedia remains one of the few sources of knowledge that all humans, including you, can contribute to." This wouldn't make any claims about what content on Wikipedia is AI-generated or how it differs from bots, but basically rephrases the "anyone can edit" by emphasizing the control of humans (contrasting with AI) while implying that now, a lot of what we learn with AI we have little human control over at all. ✨ΩmegaMantis✨❦blather | ☞spy on me 22:38, 12 December 2025 (UTC)[reply]

... make it clear what separates reading Wikipedia from things like asking ChatGPT ...

  • Wikipedia: the free encyclopedia that's probably still more accurate than ChatGPT
  • Wikipedia: the free encyclopedia that contributes less to climate change than LLMs
  • Wikipedia: the free encyclopedia whose job hasn't yet been completely outsourced to AI
  • Wikipedia: the free encyclopedia that doesn't write better than you
  • Wikipedia: the free encyclopedia that doesn't talk back

I wrote those without the assistance of an LLM.

Comparison of Wikipedia and ChatGPT
Feature | Wikipedia | ChatGPT
Where info comes from | Human-written articles with citations | AI-generated text based on patterns from training + optional web search
How content is created | People write and edit pages | ChatGPT writes responses on the fly
Can you check sources? | Yes, every claim should have citations | Sometimes -- sources aren't built-in unless the model is using web search
Tone & style | Neutral, encyclopedic | Variable: can be friendly, technical, simple, creative
Good for | Facts, history, definitions, lists, research | Explanations, summaries, tutoring, brainstorming, custom help
Weaknesses | Not personalized; incomplete topics | Can make confident mistakes; no built-in citations
Update frequency | Whenever volunteers edit | Mostly based on training + optional web searches

Wikipedia is like a big school book written by lots of teachers. Every fact has to be checked. All the teachers agree on what goes in the book. It explains things the same way for everyone.

ChatGPT is like asking a super-smart robot friend. It explains things in whatever way helps you understand. You can ask follow-up questions. It can give stories, examples, or simpler explanations. But sometimes the robot might guess wrong, so you still have to be careful.

Wikipedia is like a museum: Everything on display is curated, sourced, and labeled. You see stable information. You walk through and learn at your own pace. It does not answer you directly; you explore it.

ChatGPT is like a personal tour guide: You can ask anything: "Can you explain that again, but simpler?" The guide adapts to your interests. It connects ideas across rooms ("Here’s how this painting relates to that sculpture.") But occasionally, the guide might misremember or over-explain something, so you verify if it matters.

The table and everything after it was written by ChatGPT. Levivich (talk) 20:59, 11 December 2025 (UTC)[reply]

These seem to be catered to the audience that is already LLM-skeptical. But most average people do not necessarily share an LLM-skeptical view or care about the ethical aspects of data centers. That is why ChatGPT and Gemini are growing and slowly eating the rest of the internet's lunch. The point about confident mistakes is important, but Wikipedia can also confidently report a hoax for years. Andre🚐 21:23, 11 December 2025 (UTC)[reply]

The proposal would be misleading. If you say "human-written" the layman would understand "written by a single and definite human", ideally someone the layman already knows and trusts (like a newspaper article or opinion piece). Wikipedia is not AI but, for the layman who is unaware of our internal procedures, it's something in-between: it's not AI (at least not what the layman understands by AI, a chatbot or a similar service that generates content in answer to a query), but it's not something written by an identifiable someone either. It's stuff written by several nobodies with usernames.

Also, "human-written" is only a virtue in the eyes of people with a strong anti-AI sentiment. There is a sizeable group of people like that, but they are not everybody. There are people who like AI (some even to insane degrees), and others who just don't care, and just think of asking "who is this guy?" to ChatGPT as a more evolved version of googling it. If everybody was as anti-AI as the anti-AI guys pretend, AI would not be the success it is. Cambalachero (talk) 13:47, 12 December 2025 (UTC)[reply]

If you say "human-written" the layman would understand "written by a single and definite human" I don’t see how. The lack of any article (a, the…) makes it indefinite and plural, as it is understood in both academic and common speech. Aaron Liu (talk) 17:22, 12 December 2025 (UTC)[reply]
Being written by "an identifiable someone" isn't necessary for something to be written by humans. One doesn't look at Beowulf and think "Oh, dear, the author is unidentifiable, so it might have been written by non-humans". WhatamIdoing (talk) 19:31, 12 December 2025 (UTC)[reply]
There was no AI back in 700 AD. Cambalachero (talk) 22:51, 12 December 2025 (UTC)[reply]
I agree. But I think that it is a strong counter-example to the idea that knowing the identity of the author ("written by a single and definite human"...written by an identifiable someone) is what makes people think that something is human-written.
So, what is human-written? I think "human-written" encompasses works (like Beowulf) for which the author is unidentifiable and also those for which there is more than one author. "Human-written" indicates that humans (not AI, not monkeys, not Martians) do a particular thing (write, which I distinguish here from other activities, such as "prompt a chatbot to write" or "copy and paste" or "prettyprint the wikitext" or even "add citations to", though some of those are sometimes desirable activities). Wikipedia is (still, or at least mostly still) human-written because it is written by humans (Monkey selfie excluded). It might be difficult to draw an exact line between how much automation is possible before it stops being human-written (see also Ship of Theseus), but I think "human-written" is a fair description of both what we have now and what we want for, say, the rest of this decade. WhatamIdoing (talk) 23:37, 12 December 2025 (UTC)[reply]
The human chauvinism is really uncalled for. No species other than Homo sapiens has human-like intelligence (except perhaps other long-extinct hominids), there is no evidence that alien civilizations even exist, and artificial general intelligence is still firmly in the realm of science fiction. All knowledge is human knowledge, all writing is human writing, and that includes writing generated by AI. Cambalachero (talk) 20:52, 13 December 2025 (UTC)[reply]
I don't think that "writing generated by AI" is written by humans. WhatamIdoing (talk) 03:05, 14 December 2025 (UTC)[reply]
I don't even think that text created using a Diceware word list and dice is necessarily authored by a human. Polygnotus (talk) 15:16, 15 December 2025 (UTC)[reply]
As pointed out previously, "human-written" is incorrect because of bots that have been part of Wikipedia for over 20 years and long-predate AI. Thryduulf (talk) 19:33, 12 December 2025 (UTC)[reply]
I think I would accept some of Wikipedia's "bot-created" content as being human-written in the end: If the bot is filling in the blanks to create a sentence like "$Name is a city in $State with a population of $population as of the 2000 US Census", then a human is still significantly responsible for it. That's closer to mail merge for a form letter than to bot-created. WhatamIdoing (talk) 23:29, 12 December 2025 (UTC)[reply]
If your criterion is "a human is significantly responsible for it", then an AI prompted by a human, trained on information written by humans, and with output fully reviewed by humans must also count as human-written. Thryduulf (talk) 23:42, 12 December 2025 (UTC)[reply]
No, I don't think so. In the case of User:Rambot geography articles, we can name a specific human (User:Ram-man) who created the blanks for the bot to fill in and prepared the list of the exact things that the bot was to put in those blanks.
In the case of AI, I'm not sure that "written" is even the correct verb (maybe "assembled"?), but it's not really a human writing it. I'm not sure where you would draw the line between the total human control of Rambot-like scripts and AI, but at some point, it stops being human written. WhatamIdoing (talk) 03:08, 14 December 2025 (UTC)[reply]
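(To make the mail-merge comparison concrete: a minimal Python sketch of Rambot-style stub generation, using the sentence pattern quoted above. A human writes the template and prepares the rows; the script only fills in the blanks. The place names and figures are invented, and this is not Rambot's actual code.)

# Mail-merge-style stub generation: the human supplies the template and
# the data rows; the script mechanically fills the blanks.
STUB = ("{name} is a city in {state} with a population of "
        "{population:,} as of the 2000 US Census.")

rows = [  # hypothetical rows a human would have prepared from census tables
    {"name": "Exampleville", "state": "Ohio", "population": 1523},
    {"name": "Sampleton", "state": "Kansas", "population": 407},
]

for row in rows:
    print(STUB.format(**row))
# Exampleville is a city in Ohio with a population of 1,523 as of the 2000 US Census.
# Sampleton is a city in Kansas with a population of 407 as of the 2000 US Census.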
I would not say that bots have been writing content. They've been reverting edits and making technical and style fixes. Aaron Liu (talk) 03:33, 13 December 2025 (UTC)[reply]
They also generated many wholesale articles about small localities. Andre🚐 03:34, 13 December 2025 (UTC)[reply]
Okay, I would concede on that point. It irks me, though, that such a low-visibility area stops us from claiming "human-written"... Aaron Liu (talk) 03:36, 13 December 2025 (UTC)[reply]
It's not just small localities, although that's the greatest number, there have been articles about species and multiple other topics too. Just because it's low visibility to you doesn't mean it's not important, or not high visibility to someone else (I'd be surprised if most readers don't look up articles related to their local area at some point). Thirdly, fixing spelling and grammar errors, updating templates, fixing links, and many other small tasks that bots do (solo or in conjunction with a human) are also very much part of writing an encyclopaedia. Thryduulf (talk) 04:46, 13 December 2025 (UTC)[reply]
On third: I agree that they are vital work, but I wouldn't call them "writing". Aaron Liu (talk) 19:45, 13 December 2025 (UTC)[reply]
There's little scope for creative writing on Wikipedia because the articles are supposed to be entirely derivative, being based on the writing of others, and presented in a bland, dispassionate and formulaïc style. This is perhaps why forums like the Village Pump are so popular – they enable editors to express themselves more freely. Andrew🐉(talk) 20:10, 13 December 2025 (UTC)[reply]
Indeed, I would describe Wikipedia's content as technical writing. Aaron Liu (talk) 13:52, 15 December 2025 (UTC)[reply]
A major bot editor of Wikipedia is Cluebot NG which has made over 6 million edits and counting. This explicitly uses an artificial neural network which has been weighted with training data. This seems to be much the same as the new AIs and its scope includes all articles not just the cookie-cutter stubs created by bots like Rambot. Andrew🐉(talk) 08:12, 13 December 2025 (UTC)[reply]
That's somewhat true, in that GPT is also a type of neural network, a more specialized type for generating text. The problem is when people use it as a reasoning tool instead of a fancy autocompleter or a search tool. It is searching to come up with something, not really thinking. Cluebot is trained with edits as data and returns basically a binary decision: a score of the likelihood that this is a bad edit that should be reverted. Therefore it works well only because some common types of vandalism look similar. But it doesn't actually read or write articles. Andre🚐 08:22, 13 December 2025 (UTC)[reply]
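(A toy sketch of the score-then-threshold decision described above. The feature names, weights, and threshold are invented for illustration; ClueBot NG's real classifier is a trained neural network, not hand-set weights like this.)

import math

# Score an edit and make a binary revert/keep decision at a threshold.
WEIGHTS = {"all_caps_ratio": 3.0, "profanity_hits": 4.5, "chars_removed": 0.002}
BIAS = -4.0
THRESHOLD = 0.9  # revert only on high confidence, to limit false positives

def vandalism_score(features: dict) -> float:
    """Logistic score: estimated likelihood that an edit is vandalism."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

edit = {"all_caps_ratio": 0.8, "profanity_hits": 1, "chars_removed": 250}
score = vandalism_score(edit)
print(f"score={score:.3f}:", "revert" if score >= THRESHOLD else "keep")
# score=0.968: revert

Note that the output is a yes/no call on an existing edit, which is why this kind of model patrols text but never composes it.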
Cluebot is performing recent changes patrol and this requires both reading and writing. Few human editors read or write entire articles; much of the activity is piecemeal.
Looking at an example of a recent edit by Cluebot, note that the article source looks more like code than English prose. That's because of the heavy use of markup, templates and tables. Many ordinary humans would find this incomprehensible. Wikipedia is not just some English text; the whole thing is a complex bundle of software.
Note also that the vandalism was made by a temporary account and so we don't really know who or what did that. But the trusted version to which Cluebot reverted, was created by Citation bot. In this case, the bots seem to have more presence and standing than the putative human.
Andrew🐉(talk) 08:48, 13 December 2025 (UTC)[reply]
I'm quibbling a bit but Cluebot doesn't really write the article. It reads the diff, determines it looks like vandalism, and reverts. Citation bot only modifies the URL inside the citation. In neither case are they actually composing article text. Andre🚐 20:39, 13 December 2025 (UTC)[reply]
That's editing rather than writing, but humans often do this too. And such bots can be versatile – an earlier version of Cluebot created thousands of articles such as 1803 Zwicky. Andrew🐉(talk) 20:59, 13 December 2025 (UTC)[reply]
That's true. More in the vein of the Rambot type stub articles. Andre🚐 22:07, 13 December 2025 (UTC)[reply]
I don't think that reverting is "writing" content. WhatamIdoing (talk) 03:10, 14 December 2025 (UTC)[reply]
To this I would add that the assumption "but we've reached the point now where we really need to be looking into how we frame Wikipedia's relationship with AI, especially in public-facing areas" is unfounded. I don't know why there are such huge worries, with people describing the situation as complete chaos and Wikipedians scrambling to manage a huge flood of AI problems and stay relevant.
People basically know Wikipedia is written not by AI but by humans.
The WMF and news orgs currently reporting on Wikipedia communicate over and over that Wikipedia stays human.
Lots of other websites are also written by humans; just because LLMs exist now doesn't mean it's soon all just AI text. Basically nobody uses Grokipedia. So regarding "one of its defining characteristics", that's also kind of false, because people don't expect it to change, and lots of other websites and texts are also still written by humans.
Andrew, Cluebot NG doesn't write content but just reverts edits so it doesn't really add content. Prototyperspective (talk) 21:08, 13 December 2025 (UTC)[reply]
Cluebot II created about 20,000 articles; that's plenty of content. Andrew🐉(talk) 22:01, 13 December 2025 (UTC)[reply]
Cluebot II is a now inactive bot that generated a bunch of stubs about individual asteroids based on the JPL Small Body Database, most of which are redirects. Its source is published for review, and the articles it generated consisted of SBDB data and a single sentence. I don't intend to debate whether this kind of algorithmic generation is valuable, but suffice it to say that neither Cluebot II, nor Cluebot NG, a classifier, is much the same as the new AIs. Agentdoge (talk) 14:53, 16 December 2025 (UTC)[reply]
The point about the various bots is that they are not human and so are counter-examples to the OP's conception that Wikipedia is purely written by humans. Claims of this sort are generally contrary to the disclaimers which explain that Wikipedia's editors are not vetted or qualified and so the content is unreliable. Andrew🐉(talk) 15:43, 17 December 2025 (UTC)[reply]
Not to go off-topic, but I find it weird how this (and many other discussions about AI use on WP) seem to laser-focus on edge cases ("how can we say we should limit LLM use when we've always used bots?") while totally ignoring the big picture. The claim that "human-written" is only a virtue in the eyes of people with a strong anti-AI sentiment may or may not be true, but is ignoring the fact that people who come to WP choose not to use AI, which is now unavoidably built into every Google search and thus more easily accessible. I don't much care to defend the idea that WP is better than some hypothetical future LLM output, only that it's different, and we should be able to offer such an option to those who want it for whatever reason. Should black licorice be pulled from store shelves and replaced with chocolate because "the taste of black licorice is a virtue to only those few people who hate the taste of chocolate"?
To those who say that AIs can write brilliant articles at scale...well, sure, daily Coca-Cola consumption can be a part of a healthy diet and lifestyle. But look at Facebook, look at LinkedIn, look at the dozens of listicles that CNN is trying to pass as news on their homepage. They're utter total predictable garbage at a machine-gun pace. The ability of AI to make good things at scale is vastly dwarfed by its ability to make garbage at scale. Why are so many people here focusing on the can and the might and the if it's done right while ignoring the internet that is right in front of their faces, from which WP is an increasingly rare island of refuge? Isn't that worth preserving? WeirdNAnnoyed (talk) 22:00, 13 December 2025 (UTC)[reply]
Faulty example. Rather than pull black licorice from store shelves, the correct analogy would be to advertise it as "not chocolate". Who would care about such a detail, other than those who do not want chocolate? Cambalachero (talk) 22:15, 13 December 2025 (UTC)[reply]
That's also a faulty example, but only because the whole premise of both examples is faulty too.
Why are we afraid to publicly say what's actually going on? Who are we trying to defend or appease?
I propose adopting the following attitude, even if we don't literally go for my new tagline:
Wikipedia: Our editors are 100% real, because AI is bullshit. TooManyFingers (talk) 02:10, 14 December 2025 (UTC)[reply]
Thanks for confirming that ""human-written" is only a virtue in the eyes of people with a strong anti-AI sentiment". Cambalachero (talk) 14:48, 15 December 2025 (UTC)[reply]
Also a faulty example. Rather than advertise something as "black licorice" while knowing that an undetermined amount of the licorice is actually being replaced with chocolate, the actual correct thing to do would be to put a disclaimer on the licorice box saying that it may contain chocolate. Gnomingstuff (talk) 17:26, 15 December 2025 (UTC)[reply]

What do you want to achieve by this? Do you expect more people to visit Wikipedia and/or to contribute to it? Then it becomes an empirical question (it either achieves this goal or not) that can be tested on a small scale. Alaexis¿question? 21:22, 15 December 2025 (UTC)[reply]

I guess this is based on the hope that we will be able to tell the difference between people and non-human agents. I wonder when that will start to become difficult. Perhaps not too long from now...although Evan Ratliff's Shell Game show should give people with "a strong anti-AI sentiment" a comforting feeling that we are not there yet. Sean.hoyland (talk) 11:05, 17 December 2025 (UTC)[reply]

Suggestions for new features in Wikipedia

[edit]

With how popular explanatory footnotes are, a feature in the visual editor's citation button section for creating footnotes could be pretty useful. A section of the visual editor's link button for reusing previous links could also be useful, considering how many times I find myself linking to the same article. A more secondary visual feature: instead of citations next to each other being distinct, like [1][2], they could be merged, like [1,2]. Misterpotatoman (talk) 07:07, 29 November 2025 (UTC)[reply]

Like the idea on footnotes.
For reusing previous links, you just need to type '<ref' where you want to put your source in the Visual Editor; a pop-up will then automatically appear with three options: 'Automatic', 'Manual' and 'Re-use'.
Merged citations like [1,2] would be too close for comfort and could result in mis-taps on smaller handheld devices; see also WP:AINT. Cdr. Erwin Smith (talk) 12:46, 29 November 2025 (UTC)[reply]
If you'd like to see merged footnote markers, then see w:fr:Moineau mélanure#Description. The proposal is similar to the style used at the French Wikipedia. WhatamIdoing (talk) 22:13, 29 November 2025 (UTC)[reply]
That requires manually inserting fr:Template:, between each <ref>. jlwoodwa (talk) 04:15, 30 November 2025 (UTC)[reply]
No, I mean reusing links as in the Wikipedia feature that lets you link to other articles; I'm not talking about citations. Also, I think if citations were merged, it should pull up a screen where all the citation links appear; I think it will actually make it easier on smaller devices. Misterpotatoman (talk) 22:32, 29 November 2025 (UTC)[reply]
It sounds like you're thinking about the scenario in which I go from one article to the next to add a link to (for example) Rare disease (real example, BTW), and instead of clicking the link button and typing rare dise in the search box until it pops up the link to the correct article, it would have a list of the most recent ones I've added links to, and I could just click on one of those instead of typing.
As someone who never edits from a smartphone, this would not be efficient for me. But since you're on mobile, where IMO typing is practically impossible, let me ping @PPelberg (WMF) and ask him to please make a Phab task suggesting this new feature idea for the mw:mobile visual editor. WhatamIdoing (talk) 23:53, 30 November 2025 (UTC)[reply]
Exactly, that's what I meant, and that's even better than my first idea for link reuse. Misterpotatoman (talk) 23:58, 30 November 2025 (UTC)[reply]
“recently created citations” would be a good Meta:Community Wishlist entry ~ 🦝 Shushugah (he/him • talk) 23:26, 9 December 2025 (UTC)[reply]
FWIW you can just click on the "Cite" button Aaron Liu (talk) 20:29, 11 December 2025 (UTC)[reply]
Imagine that you're adding the same two or three sources to multiple articles. The "Cite" button will let you re-create the source each time (hand-correcting each time whatever it gets wrong). You could alternatively copy/paste the citations between articles. But I believe the request is for something like:
  1. Click the "Cite" button
  2. See everything that's there now plus a short list of the last few citations you generated (including the last few that you used in other articles).
  3. Pick one from the pre-loaded list of recently used citations.
WhatamIdoing (talk) 00:16, 12 December 2025 (UTC)[reply]
Yeah, that seems about right. I was pointing out to @Cdr. Erwin Smith that you can do the same without having to type out the ref syntax. Aaron Liu (talk) 17:24, 12 December 2025 (UTC)[reply]
The 'Citation' button beside the 'Link' button looks much more like a quotation [ " ] button than a reference one. Being an erstwhile Quora user, I assumed it was one :D Cdr. Erwin Smith (talk) 18:59, 12 December 2025 (UTC)[reply]
Yeah me too. c: User:Jack who built the house/Convenient Discussions even uses it as a quote button! Aaron Liu (talk) 19:02, 12 December 2025 (UTC)[reply]

A new space for all these proposals for proposals relating to problematic LLM usage?

[edit]

It's taking up like 80% of this page. I have no formal proposal, but it might be a good idea to have a separate talk page/notice board for this. -1ctinus📝🗨 20:47, 2 December 2025 (UTC)[reply]

See Wikipedia talk:Writing articles with large language models; additional participation is welcomed, especially as a lot of the discussion here is redundant to discussion already happening over there. -- LWG talk 20:57, 2 December 2025 (UTC)[reply]
Maybe a banner would be helpful to redirect prospective posters? -1ctinus📝🗨 22:27, 2 December 2025 (UTC)[reply]
That's a good idea -- it's a little concerning how many people weren't aware this RfC even happened (not saying it's their fault, the topic just seems strangely under-publicized somehow despite taking up volumes of space) Gnomingstuff (talk) 03:24, 3 December 2025 (UTC)[reply]
I agree with you. It's getting annoying seeing all these LLM/AI discussions clog up the Village Pump. Maybe another Village Pump tab (e.g. Wikipedia:Village pump (AI/LLM)) is needed, I dunno. Some1 (talk) 18:22, 7 December 2025 (UTC)[reply]
I feel the same, but I wonder whether general community oversight of those discussions would be better than letting them 'hide' on a page only frequented by people with a particular interest/POV. WhatamIdoing (talk) 22:21, 9 December 2025 (UTC)[reply]
Agree with the above. An idea: putting all LLM- or AI-related threads into one thread here.
Then there would be fewer page headers about these, and maybe one could collapse the combined thread to scroll past it. Maybe a feature to collapse threads on desktop is missing.
Another feature that would be great is getting notifications for new threads, as another approach to watchlisting this page. See this idea in the wishlist: W370: Mute some discussions on a busy page. Narrowly scoped proposal pages are problematic for several reasons. Prototyperspective (talk) 12:37, 14 December 2025 (UTC)[reply]
Then the mega-thread would never get archived, and eventually this page would be so large that people on mobile devices couldn't participate. WhatamIdoing (talk) 22:20, 14 December 2025 (UTC)[reply]

Recruiting expert editors

[edit]

One of the main issues with the most important pages is that they require expert editors on the topic to improve them to GA status. These people are busy IRL and are unlikely to take Wikipedia seriously. Peer-reviewed journals get these people to review for free, and this can count as service for tenure packets. One issue with using Wikipedia for this is that accounts are generally anonymous, and anyone can claim to be anything or anyone here. We recently introduced temp accounts; could a non-anonymous account that requires a .edu email to sign up for, combined with some form of access to sources and letters of thanks tracking service that could be put in a tenure packet, be possible/useful? Is there anything else that could be used as bait for expert editors? GeogSage (⚔Chat?⚔) 18:49, 3 December 2025 (UTC)[reply]

Possessing a .edu email address (or equivalent) is not restricted to subject experts or even just academics. For example, by virtue of being a life member of the computer society at Swansea University, which I got by serving as the society secretary for a year about 25 years ago, I have an @swan.ac.uk email address despite not even being a graduate. I have a friend with a .ac.uk email address because they work as an administrator at a sixth-form college.
Secondly, not everybody who is a subject matter expert is an academic and/or works in academia. I have acquaintances who are experts in different aspects of railway history but they are retired railway professionals not academics. I spoke with one of them a few years ago about editing Wikipedia, but they were simply not interested - their primary interest was in conducting the original research. There is also the issue that much of what they would want to write about if they were interested in doing so would be regarded as too niche for a general purpose encyclopaedia. Thryduulf (talk) 19:48, 3 December 2025 (UTC)[reply]
I'm aware that you don't need to be an academic for a .edu email; it is one possible limit though, especially if the email is made public and the account is not anonymous. Trying to recruit experts outside academia is another challenge; I'm trying to focus on one approach to getting one possible group of people who have a potential institutional motivation to do service. If you have suggestions on ways to recruit and motivate other groups of experts like those you mention, please share them. GeogSage (⚔Chat?⚔) 19:59, 3 December 2025 (UTC)[reply]
Students get them en masse. At minimum, it would have to be restricted to non-students, and I don't think that's feasible, so, long story short, this is not workable. Piotr Konieczny aka Prokonsul Piotrus| reply here 12:53, 10 December 2025 (UTC)[reply]
The WMF has programs like Wikimedian in Residence that encourage universities to support Wikipedia by creating Wikipedia-oriented positions for academics. But that involves a lot of resources to get a single position at a university. I wonder if we could attract more editors by asking the WMF to also encourage universities to promote Wikipedia as an option for fulfilling faculty service requirements.
On the front of experts outside of academia, expanding Wikipedia Library offerings and publicizing them more might attract some contributors. signed, Rosguill talk 01:41, 4 December 2025 (UTC)[reply]
If we could get universities to accept Wikipedia work as service, through whatever means, I suspect we would have a large volume of academics editing. I use Wikipedia as a means to help me actually read the stack of PDFs I download for work on other projects and broaden my understanding of my discipline, the instantaneous gratification of including a source or bit of information is a great motivator, but most professors I know consider it a waste of time they could spend on things they get credit for. Even if the University doesn't consider it as part of a tenure packet, "verified" profiles could help overcome this by allowing a professional to demonstrate some outside work in a qualitative way (even outside academia). GeogSage (⚔Chat?⚔) 02:19, 4 December 2025 (UTC)[reply]
User:GeogSage, I already count my Wikipedia work as "service", but let's be clear: very few people in academia need more "service" to put on their annual report (for the outsiders, we typically get evaluated on teaching, research, service). What we need is for Wikipedia to count as "research", and that's not going to happen until Wikipedia's status in academia goes up. My dean tells me every year "yeah we can't count that as research" and he bases that, pretty much, on what he sees as a rough consensus, nationwide, in the profession: that writing up articles, whether GA or FA, even within one's own field, does not constitute what we call "research". Writing up stuff for online databases, that counts, but the various stigmas associated with Wikipedia continue to prevent us academics from getting credit for work done here. Look at my contributions: I've given up on getting them recognized professionally, and that is one factor in my no longer being so active in actual writing and improving articles. Drmies (talk) 16:41, 10 December 2025 (UTC)[reply]
The problem with counting Wikipedia work as research is Wikipedia:No original research. Fundamentally, research as I understand it requires the creation of original thought. Wikipedia is an aggregator of that thought, and by its nature is not original. One of the pages I'm the most proud of is Technical geography, and I could improve it tremendously if I could use my own thought on the topic; there are things I know about it through synthesis that are just not in the readily available literature. However, this requires that I first publish that synthesis in a reliable outlet, which would itself count as research on my annual report. Based on Wikipedia's own policy, I don't see it getting counted toward research duties, which is why I started with service. GeogSage (⚔Chat?⚔) 19:58, 10 December 2025 (UTC)[reply]
User:GeogSage, "research" in my business is not necessarily original thought: research comes in all kinds. The problem with it not being weighted as research is the status of Wikipedia, not the nature of the writing. I got two publications in the pipeline--one is of the kind that you're thinking of, with me doing thinking and interpreting, but the other, for the most part, is a biography of the kind that we write here. And I got a couple articles in this series--there's a mix of "original research" there, along with regular biographical/historical writing. But if Eric Corbett and I had written up Green children of Woolpit outside of Wikipedia, I am sure I could have found an academic journal that would take it. Drmies (talk) 20:42, 10 December 2025 (UTC)[reply]
That is true; however, in my experience different types of research are weighted differently. Peer-reviewed publications are the gold standard; other stuff is nice but not given as much weight. This is a problem in itself: I have some publications that are not in journals, and they aren't valued as highly. GeogSage (⚔Chat?⚔) 20:51, 10 December 2025 (UTC)[reply]
There's also a corollary issue, which is that irrespective of what one's current institution thinks of Wikipedia activity, most academics also need to think about building a resume of publications for future jobs. signed, Rosguill talk 20:55, 10 December 2025 (UTC)[reply]
The argument I made, and I made this in my promotion file as well, is that FAs and to a lesser extent GAs are in fact peer-reviewed, as are DYKs. It didn't fly, but it should have. On an average FA one gets more peer review than for most journal submissions. For my book, I got two reviewers. For a recent book chapter, two; for a biographical article, one. But for one article in Studies in Medieval and Renaissance Teaching, I had seven reviewers. My point is that "peer review" (and you know this also of course) isn't always the same thing, and to fetishize it for journal articles and deny it happens on Wikipedia, or doesn't count for anything, is just wrong. But this is a problem in the profession--it's not a problem Wikipedia caused or can do much about. It's up to the T&P committees (our colleagues) and the deans (our supreme rulers). Drmies (talk) 21:06, 10 December 2025 (UTC)[reply]
One of the main issues with the most important pages is they require expert editors on the topic to improve them to GA status. Except that isn't true. Anyone who's reasonably careful and willing to do some background reading if necessary should be able to raise most articles to GA status. (Our most technical math articles may be an exception, but more or less everything else is fair game.) I'm also a little confused which of our articles are now the "most important". Cremastra (talk · contribs) 16:52, 4 December 2025 (UTC)[reply]
I'm generally referring to articles rated highly by the vital articles project. While anyone can technically put the work in to get an article to GA status, an expert editor will already have that background. Finding sources is not always straightforward, and the knowledge of how to navigate the literature landscape is not something that happens overnight. There are concepts that are not common knowledge that people won't even know should be included in an article without some background. Furthermore, in my narrow area of knowledge, I see that there are major errors on Wikipedia where, no matter how many sources I provide, most editors don't even understand the issue. There are some things that are really hard to self-teach, but really easy to think you've mastered without an outside opinion. GeogSage (⚔Chat?⚔) 21:13, 5 December 2025 (UTC)[reply]
Wikimedia sponsored some research on the problem of low academic engagement [9]. Alaexis¿question? 21:02, 5 December 2025 (UTC)[reply]
Some of our editors are (or at least claim to be and I have no reason to doubt them) academics, and seem to spend a lot of time editing Wikipedia. Piotrus and Drmies, can you say anything here? Phil Bridger (talk) 23:07, 9 December 2025 (UTC)[reply]
I have written articles, in newspapers and in peer-reviewed journals, arguing that "If we could get universities to accept Wikipedia work as service, through whatever means, I suspect we would have a large volume of academics editing" and that this is ethically a good idea. More influential folks than me have done the same, but clearly, we are a voice crying in the wilderness. I have no idea what could be done better. I could say that WMF could use some of its funds that it is wasting on some stuff to do PR for this idea, but honestly, I doubt it would help much; the organizational inertia is just too big to deal with. Universities are not accepting Wikipedia as service, because it is not a component of university rankings, and this is the main thing that matters for bureaucracy (as rankings=student draw=$$$). It's as simple as that. Piotr Konieczny aka Prokonsul Piotrus| reply here 12:56, 10 December 2025 (UTC)[reply]
User:Piotrus, as I said above, I count my work here as service and I don't think many in academia would have a problem with that, but service is typically only up to 15% or 20% of the evaluation. We need it counted as research. Drmies (talk) 16:42, 10 December 2025 (UTC)[reply]
That too. And I read your point above about OR. It's valid, and other encyclopedias usually allow OR. That said, OR is often in the eye of the beholder, particularly in cases of WP:SYNTH, and when we create articles on topics that don't have proper treatment. Again, folks disagree. Recently I talked with a colleague of mine (an academic who also occasionally dabbles here with small edits). I believe that articles such as a book writeup, summarizing reviews and academic analyses and creating the first proper overview of said book, are valuable research, even if it is just compiling existing knowledge. He doesn't think so. Anyway, to keep it short, while OR is not allowed on Wikipedia, R (research) is, and what we often do is research, as defined and explained in that article. So, sure, it should be counted. And we know it is not going to happen soon, due to organizational inertia, lack of understanding and a lack of incentives for change. I mean, academia has more serious problems it cannot deal with (peer reviews, closed-access parasitism, degree inflation, etc.). Shrug. Piotr Konieczny aka Prokonsul Piotrus| reply here 10:20, 11 December 2025 (UTC)[reply]
Is the wikiversity:WikiJournal User Group still active? WhatamIdoing (talk) 03:18, 14 December 2025 (UTC)[reply]
@OhanaUnited may know Piotr Konieczny aka Prokonsul Piotrus| reply here 03:52, 14 December 2025 (UTC)[reply]
@WhatamIdoing @Piotrus Somewhat. Our efforts are hampered by a WMF funding reduction to our project this year that led to the layoff of our only paid staff member and the termination of contractors. The lack of prompt action by the Board of Trustees and their self-appointed "Sister Projects Task Force" on WikiJournal, after we submitted the proposal 6 years ago, has substantially decreased our volunteer editorial board's motivation. OhanaUnitedTalk page 14:39, 16 December 2025 (UTC)[reply]
@OhanaUnited It's ridiculous. Have you considered writing for The Signpost about that, and submitting a Wikimania presentation on the WMF's destruction of WikiJournals? This needs to be shown to our community; WMF is becoming increasingly useless for us. Piotr Konieczny aka Prokonsul Piotrus| reply here 01:21, 17 December 2025 (UTC)[reply]
See the pages in Category:Wikipedia expert help.
Some ideas and Pros and Cons can be found at this spot in the structured argument map "Should scientists contribute to Wikipedia?".
I somewhat agree with Piotrus, but I also see potential issues with the intrinsically-motivated, genuine-volunteering and NPOV principles being undermined to some degree by such incentives. I think a quite effective approach would be anonymous Wikipedia contribution certificates, with which academics could show that they contributed substantially and constructively without having to reveal what they did (this (1) safeguards privacy, (2) safeguards neutrality, and (3) addresses potential conflict-of-interest issues). This concept isn't far developed, so more R&D on it would be great. It is also relevant to the recognition of open-source development contributions as 'volunteering' (see petition). This could maybe also be used the other way around, to verify one's academic experience without harming privacy, albeit I don't think that would have much of an impact (it could make it easier to find relevant users for a topic or via suggested tasks).
Secondarily, when it comes to effectiveness, I think it's maybe less about "bait" and incentives and more about helping potential expert editors find places where they're needed, and about them learning Wikipedia editing / getting signed up and exploring a bit. The latter could e.g. be addressed by universities showing a demo of how Wikipedia works or somehow incentivizing such potential editors to sign up. The former could be partly addressed via what I proposed at W316: Suggested tasks based on contributions history (user interests) also for experienced editors. Tasks (and articles) would basically find their relevant experts, who may spend only very short times on the site and aren't looking around much to find such tasks. Prototyperspective (talk) 12:31, 11 December 2025 (UTC)[reply]
Incentivizing Academic Researchers & Scientists to engage in Wikimedia
…also, you may be interested in this talk. I found it interesting.
Time constraints are a main reason why I think the task-sharing system proposed in the link above would be useful, and limited academic recognition is a reason why I think contribution validations/certifications would be useful. Regarding the latter, I forgot to mention that edit-types R&D (see m:Research:Wikipedia Edit Types & c:Category:Wikimedia contributions) is relevant to it, as correcting a small typo or making lots of small edits which are later reverted is different from an edit creating a new, large, heavily-viewed article. Prototyperspective (talk) 01:52, 18 December 2025 (UTC)[reply]

Scope of AI tool use

[edit]

Recently, @LuniZunie and I created Wikipedia:WikiProject AI Tools, aimed at working on tools that leverage AI models to help Wikipedia editors with tasks other than content writing. However, the line appears to have quickly been blurred. Some of the proposed tools have been focused on tasks such as generating edit summaries, which we've historically been using as a warning sign to stop generative AI abuse. More worryingly, others (Flow Checker, AI Proofreader) will review an article's writing, which might risk editorializing or pushing a POV (even something as subtle as a false balance) without the AI writing the words itself.

Beyond the question of the WikiProject's scope, there is a fundamental question of what the community is okay with in terms of AI-assisted tools, and it is crucial that we workshop a policy or guideline regarding what is or isn't accepted by the community. Chaotic Enby (talk · contribs) 20:32, 5 December 2025 (UTC)[reply]

As long as the final output is verified by a human, then it doesn't matter what an AI did or didn't do before the human reviewed it. If the final output is not verified by a human, then that's not acceptable, regardless of what it is that wasn't reviewed. Thryduulf (talk) 21:40, 5 December 2025 (UTC)[reply]
That's easy for you to say, but we'd have to actually establish what "meaningful human review" is and how we can confirm it has happened. Cremastra (talk · contribs) 21:42, 5 December 2025 (UTC)[reply]
Random thought: what do you guys think about GenAI contributions being posted to talk pages in the form of edit requests, to be implemented by another human after review? -- LWG talk 21:56, 5 December 2025 (UTC)[reply]
What problem does this solve, or what benefit does it add, relative to human-written edit requests? NicheSports (talk) 21:58, 5 December 2025 (UTC)[reply]
None at all; human-written is always preferred. But in the case that we as a community end up landing on "some generated text is acceptable to be inserted after review", the advantage of keeping that text in edit requests is that it prevents harm to the wiki without consuming experienced editor attention: if the influx of requests exceeds the capacity to review, they can simply be ignored until more capacity is available, as opposed to the current situation, where the text is inserted directly into the article and remains there in an unreviewed state until someone devotes the effort to review and possibly remove it. -- LWG talk 22:28, 5 December 2025 (UTC)[reply]
I think we'd end up swamped in requested edits, some of which would be good and many of which would be posted by new users who can't see which changes are good and just got an AI to scan the article and then tried to be helpful.
If we're going to allow any AI use (e.g. for identifying typos, not in a generative sense) it should be restricted to a set of trusted editors who are experienced and smart enough to know which changes to implement. Cremastra (talk · contribs) 22:02, 5 December 2025 (UTC)[reply]
Good suggestion indeed. Maybe make this a new user right? We've had issues with EC users misusing AI many times before, so it isn't just a matter of edit count. Chaotic Enby (talk · contribs) 22:10, 5 December 2025 (UTC)[reply]
Policies and guidelines already require a meaningful review. Use the exact same standard. Thryduulf (talk) 22:02, 5 December 2025 (UTC)[reply]
@Thryduulf What policies and guidelines have established detailed procedures to review articles? And why in earth would those procedures be useful for reviewing the accuracy and usefulness of AI-generated content? Cremastra (talk · contribs) 22:13, 5 December 2025 (UTC)[reply]
I objected to the proposals that introduced these policies and guidelines because I believed they were vague and did not take into account details like this one. However the community consensus rejected this viewpoint, therefore sufficient procedures must exist to make it workable. I can't tell you what these are, you need to take it up with those who introduced the relevant policies. Thryduulf (talk) 00:33, 6 December 2025 (UTC)[reply]
The AI Tools project was potentially a little premature, given that the community is actively wrestling with what the limits on AI use should be. My recommendation would be that until our policies on LLM use stabilize, the AI Tools project should avoid advancing any use cases that 1) generate content (including edit summaries) or 2) review or adjust the meaning of article content. Catching typos does not adjust the meaning, so something along those lines would be fine. NicheSports (talk) 21:45, 5 December 2025 (UTC)[reply]
That's what I would support too, and I hoped that the project would develop along these lines. Also interested by Cremastra's idea of additionally restricting this to a set of trusted editors, which might provide regulation from another angle. Chaotic Enby (talk · contribs) 22:09, 5 December 2025 (UTC)[reply]
I have suggested restricting LLM-assisted content generation to editors holding an llm-user right several times, so I would certainly support this :) I'd prefer similar requirements to autopatrolled for that right, but could discuss. NicheSports (talk) 22:12, 5 December 2025 (UTC)[reply]
The AI Proofreader is just that. The prompt is:
  • Spelling and Typos: Look for misspelled words, especially proper nouns, technical terms, and common words.
  • Grammar and Style: Identify grammatical errors, awkward phrasing, run-on sentences, and violations of Wikipedia's manual of style.
  • Factual Inconsistencies: Point out contradictory information within the article.
So it won't editorialize or push a POV. It just helps identify internal inconsistencies, bad writing and mistakes. Polygnotus (talk) 22:18, 5 December 2025 (UTC)[reply]
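For concreteness, a flag-only checker along these lines can be sketched as below. This is not the actual AI Proofreader code (which I have not read); the endpoint, model name and exact prompt wording here are assumptions, with only the three review criteria taken from the prompt quoted above. The key property is that the model's answer is only displayed to the editor; the tool never writes to the page.
  // Minimal sketch of a flag-only proofreading call (hypothetical; any
  // OpenAI-compatible chat endpoint would work the same way).
  const PROMPT = [
      'You are proofreading a Wikipedia article. Only FLAG issues; never rewrite.',
      '1. Spelling and typos, especially proper nouns and technical terms.',
      '2. Grammar, awkward phrasing, run-on sentences, manual-of-style violations.',
      '3. Internal factual inconsistencies (e.g. infobox and body disagree on a date).',
      'Return a bullet list of findings, each quoting the affected passage.'
  ].join( '\n' );
  async function flagIssues( articleWikitext ) {
      const response = await fetch( 'https://api.openai.com/v1/chat/completions', {
          method: 'POST',
          headers: {
              'Content-Type': 'application/json',
              Authorization: 'Bearer YOUR_API_KEY' // placeholder, supplied by the user
          },
          body: JSON.stringify( {
              model: 'gpt-4o-mini', // placeholder model name
              messages: [
                  { role: 'system', content: PROMPT },
                  { role: 'user', content: articleWikitext }
              ]
          } )
      } );
      const data = await response.json();
      // Surface the findings to the editor; never edit the page programmatically.
      return data.choices[ 0 ].message.content;
  }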
If it only identifies, and does not suggest new content, then I think that will be compliant with any future PAGs we develop for LLMs. Sounds fine to me? NicheSports (talk) 22:22, 5 December 2025 (UTC)[reply]
Yup. And it can't do anything itself; it just tells the user things like "This is possibly a typo" or "There is a missing word in this sentence". Feel free to give it a try. Polygnotus (talk) 22:23, 5 December 2025 (UTC)[reply]
My worry is that, even though this is the intent, issues can easily creep in. For instance, fixing "grammar and style" might sound straightforward to us, but has often been used in AI-generated content as a justification for changes in tone or in due weight. Same for "factual inconsistencies", where it might make inferences from its knowledge base on matters that might not be clear-cut inconsistencies. The intent is noble, but I am worried about the doors it might open. Chaotic Enby (talk · contribs) 22:34, 5 December 2025 (UTC)[reply]
Yeah on second thought I think "factual inconsistencies" is too close to "article meaning" for me to be comfortable with. CE, would you be fine with use cases specific to identifying potential typos and MoS violations? NicheSports (talk) 22:37, 5 December 2025 (UTC)[reply]
A factual inconsistency would be "According to the infobox this dude was born in 1765 but in the body of the article it says 1865". Allowing an AI to actually make such edits would be bad of course, see Grokipedia, but telling a human editor that is fine. Polygnotus (talk) 22:38, 5 December 2025 (UTC)[reply]
Potential typos is fine with me. MoS violations could be okay, although, given the amount of conflict on one or two aspects of it, some care should be needed. Restricting it to specific violations (formatting-focused rather than prose/content-focused) could be ideal. Chaotic Enby (talk · contribs) 22:41, 5 December 2025 (UTC)[reply]
Seconding Chaotic Enby. In my experience, AI edits that "rewrite for neutral tone" actually introduce promotional tone, AI edits that "make things more concise" often do so by synthesizing material, and AI edits that summarize sources are currently awful as they simply tack on a source citation to their own homegrown AI slop. (In particular, AI tics like "highlighted" and "emphasized," in my experience, are almost always mischaracterizations of the source -- "emphasizing" means actually emphasizing, but AI will use it for even a passing throwaway mention, which can create a false impression of WP:SIGCOV.)
You can see these issues crop up in the many ANI cases of people mass-rewriting leads with AI, creating problems (one example). I've also been gathering a dataset of AI snippets of article text with human snippets (Q2 2022 and earlier) of the same articles to analyze later, and where possible I've used the same paragraphs. In my experience, even stuff that claims to be "just copyediting" tends to introduce tone issues, editorializing, or new information. (It also usually makes the text sound like AI, which is bad optics for readers.) Going through them now; will post some illustrative examples once I do. Gnomingstuff (talk) 23:15, 5 December 2025 (UTC)[reply]
I just did an experiment on concision. Today I wrote Northern emerald § Taxonomic history, an interesting couple of short paragraphs on the history of the taxonomy of a species of dragonfly. I asked Gemini to "make it more concise and neutral". Here's what it gave me: (I removed the stupid bolding it scattered all over the place).

"Here is a concise and neutral revision of the provided text, focusing on the key taxonomic and historical facts:
The northern emerald dragonfly was first described by Johan Wilhelm Zetterstedt in 1840 as Aeschna arctica, based on specimens found in sub-alpine areas of Finland and Norway. In the same year, Edmond de Sélys Longchamps described a similar species from Belgium, Cordulia subalpina, a name he later listed as a junior synonym to Zetterstedt's species. In 1850, de Sélys reclassified the species as Cordulia arctica, documenting its distribution across Belgium, Norway, Sweden, Finland, and Scotland. De Sélys further reclassified the species in 1871 into the subgenus Epitheca, and finally established its current classification, Somatochlora arctica, in 1887. He noted its range included cold and temperate Europe, Siberia, and Transcaucasia. A separate Siberian species, Somatochlora gratiosa, described by Aleksandr Bartenev in 1909, was synonymized with S. arctica by Syoziro Asahina in 1958. As of 1985, three forms of S. arctica are recognized: fuscoptera, infuscata, and ornata."

It removed all the citations and the writing is possibly the dullest thing I've ever read (which is saying something because my original writing was pretty damn dull unless you're excited by taxonomic revisions), but I don't see any WP:SYNTH. Cremastra (talk · contribs) 23:31, 5 December 2025 (UTC)[reply]
I tried exactly the same task, and the sources were not removed. I always tell AI "You are an English Wikipedia editor", provide the article name for context, feed in wikitext, and request the same in return. --Викидим (talk) 02:49, 14 December 2025 (UTC)[reply]
Викидим "Always"? You have used Ai for copyedits (substantially) before? Did you note this in the edit summary, and have these edits been safely reverted?? This is a very, very serious issue. Cremastra (talk · contribs) 03:06, 14 December 2025 (UTC)[reply]
Yes, I use AI. Yes, I always disclose its use, including the prompts used. IMHO, a lot of the issues reported on this page are due either to using bad (or simply old) tools or to bad prompts. I fully expect the same users to produce bad texts without the use of AI. Again IMHO: the problem is real and serious, yet it is not in the AI itself, but in the sheer quantity of text that unskilled editors can generate in a minute. This problem will not be solved by a Prohibition; it will just drive the same unskilled hands underground. Викидим (talk) 03:21, 14 December 2025 (UTC)[reply]
See, e.g., Georges Le Turcq and Talk:Georges Le Turcq#AI promts. Викидим has objected to "AI slop" but seems very comfortable with creating articles using AI. Presumably he does not deem all AI uses to be "slop". WhatamIdoing (talk) 03:25, 14 December 2025 (UTC)[reply]
No, I do not think that this article is slop. Yes, I think that the current state of AI allows it to write better texts than produced by many human editors. Yes, I am comfortable with creating articles with the assistance of AI. I always check and review personally the texts I create. Викидим (talk) 03:37, 14 December 2025 (UTC)[reply]
Whelp. One way or the other, this has drifted far beyond the initial scope of the conversation (which was explicitly about non-generative AI) and we're circling back into the old "AI-written articles, good or bad?" debate. Chaotic Enby (talk · contribs) 07:21, 14 December 2025 (UTC)[reply]
Cremastra has draftified that page since I posted the links last night. Based on Draft talk:Georges Le Turcq#Draftified, it appears that Cremastra wants a reader-facing disclaimer in the mainspace/article itself saying that AI was used to write the article. WhatamIdoing (talk) 21:41, 14 December 2025 (UTC)[reply]
If the issue is indeed in an {{AI-generated}} hatnote, any editor on or off this thread is welcome to add this hatnote to Draft:Georges Le Turcq and resurrect the article from the draft. Викидим (talk) 23:27, 14 December 2025 (UTC)[reply]
OK, here's a selection, all of which are copyedits from the Newcomer Tasks copyediting task, made throughout 2024 (circa GPT-4/GPT-4o). These are all from one user, but they don't read particularly differently from the (too many) other AI copyedits I've seen, and I have no reason to think this user has an unusual prompt. I'm also comparing them to the previous diff rather than the pre-2022 paragraph text; this does mean it's possible that it's AI copyediting AI, but I wanted to remove any intervening changes.
I've marked these edits up accordingly:
  • Blue = introduced new information, removed information for unclear reasons, or changed meaning
  • Green = introduced puffery
  • Orange = introduced clunky, wordy, or otherwise bad phrasing
Hopefully this illustrates the issue. Gnomingstuff (talk) 23:52, 5 December 2025 (UTC)[reply]
This is an excellent piece of work from you, thank you! Here is what I got for Flower Parade out of Gemini Pro 3.0 with just a default prompt as listed in User:Викидим/AI prompts and the instruction "rewrite the paragraph in Wikipedia style. If the articles for nontrivial terms do not exist in English wikipedia, but are present in other languages, use the ill templates:". The result actually looks good to me (I fed in your text as plain text, without wikilinks; in real life an editor would check the links and remove the bad ones for Charoen Muang Road and Buak Hat Park), so I would say that the problem here was most likely between the chair and the keyboard, and unrelated to AI:
Викидим (talk) 03:21, 14 December 2025 (UTC)[reply]
Here's some more, from a different user's batch of rapidfire AI copyedits, all of which claim to rewrite for neutral tone but actually introduce puffery (and other issues). Same markup:
(I've omitted a lot of edits that didn't introduce promotional tone but didn't remove it either even when the problems are screamingly obvious, such as this.) Gnomingstuff (talk) 00:29, 6 December 2025 (UTC)[reply]
Thanks a lot. That's making a good case for these tools to be restricted, at most, to just fixing typos. Although even then, some people (and AI models) have a very generous definition of what counts as "fixing typos", beyond unambiguous spelling mistakes.
I've run the latter three (with a typo deliberately added to each one) through Gemini with the prompt "Please fix any typos you may find in the following paragraph"; here are the results:
In all three cases, Gemini managed to find the typo I added and correct it, without adding any extraneous material, which makes me confident that it can be trusted with this task. Chaotic Enby (talk · contribs) 01:24, 6 December 2025 (UTC)[reply]
That's making a good case for these tools to be restricted, at most, to just fixing typos. No, that is making a good case for (re)writing articles with GenAI to be restricted. I think we already have consensus for that. Polygnotus (talk) 07:00, 6 December 2025 (UTC)[reply]
The thing is that I suspect most people would view the above edits as closer to "fixing typos" than to "writing articles." Gnomingstuff (talk) 18:23, 6 December 2025 (UTC)[reply]
That reminds me of User_talk:WorldPeace888#COI_/_PAID. Polygnotus (talk) 07:12, 6 December 2025 (UTC)[reply]
I wouldn't worry much; these tools seem to require paid-for API keys that almost nobody has (I was excited to try some of these tools and bounced back hard). Oh well. Piotr Konieczny aka Prokonsul Piotrus| reply here 13:39, 10 December 2025 (UTC)[reply]
@Piotrus Gemini is free. And I don't mind giving you a Claude API key. And I already got pregenerated suggestions in User:Polygnotus/barfoo & User:Polygnotus/barfoo2 (please remove them from the list when you've implemented the suggestions or decided that they are incorrect). Polygnotus (talk) 13:52, 10 December 2025 (UTC)[reply]
@Piotrus, would be interested in your feedback if you can make it work.
Also, maybe it would be possible to give established Wikipedia editors API credits (in the spirit of the Wikipedia library) that they can use in apps like this one. From the user perspective it happens behind the scenes and they shouldn't be aware of it, unless they hit usage limits. The costs would be minimal but someone would have to bear them. Alaexis¿question? 16:32, 10 December 2025 (UTC)[reply]
@Alaexis Good idea. The WMF has more than sufficient funds, but it's overly bureaucratized these days, so I don't even know what procedure, if any, would be relevant here. Anyway, after I get the tool to work, I'll post my thoughts about it on its talk page. Piotr Konieczny aka Prokonsul Piotrus| reply here 09:15, 11 December 2025 (UTC)[reply]
@Piotrus, I've found a free open-source model that we can use as long as it stays small-scale. I've told u:Polygnotus about it, hopefully it'll be implemented. In the meantime I've added it to my standalone citation checker, feel free to test it. It's a standalone app rather than a script so it's functional but less convenient. Alaexis¿question? 17:00, 12 December 2025 (UTC)[reply]
@Alaexis Oh, a citation checker? That sounds like an awesome tool I very much need for my grading of wiki student submissions :) Will try it out ASAP! Piotr Konieczny aka Prokonsul Piotrus| reply here 01:13, 13 December 2025 (UTC)[reply]

What requirements should be expected for a "can use LLMs" user right?

[edit]

This has been touched on above, so I'm creating a new section to avoid the above discussion getting too derailed.

If we do create a user right for users trusted to use LLMs for non-generative purposes in articles (e.g. no changes that alter the meaning of the article, and not expecting the LLM to check references), what should the minimum requirements for that right be? @Chaotic Enby and NicheSports: Cremastra (talk · contribs) 22:22, 5 December 2025 (UTC)[reply]

Willingness and ability to check every single (proposed) edit and take responsibility for it. We should demand the same for all edits. CIR is not a joke. AI slop and human brain slop ain't that different. Polygnotus (talk) 22:24, 5 December 2025 (UTC)[reply]
Support this as a baseline, but I would also add as a soft requirement a demonstrated track record of transparency and responsibility (e.g. no issues of playing fast and loose with source verification). Stating willingness is good, but admins granting the right might want to also rely on evidence from the user's contributions. Chaotic Enby (talk · contribs) 22:38, 5 December 2025 (UTC)[reply]
I'm not sure I understand your response in the context of the question – should everyone have to apply for a user right before being allowed to make any edits?
Now I have said before that in the areas I edit, there is so much poor editing from humans that edits with program-generated content would just a drop in the bucket. The existing content dispute resolution processes are very inefficient, costing a lot of effort to deal with editors making poor edits. So I agree we need better processes to handle all those who submit poor writing, no matter how it was created. isaacl (talk) 23:01, 5 December 2025 (UTC)[reply]
Hmmm. I am certainly in the more restrictive camp when it comes to desired LLM policies, but do you guys really think the community will support this direction? My idea for an llm-user right has been to restrict LLM-assisted content generation to highly experienced and trusted users, with the same requirements as autopatrolled, although having to apply separately. Frankly I think the ship has sailed when it comes to using LLM tools for unambiguously non-generation tasks like finding typos. NicheSports (talk) 22:47, 5 December 2025 (UTC)[reply]
As someone who uses AI a lot (too much) I believe AI content generation should simply be banned.
Using AI to support a human editor is fine tho, as long as the human makes the decision and takes the responsibility. If Claude gives me bad advice I'll just ignore it. Polygnotus (talk) 22:49, 5 December 2025 (UTC)[reply]
@Polygnotus I understand this and it makes sense, but I think in practice it would just create an AI free-for-all anyway. Unless someone like you is willing and able to police every single edit, no one is going to "take the responsibility" as you described.
I would support your first sentence if a couple of words were removed:
"... AI content generation should simply be banned."
AI isn't worth the costs it imposes. TooManyFingers (talk) 02:26, 14 December 2025 (UTC)[reply]
There are 3 distinct ways of working:
  • AI makes the edit -- bad
  • AI suggests the edit and the human can accept or skip with a single button press -- bad
  • AI suggests improvements and the human can make the edit if they agree -- good
Polygnotus (talk) 22:54, 5 December 2025 (UTC)[reply]
Oh, LLM content generation should be banned, outright. I'm in the "insanely restrictive" camp on LLM use. Cremastra (talk · contribs) 22:58, 5 December 2025 (UTC)[reply]
@Cremastra Well, it sure looks like we (mostly?) agree User_talk:Cremastra#LLMs. Polygnotus (talk) 22:59, 5 December 2025 (UTC)[reply]
Yeah I mean my preference would be to completely ban LLM-assisted content generation because evidence (at AFC, NPP, AINB, 1346 (hist · log), etc.) has shown that the vast majority of users will not or cannot sufficiently review LLM output to make it WP:V compliant. My suggestion above is a compromise that I think would solve 99% of the problem so I am fine with it as well. Either works for me. NicheSports (talk) 23:04, 5 December 2025 (UTC)[reply]
As I mentioned before, this is closest to my preferred policy too. As far as specifics, I think there should be an application process. To apply, someone should at minimum:
  • Describe, in detail, their entire process, including the tools they use and versions thereof, the exact prompts, the exact review process, etc. Obviously they should write it themselves, not use AI.
  • Walk through an example of that. Provide the raw LLM output or iterations thereof, go through it line by line, check every statement against every source, change problematic material accordingly, and explain why they made every change.
  • Then a reviewer -- preferably one familiar with AI writing who knows what issues are likely to crop up -- would need to also double-check the verification behind them, as well as review the prose. If a reviewer flags any issues, the person applying should take that into account.
  • Upon getting the right, the user should disclose AI use in all edit summaries and ideally on the talk page -- in part to indicate to anyone coming along later that the AI use was in fact reviewed (which they wouldn't otherwise know). In my ideal world there would also be a note on the article page that AI was used to write the article, because I think readers deserve to know this and because there's precedent in stuff like articles based on Catholic Encyclopedia text. I don't expect anyone to agree with me on that.
  • The right can be revoked if there is a pattern of bad AI-assisted edits.
I don't think this process is too extreme -- it's what copy editors and fact-checkers do every day as their job -- but I don't think it is likely to gain much traction, because it's a lot of work for both parties. Gnomingstuff (talk) 00:44, 6 December 2025 (UTC)[reply]
I would support this, but it could be made much simpler as we're talking about using AI-powered on-wiki tools for non-generative purposes, rather than asking outside LLMs for raw content generation. In that case, a lot of steps (like disclosing AI in edit summaries, or having to describe their process in detail) would be simplified. Chaotic Enby (talk · contribs) 01:08, 6 December 2025 (UTC)[reply]
Oh I should clarify -- this is regarding NicheSports' llm-user right for content generation. Personally I would prefer people didn't use AI to write articles but I would be ok with this kind of compromise.
I do still think AI use should be required to be disclosed in edit summaries though, no matter what. I guess that's where I'm a hardliner -- my view is that all AI use should be disclosed in a prominent reader-facing location, not just in places like edit summaries where nobody but editors ever looks. News organizations, research papers, etc. are expected to have these disclaimers, and we should too. Gnomingstuff (talk) 01:30, 6 December 2025 (UTC)[reply]
Thanks for the clarification! Since the comment by Cremastra above mentioned non-content generation uses (which WP:AIT is about), I felt it could be useful to mention it, but we're indeed talking about two different things here (and I would also prefer much stricter regulations, or a total moratorium, on AI content). Chaotic Enby (talk · contribs) 01:55, 6 December 2025 (UTC)[reply]
I'd definitely support this stricter approach as well, but I think it will take some serious admin advocacy to be approved NicheSports (talk) 03:36, 6 December 2025 (UTC)[reply]
Thankfully, admins don't have more authority over policy decisions than other editors. If a few concrete ideas come out of this discussion, which is quite likely, you or me can start a formal request for comment for the community to decide on them. Chaotic Enby (talk · contribs) 03:51, 6 December 2025 (UTC)[reply]
just be sure to wait 20 years otherwise it's too soon Gnomingstuff (talk) 03:57, 6 December 2025 (UTC)[reply]
I believe the introduction of a userright requires the WMF, so that 20 years will fly by. Polygnotus (talk) 07:03, 6 December 2025 (UTC)[reply]
On the other hand, what would a user right actually do? MediaWiki can't really restrict which external tools someone uses to populate an edit form or build an API query, and there aren't any AI extensions (yet) that could be restricted like mw:Extension:ContentTranslation is restricted.
We wouldn't really even need an empty MediaWiki group to represent the "right", as assignment could be done just as effectively via pages like Wikipedia:Requests for permissions/AutoWikiBrowser and Wikipedia:AutoWikiBrowser/CheckPageJSON instead. Anomie 15:10, 6 December 2025 (UTC)[reply]
We can't, but we can require (per policy) these user scripts to be limited to that user right. As a matter of precedent, we already do the equivalent when requiring Huggle and AntiVandal to limit one-click reverts to users in the rollbacker group. Chaotic Enby (talk · contribs) 16:26, 6 December 2025 (UTC)[reply]
No, not really. I don't know how Flow Checker works (you'd have to ask Nullnominal), but you can't really have open-source JavaScript enforce that someone holds a specific user right.
Or, you know, if you did, it would be laughably easy to evade that restriction (as is the case with AWB and its JavaScript equivalent, JWB). I have AWB rights, but if I didn't I would still be able to use it if I wanted to; there is no mechanism protecting against that.
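To illustrate, the most a user script can do is a purely client-side check along these lines (a sketch only; mw.config and wgUserGroups are the standard MediaWiki JavaScript config mechanism, while the 'llm-user' group name is hypothetical):
  // Sketch of a client-side gate in a user script. This only advises; it
  // cannot enforce anything, since the user controls the code that runs.
  const groups = mw.config.get( 'wgUserGroups' ) || [];
  if ( !groups.includes( 'llm-user' ) ) {
      mw.notify( 'This tool is limited to editors with the llm-user right.' );
  } else {
      // ... load the rest of the tool here ...
  }
  // Anyone can copy the script and delete this check, which is exactly the
  // evasion problem described above.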
The problems we have seen with AI on Wikipedia are, as Chipmunkdavis (CMD) points out below, all about generating text and then sticking that text into Wikipedia articles. Give the AI proofreader a try; you'll see that it is not the problem. If you don't have any API keys, ping me and I will send you one. Polygnotus (talk) 16:48, 6 December 2025 (UTC)[reply]
Based on my experience with AI edits that claim to "improve flow" I do not feel very confident about the whole concept of that tool. The quality of the tool is only as good as the actual editors and how they gauge what good "flow improvements" are. Will try to track some of those down Gnomingstuff (talk) 19:32, 6 December 2025 (UTC)[reply]
This is like the rules for the first automobiles. Better to just ban GenAI (re)writing of articles and be done with it.
You can't factcheck your way out of AI slop. Maybe someone should write an AI version of WP:BACKWARDS for that. Polygnotus (talk) 07:17, 6 December 2025 (UTC)[reply]
I agree with @Gnomingstuff and @Chaotic Enby that the requirements for AI content generation and AI tool usage should be different. The AI-assisted content generation is a hard problem and if we let everyone do it we'd be buried in slop - I think that a temporary moratorium or a small-scale pilot project would be the best way to proceed.
On the other hand, AI-based tools can be quite helpful (full disclosure: I created one for citation checking). Editors' time is precious and we should strive to use it more effectively. To take citation verification as an example, we know that some percentage of our citations don't support the claims they purport to support (there are 18k failed-verification tags). However, assuming that the ratio of bad to good citations is 1/100, we can't reasonably expect editors to sift through 100 citations to find the one incorrect one. Right now these errors get fixed only if a subject-matter expert happens to notice one and cares enough to fix/tag it.
If we use AI tools for this use case or for something else, the AI assistance should be limited to raising the flag and the human editor still has to make an edit and bears the responsibility for it. We should also require the AI tools to be open-source and to make AI prompts easily available to avoid bias. Alaexis¿question? 09:30, 6 December 2025 (UTC)[reply]
The only issue I've seen so far regarding using LLMs for source checking is over whether the source really fails to support the fact or not, which can happen as part of normal editorial processes. Such uses are only getting caught up in these discussions because, when yet another content-generation issue comes up, AI tool use is raised as a "well, what about this use, would you ban this?" question, or as a caution prompted by such a question. We should focus discussion only on the generation of text, because that is where 99% of the current issues lie. CMD (talk) 12:35, 6 December 2025 (UTC)[reply]
They should show the ability to write decent articles on their own, without the use of AI assistance, as one criterion. This is important because it signifies that they already have a good understanding of the best practices for writing an article, and are capable of checking an AI's output and modifying it to be of good quality. I think it's important we not allow people to automate tasks we couldn't trust them to do without automation in the first place; that's a recipe for massive destruction. It would be like giving AutoWikiBrowser permissions to a person who has no clue how to edit Wikipedia on their own. aaronneallucas (talk) 22:53, 6 December 2025 (UTC)[reply]
I can barely string two words together, but I am a pretty good editor if I may say so myself.
capable of checking an AI's output and modifying it to be of good quality
That is writing WP:BACKWARDS. You can't factcheck your way out of AI slop. Polygnotus (talk) 23:45, 6 December 2025 (UTC)[reply]
@Polygnotus That would be why I said a user should show that they can write on their own without AI tools. If you disagree with this criterion, that's okay, but please don't sarcastically misrepresent what I said; it isn't productive to this discussion. Besides that, my reply should not be construed as an endorsement of AI usage in writing Wikipedia articles, merely what should be required before a user is granted a theoretical user right to use AI tools. aaronneallucas (talk) 02:08, 7 December 2025 (UTC)[reply]
@Aplucas0703 But I didn't sarcastically misrepresent what you said. Perhaps I interpreted it differently than intended, in which case you can just explain that my interpretation is incorrect. I don't know you and since we are communicating via written text misunderstandings are basically inevitable. Polygnotus (talk) 08:59, 7 December 2025 (UTC)[reply]
It's also not true that the only thing relevant to checking AI output is factchecking. What is actually required depends in large part on what the AI was asked to do and what changes it made/proposed. For example if you've just used an AI to improve your grammar and it made no changes to the sources used, then making sure that it didn't introduce any misleading statements when it changed the grammar (and correcting any found) is the difference between bad and good content. Thryduulf (talk) 02:21, 7 December 2025 (UTC)[reply]
It's also not true that the only thing relevant to checking AI output is factchecking. Indeed, but no one made that claim as far as I know. Polygnotus (talk) 09:02, 7 December 2025 (UTC)[reply]
That was the implication I took from your You can't factcheck your way out of AI slop. comment. Thryduulf (talk) 13:48, 7 December 2025 (UTC)[reply]
Ah. No, that was not what I was trying to say. AI output contains a myriad of problems; way too many to list here. Polygnotus (talk) 14:31, 7 December 2025 (UTC)[reply]
But that's so overgeneralised as to be untrue. AI output can contain a myriad of problems, but not every instance of AI output does. Some AI output contains sufficiently few (in some cases no) problems that it is possible for a human to entirely resolve them with an amount of effort that they deem worthwhile (how much effort that is varies person-to-person, and possibly task-to-task). Thryduulf (talk) 16:02, 7 December 2025 (UTC)[reply]
@Thryduulf Apologies for stating the obvious, but AI output contains a myriad of problems does not have the same meaning as every single instance of AI output contains a myriad of problems. Polygnotus (talk) 17:45, 7 December 2025 (UTC)[reply]
Literally no, but in the context of your comments on this page there is no meaningful distinction between the two. Thryduulf (talk) 17:47, 7 December 2025 (UTC)[reply]
Yes, there is. Don't strawman. Cremastra (talk · contribs) 18:02, 7 December 2025 (UTC)[reply]
Could you identify a meaningful distinction, then? It seems to me like you and Polygnotus are making some very black-and-white claims and then backing up when someone points out the situation is more nuanced than you originally portrayed. Loki (talk) 18:59, 7 December 2025 (UTC)[reply]
@LokiTheLiar Would you be so kind as to explain why you think there is no meaningful distinction between AI output contains a myriad of problems and every single instance of AI output contains a myriad of problems, either in the context of my comments on this page, or in general?
And can you post 2 or more links to Cremastra and myself backing up when someone points out the situation is more nuanced than we originally portrayed? Thanks, Polygnotus (talk) 19:26, 7 December 2025 (UTC)[reply]
You started with You can't factcheck your way out of AI slop, then backed up to AI output contains a myriad of problems; way too many to list here when Thryduulf pointed out that factchecking isn't necessarily relevant, and then further backed up to the claim that some AI output contains a myriad of problems when Thryduulf pointed out that some AI output contains few problems. Loki (talk) 22:27, 7 December 2025 (UTC)[reply]
@LokiTheLiar Thanks. Probably best if I wait to respond until you've answered the other 2 questions right? Polygnotus (talk) 22:32, 7 December 2025 (UTC)[reply]
Yes, there is a difference.
The statement AI output contains a myriad of problems means that AI-generated output as a whole produces many problems, such that, say, in a set of 10 pieces of AI output there will be 25 "problems", large and small. It makes no comment on how those problems are distributed among the texts. Some texts may be fine, but "AI output" as a whole contains problems.
The statement every single instance of AI output contains a myriad of problems means that every single one of those texts above contains a large number of problems.
Obviously, these are different: the first is true; the second, an exaggeration that no one is arguing.
Identifying blatant logical fallacies does not constitute casting aspersions. Cremastra (talk · contribs) 21:00, 7 December 2025 (UTC)[reply]
The interesting thing is that one is overly broad ("every single instance") and the other is overly narrow ("the only thing"). Polygnotus (talk) 22:42, 7 December 2025 (UTC)[reply]
My comment was not a strawman and I would ask that you refrain from making such aspersions in the future. If you believe there is a meaningful difference between the position you are arguing for and my statement then you should have no trouble explaining what that difference is and why it is meaningful. Even after re-reading this discussion multiple times, I'm still unable to identify any. Thryduulf (talk) 20:40, 7 December 2025 (UTC)[reply]
That is not an aspersion. The burden is on those who make a claim, especially when it's an extraordinary claim, like the claim that two pieces of text with different meanings are functionally the same. Polygnotus (talk) 22:34, 7 December 2025 (UTC)[reply]
I would argue a candidate for such a pseudo-right should demonstrate
  1. A history of understanding content policies and guidelines
  2. Competence in checking that sources verify the text they support
  3. An understanding of the uses and limitations of large language models
  4. A legitimate use-case
If at any time it were believed they no longer meet these criteria, the pseudo-right could be revoked.
My reasoning:
  1. LLMs produce many issues, such as weasel and peacock wording, that a user would need to know how to identify. This could be demonstrated by a history of high-quality content contributions or detailed and skilled copy editing.
  2. Not a difficult skill, but essential for anything involving AI-generated content. Not sure how this could be demonstrated beyond not introducing misinformation, but it would likely be the most common grounds for revocation.
  3. Obviously necessary to avoid relying on the AI excessively. Could be demonstrated by time doing AI cleanup, or just an experienced admin asking the user questions.
  4. Duh.
I think such a pseudo-right is a good idea, if only because it makes it easier to tell other users their AI contributions are prohibited. lp0 on fire () 19:42, 9 December 2025 (UTC)[reply]
@Lp0 on fire These are for a userright allowing the user to paste AI-generated text into Wikipedia, right?
I think I know only 4 or 5 people irl who understand the limitations of current AI models to a reasonable degree. People just think it's some magic box that spits out the truth, or that it's a magic box that spits out lies. Few people have a more nuanced opinion than that. Polygnotus (talk) 19:48, 9 December 2025 (UTC)[reply]
And for that reason, few people should be allowed to use AI to write for Wikipedia. I'm not sure how broadly this should be scoped, but the question was what the requirements should be, so I was giving some suggestions. I suppose the "understanding of limitations" clause should only apply to understanding the limitations of the specific task they want to use AI for. Something more than "magic box give me answers" is definitely necessary but people don't need to have a PhD in AI. Or at least that's my take. lp0 on fire () 20:55, 9 December 2025 (UTC)[reply]
the "understanding of limitations" clause should only apply to understanding the limitations of the specific task they want to use AI for this would be the reasonable approach. If I wanted to use AI for copyediting articles about Indian settlements (I don't, but this is a class of article that, generally speaking, would benefit from copyediting) it is reasonable to expect me to understand the strengths and weaknesses of AI copyeditors (or at least the model I will be using) and possibly how they interact with Indian English. It is not necessary for me to have any particular understanding of the limitations of a different LLM regarding creating articles about contemporary scientists.
Obviously if my editing history shows that I spend more time creating articles about living scientists than I do editing existing articles about Indian settlements then it would be prudent for those evaluating the request for the right to ask about this if not addressed in the request itself. It should not be automatically disqualifying as there might be a legitimate reason for that (e.g. they might make tens of edits writing each new article but make only one edit per existing article when copyediting and state that an LLM wouldn't help them with their personal article writing process but would fit well with how they copyedit). Thryduulf (talk) 23:03, 9 December 2025 (UTC)[reply]
That is a good point, although when giving a right that gives access to a broad variety of tools, we can't predict that the user won't start employing it in more problematic use cases later down the line. Someone could be very good at understanding that LLMs can, in fact, find typos, and then slide from "fix typos" to "rewrite paragraphs of text" with little supervision. Chaotic Enby (talk · contribs) 23:13, 9 December 2025 (UTC)[reply]
At some point we have to assume good faith. I imagine it working something like how bots are currently authorised - a request is made to run a specific bot for a specific task (or set of tasks). Whether that task is desirable, whether a bot is suitable for carrying out that task, whether there is consensus for a bot to do that task, the suitability of the applicant to be a bot operator (in general and for the specific task) and the suitability of the specific bot for the specific task are all evaluated, and if approved, the bot flag is granted. There is no technical restriction on running a bot without the bot flag and/or for tasks other than those approved, however when we detect someone doing those things we stop them and, if appropriate, revoke the right. It wouldn't be identical (for obvious reasons) and we'd have to make it absolutely explicit that any comments on requests that there are no tasks suitable for LLMs (or similar) should be struck and/or ignored as contrary to community consensus (and repeatedly leaving such comments should explicitly be disruptive editing). Thryduulf (talk) 00:48, 10 December 2025 (UTC)[reply]
Bot operators are presumed to make a request for approval for each new task, as they are usually well-defined, easy-to-track matters, and bots are relatively rare all things considered. Here, it would be much more likely that, once approval is given to someone to use LLMs for one purpose, they won't make a separate request for each task – similar to how folks can be granted the page mover right to draftify pages, but won't be expected to ask for it again if they want to help out at WP:RM/TR. It works for page mover as it is a position of pretty high trust (only a few hundred non-admin page movers!), but a "LLM user" right might be more widespread, meaning we would put trust in a lot more users to know their own limits. Chaotic Enby (talk · contribs) 00:56, 10 December 2025 (UTC)[reply]
Please stop assuming bad faith of LLM-users. An LLM right will be as high or low trust right as we choose to define it, it might be something akin to autopatrolled in which case treating it like a bot authorisation probably wouldn't work, it might be something on the level of edit filter manager (which is arguably a higher trust position than botop) in which case something like my thoughts above absolutely would work. Realistically it would almost certainly be somewhere between those levels (and that is where I would argue for placing it although I couldn't tell you exactly where right now). Similarly we would be free to define the scope of authorisations to be like botop, page-mover, adminship or anything else. As long as we are clear about what authorisation is being granted for (which can be as general or specific as we choose) and what the expectations are regarding doing/wanting to do things other than authorised, then the majority of those granted the right will meet those expectations. Those that don't will have the right revoked and, if appropriate, other action taken in exactly the same way that those who fail to comply with the expectations of other rights are treated. Thryduulf (talk) 01:21, 10 December 2025 (UTC)[reply]
If it is as high as autopatrolled or higher, I would be comfortable with it. I was afraid that it would be something easily given out like TAIV or rollbacker, which would be a lot more problematic. Chaotic Enby (talk · contribs) 01:30, 10 December 2025 (UTC)[reply]
They didn't assume bad faith of LLM-users. Polygnotus (talk) 09:38, 10 December 2025 (UTC)[reply]
As I said, it will be as easy or hard to get as we (community consensus, not you and me) choose. I don't have a good feel for what level others (other than those who oppose (almost) all LLM use) would desire. Thryduulf (talk) 03:26, 10 December 2025 (UTC)[reply]
If it's easy to get, it might be easier to remove. WhatamIdoing (talk) 04:49, 10 December 2025 (UTC)[reply]
As far as assuming good faith: there's one other entity involved here, and that's whatever AI company made the tool. AI companies are not known to be transparent about... well, anything, and they change shit all the time. So if an editor appears to slide from basic grammar-fixing copyediting to more substantive and problematic "copyediting," the thing doing the sliding or "acting in bad faith" might not be the editor but ChatGPT (or whatever). Gnomingstuff (talk) 21:24, 10 December 2025 (UTC)[reply]
@Gnomingstuff, if this is the central concern then this is solved by using open source models like Apertus hosted on publicai.co. They are perfectly adequate for most purposes that have been mentioned here. Alaexis¿question? 22:19, 14 December 2025 (UTC)[reply]
@Alaexis I doubt that that is the correct link target. Polygnotus (talk) 22:36, 14 December 2025 (UTC)[reply]
Polygnotus, it's at Apertus (LLM). I've also disambiguated the Apertus redirect. 45dogs (they/them) (talk page) (contributions) 09:11, 15 December 2025 (UTC)[reply]
Thanks for fixing it! Alaexis¿question? 10:07, 15 December 2025 (UTC)[reply]
I don't think we're going to get most people to use an obscure Swiss LLM. Gnomingstuff (talk) 18:41, 15 December 2025 (UTC)[reply]
@Gnomingstuff, most people won't use the obscure Swiss model directly. The idea is to use it to power AI tools. To take the citation checking as an example again, you'd just get the check result, and you won't know that, behind the scenes, it used this model. Alaexis¿question? 21:04, 15 December 2025 (UTC)[reply]
No need to reinvent the wheel. We already have a Bot policy. Cambalachero (talk) 11:09, 15 December 2025 (UTC)[reply]
WP:BOTPOL wouldn't be good for covering human use of LLMs. Anomie 13:03, 15 December 2025 (UTC)[reply]
This specific proposal is not about article content generated by AI, but about other mundane edits. Cambalachero (talk) 18:57, 15 December 2025 (UTC)[reply]
Doesn't matter. There's still very little room in WP:BOTPOL for dealing with human activity that isn't WP:MEATBOT or the like. Anomie 20:24, 15 December 2025 (UTC)[reply]

Fact-checking sister project

[edit]

I've been thinking a lot about the current level of misinformation floating around on the Internet, and how it can be difficult for news websites and fact-checking services to keep up with the demands brought on by it. I had been thinking that we could make a kind of sister project called Wikifact (and just now checking to see that the name was available, turns out someone proposed this exact thing under the same name a while back, go figure, though it didn't receive much attention). It would function similar to websites like PolitiFact: just straight-up dedicated to fact-checking and nothing much else. However, of course, this would rely on verifiable sources and not original research, and would not have to be framed as a declaration of "true" or "false," but perhaps framed as "supported by reliable sources," "partially supported by reliable sources," "not supported by reliable sources," and "contradicted by reliable sources".

I'm curious to hear what others think of this as a sister project (or being incorporated elsewhere), what you would like to see from such a project, potential problems you think we could encounter if this project were made live (and solutions), and what safeguards you would want to see put in place for something like this? I think we're at a place right now where more accessible fact-checking might be a good thing. aaronneallucas (talk) 02:37, 7 December 2025 (UTC)[reply]

This is the wrong venue for this discussion - see m:Proposals for new projects. Thryduulf (talk) 02:42, 7 December 2025 (UTC)[reply]
...but we'll talk about it here anyway, even though nothing can come out of the discussion. Didn't people understand what Thryduulf wrote? Phil Bridger (talk) 19:33, 10 December 2025 (UTC)[reply]
@Phil Bridger User:DVRTed/move-talk-section.js Enjoy! Polygnotus (talk) 19:37, 10 December 2025 (UTC)[reply]
On the contrary, by restarting this discussion we could remind @aaronneallucas to create the actual new project proposal. :P Loki (talk) 21:43, 10 December 2025 (UTC)[reply]
There is also https://wikispore.wmflabs.org/wiki/WikiFacts_Spore see https://meta.wikimedia.org/wiki/WikiFacts Polygnotus (talk) 14:35, 7 December 2025 (UTC)[reply]
That doesn't seem to me like the same thing. That WikiFacts would be a list of facts; this WikiFacts would be for fact-checking. Loki (talk) 16:59, 10 December 2025 (UTC)[reply]
@LokiTheLiar True. But the second link does explain why this is a bad idea (in the Discussion section near the end of the page), or at least the criticism they can expect. Polygnotus (talk) 18:54, 15 December 2025 (UTC)[reply]
I think it's a fair criticism, but I don't know if we have to describe it as "fact-checking" even (though that's probably the best marketing). Maybe we could instead think of it as a peer-review-type process that provides feedback on factual accuracy (though, of course, for people who did not ask for feedback). I hope this is making at least some level of sense. aaronneallucas (talk) 05:12, 18 December 2025 (UTC)[reply]
See also https://captainfact.io/ (this one was featured in a documentary partly about Wikipedia). Govdirectory may also be relevant. I think the biggest hurdle is that it would likely be rather useless and unknown, because a niche website nobody uses has little impact. What would have some impact is if, e.g., bots commented underneath posts containing verifiably false info, or if Web browsers added a note at the top that the page one is reading contains several false claims, etc. I would start with thinking about how misinformation can be effectively addressed and then from there see where the potential for a wiki project is. Moreover, as you more or less implied, "supported by reliable sources" does not make something true and "not supported by reliable sources" does not make something false. Often, things are not clearly true or false, and if a source supports something, it depends on the basis on which it supports the statement (data suggesting so, proof suggesting so, people the journalist interviewed claiming so, the journalist's opinion, something else?). Prototyperspective (talk) 17:08, 10 December 2025 (UTC)[reply]
I think that is somewhat of a good point, too. Though I don't necessarily think we have to say that we are the arbiters of truth, which is why I thought indicating the sources themselves might be more useful. I think we could add more context along with a statement, like "verified by Labor Department survey data" or "contradicted by a study conducted by Smith et al. (2024)", or even add an additional label like "mixed evidence in reliable sources". We could even add a tag to all pages stating: "If you have a reliable source you can add relevant to this fact check, please do so". I also think, of course, the full page for it would provide an in-depth explanation of all sources.
Perhaps, alternatively, we could add a short summary to display on a main page, along with the full summary on the fact check's page, ditching any type of universal labeling system altogether. aaronneallucas (talk) 18:53, 10 December 2025 (UTC)[reply]

Modification for template:page numbers needed

[edit]

It would be a significant improvement for the template to be modified so that it can also flag articles, sections, or lists where only some of the citations lack page numbers. Currently, it only flags the entire thing as having none. Vastmajority20025 (talk) 17:53, 9 December 2025 (UTC)[reply]

@Vastmajority20025 I believe you are talking about Template:Page needed, which is the inline version, right? Check out this See also section: Template:Page_needed#See_also. Polygnotus (talk) 19:33, 9 December 2025 (UTC)[reply]
No @Polygnotus, I mean adding the option I described to the template page numbers needed, which is a non-inline tag for a whole article, section, or list. Sometimes an article has so many citations without page numbers or timestamps (this tag can be used for AV media too) that it's better to tag the whole article instead of putting Template:Page needed beside each one. Vastmajority20025 (talk) 06:11, 10 December 2025 (UTC)[reply]
@Vastmajority20025 Ah, my bad. In that case I would recommend WP:VPT. Polygnotus (talk) 09:13, 10 December 2025 (UTC)[reply]
It's all good @Polygnotus, no worries. Vastmajority20025 (talk) 12:51, 10 December 2025 (UTC)[reply]
@Vastmajority20025, I don't know if you've already figured this out, but it already supports |section:
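{{Page numbers needed|section}} (the first positional parameter swaps "article" for "section" in the displayed message)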
Most templates based on {{mbox}} do so automagically. The only problem here is that the template's documentation doesn't tell you that it's possible. WhatamIdoing (talk) 03:33, 14 December 2025 (UTC)[reply]
@WhatamIdoing, I know about |section. I'm not saying the template lacks the option to say that a section lacks page numbers; my point applies to both articles and sections. There are occasions when not all of the references lack page numbers, but the template has no option to note that only some of them do; it just says This section/article cites its sources but does not provide page references Vastmajority20025 (talk) 07:38, 14 December 2025 (UTC)[reply]
Vastmajority20025, it sounds to me like all you need is a wording change:
from: This article cites its sources but does not provide page references.
to: This article cites its sources, but some or all of the citations do not provide page references.
Does that work for you? Mathglot (talk) 08:35, 14 December 2025 (UTC)[reply]
Yes @Mathglot, that's a good change; but it could also be an option: if you don't toggle it, it shows the old wording, and if you do, it notes that "some" page numbers are missing, like the existing option for writing "section" instead of "article". Vastmajority20025 (talk) 09:22, 14 December 2025 (UTC)[reply]
Vastmajority20025, I hear you, and of course that is technically feasible with a parameter, but there is a lot of precedent for permissive wording in templates that talk about some issue that happens repetitively, like "some or all", "one or more", and so on. To get your parameterized change made, I think you would have to demonstrate a consensus for it. Normally, that would be carried out via discussion at Template talk:Page numbers needed, followed by a WP:Edit request on that page after you gain consensus. Since you have already started here, you could just keep the discussion here and see what happens, but if it were me, I would probably close this one and move further discussion there, adding a link from here to there, and feedback requests from WT:Template index/Maintenance and WT:WikiProject Templates to it. Mathglot (talk) 09:45, 14 December 2025 (UTC)[reply]
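For illustration, the parameterized wording discussed above would be a small change to the template's message text. A rough sketch, using a hypothetical |some= parameter and an illustrative {{{1|article}}} convention rather than the template's actual source:

This {{{1|article}}} cites its sources but {{#if:{{{some|}}}|some or all of the citations do not provide page references|does not provide page references}}.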

Thoughts on serverless functions for toolforge?

[edit]

The rough idea of my proposal is: WP is hosting a complete Kubernetes node for each project, but some of these projects don't even need a node running 24/7, e.g. Citation bot, since they are on-demand (only doing work when called, idling most of the time) or running on a fixed cron schedule (the non-free image removal bot, a lot of the stat bots), and are using a full node's worth of resources without doing anything. Making a serverless function an option, as well as object storage, would let willing bot developers move individual tasks to a serverless function, and only use resources when actually needed.

I'd suggest using https://knative.dev/, since it's based on Kubernetes. I'm not too sure about the details of the implementation, since I, of learner's-license age, have never had an opportunity to use k8s at all, nor have done any work on systems for more than 16 people. This is mostly just throwing this idea out there before I bring it to idea lab, just to see if this is even feasible. (Knative has a cold start time of around 2-4 secs)

I'd also suggest using MinIO for object storage, since NFS is a huge resource hog and a headache to deal with, along with its scaling and performance issues.

Cheers.

TL;DR: Running a k8s node for every bot on Toolforge is overkill and wastes resources, especially when most aren't even doing anything. Instead, giving Toolforge devs the option of a serverless function, which only uses resources when called on, as well as object storage for permanent/semi-permanent data storage (much more reliable than NFS), would save a lot of resources.


(this message was previously sent by me on the WP community discord server, see https://discord.com/channels/221049808784326656/1448033679896219748) monkeysmashingkeyboards (talk) 21:11, 9 December 2025 (UTC)[reply]

For people who don't know how Toolforge works:
(see also https://wikitech.wikimedia.org/wiki/Portal:Toolforge/Admin/System_overview)
Toolforge is a PaaS - it offers users a virtual private server (VPS) as a service. Each of these VPSes is a Kubernetes node, and they all have access to their own storage, hosted on NFS. A VPS consumes a constant amount of resources at all times, whether anything is running or not.
For people who haven't worked with serverless functions (AWS Lambda) before:
Serverless functions are functions/code that run without a VPS - they don't have to be always on. Take Citation bot, for example. When running on a VPS, Citation bot always consumes resources, even if it isn't doing anything. When someone submits an article for Citation bot to check, it goes through the citations and fixes them. However, if Citation bot were a serverless function, it would only be spun up when a request is made.
Another analogy: a VPS is leaving the lights on 24/7, while a serverless function is only turning on the light when you need it. The end result/experience is similar, but the VPS uses a lot more resources/electricity. However, suppose that instead of lights it's a restaurant. A restaurant has to be ready for customers at all times, and we can't open the restaurant only at the moment a customer walks in, since that would take too long. That is a situation where a VPS would be the better fit.
(sorry in advance, my analogies suck) monkeysmashingkeyboards (talk) 21:29, 9 December 2025 (UTC)[reply]
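To make the contrast concrete in code, here's a toy sketch (fix_citations is a stand-in for a bot's real work; this is not Citation bot's actual code):

import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def fix_citations(article: str) -> str:
    # Stand-in for the bot's real work (hypothetical).
    return "checked citations in " + article

# VPS style: the process occupies its resource allocation 24/7,
# even while it sleeps between polls.
def run_forever(poll):
    while True:
        article = poll()
        if article:
            fix_citations(article)
        time.sleep(30)  # idle, but the container is still up

# Serverless style: just an HTTP handler; a platform like Knative starts an
# instance when a request arrives and scales back to zero when traffic stops.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = fix_citations(self.path.lstrip("/")).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), Handler).serve_forever()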
Additional observations:
On the WP discord server, another user pointed out that Toolforge has a jobs system. At first glance it seems to be a run-on-demand system (https://wikitech.wikimedia.org/wiki/Help:Toolforge/Running_jobs) that scales to zero (uses 0 resources when inactive), but after a bit more scrutiny it is essentially a job-running service for developers. It can't run on demand, it needs Toolforge credentials and SSH access, and its functionality is basically either "npm start" or "npm run build" - it can run either a terminating job (e.g. a build) or a continuous job (e.g. a web server, or... a bot!). And a continuous job can't scale to zero. monkeysmashingkeyboards (talk) 22:43, 9 December 2025 (UTC)[reply]
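For reference, the job modes from that help page look roughly like this on the command line (job names, scripts, and the container image are placeholders; check the linked page for the current flags and image list):

# one-off job: runs once and terminates
toolforge jobs run build-stats --command ./build_stats.sh --image bookworm
# scheduled job: re-runs on a cron schedule
toolforge jobs run daily-stats --command ./build_stats.sh --image bookworm --schedule "0 3 * * *"
# continuous job: always on; this is the mode that cannot scale to zero
toolforge jobs run web-bot --command ./serve.sh --image bookworm --continuous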
Also, some pros and cons for adding Knative(ignoring implementation details)
Pros
  • Resource-efficient, and as a result, cost-efficient: Knative scales to zero, meaning that when not actively in use, resources aren't being used either. This isn't the case for the current Toolforge system (which I'll refer to as "TF" or "Toolforge"), in which applications use resources regardless of whether they're in use or not. This means that https://gamingcheck.toolforge.org/, despite not being used very often, still consumes just as many resources as, say, Earwig's Copyvio tools.
  • Demand-based resource allocation, or, in English, each app only uses the resources it needs, no more and no less. E.g., if I have 3 tools, A, B, and C, hosted on a server in containers X, Y, and Z respectively, then if A is pushed beyond container X's limits, it can't use containers Y and Z, which aren't using all of their resources. Each container is allocated its own set amount of resources, and it can't use more, nor "donate" the resources it isn't using. However, since serverless functions don't run in their own container (note that Knative uses k8s containers for each serverless function, but that's a different thing), and are instead all processed by a generalized server, there are no hardware limits to how much RAM tool A can use. All the computing resources are pooled, and each function only uses what it needs from that pool.
  • Pros that only apply for a certain kind of implementation:
    • Opt-in: Since Knative would not replace Toolforge, only be a new system on the side, developers used to Toolforge can still choose to continue using Toolforge.
    • Minimal hardware debt: Since it would be an add-on to the existing Toolforge, there's no requirement to replace all the servers (which would be costly and would instantly dig a grave for this idea to die in), only to use the existing newer servers.
    • Isolated: If we mess up the implementation, it'll only affect Knative, and not the rest of Toolforge.
(will write cons when I have more time) monkeysmashingkeyboards (talk) 23:47, 9 December 2025 (UTC)[reply]
The English Wikipedia is not the place for this discussion. You might try #wikimedia-cloud connect, the cloud@lists.wikimedia.org mailing list, or Phabricator. As for your premise, I trust that the people who built it have taken resource usage into account when building the service. Anomie 23:48, 9 December 2025 (UTC)[reply]

Thoughts on adding conspiracy theories and theorists to WP:CTOP?

[edit]

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Articles dealing with conspiracy theories and their proponents have been a longstanding magnet for POV pushing and other forms of disruptive editing. Before opening a formal discussion I thought I'd start here and get an idea of how people feel about adding this to CTOP -Ad Orientem (talk) 22:22, 11 December 2025 (UTC)[reply]

The first requirement for new CTOP designation is evidence that there is disruption in a definable topic area that is not being and cannot be resolved without such a designation. Do we have that here? Thryduulf (talk) 23:05, 11 December 2025 (UTC)[reply]
I think if you peruse List of conspiracy theories, itself protected, and randomly look at the editing history of some of the more well known ones, you will find ample evidence of problematic editing. Many have already been protected. In fairness, this is not universal. Some of the more obscure CTs garner little attention. But on balance, I'd say that yes, this is a subject area that has been a routine target for various flavors of disruptive editing. Often, it is subtle and not overtly malicious. The true believers are trying to correct the record backed by what they believe are reliable sources. Which they duly cite. Anyone unfamiliar with these subjects and sources may not realize that they are in fact unreliable, and often patently WP:FRINGE. One egregious example is the article on Dorothy Kilgallen. Back in 2014, I and several other editors had to undertake a massive cleanup of the article that had been turned into a WP:COATRACK for JFK Assassination conspiracy theories. These fringe claims had even been promoted on the main page, presumably because no one bothered to look closely at the claims and their sources. But to answer your question, yeah, I think this is a subject area that has produced a lot of disruption and misinformation going back to the earliest days of the project. Some of it coincidentally falls under other CTOP subject areas. But a lot doesn't. It's a problem. -Ad Orientem (talk) 00:32, 12 December 2025 (UTC)[reply]
That doesn't actually answer my question. You've waved your hands in the direction of disruption and given an 11-year-old example of specific disruption. However, disruption alone isn't enough: it needs to be disruption that is not being and cannot be resolved without CTOP, and it needs to be ongoing, so stuff resolved a decade ago is irrelevant. I took a look at the history of a couple of the conspiracy theory articles in the list and there was nothing there that wasn't being handled just fine currently. Thryduulf (talk) 05:23, 12 December 2025 (UTC)[reply]
  • When considering expanding our shadowy network of special opaque rules barely understandable even to most people who report on Wikipedia in the press, much less newer editors still trying to learn the baseline rules for contributing at all, the important question is whether we can get by alright without it. I expect we manage to get by alright without it. This isn't usually an especially subtle crowd causing disruption. The disruption would need to be unmanageable to justify smacking new users with special bitey lawyerly templates any time they get near a fairly broad subject area. GMGtalk 02:28, 12 December 2025 (UTC)[reply]
    The smacking new users with special bitey lawyerly templates any time they get near a fairly broad subject area bit is what causes me to hesitate. WhatamIdoing (talk) 20:47, 12 December 2025 (UTC)[reply]
    Wikipedia is like playing EU4, a game where you can spend 1k hours and still not totally know what's going on. They've got 20 some odd expansions, because their hardcore base are notching up 50k hours and adding new stuff is fun and exciting, but much of it is either overwhelming or useless to the average player.
    Most everything Arbcom related is an expansion pack for Wikipedia. The majority of folks can get along just fine being totally unaware it exists or mostly ignoring it, and for most of those who happen to intersect with it, it's mostly confusing and overwhelming. GMGtalk 22:58, 12 December 2025 (UTC)[reply]
I think CT-stuff fairly often overlaps with some other CTOP, like BLP, pseudo science or AmPol, but of course that will not always be the case. Gråbergs Gråa Sång (talk) 08:58, 12 December 2025 (UTC)[reply]
Does anyone have any statistics about what percentage of our articles are subject to CTOP restrictions, or what percentage of edits are to those articles? I'm a little concerned that, even though a case can be made for each topic, the cumulative effect could be to give too much power over content to administrators. Some statistics could clear things up. Phil Bridger (talk) 21:11, 12 December 2025 (UTC)[reply]
Probably not as while it is easy to find an answer for "how many articles have CTOP-related templates on them" that is a lower number than "how many whole articles are subject to CTOP restrictions" (e.g. Ulster Banner is within the scope of The Troubles CTOP authorisation ("The Troubles, Irish nationalism, and British nationalism in relation to Ireland") but it is not tagged as such), and that is a lower number than "articles which have parts subject to CTOP restrictions" (e.g. History of Manchester#IRA bomb and its effects is subject to CTOPs restrictions under The Troubles but the rest of the article is not.). It is definitely impossible to identify all the last two groups of articles automatically without some sort of context-aware bot familiar with the CTOP topic area (e.g. the bot would need to understand The Troubles, US politics, the Palestine-Israel dispute, the India-Pakistan dispute, Armenia-Azerbaijan dispute, etc, etc.) Thryduulf (talk) 00:53, 13 December 2025 (UTC)[reply]
  • OP Cmt Not going to close this quite yet. But as of this comment there seems little enthusiasm for the idea. If this remains the case after another day or two of discussion, I will close it, or any other experienced editor should feel free to do so. -Ad Orientem (talk) 19:58, 13 December 2025 (UTC)[reply]
  • I have one outlier case that could be interesting to consider. The page dead Internet theory currently has (and has had for years now) a dispute over whether it is actually a conspiracy theory or not. While in this case I would not be opposed to listing it under Wikipedia:Contentious topics for the reasons you suggest (POV pushing from proponents), it illustrates that what we classify as a conspiracy theory may itself be disputed by editors.
GeogSage (⚔Chat?⚔) 20:01, 13 December 2025 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Require editors to acknowledge unread warnings

[edit]

I feel like many new editors who aren't aware of Wikipedia's policies may unintentionally vandalize. Even after they have been warned, they might not check their talk page and therefore not get the message. It might help if we force editors to read their unread warnings before they can publish their changes, so they understand how Wikipedia works and there's no excuse for making intentionally bad edits after that. Speedrunz (talk) 02:25, 12 December 2025 (UTC)[reply]

How exactly can we force people to do stuff? Warnings can be abused as well by people, especially those who are displaying Wikipedia:Ownership of content behavior and/or treating Wikipedia like a WP:BATTLEGROUND. GeogSage (⚔Chat?⚔) 02:28, 12 December 2025 (UTC)[reply]
@Speedrunz, a tangential point, but vandalism on Wikipedia is by definition intentional, see WP:NOTVANDAL and the beginning of Wikipedia:Vandalism. It is important not to mislabel unintentional mistakes as vandalism. Helpful Raccoon (talk) 02:36, 12 December 2025 (UTC)[reply]
I guess I meant unconstructive edits in general, sorry about that. Speedrunz (talk) 02:41, 12 December 2025 (UTC)[reply]
The question is perhaps how to encourage newer editors to check their talkpages, if the yellow bar and the red bell don't do it? Are there editors with experience of not seeing these things when they first joined, and can improvements be made? CMD (talk) 03:24, 12 December 2025 (UTC)[reply]
I know that some IPs just weren't being shown them at all. Not sure if temporary accounts fixed that. Gnomingstuff (talk) 13:02, 12 December 2025 (UTC)[reply]
That was a mobile and app issue. Phab:T278838 is still open but doesn't mention TAs either way; I'm not sure where to look for enlightenment on this. CMD (talk) 14:55, 12 December 2025 (UTC)[reply]

One "for deletion" page

[edit]

Genuine idea. All Wikipedia pages with the prefix "for deletion" or "for discussion" (WP:AFD, WP:MFD, WP:RFD, WP:TFD, WP:FFD, etc.) could be merged into a new page titled "Wikipedia:Requests for deletion" (and its shortcut, WP:RQFD). Its logs would generally be split (in alphabetical order) into "Articles", "Files", "Miscellany", "Others", "Redirects" and "Templates". The logs would be split into parts 1 and 2 so that they would not take forever to load.

Now, before anyone says it: this idea is NOT related to Wikipedia:Votes for deletion, which later evolved into WP:AFD. ~2025-40346-02 (talk) 20:33, 12 December 2025 (UTC)[reply]

These forums all have different rules/procedures, expectations for closing, admins that patrol them, and different people interested in following them. It would also be unwieldy to use, given that AfD already doesn't load for me without incredible lag. Katzrockso (talk) 22:53, 12 December 2025 (UTC)[reply]
If you just want today's list, try WP:AFD/Latest. There's also WP:AFD/Yesterday for yesterday's. WhatamIdoing (talk) 23:52, 12 December 2025 (UTC)[reply]
Several of those are "for Discussion" pages, rather than "for Deletion". By this, we mean that they take a type of page (e.g., categories at Wikipedia:Categories for discussion) and handle not just deletion itself, but also the functions equivalent to Wikipedia:Requested moves (=renaming), Wikipedia:Merge requests, Wikipedia:Split requests, Wikipedia:Requested articles, etc. This is because sometimes it's efficient to have similar activities grouped and sometimes it's more efficient to have similar page types grouped. Thus, e.g., deletion and merges are separate for Wikipedia articles, but they're grouped for categories and templates (both of which have technical complexities that aren't obvious to most contributors). WhatamIdoing (talk) 23:49, 12 December 2025 (UTC)[reply]
Perhaps this idea would merge all the rules of the forums mentioned in this idea. ~2025-34392-86 (talk) 00:44, 13 December 2025 (UTC)[reply]
What benefit would that bring? Discussions of redirects are qualitatively different to discussions about lua modules with very little overlap between them for example. Thryduulf (talk) 01:06, 13 December 2025 (UTC)[reply]
The other user has a good point about not all of these being for deletion, but I think putting the "for deletion" categories together would be useful. I think it would actually be more useful to have a page where everything is put together (say, "items for discussion") and each item has its own tab, something like the layout of the WP:AfC page or most other WikiProjects. This would at least improve navigability and hopefully increase participation in otherwise neglected spaces (many newer editors might not even know these places exist or where to find them). aaronneallucas (talk) 05:21, 18 December 2025 (UTC)[reply]

changes to suggestion system and new lists

[edit]
imagine the "edit short articles" and "edit articles" things as part of the same suggestion, called "expand articles", but separable with a dropdown arrow

new lists

[edit]

there should be a list of things that need an article but don't have one

Wikipedia:Requested articles Aaron Liu (talk) 03:37, 13 December 2025 (UTC)[reply]

system rework

[edit]
  1. suggestion levels are now removed; if you think a suggestion is too hard or too easy, you just disable it from appearing
  2. there are drop-down boxes, represented by arrows, that open more specific suggestions

suggestion changes

[edit]

easy

[edit]

copyedit

  1. copyedit is now split into 2 different options
  2. copyedit is now only for grammar correction and sentence restructuring while still getting the same meaning across
  3. its split half becomes "neutralize articles", which is for removing opinions
  4. copyedit gets a dropdown arrow splitting it into "edit article short description" and "grammar correction"

medium

[edit]

references

  1. add references now has a dropdown arrow with
    1. cite sources (add cites to citeless parts)
    2. add cites (add cites to already-cited parts that need more)

new tasks

  1. there should be a new medium task called "add image" where you add an image to an article
  2. there should be a "separate article" button (separate lengthy parts into different sections or subsections)

hard

[edit]

expand articles

  1. expand short articles should get a dropdown arrow and be renamed to "expand articles"; it would have
    1. expand short articles (expand articles that lack a lot of information)
    2. expand articles (expand articles that already have a lot of information but could use more)

new suggestions

  1. there would be "add section" (add a section to articles)
  2. there would be a suggestion called "reform drafts" (edit drafts that could be good articles if edited; you have to edit one until it's acceptable). This happens if the draft maker agrees to let other people edit their draft and the page rater says it's reformable (the creator and secondary editor could potentially work together on this)
  3. there should also be a suggestion called "make an article" (make an article about something not on Wikipedia; it would be sourced from lists of needed articles)

Misterpotatoman (talk) 00:54, 13 December 2025 (UTC)[reply]

That's a lot to say in just one sentence. –Deacon Vorbis (carbon • videos) 01:30, 13 December 2025 (UTC)[reply]
what does this mean? Misterpotatoman (talk) 02:07, 13 December 2025 (UTC)[reply]
It's rather unreadable – see WP:WALLOFTEXT. Can you try rewording this so that others can understand what you're suggesting? ClaudineChionh (she/her · talk · email · global) 02:18, 13 December 2025 (UTC)[reply]
is it better now? Misterpotatoman (talk) 03:17, 13 December 2025 (UTC)[reply]
Much better! ClaudineChionh (she/her · talk · email · global) 03:50, 13 December 2025 (UTC)[reply]
@Trizek (WMF), is Growth still working on Wikipedia:Newcomer homepage? WhatamIdoing (talk) 03:36, 14 December 2025 (UTC)[reply]
You might be onto something with some of these! I think the add image one makes the most sense, since it is already available in the mobile app as a suggested edit. Beyond that, I think a good medium task might be for articles with the tag "topic is notable but sources are missing". It should be easier to expand since someone has determined that sources exist that could support more content but that the sources are missing. Only issue would be that you'd have to note that it might be either an expansion task or citation task. aaronneallucas (talk) 05:26, 18 December 2025 (UTC)[reply]

COI edit request backlog

[edit]

There are currently 305 edit requests waiting in the COI edit request queue, the oldest of which is about 5 months old. This is a pretty daunting wait time for folks that are following the COI rules, and really just encourages people not to. So...this is the idea lab...any ideas for what to do about this? –Deacon Vorbis (carbon • videos) 01:20, 13 December 2025 (UTC)[reply]

Is there an option to notify relevant Wikiprojects of the articles within their scope that are part of a (non-admin) backlog? In a similar manner to AAlertBot? Nil🥝 01:51, 13 December 2025 (UTC)[reply]
Well basically all other backlogs are more important, so they should have priority. And the COI edit requests mostly fall within a few topic areas. You don't really see biology-related COI edit requests for example. Polygnotus (talk) 09:23, 13 December 2025 (UTC)[reply]
@Hellknowz, is this something that could be added to Wikipedia:Article alerts? I think @Nil NZ is on the right track with breaking down the list into smaller groups and getting the word out to potentially interested editors. WhatamIdoing (talk) 03:39, 14 December 2025 (UTC)[reply]
In the meantime, here's a PetScan link for COI edit requests at WPMED. It returns 23 results at the moment. If you swap in the name of a different WikiProject's main category (e.g., put Category:WikiProject Biography articles where Category:WikiProject Medicine articles is currently listed), then you can generate a smaller, more focused list now. (Non-COI edit requests will require a different category on the first line of the query.) WhatamIdoing (talk) 03:49, 14 December 2025 (UTC)[reply]
Yes, this is something that could be added to article alerts. It's on my todo list. —  HELLKNOWZ  TALK 10:32, 14 December 2025 (UTC)[reply]
Thanks.
@Deacon Vorbis, is this a worse backlog than usual? Or is it always approximately this many open requests? WhatamIdoing (talk) 20:45, 14 December 2025 (UTC)[reply]
No idea really. I was just looking through the normal semi-protected edit request queue (which is pretty well kept in check right now), and saw one that should have been marked as a COI request instead, so I switched it over, and took a look at the queue, and was mildly horrified to see the size of it, so figured I'd bring the topic up. –Deacon Vorbis (carbon • videos) 21:54, 14 December 2025 (UTC)[reply]

AI enhanced picture on biographies should be prohibited

[edit]

There was an attempt to use AI-enhanced pics on Alexandr Wang

Photo [10]

If it's more common, then I think we should prohibit such uses. Cinaroot (talk) 19:32, 13 December 2025 (UTC)[reply]

I agree with this guy. ~2025-40346-02 (talk) 19:46, 13 December 2025 (UTC)[reply]


Per WP:AIIMAGES "AI-generated images should not be used to depict named individuals or any living people. Marginal cases (such as major AI enhancement or if an AI-generated image of a living person is itself notable) are subject to case-by-case consensus. AndyTheGrump (talk) 19:47, 13 December 2025 (UTC)[reply]
Can we change it to AI-generated or enhanced images should not be used.... to be more clear. Cinaroot (talk) 19:51, 13 December 2025 (UTC)[reply]
If you want to propose a change to the wording, you need to do so on the relevant talk page. AndyTheGrump (talk) 19:53, 13 December 2025 (UTC)[reply]
Prohibiting it outright could lead to some issues, modern smart phones use filters, settings, and other technology that could be seen as "AI enhancement." Popular editing software also makes use of AI tools for mundane tasks. A blanket policy like this might limit the devices we can capture photos on, and would likely result in a witch hunt across the media linked on Wikipedia. GeogSage (⚔Chat?⚔) 19:58, 13 December 2025 (UTC)[reply]
In my view, lede images should be 100% unaltered, or with light touchups at most. But on Catherine Zeta-Jones I think there is too much enhancement. Probably not AI. Cinaroot (talk) 20:04, 13 December 2025 (UTC)[reply]
There was a discussion about that a while back at Wikipedia_talk:Biographies_of_living_persons/Archive_65#Propose_rephrasing_WP:MUG. Some1 (talk) 20:14, 13 December 2025 (UTC)[reply]
I'm not a photographer, but I do have a background in working with satellite images. The issue here is that "unaltered" is not always simple when it comes to digital media; alteration is often done to remove artifacts from the camera/scanning process itself. GeogSage (⚔Chat?⚔) 03:28, 14 December 2025 (UTC)[reply]
I agree with GeogSage.
Coincidentally, I was looking at one of the big real estate websites today, and I think all the initial photos were AI-enhanced to a fairy tale level. I don't think we want that. But I also don't think we want someone rigidly applying a "no AI enhancements at all" rule down to fine details. Imagine what a mess we'd have if some obsessive editor started telling people which buttons in which photo editing software they're not allowed to use. WhatamIdoing (talk) 04:07, 14 December 2025 (UTC)[reply]
Then we need to define what's acceptable refinement and what's not
See an example https://commons.wikimedia.org/wiki/File:Alexandr_Wang,_Chief_A.I._Officer,_Meta.jpg
Do you think it's acceptable? Cinaroot (talk) 04:12, 14 December 2025 (UTC)[reply]
Why do you believe that it's AI enhanced? WhatamIdoing (talk) 20:51, 14 December 2025 (UTC)[reply]
If you have looked at his real pics or videos on YouTube, then you may see the skin color and hair are all enhanced. Cinaroot  💬 21:02, 14 December 2025 (UTC)[reply]
So? Makeup and hair styling exist in the analog world, and if I were running the PR department for a zillionaire company, I'd hire people who have those skills to make the leadership team look attractive in their official corporate photos. No AI is required for this. WhatamIdoing (talk) 21:34, 14 December 2025 (UTC)[reply]
The official photo of Donald Trump is enhanced. Nothing we can do about that, because it's standard practice to use the official portrait for government officials. But Wikipedia doesn't do that for a CEO or private individuals as far as I know. Anyway, it's my opinion. I'd want to see real pics, not heavily altered ones. Cinaroot  💬 21:42, 14 December 2025 (UTC)[reply]
As editors, we are allowed to use editorial judgements on images, and choose the image we (as a community) think is best. If we think an image has been overly enhanced, we are free to reject it. However, the same is true in the other direction. An enhanced image might be deemed perfectly reasonable and appropriate in some cases. In other words: no “rule”… discuss each image individually. Blueboar (talk) 21:57, 14 December 2025 (UTC)[reply]
That's fine then. I wanted to have some directives in writings - but no need Cinaroot  💬 22:17, 14 December 2025 (UTC)[reply]
At the moment, I'm feeling like the directive we need is "Don't make unverifiable accusations of AI enhancements for publicity photos".
I can't find it right now, but there was a discussion somewhat recently (could have been a few years ago) in which an editor tried to get a rule against overly flattering photos of BLPs adopted. It failed. WhatamIdoing (talk) 23:00, 14 December 2025 (UTC)[reply]
How do you verify AI enhancement? I use my own judgements. Nothing wrong with that. Picture in question is provided by Meta public relations team. Cinaroot  💬 23:08, 14 December 2025 (UTC)[reply]
Do you see a problem with someone jumping from "This professionally produced studio portrait seems overly flattering to me" to "Therefore it was definitely made overly flattering through the exact method of AI image software, and definitely not through other methods of making studio portraits flattering, such as makeup, hair styling, lighting, trick photography, filtering, facetuning, photoshopping, etc."?
Sources suggest looking for watermarks, considering the context (e.g., joke website?), looking at the resolution (AI images are usually low resolution), checking fine details (is the skin texture blurred at max resolution? Do the pupils align?), or using AI image detector tools.[11][12] WhatamIdoing (talk) 00:31, 15 December 2025 (UTC)[reply]
I see the problem. Its just a proposal. I already acknowledged we don't need to take any action here. Cinaroot  💬 01:22, 15 December 2025 (UTC)[reply]
Just two months ago: Wikipedia talk:Biographies of living persons/Archive 65#Propose rephrasing WP:MUG. Schazjmd (talk) 23:45, 14 December 2025 (UTC)[reply]
Thanks. WhatamIdoing (talk) 00:33, 15 December 2025 (UTC)[reply]
Cinaroot, enhancing pictures with AI is quite OK for me. But AI-generated pictures are where I draw the line. ~2025-32362-48 (talk) 13:50, 17 December 2025 (UTC)[reply]

Any rules about "enhancing with AI" really needs to define that term. AI is integrated into a wide range of standard photography tools. Removing a dust spot, reducing noise, sharpening, changing lighting, upscaling, extending backgrounds, colorization, restoration, etc. Some of those are probably relevant to the display here, and others less so. Many of the AI tools do the same things you could do in Photoshop without AI, but make them faster/easier. i.e. denoising, upscaling, sharpening, cloning out artifacts/dust spots are as old as photo editing software, but have become used more since AI has made them easier. IMO the key is for anything beyond the most trivial modifications that actually change the appearance of context of the subject should be noted on the file page with the "retouched" template. Then there's a separate discussion, probably best made on a case-by-case basis, of what makes sense to include in an article. — Rhododendrites talk \\ 00:41, 15 December 2025 (UTC)[reply]

I don't have a problem with slight enhancement. See this pic https://commons.wikimedia.org/wiki/File:Alexandr_Wang,_Chief_A.I._Officer,_Meta.jpg
And see his latest pics from 2025
https://time.com/7296215/alexandr-wang-interview/
No one seems to be answering whether this kind of enhancement is acceptable. Cinaroot  💬 01:34, 15 December 2025 (UTC)[reply]
I might be bad at comparing pictures, but I do not see anything beyond what a stylist and a portrait photographer can accomplish. Therefore, to me these are "AI-was-not-essential" enhancements that I am perfectly OK with. Retouching is a standard MO; any person getting their photo portrait taken receives this service unawares, unless the shot is completely amateurish. As an example, in a group portrait someone is always blinking or sneezing, so the final version is frequently combined from multiple shots. We should not object to doing the same with AI. Викидим (talk) 02:00, 15 December 2025 (UTC)[reply]
Cinaroot, I don't think we're understanding each other here. Here's what it sounds like to me:
  • A: Is File:Alexandr Wang, Chief A.I. Officer, Meta.jpg an acceptable use of AI?
  • B: I don't think that's an AI image.
  • A: This studio portrait is more flattering than the candid photos I've seen of him, so it's AI enhanced.
  • B: It has a resolution (5,760 × 8,640 pixels), which is significantly in excess of most AI image software's capabilities, so I don't think that's an AI image.
  • A: His skin tone doesn't match the photos published under completely different circumstances, so it's AI-enhanced.
  • B: Have you considered old-fashioned makeup and lighting effects?
  • A: In my own judgment, this is an AI-enhanced photo.
  • B: AI photos have problems with fine details, like the texture of skin or fabric when you zoom in. There are no such problems here, so I don't think that's an AI image.
  • A: Why isn't anyone telling me whether this kind of AI enhancement is acceptable or not?!
Let me try this a little more bluntly: This is NOT an AI-enhanced photo. Therefore, it is NOT possible to say whether this non-AI-enhanced photo represents an acceptable use of AI enhancement, because it is NOT an AI-enhanced photo.
Even if we pretended for a moment that there was a good reason to believe that this was an AI-enhanced photo, there'd be no way to tell whether it was enhanced "too much" without knowing what the original is. "Too much" is a relative measure of difference from the original photo, not from unrelated photos taken under significantly different circumstances. WhatamIdoing (talk) 02:09, 15 December 2025 (UTC)[reply]

Here is, I think, an exactly valid use case for AI enhancement of an image, which I created a few years ago. James Avery was a smoker. The character he played, Philip Banks (The Fresh Prince of Bel-Air), was not. I took the best free-use image we have of James Avery and removed the cigarette for use in the article on the character, as the image now shows what the actor looked like in character. I really have no concerns about using an AI-enhanced photo of an actor to correctly depict a fictional character. BD2412 T 02:24, 15 December 2025 (UTC)[reply]

That one, and not the OP, seems a bit much to me, but that would be an argument for the article talk page. My point above was that removing the cigarette is something that doesn't actually require AI, and thus it's a good example of why "AI enhancement" needs clear definition if it were to be added somewhere. I mean really it's just "avoid heavily manipulated images in general, but exceptions can be made on a case-by-case basis", which is probably stated somewhere already. — Rhododendrites talk \\ 02:40, 15 December 2025 (UTC)[reply]
For AI enhancements, it's stated under WP:AIIMAGES: "Marginal cases (such as major AI enhancement or if an AI-generated image of a living person is itself notable) are subject to case-by-case consensus.". BD2412's example raises an interesting question, though... When the end result is the same, what's the difference between using AI to remove the cigarette from the image vs having a person use Photoshop to do the same thing? Is one method acceptable but not the other, and if so, why? Some1 (talk) 05:33, 15 December 2025 (UTC)[reply]
Oh, and I just learned that Adobe Photoshop is now built into ChatGPT for free – and you don't need graphic design skills to use it [13] Some1 (talk) 06:09, 15 December 2025 (UTC)[reply]
Actually, I did use Photoshop, which had as far as I recall just added its "inpainting" feature. BD2412 T 13:06, 15 December 2025 (UTC)[reply]
If a picture were to be AI-enhanced, there should be some sort of notice about the picture being AI-enhanced, like a box or a footnote. ~2025-32362-48 (talk) 13:52, 17 December 2025 (UTC)[reply]
A photograph is intended to tell us something about the subject. Modifying a photo changes the narrative. Subtracting a cigarette from a photo is explicit misrepresentation. (WP:OR if you like.) If you need a photo of an actor portraying a character, then find and use a photo of that actor portraying that character, do not create a fake derived from a photo of the actor. The same goes for colourization of old black and white photos – they are just another form of misrepresentation. Being specific to the AI question, you have to ask about intent as well – does the fake image convey a narrative that is validly supported by the reliable sources? If so, then say that it is an AI simulation that illustrates the topic involved and include the references that support the representation — GhostInTheMachine talk to me 15:31, 17 December 2025 (UTC)[reply]
We can't just "find and use a photo of that actor portraying that character" due to copyright law. Fair use only applies where no free substitute is possible, which it obviously is in this case. I would also reject the characterization of this image as a "misrepresentation"; I looked at other (non-free) images of the subject to make sure that there was not some quirky tooth or other feature varying from the detail effectively restored here. This is what the subject looks like without a cigarette dangling from his mouth, and had this photograph been taken a minute before or after at a point when he was holding the cigarette in his hand, for example, then it would have captured exactly this. BD2412 T 16:28, 17 December 2025 (UTC)[reply]
If we can't "find a suitable .. photo of that actor portraying that character" then we are stuck with not displaying one. Asserting how the person might have looked had the photo been taken at a different time does not help us. Such suppositions are still more or less WP:OR or WP:SPECULATION or probably both — GhostInTheMachine talk to me 17:06, 17 December 2025 (UTC)[reply]
If this picture constitutes "WP:SPECULATION" as to what James Avery looks like without a cigarette, that's the clearest case for WP:IAR that I've ever heard. A picture of a character in an article on the character helps the reader to understand the character they have come to read about. There is no sane case that this picture fails to do that. BD2412 T 02:07, 18 December 2025 (UTC)[reply]
An image that was altered like this should not be used on Wikipedia. If you want to show the actor as himself, then you should include the photo as it was (cropping, OK, but not altering artefacts, clothing, hairstyle, whatever); if you want to show the character, you need a screenshot or similar image where he is actually in character. If no free character image is available then either we don't show one, or we write a good fair use justification for one. Fram (talk) 16:39, 17 December 2025 (UTC)[reply]
I don't agree that Subtracting a cigarette from a photo is explicit misrepresentation. I would agree that subtracting a cigarette from a photo plus writing a caption that says "This person is a lifelong non-smoker" or "There was never a cigarette in this photo" or "When this photo was taken on 32 Octember, this man didn't smoke cigarettes" would be explicit misrepresentation. It probably wouldn't even occur to me to think about making this kind of change, but I wouldn't call it wikt:explicit misrepresentation. I'm not sure that it's a significant misrepresentation at all. A cigarette isn't permanently attached to him. If the photographer had taken the picture a minute earlier or later, there would have been no cigarette in his mouth. Therefore, a photo sans cigarette accurately represents a true thing. WhatamIdoing (talk) 21:49, 17 December 2025 (UTC)[reply]

Same titled page in both WP and Help namespaces

[edit]

 Courtesy link: WT:Translation § Merge proposal
 Courtesy link: WP:Namespace/Help vs. Wikipedia

There are a number of project topics that have pages in both Help and Wikipedia namespaces with the same title (pagename), and it isn't always clear how these differ or ought to. For example: Help:Translation & Wikipedia:Translation, or Help:Substitution & Wikipedia:Substitution.

Recently, a merge proposal was held at WT:Translation. It involved a lot of good discussion, but also, I would say, a fair bit of confusion about how to decide what goes where, how the two namespaces should differ when a topic is covered in both, and whether that should ever happen at all. After it was over, I questioned whether this case isn't just a subtopic of a larger question involving topics that have pages in both Help and Wikipedia space, and I wondered how many such topics there were. Ignoring redirects, it turns out there are about eighteen such twinned pagenames (36 pages across the two spaces), most or all of which will be familiar to you.

Help namespace | Wikipedia namespace | Help first edit | WP first edit | Help unique users | WP unique users | Help total edits | WP total edits | Help page views[a] | WP page views[a]
Help:Authority control | WP:Authority control | 2012-10-15 | 2011-06-15 | 202 | 182 | 311 | 293 | 50,630 | 274
Help:Books | WP:Books | 2009-02-25 | 2008-12-06 | 164 | 255 | 303 | 536 | 276 | 703
Help:Censorship | WP:Censorship | 2012-03-09 | 2012-03-08 | 29 | 12 | 47 | 23 | 280 | 38
Help:Contents | WP:Contents | 2012-09-20 | 2001-09-27 | 129 | 713 | 483 | 1883 | 132,281 | 66,142
Help:Disambiguation | WP:Disambiguation | 2010-01-18 | 2002-02-02 | 44 | 1313 | 85 | 3767 | 17,626 | 5,362
Help:Education Program extension | WP:Education Program extension | 2012-05-09 | 2013-09-26 | 15 | 6 | 90 | 12 | 14 | 2
Help:Example | WP:Example | 2014-07-29 | 2009-12-01 | 5 | 57 | 8 | 95 | 27 | 110
Help:ISBN | WP:ISBN | 2013-02-15 | 2002-11-03 | 135 | 207 | 222 | 348 | 1,049 | 507
Help:Lua | WP:Lua | 2013-03-01 | 2013-02-16 | 25 | 153 | 57 | 289 | 115 | 779
Help:Media | WP:Media | 2005-02-08 | 2008-03-08 | 265 | 48 | 415 | 61 | 8,964 | 51
Help:Page name | WP:Page name | 2010-01-27 | 2004-09-13 | 99 | 426 | 150 | 877 | 358 | 1,138
Help:Redirect | WP:Redirect | 2004-09-20 | 2001-04-17 | 328 | 1272 | 604 | 2666 | 1,088 | 11,772
Help:Reverting | WP:Reverting | 2005-11-24 | 2005-05-25 | 1213 | 278 | 1830 | 620 | 4,345 | 348
Help:Substitution | WP:Substitution | 2006-05-12 | 2003-12-09 | 241 | 473 | 434 | 1010 | 555 | 1,018
Help:Translation | WP:Translation | 2007-05-26 | 2004-02-17 | 27 | 272 | 80 | 595 | 333 | 1,874
Help:VisualEditor | WP:VisualEditor | 2013-06-26 | 2012-12-05 | 221 | 458 | 533 | 921 | 1,062 | 46,468
Help:Wikidata | WP:Wikidata | 2016-11-27 | 2012-10-20 | 20 | 173 | 103 | 416 | 130 | 5,098

Notes

  1. ^ a b Page views: total for period 2025-11-21 – 2025-12-11
Other tables and more details at Wikipedia:Namespace/Help vs. Wikipedia.

I think it would be helpful to discuss how these two namespaces differ, and in particular, if and when a topic deserves treatment in both, how that breaks down as far as what belongs where, and where the boundaries are. Please note: I am as big a foe of instruction creep as the next person, and I am not looking for any ironclad rules here. But I think a lot of people would appreciate some general guidance, so that when merge or delete discussions come up we can frame them around some sustained, underlying principle or goal, rather than redefining what is appropriate to each namespace every time, depending on which cast of characters happens to respond. A wee bit of consistency is not a bad thing.

In an effort to provide some insight into the scope and nature of the issue, I made a Quarry request and got some good data back thanks to volunteer Cryptic, and massaged it into a more digestible format at Wikipedia:Namespace/Help vs. Wikipedia. I am hoping that providing this data will stimulate a discussion here that might provide some support for how we ought to view and deal with twinned pages in Help and Wikipedia namespaces. Thanks, Mathglot (talk) 01:43, 14 December 2025 (UTC)[reply]
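For anyone who wants to reproduce or extend this list without a Quarry account, here is a rough sketch of the equivalent lookup against the public MediaWiki query API. The namespace numbers (12 for Help:, 4 for Wikipedia:) are standard, but the parameter details should be treated as an assumption to check against the API documentation rather than a tested script:

import requests

API = "https://en.wikipedia.org/w/api.php"

def titles_in_namespace(ns):
    # Yield non-redirect page titles in the namespace, minus the "Help:"/"Wikipedia:" prefix.
    params = {
        "action": "query", "list": "allpages", "apnamespace": ns,
        "apfilterredir": "nonredirects", "aplimit": "max", "format": "json",
    }
    while True:
        data = requests.get(API, params=params).json()
        for page in data["query"]["allpages"]:
            yield page["title"].split(":", 1)[1]
        if "continue" not in data:
            break
        params.update(data["continue"])

help_titles = set(titles_in_namespace(12))      # Help: namespace
project_titles = set(titles_in_namespace(4))    # Wikipedia: namespace
print(sorted(help_titles & project_titles))     # the twinned pagenames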

Do you mean how these two namespaces actually differ, or how these two namespaces should differ? In practice, there are some pages in the Wikipedia: namespace that should be in the Help: namespace (converse is not true). WhatamIdoing (talk) 04:09, 14 December 2025 (UTC)[reply]
I mean, if it were up to me, I would have opposed the creation of the help namespace in the first place, or just made it a redirect to projectspace. The distinction between it and projectspace has never been particularly sharp or well-defined, which has led us to the mess we face here. There are some potential distinctions that could be made between these types of pages — simplified beginner-friendly pages vs. comprehensive documentation that can be cited in disputes, for instance, or reader-facing pages vs. editor-facing pages. I wish that discussion was had and resolved and enforced when the namespace was created, since it'd be a lot more difficult now to retrofit everything. Sdkbtalk 05:21, 14 December 2025 (UTC)[reply]
There are barely 1000 non-redirect pages in the Help namespace, most of which could simply be moved to projectspace with minimal issues. Somehow I can't find any proposals to merge the two namespaces on enwiki; this is the closest I could find. I am inclined to support such a proposal. Helpful Raccoon (talk) 06:17, 14 December 2025 (UTC)[reply]
AIUI all the Help: pages were once at Meta-Wiki, and then the community decided to copy/import them here so they could fork them. The decision was made long enough ago that it might have been discussed on the mailing list or even in IRC. (That was the ordinary thing to do, back in the day.) WhatamIdoing (talk) 00:37, 15 December 2025 (UTC)[reply]
There is a useful purpose for a documentation namespace, apart from a projectspace namespace, in a MediaWiki installation: it can hold generic instructions about using MediaWiki that are common to all installations and that can be updated through MediaWiki updates. Perhaps ideally the Help: prefix would have been a magic redirecting prefix, redirecting to a Wikipedia namespace page if it exists, otherwise to a Documentation namespace page if it exists. But now that Internet connectivity tends to be quite widespread, even within internal corporate networks (though not in all cases), relying on WMF servers to host the generic instructions is probably sufficient (added bonus: they can provide access to the instructions in multiple languages).
Based on this concept of generic instructions, I personally think of the Help namespace as providing cookbook-like instructions on the mechanics of basic editing, without any customizations. But I appreciate that, for the non-technically oriented editor, it isn't obvious what fits into this category. (Descriptions of using any specific template, for instance, wouldn't fit.) It might be workable to keep English Wikipedia processes documented within projectspace, and to have the Help namespace document how to edit an article page so it has a certain component or appearance. isaacl (talk) 17:35, 14 December 2025 (UTC)[reply]
Isaacl, can you elaborate? Why do you think that Help space is appropriate for "generic instructions about using MediaWiki that are common to all installations"? Isn't that precisely what the MediaWiki project itself is for? And it already provides access to instructions in multiple languages; e.g., mw:Help:Temporary accounts is translated into Dutch, German, Indonesian, Javanese, Luxembourgish, Malay, Sundanese, and many more. Doesn't this already do what you are proposing? Mathglot (talk) 20:57, 14 December 2025 (UTC)[reply]
I was responding specifically to Sdkb's comment that no Help namespace was needed at the outset. Note that at the genesis of the MediaWiki software, company intranets with access to the Internet weren't nearly as common as they are now, so it made sense to bundle help documentation within a MediaWiki installation that doesn't require external Internet access. I already agreed with you that today, relying on the currently existing help documentation on Wikimedia Foundation servers is probably sufficient. I wasn't making a proposal about that. isaacl (talk) 23:28, 14 December 2025 (UTC)[reply]
Around 10 years ago there was a discussion (can't find it) – one that many thought was a good idea, but an overwhelming job – about moving all WP:HOWTOPAGES and WP:ESSAYPAGES to a namespace called "Information", WP:MAINTPAGES to a new namespace, and WP:PROPAGES to a "WikiProject" namespace. This would devote the Wikipedia namespace to the administration and governance of Wikipedia itself, with only {{policy}}, {{guideline}} and {{MoS guideline}} pages remaining there, alongside WP:DISPAGES. Moxy🍁 16:52, 17 December 2025 (UTC)[reply]
Both how they do and how they should. It would be useful to see your list of WP pages that belong in Help, and your thoughts on why there are no Help pages that belong in WP. I get the impression that more than one user would move all of them to WP. Mathglot (talk) 09:55, 14 December 2025 (UTC)[reply]
Some of the "reader-focused" help pages, like Help:Disambiguation and Help:Authority control, should just go in the lead of the corresponding WP page. I don't see a point to keeping two separate versions of Help:Redirect and Help:Reverting. I also noticed that Help:Substitution is more complex than WP:Substitution.
There are other twinned pages that are not captured here, such as Help:Category/Help:Categories/WP:Categorization (Yes, Help:Category and Help:Categories are separate pages.) Or Help:Protection/Help:Protected pages/Wikipedia:Protection policy, or Help:Alt text/Wikipedia:Manual of Style/Accessibility/Alternative text for images. Helpful Raccoon (talk) 20:17, 14 December 2025 (UTC)[reply]
Great examples; thanks for listing them. They were not included at the outset, because the original Quarry query required identical names and does not do stemming or alternative titles. (The other query includes redirects, and the corresponding table is 317 rows.) I've added these to a new table at § Non-exact matches at the data page. Feel free to add more examples like these directly to the table. Mathglot (talk) 22:01, 14 December 2025 (UTC)[reply]
Had another look, and realized that all of these examples are already in the larger table at § Redirects. Mathglot (talk) 00:14, 15 December 2025 (UTC)[reply]
Exactly where is the similar name? In the link text or the page name? ~2025-32362-48 (talk) 13:54, 17 December 2025 (UTC)[reply]

Listed at: Wikipedia talk:Namespace. Mathglot (talk) 09:49, 14 December 2025 (UTC)[reply]

Should we just go ahead and propose that the Help namespace be deprecated? It seems we agree Help and WP: overlap too much and the distinction can't really be made clearer. FaviFake (talk) 16:09, 17 December 2025 (UTC)[reply]
Help:About help pages Moxy🍁 16:20, 17 December 2025 (UTC)[reply]
§ Same titled page in both WP and Help namespaces FaviFake (talk) 16:23, 17 December 2025 (UTC)[reply]

Synchronizing and removing inconsistencies of all human-written editions of Wikipedia by implementing Machine Wikipedia

[edit]
The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
The village pump is not the place for new project proposals or discussion of the existing Abstract Wikipedia project. The OP has been directed to the appropriate venues on meta. Cremastra (talk · contribs) 02:08, 15 December 2025 (UTC)[reply]

Hi, I have proposed the idea of creating Machine Wikipedia earlier here. I want to note that this idea is already handled well by ChatGPT with this instruction, because ChatGPT is ready for all human-written languages. If we convert text on Wikipedia to RDF (Resource Description Framework), then any inconsistencies in human-written data across versions of Wikipedia could be resolved easily by looking at Wikipedia's machine version. I propose this scenario for synchronizing an English Wikipedia article with its French counterpart:

  1. Convert the English article to RDF and fill its Machine Wikipedia version with the resulting RDF triples.
  2. Convert the French article to RDF.
  3. Find inconsistencies.
  4. Resolve inconsistencies by modifying the incorrect sentences, and then check the RDF version again.

Finally, implementing Machine Wikipedia with LLMs like ChatGPT is a piece of cake. Given its benefits, I really do propose implementing it. Thanks, Hooman Mallahzadeh (talk) 05:30, 14 December 2025 (UTC)[reply]
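As a concrete illustration of step 3 above, here is a minimal sketch using the Python rdflib library. The example triples, the vocabulary namespace, and the definition of an inconsistency (same subject and predicate, different object) are all simplifying assumptions; real article text would first need a trustworthy text-to-RDF step:

from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")  # hypothetical vocabulary, for illustration only

# Pretend these triples came out of steps 1 and 2 (the English and French articles).
en, fr = Graph(), Graph()
en.add((EX.Albert_Einstein, EX.birthYear, Literal(1879)))
fr.add((EX.Albert_Einstein, EX.birthYear, Literal(1880)))  # deliberate inconsistency

# Step 3: flag triples where both graphs assert the same subject and predicate
# but disagree on the object.
for s, p, o in en:
    for o2 in fr.objects(s, p):
        if o2 != o:
            print("Conflict on %s %s: en says %s, fr says %s" % (s, p, o, o2))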

Additionally, "filling Wikidata items" using the idea of Machine Wikipedia would be very convenient. Hooman Mallahzadeh (talk) 05:35, 14 December 2025 (UTC)[reply]
I don't think you can just convert English natural language to RDF. Also the project proposal page is hard to read as the description is obviously not meant to be put into the small infobox. Moreover, I don't think LLMs would be good to use due to hallucination problems and other issues that stem from their fundamentally flawed architecture that is not made to be accurate but just to sound plausible based on their training data.
Regarding inconsistency detection, maybe something similar to this could be useful, but it wouldn't resolve the inconsistencies.
See m:Contradictions within Wikimedia projects. Prototyperspective (talk) 12:44, 14 December 2025 (UTC)[reply]
The English and French versions of a given article are not supposed to be similar, no Wikipedia is supposed to be just a translation of another but a project developing on its own (even if an article begins as a translation, it may evolve into something else later on). So yes, there will be several inconsistencies, and that's by design. Even if there are no factual inconsistencies, there would be inconsistencies over stuff that projects decide to write about or ignore, if info has been moved to a subarticle or kept at the main one, if certain ways of saying things are allowed or discouraged (which may even be tied to the meaning of such phrases in the specific language), etc. Cambalachero (talk) 14:18, 14 December 2025 (UTC)[reply]
@Cambalachero You are right! Various versions of Wikipedia express the same facts in different modes; one says something is good and another says it is bad. I think the subjective part can be ignored and only the factual part extracted. I propose "Machine Wikipedia" to be a factual, cumulative version of all human-written versions of Wikipedia. If we encounter a factual conflict between these Wikipedia versions, then some conflict resolution should be done to sync them. Hooman Mallahzadeh (talk) 15:10, 14 December 2025 (UTC)[reply]
@Cambalachero The existence or absence of a given RDF triple in a Wikipedia version is not important; only conflicting RDF triples need to be resolved. Hooman Mallahzadeh (talk) 15:26, 14 December 2025 (UTC)[reply]

@Prototyperspective: Hi and thanks for your response. You said:

I don't think you can just convert English natural language to RDF.

Please insert this prompt to ChatGPT site:

Extract RDF triples from the following text.

Text:
"Albert Einstein was born in Ulm in 1879."

Output the result as subject–predicate–object triples.

Output is:

Albert_Einstein — birthPlace — Ulm
Albert_Einstein — birthYear — 1879

and then change "Text:" to your customized text. You may be surprised by its accuracy. Certainly, the accuracy of RDF created by ChatGPT should be assessed against a benchmark, and it is prone to hallucinations, but I really think that starting this project would be a pioneering implementation of Web 3.0 for Wikipedia. Hooman Mallahzadeh (talk) 13:04, 14 December 2025 (UTC)[reply]
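For completeness, turning that "subject — predicate — object" output into machine-usable triples is simple string handling; the sketch below assumes the dash-separated format shown above and skips malformed lines rather than trusting the model blindly:

raw = """Albert_Einstein — birthPlace — Ulm
Albert_Einstein — birthYear — 1879"""

triples = []
for line in raw.splitlines():
    parts = [part.strip() for part in line.split("—")]
    if len(parts) == 3:  # skip anything that isn't a clean three-part triple
        triples.append(tuple(parts))

print(triples)  # [('Albert_Einstein', 'birthPlace', 'Ulm'), ('Albert_Einstein', 'birthYear', '1879')]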

How would this differ from Abstract Wikipedia? CMD (talk) 13:52, 14 December 2025 (UTC)[reply]
@Chipmunkdavis Hi, I think Tim Berners-Lee first proposed the idea behind Abstract Wikipedia, so several of the ideas in it should be changed back to his original formulation:
  1. Rename project to Machine Wikipedia
  2. Use RDF for structured data
  3. Use RDF-Schema for Constructors
So in my opinion, Abstract Wikipedia needs some renaming to be consistent with Web 3.0. But as I said, implementing Web 3.0 with LLMs like ChatGPT is very easy. Hooman Mallahzadeh (talk) 14:06, 14 December 2025 (UTC)[reply]
You probably should raise this with Abstract Wikipedia then, there is not much we would be able to do here. The voting on renaming has just ended. CMD (talk) 14:11, 14 December 2025 (UTC)[reply]
@Chipmunkdavis I proposed the idea here. Thanks for your guidance. Hooman Mallahzadeh (talk) 14:27, 14 December 2025 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Use British English templates

[edit]

Since the mass deletion of Use English dialect templates, on the basis that the national dialects were too similar to British English and very few editors were familiar with them, we've been using {{Use British English}} for articles whose topics have very little to do with the UK. What it means is that a foreign dialect gets enforced on those topics, and an editor who happens to be familiar with the relevant national dialect and makes appropriate changes gets reverted. This is even more inappropriate given that most of these countries are former British colonies.

I thought a good solution would be to recreate {{Use Commonwealth English}} (deleted in 2021). Commonwealth English refers to a variety of dialects of Commonwealth nations that tend to be similar to British English. This template could be used for Commonwealth countries whose dialects are too similar to British English to warrant separate templates. In practice, most people would treat it as Use British English, but it gives the few editors familiar with the relevant national dialect room to copyedit the relevant articles appropriately.

Alternatively, we can follow Amakuru's idea and remove Use British English from articles whose strongest national ties are not to the UK, leaving it blank since there aren't templates for those national dialects.

Happy to hear people's thoughts; obviously further ideas are welcome (pinging WhisperToMe, Amakuru, and Dgp4004 from the previous discussion) Kowal2701 (talk) 20:26, 14 December 2025 (UTC)[reply]

I'm wondering if a template could be called something like "Use British English conventions for this Commonwealth Country". It would say something to the effect of "Please use British conventions for the national English of X, Y, and Z countries." There would be documentation showing that the standardized, academic, formal English of each relevant country aligns with British English, but not that the template is itself using British English. WhisperToMe (talk) 20:35, 14 December 2025 (UTC)[reply]
Can you give some examples where this has caused a problem? Phil Bridger (talk) 20:39, 14 December 2025 (UTC)[reply]
@Phil Bridger: Hi! When I realized Template:Use Kenyan English was deleted, I wanted to add Template:Use British English to replace the deleted templates, as the standardized formal, academic English used in Kenya aligns with British English. Template:Use Singapore English was replaced with Use British English with a bot, and the relevant deletion discussions stated that the standardized formal, academic Englishes of those two countries pretty much aligned with UK English. English is an official language in both Singapore and Kenya, and so WP:ENGVAR guidelines on "strong ties to a particular English-speaking nation" apply to both.
On Talk:Kenyan English, where I added "Use British English" to represent the conventions of standard, formal English used in Kenya, a user reverted and argued that the deletion of "Use Kenyan English" was unfair as the people of interest in Kenya articles were, in his view, not properly notified. Also, there are users who feel it is not proper to put "Use British English" on Kenya-related articles as Kenya and the UK are two different countries, even if the formal, standardized Englishes between the two are the same. I'm thinking of a solution where people ask to use UK style English but don't feel that it's the UK being pushed on a former colony.
WhisperToMe (talk) 20:51, 14 December 2025 (UTC)[reply]
The problem seems to be the word "British" in the template rather than anything linguistic. The revival of {{Use Commonwealth English}} would seem to get round that. Phil Bridger (talk) 21:06, 14 December 2025 (UTC)[reply]
I proposed "Use British English conventions for this Commonwealth Country" specifically to address a reason why the "Use Commonwealth English" template was deleted. The argument was that Australian, Canadian, and NZ English are all Commonwealth but aren't the same as British English, so using a title like "Use British English conventions for this Commonwealth Country" states specifically what is really going on and avoids the stated argument. WhisperToMe (talk) 21:09, 14 December 2025 (UTC)[reply]
I think you're making a mountain out of a molehill to be honest WhisperToMe. Looking through your edits, you've inserted the British English template onto dozens and dozens of articles today. Your changes were reverted on one page. So I fear this trying to reinvent the wheel might be a bit of an overreaction. There will always be users who object to one template or another. You'd be better just accepting the reversion and moving on. Dgp4004 (talk) 22:26, 14 December 2025 (UTC)[reply]
Thank you for the feedback. I think I've made my points and will see the discussion play out. WhisperToMe (talk) 22:53, 14 December 2025 (UTC)[reply]
Why can't we have redirects from (e.g.,) Template:Use Commonwealth English to {{Use British English}}? WhatamIdoing (talk) 00:40, 15 December 2025 (UTC)[reply]
One of the objections is that templates should be clear and unambiguous to human editors, not just spelling bots. As there is no such thing as 'Commonwealth English', it's not an ideal title. A fundamental problem is that there is no clearer way of saying 'use British English' than {{Use British English}}. Dgp4004 (talk) 00:51, 15 December 2025 (UTC)[reply]
We have an article about Commonwealth English. Why would we have an article about something if "there is no such thing"? WhatamIdoing (talk) 02:25, 15 December 2025 (UTC)[reply]
That's a redirect, not an article. NebY (talk) 13:07, 15 December 2025 (UTC)[reply]
Since the current 'rubbing point' in this is my objection to putting {{Use British English}} on the article about Kenyan English, I want to at least state here that I would object to any template that calls the language used in a non-British country "British", without very strong sourcing showing absolutely no localized difference between the language used there and British English. (Perhaps I'm alone in this objection.)
Contributing to Kowal2701's topic here, I do think creating a new template at {{Use Commonwealth English}} is a good starting point for this discussion, because I vehemently disagree with the notion that people living in these places, with their own varieties of English that are notably different from British English, are still speaking British English. Would it be unacceptable to have the template, with a variable for a parenthetical suffix, so it could be used across most Commonwealth places (the ones with Englishes that don't deviate very far from each other, but do still deviate), but for each case could have the specified location available? My description/explanation might be confusing, but I'm thinking something like: {{Use Commonwealth English (Kenyan)}}, {{Use Commonwealth English (Singaporean)}}, {{Use Commonwealth English (Namibian)}}. This could (I emphasize, could, not will) alleviate issues with those who oppose the Commonwealth English label for being vague. The template would allow for administrative grouping of close-but-not-quite-the-same languages, would signal to human editors which specific variety of English should be used, and would refrain from inappropriately centering Britain in all of this. (Another possibility would be substituting "local" for "Commonwealth" above: {{Use local English (Kenyan)}}.)
--Pinchme123 (talk) 01:12, 15 December 2025 (UTC)[reply]
Maybe there could be a parameter, so {{Use Commonwealth English |country=Kenya}} outputs "Use Commonwealth English (Kenya)". I agree that no other country follows British English exactly; there is always mixing with native languages and locally created 'quirks'. Kowal2701 (talk) 01:24, 15 December 2025 (UTC)[reply]
These templates don't have any visible output. They add a category, and otherwise they work like a <!-- hidden HTML comment --> telling editors which WP:ENGVAR to use. WhatamIdoing (talk) 02:29, 15 December 2025 (UTC)[reply]
How about, for these specific templates, it is instead {{Use local Commonwealth English}}? Then, for talk pages, it would be {{Commonwealth English|country=Kenya/Singapore/etc.}}, which would provide some generic banner, but which has the changing qualifier or suffix, based on the country parameter? (I'm more trying to provide suggestions for changes, rather than closing off with what has already been done in a limited way in the past.) -- Pinchme123 (talk) 02:43, 15 December 2025 (UTC)[reply]
Thanks for starting this discussion; I was planning on making this proposal myself but didn't get around to it. I agree that Template:Use Commonwealth English should be recreated. The previous deletion made sense since it was redundant with the many specific templates, but that's not the case anymore. And the British English template simply doesn't make sense when we're indicating national ties to a topic. — Vigilant Cosmic Penguin 🐧(talk | contribs) 01:23, 15 December 2025 (UTC)[reply]
The templates that were deleted were deleted at TFD because there were no reliable sources in the corresponding articles showing any written spelling, word choices, or grammar that were (a) different from standard British English and (b) compatible with MOS:COMMONALITY. We go by what reliable sources tell us. It helps to remember that the sole reason for the existence of these templates is to help editors choose the correct spelling and vocabulary in a given article, per WP:ENGVAR. {{Use British English}} explains its purpose concisely: to denote articles that use British English spelling, vocabulary, and grammar. That's it. The templates do not imply ownership, or ties, or hegemony, or anything else aside from spelling, vocabulary, and grammar.
Recreating {{Use Commonwealth English}} would be pointless, because there is too much variance in spelling, vocabulary, and grammar among Commonwealth countries. It would also not address the spurious concerns about ties or hegemony, since the word "Commonwealth" is right there in the proposed template name. Not every article needs a "Use X English" template. If there is no version of English that should be applied to an article, just omit the template entirely. Most articles do just that. – Jonesey95 (talk) 07:00, 15 December 2025 (UTC)[reply]
I see you've run afoul of a common misunderstanding on Wikipedia: we use verifiability, not verified, as the standard for inclusion. From WP:V: "Each fact or claim in an article must be verifiable." Your claim that "there were no reliable sources in the corresponding articles" (emphasis added) shows this: no one checked whether the differences were verifiable; they only checked whether they were already verified.
(WARNING: info dump)
I know one area very well and can say unequivocally: Kenyan English is different from British English, in writing, with respect to vocabulary and grammar at a minimum.
I'll start with an important basic concept: writing and speaking are two modes of the same language. Wikipedians wouldn't be expected to know this or to make the distinction in everyday conversation, but for the purposes of a discussion about language use, this concept is crucial. Writing and speaking are both acts of communication and are not separable within a language (they may vary slightly, but not enough to constitute separate varieties). Here's a paper to help introduce this concept, which shows through experiment how the two are linked: [14]. So, when a source refers to "speaking" a language, unless it explicitly denies that "speech" also encompasses written use, "speaking" refers to both spoken and written use. This is to address those who wish to separate the two acts when discussing what sources say. But below I've tried to include references that specifically discuss written forms.
On the subject of Kenyan English, plenty of articles demonstrate it as a distinct form of English in its own right. As this article demonstrates, the denial of Kenyan English, among other dialects, is really about a power hierarchy in determining who gets to be called a "native speaker" of English (Brits), and who does not: [15]. This article's thrust is that those who deviate from British English are not native speakers because of their deviations, yet if their English dialects (including Kenyan English) were recognized as distinct and legitimate, they would be correctly called native speakers. This article is full of sources pointing out the problems of holding up British and American Englishes ("inner-circle varieties") as "correct". As for identifying Kenyan English as distinct dialect, here's a paper that explores the actual differences from British and American Englishes in Kenyan English, and especially how written Kenyan English deviates from "standard" (i.e. British): [16]. An older book The Other Tongue (1982) has an entire chapter on Kenyan English (titled as such and written by Jane E. Zuengler), which uses outdated terminology ("native", "non-native", "nativization") to describe how Kenyan English varies from British English specifically, which though dated in its description of Kenyan English specifically as "non-native", nonetheless describes it as a variety of English in its own right. This book chapter has an entire section on published Kenyan authors and the specifics in their writing that are Kenyan English. The section just before it ("register") is about published newspaper letters to the editor, written in Kenyan English and not British (or any other) English. If you can get a copy of the book (I'm referencing the 1983 edition), these sections are specifically on pages 117-118.
This is just a quick few paragraphs with sources to show that, yes, Kenyan English is identified by scholars as actually distinct from others.
Second, asserting "The templates do not imply ownership, or ties, or hegemony, or anything else aside from spelling, vocabulary, and grammar" is a rather shallow analysis of how language works, because yes, putting a "British English" banner on an article about one of Britain's former colonized countries does in fact perpetuate the notion that Britain still has some claim of ownership, ties, or hegemony over it. And that is especially true with respect to the article about a variety of English spoken in a non-British country. And by the way, attempting to delineate between "formal" and "informal" writing and describing the former as "British" and the latter as not is perpetuating ownership and hegemony.
My point here is, I'm willing to bet all of these dialects under discussion have plenty of sourcing that mark them as distinct from "British English" (or "American English"). Following the sources, they should not be lumped together under the British or American English templates. I do however see utility in a template that does lump them together, as Commonwealth English is recognized by scholars as a useful concept (after all, it's a bolded term at its redirected page and is therefore rightfully also a subject of that page) and is specifically used to distinguish between non-British Englishes and British English. Which is why I proposed above a template that would lump them together (not any that are deviations from American English obviously, that would require a different lumping template), and then banners for human editors that would display the location-specific variety on Talk.
--Pinchme123 (talk) 17:45, 15 December 2025 (UTC)[reply]
The lovely, sourced wall of text above creates a straw man and then sets it alight. The resulting fire is delightful to the eye, but I did not claim that Kenyan English, for example, does not exist. I claimed, specifically, that it has not been shown to be both (a) different from standard British English and (b) compatible with MOS:COMMONALITY. And I stand by the assertion about ownership, which is backed by the documentation in the templates. The above author appears to be misreading the purpose of the templates. First, the template does not put "a 'British English' banner on an article"; it adds a hidden tracking category that explains which type of spelling and word choices editors should use when editing the article. That's all. And second, it is an interpretation, not a fact, that such templates "perpetuate the notion that Britain still has some claim of ownership" over any topic. (Also see MOS:TIES, which provides clear guidance on which form of English to use for some topics.) Articles are not owned in this way at Wikipedia. The templates are about spelling, vocabulary, and grammar guidance only. – Jonesey95 (talk) 19:48, 15 December 2025 (UTC)[reply]
Wow. Just wow. I will note for others here that there's nothing "straw man" about my comment above. Kenyan English is different from British English – in grammar, in vocabulary, in spelling relevant to certain local concepts/contexts, both in spoken and written form (which are also inextricable), and regardless of so-called formality level (the book chapter I referenced even states at one point that Kenyan English appears more formal in writing than British English) – and all these sources show as much in their own ways. Yet Jonesey95 waves them away as a "straw man", reasserts that somehow, Kenyan English is not "different from standard British English" in the face of these relevant expert sources, and then bizarrely points to MOS as some kind of trump card for what template label should be applied in the asserted absence of difference. (This line of reasoning also ignores that MOS:COMMONALITY is about word choice and universality, not about which articles have which dialect banner, with examples of universal word choices to override specific American, British, or Indian English variations.) MOS:TIES is clear: "An article on a topic that has strong ties to a particular English-speaking nation should use the standard (formal, not colloquial) English of that nation." The English dialects under discussion are formal, not colloquial. Their articles should be written using them, and their templates and Talk page banners should reflect this.
Jonesey95, you may refer to me as Pinchme123, not "the above author" or "author". I am a Wikipedian and an editor exactly like you, not an "author", which implies I am somehow outside of this community. Your own word choice here demonstrates the power of language choices and the subtle ways one can impart influence (whether intentionally or not) through those choices. Even if you disagree with this argument, I am asking you directly: refer to me by my username or as an editor or Wikipedian.
--Pinchme123 (talk) 22:09, 15 December 2025 (UTC)[reply]
I don't think the understanding of COMMONALITY is quite that clearcut, as evidenced by the discussions surrounding gaol. I would also be hesitant to completely dismiss the subjective feelings of editors over the naming conventions of these templates; keeping editors happy when it costs little to do so, or sometimes even when it costs quite a bit, is a worthy goal in itself. Is there a difference in formal encyclopedic language between en-UK and en-IE? Not really, or at the very least no more, and often less, than the distinctions between it and many other localized varieties. Is it nonetheless worthwhile to maintain a separate template {{Use Hiberno-English}}? Unquestionably.
As another example, even if there were no difference whatsoever between en-IN and en-PK (though there are in fact some differences), merging one into the other is just asking for trouble for no benefit. Scripts and editors can easily be instructed to treat the two the same, and needless acrimony is avoided. Far and away a net positive, though admitting I've long been a staunch defender of CITEVAR, so I probably take this even further than most would.
At the risk of deviating from the topic, I also worry we are ending up in a situation where SYSTEMIC issues are calling the shots. If there were more editors from countries like Kenya, it is likely the templates corresponding to the localized variants in those places would not have been deleted. In the end, using, say, Mashujaa Day in lieu of Heroes' Day is not fundamentally different from using Taoiseach in lieu of Prime Minister.
Anyway, I don't see much of a problem with a template that suggests to editors that if they see something unusual they should pause and research before summarily changing it, though a better solution may be to just have a template like {{Use Commonwealth spelling}}, in the way that {{Use Oxford spelling}} already exists. Or something along those lines that is less peremptory on vocabulary, leaving it open to reasoned talk page discussion. ~2025-41540-19 (talk) 02:56, 19 December 2025 (UTC)[reply]
I agree with you here; the aim is to facilitate localization, including in situations where standard terms are not applicable. "Use British English" would imply to me that we're localizing it to use British terms, which is not the case for an article about Kenya. I'll use my previous example of "motorway"—a literal use of the phrase "use British English" would tell us to use this word even when it's not the appropriate word for the context. I also agree with your point on focusing on the spelling. Although others are correct that "Commonwealth English" is not distinct in itself, I believe a template like "Use British English spelling" would be appropriate. — Vigilant Cosmic Penguin 🐧(talk | contribs) 04:23, 19 December 2025 (UTC)[reply]
I believe "Use Commonwealth English" would still be useful as a maintenance template. I'm imagining that this template would use the same spellings as British English—as, with the exception of Canada, this is common among Commonwealth countries—so that a bot could know to change color to colour. Without differences in spelling, there's no need for more specific templates. But there would still be differences in vocabulary that mean "Use British English" is not precise. Certain words may be standard in British English but not worldwide; I believe motorway is a example, and I would speculate that some countries might use potato chips in the American sense. This would result in situations where, even if there is no "Dictionary of X English", editors can use their discretion to choose which word is appropriate based on actual usage. — Vigilant Cosmic Penguin 🐧(talk | contribs) 19:54, 15 December 2025 (UTC)[reply]
The templates exist to provide guidance to editors. If an editor finds an article that uses "truck" instead of "lorry", or "eggplant" instead of "aubergine", and the article is written in New Zealand English or Australian English, that usage is correct. Tracking that article with {{Use New Zealand English}} or {{Use Australian English}} provides helpful guidance about whether to change those usages (hint: don't), but {{Use Commonwealth English}} would not, because there is no actual standard Commonwealth English. The template was deleted for a good reason. – Jonesey95 (talk) 20:04, 15 December 2025 (UTC)[reply]
Viewed from another angle, these templates exist to indicate whether or not an article uses US English. Do we need to pair {{Use American English}} with {{Don't use American English}} or {{Use whatever English is locally appropriate, not American}}? NebY (talk) 13:17, 15 December 2025 (UTC)[reply]
Largely per what user:Jonesey95 said, if both {{Use British English}} and {{Use Commonwealth English}} are going to use the same vocabulary and grammar, then why bother with two templates? They don't denote who rules over what, just the phrasing that fits best there.
I think "Commonwealth English" also brings up other issues, like why does Commonwealth English just mean "British English" in practice when the likes of India, Australia, Pakistan, South Africa and Canada are also Commonwealth countries and have their own different forms of English that are actively used on Wikipedia. DervotNum4 (talk) 17:49, 15 December 2025 (UTC)[reply]
I will add that I do see that there are some merits to the Commonwealth template existing. DervotNum4 (talk) 18:02, 15 December 2025 (UTC)[reply]
I should draw the attention of this thread to this RfC that decided that commas at least do not come under the scope of Engvar. The Manual of Style also dictates the style of quotation marks and such like, regardless of the variety of English. And then there's the commonality policy. I'd wager that any attempts to vary grammar in a way that isn't already laid out in the manual of style using Engvar as the justification would meet the same community opposition at RfC. So I'm not sure there's leeway for any of these templates to dictate much in the way of grammar.
I saw somewhere (although I can't recall where as this topic has sprawled all over the place) that two editors felt 'Use British English spelling' would take some of the sting out. To my mind, it's unnecessary verbiage, but it is at least clear and I could live with it. I wonder if that would command much support amongst those opposed to 'Use British English'? It doesn't suggest a variety as such. Only spelling. Perhaps all such templates should be so called. 'Use American English spelling' etc. Dgp4004 (talk) 19:14, 15 December 2025 (UTC)[reply]
The above suggestion has been made in the past, but it won't work, because MOS:ENGVAR is more than spelling. Constructions like "From 1970, Smith has ..." and "Smith was in hospital ..." and "England are playing Germany tomorrow ..." are valid in British English but are errors in US English. – Jonesey95 (talk) 19:54, 15 December 2025 (UTC)[reply]
If it is about "more than just spelling", then why did we ditch {{Use Kenyan English}} in the first place? You can't have it both ways. It was stated several times in the TfD debate that the existence of Kenyan English as a concept didn't affect the decision to delete, because the template was purely about spelling and not about other word usage. That being the case, moving the {{Use British English}} template to {{Use British English spelling}} is exactly what we should do, and it sounds like a proposal that could possibly find consensus, if Dgp4004 is also on board with it. Failing that, {{Use Commonwealth English}} would certainly be fine from my end; it largely summarises the position without irritating people who don't like all the individual countries having their own templates.  — Amakuru (talk) 00:19, 16 December 2025 (UTC)[reply]
The way I see it, the question is about what a maintenance template is really for. A template like "Use British English" is about two things: spelling and vocabulary. It also does two things: it allows bots (or users with automated tools) to correct BrEng spelling, and it lists the article as an article that should be using BrEng vocabulary. I'm essentially proposing "Use Commonwealth English" to do the first thing but not the second, in cases where there is no specific guidance for the vocabulary. — Vigilant Cosmic Penguin 🐧(talk | contribs) 00:48, 16 December 2025 (UTC)[reply]
Spelling is indeed where the nomination started, but as I said above, MOS:ENGVAR is about more than spelling, and I took that into account when posting at the TFD for Use Kenyan English. I encourage you to read my comprehensive !vote at the TFD, which you can find by searching for "family of templates is intended to". – Jonesey95 (talk) 00:56, 16 December 2025 (UTC)[reply]
  • Comment – People keep going on about 'Commonwealth English' – as convenient as it would be if such a thing existed, it does not. Canadians and Australians have their own spelling schemes. Any such template would be misleading, and original research. Just as misleading, in fact, as templates like the now deleted 'Use Kenyan English', which implied the existence of an independent written standard of Kenyan English, when one does not exist. What we can do is revive {{EngvarB}} – and maybe rename it '{{Use British spelling}}', i.e. similar to the existing {{Use Oxford spelling}}. I think what is confusing many editors is the existence of articles like Kenyan English. This article is about spoken dialect – there is a Kenyan English dialect, but there is no independent written standard of Kenyan English. Wikipedia is not written in dialect – we write in standard written English, and we have a policy of WP:COMMONALITY. Yours, &c. RGloucester 09:41, 18 December 2025 (UTC)[reply]
    There is no conceptual difference between Kenyan English and Australian English / Canadian English / American English / British English / Indian English, except that you've decided to label the first a "dialect" and therefore declare it off-limits, for reasons that have no basis in policy. The reality is that English as a whole has very few differences across regions, far fewer than languages such as German, such that it doesn't really make sense to label any of the individual varieties as "dialects". The reality is that each of the regions where English is spoken has words and usages specific to that region, some of which might be appropriate to use in encyclopedic text pertaining to that region, and others not. Kenyan English introduces words like ugali and matatu, which are widely used and understood in those regions and acceptable for use in an article on a Kenyan topic, as long as they are linked so that English speakers from elsewhere can check what they mean. Just the same as an Australian article might talk about the bush. Overall though, I see little reason why we need to flag such things explicitly - many articles already use WP:COMMONALITY ahead of local words, and WP:TIES takes care of the rest. Hence why it's best to move all of the templates so that they clearly denote spelling to be used in the article only, and not wider terminology. Either way, we shouldn't be reserving such a change only for those varieties of English that don't come from the "western world".  — Amakuru (talk) 13:38, 18 December 2025 (UTC)[reply]
There is a conceptual difference – I can consult reliable sources and find a clear guide to the British, or Australian, or Canadian, or American written standards of English. British writing is codified by Oxford, Australian by Macquarie, Canadian by Gage, and American by Webster. No such sources exist for an independent written standard of 'Kenyan English'. You repeatedly say that we must have 'Kenyan English' templates to avoid creating an impression of impropriety, but this is contrary to the spirit of the encyclopaedia, which is based on WP:V. No matter how much we may wish that 'Commonwealth' or 'Kenyan' English existed, so as to make our own internal procedures vis-à-vis WP:TIES simpler, the fact remains that they do not. Finally, commentary about localised diction is a red herring – whether the article is written in British or 'Kenyan' or Australian English, no one is going to try to remove or rewrite proper names like 'ugali', any more than one would remove mentions of 'shingeki' from articles about Japanese theatre. Yours, &c. RGloucester 22:04, 18 December 2025 (UTC)[reply]

Removing the GS for professional wrestling

[edit]

After WP:GS/PW was authorized in 2018, there have been a total of two user enforcement actions, including zero in the last three years. There has been a steady trickle of page protections, but those can occur regardless of whether there is a GS or not. There have been 13 alerts since 2020, and I don't think 13 editors over four years is enough to support a full-fledged CCT designation. Moreover, most of those alerts seem to be for disruptive SPAs who quite frankly need an indef rather than a topic ban; TonySt, who placed 5 of those 13 alerts, agrees that it should be removed.

I realize that this is more active than most CTs which get repealed—normally we wait for disruption to be all but gone, such as zero actions in the last few years. I am sort of testing the waters here. I don't think that pure page protection and a dozen alerts in five years is indicative of a problem that we need the song and dance of an official CCT; do you agree? Best, HouseBlaster (talk • he/they) 23:52, 15 December 2025 (UTC)[reply]

There have been a couple of fairly recent ANIs in the topic area; [1], [2], and [3], from a search. The indef in [2] appears to have been partly informed by the knowledge that professional wrestling is under general sanctions. 45dogs (they/them) (talk page) (contributions) 22:28, 16 December 2025 (UTC)[reply]

New template for ref sections

[edit]

I don't know much about template creation, or whether this is a suitable question here, but there should be a big citation template for those wikitables that use a reference row. Misterpotatoman (talk) 18:41, 16 December 2025 (UTC)[reply]

Please could you explain the problem and maybe add links to an example — GhostInTheMachine talk to me 08:52, 17 December 2025 (UTC)[reply]
Something like that.
An example of what it could look like.
Misterpotatoman (talk) 09:19, 17 December 2025 (UTC)[reply]
It is fairly standard to see tables with a column of references. What extra would you add? BTW — the {{refh}} template is often used for the heading — GhostInTheMachine talk to me 09:27, 17 December 2025 (UTC)[reply]
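For anyone unfamiliar with the pattern being discussed, here is a minimal sketch of a wikitable with a references column, using {{refh}} for the column heading as mentioned above. The titles, URLs, and dates are placeholders, not real sources:
<syntaxhighlight lang="wikitext">
<!-- A table with a dedicated references column; the <ref> footnotes
     render in the article's References section via {{reflist}}. -->
{| class="wikitable"
! Year !! Title !! {{refh}}
|-
| 2021 || Example title || <ref>{{cite web |url=https://example.com |title=Example source |access-date=17 December 2025}}</ref>
|-
| 2023 || Another title || <ref>{{cite web |url=https://example.org |title=Another source |access-date=17 December 2025}}</ref>
|}
</syntaxhighlight>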
My vision is a template called "big ref": it's just a ref, but big, and you could change its size to fit the box it's in. The first parameter is the website it links to, the second is the display text (it would be weird if a really big citation contained a single "a"), and the third and fourth are the template's dimensions. So when writing it would look like {{Bigref|https://m.youtube.com|djdjdjjdjdjd|[number]|[number]}} and would render as [djdjdjjdjdjd]. I have no idea how Wikipedia's screen-space measurement system works. Misterpotatoman (talk) 10:22, 17 December 2025 (UTC)[reply]
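To make the proposal concrete, here is a minimal sketch of what such a template's source might look like, assuming positional parameters for URL, display text, width, and height. {{Bigref}} does not exist, and the sizing approach here is a guess:
<syntaxhighlight lang="wikitext">
<!-- Hypothetical Template:Bigref - renders an external link in a resizable, enlarged box.
     Usage: {{Bigref|url|display text|width|height}} (width and height in pixels) -->
<span style="display:inline-block; width:{{{3|200}}}px; height:{{{4|40}}}px; font-size:150%; overflow:hidden;">&#91;[{{{1}}} {{{2}}}]&#93;</span>
</syntaxhighlight>
Called as {{Bigref|https://m.youtube.com|djdjdjjdjdjd|200|40}}, this would display [djdjdjjdjdjd] as an oversized link, though as the next reply notes, enlarging reference text in this way would be inappropriate in articles.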
A reference is just a link to the References section at the bottom of the article. It just displays as a small piece of text with [brackets]. It would be inappropriate to make the text any larger. We also routinely "fix" bare links embedded in the text of an article – either by removing them or converting them into proper references — GhostInTheMachine talk to me 10:35, 17 December 2025 (UTC)[reply]
I have no idea what this is supposed to mean. Can you explain more simply? Misterpotatoman (talk) 10:57, 17 December 2025 (UTC)[reply]

User warning templates for sandbox misuse

As the title says: I think it might be useful to have user warning templates for sandbox misuse (Wikipedia:BADSAND). I know that such misuse isn't all that common, but warning templates already exist for several fairly rare types of disruptive editing, so it seems useful enough to save people time. Something like:

Information icon Hello, I’m Example. I noticed that one of your recent edits to the sandbox appeared to be libelous, offensive, copyrighted, or disruptive to the sandbox’s functionality. While the sandbox is meant for test purposes and has few restrictions, adding that kind of content is still considered unconstructive. If you want to know what is and isn’t allowed in the sandbox, check out Wikipedia:Misuse of the sandbox. If you have any questions, you can ask for assistance at the Teahouse or the Help desk. Thanks.

for level one. The part about “disruptive to functionality” is specifically for people who delete the header. Anyway, thoughts? FloblinTheGoblin (talk) 23:18, 16 December 2025 (UTC)[reply]
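For concreteness, a minimal sketch of how such a level-one notice could be implemented, following the usual layout of the uw-* series; the name Uw-badsand1 is hypothetical, and (as noted in the reply below) Template:Uw-sandbox1 already covers this ground:
<syntaxhighlight lang="wikitext">
<!-- Hypothetical Template:Uw-badsand1 - level-one notice for sandbox misuse.
     {{{1|}}} is optional additional text supplied by the editor placing the notice. -->
[[File:Information.svg|25px|alt=Information icon]] Hello, I'm [[User:{{<includeonly>subst:</includeonly>REVISIONUSER}}|{{<includeonly>subst:</includeonly>REVISIONUSER}}]]. I noticed that one of your recent edits to the [[Wikipedia:Sandbox|sandbox]] appeared to be libelous, offensive, copyrighted, or disruptive to the sandbox's functionality. While the sandbox is meant for test purposes and has few restrictions, adding that kind of content is still considered unconstructive. If you want to know what is and isn't allowed in the sandbox, check out [[Wikipedia:Misuse of the sandbox]]. If you have any questions, you can ask for assistance at the [[Wikipedia:Teahouse|Teahouse]] or the [[Wikipedia:Help desk|Help desk]]. Thanks.{{{1|}}}
</syntaxhighlight>
The REVISIONUSER construct, substituted when the notice is placed, fills in the name of whoever issues the warning, which addresses the point about replacing "Example" raised below.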

Template:Uw-sandbox1 and related ones do exist :) 45dogs (they/them) (talk page) (contributions) 23:56, 16 December 2025 (UTC)[reply]
In that case, it needs to be added to Template:Multi notice links. FloblinTheGoblin (talk) 01:15, 17 December 2025 (UTC)[reply]
Sure, it looks good. Just replace Example with [moderator or admin name, maybe]. ~2025-32362-48 (talk) 13:59, 17 December 2025 (UTC)[reply]

Edit size guidelines

I think we should mention in a guideline that editors should typically not change more than one section per edit (with exceptions).

This is a problem I've had for a while, and I think adding this as advice could help.

It's based on this conversation. What do you think?

Wikieditor662 (talk) 03:11, 18 December 2025 (UTC)[reply]

The issue with that is that the qualitative 'size' of an edit does not map neatly onto sections. For example, fixing a few typos throughout the article may affect multiple sections, but would not be a 'large' edit. It's always going to be situational. CMD (talk) 03:32, 18 December 2025 (UTC)[reply]
I was thinking your example would go under the exceptions.
But perhaps, instead of having one main rule with exceptions, it would be better to show what types of edits should be made in different scenarios? Wikieditor662 (talk) 03:43, 18 December 2025 (UTC)[reply]
I feel as though this may be a bit of rules overreach unless there's a real sense that editors tend to make overly large edits. In my editing experience such things are usually done by newer editors or unregistered accounts. DonIago (talk) 04:08, 18 December 2025 (UTC)[reply]
Well, we could certainly be light with the punishments or sanctions (if any); it's meant more for advice. Wikieditor662 (talk) 04:38, 18 December 2025 (UTC)[reply]
Personally, I think the most convenient approach varies based on the specific situation, and I think there are too many variables to provide a simple list of scenarios. In general, it's helpful for editors to make edits that are easily reviewable one at a time by someone else, but how to best achieve that varies. Sometimes there isn't really a good way to do this, and sometimes editors aren't perfect and make extra edits, or larger edits than others might prefer to review. isaacl (talk) 05:42, 18 December 2025 (UTC)[reply]
 Comment: Wikipedia:Avoid instruction creep seems relevant here, "having too many rules may drive away editors." GeogSage (⚔Chat?⚔) 09:27, 18 December 2025 (UTC)[reply]
I think edits, like commits in software development when using source control, should as much as possible be conceptually one complete idea. Being able to describe the why of an edit in a couple dozen words at most, without mixing too many different things together, matters more than the exact size or placement of the changes. Skynxnex (talk) 15:27, 18 December 2025 (UTC)[reply]
Yup. If edits are likely to be uncontroversial (minor grammar tweaks, typo fixes etc) there really isn't any need to spread them over multiple edits. Anything that someone might object to needs to be done separately, with a clear edit summary. And if you don't want your minor tweaks reverted, do them first. This isn't something you can make hard and fast rules about though, and trying to come up with anything that won't cause more trouble than it is worth looks to be a futile exercise. AndyTheGrump (talk) 15:55, 18 December 2025 (UTC)[reply]
Grammar and typo fixes can be an exception.
Along with that, a single type of 'fact' that needs to be changed/corrected in several sections of an article can also be excused, e.g. the correct range of a missile being updated in both the Development and Specifications sections. Cdr. Erwin Smith (talk) 20:14, 18 December 2025 (UTC)[reply]
So that's no structural changes to articles any more? NebY (talk) 22:17, 18 December 2025 (UTC)[reply]