I don’t know if this is an acceptable format for a submission here, but here it goes anyway:

The Wikimedia Foundation has been developing an LLM-based feature that would produce simplified Wikipedia article summaries, as described here: https://www.mediawiki.org/wiki/Reading/Web/Content_Discovery_Experiments/Simple_Article_Summaries

We would like to provide article summaries, which would simplify the content of the articles. This will make content more readable and accessible, and thus easier to discover and learn from. This part of the project focuses only on displaying the summaries. A future experiment will study ways of editing and adjusting this content.

Currently, much of the encyclopedic quality content is long-form and thus difficult to parse quickly. In addition, it is written at a reading level much higher than that of the average adult. Projects that simplify content, such as Simple English Wikipedia or Basque Txikipedia, are designed to address some of these issues. They do this by having editors manually create simpler versions of articles. However, these projects have so far had very limited success - they are only available in a few languages and have been difficult to scale. In addition, they ask editors to rewrite content that they have already written. This can feel very repetitive.

In our previous research (Content Simplification), we have identified two needs:

  • The need for readers to quickly get an overview of a given article or page
  • The need for this overview to be written in language the reader can understand

Etc., you should check the full text yourself. There’s a brief video showing how it might look: https://www.youtube.com/watch?v=DC8JB7q7SZc

This hasn’t been met with warm reactions: comments on the respective talk page have questioned the purposefulness of the tool (shouldn’t the introductory paragraphs already do the same job?), and some other complaints have been raised as well:

Taking a quote from the page for the usability study:

“Most readers in the US can comfortably read at a grade 5 level,[CN] yet most Wikipedia articles are written in language that requires a grade 9 or higher reading level.”

Also stated on the same page, the study only had 8 participants, most of which did not speak English as their first language. AI skepticism was low among them, with one even mentioning they ‘use AI for everything’. I sincerely doubt this is a representative sample and the fact this project is still going while being based on such shoddy data is shocking to me. Especially considering that the current Qualtrics survey seems to be more about how to best implement such a feature as opposed to the question of whether or not it should be implemented in the first place. I don’t think AI-generated content has a place on Wikipedia. The Morrison Man (talk) 23:19, 3 June 2025 (UTC)

The survey the user mentions is this one: https://wikimedia.qualtrics.com/jfe/form/SV_1XiNLmcNJxPeMqq and, true enough, it pretty much takes for granted that the summaries will be added: there’s no judgment of their actual quality, only questions about how they should be presented. I filled it out and couldn’t even find a space to say that, e.g., the summary they show is written almost insultingly, as if it were meant for particularly dumb children, and I couldn’t even tell whether it is accurate, because the video just scrolls around.

Very extensive discussion is going on at the Village Pump (en.wiki).

The comments are also overwhelmingly negative, some of them pointing out that the summary doesn’t summarise the article properly (“Perhaps the AI is hallucinating, or perhaps it’s drawing from other sources like any widespread llm. What it definitely doesn’t seem to be doing is taking existing article text and simplifying it.” - user CMD). A few comments acknowledge potential benefits of the summaries, though with a significantly different approach to using them:

I’m glad that WMF is thinking about a solution of a key problem on Wikipedia: most of our technical articles are way too difficult. My experience with AI summaries on Wikiwand is that it is useful, but too often produces misinformation not present in the article it “summarises”. Any information shown to readers should be greenlit by editors in advance, for each individual article. Maybe we can use it as inspiration for writing articles appropriate for our broad audience. —Femke 🐦 (talk) 16:30, 3 June 2025 (UTC)

One of the reasons many prefer chatGPT to Wikipedia is that too large a share of our technical articles are way way too difficult for the intended audience. And we need those readers, so they can become future editors. Ideally, we would fix this ourselves, but my impression is that we usually make articles more difficult, not easier, when they go through GAN and FAC. As a second-best solution, we might try this as long as we have good safeguards in place. —Femke 🐦 (talk) 18:32, 3 June 2025 (UTC)

Finally, some comments problematise the whole situation, with the WMF working behind the actual wikis’ backs:

This is a prime reason I tried to formulate my statement on WP:VPWMF#Statement proposed by berchanhimez requesting that we be informed “early and often” of new developments. We shouldn’t be finding out about this a week or two before a test, and we should have the opportunity to inform the WMF if we would approve such a test before they put their effort into making one happen. I think this is a clear example of needing to make a statement like that to the WMF that we do not approve of things being developed in virtual secret (having to go to Meta or MediaWikiWiki to find out about them) and we want to be informed sooner rather than later. I invite anyone who shares concerns over the timeline of this to review my (and others’) statements there and contribute to them if they feel so inclined. I know the wording of mine is quite long and probably less than ideal - I have no problem if others make edits to the wording or flow of it to improve it.

Oh, and to be blunt, I do not support testing this publicly without significantly more editor input from the local wikis involved - whether that’s an opt-in logged-in test for people who want it, or what. Regards, -bɜ:ʳkənhɪmez | me | talk to me! 22:55, 3 June 2025 (UTC)

Again, I recommend reading the whole discussion yourself.

EDIT: WMF has announced they’re putting this on hold after the negative reaction from the editors’ community. (“we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together”)

  • warm

    If they add AI they better not ask me for any money ever again.

    • @6nk06@sh.itjust.works

      Or moderators. Why would they need those people when the AI can fix everything for free and even improve articles?

      • @Monument@lemmy.sdf.org

        Right! I can’t wait to hear about all the new historical events!

        I wonder if anyone witnessed the burning of the Library of Alexandria and felt a similar sense of despair for the future of knowledge.

        • @arrow74@lemm.ee

          You can download a copy of Wikipedia in full today before they turn it to shit.

          Unlike the people in Alexandria, you can spend less than $20 and 20 minutes to download the whole thing and preserve it yourself.

      • warm

        Kbin.earth is on mbin, I think kbin is dead.

          • warm

            Mbin is a fork and continuation of /kbin, but community-focused.

            Kbin was destined to fail without opening up to community collaboration. I greatly preferred it over lemmy. So I will stick with Mbin now and Kbin.earth has been a small but nice Mbin instance.

  • @Cheradenine@sh.itjust.works

    Wikipedia articles already have lead-in summaries.

    Fuck right off with this

    “A future experiment will study ways of editing and adjusting this content.”

    • @MDCCCLV@lemmy.ca

      A lot of them, for the small articles and stubs, are written very technically and don’t provide an explanation for complex subjects if you aren’t already familiar with them. Then you have to read 4 subjects down just to figure out the jargon for what they’re saying.

      • @Takapapatapaka@lemmy.world

        I agree, having experienced this especially on mathematics pages. But on the other hand, in my experience the whole article is very technical in those cases: I’m not sure making a summary would help, and I’m not sure you can provide a summary that is both correct and easily understandable in those cases.

      • @catloaf@lemm.ee

        Math articles are the worst. They always jump right into calculus and stuff. I usually have to hope there’s a simple English article for those!

        • @AbouBenAdhem@lemmy.world

          This is one thing I can see an actual use case for (as an external tool, not as part of WP): Create a summary, not of the article itself, but of the prerequisite background knowledge. And tailored to the reader’s existing knowledge—like, “what do I need to know to understand this article assuming I already know X but not Y or Z”.

      • @Cheradenine@sh.itjust.works

        I’d agree with that, both are problematic.

        A lot of stubs should be deleted until they are expanded, they’re often more confusing than knowing nothing at all. I don’t think an LLM summary will help here though.

        Reading a few articles deep is not only a pain in the ass, but is going to dissuade those who won’t do it. There’s also the issue that when you do wade in it might link to something that is poorly cited and confusing. Again, I think an LLM is going to make things worse here.

      • @BrianTheeBiscuiteer@lemmy.world

        Maybe it’s a result of Wikipedia trying to be more of an “online encyclopedia” vs a digital information hub or learning resource. I don’t think it’s a problem on its own but I do think there should be a simplified version of every article.

  • @doctortofu@reddthat.com

    Et tu, Wikipedia?

    My god, why does every damn piece of text suddenly need to be summarized by AI? It’s completely insane to me. I want to read articles, not their summaries in 3 bullet points. I want to read books, not cliff notes, I want to read what people write to me in their emails instead of AI slop. Not everything needs to be a fucking summary!

    It seriously feels like the whole damn world is going crazy, which means it’s probably me… :(

    • Dr. Moose

      This ignorance is my biggest pet peeve today. Wikipedia is not targeting you with this but expanding accessibility to people who don’t have the means to digest a complex subject on their lunch break.

      TL;DR: check your privilege

    • Maeve

      It’s not you.

      “It is no measure of health to be well-adjusted to a profoundly sick society.” Krishnamurti

      • @liv@lemmy.nz

        For those of us who do skip the AI summaries it’s the equivalent of adding an extra click to everything.

        I would support optional AI, but having to physically scroll past random LLM nonsense all the time feels like the internet is being infested by something as annoying and useless as ads, and we don’t even have a blocker for it.

        • @FourWaveforms@lemm.ee

          I think it would be best if that’s a user setting, like dark mode. It would obviously be a popular setting to adjust. If they don’t do that, there will doubtless be Greasemonkey and other scripts to hide it.

  • @RaoulDook@lemmy.world

    If people use AI to summarize passages of written words to be simpler for those with poor reading skills to be able to more easily comprehend the words, then how are those readers going to improve their poor reading skills?

    Dumbing things down with AI isn’t going to make people smarter I bet. This seems like accelerating into Idiocracy

    • @vermaterc@lemmy.ml

      Wikipedia is not made to teach people how to read; it is meant to share knowledge. For me, they could even make a Wikipedia version with hieroglyphics if that would make understanding the content easier.

      • @RaoulDook@lemmy.world

        Novels are also not made to teach people how to read, but reading them does help the reader practice their reading skills. Besides that point, Wikipedia is not hard to understand in the first place.

        • A Wild Mimic appears!

          I am not a native speaker, but my knowledge of the English language is better than that of most people I know, and I have no issues reading scientific papers and similar complex documents. Some Wikipedia article intros, especially in mathematics, are not comprehensible to anyone but mathematicians, and therefore fail the objective of giving the average person an overview of the material.

          It’s fine for me if I am not able to grasp the details of the article because of missing prerequisite knowledge (and I know how to work with integrals and complex numbers!), but the intro should at least not leave me wondering what the article is about.

    • Dr. Moose

      Do you give toddlers post-grad books to read too? This is such an idiotic slippery slope fallacy that it just reeks of white people privilege.

  • @markovs_gun@lemmy.world

    Wikipedia articles are already quite simplified down overviews for most topics. I really don’t like the direction of the world where people are reading summaries of summaries and mistaking that for knowledge. The only time I have ever found AI summaries useful is for complex legal documents and low-importance articles where it is clear the author’s main goal was SEO rather than concise and clear information transfer.

  • @ace_garp@lemmy.world

    These LLM-page-summaries need to be contained and linked, completely separately, in something like llm.wikipedia.org or ai.wikipedia.org.

    If, at some future point, a few LLM hallucinations were uncovered in these summaries, it would cast doubt on the accuracy of all page content in the project.

    Keep the generated-summaries visibly distinct from user created content.

    • @AbouBenAdhem@lemmy.world

      IIRC, they weren’t trying to stop them—they were trying to get the scrapers to pull the content in a more efficient format that would reduce the overhead on their web servers.

      • Lv_InSaNe_vL

        You can literally just download all of Wikipedia in one go from one URL. They would rather people just do that instead of crawling their entire website because that puts a huge load on their servers.
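
        For example, a minimal sketch in Python (the URL below is the standard dumps.wikimedia.org location for the full English Wikipedia article dump; adjust the wiki name and date as needed, and expect a file of tens of GB):

          import urllib.request

          # Latest full English Wikipedia article dump (bz2-compressed XML).
          # Swap "enwiki" for another wiki, or pin a dated dump instead of
          # "latest" if you want reproducibility.
          URL = ("https://dumps.wikimedia.org/enwiki/latest/"
                 "enwiki-latest-pages-articles.xml.bz2")

          urllib.request.urlretrieve(URL, "enwiki-latest-pages-articles.xml.bz2")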

        • palordrolap

          Ah, but the clueless code monkeys, script kiddies and C-levels who are responsible for writing the AI companies’ processing code only know how to scrape from someone else’s website. They can’t even ask their (respective) company’s AI for help because it hasn’t been trained yet. (Not that Wikipedia’s content will necessarily help).

          They’re not even capable of taking the ZIP file and hosting the contents on localhost to allow the scraper code they got working to operate on something it understands.

          So hammer Wikipedia they must, because it’s the limit of their competence.

  • @vermaterc@lemmy.ml

    I’m OK with auto-generated content, but only if it is clearly separated from human-generated content, can be disabled at any time, and writing main articles with AI is forbidden.

  • @Matriks404@lemmy.world

    TIL: Wikipedia uses complex language.

    It might just be me, but I find articles written on Wikipedia much easier to read than the shit people sometimes write or say to me. Sometimes that is incomprehensible garbage, or doesn’t make much sense.

    • @barsoap@lemm.ee

      It really depends on what you’re looking at. The history section of some random town? Absolutely bog-standard prose. I’m probably missing lots of implications as I’m no historian, but at least I understand what’s going on. The article on asymmetric relations? Good luck getting your mathematical literacy from Wikipedia: all the maths articles require you to already have it, and that’s one of the easier ones. It’s a fucking trivial concept, it has a glaringly obvious example… which is mentioned, even as the first example, but by that time most people’s eyes have glazed over. “Asymmetric relations are a generalisation of the idea that if a < b, then it is necessarily false that a > b: If it is true that Bob is taller than Tom, then it is false that Tom is taller than Bob.” Put that in the header.

      Or let’s take Big O notation. Short overview, formal definition, examples… not practical, but theoretical, then infinitesimal asymptotics, which is deep into the weeds. You know what that article actually needs? After the short overview, have an intuitive/hand-wavy definition, then two well-explained “find an entry in a telephone book” examples, with two different algorithms: O(n) (naive) and O(log n) (divide and conquer), to demonstrate the kind of differences the notation is supposed to highlight. Then, with the basics out of the way, one to demonstrate that the notation doesn’t care about multiplicative factors, i.e. what it (deliberately) sweeps under the rug. Short blurb about why that’s warranted in practice. Then, directly afterwards, the “orders of common functions” table, but make sure to have examples that people actually might be acquainted with. Then talk about amortisation, and how you don’t always use hash tables “because they’re O(1) and trees are not”. Then get into the formal stuff, that is, the current article.
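
      To make that concrete, here is a minimal sketch (illustrative Python, not from the article) of the two telephone-book lookups, O(n) versus O(log n):

        # "Find an entry in a telephone book": both functions return the index of
        # `name` in a sorted list, or -1 if it is missing. The point is only how
        # the amount of work grows with the size of the book.

        def linear_lookup(book, name):
            # O(n): check every entry in order.
            for i, entry in enumerate(book):
                if entry == name:
                    return i
            return -1

        def binary_lookup(book, name):
            # O(log n): halve the remaining range at each step.
            lo, hi = 0, len(book) - 1
            while lo <= hi:
                mid = (lo + hi) // 2
                if book[mid] == name:
                    return mid
                if book[mid] < name:
                    lo = mid + 1
                else:
                    hi = mid - 1
            return -1

        book = sorted(["Ada", "Bob", "Cleo", "Dara", "Tom"])
        print(linear_lookup(book, "Tom"), binary_lookup(book, "Tom"))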

      And, no, LLMs will be of absolutely no help doing that. What wikipedia needs is a didactics task force giving specialist editors a slap on the wrist because xkcd 2501.

      • @antonim@lemmy.dbzer0.comOP

        As I said in an another comment, I find that traditional encyclopedias fare better than Wikipedia in this respect. Wikipedians can muddle even comparatively simple topics, e.g. linguistic purism is described like this:

        Linguistic purism or linguistic protectionism is a concept with two common meanings: one with respect to foreign languages and the other with respect to the internal variants of a language (dialects). The first meaning is the historical trend of the users of a language desiring to conserve intact the language’s lexical structure of word families, in opposition to foreign influence which are considered ‘impure’. The second meaning is the prescriptive[1] practice of determining and recognizing one linguistic variety (dialect) as being purer or of intrinsically higher quality than other related varieties.

        This is so hopelessly awkward, confusing and inconsistent. (I hope I’ll get around to fixing it, btw.) Compare it with how the linguist RL Trask defines it in his Language and Linguistics: The Key Concepts:

        [Purism] The belief that words (and other linguistic features) of foreign origin are a kind of contamination sullying the purity of a language.

        Bam! No LLMs were needed for this definition.

        So here’s my explanation for this problem: Wikipedians, specialist or non-specialist, like to collect and pile up a lot of cool info they’ve found in literature and online. When you have several such people working simultaneously, you easily end up with chaotic texts with no head or tails, which can always be expanded further and further with new stuff you’ve found because it’s just a webpage with no technical limits. When scholars write traditional encyclopedic texts, the limited space and singular viewpoint force them to write something much more coherent and readable.

    • @baatliwala@lemmy.world

      I’m from a country where English isn’t the primary language; people tend to find many aspects of English complex.

      • @Matriks404@lemmy.world

        I am also from a country where English is not widely spoken; in fact, most people are not able to hold a simple conversation (they will tell you they know ““basic English”” though).

        I still find it easier to read Wikipedia articles in English than to understand some of my relatives, because they never precisely say what the fuck they want from me. One person even says such incomprehensible shit that I think their brain is barely functional.

  • Dr. Moose

    AI threads on lemmy are always such a disappointment.

    It’s ironic that people put so little thought into understanding this and complain about “AI slop”. The slop was in your heads all along.

    To think that more accessibility for a project that is all about sharing information with people to whom information is least accessible is a bad thing is just an incredible lack of awareness.

    It’s literally the opposite of everything people might hate AI for:

    • RAG is very good and accurate these days that doesn’t invent stuff. Especially for short content like wiki articles. I work with RAG almost every day and never seen it hallucinate with big models.
    • it’s open and not run by “big scary tech”
    • it’s free for all and would save millions of editor hours and allow more accuracy and complexity in the articles themselves.

    And to top it all you know this is a lost fight even if you’re right so instead of contributing to steering this societal ship these people cover their ears and “bla bla bla we don’t want it”. It’s so disappointingly irresponsible.

    • @Don_alForno@feddit.org

      I’ll make a note to get back to you about this in a few years when they start blocking people from correcting AI authored articles.

    • qevlarr

      The point is they should be fighting AI, not open the door even an inch to AI on their site. Like so many other endeavors, it only works because the contributors are human. Not corpos, not AI, not marketing. AI kills Wikipedia if they let that slip. Look at StackOverflow, look at Reddit, look at Google search, look at many corporate social media. Dead internet theory is all around us.

      Wikipedia is trusted because it’s all human. No other reason

    • @antonim@lemmy.dbzer0.comOP

      RAG is very good and accurate these days that doesn’t invent stuff.

      In the OP I linked a comment showing how the summary presented in the showcase video is not actually very accurate and it definitely does invent some elements that are not present in the article that is being summarised.

      And in general the “accessibility” that primarily seems to work by expressing things in imprecise, unscientific or emotionally charged terms could well be more harmful than less immediately accessible but accurate and unambiguous content. You appeal to Wikipedia being “a project that is all about sharing information with people to whom information is least accessible”, but I don’t think this ever was that much of a goal - otherwise the editors would have always worked harder on keeping the articles easily accessible and comprehensible to laymen (in fact I’d say traditional encyclopedias are typically superior to Wikipedia in this regard).

      and would save millions of editor hours and allow more accuracy and complexity in the articles themselves.

      Sorry but you’re making things up here, not even the developers of the summaries are promising such massive consequences. The summaries weren’t meant to replace any of the usual editing work, they weren’t meant to replace the normal introductory paragraphs or anything else. How would they save these supposed “millions of editor hours” then? In fact, they themselves would have to be managed by the editors as well, so all I see is a bit of additional work.

    • @rmuk@feddit.uk

      How dare you bring nuance, experience and moderation into the conversation.

      Seriously, though, I am a firm believer that no tech is inherently bad, though the people who wield it might well be. It’s rare to see a good, responsible use of LLMs but I think this is one of them.

      • Venia Silente

        Whether technology is inherently bad is of nearly no matter. The problem we’re dealing with is the technologies with exherent badness.

    • @phantomwise@lemmy.ml

      I don’t think the idea itself is awful, but everyone is so fed up with AI bullshit that any attempt to integrate even an iota of it will be received very poorly, so I’m not sure it’s worth it.

      • Dr. Moose

        I don’t think it’s everyone either - just a very vocal minority.

    • @FourWaveforms@lemm.ee

      I don’t trust even the best modern commercial models to do this right, but with human oversight it could be valuable.

      You’re right about it being a lost fight, in some ways at least. There are lawsuits in flight that could undermine it. How far that will go remains to be seen. Pissing and moaning about it won’t accelerate the progress of those lawsuits, and is mainly an empty recreational activity.

  • @KnitWit@lemmy.world

    Never thought I’d cancel my recurring donation for them, but just sent the email. I hope they change their mind on this, but as I told them, I will not support this.

  • @deathbird@mander.xyz

    This is not the medicine for curing what ails Wikipedia, but when all anyone is selling is a hammer…

  • bitwolf

    Guess they’re going to double down on the donation campaign, considering the cost involved with AI.

  • [R3D4CT3D]

    “Most readers in the US can comfortably read at a grade 5 level,[CN]”

    so where is the citation? did they just pull a number from their butt? hmm…

    srsly, this is some bs.

      • sillyplasm

        frankly, I’m not quite surprised ._.
        edit: upon reading the article, I now wonder if it’s possible for your literacy to go down. I used to be such a bookworm in grade school, but now I have to reread stuff over and over in order to comprehend what’s going on.

        • @Carnelian@lemmy.world

          You might just be chronically tired or worn down from the stresses of life. It’s pretty common.

          Another thing is as we get older a lot of people will choose more “challenging” adult books and then just be totally bored lol. I read young adult and kids books sometimes (how can I give a book to a child if I haven’t read it myself?) and it’s always surprising to me how they can be ripped through in no time at all.

          But in general I think you’re probably right that literacy can decrease with disuse. It seems like most things about the mind and body trend that way

          • @applemao@lemmy.world

            The mind is a muscle. Don’t ignore it. Especially now, if you use your mind you’ll be light-years ahead of AI addicts.

          • ladfrombrad 🇬🇧

            But in general I think you’re probably right that literacy can decrease with disuse

            Maths is a really good example of this.

            At one point I really enjoyed doing long division in my head but as time goes on (and you don’t exercise that sponge…), it becomes lazy.

      • Dr. Moose

        I’m genuinely confused how that is even possible in a developed country such as the US. Do people not read at all? Even an article or a gossip magazine - any of those would get you there.

        Is it just countryside folk drinking beer and watching Fox News? It can’t be 50% of all people. How.

        • @Ledericas@lemm.ee

          Basically, the second sentence is a product of defunding education in red states and underfunding it everywhere else. Another issue is “participation grades for basically almost failing and failing classes”.