“The Wikimedia Foundation has been exploring ways to make Wikipedia and other Wikimedia projects more accessible to readers globally,” a Wikimedia Foundation spokesperson told me in an email. “This two-week, opt-in experiment was focused on making complex Wikipedia articles more accessible to people with different reading levels. For the purposes of this experiment, the summaries were generated by an open-weight Aya model by Cohere. It was meant to gauge interest in a feature like this, and to help us think about the right kind of community moderation systems to ensure humans remain central to deciding what information is shown on Wikipedia.”

There are some very out-of-touch people at the Wikimedia Foundation. Fortunately, the editors (the people who actually write the articles) have the sense to oppose this move en masse.

  • Wikipedia would probably be the only organization that I would trust with AI. They’ve been using it for a while now to flag sections that might need to be rewritten, but they don’t let the AI write anything itself, only notify human editors that there might be a problem. Or, at least that was what I heard a couple of ywars ago when they talked about it last.

    • @sculd@beehaw.orgOP
      164 days ago

      That is not the case here. These are not bots flagging issues, but literally an LLM helping to write “summaries,” which is why the reaction is so different.

    • ɔiƚoxɘup
      44 days ago

      For some reason, “ywars” changed your voice into that of a pirate, and it made me cackle. Thanks 💛

        • ɔiƚoxɘup
          33 days ago

          Fair. I should really quit using autocomplete and stop using Gboard for privacy reasons. Honestly, I’m just a little bit away from de-googling and going GrapheneOS. Just gotta spin up Immich and a few other servers.