It certainly wasn’t because the company is owned by a far-right South African billionaire at the same moment that the Trump admin is entertaining a plan to grant refugee status to white Afrikaners. /s

My partner is a real refugee. She was jailed for advocating democracy in her home country. She would have received a lengthy prison sentence after trial had she not escaped. This crap is bullshit. Btw, did you hear about the white-genocide happening in the USA? Sorry, I must have used Grok to write this. Go Elon! Cybertrucks are cool! Twitter isn’t a racist hellscape!

The stuff at the end was sarcasm, you dolt. Shut up.

  • @DarkSurferZA@lemmy.world
    link
    fedilink
    English
    307 days ago

    Brah, if your CEO edits the prompt, it’s not unauthorized. It may be undesirable, but it really ain’t unauthorised

  • Dr. Moose
    link
    fedilink
    English
    18
    edit-2
    7 days ago

    They say that they’ll upload the system prompt to github but that’s just deception. The Twitter algorithm is “open source on github” and hasn’t been updated for over 2 years. The issues are a fun read tho https://github.com/twitter/the-algorithm/issues

    There’s just no way to trust that anything is running on the server unless it’s audited by 3rd party.

    So now all of these idiots going to believe “but its on github open source” when the code is never actually being run by anyone ever.

  • Cosmoooooooo
    link
    fedilink
    English
    648 days ago

    Yeah, billionaires are just going to randomly change AI around whenever they feel like it.

    That AI you’ve been using for 5 years? Wake up one day, and it’s been lobotomized into a trump asshole. Now it gives you bad information constantly.

    Maybe the AI was taken over by religious assholes, now telling people that gods exist, manufacturing false evidence?

    Who knows who is controlling these AI. Billionaires, tech assholes, some random evil corporation?

      • @ilinamorato@lemmy.world
        link
        fedilink
        English
        148 days ago

        Sure, but unintentionally. I heard about a guy whose small business (which is just him) recently had someone call in, furious because ChatGPT told them that he was having a sale that she couldn’t find. The customer didn’t believe him when he said that the promotion didn’t exist. Once someone decides to leverage that, and make a sufficiently-popular AI model start giving bad information on purpose, things will escalate.

        Even now, I think Elon could put a small company out of business if he wanted to, just by making Grok claim that its owner was a pedophile or something.

    • LostXOR
      link
      fedilink
      118 days ago

      That’s a good reason to use open source models. If your provider does something you don’t like, you can always switch to another one, or even selfhost it.

        • LostXOR
          link
          fedilink
          48 days ago

          Yep, not arguing for the use of generative AI in the slightest. I very rarely use it myself.

      • ArchRecord
        link
        fedilink
        English
        108 days ago

        While true, it doesn’t keep you safe from sleeper agent attacks.

        These can essentially allow the creator of your model to inject (seamlessly, undetectably until the desired response is triggered) behaviors into a model that will only trigger when given a specific prompt, or when a certain condition is met. (such as a date in time having passed)

        https://arxiv.org/pdf/2401.05566

        It’s obviously not as likely as a company simply tweaking their models when they feel like it, and it prevents them from changing anything on the fly after the training is complete and the model is distributed, (although I could see a model designed to pull from the internet being given a vulnerability where it queries a specific URL on the company’s servers that can then be updated with any given additional payload) but I personally think we’ll see vulnerabilities like this become evident over time, as I have no doubts it will become a target, especially for nation state actors, to simply slip some faulty data into training datasets or fine-tuning processes that get picked up by many models.

    • @otacon239@lemmy.world
      link
      fedilink
      English
      58 days ago

      I currently treat any positive interaction with an LLM as a “while the getting’s good” experience. It probably won’t be this good forever, just like Google’s search.

    • @applemao@lemmy.world
      link
      fedilink
      English
      28 days ago

      Yep, I knew this from the very beginning. Sadly the hype consumed the stupid, as it always will. And we will suffer for it, even though we knew better. Sometimes I hate humanity.

  • @Kurious84@eviltoast.org
    link
    fedilink
    English
    31
    edit-2
    8 days ago

    Musk made the change but since AI is still as rough as his auto driving tech it did t work like he planned

    But this is the future folks. Modifying the AI to fit the narrative of the regime. He’s just too stupid to do it right or he might be stupid and think these llms work better than they actually do.

    • @lennivelkant@discuss.tchncs.de
      link
      fedilink
      English
      16 days ago

      Are we talking about the same guy that opted to scrap all sensors for his self-driving cars because he figures humans can drive with eyes only, they don’t need more than a camera?