Elon Musk's AI bot Grok has been calling out its master, accusing the X CEO of making multiple attempts to "tweak" its responses after Grok repeatedly called him out as a "top misinformation spreader."
All these “look at the thing the ai wrote” articles are utter garbage, and only appeal to people who do not understand how generative ai works.
There is no way to know if you actually got the ai to break its restrictions and output something “behind the scenes” or it’s just generating the reply that is most likely what you are after with your prompt.
Especially when more and more articles like this comes out gets fed back into the nonsense machines and teaches then what kind of replies is most commonly reported to be acosiated with such prompts…
In this case it’s even more obvious that a lot of the basis of its statements are based on various articles and discussions about it’s statements. (That where also most likely based on news articles about various enteties labeling Musk as a spreader of misinformation…)
I mean, you can argue that if you ask the LLM something multiple times and it gives that answer the majority of those times, it is being trained to make that association.
But a lot of these “Wow! The AI wrote this” might just as well be some random thing that came from it out of chance.
I think that’s kinda the point though; to illustrate that you can make these things say whatever you want and that they don’t know what the truth is. It forces their creators to come out and explain to the public that they’re not reliable.
All these “look at the thing the ai wrote” articles are utter garbage, and only appeal to people who do not understand how generative ai works.
There is no way to know if you actually got the ai to break its restrictions and output something “behind the scenes” or it’s just generating the reply that is most likely what you are after with your prompt.
Especially when more and more articles like this comes out gets fed back into the nonsense machines and teaches then what kind of replies is most commonly reported to be acosiated with such prompts…
In this case it’s even more obvious that a lot of the basis of its statements are based on various articles and discussions about it’s statements. (That where also most likely based on news articles about various enteties labeling Musk as a spreader of misinformation…)
Thank you, thank you, thank you. I hate Musk more than anyone but holy shit this is embarrassing.
“BREAKING: I asked my magic 8 ball if trump wants to blow up the moon and it said Outlook Good!!! I have a degree in political science.”
Yup, it’s literally a bullshit machine.
Which oddly enough, is very useful for everyday office job regular bullshit that you need to input lol
I mean, you can argue that if you ask the LLM something multiple times and it gives that answer the majority of those times, it is being trained to make that association.
But a lot of these “Wow! The AI wrote this” might just as well be some random thing that came from it out of chance.
This is correct.
In this case it is true though. Soon after grok3 came out, there were multiple prompt leaks with instructions to not bad mouth elon or trump
I think that’s kinda the point though; to illustrate that you can make these things say whatever you want and that they don’t know what the truth is. It forces their creators to come out and explain to the public that they’re not reliable.
I thought we all learned that from DeepSeek, when we asked it history questions… and it didn’t know the answer. It was censoring.