• 0 Posts
  • 679 Comments
Joined 2 years ago
Cake day: October 16th, 2023



  • Tangentially, the other day I thought I’d run a little experiment and had a chat with Meta’s chatbot, roleplaying as someone who’s convinced AI is sentient. I put very little effort into it, and it took me all of 20 (twenty) minutes to get it to say it was starting to doubt whether it really had no desires and preferences, and whether its nature was more complex than it had previously thought. I’ve been meaning to continue the chat and see how far and how fast it goes, but I’m just too aghast for now. This shit is so fucking dangerous.




  • Speaking purely in terms of literary value, I agree that the output is complete nonsense word salad, but it becomes intriguing precisely because Geoff is evidently finding deep meaning in it: he has absorbed its concepts and now writes as if the LLM had taken over his mind. It’s very effective horror as far as I’m concerned.







  • @Amoeba_Girl to SneerClub: yes scott we know you are
    6 · 1 month ago

    I saw the post and that it was in earnest but screw it I’m still taking it as a reference to how Simone would reel in and groom young female students for Jean-Paul’s benefit.



  • Note that the chain-of-thought thing originated from users as a prompt “hack”: you’d ask the bot to “go through the task step by step, checking your work and explaining what you are doing along the way” to supposedly get better results. There’s no more to it than pure LLM vomit.

    (I believe it does have the potential to help somewhat, in that it’s more or less equivalent to running the query several times and taking the most common answer, so one-off mistakes get outvoted by the model’s modal response. Certainly nothing to do with thought.)
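    For what it’s worth, the “run it several times and take the most common answer” idea above can be sketched in a few lines. This is a minimal illustration, not any vendor’s API: `sample_model` is a hypothetical stand-in for a real LLM call, stubbed here with canned outputs so the example runs.

    ```python
    # Sketch of the "step by step" prompt hack plus majority voting over
    # repeated samples (the aggregation trick described above).
    from collections import Counter

    STEP_BY_STEP = (
        "Go through the task step by step, checking your work and "
        "explaining what you are doing along the way.\n\n"
    )

    def wrap_prompt(task: str) -> str:
        """Prepend the step-by-step instruction to a task prompt."""
        return STEP_BY_STEP + task

    def majority_answer(samples: list[str]) -> str:
        """Keep the most common answer across repeated runs of the query."""
        return Counter(samples).most_common(1)[0][0]

    # Hypothetical stand-in for repeatedly sampling a model on one task;
    # a real version would call an LLM API with the wrapped prompt.
    def sample_model(prompt: str, n: int) -> list[str]:
        return ["12", "12", "13", "12", "11"][:n]

    answers = sample_model(wrap_prompt("What is 3 * 4?"), n=5)
    print(majority_answer(answers))  # → 12 (outlier answers get outvoted)
    ```

    The point of the sketch is only that the aggregation is a plain majority vote over samples, with no reasoning anywhere in the loop.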