An interesting development, but I doubt it’ll be a good thing, especially at first. This looks like the kind of thing that will open an entirely new threat vector and become a huge liability, even when used in the most secure way possible, but especially when used in the haphazard way we’ll certainly see from some of the early adopters.
Just because you can do a thing does not mean that you should.
I almost feel like this should have an NSFW tag because this will almost certainly not be safe for work.
Edit: looks like the article preview is failing to load… I’ll try to fix it. … Nope. Couldn’t fix.
You’re not wrong, but I don’t think you’re 100% correct either. The human mind synthesizes reasoning by using a neural network of neurons to make connections, building a profoundly complex statistical model. LLMs do essentially the same thing, just poorly by comparison. They don’t have the natural optimizations we have, so they kinda suck at it right now, but dismissing the capabilities they currently have entirely is probably a mistake.
I’m not an apologist, to be clear. There is a ton of ethical and moral baggage tied up with the way they were made and how they’re used, and it needs to be addressed. I also think we’re only a few clever optimizations away from a real threat.
I don’t buy the “it’s a neural network” argument. We don’t really understand consciousness or thinking … and consciousness is possibly a requirement for actual thinking.
Frankly, I don’t think thinking in humans is based anywhere near statistical probabilities.
You can of course apply statistics and observe patterns and mimic them, but correlation is not causation (and generally speaking, society is far too willing to accept correlation).
Maybe everything reduces to “neural networks” in the same way LLM AI models them … but that seems like an exceptionally bold claim for humanity to make.
It makes sense that you don’t buy it. LLMs are built on simplified renditions of neural structure. They’re totally rudimentary.
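To make “rudimentary” concrete, here’s a minimal sketch (plain Python, no frameworks) of the kind of artificial “neuron” these models are stacked out of: just a weighted sum of inputs pushed through a fixed nonlinearity. The weights and inputs below are made-up placeholder values, not taken from any real model.

```python
import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias term...
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed through a simple nonlinearity (sigmoid here).
    return 1.0 / (1.0 + math.exp(-z))

# Example: three inputs with arbitrary illustrative weights.
print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.1, -0.3], bias=0.05))
```

Compare that to a biological neuron, with its dendritic trees, neurotransmitters, and timing-dependent behavior, and the gap in the analogy is pretty obvious.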