An interesting development, but I doubt it’ll be a good thing, especially at first. This looks like an entirely new threat vector and a huge liability, even when used in the most secure way possible, and especially when used in the haphazard way we’ll certainly see from some of the early adopters.

Just because you can do a thing, does not mean that you should.

I almost feel like this should have an NSFW tag because this will almost certainly not be safe for work.

Edit: looks like the article preview is failing to load… I’ll try to fix it. … Nope. Couldn’t fix.

  • Dark Arc · 1 day ago

    I don’t buy the “it’s a neural network” argument. We don’t really understand consciousness or thinking … and consciousness is possibly a requirement for actual thinking.

    Frankly, I don’t think human thinking is based on anything like statistical probabilities.

    You can of course apply statistics, observe patterns, and mimic them, but correlation is not causation (and generally speaking, society is far too willing to accept correlation).

    Maybe everything reduces to “neural networks” in the same way LLMs model them … but that seems like an exceptionally bold claim for humanity to make.
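
    For what it’s worth, here’s a toy sketch of what “statistical probabilities” means in an LLM: at each step the model just scores every candidate next token, turns the scores into a probability distribution, and samples one. All the vocabulary and numbers below are made up for illustration:

    ```python
    import math
    import random

    # Toy illustration of an LLM's core loop: score candidate next tokens,
    # convert scores to probabilities, sample one. Numbers are made up.
    def softmax(scores):
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    vocab = ["cat", "dog", "the", "runs"]   # hypothetical tiny vocabulary
    logits = [2.0, 1.5, 0.1, -1.0]          # hypothetical model scores

    probs = softmax(logits)
    next_token = random.choices(vocab, weights=probs, k=1)[0]
    print({v: round(p, 3) for v, p in zip(vocab, probs)}, "->", next_token)
    ```

    That’s the whole mechanism: pattern-matched probabilities, not (as far as anyone can show) anything like understanding.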

    • ɔiƚoxɘup (OP) · 22 hours ago

      It makes sense that you don’t buy it. LLMs are built on simplified renditions of neural structure. They’re totally rudimentary.
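
      To illustrate just how rudimentary: a single artificial “neuron” is nothing more than a weighted sum pushed through a squashing function. A minimal sketch, with arbitrary numbers, for comparison against the dendrites, neurotransmitters, and spike timing of a biological neuron:

      ```python
      import math

      # One artificial "neuron": a weighted sum of inputs pushed through a
      # squashing function (sigmoid). This is the entire building block;
      # all values below are arbitrary examples.
      def neuron(inputs, weights, bias):
          activation = sum(x * w for x, w in zip(inputs, weights)) + bias
          return 1 / (1 + math.exp(-activation))

      print(neuron([0.5, -1.2, 0.8], [0.9, 0.3, -0.5], bias=0.1))
      ```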