But in her order, U.S. District Court Judge Anne Conway said the company’s “large language models” — artificial intelligence systems designed to understand human language — are not speech.

  • Natanael

    All you need to argue is that its operators have responsibility for its actions and should filter / moderate out the worst.

    • @Opinionhaver@feddit.uk

      That still assumes a level of understanding that these models don’t have. How could you have prevented this one when suicide was never explicitly mentioned?

      • Natanael

        You can have multiple layers of detection mechanisms, not just within the LLM the user is talking to
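        A rough sketch of what an out-of-band screening layer could look like, assuming a cheap rule-based pass in front of a separate risk classifier. The regex patterns, the classifier_layer stub, and the threshold are all made up for illustration, not anything Character.AI is known to run:

        ```python
        # Two screening layers that run outside the conversational LLM:
        # layer 1 is a cheap regex filter, layer 2 stands in for a dedicated
        # risk classifier. Anything either layer trips is flagged for human review.
        import re

        # Illustrative patterns only; a real list would be curated by clinicians.
        RISK_PATTERNS = [
            r"\bwant to disappear\b",
            r"\bend it all\b",
            r"\bno reason to keep going\b",
        ]

        def rule_layer(message: str) -> bool:
            """Cheap first pass: case-insensitive keyword/regex matching."""
            return any(re.search(p, message, re.IGNORECASE) for p in RISK_PATTERNS)

        def classifier_layer(message: str) -> float:
            """Second pass: placeholder for a separately trained risk model.
            Returns a risk score in [0, 1]; stubbed to 0.0 here."""
            return 0.0

        def screen_message(message: str, threshold: float = 0.7) -> bool:
            """Flag the message for human review if either layer trips."""
            return rule_layer(message) or classifier_layer(message) >= threshold

        print(screen_message("some days I just want to disappear"))  # True via the rule layer
        ```

        The point isn’t that this exact code is right, it’s that the flagging doesn’t have to live inside the model the user is chatting with.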

          • Natanael

            I’m told sentiment analysis with LLMs is a whole thing, but maybe this clever new technology doesn’t do what it’s promised to do? 🤔

            TL;DR: make it discourage unhealthy use, or at least be honest in the marketing and tell people this tech is a crapshoot that’s probably lying to you.
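            For what it’s worth, running an off-the-shelf sentiment classifier over user messages is close to a one-liner these days. A rough sketch with the Hugging Face transformers pipeline (the default model and the idea of applying it to chat logs are my own assumptions, not something any chatbot vendor documents):

            ```python
            # Sentiment check over user messages with an off-the-shelf classifier.
            # Requires: pip install transformers torch
            from transformers import pipeline

            # Loads a default English sentiment model on first use.
            sentiment = pipeline("sentiment-analysis")

            messages = [
                "I had a really good day today",
                "Nothing matters anymore and I'm tired of everything",
            ]

            for msg, result in zip(messages, sentiment(messages)):
                # Each result looks like {"label": "NEGATIVE", "score": 0.99}.
                print(f"{result['label']:>8} ({result['score']:.2f})  {msg}")
            ```

            Sentiment isn’t the same thing as suicide risk, of course, so at best this is one signal among the several layers mentioned above.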