  • @Timatal
    2 days ago

    This is sort of the type of problem that a specifically trained ML model could be pretty good at.

    This isn’t that, though; it seems to me to literally be asking an LLM to just make stuff up. Given that, the results are interesting, but I wouldn’t trust it.

  • @meyotch@slrpnk.net
    2 days ago

    The accuracy is similar to what a carny running the guess-your-weight hustle could achieve.

  • @Etterra@discuss.online
    2 days ago

    Please remember that the LLM does not actually understand anything. It’s predictive, as in it can predict what a person would say, but it doesn’t understand what any of it means.

  • @abcdqfr@lemmy.world
    2 days ago

    Can’t wait to be called a fat ass with 95% semantic certainty. Foolish machine, you underestimate my power! I’m a complete fat ass!!