This is the sort of problem that a specifically trained ML model could be pretty good at.
This isn't that, though; it seems to me to literally be asking an LLM to just make stuff up. Given that, the results are interesting, but I wouldn't trust them.
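To make the distinction concrete, here's a minimal sketch of what "specifically trained" means: a supervised regressor fit on labeled examples and scored against held-out data, rather than a language model prompted for a plausible-sounding number. Everything here is hypothetical; the features and weights are synthetic stand-ins, and a real model would need genuine (features, weight) pairs.

```python
# Hypothetical sketch: supervised weight prediction with synthetic data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))  # stand-in features extracted per person
# Synthetic "true" weights in kg, linear in the features plus noise.
y = 70 + X @ rng.normal(size=8) + rng.normal(scale=5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# Trained to minimize prediction error on labeled data,
# not to produce text that merely sounds plausible.
model = Ridge().fit(X_train, y_train)
print("held-out MAE (kg):", mean_absolute_error(y_test, model.predict(X_test)))
```

The point of the sketch is the measurable error on held-out data: a trained model's accuracy can be verified, whereas an LLM's guess is just a confident-sounding completion.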
The accuracy is similar to what a carny running the guess-your-weight hustle could achieve.
Please remember that the LLM does not actually understand anything. It's predictive: it can predict what a person would say, but it doesn't understand what that means.
Can’t wait to be called a fat ass with 95% semantic certainty. Foolish machine, you underestimate my power! I’m a complete fat ass!!