No, I’m talking about human learning and the danger posed by treating an imperfect tool as a reliable source of information, as these companies want people to do.
Whether the erratic information comes from tokenization or hallucinations is irrelevant when this is already the main source for so many people learning, for example, a new language.
Hallucinations aren’t relevant to my point here. I’m not arguing that AIs are a good source of information, and I agree that hallucinations are dangerous (either that, or misusing LLMs is dangerous). I also admit that, for language learning, artifacts caused by tokenization could be very detrimental to the user.
The point I am making is that LLMs struggling with this kind of tokenization artifact is poor evidence for drawing any conclusions about their behaviour on other tasks.
That’s a fair point when these LLMs are restricted to areas where they function well. They have use cases that make sense when isolated from the ethics around training and compute. But the people who made them are applying them wildly outside these use cases.
They are pushed as a solution to every problem for the sake of profit, with intentional ignorance of these issues. If a few errors harm someone, that’s just a casualty on the way to profitability. That can’t be disentangled from them unless you limit your argument to open-source, local compute.
Well – and I don’t mean this to be antagonistic – I agree with everything you’ve said except for the last sentence, where you say “and therefore you’re wrong.” Look, I’m not saying LLMs function well, or that they’re good for society, or anything like that. I’m saying that tokenization errors are really their own thing, unrelated to the other errors LLMs make. If you want to dunk on LLMs then be my guest. I’m just saying that this one type of poor behaviour is unrelated to the other kinds of poor behaviour.