They got very good results just by making the model bigger and training it on more data. It started doing stuff that was never programmed into it at all, like writing songs and holding conversations, the sort of thing nobody expected an autocomplete to do. The reasoning was that if they kept making it bigger and fed it even more data, the line would keep going up. The fanboys believed it, investors believed it, and many business leaders believed it. Until they ran out of data and datacenters.
it’s such a weird stretch, honestly. songs and conversations are no different from predictive text, it’s just more of it. expecting it to do logic after ingesting more text is like expecting a chicken to lay Kinder eggs just because you feed it more.
It helped that this advanced autocorrect could score highly on many university-level exams. That might also mean the exams don’t test logic and reasoning as well as the teachers think they do.