Maybe they should call it what it is
Machine Learning algorithms from 1990 repackaged and sold to us by marketing teams.
Hey now, that’s unfair and queerphobic.
These models are from 1950, with juiced up data sets. Alan Turing personally did a lot of work on them, before he cracked the math and figured out they were shit and would always be shit.
Fair lol
Alan Turing was the GOAT
RIP my beautiful prince
Also, thank you for being basically a person. This topic does a lot to convince me those aren’t a thing.
His politics weren’t perfect, but he got more Nazis killed than a lot of people with much worse takes, and was a genuinely brilliant, reasonably ethical contributor to a lot of cool shit that should have fucking stayed cool.
Machine learning algorithm from 2017, scaled up a few orders of magnitude so that it finally more or less works, then repackaged and sold by marketing teams.
Adding weights doesn’t make it a fundamentally different algorithm.
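Sidebar, because it’s easy to forget how small the core idea actually is: here’s a rough NumPy sketch of the 2017 attention operation (toy shapes I made up, single head, no masking, none of the engineering). The point is just that “scaling up” changes the matrix sizes, not the algorithm.

```python
import numpy as np

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V -- the core op from the 2017 paper
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V

# "Scaling up" = bigger matrices, more heads, more layers.
# The computation itself is identical at either size.
small = attention(*(np.random.randn(8, 64) for _ in range(3)))
big = attention(*(np.random.randn(4096, 128) for _ in range(3)))
```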
We have hit a wall where these programs have combed over the totality of the internet and all available datasets and texts in existence.
There isn’t any more training data to improve with, and these programs have started polluting the internet with bad data that will make them even dumber and more incorrect in the long run.
We’re done here until there’s a fundamentally new approach that isn’t repetitive training.
Okay but have you considered that if we just reduce human intelligence enough, we can still maybe get these things equivalent to human level intelligence, or slightly above?
We have the technology.
Also literally all the resources in the world.
Transformers were pretty novel in 2017, I don’t know if they were really around before that.
Anyway, I’m doubtful that a larger corpus is what’s needed at this point. (Though that said, there’s a lot more text remaining in instant messenger chat logs like Discord that probably has yet to be integrated into LLMs. Not sure.) I’m also doubtful that scaling up is going to keep working, but it wouldn’t surprise me that much if it does keep working for a long while. My guess is that there are some small tweaks to be discovered that really improve things a lot but still basically look like repetitive training, as you put it. Who can really say though.