But an LLM as a node in a framework that can call a Python library isn’t how these systems are configured. They’re just not that sophisticated.
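For what that configuration would actually look like, here’s a minimal sketch of an LLM-as-node dispatching to a Python library. The `call_llm` signature and the reply shape are hypothetical stand-ins, not any vendor’s API; only the dispatch pattern is the point.

```python
import json
import math

# Hypothetical stand-in for any chat-completion call; a real framework
# would wire this to OpenAI, Anthropic, a local model, etc.
def call_llm(messages: list[dict]) -> dict:
    raise NotImplementedError("wire this to your model of choice")

# The "Python library" the LLM node is allowed to call.
def evaluate_expression(expression: str) -> str:
    # Exposing math lets the model compute instead of guessing digits.
    return str(eval(expression, {"__builtins__": {}}, vars(math)))

TOOLS = {"evaluate_expression": evaluate_expression}

def run_node(user_query: str) -> str:
    """One LLM 'node': ask the model, dispatch any tool call, return text."""
    messages = [{"role": "user", "content": user_query}]
    reply = call_llm(messages)
    if reply.get("tool_call"):  # model decided the query needs the library
        name = reply["tool_call"]["name"]
        args = json.loads(reply["tool_call"]["arguments"])
        result = TOOLS[name](**args)
        messages.append({"role": "tool", "name": name, "content": result})
        reply = call_llm(messages)  # let the model phrase the final answer
    return reply["content"]
```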
So much of what Sam Altman is doing is brute force, which is why he thinks he needs a $1T investment in new power to build his next-iteration model.
DeepSeek gets at the edges of this through their partitioned (mixture-of-experts) model. But you’re still asking a lot for a machine to intuit whether a query can be solved with some external Python call the system has yet to identify.
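For anyone unfamiliar with the “partitioned” bit: a mixture-of-experts model routes each token to only a few expert sub-networks. Here’s a toy illustration of top-k routing (not DeepSeek’s actual code, which adds load balancing, shared experts, and batched kernels):

```python
import numpy as np

def top_k_route(token_embedding: np.ndarray, router_weights: np.ndarray, k: int = 2):
    """Toy MoE router: score every expert for one token, keep the top-k,
    and renormalize their mixing weights."""
    logits = router_weights @ token_embedding          # one score per expert
    top = np.argsort(logits)[-k:]                      # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())  # stable softmax over survivors
    weights /= weights.sum()
    return top, weights                                # which experts run, how to mix them

rng = np.random.default_rng(0)
n_experts, dim = 8, 16
print(top_k_route(rng.standard_normal(dim), rng.standard_normal((n_experts, dim))))
```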
It doesn’t scale to AGI, but it does reduce hallucinations.
It has to scale to AGI, because a central premise of AGI is a system that can improve itself.
It just doesn’t match the OpenAI development model, which is to scrape and sort data hoping the Internet already has the solution to every problem.
The only thing worse than the AI shills is the tech-bro mansplanations of how “AI works” from people utterly uninformed of the actual science. Please stop making educated guesses for others and typing them out in a teacher’s voice. It’s extremely aggravating.
The claim is not that all LLMs are agents, but rather that agents (which incorporate an LLM as one of their key components) are more powerful than an LLM on its own.
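Concretely, the extra power comes from the loop around the LLM, not the LLM itself. A minimal sketch (the reply shape is assumed, not any specific vendor’s API): the model can request tools repeatedly, observe results, and keep going until it has an answer, which a bare single-shot LLM can’t do.

```python
def run_agent(user_query: str, llm, tools: dict, max_steps: int = 8) -> str:
    """Minimal agent loop: the LLM is one component; the loop plus tool
    access makes the whole more capable than the model alone.
    `llm` is any callable that returns either a final answer or a tool
    request with a dict of arguments (shape assumed for this sketch)."""
    messages = [{"role": "user", "content": user_query}]
    for _ in range(max_steps):        # hard cap so a confused model can't loop forever
        reply = llm(messages)
        call = reply.get("tool_call")
        if call is None:              # no tool needed: the model's text is the answer
            return reply["content"]
        result = tools[call["name"]](**call["arguments"])
        messages.append({"role": "tool", "name": call["name"], "content": str(result)})
    return "step budget exhausted"
```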
We don’t know how far away we are from recursive self-improvement. We might already be there, to be honest; how much of the job of an LLM researcher can already be automated? It’s unclear whether there’s some ceiling to what a recursively improved GPT-4.x (or whatever) can do, though; maybe there’s a key hypothesis it will never formulate on the quest for self-improvement.