My running theory is that human evolution developed a heuristic in our brains that associates language sophistication with general intelligence, and especially with humanity. The very fact that LLMs are so good at composing sophisticated sentences triggers this heuristic and makes people anthropomorphize them far more than other kinds of AI, so they ascribe more capability to them than evidence justifies.
I actually think this may explain some earlier reporting of weird behavior from AI researchers as well. I seem to recall reports of a Google researcher believing he had created sentient AI (a quick search produced this article). The researcher was fooled by his own AI not because he drank the Kool-Aid, but because he fell prey to this neural heuristic that's in all of us.
Yeah, it has a name. The more you talk, the more people believe you are smart. It's partly based on the tendency to believe what we hear first, and only then check whether it's true.
My point is that this kind of pseudo intelligence has never existed on Earth before, so evolution has had free rein to use language sophistication as a proxy for humanity and intelligence without encountering anything that would put selective pressure against this heuristic.
Human language is old. Way older than the written word. Our brains have evolved specialized regions for language processing, so evolution has clearly had time to operate while language has existed.
And LLMs are not the first sophisticated AI that's been around. We've had AI for decades, and really good AI for a while. But people don't anthropomorphize other kinds of AI nearly as much as LLMs. Sure, they ascribe some human-like intelligence to any sophisticated technology, and some people in history have claimed some technology or another is alive/sentient. But with LLMs we're seeing a larger share of the population believe it than we've ever seen before.
"so evolution has had free rein to use language sophistication as a proxy for humanity and intelligence"
My point is, evolution doesn't need to be involved in this paradigm. It could just be something children learn: this thing talks, and is therefore more interactive than that other thing that doesn't talk.
Additionally, at the time in pre-history when assessing the intelligence of something could determine life or death, and thereby your ability to reproduce, language may not have been a great indicator of intelligence. For example, if you encountered a band of whatever hominid encroaching on your territory, there may not have been a lot of talking. You would know they were intelligent because they might have clothing or tools, but it's likely nothing would be said before the spears started flying.
I don’t think the mechanisms of evolution are necessarily involved.
We’re just not used to interacting with this type of pseudo intelligence.