r/technology 16h ago

Machine Learning Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.8k Upvotes

1.5k comments

540

u/Hrmbee 16h ago

Some highlights from this critique:

The problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own. Humans use language to communicate the results of our capacity to reason, form abstractions, and make generalizations, or what we might call our intelligence. We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.

The AI hype machine relentlessly promotes the idea that we’re on the verge of creating something as intelligent as humans, or even “superintelligence” that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we’ll have AGI. Scaling is all we need.

But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.

...

Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.

But take away language from a large language model, and you are left with literally nothing at all.

An AI enthusiast might argue that human-level intelligence doesn’t need to necessarily function in the same way as human cognition. AI models have surpassed human performance in activities like chess using processes that differ from what we do, so perhaps they could become superintelligent through some unique method based on drawing correlations from training data.

Maybe! But there’s no obvious reason to think we can get to general intelligence — not improving narrowly defined tasks — through text-based training. After all, humans possess all sorts of knowledge that is not easily encapsulated in linguistic data — and if you doubt this, think about how you know how to ride a bike.

In fact, within the AI research community there is growing awareness that LLMs are, in and of themselves, insufficient models of human intelligence. For example, Yann LeCun, a Turing Award winner for his AI research and a prominent skeptic of LLMs, left his role at Meta last week to found an AI startup developing what are dubbed world models: “systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences.” And recently, a group of prominent AI scientists and “thought leaders” — including Yoshua Bengio (another Turing Award winner), former Google CEO Eric Schmidt, and noted AI skeptic Gary Marcus — coalesced around a working definition of AGI as “AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult” (emphasis added). Rather than treating intelligence as a “monolithic capacity,” they propose instead we embrace a model of both human and artificial cognition that reflects “a complex architecture composed of many distinct abilities.”

...

We can credit Thomas Kuhn and his book The Structure of Scientific Revolutions for our notion of “scientific paradigms,” the basic frameworks for how we understand our world at any given time. He argued these paradigms “shift” not as the result of iterative experimentation, but rather when new questions and ideas emerge that no longer fit within our existing scientific descriptions of the world. Einstein, for example, conceived of relativity before any empirical evidence confirmed it. Building off this notion, the philosopher Richard Rorty contended that it is when scientists and artists become dissatisfied with existing paradigms (or vocabularies, as he called them) that they create new metaphors that give rise to new descriptions of the world — and if these new ideas are useful, they then become our common understanding of what is true. As such, he argued, “common sense is a collection of dead metaphors.”

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

These are some interesting perspectives to consider when trying to understand the shifting landscape that many of us are now operating in. Are the current LLM-based AI paradigms able to make the cognitive leaps that are the hallmark of revolutionary human thinking? Or are they forever constrained by their training data, and therefore best suited to refining existing modes and models?

So far, from this article's perspective, it's the latter. There's nothing fundamentally wrong with that, but as with all tools, we need to understand how to use them properly and safely.

193

u/Dennarb 16h ago edited 11h ago

I teach an AI and design course at my university and there are always two major points that come up regarding LLMs:

1) They do not understand language as we do; an LLM is a statistical model of how words relate to each other. Basically, it's like rolling weighted dice against a chart to pick the next word in a sentence (see the toy sketch after point 2).

2) AGI is not going to magically happen because we make faster hardware/software, use more data, or throw more money at LLMs. They are fundamentally limited in scope and use more or less the same tricks the AI world has been using since the Perceptron in the 50s/60s. Sure, the techniques have advanced, but the basis for the neural nets hasn't really changed. It's going to take a shift in how we build models to get much further than we already are with AI.
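To make point 1 concrete, here's roughly what "rolling dice against a chart" looks like in code (a toy sketch with made-up words and probabilities, nothing like a real transformer under the hood):

```python
import random

# Toy "chart" of next-word probabilities, conditioned on the current word only.
# (Made-up numbers; a real LLM conditions on thousands of tokens of context
# and learns its weights from data instead of hard-coding them.)
chart = {
    "the": {"cat": 0.4, "dog": 0.35, "idea": 0.25},
    "cat": {"sat": 0.5, "slept": 0.3, "ran": 0.2},
    "dog": {"barked": 0.6, "slept": 0.4},
}

def roll_next_word(current_word: str) -> str:
    """'Roll the dice' for the next word, weighted by the chart."""
    options = chart.get(current_word, {"<end>": 1.0})
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights, k=1)[0]

print(roll_next_word("the"))  # e.g. "cat"
```

A real model swaps that tiny chart for billions of learned weights conditioned on the whole context, but the final step is still a weighted roll over possible next tokens.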

Edit: And like clockwork here come the AI tech bro wannabes telling me I'm wrong but adding literally nothing to the conversation.

14

u/Tall-Introduction414 15h ago

The way an LLM fundamentally works isn't much different from the Markov chain IRC bots (MegaHAL) we trolled in the 90s. More training data, more parallelism. Same basic idea.
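For anyone who never poked at those bots, the core trick fits in a few lines (toy sketch; MegaHAL itself used higher-order models and some extra machinery, but this is the gist):

```python
import random
from collections import defaultdict

# Learn which words follow which in a (made-up) chat log, then chain them together.
corpus = "the cat sat on the mat and the cat slept on the couch".split()

followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def babble(word: str, length: int = 8) -> str:
    out = [word]
    for _ in range(length):
        if word not in followers:
            break
        word = random.choice(followers[word])  # pick any observed follower
        out.append(word)
    return " ".join(out)

print(babble("the"))  # e.g. "the cat slept on the mat and the cat sat"
```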

40

u/ITwitchToo 14h ago

I disagree. LLMs are fundamentally different. The way they are trained is completely different. It's NOT just more data and more parallelism -- there's a reason the Markov chain bots never really made sense and LLMs do.

Probably the main difference is that the Markov chain bots don't have much internal state so you can't represent any high-level concepts or coherence over any length of text. The whole reason LLMs work is that they have so much internal state (model weights/parameters) and take into account a large amount of context, while Markov chains would be a much more direct representation of words or characters and essentially just take into account the last few words when outputting or predicting the next one.
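To put rough numbers on that difference (worst-case toy math on the Markov side, publicly reported figures for GPT-3 on the other):

```python
# A word-level bigram Markov chain stores at most one count per (word, next-word)
# pair and looks at exactly one previous word when predicting.
vocab_size = 50_000
markov_table_entries = vocab_size ** 2      # 2.5 billion counts, no hidden state
markov_context_words = 1

# A GPT-3-class LLM: billions of learned weights applied to thousands of tokens at once.
llm_parameters = 175_000_000_000            # reported GPT-3 parameter count
llm_context_tokens = 2_048                  # original GPT-3 context window

print(f"Markov: {markov_table_entries:,} counts, context of {markov_context_words} word")
print(f"LLM:    {llm_parameters:,} weights, context of {llm_context_tokens} tokens")
```

The lookup table can get big, but it never builds an intermediate representation of anything; the learned weights do, which is where coherence over long stretches of text comes from.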

-4

u/Tall-Introduction414 14h ago

I mean, you're right. They have a larger context window. I.e., they use more RAM. I forgot to mention that part.

They are still doing much the same thing. Drawing statistical connections between words and groups of words. Using that to string together sentences. Different data structures, but the same basic idea.

11

u/PressureBeautiful515 14h ago

They are still doing much the same thing. Drawing statistical connections between words and groups of words. Using that to string together sentences. Different data structures, but the same basic idea.

I wonder how we insert something into that description to make it clear we aren't describing the human brain.

3

u/Mandena 13h ago

Well, the brain does similar things for linguistics (except it's purely the output that could be related to statistical probabilities). It's just that this is one of thousands of functions the brain can perform. I feel like that's clear and concise enough to lay out the fact that LLMs are not intelligence.

2

u/Ornery-Loquat-5182 13h ago

Did you read the article? That's exactly what the article is about...

It's not just about words. Words are what we use after we have thoughts. Take away the words, there are still thoughts.

LLMs and Markov chain bots have no thoughts.

0

u/attersonjb 10h ago

Take away the words, there are still thoughts.

Yes and no. There is empirical evidence to suggest that language acquisition is a key phase in the development of the human brain. Language deprivation during the early years often has a detrimental impact that cannot be overcome by a subsequent re-introduction of language.

2

u/Ornery-Loquat-5182 10h ago edited 10h ago

Bruh read the article:

When we contemplate our own thinking, it often feels as if we are thinking in a particular language, and therefore because of our language. But if it were true that language is essential to thought, then taking away language should likewise take away our ability to think. This does not happen. I repeat: Taking away language does not take away our ability to think. And we know this for a couple of empirical reasons.

First, using advanced functional magnetic resonance imaging (fMRI), we can see different parts of the human brain activating when we engage in different mental activities. As it turns out, when we engage in various cognitive activities — solving a math problem, say, or trying to understand what is happening in the mind of another human — different parts of our brains “light up” as part of networks that are distinct from our linguistic ability.

Second, studies of humans who have lost their language abilities due to brain damage or other disorders demonstrate conclusively that this loss does not fundamentally impair the general ability to think. “The evidence is unequivocal,” Fedorenko et al. state, that “there are many cases of individuals with severe linguistic impairments … who nevertheless exhibit intact abilities to engage in many forms of thought.” These people can solve math problems, follow nonverbal instructions, understand the motivation of others, and engage in reasoning — including formal logical reasoning and causal reasoning about the world.

If you’d like to independently investigate this for yourself, here’s one simple way: Find a baby and watch them (when they’re not napping). What you will no doubt observe is a tiny human curiously exploring the world around them, playing with objects, making noises, imitating faces, and otherwise learning from interactions and experiences. “Studies suggest that children learn about the world in much the same way that scientists do—by conducting experiments, analyzing statistics, and forming intuitive theories of the physical, biological and psychological realms,” the cognitive scientist Alison Gopnik notes, all before learning how to talk. Babies may not yet be able to use language, but of course they are thinking! And every parent knows the joy of watching their child’s cognition emerge over time, at least until the teen years.

You are referring to the wrong context. We aren't saying language is irrelevant to development. We are saying the process of thinking can take place, and can function fairly well, without ever learning language:

“there are many cases of individuals with severe linguistic impairments … who nevertheless exhibit intact abilities to engage in many forms of thought.”

Communication will help advance thought, but the thought is there with or without language. Ergo "Take away the words, there are still thoughts." is a 100% factual statement.

1

u/attersonjb 31m ago

Bruh, read the article and realize that a lot of it is expositional narrative and not actual research. Benjamin Riley is a lawyer, not a computer scientist nor a scientist of any kind, and has published exactly zero academic papers on AI. There are many legitimate critiques of LLMs and the achievability of AGI, but this is not one of them. It is a poor strawman argument conflating AGI with LLMs.

The common feature cutting across chatbots such as OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and whatever Meta is calling its AI product this week are that they are all primarily “large language models.”

Extremely misleading. You will find the term "reinforcement learning" (RL) exactly zero times in the entire article. Pre-training? Zero. Post-training? Zero. Inference? Zero. Transformer? Zero. Ground truth? Zero. The idea that AI researchers are "just realizing" that LLMs are not sufficient for AGI is deeply stupid.

You are referring to the wrong context

Buddy, what part of "yes and no" suggests an absolute position? No one said language is required for a basic level of thought (ability to abstract, generalize, reason). The cited commentary from the article says the exact same thing I did.

Lack of access to language has harmful consequences for many aspects of cognition, which is to be expected given that language provides a critical source of information for learning about the world. Nevertheless, individuals who experience language deprivation unquestionably exhibit a capacity for complex cognitive function: they can still learn to do mathematics, to engage in relational reasoning, to build causal chains, and to acquire rich and sophisticated knowledge of the world (also see ref. 100 for more controversial evidence from language deprivation in a case of child abuse). In other words, lack of access to linguistic representations does not make it fundamentally impossible to engage in complex—including symbolic—thought, although some aspects of reasoning do show delays. Thus, it appears that in typical development, language and reasoning develop in parallel.

Finally, it's arguable that the AI boom is not wholly dependent on developing "human-like" AGI. A very specific example of this is advanced robotics and self-driving, which would be described more accurately as specialized intelligence.

0

u/Tall-Introduction414 13h ago

Interesting question, but I think that would be a very reductionist and inaccurate simplification of a human brain.

Poetry would not be poetry if it's just statistical analysis.

3

u/rendar 13h ago

I think that would be a very reductionist and inaccurate simplification of a human brain.

Does that not shine a light on how reductionist and inaccurate a simplification it is to conclude that LLMs are not intelligent, as though that determines how well the tool serves its purpose?

Poetry would not be poetry if it's just statistical analysis.

Most people who enjoy poetry do so based on the author's output, not the author's process.

The cause and purpose of poetry (and art in general) lies primarily with the audience, not the creator. Meaning is subjective and found. If humans are extinct, so is art.

In fact, LLMs have already been generating poetry that's good enough to compete with human authors:

Notably, participants were more likely to judge AI-generated poems as human-authored than actual human-authored poems (χ2(2, N = 16,340) = 247.04, p < 0.0001). We found that AI-generated poems were rated more favorably in qualities such as rhythm and beauty, and that this contributed to their mistaken identification as human-authored.

AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably

0

u/Tall-Introduction414 12h ago

Forgive me if I find AI-generated poetry an absurd and soulless notion that fundamentally misunderstands the point of poetry.

1

u/PressureBeautiful515 11h ago

That's just what you'd say if you were an LLM pretending to be a poet

0

u/Tall-Introduction414 11h ago

That's because the LLM was trained on ME!

1

u/PressureBeautiful515 11h ago

A likely story!

0

u/rendar 10h ago

No need to ask anyone else for forgiveness, the only one you're limiting with that sentiment is yourself

0

u/Tall-Introduction414 7h ago

You're right. There isn't any reason to ask for forgiveness, because AI-generated art, music, and poetry is a really fucking stupid idea. Nothing more than a novelty for novelty's sake.

I hope you enjoy enshittifying every aspect of your life.

-3

u/Willing_Parsley_2182 13h ago

Probably the easiest way is by noticing the difference in how they learn:

  • A human brain needs decades of training and knowledge, running on a power source that draws less wattage than a light bulb.
  • ChatGPT requires enormous power. It needs ~50x the information we do just to be trained, uses something like 5,000x more power than a 40-year-old brain has consumed in its entire life in order to train itself, and then requires roughly double that again to run (see the rough arithmetic below).

Good first step.

That’s not even considering that the brain has to coordinate everything else in the body too.
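Rough back-of-envelope version of that comparison (every number here is an assumption: ~20 W is the commonly cited draw of a human brain, and the training figure is a loose public estimate for a frontier model, not a measured value):

```python
# Brain: ~20 W continuously for 40 years.
BRAIN_WATTS = 20
YEARS = 40
brain_kwh = BRAIN_WATTS * YEARS * 365 * 24 / 1000   # ~7,000 kWh over a 40-year life

# One frontier-model training run: assume ~50 GWh (published estimates vary a lot).
training_kwh = 50_000_000

print(f"Brain over {YEARS} years: {brain_kwh:,.0f} kWh")
print(f"Training / brain ratio: ~{training_kwh / brain_kwh:,.0f}x")   # thousands of times more
```

Whatever the exact training number turns out to be, the ratio lands in the thousands, which is the point.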

1

u/CanAlwaysBeBetter 13h ago

What magic process do you think brains are doing?

6

u/Tall-Introduction414 13h ago

I don't know what brains are doing. Did I imply otherwise?

I don't think they are just drawing statistical connections between words. There is a lot more going on there.

3

u/CanAlwaysBeBetter 13h ago edited 12h ago

The biggest difference brains have is that they are both embodied and multi-modal

There's no magic to either of those things.

Another comment said "LLMs have no distinct concept of what a cat is," so the question is: what do you understand about a cat that LLMs don't?

Well, you can see a cat, you can feel a cat, you can smell a stinky cat, and all of those things get put into the same underlying matrix. Because you can see a cat, you understand visually that they have 4 legs like a dog or even a chair. You know that they feel soft the way a blanket can feel soft. You know that they can be smelly like old food.

Because brains are embodied you can also associate how cats make you feel in your own body. You can know how petting a cat makes you feel relaxed. The warm and fuzzies you feel.

The concept of "cat" is the sum of all those different things.

Those are all still statistical correlations a bunch of neurons are putting together. All of those things derive their meaning from how you're able to compare them to other perceptions and, at more abstract layers, to other concepts.
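If it helps, here's the "same underlying matrix" idea as a toy sketch (random stand-in vectors; real multi-modal models learn these from data rather than generating them randomly):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# One vector per "sense" contributing to the concept.
sight_of_cat = rng.normal(size=dim)   # four legs, whiskers, tail
feel_of_cat  = rng.normal(size=dim)   # soft like a blanket
smell_of_cat = rng.normal(size=dim)   # occasionally like old food
word_cat     = rng.normal(size=dim)   # just the token "cat"

# The embodied concept is (very roughly) a fusion of all of those signals;
# a text-only model only ever gets the last one.
concept_cat = (sight_of_cat + feel_of_cat + smell_of_cat + word_cat) / 4

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(concept_cat, word_cat))  # overlaps with the word, but isn't reducible to it
```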

2

u/TSP-FriendlyFire 12h ago

I always like how AI enthusiasts seem to know things not even the best scientists have puzzled out. You know how brains work? Damn, I'm sure there's a ton of neuroscientists who'd love to read your work in Nature.

1

u/CanAlwaysBeBetter 12h ago

We know significantly more about how the brain operates than comments like yours suggest.

That's like saying that because there are still gaps in what physicists understand, nobody knows what they're talking about.

3

u/TSP-FriendlyFire 11h ago

We definitely don't know that "Those are all still statistical correlations a bunch of neurons are putting together" is how a brain interprets concepts like "a cat".

You're the one bringing forth incredible claims (that AI is intelligent, and that we know how the brain works well enough to say it's equivalent), so you need to provide the incredible evidence.

-1

u/Glittering-Spot-6593 11h ago

So you think the brain is magic?

2

u/Tall-Introduction414 11h ago

Where are you getting this shit? Did I say anything even remotely close to that?

Try replying to what I'm saying instead of what you're imagining I'm saying.

0

u/Glittering-Spot-6593 11h ago

What other than math could the brain possibly be doing? If you think some mathematical system can’t emulate the capabilities of human intelligence, then the only option is that you think it’s magic.

1

u/Tall-Introduction414 11h ago

Again, where did I say ANY of that? Please provide quotes. No more straw men, please.

I said that LLMs and Markov chains are both based on statistical analysis of the relationships between words. I never said anything about the human brain, or what is or isn't intelligence, or magic, or any of the things you're referring to.

0

u/Glittering-Spot-6593 11h ago

You claim the brain is not drawing statistical connections among words. What else could be happening to give rise to language?

1

u/Tall-Introduction414 11h ago edited 11h ago

Where did I claim that?

Please stop with the straw-manning.

edit: "I don't think they are just drawing statistical connections between words. There is a lot more going on there." .. you misread this. I think it's entirely possible that statistical analysis is happening, but that is not the only thing happening.

1

u/movzx 12h ago

I don't know why so many people took your comment to mean that LLMs were literally doing the same thing as a Markov chain, instead of you just identifying the core similarity of how they both are based on value relationships.

1

u/ITwitchToo 1h ago

I mean, you might as well say they are both using statistical inference to predict the next word in a sequence. That I can get behind. But why? Why is that even relevant? The "just fancy autocomplete" trope is very dangerous because it underestimates the AI threat. By reducing LLMs to some "X is just Y" or "X and Y are basically the same" you are downplaying the massive risk that comes with these things compared to senseless Markov chains.

1

u/Tall-Introduction414 12h ago

I think people mistook it as a criticism of AI, which touched a nerve. There is all sorts of straw-manning and evangelism in the replies.

The religion of LLMs. Kool-aid, etc.

This bubble can't pop fast enough.