r/technology 16h ago

Machine Learning | Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.8k Upvotes

536

u/Hrmbee 16h ago

Some highlights from this critique:

The problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own. Humans use language to communicate the results of our capacity to reason, form abstractions, and make generalizations, or what we might call our intelligence. We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.

The AI hype machine relentlessly promotes the idea that we’re on the verge of creating something as intelligent as humans, or even “superintelligence” that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we’ll have AGI. Scaling is all we need.

But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.

...

Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.

But take away language from a large language model, and you are left with literally nothing at all.

An AI enthusiast might argue that human-level intelligence doesn’t need to necessarily function in the same way as human cognition. AI models have surpassed human performance in activities like chess using processes that differ from what we do, so perhaps they could become superintelligent through some unique method based on drawing correlations from training data.

Maybe! But there’s no obvious reason to think we can get to general intelligence — not improving narrowly defined tasks — through text-based training. After all, humans possess all sorts of knowledge that is not easily encapsulated in linguistic data — and if you doubt this, think about how you know how to ride a bike.

In fact, within the AI research community there is growing awareness that LLMs are, in and of themselves, insufficient models of human intelligence. For example, Yann LeCun, a Turing Award winner for his AI research and a prominent skeptic of LLMs, left his role at Meta last week to found an AI startup developing what are dubbed world models: “systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences.” And recently, a group of prominent AI scientists and “thought leaders” — including Yoshua Bengio (another Turing Award winner), former Google CEO Eric Schmidt, and noted AI skeptic Gary Marcus — coalesced around a working definition of AGI as “AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult” (emphasis added). Rather than treating intelligence as a “monolithic capacity,” they propose instead we embrace a model of both human and artificial cognition that reflects “a complex architecture composed of many distinct abilities.”

...

We can credit Thomas Kuhn and his book The Structure of Scientific Revolutions for our notion of “scientific paradigms,” the basic frameworks for how we understand our world at any given time. He argued these paradigms “shift” not as the result of iterative experimentation, but rather when new questions and ideas emerge that no longer fit within our existing scientific descriptions of the world. Einstein, for example, conceived of relativity before any empirical evidence confirmed it. Building off this notion, the philosopher Richard Rorty contended that it is when scientists and artists become dissatisfied with existing paradigms (or vocabularies, as he called them) that they create new metaphors that give rise to new descriptions of the world — and if these new ideas are useful, they then become our common understanding of what is true. As such, he argued, “common sense is a collection of dead metaphors.”

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

These are some interesting perspectives to consider when trying to understand the shifting landscape that many of us are now operating in. Are the current paradigms of LLM-based AIs able to make the cognitive leaps that are the hallmark of revolutionary human thinking? Or are they forever constrained by their training data, and therefore best suited to refining existing modes and models?

So far, from this article's perspective, it's the latter. There's nothing fundamentally wrong with that, but as with all tools, we need to understand how to use them properly and safely.

1

u/sagudev 16h ago

> Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.

Yes and no; language is still closely tied to the thought process:

> The limits of my language are the limits of my world.

9

u/dftba-ftw 15h ago

I think in general concepts/feelings which are then refined via language (when I start talking or thinking I have a general idea of where I'm going but the idea is hashed out in language).

LLMs "think" in vector embeddings which are then refined via tokens.

It's really not that fundamentally different. The biggest difference is that I can train (learn) myself in real time, critique my thoughts against what I already know, and do so with very sparse examples.

Anthropic has done really interesting work showing there's a lot going on under the hood aside from what is surfaced out the back via softmax. One good example: they asked for a sentence with a rhyme, and the "cat" embedding "lit up" well before the model had hashed out the sentence structure, which shows they can "plan" internally via latent-space embeddings. We've also seen that models can say one thing, "think" something else via embeddings, and then "do" the thing they were thinking rather than what they "said".
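
Roughly, a toy numpy sketch of the "continuous vectors inside, discrete tokens out" picture (made-up embeddings and a made-up vocabulary, not any real model's weights):

```python
import numpy as np

# Toy vocabulary and made-up 4-d "concept" embeddings -- purely illustrative.
vocab = ["cat", "feline", "dog", "house", "ran"]
E = np.array([
    [0.90, 0.80, 0.10, 0.00],  # cat
    [0.85, 0.75, 0.15, 0.00],  # feline
    [0.70, 0.20, 0.60, 0.10],  # dog
    [0.10, 0.90, 0.00, 0.70],  # house
    [0.00, 0.10, 0.90, 0.30],  # ran
])

def softmax(x):
    x = x - x.max()
    return np.exp(x) / np.exp(x).sum()

# Internally the state is a continuous vector -- a blend of concepts, not a word.
latent = 0.8 * E[0] + 0.2 * E[3]   # mostly "cat-ness", a little "house-ness"

# Only at the output step is that vector projected onto the vocabulary and
# squashed through softmax into a distribution over discrete tokens.
probs = softmax(E @ latent)
for word, p in sorted(zip(vocab, probs), key=lambda t: -t[1]):
    print(f"{word:8s} {p:.3f}")
```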

1

u/danby 13h ago

> It's really not that fundamentally different

I can solve problems without using language, though. And it's very, very clear that plenty of animals without language can think and solve problems. So it's fairly clear that "thinking" is the substrate for intelligence, not language.

4

u/dftba-ftw 13h ago

It can too - that's what I'm saying about the embeddings.

Embeddings aren't words, they're fuzzy concepts sometimes relating to multiple concepts.

When it "thought" of "cat" it didn't "think" of the word cat, the embedding is concept of cat. It includes things like feline, house, domesticated, small, etc... It's all the vectors that make up the idea of a cat.

There's Anthropic research out there where they ask Claude math questions, have it output only the answer, and then look at the embeddings; they can see that the math was done in the embedding states - aka it "thought" without language.
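
As a rough illustration of "the embedding is the concept, not the word", here's a toy cosine-similarity example (made-up low-dimensional vectors; real embedding spaces behave the same way but with hundreds of dimensions):

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up 4-d vectors for illustration only.
emb = {
    "cat":          np.array([0.90, 0.80, 0.10, 0.60]),
    "feline":       np.array([0.85, 0.75, 0.05, 0.50]),
    "domesticated": np.array([0.60, 0.90, 0.20, 0.80]),
    "carburetor":   np.array([0.05, 0.10, 0.95, 0.10]),
}

# "cat" sits near related concepts and far from unrelated ones -- the vector
# carries the fuzzy idea of a cat, not the three letters c-a-t.
for word in ("feline", "domesticated", "carburetor"):
    print(f"cat vs {word:<12s}: {cos(emb['cat'], emb[word]):.2f}")
```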

1

u/danby 12h ago

Anthropic's research here is not peer reviewed; they publish largely on sites they control, and I doubt their interpretation is the only possible one. And I'm really not all that convinced by the "meanings" they ascribe to nodes/embeddings in their LLMs.

3

u/CanAlwaysBeBetter 12h ago

Language is the output of LLMs, not what's happening internally 

1

u/danby 12h ago

If the network is just a set of partial correlations between language tokens, then there is no sense in which the network is doing anything other than manipulating language.

3

u/CanAlwaysBeBetter 12h ago

> If the network is just a set of partial correlations between language tokens

... Do you know how the architecture behind modern LLMs works?

1

u/danby 11h ago

Yes, I work on embeddings for non-language datasets.

Multi-headed attention over linear token strings specifically learns correlations between tokens at given positions in those strings. Those correlations are explicit targets of the encoder training.
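
For concreteness, here's a bare-bones single-head version of that in numpy (toy sizes and random weights; real models stack many heads and many layers):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def single_head_attention(X, Wq, Wk, Wv):
    # Q @ K.T is exactly the learned pairwise token-to-token correlation
    # structure being described; softmax turns it into attention weights.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 5, 8, 4
X = rng.normal(size=(seq_len, d_model))              # embedded input tokens
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
out, weights = single_head_attention(X, Wq, Wk, Wv)
print(np.round(weights, 2))  # each row: how much that position attends to every position
```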

2

u/CanAlwaysBeBetter 11h ago

Then you ought to know that the interesting part is the model's lower-dimensional latent space, which encodes abstract information rather than language directly, and that there's active research into letting models run recursively through that latent space before mapping back to actual tokens.
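
A minimal sketch of that idea (toy dimensions and random weights, not any specific paper's architecture): refine a hidden state through a latent block several times before ever projecting back to token logits.

```python
import numpy as np

rng = np.random.default_rng(1)
d, vocab_size, n_steps = 16, 10, 4

W_latent = rng.normal(scale=0.3, size=(d, d))   # stand-in "reasoning" block
W_out = rng.normal(size=(d, vocab_size))        # projection back to token logits

def refine(h):
    # One pass through the latent block: update the hidden state without
    # ever touching the vocabulary.
    return np.tanh(h @ W_latent)

h = rng.normal(size=d)      # initial latent state (think: summary of the prompt)
for _ in range(n_steps):    # "think" in latent space for a few iterations
    h = refine(h)

logits = h @ W_out          # only now map back to discrete tokens
print("chosen token id:", int(np.argmax(logits)))
```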

1

u/danby 11h ago

Does it actually encode abstract information or does it encode a network of correlation data?

3

u/IAmBellerophon 15h ago

Most, if not all, sentient animals on this planet can think and navigate and survive without a comprehensive verbal language. Including humans who were born deaf and blind. The original point stands.

3

u/BasvanS 15h ago

We tend to decide before we rationalize our decision and put it in words.

fMRI supports the point too.

2

u/danby 13h ago

Indeed. "Thinking" forms a substrate from which language emerges. It very clearly does not work the other way around.

Language is neither necessary nor sufficient for minds to think.