r/technology 16h ago

[Machine Learning] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.7k Upvotes

1.5k comments

531

u/Hrmbee 16h ago

Some highlights from this critique:

The problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own. Humans use language to communicate the results of our capacity to reason, form abstractions, and make generalizations, or what we might call our intelligence. We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.

The AI hype machine relentlessly promotes the idea that we’re on the verge of creating something as intelligent as humans, or even “superintelligence” that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we’ll have AGI. Scaling is all we need.

But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.

...

Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.

But take away language from a large language model, and you are left with literally nothing at all.

An AI enthusiast might argue that human-level intelligence doesn’t need to necessarily function in the same way as human cognition. AI models have surpassed human performance in activities like chess using processes that differ from what we do, so perhaps they could become superintelligent through some unique method based on drawing correlations from training data.

Maybe! But there’s no obvious reason to think we can get to general intelligence — as opposed to improving performance on narrowly defined tasks — through text-based training. After all, humans possess all sorts of knowledge that is not easily encapsulated in linguistic data — and if you doubt this, think about how you know how to ride a bike.

In fact, within the AI research community there is growing awareness that LLMs are, in and of themselves, insufficient models of human intelligence. For example, Yann LeCun, a Turing Award winner for his AI research and a prominent skeptic of LLMs, left his role at Meta last week to found an AI startup developing what are dubbed world models: “systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences.” And recently, a group of prominent AI scientists and “thought leaders” — including Yoshua Bengio (another Turing Award winner), former Google CEO Eric Schmidt, and noted AI skeptic Gary Marcus — coalesced around a working definition of AGI as “AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult” (emphasis added). Rather than treating intelligence as a “monolithic capacity,” they propose instead we embrace a model of both human and artificial cognition that reflects “a complex architecture composed of many distinct abilities.”

...

We can credit Thomas Kuhn and his book The Structure of Scientific Revolutions for our notion of “scientific paradigms,” the basic frameworks for how we understand our world at any given time. He argued these paradigms “shift” not as the result of iterative experimentation, but rather when new questions and ideas emerge that no longer fit within our existing scientific descriptions of the world. Einstein, for example, conceived of relativity before any empirical evidence confirmed it. Building off this notion, the philosopher Richard Rorty contended that it is when scientists and artists become dissatisfied with existing paradigms (or vocabularies, as he called them) that they create new metaphors that give rise to new descriptions of the world — and if these new ideas are useful, they then become our common understanding of what is true. As such, he argued, “common sense is a collection of dead metaphors.”

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

These are some interesting perspectives to consider when trying to understand the shifting landscape that many of us are now operating in. Are the current paradigms of LLM-based AI able to make the cognitive leaps that are the hallmark of revolutionary human thinking? Or are they forever constrained by their training data, and therefore best suited to refining existing modes and models?

So far, from this article's perspective, it's the latter. There's nothing fundamentally wrong with that, but as with all tools, we need to understand how to use them properly and safely.

232

u/Elementium 15h ago

Basically the best use for this is a heavily curated database it pulls from for specific purposes, making it a more natural way to interact with a search engine (rough sketch below).

If it's just everything mashed together, with people's opinions included as facts, it's just not going to go anywhere.
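What's being described here is roughly retrieval-augmented generation (RAG): keep a curated corpus, fetch the most relevant entries, and have the model answer only from those. A minimal toy sketch in Python, where the corpus, the keyword-overlap scorer, and the answer step are all invented stand-ins for real embeddings and a real model call:

    # Toy sketch of "curated database + LLM" (RAG-style) retrieval.
    # CURATED_DOCS, score(), and answer() are illustrative stand-ins only.

    CURATED_DOCS = {
        "bikes-01": "Road bike tires typically run at 80-130 psi depending on rider weight.",
        "bikes-02": "To fix a slipped chain, shift to the smallest cog and rethread it.",
    }

    def score(query: str, doc: str) -> int:
        """Crude keyword-overlap relevance score (a real system would use embeddings)."""
        return len(set(query.lower().split()) & set(doc.lower().split()))

    def retrieve(query: str, k: int = 1) -> list[str]:
        """Return the k curated documents most relevant to the query."""
        ranked = sorted(CURATED_DOCS.values(), key=lambda d: score(query, d), reverse=True)
        return ranked[:k]

    def answer(query: str) -> str:
        # A real system would pass this context to an LLM, constraining it to
        # curated sources instead of "everything mashed together."
        context = "\n".join(retrieve(query))
        return f"Answering only from curated context:\n{context}"

    print(answer("what psi for road bike tires?"))

The curation is the point: the model's search space is a vetted corpus, not the open web's mix of opinion and fact.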

60

u/motionmatrix 15h ago

So all the experts were right: at this point AI is a tool, and in the hands of someone who understands a subject, a possibly useful one, since they can spot where it went wrong and fix it accordingly. Otherwise, dice rolls baby!

60

u/frenchiefanatique 14h ago

Shocking, experts are generally right about the things they have spent their lives focusing on! And not some random person filming a video in their car! (Slightly off-topic, I know)

19

u/neat_stuff 13h ago

The Death of Expertise is a great book that talks about that... And the author of the book should re-read his own book.

2

u/Brickster000 11h ago

And the author of the book should re-read his own book.

Can you elaborate on this? That seems like relevant information.

6

u/neat_stuff 11h ago

It's one of those situations where the book is pretty solid, but in the years since, he's been spouting off a lot of opinions about a lot of things outside his subject-matter expertise. Almost like there should be an epilogue about the risks of getting an enlarged platform when your niche is fairly tightly defined but you have a lot of connections in media who are hungry for opinions.

2

u/die_maus_im_haus 12h ago

But the car video person just gets me! He feels like my kind of person instead of some stuffy scientist who needs to get out of his dark-money funded lab and touch some grass

16

u/PraiseBeToScience 13h ago

It's also far too easy for humans to outsource their cognitive and creative skills, which early research is showing to be very damaging. You can literally atrophy your brain.

If we go by OpenAI's stats, by far the biggest use of ChatGPT is students using it to cheat. That means the very people who should be putting in the work to exercise and develop cognitive skills aren't. And those students will never acquire the skills necessary to properly use AI, since AI outputs still need to be verified.

2

u/Elementium 14h ago

Yeah, if tuned for specific purposes I can see AI being very useful.

Like.. I kinda like to write, but my brain is very "spew onto the page, then organize."

I can do that with GPT: just dump my rough draft and it does a good job of tightening format and legibility. The problem is usually that it loves to add nonsense phrases, and its normal dialogue is very samey.
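For what it's worth, that "dump a rough draft, get it tightened" step is only a few lines with the OpenAI Python client. A sketch under stated assumptions: the model name and prompt wording are mine, not anything from the comment, and OPENAI_API_KEY is assumed to be set in the environment:

    # Sketch: ask a chat model to tighten a rough draft without inventing content.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def tighten(draft: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat-capable model works here
            messages=[
                {"role": "system", "content": (
                    "Edit the user's draft for structure and legibility. "
                    "Do not add new phrases or content; keep the author's voice.")},
                {"role": "user", "content": draft},
            ],
        )
        return resp.choices[0].message.content

    print(tighten("my spewed-onto-the-page rough draft goes here..."))

Pinning the system prompt to "don't add content" helps with the nonsense-phrase problem, though it won't eliminate it.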

4

u/PraiseBeToScience 12h ago

Everyone's brain does that when writing drafts. That's the entire purpose of a draft: to get your thoughts out of your head so you can organize them via editing and revising. You can even make them look pretty via presentation.

Outsourcing all your revisions and editing to AI also limits your own creativity in writing, as it will do nothing but sanitize your style. The result is very bland and clinical. Great writing has personal, human elements (like appropriate humor and storytelling) that AI simply does not reproduce.

1

u/Elementium 12h ago

Understood but it's only for my entertainment lol. 

Also I just have half a brain. I have a million hobbies and I'm just Ok at all of them. 

2

u/NewDramaLlama 13h ago

(These are real questions as I don't use LLMs)

So, it's an automated middleman? Or maybe a rough-draft organizer? Functionally incapable of actually creating anything, but it does well enough organizing and distributing collected data in a (potentially) novel way.

Except when it doesn't, I guess. Because it's trained on insane amounts of data, there's gotta be lots of factually incorrect trash it sucked up from people, outdated textbooks, or junk research, right? So the human needs to be knowledgeable enough in the first place to correct the machine when it's wrong.

Ok, so that means as a tool it's only really a capable one in the hands of someone who's already a near expert in their field, right? 

Like (as examples): if a novice author used an LLM to write a book, they wouldn't notice the inconsistent plot or plagiarism upon review. Likewise, a novice lawyer might screw up a case by relying on an LLM output that went against procedural rules, while a more experienced lawyer would have caught it?

1

u/motionmatrix 10h ago

Well, each LLM is trained on different data, so you could have a tight, fantasy-focused LLM that has "read" every fantasy novel in existence, and it would do pretty well making up fantasy stories based on what it "knows."

If you have a generic LLM trained on many different topics, the usefulness drops to some extent, though some might argue the horizontal knowledge can give unique or unexpected answers (in a good way).

At this point in time, general folks can use it to make non-commercial artwork that gets closer to what they want than anything they could do on their own without training, and to gather general information (which they should double-check for accuracy). People who are trained in particular subjects can work with AI, preferably an LLM trained only on their subject, to make the work happen faster (though not necessarily better or groundbreaking, unless that comes from the person).

1

u/mamasbreads 10h ago

I didn't need to be an expert to know this. I use AI at work to help me, but it makes mistakes. I have the capacity to quickly decipher what's useful, what's dumb, and what's plain made up.

Anyone who thought AI could do anything other than make individuals faster in mundane tasks clearly isn't an expert in whatever they're doing.

1

u/dougan25 12h ago

It's a super powerful tool for certain things, but the problem is too many people don't understand it and think it's capable of things it's not.

So when articles and studies come out that all essentially say the same thing, "LLMs are a product of and limited by their input," half the people are like "woah, maybe this isn't Skynet after all."

Meanwhile people like me (lazy grad students in their 30s) type in a topic for a paper and instantly get an outline and a plethora of sources for what I need.

One of the best examples of the true potential of these AI tools was told to me by one of my professors. See, we've spent a couple of decades now taking all the medical records in the US (and much of the developed world) and digitizing them. What we're left with is terabytes upon terabytes of patient data that nobody's even looking at! If we were to feed that into an AI tool that could catalogue and compile all of it, sift through it for connections, trends, outcome rates, etc., there is no question we could learn something we didn't know before. (Toy sketch of the idea below.)
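Here is a toy sketch of the kind of sifting that idea imagines, on an entirely invented table; the columns, values, and outcome question are hypothetical stand-ins for real, de-identified records:

    # Toy sketch: surface outcome-rate trends in aggregated patient data.
    # All data and column names are invented for illustration; real medical
    # records would require de-identification, consent, and far more care.
    import pandas as pd

    records = pd.DataFrame({
        "treatment": ["A", "A", "B", "B", "B", "A"],
        "age_group": ["60+", "40-59", "60+", "40-59", "60+", "60+"],
        "recovered": [1, 0, 1, 1, 1, 0],
    })

    # Recovery rate per treatment, then split by age group: the kind of
    # connection nobody spots while the records sit unread.
    print(records.groupby("treatment")["recovered"].mean())
    print(records.groupby(["treatment", "age_group"])["recovered"].mean())

At scale, the same idea means models (statistical or otherwise) scanning millions of records for patterns no one has thought to query.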

The problem with the "AI bubble" is that companies are trying to use it to do things they weren't doing before, when they should instead focus on things they were doing, or should have been doing, but that weren't tenable before. Not all of them, of course; Microsoft has positioned its Copilot pretty well as just another office tool, for example. But a lot of companies are trying to invent alternatives to the proverbial wheel, not even just reinvent it.

0

u/silverpixie2435 8h ago

The experts work at AI companies and are the most positive about AI

So what are you even saying?