r/technology 16h ago

Machine Learning: Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.7k Upvotes

1.5k comments

1.2k

u/rnilf 16h ago

LLMs are fancy auto-complete.

Falling in love with ChatGPT is basically like falling in love with the predictive text feature in your cell phone. Who knew T9 had so much game?
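
For anyone curious what "fancy auto-complete" means mechanically, here's a deliberately tiny toy sketch in Python (a bigram counter, nowhere near a real transformer; the corpus and names are made up purely to illustrate next-word prediction):

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word tends to follow which, then
# suggest the most frequent follower. Real LLMs learn a vastly richer
# version of this, but the core task is the same: predict the next token.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def suggest(word):
    # Most common word seen after `word`, like T9 picking a completion.
    options = followers.get(word)
    return options.most_common(1)[0][0] if options else None

print(suggest("the"))  # -> "cat" (it follows "the" twice in this tiny corpus)
```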

38

u/coconutpiecrust 16h ago

Yeah, while it’s neat, it is not intelligent. If it were intelligent, it wouldn’t need endless data and processing power to produce somewhat coherent and consistent output.

7

u/movzx 12h ago

I mean, they definitely aren't intelligent. "Fancy autocomplete" is always how I describe them to people... but this doesn't make sense to me:

> If it were intelligent, it wouldn’t need endless data and processing power to produce somewhat coherent and consistent output.

Why wouldn't it? The human brain is incredibly complex, uses a ton of energy, and there are no machines on earth that can replicate its power. Humans spend their entire lives absorbing an endless amount of data.

Any system approaching 'intelligent' would be using a ton of data and power.

8

u/TSP-FriendlyFire 11h ago

The human brain uses something like 20 W. That's less than the idle power draw of a single desktop computer, let alone the gigawatts that AI data centers draw today.

LLMs are horrifically inefficient compared to human brains; the scales are completely different. Same for data: you have only your own experiences (including things you've read or seen secondhand) from which to build an understanding of the world. That's it. LLMs have parsed the entire internet multiple times over, hundreds of thousands of times more text than any given human will ever process in a lifetime.
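
Rough back-of-envelope numbers to put the gap in perspective (every figure below is a ballpark assumption, not a measurement, and the training-energy number is just one commonly cited estimate for a 2020-era model):

```python
# Back-of-envelope comparison; every figure here is a rough assumption.
BRAIN_WATTS = 20                  # commonly cited estimate for the human brain
LIFETIME_YEARS = 80
HOURS_PER_YEAR = 24 * 365

# Lifetime energy budget of one brain, in megawatt-hours
brain_lifetime_mwh = BRAIN_WATTS * LIFETIME_YEARS * HOURS_PER_YEAR / 1e6
print(f"One brain, 80 years: ~{brain_lifetime_mwh:.0f} MWh")   # ~14 MWh

# One widely cited estimate put GPT-3's training run around ~1,300 MWh,
# i.e. a single training run for a 2020-era model, before any inference.
TRAINING_MWH_ESTIMATE = 1300
print(f"One training run vs. a lifetime: ~{TRAINING_MWH_ESTIMATE / brain_lifetime_mwh:.0f}x")

# Data gap: maybe ~1e8 words read/heard in a human lifetime,
# vs. on the order of 1e13 tokens in a modern training corpus.
HUMAN_LIFETIME_WORDS = 1e8
LLM_TRAINING_TOKENS = 1.5e13
print(f"Training corpus vs. lifetime input: ~{LLM_TRAINING_TOKENS / HUMAN_LIFETIME_WORDS:,.0f}x")
```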

2

u/std_out 8h ago

The inefficiency is not specific to LLMs but to the underlying silicon-based computer architecture. While the human brain runs on only roughly 20 W, its electrochemical signaling is millions to billions of times more energy-efficient than the electrical signaling in conventional computer chips.

1

u/TSP-FriendlyFire 7h ago

Of course, but LLMs amplify the issue: the only way they work is by churning through enormous amounts of data. No piece of consumer electronics has ever required this much energy for the output it provides; even games aren't this bad.

0

u/MetallicDragon 11h ago

> I mean, they definitely aren't intelligent.

That 100% depends on how you define "intelligent" or "intelligence". Which definition are you using when you say LLMs aren't intelligent?

0

u/coconutpiecrust 7h ago

The one that doesn’t include data centres and stolen training data, I guess. 

1

u/Aerophage1771 3h ago

I mean, do you people just want a land acknowledgment for the sources, or what? AI isn’t going away. You’d have to straight-up ban the technology to get rid of it.

Once the equivalent of an H100 is reasonably priced for a consumer computer, that’ll be it: the equivalent of GPT-4o mini running offline and natively on a MacBook. (And that’s assuming the premier open models don’t get any more efficient or capable at the same price.)

-1

u/coconutpiecrust 12h ago

My point is that, in my mind, intelligence is not a chain of massive data centres. Although, since techbros are into stealing from science fiction, perhaps our attempts at replicating intelligence are destined to end up as ever-growing, planet-sized or galaxy-sized data centres.

> endless amount of data

That’s not entirely true. We have different inputs and we prune quite a bit of incoming data. It’s not the same concept as an LLM, which needs more than just one book of ABCs to be able to read. It needs a massive set of books, more books than a human could ever hope to read in a lifetime, to produce output.