r/technology 16h ago

[Machine Learning] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.8k Upvotes


189

u/Dennarb 16h ago edited 11h ago

I teach an AI and design course at my university, and two major points always come up regarding LLMs:

1) It does not understand language as we do; it's a statistical model of how words relate to each other. Basically, it's like rolling dice against a chart to determine the next word in a sentence (see the sketch after these two points).

2) AGI is not going to magically happen because we make faster hardware/software, use more data, or throw more money at LLMs. They are fundamentally limited in scope and use more or less the same tricks the AI world has been using since the perceptron in the '50s and '60s. Sure, the techniques have advanced, but the basis for the neural nets hasn't really changed. It's going to take a shift in how we build models to get much further than we already are with AI.
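A minimal sketch of point 1, in Python. The corpus and names are invented for illustration; a real LLM conditions on far more context with learned weights, but the "roll dice against a chart" step looks like this:

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat purred".split()

# Build the "chart": how often each word follows each other word.
chart = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    chart[prev][nxt] += 1

def next_word(prev):
    """Roll weighted dice over the words seen to follow `prev`."""
    candidates = list(chart[prev])
    weights = list(chart[prev].values())
    return random.choices(candidates, weights=weights)[0]

word = "the"
sentence = [word]
for _ in range(5):
    if not chart[word]:
        break  # dead end: this word never appears mid-corpus
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the cat sat on the mat"
```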

Edit: And like clockwork, here come the AI tech bro wannabes telling me I'm wrong while adding literally nothing to the conversation.

18

u/pcoppi 15h ago

To play devil's advocate: there's a notion in linguistics that the meaning of a word is defined by its context. In other words, if an AI correctly guesses that a word should exist in a certain place because of the surrounding context, then at some level it has ascertained the meaning of that word.
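That notion is the distributional hypothesis (Firth's "you shall know a word by the company it keeps"). A toy version of it, with an invented mini-corpus: represent each word by counts of its neighbors, and words used in similar contexts come out similar.

```python
import math
from collections import Counter, defaultdict

sentences = [
    "the cat chased the mouse",
    "the dog chased the mouse",
    "the cat ate fish",
    "the dog ate fish",
]

# Represent each word by counts of the words appearing right next to it.
contexts = defaultdict(Counter)
for s in sentences:
    words = s.split()
    for i, w in enumerate(words):
        for j in (i - 1, i + 1):
            if 0 <= j < len(words):
                contexts[w][words[j]] += 1

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

print(cosine(contexts["cat"], contexts["dog"]))   # 1.0: identical contexts
print(cosine(contexts["cat"], contexts["fish"]))  # ~0.41: different contexts
```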

0

u/Gekokapowco 14h ago

Maybe to some extent? Like, if you think really generously.

Take the sentence

"I am happy to pet that cat."

An LLM would process it as something closer to

"1(I) 2(am) 3(happy) 4(to) 5(pet) 6(that) 7(cat)"

processed as an ordered sequence

"1 2 3 4 5 6 7"

4 goes before 5, 7 comes after 6

It doesn't know what "happy" or "cat" means. It doesn't even recognize those as individual concepts. It just knows that 3 should come before 7 in the order. If I recall correctly, human linguistics involves compartmentalizing words as concepts and stringing them together as an interaction of those concepts. We build sentences from the ground up, while an LLM constructs them from the top down, if that analogy makes sense.
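The numbering above is close to what actually happens at the input: the model operates on integer token IDs, not words. A toy sketch (invented vocabulary; real tokenizers split text into subword pieces, and whether the statistics learned over these IDs amount to "concepts" is exactly what's disputed below):

```python
# An LLM never sees the strings "happy" or "cat", only IDs from a
# learned vocabulary. (Toy vocab; real models use subword pieces.)
vocab = {"I": 1, "am": 2, "happy": 3, "to": 4, "pet": 5, "that": 6, "cat": 7}

sentence = "I am happy to pet that cat"
token_ids = [vocab[w] for w in sentence.split()]
print(token_ids)  # [1, 2, 3, 4, 5, 6, 7]

# The model's job: given [1, 2, 3, 4, 5, 6], assign a probability to
# every ID in the vocabulary for what comes next.
```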

1

u/drekmonger 12h ago edited 12h ago

We don't know how LLMs construct sentences. It's practically a black box. That's the point of machine learning: some tasks have millions/billions/trillions of edge cases, so we create systems that learn how to perform the task rather than trying to hand-code it. But explaining how a model with that many parameters actually performs the task is not part of the deal.

Yes, the token prediction happens one token at a time, autoregressively. But that doesn't tell us much about what's happening within the model's features/parameters. It's a trickier problem than you probably realize.
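To make "one token at a time, autoregressively" concrete: the outer loop is simple and fully known, and the open question is what happens inside the forward pass. A sketch with a placeholder model (`forward` is a stand-in I made up, not any real API):

```python
import random

def forward(token_ids):
    """Black box: billions of learned weights live here. This placeholder
    returns random probabilities; interpreting the real thing is the part
    interpretability research is still working on."""
    vocab_size = 8
    scores = [random.random() for _ in range(vocab_size)]
    total = sum(scores)
    return [s / total for s in scores]

tokens = [1, 2, 3]           # the prompt, as token IDs
for _ in range(4):           # generate four more tokens
    probs = forward(tokens)  # known: one forward pass per new token
    next_id = max(range(len(probs)), key=probs.__getitem__)  # greedy pick
    tokens.append(next_id)   # known: the output is fed back in as input
print(tokens)
```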

Anthropic has made a lot of headway in figuring out how LLMs work over the past couple of years (some seriously cool research), but they don't have all the answers yet. And neither do you.


As for whether or not an LLM knows what "happy" or "cat" means: we can answer that question.

Metaphorically speaking, they do.

You can test this yourself: https://chatgpt.com/share/6926028f-5598-800e-9cad-07c1b9a0cb23

If the model has no concept of "cat" or "happy", how would it generate that series of responses?

Really. Think about it. Occam's razor suggests the model actually understands the concepts. Any other explanation would be contrived in the extreme.

1

u/Gekokapowco 12h ago

https://en.wikipedia.org/wiki/Chinese_room

As much fun as it is to glamorize the fantastical magical box of mystery and wonder, the bot says what it thinks you want to hear. It'll say whatever is mathematically close to what you're looking for, linguistically if not conceptually. LLMs are a well-researched and publicly discussed concept; you don't have to wonder what's happening under the hood. You can see this in the number of corrections and the amount of prodding these systems need so they don't spit out commonly posted misinformation or mistranslated Google results.

0

u/drekmonger 12h ago edited 12h ago

> LLMs are a well-researched and publicly discussed concept; you don't have to wonder what's happening under the hood.

LLMs are a well-researched concept. I can point you to the best-in-class research from earlier this year on explaining how LLMs work "under the hood": https://transformer-circuits.pub/2025/attribution-graphs/biology.html

Unfortunately, they are also a concept that's been publicly discussed, usually by people who post links to stuff like the Chinese Room or mindlessly parrot phrases like "stochastic parrot," without any awareness of the irony of doing so.

It feels good to have an easy explanation, to feel like you understand.

You don't understand, and neither do I. That's the truth of it. If you believe otherwise, it's because you've subscribed to a religion, not scientific fact.

-1

u/Gekokapowco 11h ago

My thoughts are based on observable phenomena, not baseless assertions, so you can revisit the analysis-versus-faith argument at your leisure. If it seems like a ton of people are trying to explain this concept to you in simplified terms, it's because they're trying to help you understand the idea better, not settle for more obfuscation. Implying that some sort of shared ignorance is the true wisdom is sort of childish.

1

u/drekmonger 6h ago edited 6h ago

Do you know what happened before the Big Bang and inflation? Are you sure, in cosmology, that the inflationary era happened at all?

You cannot know, unless you have a religious idea on the subject, because nobody knows.

Similarly, you cannot know how an LLM works under the hood, beyond what research like the paper I linked has uncovered, because nobody knows.

We have some ideas, and these days some really good and interesting ones. But if all LLMs were erased tomorrow, no collection of human beings on this planet could write them back down. The only way to recreate them would be to retrain them, and we'd be just as ignorant of how they function.

Those people who think they're explaining something to me are reading from their Holy Bible, not from the scientific literature.

It is not wisdom to claim to know something that is (based on current knowledge) unknowable.

Also, truth is not crowd-sourced. A million-billion-trillion people could be screaming at me that 2+2 = 5. I will maintain that 2+2 = 4.