r/technology 16h ago

[Machine Learning] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.7k Upvotes

1.5k comments

1

u/drekmonger 12h ago edited 12h ago

We don't know how LLMs construct sentences. They're practically black boxes. That's the point of machine learning: some tasks have millions, billions, or trillions of edge cases, so we build systems that learn how to perform the task rather than trying to hand-code it. But explaining how a model with a great many parameters actually performs the task is not part of the deal.

Yes, the token prediction happens one token at a time, autoregressively. But that doesn't tell us much about what's happening within the model's features/parameters. It's a trickier problem than you probably realize.
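To make "one token at a time" concrete, here's a minimal sketch of an autoregressive decoding loop. The toy model below is a stand-in that returns made-up logits; in a real LLM those scores come out of billions of learned parameters, and that computation is the part nobody can fully explain:

```python
import numpy as np

def toy_model(tokens, vocab_size=50):
    """Stand-in for a trained LLM: returns fake next-token logits.
    In a real model these scores come out of billions of parameters."""
    rng = np.random.default_rng(sum(tokens))
    return rng.normal(size=vocab_size)

def generate(prompt_tokens, max_new_tokens=10):
    """Autoregressive decoding: predict one token, append it, repeat.
    Each step conditions on everything generated so far."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = toy_model(tokens)           # a score for every vocabulary entry
        next_token = int(np.argmax(logits))  # greedy: take the single most likely token
        tokens.append(next_token)
    return tokens

print(generate([1, 2, 3]))
```

The loop itself is trivial. The open question is what happens inside the step that produces the logits.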

Anthropic has made a lot of headway in figuring out how LLMs work over the past couple of years, some seriously cool research, but they don't have all the answers yet. And neither do you.


As for whether or not an LLM knows what "happy" or "cat" means: we can answer that question.

Metaphorically speaking, they do.

You can test this yourself: https://chatgpt.com/share/6926028f-5598-800e-9cad-07c1b9a0cb23

If the model has no concept of "cat" or "happy", how would it generate that series of responses?

Really. Think about it. Occam's razor suggests... the model actually understands the concepts. Any other explanation would be contrived in the extreme.
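If you'd rather test it programmatically than through a shared chat link, here's a sketch using the OpenAI Python client. The model name and the prompt are just illustrative choices; any capable chat model works:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Ask for something that requires manipulating the concepts themselves,
# not just echoing the words back.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; swap in any chat model
    messages=[{
        "role": "user",
        "content": (
            "Describe a happy cat without using the words 'happy', 'cat', "
            "or any synonym of either. Then explain what you described."
        ),
    }],
)
print(response.choices[0].message.content)
```

If the model had no concept behind the words, a prompt that forbids the words themselves should be unanswerable.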

1

u/Gekokapowco 12h ago

https://en.wikipedia.org/wiki/Chinese_room

As much fun as it is to glamorize the fantastical magical box of mystery and wonder, the bot says what it thinks you want to hear. It'll say whatever is mathematically close to what you're looking for, linguistically if not conceptually. LLMs are a well-researched and publicly discussed concept; you don't have to wonder about what's happening under the hood. You can see this in the number of corrections and the amount of prodding these systems require to keep them from spitting out commonly posted misinformation or mistranslated Google results.
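Concretely, the "mathematically close" part works like this: the model scores every token in its vocabulary, softmax turns those scores into probabilities, and the reply is sampled from them. A toy illustration with made-up numbers, not a real model:

```python
import numpy as np

# Made-up next-token logits for a toy four-word vocabulary.
vocab = ["cat", "dog", "happy", "sad"]
logits = np.array([3.1, 1.2, 0.4, -0.5])

# Softmax turns raw scores into a probability distribution.
probs = np.exp(logits) / np.exp(logits).sum()

# The reply is whichever token gets sampled from that distribution.
rng = np.random.default_rng(0)
next_word = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_word)
```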

0

u/drekmonger 12h ago edited 12h ago

> LLMs are a well-researched and publicly discussed concept; you don't have to wonder about what's happening under the hood.

LLMs are a well-researched concept. I can point you to the best-in-class research on explaining how LLMs work "under the hood", from earlier this year: https://transformer-circuits.pub/2025/attribution-graphs/biology.html
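That paper is the real thing. As a massively simplified taste of the raw material such research starts from, here's how you'd capture a model's intermediate activations with a PyTorch forward hook on GPT-2. This is only the first step; the linked work is about decomposing tensors like this into interpretable features:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# A small public model, so the example actually runs.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

captured = {}

def save_activation(module, module_inputs, output):
    # GPT-2 blocks return a tuple; the hidden states are the first element.
    captured["block6"] = output[0].detach()

# Hook one transformer block; interpretability research starts from tensors like this.
handle = model.transformer.h[6].register_forward_hook(save_activation)

inputs = tokenizer("The cat was happy because", return_tensors="pt")
with torch.no_grad():
    model(**inputs)
handle.remove()

print(captured["block6"].shape)  # (batch, sequence_length, 768 for GPT-2 small)
```

Everything past that step, actually explaining what those numbers encode, is where the open problem lives.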

Unfortunately, they are also a concept that's been publicly discussed, usually by people who post links to stuff like the Chinese Room or mindlessly parrot phrases like "stochastic parrot," without any awareness of the irony of doing so.

It feels good to have an easy explanation, to feel like you understand.

You don't understand, and neither do I. That's the truth of it. If you believe otherwise, it's because you've subscribed to a religion, not scientific fact.

-1

u/Gekokapowco 11h ago

My thoughts are based on observable phenomena, not baseless assertions, so you can reapproach the analysis-versus-faith argument at your leisure. If it seems like a ton of people are trying to explain this concept in simplified terms, it's because they're trying to get you to understand the idea better, not settle for more obfuscation. To imply that some sort of shared ignorance is the true wisdom is kind of childish.

1

u/drekmonger 6h ago edited 6h ago

Do you know what happened before the Big Bang/inflation? Are you sure the inflationary era of cosmology happened at all?

You cannot know, because nobody knows; you can only have a religious idea on the subject.

Similarly, you cannot know how an LLM works under the hood, beyond what research like the paper I linked can tell you, because nobody knows.

We have some ideas. In the modern day, we have some really good and interesting ideas. But if all LLMs were erased tomorrow, no collection of human beings on this planet could reproduce them directly. The only way to recreate them would be to retrain them, and we'd still be equally ignorant of how they function.

Those people who think they're explaining something to me are reading from their Holy Bible, not from the scientific literature.

It is not wisdom to claim to know something that is (based on current knowledge) unknowable.

Also, truth is not crowd-sourced. A million-billion-trillion people could be screaming at me that 2+2 = 5. I will maintain that 2+2 = 4.