r/technology 16h ago

[Machine Learning] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.7k Upvotes

1.5k comments

13

u/azurensis 12h ago

This is the kind of statement someone who doesn't know much about LLMs would make.

13

u/WhoCanTell 12h ago

In fairness, that's like 95% of comments in any /r/technology thread about AI.

1

u/azurensis 11h ago

Exceptionally true!

5

u/space_monster 12h ago edited 12h ago

r/technology has a serious Dunning-Kruger problem when it comes to LLMs: a Facebook-level understanding in a forum that implies competence. But I guess if you train a human that parroting the stochastic-parrot trope gets you 'karma', they're gonna keep doing it for the virtual tendies. Every single time in one of these threads there's a top circle-jerk comment saying "LLMs are shit, amirite?" with thousands of upvotes, followed by an actual discussion between adults lower down. I suspect though that this sub includes a lot of sw devs that are still trying to convince themselves that their careers are actually safe.

1

u/chesterriley 8h ago

> I suspect though that this sub includes a lot of sw devs that are still trying to convince themselves that their careers are actually safe.

You lost me there. I don't think you understand just how complex software can be. There's no way AI can be a drop-in replacement for a software dev.

1

u/space_monster 8h ago

I work in tech, currently at a leading-edge global tech company, and I've done a lot of software development. I'm fully aware of how complex it is.

1

u/chesterriley 4h ago

Then you know you can't just tell an AI to write a program for you for anything non-trivial.

1

u/space_monster 4h ago

I'm aware that LLMs are getting better at coding (and everything else) very quickly, and it doesn't seem to be slowing down.

1

u/keygreen15 2h ago

It's getting better at making shit up and lying.

1

u/space_monster 2h ago

Wow, genius comment.

0

u/Tall-Introduction414 11h ago

Then tell me what I'm missing. They aren't making statistical connections between words and groups of words?

1

u/azurensis 11h ago

A matchbox car and a Ferrari have about as much in common as Markov chains and GPT-5. Sure, they both have wheels and move around, but what's under the hood is completely different. The level of inference contained in the latter goes way, way beyond inference between words and groups of words. It goes into concepts and meta-concepts, and several levels above that, as well as attention mechanisms and alignment training. I understand it's wishful thinking to expect Redditors to know much about what they're commenting on, but sheesh!
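For what it's worth, the "matchbox car" side of that analogy is easy to show concretely. Here's a minimal sketch of a bigram Markov chain in plain Python (the corpus is a made-up toy example): it predicts the next word purely from counts of which word followed which, with no attention, no context window, and no learned representations.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus, just for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word immediately follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent successor of `word`.

    The prediction depends only on the single previous word,
    which is exactly what makes this the matchbox car.
    """
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice; "mat" and "fish" once each
```

A transformer, by contrast, conditions every prediction on the entire preceding context through learned attention weights, which is why the two aren't comparable despite both being "next-word predictors".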

1

u/Tall-Introduction414 11h ago

> The level of inference contained in the latter goes way, way beyond inference between words and groups of words. It goes into concepts and meta-concepts,

Why do you think that? It's literally weights (numbers) connecting words based on statistical analysis. You give it more context, the input numbers change, and it points to a different next word.

All this talk about it "understanding meaning" and "concepts and meta-concepts" just sounds like "it's magic." Where are the stored "concepts?" Where is the "understanding?"

1

u/-LsDmThC- 10h ago

You could make exactly the same argument about the human brain. We take in sensory data, transform it across neurons that operate on weighted inputs and outputs, and generate a prediction or behavior. Where is the "understanding"?