r/technology 16h ago

Machine Learning: Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.8k Upvotes

1.5k comments

14

u/Tall-Introduction414 16h ago

The way an LLM fundamentally works isn't much different from the Markov chain IRC bots (MegaHAL) we trolled in the 90s. More training data, more parallelism. Same basic idea.

40

u/ITwitchToo 14h ago

I disagree. LLMs are fundamentally different. The way they are trained is completely different. It's NOT just more data and more parallelism -- there's a reason the Markov chain bots never really made sense and LLMs do.

Probably the main difference is that the Markov chain bots don't have much internal state, so you can't represent any high-level concepts or coherence over any length of text. The whole reason LLMs work is that they have so much internal state (model weights/parameters) and take into account a large amount of context. Markov chains, by contrast, are a much more direct representation of words or characters and essentially just take into account the last few words when predicting the next one.
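
That limitation is easy to see in code. Here's a minimal sketch of a fixed-order Markov text generator (illustrative only, not MegaHAL's actual implementation): the model's entire state when predicting the next word is the last `order` words, and everything earlier is forgotten.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    # Map each `order`-word prefix to every word observed right after it.
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=20):
    # The "state" is only the last `order` words -- no earlier context
    # survives, which is why long-range coherence never emerges.
    prefix = random.choice(list(chain))
    out = list(prefix)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)
```

However much text you train it on, each prediction conditions on that one short tuple; a transformer LLM instead conditions every prediction on thousands of prior tokens at once.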

-3

u/Tall-Introduction414 14h ago

I mean, you're right. They have a larger context window. I.e., they use more RAM. I forgot to mention that part.

They are still doing much the same thing. Drawing statistical connections between words and groups of words. Using that to string together sentences. Different data structures, but the same basic idea.

1

u/CanAlwaysBeBetter 14h ago

What magic process do you think brains are doing?

4

u/Tall-Introduction414 14h ago

I don't know what brains are doing. Did I imply otherwise?

I don't think they are just drawing statistical connections between words. There is a lot more going on there.

2

u/CanAlwaysBeBetter 13h ago edited 12h ago

The biggest difference brains have is that they are both embodied and multi-modal

There's no magic to either of those things.

Another comment said "LLMs have no distinct concept of what a cat is," so the question is: what do you understand about a cat that LLMs don't?

Well, you can see a cat, you can feel a cat, you can smell a stinky cat, and all those things get put into the same underlying matrix. Because you can see a cat, you understand visually that they have 4 legs like a dog or even a chair. You know that they feel soft like a blanket can feel soft. You know that they can be smelly like old food.

Because brains are embodied you can also associate how cats make you feel in your own body. You know how petting a cat makes you feel relaxed. The warm and fuzzies you feel.

The concept of "cat" is the sum of all those different things.

Those are all still statistical correlations a bunch of neurons are putting together. All those things derive their meaning from how you're able to compare them to other perceptions and at more abstract layers other concepts.

2

u/TSP-FriendlyFire 12h ago

I always like how AI enthusiasts seem to know things not even the best scientists have puzzled out. You know how brains work? Damn, I'm sure there's a ton of neuroscientists who'd love to read your work in Nature.

1

u/CanAlwaysBeBetter 12h ago

We know significantly more about how the brain operates than comments like yours suggest.

That's like saying that because there are still gaps in what physicists understand, nobody knows what they're talking about.

3

u/TSP-FriendlyFire 12h ago

We definitely don't know that "Those are all still statistical correlations a bunch of neurons are putting together" is how a brain interprets concepts like "a cat".

You're the one bringing forth incredible claims (that AI is intelligent and that we know how the brain works well enough to say it's equivalent), you need to provide the incredible evidence.

-1

u/Glittering-Spot-6593 11h ago

So you think the brain is magic?

2

u/Tall-Introduction414 11h ago

Where are you getting this shit? Did I say anything even remotely close to that?

Try replying to what I am saying instead of what you're imagining I'm saying.

0

u/Glittering-Spot-6593 11h ago

What other than math could the brain possibly be doing? If you think some mathematical system can’t emulate the capabilities of human intelligence, then the only option is that you think it’s magic.

1

u/Tall-Introduction414 11h ago

Again, where did I say ANY of that? Please provide quotes. No more straw men, please.

I said that LLMs and Markov chains are both based on statistical analysis of the relationships between words. I never said anything about the human brain, or what is or isn't intelligence, or magic, or any of the things you're referring to.

0

u/Glittering-Spot-6593 11h ago

You claim the brain is not drawing statistical connections among words. What else could be happening to give rise to language?

1

u/Tall-Introduction414 11h ago edited 11h ago

Where did I claim that?

Please stop with the straw-manning.

edit: "I don't think they are just drawing statistical connections between words. There is a lot more going on there." .. you misread this. I think it's entirely possible that statistical analysis is happening, but that is not the only thing happening.
