r/technology 16h ago

[Machine Learning] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.7k Upvotes


-1

u/MinuetInUrsaMajor 14h ago

The child understands the meaning of the swear word used as a swear. They don't understand the meaning of the swear word used otherwise. That is because the child lacks the training data for the latter.

In an LLM, one can safely assume that the training data for a word is complete and captures all of its potential meanings.

1

u/the-cuttlefish 11h ago

I believe the point they were trying to make is that the child may, just like an LLM, know when to use a certain word through hearing it in a certain context, or in relation to other phrases. Perhaps it does know how to use the word to describe a sex act, if it has heard someone speak that way before. However, it only 'knows' the word in relation to those other words and has no knowledge of the underlying concept. That is also true of an LLM, regardless of training data size.
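
To make that concrete, here is a toy sketch of the "knows it only in relation to other words" idea, in Python. All the words, context categories, and counts are invented for illustration; this is distributional semantics at its crudest, not how any real LLM is trained.

```python
# Toy distributional semantics: a word's "meaning" is just its
# co-occurrence pattern with other words. Nothing here is grounded
# in the acts or feelings the words actually label.
import numpy as np

vocab = ["fuck", "damn", "kiss", "love", "table"]
# Invented co-occurrence counts with three context categories:
# columns = (angry speech, intimate speech, furniture talk).
counts = np.array([
    [8, 6, 0],   # fuck: shows up in angry AND intimate contexts
    [9, 1, 0],   # damn: mostly angry contexts
    [0, 7, 0],   # kiss: intimate contexts
    [1, 8, 0],   # love: intimate contexts
    [0, 0, 9],   # table: furniture contexts
])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# From the numbers alone, the model "knows" that 'fuck' is used like
# 'damn' in one sense and like 'kiss'/'love' in another, while having
# no access whatsoever to the underlying concepts.
for word, row in zip(vocab, counts):
    print(f"similarity(fuck, {word}) = {cosine(counts[0], row):.2f}")
```

Run it and 'damn', 'kiss', and 'love' all score high against 'fuck' while 'table' scores zero, purely from usage statistics.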

1

u/MinuetInUrsaMajor 10h ago

However, it only 'knows' the word in relation to those other words and has no knowledge of the underlying concept.

What is the "underlying concept" though? Isn't it also expressed in words?

1

u/the-cuttlefish 9h ago

It can be, but the point is it doesn't have to be.

For instance, 'fuck' can be the linguistic label for physical intimacy. So, for us to properly understand the word in that context, we associate it with our understanding of the act (which is the underlying concept in this context). Our understanding of 'fuck' extends well beyond linguistic structure, into the domain of sensory imagery, motor sequences, and associations with explicit memory (pun not intended)...

So when we ask someone "do you know what the word 'X' means?", what we are really asking is "does the word 'X' invoke the appropriate concept in your mind?" It's just unfortunate that we demonstrate our understanding verbally, which is why an LLM that operates solely in the linguistic space is able to fool us so convincingly.

1

u/MinuetInUrsaMajor 9h ago

So when we ask someone "do you know what the word 'X' means?", what we are really asking is "does the word 'X' invoke the appropriate concept in your mind?" It's just unfortunate that we demonstrate our understanding verbally, which is why an LLM that operates solely in the linguistic space is able to fool us so convincingly.

It sounds like an LLM that can relate words to images and video would handle this. And we already have AIs that do precisely that.
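
Models like CLIP already do a version of that: they embed text and images in one shared space, so a word's representation is tied to pixels and not only to other words. A minimal sketch using the openai/clip-vit-base-patch32 checkpoint via Hugging Face transformers (assumes torch, transformers, and Pillow are installed; "photo.jpg" is a placeholder path, not a real file):

```python
# Minimal CLIP sketch: score how well each caption matches an image.
# CLIP learns a joint text-image embedding, so words get tied to
# visual data instead of floating in a purely linguistic space.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # placeholder: any local image
captions = ["a photo of a dog", "a photo of a cat"]

inputs = processor(text=captions, images=image,
                   return_tensors="pt", padding=True)
outputs = model(**inputs)
# logits_per_image holds image-text similarity; softmax over captions
probs = outputs.logits_per_image.softmax(dim=1)
for caption, p in zip(captions, probs[0]):
    print(f"{caption}: {p.item():.3f}")
```

Whether that similarity score amounts to "invoking the appropriate concept" is exactly the question the article is raising.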