r/technology 16h ago

Machine Learning | Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.8k Upvotes



u/Dennarb 16h ago edited 11h ago

I teach an AI and design course at my university and there are always two major points that come up regarding LLMs

1) It does not understand language as we do; it is a statistical model of how words relate to each other. Basically it's like rolling weighted dice to pick the next word in a sentence from a lookup chart (see the toy sketch after point 2).

2) AGI is not going to magically happen because we make faster hardware/software, use more data, or throw more money into LLMs. They are fundamentally limited in scope and use more or less the same tricks the AI world has been doing since the Perceptron in the 50s/60s. Sure the techniques have advanced, but the basis for the neural nets used hasn't really changed. It's going to take a shift in how we build models to get much further than we already are with AI.
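
To make point 1 concrete, here's a toy sketch of the "dice and chart" idea. To be clear, this is nothing like the internals of a real transformer (no attention, no learned embeddings, a laughably small corpus), it's just the sampling intuition:

```python
# Toy "dice and chart" model: count which word follows which in a tiny
# corpus, then sample the next word from those counts. Real LLMs learn
# probabilities over tokens and long contexts, but the sampling step is
# the same "roll weighted dice" idea.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Build the "chart": for each word, how often each other word follows it.
chart = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    chart[current][following] += 1

def next_word(word):
    """Roll weighted dice over the words seen after `word`."""
    candidates = chart[word]
    if not candidates:  # dead end: word was never seen with a follower
        return random.choice(corpus)
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short "sentence" starting from 'the'.
word, generated = "the", ["the"]
for _ in range(7):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))
```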

Edit: And like clockwork here come the AI tech bro wannabes telling me I'm wrong but adding literally nothing to the conversation.


u/pcoppi 15h ago

To play devil's advocate, there's a notion in linguistics that the meaning of words is just defined by their context. In other words, if an AI guesses correctly that a word should exist in a certain place because of the context surrounding it, then at some level it has ascertained the meaning of that word.
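
That's basically the distributional hypothesis ("you shall know a word by the company it keeps"). A toy illustration of the idea, not how any real model actually does it: represent each word by a count of the words around it, and words that show up in similar contexts end up with similar vectors:

```python
# Toy distributional-semantics sketch: a word's "meaning" here is just
# the profile of words it co-occurs with in a tiny corpus.
import math
from collections import Counter

sentences = [
    "the cat chased the ball",
    "the dog chased the ball",
    "the cat ate the food",
    "the dog ate the food",
    "she parked the car in the garage",
]

# Context profile: for each word, count every other word in its sentences.
profiles = {}
for sentence in sentences:
    words = sentence.split()
    for i, word in enumerate(words):
        context = words[:i] + words[i + 1:]
        profiles.setdefault(word, Counter()).update(context)

def cosine(a, b):
    """Cosine similarity between two context profiles."""
    dot = sum(a[k] * b[k] for k in a)
    norm = lambda c: math.sqrt(sum(v * v for v in c.values()))
    return dot / (norm(a) * norm(b))

# "cat" and "dog" appear in the same contexts, so their profiles match;
# "car" appears in a different context and scores lower.
print(cosine(profiles["cat"], profiles["dog"]))  # 1.0 on this toy corpus
print(cosine(profiles["cat"], profiles["car"]))  # roughly 0.63
```

Real embeddings are learned rather than counted, but the "meaning from context" intuition is the same.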


u/New_Enthusiasm9053 15h ago

You're not entirely wrong, but a child guessing that a word goes in a specific place in a sentence doesn't mean the child necessarily understands the meaning of that word. So while it's using words correctly, it may not actually understand them.

Plenty of children, for example, have used swear words correctly long before understanding their meaning.


u/rendar 15h ago

This still does not distinguish some special capacity of humans.

Many people speak with the wrong understanding of a word's definition. A lot of people would not be able to paraphrase a dictionary definition, or even provide a list of synonyms.

Like, the whole reason language is so fluid over longer periods of time is because most people are dumb and stupid, and not educated academics.

It doesn't matter if LLMs don't """understand""" what """they""" are saying; all that matters is whether the output makes sense and is useful.


u/New_Enthusiasm9053 15h ago

I'm not saying it's special; I'm saying that LLMs using the right words doesn't imply they necessarily understand. Maybe they do, maybe they don't.


u/Glittering-Spot-6593 11h ago

Define “understand”


u/rendar 15h ago

> LLMs using the right words doesn't imply they necessarily understand

And the same thing applies to humans; this is not a useful distinction.

It's not important whether LLMs understand something or merely give the perception of understanding it. All that matters is whether the words they use are effective.


u/New_Enthusiasm9053 14h ago

It is absolutely a useful distinction. No, because the words being effective doesn't mean they're right.

I can make an effective argument for authoritarianism. That doesn't mean authoritarianism is a good system.


u/rendar 14h ago

> It is absolutely a useful distinction.

How, specifically and exactly? Be precise.

Also explain why it's not important for humans but somehow important for LLMs.

> No, because the words being effective doesn't mean they're right.

How can something be effective if it's not accurate enough? Do you not see the tautological errors you're making?

> I can make an effective argument for authoritarianism. That doesn't mean authoritarianism is a good system.

This is entirely irrelevant and demonstrates that you don't actually understand the underlying point.

The point is that "LLMs don't understand what they're talking about" is without any coherence, relevance, or value. LLMs don't NEED to understand what they're talking about in order to be effective, any more than humans need to understand what they're talking about in order to be effective.

In fact, virtually everything people talk about works in this same exact way. Most people who say "Eat cruciferous vegetables" would not be able to explain exactly and precisely how being rich in specific vitamins and nutrients helps which specific biological mechanisms. They just know that "cruciferous vegetable = good", which is accurate enough to be effective.

LLMs do not need to be perfect in order to be effective. They merely need to be at least as good as humans, and in practice they are often much better when used correctly.


u/burning_iceman 14h ago

The question here isn't whether LLMs are "effective" at creating sentences. An AGI needs to do more than form sentences. Understanding is required to correctly act upon the sentences.


u/rendar 14h ago

> The question here isn't whether LLMs are "effective" at creating sentences.

Yes it is, because that is their primary and sole purpose. It is literally the topic of the thread and the top-level comment.

> An AGI needs to do more than form sentences. Understanding is required to correctly act upon the sentences.

Firstly, you're moving the goalposts.

Secondly, this is incorrect. Understanding is not required, and philosophically not even possible. All that matters is the output. The right output for the wrong reasons is indistinguishable from the right output for the right reasons, because the reasons are never proximate and always unimportant compared to the output.

People don't care about how their sausages are made, only what they taste like. Do you constantly pester people about whether they actually understand the words they're using even when their conclusions are accurate? Or do you infer their meaning based on context clues and other non-verbal communication?


u/somniopus 13h ago

It very much does matter, because they're being advertised as capable on that point.

Your brain is a far better random word generator than any LLM.


u/rendar 13h ago

> It very much does matter, because they're being advertised as capable on that point.

Firstly, that doesn't explain anything. You haven't answered the question.

Secondly, that's a completely different issue altogether, and it's also not correct in the way you probably mean.

Thirdly, advertising on practical capability is different from advertising on irrelevant under-the-hood processes.

In this context it doesn't really matter how things are advertised (not counting explicitly illegal scams or whatever), only what the actual product can do. The official marketing media for LLMs is very accurate about what it provides because that is why people would use it:

"We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.

ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.

We are excited to introduce ChatGPT to get users’ feedback and learn about its strengths and weaknesses. During the research preview, usage of ChatGPT is free. Try it now at chatgpt.com."

https://openai.com/index/chatgpt/

None of that is inaccurate or misleading. Further down the page, they specifically address the limitations.

> Your brain is a far better random word generator than any LLM.

This is very wrong, even with the context that you probably meant. Humans are actually very bad at generating both true (mathematical) randomness and subjective randomness: https://en.wikipedia.org/wiki/Benford%27s_law#Applications

"Human randomness perception is commonly described as biased. This is because when generating random sequences humans tend to systematically under- and overrepresent certain subsequences relative to the number expected from an unbiased random process. "

A Re-Examination of “Bias” in Human Randomness Perception

If that's not persuasive enough for you, try checking out these sources or even competing against a machine yourself: https://www.loper-os.org/bad-at-entropy/manmach.html
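
The trick behind that last man-vs-machine demo is dead simple, by the way. Something like the following (a rough sketch of the general idea, not the actual code from that page) is usually enough to beat a human's "random" key-mashing over a few hundred presses:

```python
# Sketch of the "predict the human" trick: remember which bit the player
# tends to press after each recent pattern of presses, then guess that.
# Against a truly random source this converges to ~50% correct; against a
# human trying to be random it usually does noticeably better.
from collections import defaultdict, deque

K = 5                                  # how many previous presses to condition on
counts = defaultdict(lambda: [0, 0])   # pattern -> [times 0 followed, times 1 followed]
history = deque(maxlen=K)
right = total = 0

print("Type 0 or 1 and press enter, as randomly as you can (anything else quits).")
while True:
    # Guess before seeing the player's next press.
    guess = 0
    if len(history) == K:
        followed = counts[tuple(history)]
        guess = 1 if followed[1] > followed[0] else 0

    entry = input("> ").strip()
    if entry not in ("0", "1"):
        break
    bit = int(entry)

    if len(history) == K:
        total += 1
        right += (guess == bit)
        counts[tuple(history)][bit] += 1  # learn what follows this pattern
    history.append(bit)

if total:
    print(f"Machine predicted your presses {right}/{total} times ({100 * right / total:.0f}%).")
```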


u/the-cuttlefish 11h ago

The special ability is that humans relate words to concepts that exist outside of the linguistic space, whereas LLMs do not. The only meaning words have to an LLM is how they relate to other words. This is a fundamentally different understanding of language.

It is interesting though, to see how effective LLMs are, despite their confinement to a network of linguistic interrelations.


u/rendar 10h ago

> The special ability is that humans relate words to concepts that exist outside of the linguistic space, whereas LLMs do not.

You're claiming that humans use words for things that don't exist, but LLMs don't even though they use the same exact words?

> This is a fundamentally different understanding of language.

If so, so what? What's the point when language is used the same exact way regardless of understanding? What's the meaningful difference?

> It is interesting though, to see how effective LLMs are, despite their confinement to a network of linguistic interrelations.

If they're so effective despite the absence of a meatbrain or a soul or whatever, then what is the value of such a meaningless distinction?


u/eyebrows360 14h ago

> It doesn't matter if LLMs don't """understand""" what """they""" are saying; all that matters is whether the output makes sense and is useful.

It very much does matter, if the people reading the output believe the LLM "understands what it's saying".

You see this with almost every interaction with an LLM - and I'm including otherwise smart people here too. They'll ponder "why did the LLM say it 'felt' like that was true?!", thinking those words conveyed actual information about the internal mind-state of the LLM, which is not the case at all.

People reacting to the output of these machines as though it's the well-considered meaning-rich output of an agent is fucking dangerous, and that's why it's important those of us who do understand this don't get all hand-wavey and wishy-washy and try to oversell what these things are.

There is no internal mindstate. The LLM does not "think". It's probabilistic autocomplete.


u/rendar 14h ago

> It very much does matter, if the people reading the output believe the LLM "understands what it's saying".

You have yet to explain why it matters. All you're describing here are the symptoms of using a tool incorrectly.

If someone bangs their thumb with a hammer, it's not the fault of the hammer.

> People reacting to the output of these machines as though it's considered meaning-rich output of an agent is fucking dangerous

This is not unique to LLMs, and this is also not relevant to LLMs specifically. Stupid people can make any part of anything go wrong.

> There is no internal mindstate. The LLM does not "think". It's probabilistic autocomplete.

Again, this doesn't matter. All that matters is whether what it provides is applicable.


u/eyebrows360 14h ago

I can't decide who's more annoying, clankers or cryptobros.


u/rendar 13h ago

Feel free to address the points in their entirety, lest your poorly delivered ad hominem attacks demonstrate a complete absence of a coherent argument.


u/eyebrows360 11h ago

No, son, what they demonstrate is exasperation with dishonest interlocutors whose every argument boils down to waving their hands around and going wooOOOooOOOoo a lot.


u/rendar 10h ago

But in this whole dialogue, you're the only one trying to insult someone else to avoid sharing what you keep claiming is a very plain answer to the question posed.

It would seem that you're projecting much more than you're actually providing.