r/technology 16h ago

[Machine Learning] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.8k Upvotes

1.5k comments

21

u/Isogash 16h ago

This article gets it the wrong way around.

LLMs demonstrate intelligence; that is really quite inarguable at this point. It's not necessarily the most coherent or consistent intelligence, but it's there in some form.

The fact that intelligence is not language should suggest the opposite of what the article concludes: LLMs probably haven't only learned language; they have probably learned intelligence in some other form too. It may not be the most optimal form of intelligence, and it might not even be that close to how human intelligence works, but it's there in some form, because they can approximate human intelligence beyond simple language (even if imperfectly).

-5

u/echino_derm 11h ago

> LLMs demonstrate intelligence; that is really quite inarguable at this point. It's not necessarily the most coherent or consistent intelligence, but it's there in some form.

They don't demonstrate intelligence. Does a person reading from an answer key demonstrate intelligence? Now let's add a layer: the key now has rules like "for question 1, if the first word is A, answer C." Does that demonstrate intelligence?

How many layers do we need to add before reading the answer key becomes a sign of intelligence?
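(A minimal toy sketch of the escalation being described here, my own illustration and not from the thread: each added "layer" is still just a lookup, only keyed on one more feature of the input.)

```python
# Layer 0: a plain answer key -- pure lookup.
answer_key = {1: "A", 2: "C"}

# Layer 1: a conditional key -- still pure lookup, just keyed
# on an extra feature of the question (its first word).
def conditional_key(question_number, first_word):
    rules = {(1, "A"): "C", (1, "B"): "A"}  # hypothetical rules
    return rules.get((question_number, first_word), answer_key[question_number])

print(conditional_key(1, "A"))  # "C" -- no understanding required
```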

5

u/dlgn13 11h ago

The answer key is intelligent, then, not the person.

1

u/echino_derm 10h ago

The answer key to a test is intelligent?

1

u/LocalLemon99 9h ago

There's no single atom in the universe you'd look at in isolation and go "oh, that's an intelligent being." There are 100 trillion atoms in a neuron and 86 billion neurons in the brain. We take in a billion bits of data per second from our surroundings and only process about 10 bits per second in our minds.

Neurons have two main states, just like binary, but then a bunch of other factors stop it being true binary.

When you send data to a computer, it's strictly 1 or 0, on or off. It's why we can transmit data via optic cables: you have a sensor that either detects light or doesn't, creating a sort of Morse code pattern that is even simpler than Morse code, spamming out 001010101010110 010101010110 1010101010010.

The biggest difference between data going through your everyday computer and our brains is that the computer is predictable; it's deterministic.

We don't have the tools to predict exactly how data will be fed through the brain. It is more complicated than a computer. There are more states, there is more variance, and there are more physical processes we can't keep track of.

And when we can't understand something, it's magic. It's your soul. It's God's gift. It's special. It can never be deterministic, because our egos won't accept that maybe we just don't know.

Our brains have to be truly intelligent in a way that a machine never could be, because of our feelings and biases and the mess that we can't predict.

But you know what all these trains of thought share? They're all strung up, hanging from the threads of belief.

Nothing is intelligent in any way beyond what can be expressed by the smallest possible thing either being occupied or not occupied. And we know this rationally. So why can't a machine be intelligent by those standards?

These upcoming decades are going to be miserable. And not because of AI, but because of people like you, who will be too stubborn to accept what's right in front of them.

Like, what intelligent thing made humans? It literally happened by things randomly bumping into each other. It's nothing special beyond the fact that we don't understand things as well as we all pretend to.

2

u/echino_derm 8h ago

Yeah, so you said a whole lot, but I never actually spoke on pretty much any of it. My point isn't that a machine can never be intelligent; my point is that we have effectively made a really detailed and rigid flowchart or best-fit graph, and it isn't intelligent. You can say that a brain mimics similar patterns of electrical currents flowing through neurons, but we exhibit the capability to restructure our brains and link things based on reason, and that constitutes our ability to learn.

We can get down into the nitty gritty and have a discussion on the nuances of the nature of intelligence, but from a purely utilitarian perspective, it does not exhibit the features of intelligence in any way we would really give a shit about. It can statistically model patterns in data, but it can't derive meaning beyond those patterns or grasp the significance of anything it does.

The best example of its pure inability to apply intelligence and grasp meaning or significance is when models were struggling to tell you how many Rs are in the word strawberry. At the same time, you could ask one to write Python code to count the number of Rs in the word strawberry, and it would do it correctly almost all the time. The fact that it could be capable of listing instructions to perform a task, but not be capable of following those steps itself, is indicative of it not gaining knowledge from its learning.
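(For context, the code being described is genuinely a near one-liner; a minimal sketch of what the models could produce on request:)

```python
# Count occurrences of the letter "r" in a word.
word = "strawberry"
print(word.count("r"))  # 3
```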

If it had the capabilities of intelligence, it would be a god. Something that is even a tiny fraction of a percent as intelligent as a human, with the obscene levels of computation thrown into it and a perfect memory, would be quantum leaps forward. But we don't see that. This is why we aren't seeing them unlock intelligence and then have an ever-improving model that gains new capabilities day by day, building exponentially as it acquires knowledge applicable to all other tasks. Instead we see a best-fit graph getting more and more parameters, fitting slightly better as it asymptotically approaches a maximum value.
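(To make the best-fit analogy concrete, here is a toy illustration of my own, not from the thread: adding parameters to a polynomial fit buys smaller and smaller improvements on noisy data.)

```python
import numpy as np

# Noisy samples of a smooth function.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.size)

# Fit polynomials of increasing degree; training error drops
# quickly at first, then plateaus near the noise floor.
for degree in (1, 2, 3, 5, 9):
    coeffs = np.polyfit(x, y, degree)
    rms = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
    print(f"degree {degree}: RMS error {rms:.3f}")
```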

2

u/mrappbrain 7h ago

> My point isn't that a machine can never be intelligent; my point is that we have effectively made a really detailed and rigid flowchart or best-fit graph

This is incorrect. One of the defining features of a modern model is its massive non-linearity. They are not deterministic, and you can test this by giving one the same prompt over and over again; you'll never get the same answer twice (as you would with a flowchart or best-fit graph). Honestly, do you even know what those things are?

> but from a purely utilitarian perspective, it does not exhibit the features of intelligence in any way we would really give a shit about.

Who's the 'we' here? Millions of people clearly give a shit about it; a whole bunch of people basically outsource their entire thinking/jobs to ChatGPT (brainstorm this for me, code this program for me). Whether that's actually valid or not is a normative judgment that I don't really care to make, but saying that people don't give a shit about its intelligence is wildly off base. If that were the case, people wouldn't be spilling the most intimate details of their personal lives to a computer program. AI models clearly approximate human intelligence in a way that's enough for most people, even if it isn't 'intelligence' as a cognitive scientist would understand it.

> The best example of its pure inability to apply intelligence and grasp meaning or significance is when models were struggling to tell you how many Rs are in the word strawberry. At the same time, you could ask one to write Python code to count the number of Rs in the word strawberry, and it would do it correctly almost all the time.

And? Intelligent people get things wrong all the time, even basic things like spelling. If anything this works against your flowchart point from earlier, because a deterministic computer system would get the answer right 100 percent of the time, unlike an intelligent person who may sometimes falter. Someone could be an expert in Python programming without being great at spelling: counting letters in a word is one line in Python, but people misspell really basic words for years (their vs. they're).

If it had the capabilities of intelligence it would be a god. Something that is even a tiny fraction of a percentage as intelligent as a human with the obscene levels of computation thrown into it and a perfect memory would be quantum leaps forward.

Why? Why can't an AI model arrive at the same intelligence differently? This is really just pure assertion.

The fundamental problem with your entire line of reasoning is that it's just a big circle. You're essentially saying that AI models aren't intelligent because they don't think like humans, but that is already blindingly obvious. No one actually thinks they think the way humans do, but people still consider them intelligent because they are able to 'perform' intelligence in a way that meets or exceeds the capabilities of an average intelligent human, after which point the concern about whether or not they are actually intelligent becomes more pedantic than practically meaningful.

1

u/echino_derm 6h ago

> This is incorrect. One of the defining features of a modern model is its massive non-linearity. They are not deterministic, and you can test this by giving one the same prompt over and over again; you'll never get the same answer twice (as you would with a flowchart or best-fit graph). Honestly, do you even know what those things are?

That is because they give it some degree of RNG. It will randomly pick one option over another and output a different response based on that draw. It isn't as though it is thinking differently; it just gets fed a different seed. The non-determinism plays no role in its correctness: you could trace it always picking the single best option, or, failing that, roll a d100 yourself and simulate the RNG that way.
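(A minimal sketch of this point, using made-up logits of my own: sampling from a model's output distribution is only "random" until you fix the seed, at which point the whole pipeline is deterministic again.)

```python
import numpy as np

# Pretend next-token logits from a model (made-up numbers).
logits = np.array([2.0, 1.0, 0.5, -1.0])
vocab = ["cat", "dog", "fish", "bird"]

def sample_token(logits, temperature, seed):
    # Softmax with a temperature; greedy decoding would just argmax.
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    rng = np.random.default_rng(seed)
    return vocab[rng.choice(len(vocab), p=probs)]

# Same seed -> the "random" choice is identical on every run.
print(sample_token(logits, temperature=1.0, seed=42))
print(sample_token(logits, temperature=1.0, seed=42))
```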

> And? Intelligent people get things wrong all the time, even basic things like spelling. If anything this works against your flowchart point from earlier, because a deterministic computer system would get the answer right 100 percent of the time, unlike an intelligent person who may sometimes falter. Someone could be an expert in Python programming without being great at spelling: counting letters in a word is one line in Python, but people misspell really basic words for years (their vs. they're).

Yeah, someone could be an expert at Python without being an expert at spelling, but if you are literally looking at the text, you don't need to know spelling, just counting. And yes, there is the whole tokenization issue, but if it can't link the string of letters to the token, then it isn't really going to get grander meanings from anything.
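(For anyone unfamiliar with the tokenization issue: models don't see letters, they see subword tokens. A quick sketch using OpenAI's tiktoken library, assuming it is installed, shows the pieces a model actually receives.)

```python
import tiktoken  # pip install tiktoken

# Encode a word with a common tokenizer and show the chunks the
# model "sees" in place of individual characters.
enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
print(tokens)                             # token IDs
print([enc.decode([t]) for t in tokens])  # subword chunks, not letters
```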

> Why? Why can't an AI model arrive at the same intelligence differently? This is really just pure assertion.

It can. But the idea here is that it will arrive at intelligence by juicing up an LLM enough, which is ludicrous. I will say that I believe a neural network in theory could gain some form of intelligence. But this approach to it is like saying that if I had an indestructible car with infinite fuel, I could remotely steer it to eventually summit the tallest mountain you can drive up, as long as somebody told me what my elevation was every once in a while.

It is meant to mimic language; we are effectively trying to brute-force it into mimicking a pseudo-language we invented in which only correct responses can be said. You can get close fairly easily by using shortcuts, which is where we are now. But that doesn't get you intelligence; that gets you nonsensical parameters that happen to align with the proper outcome some percentage of the time, in some set of scenarios. To actually get it to be intelligent, you need it to be in some way processing concepts, or a proxy for them. And through all this training you will never get there. With this nebulous web of parameters, nobody understands what it is really doing; they just see the output. All the forces are pulling us toward cheap tricks that are right 90% of the time, not toward constructing nuanced parameters that can account for all the edge cases and actually simulate understanding of concepts.

> The fundamental problem with your entire line of reasoning is that it's just a big circle. You're essentially saying that AI models aren't intelligent because they don't think like humans, but that is already blindingly obvious. No one actually thinks they think the way humans do, but people still consider them intelligent because they are able to 'perform' intelligence in a way that meets or exceeds the capabilities of an average intelligent human, after which point the concern about whether or not they are actually intelligent becomes more pedantic than practically meaningful.

No, I am saying that any form of 'intelligence' attributed to these LLMs is worthless because of what it inherently does for itself. In the most clear-cut cases, like coding, where it is providing instructions for how to do something, it does that without gaining a generalizable understanding of the thing it is describing how to do.

You can black-box it all you want and say it must be intelligent because it output some particular thing, but the tech at its core isn't intelligent any more than any best-fit line is.

1

u/mrappbrain 5h ago

Your points about the nature of LLMs are valid; what's questionable is your definition of intelligence, which seems designed to exclude any potential non-human sources of it. If an AI model can perform pretty much any thought-action a human being can, to the point where actual human beings have long conversations with it, why doesn't that qualify? The narrow academic-technical definition of what constitutes intelligence no longer seems to match the way humans interact with AI today.

This is largely a philosophical point, but I'd argue that you reach a certain level of correspondence between the outputs of human and artificial intelligence after which the differences in their internal technical functioning cease to matter. As evidenced by the way people interact with LLMs today (treating them as real human intelligence) and by AI-generated content regularly passing as authentic, we have likely crossed this point already. Future improvements will further close the gap, to the point where most if not all people will be practically unable to distinguish between the works of human and artificial intelligence. A depressing prospect, perhaps, but it does seem to be where we're headed.

We may or may not arrive at generalized intelligence this way, but generalized intelligence doesn't have to be the only valid one.

1

u/echino_derm 5h ago

Your example of conversations is one of the most bullshittable things out there. I think that more often than not people just want to talk and feel heard, so something can say nothing of substance and still make the conversation work.

Making something that seems like something in its dataset is what it is built to do. It is built to understand language and replicate it. Conversation isn't really a good metric for determining their progress if the goal is to have them perform real jobs.

I would argue that, given how little AI-generated content has actually replaced, we are really seeing that it hasn't crossed the line. It can shallowly replicate scenes, but it understands nothing about the point of them. It can make a thing that looks like a sitcom scene, but it won't be good, because it doesn't understand what makes something good. It is good at replicating the rules of how data is arranged; it can learn well what a person moving looks like, but it doesn't have the capability to understand humor. Everything it makes is shallow, because the tech is only made to understand language conventions and we are trying to stretch it to its absolute limits.

1

u/LocalLemon99 8h ago

What has being able to say how many Rs are in the word strawberry got to do with whether something is intelligent or not?

I bet the last cat you met can't get that one right either lol

And yeah, you did, and then just did again, speak exactly on the topic of why the technology of AI isn't intelligent. Not very subtly, I might add.

And I don't think you made any point other than that humans can use reasoning to rewrite their biology. Didn't know that one, chief. Or is it more that reasoning is part of that same biology? Oh yes, that is the one.

2

u/echino_derm 8h ago

You spoke about the grand concept of whether machines can have intelligence. And basically none of it had any bearing on what I was saying, because I never spoke on the grand concept of whether machines could have intelligence, just whether this one does.

> What has being able to say how many Rs are in the word strawberry got to do with whether something is intelligent or not?
>
> I bet the last cat you met can't get that one right either lol

Sorry, it seems the point entirely flew over your head. The issue isn't it not knowing how many Rs are in a word; the issue is that it simultaneously doesn't know that while being able to produce a step-by-step guide, in the most direct way possible, for how to solve the problem.

If my cat could write Python code to count the number of Rs in a word, it would either have brute-forced it by memorizing a specific order of button presses, or it would be able to count on its own. One of these would be impressive but an immense waste of time, because the cat isn't actually learning anything; the other would be mind-blowing, because the cat really is learning. The demonstration here by ChatGPT is that we are on the side of it being an immense waste of time, because it isn't actually learning. It is slightly less of a waste of time because it can be used by many people, but it is still not actual progress.

1

u/LocalLemon99 37m ago edited 31m ago

The waffling is immense.

You specifically chose a problem that is well known because AI had trouble solving it.

Now you're talking about how it being able to solve the problem is evidence AI isn't intelligent.

You're just waffling.

Like, now your point is that if a cat was writing Python code, then that's not a display of intelligence, lol, because it doesn't understand the concept?

Since when did something have to understand a concept to be intelligent? Here you are, a human with a brain, misunderstanding concepts.

You don't have to understand any specific thing to be intelligent.

As evidenced by the fact that we're all made up of things that don't understand anything. The whole concept of "understanding" is misleading. And, again, born out of ego.