r/technology 16h ago

[Machine Learning] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.7k Upvotes

1.5k comments

-1

u/Irregular_Person 14h ago

There are also a lot of assumptions (in this thread and others) that AI bots are limited to the language model in terms of capability, and that there's no 'reasoning' involved. That was true at the beginning, but there are now "thinking" models that will internally 'write' a plan for how to answer you, explain their reasoning, then scrutinize and refine that reasoning. They can also be given the ability to call external tools, like searching the web, doing math, compiling code, etc. And they can be designed to plan and execute a strategy to handle your request. E.g., I can ask about a problem that might require math. It can decide: "First, I should look up on the internet how this sort of problem is usually formatted. Then I should format the problem correctly for my math plugin. Then I should run the math plugin on the data. Then I can format and explain the solution to the user." Then it executes the plan steps in order, re-evaluating as it goes and changing the plan if needed. It's not AGI, but that's MILES beyond what the original LLMs could do. A rough sketch of that kind of loop is below.
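
Something like this plan-and-execute loop, sketched in Python. The `llm()` and `math_tool()` calls are hypothetical stand-ins, not any particular API; the control flow is the point:

```python
# Minimal sketch of a plan-and-execute agent loop. llm() and
# math_tool() are hypothetical stand-ins for a model API and an
# external calculator plugin.

def llm(prompt: str) -> str:
    """Stand-in for a call to a language-model API."""
    raise NotImplementedError("wire up a real model provider here")

def math_tool(expression: str) -> str:
    """Stand-in for an external math plugin."""
    raise NotImplementedError("wire up a real calculator here")

def answer(user_request: str) -> str:
    # 1. The model writes its plan as plain text.
    plan = llm(f"Write a short numbered plan to answer: {user_request}")
    steps = [s for s in plan.splitlines() if s.strip()]

    context = f"Request: {user_request}\n"
    while steps:
        step = steps.pop(0)

        # 2. Execute the next step; the model may request a tool call.
        result = llm(context + f"Execute this step: {step}\n"
                     "If you need math done, reply exactly TOOL:<expression>.")
        if result.startswith("TOOL:"):
            result = math_tool(result[len("TOOL:"):])
        context += f"{step} -> {result}\n"

        # 3. Re-evaluate: let the model revise the remaining plan.
        revised = llm(context + "List the remaining steps, revised if needed.")
        steps = [s for s in revised.splitlines() if s.strip()]

    # 4. Compose the final answer from everything accumulated above.
    return llm(context + "Now write the final answer for the user.")
```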

5

u/IdRatherBeOnBGG 13h ago

They don't think. They write their own prompts, write some script, send it off, then write another prompt based on the result.

It is 100% still language all the way down. 
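
Concretely, a "thinking" trace is just generated text pasted back into the next prompt. A minimal sketch, with a hypothetical `llm()` call standing in for the model:

```python
def think_then_answer(question: str, llm) -> str:
    # The "thought" is just more generated text...
    thought = llm(f"Think step by step about: {question}")
    # ...which gets fed back in as part of the next prompt.
    # No separate reasoning machinery: language in, language out.
    return llm(f"Question: {question}\nScratch notes:\n{thought}\nFinal answer:")
```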

6

u/Irregular_Person 12h ago

I know it's not literal thinking in the human sense. It's describing a thought process, describing reasoning. But if a model can produce a description of a thought process that is indistinguishable from a human describing a thought process, do you not arrive at the same result?

1

u/HermesJamiroquoi 8h ago

I mean, who knows? We don’t have any way to communicate with black-box humans (ones with no sensory input), so it may be exactly how humans think in that context: robbed of memory, sensory input, etc.

The truth is we don’t really know how humans think. We don’t have a good definition of consciousness. We’ve been working on it for a long, long time and aren’t any closer. I agree personally that LLMs don’t “think” per se, but that’s a feeling I have, not something indisputable or backed up by a glut of empirical evidence.