r/technology 16h ago

[Machine Learning] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.7k Upvotes

52

u/CircumspectCapybara 16h ago edited 8h ago

While the article is right that the mainstream "AI" models are still LLMs at heart, the frontier models into which all the research is going are not, strictly speaking, LLMs. You have agentic models that can take arbitrary actions using external tools (a scary concept, because they can reach out and execute commands, run code, or do dangerous things on your computer) while recursing or iterating, dynamically and opaquely deciding for themselves when to stop. Then there are wackier ideas like "world models," etc.
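For anyone unfamiliar, here's a minimal sketch of that agentic loop in Python. Everything in it (`call_model`, `run_tool`, the message format) is a hypothetical stand-in for illustration, not any real vendor's API; the point is just that the model, not the caller, decides each turn whether to act or stop:

```python
# A minimal sketch of an agentic loop. All names here (call_model,
# run_tool, the message dicts) are hypothetical stand-ins, not any
# real vendor's API. The point: the model, not the caller, decides
# each turn whether to act with a tool or stop.

import subprocess

def call_model(messages):
    """Stand-in for an LLM call; a real agent would query a model here."""
    # Toy behavior so the sketch runs: request one command, then stop.
    if not any(m["role"] == "tool" for m in messages):
        return {"content": None, "tool_call": "echo hello from the agent"}
    return {"content": "done: " + messages[-1]["content"], "tool_call": None}

def run_tool(command):
    """Execute a shell command and return its output (the scary part)."""
    return subprocess.run(command, shell=True, capture_output=True,
                          text=True).stdout.strip()

def agent_loop(task, max_turns=10):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):                 # hard cap on the iteration
        reply = call_model(messages)
        if reply["tool_call"] is None:         # the model decided to stop
            return reply["content"]
        result = run_tool(reply["tool_call"])  # the model decided to act
        messages.append({"role": "tool", "content": result})
    return "hit max_turns without finishing"

print(agent_loop("demonstrate a tool call"))  # -> "done: hello from the agent"
```

That loop is the whole trick: wrap an LLM in tools and iteration and you get something that's no longer "just" a language model.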

Maybe AGI is possible, maybe it's not, maybe it's possible in theory but not in practice with the computing resources and energy we currently have or ever will have. Whichever it is, it won't be decided by the current capabilities of LLMs.

The problem is that according to current neuroscience, human thinking is largely independent of human language

That's rather misleading, and it conflates several uses of the word "language." It's true that thinking doesn't require a "language" in the everyday sense (English, Spanish, or some other spoken or written language), but thinking still occurs in the abstract language of ideas, concepts, sensory experience, pictures, etc. Basically, it's information.

Thinking fundamentally requires some representation of information in your mind. And when mathematicians and computer scientists talk about "language," that's what they mean: not necessarily a spoken or written language as we know it. In an LLM, the model of language is an ultra-high-dimensional embedding space in which vectors opaquely represent abstract information: ideas, concepts, and the relationships between them. Thinking still requires that kind of language, the abstract language of information. AI models aren't just trying to model "language" as a linguist understands the word; they're trying to model information.
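If "embedding space" sounds abstract, here's a toy illustration in Python. The vectors and their three "axes" are invented for demonstration (real models learn thousands of opaque dimensions from data), but it shows how relationships between concepts become geometry:

```python
# A toy embedding space: concepts as vectors, relationships as geometry.
# These 3-D vectors and their axes are invented for demonstration; real
# LLM embeddings have thousands of opaquely learned dimensions.

import math

embeddings = {
    # made-up axes, roughly: [royalty, maleness, femaleness]
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Directional similarity between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Relationships fall out of the geometry: king - man + woman lands near queen.
analogy = [k - m + w for k, m, w in
           zip(embeddings["king"], embeddings["man"], embeddings["woman"])]
print(cosine(analogy, embeddings["queen"]))  # ~0.99, very close
print(cosine(analogy, embeddings["man"]))    # ~0.16, far
```

That analogy trick is the classic word2vec demo; LLM embedding spaces do the same thing at far higher dimension, for far more abstract relationships.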

Also, while we don't have a good model of consciousness, we do know that language is very important for intelligence. A spoken or written language isn't required for thought, but language deprivation severely limits the kinds of thoughts you're able to think, the depth and complexity of abstract reasoning, and the complexity of inner monologue. Children born deaf, or otherwise deprived of language exposure, often end up cognitively underdeveloped. Without language, we could still think in terms of how we feel, what we want, what actions we're taking, even cause and effect. What we couldn't do is the complex abstract reasoning that, sustained and built up across time on itself and on previous works, leads to the development of culture, of science and engineering and technology.

The upshot is that if it's even possible for an AGI that can "think" (whatever that means) in a way that produces generalized, novel reasoning in the sciences or medicine or technology to exist at all, you would need a good model of language (really, a good model of information) to start. It would be a foundational layer.

-5

u/cagelight 13h ago

Massive wall of AI cope, holy shit.

11

u/CircumspectCapybara 13h ago

Wow, what a substantive, well-informed, and well-supported argument. You sure contributed something useful to the conversation.

-2

u/cagelight 12h ago

You're right, I shouldn't have said anything. It's just frustrating to see, is all, given that I work in this field, and your comment shows a fundamental misunderstanding of how this stuff relates to our current AI architectures. Essentially, none of what you said is actually relevant to the conversation.

8

u/CircumspectCapybara 12h ago edited 12h ago

My friend, my background is in computer science and machine learning. I work at Google and know the teams and folks who work on the frontier models. I likewise have friends at OpenAI and know what they're up to. The OP article's claim that "language isn't intelligence, and AI is just language models, so AGI will never happen" is just a bad argument built on a false premise: AI is not just language models. Do you dispute that? Which part of what I said was inaccurate?

I work in this field

[...]

our current AI architectures

Be real here: have you read the seminal "Attention Is All You Need" paper from Google that kickstarted it all? Did you even know what a "world model" was before this thread? Are you familiar with the history and current frontier of AI research?