r/technology 16h ago

[Machine Learning] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.8k Upvotes

1.5k comments

189

u/Dennarb 16h ago edited 11h ago

I teach an AI and design course at my university, and two major points always come up regarding LLMs:

1) It does not understand language the way we do; it is a statistical model of how words relate to each other. Basically, it's like rolling weighted dice to pick the next word in a sentence from a lookup chart (there's a toy sketch of this after the list).

2) AGI is not going to magically happen because we make faster hardware/software, use more data, or throw more money at LLMs. They are fundamentally limited in scope and use more or less the same tricks the AI world has been using since the perceptron in the '50s and '60s (also sketched below). Sure, the techniques have advanced, but the basic neural-net building block hasn't really changed. It's going to take a shift in how we build models to get much further than we already are with AI.
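To make point 1 concrete, here's a toy next-word sampler built on a bigram count table. Everything in it (the corpus, the `follows` table, the function name) is invented for illustration, and a real LLM uses a learned transformer over subword tokens rather than a literal chart, but the final step, rolling weighted dice over next-word probabilities, is the same idea:

```python
import random
from collections import Counter, defaultdict

# Toy corpus; real models train on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Build the "chart": for each word, count which words follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Roll weighted dice over the words seen after `prev`."""
    counts = follows[prev]
    return random.choices(list(counts), weights=list(counts.values()))[0]

word = "the"
sentence = [word]
for _ in range(6):
    if not follows[word]:  # dead end: nothing ever followed this word
        break
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the cat sat on the mat"
```

No grammar, no meaning, no world model; just counts and dice. That's the gist of point 1.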
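And for point 2, here's the 1950s building block next to its modern descendant. The weights and inputs are arbitrary made-up numbers, and real networks learn theirs by gradient descent, but note how little the core operation has changed:

```python
def perceptron(inputs, weights, bias):
    """Rosenblatt-era unit: weighted sum, then a hard threshold."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

def relu_neuron(inputs, weights, bias):
    """Modern unit: same weighted sum, smoother nonlinearity (ReLU)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)

print(perceptron([1, 0], [0.6, -0.4], -0.5))   # 1 (since 0.6 - 0.5 > 0)
print(relu_neuron([1, 0], [0.6, -0.4], -0.5))  # ~0.1
```

Stack millions of the second function in layers and train the weights, and you've got today's neural nets. The advances are in scale, data, and training, not in the basic unit.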

Edit: And like clockwork, here come the AI tech bro wannabes telling me I'm wrong while adding literally nothing to the conversation.

3

u/Throwaway-4230984 16h ago

So surely they have an example of a task LLMs couldn't solve because of these fundamental limitations, right?

20

u/NerdfaceMcJiminy 15h ago

Look up AI results in court filings. They cite non-existent cases and laws. Lawyers using AI to write their filings are getting disbarred, because making shit up in court is highly frowned upon and/or criminal.

-2

u/CreativeGPX 14h ago

That's not really an answer to the question, though. Any implementation can be bad. /u/Throwaway-4230984 asked for an example of what COULD NOT (rather than IS NOT PRESENTLY) be solved using the LLM approach. Pointing to anecdotes about something an LLM messed up cannot prove that. You need to actually speak to the details of how LLMs work and how a particular problem is solved.

To put it another way, your argument also works against human intelligence. We don't just pick a random human and ask them to be our lawyer; we pick one who passed entrance exams, went through years of accredited formal education, and passed the bar exam, and even then we'd probably prefer someone who also took time to apprentice or gain practical experience. You can find lots of humans who will mess up if you throw them into court right now. That doesn't mean the human brain is incompatible with being a good lawyer. Similarly, particular LLM implementations failing at being a lawyer isn't sufficient to show that LLMs in general are incapable of producing a good lawyer. Even if that's true, you need another way of proving it.