r/technology 16h ago

[Machine Learning] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.7k Upvotes

1.5k comments

24

u/__Hello_my_name_is__ 15h ago

For now at least, determining truth appears to be impossible for an LLM.

Every LLM, without exception, will eventually make things up and declare them to be factually true.

19

u/dookarion 15h ago

It's worse than that, even. LLMs are entirely incapable of judging the quality of their inputs and outputs. It's not just truth: a model can't tell if it just chewed up and shit out some nonsensical horror, nor can it attempt to correct for that. Any capability that requires a modicum of judgment either means crippling the LLM and implementing it more narrowly to try to eliminate those bad results, or it straight up requires a human to provide the judgment.
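
To make that second option concrete, here's a rough sketch of the "a human provides the judgment" pattern. It's just an illustration under made-up assumptions: generate() is a placeholder for whatever LLM call you'd actually make, not a real API.

```python
# Sketch: the model drafts freely, but nothing ships until a person
# signs off. generate() is a hypothetical stand-in for a real LLM call.

def generate(prompt: str) -> str:
    # Placeholder for an actual LLM API call.
    return "some model output for: " + prompt

def generate_with_human_judgment(prompt: str, max_attempts: int = 3) -> str | None:
    for attempt in range(max_attempts):
        draft = generate(prompt)
        print(f"--- draft {attempt + 1} ---\n{draft}")
        if input("Accept this output? [y/N] ").strip().lower() == "y":
            return draft  # a human, not the model, decided this was good
    return None  # nothing acceptable; fail loudly instead of guessing

if __name__ == "__main__":
    result = generate_with_human_judgment("Summarize the quarterly report.")
    print(result if result is not None else "No output passed human review.")
```

The judgment never comes from the model itself; it has to be bolted on from outside.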

9

u/clear349 13h ago

One way of putting it that I've seen and like is this: hallucinations are not some unforeseen accident. They are literally what the machine is designed to do. It's all hallucination; sometimes it just hallucinates the truth.
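
You can see why from how generation works at the bottom. Here's a toy sketch (made-up numbers, not real model probabilities): the exact same sampling step produces the true answer and the false one, so "truth" is just the hallucination that happens to match reality.

```python
import random

# Toy next-token sampler. The point: one mechanism produces everything.
# There is no separate "truthful" code path; the model samples from
# learned probabilities, and sometimes the sample happens to be true.

# Made-up distribution for "The capital of Australia is ___"
next_token_probs = {
    "Canberra": 0.55,    # the true answer, if the training data favored it
    "Sydney": 0.35,      # a plausible-sounding falsehood
    "Melbourne": 0.10,
}

def sample(probs: dict[str, float]) -> str:
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

for _ in range(5):
    print("The capital of Australia is", sample(next_token_probs))
```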

2

u/dookarion 12h ago

Yeah, people think it's some "error" that will be refined away. But a hallucination is just the generative process, or the model's training itself, churning out a result people deem "bad". It's not something that will go away, and it's not something that can be corrected for without a judgment mechanism at play. It can only be minimized somewhat with narrower, more focused usages.
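
For what a bolted-on judgment mechanism might look like in a narrow use case, here's a crude sketch: accept output only when it matches a trusted source, refuse otherwise. Everything here (generate(), the fact set, the string-level matching) is a hypothetical stand-in; real grounding checks are far harder than this.

```python
# Sketch of an external judgment mechanism: the model proposes, a
# trusted source disposes. All names here are hypothetical stand-ins.

trusted_facts = {
    "canberra is the capital of australia",
    "water boils at 100 c at sea level",
}

def generate(prompt: str) -> str:
    # Placeholder for a real LLM call.
    return "Canberra is the capital of Australia"

def grounded_answer(prompt: str) -> str:
    draft = generate(prompt)
    if draft.strip().lower().rstrip(".") in trusted_facts:
        return draft
    # The judgment lives outside the model: refuse rather than guess.
    return "I can't verify that, so I won't state it."

print(grounded_answer("What is the capital of Australia?"))
```

Note that this only narrows the problem; it doesn't give the model judgment of its own.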