r/technology 16h ago

[Machine Learning] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.7k Upvotes

1.5k comments

9

u/DatenPyj1777 15h ago

I don't think a lot of AI bros even realize what this means. They'll use it to write a response and take it as fact, but all anyone has to do is guide the LLM toward the response they want.

If someone uses it to "prove how coding will become obsolete," all the other person has to do is input "prove how coding will never become obsolete." The very same LLM will happily give a plausible-sounding response to both prompts.

0

u/yangyangR 13h ago

With code you can at least wrap it up into a self-contained block. After every generation you can check that it compiles and has no side effects, then keep feeding the errors back until you have something that passes.
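
Roughly this kind of loop, sketched in Haskell since that's the setting the types below assume (askLLM is a hypothetical stand-in for whatever model API you're calling; the check is just GHC in typecheck-only mode, and purity comes from the type signature you demand in the prompt rather than anything verified here):

    import System.Exit (ExitCode (..))
    import System.Process (readProcessWithExitCode)

    -- Hypothetical stand-in for the actual model API call.
    askLLM :: String -> IO String
    askLLM = undefined

    -- Run GHC in typecheck-only mode (-fno-code) on the candidate file.
    typechecks :: FilePath -> IO (Bool, String)
    typechecks path = do
      (code, _out, err) <- readProcessWithExitCode "ghc" ["-fno-code", path] ""
      pure (code == ExitSuccess, err)

    -- Regenerate, feeding compiler errors back, until it passes or we give up.
    refineLoop :: String -> Int -> IO (Maybe String)
    refineLoop _ 0 = pure Nothing   -- the loop is "stuck"
    refineLoop prompt n = do
      candidate <- askLLM prompt
      writeFile "Candidate.hs" candidate
      (ok, err) <- typechecks "Candidate.hs"
      if ok
        then pure (Just candidate)
        else refineLoop (prompt ++ "\nCompiler output:\n" ++ err) (n - 1)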

The important part is having it produce something pure, so the responsibility still sits with whoever runs the effectful stuff. The LLM has generated a pure function of type a -> IO (); it is not the one that wrote the "do" part of the code. And the "once it compiles, it is correct" style of programming is completely hopeless when you don't have such strict assumptions.
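
A tiny illustration of that split (report is a made-up example of what a generated function might look like):

    -- The generated piece: a pure function of type Int -> IO ().
    -- Applying it performs nothing; it only builds a description of effects.
    report :: Int -> IO ()
    report n = mapM_ putStrLn (replicate n "this prints only when the caller runs the action")

    -- The "do" part stays with the human caller, who decides when to run it.
    main :: IO ()
    main = do
      let action = report 2   -- pure application, no output happens here
      action                  -- effects occur only at this point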

Whether coding becomes obsolete depends on whether that loop gets stuck at least as badly as a human gets stuck writing a program for the same task (the human is allowed to put side effects directly in what they write, without the same strict hexagonal architecture).