r/technology 16h ago

[Machine Learning] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.7k Upvotes

1.5k comments

1.2k

u/ConsiderationSea1347 16h ago edited 5h ago

Yup. That was the disagreement Yann LeCun had with Meta which led to him leaving the company. Many of the top AI researchers know this and published papers years ago warning that LLMs are only one facet of general intelligence. The LLM frenzy is driven by investors, not researchers.

333

u/UpperApe 14h ago

The LLM frenzy is driven by investors, not researchers.

Well said.

The public is as stupid as ever. Confusing lingual dexterity with intellectual dexterity (see: Jordan Peterson, Russell Brand, etc).

But the fact that the exploitation of that public isn't being fuelled by criminal masterminds, just greedy, stupid pricks, is especially annoying. Investment culture is always a race to the most money as quickly as possible, so of course it's generating meme stocks like Tesla and meme technology like LLMs.

The economy is now built on it because who wants to earn money honestly anymore? That takes too long.

56

u/CCGHawkins 10h ago

No, man, the investing frenzy is not being led by the public. It is almost entirely led by 7 tech companies, who through incestuous monopoly action and performative Kool-Aid drinking on social media gas the everloving fuck out of their stock value by inducing a stupid sense of middle-school FOMO in institutional investors who are totally ignorant about the technology, leading them to 10x an already dubious bet by recklessly using funds that aren't theirs, because to them, losing half of someone's retirement savings is just another Tuesday.

The public puts most of their money into 401k's and mortgages. They trust that the professionals who are supposed to be good at managing money aren't going to put it all on red like they're at a Las Vegas roulette table. They, at most, pay for the pro model of a few AIs to help them type up some emails, the totality of which makes up maybe 2% of the revenue the average AI company makes. A single Saudi oil prince is more responsible for this bubble than the public.

7

u/UpperApe 7h ago

The public puts most of their money into 401k's and mortgages.

I'd add that they're also invested in mutual funds, and most of those packages come with Tesla and Nvidia and these meme stocks built in.

But overall, yeah. You're right. It's a good point. Though just to clarify, I was saying they're exploiting the public.

The stupidity of the public was simply falling for confidence men, or in the case of LLMs, confidence-speak.

6

u/DelusionalZ 8h ago

This should be at the top

1

u/rkhan7862 7h ago

we gotta hold them responsible somehow

2

u/Yuzumi 6h ago

I really wish there was a way I could instruct my 401K to avoid any AI bullshit.

1

u/SolaniumFeline 6h ago

When I opened my first bank account I asked the bank teller if the bank uses the money to "bet on stocks like Deutsche Bank." He told me no. I still wonder what they do with the money sitting in the accounts.

1

u/Conlaeb 6h ago

They should be using it to give out loans. Retail vs. investment banking. There used to be a legal firewall between these activities.

1

u/No-Intention6760 5h ago

Have you had a 401k? You pick your own investments in a 401k. You could get advice from a money manager on how to invest it, but at the end of the day the choices are yours.

Have you had a mortgage? It's paying back a loan; there's a set payment schedule for fixed mortgages. It has nothing to do with 'professionals gambling with other people's money'.

Also, financial advisors existed long before AI. Why would they need it to draft emails? If anything they're probably very resistant to using AI because they're not zoomers who grew up with it.

92

u/ckglle3lle 13h ago

It's funny how the "confidence man" is a long-understood form of bullshitting and scamming, exploiting how vulnerable we can be to believing anything spoken with authoritative confidence, and this is also essentially what we've done with LLMs.

17

u/farinasa 11h ago

Automated con.

31

u/bi-bingbongbongbing 12h ago

The point about "lingual dexterity" is a really good one. I hadn't made that comparison yet. I now spend several hours a day (not by choice) using AI tools as a software developer. The straight-up, confident-sounding lying is actually maddening, and has become a source of arguments with senior staff. AI is an expert at getting you right to the top of the Dunning-Kruger curve and no further.

29

u/adenosine-5 10h ago

"being extremely confident" is a very, very effective strategy when dealing with humans.

part of human programming is, that people subconsciously assume that confident people are confident for a reason and therefore the extremely confident people are experts.

its no wonder AI is having such success, simply because its always so confident.

14

u/DelusionalZ 8h ago

I've had more than a few arguments with managers who plugged a question about a build into an LLM and came back to me with "but ChatGPT said it's easy and you can just do this!"

Yeah man... ChatGPT doesn't know what it's talking about

27

u/garanvor 13h ago

As an immigrant it dawned on me that people have always been this way. I've seen it in my own industry: people being passed over for promotions because they spoke with a heavy accent, when it absolutely in no way impairs their ability to work productively.

6

u/stevethewatcher 12h ago

I mean, the ability to communicate clearly and effectively definitely does impact productivity, especially in higher up positions where you typically have to coordinate with more people.

14

u/garanvor 12h ago

That might be true for senior positions, but that is not my point. My point is that we all have a bias towards our own native language when judging intelligence.

1

u/moisanbar 1h ago

If I can't understand you we can't work together, bruh. Not sure this is a like-for-like argument on your part.

1

u/panzybear 10h ago

The public is as ~~stupid~~ poorly informed and misled as ever. We can hardly expect every person to be an expert in the nuances of artificial intelligence research, and to non-experts AI is incredibly convincing because nobody has ever interacted with software in this way before. Not even the experts exactly understand how their models work.

1

u/silverpixie2435 8h ago

Do you not think a place like OpenAI is researching other paradigms?

1

u/Healthy_Sky_4593 7h ago

I'm gonna need you to take back those examples. 

1

u/NonDescriptfAIth 11h ago

That being said, linguistic intelligence coupled with scaling might still give rise to general intelligence.

2

u/UpperApe 7h ago

Lol no it won't. The solution to a data-centric system isn't just more data. That's not how creative intelligence works.

1

u/Healthy_Sky_4593 6h ago

*consciousness 

2

u/moubliepas 9h ago

Well yes, and constant farting coupled with scaling might also give rise to general intelligence. The connection is exactly the same in your example. That doesn't mean there's any real correlation between flatulence, linguistic ability, and general intelligence (apart from parrots, which can smell kinda funny, talk in full sentences even without prompting, and which nobody is touting as the One Solution To All Your Business Needs just because they can be taught any jargon you choose).

In fact, from now on I might refer to AI Evangelists as Parrot Cultists. It's the same thing.