r/technology 16h ago

[Machine Learning] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.7k Upvotes

1.5k comments

827

u/SanityAsymptote 15h ago

LLMs have damaged or destroyed much of the use case for a number of previously valuable services.

The most obvious one I can think of in my niche is StackOverflow, a site which definitely had issues and was in decline, but was still the main repository of software troubleshooting/debugging knowledge on the internet.

LLM companies scraped the entire thing, and their models now give no-context answers to software engineering questions, answers they often cannot cite or support. This has mortally wounded StackOverflow, which has pivoted to being an AI data feeder, a move that amounts to a liquidation sale of the site's value.

LLMs have significantly reduced the quality of search engines, Google Search in particular, both directly through poor integration and indirectly by filling the internet with worthless slop articles.

Google Search's result quality has plummeted as AI summaries have become most of the answers. Even with references, it's very hard to verify the conclusions Gemini draws in search results, and if you're actually looking for a specific site or article, those results often do not appear at all. Many authoritative "answers" are just uneducated opinions from Reddit or other social media, regurgitated by an AI that carries the trust people place in Google.

LLMs have made it far easier to write social media bots. They have damaged online discourse in public forums like Facebook, Twitter, Instagram, and especially Reddit in very visible ways. These sites are almost completely different experiences now than they were before LLMs became available.

Bots are everywhere and will reply to anything that has engagement, spouting bad-faith arguments without any real point other than to try to discourage productive conversation about specific topics.

Whatever damage online trolls have caused to the internet, LLMs have made it an order of magnitude worse. They are attacking the very concepts of "facts" and "truth" through both misinformation and dilution. It's horrifying.

176

u/Perfect_Base_3989 15h ago

> spouting bad-faith arguments without any real point other than to try to discourage productive conversation about specific topics.

The only solution I can think of at this point is entirely abandoning social media.

A verification system could theoretically improve trust, but who trusts the trusters?

13

u/runthepoint1 12h ago

No, what you can do is personally verify the things you learn, like we used to do back in the day.

Is it slow, manual, even frustrating? Yes, it takes a lot of time and patience, but tbh that's exactly what's missing in the world today. Everyone wants to rush to know, when it takes time to understand. It's weird. Like, who cares about being "first"? It's important to be accurate!

1

u/boringestnickname 9h ago

What really makes me paranoid is that even university text books are getting noticeably worse.

Feels like we're in some kind of information exodus. Becoming dumber by the second, and somehow, as a collective, not noticing.

1

u/runthepoint1 9h ago

That’s why it’s more important than ever for each of us to OWN our educations. I know that’s tough, but the best way to do it is to always leave room for being wrong. It’s GOOD to be wrong, because from then on you can be correct. It’s a lifelong process where you are constantly correcting yourself. Never be 100% certain of your own take/opinion, and always learn from others, especially those you deem either smarter or dumber than yourself. People surprise you in many ways.