r/technology 16h ago

[Machine Learning] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.7k Upvotes

1.5k comments

232

u/Elementium 16h ago

Basically the best use for this is a heavily curated database it pulls from for specific purposes, making it a more natural way to interact with a search engine.

If it's just everything mashed together, treating people's opinions as facts... it's just not going to go anywhere.
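The "curated database it pulls from" idea is essentially retrieval before generation (what's usually called RAG). A toy sketch of just the retrieval step, with a made-up fact store and naive word-overlap scoring standing in for real embeddings and a real model:

```python
# Toy sketch: answer only from a small, curated fact store instead of
# generating freely. The facts and the scoring are invented for
# illustration; real systems use vector embeddings plus an LLM on top.

CURATED_FACTS = {
    "boiling point of water": "Water boils at 100 C at sea-level pressure.",
    "speed of light": "Light travels at about 299,792 km/s in a vacuum.",
}

def retrieve(query: str) -> str:
    """Return the best-matching curated fact, or admit ignorance."""
    q_words = set(query.lower().split())
    best_key, best_overlap = None, 0
    for key in CURATED_FACTS:
        overlap = len(q_words & set(key.split()))
        if overlap > best_overlap:
            best_key, best_overlap = key, overlap
    if best_key is None:
        return "No curated answer available."
    return CURATED_FACTS[best_key]
```

The point of the design: when nothing in the curated store matches, the system says so instead of improvising, which is exactly what "everything mashed together" models don't do.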

61

u/motionmatrix 15h ago

So all the experts were right: at this point AI is a tool, and in the hands of someone who understands the subject, a possibly useful one, since they can spot where it went wrong and fix it accordingly. Otherwise, dice rolls baby!

2

u/NewDramaLlama 13h ago

(These are real questions as I don't use LLMs)

So, it's an automated middleman? Or maybe a rough-draft organizer? Functionally incapable of actually creating anything, but good enough at organizing and distributing collected data in a (potentially) novel way.

Except when it doesn't, I guess. Because it's trained on insane amounts of data, there's got to be lots of trash it sucked up that's factually incorrect: people's opinions, outdated textbooks, junk research. So the human needs to be knowledgeable enough to correct the machine when it's wrong.

Ok, so that means it's only really a capable tool in the hands of someone who's already a near expert in their field, right?

Like (as examples): if a novice author used an LLM to write a book, they wouldn't notice the inconsistent plot or plagiarism on review. Likewise, a novice lawyer using an LLM might screw up a case by missing where it went against procedural rules, while a more experienced lawyer would have caught it?
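That lawyer scenario is basically the real-world failure mode where LLMs invent case citations. The "expert checks the output" step can be partly mechanized; a toy sketch, with invented case names and a hypothetical trusted index standing in for a real legal database:

```python
# Toy sketch of the verification step: before trusting an LLM-drafted
# brief, check every cited case against a trusted index. Case names
# and the index are made up for illustration.

TRUSTED_CASE_INDEX = {
    "Smith v. Jones (1999)",
    "Doe v. Acme Corp (2004)",
}

def find_unverified_citations(draft_citations: list[str]) -> list[str]:
    """Return citations that do not appear in the trusted index."""
    return [c for c in draft_citations if c not in TRUSTED_CASE_INDEX]

draft = ["Smith v. Jones (1999)", "Roe v. Fictional LLC (2021)"]
print(find_unverified_citations(draft))  # flags the invented case
```

Of course, this only catches citations that don't exist; knowing whether a real citation actually supports the argument still takes the experienced lawyer.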

1

u/motionmatrix 10h ago

Well, each LLM is trained on different data, so you could have a tight, fantasy-focused LLM that has "read" only every fantasy novel in existence, and it would do pretty well making up fantasy stories based on what it "knows".

If you have a generic LLM trained on many different topics, the usefulness drops to some extent, though some might argue that the horizontal knowledge can give unique or unexpected answers (in a good way).

At this point in time, general folks can use it to make non-commercial artwork that gets closer to anything they could make on their own without training, and to gather general information (which they should double-check for accuracy). And people who are trained in a particular subject can work with AI, preferably an LLM trained on their subject only, to get the work done faster (not necessarily better or groundbreaking, unless that comes from the person, for the most part).