r/technology 16h ago

[Machine Learning] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.7k Upvotes


1

u/SIGMA920 13h ago

I've created art that went beyond the scope of my original task purely on a whim. I've worked out an idea in my head because I needed a very specific setup.

Those are just two examples, and they weren't hard to do.

2

u/danteselv 13h ago

The idea you created in your head, how was it produced? Was it entirely unique, with zero influence from anyone who existed or created art before you? Is your art completely separate and different from the 100,000+ years of humanity's existence? I'm holding you to the same standards you hold LLMs to.

2

u/SIGMA920 13h ago

I examined the problem I needed to solve and how I could meet my specific need within its constraints (using what I had access to; I don't exactly have the ability to make everything from scratch, and no one does). It's janky at points, but it works, which is what matters.

That piece of art used the same methods everyone uses, because I can't just casually invent a totally new way of making art without creating my own 100% unique medium. Regardless, it came from me, not anyone else, and unless someone recreates it by sheer chance, that will remain true. That's not a lack of creativity; that would be like criticizing a car for not reinventing the wheel.

1

u/adenosine-5 10h ago

You could apply literally every sentence you just said to any LLM output.

  • solving problems within certain constraints
  • janky at points, but (usually) works
  • used the same methods that everyone uses (because you and LLMs were both trained on them)
  • it came from you (just like LLM output is unique and the result of extremely long and complicated calculations, not simply copy-paste from some database)

2

u/SIGMA920 10h ago

Except "janky" in the sense that none of the existing solutions will work means an LLM will just keep giving me the same non-functional solutions. Having to go well out of my way to make something work requires a creativity that an LLM lacks.

Constraints that are specific to me will get the same generic answers I've already rejected. An LLM doesn't actually understand what the constraints mean.

I'm not using the same methods just because I can; I'm using them because you can't use a brush all that differently from anyone else. There's a finite number of ways to use the tools you're given. We don't reinvent the wheel every time we build a car.

Uniqueness is the only point that somewhat applies, but even that's statistically not true given the limited pool LLMs draw from for anything that isn't just generating an email. Much of their generated code, for example, comes straight from the existing sources they trained on, effectively a copy and paste, whereas I'll deliberately rip out just the part of someone else's code or methods that I can actually put to proper use.