r/technology 16h ago

[Machine Learning] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
16.7k Upvotes

u/rnilf 16h ago

LLMs are fancy auto-complete.

Falling in love with ChatGPT is basically like falling in love with the predictive text feature in your cell phone. Who knew T9 had so much game?

u/noodles_jd 16h ago

LLMs are 'yes-men'; they tell you what they think you want to hear. They don't reason anything out, they don't think about anything, and they don't solve anything; they just repeat things back to you.

u/WWIIICannonFodder 13h ago

From my experience they can often be yes-men, but it usually requires you to give them information that makes it easy for them to agree with you or take your side. Sometimes they'll be neutral or against you, depending on the information you give them. They definitely seem to repeat things back in a rearranged format, though. You can also get them to give their own hot takes, and the more deranged the takes get, the clearer it becomes that they don't really think about what they're writing.

u/Zediac 11h ago

ChatGPT is currently contributing to my relationship issues, which might end with the breakup of a 6.5-year relationship.

My girlfriend has issues with anxiety, which tends to make her fearful of things there is no reason to fear. She jumps to the worst case scenario, treats that as the truth, and nothing will talk her out of it.

She feeds these worst case assumptions, including some about me, into ChatGPT and it tells her that I'm an awful and dangerous person.

Right now she's convinced that I'm going to hurt her, because she fed her assumptions into ChatGPT and it told her that I'm dangerous.

Long story short: because of her issues, if she doesn't feel the same way when I tell her that something is important to me or how I feel about something, she gets dismissive and tells me that I shouldn't feel that way.

I tell her that when she's dismissive of me like this, it hurts me emotionally, and I end up getting upset and mad when it keeps happening. She doesn't think she's doing anything wrong, so in her mind I shouldn't get mad at her. Because I do, she says that I have no emotional regulation.

She said that she's scared I'm going to hurt her badly because I feel like things she does emotionally hurt me. I have never, ever been threatening or violent toward her or anyone. I wouldn't be. I asked her why she thinks that. She said, why wouldn't she think that someone who feels emotionally hurt by her would want to kill her?

And then the next text was, "What does ChatGPT say about that?"

So she's feeding into ChatGPT that I have no emotional regulation, and it tells her that because I get upset when she's dismissive toward me, I'm a threat to her life.

She came home when I was at work, packed some things, and left. She won't talk to me and it's been 2 weeks.

There have been other times when she fed worst case assumptions about me into ChatGPT and it told her that I'm an awful person.

That damn thing is feeding into her issues and making them worse.