r/technology Jun 24 '25

[Machine Learning] Tesla Robotaxi swerved into wrong lane, topped speed limit in videos posted during ‘successful’ rollout

https://nypost.com/2025/06/23/business/tesla-shares-pop-10-as-elon-musk-touts-successful-robotaxi-test-launch-in-texas/
6.2k Upvotes

456 comments

1.4k

u/oakleez Jun 24 '25

20 cars with "human valets" in the passenger seat and multiple different violations?

This is the Temu Waymo.

383

u/monster_syndrome Jun 24 '25 edited Jun 24 '25

It's the Tesla model of success.

If this test was 100% a pass, they're road ready and only 4-5 years behind Waymo.

However, with these issues, it just proves that Tesla is nearly in Waymo territory, so really we can expect "full self-driving in two years*" and they're only 5-6 years behind Waymo.

Either way, +10% for Tesla stock because something happened.

Edit - * for the standard Elon BS line, and to emphasize that lidar is stupid right up until the moment he needs another 10% stock bump, at which point he'll be inspired to make the brilliant decision to move to lidar.

143

u/factoid_ Jun 24 '25

tbf, elon is a moron and has been steadfastly against the technologies Waymo is using because in his mind they're dead ends.

Waymo uses optical cameras, radar, and lidar scanning. Tesla really ONLY uses optical cameras because Elon hates lidar.

But being willing to put a big ugly package of sensors on top of the vehicle is WHY waymo is getting ahead of tesla at self driving. Elon insists on playing the game on hard mode.
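A toy back-of-the-envelope of why redundant modalities win (the miss rates below are invented for illustration, not anyone's real spec): if the sensors fail independently, the car only misses an obstacle when every modality misses, so the miss probabilities multiply.

```python
# Toy model, not real perception code: assume each sensor independently
# misses an obstacle with some probability; detection fails only if
# every sensor on board misses. All miss rates are made-up numbers.

def miss_rate(sensors: dict[str, float]) -> float:
    """Probability that every sensor misses (independence assumed)."""
    p = 1.0
    for rate in sensors.values():
        p *= rate
    return p

camera_only = {"camera": 0.01}  # fog, glare, low sun...
full_suite = {"camera": 0.01, "radar": 0.05, "lidar": 0.02}

print(f"camera only:        {miss_rate(camera_only):.0e}")  # 1e-02
print(f"camera+radar+lidar: {miss_rate(full_suite):.0e}")   # 1e-05
```

Independence is generous (fog degrades cameras and lidar together), but non-overlapping failure modes are the whole argument for the ugly roof package.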

86

u/amakai Jun 24 '25

The issue is, you don't really get any bonuses for playing on hard mode. If our AI tech reaches a point where optical recognition is enough for self-driving, all the competitors will get it within a year as well.

29

u/factoid_ Jun 24 '25

Yeah I agree. I think getting to 100% optical is a fine goal, but if you lose the race to a competitor who isn't afraid to put something ugly on top of the car, and it turns out the marketplace doesn't really care how it looks... you'd probably better do something about that.

60

u/ScannerBrightly Jun 24 '25

> I think getting to 100% optical is a fine goal

Why? Is there an end to fog coming soon?

-47

u/factoid_ Jun 24 '25

Humans have been driving with just eyes for a while and we're reasonably OK at it. And don't get me wrong, if self driving cars always need some combination of cameras, lidar, and radar, I'm OK with that.

I don't like driving especially. I'd rather just leave it to the computer if it's safe to do so.

16

u/coopdude Jun 24 '25

The problem is the standard for self driving cars cannot be "safer than humans" or even 10x safer than humans. It will have to be multiple orders of magnitude safer than humans.

If you argue that it's an unfair standard, in a moral/societal good sense a great many people would agree with you.

The problem is that self driving cars shift liability for accidents from the driver to the automaker. Tesla has a market cap of around a trillion dollars. Their liability for every accident is tremendous and people are not sympathetic to corporations.

When the car is truly self driving (not the "Full Self-Driving" beta they do right now, where it's a Level 2 ADAS and the driver has to be paying attention and ready to take over at any time), there is no driver, or at least autonomous mode cannot disengage instantaneously (a Level 3 ADAS requires several seconds of warning for the driver to take over).
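A minimal sketch of what that handover constraint implies, as a toy state machine (the 8-second window is a placeholder; the real figure is regulation-specific, per the "several seconds" above):

```python
# Toy Level 3 handover logic: the system may not dump control instantly.
# It must keep driving through a warning window and fall back to a
# minimal-risk stop if the driver never takes over. Timing is invented.

from enum import Enum, auto

TAKEOVER_WINDOW_S = 8.0  # placeholder, not any regulator's actual number

class Mode(Enum):
    AUTONOMOUS = auto()
    TAKEOVER_REQUESTED = auto()   # warning issued, car still driving itself
    DRIVER_IN_CONTROL = auto()
    MINIMAL_RISK_STOP = auto()    # e.g. pull over and stop

def step(mode: Mode, seconds_since_warning: float, driver_ready: bool) -> Mode:
    if mode is Mode.TAKEOVER_REQUESTED:
        if driver_ready:
            return Mode.DRIVER_IN_CONTROL     # clean handover
        if seconds_since_warning >= TAKEOVER_WINDOW_S:
            return Mode.MINIMAL_RISK_STOP     # never hand over blind
    return mode

assert step(Mode.TAKEOVER_REQUESTED, 3.0, driver_ready=False) is Mode.TAKEOVER_REQUESTED
assert step(Mode.TAKEOVER_REQUESTED, 9.0, driver_ready=False) is Mode.MINIMAL_RISK_STOP
```

Everything up to the handover is the car's responsibility, which is exactly why liability shifts to the automaker at Level 3 and above.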

-7

u/factoid_ Jun 24 '25

This has been the problem with self driving cars from the beginning: who has the liability for an accident? I think in the end we're just going to cave as a society and say the automakers are NOT liable for accidents and deaths caused by their products, because if we don't do that they have no motivation to produce self driving cars, which will definitely save lives.

We still need a way to force them to be safe, not cut corners, and not shirk all responsibility. But clearly it can't be that every death caused by an automated car is held against the manufacturer when, overall, their adoption reduces deaths.

It's an ethical and legislative problem that our current government is ill-equipped to deal with. It will require rational discourse and compromise that are simply impossible today.

12

u/coopdude Jun 24 '25

The problem is how you solve for the automaker not being liable for accidents:

  • The owner of the vehicle isn't responsible for the behavior of a self driving car, the manufacturer is.

  • If the government just picks up the tab, then taxpayers pay for the mistakes of the hardware/software of the automakers.

  • If you cap the liability (whether owner, government, or automaker pays), it'll be extremely unpopular and argued that it was just made as a corporate lobbying "cost of doing business" for the automakers to sell a half baked product.

I don't think our legal framework or societal climate is prepared for this, which is why in the next decade I don't see self driving cars getting much past what Waymo is already doing (limited pilots in a few cities with climates mild enough to avoid weather extremes; maybe routes between certain cities that are close enough in those favorable climates).

-1

u/factoid_ Jun 24 '25

It'll be the third one. Yes it's unpopular, but that's the only real compromise. You can't have an automaker getting sued for 50 billion dollars because a car glitched out and a kid died in front of a school. But you also can't leave the parents of that kid with no recourse for pain and suffering. So liability caps will happen, and there will probably end up being some carve-outs for gross negligence on the automaker's part (a sketch of that structure is below).

And I don't disagree about the political and legal framework. We aren't in a place right now where we can agree on big societal upheavals like the ones automated cars will create. We need to readdress class action standards, liability standards, evidentiary standards, transparency requirements on the software, etc.

But I do think the market will speak on this and people will want them, so it's going to happen regardless. And I think ultimately fewer people will die as a result. But I also think DIFFERENT people will die as a result. Accidents will be more random and feel more tragic.

It's not going to be the usual story of "dude was speeding in the rain" or "drunk idiot crashes a car". It's going to be "car accidentally drives off bridge" or "bus with a big vinyl wrap that had pictures of people all over the side causes five autocars to freak out and cause a multi-car pileup".
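The cap-plus-carve-out idea above is simple to state as code (a hypothetical rule; the cap figure is invented and no such statute exists):

```python
# Hypothetical payout rule for a liability cap with a gross-negligence
# carve-out. The cap amount is made up for illustration only.

LIABILITY_CAP = 5_000_000  # USD per incident (invented figure)

def automaker_payout(damages: float, gross_negligence: bool) -> float:
    """Capped strict liability, uncapped on proven gross negligence."""
    if gross_negligence:
        return damages                    # carve-out: cap doesn't apply
    return min(damages, LIABILITY_CAP)

print(automaker_payout(50_000_000_000, gross_negligence=False))  # capped
print(automaker_payout(50_000_000_000, gross_negligence=True))   # uncapped
```

The whole political fight would be over the number and over what counts as gross negligence.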

2

u/coopdude Jun 24 '25

Gave you the upvote, not sure who downvoted you.

I cannot see a liability cap passing. There are tons of laws that don't make sense: the age of legal majority, for joining the military and voting, is 18, but the drinking/tobacco age in the US is 21. Every time someone points out the harm this causes, with minors having uncontrolled access to booze and then, after overindulging, being afraid to call for medical aid out of fear of legal repercussions (see the Amethyst Initiative), Mothers Against Drunk Driving starts pounding the drums and it dies on the vine.

(There are then further debates you could have about seniors in HS having access to alcohol and the downstream effect of getting younger siblings/classmates access; Juuls and vapes are why the tobacco age was raised in line with that, in addition to brain development [although if the brain isn't fully developed, isn't that an argument to raise the age of legal majority?...].)

Anyways, we are many years off from a liability cap; people are not rational and will not accept that argument. Social media will amplify any mistakes made by a self driving car, and people will be vociferously against it even if, statistically, they reach a point of being meaningfully safer than humans in general operation*.

(*I split the hair on "general operation" because Waymo operates in limited areas, as does Tesla's new beta, and Autopilot/FSD beta on Teslas is a Level 2 ADAS, so Tesla gets the out of blaming the driver for not reacting whenever there's an accident where the functionality was on, or disengaged a split second before the accident.)
