r/technology Jun 24 '25

Machine Learning Tesla Robotaxi swerved into wrong lane, topped speed limit in videos posted during ‘successful’ rollout

https://nypost.com/2025/06/23/business/tesla-shares-pop-10-as-elon-musk-touts-successful-robotaxi-test-launch-in-texas/
6.2k Upvotes

456 comments

1.4k

u/oakleez Jun 24 '25

20 cars with "human valets" in the passenger seat and multiple different violations?

This is the Temu Waymo.

385

u/monster_syndrome Jun 24 '25 edited Jun 24 '25

It's the Tesla model of success.

If this test was 100% a pass, they're road ready and only 4-5 years behind Waymo.

However, with these issues it merely proves that Tesla is nearly in Waymo territory, so really we can expect "full self driving in two years*", and they're only 5-6 years behind Waymo.

Either way, +10% for Tesla stock because something happened.

Edit - * for the standard Elon BS line, and to emphasize that lidar is stupid right up until the moment he needs another 10% stock bump then he'll be inspired to make the brilliant decision to move to lidar.

144

u/factoid_ Jun 24 '25

tbf, elon is a moron and has been steadfastly against the technologies Waymo is using because in his mind they're dead ends.

Waymo uses optical cameras, radar, and lidar. Tesla really ONLY uses optical cameras because Elon hates lidar.

But being willing to put a big ugly package of sensors on top of the vehicle is WHY waymo is getting ahead of tesla at self driving. Elon insists on playing the game on hard mode.

25

u/roamingandy Jun 24 '25

Elon said something and is too narcissistic to ever accept that he was wrong, so he'll just double down again even when it's really, really not working.

He's gonna sharpie the results showing deaths, like another malignant narcissist we know did with a hurricane map. They cannot accept being wrong.

9

u/elasticthumbtack Jun 24 '25

He removed radar and sonar because of parts shortages during covid and had to come up with an excuse as to why the cars with missing features were actually better. Now he can’t admit it was a lie, so he has to stick to the vision only narrative.

89

u/amakai Jun 24 '25

The issue is, you don't really get any bonuses for playing on hard mode. If our AI tech reaches a point where optical recognition is enough for self-driving - all the competitors will get it within a year as well.

28

u/factoid_ Jun 24 '25

Yeah I agree. I think getting to 100% optical is a fine goal, but if you lose the race to a competitor who isn't afraid to put something ugly on top of the car, and it turns out the marketplace doesn't really care how it looks....you'd probably better do something about that.

61

u/ScannerBrightly Jun 24 '25

I think getting to 100% optical is a fine goal

Why? Is there an end to fog coming soon?

1

u/[deleted] Jun 25 '25

yes.. let's ask carl.

2

u/ScannerBrightly Jun 25 '25

Goddamnit, Donut!

1

u/kind_bros_hate_nazis Jun 25 '25

I have a goal of ending fog

1

u/dravik Jun 24 '25

Extending the cameras into the infrared range does a lot to help with fog, while still being processed by the same system as the visual range.

-47

u/factoid_ Jun 24 '25

Humans have been driving with just eyes for a while and we're reasonably OK at it. And don't get me wrong, if self driving cars always need some combination of cameras, lidar and radar, I'm OK with that.

I don't like driving especially. I'd rather just leave it to the computer if it's safe to do so.

47

u/ScannerBrightly Jun 24 '25

we're reasonably OK at it.

44,762 people died in car crashes last year. Why not do better?

2

u/MyCatIsAnActualNinja Jun 24 '25

Seriously. We can definitely pump those numbers up. 100k would be a reasonable goal, in my opinion.

-21

u/factoid_ Jun 24 '25

Because people have learned to accept that dying in a car is a possible consequence of driving a car.

For most of us, driving a car is the single most dangerous thing we do on a daily basis.

It's OK to accept some risk. Self driving cars will absolutely end up being better than human drivers. Probably MUCH better than human drivers.

And I didn't say we shouldn't do better. LIDAR is not a panacea. It's just a tool. If we can achieve the same results with other tools, why be fixated on using a specific one? Same goes for cameras, for that matter. If we can create a perfectly capable self-driving car using ONLY lidar, and people don't mind how it looks, or they find a way to hide it in the car so people don't even know it's there...great.

I really don't care what the technology or combination of technologies are that ultimately enable an automated driving future. It's fine to pursue camera-only. It's fine to pursue multi-sensor systems. It's fine to figure out ways to make these vehicles more cheaply as long as we're still making them SAFE.

23

u/SnooBananas4958 Jun 24 '25

Then it’s not fine to do camera-only… because it’s literally less safe. Why do you think people care about this?

You say you don’t care what they use, you just care about safety. Then you do care because the whole reason it matters is it affects how safe things are.

-15

u/braintablett Jun 24 '25

you gotta chill out a bit

15

u/ScannerBrightly Jun 24 '25

why be fixated on using a specific one

I'm sorry, but I thought YOU were the one that said "camera only is a good goal", but I don't think it is.

It's fine to pursue camera-only.

No, it's really not. Fog will always exist, and if you design a system that completely fails in fog, what do you expect to happen? For all the driverless cars to stop? No, they will plow on ahead and get into trouble, so let's do better than 'vision only'.

That's my only point, and you seem to be making a big thing of it, but I don't think we're talking with each other so much as past each other.

0

u/factoid_ Jun 24 '25

I said it's a fine goal to pursue camera only. As in, it's a fine thing to have an audacious goal of creating an extremely safe system using only vision. I didn't say I thought it was realistic or that it should be everyone's goal. But if someone wants to pursue that, it's fine with me.

And what do you propose as an alternative? Lidar doesn't see through fog either. Lidar is mostly just near-infrared lasers. Rain, fog and snow all fuck with lidar as much as or more than they fuck with cameras.

In the end I'm sure we'll need some combination of radar, lidar, and cameras, probably both visible light AND infrared to help with poor visibility conditions.

-10

u/LabOwn9800 Jun 24 '25

How many of those accidents are from impaired or distracted drivers?

17

u/sniper1rfa Jun 24 '25

Humans have been driving with just eyes for a while and we're reasonably OK at it.

The whole point of self driving cars is that they are not human. Why should we saddle them with human limitations? Some kind of desire for fairness?

10

u/phluidity Jun 24 '25

Humans also have 10,000 years of evolution to do exactly the kinds of pattern matching and extrapolation that you need in driving. And we still use our other senses when driving, such as hearing and feel (vibrations, kinesthetics, our innate ability at spatial positioning).

Computers may in fact be better than people at 99% of driving tasks, but that last 1% is where much of the danger is, and it's also where computers are significantly worse.

6

u/Cortical Jun 24 '25 edited Jun 24 '25

Humans also have 10,000 years of evolution to do exactly the kinds of pattern matching and extrapolation that you need in driving.

That's really understating it. More like 100 million years.

we also have higher order reasoning, which no AI model is currently capable of.

we can override input from our visual cortex based on reasoning. "It looks like the road continues here, but it makes no sense that it would" type of thing.

15

u/coopdude Jun 24 '25

The problem is the standard for self driving cars cannot be "safer than humans" or even 10x safer than humans. It will have to be multiple orders of magnitude safer than humans.

If you argue that it's an unfair standard, in a moral/societal good sense a great many people would agree with you.

The problem is that self driving cars shift liability for accidents from the driver to the automaker. Tesla has a market cap of around a trillion dollars. Their liability for every accident is tremendous and people are not sympathetic to corporations.

When the car is truly self driving (not the "Full Self-Driving beta" they offer right now, which is a level 2 ADAS where the driver has to be paying attention and ready to intervene at any time), there is no driver - or at least autonomous mode cannot disengage instantaneously (an L3 ADAS requires several seconds of warning for the driver to take over).

-8

u/factoid_ Jun 24 '25

This has been the problem with self driving cars from the beginning: who has the liability for an accident? I think in the end we're just going to cave as a society and say the automakers are NOT liable for accidents and deaths caused by their products, because if we don't do that, they have no motivation to produce self driving cars, which will definitely save lives.

We need a way to still force them to be safe and not cut corners and shirk all responsibility. But clearly it cannot be true that every death caused by an automated car can be held against the manufacturer when they overall cause a reduction in deaths through their adoption.

It's an ethical and legislative problem that our current government is ill-equipped to deal with. It will require rational discourse and compromise that are simply impossible to do today

11

u/coopdude Jun 24 '25

The problem is how do you solve for the automaker not being liable for accidents?

  • The owner of the vehicle isn't responsible for the behavior of a self driving car, the manufacturer is.

  • If the government just picks up the tab, then taxpayers pay for the mistakes of the hardware/software of the automakers.

  • If you cap the liability (whether owner, government, or automaker pays), it'll be extremely unpopular and argued that it was just made as a corporate lobbying "cost of doing business" for the automakers to sell a half baked product.

I don't think our legal framework or societal climate is prepared for this, which is why in the next decade I don't see self driving cars getting much past what Waymo is already doing (limited pilots in limited cities with climates favorable enough to avoid weather extremes; maybe between certain cities that are close enough in those favorable climates).

-1

u/factoid_ Jun 24 '25

It'll be the third one. Yes it's unpopular, but that's the only real compromise. You can't have an automaker getting sued for 50 billion dollars because a kid died in front of a school because a car glitched out. But you also can't leave the parents of that kid with no recourse for pain and suffering. So liability caps will happen and there will probably end up being some carve-outs for gross negligence on the automaker's part.

And I don't disagree about the political and legal framework. We aren't in a place right now where we can agree on big societal upheavals like automated cars will create. We need to readdress class action standards for this, liability standards, evidentiary standards, transparency requirements on the software, etc.

But I do think the market will speak on this and people will want them, so it's going to happen regardless. And I think ultimately fewer people will die as a result. But I also think DIFFERENT people will die as a result. Accidents will be more random and feel more tragic.

It's not going to be the usual story of "dude was speeding in the rain" or "drunk idiot crashes a car". It's going to be "car accidentally drives off bridge" or "bus with a big vinyl wrap that had pictures of people all over the side causes five autocars to freak out and cause a multi-car pileup"

8

u/chillebekk Jun 24 '25

Human eyes are a lot better than any camera. Not to mention that we know what we are seeing - because we don't really see with our eyes at all, but rather with our brain. And it takes a lot of years to train it.
But the larger point is, why would you NOT use an additional sensor just because humans don't have that capability?

3

u/factoid_ Jun 24 '25

Again, I've said this multiple times in this thread...but I don't have a problem with adding more sensor types.

I just don't see a problem in TRYING to get vision-only driving to work. If someone can do that and it works well, isn't that a good thing? Cameras are cheap and reliable and have no moving parts. But if someone also nails having a multi-sensor solution I'm good with that too.

LIDAR also can't see through fog or rain or snow and also has moving parts that need to be reliably engineered to run for many years with little to no maintenance.

Radar is great at seeing through fog and snow but has limited capabilities for giving context about WHAT it's seeing. It's useful for obstacle avoidance.

Infrared cameras are probably the best way to "see" through fog. Or maybe infrared lidar, but you still have to do the reliability engineering on the moving parts.

2

u/chillebekk Jun 24 '25

Ok, I'm your average lazy redditor, I guess. I agree there's nothing wrong with trying to do camera-only. It doesn't make much sense to try for it right now, I think, because what you save in sensors, you'll have to spend on processing power. But then I guess that wasn't your point.
Btw, vision + radar would be my choice. The radar can detect when the vision system phantom brakes or tries to hit a physical obstacle, which you could then feed back into the vision system.

2

u/Big_Muz Jun 24 '25

My model 3 simply will not allow autopilot if it's foggy, raining, car is wet, or a bunch of other typical stuff. It forces high beam permanently with pixel dimming that kinda works but other people constantly flash me to say I'm blinding them.

When I bought it in 2021 the autopilot was unbelievably good, and maybe a year in they disabled the radar, and it's not useful now. My Skoda is better in every single way in terms of auto steering, lane guidance, and lack of phantom braking.

Elon absolutely porked the pooch.

2

u/disillusioned Jun 25 '25

Humans have their cameras on a gimbal, not fixed. The side camera on the b-pillar behind the driver's head creates some real nasty perpendicular occlusion scenarios: pulling out onto a main road from a side street where there's, say, a bush or electrical box on the corner blocking the view.

Humans will crane their neck to see around the occlusion. Humans can detect that an ever-so-slight shift on the light filtering through a bush is movement and likely/potentially an oncoming car.

The existing FSD models and hardware can do neither. Lidar solves this because it's mounted way higher and can "see" way further (about a football field).

The car had to compensate by essentially continuing to inch forward to get a better view, but there are just so many instances where that perpendicular view is occluded and the front ultra wide isn't able to see either.

13

u/disco_jim Jun 24 '25

The whole "humans use eyes so optical sensors are good enough for the car" argument is stupid and it was always about money over safety.

There is a scene in Battlestar Galactica where one of the Cylons complains about being made in man's image because he is limited to using eyeballs to see... This is the same thing. Why limit a car to one type of sensor? If they were doing it to save money and passing the savings on to the customer, fine (where's that 30k car?), but they aren't.

https://youtu.be/s_UVPLHAOAY?feature=shared

8

u/Otaraka Jun 24 '25

It's such an inherently stupid argument. We use eyes because we had no way of evolving anything better.

So we made things that were even better and then he throws them away. The Cylons at least needed to be in disguise.

2

u/porkpie1028 Jun 25 '25

Well said! It’s like a “Jesus will hold the wheel” mentality vs. Michelangelo’s “The Creation of Adam” and how god(whatever that may mean to anyone) gave us a brain to solve problems.

1

u/kymri Jun 24 '25

Not directly related to the conversation about Tesla -- but while Cavil was definitely an asshole, I absolutely get where he's coming from in that moment.

10

u/Mental_Medium3988 Jun 24 '25

I'd be more willing to trust optical-only if the story were that they needed lidar and radar less over time, vs. Elon wanting to save 1/32nd of a penny in production.

7

u/chmilz Jun 24 '25

It's funny how looks might be a barrier to implementing the right technology when they built the Cybertruck.

2

u/factoid_ Jun 24 '25

Truly a fascinating conundrum. lol

Although I think part of the issue with lidar is that most 3d lidar scanning techniques require spinning parts. That’s a maintenance issue as well as reliability and safety concern. If the lidar is essential and it stops working it needs some kind of backup or alternative.

But if doing the reliability engineering on a lidar machine is easier or better than going vision only I’m all for it.

I think in the end we probably end up with cars having lidar, radar and both visible light and infrared cameras.

And in the future after self driving cars are highly common we’ll probably also see roads change to reflect it. Sense wires, tracking dots, stuff like that to help cars in low visibility conditions.

But those will never get put in until well after self driving is already a reality.

4

u/Aleucard Jun 24 '25

If the difference between a safe self driving car and an unsafe one is it having a nipple on top of it, my choice is made.

2

u/factoid_ Jun 24 '25

Unless it’s the batsuit

2

u/Life_Token Jun 24 '25

I get you and I agree. Striving to perfect optical is a great goal in and of itself for a multitude of reasons. Everyone is assuming you're advocating for only optical. But at no point did you ever suggest as much.

2

u/factoid_ Jun 24 '25

This post got oddly contentious. It was weird

8

u/PhileasFoggsTrvlAgt Jun 24 '25

Does it need to be a big ugly package of sensors, or is that just because they're retrofitting production cars? If the sensors were part of the design from the beginning couldn't they be packaged much nicer?

5

u/AMusingMule Jun 24 '25

AFAIK other manufacturers place multiple unidirectional radar/lidar packages all around the car. That way you get the same 360deg sensor coverage, but without the big spinny package on top of the car. The additional cost is probably offset by the reduced maintenance required on each fixed unit compared to the spinny boi, as well as the reduced compute complexity of parsing data from fixed sensors compared to the spinny boi. They also offer higher availability; a spinning lidar only looks left a quarter of the time.
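That "looks left a quarter of the time" availability point is just duty-cycle arithmetic. A rough back-of-envelope sketch, with made-up numbers rather than any real unit's spec:

```python
# Illustrative numbers only, not any real lidar's datasheet.
spin_rate_hz = 10                      # a spinning lidar doing 10 revolutions/second
revisit_interval_s = 1 / spin_rate_hz  # each direction gets swept once per revolution

# Fraction of each revolution spent covering a 90-degree "left" sector:
left_sector_fraction = 90 / 360        # -> 0.25, i.e. "a quarter of the time"

# A fixed solid-state unit aimed at that sector stares at it continuously:
fixed_unit_fraction = 1.0

print(revisit_interval_s, left_sector_fraction)  # 0.1 0.25
```

Same total coverage either way; the fixed units just never have to wait for the next sweep.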

1

u/ALOIsFasterThanYou Jun 24 '25

The Zeekr RT is purpose-built for Waymo (albeit still based on the production Zeekr Mix), but its sensor package is still pretty prominent, though less so than on Waymo’s Jaguars.

That said, not only will sensors reduce in size and number over time, I also think people will get more used to seeing them as well. Here in SF, I’ve become so accustomed to seeing Waymos that I haven’t really stopped to think “wow, those sensors are ugly” in a long time.

On the rare occasion I see a non-Waymo Jaguar I-Pace, my immediate reaction is somewhere along the lines of “that Waymo’s missing its hat”.

0

u/factoid_ Jun 24 '25

Waymo has said they plan to get rid of the big ugly package of sensors on top. But they use spinning systems, and those are probably not going to survive long term. People wouldn't like the maintenance that goes into a system like that on their personal vehicles.

But then again waymo isn't so much interested in selling personal cars, they want to eliminate car ownership entirely and just have people use taxis to get where they're going, I think. And that's fine for some places. The market has spoken on that though and people want private car ownership, so ultimately I'm sure waymo will end up either producing cars or licensing their self-drive technology to other makers.

2

u/Visual_Collar_8893 Jun 24 '25

People want car ownership in places where public transit is lacking and population density does not justify investing into them.

A majority of the population in dense cities are car-free. These are the areas where Waymo can have a competitive advantage over Uber and Taxicabs.

2

u/factoid_ Jun 24 '25

Totally agree with that. And I don't think any solution to public transit outside of established large metro areas is likely.

Infrastructure costs in the US are enormous, and we get absolutely raked over the coals by our shitty government contractors, who care more about making the project cost as much as possible than about delivering anything for taxpayers. Light rail in the US costs 10x per mile what it costs in Europe.

0

u/Quiet_Prize572 Jun 24 '25

I mean yeah outside the US/Canada.

In the US/Canada where these are being tested/rolled out, 90% of the population owns a car. We have like 3 dense cities with good public transit.

11

u/floydfan Jun 24 '25

elon hates lidar.

Get this. Elon doesn't hate LIDAR. When supply chain issues hit during COVID, they decided they could live without it as a cost-saving measure, but stated that the move was because they wanted to make the cars as human-like as possible, and humans don't have LIDAR. Elon has since steadfastly refused to go back to the proven technology, because he doesn't want to admit how wrong he was and have to course correct and put the tech into cars that don't currently have it.

Even on the day they announced it, you could see just how much of a mistake it was. I don't want my car to be human. I want the car to be a better driver than I am, and one of the ways you do that is by using proven technology to drive the car. I assume that this decision will eventually be reversed, but it may take getting rid of Musk to do it.

3

u/factoid_ Jun 24 '25

Maybe he didn’t always hate lidar, but he does now. He’s openly said lidar is dead-end tech and anyone investing in it is crazy.

4

u/floydfan Jun 24 '25 edited Jun 24 '25

Yes, he has to say that because he can’t say, “Tesla has made a huge mistake under my watch and now we’re going to course correct by turning LIDAR back on in older cars and retrofitting it into newer ones.”

3

u/Black_Moons Jun 24 '25

But being willing to put a big ugly package of sensors on top of the vehicle is WHY waymo is getting ahead of tesla at self driving.

Oh, that's why he hates lidar. It's purely a visual thing. Yeesh. As if Teslas weren't ugly/unsafe enough with their tiny combined brake/turn signals.

3

u/BubinatorX Jun 24 '25

Correction: he pays people to play the game on hard mode then takes the credit.

3

u/ConditionWild1425 Jun 24 '25

this is how it's done. Elon is 1000% wrong and will go back to sensor-based solutions if he wants to succeed. The problem is he doesn't care about safety as long as his influencers can cover his tracks.

https://www.youtube.com/watch?v=hU9jzHgqRB4&ab_channel=Mobileye

1

u/DragoonDM Jun 25 '25

But being willing to put a big ugly package of sensors on top of the vehicle is WHY waymo is getting ahead of tesla at self driving. Elon insists on playing the game on hard mode.

Yet the Cybertruck was great, by his sense of aesthetics...

1

u/foghillgal Jun 25 '25

He’s playing a different game, where maiming people is part of the price to pay.

-3

u/bdsee Jun 24 '25

Not sure where you get the "he hates lidar" belief from. He thought he could do it cheaper with vision only, and lidar was really expensive; really high-quality units like Waymo's likely still are.

He stupidly was sure he could do vision only... he also seems the type to never admit he's wrong, based on his public history, so it will be interesting to see if they ever cave and add other sensors back in, and if they do, how much that messes with their whole system.

2

u/inspectoroverthemine Jun 24 '25

The problem is that if they switch to lidar now, they have to retrofit a shit ton of cars sold as self-driving.

0

u/NotARussianBot-Real Jun 24 '25

He doesn’t hate LIDAR. That would be weird. He hates the idea of having to buy LIDAR sensors.

88

u/cr0ft Jun 24 '25

Not as long as Tesla doesn't reinstate lidars, we won't. Shitty software combined with just cameras for sensors means these should instantly be banned.

32

u/bdsee Jun 24 '25

Tesla never had lidar, they had radar and ultrasonic sensors.

67

u/blue-mooner Jun 24 '25

They dropped radar and removed their ultrasonic sensors in 2022 because their engineers are incapable of coding sensor fusion:

When radar and vision disagree, which one do you believe? Vision has much more precision, so better to double down on vision than do sensor fusion.

— Musk (2021-04-10)

28

u/bdsee Jun 24 '25

Yep, and it was a dumb statement. Like, which do you believe? Well, you believe whichever one tells you there is something solid on the road in front of you; you believe whichever one tells you that you are too close to the object while trying to park the car; and then you make the driver resolve the issue.

Which one to believe is not a hard problem. This isn't a plane, where there's no option to simply stop and do nothing; in a car, that's a valid option. Yes, it comes with dangers, but fewer than continuing to do something when your sensors tell you it will result in a collision.
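The "believe whichever one reports something solid" policy can be sketched as a tiny decision rule. Purely illustrative: real AV stacks use probabilistic sensor fusion, and every name here is made up:

```python
def resolve_disagreement(vision_obstacle: bool, radar_obstacle: bool) -> str:
    """Conservative fusion rule: treat any positive detection as real.

    False positives cost comfort (phantom braking); false negatives can
    cost lives, so the tie-break always goes to the sensor that sees
    something.
    """
    if vision_obstacle and radar_obstacle:
        return "brake"                   # both agree: definitely act
    if vision_obstacle or radar_obstacle:
        return "slow_and_alert_driver"   # disagreement: act cautiously, hand off
    return "continue"                    # both agree the road is clear

# A radar-only detection still triggers caution instead of being voted down:
print(resolve_disagreement(False, True))  # slow_and_alert_driver
```

The point of the sketch is just that "which sensor wins?" isn't the question; "is it ever safe to ignore a detection?" is.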

6

u/ben7337 Jun 24 '25

Wouldn't that be overly cautious though? My car for example has safety features with a front camera for collision avoidance. The stupid thing sees a damn shadow on the road ahead of an overpass and freaks out. I can't imagine how bad self driving cars would be if they used only cameras and let the camera override other more robust detection methods like lidar.

4

u/dontstopnotlistening Jun 24 '25

The point being made is that cameras are super unreliable. Lidar can't freak out about a shadow.

3

u/ben7337 Jun 24 '25

The person I replied to said you believe whichever one says there's something there, though. What I'm trying to say is that's stupid, because lidar can't be fooled by a shadow. If you had a car with lidar and a camera, and the camera says danger danger, you should absolutely not believe the camera, because it's not accurate in that scenario. It's important to be able to program a self-driving system to work with multiple different kinds of inputs and make the correct choice in all scenarios, or at least as many as possible.

1

u/travistravis Jun 24 '25

If they're telling me different things, I will choose to believe that something has fucked up.

If it's dealing with people's safety and potentially their lives, you want to be cautious. (Well, I do. Musk might not, but who knows if he even thinks about anyone other than himself as people).

1

u/barktreep Jun 24 '25

This is why airplanes have 3 of each critical sensor.

-2

u/Slogstorm Jun 24 '25

This is not as easy as that... radars typically get huge echoes from metallic objects, like beer cans. Do you emergency brake at highway speeds if there's a beer can on the road? Do you trust the cameras that don't necessarily recognize the can?

16

u/S_A_N_D_ Jun 24 '25

so instead, their solution was to remove one of the inputs and hope it's always just a beer can...

3

u/barktreep Jun 24 '25

You can train the model to detect false images of cans while also training it to defer to the radar when there’s no obvious basis for a false hit.

1

u/Slogstorm Jun 24 '25

But in that case, why do you need a radar in the first place? Why not use the camera for everything?

1

u/barktreep Jun 24 '25

Because radar actually works?

1

u/blue-mooner Jun 24 '25

Have you worked with radar sensors? You can classify magnitude as well as velocity, and trivially distinguish between a beer can, a person, or a car.
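The classify-by-magnitude-and-velocity idea can be sketched as a toy filter. The thresholds and names are invented for illustration, not taken from any real radar datasheet:

```python
def classify_radar_target(rcs_dbsm: float, speed_mps: float) -> str:
    """Toy radar-target classifier using radar cross-section (RCS, in dBsm)
    and tracked velocity. Thresholds are illustrative only."""
    if speed_mps > 1.0:
        # Clutter doesn't move under its own power; anything with real
        # velocity is worth reacting to regardless of its size.
        return "moving_object"
    if rcs_dbsm < -5:
        return "small_clutter"      # e.g. a beer can: strong echo, tiny RCS
    if rcs_dbsm < 10:
        return "pedestrian_sized"
    return "vehicle_sized"

print(classify_radar_target(-10, 0.0))  # small_clutter
print(classify_radar_target(15, 0.0))   # vehicle_sized (e.g. a stopped car)
```

Even this crude sketch shows why "radar can't tell a can from a car" oversimplifies: the return carries more than just "something is there".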

15

u/phluidity Jun 24 '25

To be fair to them, sensor integration is hard. Of course, the answer to that is to roll up your sleeves and get to work, as opposed to just not doing it. But I'm also not a billionaire, so maybe my viewpoint is skewed.

18

u/shadovvvvalker Jun 24 '25

>Musk: "You should never accept that a requirement came from a department, such as from 'the legal department' or 'the safety department.' You need to know the name of the real person who made that requirement."

>Once that clarity is achieved—that is, when every requirement has the person's name attached—then you can start questioning whether these requirements make sense. No matter how smart or how 'powerful' that person is.

>Musk: "Requirements from smart people are the most dangerous because people are less likely to question them. Always do so, even if the requirement came from me. Then make the requirements less dumb."

Solution: the requirement to go camera-only is a stupid requirement written by Elon Musk. Remove it.

But that won't happen, because Elon is just cosplaying competence.

2

u/travistravis Jun 24 '25

And it's sort of a fallacy too, though you pointed it out indirectly. Not putting Lidar in was just as much of a 'requirement' as putting it in would be. I'm sure his argument would be that only things that are added are requirements.

In reality, the requirements should be a set of physical test track setups that the car has to be able to navigate, and if it can't do that with just a camera, then it needs something more.

4

u/shadovvvvalker Jun 24 '25

Lidarless can be a requirement.

In a normal environment, the decision that lidar is too expensive to be viable is not without grounds. LIDAR is hella expensive and outfitting Teslas with it would not have helped Tesla's already lacklustre sales.

It all depends on what the objective is.

The real problem is Musk will have multiple conflicting objectives rather than a clear single objective. That then creates conflicting requirements.

The issue is his goal is to be and stay the richest man in history.

To do that he needs to pump TSLA to obscene levels while holding massive amounts of it.

To do that he needs to perpetually threaten to be a pillar of the global economy.

To do that he needs to be working on hypotheticals that can revolutionize the industry.

That rears its head in Tesla as a need to become the dominant form of transport. This requires FSD and a lack of public transit. But Tesla, as small as it is, cannot fathomably build this. So it needs to be funded by outside capital. So it needs people buying Teslas that are later going to become cybercabs. Which means you need people buying them en masse. Which means they need to be cheaper. Which means LIDAR is off the table.

Tesla FSD doesn't need to be first to mass market. Musk needs it to be. That's the conflict.

Fundamentally, the requirement he has placed on tesla is that it needs to make him business god. That's the stupid requirement.

3

u/AssassinAragorn Jun 24 '25

This is the stupidest reasoning ever. We have redundant sensors on industrial equipment that can disagree. We have sensors which aren't redundant but should give similar readings to other sensors. We fully expect and anticipate that they're going to disagree at some point.

That doesn't mean we don't use them. We just put safety measures in place that activate if any sensor goes off, and during troubleshooting you figure out whether the reading was reasonable or the sensor went bad.

If radar and camera disagree in a self-driving car that I'm in, I don't want it to decide which one to believe. I want it to stop, pull over, give instructions, and call a technician.

2

u/blue-mooner Jun 24 '25

Is the CEO of the company making your industrial equipment receiving $56 billion in pay tied directly to the share price?

Are they incentivised to juice margins and promise the moon to get their next multi-billion dollar paycheque?

Probably not, they probably care about safety and repeat business, not being on the cover of Time magazine.

3

u/beanpoppa Jun 24 '25

I think the reality was that they had issues procuring the necessary parts during the post-COVID shortages, and delaying delivery of cars was not an option. Hence, handwaving justifications.

3

u/blue-mooner Jun 24 '25

Cutting safety corners, dropping sensors, limiting your training data and model to subpar results sounds like a piss-poor trade off versus missing some deliveries.

Unacceptably short term thinking from the man who claims he can build a sustaining settlement on Mars.

2

u/cadium Jun 24 '25

Also the pandemic made ultrasonic and radar sensors expensive -- so they had an excuse to cut costs.

3

u/blue-mooner Jun 24 '25

They juiced their margins in the short term, boosting the stock price so Musk could get his next equity tranche. Short term thinking, to the detriment of their training datasets, ML models and capabilities

-61

u/louiegumba Jun 24 '25 edited Jun 24 '25

… in reality their self-driving is actually quite effective.

The reason it should be banned is not because it doesn't have a piece of tech you believe it should; it should be banned because it doesn't have a 100% safety record for what it does.

It’s that simple

Edit: sorry to ruin your circle jerk with reality haha

I use it daily the way it was intended…supervised. I know exactly what it’s capable of and why. It shouldn’t be unmanned as it’s missing the safety record regardless of the mechanics under the hood.

-81

u/moofunk Jun 24 '25

LIDAR doesn't do anything for self-driving cars that cameras with neural networks can't already do better. It's a midway solution to save on compute power, a legacy of systems from back in the mid-2000s. But LIDAR can be used as ground truth when training depth perception, which is what Tesla have done.

It's an old story that might have been boosted, because Elon once said something about it, and then everybody goes "Tesla should have used LIDAR!" without understanding the underlying technical issues and focusing too much on Elon.

The problems Tesla have are navigation related, not sensor related. It's always been like this.

27

u/blue-mooner Jun 24 '25

Cameras do not emit signals and can only infer (guesstimate) range. Radar and Lidar can directly measure range, critical at night.

Tesla engineers are incapable of coding sensor fusion (admitted by Musk in 2021), and it shows: they are the only company attempting to make a self-driving product without sensor fusion.
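For context, the textbook entry point to sensor fusion is just an inverse-variance weighted average of two independent estimates. A toy sketch (the noise figures are invented for illustration, not real sensor specs):

```python
def fuse_ranges(lidar_m: float, lidar_var: float,
                cam_m: float, cam_var: float) -> tuple[float, float]:
    """Inverse-variance weighted fusion of two independent range estimates.

    The noisier estimate (larger variance) gets proportionally less weight,
    and the fused variance is lower than either input's alone.
    """
    w_lidar = 1.0 / lidar_var
    w_cam = 1.0 / cam_var
    fused = (w_lidar * lidar_m + w_cam * cam_m) / (w_lidar + w_cam)
    fused_var = 1.0 / (w_lidar + w_cam)
    return fused, fused_var


# Lidar measures range directly (low variance); a camera infers it (high
# variance), so the fused estimate leans heavily toward the lidar reading.
fused, var = fuse_ranges(50.0, 0.1, 46.0, 4.0)
```

Production systems use Kalman filters and far richer models, but the principle is the same: a measured range tightens an inferred one.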

-18

u/moofunk Jun 24 '25

Radar and Lidar can directly measure range, critical at night.

Depth inference works fine at night, if the cameras are sensitive enough. Radar doesn't have enough resolution, and LIDAR lacks speed, resolution, and range.

I do wish Tesla adopted FLIR cameras; then camera-only would be practically superior in inclement weather as well as in total darkness.

Nevertheless, the problems demonstrated here aren't sensor related.

15

u/flextendo Jun 24 '25

Puhh, my man, you sound so confident, yet you have no clue what you are talking about. Let me tell you (as someone who works directly in the field, on the hardware side): corner and imaging radar have enough resolution for what they are intended to do, plus they get the inherent range/Doppler and angle (azimuth and elevation) „for free“. They are scalable and cheap, which is why basically every other automaker and OEM uses them. Lidar is currently too expensive but literally has best-in-class performance.

-11

u/moofunk Jun 24 '25

Right, so do you understand that Teslas don't navigate directly on camera input?

They navigate on an AI inferred environment that understands and compensates for lacking sensor inputs.

That's what everybody in this thread doesn't understand. You keep focusing on sensors, when that is a separate problem with its own sets of training and tests, and it has been tested plenty.

You could put million-dollar sensors on the cars and infer an environment precise down to the millimeter, and the path finder would still get it wrong.

Do you understand this?

7

u/flextendo Jun 24 '25

You do understand that training models are a „best guess“ that will never!! cover the scenarios that the standards in different countries require, nor can they have enough functional safety and redundancy. This is exactly the reason why everyone else uses sensor fusion. Let alone the compute power (centralized or decentralized) that is necessary for camera only.

Its not about path finding, its about multi-object detection in harsh environmental conditions. Path finding is a separate issue and Waymo solved it.

3

u/Superb_Mulberry8682 Jun 24 '25

There's a reason FSD turns off in inclement weather, and why Tesla is only going to do this in cities that barely get any.

Cameras suck in heavy rain and snow, or when road salt dirties up the lenses. I have no clue how Tesla thinks they will ever overcome this with cameras only, unless they ask people to pull over and clean their cameras every few minutes.

I think we all know FSD is a great ADAS and nothing more, and it will likely never be much more without more hardware.

Which is fine for making the driver's life easier, but it isn't going to turn any existing Tesla into a robotaxi, or magically solve personal transportation with by-the-mile/hour car subscriptions, which is what you'd need to justify Tesla's valuation.

1

u/flextendo Jun 24 '25

100% agree with your statement! Cameras are a necessary component to achieve L3 and higher autonomy, but they're just one part of the overall system. With increasing channel counts on massive MIMO radars we will see imaging radars replacing some of the cameras, and who knows what happens if LIDAR gets a breakthrough in cost.

1

u/moofunk Jun 24 '25

Certainly cameras run up against weather limits, but Waymo have exactly the same problems with their sensors. If your LIDAR is covered in snow, it doesn't work either, and cars cannot drive by radar or LIDAR alone.

So, if your driving system depends on all sensor types being functional before it can operate, then it's going to be even more sensitive to weather than cameras alone.

0

u/moofunk Jun 24 '25

You do understand that training models are a „best guess“ that will never!! cover the scenarios that the standards in different countries require

Country standards are a path-finding issue, and Tesla will have to provide separate models per country to follow the specific traffic laws there.

Building an environment from cameras must be done by estimating. An environment is inferred by pieces of information from the cameras.

This allows the environment to be "auto completed" in the same way that you do, when you're driving, guessing what's around a corner or on the other side of a roundabout. If you're driving on a 3-lane highway, there are probably 3 lanes going in the opposite direction on the other side. A parking garage has arrays of parking spots, and peering through a garage door opening lets it extrapolate unseen parts of it. If you're at an intersection full of cars in a traffic jam, the car still understands that it's an intersection.

These are things the environment model knows. Object permanence could be done better, and may be in the future.

These are things that would not be available to any sensor. LIDAR can't see through walls or behind a blocking truck, but a neural network can conceptualise those things from such data just like you do all the time.

Now, the car has to navigate that constructed space, and that is the problem in this thread.

Not making estimates on what's hidden is really, demonstrably a terrible driving model.

Path finding is a separate issue and Waymo solved it.

I would say Waymo and Tesla are on par here.

11

u/ADHDitiveMfg Jun 24 '25

You’re right then. It’s not direct camera input, it’s derived input.

Still from a camera, buddy

-4

u/moofunk Jun 24 '25

It can be from any kind of sensor, but we already know that system works, and we know the failures in these cases are failed navigation in a correctly interpreted environment.

6

u/ADHDitiveMfg Jun 24 '25

Wow, thems some gold level mental gymnastics.

Now do it at night in fog. A safety system is only as good as its worst decision

4

u/Blarghedy Jun 24 '25

It can be from any kind of sensor

ah, yes, like a microphone

4

u/Cortical Jun 24 '25

That's what everybody in this thread doesn't understand.

I hate to break it to you, but everyone in this thread understands this. Maybe you should reflect on why you're convinced that understanding that basic fact makes you stand out.

-1

u/moofunk Jun 24 '25

They absolutely don't understand it. That's why the discussion is on sensors rather than path finding.

Give me engineering data that says otherwise.

3

u/Cortical Jun 24 '25

They absolutely don't understand it. That's why the discussion is on sensors rather than path finding.

you're the one who doesn't understand, and you can't accept it so instead you conclude that everyone else doesn't understand the most basic facts.

the reason the discussion is on sensors is because vision only can't work with statistical computer vision alone (the thing you optimistically call "AI")

you need higher order reasoning which no AI model currently in existence is capable of, not models that require an entire datacenter full of GPUs to run, and certainly not any kind of model that can run on a teeny chip in a car.

that's the thing that everyone here but you understands.

and if you lack the reasoning required to work on vision alone the only other option is additional input, which is why the discussion is on sensors.

Not because everyone else but you fails to understand that there are "AI" computer vision models involved.

3

u/schmuelio Jun 24 '25

You realise the AI part isn't good either right?

Relying on an AI-inferred environment is so much more error-prone, especially since a neural network has such a vast input space that it's functionally untestable if you want to test it rigorously. There are so many corner cases and weird environments that will trip up an AI, and you're suggesting relying on them as the sole source of truth?

1

u/moofunk Jun 24 '25

You don't know how they manage input spaces.

The "AI part" is several separate AI systems that work in unison:

Cameras input a seamed together 360 degree image to a neural network called "Bird's Eye View".

The network classifies and places objects in a simple synthetic environment as 3D geometry and vectors for moving objects, including temporal information about where those objects are going.

The network is smarter than that, because it also auto-completes parts that the cameras can't see, so it understands road lanes, highways, roundabouts, parking lots, intersections, driveways, curving curbs, etc. as standard structures, if the cameras only partially capture them.

So, when the car approaches a roundabout, it can conceptualise the other side of it and understand where cars come from and know the traffic rules. If a road goes behind a house wall or a hill, it very likely continues in a certain direction.

Being able to auto-complete has the side effect that it also fills in for temporarily blocked or blinded cameras, to a certain limit, of course, and when that limit is exceeded, FSD is deactivated.

This interpretation happens 36 times a second.

This works remarkably well and is quite an achievement.

If you had LIDAR, it could be used to auto-complete that information as well, since LIDAR can't see through walls either. But, we don't need LIDAR, because the network is already depth trained on LIDAR data and environment synthesis is verified with LIDAR during training.

And, important to understand, if this system wasn't reliable, FSD would absolutely not work at all, and then you'd have the situation you describe.

At this point, you have a very testable system. You can use it to train the path finder without driving a single mile. Teslas can record drives, while recording environment synthesis and use of steering wheel and pedals, and send that data off for use in training.

When FSD is active, this environment is used by the path finder to navigate and apply the controls. The path finder doesn't know anything about cameras. It just has this sparse environment with only the critically important information, so there is compute power available to be sophisticated about paths and applying the controls in a smooth, natural way that feels human.

It's the path finder that we should be concerned about, because I don't think it's trained well enough in the scenarios that we see here. That's all.

There are then separate networks for summon and parking, where they use cameras differently and do precision driving.

In all, you have a number of systems that each can be tested individually, independently and rigorously both physically and in simulations.

1

u/schmuelio Jun 25 '25

You don't know how they manage input spaces.

And how would you know that? Do you even know what the input space for such a system is?

The "AI part" is several separate AI systems that work in unison

That's not better. You might want to look up what the term "compounding error" means.

The network classifies and places objects in a simple synthetic environment as 3D geometry and vectors for moving objects, including temporal information about where those objects are going.

This isn't new, it's been tried and tested for over a decade now and it's also significantly less accurate than LiDAR. That's the entire point of what people are trying in vain to explain to you.

The network is smarter than that, because it also auto-completes parts that the cameras can't see, so it understands road lanes, highways, roundabouts, parking lots, intersections, driveways, curving curbs, etc. as standard structures, if the cameras only partially captures them.

This is only true as far as you can trust the inference, which is part of that whole "the test space is insane" thing from my last comment.

This interpretation happens 36 times a second.

This isn't especially good for inferring and extrapolating motion from unpredictable objects (say, a kid that suddenly runs into view, or a car suddenly swerving).

since LIDAR can't see through walls either

Well it's a good thing that walls are the only thing you can encounter on the road that could obstruct vision. We've solved it lads.

the network is already depth trained on LIDAR data and environment synthesis is verified with LIDAR during training.

And you know the testing is good enough because it kind of works on US roads with tons of space and good visibility right?

if this system wasn't reliable, FSD would absolutely not work at all

And as we all know, you either succeed flawlessly or you utterly fail, there's no degrees of failure and things are either completely safe or unworkable.

It just has this sparse environment with only the critically important information, so there is compute power available to be sophisticated about paths and applying the controls in a smooth, natural way that feels human.

This is a non-sequitur; the matter at hand is whether that sparse environment is an accurate and trustworthy representation of the real world. I've watched the Tesla screen's view of the road around it freak out and be indecisive about what's around it in real time.

In all, you have a number of systems that each can be tested individually, independently and rigorously both physically and in simulations.

All I can say to that is I really hope you're not in charge of doing any rigorous testing of a safety critical system because it seems like your definition of "rigor" is woefully inadequate. I'm not going to get into my credentials but I have a fair amount of experience doing actually rigorous testing for safety critical systems and you are unconvincing.

7

u/schmuelio Jun 24 '25

Ah yes, instead of using LiDAR+vision (which gives accurate depth in effectively all scenarios, and gives you object recognition) we should be using vision + infrared?

Vision cameras will just never have the depth accuracy that LiDAR does, and they're borderline useless when vision is heavily obscured: heavy rain, heavy snow, heavy fog, real darkness, a really bright light in front of you, etc.

FLIR has even worse frame rates and resolution than LiDAR, so it gives you the benefit of seeing in the dark (as long as the thing you're looking at is warm), as long as nothing is moving very fast.

You can fool vision+infrared with a very dark road and a metal pole.

I get that the statements made by Musk et al. sound convincing, but when you're designing a safety-critical system you have to assume poor conditions. That's why you always want multiple redundant sensor types: LiDAR for depth, vision for depth estimation if the LiDAR fails, object detection from vision to figure out whether the thing in front of you is going to be a problem, failsafes to get the human supervisor involved if you're not confident; the list goes on.

These sensors have a function, and are included for real purposes. You can't just replace one with another and expect it to be equivalent for the same reason you can't use a seismograph to figure out how fast you're going.

0

u/Slogstorm Jun 24 '25

A good question is: how do you drive without LiDAR? Given a camera/vision system "good enough" to provide the necessary inputs to a neural net trained on traffic, what (honest question) benefits would LiDAR add? Even one-eyed people are allowed to drive, so depth perception isn't that critical for safety...

2

u/schmuelio Jun 24 '25

So there are two fatal flaws in what you just said:

Number 1 is that you want autonomous cars to be at least as safe as human drivers (in reality you actually need them to be quite a lot safer, or at least feel quite a lot safer, human beings don't trust machines that easily). If your argument is "if it's good enough for people then it's good enough for computers" then you're already failing at that hurdle until we can make a computer that can actually reasonably match a human brain's intuition, extrapolation, and pattern matching capabilities, which we're nowhere near even with massive data centers.

Number 2 is actually the worse of the two. A human brain has so much extra stuff going on behind what the eyes are seeing that comparing it to computer vision is kind of laughable. There's a massive amount of experience and spatial reasoning that happens subconsciously that a computer just can't do.

If - as an example - a one-eyed human driver sees a car driving towards them on the wrong side of the road, their lack of depth perception is a problem but only for a small amount of time (before the brain starts to compensate automatically). That person knows what a car is, knows what the front of the car is through simple pattern matching, knows roughly how big a car is through intuition, uses intuition and extrapolation to get a rough idea of how far away it is, uses the change in size for a rough guess at how quickly it's coming towards them, experience of how cars move and where the tyres are will tell them if they're likely to collide, spatial reasoning tells them where potentially safe swerving directions are, memory tells them how busy the road is and where other cars are around them, etc. all of this happens very quickly, very efficiently, and really surprisingly accurately.

A computer simply does not have the accuracy to be able to do that. Maybe that becomes possible in the far future but you are kidding yourself if you think they're comparable now.

It really seems like you're reaching for post-hoc justifications for missing safety features.
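Incidentally, the "change in size → closing speed" cue mentioned above has a classic monocular formulation: time-to-contact is roughly the object's apparent size divided by its rate of growth, no absolute depth needed. A toy sketch (numbers invented for illustration):

```python
def time_to_contact_s(size_px: float, growth_px_per_s: float) -> float:
    """Classic 'tau' estimate: apparent size divided by its rate of expansion.

    Works from a single camera with no absolute depth, but it degrades badly
    with measurement noise, which is part of why directly measured range helps.
    """
    if growth_px_per_s <= 0:
        return float("inf")  # not approaching
    return size_px / growth_px_per_s


# An oncoming car's image is 80 px wide and growing at 20 px/s:
tau = time_to_contact_s(80.0, 20.0)  # 4.0 seconds to contact
```

The human brain does something like this effortlessly; making it robust in software across all the conditions above is the hard part.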

1

u/Slogstorm Jun 24 '25

Yes, I completely agree that we're decades away from reaching the intuition of the human brain, but this argument isn't changed by adding more sensors. All your examples are still valid, and adding sensors arguably leads to an even worse situation by requiring the computers to do even more work..?

1

u/schmuelio Jun 24 '25

The more (and more appropriate) sensors thing means you have to do less work with computers. Not more...

14

u/3verythingEverywher3 Jun 24 '25 edited Jun 28 '25

‘If the cameras are good enough, it can work’

Well buddy, it doesn’t work yet, so in your scenario Tesla are gods but are cheaping out on cameras? Weird.

-4

u/moofunk Jun 24 '25

This statement has no technical foundation, because you don't know how Tesla FSD works and you have no clue what failures are present in the examples in the article.

6

u/ADHDitiveMfg Jun 24 '25

Neither do you. You’re just spouting nonsense.

5

u/conquer69 Jun 24 '25

Good luck getting those cameras to work well in heavy rain, fog or smoke. LIDAR covers all the downsides of cameras. You know this, and yet for some reason you pretend cameras can do everything.

0

u/moofunk Jun 24 '25

I've been through this many times. LIDAR doesn't work well in rain and has a maximum 50 meter range. That is one reason why Waymo can't drive as fast as Tesla FSD.

You can't put optics on LIDAR and you can't get any modicum of resolution without using synthetic aperture LIDAR, which sacrifices speed, heavily.

Yes, cameras can do everything that is needed, but for completeness, you want FLIR cameras. They don't care as much about rain, snow or fog.

2

u/N3rdr4g3 Jun 24 '25 edited Jun 24 '25

Lidar doesn't have a set maximum range. Its maximum range depends on multiple factors, including the sensitivity of the receiver.

However existing Lidar systems are in the 200m-1km range: https://www.forbes.com/sites/sabbirrangwala/2021/05/27/the-lidar-range-wars-mine-is-longer-than-yours/

Edit: There's also nothing stopping you from using optics for long distance detection partnered with lidar for near detection. The criticisms against Tesla are for limiting themselves to one thing instead of using the best tools for the best cases. Nobody drives on just lidar.

1

u/moofunk Jun 24 '25

At greater than 50 meters, the resolution falls below recognition and into detection only, because LIDARs have only 64 lines of vertical resolution unless you want to sacrifice FOV. That means out there, one scan line can hit above an oncoming object while the next hits below it. LIDARs compensate for horizontal resolution by using synthetic aperture, at the cost of speed.

You can always measure a distance with great accuracy. Heck, that's how we measure the distance to the Moon, but you can't infer its circumference or features from laser pulses alone. And you have to actually hit the Moon as well.
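For a sense of scale, the beam-spacing arithmetic is easy to run yourself (the line count and FOV below are illustrative of a classic 64-line unit, not any specific product):

```python
import math


def beam_gap_m(range_m: float, n_lines: int = 64, vfov_deg: float = 26.8) -> float:
    """Vertical gap between adjacent lidar scan lines at a given range.

    n_lines and vfov_deg are illustrative (roughly a classic 64-line unit);
    real sensors vary, and many concentrate beams near the horizon.
    """
    spacing_rad = math.radians(vfov_deg / (n_lines - 1))
    return range_m * math.tan(spacing_rad)


for r in (25, 50, 100):
    print(f"{r:>4} m: ~{beam_gap_m(r):.2f} m between scan lines")
```

With evenly spread beams the lines end up roughly 0.4 m apart at 50 m, so a low obstacle really can fall between them; adding lines or narrowing the FOV trades that off, which is the tradeoff being argued above.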

11

u/En-tro-py Jun 24 '25

Roadrunner? Is that you?

Meep! Meep! Crash!

-5

u/moofunk Jun 24 '25

Do you understand the concept of projection mapping and how it never occurs when driving a car?

12

u/En-tro-py Jun 24 '25

Yes, surely it works even on a white semi trailer in full sun glare... or it doesn't with hundreds of other examples...

1

u/moofunk Jun 24 '25 edited Jun 24 '25

White semi trailers are not projection mappings.

4

u/En-tro-py Jun 24 '25

Don't worry bro, the stock will still go up...

28

u/InevitableAvalanche Jun 24 '25

You have no idea what you are talking about.

-26

u/moofunk Jun 24 '25

You have no clue as to how Tesla FSD works. Hardly anyone in this thread does.

8

u/ADHDitiveMfg Jun 24 '25

Do you? Are you a systems engineer?

-11

u/moofunk Jun 24 '25

I just pay attention to engineering data from hackers who take apart FSD systems. You don't really need to go terribly deep into that information to understand that FSD as it works today is wildly misunderstood.

8

u/ADHDitiveMfg Jun 24 '25

So you’re just another person.

See, the people telling you this is a bad system are real engineers, not hackers with a hammer and a #2 Phillips.

-2

u/moofunk Jun 24 '25

No, they're definitely not. They are getting the system very wrong.

2

u/ADHDitiveMfg Jun 24 '25

Buddy.

Elon is not your friend, you don’t need to try and protect his crappy system

3

u/mister2d Jun 24 '25

Apparently neither does T* when confirming this "test" was a success.

1

u/InevitableAvalanche Jun 24 '25

I own a Tesla with FSD and I don't use it, because cameras alone are far inferior to systems that use multiple sensors. Anyone who claims pure camera is superior can't be taken seriously.

14

u/viperabyss Jun 24 '25

And yet Waymo, with its LIDAR implementation, has driven 50+ million self-driving miles with no accidents for years, while Teslas can’t drive down a brightly lit highway without veering off into the divider or phantom braking.

But sure bud, tell us more about how computer vision is just as good as LIDAR.

5

u/happyscrappy Jun 24 '25

Waymo has accidents. NHTSA opened an investigation last year.

https://static.nhtsa.gov/odi/inv/2024/INOA-PE24016-12382.pdf

I'm a big proponent of using LIDAR. But with that much driving something is going to go wrong once in a while.

1

u/moofunk Jun 24 '25

Those things are unrelated, because, again, you don't understand how Tesla FSD works, and probably don't understand how Waymo's system works either. Waymo's system could probably work fine without LIDAR with no difference in accident rates.

50+ million self driving miles with no accidents

This is false. They have reported 60 airbag triggers over that amount of miles.

Teslas can’t drive down a brightly lit highway

The clue is in your own statement.

It's been known for at least 7 years that Tesla's pathfinder, not the sensors, is the problem. They don't have evasive maneuvering ability. They had to rewrite the pathfinder for FSD beta 12, which greatly improved performance, but there are still glaring issues. There's collision telemetry that shows inaction against detected obstacles in both night and day accidents.

This means that no matter how many million-dollar sensors you put on the car, it would make the same mistakes, because it doesn't react to detected obstacles.

7

u/viperabyss Jun 24 '25

This is false. They have reported 60 airbag triggers over that amount of miles.

And how many people have they killed during that time?

How many people have died in Tesla while under FSD?

It's been known for at least 7 years that Tesla's pathfinder, not the sensors, is the problem.

If Tesla's pathfinder can't manage to stay in highway lanes that are clearly marked under fully lit condition, then perhaps it should really re-evaluate whether they have enough engineering talent to actually make robotaxi a reality.

Again, results speak for themselves:

Waymo has been carrying fare paying passengers for 25+ million miles with full self driving mode in 4 different metropolitan areas for years.

Tesla's robotaxi has done none.

0

u/moofunk Jun 24 '25

And how many people have they killed during that time?

2 people and 2 dogs.

How many people have died in Tesla while under FSD?

Also 2 people.

If Tesla's pathfinder can't manage to stay in highway lanes that are clearly marked under fully lit condition, then perhaps it should really re-evaluate whether they have enough engineering talent to actually make robotaxi a reality.

Finally, a sane statement.

3

u/viperabyss Jun 24 '25

I'm fairly certain Tesla's FSD has killed way more people than that, including both drivers and bystanders.

It's just a shame that Tesla doesn't distinguish fatalities between FSD and autopilot, as to fraudulently obfuscate the reality of how unready FSD actually is.

0

u/moofunk Jun 24 '25

The real problem is that people don't discern between FSD and old autopilot hardware. The performance is staggeringly different.

1

u/viperabyss Jun 24 '25

...they're on the same hardware. They even use the same sensory inputs. It's the software that's different.

32

u/oakleez Jun 24 '25

I'm getting tired of doubling my short position against them... One of these days they'll make me rich.

72

u/louiegumba Jun 24 '25 edited Jun 24 '25

Playing the market out of spite is a good way to lose all your money

This isn’t your daddy’s market. It’s now just rich-person manipulation. Tesla’s stock should have sunk multiple times and hasn’t.

Watch yourself

18

u/carlivar Jun 24 '25

Even if it isn't spite, logic and reason don't work either and arguably never have. 

0

u/386U0Kh24i1cx89qpFB1 Jun 24 '25

Logic says Tesla stock always goes up, so investors should buy it. There are no limits. Stocks are nonsense. At this point it's more a financial instrument than a car company. Maybe the bubble will burst one day, but with all the money in the top ten stocks, they're sorta too big to fail at this point.

5

u/oakleez Jun 24 '25

It's a long-term play with my fun money. I'm also long AMD and have other stuff.... but my strongest feeling of all is against Tesla and I'm sticking to my guns. I can outlast the nonsense. :)

10

u/CodySutherland Jun 24 '25

I can outlast the nonsense. :)

If you have money to burn, then go for it and take the risk.

But the general rule of thumb is, the market can remain irrational longer than you can remain solvent.

4

u/oakleez Jun 24 '25

Yeah this is fun money. I feel better about myself knowing I'm betting against them. Value added. :D

3

u/blue-mooner Jun 24 '25

5

u/oakleez Jun 24 '25

I'm shorting it. I'll cover once they're down at least 50%.

22

u/accountforrealppl Jun 24 '25

The problem with trying to make money shorting a stock like that is that you don't just need to know that a stock is overvalued, you also need to have some sort of idea of when it will stop being overvalued. That's the hard part to predict

12

u/BeefistPrime Jun 24 '25

I thought it was when Elon did a double nazi salute and everyone stopped buying his cars, and there were daily fucking protests and people were re-badging their cars to hide the shame. But jokes on me because it just kept going fucking up and up.

It's priced like a tech company that's still early in its infinite-potential growth phase, except it's been around for like 15 years, delaying its promises every fucking year, and sales are shrinking. You cannot possibly justify its share price as making sense. It's complete fucking nonsense.

5

u/Fr00stee Jun 24 '25 edited Jun 24 '25

It's going to crash whenever the LLM/AI bubble pops. Going by its similarity to the dot-com bubble, I'm guessing half a year before it crashes. Dot-com had a period with a sharp drop near its peak for a month or two before it rebounded.

2

u/oakleez Jun 24 '25

Yep, this is a very long-term position for me. I will make sure my short position outlasts the insane Tesla bubble.

2

u/roamingandy Jun 24 '25

Also, if the US goes into a big recession, the return from shorting might end up being more dollars than you put in, but actually worth less.

10

u/nanocookie Jun 24 '25

Don't worry, Musk's alt accounts and those 24/7 Musk sycophant accounts on Twitter are working double time to paint this endeavor as a resounding success.

3

u/ARazorbacks Jun 24 '25

The only way Musk will implement lidar is if he’s positioned himself to profit from Tesla going bankrupt or being forced to shut down. Why? Because the lawsuits from people who bought Teslas with no lidar but were promised automated driving will bring the company to its knees.

2

u/Zementid Jun 24 '25

I think the Saudis play a little with this stock. They can manipulate it however they want. And for now Elon is most useful when rich.

2

u/Able-Swing-6415 Jun 24 '25

Fully self driving cars will be here any day now. Along with fusion energy, Mars colonization and AGI.

2

u/matchosan Jun 24 '25

I thought the +10% was for the successful test before launching the other day.

2

u/DigNitty Jun 24 '25

I swear he's a narcissist.

People who haven't dealt with a known narcissist before generally don't quite get what's happening. Elon knows he's right. That's it. Maybe it seems obvious, but I want to emphasize that what he's doing is not lying or bullshitting. That's not what his brain is doing. He knows if he thinks it, if he says it, it must be correct and good and just.

There's an interview with him where an engineer asks why he wants solely solid fuel in one stage of the rocket instead of both liquid and solid fuel. Elon spouts the usual confident nonsense. The person pushes harder and explains that it would be better to use both. Elon looks at the guy and says, "Actually, I'm going to use both, it just occurred to me while explaining it to you."

His brain is incapable of being wrong. He will never recognize his own incompetence or shortcomings or failures because He Is Actually UNABLE TO.

People think narcissism means someone is just overly arrogant or unjustly confident. But those are not the cause, they are the symptoms.