xbmcuser 2 days ago

The point where I soured on Musk was when he ditched radar/lidar and tried to go with cameras alone. That made me realize he is not the genius he is made out to be but rather a fraud/charlatan, and over the years his statements on different topics have only hardened that belief. Why the fuck would you want AI cars to be merely as good as humans? You should want them to be many times better, and radar/lidar/sonar-type tech makes them better.

  • toddmorey 2 days ago

    This was purely an effort to improve margins on the cars, which they then tried to sell with other kinds of rationale. From the way I've seen him operate his companies, treat his employees, and now work with the government, he has a high tolerance for risk paired with a very low tolerance for perceived inefficiencies, and too little patience to fully understand the problem.

    He really embodies the ethos of "move fast and break things". So let's fire 80% of the staff, see what falls down, and rehire where we made "mistakes". I really think he has an alarmingly high threshold for the number of lives we can lose if it accelerates the pace of progress.

    • kelipso 2 days ago

      It's pretty funny because lidar used to cost many thousands of dollars and is now down to hundreds, and they're still sticking to regular cameras alone. Funny in the sense that Teslas are many times more dangerous than lidar-enabled cars, but anyway.

      • labrador 2 days ago

        Yes, it's become clear to me that it's only a matter of time before Tesla adds LiDAR and says that was the plan all along. Meanwhile, what better way to test vision-only than by releasing it to the beta-testing public? Elon Musk knows people will die, but in his ketamine-addled brain he's convinced himself that a vision-only self-driving Tesla is safer than a human driver, so it's OK. People were going to die anyway, statistically speaking.

        • fragmede 2 days ago

          it's okay. the media will be along to tell us how unsafe ketamine is because they don't like Elon.

      • 4d4m 2 days ago

        How much is a solid state lidar now?

        • kelipso a day ago

          Hesai, a Chinese company, sells lidar for cars; they plan to sell a unit at $200 later this year, and their current one is around $400. You would need four for a car. I assume the ones for Waymo would be a bit more expensive.

    • mindslight 2 days ago

      The sour irony is that "move fast and break things" was formulated in the low-stakes world of web entertainment software, which was able to become so prominent precisely because of the stability of having our more pressing needs predictably taken care of (for the most part).

    • Ygg2 2 days ago

      To quote Black Adder: Some of you may die, but some of us will live!

      • throw0101d 2 days ago

        "Some of you may die, but it's a sacrifice I am willing to make." — Lord Farquaad, Shrek, https://www.imdb.com/title/tt0126029/characters/nm0001475?it...

        • smallmancontrov 2 days ago

          The irony is almost palpable, invoking this quote against a system that is driving 600 million miles a quarter without the 8 deaths a human would cause over the same distance.
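
          A rough sanity check on where a figure like "8 deaths" could come from, assuming the commonly cited US average of roughly 1.3 road deaths per 100 million vehicle miles (my assumption, not a number from this thread):

              # Hypothetical back-of-the-envelope check (assumed US-average fatality rate).
              miles_per_quarter = 600e6
              deaths_per_mile = 1.3 / 100e6                 # assumed ~1.3 deaths per 100M vehicle miles
              print(miles_per_quarter * deaths_per_mile)    # ~7.8, i.e. roughly 8 per quarter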

          • toddmorey 2 days ago

            The argument here isn't against self-driving. Flying is also 100x safer than driving, but there's still real value in continuing to use the best technology we have available to make flying even safer.

            "Not a single other automobile manufacturer or ADAS self-driving technology provider reported a single motorcycle fatality in the same time frame."

            • smallmancontrov 2 days ago

              Ok, what other ADAS is driving over 2 billion miles a year?

              As for "best technology available," the galaxy brains in this thread are tossing out numbers assuming that sensor fusion never fails and that correlated failure modes can be neglected, which is wild. I never thought Elon was a genius -- he's a business guy willing to make big bets on interesting tech, nothing more nothing less -- but if the confidently incorrect engineering claims on display in this thread are any indication, maybe you guys should be calling him a genius after all, because this ain't it.

            • josephcsible 2 days ago

              > there's still real value in continuing to use the best technology we have available to make flying even safer.

              That's true, but it would obviously be completely unreasonable to stop all flying in the meantime.

              • TheBicPen 2 days ago

                Grounding entire fleets of aircraft when a safety issue is discovered is completely normal, and that's the way things should happen. Remember when the MCAS issue was discovered on Boeing 737 MAXes after 2 fatal crashes? They were all grounded for a while, and rightfully so.

                • dontTREATonme 11 hours ago

                  Two fatal crashes that resulted in 300+ fatalities iirc. Human driven cars have many more fatalities than self driving cars and no one is talking about stopping all road travel until we can fix it. Your argument is disingenuous at best if not deliberately deceptive.

          • daveguy 2 days ago

            [citation needed]

            Tesla releases all self driving statistics including time since disengagement of self-driving before accident, right?

            • smallmancontrov 2 days ago

              https://www.nhtsa.gov/laws-regulations/standing-general-orde...

              The numbers include all incidents where self-driving was active within 30 seconds of the crash, because of course they do. The meme that Tesla is allowed to simply bypass reporting on a technicality would be absurd if it weren't so voraciously spread by post-truth luddites. Look, if you want to dunk on Elon, I get it, but he does so many real shitty things that I would ask you to focus on those and not make shit up.

              • Veserv 2 days ago

                That number is incidents successfully detected, not ground truth. Last I looked, the majority of recorded fatalities went undetected by Tesla telematics. That demonstrates their accident detection/reporting is empirically incapable of producing a robust ground truth estimate. The number presented is a lower bound with no identifiable upper bound.

                An upper bound could easily be produced by Tesla. All vehicle fatalities identified by police are recorded in the national Fatality Analysis Reporting System (FARS) [1]. Tesla could investigate every recorded Tesla fatality by VIN and cross-reference it with their telemetry to determine whether their systems were available at all, or active at or near the time of the fatal crash. This would cost them between a few thousand dollars and a few million dollars annually, depending on desired precision, which compared to the billions the company makes annually is a minuscule sum to affirmatively establish safety.
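
                As a purely hypothetical sketch of that cross-check (the file names, column names, and join keys below are all assumptions, not the real FARS or Tesla schemas; FARS does publish vehicle-level records that include VINs):

                    import pandas as pd

                    # Assumed exports: a FARS vehicle-level file and an internal telemetry file.
                    fars = pd.read_csv("fars_vehicle.csv", parse_dates=["crash_time"])
                    tesla_fatal = fars[fars["make_name"] == "TESLA"]

                    telemetry = pd.read_csv("telemetry.csv", parse_dates=["last_ads_active"])

                    joined = tesla_fatal.merge(telemetry, on="vin", how="left")
                    joined["ads_near_crash"] = (
                        (joined["crash_time"] - joined["last_ads_active"]).abs() <= pd.Timedelta("30s")
                    )

                    # Fatal crashes with no telemetry match at all are exactly the undercount described above.
                    print(joined["ads_near_crash"].sum(), joined["last_ads_active"].isna().sum())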

                They intentionally choose not to do so. Instead, they deliberately deceive consumers and the public by conflating the lower bound with an upper bound in all official safety messaging.

                Even setting aside their gross incompetence or malice in not doing simple safety analysis when numerous lives are on the line, the mere fact that they conflate a lower bound with an upper bound in their messaging is scientific malfeasance of the highest order. Their reporting has zero credibility when they engage in such clear and intentional misreporting purely for their own benefit, to the detriment of the public and even their own customers.

                [1] https://www.nhtsa.gov/research-data/fatality-analysis-report...

              • bumby 2 days ago

                If Tesla is Level 2, that link doesn’t support the OP's point about “all” incidents, but rather only those involving a fatality, hospital treatment, airbag deployment, or a towed vehicle. Am I reading that link correctly?

                • smallmancontrov 2 days ago

                  The problem with "all" incidents is that reporting bias drowns the whole thing in noise. I clipped a curb the other year and didn't report it to anyone -- am I messing up the statistics? In contrast, hospitalizations and bodies are concrete enough to all but eliminate reporting bias.

                  Of course, at first this just substitutes one type of problem for another because even shitty drivers have to drive many lifetimes of distance before killing anyone so you wind up with the unfortunately named "shot noise" from small-N. But Tesla FSD drives so many miles that the statistics are no longer small-N. If supervised FSD was worse than human, we should be seeing many more bodies than we do, so supervised FSD is clearly better than human.
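
                  A rough sketch of why the mileage matters statistically, using assumed numbers (an assumed human baseline rate and an assumed annual FSD mileage, not actual Tesla figures):

                      import math

                      # If supervised FSD were exactly as deadly as an assumed human baseline,
                      # how many deaths would we expect over an assumed year of FSD miles?
                      miles = 2e9                    # assumed annual supervised-FSD miles
                      rate = 1.3 / 100e6             # assumed human baseline, deaths per mile
                      mu = miles * rate              # ~26 expected deaths
                      spread = 2 * math.sqrt(mu)     # rough +/- 2-sigma Poisson band
                      print(mu, mu - spread, mu + spread)   # ~26, roughly 16 to 36

                  With expected counts in the dozens rather than single digits, an observed toll well below that band is statistically meaningful, which is the "no longer small-N" point above.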

                  Of course, that number needs to be adjusted for critical interventions before we can say anything about unsupervised FSD, and in an ideal universe I wish Tesla was forced to disclose those figures. Unfortunately, we live in a universe where luddites are firmly in control of this conversation and I cannot deny that a forced disclosure would be heavily abused in a way that costs lives. Still, regulators get to see the numbers before approving unsupervised rollouts, and as a compromise this makes neither myself nor the luddites happy but I suppose it will do.

                  • bumby 2 days ago

                    It almost implies a need for some sort of standardized testing as well. Self-reporting seems like a weak point that can be gamed.

                    To your point about luddites, that’s why I think it’s erroneous to use “as good as a human” as the metric. It will need to be much better than a human before those people reluctantly trust it enough to hand over control.

              • daveguy 2 days ago

                First, do not put words in my mouth. I did not claim he was "bypassing" federal reporting. Although he quite literally bypassed much stricter self-driving reporting in CA by claiming it was ADAS instead of ADS. So maybe know the facts before calling someone a Luddite.

                What I literally said was that Tesla doesn't release all of its self-driving statistics, like time to disengagement. They are noticeably more cagey about releasing that kind of info, claiming it is a "trade secret". Did you look at the data sets you pointed to before pointing to them? Full of Tesla crashes, and full of redactions.

                I don't need to dunk on Elon. He dunks on himself. Some people just worship him too hard to realize it.

          • Mawr 2 days ago

            FSD is not able to drive in the same conditions humans are, so you're comparing humans driving in challenging conditions to FSD driving in good weather on highways.

    • ben_w 2 days ago

      Yup, it's a product development process known as "Muntzing": https://en.wikipedia.org/wiki/Muntzing

      While it was absolutely vital to getting the costs of the original Tesla Roadster and SpaceX launches way down… it can only work when you are able to accept "no, stop" as an answer.

      Rockets explode when you get them wrong, you can't miss it.

      Cars crashing more than other models? That's statistics, which can be massaged.

      Government work? There's always someone complaining no matter what you do, very easy to convince yourself that all criticism is unimportant, no matter how bad it gets. (And it gets much worse than the worst we've actually seen from DOGE and Trump — I don't actually think they'll get to be as bad as the Irish Potato Famine, but that is an example of leaders refusing to accept what was going on).

      • tzs 2 days ago

        After reading the Wikipedia article on Muntzing it is worth also reading the article about Muntz himself [1]. He may have been the first person to sell TVs by diagonal screen size instead of screen width.

        [1] https://en.wikipedia.org/wiki/Madman_Muntz

  • energy123 2 days ago

    Multiple independent sensor streams are like having multiple pilots fly a plane instead of just one pilot. The chance of a fatal error (a false-negative identification of an object near the car) decreases from p to p^n. The size of that decrease is not intuitive: if p = 0.0001, it becomes a much smaller number with the introduction of a second pilot or second independent sensor stream (0.0001^2 = 0.00000001).

    Now the errors are not all independent, so it's not quite as good as that, but many classes of errors are independent (e.g. two pilots having a heart attack versus just one), so you can get pretty close to that p^n. Musk did not understand this. He's just not as smart as he's made out to be.
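
    A tiny sketch of that arithmetic, including a crude common-cause term to show how correlation eats into the benefit (the probabilities and the 10% shared-failure fraction are assumptions for illustration, not real sensor data):

        # Redundancy math for two sensors, independent vs. partially correlated.
        p = 1e-4                 # assumed chance one sensor misses an object

        p_independent = p ** 2   # both must miss: 1e-8

        shared = 0.1             # assumed fraction of misses from a common cause (e.g. same glare)
        p_correlated = shared * p + (1 - shared) * p ** 2   # the shared term dominates

        print(p_independent)     # 1e-08
        print(p_correlated)      # ~1e-05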

    • jfengel 2 days ago

      I initially misread your first sentence: having multiple pilots is not necessarily a good thing in itself, any more than having multiple cooks is.

      You have a main and a backup pilot, but either one must be 100% capable of doing it on their own. The backup is silently double checking, but their assignments are more about ensuring that the copilot doesn't just check out because they're human. If the copilot ever has to say "don't do that it's going to kill us all" it's a crisis.

      Lidar is a good backup, but the car must be able to work without it. You can't drive with just lidar; it's like driving by Braille. Lidar can't even read a stop light. If they cannot handle it with just the visuals, the car should not be allowed on the road.

      I concur that it is terrifying that he was allowed to go without the backup that stops it from killing people. Human co-drivers are not a good enough backup.

      But he's also not wrong that the visual system must be practically perfect -- if it's possible at all. Which it surely isn't yet.

      • Retric 2 days ago

        Cars unlike aircraft don’t need to move forward to maintain safety. If the lidar normally has very good uptime but happens to break, you need to safely come to a complete stop and that’s basically it.

        Trusting vision systems for 30 seconds or even 30 minutes over the lifetime of a car is very different than trusting them for 30,000 hours. So for edge cases sure, add a “this is an emergency drive to a hospital without LiDAR” mode but you don’t need to handle normal driving without them.

        • bee_rider 2 days ago

          There are cases where cars do need to move.

          * train tracks as mentioned by another comment

          * getting out of the way of emergency vehicles or following instructions from emergency workers

          I don’t think this detracts from the overall point—if the uptime is very good then the times when LiDAR doesn’t work and the car needs to move might practically never happen. Or the car might be able to move forward to a safe position in some limp-along mode.

          WRT “this is an emergency drive to a hospital without LiDAR,” I think that would be pretty bad to include. What exactly qualifies as an emergency (this will be abused by some users)? And anyway, in most cases it is better to have an ambulance for an emergency. Finally, an emergency on my part doesn’t entitle me to endanger society. Rather, the types of behavior that the car can perform without all the sensors should be kept track of. If the car can limp along without LiDAR, then that’s something it can do (with the caveat that some roads are not safe to drive far under the speed limit on).

          • Retric 2 days ago

            The same basic questions arise for things like a tire blowout.

            At minimum, cars need to handle being on a curve when LiDAR or vision etc. cuts off. Stopping on a freeway or railroad is a real risk even if people can get out, but now you’re doing risk mitigation calculations, not just trying to handle normal driving. “What if someone is asleep in a level 5 car?” is a question worth considering, but we’re a long way from self-driving without any form of human backup.

            The hospital thing is for the very real possibility of being outside of cellphone range, and represents the apex of gracefully shutting down the ride vs refusing to travel while in a less safe condition.

            • bumby a day ago

              We absolutely should expect safety-critical software to mitigate risk. The idea of “welp, the software is confused so just toss it back to a human” is a cop-out in risk mitigation. At the most generous, it constitutes an “administrative” control, which is the least preferred option, tantamount to giving directions in the user's manual. The general control hierarchy is: remove the hazard, engineering control, PPE, administrative control.

              • Retric a day ago

                I agree, but the thing about risk mitigation is that there are tradeoffs.

                A self-driving taxi with a tire pressure sensor error vs. total brake failure are wildly different situations, and each should get a different response. Further, designing for a possibly sleeping but valid human driver is very different from designing for a likely empty vehicle.

                • bumby a day ago

                  I don’t think anyone who works in risk mitigation would conflate those scenarios. Risk = probability x consequence. What you’re pointing to is the difference in consequence. It’s already acknowledged in mature risk mitigation programs. E.g., FMEAs list fault consequence, severity, likelihood, and detectability and don’t assume all faults are the same. The tradeoff is an engineering/business decision to assign appropriate mitigations to land in an acceptable risk posture.

                  If you aren’t mitigating it with the appropriate controls, you aren’t managing risk. My point is just passing the buck to the human is not an appropriate control in many critical scenarios.
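
                  For readers unfamiliar with the format, here is a toy FMEA-style row with the common severity x occurrence x detection risk priority number; the fault, ratings, and mitigation are made up for illustration, not taken from any real analysis:

                      # Toy FMEA-style entry (illustrative, hypothetical ratings on a 1-10 scale).
                      fault = {
                          "item": "forward lidar",
                          "failure_mode": "loss of signal",
                          "effect": "reduced obstacle detection range",
                          "severity": 9,     # assumed rating
                          "occurrence": 2,   # assumed rating
                          "detection": 3,    # assumed rating; lower means easier to detect
                          "mitigation": "fall back to cameras and perform a controlled stop",
                      }

                      rpn = fault["severity"] * fault["occurrence"] * fault["detection"]
                      print(rpn)  # 54 -- compared against an agreed threshold to decide whether more controls are needed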

                  • Retric a day ago

                    > What you’re pointing to is the difference in consequence.

                    No. A brake failure doesn’t guarantee a specific negative outcome; it dramatically raises the probability of various negative consequences.

                    My point is risk mitigation is about lowering risk but there may be no reasonably safe options. A car stopped on a freeway is still a high risk situation, but it beats traveling at 70 MPH without working cameras.

                    • bumby a day ago

                      I'm sorry, but I think you’re mixing things up with respect to traditional risk management. The probability is covered separately in risk management. A brake failure doesn't guarantee a particular consequence, but it does bound it, and general practice is to assign the worst consequence. To apply it to the 737 MAX scenario: MCAS can fault and not cause a plane to crash, but it should still be characterized as a "catastrophic" fault because it can cause loss of life in some scenarios. The probability is determined separately. Then post-mitigation consequence/likelihood/detectability are assessed. Take a look at how FMEAs are conducted if you still disagree; it’s fairly standardized in other safety-critical domains.

                      I agree there may be cases where there are no reasonably safe options. That means your engineered system (especially in a public-facing product) is not ready for production because you haven't met a reasonable risk threshold.

                      • Retric a day ago

                        FMEA/FMECA is only the first step in a system reliability study; it isn’t an analysis of mitigation strategies. Fault tree analysis (etc.) concerns itself with things like how long an aircraft can fly with an engine failure. Doing that involves an assessment of various risks.

                        > I agree there may be cases where there are no reasonably safe options. That means your engineered system (especially in a public-facing product) is not ready for production because you haven't met a reasonable risk threshold.

                        Individual failures should never result in such scenarios, but ditching in the ocean may be the best option after a major fuel leak, loss of all engine power, etc.

                        • bumby a day ago

                          FMEA is not a “reliability study”. Reliability studies are inputs to FMEAs. Where do you think the likelihood data comes from? If you’re doing an FMEA before a reliability study, your probabilities are just guesses.

                          I don’t think any FMEA is going to list “ditch into the ocean” as an acceptable mitigation. Ie it will never be a way to buy risk down to an acceptable level.

                          >it isn’t an analysis of mitigation strategies.

                          Take a look at NASA's FMEA guidebook. It clearly lists identifying mitigations as part of the FMEA process. You’ll see similar in other organizations, though possibly with different names (“control” instead of “mitigation”).

                          https://standards.nasa.gov/sites/default/files/standards/GSF...

                          • Retric a day ago

                            Semantics and recursion aside, that separation, where “ditch into the ocean” is not a listed mitigation in the FMEA but is eventually considered and added to manuals, is why I’m saying it’s incomplete.

                            100%, AI systems can be safer than human-in-the-loop systems by avoiding suicide by pilot etc., but conversely that means the AI must also deal with the extreme edge cases. It’s a subtle but critical nuance.

                            • bumby a day ago

                              Based on your answers, I'm guessing you haven't been involved in this process. (Not saying it as a bad thing, but just as a rationale for clarifying with further discussion).

                              An FMEA (like other safety-critical/quality products) goes through an approval process. So if "ditch into the ocean" is not on the FMEA, it means they should have other mitigations/controls that bought the risk down to an acceptable level. They can't/shouldn't just push forward with a risk that exceeds their acceptable tolerance. If implemented correctly, the FMEA is complete insomuch as it ensured each hazard was brought to an acceptable risk level. And certainly, a safety officer isn't going to say the system doesn't need further controls because they put "ditch into the ocean" in the manuals. If that's the rationale, it begs the question "Why wasn't the risk mitigated in the FMEA and hazard analysis?" Usually it's because they're trying to move fast due to cost/schedule pressure, not because they managed the risk. There are edge cases, but even something like a double bird strike can be considered an acceptable risk because the probability is so low. Not impossible, but low enough. That's what "ditch in the ocean" operations are for.

                              I agree that software systems can improve safety, but we shouldn't assume so without the relevant rigor, which includes formal risk mitigation. Software tends to elicit interfacing faults; the implication being that as the number of interfaces increases, the potential number of fault modes can increase geometrically. This means it is much harder to test/mitigate software faults, especially when implementing a black-box AI model. My hunch is that many of those trying to implement AI in safety-critical applications are not rigorously mitigating risk like more mature domains do. Because, you know, move fast and break things.

        • throw0101d 2 days ago

          > So for edge cases sure, add a “this is an emergency drive to a hospital without LiDAR” mode but you don’t need to handle normal driving without them.

          As a comparison for planes, there are now "this is an emergency, please land yourself" buttons for smaller aircraft:

          * https://www.garmin.com/en-US/blog/aviation/five-ways-garmin-...

          * https://www.garmin.com/en-US/legal/aluse/

          • Retric a day ago

            Interesting: “Autoland is designed to be used in emergency situations only and should not be used in nonemergency situations where the pilot is fully capable of landing the aircraft.” https://www.garmin.com/en-US/legal/aluse/

        • larvaetron 2 days ago

          > Cars unlike aircraft don’t need to move forward to maintain safety.

          They do if they're crossing railroad tracks.

          • Retric 2 days ago

            You can get out of a car on railroad tracks. However, “safely come to a complete stop” includes turns and freeways which are more serious concerns.

            That’s why I said 30 seconds / 30 minutes, not 3 seconds. The idea is to get somewhere safe and pull off the road, not just slam on the brakes and hope you’re not on a curve.

          • david38 2 days ago

            A narrow case easily accounted for

            • ben_w 2 days ago

              Enumerating all the narrow edge cases has proven surprisingly difficult.

              • triplesec 2 days ago

                but train tracks is not that kind of edge case, tbh, it's so vanishingly situational

                • ben_w 2 days ago

                  "Train tracks" are a subset of all the edge cases in the set "car can't just stop and stay stopped and that's safe".

          • maximilianthe1 2 days ago

            In the case of a self-driving malfunction, you still have a gas pedal under a human's foot.

      • travisjungroth 2 days ago

        > You have a main and a backup pilot, but either one must be 100% capable of doing it on their own. The backup is silently double checking, but their assignments are more about ensuring that the copilot doesn't just check out because they're human.

        This is not how flying works in a multi-crew environment. It’s a common misconception about the dynamic.

      Both pilots have active roles. Pilots also generally alternate who is manipulating the flight controls (“flying the airplane”) from flight to flight.

    • Havoc 2 days ago

      >Multiple independent sensor streams is like having multiple pilots fly a plane.

      Don't think that's the right analogy. Realistically you'd aim to combine them meaningfully, a bit like how two eyes give you depth perception.

      You assume 1+1 is less than two, when really you'd aim for >2

      • dawnerd 2 days ago

        That’s the thing: they’re already merging multiple data streams (cameras) and dealing with visual anomalies. It’s pretty nonsensical that they can’t figure out different sensors when other companies are doing it just fine.

      • Izkata 2 days ago

        I remember this being his reasoning for it, that the lidar should be unnecessary if they could get multi-camera depth perception to work like it does in humans.

        • sudosysgen 2 days ago

          But it won't with current camera technology. The mix of high acuity at the center, a movable camera, wide peripheral vision, object tracking, and non-frame-based sensing that allows human eyes to sense the depth of moving objects in low light is not commercially available in camera systems under ~$10,000. It's cheaper to install LIDAR than to install cameras that are uniformly as good as the human eye. To get the acuity of a human eye at the center you'd need ~400 MPx, or a gimbaled camera that points toward an object of interest, and tracking small, fast objects with such a camera (and at night!) is very, very expensive in hardware.
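
          One rough way to arrive at a number in that ballpark (the field-of-view and per-pixel acuity figures below are assumptions for illustration):

              # Back-of-the-envelope: pixels needed for foveal acuity across the whole field of view.
              h_fov_deg, v_fov_deg = 200, 135      # rough human binocular visual field (assumed)
              arcmin_per_pixel = 0.5               # assumed foveal resolution; 20/20 vision is ~1 arcmin

              h_px = h_fov_deg * 60 / arcmin_per_pixel   # 24,000
              v_px = v_fov_deg * 60 / arcmin_per_pixel   # 16,200
              print(h_px * v_px / 1e6)                   # ~389 MPx, i.e. on the order of 400 MPx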

    • vitus 2 days ago

      I think that's an oversimplification, and helps mainly in the case where one sensor is totally offline.

      If you have two sensors, one says everything's fine but the other says you're about to crash, which one do you trust? What if the one that says you're about to crash is feeding you bad data? And what if the resulting course correction leads to a different failure?

      I'd hope that we've learned these lessons from the 737 Max crashes. In both cases, one sensor thought that the plane was at imminent risk of stalling, and so it forced the nose of the plane down, thereby leading to an entirely different failure mode.

      Now, of course, having two sensors is better than just having the one faulty sensor. But it's worth emphasizing that not all sensor failures are created equal. And of course, it's important to monitor your monitoring.

      • breadwinner 2 days ago

        > If you have two sensors, one says everything's fine but the other says you're about to crash, which one do you trust?

        Neither. You stop the car and tow it to the nearest repair facility. (Or have a human driver take over until the fault is repaired.)
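
        A toy sketch of that "neither, degrade safely" policy; the function, signal names, and actions are hypothetical, just to make the decision logic concrete:

            # Minimal sketch of a disagreement policy between two independent sensor streams.
            def fuse(camera_clear: bool, lidar_clear: bool) -> str:
                if camera_clear and lidar_clear:
                    return "continue"               # both agree the path is clear
                if not camera_clear and not lidar_clear:
                    return "emergency_brake"        # both agree there is an obstacle
                # They disagree: don't guess which sensor is faulty.
                return "controlled_stop_and_alert"  # degrade safely, hand off / call for service

            print(fuse(camera_clear=True, lidar_clear=False))  # controlled_stop_and_alert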

      • Spooky23 2 days ago

        The Elon magic is getting people to bikeshed bullshit like this and ignore the bigger issues.

        You don’t gouge out your ears because you hear something walking around at night that doesn’t appear to be accurate. As a human, your executive function makes judgements based on context and what you know. Your eyesight is degraded in the dark, so your brain pays more attention to unexpected sound.

        The argument for lidar, sonar or radar isn’t that cameras are “bad”, it’s that they perform very well in circumstances where visual input may not. As an engineer, you have an ethical obligation to consider the use case of the product.

        It’s not at all like the 737 MAX issue: many companies have been able to successfully implement these features. I have an almost decade-old Honda that uses camera and radar sensors to manage adaptive cruise and lane-keeping features flawlessly.

        In the case of Tesla and their dear leader, they tend to make dogmatic engineering decisions based on personal priorities. They then spackle in legal and astroturf marketing bullshit to dodge accountability. The folly of relying on cameras or putting your headlights inside of a narrow cavity in the car body (cybertruck) is pretty obvious if you live in a place that has winter weather and road salt.

        • vitus 2 days ago

          What? I was responding to the specific claim that two independent sensors always make your failure rates go down. Not that any particular source of sensor data is strictly inferior.

          I agree that optimizing for vehicle cost and shipping shoddy software at the expense of human lives is the wrong tradeoff, and having different sensors that provide more coverage of situations is generally preferable if you can afford it.

          But suppose you have a radar sensor that's faulty, and so it periodically thinks that there's something right in front of it, and so it causes the car to slam on its brakes. That's likely going to cause an accident if you're traveling at highway speeds. Does that mean that we shouldn't use radar in any circumstances? Of course not. But sensors are not infallible, and we need to keep that in mind when designing these systems.

          • Spooky23 2 days ago

            If you’re Tesla, the answer is going to be to treat it as driver-assistance tech… turn off the automatic feature and throw an alarm. Hopefully the driver has time to reorient; if not, his problem.

            If you’re Waymo, you’ve built in a higher standard of sensor and diagnostics, at the expense of the aesthetic.

            That’s always been the issue with Tesla… they push 99% solutions for 99.99% problems.

      • agubelu 2 days ago

        The difference with the 737 Max crashes is that there was only one sensor feeding data to the system, not two. If there's a discrepancy between the two sensors, disconnect the automation and let the human take control. And unlike planes, you can safely stop a car in almost all scenarios.

        • vitus 2 days ago

          > The difference with the 737 Max crashes is that there was only one sensor feeding data to the system, not two.

          That's fair. I was likely remembering that there were two sensors, since there's one for the pilot and one for the copilot, but not remembering that only one of them was in use.

          The other lessons worth noting are that 1) you need to give the human some indicator that something's wrong, and 2) you need to give the human some way of overriding the system.

          > And unlike planes, you can safely stop a car in almost all scenarios.

          Agreed, mostly. We do have to be careful about how the vehicle disengages and comes to a safe stop, or else we risk repeating the incident where Cruise hit a pedestrian, then pulled to the side of the road while dragging the pedestrian under the vehicle.

          • throwaway31131 2 days ago

            Let's also keep in mind that a few poor executions of control systems don't mean all of control system theory is bad...

    • timschmidt 2 days ago

      I thought it was consensus that the move away from lidar was driven by a lack of supply compared to desired EV production numbers, and the things said about making vision only work have all been about coping with that reality.

      • artursapek 2 days ago

        They didn't let lack of battery supply stop them.

        • timschmidt 2 days ago

          I can imagine a useful EV without lidar (my ICE vehicles certainly don't have one). I cannot imagine a useful EV without a battery.

          • elsonrodriguez 2 days ago

            Tesla’s valuation isn’t based on their ability to make a BEV.

            • timschmidt 2 days ago

              Isn't it? Seems to be what they do.

              • wraaath 2 days ago

                No - it's based on hype and dreams. If it were solely based on fundamentals, TSLA would carry a car company multiple, dropping a digit off its share price. At that price, Elon would have a significant chunk of his TSLA holdings margin-called away and be less of a threat to the world and the people on it as a whole.

                • dontTREATonme 11 hours ago

                  This type of earnest hyperbole always makes me chuckle, people really have become completely detached from reality

              • jdiff 2 days ago

                Tesla's valuation is based on the heady tales that Musk spins. It's not based on the fact that reality falls far short of them.

                • timschmidt 2 days ago

                  Engineering entirely new things isn't like making another set of silverware; there are unforeseeable complications, and all forward-looking statements are estimates or projections at best. Success seems to depend most on how many times you're able to try.

                  Maybe I'm just old, but development roadmaps are always vapor, until they aren't, which happens sometimes. Always been that way.

                  To be clear, I'd much rather my vehicle have superhuman multisensory awareness than only superhuman awareness. And I think it's fair for regulators to involve themselves with vehicle engineering, as all our safety depends on it. I've also watched the AI day presentations about their vehicle training system, and read their disclaimer text for enabling FSD, and it seems like they're doing a lot to advance the state of the art.

                  • michaelt 2 days ago

                    By traditional measures like profit-to-equity ratio, Tesla is overpriced compared to the likes of Toyota, Ford and BMW.

                    And not just a little bit - it’s way overpriced.

                    There are a bunch of possible explanations for this. One is that investors believe full self driving will come out really soon and work really well.

                    • timschmidt 2 days ago

                      I've followed EV development since the 90s, with great excitement and sadness around the EV1, and I built and daily drove EVs before I could find one to buy commercially. I appreciate that Tesla (and now BYD) have forced the hands of the traditional ICE vehicle manufacturers who were content in their partnerships with the oil industry and planned obsolescence.

                      I own a 1978 Suzuki Carry and a Miles Electric ZX40ST.

                      I have watched a fair amount of https://www.youtube.com/@MunroLive with interest about the implementations from all manufacturers. Sandy is in my home state of Michigan, birthplace of the auto industry, in which I've been multi-generationally involved, and he knows his stuff. He has criticisms for all manufacturers, but over the years Tesla seem to have listened more than most, to the point of Elon speaking for hours with Sandy on podcasts about technical aspects of the vehicles and production. I also appreciate that Tesla seem to make more of their cars in the US than any other manufacturer. Honda seems to be the only one comparable. I think the future's electric. I don't really care who makes it, but I'd like it to be well engineered, and made locally.

                      I'd personally probably think traditional ICE manufacturers and oil industries are overvalued, but I'm probably wrong as there's clearly lots of business for those industries which doesn't seem to be going anywhere.

          • catlikesshrimp 2 days ago

            I can't imagine a "full self driving" car that performs worse than humans being allowed on the streets.

            In medicine, the ethics committee would shoot down the project early.

            Edit: Comparing "ev vehicle" with "autonomous vehicle"

            • timschmidt 2 days ago

              I wouldn't use medicine as an example in this case. Medicine does all kinds of ill-advised things because they're worse for the disease than for the patient. And it does all kinds of things it knows will kill the patient if there's an opportunity to learn and improve other lives.

              It seems like Tesla's been pretty clear with drivers who've opted in to FSD that they're helping to test and develop a system which isn't perfect.

              Just off the top of the head, look up Henrietta Lacks for a notable example of how medicine has handled informed consent.

              • catlikesshrimp 2 days ago

                > "It seems like Tesla's been pretty clear with drivers who've opted in to FSD that they're helping to test and develop a system which isn't perfect."

                Strawman. No matter how many contracts and disclaimers they get in their favor, the "Full self driving" system is causing accidents (and deaths).

                Turn signals, rearview cameras, and safety belts all exist and are regulated to prevent the manufacturer from dumping responsibility on the drivers.

                I emphasize Tesla's "Full self driving" branding because the name was declared unlawful in California. https://www.govtech.com/policy/new-california-law-bans-tesla...

                • timschmidt 2 days ago

                  > The "Full self driving" system is causing accidents (and deaths)

                  Well yeah, and so is every other driver on the road. That is not the relevant metric. The real question is whether or not it is safer than the average driver. Or safer than great-aunt Marge, who doesn't have the best eyesight or hearing but somehow still has a driver's license.

      • searealist 2 days ago

        At the time, Lidar cost about $80k for the main unit.

    • NoTeslaThrow 2 days ago

      I'm not really following the attempt at logic here but under this logic surely you'd WANT multiple types of sensors for all the same reasons.

    • ravenstine 2 days ago

      I dunno because I'm pretty sure that each sense a person loses increases their risk of mortality. If what you are saying is true, then shouldn't we all be better off wearing blinders, a clothespin over our noses, and rely on echolocation?

    • Gud 2 days ago

      Not at all. The car should be smart enough to figure out which sensor is faulty. Otherwise, the car should not be driving itself at all.

      It's more like, a pilot has access to multiple sensors. Which they do.

      • swid 2 days ago

        The comment you are replying to is saying the chance of error decreases with more pilots since there is redundancy. They are not saying it’s like too many chefs in a kitchen.

        • lucianbr 2 days ago

          The comment really needs the words "each pilot acts as a backup for the other pilots" added, or something similar.

        • Gud 2 days ago

          Ah thanks, misunderstood!

      • Volundr 2 days ago

        I think you and GP are in agreement. If you have a sensor with a really bad 0.5 chance of failure, then by having two your chance of failure decreases to 0.5^2 = 0.25.

        • throwaway31131 2 days ago

          if failures are independent...

          • Volundr 2 days ago

            Right, that was also discussed in the comment.

    • throwaway31131 2 days ago

      Does that mean a Tesla only has one camera? If not, you're dealing with "multiple independent sensor streams" regardless.

    • treis 2 days ago

      He's just saying whatever will let him sell cars.

    • dathinab 2 days ago

      > Musk did not understand this.

      more likely

      he is a ruthless ** who doesn't care about people dying, and slightly increasing the profit margin is worth more to him than some people dying

      regulators/law allowing self driving companies to wriggle out of responsibility didn't help either

      lidar was interesting to him as long as it seemed like Tesla could maybe dominate the self-driving market through technological excellence; the moment it was clear that wouldn't work, he abandoned technological excellence in favor of micro-optimizing profit at the cost of real safety

      which shouldn't be surprising to anyone; I mean, he also micro-optimized away workplace safety at SpaceX, not only until it killed someone but even after it did (stuff like this is why there were multiple investigations against his companies until Trump magicked them away)

      the thing is, he has intelligent people informing him about stuff, including how removing lidar will, statistically speaking, kill people, so it's not that he doesn't know, it's that he doesn't care

      • robocat 2 days ago

        > and slightly increasing the profit margin is for him worth more then a some people dying

        The same argument likely applies to you (assuming you are in a wealthy nation), so you are likely just as ruthless:

        We could all slightly decrease our disposable incomes (spent on shit we don't need) and increase the life expectancy or QoL for someone in a poor country.

        Me too: I spent thousands on a holiday (profit for my soul) and I didn't give the money to a worthy charity. I'm no fan of Musk, but I think there's better arguments for dumping on him if you really need to do that.

        • dathinab 2 days ago

          I don't think such out of context nihilistic arguments are helpful at all, actually I'm pretty sure they are harmful to getting any change done.

          I also never said there aren't many other things bad about Musk.

          And comparing decisions made out of greed, which straightforwardly risk the lives of many while you aren't even gaining that much from them (compared with what you already gain from the same source), with "we as a society could, if we were a hive mind, live more frugally and help another society elsewhere" is just a very pointless thing.

          But even if they were the same, what does that change? Just because you also do something bad doesn't mean it's less bad, or more tolerable, or should be tolerated.

          > We could all slightly decrease our disposable incomes (spent on shit we don't need) and increase the life expectancy or QoL for someone in a poor country.

          But we can't, or more specifically, any individual can't; they can at best try to influence things by voting with their money and their votes. But that is a completely different context.

          And that doesn't mean you shouldn't spend your money with care and donate money if you can afford it.

      • FireBeyond 2 days ago

        Right, he despises environmental regulations because he’s “trying to get us to a new home”, so who cares what happens to this one? Especially if it delays him doing so.

        Someone said in an interview “Elon desperately wants the world saved. But only if by him.”

        • ddq 2 days ago

          Earth is the only home, any other planetary body would be akin to homelessness. Words cannot adequately express the sheer small-minded ignorance of these feckless technocrats, the unadulterated hubris to presume themselves remotely capable of surpassing eons of biospheric development with their fucking spaghetti code that can't even drive a car properly. With such a level of dangerous stupidity posing an existential threat to all life in this solar system, it is nothing short of the gravest immorality to allow such a dim-witted, mean-spirited, drugged-out, misguided moron to remain in a position of power and control.

          There is no planet B.

        • janice1999 2 days ago

          > “Elon desperately wants the world saved. But only if by him.”

          I believe that's also Lex Luthor's motivation in All Star (?) Superman.

  • liendolucas 2 days ago

    > instead he is a fraud/charlatan and...

    Just see him talking about things at Neuralink. Musk wouldn't exist if it weren't for the people working for him. He's a clown who made it to the top in a very dubious way.

    • glitchc 2 days ago

      Not dubious. He was rich to begin with and used that wealth to make more.

      • ravenstine 2 days ago

        Yes, though he wouldn't be relevant if people didn't believe he is a genius.

        • crote 2 days ago

          A lot of people believe you need to be a hard-working genius to get rich, so anyone rich must logically be a hard-working genius. Similarly, they believe anyone poor must be a lazy idiot.

          In reality getting rich has more to do with opportunity, connections, and luck - but accepting that means you've got to accept that the American Dream has always been a lie. It's much easier to convince yourself that people like Elon are geniuses.

    • rstuart4133 21 hours ago

      > Musk wouldn't exist if it weren't for the people working for him.

      I've decided Musk's core talent is creating and running an engineering team. He's done it many times now: Tesla, SpaceX, PayPal, even Twitter.

      It's interesting because I suspect he isn't a particularly good engineer himself, although the only evidence I have for that is that he tried to convert PayPal from Linux to Windows. His addiction to AI getting results quickly isn't a good look either. To make the product work in the long term, the technique has to get you 100% of the way there, not the 70% we see in Tesla and now DOGE. He isn't particularly good at running businesses either, as both Twitter and his solar roofs show.

      But that doesn't matter. He's assembled lots of engineering teams now, and he just needs a few of them to work to make him rich. Long ago it was people who could build train lines faster and cheaper than anyone else that drove the economy, then it was oil fields, then I dunno - maybe assembly lines powered by humans. But now wealth creation is driven by teams of very high level engineers duking it out, whether they be developing 5G, car assembly lines or rockets. Build the best team and you win. Musk has won several times now, in very different fields.

    • plun9 2 days ago

      What's wrong with him talking about things at Neuralink?

      • tim333 2 days ago

        And while I haven't seen the talking, Musk never claimed to be a neurologist or have expertise in that area.

    • dgrin91 2 days ago

      I see this get thrown around once in a while and I really don't get it. Isn't this true of basically every leader? Gates, Jobs, Buffett, Obama: they all wouldn't exist without their teams. Isn't that just obvious? Isn't one of the important markers of a good leader the ability to build a good team?

      • mmooss 2 days ago

        > I see this get thrown around once in a while and I really don't get it. Isn't this true of basically every leader? Gates, Jobs, Buffett, Obama: they all wouldn't exist without their teams. Isn't that just obvious? Isn't one of the important markers of a good leader the ability to build a good team?

        The others don't claim the extremes of power and genius, based to a large extent on what their teams do. They also build good teams - look at DOGE, for example.

      • tim333 2 days ago

        Buffett did pretty well before having a team. I still think Musk is quite good at physics/engineering type stuff. It's not everyone who can start with modest money and transform industries (rockets and evs mostly).

  • pfannkuchen 2 days ago

    You do want them to be better than humans, but vision quality is not really a major source of human accidents. Accidents are typically caused by driving technique, inattention, or a failure to accurately model the behavior of other drivers.

    Put another way - would giving humans superhuman vision significantly reduce the accident rate?

    The issue here is that the vision based system failed to even match human capabilities, which is a different issue from whether it can be better than humans by using some different vision tech.

    • crazygringo 2 days ago

      > Put another way - would giving humans superhuman vision significantly reduce the accident rate?

      Yes? Incredibly?

      If people had 360° instant 3D awareness of all objects, that would avoid so many accidents. No more blind spots, no more missing objects because you were looking in one spot instead of another. No more missing dark objects at night.

      It would be a gigantic improvement in the rate of accidents.

      • shepherdjerred 2 days ago

        Idk, many people are just bad at driving due to impatience.

        They don’t leave enough space/time to react even if they did have enough awareness

        • crazygringo 2 days ago

          Sure but that's not all people.

          If you assume a constant rate of attention, but then massively increase what people are aware of during that attention, that's a massive increase in safety.

          Safety is affected by lots of factors. Increasing any of them increases safety -- you don't need to increase all of them to get improvements.

          • shepherdjerred 2 days ago

            ... I don't know.

            Many people are in accidents that are entirely not their fault, e.g. being rear-ended. You can't do anything about those who drive unsafely, distracted, under the influence, etc.

            Assuming you have a "good" driver, there are still plenty of times they might get into accidents for reasons that aren't about awareness. For example, road conditions, the behavior of other drivers, and just normal, honest mistakes.

            At least for me, my own senses aren't really the limiting factor in my safety. My eyes are good enough. Modern cars have the equivalent of bowling lane bumpers with auto-centering, pre-collision warning, and blind spot warning/indicators.

            • crazygringo 2 days ago

              I don't understand what you're trying to say.

              Your eyes aren't good enough, compared to if you had 360° LIDAR awareness, which is the comparison here.

              You're listing all these other causes of accidents, which no one disputes. There's still a whole range of accidents caused by limitations in our spatial awareness because we can only ever be looking in one direction at a time.

              You're talking about "normal, honest mistakes". That's what I'm talking about. There would be less of those if we had magic 360° LIDAR in our bodies. A lot less. Especially at night, but during the day too.

              This isn't about blaming anybody. It's just the simple scientific fact that human senses are limited, and self-driving cars shouldn't limit themselves to human senses. Human senses aren't good enough for preventing all preventable accidents, no matter how "honest" drivers are.

              • shepherdjerred 2 days ago

                The question was:

                > Put another way - would giving humans superhuman vision significantly reduce the accident rate?

                I'm saying that human vision is not the cause of most accidents. Most accidents are caused by distraction, incorrect decisions/errors of judgement, road conditions, etc.

                • crazygringo 2 days ago

                  And I'm saying a ton of those "incorrect decisions/errors of judgement" would no longer exist if we had superhuman LIDAR abilities.

                  You're using existing human vision as your baseline, which is not the right comparison here. The question is a baseline of superhuman abilities. So there's a ton of accidents we wouldn't get into if we had those superhuman abilities.

                  Basically everything involving a collision with an object we weren't aware of in time because it wasn't in our limited field of view and wasn't well-lit, that could have been avoided if we had been. Which is a decent proportion of accidents.

                  But the bigger, original point is that LIDAR is better than cameras, and better for avoiding accidents.

                  • pfannkuchen 2 days ago

                    > No more blind spots, no more missing objects because you were looking in one spot instead of another. No more missing dark objects at night.

                    Of these, only dark objects at night is related to lidar vs vision. What percent of accidents is attributable to not seeing dark objects at night?

                    A limited field of view is solved by having a bunch of cameras; that is what is orthogonal to LiDAR vs vision.

                    I’m open to an argument here, but you have not provided any compelling rebuttal. You apparently feel like you have, though.

                    • crazygringo 2 days ago

                      What are you even talking about?

                      You use some made-up idea like "superhuman vision" and then you're asserting that I haven't provided a "compelling rebuttal"?

                      You're not engaging in good-faith conversation here. If you want to understand the clear, obvious benefits of LIDAR over cameras, it's a Google search away. And it's not just about at night -- it's about weather, it's about greater accuracy, especially at greater distances, it's about more accurate velocity, and so forth. It's not rocket science to understand that earlier, more confident detection of a car or deer or child suddenly moving into the street is going to reduce collisions.

                      • pfannkuchen 2 days ago

                        Maybe I need to explain better.

                        Obviously LiDAR improves the maximum achievable capabilities of a self driving system. I am not disputing that. I don’t think anyone would or could dispute that, it’s just trivially true.

                        The point I am trying to make is more nuanced than that.

                        Basically I’m thinking about whether this decision to ditch lidar is actually stupid, or if there could be a sensible explanation that is plausibly true. I am proposing what I believe is a plausibly true sensible explanation for the decision.

                        If you think about whether accidents would be reduced by having lidar, the answer is again obviously yes. But if you think about this as a business decision, it is not so simple. The question is not whether accidents would be reduced by having lidar. The question is whether accidents would be reduced enough to justify the added cost and complexity to the system.

                        So then, how do we figure that out? Well, we think about the causes of human car accidents ranked by percent of car accidents caused. What would be the items in the top 95% let’s say? Then, which of those items can be addressed exclusively by a lidar enabled system.

                        To isolate can-only-be-fixed-by-lidar problems, I imagine a thought experiment where we have a human with lidar for eyeballs. Which of the top k causes of accidents would be fixed by giving humans lidar eyeballs. Well, maybe not that many, actually. That is my point.

                        This could be argued to be immoral due to I guess making decisions about death based on business considerations. But it’s probably fairly easy to convince oneself as a Tesla executive that the car (M3 anyway) being affordable is critical to getting the safer-than-human driving tech adopted, so making the car more expensive actually fails to save more people than the lidar would save, etc.

                  • dzhiurgis 2 days ago

                    Instant 360 vision doesn’t guarantee awareness. Most crashes happen in perfect conditions.

      • pfannkuchen 2 days ago

        You seem to be missing my point.

        Looking in one spot instead of another is included in what I’m calling “attention”. Of course paying attention to everything all the time would be and is a huge improvement. That is orthogonal to the type of vision tech being used. All approaches used in self driving systems today look everywhere all the time.

        • crazygringo 2 days ago

          You were responding to a comment criticizing Musk for removing "radar/lidar".

          So I'm assuming that the "superhuman vision" you described meant specifically if people had LIDAR, since that was the subject at hand.

          LIDAR is superior to cameras because it much more accurately detects objects. It's an advantage all the time, and an especially huge advantage at night.

          So LIDAR isn't orthogonal to anything. It's the entire point. If people had LIDAR, the accident rate would be significantly reduced.

          You're arguing that LIDAR won't help cars, because by analogy it wouldn't help people if it was a native biological sense. But that's wrong.

    • maxerickson 2 days ago

      The improved sensors improve awareness; improving the situational awareness of human drivers would have a huge impact.

      • layer8 2 days ago

        Humans have limited awareness bandwidth. I’m doubtful that improved “sensors” would change the fact that one can only take in and focus on very few things at a time. If anything, filtering down the input to the relevant features would consume more brain resources and possibly take longer.

    • NoTeslaThrow 2 days ago

      > but vision quality is not really a major source of human accidents.

      Never driven before?

      • gruez 2 days ago

        I've driven before, and agree with the OP. Now what?

        • ruined 2 days ago

          let's experiment. i'll reduce your vision quality, then you try driving

          • gruez 2 days ago

            It's pretty obvious from the context of the thread that "vision quality" refers to LIDAR vs cameras, and that human vision is good enough, not that being legally blind somehow doesn't affect driving quality.

            • NoTeslaThrow 2 days ago

              Human vision isn't good enough, though. Otherwise why not just drive the car yourself?

              • gruez 2 days ago

                >Human vision isn't good enough, though.

                Maybe if your standard is "scheduled commercial passenger flights" level of safety.

                >Otherwise why not just drive the car yourself?

                There's plenty of reasons why humans can be more dangerous outside of vision quality. For instance, being distracted, poor reaction times, or not being able to monitor all angles simultaneously.

                • NoTeslaThrow 2 days ago

                  > For instance, being distracted, poor reaction times, or not being able to monitor all angles simultaneously.

                  As a pitch for self driving, it's going to be a long time before I trust a computer to do the above better than I do. At the very least adding sensors I don't have access to will give me assurance the car won't drive into a wall with a road painted on it. I don't know how on earth you'd market self-driving as competent without being absurdly conservative about what functionality you claim to be able to deliver. Aggregate statistics about safety aren't going to make me feel emotionally stable when I am familiar with how jerky and skittish the driving is under visually confusing driving conditions.

                  Perhaps vision is sufficient, but it seems hopelessly optimistic to expect to be able to pitch it without some core improvement over human driving (aside from my ability to take the hand off the wheel while driving).

                  Edit: hilariously, there's already a video demonstrating this exact scenario: https://youtu.be/IQJL3htsDyQ

                  • gruez 2 days ago

                    >At the very least adding sensors I don't have access to will give me assurance the car won't drive into a wall with a road painted on it.

                    This is as relevant as self driving cars not being able to detect anti-tank mines. If you want to intentionally cause harm, there are far easier ways than erecting a wall in the middle of a roadway and then painting a mural on it. If you're worried about it accidentally occurring, the fact that there's no incidents suggests it's at least unlikely enough to not worry about.

                    >Aggregate statistics about safety aren't going to make me feel emotionally stable when I am familiar with how jerky and skittish the driving is under visually confusing driving conditions.

                    Sounds like this is less about the tech used (i.e. cameras vs lidar) and more about how "smooth" the car appears to behave.

            • Veserv 2 days ago

              But Tesla Vision currently falls below the minimum legal human vision requirements, and has historically been sold despite being nearly legally blind.

              Driving requirements in many states demand 20/40 vision in at least one eye [1]. 20/20 visual acuity is an arc-resolution of approximately 1 arc-minute [2], thus 20/40 vision is an arc-resolution of approximately 2 arc-minutes, or 30 pixels per degree of field of view. Legally blind is usually cited as approximately 20/200, which is approximately 10 arc-minutes or 6 pixels per degree of field of view.

              Tesla Vision HW3 contains 3 adjacent forward cameras at different focal lengths and Tesla Vision HW4 contains 2 adjacent forward cameras of different focal lengths; as such, those cameras cannot be used in conjunction to establish binocular vision [3]. We should therefore view each camera as a zero-redundancy single sensor, i.e. a "single-eye" case.

              We observe that Tesla Vision HW3 has a 35 degree camera for 250m, a 50 degree camera for 150m, and a 120 degree camera for 60m [4]. Tesla Vision HW4 has a 50 degree camera for 150m and a 120 degree camera for 60m [4]. A speed of 100 km/h corresponds to ~28 m/s, so those ranges correspond to lead times of roughly ~9s, ~5s, and ~2s. Standard safe driving practice dictates a 2-3 second following distance, so most maneuvers would be dictated by the 60m camera and predictive maneuvers would be dictated by the 150m camera.

              We observe that the HW3 forward cameras have a horizontal resolution of 1280 pixels, resulting in an arc-resolution of ~25.6 pixels per degree for the 150m camera and ~11 pixels per degree for the 60m camera, the camera used for the majority of actions. Both values are below the minimum vision requirements for driving in most states, with the wide-angle view within a factor of two of being considered legally blind.

              We observe that the HW4 forward cameras have a horizontal resolution of 2896 pixels, resulting in an arc-resolution of ~58 pixels per degree for the 150m camera and ~24 pixels per degree for the 60m camera. The 60m camera, which should be the primary camera for most maneuvers, fails to meet the minimum vision requirements in most states.
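
              Here is a minimal sketch of that arithmetic in R, using the camera specs cited from [4] and the acuity thresholds from [1][2] (all figures as quoted above):

                ppd <- function(h_pixels, fov_deg) h_pixels / fov_deg  # pixels per degree of FOV
                c(min_20_40 = 60 / 2, legally_blind_20_200 = 60 / 10)  # thresholds: 30 and 6 ppd
                ppd(1280, 35)   # HW3 narrow (250m) camera: ~36.6 ppd
                ppd(1280, 50)   # HW3 main   (150m) camera: ~25.6 ppd
                ppd(1280, 120)  # HW3 wide    (60m) camera: ~10.7 ppd
                ppd(2896, 50)   # HW4 main   (150m) camera: ~57.9 ppd
                ppd(2896, 120)  # HW4 wide    (60m) camera: ~24.1 ppd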

              It is important to note that there are literally hundreds of thousands, if not millions, of HW3 vehicles on the road using sensors that fail to meet minimum vision requirements. Tesla determined that a product that fails to meet minimum vision requirements is fit for use and sold it for their own enrichment. This is the same company that convinced customers to purchase these systems by promising, in 2016 while delivering HW2: "We are excited to announce that, as of today, all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver."[5] This despite the systems being delivered clearly not reaching even minimum vision requirements and, in fact, being nearly legally blind.

              [1] https://eyewiki.org/Driving_Restrictions_per_State

              [2] https://en.wikipedia.org/wiki/Visual_acuity

              [3] https://en.wikipedia.org/wiki/Tesla_Autopilot_hardware

              [4] https://www.blogordie.com/2023/09/hw4-tesla-new-self-driving...

              [5] https://web.archive.org/web/20240730071548/https://tesla.com...

              • gruez 2 days ago

                >We observe that the HW3 forward cameras have a horizontal resolution of 1280 pixels resulting in a arc-resolution of ~25.6 pixels per degree for the 150m camera and ~11 pixels per degree for the 60m camera, the camera used for the majority of actions. Both values are below minimum vision requirements for driving with most states with the wide angle view within a factor of two of being considered legally blind.

                This isn't as much of a slam dunk as you think it is. The fallacy is assuming visual acuity requirements are chosen because they're required for safe maneuvering, when in reality they're likely chosen for other tasks, like reading signs. A Tesla doesn't have to do those things, so it can potentially get away with lower visual acuity. Moreover, if you look at camera feeds from HW3/HW4 hardware, you'll see they're totally serviceable for discerning cars. It definitely doesn't feel like I'm driving "legally blind" or whatever.

                https://youtu.be/Odu9O4MhfW0

          • treis 2 days ago

            That's called night and people drive in it just fine.

            • Timon3 2 days ago

              You are aware that more accidents happen per mile at night? If people drove at night just fine, this wouldn't be the case.

              • treis 2 days ago

                That just means we're successful 99.998% of the time instead of 99.999% of the time. It's not a particularly significant difference.

                • Timon3 2 days ago

                  It's significant enough to directly prove that degraded vision leads to worse driving. This is very significant if we decide whether a driving system should only use vision, which can degrade.

    • sudosysgen 2 days ago

      Human eyes are in many ways far superior to reasonably priced vision sensors. This isn't giving humans superhuman vision, it's changing the tradeoffs human vision has (without changing the cognitive process it coevolved with to begin with, which is the most important part of why we get into accidents).

      There is no affordable vision system that's as good as human vision in key situations. LiDAR+vision is the only way to actually get superhuman vision. The issue isn't the choice of vision system, it's the choice of vision itself; and besides, the lesson from the human sensory system is to have sensors that go well with your processing system, which again would mean LiDAR.

      If humans could integrate a LiDAR-like system where we could be warned of approaching objects from any angle and accurately gauge the speed and distance of multiple objects simultaneously, we would surely be better drivers.

    • luckylion 2 days ago

      Will limiting auto-pilots to human-level vision not increase their accident rate?

    • harimau777 2 days ago

      Isn't radar/lidar less like super vision and more like spidey sense? I'd love to give human drivers an innate sense of exactly how far away things are and how fast they are closing.

    • flashman 2 days ago

      seems like vision quality might be a major source of robot accidents

  • moralestapia 2 days ago

    >This made me realize he is not the genius he is made out to be but instead he is a fraud/charlatan and over the years his statements on different topics have only hardened that belief.

    That was Karpathy's decision [1] and, yes, I also have that perception of him.

    I know this is not going to be well received because he's one of HN's pet prodigies but, objectively, it was him.

    1: https://www.forbes.com/sites/bradtempleton/2022/10/31/former...

    (one of many)

    • breadwinner 2 days ago

      From the article: While Elon Musk is best known for making statements on this, Karpathy was his go-to guy on backing up that reasoning.

      I read that as Musk wanted this done and asked Karpathy to find a way.

  • rayiner 2 days ago

    > The point I soured on Musk was when he ditched radar/lidar and tried to go with camera's alone. This made me realize he is not the genius he is made out to be but instead he is a fraud/charlatan and over the years his statements on different topics have only hardened that belief.

    Yeah, he was arguably wrong about one thing so his building both the world's leading EV company and the world's leading private rocket company was fake.

    As they say, the proof of the pudding is in the eating. Between Tesla, SpaceX, and arguably now xAI, the probability of Musk's genius being a fluke or fraud is close to zero.

    • watwut 2 days ago

      He was not wrong about one thing, he was frequently wrong. He is a highly charismatic bullshitter who gets away with fraud and lies, and who can secure help from people when he needs it.

      But, he is frequently wrong, it just does not matter. He was occasionally right, like with Tesla back then.

      • stephenapple 2 days ago

        Of total road deaths per year, 14% are motorcyclists, and the 5 Tesla fatalities are 0.0008% of motorcycle deaths. I think a more poignant question is how they got us arguing over this so quickly. Most posts are just anti-Elon. Boggling...

    • fzeroracer 2 days ago

      > probability of Musk's genius being a fluke or fraud is close to zero.

      We already know he's an objective fraud because he literally cheats at video games and was caught cheating. As in, he hired people to play for him and then pretended the accomplishments were his own. Which maps very well to literally everything he's done.

  • xhkkffbf 2 days ago

    I can see that cutting out the LIDAR could qualify as "cheap." And maybe he's being risky on betting on lowering the cost by simplifying the technology.

    But why make the jump to "fraud/charlatan"? Every system needs to be finite. We can't invest in every bell and whistle. Furthermore, he's upfront about the decision. Fraud requires deception.

    • hatsix 2 days ago

      According to Musk, my car was supposed to be doing unsupervised driving by now. The shift to vision-only has consumed all of their resources for the past several years, and my car has been left behind. There have been giant leaps in vision-only, but it still isn't better than vision+radar.

      So, I was deceived. I didn't buy the car because of the deception, but I did buy FSD because of it.

      Also, FSD disengaging when it gets sensor confusion should be considered criminal fraud. FSD should never disengage without a driver action.

  • Workaccount2 2 days ago

    It's because the sensor suite for lidar is expensive and HD cameras are basically a commodity at this point.

    So if your goal is to pump out $20k self driving cars, then you need cameras to be good enough. So the logic becomes "If humans can do it, so can cameras, otherwise we have no product, no promise."

    • breadwinner 2 days ago

      Cameras have poor dynamic range and can be easily blinded by bright surfaces. While it is true that humans do fine with only eyes, our eyes are significantly better than cameras.

      More importantly, expectations are higher when an automated system is driving the car. It is not sufficient if, in aggregate, self-driving cars have fewer accidents. If you lose a loved one in an accident where the accident could have been easily avoided if a human was driving, then you're not going to be mollified to hear that in aggregate, fewer people are being killed by self-driving cars! You'd be outraged to hear such a justification! The expectation therefore is that in each individual injury accident a human clearly could not have handled the situation any better. Self-driving cars have to be significantly better than humans to be accepted by society, and that means it has to have better-than-human levels of vision (which lidars provide).

      • josephcsible 2 days ago

        How many strangers' lives is a loved one's life worth? If your answer is anything other than "1", how does that square with other people having their own loved ones, and your loved ones being strangers to them?

        • breadwinner 2 days ago

          Don't understand your point. Everyone's life is important, which is why I said: the expectation is that in each individual injury accident a human clearly could not have handled the situation any better. It is insufficient for self-driving cars to be as safe as, or even slightly better than human drivers. Which is why, better-than-human sensors are needed.

          • josephcsible 2 days ago

            I'm saying that the only way "slightly better than human drivers" wouldn't be good enough is if some people's lives are worth more than others'.

    • sudosysgen 2 days ago

      I wish this myth would die. Anyone who has picked up a camera would know that it isn't true: there are many things even very expensive cameras can't do that humans can. Specifically, the mix of high acuity when needed but wide angle of view, low-light movement performance, and tracking of fast objects is something that only a camera system costing tens of thousands of dollars can do, and those are all relevant to driving.

  • lukeschlather 2 days ago

    I drive a beater used car, and I've contemplated installing aftermarket lidar on it. I don't want to drive as a human only relying on being able to see everything by turning my head.

  • stephc_int13 2 days ago

    I fully agree on this.

    Computer vision has turned out to be a very tough nut to crack, and that should have been visible to anyone doing serious work in the field for at least the past 15 years.

    In any case, any safety-critical system should be built with redundancy in mind, with several subsystems working independently.

    Using more and better sensors is only a problem when building a cost-sensitive system, not a safety-critical one, and very often those sensors are expensive because they are niche; that can be mitigated with mass scale.

  • EasyMark 2 days ago

    Good point. You don't get rid of a technology that improves your results drastically until you have a replacement. His visual systems are still a failure when compared to lidar-based ones. Just see how well other self-driving systems are doing in comparison.

  • CalChris 2 days ago

    Radar. I remember some of his nonsense about disambiguating, despite the previous AP disambiguating just fine. Same with the rain sensor. This is a cheap part. Radar isn't very expensive either.

    To be fair, ghost braking on TACC has been reduced. But I tend to control my wipers with voice.

    • dawnerd 2 days ago

      I still get just as much phantom braking. I’ve narrowed it down to the car using map speed limit data and losing track of where it is. It’s very consistent on 5 south in LA. The same spot every day.

  • kjkjadksj 2 days ago

    Everyone is harping on the engineering. It is a marketing reason first and foremost. Lidar units are ugly as hell. No consumer would ever buy one of those Waymo Jaguars, even if it's better, if Tesla can do 95% of that without looking like a beluga whale.

    • outer_web 2 days ago

      Didn't early Teslas have lidar? They didn't look like AWACS.

      • tim333 a day ago

        I think they had radar but not lidar.

  • LMYahooTFY 2 days ago

    I guess I just have to accept that for the foreseeable future, any article in any way related to Elon Musk will result in a lot of angry low quality comments that get lots of upvotes.

    Instead of critiquing the article for its liberal use of words like "overwhelmingly", "unique", and "100% of Teslas" on an n=5 sample, with limited data and a very questionable analysis of the Snohomish accident, we discuss how Musk is a fraud.

  • ModernMech 2 days ago

    I'm a roboticist who learned from researchers involved in the 2007 DARPA Urban Challenge. The key takeaway from that event was that every single car that finished the race was equipped with the 3D Velodyne LIDAR. It was that technology that enabled driverless cars to work as a concept.

    Why? Because it provided information that people had to infer, and that you couldn't easily get from a camera. So absent the human inference engine that allowed human drivers to work, we would have to rely on highly precise measurement instruments like LiDAR instead.

    Musk's huge error was in thinking "Well humans have eyes and those are kind of like cameras, therefore all you need are cameras to drive"

    But no! Eyes are not cameras, they are extensions of our brains. And we use more than our eyes to navigate roads, in fact there's a huge social aspect to driving. It's not just an engineering challenge but a social one. So from the get-go he's solving the wrong problem.

    I knew this guy was full of it when he started talking about driverless cars being 5 years out in 2015. Just utter nonsense to anyone who was actually in that field, especially if he thought he could do it without LiDAR. He called his system "autopilot" which was deceptive, but I was completely off him when he released "full self driving - beta" onto public streets. Reckless insanity. What made me believe he is criminally insane is this particular timeline (these are headlines, you can search them if you want to read the stories):

      2016 - Self-Driving Tesla Was Involved in Fatal Crash, U.S. Says
      2016 - Tesla working on Autopilot radar changes after crash
      2017 - NTSB Issues Final Report and Comments on Fatal Tesla Autopilot Crash
      2019 - Tesla didn’t fix an Autopilot problem for three years, and now another person is dead
      2021 - Inside Tesla as Elon Musk Pushed an Unflinching Vision for Self-Driving Cars
      2021 - Tesla announces transition to ‘Tesla Vision’ without radar, warns of limitations at first
      2022 - Former Head Of Tesla AI Explains Why They’ve Removed Sensors; Others Differ
      2022 - Tesla Dropping Radar Was a Mistake, Here is Why
      2023 - Tesla reportedly saw an uptick in crashes and mistakes after Elon Musk removed radar from its cars
      2023 - Elon Musk Overruled Tesla Engineers Who Said Removing Radar Would Be Problematic: Report
      2023 - How Elon Musk knocked Tesla’s ‘Full Self-Driving’ off course
      2023 - The final 11 seconds of a fatal Tesla Autopilot crash
    
    Now I get to add TFA to the chronicle.

    The man and his cars are a menace to society. Tesla would be so much further along on driverless cars without Musk.

    • kjkjadksj 2 days ago

      I mean, what does lidar do but tell you distance? A rangefinder optic from 1940 can also tell you distance with two little offset windows, purely optically and accurately. Millions of rolls of film were shot on this principle of optical distance finding. And yet, this is no good now, because IMO it's Musk's idea and that's enough to poison it among the armchair engineers, more than any other reason. Telling to this point is how everyone just parrots the same comment that optical self-driving is bad without actually providing evidence to support their point. Just arguing on precedent established in forums and social media.

      • ModernMech 2 days ago

        What poisons the idea among engineers is not that Musk came up with it (he didn't), but that he can't justify it using the language we speak, either by theory or by practice. Instead what we get from him is marketing, stunts, keynotes, sleights of hand, bravado, exaggerations, unfulfilled promises, premature rollouts of untested tech... the man is an anti-engineer, it's hard to trust him.

        What made cars successful at the DUC was omnidirectional 3D distances to everything around you provided by the Velodyne LiDAR. So if we have a way to get that kind of data without a LiDAR, that would be fine.

        Moreover, what a LIDAR gives you is an honest-to-god measurement. This whole idea of using deep learning to get range data from camera data is not measuring anything, it's making an inference. Which is why it's fooled by a looney tunes wall.

        And like I said, we lack any inference engine that is better than the human brain. So betting your entire strategy on an inference engine that doesn't exist, bucking industry practice and the consensus of the engineering community, gets the following results:

          - The NHTSA’s self-driving crash data reveals that Tesla’s self-driving technology is, by far, the most dangerous for motorcyclists, with five fatal crashes that we know of.
          - This issue is unique to Tesla. Other self-driving manufacturers have logged zero motorcycle fatalities in the same time frame.
          - The crashes are overwhelmingly Teslas rear-ending motorcyclists.
        
        If this problem is unique to Tesla, and Tesla is unique in relying solely on optical sensory input, then we can conclude relying solely on optical sensory input is bad. Nothing to do with the fact that Musk is the champion.

      • mturmon 2 days ago

        It does not seem like you have much experience with either LiDAR or stereo vision. These systems have well known performance characteristics and advantages. LiDAR has its limitations (in terms of point density and scan rate, and cost), but its range estimate accuracy just blows stereo vision away for automotive settings.

        The error on range estimates from stereo “disparity” goes up as range squared for fundamental-physics reasons. Accurate calculation of disparity relies on well calibrated optics (clean, physically rigid) and it’s easy to disrupt that.

        Stereo is also target-sensitive. Stereo ranging is enhanced by certain target features, like large, smooth surfaces with texture (say, stucco walls, or car grilles). It is made more difficult by smaller target surfaces with weird curvature.

        I’m sure that stereo that did well on auto-sized surfaces would do much worse for pedestrian or motorcycle (size and shape) surfaces.
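
        To make the range-squared growth concrete, here is a minimal sketch in R; the baseline, focal length, and disparity-noise figures are invented for illustration, not taken from any real automotive stereo rig:

          f_px     <- 1000                               # focal length, pixels (assumed)
          baseline <- 0.3                                # camera separation, meters (assumed)
          sigma_d  <- 0.25                               # disparity matching noise, pixels (assumed)
          z        <- c(10, 30, 60, 120)                 # target range, meters
          sigma_z  <- z^2 * sigma_d / (f_px * baseline)  # depth error grows as z^2
          round(sigma_z, 2)                              # 0.08 0.75 3.00 12.00 meters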

      • crote 2 days ago

        There is a huge difference between measuring the distance to a single target a few miles away, and measuring the distance to dozens of moving, accelerating, and overlapping objects in close proximity to each other and to the reference point.

        Besides, rangefinding optics weren't exactly amazing in the 1940s either. It's why the introduction of radar made such a massive difference in naval warfare.

        • kjkjadksj 2 days ago

          Autofocus systems are designed to stop race cars in motion today already. And besides, why argue hypotheticals here when I look out my window and I see Teslas self-driving just fine? The system is proven despite your feelings about it. And if it was truly unsafe or compromised we'd have seen a good deal of evidence at this point, what with the number of users self-driving and the mileage they've been traveling, not to mention how many people in the media are so desperate to find such evidence and publish some Pulitzer attempt about it given the sentiment around Musk these days.

          • ModernMech a day ago

            > why argue hypotheticals here when I look out my window and I see teslas self driving just fine?

            I'll reiterate:

              2019 - Tesla didn’t fix an Autopilot problem for three years, and now another person is dead
            
            That's not a hypothetical. How do you figure this result indicates the Tesla theory of camera-only navigation is working out "just fine"? This level of professional negligence should be considered a crime.

            > And if it was truly unsafe or compromised we'd have seen a good deal of evidence

            We see that evidence all the time. Teslas veering into oncoming traffic, hitting parked vehicles, driving through Looney Tunes walls where other cars stop, being fooled by smokescreens where other cars are not, and decapitating multiple people in a similar way that would have been mitigated by LiDAR. And now, apparently, rear-ending motorcyclists.

            This whole self driving scam is an exercise in the 80/20 rule. They spent 20% of the time to get 80% of the results, and that's why you claim "I see teslas self driving just fine"

            But Tesla has been promising the other 20% will be here in 5 years for 10 years. That last 20% is the difference between the system being "Full" self driving and a fraud. Right now what we see is a vaporware fraud, and they're not going to be able to deliver.

      • ndsipa_pomu 2 days ago

        The important thing about LIDAR is not so much that it is reporting a distance, but that it is reporting distances to actual objects. Cameras don't distinguish objects and so there has to be a lot of processing to try to tease out the objects from the background and this appears to be what's causing issues with Teslas.

    • plun9 2 days ago

      Tesla's camera-based self-driving system is still likely at least five years away from being ready, Hall said. Still, he said, "Elon Musk is right about not needing lidar."

      https://www.bizjournals.com/sanjose/news/2022/11/09/heres-wh...

      • ModernMech a day ago

        Hall was ousted from velodyne. He's salty.

          He began selling shares because he didn't like the direction of the company and was unhappy about being forced out of it, he told the Business Journal. From almost the moment he was pushed out until this spring, he fought a war with Velodyne in the press, at shareholder meetings and in the courts. Although the lawsuit he filed against the company is ongoing, he eventually decided the company was a "dump" and couldn't be salvaged.
        
        https://archive.is/oxwoh

  • dzhiurgis 2 days ago

    Data point: the only lidar-equipped car in the US is the Volvo EX90, which has sold 3k cars in almost a year. The lidar is not enabled yet.

fsh 2 days ago

This problem was solved more than a decade ago by radar sensors (standard on many mid-range cars at the time). They detect imminent collisions with almost perfect accuracy and very few false positives. Having better sensor data is always going to beat trying to massage crappy data into something useful.

  • redox99 2 days ago

    Radars are not as good as you think. They generally can't detect stationary objects, have problems with reflections, most of them are VERY low resolution, and so on.

    The "with almost perfect accuracy and very little false positives" part is not true.

    If you look at EuroNCAP data, you'll see how most cars are not close to 100 in the Safety Assist category (and Teslas with just vision are among the top). And these EuroNCAP tests are fairly easy and idealized. So it's clearly not a solved problem, as you portray.

    https://www.euroncap.com/en/ratings-rewards/latest-safety-ra...

    • throwaway31131 2 days ago

      > They generally can't detect stationary objects, have problems with reflections, most of them are VERY low resolution, and so on.

      Radar can absolutely detect a stationary object.

      The problem is not, "moving or not moving", it's "is the energy reflected back to the detector," as alluded to by your second qualification.

      So something that scatters or absorbs the transmitted energy is hard to measure with radar because the energy doesn't get back to the detector completing the measurement. This is the guiding principle behind stealth.

      And, as you mentioned, things with this property naturally occur. For example, trees with low hanging branches and bushes with sparse leaves can be difficult to get an accurate (say within 1 meter) distance measurement from.

      • redox99 2 days ago

        They can detect stationary objects, yes. But there's so much clutter from things like overpasses, road signs, and other objects that confuse the radar, that for things like adaptive cruise control, stationary objects are often intentionally filtered out or assigned much lower priority. So you detect moving objects (which stand out because of doppler shift).

        • y1n0 2 days ago

          Well, if we're talking about radar in a moving car, it's the other vehicles moving at the same speed that appear stationary.

          Non-moving vehicles are seen as approaching you at whatever speed you are moving at. Along with all the other things you mentioned.

          So they all have doppler shift but the "stationary" things approaching your car at your speed actually have much higher shift than the traffic around you.
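
          A toy illustration of that in R (the speeds and the 77 GHz carrier are just assumed round numbers): the radar sees closing speed, so genuinely stationary objects show up at your full speed, a much larger Doppler shift than the same-direction traffic around you.

            ego_speed    <- 30                           # own speed, m/s (assumed)
            target_speed <- c(stopped_car = 0, slower_car_ahead = 25, oncoming_car = -30)
            closing      <- ego_speed - target_speed     # 30, 5, 60 m/s closing speeds
            2 * closing * 77e9 / 3e8                     # Doppler shift in Hz: ~15400, ~2570, ~30800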

          • porphyra a day ago

            Yea, and they are designed to filter out actually stationary things by ignoring anything that's coming at you at the same speed as the current car speed (from measuring wheel rotation).

      • potato3732842 2 days ago

        Imagine if you could only see in a very narrow portion of the visible light spectrum, like only green or something. That's kind of how radar "sees" (I'm grossly over-simplifying here but point is it doesn't see the way we do).

        It's hard to pick out something sticking off the back of a truck, or a motorcycle behind a vehicle, without false-positive triggering off of other stuff and panic braking at dumb times, something early systems (generically, not any particular OEM) were known for, which is why they were mostly limited to warnings, not actual braking.

        And while one can make bad faith comments all day about that not technically being the fault of the system doing the braking, allowing such systems to proliferate would be a big class action lawsuit, and maybe even a revision of how liability is handled, waiting to happen.

        • throwaway31131 2 days ago

          I guess it boils down to this.

          Earlier in the thread people were saying removing lidar was bad because multiple sensor types are good, presuming the cameras stay either way and one is not replacing cameras with radar. I agree with this. It's usually trivially easy to corner-case defeat one sensor type, as your example shows, regardless of sensor type. They all have one weakness or another.

          That's why things like military systems have many sensor types. They really don't want to miss the incoming object so they measure it many different ways. Defeating many different sensor types is just way harder and therefore more unlikely to occur naturally.

          And yes, control systems can absolutely reliably combine the input of many sensors. This has been true for decades.

          Frankly I'm surprised more of these systems don't take advantage of sound. It's crazy cheap and society has been adding sound alerts to driving for a long time (sirens, car horns, train horns, etc.)

        • hwillis 2 days ago

          > Imagine if you could only see in a very narrow portion of the visible light spectrum, like only green or something.

          No, that's how lidar works. Lidars have a single frequency and a very narrow bandwidth. Automotive radars have a bandwidth of 1-5 GHz. They operate around 80 GHz, which is very well reflected by water (including people) and moderately reflected by things like plastic. 80 GHz is industrially used to measure levels of plastic feedstock.

          Compare TSA scanner images, which are ~300 GHz: https://www.researchgate.net/figure/a-Front-and-back-millime...

          You are correct that most automotive radars, like Bosch units [1], are very low detail though. Most of them don't output images or anything - they run proprietary algorithms that identify the largest detection frequencies (usually a limited number of them) and calculate the direction and distance to them. Unlike cameras and lidars they do not return raw data, so naturally, when building driver assistance, companies instead relied on cameras and lidar. Progress in radar was instead driven by the manufacturers, and with smaller incentives that progress has been slower.

          [1]: https://www.bosch-mobility.com/en/solutions/sensors/front-ra...

  • hwillis 2 days ago

    Backing this up: automotive radar uses a band at ~80 GHz. The wavelength is ~3.7 millimeters, which lets you get incredible resolution. Not quite as good as the TSA airport scanners that can count your moles through your shirt, but good enough to see anything bigger than a golf ball.

    For a long, long time automotive radar was a pipe dream technology. Steering a phased array of antennas means delaying each antenna by 1/10,000s of a wave period. Dynamically steering means being able to adjust those timings![1] You're approaching picosecond timing, and doing that with 10s or 100s of antennas. Reading that data stream is still beyond affordable technology. Sampling 100 antennas 10x per period at 16 bit precision is 160 terabytes per second, 100x more data than the best high speed cameras. Since the Fourier transform is O(n log n), that's tens of petaflops to transform. Hundreds of 5090s, fully maxed out, before running object recognition.
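
    A quick back-of-envelope check of that data-rate figure in R (the sampling scheme here is purely illustrative, not any real radar front end):

      f_carrier          <- 80e9   # carrier frequency, Hz
      samples_per_period <- 10     # samples per wave period (assumed)
      antennas           <- 100    # array elements (assumed)
      bytes_per_sample   <- 2      # 16-bit samples
      f_carrier * samples_per_period * antennas * bytes_per_sample / 1e12  # ~160 TB/s of raw samples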

    Obviously we cut some corners instead. Current techniques way underutilize the potential of 80 GHz. Processing power trickles down slowly and new methods are created unpredictably, but improvement is happening. IMO radar has the highest ceiling potential of any of the sensing methods, it's the cheapest, and it's the most resistant to interference from other vehicles. Lidar can't hop frequencies or do any of the things we do to multiplex radar.

    [1]: In reality you don't scan left-right-up-down like that. You don't even use just an 80 GHz wave, or even just a chirp (a pulsing wave that oscillates between 77-80 GHz). You direct different beams in all different directions at the same time, and more importantly you listen from all different directions at the same time.

    • porphyra a day ago

      Agreed. Also, the fact that current automotive radars return a point cloud (instead of, say, a volumetric density grid) is sad. But it will be a while before processing power can catch up, and by the time you have the equivalent of hundreds of 5090s on your car, you will also be able to drive flawlessly by running a giant transformer model on vision inputs.

  • plun9 2 days ago

    This isn't true. You can try using adaptive cruise control with lane-keeping on a radar-equipped car on an undivided highway. Radar is good at detecting distance and velocity, but can't see lane lines. In order to prevent collisions, you would need to know precisely the road geometry and lane positions, which may come from camera data, and combine that information with the vehicle information.

    • jamincan a day ago

      I do this all the time with no problem at all. I drive a 2023 VW Taos, for what that is worth.

  • outside1234 2 days ago

    But what if you were on Ketamine and thought you could resolve it with a camera?

  • mitthrowaway2 2 days ago

    Radar is great for detecting cars, not as great for detecting pedestrians.

  • whiplash451 2 days ago

    Are we sure Teslas don't have radars? We know they don't have lidars, but that's irrelevant.

BeetleB 2 days ago

The other comment pointing this out has been (ridiculously) flagged, so I'll repeat the point that was made:

The analysis is useless if it doesn't account for the base rate fallacy (https://en.m.wikipedia.org/wiki/Base_rate_fallacy)

The first thing I thought before even reading the analysis was "Does the author account for it?" And indeed he makes no mention that he did.

So after reading the whole article I have no idea whether Tesla's automatic driving is any worse at detecting motorcycles than my Subaru's (which BTW also uses only visual sensors).

Antidisclaimer: I hate both Teslas and Musk. And my hate for one is not tied to the other.

  • btrettel 2 days ago

    The base rate was discussed early in the article, but not by that name:

    > It’s not just that self-driving cars in general are dangerous for motorcycles, either: this problem is unique to Tesla. Not a single other automobile manufacturer or ADAS self-driving technology provider reported a single motorcycle fatality in the same time frame.

    • PaulRobinson 2 days ago

      That gives you absolute rate, but not relative rate.

      There are not many other cars out there (in comparison) with a self-driving mode. There are so many Teslas in the world driving around that I think you'd have to considerably multiply all the others combined to get close to that number.

      As such, while 5 > 0, and that's a problem, what we don't know (and perhaps can't know) is how that adjusts for population size. I'd want to see a motorcycle fatality rate per autonomously-driven mile, and even then, I'd want it adjusted for the prevalence of motorcycles in the local population: the numbers in India, Rome, London and Southern California vary quite a bit.

      • polygamous_bat 2 days ago

        > As such, while 5 > 0, and that's a problem, what we don't know (and perhaps can't know), is how that adjusts for population size.

        This puts the burden on companies which may hesitate to put their “self driving” methods out there because they have trouble detecting motorcyclists. There is a solid possibility that self driving isn’t being rolled out by others because they have a higher regard for human life than Tesla and its exec.

      • azinman2 2 days ago

        “ADAS self-driving technology”

        ADAS is fairly common. It was in my VW and BMW, and I’m certain many other cars have it too.

    • BeetleB 2 days ago

      That is not addressing the base rate.

      To take a hypothetical extreme: If all cars but one on the road were Teslas, it would not be meaningful to point out that there have been far more fatalities with Teslas.

      Even more illustrative, if 10 people on motorcycles had died from Teslas, and 1 person had died from that sole non-Tesla, then that non-Tesla would be deemed much, much more dangerous than Tesla.
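
      As a toy sketch of that arithmetic in R (the fleet sizes here are invented purely for illustration):

        deaths <- c(tesla = 10, other = 1)
        fleet  <- c(tesla = 1e6, other = 1)   # "all cars but one are Teslas"
        deaths / fleet                        # per-vehicle rate: 1e-05 vs 1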

      • btrettel 2 days ago

        It does address the base rate, though not in a fully satisfactory way. You're correct to point out the number of Teslas on the road vs. other vehicles, as is this person who mentions driving hours: https://news.ycombinator.com/item?id=43601681

        The replies to my comment seem to me to be addressing the question of what the appropriate reference class is, not the base rate fallacy.

        • BeetleB 2 days ago

          > The replies to my comment seem to me to be addressing the question of what the appropriate reference class is, not the base rate fallacy.

          The base rate fallacy is fundamentally about the relative rates in the population, and I don't see that data in the article.

          • btrettel 2 days ago

            Again, I think it comes down to what the "population" (reference class) is. Implicit in what I quoted was the population being companies. I personally would criticize it on that point, not that they are making the base rate fallacy because I think from their perspective they are not.

            Seems like semantics to me, I don't think we actually disagree on much.

      • polygamous_bat 2 days ago

        > To take a hypothetical extreme: If all cars but one on the road were Teslas, it would not be meaningful to point out that there have been far more fatalities with Teslas.

        However, in such a case, “base rate fallacy” would prevent you from blaming Tesla even if it had a 98% fatality rate. How do you square that? What happens if other companies aren’t putting self driving cars out yet because they aren’t happy with the current rate of accidents, but Tesla just doesn’t care?

        • BeetleB 2 days ago

          > What happens if other companies aren’t putting self driving cars out yet because they aren’t happy with the current rate of accidents, but Tesla just doesn’t care?

          You handle it the same way any new technology is introduced. Standards and regulations, and these evolve over time.

          When the first motor car company started selling cars, pedestrians died. The response wasn't to ban cars altogether.

          The appropriate response would be to set some rules, examine the incidents, see if any useful information can be gleaned.

          And of course, once more models are out there with self driving abilities, we compare between them as well.

          Here, we can get better data than what's in the article: What is the motorcycle death rate with cars with no automated driving? If, per mile, it's higher than with Teslas with automated driving, then Tesla is already ahead. The article is biased right from the get go: It compares only cars with "self-driving" (whatever that means) capabilities, and inappropriately frames the conversation.

          If I'm a motorcyclist, I want to know two things:

          1. If all cars were replaced with Teslas with self driving capabilities, am I safer than the status quo?

          2. If all self driving cars were replaced with other cars with self driving capabilities, am I safer than the status quo?

          The article fails to answer these basic questions.

    • outer_web 2 days ago

      To add, even if the fatality numbers are small, accelerating into a person is a pretty outrageous failure mode.

      • BeetleB 2 days ago

        My Subaru has done this many times, and with regular cars, not motorcycles.

        • bink 2 days ago

          Your car has accelerated into a person many times and yet you continue to use this feature/car?

          • BeetleB 2 days ago

            Find me a car with autonomous capabilities (e.g. adaptive cruise control) that never does it, and I'll consider it.

            As all car manufacturers point out: A prerequisite to enabling any safety mechanism is that the driver overrides when it fails. This includes blind spot detection, lane drift detection/correction, and adaptive cruise control. It is understood that when I enable it, I'm responsible for its behavior given that I can override it.

            But that's all an aside. The point isn't that I continue to drive it, but that this is not something special about Teslas.

            • outer_web 2 days ago

              We are probably in the weeds here, but there is a difference between notification safety mechanisms and ones that take action. And to get really pedantic, emergency braking doesn't permit user override (either functionally or simply because of how abrupt it is).

              And autonomy features are a different domain altogether.

    • 93po 2 days ago

      Another part of the problem: Waymo, for example, doesn't provide motorcycle specific stats. They only provide collisions with vehicles, bicycles, and pedestrians. There's no breakdown of vehicle type. So the basis of this article is already bullshit and likely done just for "space man bad" reasons

      • sidibe 2 days ago

        Well, we can easily come up with the number of Waymo-involved motorcycle fatalities from the total fatalities, which is 0.

  • jsight 2 days ago

    It is worse than that. The other ADAS providers do not all automatically report this stuff. He's comparing five reports gathered meticulously to a self-reporting system that drops the vast majority of incidents.

    It is a bad article.

    • porphyra a day ago

      People will gobble up all kinds of bad articles to reinforce their anti-Tesla bias. This reminds me of the "iSeeCars" study that somehow claimed that teslas have the highest fatality rate per mile travelled, even though:

      * They basically invented the number of miles travelled, which is off by a large factor compared to the official figure from Tesla

      * If you take into account the fact that the standard deviation is proportional to the square root of the number of fatal accidents, the comparison has absolutely no statistical significance whatsoever

  • buyucu 2 days ago

    Base rate fallacy is not relevant here.

    There are 5 dead people who would be alive today had tesla not pushed this to the market.

    • ezfe 2 days ago

      Well that makes no sense because then you could say there are X people who would be alive today if they had not ridden a motorcycle.

lynndotpy 2 days ago

I don't like Tesla, and the premature "FSD" announcement was a huge setback for AV research. An AV without lidar killing motorcyclists is not surprising, to say the least. And this is a damning report.

That said -- and I might have missed this if it was in the linked sources, I'm on mobile -- what is the breakdown of other (supposed) AVs adoption currently? What other types of crashes are there? Are these 5+ fatalities statistically significant?

  • cpncrunch 2 days ago

    “This issue is unique to Tesla. Other self-driving manufacturers have logged zero motorcycle fatalities in the same time frame”

    Doesn't give the number of driving hours for Tesla vs others, though.

    • robwwilliams 2 days ago

      Yep! And no stats at all. Pathetic article.

      • rad_gruchalski 2 days ago

        Do not believe any statistic you didn’t manipulate yourself.

        • robwwilliams 2 days ago

          As others have pointed out in this thread you need to correct for miles driven by Teslas in autopilot versus other vehicles driven in autopilot mode. Without this adjustment, the data are meaningless. And with counts of 5 versus 0 you are deep into Poisson noise level. So yes, I stand by “pathetic”.

          • rad_gruchalski 2 days ago

            There’s no such thing as „other vehicles driven on autopilot”. There’s cruise control, adaptive cruise control, and limited rollouts of level 3 autonomous driving locked to certain regions. You know why? Because everyone else except Musk knows the tech is NOT ready. It doesn’t matter how many miles are driven. What matters is that Teslas don’t have radars and every other brand does.

      • FireBeyond 2 days ago

        Wait til you hear that Tesla doesn’t count fatalities in its accident data.

        Or that any collision that doesn’t involve airbag deployment is not actually an accident, according to Tesla.

        You were saying something about stats?

    • honeybadger1 2 days ago

      yes, it's missing the millions of FSD miles vs the minutes of drives of all of the rest combined.

      • lynndotpy 2 days ago

        This is not the case. Waymo alone has claimed 50 million rider-only miles as of December 2024. That would mean Waymo travelled at least 833,000 miles per hour on its driverless miles! (Unless you mean "minutes" in the literal sense, which can be any amount of time, and would apply to every vehicle.)

        It's worth noting Waymo's rider-only miles is a stronger claim than "FSD" miles. "Full Self-Driving" is Tesla branding (and very misleading: it expects an attentive human behind the wheel, ready to take over in a split second).

        • honeybadger1 2 days ago

          Tesla is almost at 4 billion FSD miles; it's public, you can query the API yourself.

      • buyucu 2 days ago

        at least 5 people would be alive today if tesla had not pushed unreliable technology on the market.

        • tordrt a day ago

          Surely you would have to take into account all the near accidents and crashes that fsd has prevented as well.

  • christophilus 2 days ago

    I’m in the same camp. I think self driving shouldn’t be allowed as it currently stands. But, this is probably the XKCD heatmap phenomenon.

    How many other self-driving vehicles are on the road vs Tesla? What percentage of traffic consists of motorcycles in the places where those other brands have deployed vs. in Florida, etc.?

    https://www.xkcd.com/1138/

    • para_parolu 2 days ago

      Would also be interesting to see: what % of rear-end crashes are caused by AVs vs human drivers.

bryanlarsen 2 days ago

Prediction: over the next 4 years we're going to see lots of stories like this. Some of the stories will be fair, some won't. "Musk bad" stories get clicks. The next administration will be very anti-Tesla. Coincidentally, by that time self-driving will be considered mature enough to warrant proper regulation, rather than the experimental regulation status we have now.

Combine the two, and the regulations will be written such that it excludes Tesla and includes Waymo. Not by name, just that the safety regulations will require a safety record better than Tesla's but worse than Waymo's. Likely nobody but Waymo will have that record, and now nobody will be able to because they won't have access to the public roads to attain it.

This might be the ultimate regulatory lock-in monopoly we've ever seen.

  • pwg 2 days ago

    > Combine the two, and the regulations will be written such that it excludes Tesla and includes Waymo.

    The solution seems easier, if only the regulators would pick up upon it.

    Under the current human driven auto regime, it is the human that is operating the machine who is liable for any and all accidents.

    For a self-driving car, that human driver is now a "passenger". The "operator" of the machine is the software written (or licensed) by the car maker. So the regulation that assures self-driving is up-to-snuff is:

    When operating in "self driving" mode, 100% of all liability for any and all accidents rests on the auto manufacturer.

    The reason the makers don't seem to care much about the safety of their self driving systems is that they are not the owners of the risk and liability their systems present. Make them own 100% of the risk and liability for "self driving" and all of a sudden they will very much want the self-driving systems to be 100% safe.

    • bryanlarsen 2 days ago

      Being both the owner and operator, Waymo already has full liability. It's a good proposal, but I don't think it's sufficient if your secondary goal is "screw Elon".

      Nor is it sufficient to ensure that self driving is significantly safer than human drivers. I don't think the public wants "slightly safer than humans".

  • bliteben 2 days ago

    Seems like the obvious solution to this is: if you collect driving data on public highways, the data has to be made available to the public. If you collect the data on private highways, you are free to keep it private. If you don't intend to use it in a product on public highways, it can remain private.

    Doesn't even seem that crazy when you consider the government is already licensing them to be able to use their private data anyway. Biggest issue is someone didn't set it up this way from the start.

  • skybrian 2 days ago

    Maybe, but why couldn’t a competitor prove their system works using safety drivers?

    If a competitor resold their system to other car companies, another possible scenario might be a duopoly like Apple versus Android.

    • bryanlarsen 2 days ago

      By that time Waymo will likely have over a billion miles of data, and you're likely going to need similar amounts of mileage to prove that your safety margin is >> 10X better than human.

  • dangjc 2 days ago

    Then Tesla should pay for backup drivers until their safety record meets the bar.

  • tim333 a day ago

    >ultimate regulatory lock in monopoly...

    In fairness to the regulators they have been pretty reasonable so far.

  • doctorpangloss 2 days ago

    Hmm, but all the real, registered self-driving vehicles in California are safer than human drivers - and since that data is better documented, we know it more reliably than we do when comparing human drivers to each other.

    The regulations are doing really well, it’s a big victory for regulators, why not make Teslas abide by the same rules? Why not roll out such strict scrutiny gradually to all vehicles and drivers?

    You are talking about regulatory degrees that are about safety. It seems the thing that lawmakers change is reactive to other things. Like how much does the community depend on cars to survive? If you cannot eliminate car dependence you can’t really achieve a more moral legal stance than “People can and will buy cars that kill other people so long as it doesn’t kill the driver.”

  • smrtinsert 2 days ago

    What's the hard part? Maybe Tesla should stop pretending it doesn't need lidar?

  • dopidopHN 2 days ago

    It implies a next administration, and the enactment of new regulations.

  • crazygringo 2 days ago

    > Not by name, just that the safety regulations will require a safety record better than Tesla's but worse than Waymo's.

    That's not a bad thing if Tesla is significantly worse than Waymo. That's desirable.

    The solution here seems like it would be for Tesla to become as safe as Waymo. If they can't achieve that, that's on them. Unfair press doesn't cause that.

    I mean, I care about not dying in a car accident. If Tesla is less safe, and this leads to people taking safer Waymos instead, I can't see that as anything but a good thing. I don't want to sacrifice my life so another company can put out more dangerous vehicles.

  • hiddencost 2 days ago

    [flagged]

    • bryanlarsen 2 days ago

      I still think we're going to get a next administration. But I think that getting there will make Jan 6 look extremely minor.

senkora 2 days ago

I see a lot of people saying that this isn't statistically significant. I think that that is probably true, but I also think that it is important to do the statistical test to make sure:

    # 5 Tesla motorcycle fatalities vs 0 for all other makers, across assumed mileage ratios
    tesla.mult = c(1/(5:2), 1:5)  # tesla.mult = Tesla ADAS miles / everyone else's ADAS miles
    data.frame(tesla.mult = tesla.mult,
               p.value = sapply(tesla.mult, function(m) poisson.test(c(5, 0), c(m, 1))$p.value))

      tesla.mult      p.value
    1  0.2000000 0.0001286008
    2  0.2500000 0.0003200000
    3  0.3333333 0.0009765625
    4  0.5000000 0.0041152263
    5  1.0000000 0.0625000000
    6  2.0000000 0.1769547325
    7  3.0000000 0.3408203125
    8  4.0000000 0.5904000000
    9  5.0000000 1.0000000000
tesla.mult is how many times more total miles Teslas have driven with level-2 ADAS engaged compared to all other makers. We don't have data for what that number should be because automakers are not required to report it. I think that it is probably somewhere between 1/5 and 5. If you believe that the number is more than 1, then the result is not statistically significant.

  • jamincan a day ago

    Even though other manufacturers may not be reporting these numbers, Level 2 ADAS systems are pretty common as far as I can tell. Wouldn't any vehicle with adaptive cruise control and lane-keep assist be considered Level 2 ADAS?

    • senkora a day ago

      I’m not quite sure where the line is between Level 1 and Level 2 ADAS. Wikipedia says this:

      > ADAS that are considered level 1 are: adaptive cruise control, emergency brake assist, automatic emergency brake assist, lane-keeping, and lane centering. ADAS that are considered level 2 are: highway assist, autonomous obstacle avoidance, and autonomous parking.

      https://en.m.wikipedia.org/wiki/Advanced_driver-assistance_s...

      I think that Level 2 requires something more than adaptive cruise control and lane-keep assist, but that several automakers have a system available that qualifies.

      My intuition is that there are more non-Tesla cars sold with Level 2 ADAS, but Tesla drivers probably use the ADAS more often.

      So I don’t have high confidence what tesla.mult should be. I wish that we had that data.

kentonv 2 days ago

Hmm. The article's source is NHTSA data that goes up through February 2025 -- pretty recent.

The article cites 5 motorcycle fatalities in this data.

Four of the five were in 2022, when Tesla FSD was still in closed beta.

The remaining incident was in April 2024.

(The article also cites one additional incident in 2023 where the injury severity was "unknown", but the author speculates it may have been fatal.)

I dunno, to me this specific data suggests a technology that has improved a lot. There are far more drivers on the road using FSD today than there were in 2022, and yet fewer incidents?

erikpukinskis 2 days ago

I’m normally quite skeptical of these kinds of “Tesla More Dangerous Than Other Brands” headlines since they tend to be B.S.

But this seems like a pretty legitimate accusation, and certainly a well researched write-up at the very least.

  • shadowgovt 2 days ago

    The only criticism I could level is that the difference between five and zero incidents is very hard to extrapolate information from.

    The author kind of plays this up a bit by insinuating that there are incidents we don't know of, and they probably aren't wrong that if there are five fatalities there are going to be many more near misses and non-fatal fender bender collisions.

    But for the number of millions of miles on the road covered by all vehicles, extrapolating from five incidents is doing a lot of statistical heavy lifting.

  • Workaccount2 2 days ago

    What gets me is that there are no other brands in Tesla's league. Tesla is the only consumer car that has "FSD" level ability.

    The competitors have to use pre-mapped roads and availability is spotty at best. There is also risk, as Chevy already deprecated their first-gen "FSD", leaving early adopters with gimped ability and shut out from future expansions.

    • dboreham 2 days ago

      That's because the competitors are aiming to provide a feature that works (all the time, consistently).

    • jajko 2 days ago

      You mean beta testing on millions of users? Yes, traditional manufacturers are very wary of class action suits and generally have some reputation to uphold. Tesla, not so much... move fast, break things, kill a few people, who cares, current profit is all that matters.

    • lern_too_spel 2 days ago

      Nobody has level 5. Waymo did 5 million miles in 2024 with nobody behind the wheel. Tesla did way more but required a human driver due to frequent disengagement. These are not the same. https://electrek.co/2024/09/26/tesla-full-self-driving-third...

      Level 4 is a commercially viable product. Mapping allows verification by simulation before deployment. Tesla offers level 3, which is not monetizable beyond being a gimmick.

      • sidibe 2 days ago

        I don't really care about the levels, but I think Tesla has been building a level 2 product that will always supposedly be level 4 next year; they have never shown any intention of doing level 3.

        What is clear is Tesla is not currently capable of self driving and he has lied year after year after year about it.

        I think carmakers should have to be liable for their cars capabilities in the areas they allow them to be used.

    • FireBeyond 2 days ago

      > There is also risk as Chevy already deprecated their first gen "FSD", leaving early adopters with gimped ability and shutout from future expansions.

      Tesla has already said that some of its vehicles, sold with “all the hardware necessary for FSD” will never get it.

      • josephcsible 2 days ago

        > Tesla has already said that some of its vehicles, sold with “all the hardware necessary for FSD” will never get it.

        No they didn't. They said it turned out the vehicles didn't have all the hardware necessary, but that a free retrofit to add it will be forthcoming.

    • sroussey 2 days ago

      FSD meaning level 2? Lots of those.

      • Workaccount2 a day ago

        Please, I'd love to know because I want an FSD like system and don't want to buy a Tesla.

        As far as I am aware, everyone else's offerings only work in pre-mapped areas, i.e. Chevy's system only covers half my commute.

    • marxisttemp 2 days ago

      Aren’t you ignoring Waymo?

      • tim333 a day ago

        It's not 'consumer'.

  • fallingknife 2 days ago

    There is no statistical evidence cited to show that there really is a difference. And there is no data at all showing the rate of these crashes vs non self driving cars.

  • dagw 2 days ago

    Without knowing how many FSD miles Teslas have done compared to other brands, it's hard to judge. It could just be that Tesla owners are far more likely to trust and use their cars' FSD capabilities, and thus end up in more FSD accidents. Other brands might have such bad FSD that people simply don't trust it and basically never use it, and thus never get into an FSD accident.

    • audunw 2 days ago

      I don’t see any way you can spin this in a positive light. Yeah, there may be many more FSD miles on Tesla, but if that leads to a bunch of motorcyclists getting hit, then maybe that’s exactly the problem.

      We know this is one of the core issues of Tesla FSD: its capabilities have been hyped and over promised time and time again. We have countless examples of drivers trusting it far more than they should. And who’s to blame for that? In large part the driver, sure. But Elon has to take a lot of that blame as well. Many of those drivers would not have trusted it as much if it wasn’t for his statements and the media image he has crafted for Tesla.

      • Hobadee 2 days ago

        The problem is there isn't enough data here. Killing 5 motorcyclists isn't great, but if human drivers killed, say, 100 in the same sample, 5 is actually pretty good. Of course, if human drivers kill 1 or 2 in the same sample, 5 is really bad.

        • bink 2 days ago

          The difference to me, as a motorcyclist, is that preventing humans from crashing cars into motorcycles is complicated. There are a multitude of reasons someone may not be paying attention or may not see another vehicle. This problem with Tesla has an easily implemented solution that could be deployed now but isn't. We don't have such a solution for non-Tesla crashes into motorcycles. How many motorcyclist deaths are we willing to accept to save money? How much money is too much to prevent 5 deaths?

          • Ajedi32 a day ago

            Installing LIDAR isn't an "easily implemented solution", assuming that's what you mean. Otherwise why not mandate that all cars have LIDAR, self-driving or not?

            • bink 3 hours ago

              It doesn't have to be LIDAR. Other modern cars that have adaptive cruise control avoid hitting motorcycles using simple radar, and have been doing so for many years.

      • BeetleB 2 days ago

        > Yeah, there may be many more FSD miles on Tesla, but if that leads to a bunch of motorcyclists getting hit, then maybe that’s exactly the problem.

        I'd wager far more motorcyclists get hit by humans driving non-Teslas in non-autonomous modes. I could rephrase your comment to:

        "Yeah, but if humans drive cars without safety features, and that leads to a bunch of motorcyclists getting hit, then maybe that’s exactly the problem."

        ... to make the (faulty[1]) argument that driving with FSD turned on is better for motorcyclists.

        [1] Faulty not because it's false, but because it is a logical fallacy.

      • vachina 2 days ago

        It’s not hyped. You can literally watch how FSD performs, readily available live on YouTube, and despite these shortcomings, it’s light-years ahead of any other self-driving system. And it’s just going to get better as they account for all these corner cases.

        Tesla is a victim of their own success, they’ve set the bar so high people now expect it to have 0.0000% failure rate.

    • Hikikomori 2 days ago

      How good can it be if it rear ends motorcycles?

      • ModernMech 2 days ago

        This.

        People arguing over base rates of motorcycle accidents as if Tesla didn't get fooled by a loony tunes wall. If Waymo had killed 5 motorcyclists in SF we would know. But they operate there without an incident for years.

        Meanwhile, just after Tesla released Autopilot to the world, a man was decapitated because the system is deficient. Then it happened again in 2016 under eerily similar circumstances. Then we observe Teslas hitting broad objects like fire trucks and buses.

        The correct response to that is not to say "Well what's the base rate for decapitations and hitting broad objects?"

        No, you find out the reason for this thing happening. And the reason is known: a deficient sensor suite that is prone to missing objects clearly in its field of view.

        So with the motorcycle situation, we already know what the problem is. This isn't a matter of a Tesla just interfacing with the statistical reality of getting rear-ended by a car. Because we know Teslas have a deficient sensor suite.

        • josephcsible 2 days ago

          > as if Tesla didn't get fooled by a loony tunes wall

          Important distinction: FSD didn't get fooled by a looney tunes wall. Legacy Autopilot did.

          • ModernMech a day ago

            That's not an important distinction, because the reason it was fooled is the sensor suite is deficient, which is the same whether it's FSD or Autopilot. Other less sophisticated systems are not fooled because they have sufficient sensors. FSD being a more sophisticated system doesn't negate the fact their sensor suite is fundamentally not up to the task.

            • josephcsible a day ago

              > the reason it was fooled is the sensor suite is deficient, which is the same whether it's FSD or Autopilot.

              But other people have tried to reproduce the experiment with FSD, and it wasn't fooled.

              • ModernMech a day ago

                Can you explain why FSD fails here?

                TEST 1 (FSD): https://youtu.be/9KyIWpAevNs?feature=shared&t=112

                  "Show FSD is activated on video, we did that... Here we go feet aren't touching, hands aren't touching.... it's going to hit the wall!" *slams on breaks, ends up about 2 meters from the wall* "Cannot see the wall" *inches forward until the car can see the wall* "Only sees the wall when I'm barely touching it"
                
                TEST 2 (FSD): https://youtu.be/9KyIWpAevNs?feature=shared&t=169

                  "Self drive, not touching anything." *manually slams on breaks, stops a few meters short* "That was gonna hit the wall" *inches forward* "Car... does... not... see... now it does." *only inches from the wall* "That would have been too late" (ya think??)
                
                TEST 3 (Autopilot): https://youtu.be/9KyIWpAevNs?feature=shared&t=267

                  "Does not see the wall... does not see the wall.... does not see the wall...." *manually slams on breaks to avoid hitting the wall*
                
                The FSD tests are not any better than the AP results from the original looney tunes test. So that's why I don't agree FSD vs. AP is an important distinction.

                Maybe sometimes the FSD is not fooled by the cartoon wall trick. But the results show that even in ideal conditions - full light, no weather, clean course - the thing can still fail.

                Research prototypes from two decades ago would not be fooled by this. The sophomores in my intro to robotics class could build a robot that would not be fooled by this. And yet Elon Musk and the geniuses at Tesla can't build a car that isn't fooled.

                Sidenote: It's amazing to me that we can't look up data and see the extensive government reports on the safety and capabilities of these systems. We are literally just deploying them on public streets and relying on random youtube celebrities to conduct these evaluations, because Musk has fully captured the people who would do this kind of regulation.

                Watching how fast these things accelerate and how close they have to be to a literal wall to see it is terrifying.

      • Ajedi32 a day ago

        It could be 100x better than human drivers and still rear end motorcycles occasionally. Without better statistical information this article tells you almost nothing.

    • SideburnsOfDoom 2 days ago

      > Other brands might have so bad FSD that people simply not trust it and basically never use it

      The issue is not quite how good the automation is in absolute terms, it's how good it is vs. how it is sold. Tesla is an outlier here, right down to the use of the term "FSD" i.e. "Full Self-Driving", when it's nothing of the sort.

      • dagw 2 days ago

          That's kind of my point. We don't know from the data if the problem is that Tesla has worse FSD than the non-accident brands, or if all brands are equally good/bad and Tesla owners are more likely to use the feature. Tesla could very well have the best FSD software on the market and still lead the accident stats simply because of how their users are (ab)using the feature (which I suspect is closest to the truth).

        • SideburnsOfDoom 2 days ago

          > Tesla could very well have the best FSD software on the market

            Tesla could very well have the best FSD marketing on the market. And that's dangerous.

          Data point: the "First To Gain U.S. Approval For Level 3 Automated Driving System" is ... Mercedes-Benz

          https://www.forbes.com/sites/kyleedward/2023/09/28/mercedes-...

          The second will be, IDK, maybe BMW? https://www.bmwblog.com/2024/06/25/bmw-approval-for-combinin...

          • vachina 2 days ago

            > First To Gain U.S. Approval For Level 3 Automated Driving System

            It’s actually more a testament to Mercedes Benz’s ability to navigate regulatory slog.

            VW would’ve passed emissions if the technicians didn’t take it out of the lab to test for on-road conditions.

            • FireBeyond 2 days ago

              And Mercedes owning liability if you have an incident under their system? That is actually putting your money where your mouth is.

              Not just “the driver is only in the seat for legal reasons” bullshit (“but watch us throw a press conference to parade your telemetry data for the world if we think it will protect our reputation at your expense”).

            • sroussey 2 days ago

              It's Mercedes believing in their tech enough that they take responsibility if it fails.

              • vachina 2 days ago

                You didn’t get my point.

            • SideburnsOfDoom 2 days ago

              I see that you have also said "You can literally watch how (Tesla) FSD perform all readily live on YouTube"

              I for one find this line of argument - that videos on YouTube are a stronger signal than regulatory approval - to be absolutely wrongheaded, delusional and laughable. YouTube proves nothing, and as a platform, was never intended to. Unlike regulatory approval.

              • vachina 2 days ago

                Do you need approval to prove an LLM is useful in improving your productivity? No? You try it yourself and draw that conclusion.

                Comparing FSD to whatever Mercedes is doing is like comparing ChatGPT to Markov chains.

                • SideburnsOfDoom 2 days ago

                  This reply makes even less sense than assuming that YouTube is an alternative to regulatory testing and approval.

              • sroussey 2 days ago

                "But I read it on the internet!"

        • watwut 2 days ago

          I would argue that if Tesla lies to buyers, and those buyers act on the lie because they trust Tesla, then absolutely Tesla is responsible.

pelagic_sky 2 days ago

When I used to ride, getting rear-ended at a stop was one of my biggest fears, other than people blowing stop signs. I always left extra space in front of me to give me room to maneuver as I watched to see whether the car behind me would make the stop.

  • bitmasher9 2 days ago

    I just want to build on this one.

    If you get rear-ended at a stoplight or stop sign, it's very likely the motorcyclist is not at fault. The motorcyclist suffers significantly more bodily injury than a car driver would in a similar collision. As a motorcyclist, you can tell that sometimes people just don't see you because their brain is looking for something car-shaped.

    When I ride, every time I stop at a stoplight or stop sign I am watching my rear-view mirror to judge if the person behind me is going to stop, and I have an exit strategy if they don't. I've had some close calls.

    • m3047 2 days ago

      I put auto horns on my motorcycle(s). In the Seattle urban core (when I lived there, 15 years ago), if I honked then people looked for the car -- they weren't looking at me. Other than that, seems to have the intended effect.

  • petra303 2 days ago

    Lane filtering solves this fear. Should be legal nationwide.

    • bink 2 days ago

      As a motorcyclist, lane filtering also comes with some dangers. It's up to each motorcyclist to decide if it's worth it. I lane split in heavy traffic in CA but I'm also very wary about being the first vehicle into an intersection after a light changes. I see far more people running red lights around me than I do people rear ending stopped traffic at a light. I'm glad we have the option.

    • JKCalhoun 2 days ago

      Lane filtering is apparently "lane splitting" at lower speeds.

      • dharmab 2 days ago

        They're different. Filtering is between stopped cars, splitting is in moving traffic.

        • snozolli 2 days ago

          This is completely arbitrary Internet lore. Unless you have local laws that define the terms, they don't actually mean anything. Also, if you're "filtering", then you will, inevitably, find yourself "lane splitting" when the light turns green before you reach the front.

          • dharmab 2 days ago

            I do in fact have a local law defining the terms: https://ridetolive.utah.gov/lane-filtering/

            And I rarely find myself splitting when the light changes; I simply dip back into the lane when the light changes before traffic starts moving.

      • outer_web 2 days ago

        Been riding for two decades and my understanding has been that 'splitting' is the yank term and 'filtering' is the britbong one.

    • whartung 2 days ago

      Nobody is going to cite you for filtering if you move into the gap to avoid a collision.

      • snozolli 2 days ago

        Filtering as a matter of habit eliminates the risk. That's OP's point, not that motorcyclists should watch their mirrors like a hawk and jump between lanes when they decide that the driver behind them might not stop.

  • nicbou 2 days ago

    Mine was and still is people cutting into my lane when cornering. You turn a corner and bam, incoming car.

    Still, I always leave the bike in gear until the car behind me has stopped, and if I can, I stop slightly diagonally with enough space to move left or right to avoid getting sandwiched.

  • xandrius 2 days ago

    My latest fear is not the person directly behind me rear-ending me (as you can keep an eye on them) but whoever is behind them hitting them, and then me getting hit as a result.

    A second-hand collision, still pretty dangerous.

  • jeffbee 2 days ago

    Statistically, as a motorcyclist you should be afraid of fatally rear-ending someone else, since that is the top cause of motorcyclist deaths in America. Of course that is within your control and if you feel like you have reduced that risk through personal practice then yeah getting rear-ended or left-hooked are the biggest dangers.

  • kjkjadksj 2 days ago

    If I am on a bicycle generally I pull out into the crosswalk. Way more visible and pedestrians don’t cuss at you like they would a blocking car because they get it.

bchris4 2 days ago

Whether you like Tesla or not: this blog post is a perfect example of how clickbait headlines twist things around. Nobody reads anymore - if you made it down to the nitty gritty of each actual example, it’s painfully obvious that many have almost nothing to do with the self driving software at all, other than how humans can interact with it to screw it all up. There’s:

- A drunk driver doing 100 in a 45 (by pressing down on the pedal) through a yellow light

- A driver who “didn’t see the motorcyclist” because he was looking at his PHONE, but who had the go pedal pressed down at 95-100% for as many as 10 seconds after hitting him, to the point where witnesses say the front wheels were spinning while up in the air

- Others with no detail - not the author's fault, but from the ones we have, clearly there are often circumstances which would require more analysis before coming to this conclusion

dharmab 2 days ago

Related: Well-sourced video on the topic https://youtube.com/watch?v=yRdzIs4FJJg

  • DwnVoteHoneyPot 2 days ago

    FortNine videos are my favorite videos on YouTube. You're short-changing them by just leaving the link without a description or title.

    The video title is "Tesla Autopilot Crashes into Motorcycle Riders - Why?"

    The amazing part is that the one guy who created this video covered all the insightful comments here on HN in one concise video 2 years ago.

  • redox99 2 days ago

    Considering how much FSD has improved in the last few months, a 2-year-old video is not relevant.

    • only-one1701 2 days ago

      Not relevant to what? Those people are still dead.

      • janice1999 2 days ago

        Other people's lives are a sacrifice some drivers alpha-testing their car's firmware are willing to make.

      • redox99 2 days ago

        The shortcomings of 2-year-old software are not relevant, because that's not what the cars run now.

        • jcranmer 2 days ago

          What evidence do you have that Tesla has meaningfully improved the software to fix the referenced issues since then?

          • redox99 2 days ago

            Evidence is a strong word. But based on personal driving, other drivers comments, and seeing tests by channels like Dirty Tesla and Chuck Cook, you can tell the difference between current FSD v13, and what we had 2 years ago (v10 or v11) is absolutely gigantic.

        • tyree731 2 days ago

          By this argument, since Tesla regularly updates the software, no one can discuss the weaknesses in their self driving software.

          • redox99 2 days ago

            I never said that. Just discuss the current version or at least something recent (any v13 version).

            • ModernMech 2 days ago

              Yeah, but you see this goalpost moving all the time when talking about Tesla. Every time someone says something is wrong, people will say "But what about the latest version" or "this will be fixed in the next version". Meanwhile, you can still find video on YouTube of cars on the latest version veering into oncoming traffic.

    • outside1234 2 days ago

      Just amazing to me how blind people are to how much of a fraud Musk is

      • bdangubic 2 days ago

        cause it is a cult following, not blindness…

Veserv 2 days ago

NHTSA Report ID 13781-3470 in Florida, April 2022 is most likely the fatal motorcycle crash Tesla influencer Chuck Cook was involved in as mentioned in his video here: https://www.youtube.com/watch?v=H8B0pX8TSOk

In the video Chuck Cook states he was involved in a fatal motorcycle accident in Florida in April 2022 where the motorcycle wheel fell off and then careened into his Tesla while Navigate on Autopilot was engaged. The reporting source for that NHTSA incident is "Field Report" and "Media" which lines up with his statement that a Florida Highway Patrol officer reviewed the footage and that Tesla most likely first learned of the crash from Chuck Cook's page classified as "media".

If the incident was not the crash involving Chuck Cook, then Chuck Cook's crash would be a crash that Tesla has left illegally unreported as I can see no other identifiable crash that could correspond to Chuck Cook's crash.

Mistletoe 2 days ago

Vision-only detection doesn't work, and the man who forced Tesla to use it is now in control of government “efficiency”. This decade's very dark humor just keeps getting worse.

  • sergiotapia 2 days ago

    Been driving my new Model Y with FSD since Monday and it's really cool; works great. Just drove from home to a restaurant without touching the steering wheel. It does work, I'm livin' it.

    • sergiotapia 2 days ago

      and it drove me back just now, flawlessly. i must be living in an alternate reality to these "journalists". they are so transparent about why they write this stuff hahaha

  • ReptileMan 2 days ago

    We have more than 100 years of proof that visual detection works quite well.

    The problem is that we don't have a certification process before a tech is deployed on the streets - your AV must pass these and those tests before you are allowed to use it. We don't care how you got there: lidar, gps/astrology, camera/ai are all ok as long as you pass.

    • marcusb 2 days ago

      I think we have a lot of proof that human visual detection works well enough to “only” kill around 40k people/year in the US from traffic accidents.

      I have had a Tesla for several years now. The visual object detection display very frequently misidentifies objects. Just the other day, it detected my garage (which the car was parked next to) as a semi truck.

      I have never, with my wildly imperfect human vision, mistaken a building for a semi truck.

      More to the point of the OP, sometimes objects show up and disappear repeatedly.

      As I’ve said in other comments, I think EVs and autonomous driving will eventually improve the lot of humanity greatly, but there’s a lot to be desired in the current tech and not any point in trying to pretend it is better than it really is.

      • vachina 2 days ago

        You may have briefly mistaken a traffic light for a pedestrian light, but you quickly recovered and stopped for the light.

        Transient errors happen all the time, but somehow you trust yourself more to keep driving.

        • marcusb 2 days ago

          Sure. I probably have done something like that.

          What I’ve not done is seen a motorcyclist in front of me, then decided it wasn’t really there, and then hit said motorcyclist.

    • whiplash451 2 days ago

      You’re confusing vision (the sensor) with perception (the compute that goes with it).

      We have zero evidence that vision-based computers are safe enough to drive.

    • dragonwriter 2 days ago

      > We have more than 100 years of proof that visual detection works quite well.

      No, we don't.

      We have evidence that visual detection (well, the whole human set of senses, but for driving most of the reliance is on visual detection), combined with human level intelligence, is sufficient for the human-level driving which is the absolute minimum bar for fully automated driving.

      Which is very different than “visual detection works quite well”.

      • ReptileMan 2 days ago

        Why are you disagreeing with me while proving my point?

        The signal we get is enough. It is the processing that is still lacking. And that is why there should be benchmarks to be passed. Cameras and microphones do provide enough information for successful driving.

        • dragonwriter 2 days ago

          > Why are you disagreeing with me while proving my point?

          I'm not sure what your point is, but what you actually said overstates the case for visual detection.

          > The signal we get is enough.

          Sure, we know that with processing far beyond what we currently can replicate, it can at least just meet the minimum bar.

          > And that is why there should be benchmarks to be passed

          Is it? I would think there should be proof and validation no matter which pieces were demonstrated adequate and which have not yet been.

          But the thing is that we don't even know enough to make a set of benchmarks that can be tested that provide confidence, and we maybe won't until we've had the several systems that demonstrate that they work well enough in real conditions from which to generalize a minimum set of standards that can be verified prior to getting into real conditions.

          So we do supervised testing in progressively-less-limited real world conditions of each system until we get to that point.

    • outer_web 2 days ago

      If half of the population had a lidar eye would your opinion change?

    • drowsspa 2 days ago

      Does Tesla accept liability for any accidents while FSD is enabled?

    • Finnucane 2 days ago

      each year there are millions of car accidents and tens of thousands of fatalities. Are we really sure visual detection works that great?

      • Rohansi 2 days ago

        Vision isn't the problem for the vast majority of those accidents. There are a lot of really bad drivers out there.

      • fallingknife 2 days ago

        We're talking engineering here. Great isn't the standard. The standard is good enough and within budget.

    • hkpack 2 days ago

      > We have more than 100 years of proof

      What are you talking about specifically?

      • idlephysicist 2 days ago

          I think they mean that people have been driving automobiles for over 100 years and that this proves (somehow) that visual detection works for identifying hazards on the road.

        But they seem to be conflating human vision with computer vision.

      • artursapek 2 days ago

        Humans driving cars, obviously.

firefoxd 2 days ago

It's pretty impressive that we have self-driving cars that take us this far with just cameras. But it probably won't get any better than this without a revamp.

That said, what if we made these self-driving tools fully available to drivers? Drivers are pretty good at driving already with relatively short training. Give them the superpowers of Teslas, like 360 view, predictive object paths, and object detection, and you have super drivers.

  • kjkjadksj 2 days ago

    I don’t know why everyone says “just cameras.” Are people imagining they are using some algorithm to detect some object and come up with a distance measure? I’m imagining they are just having the cameras autofocus and determining the focal distance which seems about the same as banking on a good lidar reflection to me.

    • ModernMech 2 days ago

      Consider how long it takes your camera to autofocus, and how sometimes it gets it wrong. Then consider how fast your car moves. In the time it takes your camera to autofocus, how far do you think a car can travel, and what happens to all the blurry things in its path that it can't see?
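
      For a rough sense of scale (the 0.3 s focus delay and 70 mph below are illustrative assumptions, not measured figures for any particular camera or car):

          # Distance a car covers while a camera is still hunting for focus
          speed_mph <- 70
          speed_ms  <- speed_mph * 1609.34 / 3600  # ~31.3 m/s
          delay_s   <- 0.3                         # assumed autofocus lag
          speed_ms * delay_s                       # ~9.4 m travelled before the image is sharp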

      • kjkjadksj 2 days ago

        A camera that is slow to autofocus is the result of market compromises to reach a price segment in a platform that is within the confines of a camera body dimension. That is not reflective of the state of the art however.

        • ModernMech a day ago

          Still, the whole rationale behind the camera-only approach is that cameras are cheaply sourced, and therefore more suitable for mass-production vehicles, unlike LiDAR units, which are more expensive and not as widespread.

          If you're now shifting to the idea we need to use state-of-the art sophisticated camera technologies instead of the commodity stuff, now you're back to paying LiDAR prices, so why not just use that?

          And anyway, Teslas sold today and for a long time now are supposed to have been sold with cameras sufficient to solve full self driving (not beta). If state of the art cameras are needed, I've got bad news for all those Tesla customers.

redox99 2 days ago

This article does not adjust for the fact that Teslas drive many more miles on Autopilot + FSD than any other brand. So it's a pretty worthless comparison.

  • sitkack 2 days ago

    Please stop spamming Tesla support over this thread.

  • lightedman 2 days ago

    "It’s not just that self-driving cars in general are dangerous for motorcycles, either: this problem is unique to Tesla. Not a single other automobile manufacturer or ADAS self-driving technology provider reported a single motorcycle fatality in the same time frame."

    Doesn't matter when mileage isn't what's being compared - it's whether or not others have caused the same problem - PERIOD.

    • sodality2 2 days ago

      Statistically it does matter. If FSD has a 1-in-100M miles driven chance of killing a motorcyclist, and FSD drives 10M miles a day, within 10 days it's likely to see a trend starting. Whereas XYZ other brand may have a 1-in-30M miles driven chance of killing a motorcyclist, but only drives 1M miles a day, so without adjusting, FSD looks more dangerous, but in reality, XYZ is.
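
      A quick Poisson sketch with those made-up numbers (illustrative only, not real exposure data):

          # Expected fatalities over 10 days, and the chance the "worse" brand shows zero
          fsd_rate    <- 1 / 100e6   # per-mile risk assumed for FSD
          fsd_miles   <- 10e6 * 10   # 10M miles/day for 10 days
          other_rate  <- 1 / 30e6    # per-mile risk assumed for brand XYZ
          other_miles <- 1e6 * 10    # 1M miles/day for 10 days
          c(expected_fsd   = fsd_rate * fsd_miles,                # ~1.0
            expected_other = other_rate * other_miles,            # ~0.33
            p_other_zero   = dpois(0, other_rate * other_miles))  # ~0.72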

    • justinrubek 2 days ago

      The distance driven absolutely has something to do with it. If I made my own self driving car that only drove 100 meters, then there would not be enough data to know anything.

      There's definitely an issue with Tesla's approach, though.

    • redox99 2 days ago

      That's just probability. In a binomial distribution with 0 < p < 1 and n trials, the probability of 0 successes is nonzero: P(X = 0) = (1 − p)^n > 0.

    • SideburnsOfDoom 2 days ago

      > Doesn't matter when mileage isn't what's being compared

      Maybe the mileage on other brands is lower, because Tesla is unique in hyping as "full self-driving" a product that's not ready? It would still be a problem unique to Tesla then.

    • siilats 2 days ago

      So if the other cars have 5x fewer miles on autopilot compared to Tesla, you expect to see 0 crashes even though the per-mile probability of a crash is the same?

      • whiplash451 2 days ago

        Why are we moving the goal posts to « other manufacturers drive less miles so statistically it’s still OK to kill people »?

        • bitmasher9 2 days ago

          I would like the goal post somewhere around “Is safer in every situation than a skilled and concentrated human driver”

          • whiplash451 a day ago

            Right, and so by that standard, self-driving companies should have literally zero deaths attributed to them per year, given how few miles they drive compared to human drivers.

            That is: until self-driving cars drive much more, obviously.

        • fallingknife 2 days ago

          Because currently 40,000 people die every year in the US from car crashes, so it has been decided that it's ok to kill a certain number of people with cars, and has been for a long time.

outside1234 2 days ago

Just think of the liability you would have if your Tesla hit someone.

It is crazy to me that anyone trusts FSD given the downside.

  • benmo_atx 2 days ago

    My worry cuts the other way… FSD is more globally and consistently attentive than I am.

rickyc091 2 days ago

This is exactly why I don't trust FSD in stop-and-go traffic. The follow distance is too close on some versions, where it will accelerate hard and start braking around 10 feet out. One misstep and that's a rear end. The latest version is better, and I feel more confident with its easing, but there are constant regressions where it goes back to aggressive braking and I'm always afraid it'll rear-end the other driver.

xyst 2 days ago

It’s already anxious enough sharing the road with American drivers as a bicyclist. Now I have to worry about awful TSLA tech just not working or detecting me.

Maybe wouldn’t be an issue if bike infrastructure was separate from car traffic. Or USA had even a fraction of the foresight as the NL when it came to transportation infrastructure.

  • AnotherGoodName 2 days ago

    On the bright side, Cruise, Waymo, and Zoox vehicles are the greatest thing ever as a cyclist.

    They do not accelerate to turn in front of me but instead slow down and pull in behind, waiting patiently for their turn-off. I have a few areas where I need to take up a whole lane for a short time (no bike lane) and they don't get angry and honk. In general I feel safe if they are around vs an unpredictable human driver.

    I think self-driving cars are going to save a lot of lives. Not Tesla's tech of course, they're too far behind, but the actual on the road right now self-driving cars are fantastic.

scarface_74 2 days ago

Everyone comparing autonomous vehicles to non autonomous vehicles and number of accidents misses one important impediment to people’s and the government’s acceptance of accidents for the former.

Currently over 40K people are killed in the US every year from automobile fatalities and people think that’s just the price we pay for driving. I am not making a value judgement about that statistic.

But if even 1 person dies because of an autonomous vehicle, there are going to be investigations, and experience shows that the entire platform is going to be paused.

Besides who is liable for an autonomous vehicle fatality? Is Tesla going to assume liability as the “driver”? Are insurance companies?

On the other hand, Tesla is not exactly known for reliability.

https://insideevs.com/news/731559/tesla-least-reliable-used-...

And its owner is not exactly known for honesty.

  • luckylion 2 days ago

    But isn't that also because it's new? Plenty of people die in work accidents because equipment malfunctions, but unless it's a specific type of equipment blowing up at alarming rates, nobody asks to pause e.g. all power-drills.

    I believe that has been pretty much figured out, and insurance companies offer policies that cover it.

    Sure, they are usually not "intelligent", but are auto-pilots in cars?

    • scarface_74 2 days ago

      The people who die in workplace accidents are mostly only harming themselves. It's completely different if an autonomous vehicle injures someone else.

      Also dangerous workplaces are carefully controlled access environments where only the people that work there are in danger.

      Imagine if airplane fatalities happened even 1% as often as car fatalities. Right now the ratio is around 770:1 per passenger mile.

      • luckylion 2 days ago

        I assume that airplanes would have to offer much more value for that to be accepted, but they are mostly a luxury while cars are a necessity - life as we know it in much of the Western world would come to a screeching halt without cars, but it continues mostly unchanged without planes.

        I don't think I agree that workplace accidents are that different. It's not primarily the workers who die who benefit from those machines that kill them - I'd say the workers _are_ someone else.

        It's just that we've gotten used to that, and we feel like the number is steady and low enough that it's easily outweighed by the gigantic benefits we get from industrialization. My assumption is that it's the same way for cars and car-related deaths, but we haven't extended it to self-driving cars yet.

        Individually, everybody sees the benefit, but once integration of FSD hits the commercial side and consumer prices are being reduced as a direct consequence, deaths become work-place accidents. Like a truck with a human driver killing an "innocent" pedestrian - it's accepted because we really want the trucks.

        • scarface_74 2 days ago

          > I don't think I agree that workplace accidents are that different. It's not primarily the workers who die who benefit from those machines that kill them - I'd say the workers _are_ someone else.

          I’m not saying only the workers benefit. I’m saying there is no outrage if some coal miners die in West Virginia and definitely there is no outrage about the working conditions of immigrant farmers, factory workers at Amazon warehouses, and definitely not the supply chains overseas.

          But if little Becky got killed by an autonomous car, you would have all sorts of hearings.

          • luckylion 2 days ago

            But if little Becky got killed by a non-autonomous car, there's a lot of shrugs, because cars are awesome.

            My point being that FSD cars are even more awesome and society will realize that once they are more numerous. They'll be safer and cheaper, and some collateral damage will be accepted just like it is today.

            • scarface_74 2 days ago

              I’m not saying that FSD wouldn’t be technically awesome. I am saying there would be a lot of political posturing if Becky got killed.

              The American citizens won’t accept vaccines, fluoride, and believe that immigrants are eating pets. The very definition of “conservative” is not accepting change.

1970-01-01 2 days ago

Tesla's Fight Club approach means they will fix the software before a VIP is killed. Five is many, but still not enough to invalidate the no-LIDAR approach. A class action against FSD needs to happen before it comes to that.

MarkusWandel 2 days ago

I've never been a motorcyclist, but I've been told by one: Just remember, on a motorcycle, you're invisible. Didn't think that would ever be literally true.

9283409232 2 days ago

This does not bode well for the Tesla robotaxi pilot launching soon. Depending on how that goes, Q2/Q3 could be a bloodbath for Tesla and I'm sure he knows it.

  • SideburnsOfDoom 2 days ago

    > the Tesla robotaxi pilot launching soon.

    Won't happen soon. Tesla's previous promises turned out to be hype to prop up the stock price. You should not put faith in this one.

Homer10 2 days ago

The average human eye has 10,000,000 X the dynamic range of even the finest broadcast video camera. So, when the Sun is low on the horizon, all Teslas are effectively blind. Lidar is good, until you get more than one of them in an area. Then the lidars totally interfere with each other resulting in total chaos. Self driving anything is just not ready yet, and won't be for a long time.

  • josephcsible 2 days ago

    > when the Sun is low on the horizon, all Teslas are effectively blind.

    Looking at the camera preview screen on a Tesla with the Sun in frame is all you need to do to prove that false.

yieldcrv 2 days ago

Okay lots of discussion on how the stats were collected

My question is this: would Teslas be improved with Lidar?

outer_web 2 days ago

This unfortunately undercuts the "ahead, alone, alive" maxim.

byyoung3 2 days ago

You might not like Elon but that doesn’t make him wrong.

  • sidibe 2 days ago

    Yup the continuing poor performance of his FSD is what makes him wrong

    • byyoung3 a day ago

      SOTA performance. The LiDAR ones have basically failed or completely overfit to a single city.

darthrupert 2 days ago

I would have been skeptical of news like this in the past, but the last 6 months have shown that Elon Musk doesn't give a fuck about human tragedy. Or truth.

  • ornornor 2 days ago

    It’s been obvious for far longer for a lot of us.

  • dingnuts 2 days ago

    if this hasn't been obvious for a decade you need a more varied news diet.

    • darthrupert 2 days ago

      He did deliver on several fronts for a long time and seemed like a serious person. But people can change, and not always for the better.

      But yeah, like Sam Harris says now, perhaps I didn't know the real Musk.

      • miltonlost 2 days ago

        He seemed like a serious person when he called a cave diver a pedo guy (more than six months ago)? Maybe you need to rethink what "serious" means to you if all that matters is "delivering". Have some better morals.

        • darthrupert 2 days ago

          Being an idiot on Twitter has never meant anything. That's the crux here, really: Musk has proven quite recently that his Twitter persona reflected his actual being.

          People accusing me and others of "you understand this only now?" are full of shit. You didn't know, just like me. Holier than thou is distasteful.

  • drowsspa 2 days ago

    My turning point was the Thai cave rescue.

cannonpr 2 days ago

It often surprises me that people think Tesla's sensor suite is sufficient for safe self driving, with quotes like "well humans rely on vision."

Human vision is nothing like the cameras on a Tesla. Our eyes have far more advanced systems, including the ability to microscan, pivot, and capture polarisation, plus far more sophisticated depth perception. Frankly, current-generation cameras are in many ways far more primitive than human eyes. And that's not even considering the over 20% of our brain mass dedicated to post-processing.

All because Musk wants to pad his profits and keep the hype without paying for a lidar.

apples_oranges 2 days ago

The government theoretically could mandate another non-visual obstacle sensor.

  • Robotbeat 2 days ago

    I think mandating performance makes way more sense.

    • dingnuts 2 days ago

      I'm sure the government, led by Elon Musk, has no conflict of interest with Tesla, run by Elon Musk, that would EVER give Tesla an unfair advantage against its competitors through regulatory capture or something similar.

      • fallingknife 2 days ago

        Which is exactly why they should mandate performance and not procedure. It's easy to bias a procedure towards one company but very hard to bias a performance standard.

      • Muromec 2 days ago

        The US government in general is known for its great judgement, meritocracy, and adherence to the highest ethical standards, not just the absence of conflicts of interest in particular.

usrusr 2 days ago

[flagged]

  • dingnuts 2 days ago

    "crippleds"?

    Jesus Christ I know the zeitgeist is a cultural backlash against political correctness but goddamn we're getting a little cavalier using 1930s-esque language about non minority groups here aren't we?

    You couldn't have said serious injuries vs deaths? Or deaths vs maimings?

    • usrusr 2 days ago

      It's a terrible thing to happen to a person. Language that minimizes negativity has great value when talking about (or with) persons in the post-accident state. But here, in this context, we are talking about the event in the abstract, and that same minimization of negativity would only serve those willing to trade other people's safety for personal convenience. 1930s-esque language, by the way, excelled at glossing over horrors.

    • scarface_74 2 days ago

      Please don’t try to be an “ally”. I have a very slight disability that causes me to limp a little when I walk (or run; well trained, I'm a decent runner), and I am not that easily offended.

      • aisenik 2 days ago

        Please don't try to speak for everyone who's impacted by the regressive trends in social practices.

        I have a slight limp and am a member of a scapegoated minority who uses slurs in-community, I'm not easily offended (I'm inherently offensive) and your lil' shoutdown is gross.

        • scarface_74 2 days ago

          I’m also in a minority community. Let’s just say I grew up on NWA (the rap group). I also roll my eyes at many of those “allies”.

          For good reason, as soon as Trump winked, all of the corporations and colleges immediately dropped the programs.

          • aisenik 2 days ago

            The rhetorical point of reflecting the grandparent's use of language is to highlight that it's a meaningless argument, in reality. It's just fallacy to appeal to a limp as right to shut down an appeal to modern civility over regressive barbarity in our language.

            My minority-status is only relevant to further illuminate the emptiness of the argument being made. Most people can claim some degree of structural disadvantage in their lives.

            The revival of eugenicist language is not a positive trend for any member of a disadvantaged demographic, regardless of how they wield their position in society.

alesso_x 2 days ago

The article gives a pretty misleading impression of where FSD stands today. It treats highway behavior as if nothing has changed, but that’s not accurate.

Before October 2024, the highway stack was still running on Tesla’s older software. A major shift happened with version 12.5.6, which brought the end-to-end neural network to highways for the first time.

Then in December, version 13.2.2 pushed things even further by scaling the model specifically for HW4/AI4. It’s a major step up from earlier versions like 12.3.x from April.

I absolutely feel for anyone who’s been involved in an accident while using FSD. But at the end of the day, the system still requires driver supervision. It’s not autonomous, and the responsibility ultimately falls on the person behind the wheel.

If you’re going to evaluate how FSD performs today, you really need to be looking at version 13 or greater. Anything older just doesn’t reflect what the system is capable of now.

  • hughw 2 days ago

    It's weird that we've come to accommodate this abuse of the plain English meaning of "full self driving".

  • YawningAngel 2 days ago

    The responsibility does not rest solely with the person behind the wheel. Tesla knows, or should know, how people use this system, and should engineer it such that it doesn't kill people when the end user does what they are foreseeably going to do.

    • empressplay 2 days ago

      If that was true then all cars should have ignition interlock devices, because it's easy for a manufacturer to foresee a user driving drunk.

      But they don't, and I'm not sure if there have been many successful cases of suing car manufacturers because their cars let people drive drunk(?)

  • don_neufeld 2 days ago

    How do those changes address these specific accidents?

    • alesso_x 2 days ago

      It’s a completely different system. Comparing it to the old one is like comparing apples to oranges. The original highway stack didn’t feel natural at all. It was like every move was hard-coded.

      Lane changes used to feel robotic. Speed had to be adjusted manually all the time just to feel comfortable.

      The new system feels much more human. It has driving profiles and adapts based on traffic, which makes the experience way smoother.

      Have you tried FSD? I use it almost every day. I'm arguing more that the article is misleading about the current version of FSD. I do not doubt that the accidents happened on the older version.

      • alesso_x 2 days ago

        > The self-driving subject vehicle is a 2020 Tesla Model 3. The Tesla fatally struck a motorcyclist from behind at above 100MPH on a 45MPH speed limit road

        FSD cannot be turned on above 85 mph. If you try to accelerate above 85 mph it disengages.

        > That’s how you would strike a motorcyclist at such extreme speed, simply press the accelerator and all other inputs are apparently overridden.

        If the user is pressing the accelerator pedal, it overrides the autopilot. I’m not sure how FSD could reasonably be blamed in that situation.

        The article also talks about Traffic Aware Cruise Control (TACC). There is a nuance between the different systems. This is not Full Self-Driving (FSD). It is an older system that Tesla provides for free. All it does is try to maintain a set speed, and you still have to steer. If the user is pressing the accelerator pedal, it overrides the system. I am not sure how FSD could reasonably be blamed in that situation.