Software has always been about abstraction, and this one, in a way, is the ultimate abstraction. It turns out that LLMs are a pretty powerful learning tool; one just needs the discipline to use them.
> This one, in a way, is the ultimate abstraction.
Is that really true, though? I hear "No Silver Bullet" from The Mythical Man-Month in my head... It's definitely a hell of an abstraction, but I'm not sure it's the "ultimate" either. There is still essential complexity to deal with.
I agree with the sentiment of the post, though I'm not a person who fills my life with busywork.
I quite like tactile buttons. That said, I've never been annoyed by my Model 3's glove box; I use the PIN. I have both stalks, but the lack of other buttons seems just fine. Beyond the auto wipers, I thought they did a pretty damn good job with the UX of the car.
How often does one go into the glove box? It's so small, and the center console is very spacious and more accessible. It's two quick taps on the screen for a passenger. And if you wish to lock your glove box (many do), the solution is much better than a key.
Fair points; we rarely use the glovebox because the center console is not only more accessible but also doesn't require fiddling with the touch screen to open ;)
I do agree that the UX is pretty good overall. The glovebox annoyed me (until we just stopped using it), and so did the defogger (which we need all the time in the winter here), which took several taps on the screen until I discovered that I could customize the shortcut buttons at the bottom of the screen.
Some automated things they definitely got right: auto-engaging the emergency brake while in park; auto-shifting into park when opening the door; auto-locking when leaving the car; auto-starting the climate control when entering the car; auto-adjusting the seat position based on driver detection.
But some things need work: the windshield-wiper algorithm definitely needs some calibration -- the wipers come on at random times when there is no rain or water splash; the lane-departure "I'm taking control because you're going to crash" feature is way too sensitive and beeps at random moments; and the collision sensor is also much too sensitive (yes, I see the car, and I'm already slowing down), though I admit I'd rather it err on the side of being too sensitive than not sensitive enough.
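The over-sensitive triggers described above are a classic thresholding problem: a single noisy sensor reading fires the action. A common mitigation is hysteresis (separate on/off thresholds) plus a debounce window. This is a generic, hypothetical sketch of that idea -- not Tesla's actual algorithm, and all names and thresholds here are made up for illustration:

```python
def should_wipe(readings, on_threshold=0.6, off_threshold=0.3, hold=3):
    """Return per-sample on/off decisions for a noisy rain-sensor signal.

    A naive trigger (reading > threshold) fires on every splash or
    glare. Hysteresis (a higher threshold to turn on than to turn off)
    plus a debounce (require `hold` consecutive high readings before
    engaging) filters out brief spikes.
    """
    decisions = []
    active = False
    high_streak = 0
    for r in readings:
        if not active:
            # Count consecutive high readings; engage only after `hold`.
            high_streak = high_streak + 1 if r > on_threshold else 0
            if high_streak >= hold:
                active = True
        elif r < off_threshold:
            # Disengage only when the signal drops well below the on level.
            active = False
            high_streak = 0
        decisions.append(active)
    return decisions

# A single splash (one high sample) does not engage the wipers,
# while sustained rain does:
print(should_wipe([0.1, 0.9, 0.1, 0.7, 0.7, 0.7, 0.2]))
# [False, False, False, False, False, True, False]
```

The trade-off the comment above notes applies here too: a longer `hold` window means fewer false triggers but slower reaction to real rain.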
Are you aware of any law enforcement agencies that would risk loss of life for material objects? Even in the case of harm prevention, it's a failure if the perp dies. That's still seen as a policy or op failure.
Random passersby are not law enforcement professionals; they're untrained and therefore can't be held to such standards.
The case of Daniel Penny cited above is straightforward: "Neely boarded the car Penny was riding and reportedly began threatening passengers. After the train had left the station, Penny approached Neely from behind to apply the chokehold, and maintained it in a sitting position until Neely went limp a few minutes after the train had reached the next stop."
That's exactly what a successfully stopped threat looks like. That the threatening person ended up dying is unfortunate, but they did ultimately bring that upon themselves. They were free to stop being a threat to others at any time.
But then I don't know what you're trying to imply with the loss of life to protect material objects comment. Seems like an attempt to troll, because nobody is talking about that.
> But then I don't know what you're trying to imply with the loss of life to protect material objects comment. Seems like an attempt to troll, because nobody is talking about that.
From the thread (edited for clarity):
-> I've seen a phone jacking in this exact scenario, and nobody moved to stop the guy running. Nobody on the train can help because the doors have closed, and nobody on the platform has any idea anything just happened; or if they do, the guy is well gone before they can put two and two together.
-> I'm not worried about the laptop. Pretty much everyone knows that any valuable laptop is a tracking device anyway. You should be worried about getting actually robbed, or even being attacked for no reason, while you're not paying attention.
-> Are you looking for examples? Off the cuff, in the past 2 years we've had 2 high-profile incidents: Jordan Williams and Daniel Penny.
Theft -> examples of loss of life during "successful interventions".
> That's exactly what a successfully stopped threat looks like.
We might be getting caught up on how to define successful here. If by successful you mean that the outcome was legal then I agree, and would say the outcomes of these trials were likely the appropriate outcome.
But if by successful you mean the best outcome, which is what I take it to mean, then I disagree. A successful intervention would be one where no one was injured. I've spent years riding trains in Chicago, where there's a pretty regular cohort of individuals suffering from various mental illnesses. I even lived in a building that partially served as a halfway house for such individuals. I've seen people do what Jordan Neely was claimed to have done a couple dozen times without altercation. I've also seen people assaulted and knives get pulled. There are ways to de-escalate a situation that don't result in a lethal outcome. That should be the definition of successful here.
> Random passerby are not law enforcement professionals, they're untrained and therefore can't be held to such standards.
The standard is the law. Vigilantism doesn't get a pass on the law just because it was good-natured. Perhaps the law gives good-natured people pause, but the alternative is much worse. "Legal hell," as it was put, is appropriate when one is involved in the death of an individual. That's just a consequence of living in a society that values human life.
Exactly — if the terminology were truly unacceptable, the FTC would likely have intervened much earlier.
Regulators implicitly allowed the ambiguity to persist, and are now attempting to reframe or correct it retroactively.
It’s possible there were political or practical considerations — for example, a belief that successful autonomous driving would make future regulation easier, or at least postpone difficult legislative debates.
We can’t know what the internal reasoning was, but the long delay suggests more than simple oversight.
It's not like they weren't told multiple times to look into it. Lina Khan confirmed it was on their radar, and she's one of the most pro-consumer chairs we've had. She had four years to make a move if she thought a lawsuit was appropriate.
Yes — that’s one plausible explanation, and it still fits the same structural question.
Whether the delay came from optimistic expectations about imminent progress, political or economic incentives, or simple regulatory inertia, the core issue remains the same:
the terminology was tolerated for a long period, and that tolerance allowed ambiguity to accumulate.
My point isn’t about defending Tesla or trusting regulators’ judgment — it’s about asking why the shift happened only after years of implicit acceptance, and what effects that delay had on public understanding and responsibility.
> Why was the terminology tolerated for years before being deemed unacceptable?
Politics and/or incompetence. Nothing to do with conspiracy theories. Government agencies are very transparently political in general (implicitly so, historically; these days explicitly, and you can now add outright corruption to that), not just regarding Tesla specifically.