Suddenly all this focus on world models by DeepMind starts to make sense. I've never really thought of Waymo as a robot in the same way as e.g. a Boston Dynamics humanoid, but of course it is a robot of sorts.
Google/Alphabet are so vertically integrated for AI when you think about it. Compare what they're doing - their own power generation, their own silicon, their own data centers, Search, Gmail, YouTube, Gemini, Workspace, Wallet, billions and billions of Android and Chromebook users, their ads everywhere, their browser everywhere, Waymo, probably buying back Boston Dynamics soon enough (they recently partnered together), fusion research, drug discovery... and then look at ChatGPT's chatbot or Grok's porn. Pales in comparison.
Google has been doing more R&D and internal deployment of AI and less trying to sell it as a product. IMHO that difference in focus makes a huge difference. I used to think their early work on self-driving cars was primarily to support Street View in their maps.
There was a point in time when basically every well known AI researcher worked at Google. They have been at the forefront of AI research and investing heavily for longer than anybody.
It’s kind of crazy that they have been slow to create real products and competitive large scale models from their research.
But they are in full gear now that there is real competition, and it’ll be cool to see what they release over the next few years.
Ex-googler: I doubt it, but am curious for the rationale. (I know there was a round of PR re: him "coming back to help with AI," but just between you and me, the word on him internally, over years and multiple projects, was that having him around caused chaos, because he was a tourist flitting between teams, just spitting out ideas - now you have unclear direction and multiple teams hearing the same "you should" and doing it.)
Please, Google was terrible about using the tech they had long before Sundar, back when Brin was in charge.
Google Reader is a simple example: Google had by far the most popular RSS reader, and they just threw it away. A single intern could have kept the whole thing running, and Google has literal billions, but they couldn't see the value in it.
I mean, it's not like being able to see what a good portion of America is reading every day could have any value for an AI company, right?
Google has always been terrible about turning tech into (viable, maintained) products.
Their unreleased LaMDA[1] famously caused one of their own engineers to have a public crashout in 2022, before ChatGPT dropped. Pre-ChatGPT they also showed it off in their research blog[2] and showed it doing very ChatGPT-like things and they alluded to 'risks,' but those were primarily around it using naughty language or spreading misinformation.
I think they were worried that releasing a product like ChatGPT only had downside risks for them, because it might mess up their money printing operation over in advertising by doing slurs and swears. Those sweet summer children: little did they know they could run an operation with a sieg-heiling CEO who uses LLMs to manufacture and distribute CSAM worldwide, and it wouldn't make above-the-fold news.
Indeed, none of the current AI boom would’ve happened without Google Brain and their failure to execute on their huge early lead. It’s basically a Xerox Parc do-over with ads instead of printers.
Not true at all. I interacted with Meena[1] while I was there, and the publication was almost three years before the release of ChatGPT. It was an unsettling experience, felt very science fiction.
The surprise was not that they existed: there were chatbots in Google way before ChatGPT. What surprised them was the demand, despite all the problems the chatbots have. The big problem with LLMs was not that they could do nothing, but how to turn them into products that made good money. Even people in OpenAI were surprised about what happened.
In many ways, turning tech into products that are useful, good, and don't make life hell is a more interesting issue of our times than the core research itself. We probably want to avoid the value-capturing platform problem, as otherwise we'll end up seeing governments using ham-fisted tools to punish winners in ways that aren't helpful either.
The uptake forced the bigger companies to act. With image diffusion models too - no corporate lawyer would let a big company release a product that allowed the customer to create any image...but when stable diffusion et al started to grow like they did...there was a specific price of not acting...and it was high enough to change boardroom decisions
Well, I must say ChatGPT felt much more stable than Meena when I first tried it. But, as you said, it was a few years before ChatGPT was publicly announced :)
Tesla built something like this for FSD training; they presented it many years ago. I never understood why they didn't productize it. It would have made a brilliant Maps alternative, which could automatically update from Tesla cars on the road. Could live-update with speed cameras and road conditions. Like many things, they've fallen behind.
I love Volvo, am considering buying one in a couple weeks actually, but they're doing nothing interesting in terms of ADAS, as far as I can tell. It seems like they're limited to adaptive cruise control and lane keeping, both of which have been solved problems for more than a decade.
It sounds like they removed Lidar due to supplier issues and availability, not because they're trying to build self-driving cars and have determined they don't need it anymore.
Is lane keeping really a solved problem? Just last year one of my brand-new rental cars tried to kill me a few times when I tried it again, and so far not even the simple lane-departure warning worked properly in any of the cars I tried when it was raining.
I’d suggest doing some research on software quality. Two years back I was all for buying one (I was considering an EX40), but I got myself into some Facebook groups for owners and was shocked at the dreadful reports of quality of the software and it completely put me off. I got an ID4 instead. Reports about the EX90 have been dreadful. I was very interested, and I still admire their look and build when they drive by - but it killed my enthusiasm to buy one for a few years until they get it right.
Without lidar, plus the terrible quality of Tesla's onboard cameras, Street View would look terrible. The biggest L of Elon's career is the weird commitment to no-lidar. If you've ever driven a Tesla, it gives daily messages like "the left side camera is blocked" - cameras and weather don't mix either.
At first I gave him the benefit of the doubt, like that weird decision of Steve Jobs banning Adobe Flash, which ran most of the fun parts of the Internet back then, that ended up spreading HTML5. Now I just think he refused LIDAR on purely aesthetic reasons. The cost is not even that significant compared to the overall cost of a Tesla.
That one was motivated by the need to control the app distribution channel, just like they keep the web as a second-class citizen in their ecosystem nowadays.
He didn't refuse it. Mobileye or whoever cut Tesla off because they were using the lidar sensors in a way they didn't approve of. From there he got mad and said "no more lidar!"
I think Elon announced Tesla was ditching LIDAR in 2019.[0] This was before Mobileye offered LIDAR. Mobileye has used LIDAR from Luminar Technologies around 2022-2025. [1][2] They were developing their own lidar, but cancelled it. [3] They chose Innoviz Technologies as their LIDAR partner going forward for future product lines. [4]
The original Mobileye EyeQ3 devices that Tesla began installing in their cars in 2013 had only a single forward facing camera. They were very simple devices, only intended to be used for lane keeping. Tesla hacked the devices and pushed them beyond their safe design constraints.
Then that guy got decapitated when his Model S drove under a semi-truck that was crossing the highway and Mobileye terminated the contract. Weirdly, the same fatal edge case occurred 2 more times at least on Tesla's newer hardware.
His stated reason was that he wanted the team focused on the driving problem, not on "now you have two problems" sensor-fusion problems. People assumed cost was the real reason, but it seems unfair to blame him for what people assumed. Don't get me wrong, I don't like him either, but that's not due to his autonomous driving leadership decisions, it's because of shitting up twitter, shitting up US elections with handouts, shitting up the US government with DOGE, seeking Epstein's "wildest party," DARVO every day, and so much more.
Sensor fusion is an issue, one that is solvable over time and investment in the driving model, but sensor-can't-see-anything is a show stopper.
Having a self-driving solution that can be totally turned off with a speck of mud, heavy rain, morning dew, bright sunlight at dawn and dusk.. you can't engineer your way out of sensor-blindness.
I don't want a solution that is available to use 98% of the time, I want a solution that is always-available and can't be blinded by a bad lighting condition.
I think he did it because his solution always used the crutch of "FSD Not Available, Right hand Camera is Blocked" messaging and "Driver Supervision" as the backstop to any failure anywhere in the stack. Waymo had no choice but to solve the expensive problem of "Always Available and Safe" and work backwards on price.
> Waymo had no choice but to solve the expensive problem of "Always Available and Safe"
And it's still not clear whether they are using a fallback driving stack for a situation where one of non-essential (i.e. non-camera (1)) sensors is degraded. I haven't seen Waymo clearly stating capabilities of their self-driving stack in this regard. On the other hand, there are such things as washer fluid and high dynamic range cameras.
(1) You can't drive in a city if you can't see the light emitted by traffic lights, which neither lidar nor radar can do.
Yeah, it's absurd. As a Tesla driver, I have to say the autopilot model really does feel like what someone who's never driven a car before thinks driving is like.
Using vision only is so ignorant of what driving is all about: sound, vibration, vision, heat, cold... these are all clues about road conditions. If the car isn't feeling all these things as part of the model, you're handicapping it. In a brilliant way, lidar is the missing piece of information a car needs without relying on multiple sensors; it's probably superior to what a human can do, whereas vision only is clearly inferior.
Tesla went nothing-but-nets (making fusion easy) and Chinese LIDAR became cheap around 2023, but monocular depth estimation was spectacularly good by 2021. By the time unit cost and integration effort came down, LIDAR had very little to offer a vision stack that no longer struggled to perceive the 3D world around it.
Also, integration effort went down but it never disappeared. Meanwhile, opportunity cost skyrocketed when vision started working. Which layers would you carve resources away from to make room? How far back would you be willing to send the training + validation schedule to accommodate the change? If you saw your vision-only stack take off and blow past human performance on the march of 9s, would you land the plane just because red paint became available and you wanted to paint it red?
I wouldn't completely discount ego either, but IMO there's more ego in the "LIDAR is necessary" case than the "LIDAR isn't necessary" at this point. FWIW, I used to be an outspoken LIDAR-head before 2021 when monocular depth estimation became a solved problem. It was funny watching everyone around me convert in the opposite direction at around the same time, probably driven by politics. I get it, I hate Elon's politics too, I just try very hard to keep his shitty behavior from influencing my opinions on machine learning.
> but monocular depth estimation was spectacularly good by 2021
It's still rather weak, and true monocular depth estimation really wasn't spectacularly anything in 2021. It's fundamentally ill-posed, and any priors you use to get around that will come back to bite you in the long tail of things some driver will encounter on the road.
The way it got good is by using camera overlap in space and over time while in motion to figure out metric depth over the entire image. Which is, humorously enough, sensor fusion.
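The overlap-in-motion trick described above reduces to ordinary triangulation: depth is focal length times baseline over disparity, where the baseline can come from a second camera or simply from the car having moved between frames. A minimal sketch in Python, with all numbers invented for illustration:

```python
# Minimal sketch of metric depth from camera overlap, the "sensor
# fusion" described above. All numbers are invented for illustration.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Classic triangulation: depth = f * B / d.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two viewpoints (two cameras,
                    or one camera at two instants while the car moves)
    disparity_px -- shift of the same feature between the two views
    """
    if disparity_px <= 0:
        raise ValueError("feature must shift between views to triangulate")
    return focal_px * baseline_m / disparity_px

# A car moving at 20 m/s seen by a 30 fps camera gets a ~0.67 m baseline
# between consecutive frames "for free".
baseline = 20.0 / 30.0
print(round(depth_from_disparity(1000.0, baseline, 16.0), 2))  # 41.67
```

This is why motion helps: the baseline (and so the depth resolution) grows with the distance travelled between views.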
It was spectacularly good before 2021, 2021 is just when I noticed that it had become spectacularly good. 7.5 billion miles later, this appears to have been the correct call.
Depth estimation is but one part of the problem: atmospheric and other conditions that blind optical visible-spectrum sensors, lack of ambient light (sunlight), and more. Lidar simply outperforms (performs at all?) in these conditions, and provides hardware-backed distance maps, not software-calculated estimation.
Lidar fails worse than cameras in nearly all those conditions. There are plenty of videos of Tesla's vision-only approach seeing obstacles far before a human possibly could in all those conditions on real customer cars. Many are on the old hardware with far worse cameras
Always thought the case was for sensor redundancy and data variety - the stuff that throws off monocular depth estimation might not throw off a lidar or radar.
Monocular depth estimation can be fooled by adversarial images, or just scenes outside of its distribution. It's a validation nightmare and a joke for high reliability.
It isn't monocular though. A Tesla has 2 front-facing cameras, narrow and wide-angle. Beyond that, it is only neural nets at this point, so depth estimation isn't directly used; it is likely part of the neural net, but only the useful distilled elements.
Isn't there a great deal of gaming going on with the car disengaging FSD milliseconds before crashing? Voila, no "full" "self" driving accident; just another human failing [*]!
[*] Failing to solve the impossible situation FSD dropped them into, that is.
There are a sizeable number of deaths associated with the abuse of Tesla's adaptive cruise control with lane centering (publicly marketed as "autopilot"). Such features are commonplace on many new cars, and it is unclear whether Tesla is an outlier, because no one is interested in obsessively researching cruise control abuse among other brands.
Seeing how it's by a lidar vendor, I don't think they're biased against it. It seems lidar is not a panacea - it struggles with heavy rain and snow much more than cameras do, and is affected by cold weather or any contamination on the sensor.
So lidar will only get you so far. I'm far more interested in mmWave radar, which, while much worse in spatial resolution, isn't affected by light conditions or weather, and can directly measure properties of the thing it's illuminating, like material, the speed it's moving, the thickness.
Fun fact: mmWave-based presence sensors can measure your heartbeat, as the micro-movements show up as a frequency component. So I'd guess it would have a very good chance of detecting a human.
I'm pretty sure even with much more rudimentary processing, it'll be able to tell if it's looking at a living being.
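The heartbeat claim can be illustrated with a toy spectral analysis: if the radar reports chest displacement over time, a DFT of that signal shows the heartbeat micro-motion as a peak in the heart-rate band, separate from the much larger breathing component. This is a synthetic sketch, not real radar processing; all signal parameters are invented:

```python
import math

# Toy illustration (not real radar processing): a chest's displacement
# signal contains a large breathing component (~0.25 Hz) and a much
# smaller heartbeat micro-motion (~1.2 Hz, i.e. 72 bpm). A DFT shows
# the heartbeat as a distinct spectral peak. All parameters are invented.

FS = 20.0   # samples per second
N = 400     # 20 s window

signal = [
    1.00 * math.sin(2 * math.pi * 0.25 * n / FS)    # breathing
    + 0.05 * math.sin(2 * math.pi * 1.20 * n / FS)  # heartbeat
    for n in range(N)
]

def dft_mag(x, k):
    """Magnitude of DFT bin k (bin k corresponds to k * FS / N hertz)."""
    re = sum(v * math.cos(2 * math.pi * k * n / len(x)) for n, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * k * n / len(x)) for n, v in enumerate(x))
    return math.hypot(re, im)

# Search only the plausible heart-rate band (0.8-2.5 Hz) so the big
# breathing peak is excluded.
band = range(int(0.8 * N / FS), int(2.5 * N / FS) + 1)
peak_k = max(band, key=lambda k: dft_mag(signal, k))
print(peak_k * FS / N)  # 1.2
```

The detected peak sits at 1.2 Hz (72 bpm) even though the heartbeat component is 20x smaller than the breathing motion.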
By the way: what happened to the idea that self-driving cars will be able to talk to each other and combine each other's sensor data, so if there are multiple ones looking at the same spot, you'd get a much improved chance of not making a mistake?
Maybe vision-only can work with much better cameras, with a wider spectrum (so they can see thru fog, for example), and self-cleaning/zero upkeep (so you don't have to pull over to wipe a speck of mud from them). Nevertheless, LIDAR still seems like the best choice overall.
Yep, and won't activate until any morning dew is off the sensors.. or when it rains too hard.. or if it's blinded by a shiny building/window/vehicle.
I will never trust 2D camera-only; it can be covered or blocked physically, and when that happens FSD fails.
As cheap as LIDAR has gotten, adding it to every new Tesla seems to be the best way out of this idiotic position. Sadly, I think Elon got bored with cars and moved on.
From the perspective of viewing FSD as an engineering problem that needs solving I tend to think Elon is on to something with the camera-only approach – although I would agree the current hardware has problems with weather, etc.
The issue with lidar is that many of the difficult edge cases of FSD are all visible-light vision problems. Lidar might be able to tell you there's a car up front, but it can't tell you that the car has its hazard lights on and a flat tire. Lidar might see a human-shaped thing in the road, but it cannot tell whether it's a mannequin leaning against a bin or a human about to cross the road.
Lidar gets you most of the way there when it comes to spatial awareness on the road, but you need cameras for most of the edge-cases because cameras provide the color data needed to understand the world.
You could never have FSD with just lidar, but you could have FSD with just cameras if you can overcome all of the hardware and software challenges with accurate 3D perception.
Given lidar adds cost and complexity, and most edge cases in FSD are camera problems, I think camera-only probably helps force engineers to focus their efforts in the right place rather than hitting bottlenecks from over-depending on lidar data. This isn't an argument for camera-only FSD, but from Tesla's perspective it does keep down costs and allows them to continue to produce appealing cars, which is obviously important if you're coming at FSD from the perspective of an automaker trying to sell cars.
Finally, adding lidar as a redundancy once you've "solved" FSD with cameras isn't impossible. I personally suspect Tesla will eventually do this with their robotaxis.
That said, I have no real experience with self-driving cars. I've only worked on vision problems and while lidar is great if you need to measure distances and not hit things, it's the wrong tool if you need to comprehend the world around you.
This is so wild to read when Waymo is currently doing like 500,000 paid rides every week, all over the country, with no one in the driver's seat. Meanwhile Tesla seems to have a handful of robotaxis in Austin, and it's unclear if any of them are actually driverless.
But the Tesla engineers are "in the right place rather than hitting bottlenecks from over depending on Lidar data"? What?
Tesla has driven 7.5B autonomous miles to Waymo's 0.2B, but yes, Waymo looks like they are ahead when you stratify the statistics according to the ass-in-driver-seat variable and neglect the stratum that makes Tesla look good.
The real question is whether doing so is smart or dumb. Is Tesla hiding big show-stopper problems that will prevent them from scaling without a safety driver? Or are the big safety problems solved and they are just finishing the Robotaxi assembly line that will crank out more vertically-integrated purpose-designed cars than Waymo's entire fleet every day before lunch?
There are more Teslas on the road than Waymos by several orders of magnitude. Additionally, the types of roads and conditions Teslas drive under are completely incomparable to Waymo's.
I wasn't arguing Tesla is ahead of Waymo? Nor do I think they are. All I was arguing was that it makes sense from the perspective of a consumer automobile maker to not use lidar.
I don't think Tesla is that far behind Waymo though given Waymo has had a significant head start, the fact Waymo has always been a taxi-first product, and given they're using significantly more expensive tech than Tesla is.
Additionally, it's not like this is a lidar vs cameras debate. Waymo also uses and needs cameras for FSD for the reasons I mentioned, but they supplement their robotaxis with lidar for accuracy and redundancy.
My guess is that Tesla will experiment with lidar on their robotaxis this year, because design decisions for a robotaxi should differ from those for a consumer automobile. But I could be wrong, because if Tesla wants FSD to work well on visually appealing and affordable consumer vehicles, then they'll probably have to solve some of the additional challenges with a camera-only FSD system. I think it will depend on how much Elon decides Tesla needs to pivot into robotaxis.
Either way, what is undebatable is that you can't drive with lidar only. If the weather is so bad that cameras are useless then Waymos are also useless.
Pretty much. They banked on "if we can solve FSD, we can partially solve humanoid robot autonomy, because both are robots operating in poorly structured real world environments".
They started working on humanoid robots because Musk always has to have the next moonshot, trillion-dollar idea to promise "in 3 years" to keep the stock price high.
As soon as Waymo's massive robotaxi lead became undeniable, he pivoted from robotaxis to humanoid robots.
Obviously both will exist and compete with each other on the margins. The thing to appreciate is that our physical world is already built like an API for adult humans. Swinging doors, stairs, cupboards, benchtops. If you want a robot to traverse the space and be useful for more than one task, the humanoid form makes sense.
The key question is whether general purpose robots can outcompete on sheer economies of scale alone.
What do you think I said that you're contradicting?
IMO the presence of safety chase vehicles is just a sensible "as low as reasonably achievable" measure during the early rollout. I'm not sure that can (fairly) be used as a point against them.
I'm comfortable with Tesla sparing no expense for safety, since I think we all (including Tesla) understand that this isn't the ultimate implementation. In fact, I think it would be a scandal if Tesla failed to do exactly that.
Damned if you do and damned if you don't, apparently.
Setting aside the anti-Tesla bias, none of what I said relies on Tesla claims. The "chase vehicle" claims are all based on third-party accounts from actual rideshare customers.
> IMO the presence of safety chase vehicles is just a sensible "as low as reasonably achievable" measure during the early rollout. I'm not sure that can (fairly) be used as a point against them.
Only if you're comparing them to another company, which you seem to be. So yes, yes it can.
Seriously, the amount of sheer cope here is insane. Waymo is doing the thing. Tesla is not. If Tesla were capable of doing it, they would be. But they're not.
It really is as simple as that and no amount of random facts you may bring up will change the reality. Waymo is doing the thing.
The vertical integration argument should apply to Grok. They have Tesla driving data (probably much more data than Waymo), Twitter data, plus Tesla/SpaceX manufacturing data. When/if Optimus starts on the production line, they'll have that data too. You could argue they haven't figured out how to take advantage of it, but the potential is definitely there.
Agreed. Should they achieve Google level integration, we will all make sure they are featured in our commentary. Their true potential is surely just around the corner...
"Tesla has more data than Waymo" is some of the lamest cope ever. Tesla does not have more video than Google! That's crazy! People who repeat this are crazy! If there was a massive flow of video from Tesla cars to Tesla HQ that would have observable side effects.
I know it’s gross, but I would not discount this. Remember why Blu-ray won over HDDVD? I know it won for many other technical reasons, but I think there are a few historical examples of sexual content being a big competitive advantage.
They couldn't even make burger flipping robots work and are paying fast food workers $20/hr in California.
If that doesn't make it obvious what they can and cannot do then I can't respect the tranche of "hackers" who blindly cheer on this unchecked corporate dystopian nightmare.
Erm, a dishwasher, washing machine, or automated vacuum can be considered a robot. I'm confused by this obsession with the term - there are many robots that already exist. Robotics has been involved in the production of cars for decades.
I think the (gray) line is the degree of autonomy. My washing machine makes very small, predictable decisions, while a Waymo has to manage uncertainty most of the time.
A robot is a robot, and a human is a creature that won't necessarily agree with another human on what the definition of a word is. Dictionaries are also written by humans and don't necessarily reflect the current consensus, especially on terms where people's understanding might evolve over time as technology changes.
Even if that definition were universally agreed upon, though, that's not really enough to understand what the parent comment was saying. Being a robot "in the same way" as something else is even less objective. Humans are humans, but they're also mammals; is a human a mammal "in the same way" as a mouse? Most humans probably have a very different view of the world than most mice, and the parent comment was specifically addressing the question of whether it makes sense for an autonomous car to model the world the same way as other robots or not. I don't see how you can dismiss this as "irrelevant" because both humans and mice are mammals (or even animals; there's no shortage of classifications out there) unless you're having a completely different conversation than the person you responded to. You're not necessarily wrong because of that, but you're making a pretty significant misjudgment if you think that's helpful to them or to anyone else involved in the ongoing conversation.
No one is denying that robots existed already (but I would hardly call a dishwasher a robot FWIW)
But in my mind a Waymo was always a "car with sensors"; more recently (especially having used them a bunch in California) I've come to think of them truly as robots.
In the same way people online have argued helicopters are flying cars, it doesn't capture what most people mean when they use the word "robot", any more than helicopters are what people have in mind when they mention flying cars.
But somehow Google fails to execute. Gemini is useless for programming, and I don't even bother to use it as a chat app. Claude Code + GPT 5.2 xhigh for coding, and GPT as a chat app, are really the only ones that are worth it (price- and time-wise).
I've recently switched to Claude for chat. GPT 5.2 feels very engagement-maxxed for me, like I'm reading a bad LinkedIn post. Claude does a tiny bit of this too, but an order of magnitude less in my experience. I never thought I'd switch from ChatGPT, but there is only so much "here's the brutal truth, it's not x it's y" I can take.
GPT likes to argue, and most of its arguments are straw-man arguments, usually conflating priors. It's... exhausting; akin to arguing on the internet. (What am I even saying, here!?) Claude's a lot less of that. I don't know if it tracks discussion/conversation better; but, for damn sure, it's got way less verbal diarrhea than GPT.
Yes, GPT5-series thinking models are extremely pedantic and tedious. Any conversation with them is derailed because they start nitpicking something random.
But Codex/5.2 was substantially more effective than Claude at debugging complex C++ bugs until around Fall, when I was writing a lot more code.
I find Gemini 3 useless. It has regressed on hallucinations from Gemini 2.5, to the point where its output is no better than a random token stream despite all its benchmark outperformance. I would use Gemini 2.5 to help write papers and such, but I can't seem to use Gemini 3 for anything. Gemini CLI is also very non-compliant and crazy.
To me ChatGPT seems smarter and knows more. That’s why I use it. Even Claude rates gpt better for knowledge answers. Not sure if that itself is any indication. Claude seems superficial unless you hammer it to generate a good answer.
Gemini is by far the best UI/UX designer model. Codex seems to be the worst: it'll build something awkward and ugly, then Gemini will take 30-60 seconds to make it look like something that would have won a design award a couple years ago.
It is a bit mind boggling how behind they were considering they invented transformers and were also sitting on the best set of training data in the world, but they've caught up quite a bit. They still lag behind in coding, but I've found Gemini to be pretty good at more general knowledge tasks. Flash 3 in particular is much better than anything of comparable price and speed from OpenAI or Anthropic.
You need an AI angle if you want investment and up-boats.
Suggest an LLM-based chat that consumes feeds and provides a terrification-score rating letting you know how to calibrate your panic levels, based on real data. Allow for real-time questions on how to purify water, whether it's better to carry gold or ammo, etc.
Good luck. I'll give you 80 mil based on a 40% stake with voting rights.
Yes I agree, but why 10mph? Why not 5mph? or 2mph? You'll still hit them if they step out right in front of you and you don't have time to react.
Obviously the distances are different at that speed, but if the person steps out so close that you cannot react in time, you're fucked at any speed.
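For what it's worth, the speed question has a quantitative side: total stopping distance is reaction distance (linear in speed) plus braking distance (quadratic in speed), so dropping from 20mph to 10mph cuts stopping distance by well more than half. A back-of-envelope sketch, assuming a 1 s reaction time and 7 m/s^2 braking deceleration (typical textbook values, not measured data):

```python
# Back-of-envelope stopping distances. REACTION_S and DECEL are assumed
# textbook values, not measurements. The braking term scales with v^2,
# which is why small speed reductions matter so much.

REACTION_S = 1.0      # assumed driver reaction time, seconds
DECEL = 7.0           # assumed braking deceleration, m/s^2
MPH_TO_MS = 0.44704   # miles per hour -> metres per second

def stopping_distance_m(speed_mph: float) -> float:
    v = speed_mph * MPH_TO_MS
    # reaction distance (linear in v) + braking distance (quadratic in v)
    return v * REACTION_S + v * v / (2 * DECEL)

for mph in (30, 20, 10, 5):
    print(f"{mph:>2} mph -> {stopping_distance_m(mph):5.1f} m")
```

Under these assumptions 10mph stops in roughly 6 m versus roughly 15 m at 20mph, which is the practical difference between a near miss and an impact when someone steps out.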
10mph will still do serious damage, so for the sake of the children, please slow yourself and your daughter's driving down to 0.5mph where there are pedestrians or parked cars.
But seriously, I think you'd be safer to both slow down and put more space between the parked cars and your car, so that you are not scooting along with 30cm of clearance - move out and leave lots of space so there are better sight-lines for both you and pedestrians.
Have you been in a waymo? It knows when there are pedestrians around (it can often see over the top of parked cars) and it is very cautious when there are people near the road and it frequently slows down.
I have no idea what happened here, but in my experience of taking Waymos in SF, they are very cautious, and I'd struggle to imagine them speeding through an area with lots of pedestrians milling around. The fact that it was going 17mph at the time makes me think it was already in "caution mode". Sounds like this was something of a "worst case" scenario, and another meter or two and it would have stopped in time.
I think with humans, even if the driver is 100% paying attention and their eyes were looking in exactly the right place where the child emerged at the right time, there is still reaction time - both in cognition and in physically moving the leg to press the pedal. I suspect that a waymo will out-react a human basically 100% of the time, and apply full braking force within a few 10s of milliseconds and well before a human has even begun to move their leg.
You can watch the screen and see what it can detect, and it is impressive. On a dark road at night in Santa Monica it was able to identify that there were two pedestrians at the end of the next block on the sidewalk obscured by a row of parked cars and covered by a canopy of overgrown vegetation. There is absolutely no way any human would have been able to spot them at this distance in these conditions. You really can "feel" it paying 100% attention at all times in all directions.
Yep I've often noticed this as well - it has many many times detected humans that I can't even see (and I like to sit in the front), especially at night.
Sometimes it would detect something and I think "huh? Must be a false positive?" but sure enough it turns out that there really was someone standing behind a tree or just barely visible around a corner etc.
Sure none of those have run out in front of us, but the fact it is spotting them and tracking their movement before I am even aware they're there is impressive and reassuring.
> I suspect that a waymo will out-react a human basically 100% of the time, and apply full braking force within a few 10s of milliseconds and well before a human has even begun to move their leg.
Correct. Human reaction time is at its very best ~250ms. And that's when you're hyper-focused on reacting to a specific stimulus and actively trying to respond to it as fast as possible.
During normal driving, a focused driver will react on the order of 1s. However, that assumes actively paying attention to the road ahead. If you were, say, checking your mirrors or looking around for any other reason, this can easily get into multiple seconds. If you're, say, playing on your phone (consider how many drivers do this), forget it.
A machine however is 100% focused 100% of the time and is not subject to our poor reaction times. It can brake in <100ms every time.
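To put rough numbers on this, here's a back-of-the-envelope stopping-distance sketch. The figures are assumptions, not measurements: ~7 m/s² braking deceleration (typical for dry asphalt), a 1.0s human reaction vs a 0.1s machine reaction, at the 17mph from the incident above:

```python
# Back-of-the-envelope stopping distances at 17 mph (~7.6 m/s).
# Assumed values (not from the article): ~7 m/s^2 deceleration on
# dry asphalt, 1.0 s human reaction, 0.1 s machine reaction.

def stopping_distance(speed_ms, reaction_s, decel_ms2=7.0):
    """Distance travelled during the reaction phase plus the
    braking phase (v^2 / 2a), in metres."""
    return speed_ms * reaction_s + speed_ms**2 / (2 * decel_ms2)

v = 17 * 0.44704  # 17 mph converted to m/s, about 7.6
human = stopping_distance(v, 1.0)
machine = stopping_distance(v, 0.1)
print(f"human:   {human:.1f} m")    # roughly 11.7 m
print(f"machine: {machine:.1f} m")  # roughly 4.9 m
print(f"difference: {human - machine:.1f} m")
```

On these assumptions the reaction phase alone costs the human driver nearly 7 extra metres, which in a close call like the one described is the entire margin.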
On the other hand, a software fault could make it run into an obstacle that'd be obvious to a human at full speed.
Many roads in London have parked cars on either side so only one car can get through at a time - instead of people cooperating you get people fighting: speeding as fast as they can to get through before someone else appears, or racing oncoming cars to a gap in the parked cars, etc. So when they should be doing 30mph, they are more likely doing 40-45. Especially with EVs you have near-instant power to quickly accelerate to reach a gap first.
And putting obstacles in the road so you can't see if someone is there? That sounds really dangerous and exactly the sort of thing that caused the accident in the story here.
Yes. They have made steady progress over the previous decades to the point where they can now have years with zero road fatalities.
> And putting obstacles in the road so you can't see if someone is there? That sounds really dangerous and exactly the sort of thing that caused the accident in the story here.
Counterintuitive perhaps, but it's what works. Humans adjust their behaviour to the level of perceived risk; the single most important thing is to make driving feel as dangerous as it actually is.
I think the humans in London at least do not adjust their behaviour for the perceived risk!
From experience they will adjust their behaviour to reduce their total travel time as much as possible (i.e. speeding to "make up" for time lost waiting, etc.) and/or to "win" against other drivers.
I guess it is a cultural thing. But I cannot agree that making it harder to see people in the road is going to make anything safer. Even a robot fucking taxi with lidar and instant reaction times hit a kid because the kid was obscured by something.
> I think the humans in London at least do not adjust their behaviour for the perceived risk!
Sure they do, all humans do. Nobody wants to get hurt and nobody wants to hurt anyone else.
(Yes, there are a few exceptions, people with mental disorders that I'm not qualified to diagnose, but the vast majority of people aren't like that.)
Humans are extremely good at moderating behavior to perceived risk, thank evolution for that.
(This is what self-driving cars lack; machines have no self-preservation instinct.)
The key part is perceived, though. This is why building the road to match the level of true risk works so well. No need for artificial speed limits or policing: if people perceive the risk as what it truly is, they adjust instinctively.
This is why it is terrible to build wide 4 lane avenues right next to schools for example.
There are always going to be outlier events. If for every one person who still manages to get hit—at slow, easily-survivable speeds—you prevent five others from being killed, it’s a pretty obvious choice.
I know the research and know that it's generally considered to be effective (at least in most European cities where it is done). I wonder whether there are any tipping points, e.g. drivers going into road rage due to excessive obstacles/trying to "make up for the lost time" etc., and whether it would work in the US (or whether drivers just would ignore the risk because they don't perceive pedestrians as existing).
Does physics work? If it does, then these physical obstacles work too. Go ahead, try to drive faster than 10mph through a roadway narrowed so much it's barely wider than your car, with curbs. And yeah, I'm describing a place in London.
There have been a few cases of "disappeared" people who went missing and it turned out they had actually crashed off the road somewhere and weren't found for a week or two.
That's extreme of course, but there are probably a lot of accidents that happen in low-density rural areas or late at night when there aren't many people around. The automatic eCall from the car gives exact GPS coordinates and the severity of the accident, even if you are unconscious, or if your phone, which was neatly in the cup holder before the crash, was flung somewhere else (potentially even out of the car) and you're trying to find it while someone might be dying in the seat next to you.
People died in these situations before all this. It's a mandatory feature now because it's so effective at saving lives: apparently a 2 to 10% reduction in fatalities and serious injuries. Would you also question why we have mandatory airbags and traction control?!
right, but airbags, seatbelts, etc. are not internet connected. That's the critical distinction. I do not want the risks that come with my car connecting to the internet.
A much more reasonable ask would be for your car's systems to use your phone to place a call to emergency services. I absolutely do not want yet another internet connected device in my life, especially one like a car, where examples exist of hackers being able to disable the electronics remotely.
Lots of popular music is slop. Are you saying that e.g. Spice Girls or Coldplay or whatever is not slop? It is certainly popular with people even if it's musically and creatively bankrupt.
AI slop, Human slop - who cares if people are enjoying it.
>Are you saying that e.g. Spice Girls or Coldplay or whatever is not slop?
Your definition of "slop" seems to be "is popular with the mainstream." That isn't the definition used when applied to AI generated music. Spice Girls and Coldplay are leagues beyond anything an AI can currently produce in terms of artistic quality. Yes, there is artistic quality to popular culture.
And to most people it matters that human beings produce it. It may not matter to you - you may consider music or any other form of art to be nothing more than a means of producing stimuli intended to create a pleasing endorphin response - but most people don't want to process art the way a machine processes data.
But why should you make the distinction between slop that is created by a human or AI? Why should you care if something terrible was created by an AI or a human?
For the same reason some people like buying local, or buying hand-made, or buying "Made in <insert country>". People aren't robots, and we know the consequences of our actions are not limited to the current moment and on the current side of the black box we happen to be on as consumers. Further, even in cases of pure observation, where there is no monetary, verbal, implicit, or indirect support - e.g. just looking at a piece of art we didn't pay to see - we care about things that are not represented solely by the observable qualities of an object, especially when it comes to art and craft and the effort of people we admire.
This is obvious, though. This part of human nature will never change, and there is no argument that can confront it, and no reason to want to formulate one unless:
A. It makes you money.
B. It appears to have dividing lines that match a larger culture war in which you have emotional stock.
Discs might be fine, but good luck reading one with your kid's iPhone.
Yes sure there are probably arcane ways to do it (and your 25 year old CD drive is probably going to die before the discs, assuming you still have a computer that it can connect to...IDE anyone?), but is the OP trying to archive their works, or are they trying to make them easily accessible? They say they want a website so I guess they want something simple and easy to read, and not some equivalent of a dusty archive box locked away in a storage facility somewhere.
There'll be loads of unexpected problems that come up that can't be anticipated.
Just look at some of the websites that were abandoned in the early 2000-2010s and are still actively hosted today, but which are broken now because modern browsers refuse to load cross-origin resources, or the server's ciphers are no longer accepted, etc. They're still online, you just can't see the content with today's computers. You need a human (...or potentially an AI?) there to intervene and resolve those problems to keep it going.
Sure, you might say "well, my writings don't use HTTPS" or "I don't make cross-origin requests", but that totally misses the point. Who knows, in 50 years you may not even be able to read ASCII text in consumer browsers any more without specialist archival/library tools, just like we can't use what were at the time totally legitimate SSL ciphers.
I think that archiving your writings is different from having your site active and casually available.