An interesting adjacent theory is how much datacenters are becoming military targets to strike as part of disrupting initial defenses. It doesn't seem to have been the case in this instance, but I could see them becoming a more important target in the future.
Seems like it should be somewhat easier to bomb 50 datacenters than it would be to hack and disrupt 1000s of different services.
Again, this is just me thinking out loud on a tangent and this doesn't have much to do with this story, but I felt it was an interesting thought to share nonetheless.
The more interesting question is: how many datacenters are just plonked next to a high-value military target?
For infrastructure reasons, we plonk datacenters down next to airports big enough to fly major hardware into, and near where the big oceanic cables come ashore… and for strategic reasons, those are also the perfect places for military bases.
Data centers in space no longer look so unreasonable when the requirement is “redundancy against multi-site bomb strikes mid-op”. A little depressing when some pieces start to fit together.
I’m not exactly fully bought into the idea (for many practical reasons), but it seems easier to build (and replace) many ground stations than data centers.
Additionally, Starlink et al. are now able to communicate directly with cell phones. It should therefore be possible to route entirely in space between “data center satellites” and communications satellites and communicate directly with an end-user device, avoiding the entire terrestrial internet.
That's so interesting. Are any US military (or other US satellite states') systems running in "normal" datacenters, or do they have a few protected DoD datacenters in the US?
I do think, though, that at least based on the earlier Anthropic decision, we know the Anthropic models used by the DoD should be running in normal AWS datacenters.
I say this because the DoD threatened to forcibly take Anthropic's source code if they didn't agree to egregious demands, which implies the DoD doesn't already have it.
Perhaps the DoD ran Anthropic within AWS military modular DCs, but I find it extremely unlikely.
I am almost certain that even with OpenAI, which bent the knee to the DoD, it's still hosted on regular infrastructure, and the DoD is using these AI models on pretty sensitive tasks (during the capture of Venezuela's Maduro, Anthropic/Claude was used, IIRC, to handle some data analysis).
That said, any employee from Anthropic/OpenAI would probably know better how these models are actually deployed.
> Seems like it should be somewhat easier to nuke 50 datacenters than it would be to hack and disrupt 1000s of different services.
My bigger worry is that if someone nukes 50 datacenters all at once, or say all of Amazon's datacenters at once, the data stored there would simply be gone, especially given how many datacenters are concentrated in Virginia, USA (IIRC) and how many companies rely on just a few datacenter providers.
The larger threat to me with the loss of data is, firstly, the panic around public-facing services, but also the hedge funds, pension funds, or banks whose datacenters might be hit: if they lose their data, it's going to cause even more public mayhem.
Some might say off-site backups exist, but there has been at least one instance where a single Google accident led to massive issues for a $135 billion pension fund.
In my experience many Middle Eastern companies tend to only operate out of the Middle East AZs. They’re not backhauling their data and customers to us-east-1. If the goal is to severely disrupt Middle Eastern rivals, then you don’t need to hit every possible AWS datacenter.
Interesting, I didn't know that so thanks for telling me something new.
But why is this the case? Saving costs? Doesn't this recent attack on an AWS DC show that they aren't as safe as previously thought, especially in a region of conflict?
Is there any particular reason as to why this is the case?
Notably, they did have backups, as you would expect for a $135 billion undertaking. It's just that restoring from a calamity tends to be time-consuming (a key difference between failover and a backup).
IIUC, part of the reason ballistic missiles have multiple warheads is that some of them detonate high up to knock out air defenses and other electronics, allowing the rest to fall through to their targets. The last time we tried this experiment as a species was the Starfish Prime test in 1962, which caused some electrical havoc in Hawaii. These days our systems are probably more delicate and sensitive? All that is to say, in a scenario where nukes are going off, I'm not sure you'd even need to target any datacenters in particular; they're probably all toast by default.
Now you're worried? Come on. He is using the bully pulpit to try to pressure other companies into toeing the line. At least someone had the balls to tell them to get fucked instead of kowtowing.
He is also clearly in the throes of dementia, as his father was. It’s common for dementia patients to become rude and violent as their faculties slip away.
If only we had a constitutional process for removing presidents from office as they become obviously unfit for the office…
this reads like someone who hasn't seen dementia up close. I don't see his behavior as much different than term 1. simply more malevolent.
there's no obvious word searching. he's always been simplistic and unencumbered by the need for logical consistency. he was never a wordsmith. he has his stock phrases (e.g., "many people are saying...<insert lie>"), which he uses as a crutch, but also to great effect.
as someone who HAS seen dementia from a to z, I don't see it here.
Yeah I don’t see it either. What I see is a guy openly admitting he’s a dictator because he thinks that’s what’s needed. A guy that knows he doesn’t have much to lose and wants to do whatever crazy shit comes into his head
yes, but this is his administration; full-stop neo-Gestapo. He's surrounded and politically floated by white Christian nationalists. His dementia just lubricates the existing harm vectors.
I mean him being in the throes of dementia is certainly possible; but, more absolutely, he's a fucking asshole. The problem isn't just him, however; it's his entire administration, Congress, and the SCOTUS he stacked that further enable the insanity.
The only two things Anthropic asks are that AI cannot be used for:
- domestic mass surveillance,
- autonomous kill decisions.
That's it. The reason for the first one is clear: it violates the spirit of the Fourth Amendment at least.
The reason for the second is that if a kill decision is taken, let's say by an ICE agent who just got told 'I'm not mad at you' or something similar that would surely enrage him, he is responsible before the law. If it's an autonomous drone that shoots at political opponents/protestors, no one is responsible.
I will add that Google and Anthropic made their AIs play wargames. 93% of the time, their models escalated to the nuclear option.
By the looks of it, 2026 might be the year where reality and fiction will finally collide with AI and we'll be able to see if all the hype was warranted.
But like all the previous hype, most of the people that were the loudest won't say they were wrong, and they'll move on to the next thing, pretending they were never the ones who portrayed AI as the Holy Grail.
There are all sorts of algorithms in use that were once thought of as AI, but transitioned to being mere algorithms well before they entered public awareness, if they did that at all. Some are still useful and used everywhere, but they have never been thought of as AI by the public. For them, AI is a term that has long been reserved for some far-off, sci-fi future.
LLMs are not artificial general intelligence (i.e. not sci-fi AI). Why haven't they transitioned to being mere algorithms by now? Why is the public being told AI is finally arriving when it's really just another algorithm?
We have some truly slick and shady corporations involved in the bubble right now and they're marketing LLMs like tobacco. LLMs have been pushed out, at immense cost, to the public in a way that makes them more directly accessible to average people than any past algorithm. Young children can ask an LLM to do their homework for them. Middle managers can ask an LLM to create a (shitty) ad campaign for them. Corporations have gone to tremendous expense to make that widely available and, for the moment, mostly free. They seem to be following the Joe Camel school of marketing. Get them hooked while they're young so they come to you first when they're older! The only difference is that nobody is stepping in to stop the new Joe Camel from handing out free samples to kids.
Then there are the "go big" aspects of the bubble. The major competitors are trying to out-spend each other to dominance, but the suckers are so colossally big that their bubble is affecting global GPU, memory, and storage prices. This bubble is going to stress power grids wherever it operates and do considerable environmental harm. The financial games being played behind the bubble are absolutely stupid. The results, so far, are tantalizing for billionaires. LLMs offer the promise of being able to fire all their pesky and annoying human workers. They won't deliver on that, and none of these companies is ever going to make enough to pay their debts. There might be "too big to fail" government bailouts, but there are going to be some big bankruptcies too.
Useful algorithms will come out of all this, a lot of tears too, but not "AI".
> and we'll be able to see if all the hype was warranted.
Umm, what? For the past 3 years, every year I've said something along the lines of "even if models stop improving now, we'll be working on this for years, finding new ways to use it and make cool stuff happen". The hype is already warranted. To have used these tools and not be hyped is simply denial at this point.
Maybe AI is useful to you, but the US economy is currently buoyed by promises of AI replacing the workforce across the board.
Most of the Mag-7 are planning to spend over $500B on capex this year alone building out datacenters for AI pipelines that have yet to prove they can generate a sustainable profit. Yes, AI is useful in some environments, but the current pricing is heavily subsidized. So my point stands: the hype is not warranted.
> but the US economy is currently buoyed by promises of AI replacing the workforce across the board.
Still don't understand what the end goal is here. Assuming they don't deliver, then there are billions of investments that will go bust. Assuming they deliver, millions lose their jobs and there's going to be a bloodbath on the streets.
the end goal is productivity growth, aka the point of nearly every technology ever invented. The human story is about how we learn to do more with less.
> Assuming they don't deliver, then there are billions of investments that will go bust. Assuming they deliver, millions lose their jobs and there's going to be a bloodbath on the streets.
There is a third outcome that combines both of these.
LLMs can massively displace the workforce (and cause widespread social instability) AND the companies pouring hundreds of billions into them right now could, at the same time, fail to capture significant amounts of the labor savings value as late-mover alternatives run the race drafting their progress without the massive spend.
I'd honestly be surprised if this double-whammy isn't the outcome at this point. AI is going to have a massive impact on everything, but there is still no moat in sight.
Leaving aside the economic shitshow and other things.
I think you're right but for the wrong reasons wrt sustainable profit.
Specifically, you're overestimating how much it will cost to run AI in 5 years because you're extrapolating current high prices, and at the same time underestimating how demand will drive efficiency gains.
I think our little corner of the world has a distorted view of AI in that it is actually proving useful for us. Once they passed a certain level of usefulness... I remember when they were still struggling just to output syntactically correct code, you know, like, 18 months ago or so... they became a useful tool that we can incorporate.
But there are a lot of things playing out to our advantage. Vast swathes of useful and publicly available training data. The rigorous precision of said data. Vast swathes of data we can feed it as input to our queries from our own codebases. While we never attained the perfect ideal we dreamed of, we have vast quantities of documentation at differing levels of abstraction that the training can compare to the code bases. We've already been arguing in our community about how design patterns were just a level of abstraction our coding couldn't capture, and AI now has access to all sorts of design patterns we wouldn't have even called design patterns because they still take lots of code to produce. Now, for example, if I have a process that I need to parallelize, it can pretty much just do it in any of several ways depending on what I need at that point.
It is easy to get too overexcited about what it can do and I suspect we're going to see an absolute flood of "We let AI into our code base and it has absolutely shredded it and now even the most expensive AI can't do anything with it anymore" in, oh, 3 to 6 months. Not that everyone is going to have that experience, but I think we're going to see it. Right now we're still at the phase where people call you crazy for that and insist it must have been you using the tool wrong. But it is clearly an amazing tool for all sorts of uses.
Nevertheless, despite my own experiences, I persist in believing there is an AI bubble, because while AI may replace vast swathes of the work force in 5-20 years, for quite a lot of the workforce, it is not ready to do it right this very instant like the pricing on Wall Street is assuming. They don't have gigabytes of high-quality training data to pour in to their system. They don't have rigorous syntax rules to incorporate into the training data. They don't have any equivalent of being guided by tests to keep things on the rails. They don't have large piles of professionally developed documentation that can be cross-checked directly against the implementation. It's going to be a slower, longer process. As with the dot-com bubble, it isn't that it isn't going to change the world, it is simply that it isn't going to change the world quite that fast.
I think the point is that AI has to go much further and faster than it has in the past 3 years to justify the investments being made off the hype. The hype did its job; now the AI industry has to execute and create the returns they promised. That is still very much up in the air, and if they can't, then the tech was overhyped.
It's high time to stop accumulating debt while providing free pictures of pelicycles; just charge the full cost for them, enough to generate profits and pay back debt.
What we see now is literally burning money and energy to generate hype. The only true measures of success are financial and macroeconomic. If the hype is real, there should be no problem for the mighty AI to generate debt-free profits for its providers while the overall price level in the US goes down.
We observe the exact opposite which makes the AI hype act only as market manipulation for capital misallocation.
Unlike the old HPC days, where we only burned hundreds of millions on machines that were 80% efficient to get a 5-year lead, we are burning hundreds of billions on machines that are 30% efficient to get a 1-year lead.
AI is real, but the socio-political environment is far from conducive to some form of productive use of it, as opposed to using it as a war machine. AI isn't going to fail in that role, but very few will be happy about it.
I mean, disillusionment is the least of my worries.
> most of the people that were the loudest won't say they were wrong
I was so expecting to find this wind-up aimed at those peddling the "AI is hype" laziness.
It's laziness because they have little CS fundamentals to base such claims on, and the deductions can be made, just not clearly to people who need to study a lot more.
It's like watching an invisible train (visible to those with strong CS) rolling down the tracks at a leisurely pace. Those sitting in their stalled car on the tracks are busy tweeting about "AI HPY PE TRAIN." Until it wrecks their car, the gimmick is free oxygen. It's a lot easier to write articles than it is to build GPUs and write programs.
> It's laziness because they have little CS fundamentals to base such claims on
So, what CS fundamentals do you need to evaluate if AI is the real thing, or will disappoint in the future? Until a few months ago, coding agents were met with skepticism, until Anthropic introduced their new model and, with it, a hype train that cannot be rationally justified. Look, SOTA LLMs, and coding agents in particular, are impressive. However, current predictions about the future of software development (and the world in general) are speculative. There is little to no data showing whether AI can deliver on its promises. How could there be in this short time frame? No one knows what the future will hold, no one knows how coding agents will be integrated into our work life and everyday life in the long run, or what hard limitations they will reveal. No one can tell you how professions will change in the coming years; every prediction is purely speculative, and anyone making prophecies is either trying to cope with the uncertainty themselves or has some stakes in the AI bet. It would be nice if people were actually humble enough to admit that they have no idea what will happen in the future, instead of writing the hundredth doom and gloom post.
> However, current predictions about the future of software development (and the world in general) are speculative.
It's amazing to me how those willing to seize on the speculative nature of any uncertainty cannot recognize the inherent uncertainty of the inverse.
And a lot of exposure to deductive reasoning, vague ideas of automated theorem proving and formalization.
I won't pretend it's easy, but let's be clear, a small fraction of people who know things are being forced to entertain the hysteria of a vast majority who are unwilling to know things and just go around beating their chests and will continue doing so until the train hits them.
There are 2-3 minor architectural changes in between now and what I would identify as a completely unbounded AGI with clearly discernible dynamic, self-defined objective functions and self-defined procedures for training and inference. It can be done in megabytes. Oh god. Get me out of this forum. I wish to return to my code editor.
What exactly are you claiming here? That a handful of theorems about the limits of mathematics and provability somehow combine to show that the current LLM-based AI developments will inevitably live up to what is expected of them? And that this is obvious to a select few? That all seems unlikely, to say the least.
> a small fraction of people who know things are being forced to entertain the hysteria of a vast majority who are unwilling to know things and just go around beating their chests and will continue doing so
This entire thread should be annihilated, but since you mentioned being pedantic...
You're correct that a pure encryption algorithm doesn't use hashing. But real-world encryption systems will typically include an HMAC (or some other authentication tag) to detect whether messages were altered in transit. HMACs do use hash functions.
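For what it's worth, here's a minimal encrypt-then-MAC sketch in Python using only the standard library's hmac and hashlib modules. The function names and the 32-byte tag split are my own assumptions for illustration; in practice you'd usually reach for an AEAD mode (AES-GCM, ChaCha20-Poly1305) that folds authentication into the cipher itself.

    import hmac
    import hashlib

    TAG_LEN = 32  # HMAC-SHA256 produces a 32-byte tag

    def seal(ciphertext: bytes, mac_key: bytes) -> bytes:
        # Append an HMAC-SHA256 tag over the ciphertext so tampering is detectable.
        tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
        return ciphertext + tag

    def open_sealed(message: bytes, mac_key: bytes) -> bytes:
        ciphertext, tag = message[:-TAG_LEN], message[-TAG_LEN:]
        expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
        # compare_digest does a constant-time comparison to avoid timing leaks.
        if not hmac.compare_digest(tag, expected):
            raise ValueError("message was altered in transit")
        return ciphertext

The point being: even a "pure" cipher deployment ends up leaning on a hash function the moment you care about integrity.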
Consumer robotics strikes me as an engineering tar pit so deep it leads to hell. If full self driving is hard due to the long tail of unusual special cases, this is orders of magnitude worse.
Take FSD but multiply the number of actuators and degrees of freedom by at least 10, more like 100. Add a third dimension. Add direct physical interaction with complex objects. Add pets and children. Add toys on the floor. Add random furniture with non-standard dimensions. Add exposure to dust, dirt, water, grease, and who knows what else? Puke? Bleach? Dog pee?
Oh, and remove designated roads and standardized rules about how you're supposed to drive on those roads. There are no standards. Every home is arranged differently. People behave differently. Kids are nuts. The cat will climb on it. The dog may attack it. The pet rabbit will chew on any exposed cords.
We've all seen those Boston Dynamics robots. They're awesome but how durable would they be in those conditions? Would they last for years with day to day constant abuse in an environment like that?
From a pure engineering point of view (neglecting the human factor or cost) a home helper robot is almost definitely harder than building and operating a Mars base. We pretty much have all the core tech for that figured out: recycling atmosphere, splitting and making water, refining minerals, greenhouses, airlocks, and so on. As soon as we have Starship or another super heavy rocket that's reliable we could do it as long as someone was willing to write some huge checks.
And of course it's a totally untested market. We don't know how big it really is. Will people really be willing to pay thousands to tens of thousands for a home robot with significant limitations? Only about 25% of the market probably has the disposable income to afford these.
You'd have to go way up market first, but people up market can afford to just pay humans to do it.
> Will people really be willing to pay thousands to tens of thousands for a home robot with significant limitations?
The answer to that is no, probably for the foreseeable future. The robot demos we have now can't even fold laundry or put dishes away without being teleoperated. Both are extremely basic tasks that any household robot would be required to do, along with other messy jobs that put it at risk as you said: taking out the trash, feeding the pets, cleaning up messes, preparing or cooking food, etc.
What it would have to cost with current tech would be astronomically more than just hiring a human, and it would almost certainly come with an expensive subscription as well, whereas I can hire a human to come in and clean my home weekly for about $200/month.
Humans who aren't skilled require training regardless of how "unskilled" the task is.
Humans that are chronically unskilled also don't learn well, somewhat as a rule.
Humans that don't make much money have a high turnover rate from burnout. Additionally, those that can learn typically leave for greener pastures.
The bar isn't terribly high. Efficiency of scale in production will solve this eventually. I think the likely outcome is robots building themselves first.
Almost all developed economies are running into a fertility crisis right now, with labor shortages already appearing in the frontrunners of the trend, such as Germany.
Human work is going to cost more in the future, and immigration from countries such as Thailand or Vietnam is already slowing down. Even a mediocre robot will be sought after if it is the only choice you have.
I understand that. It's my personal opinion that one of the causes of low birth rates is that we continually choose to have robots solve our problems instead of choosing a human.
I think we could increase birth rates by making a taxation scheme in which the most marginally effective way to solve a problem is with a human, paid a wage which allows for that occupation to be a lifelong career.
They’ll be bought/leased, providing direct profit. Also, there’ll be maintenance revenue. I think they’re expected to cost around $30K.
In the case where they’re replacing a low-skill human worker, they’ll pay for themselves in 1-2 years…plus no sick days, no drug use, no theft, and they can work 24 hours a day, less any recharging time.
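Roughly the arithmetic being implied, as a back-of-envelope sketch with assumed numbers (the $30K figure is from the comment above; the wage and hours are placeholder assumptions, not data):

    # Back-of-envelope payback period; all inputs are illustrative assumptions.
    robot_price = 30_000          # rumored unit cost mentioned above, USD
    human_hourly = 18.0           # assumed fully loaded low-skill wage, USD/hour
    hours_per_shift_year = 2_000  # one full-time human shift per year

    savings_per_shift = human_hourly * hours_per_shift_year   # 36,000 USD/year
    print(robot_price / savings_per_shift)                     # ~0.83 years for one shift
    # Maintenance, a subscription, and downtime push that back toward the
    # 1-2 year range; covering multiple shifts per day pulls it forward again.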
Once large swaths of the planet have been rendered uninhabitable from human activity, we'll require them to continue extracting profit from those areas. (this is a downer comment but also realistically the first thing that came to mind when trying to think of a use for them).
A teleoperated robot is little more than a human worker with extra steps. (And an expensive, clumsy human worker at that.) I can't imagine many situations where that would make sense instead of having a human do the work in person.
I could see teleoperated help catching on. Americans are weird about staff. When I visit my old-world family, it's seen as perfectly normal to have someone living in an attached apartment, handling the cooking, the cleaning, etc. There are well-established etiquette rules, understood both by the staff and the family, which help navigate the rather complicated, radically unequal relationship between the two.
Americans by and large don't do that. The income gap between us software developers and minimum-wage workers is not that different from the gap between my family overseas and their staff. Yet it would be considered weird, extravagant even, for a $300-500k/yr developer to have dedicated help. We're far more comfortable with people we don't need to interact with directly, like housecleaners, landscapers, etc.
Teleoperated robots sidestep that discomfort, somewhat, by obscuring the humanity of the staff. It's probably not a particularly ethical basis for a product, but when has that ever stopped us.
Maybe you can scale to have one operator operate ten or a hundred household robots at a time.
An autonomous robot that has 99% reliability, getting stuck once an hour, is useless to me. A semi-autonomous robot that gets stuck once an hour but can be rescued by the remote operator is tempting.
Expect security and privacy in the marketing for these things, too, but I don't think that's a real differentiator. Rich and middle class people alike are currently OK with letting barely-vetted strangers in their houses for cleaning the world over.
- Services like maids or cleaners are usually scheduled; maybe you have to wait and open the door, etc. Maybe they can't make it that day because of a snowstorm, etc.
- Services are normally limited to certain hours. With a remote operator, the robot could do laundry all night ran by someone in a different time zone.
- If needed could be operated in shifts.
- Other new use cases could arise, e.g. wellness check on elderly, help if fallen or locked out etc.
Low duty cycle. If one human can drive 20 robots, because most of them are sitting still most of the time, it starts to make sense, vs. a maid or butler who can obviously only really work one home at a time.
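A toy calculation of that duty-cycle argument, with made-up numbers (the stuck rate and rescue time are pure assumptions, not from the thread):

    # How many semi-autonomous robots can one remote operator babysit?
    stuck_events_per_hour = 1.0   # assumed: each robot needs help about once an hour
    minutes_per_rescue = 3.0      # assumed: operator time to un-stick one robot remotely

    operator_minutes_per_robot_per_hour = stuck_events_per_hour * minutes_per_rescue
    robots_per_operator = 60 / operator_minutes_per_robot_per_hour
    print(robots_per_operator)    # -> 20.0, matching the "one human drives 20 robots" guess

Longer rescues or higher stuck rates scale the ratio down linearly, which is why the economics hinge on the robots being mostly autonomous.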
The person in a third world country is not a slave, they're doing the job for a few bucks a day because it's still better than other options available to them.
What is the difference between being a teleoperator in India for a californian family robot, and being a software dev for a company selling SaaS products to the US market?
Yeah but with a teleoperated worker you can have them work remote from a place with poor labor regulations and extremely low pay.
The future with this as a reality is a really dark place, where the uber wealthy live entirely disconnected from the working class except through telepresent machines half a planet away. That way the wealthy don't have to be inconvenienced by the humanity of the poors.
If a robot can do basic cleaning, laundry, and dishes, that's worth a lot to a lot of people. Dual-professional households have the money, and not having to do this housework could save some marriages.
I don't think it actually is worth a lot to people. I know dual-professional households who don't even use their dishwasher consistently, and multiple companies have gone bankrupt trying to bring automated laundry folding (which does exist in industry) to the consumer market.
Maid services are generally expected to handle "everything" for a pretty expansive definition of everything. They pick up scattered stuff and put in a sensible location, they arrange everything visible in an aesthetically pleasing way, they take out the trash, if there's some weird dirt that's hard to clean they creatively problem solve to find a way to get it off. I don't think there's a market for a service that can only handle basic cleaning.
(Will someone eventually invent a machine that can do all of that and more? Yes, probably, and they'll make billions when they do. But Tesla has offered no reason to believe this is on their horizon, and the focus on a humanoid form factor strongly suggests that they're optimizing for media appeal over practical capabilities.)
Maids are paid a VERY low wage in exchange for being able to take on an almost unlimited list of general tasks, from folding laundry to managing kids to mopping stairs. We are decades away from robots with that capability, and they are intended to replace people who are often not making even minimum wage? Please. Get real.
Robot vacuum with a mop, washing machine, tumble dryer and dishwasher reduce housework to like an hour per week, ie 30 min/person/week. This can be higher if you live in a big house, but if your marriage can’t tolerate 30 mins of house work a robot will not solve it.
> Dual-professional households have the money, and not having to do this housework could save some marriages.
Dual-professional households could hire a maid and pay for marriage counseling and still save money compared to a $20k robot plus whatever a subscription would run.
I can google "maid service seattle" and see dozens of entries. The first one in the Yelp list is available to book and will clean a 1000-1500 sq ft, 2 bed, 2 bath house for well under $200. There's even a decent discount if you book it as a weekly or biweekly service.
That feels pretty affordable? I know it's a scale, but minimum wage here is $21/hr now.
I have enough time to take care of my own space, but for comparison Comcast internet is well over $120/month for crappy speeds. I think in comparison a little more than that for 1 deep cleaning a month is reasonable.
They haven't said it explicitly. But the reason Waymo can add five cities this year is very likely that they are at least at break-even on opex. They likely reached that point sometime last year, and it seems to have held up.
So I wouldn't call robotaxi service unproven. But I would call the idea that you can claim to be running a robotaxi service without depots, cleaners, CSRs, and remote monitoring that can handle difficult situations in a more sophisticated way than each car having a human monitor it, naïve.
I read that as meaning even the scaled robotaxi service (Waymo) does not throw off enough cash to offset the loss of Tesla's vehicle sales unit. (The putative Tesla buyer they are dissuading from purchase would have to take a whole lot of robotaxi trips to generate the same amount of profit for Tesla. Assuming Tesla can get robotaxis working.)
In the 2000s publishing pivot to the Internet, this was known as "trading physical dollars for digital pennies."
This seems to be a major strategic decision of Alphabet pretty much across the board. I have only recently noticed the stark contrast to the constant hype trope you see in their competitors.
A lot of the current valuation is based on Elon drumming up investor expectations. As they start to lose their spot as market leaders in EV, Tesla's inability to deliver on what Elon promised will become more clear as their competitors level with and surpass them.
Moving to new, unproven markets is fruitful ground for someone like Elon to drum up expectation and hopefully keep distracting people from the fact that he's had very few recent successes to show for all the hype he receives.
On top of that, despite huge investments of both time and money into both areas, seemingly rivaling their competitors', Tesla does not seem to be anywhere close to a market leader in either segment. They have to both prove the markets and prove that they can compete in them.
Maybe that's the driver. I always figured keeping Musk on was a sort of suicide pact, without Musk the company might be more traditionally valued, but that means the stock would tank. So they have to stick with him.
Staying in autos, eventually folks figure out the math and the stock tanks ... so they have to keep moving and keep that sort of aspirational stock price.
To be fair, the market has been decoupled from reality on the ground for a while now. Just the fact that companies were able to operate giving stuff away for free, only to suddenly yank the chain in a desperate bid to gain profitability later, should be enough of a signal.
That said, as much as I dislike Musk ( and I have bet money against him before ), his instincts are likely not wrong. And it does help that, clearly, he knows how to bs well.
I am not saying you are wrong, but I think he is just a poster child for everything wrong with current market ecosystem.
Except that it doesn't need to be consumer to start off. You can build specialized robots that deliver value at a massive scale. Imagine a "Prep Cook" at a restaurant; there are millions of these around the world. If the Optimus can do that job for $1,000/month, that's likely to be more efficient and better quality than a human. And there have to be many jobs like this.
Robots that specialise in one thing already exist. In big factories, where they'll peel and dice tons of onions per hour, being fed via unsexy conveyor belts into massive dicers.
That's the problem with robots like Optimus. The "specialized" part (cutting the onions) is 1% of the skills. You'd still need the other hard 99% (prehensility, vision, precise 3D movement, etc.).
And if you sorted the hard 99%, what's the point in specialising in cutting onions, when the same exact skills are needed to fold and put away laundry?
No, automation doesn't reduce jobs, i.e. doesn't reduce consumer spending, as consumer spending is determined by output, which automation boosts.
The savings from automation in a particular sector are spent elsewhere — wherever services are more costly (in labor). That's the dynamic behind Say's law, which shows that spending on less automatable jobs like barbers and physical therapists increases as automation reduces costs in other sectors of the economy.
I understand this is a well-developed economic theory and I am completely uninformed, but this doesn't make intuitive sense at all.
If 1 million prep cooks are replaced by robots, will food become cheap enough that those prep cooks can all get jobs as barbers, and the money people spend on food will shift to haircuts?
Will the food be so cheap that all those prep cooks can afford to learn to cut hair?
Also consider the money velocity of a human vs a robot. A human is probably paycheck to paycheck spending everything they earn. Robot earnings go back to company, which makes the stock go up, 90% of which is owned by billionaires who just keep hoarding and hoarding.
Adaptation does not require mass retraining into new professions; it happens through task simplification, AI-augmented shallow competence (less qualified people can do more advanced work), partial work, income stacking, and lower subsistence costs. As automation advances, less-automatable sectors (personal services, care, local physical work) see wage pressure rise, consistent with Say’s Law, because yes, what people save at restaurants is spent instead at barbers, massage therapists, nail technicians, etc.
As for the gains from robotics, they go just as much to workers as to investors. Remember, investors are competing with each other, so they have to keep cutting prices. And that means workers see their wages buy more goods and services, given those goods and services cost less to buy. When wages buy more, that's effectively the opposite of inflation. In inflation-adjusted terms, that equates to a wage hike.
A general drop in services, yes. A drop in the services being provided by the robot, probably not. I doubt many prep cooks are regularly eating at the restaurants they work at. When the robots are taking millions of jobs in all areas of service, there might be a problem.
> How can the transition be rationally justified? Let alone the valuation.
Musk seems to have successfully decoupled investors from results. The stock price seems to move far more based on what he says and does than what the company says and does. It's completely irrational. Tesla is a huge bubble.
Oh, well let me get in my sub-$30,000 Model S, with a swappable battery and full-self-driving capabilities, and take a fully automated trip to the Hyperloop downtown so I can catch a quick ride out to O’Hare so I can fly out to watch a successful Starship launch…
…oh wait. I can’t. Because for all his successes, Musk has also sowed quite a lot of bullshit that has gone precisely nowhere.
Just to point out, Hyperloop wasn’t intended for local transportation and isn’t a Musk company, just some back-of-envelope speculation (back-of-napkin?) that others have pursued.
> so I can fly out to watch a successful Starship launch
Not just watch a launch, but go to O'Hare to launch and go to Sydney in ~30min. In September 2017 they said we'd be flying Earth-to-Earth on a BFR last year.
And in Musk’s case, “longer” means “abandoned”. Like the cheap Model 3. Or the Hyperloop. Or swappable batteries. Or X as an everything app that includes banking.
In everyone else’s case too. This was supposed to be a Startup News site. Instagram was supposed to be a way to check into cool places in your neighbourhood and see who was around.
More examples, please! Reusable rockets is the load-bearing example, I don't think that argument works without it. You could maybe squeeze in "he kickstarted the EV market".
maybe? Tesla is the biggest reason there are any electric vehicles on the road today, I haven't heard of anyone (knowledgeable) who has even hinted otherwise. I can understand not liking Elon, but trashing the companies he's formed and the marvels they've created is just proof you don't value a truthful understanding of the world.
I agree with you, but I find it interesting that BYD got started around the same time as Tesla. They took quite different paths to international distribution.
I guess that may be technically correct, but that's really just an anti-Elon Reddit-type comment you're parroting. There were like four people, and him "buying it" was just investing way more money than anyone else wanted to.
Starlink was also ridiculed. "Some upstart beating industry veteran at crewed spaceflight" was also treated as a ridiculous idea (see extra funding Boeing was able to extract from congress).
To be clear, Neuralink has shown some promising signs. Has also shown some terrible signs.
And then I don't know if Musk is oversimplifying for a soundbite or more of his Dunning Kruger, but some of the descriptions seem to lack any knowledge of neurology. He describes a universal chip that will do different things and solve different issues depending on what part of the brain it's implanted in. That's not how it works at all.
I was using the classic idea of the flying car as an example of a thing that has been out of reach as a product for normal people and may not actually be successful if it were to really be sold.
Replace flying car with whatever example you want.
To put it in a different way, you could be so busy figuring out how to do it that you don’t figure out that a business case doesn’t actually exist.
I wasn’t trying to comment on any of Musk‘s other companies specifically. Only that we don’t know if making robots will actually make money.
If this were the case Waymo would already be gone. Tesla, under Musk, has missed a big opportunity. The claims of FSD "next year", by Musk for the past decade, fall on deaf ears now. While Waymo was focusing on building it Musk was multi-tasking and letting Tesla falter. RIP Tesla and what could have been. The reality is more clearly that Tesla could have been an amazing EV platform in totality. Instead they are being beaten in: driverless, PSD/FSD, and home energy production & storage. The only thing Tesla has a real lead in is still their EV power distribution footprint. I wouldn't be surprised to see that sold off in the next 5 years given their direction.
If Waymo didn't exist, we'd instead be lauding the progress of Wayve, Pony, and WeRide.
At this point, Tesla have the potential to be at best maybe #5 globally. No wonder they're so desperate to hide behind a tariff wall in their home market.
And yet Tesla's FSD v14 critical disengagement rate is behind where Waymo was over a decade ago, when Waymo first started reporting this figure, even worse in city driving to compare apples to apples.
There might be a subset of people, such as yourself, that looks for CUDA as a hard requirement when buying a GPU. But I think it's fair to say that Vulkan/SPIR-V has a _lot_ of investment and momentum currently outside of the US AI bubble.
Valve is spending a lot of resources, and AFAIK so are all the AI companies in the Asian market.
There are plenty of people who want an open-source alternative that breaks the monopoly that Nvidia has over CUDA.
I think the new AMD R9700 looks pretty exciting for the price. Basically a power-tweaked RX 9070 with 32GB of VRAM and pro drivers. Wish it had been an option 6-7 months ago when I put my new desktop together.
Great. I’m with you there. There is no way that’s describing Apple though.
They’re not open source, for sure. But even setting that aside, they don’t offer anything like CUDA for their system. Nobody is taking an honest stab at this.
I know you are making stuff up as I own a Tesla and live on the East coast and FSD is nowhere close to what you are describing. You can barely use FSD 4 months out of the year.
I use their adaptive cruise control, and unless they intentionally nerfed it compared to their FSD, I have to be really careful using it in the DC area, as it will randomly brake in certain places.
With the latest Tesla updates I can tell it thinks there's a grade or curve issue that is causing it to brake; but before the latest update it would just randomly brake in certain places (coming down from 70 to 40 very quickly), and that is just dangerous in the DC area.
Unrelated to your friends, but a big part of learning is to do tedious tasks. Maybe once you master a topic LLMs can be better, but for many folks out there, using LLMs as a shortcut can impede learning.
I'm ~8,000 XP into MathAcademy right now, doing the calculus stuff I skipped by not going to college. I'm doing a lot, lot, lot of tedious practice. But I know why I'm doing it, and when I'm done doing it, I'm going to go back to using SageMath to do actual work.
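To make that concrete, a minimal sketch of what "handing it back to the machine" looks like, using SymPy here rather than SageMath proper (Sage wraps much of the same machinery); the integrand is just an arbitrary practice-style example:

    from sympy import symbols, diff, integrate, exp, oo

    x = symbols('x')

    # The sort of computation you grind through by hand while learning,
    # then happily hand off to a CAS once you understand what it's doing.
    print(diff(x**3 * exp(-x), x))                 # product rule practice
    print(integrate(x**2 * exp(-x), (x, 0, oo)))   # improper integral; evaluates to 2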
That's really neat. I also had a similar need to dynamically manage DNS records and decided to create a Kubernetes operator to manage them instead (https://github.com/pier-oliviert/phonebook).
I do like your approach, it's really refreshing. I'd probably want to split the API keys from the rest of the config files.
How much I would love to be able to use this. Every few years, I try to replace my Macbook with a different laptop and linux. But the "finish" that the Apple products have is unmatched.
Especially the keyboard/trackpad support. It's always been underwhelming with Linux. I know this is a subjective take and it's unrelated to Hyprland.
I hope that my next laptop can finally be the one where I make the full switch to Linux, because I would love to use Hyprland.
If you want polish, don't use a build-it-yourself desktop. You are responsible for adding the "magic" on a lot of these desktops, and if you don't do the work you generally don't reap the reward.
KDE and GNOME both work out-of-box for most features and configurations. If you don't try to chase the r/UnixPorn dream you can end up with usability in spades.
This has changed drastically in the last couple of years. The Asus G series and Razer blade series are both very close. If you’re in 14”, check the G14 (razer doesn’t do oled). If you’re in 16 — compare the G16 and Blade 16.
I use custom external keyboards though, so maybe I’m not the best reference for your specific complaints.
I haven't jumped back since the M1s. We are not ready to jump again in terms of battery, power management, and fanless operation. I often read about the experience with Qualcomm chips in Samsung, Surface, Lenovo, and Acer devices. It is clearly far, far away. Asahi Linux is not the answer either.
Maybe you could try a chargeback. Having to pay for a Linux license and not have such a basic feature (because everyone has touchscreens in 2024) is outrageous. I heard they don't even accept code contributions to fix this mess.
Also, I do contribute to open source - my GitHub says I've contributed to 53 repos. But that's irrelevant - I should be able to criticize open source software without the response being "lol how about you fix it", because in that case every issue on GitHub could be closed with the response "how about you submit a PR".
I don't think you've made your point. Windows and MacOS are honestly more configuration if you're a developer - Linux is exactly the way you want it to be out of the box.
If you perceive macOS as your only option, I grieve for your freedom more than for Windows users'.
Uhh... not all linux distros. I speak from direct experience with the latest Nix and Ubuntu as of literally yesterday. There's a reason why I know it takes 3 programs to get volume gestures working.
Oh, you mean like having to install "Display Menu Pro" on macOS in order to access my actual native screen resolutions?
An action for which I normally don't have to install anything for, in, well, <checks notes> any OS other than macOS.
I always have to laugh at macOS users who talk about how polished everything is--whose menubar right side has enough app hieroglyphics to make an ancient Egyptian envious.
> "I try to replace my Macbook with a different laptop and linux. But the "finish" that the Apple products have is unmatched."
Having to install three programs and a language runtime AND ALSO configure them, and have one of the programs be a fork because you want "instant" feedback while changing volume (instead of having to wait until the end of the scroll) is absurd. It isn't simply "lol install 3 programs and it Just Works" - getting everything to interface with each other and then scripting everything up is a chore.
It's a bloody shame that something this time-consuming is necessary for something that comes out of the box in other mainstream OSes.
But of course your comment history is mostly one line zingers so why do I bother.
Let's be honest, very little is "built-in" on Linux. What you guys are referring to as Linux, is in fact, GNU/Linux, or as I've recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.
I install and configure at least around 20 packages/extensions/tweaks on Linux, macOS, and Windows. Another 10+ browser extensions per browser. That’s not even listing the apps themselves (the ones that aren’t primarily a utility or tool).
There isn’t an OS that’s good to go out of the box. But the Apple hardware on a MacBook is completely unparalleled; people acting like there’s anything like it on the windows side are delusional. As are the apps (from the store, from the internet, from GitHub, and from brew). The quality is just much better, and so is the likelihood of finding an existing app/utility for a niche use case. Many packages on Linux are just binaries without even so much as a TUI, let alone a GUI (NordVPN). Oh ya Brew, also substantially more pain-free than other package manager experiences.
> Oh ya Brew, also substantially more pain-free than other package manager experiences.
No joke - I have never heard someone that uses multiple package managers praise Brew. If you have to use it in a larger org, across system architectures or are versioned across system upgrades, it is the single most fragile package manager you can employ. pamac, apt, rpm and eopkg all wipe the floor with Brew.
Nix and Macports are a bit better, but anyone that's used a proper package manager knows Brew is a lightweight.