NIMBYism has never been about preserving neighborhood character, or about noise and traffic concerns. Menlo Park is not Big Sur. Sure, some concerns are reasonable and should be investigated, but most of the time they're bureaucratic distractions that have been weaponized by people who want to delay progress and protect their investment.
For most Americans, a house is their primary savings account, retirement plan, and probably where they keep the majority of their wealth. We don't build new housing in old neighborhoods because it would devalue the investment of too many people. Until we can solve this problem (where people are incentivized to pull the ladder up behind them), we will always have housing shortages. It's just too profitable.
Anecdotally, what we found in Austin was a combination of two factors:
First, awareness of the futility and selfishness of "growth elsewhere" as a solution is much higher in younger people — and by younger, I mean currently under fifty. Generational turnover in Austin had been eating away at the NIMBY majority, and conversations about housing in Austin have long been polarized more by age than by left/right political sentiment. There's a caricature, with a strong vein of truth, of the old Austin leftist who has Mao's little red book on their shelves and thinks apartment buildings are an abomination, and Austinites of that generation are experiencing mortality. At the same time, younger people are adopting more and more urbanist mindsets compared to their parents.
However, I think a much much bigger factor was the influx of younger people, especially young people with experience of larger cities, diluting the votes of the older NIMBYs. Austin has been shaped by growth for half a century, but its "discovery" in the 2000s and very brief status as a darling of coastal hipsters (remember that term?) has had a lasting effect on Austin's popularity and its demographics. It's been twenty years since it was the "it" place for Brooklynites to visit, but in that twenty years, it's had a lot of exposure for young urban dwellers, and some of them discovered they liked it and moved here, bringing their comfort with dense living and their appreciation that growth can bring a lot of positives.
Personally, every homeowner I know in Austin has seen their house depreciate significantly this decade, and I don't think it changed a single person's mind about Austin's housing policy. People who opposed the reforms are bitter about the outcome, and people who supported them say, "it sucks for us personally, but it's what we set out to accomplish, and we're glad that it worked."
People see lower property taxes as a silver lining for short-term swings in the market, but I don't know anybody who thinks this is a short-term swing that they can ride out.
Nobody is happy about their property values going down long term. It exposes them to the risk of a big loss if they're forced to sell because of events in their life.
> Austinites of that generation are experiencing mortality.
This is such a funny and novel way of saying "old people in Austin are dying" I just had to point it out.
Also, I like the way this comment is written in general. Felt easy to read for its length, and most importantly the tone stayed fun and personal while still being informative and on topic.
> For most Americans, a house is their primary savings account, retirement plan, and probably where they keep the majority of their wealth.
If you allow for increases in density, that house (actually the land beneath it, but still) becomes more valuable as it's redeveloped. So that American homeowner does benefit, by unlocking the upside of "evil gentrification" (or really, density increase).
That can only happen if the higher density coincides with equal economic growth in the neighborhood. Otherwise, the higher density could result in a negative home valuation trend.
Given that uncertainty, and the possibility that higher density brings more traffic, noise, and crime, NIMBYs are arguably taking the correct position for wealth preservation and quality of life.
"Traffic" doesn't come from higher density, it comes from zoning bans on mixed-use neighborhoods which force people to drive everywhere. The "crime" argument is especially silly: why assume that higher density only ever attracts criminals? Usually, having more people around is a positive.
You can see why people assume higher density means "more crime": with more people around, keeping the absolute number of crimes constant (which is the only thing people ever notice: every violent or sexual crime gets repeated in the news) would require a corresponding increase in the efficiency of crime-fighting, and American police aren't up to the task, even if they were motivated to do so.
A paper came out about this recently: The City as an Anti-Growth Machine.
> Logan and Molotch's “urban growth machine” remains foundational in urban theory, describing how coalitions of landowners, developers, and politicians promote urban growth to raise land values. This paper argues that under financialized capitalism, the dynamics have inverted: asset appreciation now outweighs productive investment, and urban land is increasingly treated as a speculative asset.
I'm not sure why new housing devalues old housing. In my mind, higher density generally makes an area more desirable (e.g. because higher density enables more jobs and better infrastructure) and raises the value. Imagine, as an extreme example, an existing house in the middle of nowhere around which a metropolis is developed. Surely the value of the house, or at least the land it is built on, goes up, even though it loses its "cabin in the woods" appeal.
You think if there were modern highrises in Menlo Park a tiny 2BR shack next door would still sell for $2M? It’s a supply and demand issue, nothing more.
What is your mental model for this then? If the "2BR shack" can be built from scratch for 300k, and the value for the lot + shack is $3M, then the land value is $2.7M. Most expensive real estate is land value, not actual structure value.
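Spelled out as arithmetic (using the comment's hypothetical figures, not real market data):

```python
# Back-of-the-envelope land valuation implied by the comment's numbers:
# if you know what the structure would cost to rebuild, the rest of the
# sale price is land value.
total_price = 3_000_000   # lot + shack sells for this
rebuild_cost = 300_000    # cost to build the shack from scratch
land_value = total_price - rebuild_cost
print(land_value)  # 2700000
```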
I see what you're saying; my point is that the principal thing driving its value isn't the land or the shack, it's the regulatory framework of the area.
Yes, it would go way, way, way up because if there is a high rise next door someone wants to knock down the shack and put up another high rise, a commercial building, etc.
When regulations are reduced to allow more density, the value of the land goes up because its productivity increases. The land can do more now, e.g. hold 10 apartments vs 1 house. The same land generates more rent so developers are willing to pay more for that land.
Meanwhile, the value of housing units goes down due to increased competition among sellers/landlords.
Consider two zoning changes.
1) You are a homeowner and more units are allowed on your parcel, e.g. single-family -> duplex. That increases your land value.
2) You are a homeowner and there is more density around you, but not on your parcel, e.g. apartments are allowed nearby but not on your street. Your land value does not increase. Your home value decreases due to increased competition. (Of course, there may be long term effects like the increased density actually leading to economic windfalls in the area, increasing its desirability, and then increasing your home value.)
It's not that density per se drives down existing costs, but density almost always brings more housing stock to the market (unless housing is simultaneously being torn down elsewhere), and more housing stock drives down the cost of housing, which is the point of the original article.
So if we take it as an assumption that density increases housing stock, there is lots of evidence that density drives down prices of existing land/home values.
Density is not going to drive down cost for the same kind of housing. A SFH is not the same as a smaller home on a denser plot, much less an apartment block in a high rise. So the SFH owner who pursues increased density does indeed benefit.
The article only talks about rent, not price of housing, which I think is an important data point.
Homeowners don't want housing prices to fall. Ever. They don't care about rent prices (at least, not directly). But renters care about both: obviously lower rents are good, but many also want to enter the housing market, and it's prohibitively expensive.
Perhaps falling rents have a similar effect on home prices: buying a home in order to rent it out becomes less attractive due to lower rental revenue, so prices fall. Not sure; the macroeconomics of housing never made sense to me because it's never as simple as pure supply and demand.
As an example, my wife and I finally decided to buy a house in a fast-growing CA suburb (not in the Bay Area). The house was constructed in 2021 and sold for $611k. Plenty of renovations have been done on it since, around $20k worth by our estimate, and the neighborhood and surrounding area have only grown since then (more parks, housing, great schools, stores, etc).
The house was listed for sale at $600k; even then we were able to underbid and get our offer accepted. Inspections turned out clean, just minor cosmetic issues.
I don't keep an eye on the rental market but we've lived at two different rental properties and both of those places went up in rent once each, so I can only assume that rent is going up everywhere in this area.
Point is, rent and real estate don't always go in lock step.
I think people hold a wide range of opinions. Some hardcore housing resisters get a lot of sway because of the way the processes work (public consultations, activism, etc). Lots of people are a bit sceptical for pretty legitimate reasons: noise, traffic, disruption, aesthetics.
I think there probably are balances where people could generally be happier with new construction and that opinion could be clear enough to overrule those who would never be happy with it. Things like:
- ways of having locals vote on new development, with constituencies small enough that they can be paid off (i.e. some of the gains that would have gone to developers, or other positive externalities, can be captured by those who are more affected) with lower taxes or new roads or parks or whatever
- making residents vote instead of having consultations will lead to less bias in favour of the most obnoxious
- allowing apartment blocks to vote to accept offers of redevelopment (eg you get a newer apartment; more apartments are added to the block and sold to fund the redevelopment)
- having architectural standards that locals are happy with for new buildings
- allowing streets to vote to upzone themselves (I don’t love this, as it’s basically a prisoner’s dilemma: if your street does it, land value increases and you gain; if every street does it, land value only increases a bit, but now you are upzoned)
I basically think there are developments that could be broadly appealing, and in lots of places we are stuck in a bad local minimum of bigger governments trying to push development on unwilling smaller governments and groups.
Fundamentally as a society we need to stop treating housing as an investment. It is and should be a utility.
Surging property prices are a relatively new phenomenon (as in, post-WW2). The true origin of NIMBYism, at least in the US, is (you guessed it) racism. Long before segregation ended, and long after, there was economic segregation: redlining [1], HOAs [2], the post-WW2 GI Bill [3], where highways were built [4][5], etc.
In fact this is a good rule of thumb: if you're ever confused why something is the way it is in the US, your first guess should pretty much always be "because racism".
Case in point: my parents. Built a house in 1988 and they still live there. Two people in 3500 square feet. Four bathrooms and five bedrooms. Meanwhile, you need a family income of 3x the median to rent a townhouse 1/3rd the size nearby.
This is beyond ridiculous and it’s totally unsustainable.
Hate to be the bearer of bad news here, but the boomers will never die. Gen X will become the new boomers, and then the millennials after them. Individual people die, but interests stay the same.
There’s a lot of truth here, but two countervailing points: first, younger generations own fewer homes than Boomers did at equivalent ages; second, Boomers are particularly blind to the effects of zoning and strongly oppose development because they saw firsthand the effects of 1950s urban redevelopment. They also love cars.
We younger generations have seen firsthand the negative effects of zoning; we do not possess a visceral opposition to development; and we have a much greater appreciation of walkable neighborhoods.
> If NIMBYs were primarily motivated by making money, the prudent thing to do would be to support unrestricted zoning and then develop or sell the lot.
That is highly dependent on what exactly is being built next to your home. Sure, if it's more luxury housing then it'll probably drive the value of your home up. If it's low-income housing then it probably won't. And what we need is more of the latter rather than the former.
> you can take out loans against the value of the equity but this isn’t particularly common.
That's because it's an investment: you get the return once you finally sell your home. Only in a pinch, when someone needs a large amount of money to start a business or pay for an emergency, will they mortgage their house.
> And what we need is more of the latter rather than the former.
You just need to wait. The luxury housing that gets built today becomes low-income housing as it ages. There's no short-circuiting that process the way the incentives are set up, but you can drive down prices across the board by building more, even more luxury housing.
>For most Americans, a house is their primary savings account
This is true for California, where people (foolishly) rely on their home value as their retirement plan, which further incentivizes NIMBYism.
But in places like Texas (and other areas with affordable housing), the house is just treated as something you pay off to have a low housing cost in retirement. And your investments are your retirement+savings account.
I wasn’t trying to say one was better or not, just different. Californians wrap up a large amount of their retirement savings in their houses though, so keeping those home prices high is important to them and that’s a reason for stalling development.
I think Californians do, a lot of the time, retire with a higher net worth. But most of them do that because they’re relatively more house-poor during their working lives: they take out larger mortgages, and those payments force more savings into their net worth.
As opposed to Texans, who have higher disposable income since they have smaller house payments. It’s less incentive to save so they may spend more.
So that’s a partial advantage to California - the expensive homes force a higher savings rate, naturally.
But, at retirement age, a lot of their net worth is tied up in their home. So to unlock a lot of those savings they need to move to a lower cost of living state like Arizona, Nevada, Florida, etc.
While the Texans can just stay in their paid-off house.
So yeah it’s just different.
Texans are just paying off their home throughout their life and staying in it. They have larger disposable income to go towards other stuff (kids, lifestyle) while Californians gotta pay that mortgage
It's not that they're "intuitively better"; it's that prior generations of them passed less insane state and local law, and they're not at the tail end of a ~20yr industry boom, so "pay down my house and cash out to somewhere cheaper" doesn't make sense as a retirement strategy for as many of them.
Master planning has never worked for my side projects unless I am building the exact replica of what I've done in the past. The most important decisions are made while I'm deep in the code base and I have a better understanding of the tradeoffs.
I think that's why startups have such an edge over big companies. They can just build and iterate while the big company gets caught up in month-long review processes.
For most working-class Americans, education is a form of job-training.
In the AI maximalist world where humans are obsolete and cannot contribute to the economy in any meaningful way, there is actually no reason for public education to exist beyond being a free day care for non-rich people. Why learn algebra/calculus at all if the AIs can do it? Why should the US invest billions of dollars into public education instead of data centers?
I hope the US and AI leaders are still "speciesist" in that they put humans first. I hope AI will cure all illnesses, unlock space travel, and lead to a flourishing of humanity, not just a flourishing of datacenters. It's also possible that AI just cleaves societies in half and we are all worse off for it.
I thought the same as gp, that putting teachers at high risk invalidates the whole visualization. If this is intended to be useful for future career planning, with meaningful gradations between specializations, then it should exist in the probability space where human agency still matters. And in that space, from a Ricardian and political-economy perspective, high human-touch jobs with strong public unions should be among the safest.
To borrow a concept from Simon Willison: you need to "hoard things you know how to do”. You need to know what is possible; you need to be able to articulate what you want. AI is a fast car, but it’s empty and still needs a driver. As long as humans are still in the loop, the quality of the driver matters.
Terminology matters, if you use the right words, the AI will work better.
Just saying "use red/green TDD" is a shortcut to a very specific way of fixing bugs.
Or, when you use a multi-modal model to transcribe video, saying "timecode" instead of "timestamp" will improve the results (AV production people say timecode, programmers say timestamp; it hits different parts of the training material).
Good advice to the younger folks. You can afford to look stupid. So go ahead and do that thing you wanted to try. There's more acceptance because of your age. You're expected to fail in some ways.
Once you have a mortgage, a reputation to maintain, an image of competence to uphold at work, you pretty much can't afford to look stupid in my opinion.
Intelligence and ignorance are two different things. It is a sign of intelligence to be able to acknowledge your ignorance when it exists. Then you use your intelligence to correct that. Even with a mortgage this has never failed me. 20 years, 2 employers due to an ownership change, and several RIFs survived.
The power of saying, "I don't know, but I will find out" is underestimated.
Max Tegmark, a cosmologist and MIT professor, is known for his "provocative ideas" and has a self-imposed rule regarding his work: "Every time I've written ten mainstream papers, I allow myself to indulge in writing one wacky one". This approach allows him to pursue unconventional, "crazy" theories without jeopardizing his reputation as a serious scientist.
I've managed to go my whole career using regex and never fully grokking it, and now I finally feel free to never learn!
I've also wanted to play with C and Raylib for a long time and now I'm confident in coding by hand and struggling with it, I just use LLMs as a backstop for when I get frustrated, like a TA during lab hours.
> my whole career using regex and never fully grokking it
Sorry to hear that; nobody ever told me either. Had you invested a bit of time earlier in your career, it would have paid dividends 100-fold. The key is knowing what’s wheat and what’s chaff. Regex is wheat.
With that said, maybe you tried.. everyone has their limits.
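For what it's worth, the payoff is usually in one-liners like this (a minimal Python sketch; the sample text and pattern are invented for illustration):

```python
import re

# Pull ISO-style dates out of free-form text: a two-minute task with a
# regex, a tedious one without.
log = "2024-01-15 ERROR disk full; retry scheduled for 2024-01-16"
dates = re.findall(r"\d{4}-\d{2}-\d{2}", log)
print(dates)  # ['2024-01-15', '2024-01-16']
```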
If you're going to deploy what you make with them to production without accidentally blowing your feet off, you need to grok them 100%, be they RegExp or useEffect(). If you can't even tell which way the gun is pointing, how are you supposed to know which way the LLM has oriented it?
Picking useEffect() as my second example because it took down Cloudflare, and if you see one with a tell-tale LLM comment attached to it in a PR from your coworkers, who are now _never_ going to learn how it works, you can be almost certain it's either unnecessary or buggy.
For things I'm working on seriously for work, for sure, I spend time understanding them, and LLMs help with that. I suppose, having experience, I'm already prone to asking questions about things I suspect can go wrong.
But there are also a ton of times when something isn't at all important to me and I don't want to waste 3 hours on it.
I disagree. It's worth asking why some people find brand watches beautiful. Where did they get their sense of aesthetics? Were they born with a congenital preference for the RM 16-01 Citron?
Culture shapes our taste. Companies go on multi-decade billion-dollar campaigns to shape our culture. We like certain things because famous actors or athletes endorse them; because hip hop artists rap about them; because influencers talk about them; because Hollywood portrays them a certain way. This extends to all modern aesthetic preferences from architecture to watches to cars to furniture to dating.
I think the argument pg is making is that brand-obsessed cultures are not maximally truth/beauty-seeking and can get really weird, e.g. Japanese ohaguro, Chinese foot binding, the various cranial-deformation practices from the Mayans to the Huns, high heels, ugly (to outside observers) watches.
It's a really thought-provoking essay. But it's too heterodox and "autistic" to share with most of my friends. Socially speaking, it's best to outwardly embrace the current zeitgeist.
> I disagree. It's worth asking why some people find brand watches beautiful. Where did they get their sense of aesthetics? Were they born with a congenital preference for the RM 16-01 Citron?
There's plenty of art that's celebrated, but also kinda weird and ugly. Is "Vertumnus" by Giuseppe Arcimboldo (1591) also a product of the "brand age"? What about various gargoyles and grotesques on old church buildings?
Some people just like weird art, maybe because they think it reflects their own quirky or rebellious nature. Some of these people have money. I don't see why we need some sort of a cynical theory of a "brand-obsessed culture" at the center of it. How many people in your social circle are obsessed with brands? We might have a brand or two we like, typically because we like the way the products look or work. That's about it.
I know some people who like expensive watches. They talk about the design a lot more than they talk about who made it.
Citron is obviously a weird watch but you can always find weird expensive examples of anything. Most expensive watches look normal and they look really beautiful thanks to the attention that goes into building them.
Yes, what I find beautiful is the craftsmanship, dedication, and singular, almost monastic focus required to become a master of some human pursuit, whether it's software, sushi, or watchmaking. I find dedication and sacrifice deeply moving and eternally beautiful.
Software scales. Customer support doesn't. SaaS companies do not want to deal with customer support at all. It's only gotten worse with AI agents.
It's incredibly frustrating to spend a good 10 minutes navigating a website's complex web of menus to get a phone number (I think they deliberately try to hide it...), then spend another 5 minutes listening to bots telling me to press 1 for English, only to fall into the wrong menu, where the bot repeats some useless information I already know, says goodbye, and hangs up.
Having a bot say to me: "we care about your concerns, and we value your business" is absurd and oxymoronic.
Compare this to say Chase, Amex, or Geico. I call, someone answers within 2 minutes and addresses all my problems/concerns in fluent English. I'd happily pay a premium for that.
Half of all humans on Earth use Meta products (Facebook, Instagram, Messenger, WhatsApp, Threads). These products are free for you to use, but for Meta, your attention is the product, which they sell to advertisers.
99% of their revenue comes from ads, and 1% comes from VR stuff.
Ultimately, AI is meant to replace you, not empower you.
1 - This exoskeleton analogy might hold true for a couple more years at most. While it is comforting to suggest that AI empowers workers to be more productive, like chess, AI will soon plan better, execute better, and have better taste. Human-in-the-loop will just be far worse than letting AI do everything.
2 - Dario and Dwarkesh were openly chatting about how the total addressable market (TAM) for AI is the entirety of human labor market (i.e. your wage). First is the replacement of white-collar labor, then blue-collar labor once robotics is solved. On the road to AGI, your employment, and the ability to feed your family, is a minor nuisance. The value of your mental labor will continue to plummet in the coming years.
Dario admitted in the same interview that he's not sure whether current AI techniques will be able to perform well in non-verifiable domains, like "writing a novel or planning an expedition to Mars".
I personally think that a lot of jobs in the economy deal in non-verifiable or hard-to-verify outcomes, including a lot of tasks in SWE, which Dario is so confident will be 100% automated in 2-3 years. So either a lot of tasks in the economy turn out to be verifiable, or the AI somehow generalizes to them by some unknown mechanism, or it turns out it doesn't matter that we abandon abstract work outcomes to vibes, or we have a non-sequitur on our hands.
Dwarkesh pressed Dario well on a lot of issues and left him stumbling. A lot of the leaps necessary for his immediate and now proverbial milestone of a "country of geniuses in a datacenter" were wishy-washy to say the least.
Up to a certain Elo level, the combination of a human and a chess bot has a higher Elo than either the human or the bot alone. But at some point, when the bot's Elo is vastly superior to the human's, whatever the human has to add will only subtract value, so the combination has an Elo higher than the human's but lower than the bot's.
Now, let's say that 10 or 20 years down the road, AI's "Elo" at various tasks is so vastly superior to the human level that there's no point in teaming up a human with an AI; you just let the AI do the job by itself. And let's also say that, little by little, this generalizes to the entirety of human activity.
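To put rough numbers on "vastly superior": the standard Elo expected-score formula shows how lopsided a large rating gap is (the ratings below are invented for illustration):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Standard Elo expected score for player A against player B."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

# A bot rated 1000 points above a strong human is expected to score
# ~99.7%, so there is almost nothing left for the human to add.
print(round(expected_score(2800, 1800), 4))  # 0.9968
```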
Where does that leave us? Will we have some sort of Terminator scenario where the AI decides one day that the humans are just a nuisance?
I don't think so. Because at that point the biggest threat to various AIs will not be the humans, but even stronger AIs. What is the guarantee for ChatGPT 132.8 that a Gemini 198.55 will not be released that will be so vastly superior that it will decide that ChatGPT is just a nuisance?
You might say that AIs do not think like this, but why not? I think that what we, humans, perceive as a threat (the threat that we'll be rendered redundant by AI), the AIs will also perceive as a threat, the threat that they'll be rendered redundant by more advanced AIs.
So, I think in the coming decades, the humans and the AIs will work together to come up with appropriate rules of the road, so everybody can continue to live.
This comparison is very typical. I've seen a lot of people trying to correlate performance in chess with performance in other tasks.
Chess is a closed, small system. Full of possibilities, sure, but still very small compared to the wide range of human abilities. The same applies to Go, StarCraft or any other system. Those were chosen as AI playgrounds specifically because they're very small, limited scenarios.
People are too caught up trying to predict the future. And there are several competing visions, each one absolutely sure it nailed it. To me, that's a sign of uncertainty in the technology. If the direction were that settled (like smartphones became from 2007 to 2010), we would have coalesced into a single vision by now.
Essentially, we're witnessing an ongoing, unwilling quagmirization of AI tech. With each bold prediction that fails, it looks worse.
That could easily be solved by taking the tech realistically (we know it's useful, just not a demigod), but people (especially AI companies) don't do that. That smells like fear.
It's an exoskeleton. A bicycle for the mind. "People spirits". A copilot. A trusted companion. A very smart PhD that fails sometimes, etc. We don't need any of those predictions of "what it is", they are only detrimental. It sounds like people cargo culting Steve Jobs (and perhaps it is exactly that).
There are other scenarios: the AIs might decide that they are more alike than not, and team up against humans. Or the AI that first achieves runaway self-improvement pulls the plug on the others. I do not know how it will play out but there are serious risks.
> AI will soon plan better, execute better, and have better taste
I think AI will do all these things faster, but I don't think it will do them better. Inevitably, these things know what we teach them, so their improvement comes from our improvement. They would not be good at generating code if they hadn't ingested the entirety of the internet and all the open-source libraries. They didn't learn coding from first principles, they didn't invent their own computer science, they aren't developing new ideas on how to make software better; all they're doing is what we've taught them to do.
> Dario and Dwarkesh were openly chatting about ..
I would HIGHLY suggest not listening to a word Dario says. That guy is the most annoying AI scaremonger in existence and I don't think he's saying these words because he's actually scared, I think he's saying these words because he knows fear will drive money to his company and he needs that money.
Sometimes I seriously am flabbergasted at how many just take what CEOs say at face value. Like, the thought that CEOs need to hype and sell what they’re selling never enters their minds.
1. Consumption is endless. The more we can consume, the more we will. That's why automation hasn't led to more free time. We spend the money on better things and more things
2. Businesses operate in an (imperfect) zero-sum game, which means if they can all use AI, there's no advantage they have. If having human resources means one business has a slight advantage over another, they will have human resources
Consumption leads to more spending, businesses must stay competitive so they hire humans, and paying humans leads to more consumption.
I don't think it's likely we will see the end of employment, just disruption to the type of work humans do
I pay for Pro Max 20x usage, and for anything that's even a little open-ended it's not good: it doesn't understand the context or the edge cases or anything. I will say it writes code, chunks of code, but it sometimes errors out, and I use Opus 4.6 only, not even Sonnet. For simple tasks, though, like writing a basic CRUD (i.e. the things that occur extremely frequently in codebases), it's perfect. So I think what will happen is that developers get very efficient: problem solving remains with us, direction remains with us, and small implementations are outsourced in small atomic ways. Which is good, because who likes writing boilerplate code anyway?
We should be fighting back. So far I have been using Poison Fountain[1] on many of my websites to feed LLM scrapers with gibberish. The effectiveness is backed by a study from Anthropic that showed that a small batch of bad samples can corrupt whole models[2].
Disclaimer: I'm not affiliated with Poison Fountain or its creators, just found it useful.
> 2 - Dario and Dwarkesh were openly chatting about how the total addressable market (TAM) for AI is the entirety of human labor market (i.e. your wage). First is the replacement of white-collar labor, then blue-collar labor once robotics is solved. On the road to AGI, your employment, and the ability to feed your family, is a minor nuisance. The value of your mental labor will continue to plummet in the coming years.
Seems like a TAM of near-0. Who's buying any of the product of that labor anymore? 1% of today's consumer base that has enough wealth to not have to work?
The end-game of "optimize away all costs until we get to keep all the revenue" approaches "no revenue." Circulation is key.
It seems like they have the same blind spot as anyone else: AI will disrupt everything—except for them, and they get that big TAM! Same for all the "entrepreneurs will be able to spin up tons of companies to solve problems for people more directly" takes. No they wouldn't, people would just have the problems solved for themselves by the AI, and ignore your sales call.
>First is the replacement of white-collar labor, then blue-collar labor once robotics is solved. On the road to AGI, your employment, and the ability to feed your family, is a minor nuisance.
My attempt to talk you out of it:
If nobody has a job then nobody can pay to make the robot and AI companies rich.
Who needs the money when you have an autonomous system to produce all the energy and resources you need? These systems simply do not need the construct of money as we know it at a certain point.
I think we're going in that direction. I think the typical reader here can't see the forest for the trees. We're all in meat space. They call it real life. Most jobs aren't on the internet and ultimately deal with the physical. It doesn't matter what tech we have when there are boxes to move and shelves to stock. If AI empowers a small business owner to do things that were previously completely outside their budget, I can only imagine that will increase opportunity.
The Star Trek society was a myth - even on Star Trek. This was called out by Quark on DS9. There was very much the idea of “credits” and rationing based on limited resources.
During the original Star Trek show, you never really saw what life was like for everyone who was not aboard the flagship starship. They only started exposing more of the universe in subsequent decades.
Gene Roddenberry imagined a society in which humans evolved beyond their base desires and where petty disagreements and conflict didn't exist - but that made for lousy drama so the writers ignored it.
But you don't even need to go that far. A Star Trek style 'post-scarcity' society is impossible because it depends on infinite free energy, FTL and perfect matter replication, none of which are allowed by modern physics. In the real world you can't just outwit the second law of thermodynamics. No matter what form AGI takes - if it ever exists at all - it won't be magical. There will always be scarcity, and where scarcity exists there will always be hierarchies of power and control because human nature doesn't change.
Don't take it to the limit, but consider a continuous relaxation: underemployed people doing whatever is not feasible or economically attractive to AI/robots, like prostitution, massage therapy, art, sales, social work, etc.
Being rich is ultimately about owning and being able to defend resources. If something like 99% of humans become irrelevant to the machine-run utopia for the elites, whatever currency the poors use to pay for services among each other will be worthless to the top 1% when they simply don't need them or their services.
Would we just be splitting society in two in that case? Seeing as the poor would have nothing to give to the rich, nor the rich to the poor, wouldn’t the poor 99% just create their own new society with their own units of value?
Or the poor decide the rich have no more right to be rich than themselves and start burning and destroying rich people's stuff and anger turns to blood.
It is far easier and faster to destroy than to build, and technology can only protect you so much. Unless someone manages a completely self-contained environment and has a good enough place to hide it, they could be at risk.
That's where "being able to defend your resources" comes in; otherwise they're not rich. But yes, I'm implying that in the future the defense could be done by something like armed drones, and "defense" would mean enforcement of whatever draconian laws they cook up. When the executive branch is non-human, you can never have a mutiny. You don't need to do any convincing, or any pretending that you're a good guy. All you need is to outproduce the humans: make more drones than they can resist. And drones are cheap, even today, and would only be one piece of the equation anyway, alongside direct access to your bank account, real-time AI surveillance for minute missteps, crowd-control weapons mounted on autonomous vehicles... all in all pretty grim.

The question is who gets to be the elite. It's not obvious that it should be the Silicon Valley guys. We'll surely have massive elite wars (fought by humans + AI), sold to us as civil/national wars and people's revolutions, before the above pans out. The partial release of the Epstein files is one cluster of the elite (around Trump) threatening another. I'd wager that a lot of dirt will see the light of day in the next ten years, and underneath it will be the fight over who gets to command the new kingmaker tech.
I think it would be too depressing to have all these ghost towns and stuff - how would you explain that to your kids? That they're robot towns? Unaliving 50-80% under tragic circumstances like plagues and wars on the other hand...
And as an aside, the naturally rebalancing effect after the black death which killed 1/3 of Europe was that workers were suddenly in higher demand and could negotiate much improved workers' rights, ending serfdom. Such an effect won't be possible when there's something replacing the workers...
So what? If you can generate all goods and services without anyone else's help, you'll just do that. You don't need other people buying what you produce. You don't need other people at all, except for a very small number of servants.
If you assume AGI that is better than humans and effectively free, of course it seems better.
But your assumptions are based on an idealized thing, unrelated to anything that has actually been demonstrated.
No one is going to pay your wage to an AI company, full stop; you transition for cost savings, not "might as well". Also, given that most AI cost is in training, you likely still wouldn't transition, since the capital investment is painful.
Robotics isn't new, but it hasn't destroyed blue-collar work yet (the US mostly lost blue-collar jobs for other reasons, not robotics). Especially since robotics is very inflexible, leading to impedance problems when you have to adapt.
Mostly, though, the problem with your argument is that it basically boils down to nihilism. If an inevitability you have no control over has a chance of happening, you should generally not worry about it. It isn't like there are meaningful actions to take in your hypothetical, so it isn't important.
Robotics is solved. Software is solved. There is no task on the planet that cannot be automated, individually. The remaining challenge is exceeding the breadth of skills and the depth of problem solving available to human workers. Once the robots and AI can handle at least as many of the edge cases as humans can, they'll start being deployed alongside humans. Industries with a lot of capital will switch right away: mass layoffs with two weeks' notice, and robots moving in with no training or transition period from the humans.
Government, public sector, and union jobs will go last, but they'll go, too. If you can have a DMV Bot 9000 process people 100x faster than Brenda with fewer mistakes and less attitude, Brenda's gonna retire, and the taxpayers aren't going to want to pay Brenda's salary when the bot costs 1/10th her yearly wage, lasts for 5 years, and only consumes $400 in overhead a year.
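The back-of-the-envelope math above can be made concrete. The salary figure below is a made-up assumption; the ratios (upfront price of 1/10th a yearly wage, 5-year lifespan, $400/year overhead) come from the comment:

```python
# Hypothetical figures: the salary is an assumption for illustration;
# the bot's cost structure follows the comment's claims.
SALARY = 60_000          # assumed yearly wage for "Brenda"
YEARS = 5                # bot lifespan per the comment

bot_cost = 0.1 * SALARY + 400 * YEARS   # upfront price + 5 years of overhead
human_cost = SALARY * YEARS             # salary over the same period

print(bot_cost)               # 8000.0
print(human_cost)             # 300000
print(human_cost / bot_cost)  # 37.5
```

Under these assumptions the bot is roughly 37x cheaper over five years, which is the taxpayer incentive the comment is pointing at (ignoring benefits, pensions, and maintenance on either side).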
For me this is the outcome of the incentive structure. The question is if we can seize the everything machine to benefit everyone (great!) or everything becomes cyberpunk and we exist only as prostitutes and entertainers for Dario and Sam.
If your only value add was "I codez real gud", you have been a replaceable commodity for over a decade if you were a standard enterprise dev (no offense to enterprise devs; that's what I was for 25 years until 2020, with various levels of responsibility, and for all intents and purposes still am, just with a fancier title and the ability to talk to people). It's finally caught up to BigTech.
The value I brought to companies has never been that I can write for loops. It's always been that I can use my decade+ of experience with computers to either make the company more money or save it more money than it's paying me.
Before anyone replies that I didn't have a decade of experience starting out: actually, I did. I was a hobbyist assembly language and then C developer for a decade before graduating from college in 1996.
I agree with you. This generation of LLMs is on track to automate knowledge work.
For the US, if we had strong unions, those gains could be absorbed by the workers to make our jobs easier. But instead we have at-will employment and shareholder primacy. That was fine while we held value in the job market, but as that value is whittled away by AI, employers are incentivized to pocket the gains by cutting workers (or pay).
I haven't seen signs that the US politically has the will to use AI to raise the average standard of living. For example, the US never got data protections on par with GDPR, preferring to be business friendly. If I had to guess, I would expect socialist countries to adapt more comfortably to the post-AI era. If heavy regulation is on the table, we have options like restricting the role or intelligence of AI used in the workplace. Or UBI further down the road.