Lonestar1440's comments

>I'm not keen on new nuclear (time and cost as much as anything else), but it's a terrible idea to phase out operating nuclear plants which are still safe and within their planned lifetime.

Time and cost seem like excellent reasons to get started now, so we can finish by 2035 and purchase some materials before inflation gets even worse.

All of the excellent arguments for existing plants apply to new ones too.


Given Hinkley Point C, a plant approved now will be operational some time in the 2040s.

I think people have missed how much renewables deployment can look like a hockey-stick graph. https://edition.cnn.com/2025/05/01/climate/pakistan-solar-bo...


Hinkley Point C is a prime example of regulation causing cost and schedule overruns.

"Fish disco", for example.


If you are starting now wind and solar are almost always your best investment. Some form of storage is next, but not until you have large amounts of wind+solar in the system. (which many areas are already reaching)

This just seems like kneejerk anti-Nuclear stance in disguise. Maybe you did intend it as just a neutral observation but it's hard to take it that way.

Like maybe you're right... why not also support Nuclear plants, which we in fact need for baseload energy? Surely there are better places to cut the budget than other carbon-free energy sources.

I have no argument with building out solar and wind maximally. I will always push for new Nuclear as part of the mix.


Where does this "need for baseload" energy come from? Baseload is a demand side concern. It can be fulfilled by any number of sources and we already have grids operating with zero baseload.

The grids have dispatchable power. But that is a different concern.

Point out the "baseload power" in this grid:

https://explore.openelectricity.org.au/energy/sa1/?range=7d&...

You also have to look at it in terms of outcomes. How do we get the most decarbonization the quickest per dollar spent?

Focus on reducing the area under the curve. From that perspective, wasting money and opportunity cost on new-build nuclear power means spending a longer time entirely dependent on fossil fuels.


We don't need baseload energy! That is something the coal lobby likes to repeat, but it is false. We need enough energy to supply demand. These days a gas peaker plant run 24x7 amortizes out cheaper than a new baseload plant, so a lot of new "baseload" is actually covered by peaker plants.

Baseload doesn't have a consistent definition, but the general concept is that some power plants are cheap at 100% output but don't throttle back well, so you run a mix: cheaper baseload plants plus peaker plants that are more expensive to operate but can start/stop/slow as needed. However, we don't need that mix anymore. Even where baseload is cheaper than peakers, it is still much more expensive than wind+solar, which have zero fuel costs; when you amortize the costs out, wind+solar plus peaker plants to make up the difference is cheaper overall.
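The amortization argument above can be sketched with a toy calculation. All of the capex and fuel numbers here are made-up placeholders, not real LCOE figures; the point is only the shape of the comparison (fuel dominates a baseload plant's lifetime cost, while wind+solar pay fuel on nothing and peakers pay it on only the residual).

```python
# Illustrative only: hypothetical costs to show why wind+solar plus
# peakers can beat a "cheap" baseload plant once fuel is factored in.
def lifetime_cost(capex, fuel_per_mwh, mwh_per_year, years):
    """Total cost of ownership over the plant's life (no discounting)."""
    return capex + fuel_per_mwh * mwh_per_year * years

# Hypothetical ~1 GW of demand served for 30 years (~8.76 TWh/yr).
mwh_year = 8_760_000

# Baseload plant: low fuel cost, runs flat out the whole time.
baseload = lifetime_cost(capex=6e9, fuel_per_mwh=20,
                         mwh_per_year=mwh_year, years=30)

# Wind+solar covers ~80% of the energy at zero fuel cost; gas peakers
# (cheap to build, expensive to run) cover the remaining ~20%.
renewables = lifetime_cost(capex=3e9, fuel_per_mwh=0,
                           mwh_per_year=0, years=30)
peakers = lifetime_cost(capex=1e9, fuel_per_mwh=60,
                        mwh_per_year=0.2 * mwh_year, years=30)

print(f"baseload:       ${baseload / 1e9:.1f}B")          # $11.3B
print(f"wind+solar+gas: ${(renewables + peakers) / 1e9:.1f}B")  # $7.2B
```

With these assumed numbers the mixed system comes out cheaper even though it needs two kinds of plant; the conclusion obviously flips or holds depending on the real inputs.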

25 years ago I was with you - nuclear was the best answer. However wind+solar have really grown since then and now they're your best bet. Because the times have changed, I've in turn changed. I'm against nuclear because it no longer makes sense even if the price were reasonable. (Nuclear would still make sense for ships; I don't know how to push that, though.)

Edit: Come to think of it, I'd go so far as to say that if you have a baseload coal plant today, you should be shutting it down immediately for new wind and solar plus gas peaker plants. It is economically stupid not to be doing that. Now, there may be coal power plants that are not baseload but instead dispatchable; if so, I don't know how the economics of those play out. Nuclear, although it is baseload, is probably cheap enough to continue running as long as it's not too expensive to maintain, and I would keep it running for the near future.


Gas peaker plants are neither clean nor economically stable in Europe. The war in Ukraine and now the war in Iran have demonstrated how extreme the price of energy can become if we allow demand to exceed supply for any extended period, and multiple European governments in the last few years got elected explicitly to solve this. Having a single month cost as much as a full year, or even multiple years, is a costly lesson for voters, and the economic effects are not slow to provide a second demonstration of how important stability is in the energy market.

Coal is not an option, nor is oil or gas. Batteries for somewhere like central/northern Europe are also not an option, as seasonal storage over weeks or months is prohibitively expensive. Hydro power has demonstrably caused (near) extinctions of several species and ecosystems, modern research on soil has shown some terrible numbers in terms of emissions, and the places where new hydro power could be built are basically zero. Biofuels from corn and oil are prohibitively expensive and also bad for the environment, and the fraud currently being done in greenwashing corn ethanol as "recycled" food waste is on a massive scale and not something Europe can build seasonal storage on. Green hydrogen is not even economical yet for manufacturing, never mind being burned for electricity and heating. Carbon capture for synthetic fuel is even further from being a realistic storage solution.

That leaves very few options, and if current world events continue as they have, we will see more governments elected on the promise of delivering a stable energy market. Wind+solar+gas peaker plants are not that. It was already a bad idea when it got voted as "green" in the EU, as it cemented a dependency on natural gas from Russia and the Middle East. In 2026 it should not be considered an option. Gas needs to be phased out, as should the last few oil and coal plants.


Yes. The whole hopium strategy in countries like Germany is to expand gas and replace it sometime with dirt-cheap H2.

The big assumption here is that the Union's interests are, in fact, aligned with You, the Worker.

If they're not, then you're just caught between two powerful, unaccountable entities. You have to join the Union, after all. I see a lot of folks in Education who feel that the Union simply Exists and does not really help them (their employers being rather sympathetic as well).

When they are, of course the Worker benefits. Healthcare and Airline employees seem to fall into this camp.


I think on average it's a safer assumption that a union made up of your peers is more aligned with your interests than the owners of the company are.

All of my friends that are teachers do admit their union has flaws, but also are very grateful to have strong contracts, benefits, and people willing to fight for them when the school system tries to screw them over.


Is that actually the case for the tech unions that actually exist, though? Historically, the people pushing for tech unionization were doing it for ideological reasons, not in response to the relatively recent layoffs, etc. and you can see this in their leadership.

I have a pretty simple litmus test for them: are they opposed to H1B hiring, and would they have defended James Damore when he got ousted from Google for basically being autistic? I think the answer for many of them is a resounding no.


It sounds like you’re just as ideological in your opposition if you’re bringing up James Damore.

On opposing H1B as they are implemented now I agree with you, but in a hypothetical world with tech unions James Damore would still be advocating for large swathes of fellow union members to be removed. He was being misogynistic not “basically being autistic”.


I think that Damore was stupid for writing and posting it internally and disagree with portions of it, but almost all of the reactions I saw to his manifesto were to third-hand misrepresentations of it or willful misreadings of it. (e.g. him referring to population statistics on neuroticism being interpreted as him saying that his female coworkers were neurotic). I think your reply is the perfect example: he wasn't advocating for women to be 'removed', he was arguing that DEI efforts to try to get a 50/50 male/female balance are fundamentally misguided because not as many women want to work in tech as men.

But whether or not you agree with him, you should agree with the idea that one of the primary jobs of a union would have been to give him a fair defense regardless of whether the union leadership likes him or not. I don't think any of the tech unions would do that.

edit: Let me put it this way. Suppose you make a post in an internal politics discussion forum saying that you oppose the H1B program as it is, and then get fired because people claim that you hate immigrants and want your fellow coworkers to be deported. Do you think these unions would defend you?


> Suppose you make a post in an internal politics discussion forum saying that you oppose the H1B program as it is, and then get fired because people claim that you hate immigrants and want your fellow coworkers to be deported. Do you think these unions would defend you?

In the real world case of long standing Teachers Unions in Australia (they vary by state) it is literally impossible to answer such a question on the basis of such a shallow construct.

The answer is both yes and no - in any specific case the individual circumstances would be looked at. E.g., as laid out, yes, they defend; however, in most IRL cases the circumstances on the ground are far more complex, and it wouldn't be uncommon for a bunch of fellow peer union-member coworkers to speak up in favour of not defending.


> Let me put it this way. Suppose you make a post in an internal politics discussion forum saying that you oppose the H1B program as it is, and then get fired because people claim that you hate immigrants and want your fellow coworkers to be deported. Do you think these unions would defend you?

Yeah. But if I made a claim that H1B holders were biologically less capable of doing software, I wouldn’t expect them to.

Damore was just a misogynist.


In the case of many teachers, there is no "company". The union fights valiantly against the State, and against the pay structure Voters democratically select for their public servants.

Further, I posit that existence of such Unions serves as an incentive for voters not to simply assign more pay and benefits to such servants directly. Mayors and Governors know that it's always going to be a "Union game" and all they can do is negotiate - even when they're Progressives who actually want to pay teachers well.

It just gets worse when it's Cops instead of State School teachers.

Big assumptions about "interests" lead to bad analysis, in my experience.


> You have to join the Union, after all

Uh, how? This might be a country thing but you don't have to join any union in my country. You do, if they represent your interests. Big companies have multiple, competing unions, and the anarchists (which refuse state subsidies and are fully self-funded) are pretty good at what they do.

If you have to join a union isn't that essentially a racket?


Of course, and that's why some people are opposed to a union, but then others say you're just falling for propaganda or some such nonsense if you deign to have any, even small, criticism of unions.

> If you have to join a union isn't that essentially a racket?

Yes, this is a big part of my critique.

I'm no lawyer and can't provide a useful explanation of the "why", but literally every educator I know is in the Educator's union. Same with Cops and Nurses. I don't know any airline pilots but I understand it's the same way.


For some jobs in America it is mandatory to join the union anyway.

So SpaceX bought a $60B Option on Cursor, plus a bunch of services, for $10B.

If the strike date comes and Cursor is in fact worth less than $60B, they can move to acquire it for that price, or just let it "expire". And if it's worth more, they get a savagely good deal. If the services were worth $8B anyway, it's hard to lose.

It seems less crazy to me through this lens. A straight acquisition, today, at $60B would in fact be crazy.
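The option framing above can be made concrete with a toy payoff function. The $10B premium, $60B strike, and ~$8B services figure are taken loosely from the thread and treated as rough assumptions; real deal terms would be messier.

```python
# Sketch of the "acquisition as call option" framing, with the deal
# numbers as rough assumptions ($10B paid, $60B strike, ~$8B services).
def deal_outcome(value_at_strike_date, premium=10e9, strike=60e9,
                 services=8e9):
    """Net economics if the option is exercised only when in the money."""
    if value_at_strike_date > strike:
        # Acquire at the strike something worth more than that.
        upside = value_at_strike_date - strike
    else:
        # Let it expire; you still received the bundled services.
        upside = 0.0
    return services + upside - premium

# Cursor ends up worth $100B: buy $100B of value for $60B.
print(deal_outcome(100e9) / 1e9)   # 38.0 ($B net)

# Cursor ends up worth $40B: walk away, out only ~$2B net of services.
print(deal_outcome(40e9) / 1e9)    # -2.0
```

Under these assumptions the downside is capped near the premium minus the services, while the upside is open-ended, which is why it reads as an option rather than a purchase.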


What's crazy is that a company that sells an IDE (that's not even a particularly good one compared to competitors like JetBrains) integrating some AI plugins could be worth more than $60B...

In terms of IDE yeah it is not that great.

I do have Copilot in VSCode and Cursor.

I thought both should be equal in solving problems - turns out Cursor with the same model selected somehow was able to solve tasks that Copilot would get stuck or run in loops.

They have some tricks on managing file access that others don’t.


Cynics on HN easily dismiss AI service wrappers (and many of them are in fact overblown and not worth their own code). But writing a genuinely good harness with lots of context engineering and solid tool integration is in fact not that easy. The biggest issue is that model providers also see what the community likes and often move on with their own offerings that are tailored to their own models, potentially at the training stage. So even if you have the best harness for something today, unless you are also a frontier LLM provider, there's zero guarantee you will still be relevant in the future. More like the opposite.

> But writing a genuinely good harness with lots of context engineering and solid tool integration is in fact not that easy.

True, but it's not worth 60 billion fucking quid.


It's not like someone paid $60 billion for a product the way you pay for bananas at the store. They invested a much smaller amount and essentially bought an option to acquire. And even if you don't believe the company's assets are worth the current valuation, an acquisition can still make sense if you believe that valuation will go up further. And if they actually do acquire, it will probably still not be in cash. They'll just be swapping stocks. That is essentially how all startup funding works. There is nothing strange about this. It merely reached new dimensions thanks to AI.

I mean yes, you are right, but they also paid $10 billion for that option. Which is also far too much for a harness.

it's insanity.

the whole thing is driven by irrational stock market investors who NEED AI to be the thing that saves the world.

they're betting everything on it.


I mean they doubled revenue from $1B/yr to $2B in a month.

At some point it can be valued as a high growth business, the code that backs it is almost irrelevant if the business is strong.


IIRC, it was $2B annualized. Which means nothing. Also, their expenses will be way north of that.

trusting a startup to accurately report its revenue in this market is about the dumbest thing you can do

> (...) writing a genuinely good harness with lots of context engineering and solid tool integration is in fact not that easy.

This. They are after the harness-engineering experience of the Cursor people; I'd assume they want to absorb all that into Grok's offerings.

The value and the room for innovation on the harness side seems to be underestimated.

Oddly, the harness also affects model training, since even GLM/Z.ai, for example, train (I suspect) their model on the actual Claude Code harness. So the choices made by harness engineers affect the model. Kimi/Moonshot and OpenAI make their own harnesses. Alibaba uses Gemini.

Very interesting dynamics.


There are plenty of harder things in the world and very few are worth 60B.

Attributing value to something just because it is harder makes no sense. Sure, a big moat is important for value, but "difficult to do" is just a one-dimensional angle.

Showing naked butt on the internet seems easy.

Earning millions that way is much more complicated.


Isn't Codex TUI available for free though? Besides others like Pi and OpenCode of course.

It can use local/oss models, but it doesn't make it simple to do (easiest with ollama) and it's not clear what else you 'lose' by making that choice.

If you had a really good (big) local model, maybe it's an option, but on the more common smaller (<32b) models, it will have similar problems in looping, losing context, etc. in my experience.

It's a nice TUI, but the ecosystem is what makes it good.


Mac Studio 512GB versions are entirely sold out for precisely this reason

"But writing a genuinely good harness with lots of context engineering and solid tool integration is in fact not that easy."

It is surprisingly easy to do it once someone else has done the work. Increasingly that's the nature of AI-based software engineering: point it at an existing tool and ask it to carefully duplicate features until it has parity. As you pointed out, frontier LLM companies happen to be well positioned to sell the resulting products.


>They have some tricks on managing file access that others don’t.

I thought it was a Windows thing. My Windows work computer is so heavily managed and monitored I assumed that was why Copilot stops being able to get terminal output or find the file I'm looking at. It's the same problem in IntelliJ and VSCode, with different models trying to find things in different ways.

Now that I think of it though, I've only used Copilot at work. At home I use Debian but I've never tried using Copilot. Claude, OpenCode, Gemini, and IntelliJ's AI Chat pointed at local Ollama models never have issues finding files or reading files and terminal output.


Sure, but is it worth 60 billion?

Their annualized revenue run rate is on track to surpass $6 billion by the end of 2026, so it's not ridiculous for them to be valued at $60 billion at some point. Also worth noting that if they do get access to SpaceX compute, they could start pretraining their own model. Composer is good, but it's built on top of Kimi 2.5.

Definitely not if someone frames it "shitty IDE with some plugins".

But if someone frames it "engineering talent that knows how to make LLMs even better at software development than competition" it might.

I see with my own work that it works, so it's not like Devin, which was basically a scam valued at $10 billion.

In this kind of context, yeah, it feels quite possible that it's worth 60 billion.


SpaceX thinks so.

SpaceX the space rocket and internet satellite company? Or SpaceX the Elon Musk piggy bank used to buy up all his financial misadventures?

You mean Musk thinks xAI need to be shown making AI investments to keep getting outside funding.

I actually now think ai prompt writing in the IDE is completely overkill nowadays.

IDEs are made for just a human to interact with code. I think the paradigm of forcing these tools that weren’t built for this to do this, is us trying to fit a square peg in a round hole.

Call me old, but don’t put ai in my ide. My ide was made for a human, not an ai. For the established players for sure it makes sense since they already have space on our machines. But for the new ones imo terminal, or dedicated llm interfaces are where it’s at.

If I’m writing code sure suggest the next line. If the machine is writing code, let it, and just supervise properly. and have the proper interface that allows the strength of each


My IDE has nicer tooling for things like diffs, and has all of my LSPs configured, which the harness can utilize.

They're using the code intelligence from the IDE to run the AI, while Claude Code only does greps.

AI coding is much more than just the model - all the tools that human use in IDE are also useful for AI. Claude Code on the other hand just works with grep.


They are now a Codex clone, but without the subscription pricing. You have to spend thousands to get what you get from a $200 Codex subscription. How do they compete with this, except for users who haven't caught on yet, or businesses that don't mind spending thousands a month per dev and wouldn't consider just subscribing to one to three $200 subscriptions instead?

And their price is so high because it's markup on API rates. API rates, even without markup, are just insanely irresponsible for anyone to be spending on full-time daily usage.


> users who haven't caught on yet

They are catching up fast!

https://www.businessinsider.com/chamath-palihapitiya-ai-cost...


Tellingly, from his full post: "Mostly because I do not yet see an equivalent uptick in productivity or revenue..."

https://x.com/chamath/status/2029634071966666964

I suspect that since the value a company provides is more than its code, increasing code churn does not lead to an equivalent increase in revenue. Even for a tech company, the business's concept, connections, knowledge, assets, non-coding staff, etc. are a significant part of the value, and increasing code doesn't increase the throughput of that value. For non-tech companies code is the grease in the gears, not the gears themselves.


Codex is coming for those non coding use cases too. Is Cursor?

Whose pricing is above API rates? Not Cursor. It's 100% at each model provider's published API rate. With a bigger sub, you get it cheaper than that.

Cursor makes a ton of money because the product is great. It's easily the most sophisticated harness out there, and it isn't an IDE anymore. It's an agent dashboard since version 3.

Suffice it to say it's not all idiot money being thrown at them by users.


API rates on local models are quite cheap, and you can even run them locally. Yes, the hardware for doing so at speed is expensive, but people used to drop the equivalent of what would be $50k or $100k today on an individual workstation for full-time use. It's justified if the productivity gain is strong enough.

But that’s not competitive. The only reason to do that is a need for privacy, which is critical for some. The tradeoff is that the models are relatively bad. I don’t see how Cursor can win from this use case, especially if getting the privacy benefit requires spending a huge amount. You can already run Codex for free with local models too.

What's the advantage over GitHub Copilot, actually? They seem to have all the same access and features (except for this scheduling thing?) for cheaper.

> users who haven't caught on yet

If you think this of users who use cursor then I don’t think you’ve used cursor much at all.


I've used Cursor a lot. Until recently it was mandated by my employer. I can't see the attraction at all. It's a (bad IMO) IDE integration, a reasonable model (but I still generally preferred Claude over Composer), and a bunch of other tools that weren't very developed (like cloud environments and multi-agent orchestration). It's a suite of tools, most of which have superior alternatives. What am I missing?

You have model choice in cursor… why would you use composer?

What do you mean?

Only the foundation model companies offer cheap/subsidized compute.

If you're an app layer company, you're offering a 10x worse deal to your customers.

Foundation model companies are willing to lose money to win loyalty. Remains to be seen if it'll work.


If you’re more worried about cost than you are being productive and getting good results then sure, stick with foundational model company apps.

“Being productive” without taking inputs/costs into consideration is an oxymoron.

But euros spent on tokens is a tiny fraction of the overall costs of the project.

That’s the thing: I have never seen detailed costs of what people are spending their money on. I know that for Claude there’s a $200 monthly subscription whose assigned credits one burns through pretty fast, at which point (and I may be wrong on this, because I’ve never used the thing) one can keep going on a “pay as you use it” basis? Again, I might be wrong on this.

I’ve also seen it mentioned a lot of people having 2, 3 or even more subscriptions, which I’m pretty sure that can easily go South when it comes to costs.

But, again, and the most important point, I’ve never seen a detailed post on what people spend on this AI thing on a monthly basis (let’s say).
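In lieu of such a post, a back-of-envelope model is easy to set up. Everything here is a placeholder assumption (the per-million-token prices and daily token volumes are illustrative, not any provider's actual rates), but it shows how API-rate spend scales with usage.

```python
# Back-of-envelope monthly API spend. All prices and usage figures are
# hypothetical placeholders, not any provider's actual rates.
def monthly_api_cost(input_tokens_per_day, output_tokens_per_day,
                     usd_per_m_input, usd_per_m_output, workdays=22):
    daily = (input_tokens_per_day / 1e6 * usd_per_m_input
             + output_tokens_per_day / 1e6 * usd_per_m_output)
    return daily * workdays

# A heavy agentic user might push tens of millions of tokens a day,
# mostly from repeatedly re-reading context.
cost = monthly_api_cost(
    input_tokens_per_day=20_000_000,
    output_tokens_per_day=1_000_000,
    usd_per_m_input=3.0,    # assumed $/1M input tokens
    usd_per_m_output=15.0,  # assumed $/1M output tokens
)
print(f"${cost:,.0f}/month")  # $1,650/month at these assumptions
```

At numbers like these, a flat $200/month subscription for comparable usage would indeed look heavily subsidized relative to raw API rates, which is the tension the thread keeps circling.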


A company that cares more about cost than results is probably a terrible company to work for. They will give you 10yo dell laptop with 8gb memory and complain that you’re slow when it takes 15m to build the application.

So no it’s not an oxymoron.


Productivity is literally a statement of the relationship between results and cost; presumably you found that out after reading the reply, and that is why you switched from "productivity" to "results" in your reply.

Until you learn what productivity is we can’t continue the conversation.

Please at least try to keep track of which sockpuppet you are using in this thread sighthrowaway.

API rates are the real rates. Subscription costs are the "first hit is free" subsidized pricing.

They’re not the “real rates”, they’re the rates that are being charged for API use. API reportedly has a margin of profit

You also neglect that products like Cursor run on two margins, their own plus the API provider’s. That’s always going to come at a premium


Yes, the rates with a margin of profit are the real rates.

The rates without a margin of profit (or with a negative one) are not real.


General Motors is worth $72B.

That feels more like a reflection of how terrible most GM cars are than about the inflated valuation of Cursor, which is what I infer you were trying to imply.

Their revenue and growth justified it. Plus, for xAI that could be the only way to get the SOTA coding model they want so badly.

I thought cursor became mostly obsolete with Claude Code and Codex TUIs?

> I thought cursor became mostly obsolete with Claude Code and Codex TUIs?

I wouldn't think so. At work I have both cursor and claude code and while I use both, cursor is by far the most pleasant to use. If I had to give one up, I'd let claude go.


Are TUIs not yesterday’s hot thing?

The way I work now in the Codex desktop app is that I spin up 3-5 conversations which work in their dedicated git worktree.

So while the agent works and runs the test suite I can come back to other conversations to address blockers or do verification.

Important is that I can see which conversation has an update and getting desktop notifications.

Maybe I could set this up with tabs in the Terminal, but it does not sound like the best UX.


That's probably more a personal preference than objective measurement. A lot of people already spent most of their dev time in the terminal, so for someone like myself that uses neovim claude code or codex cli are much easier than using the GUIs.

The solution is to use both. They both have their use cases: Cursor's autocomplete, quickly highlighting a few lines to throw into context, plus its very good file index/API (which burns far fewer tokens than Claude's grep'ing) and whatever else they are doing underneath to optimize it for coding.

Claude is still gold standard if you're not in an IDE though.


Grep'ing doesn't use tokens, it uses grep.

Reading files is always the biggest token burner when coding. If it can't find stuff quickly, or has to use less and head to trim a file before finding what it needs, then you're just wasting context window.

Cursor both lets you highlight specific lines multiple times per chat and is much quicker at finding stuff.


Claude has to use more tokens to read the grep output.

That matches my anecdatal experience with a couple dozen devs. Many went hard on the Cursor train and have mostly gotten off now that CC and Codex TUIs are available.

Because of user count? The same was said about Instagram. With all due respect, devs don't seem to understand business.

Or devs are just different users who care about different things and have different experiences.

Reminds me of the famous dropbox post: https://news.ycombinator.com/item?id=9224 - I don't even know if dropbox still exists in 2026 but i'm still happily using rsync and mailing things around because dropbox has just absolutely never worked reliably for me, unlike my 2007 gmail account.

Likewise, if it were up to me, instagram and any business whose business model revolves around ads would be banned (because ads would be banned because advertisement is harmful in general).


It's fine to care about different stuff, but if you want to understand the valuation of a company, then your experience only goes so far. It's not going to make any sense unless you broaden your scope of interest to the metrics that impact valuation.

I don't read the OP's post we're talking about ("What's crazy is that a company [...] could be worth more than $60B...") as not understanding, but as disagreeing that our world should work in such a way that this state of affairs is even remotely considered acceptable.

It's an interesting idea that society should somehow prevent companies valuation being linked to how many people use their product.

Unsure how it would work in practice.


But do devs know which IDE is better? That seems to be a rather important question here.

It's not 'the' most important question.

Are you using the same AI engineering tools you were using 2-3 years ago? 1 year ago? I'm not. Without a network effect, capturing revenue is hard.

My use is not relevant. It's not ideal to extrapolate from your own personal habits. Cursor's user volume and growth are the important thing.

Who are the users? I haven't seen many pro users using cursor

Companies. Single devs can jump around IDEs and TUIs more easily but that’s not what companies tend to do.

You've formed an opinion on the value of the company without knowing how many users it has? Kind of proves my point, no?

Can't X recreate one with $1B? As an IDE, honestly, I can't even see why it would need more than $1M to create.

It's not about the tech; it's about the pool of users that use Cursor. By acquiring Cursor you get a bunch of users plus a subscribed, already-paying pool of people, instead of rebuilding something from scratch and convincing people to replace their tools with a new one.

Is it about the users, or the data the users generate? It's pretty easy to see the day devs are replaced by the data they themselves generated. Companies are only going to get one chance to grab this data. Similar to the internet cutoff.

True, especially with Composer (the finetuned model by cursor)

the IDE has little value

What they want is the massive user base, the data (Cursor has a lot of high-quality coding data for training), the team's expertise in coding models and agents, and the Composer models

60 billion is a large number, but these frontier labs are burning billions a month on compute alone, and SpaceX is IPOing soon, so they'll have a lot of cash to spend


This is it. I can’t believe the other commenters are unaware that Cursor recently fine-tuned an open-source model and brought it to the frontier, even if it remained there briefly.

Elon/xAI want Grok to become useful for coding. Cursor has enough data and expertise to create a useful coding model. They found a price and an arrangement that made sense for both parties.


>and the Composer models

You mean Kimi K2.5? They can get that for free.


How massive is the Cursor user base?

The numbers I could find says 1 million, with about 35% paying.

I'd say that a million users is pretty good, but 350,000 paying users isn't, if you're a $60B company. Someone else mentioned that Anysphere has a $1B ARR, but I seriously doubt that each user is forking over ~$3,000 per year.


Over $2B ARR now.

Why do you doubt $3k/yr? Corporate usage skews a lot higher when it's evaluated against hiring, not as a nice-to-have addon.

If $10k/yr means you get work done with one less hire that's an easy decision.


enterprise contracts. ARR is almost definitely juiced by counting future contract value

Cursor sells its own models as well now

Its own RT'ed open-source models, right?

> What's crazy is that a company that sells an IDE (that's not even a particularly good one compared to competitors like JetBrains) integrating some AI plugins could be worth more than $60B...

yes. plus $2b ARR, 1m DAU


Welcome to the era of vibe-based valuations

Microsoft is shaking in the corner lol

MS is doing just fine I'm sure

AI yielding such incredible cost savings. /s

Cursor is useless

> that's not even a particularly good one compared to competitors like JetBrains

Massive understatement calling it "a not particularly good plugin". If it were that simple there wouldn't be a need to even do this.


Paying $10B for the option is also crazy though. Paying $10B for the thing outright and not just an option would be absurdly high.

Is this cash or compute? Elon has one of the world's biggest compute clusters spun up, and little inference demand to speak of.

Trading billions worth of idle compute, in exchange for a high-strike call option on the #3 player in the most-promising-vertical for AI, plus (presumably) some access to their data, starts to sound like not a bad trade. Especially if you're pre-committed to betting your entire rocket company on winning in AI, and you're currently in sixth or seventh place.


> you're pre-committed to betting your entire rocket company on winning in AI

SpaceX has invested a small amount as a share of its value in XAI, and could survive the loss of its investment.


It's true he could write off xAI today and the company could still fetch a trillion-dollar valuation. But I was more referring to his stated intentions: between his stated plans, his actions taking SpaceX from a profitable company to one spending basically all its revenue (plus a rumored large chunk of what's raised via its IPO) on AI, and his tendency to make bet-the-farm bets, as with Tesla, I think it's fair to say he's committing to bet all of SpaceX on xAI.

I heard he made a deal with a company to use his clusters. Is there good data on demand for Grok? Seems like relatively little chatter at least, in spite of tremendous investment.

[flagged]


[flagged]


I hate Trump as much as the next guy, but what is that evidence, again?

He had a very close, decades-long friendship with the most notorious sex-trafficker-of-children-to-rich-creeps in modern history. And when imprisoned, that infamous pedophile died in a federal prison under Trump's control, with a strange gap in the CCTV video footage. And Trump's handling of the entire Epstein Files saga makes it clear that Trump is described extensively in those files and he desperately wants to conceal it. What could be in there that he would use the entire justice department to try and redact? Trump is shameless about things that are legal even if they're salacious (like sleeping with porn star Stormy Daniels), so you have to wonder: what could Jeffrey Epstein's good friend be trying to conceal?

Also, he owned the Miss Universe org (including Miss USA and Miss Teen USA) for decades, and he was known to walk into the dressing rooms of teen contestants as young as 15 while they were undressed. [0]

Also, he bragged about molesting women, and a court of law found that he sexually assaulted E Jean Carroll.

I haven't proven the case that Trump had sex with a minor, but there's way more than enough probable cause to believe it's more likely than not.

[0] https://web.archive.org/web/20200111171647/https://www.rolli...


Obviously this looks very bad but you don't seriously think it constitutes evidence?

Imagine there's a camera continuously recording a cookie jar. A child eats all of the cookies and then deletes the footage from the time they ate the cookies. A parent returns to find their child covered in crumbs, loudly proclaiming they haven't eaten a cookie in years and actively interferes with the parent's investigation and tries to distract from it by throwing a brick through the window of an Iranian family down the street.

Are any of the facts in this hypothetical "evidence"? With the knowledge of the truth (that the kid ate the cookies), it's clear these are all relevant pieces of evidence. If we take knowledge of the truth out of the equation, would these facts still be evidence? Unambiguously they would.


> you don't seriously think it constitutes evidence?

Do you even know what the word evidence means? It is not the same as proof.

Maybe you would want to insert the term "circumstantial" or so.

Definitionally both circumstantial and direct evidence are forms of evidence. No modifier is necessary.

And incidentally you can be convicted in a court of law purely on circumstantial evidence, and that's the place in society where we have the highest standard of proof. The evidence all being circumstantial is not a gotcha.



Yeah that's pretty bad.

This isn't court. The evidence, such as it is, is all of the smoke which commonly motivates people to look for fire. The strongest and most comprehensive that I've seen is the argument that if Trump was not implicated in the Epstein files, he would be publishing them in free book form himself and forcing every media outlet to advertise it. Slight exaggeration, but I think truly only slight.

Not really relevant to the thread, but there are simple answers to the "eViDeNcE??" question. You may have already known this.


Again, circumstantial and speculative.

Clearly you don’t and that disingenuousness is frowned upon in discussions here.

So, where’s the evidence?


[flagged]


Someone who works on a “sugar dating” app advocating for synthetic child porn? That’s… uncomfortable?

To say the least. Great catch! 'O brave new world, that has such people in 't.'

Has the availability of deepfake porn generation reduced the demand for deepfake porn featuring real people? When deepfake generators are capable of creating convincing imagery of flawless ideal fake humans, why do you suppose there’s so many real humans who report being non-consensual subjects of deepfake porn?

> Has the availability of deepfake porn generation reduced the demand for deepfake porn featuring real people?

yes

> When deepfake generators are capable of creating convincing imagery of flawless ideal fake humans, why do you suppose there’s so many real humans who report being non-consensual subjects of deepfake porn?

?


One obvious argument is what it was trained on.

Doesn't have to be. You can train it on normal pictures of children and nude images of adults.

> Doesn't have to be. You can train it on normal pictures of children and nude images of adults.

You say this so casually, as though it were a normal thing to know, or as if a normal person would know it. Does that actually seem true where you live right now?

And how do you know that, anyway, Harsh? I mean, all those "unblocked" games you stole to give away and that you also put on Github, that's one thing. But this...


Come on, it's not hard to come up with this idea. And it's not even true; a model trained on clothed children and nude adults wouldn't know what children's genitals look like.

If it's not in an 8-K filing it isn't real.

The problem is basically that if the option works out (Cursor truly has the talent to train a frontier model on SpaceX's infrastructure, and was simply lacking the infra before), the fair price would be way, way more than $60B.

OpenAI tried to acquire Windsurf last year for $3B and couldn't.


Seems like Elon's move is twofold:

1) A gamble on Cursor's compute constraint.

2) If 1) plays out, he can purchase Cursor at a fixed price via overvalued SpaceX shares, should the valuation increase.


> Cursor truly has the talent to train a frontier model on SpaceX's infrastructure, and were simply lacking the infra before

Wild conjecture.


I think this was an “if” scenario

This makes more sense than my initial reading of it, indeed.

It reportedly has a $2B ARR, and a 5x multiplier doesn't seem insane to me, but who knows, honestly

But it's paying a 5x ARR multiplier for the right to buy at a 30x multiplier.

They have $2B ARR because their business model is about selling models cheaper than they cost.

The main frenzy with Cursor started when you could access Anthropic models practically for free.

Otherwise it is just VS Code.


> Otherwise it is just VS Code.

This is a bit simplistic. It's the VS Code that everyone used before cc came to town. Real devs, on real projects. All that data they collected is worth a lot more than "just vscode". Their composer2 is better than kimi2.5 and it's just a finetune on that data.

xAI had a decent model in grok4 (it was even SotA on a bunch of benchmarks for a few weeks), but they didn't have great coding models (code-fast was ok-ish but nothing to write home about, certainly nowhere near SotA). Now that they've been banned from using Claude, they'll get their expertise + data to build a coding model on top of whatever grok5 will be + their cluster for compute.

It doesn't sound like a bad plan to me, financial shenanigans or not.


What data? Their commercial terms promised they wouldn’t keep any for training.

There's a lengthy discussion to be had here, and there's enough lawyerspeak in every provider's data retention policy to wiggle out of anything. A few notes from their current data use page:

> If you enable “Privacy Mode” in Cursor’s settings: zero data retention will be enabled for our model providers. Cursor may store some code data to provide extra features. None of your code will ever be trained on by us or any third-party.

Note the "may store some code data" and "none of your code will ever be trained on". In general you never want to include actual customer code in the training data, because of leaks that you may not want. Say someone has a hash somewhere, and your model autocompletes that hash. Bad. But that's not to say you couldn't train a reward model on pairs of prompts + completions. You have "some code data" (which could be acceptance rate) and use that. You just need to store the acceptance rate. And later, when you train new models, you check against that reward model. Does my new model reply close enough to score higher? If so, you're going in the right direction.
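A minimal sketch of what that scheme could look like, purely hypothetical (none of these names or signals are Cursor's actual pipeline): store only an acceptance flag plus metadata per suggestion, fit a toy reward baseline on it, then check whether a candidate model beats that baseline.

```python
# Hypothetical sketch of the scheme described above. All names and
# signals here are illustrative assumptions, not a real pipeline.
from dataclasses import dataclass

@dataclass
class Interaction:
    prompt_len: int      # metadata only -- no plaintext code retained
    completion_len: int
    accepted: bool       # did the user keep the suggestion?

def fit_reward_model(history: list) -> float:
    """Toy 'reward model': just the overall acceptance rate."""
    return sum(i.accepted for i in history) / len(history)

def improves(candidate_acceptance: float, baseline: float) -> bool:
    """A new model is 'going in the right direction' if its
    (estimated) acceptance rate beats the stored baseline."""
    return candidate_acceptance > baseline

history = [Interaction(120, 30, True), Interaction(80, 12, False),
           Interaction(200, 45, True)]
baseline = fit_reward_model(history)   # ~0.67
print(improves(0.75, baseline))        # True
```

The point of the sketch is that nothing in `history` contains customer code, yet it's still enough of a training signal to steer model development.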

> If you choose to turn off “Privacy Mode”: we may use and store codebase data, prompts, editor actions, code snippets, and other code data and actions to improve our AI features and train our models.

Self-explanatory.

> Even if you use your API key, your requests will still go through our backend!

They are collecting data even if you BYOK.

> If you choose to index your codebase, Cursor will upload your codebase in small chunks to our server to compute embeddings, but all plaintext code for computing embeddings ceases to exist after the life of the request. The embeddings and metadata about your codebase (hashes, file names) may be stored in our database.

They don't store (nor need to store) plain text, but they may store embeddings and metadata. Again, you can use those to train other things, not necessarily models. You can use metadata to check if you're going in the right direction.


At $60B they might do it anyway and then pay $200M in fines when the court rules against them.

xAI needs a dev tool to compete with Codex and Claude Code.

Cursor needs their own 1st party backend model.

Sounds like a match made in heaven.


Not quite first party, but Composer 2 is far superior to Grok for coding. Unless you're alluding to them using SpaceX infra to train their own model vs. using Grok.

If they're the same company then Grok becomes first party to Cursor.

2B ARR at what cost base?

ARR for a company where 99% of that goes back out to model providers is pretty meaningless

Not only is it almost certainly compute (“services”) it’s likely priced at Anthropic rack-rate, or at least what Cursor’s been paying Anthropic.

The cluster’s already paid for, so likely in the $2B range for operating cash needs. Not more than $5B.

If I imagine bringing in Cursor’s team to build a frontier model, ideally combined with Grok (which has one of the few truly proprietary data feeds available to it), and with a much larger custom model Cursor can solidify a place, and I get to do a stock swap to buy it, this sounds like a bet worth making.

Upshot: I bet there’s an MS/OpenAI deal on IP on the back of this; meanwhile the cluster goes brrr.


Is that so or would those 10B be discounted from the purchase?

not that it isn't wild regardless


I'm not sure what you're referring to by "that", but I think you're right that it's $10B to not purchase or $60B to purchase, i.e. posting $10B for an option with a $50B strike price.
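Treating it as a plain call option with the thread's numbers (which are reported, not confirmed deal terms), the payoff math looks roughly like this:

```python
# Rough payoff sketch under the terms as discussed here: ~$10B paid up
# front for the right to buy Cursor outright at a ~$50B strike, i.e.
# $60B all-in. Figures are the thread's numbers, not confirmed terms.

def option_payoff(value_at_exercise: float,
                  premium: float = 10.0,   # $B, paid either way
                  strike: float = 50.0) -> float:
    """Net profit in $B: exercise only if Cursor is worth more than
    the strike; otherwise walk away and eat the premium."""
    return max(value_at_exercise - strike, 0.0) - premium

print(option_payoff(40.0))    # -10.0 (don't exercise, lose premium)
print(option_payoff(60.0))    # 0.0   (break-even)
print(option_payoff(100.0))   # 40.0
```

Note the break-even: the bet only pays if Cursor ends up worth more than the full $60B all-in price.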

have concrete terms been published or is that an educated guess of the contract?

It's a statement based on the contents of the articles linked at the top of this comments section.

But they also get a whole bunch of AI Services from Cursor. Other comments have noted that xAI has fallen on bad times (idk one way or the other) so perhaps they were going to spend $5B on getting these services elsewhere, anyway.

SpaceX spending $1B a month on various AI services seems ~plausible

(EDIT - Or maybe it's an IP transfer, or maybe it's over a longer time horizon. Idk but SpaceX clearly expects value from 'our work together' even if they don't exercise.)


$1B per month on AI services does not seem remotely plausible to me... Engineers don't consume that many tokens...

And on the AI development side they're the ones providing compute in the form of a "million H100 equivalent Colossus training supercomputer"... On top of the cash.


Cursor has no AI services; they do not develop their own frontier models. I see no reason why $10bn for Cursor's services is an advantage for xAI versus, say, a $10bn deal with Anthropic, OpenAI, or Google.

It's true that Cursor doesn't have their own frontier models, but they are training their own models. They just aren't at frontier level yet. The $60B/$10B deal looks like a bet that this is a capital/GPU constraint rather than a capability one.

Those other companies wouldn't also toss in a purchase option.

But I agree that it's hard to articulate what Cursor services you could blow this much money on.

Maybe it is all just an option! Or maybe they get a bunch of IP either way?


Plausible how? Explain please.

Tokens. Tokens spawning sub agents using more tokens. Maybe some training too.

I didn't say it was Wise.

I said it seems within possibility for this, very particular, corporation.


Despite their impressive ARR, Cursor faces an existential threat not only from the big labs (Claude Code, OpenAI Codex) but also from big tech (AWS Kiro, Google Antigravity, MS VSCode). I am sure the usual suspects lined up to purchase Cursor, and the deal from xAI was probably the best of the lot. Marks an end to a remarkable sprint for a 3yo company, and an admirable exit (considering the recent discombobulation of Windsurf), just as investor money and/or hype is going belly up.

Having tried most (all?) of the commercially available + open source options, and even tangential competitors like CC, Conductor, Antimetal, etc. I haven't found anything that's close to the experience of Cursor. The harness they've built is incredible.

I'd even go so far as to say that any direct competitors (Windsurf, Kiro, etc.) aren't even in the same universe. Cursor is just so much better, faster, has better features (plan and debug mode), and squeezes much better results/code out of the same models. They absolutely have some secret sauce that the other options just don't have.


Cursor is my favorite of the VS forks. Agree that it delivers better plans than others. I prefer using Claude in Cursor over CC CLI when I am heads down going through bugs. I am disappointed in how "little value" in token use Cursor provides compared to others.

Do you have examples? I'm curious.

It has shown surprising stickiness. Occupying some middle ground between full adoption and still ~in the code.

I am starting to see some potential in moving back away from pure terminal, a mixed modality with AI. But it is not in the direction of IDE in any traditional sense.


Do you really think anyone is using AWS Kiro or Google Antigravity? They are not real competitors in the slightest.

This valuation is absurd. Perhaps a year ago- sure, but there have been so many iterations of this “kind of editor” since then, not to mention countless alternatives.

So for me it’s more of a data deal - Elon buying himself some insight into codebases and real dev usage patterns? Oh finally someone to use his dirty data centres


Cursor is still the best I’ve used. Are there others I should try?

I've been using Kilo Code (VS Code Plugin) for the last few days, and it does most of what I liked in Cursor without tying me to their particular subscription.

That said, people are increasingly migrating to CLI tools (Claude Code if you like the Claude models, Pi Agent if you want something that's highly customizable, Crush if you want something fun), or GUI tools that are less code-first (Codex GUI).


What makes Crush fun?

It has a CLI component and a very flashy TUI application. The TUI has lots of effort put in to layouts, color, and really pushing the boundaries of what a TUI can be. It looks a bit “hacker in a 2000s movie” except with pink instead of green as the dominant color.

Totally not for everybody though. I can see why some people would hate it.


People keep saying this and they don't understand how businesses work.

Cursor has $1B in enterprise revenue. It doesn't matter if people can clone their product; those deals don't move quickly.


> Cursor has 1B in enterprise revenue.

That's all well and good, and they had astounding growth rates, but it doesn't mean much. And $1B in ARR is not _that_ much in comparison. Also, reportedly they spend all their revenue and they have no control over the spend side. The models they use will very likely get much more expensive. All the foundation model companies have a competing product. Cursor has the first-mover advantage, but that will only help them so much. There have been plenty of companies who grew fast, had huge revenue, but failed in the end, because they never got profitable. That's also in the cards for Cursor, if they don't fundamentally change their business model.


Put $1B into a better product and $10B into marketing. If you can’t beat their $1B in revenue, the market for making your money back on the Cursor acquisition isn’t there either.

If you pay $10B for options at $60B and the strike is $8B you ... just lost $10B. That's it.

Add emotional hedges if needed but they are just emotional not financial.

Your argument is based on an assumption that Cursor cannot lose value, even if the market says it has.

No free lunch: an option is a bet for both sides. Zero sum.


3 things bug me:

1) Why would Cursor agree to that, unless the offer ($60B) was better than their market valuation + an acquisition premium?

2) This was a similar play by the same person for Twitter.

3) While an innovator at the time, today there are a lot of LLM coding solutions: sold by model providers, by model aggregators, even open-source ones. It's not obvious what is being bought that isn't a feature of VS Code or one of the LLM agents (as the dismissive saying goes).


I used to be a hardcore Cursor user until I spent more time with the alternatives, and Cursor has gotten rough on my machine lately: it spins up the MBP fans in minutes and drains the battery in an hour or two on the go. The harness is great, but eventually someone will build something equally good that is not vibe coded to death.

What services could SpaceX possibly be buying from Cursor that would cost $8bn?

Thinking of it as an option makes it much more rational.

Downside is capped (cost of services + deal structure), upside is asymmetric if Cursor outperforms.

That said, these deals always hinge on whether the “$8B in services” is real economic value or just internal accounting. If it’s the latter, the risk profile looks very different.


To be worth $60B at a 50x P/E ratio this implies $1.2B in profit.

Not happening
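The arithmetic behind that comment, extended to a couple of other (purely hypothetical) multiples for context:

```python
# Implied profit needed = valuation / P-E ratio.
valuation = 60e9  # $60B, the figure discussed in this thread
for pe in (50, 30, 20):
    profit = valuation / pe
    print(f"{pe}x P/E -> ${profit / 1e9:.1f}B in profit needed")
# 50x -> $1.2B, matching the comment above.
```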


Overall US energy production has been expanding faster each recent year: https://www.eia.gov/energyexplained/us-energy-facts/. This is all before you factor in the recent attention to Nuclear, which could come online within the next decade.

The ice caps may be worse off for it, but there's little reason to think the USA will cease to "lead in energy" anytime soon.


The US has long since exhausted its "easy" oil/gas reserves. Yes, there's tons more down there, but it's increasingly hard to get to. Lots of extraction methods only make sense when the price for oil is above some amount.

If the rest of the world standardizes on solar+battery, demand for oil goes down, and so will the price. Which in turn makes US-produced oil not cost effective to extract, and domestic energy production collapses in favor of cheap foreign imports.

And then we're worse off in several different ways.


This probably a stupid question but do solar and batteries depend on rare earth metals and their supply?


The quick answer is yes, today. But there are battery technologies that require less and less in development.

Also, rare earth elements are not that rare. But they are not concentrated, and finding concentrations of them is kinda rare. Even then, you have to mine a lot of area to get them, which is not great for the environment. And since Americans (and everyone ex-China) have not been doing it for decades, only China has advanced the technology to extract and refine them.

This lack of refining is similar to our lack of work on solar, which will put us behind potentially forever, or until there is a big enough disruption to overcome the decades of experience. You can look at chipmaking and see that such things are not easy.


The answer depends on the kind of battery chemistry and how literally you mean "rare earth". If you take some slack on the definition and just mean "metal stuff in limited supply", then many battery chemistries have limited supplies.

There are, however, some chemistries with really nice supply chains. The Iron Redox Flow Battery (IRFB) really only needs iron and iron chloride as reactants. Those batteries are being commercialized, but they aren't common (yet?).


There are a great many assumptions in this argument, and I'm not sure they stand up well to examination.

1) "We're out of easily extractable oil" maybe, but I've heard it before and technology does have a way of marching forward.

2) "Rest of world's oil demand will drop" is possible but certainly not happening today and far from certain.

3) "Then Oil prices will plummet in the US Domestic market" is far from a sure thing even if 2) comes to pass. How do the other producers - who don't have large domestic markets! - react? What happens to global petrochemical demand? And what sort of Industrial policy could shield our markets, even if this happens globally?

At the end of the day, we have a continent full of oil (and Uranium! which I prefer!) and an energy-hungry population.


> 1) "We're out of easily extractable oil" maybe, but I've heard it before and technology does have a way of marching forward.

You've heard it before because it's been true for a long time. Technology marches forwards, yes, but technology is expensive, and like I said, a lot of domestic production has fairly high price levels below which they will not operate.

> 2) "Rest of world's oil demand will drop" is possible but certainly not happening today and far from certain.

That's totally fair.

> 3) "Then Oil prices will plummet in the US Domestic market" is far from a sure thing even if 2) comes to pass. How do the other producers - who don't have large domestic markets! - react? What happens to global petrochemical demand? And what sort of Industrial policy could shield our markets, even if this happens globally?

Assuming (2) does happen, then I think this follows naturally. The cost to produce a barrel of oil varies wildly by country. If global demand drops, then the cheapest producers eat the market that they currently cannot fully supply.

Could industrial policy shield this? Sure, but at great cost to the US; that would have the side effect of pushing down energy prices for the rest of the world even more, making it even harder for us to keep up.

Uranium absolutely could save us, but I think we're a couple decades out from the political will being there to really get a lot of nuclear online.


Fracking was a brilliant invention, but may be reaching inherent limits---there are lawsuits between oil companies about fracking fluids from one well flooding and disabling other wells.


Ice caps? Try human beings.

Increased Mortality: Projections indicate an additional 14.5 million deaths by 2050 due to climate-related impacts like floods, droughts, heatwaves, and climate-sensitive diseases (e.g., malaria and dengue).

Economic Losses: Global economic losses are predicted to reach $12.5 trillion by 2050, with an additional $1.1 trillion burden on healthcare systems due to climate-induced impacts. One study estimates that climate change will cost the global economy $38 trillion a year within the next 25 years.

Displacement and Migration: Over 200 million people may be displaced by climate change by 2050, with an estimated 21.5 million displaced annually since 2008 by weather-related events. In a worst-case scenario, the World Bank suggests this figure could reach 216 million people moving internally due to water scarcity and threats to agricultural livelihoods. Some researchers predict that 1.2 billion people could be displaced by 2050 in the worst-case scenario due to natural disasters and other ecological threats.

Food and Water Insecurity: Climate change exacerbates food and water insecurity, leading to malnutrition and increased disease burden, especially in vulnerable populations. For example, a significant increase in drought in certain regions could cause 3.2 million deaths from malnutrition by 2050. An estimated 183 million additional people could go hungry by 2050, even if warming is held below 1.6°C.

Mental Health Impacts: Climate change contributes to mental health issues like anxiety, depression, and PTSD, particularly in vulnerable populations and those experiencing climate disasters or chronic changes like drought. Extreme heat has been linked to increased aggression and suicide risk. Studies also indicate that children born today will experience a significantly higher number of climate extremes than previous generations, potentially impacting their mental well-being and sense of future security.

Inequality and Vulnerability: Climate change disproportionately affects vulnerable populations, including low-income individuals, people of color, outdoor workers, and those with existing health conditions, worsening existing health inequities and hindering poverty reduction efforts.


Nice try, ChatGPT.

Not a single one of these idiotic projections will ever come true.


> Over 200 million people may be displaced by climate change by 2050

This one seems like it undershoots realistic estimates by a large amount.



I specifically refer to the question of who will own the IP and economic might to lead in the clean energy market. Who will innovate? Who will build industrial capacity and know how, etc. It seems we’ve ceded the field

Not just strict energy production. Especially when it comes from sources of energy that are increasingly infeasible and unpopular.


> One thing I (in general) miss from those days, was how easy it was to get into modding.

I'm generally skeptical about the use cases for current-gen AI, but very hopeful that it can help us get back to this golden age of game Modding.

I think many people, like me, got lost in all the polygons and shaders soon after Half-Life 1. But if AI tools can make it easier to express modern game outcomes, the way we could make a funky HL1 mod with the IDEs back then, it could swing things back.


No, we have not even scratched the surface of what current-gen LLMs can do for an organization which puts the correct data into them.

If indeed the "GPT 5!" arms race has calmed down, it should help everyone focus on the possible, their own goals, and thus what AI capabilities to deploy.

Just as there won't be a "Silver Bullet" next gen model, the point about Correct Data In is also crucial. Nothing is 'free' not even if you pay a vendor or integrator. You, the decision making organization, must dedicate focus to putting data into your new AI systems or not.

It will look like the dawn of the original IBM, and mechanical data tabulation, in retrospect, once we learn how to leverage this pattern to its full potential.


I took about 60 credits (~8 hours per week each school semester) of Computer Science courses back in the mid 00's at a top state school. Besides the 101 course (heavy on Java syntax) and Software Architecture (where one learns the dark art of Swing), we used pencil, paper, and whiteboards (even a few chalkboards!) for the rest.

I use concepts like Dijkstra's algorithm and the Turing machine regularly in my job. They are very real to me - more real than any programming language - because I sat for hours taking paper notes off a whiteboard while some OG Computer guru discussed the topic.

If I didn't need tech to learn Computer Science, kids definitely don't need it to learn Algebra.


I didn't even own a computer for my first year of computer science (couldn't afford it), and had a 1½-2 hour commute to school. I did everything with paper and pencil, because when I had to actually turn in something, it involved staying in a giant, crowded computer lab and getting home well after dark after having left well before dawn. I still have notebooks filled with C.

That being said, my mother learned how to program when they were still using punchcard decks. My ordeal wasn't special. Don't know if I learned any better than others, but I think the need to not have bugs on the first iteration was more important for me than for other people. I did not just tweak things until it worked.


What if it's possible to spend a lot less time learning that with newer classroom methods? (Ed tech doesn't have to mean whiteboards and apps; teaching methods are technology too.)

We can make a good house without metal fasteners using hand tools and nothing but muscles. But that doesn't mean that a house built using brushless power tools in 1/4 the time isn't also a good house.

More directly: I conceptually understand a lot of algorithms that I read about. For me though, the ones that I learned by coding them and running them are the ones I understand much better. Hand written notes on a lecture do not guarantee complete or correct understanding, and there is no mechanism for checking.


Trump talked about inflation, and his desire to fix it, constantly.

Harris did not.

Once again, Republicans Show Up and they win by default. Yes, his "plans" are nonsensical, but the opponents decided to forfeit the match!


This isn't true. Harris has talked about fighting inflation many, many times. The issue is nobody listens, ultimately republicans have been able to support the lie that they are the "party of economics". Past that propaganda piece, nobody cares.


As I tried to imply in my original post: Harris' talk about low inflation or fighting inflation loses on a technicality, which is that people tend to experience inflation as the current price not the rate of change in the current price. Thus, when Harris is talking about inflation fighting and inflation cooling down, you have a bunch of people who look at the price of eggs/pizza/houses and say, "this shit is still expensive, Dems are full of shit." They are not looking at the CPI, and calculating the year-over-year change.

Let me share an anecdote: I worked on a project to estimate household-level price sensitivities to the market basket of goods commonly used in CPI calculations. (My employer had shopper-card/upc/transaction-level data from tons of major grocery chains across the USA with which to attempt this project.) I tried to read through the docs on how CPI is calculated, and let me tell you: major snoozefest, and I consider myself "a numbers guy."

I doubt the run-of-the-mill American can accurately define inflation. Consequently, "look at how we fought inflation" is the wrong campaign slogan.


"The Rent is too Damn High" is still a well-recognized meme. I doubt many people remember the gentleman's name or what he was running for. But the message worked! It's got to be simple and focused.


Slogans are important. Everyone knows MAGA, it's on the hat, but can you name Harris's, Biden's, or Clinton's?


Clinton's was I'm With Her, wasn't it? Not sure about the others off the top of my head. TBF I'm With Her isn't nearly as compelling as Make America Great Again.


Right, I'm With Her - alienating anyone who's not sexist and votes for policy, not genitalia.


People are suffering and the Dems ran on 'things are going great'. To the people suffering that feelz/vibez like 'our version of great DNGAF about you'. It's easy to see how that could be a less than optimal message for a candidate for election.


The issue is that she's part of the current administration and the current dominant party. That's all people care about. They look at who's in charge and vote the other way. It's really that simple.


That seems to imply that things can't get worse... much, much worse.


Oh, they will get worse, much worse. But the simpletons who think the president is in charge of egg prices or whatever will never comprehend that. Maybe if it gets bad enough people will learn then.


> Harris has talked about fighting inflation many, many times.

There was this Biden admin push to not call things a "recession" due to technicalities, which probably pissed people off. To most voters, "inflation" means 'higher prices' and "recession" means 'the economy sucks right now'.


I did not hear this, and neither did the median voter. Perhaps that is down to our choice of media diets, but we should take such things as constants when considering political outcomes.

I did hear Trump loudly, constantly, inaccurately talking about Grocery prices.


Where are the long form interviews that cover the policy from her point of view?


AFAIK, inflation in the US is quite low. Isn't it almost at the target? So why would she have a policy?


To beat a dead horse, the working class cannot afford groceries or rent. If you say that inflation is not that bad, in their mind you dismiss their suffering and dismiss them entirely.


I'm saying that because inflation is what we're discussing.

I have no trouble believing many people are worse off, which sucks. And many politicians should care more and try to do more.

But: 1) I would attribute that to low wage increases for several decades, not the last 4 years. 2) there's no easy fix for these things. 3) Putting inflation in a global perspective is meant to show how this is not mainly Biden's fault, since he doesn't control the rest of the world.


What if Women, on average, prefer to take more time away from work due to having a child than their male partners? And what if "Black" people are, on average, younger than other groups and so are more likely to be in early-career roles?

More broadly, once we start dividing "People" up into groups like "Black" "White" "Man" "Woman", isn't it a bit silly to think the groups won't expect and want and do different things? Even if we assigned people literally at random (and 'Race' isn't much different than this), wouldn't differences emerge?


Now, imagine you enslave one of those groups for ~400 years, prevent them from voting or getting an equal education for another ~100+. Might differences emerge in how society treats that population?


Yes. Do you agree that my point is also correct? Different groups want different things, and have different demographics, and excel in different areas.

If we defined the "groups" in a less historically informed way, we'd still have differences.


> Different groups want different things, and have different demographics, and excel in different areas.

I think it's very easy to overstate how much those things are genuine differences in preference/ability. Allowing no-fault divorce dropped female suicide rates by 20%; were they happy in those marriages, or enduring them? Would they choose differently if offered the same opportunity?


Wasn't there also research showing that when women's financial independence improved, divorce rates went up? I wish I could find that source again.


Do you think people with Green eyes are more or less compensated than people with blue eyes?


Eye color, unlike Race or Gender, is pretty evenly distributed over the obvious confounding variables like "Age" or "Preference of staying home with children". I'd expect it to be +/- 10%, though probably not "equal" enough to keep "disparate impact" folks from calling it out.


"Patching" is the fundamental reason airgapping isn't a sound solution, IMO. If you're a TLA you can probably find some secure, verifiable, write-only way to transfer patches to your air gapped machines. But for any normal person/organization; you'll very likely end up less secure due to how hard this is.


You can use DVD-Rs to load a WSUS server for Windows or a package mirror for Linux, I’d just be surprised if many airgapped operators were keeping on top of this.
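For the Linux side, a minimal sketch of the stage-and-verify step. The mirror sync itself is shown only as comments since the exact tooling varies by distro; the checksum manifest is what lets the airgapped side verify the media before importing anything. All paths and filenames here are hypothetical.

```shell
# On the connected staging host: sync the package mirror, e.g.
#   dnf reposync --repoid=baseos --download-path=/srv/mirror
# or apt-mirror, then checksum everything before burning to DVD-R.
mkdir -p /tmp/staging && cd /tmp/staging
echo "fake package payload" > example.rpm   # stand-in for real packages
sha256sum example.rpm > MANIFEST.sha256

# On the airgapped host: verify the media against the manifest
# before importing into the WSUS server / local repo.
sha256sum -c MANIFEST.sha256
```

The manifest check is cheap insurance against a bad burn or tampered media, and it gives you an audit artifact for each import.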


This is exactly how it's done in many high-security airgapped environments. Once you get into a rhythm, it's not hard.

