Yep, his administration took the worst possible approach by waiting so long only to bring these slow milquetoast prosecutions against trump. They should have gone after him and his accomplices immediately, but failing that doing nothing would have been better.
These weak prosecutions did nothing to stop trump and only caused republicans to rally around him.
Hold on there, they have been very explicitly doing the opposite of reducing energy costs. The administration has been aggressively trying to cancel all sorts of energy projects, even projects that have almost been completed. At the same time they've been encouraging as much data center build out as possible. Lowering supply and increasing demand is hardly going to reduce energy costs.
They have managed to significantly lower expectations for global economic growth which brings down energy costs, but that's hardly a sane way to accomplish that goal.
I think it's a little bit more nuanced than what you say and that they generally are trying to increase energy supply while withdrawing from using heavy government subsidies or extensive regulations to pick the winners and losers.
From a demand side, they aren't looking to restrict demand, but want to have ample supply to meet the demand.
If that were true, they would have let incentives for new energy infrastructure (which can be net positive, given energy is a national-level concern) draw down in a way that didn't disrupt/destroy existing investment.
You can alter forward looking policy sensibly in a day. But you can't redline years of cooperative investment on the same day without destroying tremendous value (of the kind you claim to be working toward), credibility and trust.
I am baffled that performative flailing gets interpreted as progress, with such thin narratives.
The deeply counterproductive actions taken ostensibly to increase US investment in manufacturing are more of the same.
The destruction of valuable US research and capabilities, in the name of fiscal responsibility, only to continue fiscal irresponsibility is more of the same.
The destruction of diplomatic and defense alliances and influence, in the name of being stronger, is more of the same.
The private masked army roaming cities, harassing people with low relevance to their purported purpose, in the name of making the country safer: more of the same.
They all involve some truth, and then loud damaging counterproductive execution. Unless loud chaos is value.
I have deliberately moderated my use of AI in large part for this reason. For a solid two years now I've been constantly seeing claims of "this model/IDE/Agent/approach/etc is the future of writing code! It makes me 50x more productive, and will do the same for you!" And inevitably those have all fallen by the wayside and been replaced by some new shiny thing. As someone who doesn't get intrinsic joy out of chasing the latest tech fad, I usually move along and wait to see if whatever is being hyped really starts to take over the world.
This isn't to say LLMs won't change software development forever, I think they will. But I doubt anyone has any idea what kind of tools and approaches everyone will be using 5 or 10 years from now, except that I really doubt it will be whatever is being hyped up at this exact moment.
HN is where I keep hearing the “50× more productive” claims the most.
I’ve been reading 2024 annual reports and 2025 quarterlies to see whether any of this shows up on the other side of the hype.
So far, the only company making loud, concrete claims backed by audited financials is Klarna, and once you dig in, their improved profitability lines up far more cleanly with layoffs, hiring freezes, business simplification, and a cyclical rebound than with Gen-AI magically multiplying output. AI helped support a smaller org that had eliminated its more complicated, edge-case-heavy financial products, but it didn't create a step-change in productivity.
If Gen-AI were making tech workers even 10× more productive at scale, you’d expect to see it reflected in revenue per employee, margins, or operating leverage across the sector.
I have friends who make such 50x productivity claims. They are correct if we define productivity as creating untested apps and games and their features that will never ship --- or be purchased, even if they were to ship. Thus, “productivity” has become just another point of contention.
100% agree. There are far more half-baked, incomplete "products" and projects out there now that it is easier to generate code. Generously, that doesn't necessarily equate to productivity.
I agree that the last 10% of a project is the hardest part, and that's the part that Gen-AI sucks at (hell, maybe the last 30%).
> If Gen-AI were making tech workers even 10× more productive at scale, you’d expect to see it reflected in revenue per employee, margins, or operating leverage across the sector.
If we’re even just talking a 2x multiplier, it should show up in some externally verifiable numbers.
I agree, and we might be seeing this but there is so much noise, so many other factors, and we're in the midst of capital re-asserting control after a temporary loss of leverage which might also be part of a productivity boost (people are scared so they are working harder).
The issue is that I'm not a professional financial analyst and I can't spend all day on comps so I can't tell through the noise yet if we're seeing even 2x related to AI.
But, if we're seeing 10x, I'd be finding it in the financials. Hell, a blind squirrel would, and it's simply not there.
Yes, I think there are many issues in a big company that could hide a 2x productivity increase for a little while. But I'd expect it to be very visible in small companies and projects. Looking at things like the number of games released on Steam, new products launched on new-product sites, or issues fixed on popular open source repos, you'd expect a 2x bump to be visible.
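As a rough back-of-the-envelope (all numbers below are made up, just to show the kind of signal a real 2x would leave in something as crude as revenue per employee):

```python
# Back-of-the-envelope only: every number here is invented for illustration.
# The point is what a genuine 2x boost to engineering output ought to do to
# revenue per employee, all else held roughly equal.

def revenue_per_employee(revenue: float, headcount: int) -> float:
    return revenue / headcount

REVENUE = 500_000_000   # hypothetical annual revenue
HEADCOUNT = 2_000       # hypothetical headcount

baseline = revenue_per_employee(REVENUE, HEADCOUNT)  # $250k per employee

# Assume half the staff are engineers, their output doubles (2x), and only
# half of that extra output actually converts into sellable product.
engineer_share = 0.5
output_gain = 1.0        # +100%, i.e. the claimed 2x
conversion = 0.5
uplift = 1 + engineer_share * output_gain * conversion   # = 1.25

boosted = revenue_per_employee(REVENUE * uplift, HEADCOUNT)  # ~$312k per employee

print(f"baseline: ${baseline:,.0f}, with a real 2x: ${boosted:,.0f} per employee")
# Even with generous discounting, that's a ~25% jump, which would be hard to
# hide in annual reports across a whole sector. A 10x claim would be impossible to miss.
```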
Perhaps. I've had LLMs tell me some code is deeply flawed garbage that should be rewritten about code that exact same LLM wrote minutes before. It could be a sign of deep meta cognition, or it might be due to some cognitive gaps where it has no idea why it did something a minute ago and suddenly has a different idea.
This is not a fair criticism. There is _nobody_ there, so you can't be saying 'code the exact same LLM wrote minutes before'. There is no 'exact same LLM' and no ideas for it to have, you're trying to make sense of sparkles off the surface of a pond. There's no 'it' to have an idea and then a different idea, much less deep meta cognition.
I'm not sure we disagree. I was pushing back against the idea that suggesting a rewrite of some code implies meta cognition abilities on the part of the LLM. That seems like weak evidence to me.
This keeps me away from these sorts of stores if I can avoid them, which is pretty much always (so far, anyway). I would be absolutely shocked if the error rate was comparable to a normal checkout process and I don't want to waste the cognitive overhead of either wondering how much I'm getting ripped off by a corporation or having to go back and review and try to resolve overcharges.
Right, if a future democratic president starts sending masked government thugs out to assault and kidnap American citizens we all know that 100% of the people who are defending the current ICE atrocities will suddenly be outraged about government tyranny.
A surprising number of people seem to genuinely believe law enforcement (generally, not just police) is at its core based on discretionary actions guided by their moral values, and not a morally neutral action of upholding agreed-upon contracts.
That is to say, the law only applies to you if you do "bad" things. And I'll be honest, there is a level of truth to this for me. From a practical standpoint, it is infeasible to formally understand every nuance of every law ever created just to be a citizen. The underlying core social contract does appear to be one of "if you do 'good' things, the law will generally agree with you, and if it doesn't, we won't hold it against you the first time."
*The important caveat here is that this leaves a disgustingly large and exploitable gap in what is considered good vs. bad behavior, with some people holding biases that can spin any observable facts into good or bad based on their political agenda. Additionally, personal biases, racism for example, influence this judgement so that your actions get value-judged in superficial ways.
Which is why it's backwards and makes no sense that we allow / cater to "well, nothing said I couldn't do that" as a reasonable defense. The value judgement system should go both ways. Then a lot less would need to be written down to begin with, because it wouldn't be an arbitrary set of rules on every front but the codification of a specific value judgement system, with clarifications on how to align yourself to it.
We really shouldn't be allowing things like "this is a location dedicated to peace and non-violence" while section 32 subsection C part 2 (a) says "we can kick the shit out of you if you photograph the premises". Just a random made-up example for communication purposes, but it applies to all sorts of things. Personally, I think it should apply to social media. There was an implied sense of privacy to it, that people could not see my information if I did not approve it - and then the fine print says except for the company running the page, which can sell the information to whoever it wants. Like, WTF was that about? I won't say it's an ignored thing, there's plenty of outrage over it - but I think it's incredibly fundamental to what's going wrong and to feeding this information overload in a dangerous / stressful way.
Companies shouldn't need 10 pages of TOS to say all the obvious things, and appealing to the idea that only what's written down is what matters shouldn't allow just any arbitrary set of things to be written down and called reasonable.
Less about value judgements. More about outsourcing to people/brands we trust.
When it comes to software licenses, we aren't lawyers, so the informed people will use a primer created by a trusted 3rd party. Maybe GitHub's "which license is right for me?" page.
Who to vote for in local elections is usually decided via one of the following: (1) I know/met the person, (2) I trust the party they affiliate with, (3) I trust the newspaper/news source which recommended them.
Academic papers are usually thick, long, and inaccuracies are difficult for anyone not in that field of expertise (or something relevant like statistics) to identify. Most people require an overview of the article by an expert. Hopefully (but unlikely) they can choose one which is impartial / minimally biased and who can give an opinion on how definitive or significant the findings are.
The last 2 decades have been spent with companies learning to exploit this. For example, every large tech business would prefer all your code was MIT/BSD and they have spread advice to this effect.
The other caveat is if you're a historically persecuted minority group, then those assumptions toward law enforcement don't usually apply. And now the political opposition to the current US administration is also feeling that way.
I have never considered this perspective, but this fits very well with people's actions. Thank you for sharing.
To me, the system of codified law and courts makes intuitive sense, and most people misunderstand or abuse the system. But other people's intuitive understanding of the law, as you described, is a much easier way to think about it and actually IS a rough approximation of what the system does.
The bigger caveat here is when some people can do "bad" things but the law doesn't apply to them. This breaks the social contract and exposes the law as a tool for the powerful to control the masses (this is still true regardless, but by not doing it blatantly, the contract can still be somewhat upheld).
In an ideal world, when this happens, there should be anarchy until a new government, one that upholds the law equally for everybody, is established. But we don't live in an ideal world.
Honestly, and I say this without a shade of irony, it might be for the best if the collective 'we' stopped attempting to re-enact fictional events and living in alternate worlds. It would do everyone, and I do mean everyone, a world of good to just stop and think about what they are doing and the likely course of events given their actions.
It would be orders of magnitude more productive if we did that.
I’m saying people should watch a powerful series about state violence and masking with real-world lessons that can be taken away. I’m unsure what you mean by how we shouldn’t re-enact fictional events. Are you talking about my suggestion? Or are you saying we should end acting? Or is it something else?
Apologies. I may have come on too strong, possibly because I do it myself sometimes, referencing shows as a means to convey a relatable message to the audience. Lately, however, I have started to think that the shorthand those references introduce may be more of a problem than not. I am not even familiar with the particular show you are referencing.
I think my concern was that we think too much in terms of popular culture. That itself is a problem. Still, as problems go, it is not urgent. Hence my apology.
All good. I would say Watchmen (the show) is one of those ones in particular that is truly above the noise however. I even hesitate to call it “pop culture.” It’s “high art” if I can be pretentious about it, and a powerful mirror for us.
At a time when state violence in the US is brandished so loudly and proudly, it feels like a very important piece of media for folks to watch.
They are acting with the expectation that Democrats are too spineless to do anything, because that's all they have seen their entire lives, and they are probably right.
Yeah, I also expect they are correct in that assumption. If history is any guide, Dems will take very few, if any, concrete actions to correct these wrongs if/when they ever get back into power again. I'm sure they'll give some rousing speeches and press conferences, though.
What should happen is that everyone who is flagrantly violating the law and looting the federal govt right now should be quickly and aggressively prosecuted. Real concrete legislative reforms should be enacted to limit future corruption and dangerous adventurism by demented leaders.
I don't think there's any question at this point that it's in Nordic self interest to develop a nuclear deterrent. This has also become true for other regions in the world.
This is all a horrible development for the overall future of humanity, but it's the world we live in now. At a minimum hundreds of billions of dollars will be siphoned off from more beneficial uses over the coming decades, and the risk of major accidents will increase. The worst change is of course the fact that the odds of a complete societal collapse have increased dramatically.
Almost all of the world's nukes are controlled by aging old dictators or aspiring dictators who are surrounded by sycophants and treat competence as much less important than personal loyalty. Geopolitical risks are only going to increase as these rulers become more erratic and demented.
> I don't think there's any question at this point that it's in Nordic self interest to develop a nuclear deterrent.
Yes, it definitely is.
> The worst change is of course the fact that the odds of a complete societal collapse have increased dramatically.
A nuke means that anyone who wants to invade you needs to price in a total loss of their largest city as a possible outcome.
That is a great disincentive, one that Ukraine probably wishes it had against Russia.
> and the risk of major accidents will increase.
I don't think that's reasonable to say about a bunch of countries getting their first nuke.
The concern should be more with countries like the US and Russia that have so many nukes, which they can't possibly use effectively, and don't have the ability to properly maintain.
If every western country had exactly one nuke, the world would probably be much safer than if the US has all of them.
> A nuke means that anyone who wants to invade you needs to price in a total loss of their largest city as a possible outcome. That is a great disincentive, one that Ukraine probably wishes it had against Russia.
It's even more complex than that. If Ukraine responded to conventional war with nukes, it could be sure Russia would retaliate with even more nukes, practically extinguishing its statehood.
The equilibrium is reached when the exchange is equally devastating, so the only winning move is attacking first, and only if the attacked party won't be able to retaliate. The Cold War never ended, it just warmed a little, because there is not (yet) a guaranteed way to avoid an all-out nuclear retaliation.
Russia didn’t start this war with the intention of getting into a protracted slugging match over 20% of Ukraine - they got into it for the whole thing.
Luckily Ukraine beat back the drive on Kyiv. But if Russia’s success metric at the outset of the war (the complete capitulation and conquest of Ukraine) had carried a credible risk of losing Moscow, or even smaller cities closer to the front, would they have been anywhere near as likely to make such an attempt?
Russia did not start this war after a rational and accurate assessment of reality.
Why do you believe they would rationally and accurately assess nuclear war probabilities?
The entire problem is that these leaders are fucking nuts, and surrounded by people who cannot defect from sycophancy to burst the stupidity bubble and bring people back to reality.
What would have saved Ukraine is actual support.
Arguably, Ukraine's best bet would have been to have substantial independent oil reserves that it could not tap alone. The USA would have "liberated" them years ago. Hell, Trump is literally going this direction now, demanding "mineral rights" to do what we should be doing already.
Re-read what you wrote. That's exactly what this war is about: who gets to control a colony. And from that angle, the US went from having 0% of Ukraine as its colony to having 75%, including all mineral rights. At this point continuing the war is too expensive, which is why the US and Russia want to just stop. Europe keeps jamming up the gears though, because they got a terrible deal.
That was the etymology, given that the world we were emerging from was one where major world powers came directly to blows amongst themselves rather than fighting through the countless small-scale, regional proxy wars we saw over the second half of the 20th century.
It's not up to me. So I'm not "letting" or not "letting" anyone do anything.
I was stating what I believe to be a true counter-factual. If every western country had 1 nuke, the world would be safer than if a single country has all the nukes.
The west is also not "my side". I have no stake in most western countries, and their success or failure is not something I feel as part of my day-to-day.
I'm glad there is more than one, so if something goes wrong I can go to another one.
The west gets special treatment because it is filled with prosperous democracies. Democracies are relatively stable, and rarely do things outside their Overton windows, like launching a nuclear weapon unprovoked.
Prosperity is what makes people peaceful. Prosperous people have more to lose. No one in the west wants to backslide towards a state of nature because an invasion or unprovoked conflict went the wrong way.
I am not convinced that the likes of Putin or Trump would care about the total destruction of their largest city, so long as they weren't there at the time.
The vast majority of users make zero changes to the default settings of an app or device, even for software they use all the time and where some simple builtin adjustments would significantly improve their experience.
I simply can't imagine a world where these same people all decide they constantly want to learn a completely unique UX for whatever piece of software they want to use.
Users will not fumble with the complex web of nested settings that engineers wet dream about.
But they will tell the LLM "I'd really like it if the tool bar only had the hammer and saw tools", and it will be done.
I cannot see software going in any other direction than a blank front end that users prompt LLMs to run scripts on top of.
Picture MS Word where the GUI is just a page and a sidebar for telling an LLM what you want it to do. And if it's not possible, the LLM could even write extensions and plugins that make it possible.
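Something like this minimal sketch is what I have in mind; the Toolbar class and tool names are hypothetical, just to show the shape of the tool calls the LLM would emit, not any real editor's API:

```python
# Hypothetical sketch of the "blank front end + LLM sidebar" idea. Nothing
# here is a real word processor API; it's made up to illustrate the shape
# of the UI-editing tools the model would be given.

from dataclasses import dataclass, field

@dataclass
class Toolbar:
    buttons: list = field(default_factory=lambda: ["bold", "italic", "hammer", "saw", "font"])

    def set_buttons(self, names):
        """Tool the LLM can call: replace the visible controls wholesale."""
        self.buttons = list(names)

    def add_button(self, name):
        """Tool the LLM can call: add one control, e.g. a font picker."""
        if name not in self.buttons:
            self.buttons.append(name)

toolbar = Toolbar()

# User: "I'd really like it if the tool bar only had the hammer and saw tools"
toolbar.set_buttons(["hammer", "saw"])

# User, later: "add a font picker so I can change fonts"
toolbar.add_button("font-picker")

print(toolbar.buttons)   # ['hammer', 'saw', 'font-picker']
```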
> Picture MS Word where the GUI is just a page and a sidebar for telling an LLM what you want it to do.
Done. And it seems absolutely awful.
"Please bold the text I have selected" instead of a preexisting bold button.
Oh wait I can just tell it all the tools I commonly use and where to put them... Hmmm topbar or side bar. Wow so much fun getting to make all these decisions!
Ok time to change fonts. "Please add a font picker so I can pick a font"
All the people may not, but a decently skilled software engineer armed with an LLM, who doesn't have a lot of free time, might now be motivated to do it, whereas before it was like, "This thing is going to take months to replace, do I really want to write my own?"
The LLM will know how the user operates, their proclivities and brain structure, and will design UX perfectly suited to them, like a bespoke glove. They won't have to learn anything, it will be like a butler.
Years ago I did a lot of driving around rural Latin America and it could not have been more different from a US city. Official traffic rules were almost non-existent in many areas but the informal ones that had evolved worked shockingly well. Like a cramped two way street might only have room for one car in spots, but there would be a pattern for pulling over and letting opposing traffic pass.
Things like that would probably break down at a certain level of crowded-ness, but it did somewhat change my view of regulation in general. I think there are a lot of cases where people will figure things out just fine if you leave them alone and count on them to be responsible, versus having a million detailed rules that are poorly enforced.