
For production Postgres, I would assume it has almost no effect?

If someone is running Postgres in a serious backend environment, I doubt they are using Ubuntu or even touching 7.x for months (or years). It’ll be some flavor of Debian or Red Hat still on 6.x (maybe even 5?). Those same users won’t touch 7.x until there has been months of testing by distros.


Ubuntu is used in many serious backend environments. Heroku runs tens of thousands (if not more) instances of Ubuntu on its fleet. Or at least it did through the teens and early 2020s.

https://devcenter.heroku.com/articles/stack


Do they upgrade to the new LTS the day it is released?

Ubuntu's upgrade tools wait until the .1 release for LTSes, so your typical installation would wait at least half a year.
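
The gating lives in `/etc/update-manager/release-upgrades`; with the default `Prompt=lts` on an LTS install, `do-release-upgrade` won’t offer an LTS-to-LTS upgrade until the new release’s .1 point release:

```
# /etc/update-manager/release-upgrades (default contents on an LTS install)
[DEFAULT]
# "lts"    - only offer upgrades to new LTS releases (LTS-to-LTS upgrades
#            are only offered once the .1 point release is out)
# "normal" - offer upgrades to any newer release
# "never"  - never offer a release upgrade
Prompt=lts
```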

Not historically.

And they are right. This is because a lot of junior sysadmins believe that newer = better.

But the reality:

  a) may get irreversible upgrades (e.g. new underlying database structure) 
  b) permanently worse performance / regressions (e.g. iOS 26)
  c) added instability
  d) new security issues (litellm)
  e) time wasted migrating / debugging
  f) may need rewrite of consumers / users of APIs / sys calls
  g) potential new IP or licensing issues
etc.

A few of the reasons to upgrade something are:

  a) new features provide genuine comfort or performance upgrade (or... some revert)
  b) there is an extremely critical security issue
  c) you do not care about stability because reverting is uneventful and production impact is nil (e.g. Claude Code)
But 99% of the time: if it ain't broke, don't fix it.

https://en.wikipedia.org/wiki/2024_CrowdStrike-related_IT_ou...


On the other hand, I suspect LLMs will dramatically decrease the window between a vulnerability being discovered and that vulnerability being exploited in the wild, especially for open-source projects.

Even if the vulnerability itself is discovered by means other than an LLM, it's trivial to ask a SOTA model to "monitor all new commits to project X and decide which ones are likely patching an exploitable vulnerability, and then write a PoC." That's a lot easier than finding the vulnerability itself.

I won't be surprised if update windows (for open source networked services) shrink to ~10 minutes within a year or two. It's going to be a brutal world.
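
The commit-triage loop described above can be sketched in a few lines. This is purely illustrative: the actual "ask a SOTA model" step is stubbed out with a keyword heuristic, and all names here (`Commit`, `triage`, the keyword list) are invented for the example, not any real tool's API.

```python
# Hypothetical sketch of monitoring commits for silent security fixes.
# In practice the classifier below would be replaced by a call to a
# model API that reads the full diff and returns a verdict.
from dataclasses import dataclass

@dataclass
class Commit:
    sha: str
    message: str
    diff: str

# Words that often show up in quietly-patched vulnerabilities.
SUSPICIOUS = ("overflow", "sanitize", "bounds check", "use-after-free",
              "validate input", "escape", "cve")

def looks_like_security_fix(commit: Commit) -> bool:
    """Placeholder for 'ask a model whether this commit patches an
    exploitable vulnerability'."""
    text = (commit.message + " " + commit.diff).lower()
    return any(word in text for word in SUSPICIOUS)

def triage(commits: list[Commit]) -> list[str]:
    """Return SHAs worth escalating (to a human, or to PoC generation)."""
    return [c.sha for c in commits if looks_like_security_fix(c)]
```

The point is how little glue is needed: poll the repo, feed each new commit through the classifier, and alert on hits, which is exactly why the discovery-to-exploit window shrinks.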


Too often I see IT departments use this as an excuse to only upgrade when they absolutely have to, usually with little to no testing in advance, which leaves them constantly being back-footed by incompatibility issues.

The idea of advanced testing of new versions of software (that they’ll be forced to use eventually) never seems to occur, or they spend so much time fighting fires they never get around to it.


All fair points. On the other hand, as a general rule, isn't it important to stay on currently supported versions of the software that you run?

YMMV, but in my experience, projects like PostgreSQL that have been reliable tend to continue to be so.


There is serious as in "corporate-serious" and serious as in "engineer-serious".

I’ve seen more 5k+-core fleets running Ubuntu in prod than not, in my career. Industries include healthcare, US government, US government contractor, marketing, finance.

In other words, the industries that used to run Windows before?

I'd say about 2/3 of the places I've worked started on Linux without a Windows precedent other than workstations. I can't speak for the experience of the founding staff, though; they might have preferred Ubuntu due to Windows experience--if so, I'm curious as to why/what those have to do with each other.

That said, Ubuntu in large production fleets isn't too bad. Sure, other distros are better, but Ubuntu's perfectly serviceable in that role. It needs talented SRE staff making sure automation, release engineering, monitoring, and de/provisioning behave well, but that's true of any you-run-the-underlying-VM large cloud deployment.


A customer of mine is running on Ubuntu 22.04 and the plan is to upgrade to 26.04 in Q1 2027. We'll have to add performance regression to the plan.

Are you running ARM servers?

Tool calls (particularly fetches for context) eat into the context window heavily. I explicitly send MCP calls to sub-agents because they are so “wordy”.

Everyone who has not hit this bug thinks it’s user error… It’s not. It happened to me a few days ago, and the speed at which I tore through my 5 hour usage cap was easily 10x faster than normal.

Also: sub agents do not get you free usage. They just protect your main context window.


I'm on Max. This morning, just to test, before doing anything else whatsoever, I was at 0%, and I typed 'test one two three' into CC.

That put me at 12%.

I have no MCPs except the built in claude-in-chrome.

This is clearly a bug.


Reading through this thread, it seems likely this is a KV cache "bug". They're likely doing too many evictions of the LLM cache, so the context is being reloaded too often.

It's a "bug" because it's probably an intended effect of capping the cost of compute, but it surfaces the fact that they oversold compute to the point where they can't keep the KV cache hot, and now it's thrashing.
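
To see why eviction pressure alone could explain the blow-up, here is a toy LRU cache model (illustrative only, not Anthropic's actual serving architecture): once live sessions outnumber cache slots, round-robin access is the worst case for LRU and the hit rate collapses, so every request pays for a full re-prefill.

```python
from collections import OrderedDict

def hit_rate(num_sessions: int, cache_slots: int, requests: int) -> float:
    """Simulate round-robin sessions sharing an LRU KV cache.

    Each request either reuses a cached prefix (hit) or must re-prefill
    the whole context (miss).
    """
    cache = OrderedDict()  # session id -> cached prefix (value unused)
    hits = 0
    for i in range(requests):
        session = i % num_sessions
        if session in cache:
            hits += 1
            cache.move_to_end(session)     # refresh LRU position
        else:
            cache[session] = True
            if len(cache) > cache_slots:
                cache.popitem(last=False)  # evict least recently used
    return hits / requests

# Plenty of slots: after warm-up, every request is a cache hit.
print(hit_rate(num_sessions=8, cache_slots=16, requests=1000))   # 0.992
# Oversold: each session is evicted before it returns, so every
# request misses and re-prefills.
print(hit_rate(num_sessions=32, cache_slots=16, requests=1000))  # 0.0
```

Note the cliff: it isn't a gradual degradation but a flip from ~100% hits to 0% once demand exceeds capacity, which matches the "suddenly burning 10x usage" reports above.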


Caching helps them too, so I hope they fix it.

Don't they consume less of the token quota in case the subagents are running cheaper models like Sonnet and Haiku compared to Opus?

Correct—I just wouldn't want folks to mistakenly think that the context fill % corresponds 1:1 with session token use.

Yes, sorry. I meant it more as a descriptor of how many tokens it consumes. You are still stuck burning money.

In the past it had less to do with seizing the vessels and more to do with keeping financial flows between organizations offering shipping services and oil hidden from the banking system. America could have easily seized any ship they wanted to during the sanctions over the past decade. They didn't because the sanctions are American constructs: they don't apply on the open seas where UNCLOS matters. America can still seize them, but the legality is murky and comes with a reputational cost.

Now with Hormuz closed, America needs every last oil barrel moving so the economy doesn’t grind to a halt. Remember, it’s a war of choice for the US. We don’t need Iran gone as much as we want low oil prices.


> the sanctions are American constructs: they don't apply on the open seas where UNCLOS matters

Technically correct. But the way these countries evade U.S. sanctions is by flying false or no flag. That, in turn, makes them vulnerable under UNCLOS's anti-piracy rules.


No flag is rare because that immediately opens them to anti-piracy.

But coming back to my original point: it isn’t America’s determination that a registration is fraudulent. It is the flag state’s.


> it isn’t America’s determination that a registration is fraudulent. It is the flag state’s.

Sort of. If there is no flag, it's America's determination. And in many of the seizure cases, the flag state confirmed a fraudulent registration. (I believe there was one around Venezuela falsely registered with Panama.)


I’m not an AF vet so I don’t have an idea, but what’s the over-under that the US injuries were the crew trying to get the plane ready to fly after the alert came in? I think the number of injuries lines up closely with the expected crew complement.


Oh this is a good article. Thanks for that.

Towards the bottom they list some satellite imagery and a statement indicating they are possibly using the taxiways as parking.

Still leaves open the question of who might have been injured and where, but at least answers how the Iranians could have possibly hit a taxiing plane — they didn’t.


The Oklahoman and Jerusalem Post are also reporting it was a total write off.

Also adding: the spike in 2008 was transient and partially juiced by a weak dollar. Unfortunately, we will probably get no respite this time around.

At the current geopolitical trajectory, I also doubt $147 is anywhere near the limit of where oil is going.


I got what you were saying. I read it the same way. I’m sorry for your loss.

No one leaves this planet alive, and the best you can hope for is that the majority of your time is spent relatively healthy and independent.


I don’t disagree with any of your assessments, but I don’t know if it’s a bigger mistake than Iraq…yet. That war was a 10-year (longer if you count ISIS) debacle that cost trillions.

Let’s wait a few years before saying this mistake is bigger first.

However, one point I agree with that might lead to this war being worse: the Gulf states are showing some serious buyer’s remorse about sticking with the US orbit. Both the uselessness of America’s strategy and the almost overt prejudice Trump shows towards the Arabs vs Israel in the decision tree of this conflict are unsettling for the Gulf states.


Good write up. Not to mention the other big HR constraints on DoD engineers: they almost always have to be a “US person.”

Anyone who gets a CAC working on a personal computer deals with this all too much. The root certs DoD uses are not part of the public trusted sources that commonly come installed in browsers.
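
The usual workaround on Linux is importing the DoD roots into the NSS database that Chrome/Chromium read. A sketch, assuming you have already fetched the DoD root CA in PEM form (the file name and nickname here are illustrative; Firefox uses its own separate store):

```shell
# Import a DoD root into the per-user NSS database as a trusted CA.
certutil -d sql:$HOME/.pki/nssdb -A -t "CT,C,C" \
         -n "DoD Root CA 3" -i DoD_Root_CA_3.pem

# Verify the certificate was added:
certutil -d sql:$HOME/.pki/nssdb -L | grep "DoD Root CA 3"
```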


lol I very nearly included a rant about that but decided it was too far off topic. Not being able to smoke weed may be more of an obstacle these days though.


I haven’t touched a Palantir system since 2008 (and I still feel dirty) so I’m not the most read up on this: but Maven is just the harness or workflow tool. It still needs an LLM to evaluate data, and they used Anthropic for the kickoff of this war.

