Hacker News | lopsotronic's comments

Mmmm. Not just the worst from a moral perspective - which is still bad! - but also some of the dumbest.

Ours are not the Masters of Industry from the Industrial Age[1], or the fission-missile-kings of the Nuclear Age. They're not ready to teach a Physics unit at a community college.

The tippity top of the uber-wealthy today are remarkably short on actual formal knowledge. This makes sense within their ideological system, which treats scientific acumen as a commodity rather than a value.

In this view, everything should look like the stock market. But this is a profoundly stupid view. It requires not just ideology, but willfully not looking at the universe.

I'm probably steering afoul of about 90% of ycombinator here, so I'll just pull the throttles back and stop there.

[1] "Isambard Kingdom Brunel . . But Got-DAMN did men used to have some proper-ass names" - Achewood


90% seems high. I think there’s a solid chunk of the HN population that is very aware that this industry is run by morons. I, for example, only came to this realization a few years ago. But I believe it’s a growing sentiment. Late stage capitalism / techno feudalism really is a trip.

Living systems - hell, complex systems - don't do "forever" real well. You end up adding a compounding amount of energy over time, a "negentropic tax", as the universe tries to untie that complexity into radiation.

After a while the compounding energy input of the negentropic tax overwhelms the control mechanisms that feed it into the "preserved" system, and it blows up.

It's a common feature across disciplines: content management, biology, programming, maintainability engineering, neural networks, chemical engineering . . I imagine the list is pretty close to boundless. Ha, turns out human knowledge is also a complex natural system.

So I guess what I'm saying is only dead things live forever. Which should say a lot about the internal life of the standard tech/finbro. "I want to be just like I am right this second for all time!"

Speaking personally, I'm always amused by the Eternal Life pitch whether I hear it in church or on the internet. Everyone gets eternal life. We're surrounded by it, we eat it, we poop it out every day. Our grandfathers are in our lungs, old friends in the leaves of trees, giant parts of your brain die every morning as you wake. Eternal Life is not for the selfish. Something that the Bible thumpers could read for themselves, if they bothered to read the thing.


If you can figure out a Gig Economy way to get robot/remote/AI pilots into airline cockpits, you will make a mint. "What? I can save ten bucks on airfare if I accept a robot pilot? GIVE ME THAT TICKET"

A mint we will then need to spend on bribes to ALPA. DoT is almost entirely captured now, so that's less of a problem.

In fact, here's a much better get-rich app / scheme: use AI to find regulatory situations that are both easy to break and profitable to break and where enforcement is usually just done to poor people. The Ubermaker. Why dig a gold mine when you can sell the shovels.


> In fact, here's a much better get-rich app / scheme: use AI to find regulatory situations that are both easy to break and profitable to break and where enforcement is usually just done to poor people.

How about a less cynical alternative: Use it to find ways to defeat regulatory capture so that you can enter a large market which is currently locked up by incumbents, or make more in an ancillary market from doing "commoditize your complement" on the one which is currently captured.


This comment severely lacks second-order thinking. The regulations exist for a reason. Removing them because some billionaire wants to make a buck is not a good reason.

The value proposition of an awful lot of Enterprise Software is evaluated only in hindsight, and on an institutional tidal wave of "Industry Standard!", FOMO, and all-expensed "Technical Forums".

"Good" is optional in the land of the ERP. Or even "not-gut-rippingly awful"

I suspect we're about to see some interesting days in the alphabet soup of PDM, PLM, ERP, MBSE, PIM, DMS, FMEA, CRM, SRM, ILS, IPS, QMS, LSA, TDM . .


I delight in all things that remind me of RSS and am still on the hunt for the killer.

The moment when that age of technology turned - the wiki/blogger/RSS era giving way to the Facebook/Twitter one - seems to have marked some sort of dark horizon in technology and in intellectual discourse and culture.


When the model is asked to show its development-test path in the form of a design document or test document, I've also noticed variance between the document it generates and what the chain-of-thought thingy shows during the process.

The version it puts down into documents is not the thing it was actually doing. It's a little anxiety-inducing. I go back to review the code with big microscopes.

"Reproducibility" is still pretty important for those trapped in the basements of aerospace and defense companies. No one wants the Lying Machine to jump into the cockpit quite yet. Soon, though.

We have managed to convince the Overlords that some teensy non-agentic local models - sourced in good old America and running local - aren't going to All Your Base their Internets. So, baby steps.


No citizen in a nuclear-armed state need learn anything about anyone else, save perhaps about other nuclear-armed states.

The Westphalian system of armed states had its legs chopped out from under it after 1945, but it's taking a while for a new way to materialize.

This is one of the reasons why the Absolute Worst Thing is a nuclear-armed state with uncertain borders. Look around the world and you'll see that the "trouble spots" we spend so much time watching in the news are those places where nuclear powers get to feeling itchy and twitchy about where exactly their countries end.


I think the discussion in recent years has refocused, embracing ethnonatalist implications and challenging the core assertion that "racism is wrong".

My main resistance to that is much the same as yours: the differences are so small that re-architecting society around them is never going to be worth the squeeze.

But one could also argue that the juice is not even the point: by re-architecting society in this way, you "pre-brutalize" your population so that their threshold for violence against "others" is lowered. Thus your population is closer to being wholly militarized, and theoretically is more effective in war, and is less captured by "weak" or "unmanly" moral ideals, such as empathy.

While this might seem a virtue to someone of an expansionist mindset, in application this principle never, ever works well - again, thanks to those tiny differences. If a citizen is pre-brutalized to have a lowered resistance to killing those with curly hair, how long is it before they kill their next door neighbor with wavy hair, over something like lawn furniture?

Pre-brutalizing your populace toward killing any sapiens is enough to brutalize them toward harming everyone else. This is the core of the "imperial boomerang", or colonial boomerang, theory as to why the great wars of the 20th century took on such a nasty character. The ease with which we dehumanized subject populations was - all too easily - redirected against the neighbors, most memorably with Germany trying to re-create the American West to their East.


Dangit, I'll need to give this a run on my personal machine. This looks impressive.

At the time of writing, all DeepSeek and Qwen models are de facto prohibited in govcon, including local-machine deployments via Ollama or similar. Although no legislative or executive mandate yet exists [1], it's perceived as a gap [2], and contracts are already including language prohibiting them not just in the product but in any part of the software environment.

The attack surface for a (non-agentic) model running in local ollama is basically non-existent . . but, eh . . I do get it, at some level. While they're not l33t haXX0ring your base, the models are still largely black boxes, can move your attention away from things, or towards things, with no one being the wiser. "Landing Craft? I see no landing craft". This would boil out in test, ideally, but hey, now you know how much time your typical defense subcon spends in meaningful software testing[3].

[1] See also OMB Memorandum M-25-22 (preference for AI developed and produced in the United States), NIST CAISI assessment of PRC-origin AI models as "adversary AI" (September 2025), and House Select Committee on the CCP Report (April 16, 2025), "DeepSeek Unmasked".

[2] Overall, rather than blacklist, I'd recommend a "whitelist" of permitted models, maintained dynamically. This would operate the same way you would manage libraries via SSCG/SSCM (software supply chain governance/management) . . but few if any defense subcons have enough onboard savvy to manage SSCG let alone spooling a parallel construct for models :(. Soooo . . ollama regex scrubbing it is.

[3] i.e., none at all; we barely have the ability to MAKE anything like software, given the combination of underwhelming pay scales and the fact that defense companies always seem to require 100% on-site in some random crappy town in the middle of BFE. If it weren't for the downturn in tech we wouldn't have anyone useful at all, but we've snagged some silicon refugees.
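To make the whitelist idea in [2] concrete, here's a minimal sketch of what "allowlist plus regex scrubbing" might look like. Everything here is hypothetical: the pattern list, the function names, and the sample model tags are illustrative, not anyone's actual governance policy, and a real deployment would source the allowlist from an SSCG/SSCM process rather than a hardcoded list.

```python
import re

# Hypothetical allowlist of permitted model families. In practice this
# would be maintained dynamically by your supply-chain governance process,
# not hardcoded.
ALLOWED_MODEL_PATTERNS = [
    r"^llama3(\.\d+)?(:.*)?$",
    r"^mistral(:.*)?$",
    r"^granite.*$",
]

def is_permitted(model_tag: str) -> bool:
    """Return True if the model tag matches any allowlisted pattern."""
    return any(re.match(p, model_tag) for p in ALLOWED_MODEL_PATTERNS)

def audit(installed: list[str]) -> list[str]:
    """Return the installed model tags that are NOT on the allowlist."""
    return [m for m in installed if not is_permitted(m)]

if __name__ == "__main__":
    # In practice you'd feed this the tags reported by `ollama list`.
    installed = ["llama3.1:8b", "deepseek-r1:7b", "qwen2.5:14b", "mistral:7b"]
    print(audit(installed))  # flags the deepseek and qwen tags
```

Note this is deny-by-default: anything not matching a pattern gets flagged, which is the whole point of a whitelist over a blacklist of named adversary models.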


To add to some of what others are saying, another problem is the measurement problem.

DAs and police in general are almost universally evaluated based on arrest numbers. Only very rarely on actual crime rates, and never on something as abstract as quality of life or local revenues or property values.

Arrest numbers are probably the wrong dial to be watching if you want to gauge how good law enforcement actually is.



