Hacker News | agentultra's comments

It sounds like most of the data centers promised in 2025 and 2026 are not even built yet and most of the GPUs bought haven't even been installed.

If it does all go down in flames, even the floor value isn't going to be worth much.

I can't predict the future, but it smells a lot like a recession is already under way, one bigger than the sub-prime crash.


I think it will wall people off from software.

I don’t know what SaaS has to do with FOSS. The point of FOSS was to allow me to modify the software I run on my system. If the device drivers for some hardware I depend on are no longer supported by the company I bought it from, if it’s open source, I can modify and extend the software myself.

Copyleft licenses ensure that I share my modifications back if I distribute them. It’s a thing for the public good.

Agent-based software development walls people off from that. Mostly by ensuring that the provenance of the code it generates is not known and by deskilling people so that they don’t know what to prompt or how to fix their code.


The LLM didn’t make a compiler. It generated code that could plausibly implement one. Humans made the compilers it was trained on. It took many such examples and examples of other compilers and thousands of books and articles and blog posts to train the model. It took years of tweaking, fitting, aligning and other tricks to make the model respond to queries with better, more plausible output. It never made, invented, or reasoned about compilers. It’s an algorithm and system running on a bunch of computers.

The C compiler Anthropic got excited about was not a “working” compiler in the sense that you could replace GCC with it and compile the Linux kernel for all of the target platforms it supports. Their definition of “works” was that it passed some very basic tests.

Same with the SQLite translation from C to Rust. Gap-ridden, poorly specified English prose is insufficient, even with a human in the loop iterating on it. The Rust version is orders of magnitude slower and uses far more memory. It’s not a drop-in Rust-native replacement for SQLite; it’s something else, if you want to try that.

What mechanism in these systems is responsible for guessing the requirements and constraints missing in the prompts? If we improve that mechanism will we get it to generate a slightly more plausible C compiler or will it tell us that our specifications are insufficient and that we should learn more about compilers first?

I’m sure it’s possible that there are cases where these tools can be useful. I’m not sure this is it, though. AGI is purely hypothetical. We don’t simulate a black hole inside a computer and expect gravity to come out of it. We don’t simulate the weather systems on Earth and expect hurricanes to manifest from the computer. Whatever bar the people selling AI systems have for AGI is a moving goalpost, a gimmick, a dream of potential to keep us hooked on what they’re selling right now.

It’s unfortunate that the author nearly hits on why but just misses it. The quotes they chose to use nail it. The blog post they reference nearly gets it too. But they both end up giving AI too much credit.

Generating a whole React application is probably a breath of fresh air. I don’t doubt anyone would enjoy that and marvel at it. Writing React code is very tedious. There’s just no reason to believe that it is anything more than it is or that we will see anything more than incremental and small improvements from here. If we see any more at all. It’s possible we’re near the limits of what we can do with LLMs.


  > Writing React code is very tedious.
slightly off-topic perhaps, but it makes me wonder: if it's so tedious, how did it catch on in the first place...

i feel like llms are abstracting away that tedium sometimes yes, but i feel it's probably because the languages and frameworks we use aren't hitting the right abstractions and are too low level for what we are trying to do... idk just a thought


I wouldn’t call what LLMs are doing an abstraction. They generate code. You just don’t have to write it. It can feel like it’s hiding details behind a new, precise semantic layer… but you’ll find out once the project gets to a certain size that is not the case: the details absolutely matter and you’ll be untangling a large knot of code (or prompting the AI to fix it for the seventh time).

It’s a good thought, and I tend to think this is how I would actually become more productive: better languages that give us the ability to write better abstractions. Abstractions should provide us with new semantic layers that lose no precision and encapsulate lots of detail.

They shouldn’t require us to follow patterns in our code and religiously generate boilerplate and configuration. That’s indirection and slop. It’s wasted code, wasted effort, and is why I find frameworks like React to be… not pleasant to use. I would rather generate the code that adds a button. It should be a single expression but for many reasons, in React, it isn’t.


"I’m sure it’s possible that there are cases where these tools can be useful. I’m not sure this is it though."

You are arguing against the internet, motor cars and electricity.

It's like 1998, and you're saying: "I'm sure it's possible there are cases where the internet can be useful. I'm not sure it is though"

On 'hackernews' of all places.

It's pretty wild to see that, and I think it says something about what hn has become (or maybe always was?).


Humans learned from prior art, and most of their inventions are modifications of prior art. You are, after all, mostly a biological machine.

The point is - there are so many combinations and permutations of reality, that AI can easily create synthetically novel outcomes by exploring those options.

It's just wrong to suggest that 'it was all in some textbook'.

"There’s just no reason to believe that it is anything more than it is"

It's almost ridiculous at face value, given that millions of people are using it every day for more than 'helping to write react apps'.

It's far more likely that you've come to this conclusion because you're simply not using the tools creatively, or trying to elicit 'synthetic creativity' out of the AI, because it's frankly not that hard, and the kinds of work that it does goes well beyond 'automation'.

This is not an argument, it's the lived experience of large swaths of individuals.


One area where it may end up leaving you behind is if you’re looking for a job right now. There are a lot of companies putting vibe coding in their job requirements. The more companies that do this the harder it will be to find employment if you’re not adopting this tool/workflow.

We do have such detailed specifications. But they are written in a language with a narrow interface. It’s a technique called “program synthesis,” and Synquid is an example of such a language.

It might be illuminating to see what a mathematically precise specification can and cannot do when it comes to generating programs. A major challenge in formal methods is proving that the program implements the specification faithfully; this is known as the specification gap. If you have a very high level and flexible specification language, such as TLA+, there is a lot of work to do to verify that the program you write meets the specification you wrote. For something like Synquid, which is closer to the code, there are more constraints on expressivity.

The point is that spoken language is not sufficiently precise to define a program.

Just because an LLM can fill in plausible details where sufficient detail is lacking doesn’t indicate that it’s solving the specification gap. If the program happens to implement the specification faithfully you got lucky. You still don’t actually know that’s true until you verify it.

It’s different with a narrow interface though: you can be very precise and very abstract with a good mathematical system for expressing specifications. It’s a lot more work and requires more training to do than filling in a markdown file and trying to coax the algorithm into outputting what you want through prose and fiction.
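To make the specification gap concrete, here is a minimal sketch in Python (the toy functions and the weak "spec" are my own illustration, not drawn from any of the tools mentioned): a specification that only demands the output be ordered is satisfied by a plainly wrong implementation, and you only catch it by also demanding the output be a permutation of the input.

```python
def is_ordered(xs):
    """A weak 'specification': every element is <= its successor."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

def bad_sort(xs):
    """Plausible-looking but wrong: it discards the input entirely."""
    return []

def faithful_spec(inp, out):
    """A stronger spec: ordered AND a permutation of the input."""
    return is_ordered(out) and sorted(inp) == sorted(out)

inp = [3, 1, 2]
assert is_ordered(bad_sort(inp))              # the weak spec accepts bad_sort
assert not faithful_spec(inp, bad_sort(inp))  # the stronger spec rejects it
```

The missing permutation clause is exactly the kind of detail a generator can only guess at if the prose spec never states it.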


This works well for problems that are purely algorithmic in nature. But problems often have solutions that don't fall into those categories, especially in UI/UX. When people tell me that LLMs can solve anything with a sufficiently detailed spec, I ask them to produce such a spec for Adobe Photoshop.

I think the worst case is actually that the LLM faithfully implements your spec, but your spec was flawed. To the extent that you outsource the mechanical details to a machine trained to do exactly what you tell it, you destroy or at least hamper the feedback loop between fuzzy human thoughts and cold hard facts.

Unfortunately even formal specifications have this problem. Nothing can replace thinking. But sycophancy, I agree, is a problem. These tools are designed to be pleasing, to generate plausible output; but they cannot think critically about the tasks they're given.

Nothing will save you from a bad specification. And there's no royal road to knowing how to write good ones.


Right, there’s no silver bullet. I think all I can do is increase the feedback bandwidth between my brain and the real world. Regular old stuff like linters, static typing, borrow checkers, e2e tests… all the way to “talking to customers more”

This might work on small, self contained projects.

No side effects is a hefty constraint.

Systems tend to have multiple processes all using side effects. There are global properties of the system that need specification and tests are hard to write for these situations. Especially when they are temporal properties that you care about (eg: if we enter the A state then eventually we must enter the B state).

When such guarantees involve multiple processes, even property tests aren’t going to cover you sufficiently.
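For reference, the temporal property mentioned above ("if we enter the A state then eventually we must enter the B state") is a classic liveness property; in linear temporal logic, as used by model checkers, it reads:

```latex
% "A leads to B": at every point in the execution, if A holds,
% then B must hold at that point or some later one.
\Box\,(A \Rightarrow \Diamond B)
% TLA+ abbreviates this as A ~> B ("A leads to B").
```

No finite set of unit tests can establish this over all interleavings of multiple processes, which is why model checking earns its keep here.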

Worse, when it falls over at 3am and you’ve never read the code… is the plan to vibe code a bug fix right there? Will you also remember to modify the specifications first?

Good on the author for trying. Correctness is hard.


Very cool but I haven’t been able to convince software developers in industry to write property based tests. I sometimes joke that we will start writing formal proofs until the tests improve. Just so that they will appreciate the difference a little more.
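For anyone who hasn't seen one, the idea behind a property-based test can be sketched in a few lines of plain Python; real libraries like Hypothesis add smarter input generation and automatic shrinking of counterexamples, but the core loop is just this (the round-trip property and the toy encode/decode pair are my own example):

```python
import random

def encode(s: str) -> str:
    return s[::-1]  # stand-in for the code under test

def decode(s: str) -> str:
    return s[::-1]

def check_roundtrip(trials: int = 1000) -> None:
    """Property: decode(encode(s)) == s for *any* string, not just
    a few hand-picked examples; generate random inputs and check."""
    for _ in range(trials):
        n = random.randrange(20)
        s = "".join(random.choice("abc \n\t") for _ in range(n))
        assert decode(encode(s)) == s, f"property failed on {s!r}"

check_roundtrip()
```

The point is that the test states a property over all inputs ("decoding an encoded string gives the string back") rather than checking a handful of hand-picked examples.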

I can’t even convince most developers to use model checkers. Far more informal than a full proof in Lean. Still highly useful in many engineering tasks. People prefer boxes and arrows and waving their hands.

Anyway, I don’t know that I’d want to have a system vibe code a proof. These types of proofs, I suspect, aren’t going to be generated to be readable, elegant, and well understood by people. Like the programs they generate, they will merely look plausible.

And besides, you will still need a human to review the proof and make sure it’s specifying the right things. This doesn’t solve that requirement.

Although I have thought that it would be useful to have a system that could prove trivial lemmas in the proof. That would be very neat.


The point is you just need to scrutinize the theorem. Not easy either, but still significantly less work than writing the proof.


There isn’t a mind to change. Unfortunately the article is slop. Too bad, won’t read the rest.

I wish there was a tag or something we could put on headlines to avoid giving views to slop.


There is a mind; the model + text + tool inputs together form the full entity that can remember, take in sensory information, set objectives, decide, and learn. The Observe, Orient, Decide, Act loop.

As the article says, the models are trained to be good products and give humans what they want. Most humans want agreeableness. You have to get clear in your heuristic instructions what you mean by "are you sure?", as in, identify areas of uncertainty and use empiricism and reasoning to reduce uncertainty.


That definition falls substantially short of a mind.


Tomato tomato. It’s sufficiently cybernetic. We can treat it therefore as something with agency.


In AWS, for example, DNSSEC signing for Route 53 zones is possible, but almost no one configures it. Generally, most people do a lot of good things about security, but they somehow forget about DNS.


100%.

There are cases where a unit test or a hundred aren’t sufficient to demonstrate a piece of code is correct. Most software developers don’t seem to know what is sufficient. Those heavily using vibe coding even get the machine to write their tests.

Then you get to systems design. What global safety and temporal invariants are necessary to ensure the design is correct? Most developers can’t do more than draw boxes and arrows and cite maxims and “best practices” in their reasoning.

Plus you have the Sussman effect: software is often more like a natural science than engineering. There are so many dependencies and layers involved that you spend more time making observations about behaviour than designing for correct behaviours.

There could be useful cases for using GenAI as a tool in some process for creating software systems… but I don’t think we should be taking off our thinking caps and letting these tools drive the entire process. They can’t tell you what to specify or what correct means.


I don't have any idea of what a unit test is, but with AI I can make programs that help me immensely in my real world job.

Snobby programmers would never even return an email offering money for their services.


It's unclear what point you're even trying to make, other than that AI has been helpful to you. But surely you understand that if you don't know what a unit test is you're probably not in a position to comment on the value of unit testing.

> Snobby programmers would never even return an email offering money for their services.

Why would they? I don't respond to the vast majority of emails, and I'm already employed.


Helpful to me and millions of others. Soon to be billions even.

You are employed because somewhere in the pipeline there are paying customers. They don't care about unit tests, they care about having their problems solved. Beware of AI.


Right... I mean, no engineer is going to tell you that customers care about unit tests, so I think you're arguing against a straw man here. What engineers will tell you is that bugs cost money, support costs money, etc, and that unit tests are one of the ways we cheaply reduce those costs in order to solve problems, which is what we're in the business of doing.

We are all very aware of the fact that customers pay us... it seems totally strange to me that you think we wouldn't be aware of this fact. I suspect this is where the disconnect comes in, much to the point of the article - you seem to think that engineers just write tests and code, but the article points out how silly that is: we spend most of our time thinking about our customers, features, user experience, and how to match that to the technology choices that will allow us to build, maintain, and support systems that meet customer expectations.

I think people outside of engineering might be very confused about this, strangely, but engineers do a ton of product work, support work, etc, and our job is to match that to the right technology choices.


What I'm saying is that it's a non-issue for customers if software has good engineering or not, if it fulfills their needs at a price they can pay.

With AI code we might get software that is an ugly mess underneath, but at least we have it. While human programmers are unwilling to provide this software for even a high price.

I could argue that people are better off having nothing to eat rather than having low quality food. But in reality something is better than nothing.

There is a gigantic market and a gigantic need of software in the field between hobbyist and enterprise. And AI code will serve that field. Software engineers like you are the people who can best exploit this market segment, probably by leveraging these new AI tools.

Otherwise more and more people will do like me and have AI make their own bespoke solutions.


> What I'm saying is that it's a non-issue for customers if software has good engineering or not, if it fulfills their needs at a price they can pay.

You think that good engineering is unrelated to fulfilling needs at a price they pay? I think you're confused. Software engineers are tasked with exactly this problem - determining how to deliver utility at a price point. That's... the whole job. We consider how much it costs to run a database, how to target hardware constraints, how to build software that will scale accordingly as new features are added or new users onboard, etc. That's the job...

That's sort of the whole point of the article. The job isn't "write code", which is what AI does. The job is "understand the customer, understand technology, figure out how to match customer expectations, business constraints, and technologies together in a way that can be maintained at-cost".

> While human programmers are unwilling to provide this software for even a high price.

Sorry, but this is just you whining that people didn't want to work for you. Software engineers obviously are willing to provide software in exchange for money, hence... idk, everyone's jobs.

> And AI code will serve that field.

That may be true, just as no-code solutions have done in the past.

> Software engineers like you are the people who can best exploit this market segment, probably by leveraging these new AI tools.

Yes, I agree.

> Otherwise more and more people will do like me and have AI make their own bespoke solutions.

I'm a bit skeptical of this broadly in the long term but it's certainly the case that some portion of the market will be served by this approach.


Yeeesh. Looking at other posts from that user, they seem to have a serious grudge against software devs, presumably for not responding to their emails. "You should starve" - words taken from another post.

Look, no one wanted to write code for you idk what to tell you. Now you can have AI do that for you. Congrats, best of luck. Whatever weird personal issue you have, I doubt anyone was not working for you out of some whatever this perceived snobbery is and it's just like... we all have jobs?


I don't have a grudge, but there needs to be some balance. Software devs are incredibly well paid compared to other professionals. It is their responsibility to use their talents to benefit themselves, and if they are out-competed then they should work with something else. They don't have a right to a fantastic career.

All other workers have had to go through this when their fields became more automated and efficient. A cargo ship used to have hundreds of crew, now it's a dozen and the amount of cargo on a ship is ten times more.

So I will absolutely not cry for a software dev who has to make changes in the face of AI competition. If they're too precious to adapt or take a different job, then starve.

> Look, no one wanted to write code for you idk what to tell you. Now you can have AI do that for you. Congrats, best of luck.

Me and hundreds of thousands of other organizations who have software needs that were under served by the market. Now we will have AI write that code for us - or more realistically, now we will purchase this software from any of the thousands of boutique software development shops that will emerge, which use AI + talented human developers to serve us.

I have the strong impression that programmers in many cases have a good deal of snobbery regarding what tasks they are willing to work on. If it's not giant enterprise software, then it's usually just filed under "hobbyist" or open source. Hopefully many programmers will find a well paying career serving less glamorous customers with software that solves real world problems. But many will have to change their attitude if they want to do that.


> If they're too precious to adapt or take a different job, then starve.

Yeah I mean I think everyone is with you except for the "then starve", this is just weirdly combative and lacking in empathy, I find it totally strange.

> Me and hundreds of thousands of other organizations who have software needs that were under served by the market.

And... you blame software developers for that? You blame software devs for a lack of capacity in the field? So weird.

> Now we will have AI write that code for us - or more realistically, now we will purchase this software from any of the thousands of boutique software development shops that will emerge, which use AI + talented human developers to serve us.

Okay, I mean, this has always been an option. I guess it will be more of an option now. There have been consulting agencies or "WYSIWYG" editors like Wix or other "low code/ no code" platforms for ages. No one is going to be upset that you're using them. This hostility is totally one sided lol

> I have the strong impression that programmers in many cases have a good deal of snobbery regarding what tasks they are willing to work on.

We like to work on interesting projects... is that surprising? Is that snobbery? I don't get it.

> If it's not giant enterprise software, then it's usually just filed under "hobbyist" or open source.

I find this funny because hobbyist/ open source projects are by far the ones that are glamorized by the field, not enterprise software.

> Hopefully many programmers will find a well paying career serving less glamorous customers with software that solves real world problems. But many will have to change their attitude if they want to do that.

I have no idea where you get this impression from. Most software devs I've worked with are motivated heavily by solving real world problems. I think you have very, very little insight into what software development actually looks like or what software engineers are motivated by. Frankly, this comes off as very much "I was snubbed and now I'm happy that the people who I perceive as having snubbed me will be replaced by AI", which I think is quite lame. You definitely seem to have a resentful tone to your posts that I find weird.


Lacking in empathy could also be said of the software devs who think that software devs are a significant customer group in the economy, when they are a tiny percentage of the work force. Asking yourself "who is going to purchase the products?" when software development is being automated is quite silly. Why didn't they ask that question when thousands of other professions suffered the same?

> And... you blame software developers for that?

I don't blame them. They had more lucrative ventures to tend to. Now that under served market segments can be served with the help of AI, then they shouldn't complain.

You mention making web sites, but this is probably the only field in computing where the market has a lot of offerings to customers from all segments. If I need a website I don't have to use Wix, there is an endless supply of freelancers or small, medium, or big studios that offer their services. The same cannot be said of other bespoke software needs.

BTW, you are hearing a lot more hostility in my comments than is actually there.

> We like to work on interesting projects... is that surprising? Is that snobbery? I don't get it.

Yes it is snobbery. Other skilled professionals generally do not have that option, they have to do the boring stuff as well. And if you only like to work on interesting projects, then why are people complaining that AI is taking their jobs?

Regarding hobbyist / open source, I mean that when software devs aren't working on big enterprise style projects as a job, they tend to work on enterprise style projects as open source, or just play with hobbyist projects. Servicing smaller customers with bespoke software seems to be considered a little bit beneath the programmer dignity.

And it's not my personal experience talking. Consider how many studios are offering bespoke software for small businesses, compared to how many studios are offering websites for small businesses. There's a huge gap, that is probably going to be filled in some way pretty soon.


> Lacking in empathy could also be said of the software devs who think that software devs are a significant customer group in the economy, when they are a tiny percentage of the work force.

This is silly equivocation. I'm telling you that your statement lacks empathy, and you're making vague, unclear gestures to an entire field.

Anyway, reading your post it's clear that you have a rather pathetic grudge because software devs weren't interested in working with you, and now you get to grin gleefully as you see AI take away jobs. You obviously have zero insight into software development as a practice, nor how software devs think - this is glaringly obvious from your framing of software devs giving software away for free as somehow snobbery because they wouldn't work on whatever project you clearly hold a grudge over. Further, your comments from start to finish demonstrate a complete lack of understanding of what the job actually entails.

> BTW, you are hearing a lot more hostility in my comments than is actually there.

Maybe so! I can't tell you what you actually think, but it comes off as really pathetic, so maybe reread your post and consider why I'm hearing it.

Best of luck in your ventures.


You aren't thinking clearly enough to understand what I'm saying, because you can't let go of my perhaps harsh comment.

But come on, people here are saying "who is going to buy products if we don't have jobs", as if they never gave a thought to other people who had their jobs automated away. That's a lack of empathy.

In many cases software is exactly what took away other people's jobs. Secretary used to be a common job, as well as "computer", which was a human making calculations.

But since you are a hacker and have to be always right, you have just descended into personal attacks now.


> But since you are a hacker and have to be always right, you have just descended into personal attacks now.

The irony of this statement is so silly. Once again, you clearly have hang ups.

I've explained everything I have to say.


The end user of a bridge doesn’t care about most things the engineer who designed it does. They care that the bridge spans the gap and is safe to use. The company building the bridge and the group financing its construction care about a few more things like how much it will cost to provide those things to the end user, how long it will last. The engineer cares about a few more things: will the tolerances in the materials used in the struts account for the shearing forces of the most extreme weather in the region?

So it is with software.

You might not need a blueprint if you’re building a shed in your back yard. This is the kind of software an end user might write or script themselves. If it’s a bit off, nobody is going to get hurt.

In many cities in North America you can’t construct a dwelling with plumbing connections to a sewer and a permanent foundation without an engineer. And you need an engineer and a blueprint to get the license to go ahead with your construction.

Because if you get it wrong you can make people in the dwelling sick and damage the surrounding environment and infrastructure.

Software-wise this is where you’re handling other people’s sensitive data. You probably have more than one component that needs to interact with others. If you get it wrong people could lose money, assets could get damaged, etc.

This is where I think the software industry needs to figure out liability and maybe professionalize a bit. Right now the liability is with the company and the deterrents are basically no worse than a speeding ticket in most cases. It’s more profitable to keep speeding and pay off the ticket than to prevent harm from throwing out sloppy code and seeing what sticks.

Then if you are building a sky scraper… well yeah, that’s the large scale stuff you build at big tech.

There are different degrees of software with different requirements. While not engineers by professional accreditation, in practice I would say most software developers are doing engineering… or trying.

What I agree with in the article is that AI tools make bad engineering easier. That risk falls on the people building houses and skyscrapers, the ones who should be thinking about blueprints: they are working under the assumption that the AI is “smart” and will build skyscrapers for them. They’re not thinking about the things they ought to be, and the consequences pass on to the customers: cost, and a product that isn’t fit for use.

A bridge that falls down if you drive too fast over it isn’t a useful bridge even though it looks like a bridge.


https://blog.katanaquant.com/p/your-llm-doesnt-write-correct...

It can generate plausible code because the examples are already in the training set, the documentation, the how-to-write-a-database, other databases, etc.

But unless you could write SQLite yourself it will be hard to specify a good one and to get the generator to produce a correct implementation.

