I do love the concept, but a little part of me died each time I came across an article with a very strong AI voice. That just feels antithetical to the ‘small web’ ethos because it obscures the ‘neighbor’ behind it.

Welcome to 2026 when the next door neighbour is an AI datacentre using up all your groundwater.

Sure but industry cares about value (= benefit - price), not just price. Price could be astronomical, but that doesn’t matter if benefit is larger.


I’ve been thinking about it like this for some time: If the computer is a bicycle of the mind, then the LLM is its credit card.


I’m not so sure an increasingly large context window will be seen as a critical enabler (as it was viewed 6 months ago), after watching how amazingly effective subagents and tool calls are at tackling parts of the problem and surfacing just the relevant bits for the task at hand. And if increasing the context window isn’t the current bottleneck, effort will be put elsewhere.


I agree. My suspicion is that token efficiency is what will drive more efficient tool calls and tool building. And we want that. Agents should rely less on raw intelligence (ability to hold everything in context), and more on building tools to get the job done.


Reminds me of the “Google AI Challenge” from 2011 called Ants [1], except now the ‘AI’ is implemented using ‘AI’ instead of by human programmers.

I was proud of having the highest-ranked JavaScript-based implementation, but got absolutely crushed by the eventual winner.

1. https://github.com/aichallenge/aichallenge


> But I’ll gladly use a tool someone had an AI write, as long as it works (which these things increasingly do).

It works, sure, but is it worth your time to use? I think a common blind spot for software engineers is understanding how hard it is to get people to use software they aren’t effectively forced to use (through work or in order to gain access to something or ‘network effects’ or whatever).

Most people’s time and attention is precious, their habits are ingrained, and they are fundamentally pretty lazy.

And people that don’t fall into the ‘most people’ I just described probably won’t want to use software you had an LLM write up when they could have just done it themselves to meet their exact need. UNLESS it’s something very novel that came from a bit of innovation that LLMs are incapable of. But that bit isn’t what we are talking about here, I don’t think.


> It works, sure, but is it worth your time to use?

This is something I like about the LLM future. I get to spend my time with users thinking about their needs and how the product itself could be improved. The AI can write all the CSS and SQL queries or whatever to actually implement those features.

If the interesting thing about software is the code itself - like the concepts and so on, then yeah do that yourself. I like working with CRDTs because they’re a fun little puzzle. But most code isn’t like that. Most code just needs to move some text from over here to over there. For code like that, it’s the user experience that’s interesting. I’m happy to offload the grunt work to Claude.


Every little detail matters though. In SQL, do you want your database field to have limited length? If so, pay attention to validation, including cases where the field's content is built up in some other way than just entering text in a free-form text field (e.g. stuffing JSON into a database field). If not, make sure you don't use some generic "string" field type provided by your database abstraction layer that has an implicit limited length. Want to guess why that scenario's on my mind? Yeah, I neglected to pay attention to that detail, and an LLM might too. In CSS, little details affect the accessibility of the UI.
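To make that concrete, here's roughly the shape of the check I skipped, as a sketch using serde_json: the column name, struct, and 255-byte limit are all made up for the example, the point is just that the "stuff JSON into the column" path needs the same length validation as the free-form text path.

  // Hypothetical: prefs get serialized to JSON and stored in a VARCHAR(255)-ish
  // column. Validate the serialized size before the INSERT, so the failure is a
  // clear error rather than a truncated row or a surprise from the database.
  use serde::Serialize;

  #[derive(Serialize)]
  struct Prefs {
      theme: String,
      shortcuts: Vec<String>,
  }

  const PREFS_COLUMN_MAX_BYTES: usize = 255; // assumed column limit

  fn prefs_to_column(prefs: &Prefs) -> Result<String, String> {
      let json = serde_json::to_string(prefs).map_err(|e| e.to_string())?;
      if json.len() > PREFS_COLUMN_MAX_BYTES {
          return Err(format!(
              "serialized prefs are {} bytes, column allows {}",
              json.len(),
              PREFS_COLUMN_MAX_BYTES
          ));
      }
      Ok(json)
  }

  fn main() {
      let prefs = Prefs {
          theme: "dark".into(),
          shortcuts: vec!["ctrl+k".into(); 40], // easy to blow past 255 bytes
      };
      match prefs_to_column(&prefs) {
          Ok(json) => println!("fits: {json}"),
          Err(e) => eprintln!("refusing to insert: {e}"),
      }
  }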

So we need to pay attention to every detail that doesn't have a single obviously correct answer, and keep the volume of code we're producing to a manageable enough level that we actually can pay attention to those details. In cases where one really is just literally moving data from here to there, then we should use reliable, deterministic code generation on top of a robust abstraction, e.g. Rust's serde, to take care of that gruntwork. Where that's not possible, there are details that need our attention. We shouldn't use unreliable statistical text generators to try to push past those details.
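For the "just move data from here to there" case, this is the kind of thing I mean, a minimal serde sketch (the struct and field names are invented for illustration): the mapping code comes from the derive macro, not from a person or an LLM typing it out.

  use serde::{Deserialize, Serialize};

  #[derive(Serialize, Deserialize)]
  struct Invoice {
      id: u64,
      customer: String,
      total_cents: i64,
  }

  fn main() -> Result<(), serde_json::Error> {
      let json = r#"{"id": 42, "customer": "acme", "total_cents": 1999}"#;
      // Round-trip the data with generated (de)serialization code.
      let invoice: Invoice = serde_json::from_str(json)?;
      println!("{}", serde_json::to_string(&invoice)?);
      Ok(())
  }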


> So we need to pay attention to every detail that doesn't have a single obviously correct answer

I really, really wish that were the case. But look at the modern web. Look at iOS apps. Look at how long discord takes to launch on a modern computer. Look how big and slow everything is. Most end user applications released today do not pay attention to those small details. Definitely not in early versions of the software. And they're still successful. At least, successful enough.

I'd love a return to the "good old days" where we count bytes and make tight, fast software with tiny binaries that can perform well even on 20 year old computers. But I've been outvoted. There aren't enough skilled programmers who care about this stuff. So instead our super fast computers from the future run buggy junk.

Does Claude even make worse choices than many of the engineers at these companies? I've worked with several junior engineers who I'd trust a lot less with small details than I trust Claude. And that's Claude in 2026. What about Claude in 2031, or 2036? It's not that far away. Claude is getting better at software much faster than I am.

I don't think the modern software development world will make the sort of software that you and I would like to use. Who knows. Maybe LLMs will be what changes that.


> But look at the modern web. Look at iOS apps. Look at how long discord takes to launch on a modern computer. Look how big and slow everything is. Most end user applications released today do not pay attention to those small details. Definitely not in early versions of the software. And they're still successful. At least, successful enough.

The main issue is that we have a lot of good tech that is used incorrectly. Each component is sound, but the whole is complex and ungainly. They are code chimeras. Kinda like using a whole web browser to build a code editor, or using React as the view layer for a TUI, or adding a dependency just to check if a file is executable.

It's like the recently posted project, a Lisp where every function call spawns a Docker container.


Yep, I think this is broadly true. Though it's still not clear to me if vibe coding is going to make this better or worse.


> probably won’t want to use software you had an LLM write up when they could have just done it themselves to meet their exact need

Sure... to a point. But realistically, the "use an LLM to write it yourself" approach still entails costs, both up-front and on-going, even if the cost may be much less than in the past. There's still reason to use software that's provided "off the shelf", and to some extent there's reason to look at it from a "I don't care how you wrote it, as long as it works" mindset.

> came from a bit of innovation that LLMs are incapable of.

I think you're making an overly binary distinction on something that is more of a continuum, vis-a-vis "written by human vs written by LLM". There's a middle ground of "written by human and LLM together". I mean, the people building stuff using something like SpecKit or OpenSpec still spend a lot of time up-front defining the tech stack, requirements, features, guardrails, etc. of their project, and iterating on the generated code. Some probably even still hand tune some of the generated code. So should we reject their projects just because they used an LLM at all, or ?? I don't know. At least for me, that might be a step further than I'd go.


> There's a middle ground of "written by human and LLM together".

Absolutely, but I’d categorize that ‘bit’ as the innovation from the human. I guess it’s usually just ongoing validation that the software is headed down a path of usefulness which is hard to specify up-front and by definition something only the user (or a very good proxy) can do (and even they are usually bad at it).


> but I’d categorize that ‘bit’ as the innovation from the human.

Agreed.


Yeah, sure, you could create a social media or photo-sharing site, but most people that want to share cat photos with their friends could just as easily print out their photos and stick them in the mail already.


My father was an unnamed DG marketing executive in the book, who joked that his greatest career regret was asking Kidder to be unnamed in case the book wasn’t any good (it won Kidder the Pulitzer). I’ve been meaning to go through his old notebooks, as he took detailed notes on everything, to see if there is anything left from that era.


DG = Data General, a “large” computer company in Westboro, Mass. My mom worked there doing internationalization.

The Soul of a New Machine is a fantastic book, about DG going up against DEC (Digital Equipment Corporation) and their VAX machines.


> I’ve been meaning to go through his old notebooks, as he took detailed notes on everything, to see if there is anything left from that era.

Please do! Ephemera like that can tell more sides of the story that didn't make it into the book, for whatever reason.


I got the sense that Kidder’s telling was pretty close to my father’s side of the story, as my father said he provided a lot of information to the author through interviews and was happy with the account that ended up in the book. I’ll see what I can find though.

I ended up working for the lead of the competing team within DG (whose product lost to the book’s protagonist) for many years right after college at a different company he founded. I suspect he has a slightly different perspective on the whole thing, but I never asked.

Sadly my father and many of his contemporaries are no longer with us. But I’m really happy that this book exists as a durable & accurate snapshot of the period. The computer history museum also has a wonderful collection of interviews worth checking out, which includes several of the staff from DG [1].

1. https://computerhistory.org/oral-histories/


I joined DG in their last days, largely due to Kidder's book.

Also, I really liked DG/UX, for reasons I no longer recall.


amazing lore!


This absolutely has been the case for me for the last few months. But what’s disheartening is that this signal will just be mimicked through simple prompting if too many people start tuning in to it. Or maybe that’s already happened?


  Don't worry about the future
  Or worry, but know that worrying
  Is as effective as trying to solve an algebra equation by chewing bubble gum
  The real troubles in your life
  Are apt to be things that never crossed your worried mind
  The kind that blindsides you at 4 p.m. on some idle Tuesday

    - Everybody's free (to wear sunscreen)
         Baz Luhrmann
         (or maybe Mary Schmich)


Having testimonials attributed to Gemini 3 Pro and Claude 4.5 Opus is... interesting. I'm curious what prompt was used to get those quotes.


lol thanks for the compliments. I generated both testimonials after giving the MCP server to both Opus and Gemini and asking for their feedback on it.

it is supposed to be directly used by agents, so they are kind of my end users, hence it made sense to get their testimonials :)

