Are you suggesting HN is now mostly bots boosting pro-AI comments? That feels like a stretch. Disagreement with your viewpoint doesn't automatically mean someone is a bot. Let's not import that reflex from Twitter.
> This is a super helpful and productive comment. I look forward to a blog post describing your process in more detail.
The average commenter doesn't write this kind of comment. Usually it's just a "can you expand/elaborate?". Extra politeness is kind of a hallmark of LLMs.
And if you look at the very neat comment it's responding to, there's a chance it's actually the opposite type, an actual human being sarcastic.
I can't tell anymore.
Edit: I've checked the comment history and it's just a regular ole human doing research :-)
Every time I read stuff like this I honestly wonder if the author is using the same tools I am.
I can have Claude Code bang out everything from boilerplate to a working prototype to a complex algorithm embedded in a very complex and confusing code base. It’s not correct 100% of the time but it’s pretty damn close. And oftentimes it comes up with algorithms I would never have thought of initially.
These things are at least a 10x multiplier on my time.
The difficulty is that we skeptics have read claims like yours dozens of times, and our response is always, "please share a repo built this way and an example of your prompts," and at least I have never seen anyone do so.
I'd love for what you say to be possible. Comments like yours often cause me to take another crack at agentic workflows. I'm disappointed every time.
I can back that claim up. Unfortunately, I've only worked on proprietary codebases and I can't share them. However before I left my previous gig at PermitFlow I was primarily using Claude Code for all of my work.
I don't view LLMs as a way of forgoing the responsibility of writing code; rather, I see them as my "really smart keyboard". With enough context priming and a well structured codebase I no longer need to spend time writing each line of code, and can have Claude do it in a fraction of the time.
I need to start a blog sooner rather than later, as I agree with neither the article nor the naysayers. Maybe a year ago I'd have said that it's not possible to code with LLM agents. However, ever since Cursor's release, LLMs have completely changed my workflow.
What tools / experiments out there exist to exercise these cheaper models to output more tokens / use more CoT tokens to achieve the quality of more expensive models?
e.g., the Gemini 2.5 Flash/Pro price ratio is 1/3 for input, 1/8 for output... Surely there's a way to ask Flash to critique its work more thoroughly to get to Pro-level performance and still save money?
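For what it's worth, the simplest version of this is a draft/critique/revise loop driven entirely by prompting. A minimal sketch, where `call_model` is a hypothetical stand-in for whatever API client you use for the cheaper model (not any real SDK call):

```python
# Hypothetical self-critique loop: the cheap model drafts an answer,
# critiques its own draft, and revises until the critique passes.
# `call_model` is a placeholder; swap in your actual Flash API call.

def call_model(prompt: str) -> str:
    # Placeholder implementation for illustration only.
    return f"[model output for: {prompt[:40]}...]"

def draft_critique_revise(task: str, rounds: int = 2) -> str:
    draft = call_model(f"Solve the following task:\n{task}")
    for _ in range(rounds):
        critique = call_model(
            "List concrete errors or weaknesses in this answer, "
            f"or reply 'LGTM' if there are none:\n"
            f"Task: {task}\nAnswer: {draft}"
        )
        if critique.strip() == "LGTM":
            break  # model found nothing to fix; stop early
        draft = call_model(
            "Revise the answer to address the critique:\n"
            f"Task: {task}\nAnswer: {draft}\nCritique: {critique}"
        )
    return draft
```

Whether two or three extra Flash calls actually close the gap to Pro is an empirical question per task, but since each output token is much cheaper, the loop can run several rounds and still come in under one Pro call.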
This looks very interesting, however it seems to me like the critical piece of this technique is missing from the post: the implementations of getFileContext() and shouldStartNewGroup().
Reading between the lines, it sounds like they are creating an AI product for more than just their own codebase. If this is the case, they'd probably be keeping a lot of the secret sauce hidden.
More broadly, it's nowadays almost impossible to find out what worked for other people in terms of prompting and using LLMs for various tasks within an AI product. Everyone guards this information religiously as a moat. A few open source projects are all you have if you want a jumpstart on how an LLM-based system gets productized.
This is awesome. Love this idea. What a great way to make history more alive. I've only spent ~30s with it so far, but I hope to find ways to contribute to it (content and code/different visualizations).
For me I studied vipassana and Buddhism, and did the daily meditations with Gil Fronsdal[0] on his amazing audio dharma[1] which has a lot of resources (and his teachings dating back 25+ years recorded and freely available there). If you’re in the area you can visit the insight meditation center[2] in Redwood City.
I think these things helped me a lot in my recovery, but I also don't think they're necessary or sufficient. I think my advice above is the real key, but vipassana and Buddhism give a pretty structured approach to achieving the appropriate detachment from what's not truly real and the appreciation for what is. IMO that's the basis for recovering from burnout durably. But the other thing is there's no magic; it takes time.
Do you have any open source examples (either your own or others) of simple Ruby + Sequel that are particularly elegant + productive that you could point to? Would love to see this.
I agree Ruby is the most elegant and enjoyable language to develop in.