Hacker News | jalopy's comments

This is a super helpful and productive comment. I look forward to a blog post describing your process in more detail.

This dead internet uncanny (sarcasm?) valley is killing me.

Are you suggesting HN is now mostly bots boosting pro-AI comments? That feels like a stretch. Disagreement with your viewpoint doesn't automatically mean someone is a bot. Let's not import that reflex from Twitter.

> This is a super helpful and productive comment. I look forward to a blog post describing your process in more detail.

The average commenter doesn't write this kind of comment. Usually it's just a "can you expand/elaborate?". Extra politeness is kind of a hallmark of LLMs.

And if you look at the very neat comment it's responding to, there's a chance it's actually the opposite type, an actual human being sarcastic.

I can't tell anymore.

Edit: I've checked the comment history and it's just a regular ole human doing research :-)


Now I'm just confused. Maybe LLMs really do change how humans communicate.

This sounds very promising. Any link to more details?

Every time I read stuff like this I honestly wonder if the author is using the same tools I am.

I can have Claude Code bang out everything from boilerplate to a working prototype to a complex algorithm embedded in a very complex and confusing code base. It’s not correct 100% of the time, but it’s pretty damn close. And oftentimes it comes up with algorithms I would never have thought of initially.

These things are at least a 10x multiple of my time.


The difficulty is we skeptics have read claims like yours tens of times, and our response is always, "please share a repo built this way and an example of your prompts," and I at least have never seen anyone do so.

I'd love for what you say to be possible. Comments like yours often cause me to take another crack at agentic workflows. I'm disappointed every time.


I can back that claim up. Unfortunately, I've only worked on proprietary codebases and I can't share them. However before I left my previous gig at PermitFlow I was primarily using Claude Code for all of my work.

I don't view LLMs as a way of forgoing the responsibility of writing code; rather, I see them as my "really smart keyboard". With enough context priming and a well-structured codebase, I no longer need to spend time writing each line of code and can have Claude do it in a fraction of the time.

I need to start a blog sooner rather than later, as I agree with neither the article nor the naysayers. Maybe a year ago I'd have said that it's not possible to code with LLM agents. However, ever since Cursor's release, LLMs have completely changed my workflow.


If you do write that blog, I'd love to read it! My email is in my bio if you get time to start it.


Most AI evangelist commenters here end up with the same arguments every time.

You are just not a true Scotsman.

And when they do link something, it's one of those repos full of slop and no code.

I am becoming paranoid and wonder how many people here can even code.


It is a bit funny that, 4 hours on, there are still no replies from the evangelists.

I want them to be right and me wrong! Please someone show I am wrong.


The number of projects shared is more or less the number of businesses that accept Ethereum.


Every time I read something like this I wonder if the author only writes HTML


Popular != true. Galileo found that out by spending the last ~decade of his life under house arrest. Thankfully today we mostly just get downvoted.


Super valuable resource - thanks!

What tools / experiments out there exist to exercise these cheaper models to output more tokens / use more CoT tokens to achieve the quality of more expensive models?

E.g., Gemini 2.5's Flash / Pro price ratio is 1 1/3 for input, 1/8 for output... Surely there's a way to ask Flash to critique its work more thoroughly to get to Pro-level performance and still save money?
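For what it's worth, the usual experiment here is a generate / critique / revise loop: spend extra output tokens on the cheap model refining its own draft. A minimal sketch, with `call_model` standing in for whatever chat-completion API you use (all names and prompts here are my own assumptions, not any vendor's API):

```python
# Hypothetical generate -> critique -> revise loop for a cheap model.
# `call_model` is any callable mapping a prompt string to a response string
# (e.g. a thin wrapper around a Flash-tier chat endpoint).

def refine(call_model, task, rounds=2):
    """Draft an answer, then repeatedly critique and revise it.

    Each round costs two extra calls (critique + rewrite), so output-token
    spend grows roughly linearly with `rounds`.
    """
    draft = call_model(f"Task: {task}\nAnswer concisely.")
    for _ in range(rounds):
        critique = call_model(
            f"Task: {task}\nDraft answer:\n{draft}\n"
            "List concrete flaws or omissions in the draft."
        )
        draft = call_model(
            f"Task: {task}\nDraft:\n{draft}\nCritique:\n{critique}\n"
            "Rewrite the draft, fixing every flaw the critique raises."
        )
    return draft
```

With 2 rounds that's 5 calls instead of 1, so even a ~5x token overhead on a Flash-priced model can stay well under Pro pricing for output; whether quality actually closes the gap is an empirical question per task.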


Interesting tidbit:

>>> In this study, high levels of these beta oscillations were associated with faster learning and more programming knowledge

This makes me think those binaural beat programs attuned to beta wave frequency might help with heavy coding sessions?

Can anyone point to studies that confirm/reject this?


This looks very interesting, however it seems to me like the critical piece of this technique is missing from the post: the implementations of getFileContext() and shouldStartNewGroup().

Am I the one missing something here?


No, the code he posted sorts files by size, groups them, and then…jazz hands?
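For context, a minimal sketch of what "sort by size, then group" with a `shouldStartNewGroup`-style cutoff could look like. This is purely my guess at the missing piece, not the author's code, and the byte threshold is an arbitrary assumption:

```python
def group_files_by_size(files, max_group_bytes=100_000):
    """Sort (path, size) pairs by size, then pack them into groups
    whose cumulative size stays under max_group_bytes."""
    groups, current, current_bytes = [], [], 0
    for path, size in sorted(files, key=lambda f: f[1]):
        # start a new group when adding this file would blow the budget
        if current and current_bytes + size > max_group_bytes:
            groups.append(current)
            current, current_bytes = [], 0
        current.append(path)
        current_bytes += size
    if current:
        groups.append(current)
    return groups
```

The interesting part of the post (what `getFileContext()` actually feeds the model) is still unexplained; this only covers the mechanical grouping step.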


Reading between the lines, it sounds like they are creating an AI product for more than just their own codebase. If this is the case, they'd probably be keeping a lot of the secret sauce hidden.

More broadly, it's nowadays almost impossible to find out what worked for other people in terms of prompting and using LLMs for various tasks within an AI product. Everyone guards this information religiously as a moat. A few open source projects are all you have to go on if you want a jumpstart on how an LLM-based system is productized.


Yeah, and in the code bases I’m familiar with, you’d need a lot of contextual knowledge that can’t be derived from the code base itself.


This is awesome. Love this idea. What a great way to make history more alive. I've only spent ~30s with it so far but I hope to find ways to contribute to it (content and code/different visualizations)


Digging in a bit more: Love this explainer of how it's done - https://www.maptiler.com/story/oldmapsonline/


Great advice.

What books/resources do you suggest to follow more on this path of meditation (possibly also stoicism)?


For me, I studied vipassana and Buddhism, and did the daily meditations with Gil Fronsdal[0] on his amazing Audio Dharma[1], which has a lot of resources (including his teachings dating back 25+ years, recorded and freely available there). If you’re in the area you can visit the Insight Meditation Center[2] in Redwood City.

I think these things helped me a lot in my recovery but I also don’t think it’s necessary or sufficient. I think my advice above is the real key, but vipassana and Buddhism give a pretty structured approach to achieve the appropriate detachment from what’s not truly real and the appreciation for what is. IMO that’s the basis for recovering from burnout durably. But the other thing is there’s no magic, it takes time.

[0] https://en.wikipedia.org/wiki/Gil_Fronsdal
[1] https://www.audiodharma.org/speakers/1
[2] https://www.insightmeditationcenter.org/


This is great. Anyone know how hard it would be to adapt to Athena?


Do you have any open source examples (either your own or others) of simple Ruby + Sequel that are particularly elegant + productive that you could point to? Would love to see this.

I agree Ruby is the most elegant and enjoyable language to develop in.

