buzarchitect's comments | Hacker News

This resonates. I build products on top of LLMs, and the most interesting work I do has nothing to do with AI; it's designing structured methodologies, figuring out what data to feed in before a conversation starts, deciding what to do when the model gives a weak answer. The AI is plumbing.

But nobody wants to hear about prompt calibration or pipeline architecture. They want to hear "I replaced my whole team with agents." The boring, useful work is invisible, and the flashy stuff gets all the oxygen.


Now do it with knowledge or causal graphs

Causal graphs are interesting, but in my experience, the bottleneck isn't the representation; it's getting the model to actually follow through on weak signals instead of moving on to the next topic. A graph won't help if the system doesn't know what to do when it hits a node that doesn't resolve cleanly. What's your experience been with them?

This matches my experience. I've been building structured pipelines around LLMs, and the biggest lesson is that the raw model is maybe 30% of the value. The other 70% is the methodology you wrap around it; what data you feed in before the conversation starts, what you do when the model gives a weak answer, and whether you track open questions and circle back to them.
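To make that concrete, here's a minimal sketch of the wrapper idea, seeding context before the first turn, flagging weak answers, and keeping an open-questions list to circle back to. Everything here is hypothetical: `call_model` stands in for whatever LLM API you use, and the weak-answer heuristic is a placeholder for a real check.

```python
# Hypothetical sketch: the "70% wrapper" around a raw model call.
# `call_model` is a stand-in for any LLM API; the weakness heuristic
# (length + hedge phrases) is a placeholder, not a real detector.
from dataclasses import dataclass, field

WEAK_MARKERS = ("it depends", "i'm not sure", "hard to say")

@dataclass
class Pipeline:
    seed_context: str                       # data fed in before the conversation starts
    open_questions: list = field(default_factory=list)

    def ask(self, call_model, question: str) -> str:
        answer = call_model(f"{self.seed_context}\n\nQ: {question}")
        if self.is_weak(answer):
            # Don't just move on: record the question so we circle back.
            self.open_questions.append(question)
        return answer

    @staticmethod
    def is_weak(answer: str) -> bool:
        text = answer.lower()
        return len(answer) < 40 or any(m in text for m in WEAK_MARKERS)

    def revisit(self, call_model):
        # Second pass over anything that didn't resolve cleanly.
        pending, self.open_questions = self.open_questions, []
        for q in pending:
            yield q, self.ask(call_model, f"Your earlier answer was weak. Retry: {q}")
```

The point isn't the specific heuristic; it's that "what to do with a weak answer" is an explicit, testable decision instead of something that silently falls through.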

The irony is that "extreme amounts of guidance" is exactly what makes a human domain expert valuable, too. A senior consultant isn't smarter than a junior one; they have a better methodology for directing attention to what matters. The actual problem with the "just throw an agent at it" approach isn't cost. It's that without structure, you can't tell the 10% of useful output from the 90% of noise.

