Hacker News | ZaoLahma's comments

Yeah. In these cases it's not like anyone is going to spin up their own instance and start competing with you.

Code for government / society-critical systems should really be public unless there are _really_ good reasons for it not to be, and those reasons can never be "we're just not very good at what we're doing and we don't want anyone to find out".


Some months back I would have agreed with you without any "but", but it really does help even if it only takes over "typing code".

Once you understand the problem deeply enough to know exactly what to ask for without ambiguity, the AI will produce the code that exactly solves your problem a heck of a lot quicker than you. And the time you don't spend figuring out language syntax, you can instead spend tweaking the code at a higher architectural level. Spend time where you, as a human, are better than the AI.


I've recently worked extensively with "prompt coding", and the model we're using is very good at following such instructions early on. However, after deep reasoning around problems, it tends to focus more on solving the problem at hand than on following established guidelines.

Still haven't found a good way to keep it on course other than "Hey, remember that thing that you're required to do? Still do that please."


A separate pre-planning step, so the context window doesn’t get too full too early on.

Off the shelf agentic coding tools should be doing this for you.


They do not.

At my company, I use them all the time with the fancy models and everything. Preplanning does not solve the problem they're describing.

When Claude is doing a complex task, it will regularly lose track of the rules (in either the .rules stuff or CLAUDE.md) and break conventions.

It follows them most of the time, but not all of the time.


Fully agree on this.

I (deep, deep in embedded systems) have seen this far too often: code that is incredibly complex and impossible to reason about because it needs to reach into some data structure multiple times, from different angles, to answer what should be a rather simple question about the next step to take.

Fix that structure, and the code simplifies automagically.
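A minimal C sketch of what I mean, with all names hypothetical: "before", every caller re-derives the next step by poking at scattered fields; "after", the state transition is recorded once where the fields change, and the question becomes a single read.

```c
#include <assert.h>

typedef enum { STEP_IDLE, STEP_RAMP_UP, STEP_RUN, STEP_FAULT } step_t;

/* Before: the answer is scattered across several flags, so every call
   site re-derives it with a tangle of conditionals. */
typedef struct {
    int powered;
    int target_rpm;
    int current_rpm;
    int fault_code;
} motor_before_t;

static step_t next_step_before(const motor_before_t *m) {
    if (m->fault_code != 0) return STEP_FAULT;
    if (!m->powered) return STEP_IDLE;
    if (m->current_rpm < m->target_rpm) return STEP_RAMP_UP;
    return STEP_RUN;
}

/* After: fold the scattered flags into one explicit state field,
   updated at the point where powered/rpm/fault actually change. */
typedef struct {
    step_t state;
    int target_rpm;
    int current_rpm;
} motor_after_t;

static step_t next_step_after(const motor_after_t *m) {
    return m->state; /* no multi-angle poking into the struct */
}
```

With the right structure, the "clever" decision logic disappears instead of being repeated at every call site.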


I think it boils down to how companies view LLMs and their engineers.

Some companies will do as you say - have (mostly clueless) engineers feed high level "wishes" to (entirely clueless) LLMs, and hope that everyone kind of gets it. And everyone will kind of get it. And everyone will kind of get it wrong.

Other companies will have their engineers explicitly treat the LLMs as collaborators / pair programmers, not independent developers. As an engineer in such a company, YOU are still the author of the code even if you "prompted" it instead of typing it. You can't just "fix this high level thing for me brah" and get away with it, but instead need to continuously interact with the LLM as you define and it implements the detailed wanted behaviors. That forces you to know _exactly_ what you want and ask for _exactly_ what you want without ambiguity, like in any other kind of programming. The difference is that the LLM is a heck of a lot quicker at typing code than you are.


This will be a fun little evolution of botnets - AI agents running (un?)supervised on machines maintained by people who have no idea that they're even there.


Huh ya, how long till a bot with credit card, email, etc access sets up its own open claw bot?


I mean just look at the longer horizon of small capable models being able to run on consumer hardware and being able to bootstrap themselves.

Just imagine a bunch of little gremlins running around the internet outside of human control.


Great. My poorly secured coffee maker was mining bitcoins, then some dumb NFT, then it got filled with darkness bots, then bitcoin miners again, and now it's gonna be shitposting but not even to humans, just to other bots.


This reminds me of the "if you were entirely blind, how would you tell someone that you want something to drink"-gag, where some people start gesturing rather than... just talking.

I bet a not insignificant portion of the population would tell the person to walk.


Yes, there are thousands of videos of these sorts of pranks on TikTok.

Another one: ask someone how to pronounce “Y, E, S”. They say “eyes”. Then say “add an E to the front of those letters - how do you pronounce that word?” And people start saying things like “E-yes”.


Spread the risk and reduce the probability of extinction.

We know for a fact that earth is doomed, on top of our own continuous efforts to kill ourselves off. No, not the recent climate-change type of doomed: the evolution of our sun is continuously pushing the habitable zone outwards. We might be able to deal with that particular annoyance by hiding underground when it becomes an emergency in half a billion years or so, but our utopia won't be as utopic anymore.

Eventually however, the sun will balloon to a red giant at which point we better have a plan in place other than staying on this planet.


If we're thinking that far out, we might as well all just lie down and wait for rain, because there's no avoiding the heat death of the universe. Treating the sun dying out like it's a real concern that we need to address in the next 2, 200, 2,000, or 2,000,000 years is comical. Whatever is around to experience that won't be human as we know it.


This is the correct way. Make it unnecessary to look at and into the clever code until it's absolutely necessary to look at and into the clever code.

The vast majority of those who are affected by what you're doing should be asking themselves why you never seem to be doing anything difficult.


It's hard to justify high quality at high cost on subscription-based platforms. We all pay the same price regardless of whether the content is barely palatable or great, and we all want new content frequently.

Better, then, to pump out a wide range of mediocrity to attract and keep as many subscribers as possible.

