
In my experience: Yes.

Doing security reviews for this content can be a real nightmare.

To be fair, though, I have no issue with using LLM-created code, with the caveat being YOU MUST UNDERSTAND IT. If you don't understand it well enough to review it, you're effectively copying and pasting from Stack Overflow.



At least with Stack Overflow there's upvotes and comments to give me some confidence (sometimes too much confidence). With LLMs I start hyper-skeptical and remain hyper-skeptical - there's really no way to develop confidence in it because the mistakes can be so random and dissimilar to the errors we're used to parsing in human-generated content.

Having said that, LLMs have saved me a ton of time, caught my dumb errors and typos, helped me improve code performance (especially database queries), and even clued me in to some better code-writing conventions/updated syntax that I hadn't been using.
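
As a rough, hypothetical illustration of the database-query case: one fix an LLM will often suggest is collapsing an N+1 loop into a single aggregate JOIN. Table and column names here are made up, minimal sketch only:

    import sqlite3

    # Hypothetical schema, just for illustration.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
        INSERT INTO users  VALUES (1, 'ada'), (2, 'bob');
        INSERT INTO orders VALUES (1, 1, 9.99), (2, 1, 5.00), (3, 2, 3.50);
    """)

    # Before: N+1 pattern -- one extra query per user.
    totals = {}
    for user_id, name in conn.execute("SELECT id, name FROM users"):
        (total,) = conn.execute(
            "SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = ?",
            (user_id,),
        ).fetchone()
        totals[name] = total

    # After: one aggregate LEFT JOIN does the same work in a single query.
    totals_joined = dict(conn.execute("""
        SELECT u.name, COALESCE(SUM(o.total), 0)
        FROM users u LEFT JOIN orders o ON o.user_id = u.id
        GROUP BY u.id, u.name
    """))

    assert totals == totals_joined

Same results, one round trip instead of one query per row. The catch, per the rest of this thread, is that you still have to understand why the rewrite is equivalent before shipping it.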


Also, with most code copied and pasted from Stack Overflow, you can Google the suspicious snippet, find the original question, read over the answers/comments, somewhat grok the reasoning behind it, and maybe even find a fix in the comments.

Most AI code doesn't come with its prompts, and even if it does, there is no guarantee the same prompt will produce the same output. So it's like reading human code, except the human can't explain themselves even if you have access to them.


In my experience, fixing code generated by AI is often more work than writing it myself the right way.

And even if you understand the code, that doesn't mean it is maintainable code.



