Hacker News

An LLM, certainly by itself, can't be "as creative and exploratory as any human coder", because it's limited to recombining its training data rather than reasoning, has no curiosity, no ability to learn from its exploratory mistakes and successes (were it to make them), etc.

It seems we've reached the point where understanding of LLMs is a great candidate for the beginner/intermediate/expert meme: "It's just autocomplete" -> "It's got a world model, it's thinking for itself" -> "It's just autocomplete".

