An LLM, certainly by itself, can't be "as creative and exploratory as any human coder", because it's limited to reasoning by training-data mashup: it has no curiosity, no ability to learn from its exploratory mistakes and successes (were it to make them), and so on.
It seems we've reached the point that understanding of LLMs would be a great candidate for the beginner/intermediate/expert meme. "It's just autocomplete" -> "It's got a world model, it's thinking for itself" -> "It's just autocomplete".