There are real productivity gains from using these tools right now. Instead of doing 1x your normal work, you can do 5x while still maintaining quality. Refusing to use them is like an accountant sticking to pen and paper because calculators are big and clunky.
Also, if your AI has a 20% error rate, you're not holding it right. You need to spend more time keeping it on rails: unit tests, integration tests, e2e tests, local dev plus browser use, preview deployments, staging environments, phased rollouts, AI PR reviews, rolling releases. With those in place, the error rate gets much closer to 0%.
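To make the "rails" concrete, here's a minimal sketch of the unit-test layer, assuming a hypothetical AI-generated helper `parse_price`: the point is that the AI's output is treated as untrusted until the assertions pass.

```python
# Minimal guardrail sketch: gate AI-generated code behind plain unit tests.
# parse_price is a hypothetical stand-in for any AI-written helper.

def parse_price(text: str) -> float:
    """Parse a price string like '$1,234.56' into a float."""
    return float(text.replace("$", "").replace(",", ""))

def test_parse_price():
    # Each assertion is one rail: the AI's change only merges if all pass.
    assert parse_price("$1,234.56") == 1234.56
    assert parse_price("99") == 99.0
    assert parse_price("$0.00") == 0.0

if __name__ == "__main__":
    test_parse_price()
    print("all rails passed")
```

The same idea scales up through the rest of the list: integration and e2e tests, preview deploys, and phased rollouts are just bigger versions of this check, each catching errors the previous layer missed.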
That wasn't what the comment you responded to was referring to. I guess it makes sense, since you're kind of like an LLM in how you respond to input.
> I feel the same way about the current crop of AI tools. I've tried a bunch of them. Some are good. Most are a bit shit. Few are useful to me as they are now. [...] If this tech is as amazing as you say it is, I'll be able to pick it up and become productive on a timescale of my choosing not yours.
I think the point the author is making is not that it's all useless, but that it's overly simplistic to assume the plot of Amount of AI vs. Productivity in All Situations is a hockey stick.
It's aggravating to be told to be excited about something when all they're actually saying is "it works sometimes, other times not so much; I'll keep checking, and when it's good enough for me I'll get on board."