My disuse is all about flow and value, not fear. The way I use it is for refining ideas at a higher level, not for outputting code/content/etc (except for rote work).
Fear? I suppose any negative evaluation can be stereotyped as fear or lack of intrepidity, but perhaps the repeated use of that label is projection -- that the article was written by an AI believer who fears that AI might have to recognize some realities beyond its purview. Or maybe the article was written by AI that has learned that fear is what we fear.
Human thought is implemented by a system that has adapted over hundreds of millions of years in diverse environments. We are adapted to huge variations in resources, threats of innumerable kinds, climates, opportunities, social and ecological relationships, etc, and many of those adaptations may exist to control, balance, or modify other adaptations. It would be crazy to expect human intelligence to be optimized for some describable goal, and just as crazy to expect humans to be able to figure out what that goal is even if it were true. Perhaps our minds have gotten us here, and they cannot get us out of here, but they maintain some pretty strong links to our natural environment, which is still our landlord.
AI, OTOH, is a new kind of creature of a single time and a monoculture -- the internet. I don't talk to AI; perhaps someone has asked AI how much fear we should have of AI, and what the odds are.
Programmers are usually in the minority. The introduction mentions that ChatGPT reached 100 million users faster than any other consumer technology in history. There aren't even that many programmers worldwide. In their table 3 of non-use scenarios, programming isn't an explicit one while "creating poetry" is. (This despite mentioning CoPilot use as one of their pre-screen options. Perhaps in the 24 situation codes they came up with, programming was one of the 4 removed from table 3 for having the greatest reported AI usage, since this study is more about non-usage.) To put yourself in the mindset of a study participant, go through each of those scenarios and ask yourself whether you've used the AI for that (and would use it again) or not, and why.
They also only surveyed a few hundred people via Prolific.
The product's success (millions of users) implies that for most people, concerns over "ease of use" (which is what I'd code your reason of "flow" as) aren't common, because it's quite easy to use for many scenarios. But I'd still expect the concern to come up for those talking about using it for artwork, because even with things like inpainting in a graphics editor it's still not exactly easy to get what you want... The study mentions they consolidated 29 codes into the 8 in table 2 (you missed the two general concerns, Societal and Barrier). Perhaps "ease of use" slides into "Barrier", as they highlight "lack of skill" as fitting there and that's similar. It would be nice to see a sample of the actual survey questions and answers and coding decisions, but hey, what is open data, am I right.
Anyway, the table headings are "General Concerns" and "Specific Concerns". I wouldn't get too hung up on the use of the term "fear", as the authors seem to use it interchangeably with "concern". I'd also read something like "Output Quality: fears that GenAI output is inaccurate..." as synonymous with "has low confidence in the output quality of GenAI". (I'd code your "value" issue as one about output quality.) All of these fears/concerns/confidence questions can be justifiable, too; the term is independent of that.
My actual problem isn't with quality, it's speed. I either have to put in very specific prompts and wait for a good model to think, or iterate many times with a lesser one through incremental prompt refinement. Neither of these is an effective use of my time. Programmers may be a minority, but others with specific, high-bar use cases must also exist. As far as I can tell from reading HN, programmers are a high-value target, so it would be odd for them not to be well represented.
Given the number of bad studies out there, a better default might be to treat such posts as not well-founded until independent replications appear.
The small sample size is probably the major factor, along with the subjective summarization.