I personally think the "AI" part here is a red herring. The problem is the deliberate dishonesty. This would be no more ethical if it were humans pretending to be rape victims or humans pretending to be trauma counselors or humans pretending to be anti-BLM black men or humans pretending to be patients at foreign hospitals or humans slandering members of certain religious groups.
I agree that that's concerning, but the same can be said of any of the other things that made this attack on public perception at scale relatively easy: Reddit, the World Wide Web, the Internet, you name it. AI is the latest effectiveness multiplier for bad actors (be they malicious or negligent), but not the first and probably not the last.
Put simply, I don't blame the hammer for being used to build a gallows, even if it's a really really fast hammer.
Exactly. The “AI” part of the equation is massively important because although a human could be equally disingenuous and wrongly influence someone else’s views/behavior, the human cannot spawn a million instances of themselves and set them all to work 24/7 at this for a year.
You're right; this study would be equally unethical without AI in the loop. At the same time, the use of AI probably allowed the authors to generate a lot more comments than they would have been able to manually, and allowed them to psychologically distance themselves from the generated content. (Or, to put it another way: if they'd had to write these comments themselves, they might have stopped sooner, either because they got tired, or because they realized just how gross what they were doing was.)
The AI wasn’t just pretending to be a rape victim; it was scraping the profiles of the users it replied to in order to infer their gender, political views, preferences, orientation, etc., and then using all that hyper-targeted information to craft a response that would be especially effective against that user.