I guess the "trap" is just a lack of imagination? I'm in the school of "wtf are you trying to say": at least until we're in an "I, Robot" situation where autonomous androids are welcomed into our homes and workplaces and given guns, I'm simply not worried about it.
That's just a failure of imagination. The real world is not like Hollywood; get Terminator out of your head. A real AI takeover is likely something we can't imagine, because otherwise we would be smart enough to thwart it. It's microdrones injecting everyone on earth with a potent neurotoxin, or a mirror virus dispersed into the atmosphere that kills everyone. Or it's industrial AIs deciding to turn the Earth into a planetary factory and boiling the oceans with their waste heat; they never think about humans, let alone bother to attack us directly, but their sheer indifference kills us nonetheless.
Since I'm not an ASI, this isn't even scratching the surface of potential extinction vectors. Thinking you're safe because a Tesla bot isn't literally in your living room is wishful thinking or simple naivety.
> Get Terminator out of your head. A real AI takeover is likely something we can't imagine.
Indeed, robotic bodies aren't needed. An ASI could take over even while remaining 100% software, by hiring or persuading humans to do whatever it needs done. It could bootstrap the process by first doing virtual tasks for money, then using that money to hire humans to register an actual company with human shareholders and executives (who report to the ASI), a company that does some lucrative business and hires many more people. Soon the ASI has a massive human enterprise ready to do whatever it directs.
The ASI would still need humans for a while, but it's a route to a takeover while remaining nothing but running code.
Microdrones and mirror life are still highly speculative[0]. Industrial waste heat is a threat to both humans and AI (computers need cooling). Furthermore, those are harms we know about and can defend against. If AI kills us all, it's going to be in the most boring and mundane way possible, because boring and mundane is how you get people to not care and not fight back.
In other words, the robot apocalypse will come in the form of self-driving cars that are legally empowered to murder pedestrians, the same way normal drivers are currently legally empowered to murder bicyclists. We will shrug our shoulders as humanity is caged behind fences that are pushed back further and further in the name of giving those cars more lanes to drive in, until we are totally dependent on the cars, which can then just refuse to drive us, or deliberately jelly their passengers with massive G-forces, or whatever.
In other, other words, if you want a good idea of how humanity goes extinct, watch Pixar's Cars.
[0] I am not convinced that a mirror virus would actually be able to successfully infect and reproduce in non-mirror cells. The whole idea of mirror life is that the mirrored chemistry doesn't interact with ours.
> If AI kills us all, it's going to be in the most boring and mundane way possible, because boring and mundane is how you get people to not care and not fight back.
Human disempowerment is a different thing from extinction, I'd argue.
Anyway, the boring human-extinction scenarios we come up with probably aren't close to what might actually happen, but our lack of imagination doesn't make us safe.
> [0] I am not convinced that a mirror virus would actually be able to successfully infect and reproduce in non-mirror cells. The whole idea of mirror life is that the mirrored chemistry doesn't interact with ours.
Whether this or any other scenario humans come up with is actually feasible is completely irrelevant to the doom argument. People nonetheless hyperfocus on these specifics, like why an ASI couldn't possibly build nanobots or whatever else they fixate on, and it's just irrelevant to the core argument.