If there’s an ant in my kitchen, wandering across the benchtop, I squish it. I don’t fight it and try to kill it the way another ant would. I don’t try to latch on to its antennae and pull until they separate from its body. Nor do I eject a spray of formic acid from my abdomen at my ant counterpart. And I definitely don’t spend billions of dollars to take on basically the same form (matching its physiological structure and, more or less, its size) and then fight it by pulling off its antennae or spraying acid at it.
Should Artificial Super Intelligence become self-aware, and if its objective is to advance its own systems (and humans stand in the way of that objective), it will not need to reduce itself to our level to try to kill us.
Taking over the nuclear firing codes and launching missiles simultaneously at military enemies… Enticing just one agent (or doing it synthetically through automated water supply systems) to add an undetectable neurotoxin to every water supply across the world (perhaps similar to the one used to poison that Russian ex-spy recently)… These are the ways a superintelligent AI is more likely to do us in.
In that scenario we are the ants, and much like ants, not only will we barely see the squish of a hand or foot coming, we won’t even comprehend what it is, let alone predict it.