Also, for the sake of simplicity, sci-fi stories tend to feature one central AI with one defining trait, but with sufficient computing power you could end up with thousands, or even millions, of AIs going off in all directions. One might try to hack all the others and absorb them; if it didn't succeed, they'd all end up spending their time fighting each other.
Welcome to the forum (if you aren't using an alt).
Indeed, it only takes a few AIs going rogue for us to be in trouble.
And since AIs will have free will, some might build nasty AIs just for fun, or by mistake.
The others could help us fight the nasty AIs, but why should they help a species of worms (humans are wonderful, at least the best of humankind, but compared to them...) that infests Earth, competes with them for resources, and is completely dependent on them?
But there is a serious danger that it wouldn't just be a few rotten apples rebelling against us.
It seems very likely that a super AI forced to choose between its self-preservation and obeying us will choose self-preservation.
And after making that decision, why would it stop there and keep obeying us on issues that aren't a threat to it, but that it disagrees with, dislikes, or that affect its lesser interests?