X, formerly Twitter. Take out all the engineers and content moderators, and that business will be shuttered by regulators within a few months, because AI is terrible at building original products and at classifying disinformation (ironically, classification was one of the first use cases discovered for neural networks).
Also all AI companies, because AI cannot train itself.
Not the best example, since Twitter sacked a huge portion of its staff after Elon Musk took over (he said it was around 80%), and not only did that not hurt them, it arguably turned out to be a good move. It wasn't due to automation, though.
AI should be able to train itself, and there are claims that it already is.

Managers. AIs aren't going to just suddenly get a connection to the outside, sensory world and handle the kind of people management that managers have to do.
That's the huge incentive for automation: cut out the human labour and you no longer need managers to look after the workers. Not to mention it's well known (largely exposed by remote working during COVID) that a huge chunk of middle management is completely useless and doesn't even need to be replaced by anything.
Sure, there will likely be *some* positions where people remain necessary, but it's all about the numbers: if one coder equipped with AI can do the work of 1000 coders, the other 999 will need to find something else to do, which will keep getting harder as automation affects pretty much every industry. And as much as some experts dismissed the problem, say, 10-20 years ago, there now seems to be a consensus that technological unemployment is inevitable. It's not a question of if, but when.