I'm not saying malicious firmware cannot be signed. I'm saying it cannot be signed without people knowing.
Just to give one example: three of the five key holders at Trezor conspire and sign a malicious version of the firmware, which they hand to a hacker. The hacker unleashes a virus with a malicious plug-in or a standalone MyTrezor bridge that instructs clients to download and install the "latest version" of the firmware, which is of course the malicious build above.
You are exaggerating with the other "use cases". It's not going to happen.
Well, I hope that manufacturers can resist that temptation.
20,000 lines of code can be audited for backdoors in a month or two. Fully understanding all of it takes longer, but the point is that a single person can do such an audit, and people have done it.
You mean, someone already checked it and did not see the backdoor?
Yes, IF they were malicious, they could sign a non-git build of the firmware with a money-stealing interface. If such firmware were flashed onto the device by a hacked computer, your BTC would be stolen. You would still need to confirm on the device that you want this firmware flashed. But you would then hold a signed malicious firmware image and could sue them with it, because it is digitally signed with their signatures. They would probably try to get away with it by claiming all their keys were stolen, but the company would go bankrupt.
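To make the 3-of-5 threshold idea concrete, here is a minimal sketch of how a device might accept firmware only when enough distinct key holders have signed the exact image. Everything here is hypothetical: real devices use asymmetric signatures (ECDSA/Ed25519), while this sketch uses HMAC-SHA256 as a stand-in so it stays stdlib-only; the key names and threshold are made up for illustration.

```python
import hmac
import hashlib

# Hypothetical 3-of-5 vendor key set (stand-ins, not real keys).
VENDOR_KEYS = {i: f"vendor-key-{i}".encode() for i in range(5)}
THRESHOLD = 3

def sign(key: bytes, firmware: bytes) -> bytes:
    # Stand-in for a real digital signature over the firmware image.
    return hmac.new(key, firmware, hashlib.sha256).digest()

def verify_firmware(firmware: bytes, signatures: dict) -> bool:
    """Accept only if at least THRESHOLD distinct key holders signed this exact image."""
    valid = sum(
        1
        for idx, sig in signatures.items()
        if idx in VENDOR_KEYS
        and hmac.compare_digest(sig, sign(VENDOR_KEYS[idx], firmware))
    )
    return valid >= THRESHOLD

fw = b"firmware-image-v1.3.2"
# Three colluding key holders sign the malicious image:
sigs = {i: sign(VENDOR_KEYS[i], fw) for i in (0, 2, 4)}
assert verify_firmware(fw, sigs)               # the device accepts it...
assert not verify_firmware(fw, {0: sigs[0]})   # ...but one signature is not enough
```

Note the double edge described above: the same signatures that make the malicious image pass the device's check are also what make it hard evidence against the signers, since the image only verifies against their published keys.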
But again, this kind of attack is not specific to this company. If five bank employees agree to forge a withdrawal from your bank account, how would you protect against such an inside job?