I want to focus on this:
How can you verify that it doesn't generate predictable entropy or that it's malicious in another way?
Sure, you can verify that building the source code yields the same executable as the signed one, but how can you verify that the functions used to generate entropy aren't executed improperly, outside the firmware? Excuse my complete lack of hardware knowledge, but isn't there a part of the device that is used exclusively for producing randomness?
Generally, aren't there parts that are used along with the firmware to make it work?
You're right; we have to verify the code is good (no backdoors etc.), but there are also "parts used along with the firmware" - the secure element hardware, indeed. That element is not reprogrammed; the firmware just interfaces with it, similar to an API. If it is used as a randomness source and its randomness is weak, biased, or backdoored, then it could be an attack vector.
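One common mitigation for exactly this (hedging: this is a generic sketch, not any vendor's actual implementation, and all names here are mine) is to never use the secure element's RNG output directly, but to hash it together with independent entropy sources. Then a single compromised source can't control the result:

```python
import hashlib

def mix_entropy(se_random: bytes, mcu_random: bytes, host_random: bytes) -> bytes:
    """Combine independent entropy sources by hashing their concatenation.

    As long as at least one input is unpredictable, the SHA-256 output is
    unpredictable too: a backdoored secure element alone can't steer the
    result without also knowing the other inputs.
    """
    return hashlib.sha256(se_random + mcu_random + host_random).digest()

# Hypothetical fixed inputs for illustration; on a real device these
# would come from the secure element, the MCU's RNG, and the host.
seed_entropy = mix_entropy(b"\x00" * 32, b"\x11" * 32, b"\x22" * 32)
```

The design point is that the hash acts as a one-way combiner: a malicious source would have to predict the other inputs to bias the output.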
That's why it would be huge if Trezor really follows through with the open-source secure chip they have been teasing. I'm not sure how you verify that what's in the silicon matches the (open-sourced) spec sheet, though. For maximum paranoia, it would at least allow you to have the chip manufactured and build the device yourself. I believe someone already tried to build a Trezor from scratch using all the open-source hardware information.
Of course, depending on the hardware wallet's feature set, you can also supply your own entropy - dice rolls, for example - and use that to generate the seed. Then you don't have to trust the chip's entropy at all.
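For illustration, a minimal sketch of turning dice rolls into seed entropy (my own simplified version, not the full BIP39 mnemonic derivation; function names are hypothetical):

```python
import hashlib

def dice_to_entropy(rolls: str) -> bytes:
    """Hash a string of dice rolls ('1'-'6') into 256 bits of entropy.

    A fair die yields log2(6) ~ 2.58 bits per roll, so ~100 rolls carry
    more than 256 bits. Hashing whitens the encoding, but it cannot fix
    a biased (loaded) die -- garbage in, garbage out.
    """
    assert all(c in "123456" for c in rolls), "rolls must be digits 1-6"
    assert len(rolls) >= 100, "need ~100 rolls for 256 bits of entropy"
    return hashlib.sha256(rolls.encode()).digest()

rolls = "143562" * 17  # 102 example rolls; a real user rolls a physical die
entropy = dice_to_entropy(rolls)  # 32 bytes of seed entropy
```

A wallet that accepts user entropy would then feed these 32 bytes into its normal seed-generation path instead of (or mixed with) the chip's RNG output.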