In the block version number case (BIP 34) the least significant byte is used to signal a compatibility level, and in transactions the version is almost always set to 1 for compatibility purposes; AFAIK ShapeShift occasionally sets it to 2, again as a sort of signaling.
Suppose we declare the txn version number as char[4]; then we would assert that txnVN[0] (the low byte, since the serialization is little-endian) is set to a value between 1 and the highest version number.
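A minimal sketch of that byte-array style, assuming a little-endian serialization (the names txnVN and MAX_TX_VERSION are made up for illustration):

```cpp
#include <cassert>

// Hypothetical upper bound: highest transaction version this code recognizes.
const unsigned char MAX_TX_VERSION = 2;

// The version field kept as the raw 4 serialized bytes (little-endian,
// so txnVN[0] is the least significant byte).
unsigned char txnVN[4] = {1, 0, 0, 0};

void CheckVersionByte()
{
    // The low byte carries the version; the higher bytes are zero in practice.
    assert(txnVN[0] >= 1 && txnVN[0] <= MAX_TX_VERSION);
}
```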
If you think that way, then everything else can only be interpreted as an array of bytes instead of a typed variable. But treating the 4 bytes as an int makes things easier. For instance, when you get a block and deserialize its transactions, it is easier to put the first 4 bytes into an int instead of a byte array; then in the verification process you just say "if version < 2 && has OP_CHECKSEQUENCEVERIFY then reject as invalid" rather than "if byte[0] < 2 && byte[1] == 0 && byte[2] == 0 && byte[3] == 0 && has OP_...".
And the transaction version does have a meaning. Check BIP 68.
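A rough sketch of the two styles of the check described above, side by side; the function names and the OP_CHECKSEQUENCEVERIFY stub are placeholders for illustration, not Bitcoin Core code:

```cpp
#include <cstdint>

// Stub standing in for real script inspection.
static bool HasCheckSequenceVerify() { return true; }

// Int style: decode the first 4 little-endian bytes into one integer once,
// after which the rule reads much like the BIP text.
bool RejectIntStyle(const unsigned char* b)
{
    uint32_t u = uint32_t(b[0]) | (uint32_t(b[1]) << 8)
               | (uint32_t(b[2]) << 16) | (uint32_t(b[3]) << 24);
    int32_t version = int32_t(u);
    return version < 2 && HasCheckSequenceVerify();
}

// Byte-array style: the same rule spelled out byte by byte.
bool RejectByteStyle(const unsigned char* b)
{
    return b[0] < 2 && b[1] == 0 && b[2] == 0 && b[3] == 0
        && HasCheckSequenceVerify();
}
```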
You don't need to check the other bytes; just byte[0] < 2 suffices (it is little-endian, AFAIK). The point is, for the fields under discussion there is never a use case for a signed or unsigned int.
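For the byte-order point, a tiny sketch of how the version of a version-1 transaction sits in its serialized bytes, least significant byte first:

```cpp
#include <cassert>
#include <cstdint>

int main()
{
    // Serialized version field of a version-1 transaction.
    unsigned char bytes[4] = {0x01, 0x00, 0x00, 0x00};

    uint32_t version = uint32_t(bytes[0]) | (uint32_t(bytes[1]) << 8)
                     | (uint32_t(bytes[2]) << 16) | (uint32_t(bytes[3]) << 24);
    assert(version == 1);

    // With the higher bytes zero, "version < 2" reduces to a check on bytes[0].
    assert((version < 2) == (bytes[0] < 2));
    return 0;
}
```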
The block version number is mostly used in ASIC Boost implementations for entropy purposes; it is not really an int.
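As a rough illustration of that entropy use, here is a sketch of overt version rolling; the bit range (bits 13-28, mask 0x1fffe000, as proposed in BIP 320) and the function name are assumptions of this sketch:

```cpp
#include <cstdint>

// Bits of the block header version treated as extra nonce space by
// version-rolling (overt ASIC Boost) miners; mask per BIP 320 (assumed here).
const uint32_t VERSION_ROLL_MASK = 0x1fffe000;

// Keep the signaling bits from baseVersion and drop a 16-bit rolled value
// into the general-purpose bit range.
uint32_t RollVersion(uint32_t baseVersion, uint32_t rolled)
{
    return (baseVersion & ~VERSION_ROLL_MASK)
         | ((rolled << 13) & VERSION_ROLL_MASK);
}
```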
This is even more the case for the txn version (not to be confused with the sequence number): it is mostly set to 1, with very rare exceptions that are set to 2 by some wallets. Again, it is not really an int.
I personally don't like this style of using an integer data type for encoding; I prefer properly sized byte arrays for encoded fields and ints for quantities.