First off, it's great IMO to see people doing code reviews of Electrum; the more eyes the better!
It looks like the last word of the seed was not generated very well, so does this mean that pre-2.7 seeds have a bit lower security?
In short, pre-2.7 seeds have slightly more entropy, but it probably doesn't matter.
Prior to Electrum 2.7, Electrum encoded 136* bits of data into 13 or fewer mnemonic words using a wordlist of length 2048**. log₂(2048) = 11, so each word is capable of encoding exactly 11 bits of data. This made Electrum perhaps not as efficient as it could have been, since 13 words (as you already pointed out) can store 143 bits of data, but it only needed 136 bits of storage.
*From the source:
n_added = max(16, len('01')*4 + 128 - ceil(log2(1)))
n_added = max(16, 2*4 + 128 - 0)
n_added = 136
**Except for the Portuguese wordlist, which has 1626 words in it; log₂(1626) ≈ 10.67 bits per word.
Prior to encoding, Electrum truncates any leading 0 bits, so if the 136 bits of data have 4 or more leading 0 bits (a 1-in-16 chance for random data), the resulting mnemonic will be 12 or fewer words. There's no loss of bit storage/security here; it's simply the equivalent of running a simple compression algorithm before encoding into words.
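A minimal sketch of that effect, using a hypothetical `words_needed` helper (not Electrum's actual implementation): treating the data as an integer, leading zero bits drop out of `bit_length()` automatically, so 4 or more of them shave a word off the mnemonic.

```python
import math

def words_needed(data: int, bits_per_word: int = 11) -> int:
    """Words needed once leading zero bits are truncated."""
    # bit_length() ignores leading zeros, mimicking the truncation step
    return math.ceil(data.bit_length() / bits_per_word)

# 136 significant bits -> 13 words; 4+ leading zeros (132 bits left) -> 12 words
print(words_needed(1 << 135))  # 13
print(words_needed(1 << 131))  # 12
```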
These 136 bits are generated randomly by the OS (assuming no OS bugs), but are then incremented until an HMAC of these bits starts with the byte 0x01 to act as a sort of checksum. This effectively discards 255 out of every 256 possible seeds, which decreases the entropy by 8 bits (2⁸ = 256), resulting in a seed with about 128 bits of entropy.
AFAIK, all versions of Electrum 2.x prior to 2.7 used this algorithm (assuming custom entropy wasn't added), but I could be mistaken.
The commit you referenced makes two changes. First, instead of aiming to gather 136 bits from the OS, it aims for just 128 bits. Second, it rounds 128 up to remove the inefficiency I mentioned above so that the mnemonic's max possible length is fully utilized: 128 bits gets rounded up to 132 bits (⌈128 / log₂(2048)⌉ × log₂(2048)). The result, after removing the bits to account for the checksum, is 124 bits of entropy (≈120 bits for Portuguese).
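Numerically, a sketch of that arithmetic (my own helper, not code from the commit):

```python
import math

def usable_entropy(target_bits: int, wordlist_len: int, checksum_bits: int = 8) -> float:
    """Entropy left after rounding up to whole words and paying the checksum cost."""
    bits_per_word = math.log2(wordlist_len)
    encoded = math.ceil(target_bits / bits_per_word) * bits_per_word  # round up to whole words
    return encoded - checksum_bits

print(usable_entropy(128, 2048))         # 124.0 bits of entropy
print(round(usable_entropy(128, 1626)))  # ~120 for Portuguese
```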
The second change makes sense to me, but I don't understand the reasoning behind the first. The commit message says in part "count prefix length as entropy", which I don't think is a valid thing to do.
Does this commit matter practically? Not really, IMO: 124 bits of seed entropy is still far beyond any practical ability to crack the resulting ECDSA keys.