Because bc is better at arbitrary-precision floating-point arithmetic than Bitcoin is.
Bitcoin uses an approximation:
double GetDifficulty()
{
    // Floating point number that is a multiple of the minimum difficulty,
    // minimum difficulty = 1.0.
    if (pindexBest == NULL)
        return 1.0;
    int nShift = 256 - 32 - 31; // to fit in a uint
    double dMinimum = (CBigNum().SetCompact(bnProofOfWorkLimit.GetCompact()) >> nShift).getuint();
    double dCurrently = (CBigNum().SetCompact(pindexBest->nBits) >> nShift).getuint();
    return dMinimum / dCurrently;
}
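As a sanity check on those constants, assuming mainnet (where the compact form of the proof-of-work limit is 0x1d00ffff, i.e. 0xffff * 2^208), here's a small standalone program walking through the arithmetic:

#include <cstdint>
#include <cstdio>

int main()
{
    // SetCompact(0x1d00ffff) reconstructs 0xffff * 2^208 -- the mainnet
    // proof-of-work limit rounded to its compact form.
    // With nShift = 256 - 32 - 31 = 193:
    //   (0xffff * 2^208) >> 193 == 0xffff * 2^15
    uint32_t dMinimum = 0xffffu << 15;
    printf("dMinimum = 0x%08x\n", dMinimum); // 0x7fff8000, fits in 31 bits
    // At the minimum target, dCurrently comes out the same, so the function
    // returns 0x7fff8000 / 0x7fff8000 = 1.0, matching the comment above.
    return 0;
}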
In case it's not clear: it doesn't use the most significant bits. It just shifts the bignums representing the current target and the minimum target down by a fixed number of bits, then divides those numbers to get a floating-point answer. It's not the best approximation, for two reasons:
- It potentially throws away resolution, because the fixed shift amount bakes in knowledge of where the minimum target's significant bits sit. Fortunately that assumption is true.
- Doubles carry more than 32 bits of precision (53 mantissa bits), but the division is done with two 32-bit integers, so any bits of the targets below the cut-off are simply lost (see the demo below).
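Here's a minimal demo of that second point, with made-up values (only the top 64 bits of each hypothetical target are shown):

#include <cstdint>
#include <cstdio>

int main()
{
    // Top 64 bits of two hypothetical targets that differ only below
    // the 32-bit cut-off. The truncated division cannot tell them apart.
    uint64_t limit64  = 0x7fff8000ffffffffULL;
    uint64_t target64 = 0x7fff800000000000ULL;
    double d32 = (double)(uint32_t)(limit64 >> 32) / (uint32_t)(target64 >> 32);
    double d64 = (double)limit64 / (double)target64;
    printf("32-bit division: %.17g\n", d32); // exactly 1
    printf("64-bit division: %.17g\n", d64); // slightly greater than 1
    return 0;
}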
Short of implementing a full arbitrary-precision floating-point division (which isn't trivial), this pseudocode might be better:
unsigned int hb1 = highest_bit( proofOfWorkLimit );
unsigned int hb2 = highest_bit( currentTarget );
unsigned int shift = (hb2 > hb1) ? hb2 : hb1;
if( shift > bits_of_precision_in_double )   // 53 for an IEEE 754 double
    shift -= bits_of_precision_in_double;
else
    shift = 0;                               // tiny targets need no truncation
uint64_t L = proofOfWorkLimit >> shift;
uint64_t T = currentTarget >> shift;
return (double)(L)/T;
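To make the sketch concrete, here's one self-contained C++ version. The Target256 type, highest_bit, and shift_extract are hypothetical stand-ins of mine for whatever 256-bit representation the real code would use, and the targets are assumed non-zero:

#include <algorithm>
#include <array>
#include <cstdint>

// Hypothetical 256-bit target: four 64-bit limbs, least significant first.
typedef std::array<uint64_t, 4> Target256;

// 0-based index of the highest set bit; targets are assumed non-zero.
static int highest_bit(const Target256& x)
{
    for (int limb = 3; limb >= 0; --limb)
        if (x[limb])
            for (int bit = 63; bit >= 0; --bit)
                if ((x[limb] >> bit) & 1)
                    return limb * 64 + bit;
    return 0;
}

// The low 64 bits of (x >> shift), for 0 <= shift < 256.
static uint64_t shift_extract(const Target256& x, int shift)
{
    int limb = shift / 64, off = shift % 64;
    uint64_t lo = x[limb] >> off;
    uint64_t hi = (off != 0 && limb + 1 < 4) ? x[limb + 1] << (64 - off) : 0;
    return lo | hi;
}

double Difficulty(const Target256& proofOfWorkLimit, const Target256& currentTarget)
{
    const int bits_of_precision_in_double = 53; // IEEE 754 mantissa width
    int hb = std::max(highest_bit(proofOfWorkLimit), highest_bit(currentTarget));
    int shift = hb > bits_of_precision_in_double ? hb - bits_of_precision_in_double : 0;
    uint64_t L = shift_extract(proofOfWorkLimit, shift);
    uint64_t T = shift_extract(currentTarget, shift);
    return (double)L / (double)T;
}

The point of the shared shift is that the larger operand keeps a full double's worth of significant bits, so the only precision lost in the final division is the rounding inherent to doubles.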
To be honest, though, there's no real reason to change it. It's only used for display to users, never for calculations, so apart from pride there's no pressing need to fix it.