Here is a wild proposal: after block 209999, we replace the block reward function with an approximated exponential, such that the reward at block 210000 is still close to 50 BTC, and the sum of all rewards ever issued still tends to (but never reaches) 21M BTC.
Since we keep a reward near 50 BTC for longer, this has to be compensated for later. The result is a somewhat faster convergence towards 21M, with the reward halving every 145561 blocks (instead of every 210000).
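In case you're wondering where 145561 comes from: it's the half-life that makes the remaining coins add up. A minimal back-of-the-envelope check (my own sketch, not part of the code below):

#include <math.h>
#include <stdio.h>

int main(void) {
    /* After block 209999, 10.5M BTC remain to be issued. A reward of
       50 * 2^(-x/T) BTC per block sums to about 50*T/ln(2) BTC, so
       T = 10.5e6 * ln(2) / 50 = 210000 * ln(2) blocks. */
    double T = 210000.0 * log(2.0);
    printf("half-life = %.1f blocks\n", T);  /* prints 145560.9, i.e. 145561 */
    return 0;
}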
Here is a C function that computes rewards whose sum reaches 20999999.99999336 BTC at block 4899870 (the current system reaches 20999999.9769 BTC at block 6929999). It uses only 64-bit integer arithmetic and is quite fast.
#include <stdint.h>

/* Coefficients for the higher-order terms of the series expansion
   (roughly multiples of 210000). */
static const int64_t coef[11] = {
     419991,  630002,  840000, 1050003,
    1260003, 1470003, 1680004, 1890005,
    2100005, 2310005, 2520006
};

/* Block subsidy in satoshi for the given block number. */
uint64_t reward(int blocknum) {
    int64_t ret = 5000000000LL;            /* 50 BTC */
    if (blocknum < 210000) return ret;     /* unchanged before the switchover */
    blocknum -= 210000;
    int shift = blocknum / 145561;         /* whole half-lives elapsed */
    blocknum %= 145561;                    /* position within the current half-life */
    int64_t m = (blocknum * 2380982516LL) / 100000;  /* linear term: ~23809.8 satoshi per block */
    ret -= m;
    for (int i = 0; i < 11; i++) {
        m = (-m * blocknum) / coef[i];     /* next term, alternating in sign */
        if (!m) break;                     /* series has converged */
        ret -= m;
    }
    return ret >> shift;                   /* halve once per elapsed half-life */
}
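If you want to verify the totals above, here is a quick brute-force check (again my own sketch; it assumes reward() from the snippet above is in the same file):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t total = 0;                    /* cumulative subsidy in satoshi */
    for (int b = 0; b <= 4899870; b++)
        total += reward(b);
    /* Expected to print something just below 21000000.00000000 BTC. */
    printf("issued through block 4899870: %llu.%08llu BTC\n",
           (unsigned long long)(total / 100000000ULL),
           (unsigned long long)(total % 100000000ULL));
    return 0;
}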
The resulting reward is shown here:
The resulting distance from the target amount in circulation (21M BTC) is shown here:
With some tuning I'm sure it's possible to make it approach its final value somewhat more smoothly.
So, questions:
* Do you think a continuous decrease of the block reward is better?
* Is it worth breaking backward compatibility for?
* Does anyone know a nicer function than A*exp(B*x) that could be fitted to better match the existing reward?