
Topic: Dynamic block frequency

donator
Activity: 2058
Merit: 1054
May 08, 2012, 02:41:36 PM
#19
Your proposal is interesting, and I agree the confirmation time is definitely a point of some friction. I like litecoin's 2.5 minute target, but even that could get unrealistically large for things like high frequency trading.
The system isn't going to do any magic. Too frequent blocks will cause excessive forking, so some things will just have to be done with layers on top of Bitcoin.
newbie
Activity: 29
Merit: 0
May 08, 2012, 02:11:55 PM
#18
Your proposal is interesting, and I agree the confirmation time is definitely a point of some friction. I like litecoin's 2.5 minute target, but even that could get unrealistically large for things like high frequency trading. Definitely worth considering. One of the potential weaknesses is that it creates an incentive to oscillate the difficulty. This incentive already exists, but you need a significant amount of hashing power to benefit from it in the current system. Bounding the weight deviation seems an effective way to keep that incentive down.
donator
Activity: 2058
Merit: 1054
May 08, 2012, 01:54:43 PM
#17
"Right shift by 1 bit" is not the same operation as "divide by two and round down".  They obtain the same result if done properly, but have different meta costs, as you helped me demonstrate.
I don't understand. Bit-shift may take less CPU time but it's much more difficult conceptually. The actual operation being done is division by 2 and rounding down, bit-shift is the specific code used to do it.
As someone that knows a little bit about CPUs and digital logic, I can promise you that the right shift operation is not done by divide-and-round, it is done in dedicated hardware called a shifter that literally shifts bits.  It might be harder for laymen to understand, but programmers (and CPUs) understand shift very literally.
Look, we're not going to get anywhere if you keep misinterpreting what I say. Of course how a CPU physically does a bit shift is completely different from how it does integer division. That's why I said it's faster. But the mathematical operation being performed is the same. Do you really find "block reward starts at 5 billion satoshis, and halves every 4 years" less intuitive than "block reward starts at 5 billion satoshis, and is shifted by 1 bit to the right every 4 years"?
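
For the record, the two formulations do coincide on non-negative integers; a quick check (Python, purely for illustration):

Code:
# For non-negative integers, a right shift by 1 bit and floor
# division by 2 yield the same value, however the CPU computes it.
for subsidy in (5_000_000_000, 2_500_000_000, 7, 1, 0):
    assert subsidy >> 1 == subsidy // 2
print("shift and floor division agree on all tested values")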

It seems to me we're speaking completely different languages - you as a programmer (I presume?) are passionate about implementation details (and you interpret everything I say as an implementation specification which must be rigorous), for me as a mathematician the concepts come first, the implementation can be worried about later - especially in those cases when it is clear that how we choose to implement it is not going to matter.

I maintain that it is very important, and that it should be exact and simple, barring a very good reason to be otherwise.
As I said, I agree with this, but I don't see how what I'm talking about is any less simple or exact. The claim that bit-shift is somehow more exact than integer division is ludicrous.

I think that this is the part that needs the most work.  Miners will seek to maximize things based on whatever incentive scheme we come up with.  We need to show that there is a problem (or at least a sub-optimal condition) and that this change fixes (or at least improves) it.

I agree with you that 600 seconds forever is very unlikely to be optimal.  But is dynamic actually any better?  If we make it dynamic, and set up a system that adjusts based on the desires of miners, and miners adjust things to their liking, is that evidence that the new value is better?  Or is it just evidence that the miners are gaming the incentive?  And how do we measure better-ness anyway?
The users, who are the second half of the equation, would need to have influence over the incentive structure of the miners. I still need to iron out a few things with this.
kjj
legendary
Activity: 1302
Merit: 1026
May 08, 2012, 12:38:20 PM
#16
Don't take this the wrong way, but you did get this "minute detail about one of the simpler aspects" wrong.
Sorry, but no, I didn't get anything wrong. I failed to communicate an idea clearly, which I would have spent more effort avoiding if it referred to anything important.

It seems like we are unlikely to come to an agreement on the importance of the subsidy calculation.  I maintain that it is very important, and that it should be exact and simple, barring a very good reason to be otherwise.

"Right shift by 1 bit" is not the same operation as "divide by two and round down".  They obtain the same result if done properly, but have different meta costs, as you helped me demonstrate.
I don't understand. Bit-shift may take less CPU time but it's much more difficult conceptually. The actual operation being done is division by 2 and rounding down, bit-shift is the specific code used to do it.

As someone that knows a little bit about CPUs and digital logic, I can promise you that the right shift operation is not done by divide-and-round, it is done in dedicated hardware called a shifter that literally shifts bits.  It might be harder for laymen to understand, but programmers (and CPUs) understand shift very literally.

P.S.  Read the bold part again.  Warning bells should be going off as you do.
Not really. If I were able to predict what the market would do I'd be rich. I just want it to have the tools to find what's most efficient for it, and that's easier to demonstrate.

It is relatively simple to show that 5 seconds between blocks is too short, and that 3600 seconds between blocks is too long.  It is much harder to show that 590 or 610 seconds is better than 600, or that 300 or 900 are better than 600.

As you say, it is hard to predict what this system will look like once it gets rolling.  Will we be trading one non-optimal block average for a different non-optimal average?  Will we be making a system that cycles or floats through a range of values, each just as non-optimal as the others?  How can we tell a "good" state from a "bad" state?
It's all about the incentive structure. If we can find a system that allows the people for whom the value actually matters to vote with their feet, we should come up with something good.

Usually I make these points in threads about moving the decimal point, but they really apply to all of the magic numbers.  Why 10 minutes?  Why 2 weeks?  Why 210,000 blocks?  The answer is always the same, because they are "good enough" and it is really hard to show that any different numbers would be any better.

I suspect that the same applies here.  It seems to me that it is just as hard to show that a dynamic system is any better for this job, and it doesn't just have to be any better, it has to be better enough to pay for the extra complexity costs.
Some magic numbers are more important than others. 21M total bitcoins is unimportant, 100M units per bitcoin is unimportant. 2 weeks is somewhat important. 4 years and 10 minutes are quite important. We can't change 4 years for Bitcoin itself, but maybe we can change 10 minutes.

I agree completely that it is hard to deduce that one value is better than another, but I don't think it's as hard to show that a certain system has properly aligned incentives to find an optimal value.

Also, even if the dynamic system can't be trusted to be robust, the proposal gives a framework for changing the static block time, if there is ever a consensus that it should be changed.

I think that this is the part that needs the most work.  Miners will seek to maximize things based on whatever incentive scheme we come up with.  We need to show that there is a problem (or at least a sub-optimal condition) and that this change fixes (or at least improves) it.

I agree with you that 600 seconds forever is very unlikely to be optimal.  But is dynamic actually any better?  If we make it dynamic, and set up a system that adjusts based on the desires of miners, and miners adjust things to their liking, is that evidence that the new value is better?  Or is it just evidence that the miners are gaming the incentive?  And how do we measure better-ness anyway?
donator
Activity: 2058
Merit: 1054
May 08, 2012, 05:06:11 AM
#15
Don't take this the wrong way, but you did get this "minute detail about one of the simpler aspects" wrong.
Sorry, but no, I didn't get anything wrong. I failed to communicate an idea clearly, which I would have spent more effort avoiding if it referred to anything important.

"Right shift by 1 bit" is not the same operation as "divide by two and round down".  They obtain the same result if done properly, but have different meta costs, as you helped me demonstrate.
I don't understand. Bit-shift may take less CPU time but it's much more difficult conceptually. The actual operation being done is division by 2 and rounding down, bit-shift is the specific code used to do it.

Do you get what I'm trying to say?
No.

P.S.  Read the bold part again.  Warning bells should be going off as you do.
Not really. If I were able to predict what the market would do I'd be rich. I just want it to have the tools to find what's most efficient for it, and that's easier to demonstrate.

It is relatively simple to show that 5 seconds between blocks is too short, and that 3600 seconds between blocks is too long.  It is much harder to show that 590 or 610 seconds is better than 600, or that 300 or 900 are better than 600.

As you say, it is hard to predict what this system will look like once it gets rolling.  Will we be trading one non-optimal block average for a different non-optimal average?  Will we be making a system that cycles or floats through a range of values, each just as non-optimal as the others?  How can we tell a "good" state from a "bad" state?
It's all about the incentive structure. If we can find a system that allows the people for whom the value actually matters to vote with their feet, we should come up with something good.

Usually I make these points in threads about moving the decimal point, but they really apply to all of the magic numbers.  Why 10 minutes?  Why 2 weeks?  Why 210,000 blocks?  The answer is always the same, because they are "good enough" and it is really hard to show that any different numbers would be any better.

I suspect that the same applies here.  It seems to me that it is just as hard to show that a dynamic system is any better for this job, and it doesn't just have to be any better, it has to be better enough to pay for the extra complexity costs.
Some magic numbers are more important than others. 21M total bitcoins is unimportant, 100M units per bitcoin is unimportant. 2 weeks is somewhat important. 4 years and 10 minutes are quite important. We can't change 4 years for Bitcoin itself, but maybe we can change 10 minutes.

I agree completely that it is hard to deduce that one value is better than another, but I don't think it's as hard to show that a certain system has properly aligned incentives to find an optimal value.

Also, even if the dynamic system can't be trusted to be robust, the proposal gives a framework for changing the static block time, if there is ever a consensus that it should be changed.

All that said, unless you want to discuss these things further, I'm willing to bow out here and let you get this thread back on track so that you can finish presenting your idea.  I really would like to see the rest of it.
We can discuss it further. Some of the discussion is better deferred until I write about some related ideas. I'm not saying I have all the answers, but I think it's possible to make a useful system out of this.
kjj
legendary
Activity: 1302
Merit: 1026
May 08, 2012, 01:42:22 AM
#14
I thought I had made my point pretty well.  You are proposing to replace something that is really simple, always exact, and impossible to mess up with something that has none of those properties.  People will get it wrong.
The current block reward calculation isn't any more exact than what I propose - when the block reward is halved, the number of satoshis is rounded down.

It's hard for you to make a case about simplicity when your example is a minute detail of one of the simpler aspects. If multiplying two numbers gives us trouble, I worry.

Don't take this the wrong way, but you did get this "minute detail about one of the simpler aspects" wrong.  This is an extreme example, one that I often choose because it is so trivial and so simple, and still so easy to get wrong.  (See my posts in various other threads with proposals to change up the subsidy calculation.)

"Right shift by 1 bit" is not the same operation as "divide by two and round down".  They obtain the same result if done properly, but have different meta costs, as you helped me demonstrate.  That extra cost should not be paid (now and for eternity) without good reason.  The bitcoin software is chock full of things far more complicated than this, so clearly we can get this detail right too.  But should we?

Do you get what I'm trying to say?

P.S.  Read the bold part again.  Warning bells should be going off as you do.
Not really. If I were able to predict what the market would do I'd be rich. I just want it to have the tools to find what's most efficient for it, and that's easier to demonstrate.

It is relatively simple to show that 5 seconds between blocks is too short, and that 3600 seconds between blocks is too long.  It is much harder to show that 590 or 610 seconds is better than 600, or that 300 or 900 are better than 600.

As you say, it is hard to predict what this system will look like once it gets rolling.  Will we be trading one non-optimal block average for a different non-optimal average?  Will we be making a system that cycles or floats through a range of values, each just as non-optimal as the others?  How can we tell a "good" state from a "bad" state?

Usually I make these points in threads about moving the decimal point, but they really apply to all of the magic numbers.  Why 10 minutes?  Why 2 weeks?  Why 210,000 blocks?  The answer is always the same, because they are "good enough" and it is really hard to show that any different numbers would be any better.

I suspect that the same applies here.  It seems to me that it is just as hard to show that a dynamic system is any better for this job, and it doesn't just have to be any better, it has to be better enough to pay for the extra complexity costs.

All that said, unless you want to discuss these things further, I'm willing to bow out here and let you get this thread back on track so that you can finish presenting your idea.  I really would like to see the rest of it.
donator
Activity: 2058
Merit: 1054
May 07, 2012, 11:42:16 PM
#13
I thought I had made my point pretty well.  You are proposing to replace something that is really simple, always exact, and impossible to mess up with something that has none of those properties.  People will get it wrong.
The current block reward calculation isn't any more exact than what I propose - when the block reward is halved, the number of satoshis is rounded down.

It's hard for you to make a case about simplicity when your example is a minute detail of one of the simpler aspects. If multiplying two numbers gives us trouble, I worry.

I'm not saying that anyone should reject the proposal simply because of this, but it should give you pause.
Clearly the proposal adds complexity. But the whole integer arithmetic thing is not it.

Also, I'm pretty sure that your idea will create total chaos.  Not fewer forks, but more.  Lots more.  Like on nearly every block.  Not at first, necessarily, but as the subsidy shrinks relative to the transaction fees for sure.
Did I say it will create fewer forks? I predict that the equilibrium found will be less than 10 minutes, which means more forks.

I doubt it will be "on nearly every block". As I explained, if it comes to that, miners will choose a larger weight which decreases their invalid rate.

Actually, miners will choose the weight based on whether they got the last block or not, and whether they are getting paid kickbacks to support/override blocks from the previous miner.  For most miners, I think the best payoff would be prevDifficulty+1, except when they are working on extending their own block, in which case it would be the minimum, or close to it.  The exact game theory optimum would depend on the acceptable range.
As I said, honest miners will build on the longest (most difficult) branch they know. This is a Nash equilibrium - a miner will want to build on the longest branch, as that improves the chances that their own block will be accepted. Increasing the weight decreases the probability that their block will be rejected in a conflict.

You sure about that?  No one will ever want to ignore an easy block and try to replace it with their own hard block?
I think it's possible to parameterize it so that this will be rare.

P.S.  Read the bold part again.  Warning bells should be going off as you do.
Not really. If I were able to predict what the market would do I'd be rich. I just want it to have the tools to find what's most efficient for it, and that's easier to demonstrate.
legendary
Activity: 1204
Merit: 1015
May 07, 2012, 08:36:03 PM
#12
Also, I'm pretty sure that your idea will create total chaos.  Not fewer forks, but more.  Lots more.  Like on nearly every block.  Not at first, necessarily, but as the subsidy shrinks relative to the transaction fees for sure.
Did I say it will create fewer forks? I predict that the equilibrium found will be less than 10 minutes, which means more forks.

I doubt it will be "on nearly every block". As I explained, if it comes to that, miners will choose a larger weight which decreases their invalid rate.

Actually, miners will choose the weight based on whether they got the last block or not, and whether they are getting paid kickbacks to support/override blocks from the previous miner.  For most miners, I think the best payoff would be prevDifficulty+1, except when they are working on extending their own block, in which case it would be the minimum, or close to it.  The exact game theory optimum would depend on the acceptable range.
As I said, honest miners will build on the longest (most difficult) branch they know. This is a Nash equilibrium - a miner will want to build on the longest branch, as that improves the chances that their own block will be accepted. Increasing the weight decreases the probability that their block will be rejected in a conflict.

You sure about that?  No one will ever want to ignore an easy block and try to replace it with their own hard block?
Are you sure about that? If we assume that average block times are going to tend to be very quick, it'd be stupid to ignore any block. This assumes, of course, 2*minimum weight > 1*maximum weight. With that, if any weight block is added to the block you are ignoring, you lose. In fact, assuming that lower weight blocks put a portion of the fees from the block forward to the next block, it would be in every miner's interest to mine on the low-weight block.

As for why this dynamic wouldn't just push people to mine on the lowest-weight block released, that's because, generally, people wouldn't want forks. Not to mention, we could have it so that when miners are able to spend their coins depends on weight, meaning that building on a higher-weight block would result in getting paid sooner.

Woah, multiple dynamics are going on there...
kjj
legendary
Activity: 1302
Merit: 1026
May 07, 2012, 03:59:43 PM
#11
Oh, but did you notice that you changed your scheme already?  From "round to nearest" to "round down".  One of those was probably just a silly mistake, the sort that happens to everyone from time to time.  Simplicity has a value that is hard to quantify.
Are we really arguing semantics now? When I said "round to nearest" I meant "round to the nearest integer smaller than the value or equal to it", and I thought it was clear that I'm referring to normal integer division (which rounds down). I'm not as precise in my language in conversation as in formal proposals.

And, with any other rounding method, there's still very little to game. I said it's "immaterial" which implies I couldn't care less how it's rounded, or how people would interpret how I say it's rounded. Energies are better saved for important things.

I thought I had made my point pretty well.  You are proposing to replace something that is really simple, always exact, and impossible to mess up with something that has none of those properties.  People will get it wrong.

I'm not saying that anyone should reject the proposal simply because of this, but it should give you pause.

Also, I'm pretty sure that your idea will create total chaos.  Not fewer forks, but more.  Lots more.  Like on nearly every block.  Not at first, necessarily, but as the subsidy shrinks relative to the transaction fees for sure.
Did I say it will create fewer forks? I predict that the equilibrium found will be less than 10 minutes, which means more forks.

I doubt it will be "on nearly every block". As I explained, if it comes to that, miners will choose a larger weight which decreases their invalid rate.

Actually, miners will choose the weight based on whether they got the last block or not, and whether they are getting paid kickbacks to support/override blocks from the previous miner.  For most miners, I think the best payoff would be prevDifficulty+1, except when they are working on extending their own block, in which case it would be the minimum, or close to it.  The exact game theory optimum would depend on the acceptable range.
As I said, honest miners will build on the longest (most difficult) branch they know. This is a Nash equilibrium - a miner will want to build on the longest branch, as that improves the chances that their own block will be accepted. Increasing the weight decreases the probability that their block will be rejected in a conflict.

You sure about that?  No one will ever want to ignore an easy block and try to replace it with their own hard block?

I haven't talked yet about how to handle transaction fees, so you may want to defer judgement until then. Hint: You won't be able to get the entire fee of all floating transactions by creating a very light block.
Oh?

Option 1.  You pay "easy" blocks less and put the balance into a fund used to pay extra for "hard" blocks.  Equilibrium at 10 minutes, which is what you wanted to avoid.

Option 2.  You pay "easy" blocks less, and the remainder carries into the next block.  Equilibrium at X minutes, which is no more optimal than 10 was.

Option 3.  You pay "easy" blocks less, and the remainder is lost.  Equilibrium at 10 minutes.

Option 4.  You pay "easy" blocks less, and "hard" blocks more, creating money out of thin air.  Equilibrium at whatever the weight cap is.

Did I miss any?
Options 1 and 2 aren't that far off. But seriously, hypothesizing what the equilibrium will be is hard enough when the system is fully specified; trying to deduce it based on guesses about what I want to propose is just silly. I would appreciate the benefit of the doubt that what I want to propose is not totally idiotic and broken.

Don't take it personally.  Lots of people propose changes that fix things that aren't problems, or that don't fix an actual problem, or that create too many new problems.  I have no idea if your proposal fits one of those three categories, or if it actually fixes something that is actually a problem and doesn't create a whole new mess.  I find out by poking at the edges.

P.S.  Read the bold part again.  Warning bells should be going off as you do.
donator
Activity: 2058
Merit: 1054
May 07, 2012, 03:19:48 PM
#10
Oh, but did you notice that you changed your scheme already?  From "round to nearest" to "round down".  One of those was probably just a silly mistake, the sort that happens to everyone from time to time.  Simplicity has a value that is hard to quantify.
Are we really arguing semantics now? When I said "round to nearest" I meant "round to the nearest integer smaller than the value or equal to it", and I thought it was clear that I'm referring to normal integer division (which rounds down). I'm not as precise in my language in conversation as in formal proposals.

And, with any other rounding method, there's still very little to game. I said it's "immaterial" which implies I couldn't care less how it's rounded, or how people would interpret how I say it's rounded. Energies are better saved for important things.

Also, I'm pretty sure that your idea will create total chaos.  Not fewer forks, but more.  Lots more.  Like on nearly every block.  Not at first, necessarily, but as the subsidy shrinks relative to the transaction fees for sure.
Did I say it will create fewer forks? I predict that the equilibrium found will be less than 10 minutes, which means more forks.

I doubt it will be "on nearly every block". As I explained, if it comes to that, miners will choose a larger weight which decreases their invalid rate.

Actually, miners will choose the weight based on whether they got the last block or not, and whether they are getting paid kickbacks to support/override blocks from the previous miner.  For most miners, I think the best payoff would be prevDifficulty+1, except when they are working on extending their own block, in which case it would be the minimum, or close to it.  The exact game theory optimum would depend on the acceptable range.
As I said, honest miners will build on the longest (most difficult) branch they know. This is a Nash equilibrium - a miner will want to build on the longest branch, as that improves the chances that their own block will be accepted. Increasing the weight decreases the probability that their block will be rejected in a conflict.

I haven't talked yet about how to handle transaction fees, so you may want to defer judgement until then. Hint: You won't be able to get the entire fee of all floating transactions by creating a very light block.
Oh?

Option 1.  You pay "easy" blocks less and put the balance into a fund used to pay extra for "hard" blocks.  Equilibrium at 10 minutes, which is what you wanted to avoid.

Option 2.  You pay "easy" blocks less, and the remainder carries into the next block.  Equilibrium at X minutes, which is no more optimal than 10 was.

Option 3.  You pay "easy" blocks less, and the remainder is lost.  Equilibrium at 10 minutes.

Option 4.  You pay "easy" blocks less, and "hard" blocks more, creating money out of thin air.  Equilibrium at whatever the weight cap is.

Did I miss any?
Options 1 and 2 aren't that far off. But seriously, hypothesizing what the equilibrium will be is hard enough when the system is fully specified; trying to deduce it based on guesses about what I want to propose is just silly. I would appreciate the benefit of the doubt that what I want to propose is not totally idiotic and broken.
kjj
legendary
Activity: 1302
Merit: 1026
May 07, 2012, 02:46:58 PM
#9
The subsidy is currently 5,000,000,000 units, giving you a lot of dynamic range to play with.  That will not always be true.  And rounding to the "nearest" satoshi can be gamed, for trivial gain at first to be sure.
It is obvious to me that by the time the generation reward is significantly reduced, satoshis will be split so that what is now known as 1 bitcoin will be more than 100,000,000 atomic units. Even in the highly unlikely case that a satoshi will be significant, since the value is rounded down, this will only create a preference for weights which result in an integer, which causes no harm.

Once again, implementation should follow design, and the "2.1E+15 atomic units" detail can be changed.

You are probably right about that.

Oh, but did you notice that you changed your scheme already?  From "round to nearest" to "round down".  One of those was probably just a silly mistake, the sort that happens to everyone from time to time.  Simplicity has a value that is hard to quantify.

Also, I'm pretty sure that your idea will create total chaos.  Not fewer forks, but more.  Lots more.  Like on nearly every block.  Not at first, necessarily, but as the subsidy shrinks relative to the transaction fees for sure.
Did I say it will create fewer forks? I predict that the equilibrium found will be less than 10 minutes, which means more forks.

I doubt it will be "on nearly every block". As I explained, if it comes to that, miners will choose a larger weight which decreases their invalid rate.

Actually, miners will choose the weight based on whether they got the last block or not, and whether they are getting paid kickbacks to support/override blocks from the previous miner.  For most miners, I think the best payoff would be prevDifficulty+1, except when they are working on extending their own block, in which case it would be the minimum, or close to it.  The exact game theory optimum would depend on the acceptable range.

I haven't talked yet about how to handle transaction fees, so you may want to defer judgement until then. Hint: You won't be able to get the entire fee of all floating transactions by creating a very light block.

Oh?

Option 1.  You pay "easy" blocks less and put the balance into a fund used to pay extra for "hard" blocks.  Equilibrium at 10 minutes, which is what you wanted to avoid.

Option 2.  You pay "easy" blocks less, and the remainder carries into the next block.  Equilibrium at X minutes, which is no more optimal than 10 was.

Option 3.  You pay "easy" blocks less, and the remainder is lost.  Equilibrium at 10 minutes.

Option 4.  You pay "easy" blocks less, and "hard" blocks more, creating money out of thin air.  Equilibrium at whatever the weight cap is.

Did I miss any?
donator
Activity: 2058
Merit: 1054
May 07, 2012, 02:02:37 PM
#8
The subsidy is currently 5,000,000,000 units, giving you a lot of dynamic range to play with.  That will not always be true.  And rounding to the "nearest" satoshi can be gamed, for trivial gain at first to be sure.
It is obvious to me that by the time the generation reward is significantly reduced, satoshis will be split so that what is now known as 1 bitcoin will be more than 100,000,000 atomic units. Even in the highly unlikely case that a satoshi will be significant, since the value is rounded down, this will only create a preference for weights which result in an integer, which causes no harm.

Once again, implementation should follow design, and the "2.1E+15 atomic units" detail can be changed.

Also, I'm pretty sure that your idea will create total chaos.  Not fewer forks, but more.  Lots more.  Like on nearly every block.  Not at first, necessarily, but as the subsidy shrinks relative to the transaction fees for sure.
Did I say it will create fewer forks? I predict that the equilibrium found will be less than 10 minutes, which means more forks.

I doubt it will be "on nearly every block". As I explained, if it comes to that, miners will choose a larger weight which decreases their invalid rate.

I haven't talked yet about how to handle transaction fees, so you may want to defer judgement until then. Hint: You won't be able to get the entire fee of all floating transactions by creating a very light block.
kjj
legendary
Activity: 1302
Merit: 1026
May 07, 2012, 01:41:15 PM
#7
The subsidy is based on an exact integer operation.
And this is relevant how? Multiply the integer block reward by the integer network target, divide by the specific target. The reward will be rounded to the nearest satoshi but that's immaterial.

Implementation details should serve the design, not the other way around.

The subsidy is currently 5,000,000,000 units, giving you a lot of dynamic range to play with.  That will not always be true.  And rounding to the "nearest" satoshi can be gamed, for trivial gain at first to be sure.

Also, I'm pretty sure that your idea will create total chaos.  Not fewer forks, but more.  Lots more.  Like on nearly every block.  Not at first, necessarily, but as the subsidy shrinks relative to the transaction fees for sure.
donator
Activity: 2058
Merit: 1054
May 07, 2012, 12:49:36 PM
#6
The subsidy is based on an exact integer operation.
And this is relevant how? Multiply the integer block reward by the integer network target, divide by the specific target. The reward will be rounded to the nearest satoshi but that's immaterial.
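
A minimal sketch of this arithmetic in integer-only Python (illustrative names; using the rounding-down division that is clarified elsewhere in the thread):

Code:
def block_reward(base_reward, network_target, specific_target):
    # Multiply the integer block reward by the integer network target,
    # then divide by the block's specific target (rounding down).
    return base_reward * network_target // specific_target

# A block ten times easier than the network target earns a tenth of
# the 5,000,000,000-satoshi subsidy:
print(block_reward(5_000_000_000, 1_000, 10_000))  # 500000000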

Implementation details should serve the design, not the other way around.
kjj
legendary
Activity: 1302
Merit: 1026
May 07, 2012, 09:55:12 AM
#5
The subsidy is based on an exact integer operation.
legendary
Activity: 1204
Merit: 1015
May 07, 2012, 01:22:47 AM
#4
@Maged - if I understand your objections correctly, they are solvable by imposing a lower bound in addition to the upper bound, and tightening it. For example, the weight of a block needs to be between 0.9 and 1.1 times the mean weight of the 500 most recent blocks. This way the attacker can't get much advantage by using a block weight significantly different from the rest of the network; but if the current time span is suboptimal, the network will slowly but surely drift towards a better value.
Interesting idea. At worst, a merchant would require one additional confirmation to know that they are safe from a Finney Attack, but because of the market dynamics this idea has, the market might just push the average block time to less than half of our current system anyway. You could actually even widen the range of possible weights to 0.67 to 1.33 without hurting the merchants any more than 0.9 to 1.1 would. However, to allow it to stretch that far, you wouldn't want the range of weights to be calculated on a rolling basis, but that's fine. If you did want it recalculated every block, you would have to tighten the range just slightly.

It's also not true that pools don't care about variance. Good PPS pools charge a 5% fee to compensate for their risk, and the threads of normal pools are littered with discussions of good and bad luck, and complaints about maturity time. The big pools become bigger because they have less variance.
Fair enough. I guess we'd just have to see how this would play out.

P2Pool basically emulates a blockchain with a smaller timespan, and I agree it removes some of the need for an actually shorter span. I still believe it would be better for the timespan itself to be more optimal.
I do agree with you there. 10 minutes is really far too long, in my opinion.
donator
Activity: 2058
Merit: 1054
May 06, 2012, 11:03:07 PM
#3
@Maged - if I understand your objections correctly, they are solvable by imposing a lower bound in addition to the upper bound, and tightening it. For example, the weight of a block needs to be between 0.9 and 1.1 times the mean weight of the 500 most recent blocks. This way the attacker can't get much advantage by using a block weight significantly different from the rest of the network; but if the current time span is suboptimal, the network will slowly but surely drift towards a better value.

It's also not true that pools don't care about variance. Good PPS pools charge a 5% fee to compensate for their risk, and the threads of normal pools are littered with discussions of good and bad luck, and complaints about maturity time. The big pools become bigger because they have less variance.

P2Pool basically emulates a blockchain with a smaller timespan, and I agree it removes some of the need for an actually shorter span. I still believe it would be better for the timespan itself to be more optimal.
legendary
Activity: 1204
Merit: 1015
May 06, 2012, 03:45:42 PM
#2
Attacker manipulation. The incentives of an attacker are different from those of honest miners. He will not care about the variance in long time spans, and choosing higher weight blocks will make an attack easier. In fact, it can be shown that if the attacker can choose an arbitrarily high weight, any double-spend attempt is guaranteed to succeed eventually, whatever the hashrate of the attacker and the honest network.

Therefore, some limitation on weight selection will need to be placed. For example, it can be bounded by some multiple of the median weight of the recent blocks. This will still allow the market to gradually converge towards a higher time span, while preventing an attacker from choosing time spans much higher than the rest of the network.
Unfortunately, this is where your scheme loses some of its greatest usefulness. It would in no way allow merchants to receive a "given level of security of a transaction" that would be any more useful than it is now. The biggest jump in the security of a transaction is when it reaches 1 confirmation, and that's because that is when a Finney Attack is no longer effective. However, under your proposal, a Finney Attack remains equally possible and effective all the way up to the cutoff point for the maximum weight of a single block.

Even worse, if an attacker doesn't need that much weight to pull a Finney Attack on someone, they can make blocks with a lower weight, allowing them to pull off the attack more often. This means that in order for someone to have an equal amount of security as one confirmation right now, they need to wait until the maximum weight for a single block has been added to the chain.

Now, before you say that that's a good thing since it allows more fine-grained control of the security of a transaction, that's only barely true: a transaction at 99% weighted confirmations is barely more secure than a 0-confirmation transaction, but is still significantly less secure than a 100% weighted confirmation transaction, and that difference is almost the same as the current difference between 0-confirmation transactions and 1-confirmation transactions.

Another factor that makes Finney Attacks more effective is the reduced variance this would give the blockchain. For example, let's say that most miners produce blocks at 10% of the maximum weight, and that 100% weight is defined to average to 10 minutes, meaning that 10% blocks are produced on average every 1 minute. This dynamic all but ensures that a 100% weight Finney Attack has 10 minutes before the Finney block would have to be released, allowing Finney Attacks to be used in places that were previously not practical because of how long it would take to check out. Namely, the real world. Whereas, with the current setup, it becomes increasingly risky (and unpredictably so) to hold onto a Finney block for a period of time before releasing it, making it dangerous to use in the real world. As a result, even 0-confirmation transactions would be riskier than they are now.

Now that I've shown that the advantage for merchants is no longer true, that leaves us with "Less variance for miners". If miners want less variance, there are already solutions for that, the best of which is P2Pool. Speaking of pools, it would be in the interest of all pools to mine the maximum weight because they already don't have to worry much about variance. Thus, practically, this doesn't change much anyway. At worst, if it dynamically allows the maximum weight to result in blocks on average coming less often than every ten minutes (most pools wouldn't even mind if a block was only made once an hour), this makes the life of merchants much harder.


I would support a global change in the re-targeting time, or even some way for miners to express support for automatic global changes, but it'd be hard for me to support a situation where it's possible for two blocks to not be equal.
donator
Activity: 2058
Merit: 1054
May 06, 2012, 10:57:10 AM
#1
tl;dr: The target time span between blocks, which for Bitcoin is 10 minutes, is an important design parameter. Since it is difficult to determine the optimal value in advance, it may be desirable to have it dynamically set by the market rather than hardcoded - and this is possible.

Introduction. The Bitcoin mining difficulty is adjusted so that a block will be found, on average, once every 10 minutes. This figure appears in Satoshi's whitepaper, but not much motivation is given for this choice. It is a compromise obtained from considering the following points:

Advantages of long time span:
1a. Less time wasted trying to continue an old chain while waiting for new blocks to propagate, which would give attackers an advantage.
2a. Less overhead for transmitting and storing block headers.

Advantages of short time span:
1b. Less variance for miners (and you should care about miners).
2b. Less time merchants need to wait to receive a given level of security of a transaction (there are some subtleties and much confusion about this point, but it's basically correct).

There is no guarantee that the value of 10 minutes is optimal, and some alternative blockchain-based currencies ("alts") have tried their luck with a different target. It is also likely that the optimal value will change over time, so it may be desirable not to have it hardcoded, but rather to have a system that allows market forces to determine it dynamically. In this post I will propose just such a system.

I will first focus on the scenario where the block reward consists entirely of newly generated coins, without any transaction fees, and on the respective advantages 1a and 1b of long and short time spans. The general case will be considered afterwards.

I don't know if the block target time is considered to be a part of the Schelling point of Bitcoin which must never be changed, or if the proposed system is too much trouble to implement now that the wheels are already in motion. At the very least, it could be something to consider for a well-designed alt.

Basics. The network will still have a target hash value such that a hash lower than the target is found, on average, once every 10 minutes. However, when mining a block, the miner can choose a target hash specific to this block, which can be distinct from the network target. As long as the block hash is lower than the specific target, the block is valid. The specific target is part of the hashed block header, so the miner can only choose the target before he starts mining.
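
A minimal sketch of the validity rule (Python; the names are illustrative, taken from the description above):

Code:
def is_valid_pow(block_hash: int, specific_target: int) -> bool:
    # Valid as long as the hash is lower than the specific target;
    # the target is committed in the hashed header, so it cannot be
    # changed after mining starts.
    return block_hash < specific_target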

Branch selection will be based on the total work contained in the branch, obtained by summing the inverses of the specific targets.
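
The branch-selection rule, sketched (floating point for readability; a real implementation would presumably sum scaled integers):

Code:
def branch_work(specific_targets):
    # Total work of a branch: the sum of the inverses of the specific
    # targets of its blocks.
    return sum(1.0 / t for t in specific_targets)

def select_branch(branches):
    # branches: list of branches, each a list of specific targets.
    return max(branches, key=branch_work)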

The block's weight will be the ratio between the network target and the specific target. An easy block with a higher target will have less weight. The amount of newly generated coins will be the normal block reward multiplied by the weight. A block which is ten times easier to find than a normal block will only give a tenth of the bitcoins as a reward.
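
Weight and the generated coins, in the same illustrative style:

Code:
def weight(network_target, specific_target):
    # An easier block has a higher specific target, hence lower weight.
    return network_target / specific_target

def generated_coins(base_reward, network_target, specific_target):
    # New coins scale linearly with weight.
    return base_reward * weight(network_target, specific_target)

print(generated_coins(50.0, 1_000, 10_000))  # ten times easier -> 5.0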

Various landmarks which are now triggered by a certain number of blocks will instead be triggered by a certain total weight. The normal block reward will be halved whenever the total weight reaches a multiple of 210,000. A block which crosses the threshold will be rewarded proportionally, based on the portion of the weight that rests on either side of the crossover point. This guarantees that the total number of bitcoins generated remains 21M, and on the same schedule (though with different granularity).
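
A sketch of the pro-rated subsidy at a halving boundary (assuming, for simplicity, that a single block crosses at most one boundary):

Code:
def subsidy_for_block(weight_before, block_weight, base=50.0, interval=210_000):
    # Pro-rate the reward of a block whose weight straddles a halving
    # boundary: the portion below the boundary earns the old rate, the
    # portion above it earns the halved rate.
    start, end = weight_before, weight_before + block_weight
    era = int(start // interval)            # halvings already completed
    boundary = (era + 1) * interval
    rate = base / 2 ** era                  # reward per weight unit
    if end <= boundary:                     # no boundary crossed
        return block_weight * rate
    return (boundary - start) * rate + (end - boundary) * rate / 2

print(subsidy_for_block(209_999.4, 1.0))    # 0.6*50 + 0.4*25 = 40.0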

The difficulty (the network target) will be adjusted every 2016 weight units, so that the time to find 2016 weight units is 2 weeks.
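
The retarget rule, sketched; this simply mirrors the existing rule with blocks replaced by weight units, and omits the clamping of extreme adjustments that Bitcoin applies:

Code:
TWO_WEEKS = 14 * 24 * 60 * 60  # target time for 2016 weight units, in seconds

def retarget(old_network_target, actual_seconds):
    # If the last 2016 weight units took longer than two weeks, raise
    # the target (lower the difficulty), and vice versa.
    return old_network_target * actual_seconds // TWO_WEEKS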

If miners in general end up choosing a weight higher/lower than 1, this will have an effect similar to hardcoding a time span longer/shorter than 10 minutes.

Market forces. Choosing a weight different from 1 has no effect on the theoretical expected reward from mining. However, it affects variance and efficiency.

A miner will not want the weight to be too large, as that would mean a smaller chance to get a higher reward, which increases variance. If the default time span is so large that the variance is costly, miners will tend towards choosing a lower weight.

A miner will not want the weight to be too small, since that would increase the chance that his block will be rejected. Suppose the average timespan is so short that forking is a matter of course. If two miners find a block at the same time, the one that chose a higher weight will be accepted, because the resulting branch is longer. Thus the higher-weight miner will enjoy a higher percentage of accepted blocks and a higher average reward, and this will push the market back to an average timespan where conflicts aren't that much of a problem.

The same considerations apply to mining pools, which for the sake of their users want to minimize both variance and the invalid block rate.

All in all, this will allow the market to reach an equilibrium point which strikes a balance between variance and forking.
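
A Monte Carlo sketch of the variance side of this trade-off (Python; the parameters are hypothetical and forking is ignored): expected income over a fixed horizon is independent of the chosen weight, while its variance grows in proportion to it.

Code:
import random

def total_reward(weight, horizon=1_000.0, base_reward=50.0, mean_span=10.0):
    # Blocks arrive as a Poisson process: a weight-w block takes
    # w times longer to find on average but pays w times more.
    t, reward = 0.0, 0.0
    while True:
        t += random.expovariate(1.0 / (mean_span * weight))
        if t > horizon:
            return reward
        reward += base_reward * weight

for w in (0.1, 1.0, 10.0):
    runs = [total_reward(w) for _ in range(1000)]
    mean = sum(runs) / len(runs)
    var = sum((r - mean) ** 2 for r in runs) / len(runs)
    print(f"weight {w:5.1f}: mean reward {mean:8.1f}  variance {var:12.1f}")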

Attacker manipulation. The incentives of an attacker are different from those of honest miners. He will not care about the variance in long time spans, and choosing higher weight blocks will make an attack easier. In fact, it can be shown that if the attacker can choose an arbitrarily high weight, any double-spend attempt is guaranteed to succeed eventually, whatever the hashrate of the attacker and the honest network. Being able to choose a very small weight makes Finney attacks easier.

Therefore, some limitation on weight selection will need to be placed. For example, it can be bounded from above and below by some multiple of the mean weight of the recent blocks. This will still allow the market to gradually converge towards a different time span, while preventing an attacker from choosing time spans much higher or lower than the rest of the network.
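
One possible form of the bound, sketched (the 0.9-1.1 band over the 500 most recent blocks is the example figure used elsewhere in the thread, not a fixed choice):

Code:
def weight_in_bounds(new_weight, recent_weights, lo=0.9, hi=1.1):
    # Bound a new block's weight to a band around the mean weight of
    # recent blocks: the market can still drift gradually, but an
    # attacker cannot stray far from the rest of the network.
    mean = sum(recent_weights) / len(recent_weights)
    return lo * mean <= new_weight <= hi * mean

# e.g. weight_in_bounds(w, last_500_weights)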

Future work. The resulting equilibrium may be such that merchants will suffer from needlessly long confirmation times; miners can choose short time spans that, while not causing many conflicts, overload the blockchain with numerous headers; and the proportional relationship between block weight and reward is not at all clear in a scenario where transaction fees are significant.

I believe all of these issues can be solved within a reworked framework of the way transaction fees work, which I think is needed anyway. In the future I may write about that in a separate post.

CONTINUED:

Transaction fees. When transaction fees become significant, the procedure described above loses the direct proportionality between block weight and reward, twisting the incentive structure and favoring shorter blocks. This can be solved by using the continuous version of rollover transaction fees, which pays out transaction fees not to a single block but to several blocks, based on their weight. The proportionality is thus approximately reinstated.
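
One possible shape for such a pool, sketched; the exponential form and the decay constant k are illustrative assumptions, not part of the proposal as stated:

Code:
import math

def collect_fees(pool, weight, k=0.1):
    # Each block drains a weight-dependent fraction of the rolling fee
    # pool, so a very light block cannot sweep all outstanding fees.
    payout = pool * (1.0 - math.exp(-k * weight))
    return payout, pool - payout  # (paid to this block, carried forward)

pool = 100.0
for w in (0.1, 1.0, 10.0):
    payout, _ = collect_fees(pool, w)
    print(f"weight {w:5.1f}: collects {payout:6.2f}, "
          f"or {payout / w:5.2f} per weight unit")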

If the decay is fairly fast, this still gives a higher reward per weight unit the lighter the block is. This gives Bitcoin users some control over the miners' choice. If the confirmation speed leaves something to be desired because block times are too long, they will typically pay a higher fee to expedite their transactions; if everyone does this, transaction fees become proportionally higher than coin generation, which increases the incentive to mine light blocks, which negates the problem.

This effect no longer holds when coin generation becomes negligible, in which case transaction fees are the only significant component of the reward, and increasing fees does not encourage shorter blocks. Finding a mechanism to allow users to encourage shorter blocks in this case is an open problem. One idea is not to use only an exponentially decaying function, but to give the user a choice between functions with different kurtosis.

Block header upkeep cost. Every block generated creates a burden on the network, since every node, even an SPV client, must download and store it. Thus the miner should pay for the resources consumed by his block. This can take the form of a fixed cost deducted from the block reward, which will be added to the long-term fee pool described here. This will prevent miners from choosing very short blocks which overload the network.
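
A sketch of the deduction; the constant is purely illustrative:

Code:
HEADER_COST = 0.01  # hypothetical fixed charge per block, in BTC

def net_block_reward(weight, base_reward, fee_payout):
    # Subsidy and (approximately) fees scale with weight; the header
    # charge does not, so very light blocks become uneconomical.
    return weight * base_reward + fee_payout - HEADER_COST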

Contested transaction fees. Because transaction fees decay, miners could be incentivized to intentionally reject a valid block and try to find their own heavier block, allowing them to collect more fees. This can be solved by the principle that contested transaction fees are deferred - that is, in case of a fork, the fees which the blocks have in common are not paid to either of them, but rather suspended and added back to the pool until blocks are found with a total weight equal to the current maximum weight. This applies only to fees and not to the coinbase. This way there is no incentive to create a fork when you know you will lose the reward you tried to reclaim. This also makes unintentional forks that much more painful, but this will be compensated for by the deferred fees that will exist at any time.
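
One possible reading, sketched; the data model here is a guess, since the mechanism is left under-specified:

Code:
def settle_fork(winner_fees, loser_fees, pool):
    # Fees claimed by both competing blocks are suspended back into
    # the rollover pool rather than paid out; only fees unique to the
    # winning block are paid immediately. Coinbase is unaffected.
    contested = set(winner_fees) & set(loser_fees)
    pool += sum(winner_fees[tx] for tx in contested)
    payout = sum(amt for tx, amt in winner_fees.items()
                 if tx not in contested)
    return payout, pool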

The implementation of this could be tricky, and requires, among other things, that invalid blocks be considered in the calculation of the block rewards of the valid chain.