Topic: Mining subsidy in a DAG (Read 4527 times)

sr. member
Activity: 420
Merit: 262
February 17, 2016, 03:42:12 PM
#80
I finally found that link and inserted it into this quoted post from upthread:

This is from the DagCoin paper:

[diagram from the DagCoin paper omitted]

Btw, you might want to reference the post I made within the past month or so (probably in my Decentralized thread in Altcoin Discussion), wherein I pointed out that Sergio had an egregious error in his conceptualization in that diagram.
legendary
Activity: 1008
Merit: 1007
February 17, 2016, 12:22:14 PM
#79
Total ordering is proven impossible by the Second Law of Thermodynamics and because the speed of light is not infinite.

For the same reason you can't reach the edge of the universe: there is no edge.

You're thinking about the problem in terms which are too concrete. Take bitcoin's blocks, for instance - if you take the genesis block and the current best block, would you say that sequence of blocks constitutes a total order? What about if you included orphans in the ordering?

If you monetize all orphans then orphans will be unbounded.

My generative essence point applies in spades.

Scary. That reminds me of a line from Dr. Strangelove!

An excellent observation - however, no one said anything about rewarding orphans.

Edit: my hint was to get you thinking along the right lines.
sr. member
Activity: 420
Merit: 262
February 17, 2016, 12:17:39 PM
#78
Total ordering is proven impossible by the Second Law of Thermodynamics and because the speed of light is not infinite.

For the same reason you can't reach the edge of the universe: there is no edge.

You're thinking about the problem in terms which are too concrete. Take bitcoin's blocks, for instance - if you take the genesis block and the current best block, would you say that sequence of blocks constitutes a total order? What about if you included orphans in the ordering?

If you monetize all orphans then orphans will be unbounded.

My generative essence point applies in spades.

Sorry, this discussion is unproductive until you explain precisely how you totally ordered the universe. I'll be waiting.
legendary
Activity: 1008
Merit: 1007
February 17, 2016, 11:16:47 AM
#77
Total ordering is proven impossible by the Second Law of Thermodynamics and because the speed of light is not infinite.

For the same reason you can't reach the edge of the universe: there is no edge.

You're thinking about the problem in terms which are too concrete. Take bitcoin's blocks, for instance - if you take the genesis block and the current best block, would you say that sequence of blocks constitutes a total order? What about if you included orphans in the ordering?
sr. member
Activity: 420
Merit: 262
February 17, 2016, 11:14:06 AM
#76
You will end up right back at why Iota had to adopt a mathematical model of this. There is no structural model (of the tree structure) which will give you these consensus properties without a mathematical model driving what payers and payees choose.

Do you accept that this is possible with a total order of transactions?

Total ordering is proven impossible by the Second Law of Thermodynamics and because the speed of light is not infinite.

For the same reason you can't reach the edge of the universe: there is no edge.

For the same reason that the future and past don't collapse into one infinitesimal point in spacetime: because we have friction and the speed of light is finite.
sr. member
Activity: 420
Merit: 262
February 17, 2016, 11:12:47 AM
#75
I don't think the model in the paper fully includes all the factors it needs; IIRC it doesn't consider the payer's cheapest way to ensure timely confirmation in the face of the double-spend branch-orphaning problem. If a payer is required to choose between submitting his transaction N times, which costs (PoW x N), and submitting it further back in the history, but only once (PoW x 1), then it seems his cheapest choice is to widen the DAG?

Not enough information to draw a conclusion. We need to do in-field testing.

As CfB wrote in response to me upthread, the optimal strategy has to be some balance between maximal and minimal breadth, because in the former case no one's transactions ever get confirmed.

The issue is which strategy payers and payees will choose. What are the game theories at play? And is a Nash equilibrium created?

I argue it is very dubious, and likely the math model will need to be centrally enforced. But I don't know if there is some math which would convince me that I am incorrect.
legendary
Activity: 1008
Merit: 1007
February 17, 2016, 11:11:26 AM
#74
You will end up right back at why Iota had to adopt a mathematical model of this. There is no structural model (of the tree structure) which will give you these consensus properties without a mathematical model driving what payers and payees choose.

Do you accept that this is possible with a total order of transactions?
sr. member
Activity: 420
Merit: 262
February 17, 2016, 11:07:27 AM
#73
In the meantime, payers need to know where to place their transactions in the DAG so the transactions don't eventually become invalid. And payees need to know NOW which transactions to honor.

Transactions never become invalid except for double spends and their payee chain - the invalidation cascade is limited to the output involved in the double spend.

Payees have a metric for transaction acceptability - the point at which it becomes unprofitable for an attacker to double-spend them.

You will end up right back at why Iota had to adopt a mathematical model of this. There is no structural model (of the tree structure) which will give you these consensus properties without a mathematical model driving what payers and payees choose.
legendary
Activity: 1008
Merit: 1007
February 17, 2016, 10:20:16 AM
#72
In the meantime, payers need to know where to place their transactions in the DAG so the transactions don't eventually become invalid. And payees need to know NOW which transactions to honor.

Transactions never become invalid except for double spends and their payee chain - the invalidation cascade is limited to the output involved in the double spend.
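
To make the scope of that cascade concrete, here is a minimal sketch (mine, not taken from any DAG implementation; all names are hypothetical) of an invalidation walk that touches only the payee chain funded by the double-spent output and leaves the rest of the DAG alone:

Code:
def invalidate_cascade(losing_tx, spenders, outputs_of):
    """Invalidate losing_tx and every descendant funded by its outputs.

    spenders:   dict mapping an output id -> list of tx ids that spend it
    outputs_of: dict mapping a tx id -> list of output ids it created
    Returns the set of invalidated tx ids.
    """
    invalid = set()
    stack = [losing_tx]
    while stack:
        tx = stack.pop()
        if tx in invalid:
            continue
        invalid.add(tx)
        # Only transactions funded by invalidated outputs are affected;
        # the rest of the DAG is untouched.
        for out in outputs_of.get(tx, []):
            stack.extend(spenders.get(out, []))
    return invalid

# Toy example: B is the losing double spend; C was paid by B; D is unrelated.
outputs_of = {"B": ["B:0"], "C": ["C:0"], "D": ["D:0"]}
spenders = {"B:0": ["C"]}
print(invalidate_cascade("B", spenders, outputs_of))  # {'B', 'C'}; D survives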

Payees have a metric for transaction acceptability - the point at which it becomes unprofitable for an attacker to double-spend them.
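
A minimal sketch of such a payee-side metric, under the simplifying assumption that an attacker would have to redo all of the PoW accumulated on top of the payment; the linear cost model and all figures are mine, purely illustrative:

Code:
def acceptable(tx_value, confirming_pow, cost_per_pow_unit, margin=1.0):
    """Accept once redoing the confirming PoW costs more than the payment.

    tx_value:          value of the payment to the payee
    confirming_pow:    PoW units accumulated on top of the transaction
    cost_per_pow_unit: attacker's cost (e.g. electricity) per PoW unit
    margin:            safety factor; > 1 demands extra headroom
    """
    attack_cost = confirming_pow * cost_per_pow_unit
    return attack_cost > tx_value * margin

print(acceptable(tx_value=100.0, confirming_pow=5_000, cost_per_pow_unit=0.01))   # False: 50 < 100
print(acceptable(tx_value=100.0, confirming_pow=20_000, cost_per_pow_unit=0.01))  # True: 200 > 100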
sr. member
Activity: 420
Merit: 262
February 17, 2016, 10:05:11 AM
#71
The system must have some means of converging on a consensus choice amongst competing double-spends.

Your thread is a discussion about that. We don't need to repeat that discussion here.

Eventual total ordering.

In the meantime, payers need to know where to place their transactions in the DAG so the transactions don't eventually become invalid. And payees need to know NOW which transactions to honor. Please quote my reply and post your reply in your thread, not in this thread.

I copied this post to your thread. Please continue there. Please.
legendary
Activity: 2142
Merit: 1010
Newbie
February 16, 2016, 01:18:41 PM
#70
The consensus power of the network decreases linearly in the number of partitions. You've noted this yourself earlier in this thread: https://bitcointalksearch.org/topic/m.13847669

Some partitions disappear because their transactions are re-included into the DAG; there is no need to merge branches, and some can simply be left abandoned.
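
A hypothetical illustration of that re-inclusion idea (my sketch, not CfB's actual mechanism): transactions stranded on an abandoned branch are simply re-attached under live tips, so the partition fades away without any explicit merge step:

Code:
def reinclude(stranded_txs, live_tips):
    """Re-attach transactions from an abandoned branch under live tips."""
    return [{"tx": tx, "parents": live_tips[:2]} for tx in stranded_txs]

# tx_a and tx_b were stranded on an abandoned branch; reissue them on a live one.
print(reinclude(["tx_a", "tx_b"], ["tip_1", "tip_2", "tip_3"]))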


I don't think the model in the paper fully includes all the factors it needs; IIRC it doesn't consider the payer's cheapest way to ensure timely confirmation in the face of the double-spend branch-orphaning problem. If a payer is required to choose between submitting his transaction N times, which costs (PoW x N), and submitting it further back in the history, but only once (PoW x 1), then it seems his cheapest choice is to widen the DAG?

Not enough information to draw a conclusion. We need to do in-field testing.
legendary
Activity: 1008
Merit: 1007
February 16, 2016, 12:15:47 PM
#69
I'm using the wrong terminology. I should be saying 'partitions', rather than branches. The consensus power of the network decreases linearly in the number of partitions.

So what is the problem with partitions in DAG?

The consensus power of the network decreases linearly in the number of partitions. You've noted this yourself earlier in this thread: https://bitcointalksearch.org/topic/m.13847669


The model shows that in the high-load regime the DAG never converges to a single point; it pulses in a cosine-like pattern (the number of tips goes from low to high and back). A wide DAG is not good, but it's not fatal.

I don't think the model in the paper fully includes all the factors it needs; IIRC it doesn't consider the payer's cheapest way to ensure timely confirmation in the face of the double-spend branch-orphaning problem. If a payer is required to choose between submitting his transaction N times, which costs (PoW x N), and submitting it further back in the history, but only once (PoW x 1), then it seems his cheapest choice is to widen the DAG?
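
A back-of-envelope sketch of that choice (the geometric-retry cost model and the orphan probabilities are my assumptions, not the paper's):

Code:
def expected_cost_resubmit(pow_unit, p_orphan_tip):
    # Submit at the tips and resubmit whenever orphaned: geometric retries,
    # so expected cost = PoW / (1 - p_orphan).
    return pow_unit / (1.0 - p_orphan_tip)

def expected_cost_deep(pow_unit, p_orphan_deep):
    # Attach further back in the history, where the orphan probability is
    # (assumed to be) negligible, paying the PoW only once.
    return pow_unit / (1.0 - p_orphan_deep)

pow_unit = 1.0
print(expected_cost_resubmit(pow_unit, p_orphan_tip=0.5))  # 2.0 expected PoW
print(expected_cost_deep(pow_unit, p_orphan_deep=0.01))    # ~1.01 expected PoW
# Under these made-up numbers the deep attachment is cheaper for the payer,
# which is exactly the DAG-widening incentive described above.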
legendary
Activity: 2142
Merit: 1010
Newbie
February 16, 2016, 12:08:13 PM
#68
I'm using the wrong terminology. I should be saying 'partitions', rather than branches. The consensus power of the network decreases linearly in the number of partitions.

So what is the problem with partitions in DAG?


Yes, 0-confirmation transactions. More specifically, the optimal referencing policy a transaction submitter should adopt - the probability that your transaction will not be orphaned is increased by referencing only transactions a few generations behind the tips. This widens the DAG, which is the opposite of consensus, since consensus requires the narrowest possible DAG.

The model shows that in the high-load regime the DAG never converges to a single point; it pulses in a cosine-like pattern (the number of tips goes from low to high and back). A wide DAG is not good, but it's not fatal.
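
A toy simulation of that behaviour (my own simplification, not the model in the paper): transactions arrive each tick and approve two tips chosen from the tip set as it looked a few ticks ago, mimicking network latency; under load the tip count fluctuates around a level instead of collapsing to one:

Code:
import random

def simulate(ticks=200, arrivals=20, delay=5, seed=1):
    random.seed(seed)
    created = {0: 0}   # tx id -> tick of creation (tx 0 is the genesis)
    approved_at = {}   # tx id -> first tick at which it was approved
    next_id, history = 1, []
    for t in range(ticks):
        horizon = max(0, t - delay)
        # Tips as they were visible `delay` ticks ago (network latency).
        visible = [x for x, c in created.items()
                   if c <= horizon and approved_at.get(x, ticks + 1) > horizon]
        for _ in range(arrivals):
            for parent in random.sample(visible, min(2, len(visible))):
                approved_at.setdefault(parent, t)
            created[next_id] = t
            next_id += 1
        history.append(sum(1 for x in created if x not in approved_at))
    return history

tips = simulate()
print(min(tips[50:]), max(tips[50:]))  # tip count fluctuates, never reaches 1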
legendary
Activity: 1008
Merit: 1007
February 16, 2016, 11:46:18 AM
#67
Merging branches is a requirement for convergence. Consider a DAG with infinite branches; the consensus power of the network is reduced to 0%.

Your edge case doesn't prove your claim. I can even say that merging of branches is not needed for consensus.

I'm using the wrong terminology. I should be saying 'partitions', rather than branches. The consensus power of the network decreases linearly in the number of partitions.
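
The arithmetic behind that claim, as I read it (figures are illustrative): with hashpower split evenly across k partitions, each partition is secured by only 1/k of the network, so an attacker need only outpace that fraction:

Code:
def partition_consensus_power(total_hashrate, k_partitions):
    # Hashpower securing any single partition under an even split.
    return total_hashrate / k_partitions

H = 100.0  # arbitrary units
for k in (1, 2, 10, 100):
    print(k, partition_consensus_power(H, k))  # 100.0, 50.0, 10.0, 1.0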

Are you talking about 0-conf transactions? Because if a transaction has made it into 90% of the branches then the attacker needs to outpace 90% of the network.

Yes, 0-confirmation transactions. More specifically, the optimal referencing policy a transaction submitter should adopt - the probability that your transaction will not be orphaned is increased by referencing only transactions a few generations behind the tips. This widens the DAG, which is the opposite of consensus, since consensus requires the narrowest possible DAG.
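
A toy model of that incentive (the orphan-probability function is entirely my assumption): if a parent d generations behind the tips is orphaned with probability p0 * decay^d, and my transaction dies when any referenced parent dies, then referencing fewer parents and referencing deeper parents both raise my survival odds:

Code:
def survival(num_parents, depth, p0=0.3, decay=0.5):
    # Probability my transaction survives, assuming each parent is orphaned
    # independently with probability p0 * decay**depth.
    p_parent_orphaned = p0 * decay**depth
    return (1 - p_parent_orphaned) ** num_parents

print(survival(num_parents=2, depth=0))  # ~0.49: two fresh tips
print(survival(num_parents=1, depth=0))  # 0.70: one fresh tip
print(survival(num_parents=2, depth=4))  # ~0.96: two parents, 4 generations back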
legendary
Activity: 2142
Merit: 1010
Newbie
February 16, 2016, 11:27:37 AM
#66
Merging branches is a requirement for convergence. Consider a DAG with infinite branches; the consensus power of the network is reduced to 0%.

Your edge case doesn't prove your claim. I can even say that merging of branches is not needed for consensus.


Again, this implies that convergence has been lost because you are requiring participants to confirm their own transactions, which degenerates into a system where an attacker doesn't have to outpace the network anymore; he only has to outpace his victim.

Are you talking about 0-conf transactions? Because if a transaction has made it into 90% of the branches then the attacker needs to outpace 90% of the network.
legendary
Activity: 1008
Merit: 1007
February 16, 2016, 11:11:15 AM
#65
Branches don't need to be merged; we can have 100 branches all the time and still have a transaction in the past confirmed by 90%+ of the total hashing power.

Wow, nice use of colour!

Merging branches is a requirement for convergence. Consider a DAG with infinite branches; the consensus power of the network is reduced to 0%.

New parts will be extended by those who need their recent transactions confirmed and who can afford to do the same PoW 10 times. Even weak PoW hashers will make forward progress if the odds that their PoW will be wasted are low enough.

Again, this implies that convergence has been lost because you are requiring participants to confirm their own transactions, which degenerates into a system where an attacker doesn't have to outpace the network anymore; he only has to outpace his victim.
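
To see how much weaker "outpace the victim" is than "outpace the network", one can plug the victim's hashrate into the standard random-walk catch-up probability from the Bitcoin whitepaper (applying it per-victim here is my extrapolation; the numbers are illustrative):

Code:
def catch_up_probability(attacker_rate, defender_rate, z):
    """Probability an attacker starting z PoW units behind ever catches up."""
    if attacker_rate >= defender_rate:
        return 1.0
    return (attacker_rate / defender_rate) ** z

z = 6
print(catch_up_probability(1, 100, z))  # vs the whole network: 1e-12
print(catch_up_probability(1, 2, z))    # vs a single victim: ~1.6%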
legendary
Activity: 2142
Merit: 1010
Newbie
February 16, 2016, 10:44:45 AM
#64
Coming to consensus on some state in the past is ok, I agree. However, diverging from consensus by not merging partitions, and agreeing not to make forward progress by extending old parts of the DAG, are both not ok results.

Branches don't need to be merged; we can have 100 branches all the time and still have a transaction in the past confirmed by 90%+ of the total hashing power.
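
A minimal sketch of that claim (the structure and numbers are hypothetical): measure a transaction's confirmation as the share of total PoW sitting on branches that include it, so 100 live branches can still give one old transaction 90%+ backing:

Code:
# 100 branches of equal PoW weight; the old transaction made it into 95 of them.
branches = [{"pow": 1.0, "includes_tx": i < 95} for i in range(100)]

total_pow = sum(b["pow"] for b in branches)
confirming = sum(b["pow"] for b in branches if b["includes_tx"])
print(confirming / total_pow)  # 0.95 -> well over 90% despite 100 open branches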

New parts will be extended by those who need their recent transactions confirmed and who can afford to do the same PoW 10 times. Even weak PoW hashers will make forward progress if the odds that their PoW will be wasted are low enough.
legendary
Activity: 1008
Merit: 1007
February 16, 2016, 10:02:58 AM
#63
Sounds right to me. You are supposed to find a sweet spot, and if you fail you'll be forced to do the PoW again, maybe even several times.

The DAG comes to consensus on some state in the past; check page 11 of http://188.138.57.93/tangle.pdf to get the idea.

Coming to consensus on some state in the past is ok, I agree. However, diverging from consensus by not merging partitions, and agreeing not to make forward progress by extending old parts of the DAG, are both not ok results.
legendary
Activity: 2142
Merit: 1010
Newbie
February 16, 2016, 09:40:14 AM
#62
Of course, but this is not the point. If I am submitting a transaction, I want to maximise the chance that my transaction gets approved, but I cannot be sure when I submit it whether any of the parents I reference will be orphaned. So, in order to minimise the chance that my transaction gets orphaned, I must reference as few transactions as possible - or I can try to reference very old transactions which I know are unlikely to get orphaned. So, on the one hand, I am incentivised to avoid merging branches, and on the other, I am incentivised to extend the DAG from a historical location. Both of these choices lead to the opposite of consensus for the DAG overall.

Sounds right to me. You are supposed to find a sweet spot, and if you fail you'll be forced to do the PoW again, maybe even several times.

The DAG comes to consensus on some state in the past; check page 11 of http://188.138.57.93/tangle.pdf to get the idea.
legendary
Activity: 1008
Merit: 1007
February 16, 2016, 09:03:14 AM
#61
But what about your example earlier? Temporary partitions caused by network latency mean a double spend is not necessarily visible as such to any given payer.

Eventually one of the transactions will win.

Of course, but this is not the point. If I am submitting a transaction, I want to maximise the chance that my transaction gets approved, but I cannot be sure when I submit it whether any of the parents I reference will be orphaned. So, in order to minimise the chance that my transaction gets orphaned, I must reference as few transactions as possible - or I can try to reference very old transactions which I know are unlikely to get orphaned. So, on the one hand, I am incentivised to avoid merging branches, and on the other, I am incentivised to extend the DAG from a historical location. Both of these choices lead to the opposite of consensus for the DAG overall.