Topic: Information Eclipsing and the 49.1% Attack... some short term solutions?

legendary
Activity: 2576
Merit: 1186
Quote
I think it's only plagiarism if they copy the results?
In any case, I'm not aware of any formally produced research, just observations and solutions.
Quote
Nope... plagiarism is the use of ANY (non-general-knowledge) material from another source, without giving proper recognition of the material. It can be an idea, data, research... OR EVEN material you have previously produced YOURSELF!
Ok, but maybe in this case they didn't know anyone had done it before, and came up with the same concepts on their own?
sr. member
Activity: 399
Merit: 250
Quote
I think it's only plagiarism if they copy the results?
In any case, I'm not aware of any formally produced research, just observations and solutions.

Nope... plagiarism is the use of ANY (non-general-knowledge) material from another source, without giving proper recognition of the material. It can be an idea, data, research... OR EVEN material you have previously produced YOURSELF!


legendary
Activity: 1232
Merit: 1094
Quote
I think this would help: when a block is found, a miner will first publish the header, coinbase tx, hashes of other txs, and a digital signature for the header. When seeing a signed block, other miners and full nodes will check whether it comes from a reputable miner. If yes, they will propagate the message and mine on top of it before it is fully validated. Of course, miners and full nodes will still fully validate the block.

If all nodes simply forwarded the "compact" blocks, then you get most of the savings.

Each miner still has to verify the signatures before switching, but most transactions should already have been verified.

At the moment, verification has to happen separately at each node.
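
For illustration, a toy sketch (in Python, with hypothetical names; this is not an actual Bitcoin protocol message) of what such a "compact" block and its bandwidth saving could look like:

Code:
class CompactBlock:
    """Hypothetical compact announcement: full header and coinbase tx,
    but only 32-byte hashes for the remaining transactions."""
    def __init__(self, header: bytes, coinbase_tx: bytes, tx_hashes: list):
        self.header = header        # 80-byte block header
        self.coinbase_tx = coinbase_tx
        self.tx_hashes = tx_hashes  # txids of the non-coinbase txs
    def size(self) -> int:
        return len(self.header) + len(self.coinbase_tx) + 32 * len(self.tx_hashes)

# Rough saving: ~500 txs at ~500 bytes each, vs. 32-byte hashes.
full_block = 80 + 250 + 500 * 500
compact    = 80 + 250 + 500 * 32
print(f"compact relay is ~{full_block / compact:.0f}x smaller")  # ~15x

The remaining cost is fetching whichever transactions a node hasn't already seen, which for a well-connected node should be nearly none.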
legendary
Activity: 1792
Merit: 1111
I think this would help: when a block is found, a miner will first publish the header, coinbase tx, hashes of other txs, and a digital signature for the header. When seeing a signed block, other miners and full nodes will check whether it comes from a reputable miner. If yes, they will propagate the message and mine on top of it before it is fully validated. Of course, miners and full nodes will still fully validate the block.

This will reduce the anonymity of miners but joining this scheme is optional. Moreover, most pool operators don't really care about anonymity so this is not a problem. The reduced orphan rate is the most important incentive for miners to join.

This requires a minimum level of trust among miners. However, generating an invalid block is a very expensive attack: it costs 2500 USD at the current rate and will be revealed in less than a minute. Also, the attacker's signing key will be blacklisted and he has to accumulate reputation again starting from zero.
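
Something like this, roughly (a Python sketch; every name here is hypothetical, and real signature verification, key distribution and persistence are left out):

Code:
reputation = {}            # miner pubkey -> score from past valid blocks
blacklist = set()
REPUTATION_THRESHOLD = 10  # arbitrary illustrative cutoff

def on_signed_header(pubkey, header, sig, verify_sig, relay, mine_on):
    # Relay and mine on an unvalidated block only if the signer is reputable.
    if pubkey in blacklist or not verify_sig(pubkey, header, sig):
        return
    if reputation.get(pubkey, 0) >= REPUTATION_THRESHOLD:
        relay(header, sig)  # propagate before full validation
        mine_on(header)     # optimistically mine on top of it

def on_full_validation(pubkey, valid):
    # Full validation still always happens; settle reputation afterwards.
    if valid:
        reputation[pubkey] = reputation.get(pubkey, 0) + 1
    else:
        blacklist.add(pubkey)  # invalid block: the key is burned
        reputation.pop(pubkey, None)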
hero member
Activity: 555
Merit: 654
Two little additional tweaks:
- Mining blocks of type B gives you a 50% smaller reward than type A mining (this is to avoid type-B-only mining).
- The difficulty of block branches is compared using the actual PoW, not the expected PoW (see the sketch below).

I'm unsure if HL mining is equivalent to my other proposal https://bitcointalksearch.org/topic/bip-increasing-the-network-hashing-power-by-reducing-block-propagation-time-145066 or better/worse.

Edit after thinking: Also, it does not work as described in a deflationary coin once the time comes that the block reward goes to zero. In other words, it only works on slightly inflationary coins. Or it could work in deflationary coins if a small percentage of the fees (e.g. 10/60) in blocks of type A is awarded to the miner of the following block of type B.
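
For the second tweak, a sketch of what "actual PoW" could mean, under the usual convention that a hash h demonstrates roughly 2^256/(h+1) units of work (hypothetical helper names):

Code:
def expected_work(target: int) -> float:
    # Work the block's difficulty target required (what Bitcoin sums today).
    return 2**256 / (target + 1)

def actual_work(block_hash: int) -> float:
    # Work the found hash actually demonstrates; a luckily low hash
    # counts for more than the target demanded.
    return 2**256 / (block_hash + 1)

def branch_work(block_hashes: list) -> float:
    # The tweak: compare competing branches by summed actual PoW.
    return sum(actual_work(h) for h in block_hashes)

Note that Bitcoin itself compares branches by expected work; weighting by actual work makes chain selection partly luck-dependent, which is presumably the trade-off being accepted here.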
hero member
Activity: 555
Merit: 654
Interesting paper, although I have only read the conclusions. But this is how I completely solved the problem of orphan blocks (IMHO, at least) in other experimental alt-coins (and I may apply it to QixCoin).

The method is called HL mining (for Heavy/Light mining).

I have two alternate mining stages:

The first is a normal mining stage, with a 1 minute interval (type A; this is the heavy block).

The second is a reduced-difficulty stage, with a 10 second interval (type B; this is the light block).
This stage produces blocks that do not include ANY transaction except for the coinbase. This is mandatory.
Type B blocks are only used to compensate for the propagation delay of txs and to give miners an incentive to start mining on a received block EVEN if not all the transactions in that block have been received/verified.

So miners receive a normal type A block header without the transaction list (which propagates very fast) and start mining a type B block using the type A block as parent. This is a bet that the type A block is OK. While they are mining, they receive the second part (which includes the transaction hash list) and download/check the missing transactions of the previous type A block. If 20 seconds have passed and they are unable to collect all the missing transactions, they restart mining a block that competes with the (invalid/incomplete) received type A block.

Mining always alternates block types: A B A B A B ...

The total interval between type A blocks is (on average) 1 minute and 10 seconds. Mining a type B block gives you a reward proportional to the difficulty relation between type A and type B blocks (1/6 in the example shown).

Since block headers and hash lists (without txs) propagate very fast, the orphan rate is at least 100 times lower.
Obviously it may be the case that a miner receives a type A block, then a type B block, and still has not validated the txs of the type A block.
In this case the miner will still compete for the type A block after the wait time has elapsed. This will lead to wasted hashing power, but it is a very unlikely scenario.

Note that the hashing power used to mine type B blocks is not "wasted", since it works as an additional (fractional) confirmation of a block.
Also note that type B blocks will have a much higher contention rate, but since propagation is also much faster (e.g. 100X) we would expect a low orphan rate for these blocks, and since type B blocks only account for a fraction of the network hashing power, the overall wasted hashing power is lower, with a ratio of approximately 10/60.

This can be accomplished in Bitcoin with small code changes, but it requires a hard fork.

I do not have a theoretical analysis of this scheme, but some simulations I did show that it wastes much less hashing power (in orphans) and bandwidth than the current Bitcoin system.
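
To make the alternation and reward split concrete, a toy sketch using the 1 minute / 10 second example above (hypothetical code, not taken from those simulations):

Code:
TYPE_A_INTERVAL = 60  # seconds: heavy block, carries transactions
TYPE_B_INTERVAL = 10  # seconds: light block, coinbase only
TX_WAIT_LIMIT = 20    # seconds to collect a type A block's missing txs
BASE_REWARD = 50.0

def next_block_type(prev_type: str) -> str:
    # Block types strictly alternate: A B A B A B ...
    return "B" if prev_type == "A" else "A"

def block_reward(block_type: str) -> float:
    # Type B reward is proportional to the difficulty relation
    # between the two stages: 10/60 = 1/6 in this example.
    if block_type == "A":
        return BASE_REWARD
    return BASE_REWARD * TYPE_B_INTERVAL / TYPE_A_INTERVAL

print(next_block_type("A"), block_reward("B"))  # B 8.333...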
staff
Activity: 4242
Merit: 8672
One practical problem here is that you can't effectively observe all forks.
Your peers actively work against you seeing them, and in some cases they never leave the miner at all (e.g., when they are also a stale share).
Doubly so because all (?) large pools have a tiered architecture and don't expose their mining nodes to the public network (otherwise they would get DoS attacked).

Some other minor comments. "Each kilobyte in size costs an additional 80ms delay until a majority knows about the block": a majority of nodes isn't the interesting metric; for the purpose of rate dilution and orphaning, a majority of hashpower is. Because of the rather unfortunate consolidation of mining power, it seems unlikely to me that these two figures are closely related, as something like three (administrative domain) nodes have a majority of the hashpower alone (though the (D) results show that the consolidated connectivity isn't magically perfect), so we know the "simplifying assumption" here is a pretty poor one. But it's unclear to me what the magnitude of the impact is; it might have been interesting to reason about the expected improvement from (D) vs the realized improvement.

"By extracting the timestamps"— block timestamps are lies due ntime rolling, with the overall fitting it may not really matter, but I'd generally caution against using them for much of anything where accuracy better than an hour is wanted.

legendary
Activity: 2576
Merit: 1186
Quote
In this case, the work does not cite a single piece of the industry discussion of block propagation timing concerns or potential solutions which occurred prior to and during the time under study. While this complaint is not unique to this work, I've reviewed preprints which have done far better, and I do not believe some other authors' similar behavior excuses the exclusion of the relevant citations here.

Then send them and the IEEE a plagiarism warning, because basically they have failed to perform due diligence and correctly research the subject matter prior to publishing the paper.

If the paper covers research that has already been produced and they have failed to credit the original researchers, it is plagiarism.
At the very least the IEEE will investigate the claims and, if necessary, get them to retract/amend the published paper.
I think it's only plagiarism if they copy the results?
In any case, I'm not aware of any formally produced research, just observations and solutions.

Quote
One practical problem here is that you can't effectively observe all forks.
Your peers actively work against you seeing them, and in some cases they never leave the miner at all (e.g., when they are also a stale share).
sr. member
Activity: 399
Merit: 250
Quote
In this case, the work does not cite a single piece of the industry discussion of block propagation timing concerns or potential solutions which occurred prior to and during the time under study. While this complaint is not unique to this work, I've reviewed preprints which have done far better, and I do not believe some other authors' similar behavior excuses the exclusion of the relevant citations here.

Then send them and the IEEE a plagiarism warning, because basically they have failed to perform due diligence and correctly research the subject matter prior to publishing the paper.

If the paper covers research that has already been produced and they have failed to credit the original researchers, it is plagiarism.
At the very least the IEEE will investigate the claims and, if necessary, get them to retract/amend the published paper.



legendary
Activity: 4522
Merit: 3183
Vile Vixen and Miss Bitcointalk 2021-2023
Here's a major problem that I can see:
Quote
Even though nodes can be tricked into forwarding inv messages that it cannot provide the block for, the impact is likely to be small as the inv messages have a small constant size of 61B. Note that the same attack is already possible by creating an arbitrary amount of transactions and announcing them to the network. As the attacking node can provide the matching transaction, it will not be recognized as an attack.
I don't think this is a valid comparison, since this scenario is exactly why nodes verify a transaction and check that it pays an appropriate transaction fee before relaying it to other nodes. The transaction fee is what prevents spam (or at least discourages it). But forwarding block announcements before you have fully verified the block, or worse, when you don't even have the block, could very easily flood the whole network with spam at no cost whatsoever to the attacker.
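
The asymmetry can be made concrete (a sketch with hypothetical helper names; the fee figure is only illustrative):

Code:
MIN_RELAY_FEE = 0.0001  # BTC, illustrative only

def should_relay_tx(tx, verify, fee_of) -> bool:
    # Transactions are verified and must pay a fee before relay,
    # so flooding the network costs the attacker real money.
    return verify(tx) and fee_of(tx) >= MIN_RELAY_FEE

def should_relay_block_inv(inv) -> bool:
    # Naive early relay: a 61-byte inv is forwarded before the block
    # (and its PoW) is ever seen, so flooding costs nothing.
    return True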

Then there's this gem:
Quote
The changes may mitigate the problem in the short term, until a scalable long term solution is found.
A scalable long term solution to the problem that transmitting information takes a finite amount of time, resulting in race conditions? Good luck with that.

In any case, the problem isn't as bad as it's made out to be, as the whole system was designed with it in mind. Orphan blocks due to propagation delays are the reason Satoshi chose 10 minutes as the target block interval: 10 minutes is long enough that only a small fraction of hashing power is wasted on orphan blocks. By comparison, altcoins with super-fast block times are at much greater risk of a <50% attack, which is one of the main reasons you'd have to be an idiot to put your money into them.
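
A back-of-the-envelope version of that argument, using the paper's figure of ~11 seconds of wasted work per block (my arithmetic, not the paper's):

Code:
delay = 11.37                      # seconds of work wasted per block (from the paper)
for interval in (633.68, 60, 10):  # Bitcoin vs. hypothetical fast altcoins
    print(f"{interval:6.0f}s blocks: ~{delay / interval:.1%} of hashpower wasted")
# ~1.8% at Bitcoin's interval; ~19% at 60s; at 10s the naive linear
# estimate exceeds 100%, i.e. the model (and the coin) breaks down.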
staff
Activity: 4242
Merit: 8672
Edit: replaced brief comment after reading the paper.

Having now read the paper (excepting the background parts, which I trust are correct), I'm excited to see good measurements. I am, however, disappointed that they are probably too old to inform the system as it practically is today (as they occurred prior to some of the performance improvements I mentioned above). I am also disappointed that the authors were apparently unaware of the implementation of the (A) and (B) proposals in August 2012 (see Luke's patches), prior to their work, and the related results.  After some more contemplation I may have additional thoughts on some of the measurement techniques used.

Separately,

While this may be mostly the fault of academic publishing norms, I believe that the work's academic integrity is degraded by its over-reliance on citations of other, often fringe and unimportant, academic publications on Bitcoin, while ignoring the substantial base of open source industry activity that takes place here, on bitcoin-development, and on github. Doubly so when the end result is that the authors will be credited in future publications for inventions which, in actuality, substantially predated their work.

In this case, the work does not cite a single piece of the industry discussion of block propagation timing concerns or potential solutions which occurred prior to and during the time under study. While this complaint is not unique to this work, I've reviewed preprints which have done far better, and I do not believe some other authors' similar behavior excuses the exclusion of the relevant citations here.
staff
Activity: 4242
Merit: 8672
Without following the link, yet:

I've been telling people this for a while, but their dilution figures are far lower than I would have guessed. So at least that much is good!

Luke had partial patches for exactly that verification speedup change last year, but they were diminished in value by the general blocking/single-threadedness of the networking code. The ECDSA caching really took a big bite out of the verification times, since most txns have already been seen before the block arrives. Ultraprune made other big improvements, and people haven't seemed to care quite so much. Our primary approach to poor propagation time was to make validation enormously faster. I do note that what Luke did was create a separate message type for unvalidated blocks, in order to indicate to peers that the block is unvalidated. This allows limiting the radius of bad blocks, avoids imperiling SPV nodes, and prevents an attacker from creating huge network partitions around an aggressive relayer: without it, feeding such a relayer bad blocks would trigger the anti-DOS logic and an instant 24 hour ban of any peer that forwards an invalid block.

The other obvious enhancement would be to split block forwarding to avoid sending redundant data for transactions which have already been received by the far end; this can be done without increasing round trips by just using the existing INV caches to track which txns we know the peer already knows.
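
A sketch of that enhancement (hypothetical structure, not the actual reference-client code):

Code:
def make_block_relay_msg(header: bytes, block_txs: list, peer_known_txids: set):
    # block_txs: list of (txid, raw_tx) pairs. Send full bytes only for
    # txs the far end hasn't seen; for the rest, the txid (already
    # tracked by the existing INV caches) is enough. One message, no
    # extra round trips.
    txids, missing = [], []
    for txid, raw_tx in block_txs:
        txids.append(txid)
        if txid not in peer_known_txids:
            missing.append(raw_tx)  # rare: a tx the peer never saw
    return {"header": header, "txids": txids, "missing_txs": missing}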

At some point you start bumping into the actual networking delay limits and get diminishing returns.

I'm not following where "must be found" is coming from there, but I suppose that means it's time to follow the link.
sr. member
Activity: 279
Merit: 250
Stumbled across this paper, thought it would be useful here. I know 49% attacks and propagation delay have been brought up before, but this paper lays it out nicely.

INFORMATION PROPAGATION IN THE BITCOIN NETWORK
Christian Decker and Roger Wattenhofer (ETH Zurich / Microsoft Research), 13th IEEE International Conference on Peer-to-Peer Computing

http://www.tik.ee.ethz.ch/file/49318d3f56c1d525aabf7fda78b23fc0/P2P2013_041.pdf

The gist: in a 10,000 block interval, there were 169 forks, an observed fork rate of 1.69%. This rate is dependent on network size, of course: the larger the network, the more random the topology and the greater the average distance between the nodes and the origin of a block becomes.

Alternatively, one could interpret this result in such a manner that each time a block is found, “the equivalent of 11.37 seconds worth of computational power of the entire network is wasted.” If you take this wasted computational power into account, the “effective computational power in the current network is 1−11.37/633.68 = 98.20%.” Therefore, control over 49.1% of the Bitcoin network is sufficient for a 51% attack. (Note: 633.68 seconds is the expected time between two blocks; see the paper's model for how this number was derived.)
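
Mirroring the quoted computation (numbers taken straight from the passage above):

Code:
wasted = 11.37     # seconds of work wasted per block
interval = 633.68  # expected seconds between two blocks (paper's model)
effective = 1 - wasted / interval
print(f"effective network power: {effective:.2%}")  # ~98.2%
print(f"attack threshold: {effective / 2:.1%}")     # ~49.1%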

So what can we do about it? Well, first, Christian and Roger think that the block verification process can be compartmentalized and sped up by splitting it into two phases. When a block is received it is first passed through a ‘difficulty check’: simply validating the proof-of-work by hashing the received block header and comparing the hash against the current difficulty target. Usually the block's transactions would now be verified and then the block broadcast to the network.

In this alternative scenario, however, once the difficulty has been checked the block can be rebroadcast first and THEN its transactions verified. This speeds up propagation. Secondly, an improvement in propagation time can be achieved by immediately forwarding incoming inv messages to neighbors.
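
A sketch of the two-phase handling (hypothetical function names; in the real client this would live in the block message handler):

Code:
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def on_block(raw_block: bytes, header: bytes, target: int, relay, fully_validate):
    # Phase 1: cheap difficulty check, a single double-SHA256 of the
    # 80-byte header, interpreted little-endian and compared to the target.
    if int.from_bytes(dsha256(header), "little") > target:
        return                 # PoW invalid: drop, don't relay
    relay(raw_block)           # Phase 2a: forward immediately...
    fully_validate(raw_block)  # Phase 2b: ...then verify the transactions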

Neither of these suggestions is without its downsides, however; both open and close avenues of attack against the network. Namely, if information did not require validation, attackers could send arbitrary amounts of data and flood the network, although I believe there are mitigation techniques for such attacks. Regardless, while these solutions may alleviate problems in the short term, Christian and Roger seem to think a more scalable long term solution must be found.


Are they on point? What do you guys think?