Topic: Analysis of hashrate-based double-spending - page 2. (Read 14792 times)

legendary
Activity: 1106
Merit: 1004
December 13, 2012, 03:05:04 AM
#59
DAG stands for Directed Acyclic Graph. The PoS article you linked to mentions this once, without going into detail. I think it refers to the structure of the block relations. The blockchain currently forms a tree, where different branches are not reconcilable, with one having to be eventually forfeited. I guess the proposal is to make it possible for branches to join back. I don't know of any such proposals though.
Thanks. It's probably the blocktree thing then.
This is indeed what a DAG is and I think we're thinking about the same proposal. I might try linking to it sometime.

I guess this is Maged's proposal, isn't it? https://bitcointalksearch.org/topic/m.686497
donator
Activity: 1736
Merit: 1010
Let's talk governance, lipstick, and pigs.
December 13, 2012, 01:28:52 AM
#58
Nice analysis, and well written. Thank you for not making it overly alarming. More of this research is always interesting.
hero member
Activity: 868
Merit: 1008
December 13, 2012, 12:45:19 AM
#57
However, I think what most people argue is that on two networks with an equivalent amount of hashing power, 6 blocks emitted from one calibrated to produce 1 block every 10 minutes are equivalent (in terms of security) to 60 blocks emitted from the one that produces one block every minute.
This is indeed what they argue, and this is wrong.
Is it?  Note, there is no time component being discussed here.  It is simply saying that 6 confirmations is equivalent to 60 confirmations (given equivalent overall hash rate).  It doesn't matter if the 6 or 60 blocks take 10 hours or 10 seconds to arrive.
It is. 6 confirmations is equivalent to 6 confirmations and 60 confirmations is equivalent to 60 confirmations. The probability of success is uniquely determined by the number of confirmations and the attacker's percentage of the total hashrate. With these two fixed, it does not matter at all what the difficulty, the total hashrate, the average time between blocks, or the actual time it took to find those confirmations are.
Yes, I see it now.  I believe it's a correct analysis (and not what I had previously thought).
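For concreteness, here is that claim as a quick Python sketch of the closed-form success probability from Meni's paper (as I read it; the function and variable names are mine, and it assumes n >= 1):

Code:
from math import comb

def double_spend_success(n, q):
    # Probability that an attacker with fraction q of the total hashrate
    # eventually replaces a chain on which the merchant saw n confirmations.
    # Note the inputs: just n and q -- no difficulty, no block interval,
    # no wall-clock time.
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker always succeeds eventually
    # Sum over m = number of attacker blocks found while the honest
    # network finds its n confirmations (negative binomial weights).
    return 1.0 - sum(comb(m + n - 1, m) * (p**n * q**m - p**m * q**n)
                     for m in range(n + 1))

print(double_spend_success(6, 0.10))   # roughly 0.0006 for a 10% attacker
print(double_spend_success(60, 0.10))  # vastly smaller, whatever the block time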
hero member
Activity: 588
Merit: 500
firstbits.com/1kznfw
December 12, 2012, 11:41:37 PM
#56
Because Litecoin's algorithm is built so that mining with GPUs is infeasible, it would be very easy for a large and powerful entity (the US government, the Fed, or whatever) to build ASICs quickly and completely destroy Litecoin.

The problem with Litecoin isn't the future possibility of a shadow government creating a Litecoin ASIC; it's the real present danger of the network being taken over by bot-herders.

Also because no one accepts it.
legendary
Activity: 1050
Merit: 1003
December 12, 2012, 11:35:57 PM
#55
I don't even know what a directed acyclic graph is. Sounds interesting though!
If you allow the branches of a tree to merge, you don't have a tree anymore: you have a DAG. That's all a DAG is.

In your proposal it's similar, although very limited: you allow the blockchain to be forked into (n-1) blocks just to be joined back immediately. I guess relaxing this requirement and allowing the parallel branches to be non-empty and of arbitrary length (as opposed to 1) could give more flexibility.

Yeah, I think you could do a lot with this concept of a DAG. It would allow for a lot of asynchronous computation. For example, suppose that all coins are colored based on their block origin. Spacer blocks contain txns of one color only. Then rearranging the ordering of spacer blocks does not result in forks unless you alter the order of two blocks of the same color. Hmmm... I think there is something useful here, but it also seems really complicated.
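To see why same-color order matters but cross-color order doesn't, here's a toy sketch (my own notation, with per-color balances standing in for real txns):

Code:
def apply_spacer(state, spacer):
    # A spacer carries txns of a single color; colors never share coins.
    color, txns = spacer
    balances = dict(state[color])
    for src, dst, amount in txns:
        balances[src] = balances.get(src, 0) - amount
        balances[dst] = balances.get(dst, 0) + amount
    return {**state, color: balances}

red = ("red", [("alice", "bob", 5)])
blue = ("blue", [("carol", "dave", 3)])
genesis = {"red": {"alice": 10}, "blue": {"carol": 10}}

# Different colors touch disjoint state, so their order is irrelevant:
assert (apply_spacer(apply_spacer(genesis, red), blue)
        == apply_spacer(apply_spacer(genesis, blue), red))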
legendary
Activity: 1002
Merit: 1000
Bitcoin
December 12, 2012, 11:12:12 PM
#54
There's a lot of big thinking behind this paper. Thanks for doing it; as I see it, it seems to raise some serious thought, and may help Bitcoin become better in the future.

Meni Rosenfeld: You seem to have put a lot of time and effort into studying this and writing such a good paper.

Why have you done this? I mean, what was your motivation to study this so deeply?
legendary
Activity: 1358
Merit: 1003
Ron Gross
December 12, 2012, 09:42:19 PM
#53
Excellent paper & thread, posted to reddit.

1.

Quote from: Meni Rosenfeld
The time constant might be relevant, if we assume that the attacker cannot sustain his
hashrate for a prolonged period of time. In this case, even majority hashrate does not
guarantee success, as the attack could take more time than available. However, it does
not seem likely that an attacker would go through the trouble of obtaining enormous
computational resources and only maintain them for a time short enough to make this
consideration relevant.

I think the time constraint assumption does have merit. The attacker can rent hash power on a hash market (or just rent GPUs/ASICs directly from a cloud provider); he does not necessarily need to exclusively own a large hash rate himself.

2.
Added Block Cementing to the wiki.
full member
Activity: 309
Merit: 102
Presale is live!
December 12, 2012, 07:30:33 PM
#52
I don't have time to watch the entire video just to get your point. So get to it.
The bomb in the video says something to the effect of "I must think of this". There is no point, just bad trolling.
legendary
Activity: 1470
Merit: 1006
Bringing Legendary Har® to you since 1952
December 12, 2012, 06:30:23 PM
#51
You could arrange Bitcoin so that you had to submit n different PoW hashes in each block, each of which met a difficulty target (about n-fold lower than the current one).
Interesting, need to think about this.
Is that you, bomb?  Grin

I didn't get this. Can you explain in more detail?

You mean you watched the linked movie and still don't see the connection between his and bomb's words?

TL;DR (or rather: TL;DW). I watched the first 30 seconds and I see no connection.

I don't have time to watch the entire video just to get your point. So get to it.
legendary
Activity: 1470
Merit: 1006
Bringing Legendary Har® to you since 1952
December 12, 2012, 05:38:16 PM
#50
You could arrange Bitcoin so that you had to submit n different PoW hashes in each block, each of which met a difficulty target (about n-fold lower than the current one).
Interesting, need to think about this.
Is that you, bomb? Grin

I didn't get this. Can you explain in more detail?
donator
Activity: 2058
Merit: 1054
December 12, 2012, 05:01:00 PM
#49
So, say I find 3 confirmations acceptably safe. That's exactly 3 confirmations in Litecoin too for the same safety?
That's 30 minutes vs 6 minutes (it was 2 min/block iirc)... on average.
Yes. (It's 2.5 min, so 7.5 min.)

Why are we still using Bitcoin instead of Litecoin again? Because it's the most accepted?
1. Yes, because it's more accepted. If we switched to a new currency every year because of some alleged trivial benefit, we would miss the point.
2. Decreasing the average block time has the obvious benefit of reducing time for confirmations. But it has equally obvious disadvantages:
a. There is more forking, meaning an attacker has more effective hashrate. I've ignored inadvertent forking completely in the paper, but it starts being a problem if we go wild with reducing the time constant.
b. There are more blocks, and hence more block headers. For a full node this is negligible, but for an SPV (aka lightweight) client it makes a world of difference, especially if it is to be run on weak mobile devices (see the quick arithmetic below).
So some tradeoff needs to be chosen. It can be argued that 2.5 min is better than 10 min, but it's far from clear-cut.
3. It doesn't really matter ultimately. If you can afford to wait more than a few seconds, you can usually afford to wait an hour. If not, I believe the gap can be bridged with Trustless, instant, off-the-chain Bitcoin payments.
4. Because Litecoin might be more prone to botnets.
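For point 2b, a quick back-of-the-envelope (80 bytes per block header is a Bitcoin constant; the rest is arithmetic):

Code:
HEADER_BYTES = 80  # size of a Bitcoin block header
for minutes_per_block in (10, 2.5):
    headers_per_year = 365.25 * 24 * 60 / minutes_per_block
    mb_per_year = headers_per_year * HEADER_BYTES / 1e6
    print(f"{minutes_per_block:>4} min/block: {headers_per_year:,.0f} "
          f"headers/year = {mb_per_year:.1f} MB/year")

So 10-minute blocks cost an SPV client about 4 MB of headers per year, and 2.5-minute blocks cost four times that.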

You could arrange Bitcoin so that you had to submit n different PoW hashes in each block, each of which met a difficulty target (about n-fold lower than the current one).
Interesting, need to think about this.

DAG stands for Directed Acyclic Graph. The PoS article you linked to mentions this once, without going into detail. I think it refers to the structure of the block relations. The blockchain currently forms a tree, where different branches are not reconcilable, with one having to be eventually forfeited. I guess the proposal is to make it possible for branches to join back. I don't know of any such proposals though.
Thanks. It's probably the blocktree thing then.
This is indeed what a DAG is and I think we're thinking about the same proposal. I might try linking to it sometime.
full member
Activity: 309
Merit: 102
Presale is live!
December 12, 2012, 12:24:04 PM
#48
I don't even know what a directed acyclic graph is. Sounds interesting though!
If you allow the branches of a tree to merge, you don't have a tree anymore: you have a DAG. That's all a DAG is.

In your proposal it's similar, although very limited: you allow the blockchain to be forked into (n-1) blocks just to be joined back immediately. I guess relaxing this requirement and allowing the parallel branches to be non-empty and of arbitrary length (as opposed to 1) could give more flexibility.
legendary
Activity: 1050
Merit: 1003
December 12, 2012, 11:59:06 AM
#47
This is my explanation.
Thanks. It does make sense now. This sounds like a very special case of DAG proposal, whatever that is. Do you have a link to a discussion of that?
I wasn't around for the DAG proposal. I don't even know what a directed acyclic graph is. Sounds interesting though!
full member
Activity: 309
Merit: 102
Presale is live!
December 12, 2012, 11:56:10 AM
#46
This is my explanation.
Thanks. It does make sense now. This sounds like a very special case of DAG proposal, whatever that is. Do you have a link to a discussion of that?
legendary
Activity: 1050
Merit: 1003
December 12, 2012, 11:50:00 AM
#45
You can just think of it as inserting (n-1) empty blocks as spacers between each block pair. The spacers act as variance killers.
OK, I see. However, it's still not clear why one large block (of size n*k) followed by (n-1) empty ones would be better than n smaller ones (of size k).
This is my explanation. Under normal circumstances, blocks are built in sequence. This means I have to see block t-1 before I can build block t.

Spacer blocks are different. A spacer block wouldn't have any function except inserting work and a time delay between blocks. It would need to build on the last full block t, but would not need to be aware of the last spacer block. Therefore, spacer blocks are not ordered. They could be constructed in parallel rather than in sequence. It would be fine for everyone to have their own copy of the blockchain which orders the spacer blocks between blocks t and t-1 differently. Unlike a regular block, disagreements about spacer block order would not cause the network to fork. This should mean that spacer blocks could be mined at relatively narrow time intervals without causing a network disruption.

I know next to nothing about P2P networks. So please anyone who is enlightened jump in to set me straight.

[This is irrelevant in any case. Because as Meni points out it is the majority attacks that are the real problem and adding spacer blocks doesn't help with this.]
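If it helps, here's a minimal sketch of what I mean by "not ordered" (field names are mine, not a real proposal):

Code:
from dataclasses import dataclass

@dataclass(frozen=True)
class SpacerBlock:
    anchor: int        # height of the last *full* block -- its only parent
    pow_hash: str      # the winning hash, meeting the lower target
    reward_addr: str   # address to send the block reward to

# Each spacer commits only to full block t, never to another spacer, so two
# nodes that received the same spacers in different orders hold identical
# sets and have nothing to fork over:
node_a = {SpacerBlock(7, "hashA", "addrA"), SpacerBlock(7, "hashB", "addrB")}
node_b = {SpacerBlock(7, "hashB", "addrB"), SpacerBlock(7, "hashA", "addrA")}
assert node_a == node_b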
full member
Activity: 309
Merit: 102
Presale is live!
December 12, 2012, 11:00:54 AM
#44
You can just think of it as inserting (n-1) empty blocks as spacers between each block pair. The spacers act as variance killers.
OK, I see. However, it's still not clear why one large block (of size n*k) followed by (n-1) empty ones would be better than n smaller ones (of size k).
legendary
Activity: 1050
Merit: 1003
December 12, 2012, 09:59:23 AM
#43
You could arrange Bitcoin so that you had to submit n different PoW hashes in each block, each of which met a difficulty target (about n-fold lower than the current one).
Right now n=1. You could increase n. Doing this would reduce variance in the block arrival rate and make it harder to execute the type of minority attacks Meni describes in his paper.
Do you mean there will be n independent nonces to search, each affecting its own hash? If so, I think it would give an unfair advantage to the larger pool, thus discouraging decentralisation. Not good at all!

Not quite like that, no. More like the pool broadcasts block t and then (n-1) people come up with additional solutions for block t and broadcast them in no particular order. Then block t+1 builds on the original block t as well as the extra (n-1) solutions. You can just think of it as inserting (n-1) empty blocks as spacers between each block pair. The spacers act as variance killers. The information in a spacer is just the winning hash plus an address to send the block reward to.

I'm assuming here that block solutions are really tiny in terms of information and that (unlike full blocks) solutions could diffuse very rapidly through the network. Is this true? This question has been kind of nagging me. How long does it take 200 bytes to diffuse through the network vs. 200 kilobytes?

If there is no significant difference in diffusion time, then you might as well include closely spaced complete blocks (as in litecoin).
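On the variance-killing point, a quick simulation (toy numbers of my own) of how n sub-solutions at n-fold lower difficulty keep the same mean interval but shrink its spread by roughly a factor of sqrt(n):

Code:
import random

TARGET = 600.0  # seconds between full blocks, as in Bitcoin

def block_interval(n):
    # n sub-solutions, each exponential with mean TARGET / n: the total
    # keeps mean TARGET but has variance TARGET**2 / n.
    return sum(random.expovariate(n / TARGET) for _ in range(n))

for n in (1, 10):
    samples = [block_interval(n) for _ in range(100_000)]
    mean = sum(samples) / len(samples)
    std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
    print(f"n={n:>2}: mean {mean:6.1f} s, std {std:6.1f} s")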
legendary
Activity: 1106
Merit: 1004
December 12, 2012, 09:50:51 AM
#42
Still don't know what DAG stands for though.

I'm not sure exactly about the context but DAG normally refers to a "Directed Acyclic Graph" (think of an OS folder structure).

DAG stands for Directed Acyclic Graph. The PoS article you linked to mentions this once, without going into detail. I think it refers to the structure of the block relations. The blockchain currently forms a tree, where different branches are not reconcilable, with one having to be eventually forfeited. I guess the proposal is to make it possible for branches to join back. I don't know of any such proposals though.

Thanks. It's probably the blocktree thing then. I was aware of that, but until now hadn't seen this proof-of-stake concept, and there's even an implementation already. And I thought I followed the Bitcoin world closely enough...
vip
Activity: 980
Merit: 1001
December 12, 2012, 09:50:15 AM
#41
Sorry to be off-topic,
but guys, Litecoin has been mined on GPUs for a long time, starting with Solidcoin's Reaper having a scrypt algo, and in July of this year I held pledges to pay cgminer's developer to have scrypt support added

Thanks to Meni Smiley
legendary
Activity: 1470
Merit: 1006
Bringing Legendary Har® to you since 1952
December 12, 2012, 09:42:38 AM
#40
This is outside the scope of the devs; it's a core protocol issue. I think defending against casual double-spenders is certainly feasible. The real problem is with well-funded entities trying to harm Bitcoin with majority attacks.

There are some proposals for alternative systems (e.g. proof of stake), but they are fairly controversial.

This is right. Concern over double-spends is way overblown.

If I understood the paper correctly, using the method described by @Meni Rosenfeld, it is possible to double-spend as many transactions as one wants simultaneously, thus completely paralyzing the entire network (somebody correct me if I am wrong).

They don't make economic sense.

They make sense if you are a government or a banker with millions of USD in accounts.
A few million is really nothing for destroying your greatest competitor.