
Topic: bitcoin "unlimited" seeks review - page 10. (Read 16106 times)

sr. member
Activity: 381
Merit: 255
January 02, 2016, 05:04:32 PM
#30
BitUsher, so you're saying an attacker spinning up all those nodes would encourage a bunch of other people to raise their limits to 200MB? Similar to what I said above, any miner wishing to take advantage of the situation and mine 200MB blocks is not going to be deterred by having to mod the code a bit. Miners already do that, in fact. Again, BU is only a change in convenience; if convenience is the difference between Bitcoin being secure and insecure, we have bigger problems already (soon enough someone's just gonna make a patch, and then it will be dirt simple to mod any consensus setting). It won't likely be fruitful to critique BU along those lines.

I'm pretty sure continued discussion on this point would clutter the thread quite a bit and not really be related to what Adam is asking about. Maybe make a new thread?

You wanted to be able to post on this forum and not be censored, yet you are not prepared to answer the hard questions posed about BU?

Let's try again. If I set up 2,000 nodes, each voting for a 200MB block and thus overtaking consensus, what prevents a step-2 scenario where a miner that gets lucky starts mining 200MB blocks and propagating them? The longest chain is mine, as I run the most nodes.

Adam's questions are of a similar nature, as he is asking how we prevent the blockchain from splitting into multiple shards, where each node follows an arbitrary size and starts rejecting the larger blocks. Meaning I can kick Adam out of the network quite quickly, as my 2,000 nodes in consensus for 200MB blocks will ignore his 1MB + 10% consensus.
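The weakness being probed here can be shown with a toy model. This sketch (my own illustration, not BU's actual code) shows why any policy that reads block-size "consensus" off raw node counts is cheap to flip:

```python
# Toy model (my own illustration, not BU code): a naive policy that
# takes the block-size "consensus" as the median of the limits peers
# advertise can be flipped by cheaply spun-up sybil nodes.
from statistics import median

def apparent_consensus(advertised_limits_mb):
    """Naive signal: the median block-size limit advertised by peers."""
    return median(advertised_limits_mb)

honest = [1.0] * 1000      # 1,000 honest nodes advertising a 1MB limit
sybils = [200.0] * 2000    # 2,000 cheap attacker nodes advertising 200MB

print(apparent_consensus(honest))           # 1.0 -- the honest view
print(apparent_consensus(honest + sybils))  # 200.0 -- sybils dominate
```

Because running a node is cheap relative to acquiring hashing power, any signal that weights raw node counts inherits this problem.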
hero member
Activity: 644
Merit: 504
Bitcoin replaces central, not commercial, banks
January 02, 2016, 04:56:47 PM
#29
Sorry for stepping in.

If someone tries to sybil the network and sets up 2,000 nodes with a block limit of 200 MB, no responsible miner would take this as a reason to set his own limit to 200 MB.

If one of the miners were corrupted too, he could release a 200 MB block and 2,000 nodes would propagate it. All the other nodes with lower limits would reject the block until it reaches some depth. For that to happen, the majority of miners would have to be corrupted.

The attack is a lot more complex than that. I think you're on the BU forum? Taek had a nice explanation of the centralization pressure enabled by BU. Someone could leverage a sybil attack to effectively do just what he proposed: slowly but surely prune nodes out of the network until it gets consolidated into a few more controllable hands.


Quote from: Taek
If you are a miner, and you know a block of size X can be processed by 85% of the network, but not 100%, do you mine it? If by 'network', we mean hashrate, then definitely! 85% is high enough that you'll be able to build the longest chain. The miners that can't keep up will be pruned, and then the target for '85% fastest' moves - now a smaller set of miners represents 85% and you can move the block size up, pruning another set of miners.

If by 'network', you mean all nodes... today we already have nodes that can't keep up. So by necessity you are picking a subset of nodes that can keep up, and a subset that cannot. So, now you are deciding who is safe to prune. Raspi's? Probably safe. Single merchants that run their own nodes on desktop hardware? Probably safe. All desktop hardware, but none of the exchanges? Maybe not safe today. But if you've been near desktop levels for a while, and slowly driving off the slower desktops, at some point you might only be driving away 10 nodes to jump up to 'small datacenter' levels.

And so it continues anyway. You get perpetual centralization pressure because there will always be that temptation to drive off that slowest subset of the network since by doing so you can claim more transaction fees.
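Taek's ratchet can be made concrete with a toy simulation (my own sketch; only the 85% "keep" fraction comes from the quote, everything else is an assumption):

```python
# Toy simulation of the pruning ratchet described above (my own sketch;
# only the 85% figure comes from Taek's quote). Each round the miner
# picks the largest block size that the fastest 85% of remaining nodes
# can process; slower nodes drop off, and the target moves up.
def ratchet(capacities_mb, keep_fraction=0.85, rounds=5):
    remaining = sorted(capacities_mb)
    history = []
    for _ in range(rounds):
        if len(remaining) < 2:
            break
        cutoff = int(len(remaining) * (1 - keep_fraction))
        block_size = remaining[cutoff]          # fastest 85% can handle this
        remaining = [c for c in remaining if c >= block_size]
        history.append((block_size, len(remaining)))
    return history

# 20 nodes whose processing capacities are 1..20 MB
for size, survivors in ratchet(list(range(1, 21))):
    print(f"block size {size}MB -> {survivors} nodes remain")
```

Each iteration the viable block size only ratchets upward while the node count shrinks, which is exactly the perpetual centralization pressure Taek describes.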
legendary
Activity: 994
Merit: 1035
January 02, 2016, 04:54:08 PM
#28
Sorry for stepping in.

If someone tries to sybil the network and sets up 2,000 nodes with a block limit of 200 MB, no responsible miner would take this as a reason to set his own limit to 200 MB.

If one of the miners were corrupted too, he could release a 200 MB block and 2,000 nodes would propagate it. All the other nodes with lower limits would reject the block until it reaches some depth. For that to happen, the majority of miners would have to be corrupted.

To be honest, I don't think this attack is worth discussing - whereas Adam Back raised some questions I'd love to see addressed.


Yes, of course those 200MB blocks would be orphaned now with BU. We are discussing how BU would hypothetically work if a majority of the mining power supported the implementation and delegated the block size decision to nodes instead of keeping it themselves. BU isn't assuming switching to PoS in the future, right? The security model right now assumes a coordinated attack of miners and nodes. BU would allow the nodes to perform this attack immediately, as the miners would be delegating their maxblocksize to the nodes, right?

BitUsher, so you're saying an attacker spinning up all those nodes would encourage a bunch of other people to raise their limits to 200MB? Similar to what I said above, any miner wishing to take advantage of the situation and mine 200MB blocks is not going to be deterred by having to mod the code a bit. Miners already do that, in fact. Again, BU is only a change in convenience; if convenience is what is keeping Bitcoin secure, we have bigger problems already. It won't likely be fruitful to critique BU along those lines.

I'm pretty sure continued discussion on this point would clutter the thread quite a bit and not really be related to what Adam is asking about. Maybe make a new topic?

Yes, I would rather move on to other topics, but can you explain to me the "1% easier" difference in one post, assuming the future possibility that a majority of miners support BU and delegate the block size to nodes instead of keeping it themselves?


P.S... I am not posing these questions to denigrate your efforts and am genuinely interested in learning about BU and helping other implementations. Please don't be offended by these questions.
legendary
Activity: 1036
Merit: 1000
January 02, 2016, 04:47:34 PM
#27
BitUsher, so you're saying an attacker spinning up all those nodes would encourage a bunch of other people to raise their limits to 200MB? Similar to what I said above, any miner wishing to take advantage of the situation and mine 200MB blocks is not going to be deterred by having to mod the code a bit. Miners already do that, in fact. Again, BU is only a change in convenience; if convenience is the difference between Bitcoin being secure and insecure, we have bigger problems already (soon enough someone's just gonna make a patch, and then it will be dirt simple to mod any consensus setting). It won't likely be fruitful to critique BU along those lines.

I'm pretty sure continued discussion on this point would clutter the thread quite a bit and not really be related to what Adam is asking about. Maybe make a new thread?
sr. member
Activity: 409
Merit: 286
January 02, 2016, 04:45:34 PM
#26
Sorry for stepping in.

If someone tries to sybil the network and sets up 2,000 nodes with a block limit of 200 MB, no responsible miner would take this as a reason to set his own limit to 200 MB.

If one of the miners were corrupted too, he could release a 200 MB block and 2,000 nodes would propagate it. All the other nodes with lower limits would reject the block until it reaches some depth. For that to happen, the majority of miners would have to be corrupted.

To be honest, I don't think this attack is worth discussing - whereas Adam Back raised some questions I'd love to see addressed.

Edit, because "brand new" looks ugly: I'm C. Bergmann but unfortunately lost my password, and my bitcoin-signed pledge for recovery was not answered. I'm not affiliated with BU, but I like the idea and think it is worth discussing with an open mind.
legendary
Activity: 994
Merit: 1035
January 02, 2016, 04:32:15 PM
#25
Not sure what you mean. I'm just saying if someone wanted to create a fork of Core with a 200MB blocksize cap now, it's not difficult. Then if they had the resources to deploy 1000 nodes, we'd be at your scenario.

Point is, this has nothing to do with BU.

The difference being that those 1k nodes would be producing orphaned blocks on the original chain with 99% of the hashing security (thus committing economic suicide), whereas with the BU proposal one is assuming the miners have accepted the proposal and allow the nodes to dynamically adjust the block size. This is a significant difference, is it not? Don't we want to assume the future hypothetical that BU has the majority of mining security behind it in order to evaluate its true potential?
hero member
Activity: 644
Merit: 504
Bitcoin replaces central, not commercial, banks
January 02, 2016, 04:31:30 PM
#24
I would say a Sybil attacker with the resources to cook up 1000 nodes will have no trouble modding a bit of C++ code or hiring a coder to do that. That's the least of the barriers, and even if it were to be relied on, that would be a losing battle. If inconvenience were all that is keeping Bitcoin secure, we would have a problem. Also see my edit to the post immediately above yours.

I'm not sure if you're intentionally avoiding the gaping hole in your analysis or if you just don't see it.

Yes, someone could spin up 1000 nodes tomorrow that advertise a larger block size, but the context is quite different in that the network has agreed by consensus that these would be invalid. For that reason miners will not mine such blocks, or they will get forked off the network for not respecting the consensus rules (and lose money).

From what I understand BU proposes that all of these nodes be aggregated into a signal that miners should consider when creating the blocks. That is the nature of a sybil attack.

With current Core consensus rules it is very easy for miners to tell nodes apart from each other; there are two kinds: 1MB nodes and the rest.
legendary
Activity: 1036
Merit: 1000
January 02, 2016, 04:25:52 PM
#23
Not sure what you mean. I'm just saying if someone wanted to create a fork of Core with a 200MB blocksize cap now, it's not difficult. Then if they had the resources to deploy 1000 nodes, we'd be at your scenario.

Point is, this has nothing to do with BU.
legendary
Activity: 994
Merit: 1035
January 02, 2016, 04:23:27 PM
#22
I would say a Sybil attacker with the resources to cook up 1000 nodes will have no trouble modding a bit of C++ code or hiring a coder to do that. That's the least of the barriers, and even if it were to be relied on, that would be a losing battle. If inconvenience were all that is keeping Bitcoin secure, we would have a problem. Also see my edit to the post immediately above yours.

Is there any coded algorithm for determining blocksize consensus in BU available to post here?
legendary
Activity: 4214
Merit: 1313
January 02, 2016, 04:20:35 PM
#21
Isn't the difference that BU will allow maxBlockSize to be determined by nodes, while core/xt/etc. ensures that miners make that decision, or am I missing something?

Well, that is already the case. BU just makes it more convenient.

True, I suppose nodes can already break off from the main chain with little to no hashing security and create their own chain. You are suggesting that in BU a sybil attack is made easier, though, as the incentive structures under core and xt are to stay on the chain with the majority hashing security? It is far easier and less expensive to spin up a bunch of nodes than to replicate the hashing power. Would you agree or disagree?

They don't even have to break off and form their own chain.  They can just recompile with a parameter changed to accept larger blocks.  And then in theory that larger block would be orphaned and it would go back to the main chain eventually. (There are other considerations for say allowing 200MB blocks with regard to just changing that parameter, but safe to ignore them in this reply I think).
hero member
Activity: 546
Merit: 500
Warning: Confrmed Gavinista
January 02, 2016, 04:18:28 PM
#20
Don't we have to prepare for sybil attacks in a hostile environment?

I feel that it will just derail what is supposed to be a general discussion on the pros and cons of moving away from hard-coded limits. We either find that the concept is valid, in which case a discussion on attack vectors is called for, or it's found to be unworkable, in which case the attack discussion is irrelevant.

Also, it's very hard to find attack vectors that are specific to this implementation and that cannot be applied to bitcoin as a whole.
legendary
Activity: 1036
Merit: 1000
January 02, 2016, 04:17:56 PM
#19
I would say a Sybil attacker with the resources to cook up 1000 nodes will have no trouble modding a bit of C++ code or hiring a coder to do that. That's the least of the barriers, and even if it were to be relied on, that would be a losing battle. If inconvenience were all that is keeping Bitcoin secure, we would have a problem. Also see my edit to the post immediately above yours.
legendary
Activity: 994
Merit: 1035
January 02, 2016, 04:15:11 PM
#18
Isn't the difference that BU will allow maxBlockSize to be determined by nodes, while core/xt/etc. ensures that miners make that decision, or am I missing something?

Well, that is already the case. BU just makes it more convenient.

True, I suppose nodes can already break off from the main chain with little to no hashing security and create their own chain. You are suggesting that in BU a sybil attack is made easier, though, as the incentive structures under core and xt are to stay on the chain with the majority hashing security? It is far easier and less expensive to spin up a bunch of nodes than to replicate the hashing power. Would you agree or disagree?
legendary
Activity: 1036
Merit: 1000
January 02, 2016, 04:10:17 PM
#17
Isn't the difference that BU will allow maxBlockSize to be determined by nodes, while core/xt/etc. ensures that miners make that decision, or am I missing something?

Well, that is already the case. BU just makes it more convenient. And granular: you don't just have a choice between Core@1MB and XT@8MB+, but rather anything - but the increased number of options doesn't mean users can't converge on a Schelling point; more options doesn't mean more viable options.

I imagine a series of jumps from one Schelling-point consensus to the next. For example, first everyone warily converges on Pieter's very conservative BIP (+17%/year), then as capacity increases faster than expected people jump to Adam's 2-4-8, then an unforeseen adoption surge induces a jump to BIP101, and finally people see where this is going and nodes/miners - as the foremost experts on the network - move independently of the devs to create their own Schelling points.

A specialization and division of labor would occur as it should in any mature industry, with consensus-parameter-setting unbundled from the software offerings of Core/etc. People would "hire" the Core/etc. devs for their secure code, not for their determining of consensus parameters. Those would be set by the larger market, reacting dynamically to market conditions. To do otherwise is arguably a security risk as it concentrates power in one team of devs.
staff
Activity: 3458
Merit: 6793
Just writing some code
January 02, 2016, 04:06:40 PM
#16
From what I understand, BU moves the block size limit from consensus rules to a node policy rule. Instead of having the limit hard coded in, the user chooses their own block size limit. Also if a BU node detects a blockchain that has a higher block size (up to a certain user configurable threshold), after that chain is a number of blocks deep (user configurable), then it will switch to use that blockchain and set its block size limit higher.

So what happens if I left my node at 1MB +10% user threshold and a 1.2MB block comes - does my node reject it?
IIRC the node will keep the block and watch the chain it is on. If the chain it is on becomes n blocks deep (where n is user configurable) then your client will switch to use that chain as the active one. Otherwise it stays with the one it is currently using.

How will the network not split into a myriad little shards which diverge following accidental and/or intentional double-spends without manual human coordination?
I don't know. You'll have to ask someone else about that.
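The acceptance-depth rule described above can be sketched roughly like this (a simplification of my own; class and parameter names are hypothetical, not BU's actual code):

```python
# Minimal sketch of the acceptance-depth rule described above (my own
# simplification; names are hypothetical, not from BU's actual code).
class BUNode:
    def __init__(self, size_limit_mb, acceptance_depth):
        self.size_limit_mb = size_limit_mb        # user-chosen limit, e.g. 1MB
        self.acceptance_depth = acceptance_depth  # user-chosen n blocks

    def follows_chain(self, block_size_mb, blocks_on_top):
        """Follow an oversize chain only once it is buried n blocks deep."""
        if block_size_mb <= self.size_limit_mb:
            return True                 # within the limit: accept immediately
        return blocks_on_top >= self.acceptance_depth

node = BUNode(size_limit_mb=1.0, acceptance_depth=4)
print(node.follows_chain(1.2, 0))  # False: oversize and not yet buried
print(node.follows_chain(1.2, 4))  # True: 4 blocks deep, node switches over
```

So in the 1MB + 10% example above, the node first rejects the 1.2MB block, then switches to its chain once enough blocks are built on top of it.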
legendary
Activity: 994
Merit: 1035
January 02, 2016, 04:05:55 PM
#15
Nodes decide how they play the game. If they feel that 2,000 nodes suddenly appearing on the horizon demanding 200MB blocks is the way forward, then that's what they do. If, on the other hand, they are rational, then they won't.

I thought we were discussing how it works, you are discussing how to attack it.   Wink

Discussing its strengths and weaknesses is one of the best ways to understand how it works. Isn't it rational for many to attack the network? Don't we have to prepare for sybil attacks in a hostile environment? Shouldn't we design a network that isn't dependent upon rational actors with goodwill intent, and that is protected against irrational actors, actors with malicious intent, and mistakes due to incompetence, shortcuts, or ignorance?

There's nothing stopping an attacker from modding the Core client themselves and setting up 2000 nodes with 200MB blocks. Or at least, it's not the little bit of C++ coding that's likely going to be what stops them Grin

BU is NOT a big blocks client, let alone an "unlimited blocksize" client. The "unlimited" only refers to unlimited options. At this time, BU is simply Core + a few options. They can all be turned off to mimic Core.

Isn't the difference that BU will allow maxBlockSize to be determined by nodes, while core/xt/etc. ensures that miners make that decision, or am I missing something?
legendary
Activity: 1036
Merit: 1000
January 02, 2016, 04:02:55 PM
#14
There's nothing stopping an attacker from modding the Core client themselves and setting up 2000 nodes with 200MB blocks. Or at least, it's not the little bit of C++ coding that's likely going to be what stops them.

Bitcoin Unlimited is NOT a big blocks client, let alone an "unlimited blocksize" client. The "unlimited" only refers to unlimited options. At this time, BU is simply Core + a few options. They can all be turned off to mimic Core exactly.
hero member
Activity: 546
Merit: 500
Warning: Confrmed Gavinista
January 02, 2016, 03:55:58 PM
#13
From a pure tech point of view, what is stopping the sybil attack on BU?

Without having dug much into it, can you answer what is in place to stop me from setting up 2,000+ nodes and adjusting the block size to 200MB per block, thus subverting the entire network's attempt to form consensus on a smaller size?

Nodes decide how they play the game. If they feel that 2,000 nodes suddenly appearing on the horizon demanding 200MB blocks is the way forward, then that's what they do. If, on the other hand, they are rational, then they won't.

I thought we were discussing how it works, you are discussing how to attack it.   Wink
sr. member
Activity: 381
Merit: 255
January 02, 2016, 03:40:38 PM
#12
From a pure tech point of view, what is stopping the sybil attack on BU?

Without having dug much into it, can you answer what is in place to stop me from setting up 2,000+ nodes and adjusting the block size to 200MB per block, thus subverting the entire network's attempt to form consensus on a smaller size?
legendary
Activity: 1036
Merit: 1000
January 02, 2016, 03:33:32 PM
#11
The proposal seems at first skim to be a copy of a few existing technologies from Bitcoin's roadmap that were first proposed by Greg Maxwell and others*: weak blocks & network compression/IBLT to reduce orphan risk, and flexcap (or a variant of it, perhaps).

That is something else, perhaps from one of the research papers on future areas of interest.

Bitcoin Unlimited's main change at present is simply that, for better or worse, it makes it more convenient for miners and nodes to adjust the blocksize cap settings. This is done through a GUI menu, meaning users don't have to mod the Core code themselves like some do now. Planned improvements to BU include options that automatically mimic the blocksize settings of some Core BIPs, as well as blocksize proposals recommended by other luminaries.

The idea is that users would converge on a consensus Schelling point through various communication channels because of the overwhelming economic incentive to do so. The situation in a BU world would be no different than now except that there would be no reliance on Core (or XT) to determine from on high what the options are. BU rejects the idea that it is the job of Core (or XT, or BU) developers to govern policy on consensus or restrict the conveniently available policy options on blocksize.

BU supporters believe that to have it otherwise is the tail wagging the dog: the finding of market-favored consensus is not aided, but rather hindered, by attempts to spoonfeed consensus parameters to the users. (This is putting it gently. Having a controversial parameter set at a specific number by default would be spoonfeeding, not even having the option to change it is more like force-feeding.)

Widespread adoption of BU, or adoption of BU-like configurability of settings within Core/XT, would relegate developer-led BIPs on controversial changes to the status of mere recommendations. Proposals like 2-4-8 would be taken into consideration, but would have to compete in the market on their own without the artificial advantage of the current barrier of inconvenience and technical ability (users having to mod their code to deviate from Core settings).

BU does not support bigger blocks, nor smaller blocks; it is rather a tool for consensus on blocksize to emerge in a more natural, market-driven way - free of market intervention as it were.

Adam, if you are confident that, for instance, 2-4-8 scaling is the best option and would be supported by the market, I think you should either support BU or support a Core BIP to make the blocksize settings configurable within the Core client.

Right now the leaders of the dominant Bitcoin implementation are for a low blocksize cap, but imagine if the situation reverses and big blockists are in control, to the consternation of many in the community. I think you would not want them locking down the settings. You might say, "You folks are doing fine otherwise, but you are off on the blocksize cap. Why try to play central planner? Please leave it up to the market if you are so sure the market will like your huge blocks. People will follow your recommendations if they like them anyway, so what are you worried about?"

If I were Core maintainer, I would do the same. Perhaps I would set a higher default, but I would not take the option away from the user. To do so risks sudden consensus shocks due to friction effects, risks my position being undermined silently, and most of all assumes I know better than everyone else. I might set it at 10MB. But I may be wrong; I'd rather trust in the market, because none of us knows better than a million people all with skin in the game.

As for how communication to settle on a Schelling consensus happens, besides the usual out-of-band communication that happens now in the debate, there is also interest in adding a tool within BU to efficiently communicate information about blocksize settings across the network, thereby facilitating an emergent consensus.
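For instance, such a signal might be aggregated along these lines (purely my own illustration; neither the thread nor BU specifies an algorithm): find the largest block size that some quorum of advertising nodes would still accept.

```python
import math

# Purely illustrative aggregation (my assumption; no such algorithm is
# specified in the thread): given the block-size limits nodes broadcast,
# find the largest size that at least a `quorum` fraction would accept.
def emergent_limit(advertised_limits_mb, quorum=0.75):
    limits = sorted(advertised_limits_mb, reverse=True)
    need = math.ceil(len(limits) * quorum)  # nodes that must accept
    # The `need`-th largest advertised limit is the biggest block size
    # that at least `need` nodes would still consider valid.
    return limits[need - 1]

limits = [1, 1, 1, 2, 2, 4, 8, 8, 20, 32]  # MB, ten hypothetical nodes
print(emergent_limit(limits))  # 1 -- only 1MB clears the 75% quorum
```

A miner reading this signal would stay at 1MB even though most nodes accept more, which is why the quorum choice itself becomes part of the Schelling point.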

dynamic block-size game-theory

The game theory is the same as that arising in the choice of Core vs. XT vs. whatever (or among the BIPs by the miners and other stakeholders; if we look at the game theoretic considerations applying to the Core dev consensus process I'm sure you realize that problem is intractable). Miners and nodes have all the same choices now, except there is some additional friction introduced by Core's locking down of the blocksize settings, forcing miners and nodes to mod the Core code if they want to change them.

The question ought to be turned around: what are the game-theoretic considerations involved in having a monolithic reference client causing complicated issues of inertia, authority, and potential power grabs on top of the cleaner game theory? If tractability of a game theory analysis is the goal, surely BU is at least no more complicated than the situation under Core in the event of a hard fork.

How will the network not split into a myriad little shards without manual human coordination?

Ah. This is good. What I believe you are not noticing is that "manual human coordination" need not be top-down. Coordination can emerge, and it can be just as solid as any. Are you familiar with situations where it does? That would save a lot of ink.