
Topic: How a floating blocksize limit inevitably leads towards centralization - page 2

hero member
Activity: 815
Merit: 1000

Do any of you guys remember my "swarm client" idea? It would move Bitcoin from O(n*m) to O(n), and the network would share the load of both storage and processing.


Searching the forum for "swarm client" turns up nothing.  Link?
https://bitcointalksearch.org/topic/the-swarm-client-proposal-reminder-15-btc-pledged-so-far-now-worth-3255-87763
(Second search link)

Quote
I read your proposal, and could find no details about how a swarm client could actually divide up the task of verification of blocks.  That or I simply didn't understand it. 
The details are a little hairy, but it is actually very simple: It is difficult to validate, BUT easy to show a flaw in a block.

To show a block is invalid, just one S-client needs to share with the rest of the network that it has found a double spend. The accusation can be proved by sending along the transaction history for the address in question.
This history cannot be faked because of the block's tree data structure.

Even if the S-clients keep a full history of each address they watch, and exchange it when an accusation is made, the computing power saved should still be substantial, despite many addresses being tangled together.
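
Just to make the accusation mechanism concrete, here is a minimal sketch of the check a node might run. The Outpoint/Tx structures are hypothetical illustrations, not something spelled out in the proposal; in the real scheme each transaction would also carry a Merkle path proving its inclusion in a block, which is what makes the shared history unforgeable.

Code:
from dataclasses import dataclass

# Hypothetical, simplified structures for illustration only.
@dataclass(frozen=True)
class Outpoint:
    tx_hash: str   # hash of the transaction being spent
    index: int     # which output of that transaction

@dataclass
class Tx:
    tx_hash: str
    inputs: tuple  # tuple of Outpoints this transaction spends

def is_double_spend_accusation_valid(tx_a: Tx, tx_b: Tx) -> bool:
    """An accusation holds if two distinct transactions spend the same outpoint."""
    if tx_a.tx_hash == tx_b.tx_hash:
        return False                      # same transaction, not a conflict
    return bool(set(tx_a.inputs) & set(tx_b.inputs))

# Example: two transactions both spending output 0 of transaction "ab12..."
spent = Outpoint("ab12...", 0)
print(is_double_spend_accusation_valid(
    Tx("f00d...", (spent,)),
    Tx("beef...", (spent,))))             # True -> the block is provably invalid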

There was also talk of combining this with a 5-10 year ledger system which would put a cap on the running blockchain size.
legendary
Activity: 1232
Merit: 1094
But the miners would still need to check those blocks, and eventually so would everyone else.  This could introduce a new network attack vector.

I think miners are going to need to verify everything, at the end of the day.  However, it may be possible to do that in a p2p way.

I made a suggestion in another thread about having "parity" rules for transactions. 

A transaction of the form:

Input 0: tx_hash=1234567890/out=2
Input 1: tx_hash=2345678901/out=1
Output 0:

would have mixed parity, since one of its inputs comes from a transaction with an even hash and the other from a transaction with an odd hash.

However, a parity rule could be added that requires all of a transaction's inputs to have either odd or even parity.

Input 0: tx_hash=1234567897/out=0
Input 1: tx_hash=2345678901/out=1
Output 0:

Here both inputs come from transactions with odd hashes, so the transaction has odd parity. If the block height is even, then only even-parity transactions would be allowed, and vice versa for odd.
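
A rough sketch of that check (the hashes are taken from the toy examples above, and the helper names are just illustrative):

Code:
def tx_parity(input_tx_hashes):
    """Return 'even', 'odd', or 'mixed' based on the parity of the hashes
    of the transactions whose outputs are being spent."""
    parities = {int(h, 16) % 2 for h in input_tx_hashes}
    if parities == {0}:
        return "even"
    if parities == {1}:
        return "odd"
    return "mixed"

def allowed_in_block(input_tx_hashes, block_height):
    """Even blocks take only even-parity transactions, odd blocks only odd-parity ones."""
    wanted = "even" if block_height % 2 == 0 else "odd"
    return tx_parity(input_tx_hashes) == wanted

print(tx_parity(["1234567890", "2345678901"]))                            # mixed
print(tx_parity(["1234567897", "2345678901"]))                            # odd
print(allowed_in_block(["1234567897", "2345678901"], block_height=101))   # True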

If a super-majority of the network agreed with that rule, then it wouldn't cause a fork.  Mixed parity blocks would just be orphaned.

The nice feature of the rule is that it allows blocks to be prepared in advance.

If the next block is an odd block, then a P2P miner system could broadcast a list of proposed transactions for inclusion in the following even block and have them verified in advance.  As long as all the inputs to the proposed transactions come from even transactions, they won't be invalidated by the next block, since under the rule it will only contain transactions whose inputs come from odd transactions.

This gives the P2P system time to reject invalid transactions.

All nodes on the network could be ready to switch to the next block immediately, without having to even read the new block (other than check the header).  Verification could happen later.
legendary
Activity: 1708
Merit: 1010
I read your proposal, and could find no details about how a swarm client could actually divide up the task of verification of blocks.  That or I simply didn't understand it. 

It looks like a node picks a random number between 0 and N-1 and then checks transactions where id = tx-hash mod N.


Who and why?  That's the vague part.  Are miners not checking the blocks themselves, or are they depending upon others to spot-check sections?  How does that work, since it's the miners who will feel the losses should they mine a block with an invalid transaction?  Realistically, it'd be at least as effective to permit non-mining full clients to 'spot check' blocks in full, but on a random scale.  Say they only check 30% of the blocks they see before they forward them.  All blocks should be fully checked before being integrated into the local blockchain, and I can't see a way around that process.
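
A toy sketch of that relay policy, assuming verify_block, relay, and append_to_chain are placeholders for whatever the client actually does:

Code:
import random

SPOT_CHECK_RATE = 0.30   # fraction of relayed blocks fully verified up front

def handle_new_block(block, verify_block, relay, append_to_chain):
    """Spot-check a random 30% of blocks before forwarding, but always
    verify fully before integrating into the local chain."""
    if random.random() < SPOT_CHECK_RATE:
        if not verify_block(block):
            return                      # drop and do not relay an invalid block
    relay(block)
    if verify_block(block):             # the full check is still mandatory here
        append_to_chain(block)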

Quote

Having each new client verify a random 1% of the blocks would be a reasonable thing to do, if combined with an alert system.  This would keep miners honest.

But the miners would still need to check those blocks, and eventually so would everyone else.  This could introduce a new network attack vector.
donator
Activity: 668
Merit: 500

Previously, I supported changing the protocol in a carefully planned way to improve the end-user experience, but recently I discovered that you can double-spend on both the original chain and the new chain after a hard fork. That means the promise of preventing double-spending and a limited supply is broken, which is much more severe than I thought.


That simply means that, after a very short period of chaos post-fork, simple economic incentives would VERY quickly force a consensus on one of the chains.  The chaos would not be permitted to continue, by anyone, whichever side they personally want to "win", as it would cost them too much.
donator
Activity: 668
Merit: 500
I would hate to see the limit raised before the most inefficient uses of blockchain space, like satoshidice and coinad, change the way they operate.
Who gets to decide what's inefficient?  You?  That's precisely the problem - trying to centralize the decision.  It should be made by those doing the work according to their own economic incentives and desires.

SD haters (and I'm not particularly a fan) like Luke-jr get to not include their txns.  Others like Ozcoin apparently have no issue with the likes of SD and are happy to take their money.  Great, everyone gets a vote according to the effort they put in.

In addition I would hate to see alternatives to raising the limit fail to be developed because everyone assumes the limit will be raised. I also get the sense that Gavin's mind is already made up and the question to him isn't if the limit will be raised, but when and how. That may or may not be actually true, but as long as he gives that impression, and the Bitcoin Foundation keeps promoting the idea that Bitcoin transactions are always going to be almost free, raising the block limit is inevitable.
Ah, now we see your real agenda - you want to fund your pet projects of off-chain transaction consolidation.

If that is such a great idea - and it may well be, I have no problem with it - then please realise that it will get funded.

If it isn't getting funded, then please ask yourself why.

But don't try and force others to subsidize what you want to see happen.  Why not do it yourself if it's a winning idea for the end users?

Likely neither you nor the rest are doing it because there's no real economic incentive to do so - for now, perhaps.  But that's what entrepreneurship is all about.
legendary
Activity: 1232
Merit: 1094
I read your proposal, and could find no details about how a swarm client could actually divide up the task of verification of blocks.  That or I simply didn't understand it. 

It looks like a node picks a random number between 0 and N-1 and then checks transactions where id = tx-hash mod N.
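
A minimal sketch of that sharding idea, assuming each node verifies only the slice of transactions assigned to its randomly chosen slot (the constant and names are illustrative):

Code:
import random

N_SHARDS = 16                      # how finely the verification work is split

def my_shard():
    """Each node independently picks a random slot in [0, N-1]."""
    return random.randrange(N_SHARDS)

def txs_for_me(block_tx_hashes, shard):
    """Verify only transactions where id = tx_hash mod N falls in our slot."""
    return [h for h in block_tx_hashes if int(h, 16) % N_SHARDS == shard]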

Quote
For example, pool miners already don't have to verify blocks or transactions.

In fact, it would be much easier to write software that doesn't do it at all.  At the moment, the minting reward is much higher than the tx fees, so it is more efficient to just mint and not bother with the hassle of handling transactions.

If there is a 0.1% chance that a transaction is false, then including it in a block effectively costs the miner 25 * 0.1% = 0.025BTC, since if it is invalid, and he wins the block, the block will be discarded by other miners.

P2P pools would be better set up so that they don't risk it, until tx fees are the main source of income.

Quote
And we don't need swarm clients to "verify the blockchain", because all but the most recent has already been verified, unless you are starting up a fresh install of a full client.  With light clients we can skip even that part, to a degree.

Having each new client verify a random 1% of the blocks would be a reasonable thing to do, if combined with an alert system.  This would keep miners honest.
legendary
Activity: 1708
Merit: 1010

Do any of you guys remember my "swarm client" idea? It would move Bitcoin from O(n*m) to O(n), and the network would share the load of both storage and processing.


Searching the forum for "swarm client" turns up nothing.  Link?

EDIT: Never mind, I found it.  And I think that the main reason no one ever found fault with it is that no one who knew the details of how a bitcoin block is actually constructed bothered to read it, or took your proposal seriously enough to respond.  I read your proposal, and could find no details about how a swarm client could actually divide up the task of verification of blocks.  That or I simply didn't understand it. 

For example, pool miners already don't have to verify blocks or transactions.  They never even see them, because that is unnecessary.  The mining is the hashing of the 80-byte header, nothing more.  Only if the primary nonce is exhausted is anything in the block's dataset rearranged, and that is performed by the pool server.  We could have blocks of a gigabyte each, and that would have a negligible effect on pool miners.  And we don't need swarm clients to "verify the blockchain", because all but the most recent has already been verified, unless you are starting up a fresh install of a full client.  With light clients we can skip even that part, to a degree.
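
For illustration, this is roughly the grind a miner performs over the 80-byte header: 76 fixed bytes plus a 4-byte nonce, double-SHA256, compare against the target. The dummy header bytes and the deliberately easy target here are just so the example terminates quickly; nothing in the block body is touched by the hashing loop itself.

Code:
import hashlib
import struct

def mine(header76: bytes, target: int, max_nonce: int = 2**32):
    """Hash the 80-byte header until the double-SHA256 digest, read as a
    little-endian integer, is below the target.  Only the pool server ever
    rearranges the block body (e.g. the coinbase) when the 32-bit nonce
    space is exhausted."""
    for nonce in range(max_nonce):
        header = header76 + struct.pack("<I", nonce)
        digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        if int.from_bytes(digest, "little") < target:
            return nonce, digest
    return None

# Dummy 76-byte prefix and an absurdly easy target, just to show the loop.
print(mine(b"\x00" * 76, target=1 << 250))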
hero member
Activity: 815
Merit: 1000
So...  I start from "more transactions == more success"

I strongly feel that we shouldn't aim for Bitcoin topping out as a "high power money" system that can process only 7 transactions per second.

Hey, I want a pony too. But Bitcoin is an O(n) system, and we have no choice but to limit n.
Actually it's O(n*m), where m is the number of full clients.
Do any of you guys remember my "swarm client" idea? It would move Bitcoin from O(n*m) to O(n), and the network would share the load of both storage and processing.

No one ever found flaws in it, and those who bothered to read it generally thought it was pretty neat. Just saying. Plus, it requires no hard fork and can coexist with current clients.

This would also kill the malice-driven incentive for miners to drive out other miners, as it would no longer work (it would only burden the WHOLE network).
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
 I like the idea of taking a bandwidth metric, like propagation time, of all connected nodes and using that to determine max block size.  Done properly, the bitcoin network should be able to optimize itself to whatever the current average bandwidth happens to be, without overwhelming it.

The question is, how do we collect accurate data on propagation time?  And then how do we use that data in a way that will result in a uniform computation for the entire network?

Yes, the metric is the hard part.  I'm not familiar with the inner workings of the mining software so this may be an amateur question: Is there typically any bandwidth "downtime" during the ~10 minutes a miner is hashing away?  If so, could a sort of "speed test" be taken with a uniform sized piece of data between nodes?

EDIT:
Another half-baked thought -- Couldn't each node also report the amount of time it took to download the last block, the aggregate of which could be used for determining size?  I think I remember Gavin suggesting something similar.

The more time I spend thinking about the max block size, the more I am pushed towards concluding that block propagation and verification times are what matter most. I am interested to read that you have a similar view.
full member
Activity: 135
Merit: 107
 I like the idea of taking a bandwidth metric, like propagation time, of all connected nodes and using that to determine max block size.  Done properly, the bitcoin network should be able to optimize itself to whatever the current average bandwidth happens to be, without overwhelming it.

The question is, how do we collect accurate data on propagation time?  And then how do we use that data in a way that will result in a uniform computation for the entire network?

Yes, the metric is the hard part.  I'm not familiar with the inner workings of the mining software so this may be an amateur question: Is there typically any bandwidth "downtime" during the ~10 minutes a miner is hashing away?  If so, could a sort of "speed test" be taken with a uniform sized piece of data between nodes?

EDIT:
Another half-baked thought -- Couldn't each node also report the amount of time it took to download the last block, the aggregate of which could be used for determining size?  I think I remember Gavin suggesting something similar.
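
A rough sketch of how such reports might feed a floating limit, assuming nodes gossip how long the last block took them to download. The median aggregation, the 30-second target, and the step cap are made-up constants for illustration, not anything Gavin or anyone else has specified:

Code:
from statistics import median

TARGET_SECONDS = 30                  # how long we tolerate a block taking to spread
MIN_SIZE, MAX_STEP = 1_000_000, 1.2  # size floor and per-adjustment growth cap

def next_max_block_size(current_max, reported_download_times):
    """Grow the limit when the median node downloads blocks comfortably
    under the target, shrink it when blocks are taking too long."""
    m = median(reported_download_times)           # seconds, as reported by peers
    factor = min(MAX_STEP, max(1 / MAX_STEP, TARGET_SECONDS / m))
    return max(MIN_SIZE, int(current_max * factor))

# Example: peers report the last block took 12-50 seconds to download.
print(next_max_block_size(1_000_000, [12, 18, 25, 40, 50]))   # 1,200,000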
legendary
Activity: 1708
Merit: 1010
  I like the idea of taking a bandwidth metric, like propagation time, of all connected nodes and using that to determine max block size.  Done properly, the bitcoin network should be able to optimize itself to whatever the current average bandwidth happens to be, without overwhelming it.

The question is, how do we collect accurate data on propagation time?  And then how do we use that data in a way that will result in a uniform computation for the entire network?
full member
Activity: 135
Merit: 107
OK, this thread has been a bear to read but I'm glad I did.  I understand the desire to limit max block size due to bandwidth limits and I certainly do not want a Google-esque datacenter centralization of mining.  Since bandwidth is the primary issue (storage being secondary) then I'm with the people who focus their solutions around bandwidth and not things like profits or hash rate.  I like the idea of taking a bandwidth metric, like propagation time, of all connected nodes and using that to determine max block size.  Done properly, the bitcoin network should be able to optimize itself to whatever the current average bandwidth happens to be, without overwhelming it.
WiW
sr. member
Activity: 277
Merit: 250
"The public is stupid, hence the public will pay"
And yet, somehow when the reward got cut in half (block fees went down) the hash rate went down. Doh!

And yet, somehow, when the hash rate went down nobody successfully attacked the network. What's your point? What does this have to do with decentralization?
full member
Activity: 154
Merit: 100
The absolute minimum transaction size(1) is for single-input, single-output transactions. They are 192 bytes each, 182 if transaction combining is aggressively used. 1 MiB per 10 minutes / 182 bytes ≈ 9.6 tx/s

Ok, let's go with 9.6 tps. That means roughly 25,000,000 transactions per month. If Bitcoin becomes a payment backbone where users' transactions are reconciled monthly, the current blocksize limit supports 25 million users. I strongly dislike hard forks, so until we have several million users, the blocksize needs to be left alone.
When the user base is in the several millions, a hard fork becomes even harder. Actually, that would almost be an event that could kill Bitcoin.

Bullshit. Bitcoin is entirely voluntary and nodes are entirely sovereign. What this means is that if you don't like the rules, you can fork Bitcoin and go your separate way where your bitcoins are incompatible with mine. My Bitcoin doesn't get hurt from that, well only as much as the exchange rate would likely drop due to decreased demand if many users left. And you can have a new Bitcoin out of thin air with new rules and if you can find enough demand for it it can also be valuable and keep a respectable exchange rate.


There is no need for a one size fits all solution with Bitcoin. We can have as many forks as we want. And you can use the one the rules of which you agree with and I'll stick with the one the rules of which I agree with.
When I say harder, I mean a hard fork event that quickly brings most of the dominant hashing power with it and resolves the general uncertainty about which block chain will win. When you have that many users, it is harder to get to that point.
Everyone can make a fork to his own liking; it is up to others to decide whether to follow or not.
legendary
Activity: 1078
Merit: 1003
The absolute minimum transaction size(1) is for single-input, single-output transactions. They are 192 bytes each, 182 if transaction combining is aggressively used. 1 MiB per 10 minutes / 182 bytes ≈ 9.6 tx/s

Ok, let's go with 9.6 tps. That means roughly 25,000,000 transactions per month. If Bitcoin becomes a payment backbone where users' transactions are reconciled monthly, the current blocksize limit supports 25 million users. I strongly dislike hard forks, so until we have several million users, the blocksize needs to be left alone.
When the user base is in the several millions, a hard fork becomes even harder. Actually, that would almost be an event that could kill Bitcoin.

Bullshit. Bitcoin is entirely voluntary and nodes are entirely sovereign. What this means is that if you don't like the rules, you can fork Bitcoin and go your separate way where your bitcoins are incompatible with mine. My Bitcoin doesn't get hurt from that, well only as much as the exchange rate would likely drop due to decreased demand if many users left. And you can have a new Bitcoin out of thin air with new rules and if you can find enough demand for it it can also be valuable and keep a respectable exchange rate.


There is no need for a one size fits all solution with Bitcoin. We can have as many forks as we want. And you can use the one the rules of which you agree with and I'll stick with the one the rules of which I agree with.
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
...

And yet, somehow when the reward got cut in half (block fees went down) the hash rate went down. Doh!



Perhaps we can agree here. The block halving to 25 BTC was the most significant event affecting hash power in 4 years, and we might have predicted that it would take a couple of years for the hash power to return to end-Nov 2012 levels. Yet within a few months the peak is being attained again. Sure, the few ASICs running make a big difference, but the halving did not stop millions of dollars of ongoing new investment, which secures the network.
legendary
Activity: 1064
Merit: 1001
...

And yet, somehow when the reward got cut in half (block fees went down) the hash rate went down. Doh!


WiW
sr. member
Activity: 277
Merit: 250
"The public is stupid, hence the public will pay"
Nice job completely missing the point of the original post. Here's the short-bus explanation for the challenged:

small max block size: high fees, secure network, decentralized mining

large max block size: low fees, lower hash rate, centralized mining.

Any questions?

Yeah, how did you miss my entire post (and previous ones)? I refer to each and every thing you mention here. Why do high fees lead to a secure network and decentralized mining? I don't even think network security necessarily correlates with decentralization.

I'm making a point to the opposite, and I even explain how to prove me wrong. Here's a short-bus explanation:

Small max block size: High fees, high motivation for abuse, centralization (I even provide an example)

Large max block size: Low fees, low motivation for abuse, no monopoly (I even provide an example)
full member
Activity: 154
Merit: 100
The absolute minimum transaction size(1) is for single-input, single-output transactions. They are 192 bytes each, 182 if transaction combining is aggressively used. 1 MiB per 10 minutes / 182 bytes ≈ 9.6 tx/s

Ok, let's go with 9.6 tps. That means roughly 25,000,000 transactions per month. If Bitcoin becomes a payment backbone where users' transactions are reconciled monthly, the current blocksize limit supports 25 million users. I strongly dislike hard forks, so until we have several million users, the blocksize needs to be left alone.
When the user base is in the several millions, a hard fork becomes even harder. Actually, that would almost be an event that could kill Bitcoin.
legendary
Activity: 3878
Merit: 1193
The absolute minimum transaction size(1) is for single-input, single-output transactions. They are 192 bytes each, 182 if transaction combining is aggressively used. 1 MiB per 10 minutes / 182 bytes ≈ 9.6 tx/s

Ok, let's go with 9.6 tps. That means roughly 25,000,000 transactions per month. If Bitcoin becomes a payment backbone where users' transactions are reconciled monthly, the current blocksize limit supports 25 million users. I strongly dislike hard forks, so until we have several million users, the blocksize needs to be left alone.
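
For reference, the arithmetic behind those figures, using the 182-byte minimum transaction quoted above and one transaction per user per month:

Code:
MIB = 1024 * 1024            # 1 MiB block size limit, in bytes
TX_BYTES = 182               # minimum single-input, single-output transaction
BLOCK_SECONDS = 600          # one block every ~10 minutes

tps = MIB / TX_BYTES / BLOCK_SECONDS
per_month = tps * 60 * 60 * 24 * 30

print(f"{tps:.1f} tx/s")             # ~9.6 tx/s
print(f"{per_month:,.0f} tx/month")  # ~24.9 million, i.e. roughly 25 million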