
Topic: I hold no view seeking information: 4 to 8 MB now. Technical Objections? (Read 2249 times)

legendary
Activity: 2632
Merit: 1023
Quote
There are multiple problems with a block size increase in general, regardless of how big you make it. While 4 or 8 MB might sound reasonable to you, it is not really sustainable once you consider everything else that can result from larger blocks.

An important thing to keep in mind when designing large, robust systems is that you must always assume that the worst-case scenario can and will happen.

Firstly, there is the problem of quadratic sighashing. Increasing the block size to 4 MB means that we would be allowing a theoretical 4 MB transaction which, due to its size, can take a long time to validate. A block mined a while back took ~30 seconds to validate because it consisted of a single gigantic 1 MB transaction. Since sighashing time grows quadratically with transaction size, a similar 4 MB transaction in a 4 MB block would take roughly 16 times as long, about 480 seconds, to validate.

Secondly, increasing the block size increases the burden on full nodes in terms of bandwidth and disk space. The blockchain is already fairly large and growing at a fast pace: it gains ~1 GB every week or so. In the worst case, 4 MB blocks would mean the chain grows at ~4 GB per week. That growth is large and hard to sustain. Full nodes need to download that amount of data each week and upload it to multiple peers. That consumes a lot of bandwidth, and people will likely stop running full nodes due to the extra cost. Furthermore, it will become more and more difficult to bring new nodes online, since initial sync consumes so much bandwidth and disk space, so it is unlikely that people will start up new full nodes. Overall, this extra cost is a centralizing pressure and will result in fewer full nodes and an increased burden on those who currently run them. And larger blocks don't just affect bandwidth and disk space; they also require more processing power and memory to fully validate, which raises the minimum machine requirements as well.

There was a paper published a year or so ago which analyzed the network's ability to support blocks of various sizes. I think it concluded that, based on network bandwidth alone, the network might have been able to support 4 MB blocks and still keep most of its full nodes. However, it did not consider the machine specs required for larger blocks, so if you factor those in, the maximum block size the network can actually handle is likely smaller.

Lastly, such a change would require a hard fork. Hard forks are hard to coordinate: everyone must get on board and upgrade at the same time. By now, two block-size-increase hard forks have been attempted, and both have failed. With all of the politics, contention, and toxicity going on right now, I highly doubt that we would be able to reach the consensus required to activate such a hard fork. Additionally, planning, implementing, and testing a safe fork (hard or soft) takes a long time, so such a fork would not be ready for months, if not a year or more.



Quote
Mostly correct, but these problems are solvable.

FlexTrans (bundled in Bitcoin Classic) offers an already-coded solution to the quadratic hashing. We can also limit the transaction size or the number of sigops.

Bandwidth and disk space requirements would naturally increase, but Gavin did testing on 8 MB blocks, and if you think about it, even full 32 MB blocks only represent 1.68 TB a year of storage.

Hard forks require coordination, but many altcoins have successfully hard forked without any major issues that I'm aware of, and I don't think it would take a year. Even if it were done very abruptly, miners kicked off the main chain unexpectedly could simply rejoin.


OK... I see the quadratic issue seems solvable by part of Segwit or by FlexTrans, so this would seem to be at least a reasonable argument that this part of the solution should be acceptable to all parties.

As to the size issue, well, I feel 4 MB is not that large; even I could download that in time from where I am... I guess this argument is one of degree... and of the mechanisms used to distribute the data...
legendary
Activity: 3430
Merit: 3080
I had to re-download the 120 GB blockchain only recently, due (I think) to not allowing Bitcoin Core to shut down properly, which corrupted some part of the database.

If the blockchain were growing at 8x the rate (or, with Segwit, 20x or 40x if the base block were increased to 4 MB or 8 MB), I would have had to completely change all my plans around it; it would have taken so much more than the 2+ days it took (using a Sandy Bridge laptop with an external 1 TB Samsung 850 SSD). An 8 MB base block with a corresponding 32 MB witness block (i.e. 40x) would have been far too much to contemplate.


This illustrates that even a 100+ GB initial block download (IBD) and full validation is taxing in real-world conditions, today.
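As a rough illustration, here is a projection of those numbers forward; the calibration figures are taken from the post above, and the assumption that per-machine throughput stays flat as the chain grows is exactly that, an assumption:

Code:
# Rough IBD-time projection. Calibration (from the post above): ~120 GB
# synced in ~2 days on a Sandy Bridge laptop with an external SSD.
# ASSUMPTION: per-machine throughput stays flat as the chain grows.

OBSERVED_GB = 120.0
OBSERVED_DAYS = 2.0
gb_per_day = OBSERVED_GB / OBSERVED_DAYS   # ~60 GB/day

for multiple in (1, 8, 20, 40):            # growth multiples discussed above
    chain_gb = OBSERVED_GB * multiple
    print(f"{chain_gb:>6.0f} GB chain -> ~{chain_gb / gb_per_day:.0f} days of IBD")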


Why are on-chain advocates never interested in alternative hard-fork proposals, such as improving the transaction encoding efficiency or using more space-efficient signatures? By always insisting on plain block size increases and nothing else, on-chain big-blockers ignore the threat of Mike Hearn's infamous "only 8 Google datacenters will run the Bitcoin network" future scenario. Jonald Fyookball talks constantly about this, and yet recently admitted he doesn't even run a Bitcoin node; he thinks someone else should do it for him.

If we increase even to 4 MB, there is a risk that node numbers will go down, because the computing and network resources needed to keep up with that rate of blockchain growth will overwhelm too many of the people who run nodes today. And I'm willing to risk the 4 MB (that is, 1 MB base + 3 MB witness blocks) that Segwit increases the block size to. 8 MB? Forget it for at least 2 years; it's bound to hurt the node count even more than 4 MB.
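For reference, Segwit (BIP 141) actually caps block weight rather than raw size: weight = 3 × base_size + total_size, limited to 4,000,000 weight units, so the ~4 MB worst case is only approached by witness-heavy blocks. A minimal sketch of that rule:

Code:
# Segwit sizing under BIP 141: block weight = 3 * base_size + total_size,
# capped at 4,000,000 weight units. Sizes below are in bytes.

MAX_BLOCK_WEIGHT = 4_000_000

def block_weight(base_bytes: int, witness_bytes: int) -> int:
    """Weight of a block with base_bytes of non-witness data
    and witness_bytes of witness data."""
    total = base_bytes + witness_bytes
    return 3 * base_bytes + total

# 1 MB of pure non-witness data already hits the cap (the pre-Segwit case):
print(block_weight(1_000_000, 0))          # 4,000,000
# A witness-heavy mix can push the total size well past 1 MB:
print(block_weight(250_000, 3_000_000))    # 4,000,000 (3.25 MB total)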
legendary
Activity: 1120
Merit: 1003
Quote
Bandwidth and disk space requirements would naturally increase, but Gavin did testing on 8 MB blocks, and if you think about it, even full 32 MB blocks only represent 1.68 TB a year of storage.

Not only that, but we're already seeing multi-TB hard drives at Best Buy. I'm old enough to remember that not so long ago (2000) I was blown away when a guy ordered a whole GB of memory for his Sun Starfire server. He had a WHOPPING 4 GB total in it! That was an insane amount of memory in a server the size of a refrigerator. Now most people have more than that in their frickin' laptops.

By the time Bitcoin scales up to Visa sizes, most people will probably have 4 TB of RAM and 500 TB (or more) hard drives.
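For scale, a very rough estimate of what "Visa sizes" would imply on-chain; the throughput and transaction-size numbers here are illustrative assumptions, not measurements:

Code:
# Very rough "Visa scale" arithmetic. ASSUMPTIONS (illustrative only):
# ~2,000 tx/s sustained, ~250 bytes per transaction, one block per ~10 min.

TPS = 2_000
TX_BYTES = 250
BLOCK_INTERVAL_S = 600

block_mb = TPS * TX_BYTES * BLOCK_INTERVAL_S / 1_000_000
per_year_tb = block_mb * 6 * 24 * 365 / 1_000_000
print(f"~{block_mb:.0f} MB blocks, ~{per_year_tb:.0f} TB/year chain growth")
# -> ~300 MB blocks, ~16 TB/year under these assumptions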
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
Quote
There are multiple problems with a block size increase in general, regardless of how big you make it. While 4 or 8 MB might sound reasonable to you, it is not really sustainable once you consider everything else that can result from larger blocks.

An important thing to keep in mind when designing large, robust systems is that you must always assume that the worst-case scenario can and will happen.

Firstly, there is the problem of quadratic sighashing. Increasing the block size to 4 MB means that we would be allowing a theoretical 4 MB transaction which, due to its size, can take a long time to validate. A block mined a while back took ~30 seconds to validate because it consisted of a single gigantic 1 MB transaction. Since sighashing time grows quadratically with transaction size, a similar 4 MB transaction in a 4 MB block would take roughly 16 times as long, about 480 seconds, to validate.

Secondly, increasing the block size increases the burden on full nodes in terms of bandwidth and disk space. The blockchain is already fairly large and growing at a fast pace: it gains ~1 GB every week or so. In the worst case, 4 MB blocks would mean the chain grows at ~4 GB per week. That growth is large and hard to sustain. Full nodes need to download that amount of data each week and upload it to multiple peers. That consumes a lot of bandwidth, and people will likely stop running full nodes due to the extra cost. Furthermore, it will become more and more difficult to bring new nodes online, since initial sync consumes so much bandwidth and disk space, so it is unlikely that people will start up new full nodes. Overall, this extra cost is a centralizing pressure and will result in fewer full nodes and an increased burden on those who currently run them. And larger blocks don't just affect bandwidth and disk space; they also require more processing power and memory to fully validate, which raises the minimum machine requirements as well.

There was a paper published a year or so ago which analyzed the network's ability to support blocks of various sizes. I think it concluded that, based on network bandwidth alone, the network might have been able to support 4 MB blocks and still keep most of its full nodes. However, it did not consider the machine specs required for larger blocks, so if you factor those in, the maximum block size the network can actually handle is likely smaller.

Lastly, such a change would require a hard fork. Hard forks are hard to coordinate: everyone must get on board and upgrade at the same time. By now, two block-size-increase hard forks have been attempted, and both have failed. With all of the politics, contention, and toxicity going on right now, I highly doubt that we would be able to reach the consensus required to activate such a hard fork. Additionally, planning, implementing, and testing a safe fork (hard or soft) takes a long time, so such a fork would not be ready for months, if not a year or more.



Mostly correct, but these problems are solvable.

FlexTrans (bundled in Bitcoin Classic) offers an already-coded solution to the quadratic hashing. We can also limit the transaction size or the number of sigops.

Bandwidth and disk space requirements would naturally increase, but Gavin did testing on 8 MB blocks, and if you think about it, even full 32 MB blocks only represent 1.68 TB a year of storage.
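That 1.68 TB figure checks out under the usual assumptions (~144 blocks per day, decimal terabytes). A quick sanity check:

Code:
# Sanity check of the "full 32 MB blocks = 1.68 TB/year" figure.
BLOCKS_PER_YEAR = 6 * 24 * 365   # 52,560 blocks at ~10 minutes each

block_size_mb = 32
per_year_tb = BLOCKS_PER_YEAR * block_size_mb / 1_000_000
print(f"~{per_year_tb:.2f} TB/year")   # -> ~1.68 TB/year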

Hard forks require coordination, but many altcoins have successfully hard forked without any major issues that I'm aware of, and I don't think it would take a year. Even if it were done very abruptly, miners kicked off the main chain unexpectedly could simply rejoin.
staff
Activity: 3458
Merit: 6793
Just writing some code
There are multiple problems with a block size increase in general, regardless of how big you make it. While 4 or 8 MB might sound reasonable to you, it is not really sustainable once you consider everything else that can result from larger blocks.

An important thing to keep in mind when designing large, robust systems is that you must always assume that the worst-case scenario can and will happen.

Firstly, there is the problem of quadratic sighashing. Increasing the block size to 4 MB means that we would be allowing a theoretical 4 MB transaction which, due to its size, can take a long time to validate. A block mined a while back took ~30 seconds to validate because it consisted of a single gigantic 1 MB transaction. Since sighashing time grows quadratically with transaction size, a similar 4 MB transaction in a 4 MB block would take roughly 16 times as long, about 480 seconds, to validate.
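For intuition, a minimal back-of-the-envelope sketch of that scaling claim (not real sighashing code; it simply assumes worst-case validation time grows with the square of transaction size, calibrated to the ~30 s / 1 MB data point above):

Code:
# Back-of-the-envelope sketch of quadratic sighash scaling.
# ASSUMPTION: worst-case validation time ~ (transaction size)^2,
# calibrated to the ~30 s observed for a single 1 MB transaction.

BASELINE_SIZE_MB = 1.0
BASELINE_TIME_S = 30.0

def est_validation_time_s(tx_size_mb: float) -> float:
    """Estimated worst-case validation time for one big transaction."""
    return BASELINE_TIME_S * (tx_size_mb / BASELINE_SIZE_MB) ** 2

for size in (1, 2, 4, 8):
    print(f"{size} MB transaction -> ~{est_validation_time_s(size):.0f} s")
# 4 MB -> ~480 s (16x the 1 MB case); 8 MB -> ~1920 s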

Secondly, increasing the block size increases the burden on full nodes in terms of bandwidth and disk space. The blockchain is already fairly large and growing at a fast pace: it gains ~1 GB every week or so. In the worst case, 4 MB blocks would mean the chain grows at ~4 GB per week. That growth is large and hard to sustain. Full nodes need to download that amount of data each week and upload it to multiple peers. That consumes a lot of bandwidth, and people will likely stop running full nodes due to the extra cost. Furthermore, it will become more and more difficult to bring new nodes online, since initial sync consumes so much bandwidth and disk space, so it is unlikely that people will start up new full nodes. Overall, this extra cost is a centralizing pressure and will result in fewer full nodes and an increased burden on those who currently run them. And larger blocks don't just affect bandwidth and disk space; they also require more processing power and memory to fully validate, which raises the minimum machine requirements as well.
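Those growth figures are easy to reproduce. A rough sketch, assuming one block per ~10 minutes and decimal units:

Code:
# Worst-case chain growth if every block is full.
# ASSUMPTION: ~1008 blocks per week (one per ~10 minutes), decimal units.

BLOCKS_PER_WEEK = 6 * 24 * 7   # ~1008

def weekly_growth_gb(block_size_mb: float) -> float:
    return BLOCKS_PER_WEEK * block_size_mb / 1000.0

for size_mb in (1, 4, 8):
    gb = weekly_growth_gb(size_mb)
    print(f"{size_mb} MB blocks -> ~{gb:.0f} GB/week (~{gb * 52 / 1000:.2f} TB/year)")
# 1 MB -> ~1 GB/week; 4 MB -> ~4 GB/week; 8 MB -> ~8 GB/week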

There was a paper published a year or so ago which analyzed the network's ability to support blocks of various sizes. I think it concluded that, based on network bandwidth alone, the network might have been able to support 4 MB blocks and still keep most of its full nodes. However, it did not consider the machine specs required for larger blocks, so if you factor those in, the maximum block size the network can actually handle is likely smaller.

Lastly, such a change would require a hard fork. Hard forks are hard to coordinate: everyone must get on board and upgrade at the same time. By now, two block-size-increase hard forks have been attempted, and both have failed. With all of the politics, contention, and toxicity going on right now, I highly doubt that we would be able to reach the consensus required to activate such a hard fork. Additionally, planning, implementing, and testing a safe fork (hard or soft) takes a long time, so such a fork would not be ready for months, if not a year or more.
legendary
Activity: 1120
Merit: 1003
Part of the reason so many people are switching clients away from Core is that they have tried to ask this same question and just got name-called and banned. What does that tell you?
legendary
Activity: 2632
Merit: 1023
What prevents, say, a block size increase to 4 MB or 8 MB now, or some such, given that BTC can reasonably be said to be achieving higher usage, and bandwidth and storage are cheaper?

Why would the pro Segwit camp be against this?

Why would the pro BU camp be against this?

I am seeking technical arguments only and maybe financial interests as a consequence of the alternatives, not emotive responses.

At present I fail to see why both camps could not just upgrade to 4 or 8 MB and continue with their BU and Segwit campaigns. It just raises the threshold and allows more usage at lower fees.

I also fail to see, if Satoshi's original code allowed 32 MB blocks, why any party would object to going up to this size as usage increases.

Also, I don't buy that you will get spam in blocks, because you have to pay to be included, so all transactions are legitimate in that sense.
