
Topic: Bigger blocks coming in release 0.11

legendary
Activity: 1652
Merit: 1015
May 05, 2015, 11:42:27 AM
#51
But no! The problem is not storage, but bandwidth. At least that's what many claim.

Bandwidth becomes a problem at over 1 MB every 10 minutes?  My moderate Internet connection can download 1 MB every 1/8th of a second.

Yes, but apparently there are people with 56k connections who want to run full nodes...

And now that I think about it, it's still enough to handle 20 MB every 10 minutes, isn't it?

Yeah, I remember those 56k modem days. That's 56,000 bits per second, not bytes. It wouldn't cope with 20 MB blocks.
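For anyone who wants to check that arithmetic, a minimal sketch in C++ (the 20 MB figure is the proposed limit; everything else is basic unit conversion):

#include <cstdio>

int main() {
    const double block_bytes = 20e6;     // proposed 20 MB blocks
    const double block_secs  = 600.0;    // ~10-minute block interval
    const double modem_bps   = 56000.0;  // 56k modem: bits per second

    const double needed_bps = block_bytes * 8.0 / block_secs;
    std::printf("sustained rate needed: ~%.0f kbit/s\n", needed_bps / 1000.0);
    std::printf("56k modem supplies:     %.0f kbit/s\n", modem_bps / 1000.0);
    // ~267 kbit/s needed vs 56 kbit/s available: roughly 5x short, before
    // counting relay overhead or serving blocks to other peers.
    return 0;
}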

hero member
Activity: 658
Merit: 500
May 05, 2015, 11:15:49 AM
#50
But no! The problem is not storage, but bandwidth. At least that's what many claim.

Bandwidth becomes a problem at over 1 MB every 10 minutes?  My moderate Internet connection can download 1 MB every 1/8th of a second.

Yes, but apparently there are people with 56k connections who want to run full nodes...

And now that I think about it, it's still enough to handle 20 MB every 10 minutes, isn't it?
hero member
Activity: 493
Merit: 500
May 05, 2015, 10:42:09 AM
#49
But no! The problem is not storage, but bandwidth. At least that's what many claim.

Bandwidth becomes a problem at over 1 MB every 10 minutes?  My moderate Internet connection can download 1 MB every 1/8th of a second.
hero member
Activity: 658
Merit: 500
May 05, 2015, 10:34:41 AM
#48
One small step for future upscaling, one Giant Leap towards complete centralisation.  Lips sealed
How is it a "Giant Leap" to complete centralization? Blocks won't instantly become 20 MB, and by the time they do, storage won't be much more of a burden than it is today. Even with every block full at 20 MB we are talking about roughly a terabyte a year, which would require one hell of a busy network. Five years of block chain at that rate would cost only about $120 in drives today (http://www.newegg.com/Product/Product.aspx?Item=9SIA5AD2RS3903), and if you don't think 5 TB will be a lot cheaper in five years, just look at my previous post. Again, this assumes full block capacity, which we won't see for quite some time. The idea that nodes will centralize is only a theory, and it doesn't hold much water.

But no! The problem is not storage, but bandwidth. At least that's what many claim.
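To put numbers on the storage claim above, a quick back-of-the-envelope sketch (assuming, as the post does, that every block is completely full at 20 MB):

#include <cstdio>

int main() {
    const double block_mb       = 20.0;    // every block full at 20 MB
    const double blocks_per_day = 144.0;   // ~one block every 10 minutes
    const double days_per_year  = 365.0;

    const double tb_per_year = block_mb * blocks_per_day * days_per_year / 1e6;
    std::printf("worst-case chain growth: ~%.2f TB/year\n", tb_per_year);
    std::printf("five years at that rate: ~%.1f TB\n", 5.0 * tb_per_year);
    // ~1.05 TB/year and ~5.3 TB over five years -- matching the post's
    // "full Terabyte a year" figure.
    return 0;
}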
hero member
Activity: 658
Merit: 500
May 05, 2015, 09:57:06 AM
#47
Lightweight clients are nice, and even I use them, but the blockchain is getting more and more centralized, which kills the initial concept of Bitcoin.

I don't think that “kills” the initial concept of Bitcoin at all.

Long before the network gets anywhere near as large as that, it would be safe for users to use Simplified Payment Verification (section 8) to check for double spending, which only requires having the chain of block headers, or about 12KB per day.  Only people trying to create new coins would need to run network nodes.  At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware.  A server farm would only need to have one node on the network and the rest of the LAN connects with that one node.

The bandwidth might not be as prohibitive as you think.  A typical transaction would be about 400 bytes (ECC is nicely compact).  Each transaction has to be broadcast twice, so let's say 1KB per transaction.  Visa processed 37 billion transactions in FY2008, or an average of 100 million transactions per day.  That many transactions would take 100GB of bandwidth, or the size of 12 DVD or 2 HD quality movies, or about $18 worth of bandwidth at current prices.

If the network were to get that big, it would take several years, and by then, sending 2 HD movies over the Internet would probably not seem like a big deal.

Satoshi Nakamoto
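Both of Satoshi's figures are easy to re-derive. A minimal sketch using the assumptions stated in the quote (the 80-byte header and ~144 blocks per day are standard Bitcoin parameters; the rest are his numbers):

#include <cstdio>

int main() {
    // SPV cost: block headers only.
    const double header_bytes   = 80.0;    // fixed Bitcoin block header size
    const double blocks_per_day = 144.0;   // ~one block every 10 minutes
    std::printf("SPV headers: ~%.1f KB/day\n",
                header_bytes * blocks_per_day / 1000.0);
    // -> ~11.5 KB/day, i.e. Satoshi's "about 12KB per day".

    // Full relay at Visa scale, using the quote's own assumptions.
    const double bytes_per_tx = 1000.0;    // ~400 bytes, broadcast twice,
                                           // rounded up to 1KB as in the quote
    const double tx_per_day   = 100e6;     // Visa FY2008: 37B tx/yr ~ 100M/day
    std::printf("full node at Visa scale: ~%.0f GB/day\n",
                bytes_per_tx * tx_per_day / 1e9);
    // -> 100 GB/day, the quote's figure.
    return 0;
}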
Q7
sr. member
Activity: 448
Merit: 250
May 05, 2015, 07:55:43 AM
#46
I think this has been discussed in depth in the other thread and has long been in the planning stage, so I would say it's about time. Either way, it is something we need to prepare for, because eventually we will have to do it one way or another. If we want to go mainstream, increasing the size to absorb more transactions per block is the only way.
legendary
Activity: 1386
Merit: 1000
English <-> Portuguese translations
May 05, 2015, 07:31:46 AM
#45
Even though a 1TB HDD is cheap today, what worries me is the network.
Downloading a 40GB+ file isn't easy everywhere in the world (plenty of places and countries have poor-quality ISPs with low speeds).
Lightweight clients are nice, and even I use them, but the blockchain is getting more and more centralized, which kills the initial concept of Bitcoin.
legendary
Activity: 2674
Merit: 2965
Terminated.
May 05, 2015, 07:18:41 AM
#44
As I understand it, Bitcoin blocks were always limited by MAX_SIZE (32 MiB).  It was in 2010 that MAX_BLOCK_SIZE (1MB) was introduced as a temporary anti-DOS measure.

Unfortunately, while many had high hopes that a robust mechanism could be devised to defeat the blocksize problem, no such proposals have survived close scrutiny.  Clever-sounding fee policy discussions (transactions becoming more expensive as blocks grow) were replaced with more general musings about votes (e.g. have full nodes report how long it takes them to download and verify blocks and adjust MAX_BLOCK_SIZE to target a sane duration).  Today, the blocksize limit remains in place and we seem to be left with a few very ugly options.
Bitcoin originally had no block size limit; it was added later as an anti-DOS measure with the expectation that it would be increased if necessary.

Really? I never knew that! Who wrote the 1M limit into the code? I had always assumed 1M was hard-coded in from the beginning, and that miners/pools had been experimenting with the block size themselves to balance propagation speed and fees.
Well, you're right, although I'm not sure if it was 32 MB; someone who's been around longer would have to verify that it had a 32MB limit at some point.
No, the 1MB limit was not hard-coded from the beginning. That's something the majority seem to think, but it is wrong. We've now reached the point where that limit needs an increase.
We need to make enough room for future users and their transactions.
legendary
Activity: 1246
Merit: 1004
May 05, 2015, 06:38:52 AM
#43
As I understand it, Bitcoin blocks were always limited by MAX_SIZE (32 MiB).  It was in 2010 that MAX_BLOCK_SIZE (1MB) was introduced as a temporary anti-DOS measure.

Unfortunately, while many had high hopes that a robust mechanism could be devised to defeat the blocksize problem, no such proposals have survived close scrutiny.  Clever-sounding fee policy discussions (transactions becoming more expensive as blocks grow) were replaced with more general musings about votes (e.g. have full nodes report how long it takes them to download and verify blocks and adjust MAX_BLOCK_SIZE to target a sane duration).  Today, the blocksize limit remains in place and we seem to be left with a few very ugly options.

Disclaimer: I was quite new to Bitcoin at the time and am far from an authority on this topic.
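For what it's worth, the "vote" musing above could be sketched like this (entirely hypothetical names, numbers, and mechanism; nothing like this was ever specified, let alone deployed):

#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical: full nodes report how long the last block took to download
// and verify, and the limit is nudged so the median time tracks a target.
uint64_t AdjustMaxBlockSize(uint64_t currentMax,
                            std::vector<double> reportedSecs,
                            double targetSecs)
{
    if (reportedSecs.empty())
        return currentMax;

    // Median of node-reported times; more robust to outliers than a mean.
    size_t mid = reportedSecs.size() / 2;
    std::nth_element(reportedSecs.begin(), reportedSecs.begin() + mid,
                     reportedSecs.end());
    double medianSecs = reportedSecs[mid];

    // Nudge the limit toward the target, capped to +/-10% per adjustment.
    double factor = targetSecs / medianSecs;
    factor = std::max(0.9, std::min(factor, 1.1));
    return static_cast<uint64_t>(currentMax * factor);
}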
hero member
Activity: 821
Merit: 1000
May 05, 2015, 06:18:56 AM
#42
Satoshi promised this 5 years ago

It can be phased in, like:

if (blocknumber > 115000)
    maxblocksize = largerlimit

It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.

When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade.


Thanks, that is the one I was thinking of, and there is another one where he discussed it, IIRC.
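Satoshi's fragment is pseudocode; a minimal sketch of how a height-gated limit like that might look in C++ (the constants and function name here are hypothetical, not actual Bitcoin Core code):

#include <cstdint>

// Hypothetical constants for illustration -- not actual consensus values.
static const uint32_t OLD_MAX_BLOCK_SIZE = 1000000;   //  1 MB
static const uint32_t NEW_MAX_BLOCK_SIZE = 20000000;  // 20 MB
static const int      FORK_HEIGHT        = 115000;    // cutoff from the quote

// Every validating node applies the same height test, so the new limit
// activates for all upgraded nodes at the same block -- the "phased in"
// part of the quote. Nodes that never upgrade keep enforcing the old rule,
// which is why the change must ship long before the cutoff height.
uint32_t MaxBlockSize(int blockHeight)
{
    return blockHeight > FORK_HEIGHT ? NEW_MAX_BLOCK_SIZE
                                     : OLD_MAX_BLOCK_SIZE;
}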
legendary
Activity: 1792
Merit: 1087
May 05, 2015, 05:42:18 AM
#41
Satoshi promised this 5 years ago

It can be phased in, like:

if (blocknumber > 115000)
    maxblocksize = largerlimit

It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.

When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade.

hero member
Activity: 821
Merit: 1000
May 05, 2015, 05:21:02 AM
#40
And why didn't Satoshi think about this? He predicted the increase in adoption for sure; since he predicted big farms, he should have created big blocks from the beginning, and then we wouldn't need any fork now.

Bitcoin originally had no block size limit... it was added for DoS protection, and Satoshi himself said it would/could/should be raised in the future as needed (don't ask me for the post, but I am sure someone else can pinpoint it).
legendary
Activity: 952
Merit: 1003
--Signature Designs-- http://bit.ly/1Pjbx77
May 05, 2015, 04:46:45 AM
#39
Bitcoin originally had no block size limit; it was added later as an anti-DOS measure with the expectation that it would be increased if necessary.

Really? I never knew that! Who wrote the 1M limit into the code? I had always assumed 1M was hard-coded in from the beginning, and that miners/pools had been experimenting with the block size themselves to balance propagation speed and fees.
legendary
Activity: 4326
Merit: 3041
Vile Vixen and Miss Bitcointalk 2021-2023
May 05, 2015, 04:11:05 AM
#38
Correct me if I am wrong... Bill Gates said something like this: "We WILL NOT need more than 64k of RAM."
You are wrong. He said nothing like that, and neither did Satoshi. Bitcoin originally had no block size limit; it was added later as an anti-DOS measure with the expectation that it would be increased if necessary.
legendary
Activity: 3206
Merit: 1069
May 05, 2015, 03:05:13 AM
#37
And why didn't Satoshi think about this? He predicted the increase in adoption for sure; since he predicted big farms, he should have created big blocks from the beginning, and then we wouldn't need any fork now.
legendary
Activity: 1652
Merit: 1015
May 05, 2015, 02:47:14 AM
#36
Correct me if I am wrong... Bill Gates said something like this: "We WILL NOT need more than 64k of RAM."

He was clearly wrong... Look at where we are now. Satoshi made the same mistake, but the protocol allows for scalability, so it can be corrected.

If we reach mass adoption, we will have much higher transaction volumes, and that's not possible with the current block size.

I would rather be pre-emptive than end up in a situation where we sit with red faces because we cannot handle huge transaction volumes and get forced to do it later.  Huh

What mistake did Satoshi make?
legendary
Activity: 1274
Merit: 1000
May 05, 2015, 02:46:30 AM
#35
For what purpose is the bigger block? Why is there a need for blocks bigger than 1 megabyte when the blockchain works fine as it is now?
Blocks only get big when there are many transactions in them, so how could a miner create a bigger block without that many transactions?
http://gavinandresen.svbtle.com/why-increasing-the-max-block-size-is-urgent
http://gavinandresen.ninja/time-to-roll-out-bigger-blocks

Very good move Smiley, Gavin is a strong point of Bitcoin.

Your second link is the same one I started this thread with. Smiley
legendary
Activity: 1904
Merit: 1073
May 05, 2015, 02:43:32 AM
#34
Correct me if I am wrong... Bill Gates said something like this: "We WILL NOT need more than 64k of RAM."

He was clearly wrong... Look at where we are now. Satoshi made the same mistake, but the protocol allows for scalability, so it can be corrected.

If we reach mass adoption, we will have much higher transaction volumes, and that's not possible with the current block size.

I would rather be pre-emptive than end up in a situation where we sit with red faces because we cannot handle huge transaction volumes and get forced to do it later.  Huh
legendary
Activity: 1652
Merit: 1015
May 05, 2015, 12:24:36 AM
#33
This should be interesting.
legendary
Activity: 1876
Merit: 1000
May 04, 2015, 11:48:13 PM
#32
One small step for future upscaling, one Giant Leap towards complete centralisation.  Lips sealed