Topic: Max Block Size Limit: the community view [Vote - results in 14 days]

legendary
Activity: 1988
Merit: 1012
Beyond Imagination
I think many bank-like transaction services can be beneficial, since the purpose of bitcoin is to replace the debt-based money issuance of central banks, not to replace all those financial services. It is just like gold: although gold itself is difficult to move, that does not stop people from building financial services on top of it.
legendary
Activity: 1764
Merit: 1002
I want a blocksize that allows the average user, with an average connection and an average computer (let's say a 5-year-old computer), to run a full node.

The magic of bitcoin is its decentralized nature: make it possible only for super-companies to run full nodes, and you will have a new central bank... And bitcoin will die.

yes
legendary
Activity: 892
Merit: 1013
I think a hard fork, if it succeeded, would be a big plus for bitcoin.
It would mean that we can adapt when needed.
The ability to evolve is as important as decentralisation. Natural selection!

Now, of course I also believe that future 1 leads to much more centralisation than future 2.
legendary
Activity: 1148
Merit: 1018
I want to see concrete numbers: how many bits per second of transfer bandwidth, and how many transaction verifications per second, can an average computer handle?

Keep in mind that if I count the average in my household, the average PC has ~3 cores with ~2 GHz, ~6 GB RAM and about 2 TB of HDD space. In other areas of the world that might not even be reality in 5 years...

I'm sorry, but I'm a non-technical user. Anyhow, it's not hard to see what the "average computer" is. Just check the specs of a 5-year-old Vaio, Apple laptop, or other personal-use computer. Of course, many people in Africa could not run a super node due to spec/network limitations. But the whole point of this is to prevent super companies from being the only players able to run a full node. Something that can run on a standard, 5-year-old, "personal use" computer by Dell, Apple, Sony Vaio, etc. would definitely be OK.

If you want me to be more specific: I would say that right now we would be looking at an Intel Core 2 at 2 GHz, 2 GB RAM and 250 GB of HDD space.
legendary
Activity: 1470
Merit: 1005
Bringing Legendary Har® to you since 1952
Agreed. You should have a 4th option: "dynamically change the block size" depending on some fixed and pre-agreed algorithm.

Therefore, this poll is totally broken.

I will make a new one.
hero member
Activity: 544
Merit: 500
Agreed. You should have a 4th option: "dynamically change the block size" depending on some fixed and pre-agreed algorithm.
legendary
Activity: 1106
Merit: 1001
My answer is: none of the above.
I think that the block size should be dynamic and recalculated every X blocks using some well-balanced algorithm.

Of course, results of this poll are not going to force anybody into a certain decision.

I agree... same approach as with difficulty, though perhaps more often.
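
A minimal sketch of what such a difficulty-style recalculation could look like, in Python (all constants and names here are illustrative assumptions, not an actual proposal):

Code:
# Hypothetical sketch of a difficulty-style block size retarget.
# All constants are illustrative assumptions, not a spec.

RETARGET_INTERVAL = 2016      # blocks between adjustments, as with difficulty
MAX_STEP = 2.0                # the limit may at most double or halve per retarget
FLOOR = 1_000_000             # never drop below the current 1 MB limit

def retarget_block_size(current_limit: int, recent_block_sizes: list) -> int:
    """Recompute the max block size from the last interval's usage."""
    sizes = sorted(recent_block_sizes)
    median = sizes[len(sizes) // 2]
    target = 2 * median                          # headroom above typical usage
    # Clamp the change so the limit cannot swing wildly in one step.
    target = min(target, int(current_limit * MAX_STEP))
    target = max(target, int(current_limit / MAX_STEP))
    return max(target, FLOOR)

# Example: if the median block of the last interval was 400 kB,
# 2 * 400 kB = 800 kB is still below the 1 MB floor, so nothing changes.
print(retarget_block_size(1_000_000, [400_000] * RETARGET_INTERVAL))  # 1000000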
legendary
Activity: 1470
Merit: 1005
Bringing Legendary Har® to you since 1952
My answer is: none of the above.
I think that the block size should be dynamic and recalculated every X blocks using some well-balanced algorithm.

Of course, results of this poll are not going to force anybody into a certain decision.
legendary
Activity: 1526
Merit: 1129
Do you really think this decision will be made based on a forum vote?

It doesn't work that way, sorry.
legendary
Activity: 2618
Merit: 1006
I want to see concrete numbers: how many bits per second of transfer bandwidth, and how many transaction verifications per second, can an average computer handle?

Keep in mind that if I count the average in my household, the average PC has ~3 cores with ~2 GHz, ~6 GB RAM and about 2 TB of HDD space. In other areas of the world that might not even be reality in 5 years...
legendary
Activity: 1148
Merit: 1018
I want a blocksize that allows the average user, with an average connection and an average computer (let's say a 5-year-old computer), to run a full node.

Could you please clarify this a bit more? How does a user with a 5-year-old computer and an "average" (what is "average"?) connection benefit the network by operating a full node? And how will "average" computers and connections look in 20 years?

Average users being able to run full nodes means average users being able to verify and sign. It means average users being able to prevent a few super-nodes from changing the rules at will.

In 20 years, "average computers" and "average connections" will have grown. The rule is simple: for me, too big is whatever an average computer cannot handle. What about an "average connection"? If you cannot run a full node through Tor, then it's not good. Don't forget that btc could be attacked by governments and other de facto powers.
legendary
Activity: 1232
Merit: 1001
I'm unhappy with either future.

Intuitively, a block size limit of 1MB seems wrong, but allowing blocks of arbitrary size seems wrong too.

I don't think this needs to be a dilemma. I think we can have it both ways: a network that handles thousands of transactions per second AND where a normal user can still contribute towards storing the blockchain.

Whether this will require blockchain pruning, swarm nodes, storage fees, or some other solution, I don't know, but in any case it's a major project that will take years to implement.

In the meantime, we need a stopgap solution, and the limit needs to be increased to some new value X. I trust the developers to decide what that X should be.

I also don't think just increasing the block size to a new max would be a solution; we would eventually end up with this problem again.

An unrestricted block size would be dangerous as well.

We already have a self-adjusting process for the creation time of blocks, which ensures that no matter how technical development goes, blocks are (on average) found at a constant rate.

I'm sure a self-adjusting max block size function is possible, too.

A function that ensures that transaction space always remains scarce and limited, but at the same time ensures that there is enough space that making a transaction remains possible.

E.g. one that adjusts the block size (possibly with each difficulty adjustment) so that during peak hours (the 14 biggest blocks of the last 144?) not all transactions can be included immediately, but they eventually clear during the low-traffic times (the 14 smallest blocks of the last 144?). Also include a limited increase/decrease at each change.
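
One rough reading of that idea in code (the constants and thresholds below are my own assumptions, purely to illustrate the mechanism): if even the smallest blocks of the window are nearly full, the backlog can never clear and the limit grows; if even the biggest blocks are half empty, space is not scarce and the limit shrinks; either way by a bounded step.

Code:
# Hypothetical illustration of the proposal above; all constants are assumptions.

WINDOW = 144          # roughly the last 24 hours of blocks
SAMPLE = 14           # the biggest / smallest blocks considered
MAX_CHANGE = 0.10     # bounded +/-10% step per difficulty adjustment

def adjust_limit(current_limit: int, last_sizes: list) -> int:
    sizes = sorted(last_sizes[-WINDOW:])
    smallest_avg = sum(sizes[:SAMPLE]) / SAMPLE
    biggest_avg = sum(sizes[-SAMPLE:]) / SAMPLE
    if smallest_avg > 0.9 * current_limit:
        # Even low-traffic blocks are full: the backlog never clears, grow.
        return int(current_limit * (1 + MAX_CHANGE))
    if biggest_avg < 0.5 * current_limit:
        # Even peak blocks are half-empty: space is not scarce, shrink.
        return int(current_limit * (1 - MAX_CHANGE))
    return current_limit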
legendary
Activity: 2618
Merit: 1006
I want a blocksize that allows the average user, with an average connection and an average computer (let's say a 5-year-old computer), to run a full node.

Could you please clarify this a bit more? How does a user with a 5-year-old computer and an "average" (what is "average"?) connection benefit the network by operating a full node? And how will "average" computers and connections look in 20 years?
legendary
Activity: 1148
Merit: 1018
I want a blocksize that allows the average user, with an average connection and an average computer (let's say a 5-year-old computer), to run a full node.

The magic of bitcoin is its decentralized nature: make it possible only for super-companies to run full nodes, and you will have a new central bank... And bitcoin will die.
legendary
Activity: 938
Merit: 1001
bitcoin - the aerogel of money
I'm unhappy with either future.

Intuitively, a block size limit of 1MB seems wrong, but allowing blocks of arbitrary size seems wrong too.

I don't think this needs to be a dilemma. I think we can have it both ways: a network that handles thousands of transactions per second AND where a normal user can still contribute towards storing the blockchain.

Whether this will require blockchain pruning, swarm nodes, storage fees, or some other solution, I don't know, but in any case it's a major project that will take years to implement.

In the meantime, we need a stopgap solution, and the limit needs to be increased to some new value X. I trust the developers to decide what that X should be.
legendary
Activity: 2618
Merit: 1006
According to https://en.bitcoin.it/wiki/Scalability#Network an average transaction is ~0.5 kB (let's say 500 bytes).

Max block size = 1 million bytes, every 10 minutes. This means 2000 transactions per 10 minutes, 200 per minute and 3.333... transactions per second.
The minimum transaction size is "~0.2 kB" --> that's where these ~7 transactions per second come from.

One question is then also what to do with more complex transactions if there are only 7 "minimal" transactions possible in a second (e.g. can we "dumb down" smarter transactions) and so on.

By the way, at the moment we're close to bumping against a quarter(!) of this limit. The only reason this is being discussed now is that Bitcoin might continue to gain traction and take less than another 12 years to bump against the current hard limit of 1 million bytes per block.
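
The arithmetic above in one place (the function is only illustrative; the figures are the ones quoted from the wiki):

Code:
# Throughput implied by a block size limit, using the figures quoted above.

def tx_per_second(block_size_bytes, avg_tx_bytes, block_interval_s=600):
    return block_size_bytes / avg_tx_bytes / block_interval_s

print(tx_per_second(1_000_000, 500))  # ~3.33 tps at the ~0.5 kB average
print(tx_per_second(1_000_000, 200))  # ~8.33 tps ceiling at the ~0.2 kB minimum,
                                      # roughly the oft-quoted ~7 tps figure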

donator
Activity: 2058
Merit: 1054
You are presenting a false dichotomy between "payments are transactions in the blockchain" and "payments are processed by traditional service providers".

Bitcoin is a powerful technology which allows more advanced applications.

If you make multiple payments to a given merchant you can use payment channels. And if you want more flexibility you can add a third party to the mix, but with an absolutely minimal trust requirement and dependency, which is nothing like traditional providers.
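
A toy model of the accounting behind that idea (not real Bitcoin script and not anyone's actual design; the class and numbers are hypothetical): the channel is opened and settled with one on-chain transaction each, while any number of payments in between merely update a mutually signed balance off-chain.

Code:
# Toy accounting model of a payment channel; no real Bitcoin script involved.

class PaymentChannel:
    def __init__(self, deposit):
        self.deposit = deposit      # funded by a single on-chain transaction
        self.paid = 0               # running total owed to the merchant

    def pay(self, amount):
        """Off-chain: the parties just re-sign a new balance."""
        if self.paid + amount > self.deposit:
            raise ValueError("channel exhausted")
        self.paid += amount

    def close(self):
        """One on-chain transaction settles the final split."""
        return self.deposit - self.paid, self.paid  # (refund, merchant)

channel = PaymentChannel(deposit=100_000)
for _ in range(50):                 # 50 payments, still only 2 on-chain txs
    channel.pay(1_000)
print(channel.close())              # (50000, 50000)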


Anyway, the block size limit should eventually be increased, but not by an algorithm.
legendary
Activity: 1078
Merit: 1002
My answer: None of the above.

I will agree to, even advocate for, any rule change which solves a technical problem but leaves the core principles of Bitcoin intact, immediately and into the foreseeable future. The most important of these is what I call my Bitcoin sovereignty.
newbie
Activity: 24
Merit: 1
My answer is none of the above.
Might I suggest another choice:

"Whatever everyone else is doing."

This position could certainly be criticized as sheep-like, unthinking, just following.  But that is exactly what I think is best.  Honestly, I don't really think a 1MB block limit is The End of Bitcoin.  And a 100MB block limit wouldn't be either.  Some kind of well-designed variable limit based on fees over the last 2016 blocks, or difficulty, or something smart, sure, that'd be OK too.

You know what WOULD be The End of Bitcoin though?  If half the people stick to 1MB blocks, and the other half go to 10MB.  Then that's it, game over man.  MTGox price per bitcoin would plummet, right?  Because... MTGox price per WHICH bitcoin?  Etc.

So I'll go with what everyone else is doing.  And everyone else should too.  (There may be some logical feedback loop there...)  If there is a fork, it should come into effect only after substantially all of the last few weeks' worth of transactions come from big-block-compatible clients, miners, everything.  Only then should any change be made.