
Topic: Gavin Andresen Proposes Bitcoin Hard Fork to Address Network Scalability - page 10.

legendary
Activity: 1904
Merit: 1074
Well, this is the GREAT thing about BTC: if something fails or isn't working, it can be changed. The new changes will only apply to those who FORK to the new protocol.

The others are left behind, holding an ALT coin.

I see this as an improvement on something that might fail in the future. Those who are not happy can mine the new ALT coin.
 
newbie
Activity: 24
Merit: 0
Note that with the "compressed blocks" idea we'd be able to handle 50 txs/sec with the current 1MB limit.

Is that your idea? Have you written an explanation about how it's done, and have others chimed in to verify it's possible? Does Gavin know about that suggestion? Did he comment on it?


It's a pretty great idea, if you ask me. I hope Gavin reads it.
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
Note that with the "compressed blocks" idea we'd be able to handle 50 txs/sec with the current 1MB limit.

Is that your idea? Have you written an explanation about how it's done, and have others chimed in to verify it's possible? Does Gavin know about that suggestion? Did he comment on it?

Actually I am pretty sure that it was *his idea*, called "O(n) blocks" or something like that (I just use the term "compressed blocks" as I think that is more "intuitive"). As stated, one major thing that would need to be resolved for this idea to be a "goer" is "transaction malleability" (see my posts on the previous page for the explanation).

Funnily enough I had *come up with the same idea* (I am using it in my own blockchain design) about six months ago - but I never thought to suggest it for use with Bitcoin at the time.
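For readers wondering why malleability matters here, a toy illustration (using std::hash as a stand-in for Bitcoin's double-SHA256, and a string as a stand-in for a serialized transaction; real txids are computed over the full serialized tx, signatures included): if a third party re-encodes a signature without invalidating it, the txid changes, so a txid-only block may reference a tx that no longer matches the copy in anyone's memory pool.

Code:
#include <cstdio>
#include <functional>
#include <string>

int main() {
    // Toy stand-ins: std::hash instead of double-SHA256, strings instead
    // of serialized transactions. The point is that the txid covers the
    // signature bytes too.
    std::hash<std::string> h;
    std::string tx        = "inputs|outputs|signature=3044...";
    std::string malleated = "inputs|outputs|signature=3045...";  // same meaning, re-encoded sig

    std::printf("txid(original):  %zx\n", h(tx));
    std::printf("txid(malleated): %zx\n", h(malleated));
    // Different txids for semantically identical transactions -- a block
    // that references the original txid won't match the malleated copy
    // sitting in a node's memory pool.
    return 0;
}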
hero member
Activity: 490
Merit: 500
Note that with the "compressed blocks" idea we'd be able to handle 50 txs/sec with the current 1MB limit.

Is that your idea? Have you written an explanation about how it's done, and have others chimed in to verify it's possible? Does Gavin know about that suggestion? Did he comment on it?

legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
Note that with the "compressed blocks" idea we'd be able to handle 50 txs/sec with the current 1MB limit.
legendary
Activity: 4424
Merit: 4794
If Gavin proposes to fork the chain, I propose that he jumps the max limit to 20MB or more for 2015 and then does the 50% increase per year.
At the moment, the tx-per-second limits for the next 10 years as proposed by Gavin are:
2014: 7/sec
2015: 10/sec (rounded down each year for easy maths, don't nitpick)
2016: 15/sec
2017: 22/sec
2018: 33/sec
2019: 49/sec
2020: 73/sec
2021: 109/sec
2022: 163/sec
2023: 244/sec
2024: 366/sec

366 tx per second is still not competitive enough to cope with Visa/Mastercard tx/s volume in 10 years.

Yet if we start at 20MB in 2015:
2015: 140/sec
2016: 210/sec
2017: 315/sec
2018: 472/sec
2019: 708/sec
2020: 1062/sec
2021: 1593/sec
2022: 2389/sec
2023: 3583/sec
2024: 5374/sec

which is more appealing, as it is able to handle large volume.

Just remember this only opens up the potential tx volume; it won't actually bloat the blockchain unless there are actual transactions to fill the limits. So in 2015 we may have the potential to handle 140 tx per second even if the actual tx average is still less than a dozen (thus giving a nice buffer to cope with unpredictable growth).
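For anyone who wants to check the arithmetic, here is a minimal sketch (illustrative only, not consensus code) that reproduces both schedules. It assumes roughly 7 tx/sec of capacity per MB of block space, i.e. a ~250-byte average transaction; that per-MB figure is an assumption, not a protocol constant.

Code:
#include <cstdio>

// Sketch only: project tx/sec capacity under a "+50% per year" schedule.
// Assumes ~7 tx/sec per MB of block space (~250 bytes per tx on average).
static void project(const char* label, long txs_per_sec, int start_year) {
    std::printf("%s\n", label);
    for (int year = start_year; year <= 2024; ++year) {
        std::printf("  %d: %ld/sec\n", year, txs_per_sec);
        txs_per_sec = txs_per_sec * 3 / 2;  // +50%; integer division rounds down
    }
}

int main() {
    project("Starting at 1 MB (7 tx/sec):", 7, 2014);
    project("Starting at 20 MB (140 tx/sec):", 140, 2015);
    return 0;
}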
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
So to restate your proposal: you are saying that blocks (when initially propagated) should only contain the TXIDs of the confirmed transactions in each block, and nodes should store the signed message for each TXID in their mempool.

Yes - basically the blocks just contain the txids, which can be matched with those in each node's memory pool (assuming they are present - nodes may need to "request" any txs they don't already know about).

A new node would need to check each signed TX in order for it to validate that the blockchain is in fact valid.

I am confused as to how the blocks would go from only having a TXID to having the entire signed TX.

That would be part of the "validation" - to illustrate:

Code:
Before Validation:
...

Step 1:
...

Step 2:
...

Step n:
...

So during validation the block is "expanded" by replacing the txids with the actual txs and then the expanded block can be persisted.
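To make the expansion step concrete, here is a minimal sketch of the idea under discussion. The types and the RequestTxFromPeers hook are illustrative assumptions, not Bitcoin Core's actual structures: the node walks the txid list in the compact block, pulls each full tx from its memory pool, and requests any it is missing before validating and persisting the expanded block.

Code:
#include <map>
#include <optional>
#include <string>
#include <vector>

// Illustrative stand-ins only -- not Bitcoin Core's actual types.
using TxId = std::string;          // placeholder for a 32-byte tx hash
using Transaction = std::string;   // placeholder for a full signed tx

struct CompactBlock { std::vector<TxId> txids; };
struct FullBlock    { std::vector<Transaction> txs; };

// Hypothetical hook: fetch a missing tx from a peer (stubbed out here;
// a real node would do a network round trip).
static std::optional<Transaction> RequestTxFromPeers(const TxId&) {
    return std::nullopt;
}

// Expand a txid-only block into a full block using the memory pool.
static std::optional<FullBlock> ExpandBlock(
    const CompactBlock& block,
    const std::map<TxId, Transaction>& mempool) {
    FullBlock full;
    for (const TxId& id : block.txids) {
        auto it = mempool.find(id);
        if (it != mempool.end()) {
            full.txs.push_back(it->second);      // already relayed to us
        } else if (auto tx = RequestTxFromPeers(id)) {
            full.txs.push_back(*tx);             // fetch the stragglers
        } else {
            return std::nullopt;                 // cannot validate yet
        }
    }
    return full;  // now validate scripts/inputs and persist as usual
}

int main() {
    std::map<TxId, Transaction> mempool{{"txid1", "full-tx-1"}};
    CompactBlock block{{"txid1"}};
    return ExpandBlock(block, mempool) ? 0 : 1;
}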
legendary
Activity: 1036
Merit: 1000
Thug for life!
This change would require users to check with other nodes on every single transaction in the blockchain. The way the blockchain is set up now, a user can download what they think is the blockchain and easily check each block to make sure no transaction spends the same input. If blocks only contain the TXID of each tx, then someone would not only need to download the blockchain but also connect to what they hope is an honest node to confirm the inputs and outputs of each TX.

The transaction hashes (malleability issues aside) mean they don't need to worry about the *honesty* of their peers - either each tx matches or it doesn't. The basic design of the blockchain hasn't changed either, and *normal Bitcoin Core nodes* that relay txs will already have (at least most of) the txs for new blocks in their memory pool.

If you are worried about a "brand new node" that needs to catch up then I would suggest that they could be given "full blocks". As stated my goal wasn't to save on disk space (so the blocks would still be *stored* exactly as now). The idea is that for "general block publication" (to nodes that have most if not all relevant txs in their memory pool) there is no need for the tx scripts to be included in the block (saving a lot of bandwidth).

Nodes that "don't relay transactions" (nor keep a memory pool) would be a problem but I think that generally they would be connecting to things like "stratum" servers which could always just send them "complete blocks".

So to restate your proposal: you are saying that blocks (when initially propagated) should only contain the TXIDs of the confirmed transactions in each block, and nodes should store the signed message for each TXID in their mempool.

A new node would need to check each signed TX in order for it to validate that the blockchain is in fact valid.

I am confused as to how the blocks would go from only having a TXID to having the entire signed TX.
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
This change would require users to check with other nodes on every single transaction in the blockchain. The way the blockchain is set up now, a user can download what they think is the blockchain and easily check each block to make sure no transaction spends the same input. If blocks only contain the TXID of each tx, then someone would not only need to download the blockchain but also connect to what they hope is an honest node to confirm the inputs and outputs of each TX.

The transaction hashes (malleability issues aside) mean they don't need to worry about the *honesty* of their peers - either each tx matches or it doesn't. The basic design of the blockchain hasn't changed either, and *normal Bitcoin Core nodes* that relay txs will already have (at least most of) the txs for new blocks in their memory pool.

If you are worried about a "brand new node" that needs to catch up then I would suggest that they could be given "full blocks". As stated my goal wasn't to save on disk space (so the blocks would still be *stored* exactly as now). The idea is that for "general block publication" (to nodes that have most if not all relevant txs in their memory pool) there is no need for the tx scripts to be included in the block (saving a lot of bandwidth).

Nodes that "don't relay transactions" (nor keep a memory pool) would be a problem but I think that generally they would be connecting to things like "stratum" servers which could always just send them "complete blocks".
legendary
Activity: 1036
Merit: 1000
Thug for life!
I'm not sure if this has been mentioned elsewhere, but if *malleability* could be resolved then one very simple way to reduce the amount of data per tx is to only put the transaction IDs in blocks (not the actual signed transaction script).

From memory a typical raw tx is 200+ bytes, so if we just stored the tx hash (32 bytes) then we would have made nearly a ten-fold improvement in bandwidth usage for blocks (of course, if a node doesn't already have all of a new block's txs in its memory pool then it would need to request them in order to validate the block).

Note that the tx scripts still need to be stored (until they can be pruned) so this is not a suggestion about "disk storage" but about reducing *bandwidth* (the txs are already being broadcast so they don't really need to be *repeated* in each block as well).

This change would require users to check with other nodes on every single transaction in the blockchain. The way the blockchain is set up now, a user can download what they think is the blockchain and easily check each block to make sure no transaction spends the same input. If blocks only contain the TXID of each tx, then someone would not only need to download the blockchain but also connect to what they hope is an honest node to confirm the inputs and outputs of each TX.
hero member
Activity: 686
Merit: 500
...
In the event that the block size growth is not able to keep up with TX growth, the protocol could easily be forked again to increase the block size growth.

Ya, of course it could.  What could be easier than a simple little hard fork?  Personally I never understood why we didn't do hard forks several times per week.


There are serious risks to the miners when Bitcoin is hard forked. There is a possibility that some miners will not accept the fork at first, which would result in those miners mining on what will be a worthless blockchain.

The network should really only be forked when it is absolutely necessary.
donator
Activity: 1736
Merit: 1014
Let's talk governance, lipstick, and pigs.
What's a hard fork, and what's a soft fork? FFS I've asked this before and no one answered. Can't the bitcoin genius step up to the plate and use this forum to educate instead of joke and berate?

I await...
http://bitcoin.stackexchange.com/questions/9173/what-is-a-hard-fork
Quote
Simply put, a so-called hard fork is a change of the Bitcoin protocol that is not backwards-compatible; i.e., older client versions would not accept blocks created by the updated client, considering them invalid. Obviously, this can create a blockchain fork when nodes running the new version create a separate blockchain incompatible with the older software.

http://bitcoin.stackexchange.com/questions/30817/what-is-a-soft-fork
Quote
Softforks restrict block acceptance rules in comparison to earlier versions.

That way, any blocks considered valid by the newer version are still valid in the old version. If at least 51% of the mining power shifts to the new version, the system self-corrects. (If less than 51% switch to the new version, it behaves like a hardfork though.)

Blocks created by old versions of BitcoinCore that are invalid under the new paradigm might commence a short-term "old-only fork". Eventually they would be overtaken by a fork of the new paradigm, as the hashing power working on the old paradigm would be smaller ("only old versions") than on the new paradigm ("accepted by all versions").
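To illustrate the distinction with a toy example (hypothetical rules and numbers, not Bitcoin Core code): a soft fork *restricts* validity, so blocks the new client accepts still pass the old client's checks, while a hard fork *relaxes* validity, so the new client accepts blocks the old client rejects.

Code:
#include <cstdio>

// Toy sketch: hypothetical validity rules, not Bitcoin Core code.
struct Block { unsigned int size_bytes; };

// Old client's rule: blocks up to 1 MB are valid.
static bool ValidOld(const Block& b)      { return b.size_bytes <= 1000000; }

// Soft fork *restricts* the rules (e.g. a 500 KB limit): everything the
// new client accepts, the old client also accepts.
static bool ValidSoftFork(const Block& b) { return b.size_bytes <= 500000; }

// Hard fork *relaxes* the rules (e.g. a 20 MB limit): the new client
// accepts blocks the old client rejects, so old nodes fork away.
static bool ValidHardFork(const Block& b) { return b.size_bytes <= 20000000; }

int main() {
    Block big{2000000};  // a 2 MB block mined after the rule change
    std::printf("old: %d  soft: %d  hard: %d\n",
                ValidOld(big), ValidSoftFork(big), ValidHardFork(big));
    // Prints "old: 0  soft: 0  hard: 1" -- only the hard-forked client
    // accepts it, which is exactly what "not backwards-compatible" means.
    return 0;
}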
legendary
Activity: 2184
Merit: 1024
Vave.com - Crypto Casino
What's a hard fork, and what's a soft fork? FFS I've asked this before and no one answered. Can't the bitcoin genius step up to the plate and use this forum to educate instead of joke and berate?

I await...
legendary
Activity: 924
Merit: 1132
I'm not sure if this has been mentioned elsewhere, but if *malleability* could be resolved then one very simple way to reduce the amount of data per tx is to only put the transaction IDs in blocks (not the actual signed transaction script).

From memory a typical raw tx is 200+ bytes, so if we just stored the tx hash (32 bytes) then we would have made nearly a ten-fold improvement in bandwidth usage for blocks (of course, if a node doesn't already have all of a new block's txs in its memory pool then it would need to request them in order to validate the block).

Note that the tx scripts still need to be stored (until they can be pruned) so this is not a suggestion about "disk storage" but about reducing *bandwidth* (the txs are already being broadcast so they don't really need to be *repeated* in each block as well).


This.  *EXACTLY* this.  I said this three pages ago and the responding silence was deafening.  Come on guys, this is a REALLY good idea, and needs a response. 
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
I'm not sure if this has been mentioned elsewhere, but if *malleability* could be resolved then one very simple way to reduce the amount of data per tx is to only put the transaction IDs in blocks (not the actual signed transaction script).

From memory a typical raw tx is 200+ bytes, so if we just stored the tx hash (32 bytes) then we would have made nearly a ten-fold improvement in bandwidth usage for blocks (of course, if a node doesn't already have all of a new block's txs in its memory pool then it would need to request them in order to validate the block).

Note that the tx scripts still need to be stored (until they can be pruned) so this is not a suggestion about "disk storage" but about reducing *bandwidth* (the txs are already being broadcast so they don't really need to be *repeated* in each block as well).
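As a rough back-of-the-envelope check of that ratio (the ~250-byte average tx size is an assumed figure based on the "200+ bytes" above, not a measured network average):

Code:
#include <cstdio>

int main() {
    const double AVG_TX_BYTES = 250.0;  // assumed typical raw tx size
    const double TXID_BYTES   = 32.0;   // SHA-256d hash size
    const double BLOCK_BYTES  = 1e6;    // current 1 MB block limit

    double txs_per_block      = BLOCK_BYTES / AVG_TX_BYTES;  // ~4000 txs
    double compact_block_size = txs_per_block * TXID_BYTES;  // ~128 KB

    std::printf("full block: ~%.0f txs; txid-only form: ~%.0f KB (%.1fx smaller)\n",
                txs_per_block, compact_block_size / 1000.0,
                AVG_TX_BYTES / TXID_BYTES);
    return 0;
}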
legendary
Activity: 4760
Merit: 1283
...
In the event that the block size growth is not able to keep up with TX growth, the protocol could easily be forked again to increase the block size growth.

Ya, of course it could.  What could be easier than a simple little hard fork?  Personally I never understood why we didn't do hard forks several times per week.

sr. member
Activity: 448
Merit: 250
Would not the best solution be a dynamic update?

With Gavin's suggestion, the max block size would become

2014: 1MB
2015: 1.5MB
2016: 2.25MB
2017: 3.375MB
2018: 5.0625MB
2019: 7.59375MB
2020: 11.390625MB
2021: 17.0859375MB
2022: 25.62890625MB
2023: 38.44335937MB
...

LOL!  I gotta wonder if that was not one of the ideas that came out of the closed door sessions with the CFR.  I guess we'll never know since Gavin never saw fit to either swear off private conversations or give the community a de-briefing of any which may or may not have happened.  At least that I saw.  A huge percentage of humans lack the capability to conceptualize an exponential function, and people who make it to CFR status understand this deficiency well and deploy it liberally as a weapon.

That said, I find the proposed formula workable insofar as it would not destroy the core concepts of the system.  At these relatively modest rates the system would still be operational under significant attack and it would still be feasible to verify transactions fully by my estimation.  I'm in.

The downside of this modest growth is that it would not come close to solving the 'problem' of giving the system the capacity to serve as an exchange currency. So, what's the point? Especially if sidechains or some other logical scaling mechanism develops. One way or another, let's run up against the economics of transaction fees to see how they actually work before doing something as drastic as a hard fork.


The proposed changes to the block size have the goal of keeping up with anticipated TX volume growth over time. You need to remember that the average block size right now is well under 1 MB, so even though the max block size is 1 MB, we are really not starting at that size when comparing the future max block size to today's block size.

In the event that the block size growth is not able to keep up with TX growth, the protocol could easily be forked again to increase the block size growth.
legendary
Activity: 4760
Merit: 1283
Would not the best solution be a dynamic update?

With Gavin's suggestion, the max block size would become

2014: 1MB
2015: 1.5MB
2016: 2.25MB
2017: 3.375MB
2018: 5.0625MB
2019: 7.59375MB
2020: 11.390625MB
2021: 17.0859375MB
2022: 25.62890625MB
2023: 38.44335937MB
...

LOL!  I gotta wonder if that was not one of the ideas that came out of the closed door sessions with the CFR.  I guess we'll never know since Gavin never saw fit to either swear off private conversations or give the community a de-briefing of any which may or may not have happened.  At least that I saw.  A huge percentage of humans lack the capability to conceptualize an exponential function, and people who make it to CFR status understand this deficiency well and deploy it liberally as a weapon.

That said, I find the proposed formula workable insofar as it would not destroy the core concepts of the system.  At these relatively modest rates the system would still be operational under significant attack and it would still be feasible to verify transactions fully by my estimation.  I'm in.

The downside of this modest growth is that it would not come close to solving the 'problem' of giving the system the capacity to serve as an exchange currency. So, what's the point? Especially if sidechains or some other logical scaling mechanism develops. One way or another, let's run up against the economics of transaction fees to see how they actually work before doing something as drastic as a hard fork.

hero member
Activity: 490
Merit: 500
So, the limit is defined in main.h, line 42.
https://github.com/bitcoin/bitcoin/blob/5505a1b13f75af9f0f6421b42d97c06e079db345/src/main.h#L42

And the test is done at main.cpp at line 727.
https://github.com/bitcoin/bitcoin/blob/3222802ea11053f0dd69c99fc2f33edff554dc17/src/main.cpp#L727

So this looks like a very simple change.

Is there any way for someone to play sillybuggers right around the transition period? With the main chain and other chains at different heights, can the client ever be in a state where it's willing (or unwilling) to accept larger blocks on one chain due to the block height of some different chain?




Could anyone lay out exactly what the risks of the proposed hard fork are? I.e., what could go wrong, and what is the probability of it going wrong (if that is even possible to calculate)?
legendary
Activity: 924
Merit: 1132
So, the limit is defined in main.h, line 42.
https://github.com/bitcoin/bitcoin/blob/5505a1b13f75af9f0f6421b42d97c06e079db345/src/main.h#L42

And the test is done at main.cpp at line 727.
https://github.com/bitcoin/bitcoin/blob/3222802ea11053f0dd69c99fc2f33edff554dc17/src/main.cpp#L727

So this looks like a very simple change.

Is there any way for someone to play sillybuggers right around the transition period? With the main chain and other chains at different heights, can the client ever be in a state where it's willing (or unwilling) to accept larger blocks on one chain due to the block height of some different chain?
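On the "very simple change" point, here is a minimal sketch of how a height-triggered limit might look (the fork height, 20MB base, and growth schedule are assumptions for illustration; only MAX_BLOCK_SIZE and its location in main.h come from the links above). Because the limit is a pure function of the height of the block being validated, the state of any *other* chain the node has seen never enters the check, which addresses the cross-chain concern:

Code:
#include <cstdint>
#include <cstdio>

// Current consensus rule (main.h line 42 at the linked commit).
static const uint64_t MAX_BLOCK_SIZE = 1000000;

// Hypothetical fork parameters -- illustrative only.
static const int      FORK_HEIGHT      = 400000;    // assumed activation height
static const uint64_t FORKED_BASE_SIZE = 20000000;  // assumed 20 MB starting limit
static const int      BLOCKS_PER_YEAR  = 52560;     // ~10-minute blocks

// The limit depends only on the block's own height, so it cannot be
// influenced by the tip of any other chain the node knows about.
static uint64_t MaxBlockSize(int height) {
    if (height < FORK_HEIGHT)
        return MAX_BLOCK_SIZE;
    uint64_t limit = FORKED_BASE_SIZE;
    int years = (height - FORK_HEIGHT) / BLOCKS_PER_YEAR;
    for (int i = 0; i < years; ++i)
        limit += limit / 2;  // +50% per year, as in the proposal
    return limit;
}

int main() {
    // The size test in main.cpp line 727 would then compare against
    // MaxBlockSize(nHeight) instead of the fixed MAX_BLOCK_SIZE.
    std::printf("height 399999: %llu bytes\n",
                (unsigned long long)MaxBlockSize(399999));
    std::printf("height 452560: %llu bytes\n",
                (unsigned long long)MaxBlockSize(452560));
    return 0;
}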
