
Topic: [POLL] Possible scaling compromise: BIP 141 + BIP 102 (Segwit + 2MB) - page 16. (Read 14409 times)

sr. member
Activity: 476
Merit: 501
Preference level for me:
Segwit + dynamic block size proposal (as discussed so far) > Segwit alone > block size increase HF alone > BTU emergent consensus. The latter is risky and definitely not adequately tested.

Preference level for me would be (current moment of thought - I reserve the right to change my mind):
Segwit + dynamic block size HF > block size HF > BTU > Segwit SF. The latter introduces a two-tiered network system and a lot of technical debt.

That said, a quick and simple static block size increase is needed ASAP to buy time to get the development of the preferred option right.
legendary
Activity: 2674
Merit: 2965
Terminated.
imagine a case where there were 2 limits (4 overall: 2 for nodes, 2 for pools)
a hard technical limit that everyone agrees on, and below that a preference limit (adjustable to the dynamics of demand).
Yes, that's exactly what my 'proposal/wish' is supposed to have. A dynamic lower bound and a fixed upper bound. The question is, how do we determine an appropriate upper bound and for what time period? Quite a nice concept IMHO. Do you agree?

i even made a picture to keep people's attention span entertained
What software did you do this in? (out of curiosity)

The challenge is now to find a number for this cap. I had done some very rough calculations showing that a 1 TB/year blockchain (that would be equivalent to approximately 20 MB blocks) would enable 160 million people to do about 1-3 transactions (depending on the TX size) per month. That would be just enough for this user base if we assume that Lightning Network and similar systems can manage smaller payments. 1 TB/year seems pretty high, but I think it's manageable in the near future (~5 years from now).
Problems:
1) 20 MB is too big right now.
2) 1 TB is definitely too big. Just imagine the IBD after 2 years.
3) You're thinking too big. Think smaller. We need some room to handle the current congestion, we do not need room for 160 million users yet.

Increasing the block size cap in the simplest manner would avoid BU technical debt, as the emergent consensus mechanism probably wouldn't work very well if people do not configure their nodes (it would hit a 16MB cap in a more complicated manner.)
Preference level for me:
Segwit + dynamic block size proposal (as discussed so far) > Segwit alone > block size increase HF alone > BTU emergent consensus. The latter is risky and definitely not adequately tested.
legendary
Activity: 4410
Merit: 4766
I like that we're moving forward in the discussion, it seems. The original compromise that was the reason for me to start the thread now looks a bit dated.

I would support Lauda's maximum cap idea, as it's true that there could be circumstances where such a flexible system could be gamed.

The challenge is now to find a number for this cap. I had done some very rough calculations showing that a 1 TB/year blockchain (that would be equivalent to approximately 20 MB blocks) would enable 160 million people to do about 1-3 transactions (depending on the TX size) per month. That would be just enough for this user base if we assume that Lightning Network and similar systems can manage smaller payments. 1 TB/year seems pretty high, but I think it's manageable in the near future (~5 years from now).

Obviously if we want the 7 billion people on earth to be able to use Bitcoin on-chain the limit would be much higher, but I think even the most extreme BU advocates don't see that as a goal.

mhm
dont think 7billion by midnight.

think rationally. like 1billion over decades.. then your fears start to subside and you start to see natural progression is possible

bitcoin will never be a one world single currency. it will probably be in the top 10 'nations' list. with maybe 500mill people. and it wont be overnight. so relax about the "X by midnight" scare stories told on reddit.
sr. member
Activity: 476
Merit: 501
My thoughts are:

Was the 1 MB cap introduced as an anti spam measure when everybody used the same satoshi node, and did that version simply stuff all mempool transactions into the block in one go?

Big mining farms are probably not using stock reference nodes, since otherwise they wouldn't be able to pick out transactions that have been prioritised through a transaction accelerator.

Increasing the block size cap in the simplest manner would avoid BU technical debt, as the emergent consensus mechanism probably wouldn't work very well if people do not configure their nodes (it would hit a 16MB cap in a more complicated manner.)

Miners have to weigh up the benefit of the extra fees in a bigger block against the higher processing costs required to build it and the orphan risk associated with the propagation delay it causes. In other words, a more natural fee market develops.
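To put rough numbers on that trade-off, here is a purely illustrative back-of-envelope check (every figure below is assumed for the example, not measured):

Code:
# Rough expected-value check a pool might make before building a bigger block.
# Every figure here is an assumed illustration, not real data.
block_subsidy = 12.5          # BTC subsidy per block at the time of this thread
base_fees = 1.0               # BTC of fees already collected in the smaller block
extra_fees = 0.2              # BTC of additional fees gained by making the block bigger
extra_orphan_risk = 0.01      # assumed extra orphan probability from slower propagation/validation

expected_gain = extra_fees
expected_loss = extra_orphan_risk * (block_subsidy + base_fees + extra_fees)
print("bigger block worth it?", expected_gain > expected_loss)

If propagation gets cheaper or fees rise, the balance tips toward bigger blocks; if not, it doesn't, which is the "natural fee market" point above.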

So it won't be massive blocks by midnight.

Any comments? (probably a silly question  Wink )

legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
I like that we're moving forward in the discussion, it seems. The original compromise that was the reason for me to start the thread now looks a bit dated.

I would support Lauda's maximum cap idea, as it's true that there could be circumstances where such a flexible system could be gamed.

The challenge is now to find a number for this cap. I had done some very rough calculations showing that a 1 TB/year blockchain (that would be equivalent to approximately 20 MB blocks) would enable 160 million people to do about 1-3 transactions (depending on the TX size) per month. That would be just enough for this user base if we assume that Lightning Network and similar systems can manage smaller payments. 1 TB/year seems pretty high, but I think it's manageable in the near future (~5 years from now).
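For anyone who wants to check that estimate, a rough reconstruction of the arithmetic (the 250-600 byte transaction sizes are assumptions, not figures from the post):

Code:
# Reproducing the rough 1 TB/year estimate above (all figures approximate).
bytes_per_year = 1e12                     # 1 TB of block data per year
blocks_per_year = 144 * 365               # ~144 blocks per day
print("avg block size: ~%.0f MB" % (bytes_per_year / blocks_per_year / 1e6))   # ~19 MB

users = 160e6
for tx_size in (250, 600):                # assumed typical transaction sizes in bytes
    tx_per_month_per_user = bytes_per_year / tx_size / users / 12
    print("%d-byte txs: ~%.1f tx per user per month" % (tx_size, tx_per_month_per_user))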

Obviously if we want the 7 billion people on earth to be able to use Bitcoin on-chain the limit would be much higher, but I think even the most extreme BU advocates don't see that as a goal.
legendary
Activity: 4410
Merit: 4766
You could argue that it may already be quite late/near impossible to make such 'drastic' changes. I've been giving this some thought, but I'm not entirely sure. I'd like to see some combination of the following:
1) % changes either up or down.
2) Adjustments that either align with difficulty adjustments (not sure if this makes things complicated or riskier, hence the latter) or monthly adjustments.
3) Fixed maximum cap. Since we can't predict what the state of the network and underlying technology/hardware will be far in the future, it is best to create a top-maximum cap a few years in the future. Yes, I know that this requires more changes later but it is better than nothing or 'risking'/hoping miners are honest, et al.

imagine a case where there were 2 limits (4 overall: 2 for nodes, 2 for pools)
a hard technical limit that everyone agrees on, and below that a preference limit (adjustable to the dynamics of demand).

now imagine
we call the hard technical limit (like the old consensus.h) the one that only moves when the NETWORK as a whole has done speed tests to say what is technically possible and come to a consensus.
EG 8mb has been seen as acceptable today by all speed tests.
the entire network agrees to stay below this, pools and nodes
as a safety measure its split up as 4mb for next 2 years then 8mb 2 years after that..

thus allowing for up to 2-4 years to tweak and make things leaner and more efficient and allow time for real world tech to advance
(fibre optic internet adoption and 5G mobile internet) before stepping the consensus.h forward again



then the preferential limit (further safety measure) that is adjustable and dynamic (policy.h) and keeps pools and nodes in line in a more fluid temporary adjustable agreement. to stop things moving too fast. but fluid if demand occurs

now then, nodes can flag the policy.h whereby if the majority of nodes preferences are at 2mb. pools consensus.h only goes to 1.999
however if under 5-25% of nodes are at 2mb and over 75% of nodes are above 2mb. then POOLS can decide on the orphan risk of raising their pools consensus.h above 2mb but below the majority node policy

also note: pools actual block making is below their(pools) consensus.h

lets make it easier to imagine.. with a picture

black line.. consensus.h. whole network RULE. changed by speed tests and real world tech / internet growth over time (the ultimate consensus)
red line.. node policy.h. node dynamic preference agreement. changed by dynamics or personal preference
purple line.. pools consensus.h. below network RULE. but affected by mempool demand vs nodes overall preference policy.h vs (orphan)risk
orange line.. pools policy.h below pools consensus.h


so imagine
2010
32mb too much, lets go for 1mb
2015
pools are moving their limit up from 0.75mb to 0.999mb
mid 2017
everyone agrees 2 years of 4mb network capability (then 2 years of 8mb network capability)
everyone agrees to a 2mb preference
pools agree their max capability will be below everyone's network capability but step up due to demand and node preference MAJORITY
pools preference(actual blocks built). below other limits but can affect the node minority to shift(EB)
mid 2019
everyone agrees 2 years of 8mb network capability then 2 years of 16mb network capability
some move preference to 4mb, some move under 3mb, some dont move
late 2019
MINORITY of nodes have their preference shifted by dynamics of (EB)
2020
MINORITY nodes manually change their preference to not be controlled by dynamics of (EB)
late 2020
MINORITY of nodes have their preference shifted by dynamics of (EB)
2021
MINORITY nodes manually change their preference to not be controlled by dynamics of (EB)
mid 2021
a decision is made whereby nodes preference and pools preference are safe to control blocks at X% scaling per difficulty adjustment period
pools preference(actual blocks built). below other limits but can shift the MINORITY nodes preference via (EB) should they lag behind

p.s
its just a brainfart. no point nitpicking the numbers or dates. just read the concept. i even made a picture to keep people's attention span entertained.

and remember all of these 'dynamic' fluid agreements are all extra safety limits BELOW the black network consensus limit
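A purely illustrative sketch of how those four stacked limits could interact (the names loosely echo the consensus.h / policy.h split described above; this is not real Bitcoin Core or BU code, and all numbers are placeholders):

Code:
# Illustrative only: four stacked limits, from the network-wide hard rule down to actual blocks.
NETWORK_HARD_LIMIT_MB = 4.0          # "black line": fixed rule, moved only by network-wide agreement

def pool_block_limit(node_prefs_mb, margin_mb=0.001):
    """Pool's own cap ("purple line"): just below the majority node preference ("red line")."""
    majority_pref = sorted(node_prefs_mb)[len(node_prefs_mb) // 2]   # median node preference
    return min(majority_pref - margin_mb, NETWORK_HARD_LIMIT_MB)

# Example: most nodes prefer 2mb, a minority lags behind at 1mb.
node_prefs = [1.0, 2.0, 2.0, 2.0, 2.0]
print("pool cap: %.3f MB" % pool_block_limit(node_prefs))   # 1.999; actual blocks ("orange line") stay below this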
legendary
Activity: 2674
Merit: 2965
Terminated.
I'd also consider tweaking mine to a lower percentage if that helped, or possibly even a flat 0.038MB if we wanted an absolute guarantee that there was no conceivable way it could increase by more than 1MB in the space of a year. 
I don't like fixed increases in particular either. Percentage based movements in both directions would be nice, but the primary problem with those is preventing the system from being gamed. Even with 10%, eventually this 10% is going to be a lot. Who's to say that at a later date, such movements would be technologically acceptable?
The thing to bear in mind is we'll never make any decision if we're too afraid to make a change because there's a possibility that it might need changing at a later date.  Plus, the good news is, it would only require a soft fork to restrict it later.  But yes, movements in both directions, increases and decreases alike would be ideal.  This also helps as a disincentive to game the system with artificial transactions because your change would be undone next diff period if demand isn't genuine.
You could argue that it may already be quite late/near impossible to make such 'drastic' changes. I've been giving this some thought, but I'm not entirely sure. I'd like to see some combination of the following:
1) % changes either up or down.
2) Adjustments that either align with difficulty adjustments (not sure if this makes things complicated or riskier, hence the latter) or monthly adjustments.
3) Fixed maximum cap. Since we can't predict what the state of the network and underlying technology/hardware will be far in the future, it is best to create a top-maximum cap a few years in the future. Yes, I know that this requires more changes later but it is better than nothing or 'risking'/hoping miners are honest, et al.
legendary
Activity: 3948
Merit: 3191
Leave no FUD unchallenged
I'd also consider tweaking mine to a lower percentage if that helped, or possibly even a flat 0.038MB if we wanted an absolute guarantee that there was no conceivable way it could increase by more than 1MB in the space of a year. 
I don't like fixed increases in particular either. Percentage based movements in both directions would be nice, but the primary problem with those is preventing the system from being gamed. Even with 10%, eventually this 10% is going to be a lot. Who's to say that at a later date, such movements would be technologically acceptable?

The thing to bear in mind is we'll never make any decision if we're too afraid to make a change because there's a possibility that it might need changing at a later date.  Plus, the good news is, it would only require a soft fork to restrict it later.  But yes, movements in both directions, increases and decreases alike would be ideal.  This also helps as a disincentive to game the system with artificial transactions because your change would be undone next diff period if demand isn't genuine.
legendary
Activity: 2674
Merit: 2965
Terminated.
I support SegWit  Grin
I forgot to mention in my previous post that this is a healthy stance to have, as the majority of the technology-oriented participants of the ecosystem are fully backing Segwit.

I could get behind Achow101's proposal (the link in that linuxfoundation text ended with an extraneous "." which breaks the link) if that one proves less contentious.
I think it does, as it doesn't initially reduce the block size. This is what made luke-jr's proposal extremely contentious and effectively useless.

I'd also consider tweaking mine to a lower percentage if that helped, or possibly even a flat 0.038MB if we wanted an absolute guarantee that there was no conceivable way it could increase by more than 1MB in the space of a year. 
I don't like fixed increases in particular either. Percentage based movements in both directions would be nice, but the primary problem with those is preventing the system from being gamed. Even with 10%, eventually this 10% is going to be a lot. Who's to say that at a later date, such movements would be technologically acceptable?
legendary
Activity: 3948
Merit: 3191
Leave no FUD unchallenged
I ask because I think DooMAD's "10%-blocksize-change-voting proposal" sounds interesting and if there is support by staff/respected community members/devs then it would be worth discussing it in a separate thread to elaborate a "final BIP".
The idea is worth discussing on its own, regardless of whether there is support by others. Do note that "support by staff" (if you're referring to Bitcointalk staff) is useless. Excluding achow101 and potentially dabs, the rest have very limited or just standard knowledge. Did you take a look at recent luke-jr HF proposal? Achow101 modified it by removing the initial size reduction. Read: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013544.html

I could get behind Achow101's proposal (the link in that linuxfoundation text ended with an extraneous "." which breaks the link) if that one proves less contentious. I'd also consider tweaking mine to a lower percentage if that helped, or possibly even a flat 0.038MB if we wanted an absolute guarantee that there was no conceivable way it could increase by more than 1MB in the space of a year. But recurring increases every diff period are unlikely if the total fees generated have to increase every time. We'd reach an equilibrium between fee pressure easing very slightly when it does increase and then slowly rising again as blocks start to fill once more at the new, higher limit.
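A rough sketch of the kind of rule being described (the 10% step, the fee condition and the flat 0.038MB alternative come from the posts above; everything else, including the function shape, is assumed):

Code:
# Sketch of a fee-gated, reversible per-difficulty-period limit adjustment (illustrative, not a BIP).
MAX_STEP = 0.10                      # at most +/-10% per difficulty period (~2 weeks)

def next_limit(current_limit_mb, fees_now, fees_last, blocks_mostly_full):
    if blocks_mostly_full and fees_now > fees_last:
        return current_limit_mb * (1 + MAX_STEP)   # genuine, paying demand: allow a modest increase
    if not blocks_mostly_full:
        return current_limit_mb * (1 - MAX_STEP)   # demand fell away (or was artificial): step back down
    return current_limit_mb

# The flat 0.038MB variant bounds growth to roughly 1MB/year:
# ~26 difficulty periods per year * 0.038MB is about 0.99MB.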
legendary
Activity: 2674
Merit: 2965
Terminated.
I do have to add that, while I think that it would be still extremely hard to gather 90-95% consensus on both ideas, I think both would reach far higher and easier support than either Segwit or BU.
I don't understand that statement. Are you talking about DooMAD's idea (modified BIP100+BIP106) or the compromise proposed by "ecafyelims", or both?
Both.

I ask because I think DooMAD's "10%-blocksize-change-voting proposal" sounds interesting and if there is support by staff/respected community members/devs then it would be worth discussing it in a separate thread to elaborate a "final BIP".
The idea is worth discussing on its own, regardless of whether there is support by others. Do note that "support by staff" (if you're referring to Bitcointalk staff) is useless. Excluding achow101 and potentially dabs, the rest have very limited or just standard knowledge. Did you take a look at recent luke-jr HF proposal? Achow101 modified it by removing the initial size reduction. Read: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013544.html

if blocks grow to say 8mb we just keep tx sigops BELOW 16,000 (we dont increase tx sigop limits when block limits rise).. thus no problem.
That's not how this works.

https://bitco.in/forum/threads/buip033-passed-parallel-validation.1545/

For those unwilling to click through:

Quote
BUIP033: Parallel Validation
Proposer: Peter Tschipper
Submitted on: 10/22/2016

Summary:

Essentially Parallel Validation is a simple concept. Rather than validating each block within the main processing thread, we instead create a separate thread to do the block validation. If more than one block arrives to be processed then we create yet another thread. There are currently up to 4 parallel block processing threads available making a big block DDOS attack impossible. Furthermore, if any attacker were somehow able to jam all 4 processing threads and another block arrived, then the processing for the largest block would be interrupted allowing the smaller block to proceed, unless the larger block or blocks have most proof of work. So only the most proof of work and smallest blocks will be allowed to finish in such a case.

If there are multiple blocks processing at the same time, when one of the blocks wins the race to complete, then the other threads of processing are interrupted and the winner will be able to update the UTXO and advance the chain tip. Although the other blocks that were interrupted will still be stored on disk in the event of a re-org.
Which effectively.. solves nothing.
full member
Activity: 154
Merit: 100
***crypto trader***
I support SegWit  Grin
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
https://bitco.in/forum/threads/buip033-passed-parallel-validation.1545/

For those unwilling to click through:

Quote
BUIP033: Parallel Validation
Proposer: Peter Tschipper
Submitted on: 10/22/2016

Summary:

Essentially Parallel Validation is a simple concept. Rather than validating each block within the main processing thread, we instead create a separate thread to do the block validation. If more than one block arrives to be processed then we create yet another thread. There are currently up to 4 parallel block processing threads available making a big block DDOS attack impossible. Furthermore, if any attacker were somehow able to jam all 4 processing threads and another block arrived, then the processing for the largest block would be interrupted allowing the smaller block to proceed, unless the larger block or blocks have most proof of work. So only the most proof of work and smallest blocks will be allowed to finish in such a case.

If there are multiple blocks processing at the same time, when one of the blocks wins the race to complete, then the other threads of processing are interrupted and the winner will be able to update the UTXO and advance the chain tip. Although the other blocks that were interrupted will still be stored on disk in the event of a re-org.

...

Thanks I wasn't aware of that. Probably something worth offering in conjunction with BIP102 then.
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
im starting to see what game jbreher is playing.
...

Now you just look silly. I'll leave it at that.
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
Of course, if a persistent repeated sequence of such blocks were to be somehow mined back-to-back, that might slow transaction processing to a crawl*.

That is, if no other miner bothered to mine a competing block. Which, of course, is what a rational miner would do in such a situation. For then he would reap the rewards of a more-quickly validating block. (That would be the coinbase reward for solving a block).

The 'excessivity' solves itself. Through natural incentive of rational self-interest.
You keep talking about miners mining this more quickly validating block... there is no code currently that can try to validate two different blocks concurrently and pick the one that validates faster. The first one that comes in will be under validation while any other blocks come in wait before they can be validated so unless someone has a rewrite that does what you claim, the problem still exists. First block that hits will always win.

No disrespect intended. But should excessively-long-to-validate blocks ever become significant, mining using an implementation that does not perform parallel validation is a guaranteed route to bankruptcy.

"no code" - you sound pretty sure of yourself there. It may even be the case ... right up until the point in time that it is not.
Right, there is no *public* code that I'm aware of, and I do hack on bitcoind for my own purposes, especially the mining components so I'm quite familiar with the code. As for "up until the point in time that it is not", well that's the direction *someone* should take with their code if they wish to not pursue other fixes for sigop scaling issues as a matter of priority then - if they wish to address the main reason core is against an instant block size increase. Also note that header first mining, which most Chinese pools do (AKA SPV/spy mining), and as proposed for BU, will have no idea what is in a block and can never choose the one with less sigops.

https://bitco.in/forum/threads/buip033-passed-parallel-validation.1545/

For those unwilling to click through:

Quote
BUIP033: Parallel Validation
Proposer: Peter Tschipper
Submitted on: 10/22/2016

Summary:

Essentially Parallel Validation is a simple concept. Rather than validating each block within the main processing thread, we instead create a separate thread to do the block validation. If more than one block arrives to be processed then we create yet another thread. There are currently up to 4 parallel block processing threads available making a big block DDOS attack impossible. Furthermore, if any attacker were somehow able to jam all 4 processing threads and another block arrived, then the processing for the largest block would be interrupted allowing the smaller block to proceed, unless the larger block or blocks have most proof of work. So only the most proof of work and smallest blocks will be allowed to finish in such a case.

If there are multiple blocks processing at the same time, when one of the blocks wins the race to complete, then the other threads of processing are interrupted and the winner will be able to update the UTXO and advance the chain tip. Although the other blocks that were interrupted will still be stored on disk in the event of a re-org.

...
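To make the BUIP033 idea above concrete, a toy sketch of thread-per-block validation where the first block to finish validating advances the tip (the interruption of larger blocks when all threads are busy is omitted; this is a concept illustration, not Unlimited's actual code):

Code:
# Toy illustration of parallel block validation: each competing block is validated in its
# own thread, and the first one to finish (typically the smaller/cheaper block) wins the tip.
import threading

chain_tip = None
tip_lock = threading.Lock()

def validate(block):
    # stand-in for script/signature checking; bigger blocks take longer to validate
    for _ in range(block["size_mb"] * 100000):
        pass
    return True

def validate_and_connect(block):
    global chain_tip
    if validate(block):
        with tip_lock:
            if chain_tip is None:            # first fully-validated block wins the race
                chain_tip = block["hash"]

blocks = [{"hash": "small_block", "size_mb": 1}, {"hash": "huge_block", "size_mb": 50}]
threads = [threading.Thread(target=validate_and_connect, args=(b,)) for b in blocks]
for t in threads: t.start()
for t in threads: t.join()
print("chain tip advanced by:", chain_tip)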
legendary
Activity: 4410
Merit: 4766
im starting to see what game jbreher is playing.

now its public that segwit cant achieve the sigop fix. he is now full on downplaying how bad sigops actually are...
simply to downplay segwits promises by subtly saying 'yea segwit dont fix it, but it dont matter because there has never been a sigop problem'

rather than admit segwit fails to meet a promise. its twisted to be 'it dont matter that it doesnt fix it'.

much like luke JR downplaying how much of a bitcoin contributor he is at the consensus agreement. by backtracking and saying he signed as a human not a bitcoin contributor.. much like changing his hat and pretending to be just a janitor to get out of the promise to offer a dynamic blocksize with core.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Of course, if a persistent repeated sequence of such blocks were to be somehow mined back-to-back, that might slow transaction processing to a crawl*.

That is, if no other miner bothered to mine a competing block. Which, of course, is what a rational miner would do in such a situation. For then he would reap the rewards of a more-quickly validating block. (That would be the coinbase reward for solving a block).

The 'excessivity' solves itself. Through natural incentive of rational self-interest.
You keep talking about miners mining this more quickly validating block... there is no code currently that can try to validate two different blocks concurrently and pick the one that validates faster. The first one that comes in will be under validation while any other blocks come in wait before they can be validated so unless someone has a rewrite that does what you claim, the problem still exists. First block that hits will always win.

No disrespect intended. But should excessively-long-to-validate blocks ever become significant, mining using an implementation that does not perform parallel validation is a guaranteed route to bankruptcy.

"no code" - you sound pretty sure of yourself there. It may even be the case ... right up until the point in time that it is not.
Right, there is no *public* code that I'm aware of, and I do hack on bitcoind for my own purposes, especially the mining components so I'm quite familiar with the code. As for "up until the point in time that it is not", well that's the direction *someone* should take with their code if they wish to not pursue other fixes for sigop scaling issues as a matter of priority then - if they wish to address the main reason core is against an instant block size increase. Also note that header first mining, which most Chinese pools do (AKA SPV/spy mining), and as proposed for BU, will have no idea what is in a block and can never choose the one with less sigops.
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
Of course, if a persistent repeated sequence of such blocks were to be somehow mined back-to-back, that might slow transaction processing to a crawl*.

That is, if no other miner bothered to mine a competing block. Which, of course, is what a rational miner would do in such a situation. For then he would reap the rewards of a more-quickly validating block. (That would be the coinbase reward for solving a block).

The 'excessivity' solves itself. Through natural incentive of rational self-interest.
You keep talking about miners mining this more quickly validating block... there is no code currently that can try to validate two different blocks concurrently and pick the one that validates faster. The first one that comes in will be under validation while any other blocks come in wait before they can be validated so unless someone has a rewrite that does what you claim, the problem still exists. First block that hits will always win.

No disrespect intended. But should excessively-long-to-validate blocks ever become significant, mining using an implementation that does not perform parallel validation is a guaranteed route to bankruptcy.

"no code" - you sound pretty sure of yourself there. It may even be the case ... right up until the point in time that it is not.
legendary
Activity: 4410
Merit: 4766
also worth noting

miners dont care about blocksize.

an ASIC holds no hard drive. an asic receives a sha256 hash and a target.
all an asic sends out is a second sha256 hash that meets a certain criteria of X number of 0's at the start

miners dont care if its 0.001mb or 8gb block..
block data still ends up as a sha hash

and a sha hash is all a miner cares about



pools (the managers and propagators of block data and hashes) do care what goes into a block

when talking about blockdata, aim your 'miner' argument at pools ... not the miner (asic)
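For readers who have not seen it spelled out: the hashing hardware only ever grinds on the fixed 80-byte block header, so block size is invisible to it. A minimal sketch of that proof-of-work check (dummy header values and a toy difficulty; real hardware works on the header midstate, but the principle is the same):

Code:
# Minimal proof-of-work illustration: double-SHA256 over an 80-byte header vs a target.
# The header values are dummies and the target is far easier than any real one.
import hashlib, struct

def header_hash(version, prev_hash, merkle_root, timestamp, bits, nonce):
    header = struct.pack("<L32s32sLLL", version, prev_hash, merkle_root, timestamp, bits, nonce)
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

target = 2 ** 248                 # toy target (~8 leading zero bits); real targets are vastly harder
prev_hash = b"\x00" * 32          # dummy previous block hash
merkle_root = b"\x11" * 32        # the block's transactions only enter via this 32-byte merkle root

nonce = 0
while int.from_bytes(header_hash(1, prev_hash, merkle_root, 1496275200, 0x1d00ffff, nonce), "little") >= target:
    nonce += 1
print("found nonce:", nonce)

Whether the block behind that merkle root is 0.001mb or 8gb, the hashing work is identical; the cost of bigger blocks falls on the pools and nodes that assemble, validate and propagate the full block data.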
legendary
Activity: 1512
Merit: 1012
This and other similar mixes of BIPs have been suggested... If it scales, I'm down for it or pretty much anything else.

I have read DooMAD's proposal now and I like it a bit. It would give less power to miners, as they can only vote for small block size increases, but it would eliminate the need for future hardforks. The only problem I see is that it could encourage spam attacks (to give miners an incentive to vote for higher block sizes), but spam attacks would be even more expensive than today because of the "transaction fees being higher than in the last period" requirement, so they are not something everyone could afford.

Very interesting post. The other issue I see here is that such code doesn't exist, thus isn't tested, so it can't be deployed anytime soon.
