
Topic: SegWit + Variable and Adaptive (but highly conservative) Blockweight Proposal (Read 2090 times)

legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
Yes. I think what we need is code - an actual implementation draft - and a real BIP proposal, as soon as we can. The UASF/MAHF polarization doesn't look good. Unfortunately, I am the wrong person for this (I only know a little Python, no C++ at all).

Any news with respect to the "orphaning risk problem"? I have looked at Luke's BIP but there's nothing about it there. And unfortunately this is where my knowledge of the issue reaches its limit.

Maybe it would even be OK to start with a BIP and ignore that issue for now, or draft two versions (one with the decrease option, the other without it).
legendary
Activity: 3948
Merit: 3191
Leave no FUD unchallenged
With the Miner-activated fork vs User-activated fork situation looming on the horizon, time is running out if you don't want a fixed blocksize "solution", which will undoubtedly make us revisit this same horrific debacle when we start hitting another purpose-built wall later.  Whoever activates the fork, either blinkered and shortsighted outcome is foolish.

Either we reach an intelligent compromise soon, or we descend into chaos and farce once again in the future.

It's time to decide.
legendary
Activity: 4410
Merit: 4766
But I'm reluctant to drop the reduction aspect in the same way I'm reluctant to adopt Carlton's fixed upper cap.  I sincerely doubt you'd accept his idea and there's no way he'd accept yours, heh.  Both views are a bit too far towards one of the polarised extremes.  In order to be a compromise, I'm trying to steer this thing somewhere towards a happy middle-ground.

Carlton's 'infinite growth'... or, as the Reddit-script FUDster buzzword puts it, 'gigabytes by midnight'.
I facepalm at that.

Imagine it this way.
It's 2013: consensus is 1MB, but policy is 0.5MB.
Now imagine that 0.5MB was not simply a decision pools made alone, but something nodes had some control over, to ensure it didn't jump above 0.5MB too fast;
where nodes had a speed-test benchmark mechanism which publicised what they could cope with.
Nodes wouldn't necessarily orphan blocks above 0.5MB, but would at least highlight to pools that they should slow down if node capability wasn't healthy.

E.g., 2018 new rules:
8MB consensus;
nodes publicise 4MB capability.

Pools make blocks below 4MB, in healthy increments from 1MB to 4MB over time (e.g. 0.25MB/year, roughly like 2011-2015), where pools know that what they do won't cause risks to nodes, and thus won't cause orphan drama or drops in node count.
If pools know what the network can handle, then pools know what not to risk (a rough sketch of this follows below).
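Here is roughly how that could look, as a hypothetical Python sketch (none of these names exist in any real pool or node software): nodes publicise a benchmarked capability figure, and pools keep their block templates under a conservative reading of that figure, growing the soft cap in small steps and never past the consensus limit.

Code:
# Hypothetical sketch only - not real pool or node code.
CONSENSUS_LIMIT_MB = 8.0               # e.g. the 2018 scenario above

def network_capability_mb(published_mb):
    """Take a conservative view of what nodes say they can cope with."""
    if not published_mb:
        return CONSENSUS_LIMIT_MB / 2  # assume half the consensus limit if nothing is published
    ranked = sorted(published_mb)
    return ranked[len(ranked) // 10]   # roughly the 10th percentile, so weaker nodes still count

def pool_soft_cap_mb(published_mb, current_soft_cap_mb, yearly_step_mb=0.25):
    """Grow the pool's template cap in small steps, never past what nodes
    publicise they can handle, and never past the consensus limit."""
    ceiling = min(network_capability_mb(published_mb), CONSENSUS_LIMIT_MB)
    return min(current_soft_cap_mb + yearly_step_mb, ceiling)

For example, with nodes publicising 3.5-6MB, the pool's cap would sit around the 3.5MB end regardless of how fast the step is, because the weakest healthy nodes set the ceiling.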


Separate rant:
What I truly laugh at is that while the 'gigabytes by midnight' FUDsters are screaming 'it will kill full node count', they are not discussing how many full nodes are being lost to pruned and no-witness (stripped/filtered/downstream) features, which have been added and declared 'all good and safe'.
legendary
Activity: 4410
Merit: 4766
The simple solution is a better fee priority formula.
That way you don't have to decrease the blocksize, which hurts everyone should demand pick up again in a fortnight's time and hit the decreased wall (that's just silly).

But if the block stayed at 4MB and was 'empty', it would cost a spammer a hell of a lot more to fill it, compared to a block that decreased to under 4MB.
Decreasing the blocksize means a spammer can fill the block with fewer transactions, which is stupid, as well as bringing all the complexities of trying to avoid the rescan/orphan issues I mentioned before.

A better fee priority mechanism ensures spammers pay more for spamming every block while not causing issues for normal folk.

Well, I did ask:

Was Litecoin's spam fix ever implemented in Bitcoin?  And if not, could we look at implementing that as part of this proposal?

which is related to fees making it harder to spam, and then the thread died for almost 3 days.   Grin

But yeah, let's look at the fee priority mechanism as well.  Each level of security we can add makes it that bit more robust.  

The value of a TX is meaningless. 'Rich' spammers got around the old fee mechanism by having a TX where one output held 10k BTC and the other outputs of the same TX held 1 sat each, which allowed them to pay no fee because the old formula was based on value.

This ended up hurting everyone else, though. Especially people from third-world countries who were not spammers and did not have 10k BTC to satisfy the value test; they just innocently wanted to send more than a couple of hours' labour (only a few cents) but ended up paying more in fees, while malicious spammers paid no fee because they simply worked around the 'value' test.

All that matters is how 'fresh' the coins are and how bloated the TX is.

I see no reason at all for ANYONE to need a 10%-20% allocation of a block for just one TX.
So things like
4k txsigops of 20k blocksigops
or
16k txsigops of 80k blocksigops
are literally asking for trouble (5 TXs fill the block).

I suggest that if the block is going to be 4MB (80k blocksigops), then make txsigops 2k... and make sure that even if blocksigops rises, txsigops does not. That way each increase makes it harder.

Also, the 100kb 'larger than' TX data rule: again, who the hell deserves 10% of block space? Bring that down to 10kb or less, and keep it down even if the blocksize increases.

That way it costs more to fill a block (see the sketch below).
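As a very rough Python sketch of that policy idea (hypothetical names, with the numbers taken from the post above; this is not Bitcoin Core's actual standardness code), the point is simply that the per-transaction limits stay fixed even when the block-level limits rise:

Code:
# Hypothetical policy sketch - not Bitcoin Core code.
MAX_BLOCK_SIGOPS = 80_000   # assumed block-level budget for a 4MB block
MAX_TX_SIGOPS    = 2_000    # stays fixed even if MAX_BLOCK_SIGOPS rises
MAX_TX_BYTES     = 10_000   # down from the old 100kb 'larger than' rule

def is_acceptable_tx(tx_bytes, tx_sigops):
    """Reject transactions that would hog block space or the sigop budget."""
    return tx_sigops <= MAX_TX_SIGOPS and tx_bytes <= MAX_TX_BYTES

With those numbers a single transaction can claim at most 2.5% of the sigop budget (2,000 of 80,000) instead of 20%, so filling a block takes at least 40 transactions' worth of fees rather than 5.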
legendary
Activity: 3948
Merit: 3191
Leave no FUD unchallenged
The simple solution is a better fee priority formula.
That way you don't have to decrease the blocksize, which hurts everyone should demand pick up again in a fortnight's time and hit the decreased wall (that's just silly).

But if the block stayed at 4MB and was 'empty', it would cost a spammer a hell of a lot more to fill it, compared to a block that decreased to under 4MB.
Decreasing the blocksize means a spammer can fill the block with fewer transactions, which is stupid, as well as bringing all the complexities of trying to avoid the rescan/orphan issues I mentioned before.

A better fee priority mechanism ensures spammers pay more for spamming every block while not causing issues for normal folk.

Well, I did ask:

Was Litecoin's spam fix ever implemented in Bitcoin?  And if not, could we look at implementing that as part of this proposal?

which is related to fees making it harder to spam, and then the thread died for almost 3 days.   Grin

But yeah, let's look at the fee priority mechanism as well.  Each level of security we can add makes it that bit more robust.  But I'm reluctant to drop the reduction aspect in the same way I'm reluctant to adopt Carlton's fixed upper cap.  I sincerely doubt you'd accept his idea and there's no way he'd accept yours, heh.  Both views are a bit too far towards one of the polarised extremes.  In order to be a compromise, I'm trying to steer this thing somewhere towards a happy middle-ground.
legendary
Activity: 4410
Merit: 4766
Regarding the problem of "decreasing" the maximum block size eventually: I have thought a bit about it. I'm not an expert, but I also think it would be desirable to decrease the maximum block size when blocks are far from being full, to disincentivize spam attacks. It would, however, not be a show-stopper, because the proposal is so conservative that a spam attack would be very, very expensive anyway.

Regarding franky1's orphan risk because of rescanning nodes: I think there is no other way than what aklan said, i.e. to store the maxblocksize changes in the blockchain so that nodes are aware of the changes when they rescan. There would perhaps be another possibility: make a conditional decision ("if CheckedBlockSize > ActualMaxBlockSize and CheckedBlockHeight < (ActualBlockHeight - 2016) then AcceptBlock") so nodes can accept larger blocks when they rescan and the block is more than one difficulty period below the actual block height. But I don't know if this introduces new attack vectors, like nodes passing a fake ActualBlockHeight value.

There must be a simple fix, since (at least as far as I remember seeing) no one raised the issue when BIP106 was originally proposed, or, more particularly, when lukejr proposed reducing the blocksize.  I'm sure a dev wouldn't have made a proposal with a gaping hole in it.  Someone would have voiced concerns well before this point if it were a showstopper.  Obviously this wouldn't work as a soft fork, so if all nodes are upgraded, it stands to reason we can tell them not to reject blocks that were valid at the time.

As for blocks being newly appended to the chain at the moment of a reduction, miners could voluntarily operate a soft cap of .01 base and .03 witness under the current threshold if they wanted to play it safe.  Effectively they could operate two weeks in lieu of the actual limit.  Plus that's only an issue if the blocks are full to the brim at the time.

There is a simple fix, without all the kludgy code to drop the blocksize and to make sure resyncing doesn't cause orphaning issues when the blocksize drops.

After all, decreasing the blocksize hurts everyone should demand pick up again in a fortnight's time and hit the decreased wall (so that's just silly), on top of all the complexities of trying to avoid the rescan/orphan issues I mentioned before.

But if the block stayed at 4MB and was 'empty', it would cost a spammer a hell of a lot more to fill it, compared to a block that decreased to under 4MB.


The solution is simple: a new fee priority formula.
A better fee priority mechanism ensures spammers pay more for spamming every block while not causing issues for normal folk.

Here is one example - not perfect, but think about it.
Imagine we decided it's acceptable for people to have a way to get priority if they have a lean TX and signal that they only want to spend funds once a day (a reasonable expectation),
where
if they want to spend more often, costs rise;
if they want a bloated TX, costs rise.
At which point things like LN would become a viable option for those who are innocent but need to spend regularly.

That then allows those who just pay their rent once a month or buy groceries every couple of days to be fine using on-chain Bitcoin, while the cost of trying to spam the network (every block) becomes expensive enough that the spammer would be better off using LN (for things like faucet raiding or day trading every 1-10 minutes).

So let's think about a priority fee that's not about rich vs poor (like the old one was), but about reducing respend spam and bloat.

Let's imagine we actually use the TX age combined with CLTV to signal the network that a user is willing to add some maturity time if their TX age is under a day: they signal they want it confirmed, but allow themselves to be locked out of spending for an average of 24 hours (that's what CLTV does).

And the bloat of the TX relative to the blocksize has some impact too, rather than the old formula, which was more about the value of the TX.


As you can see, it's not about TX value; it's about bloat and age.
This way:
those not wanting to spend more than once a day who don't bloat the blocks get preferential treatment on-chain ($0.01);
if you are willing to wait a day but you're taking up 1% of the blockspace, you pay more ($0.44);
if you want to be a spammer spending every block, you pay the price ($1.44);
and if you want to be a total ass-hat who is both bloated and respending EVERY BLOCK, you pay the ultimate price ($63.72).

Note this is not perfect, but think about it.
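To make the shape of that concrete, here is a rough, hypothetical Python sketch of an age-and-bloat fee. The constants are placeholders chosen so the outputs land near the example prices above; this is not franky1's actual formula.

Code:
# Hypothetical fee-priority sketch - placeholder constants, not a real proposal.
BLOCK_SIZE_BYTES = 4_000_000    # assumed 4MB block
BASE_FEE_USD     = 0.01         # lean TX, coins at least a day old

def priority_fee(tx_bytes, input_age_blocks, adds_one_day_cltv):
    """Fee scales with how fresh the inputs are and how much block space the TX eats."""
    # Age factor: spending coins younger than ~1 day (144 blocks) without
    # signalling a 1-day CLTV maturity marks the TX as a frequent respender.
    if input_age_blocks >= 144 or adds_one_day_cltv:
        age_factor = 1.0
    else:
        age_factor = 144.0 / max(input_age_blocks, 1)

    # Bloat factor: grows with the share of the block the TX occupies.
    bloat_factor = 1.0 + 4_400 * (tx_bytes / BLOCK_SIZE_BYTES)

    return BASE_FEE_USD * age_factor * bloat_factor

priority_fee(250, 144, False)     # lean, once-a-day spender:          ~$0.01
priority_fee(40_000, 144, False)  # waits a day but uses 1% of block:  ~$0.45
priority_fee(250, 1, False)       # lean but respent every block:      ~$1.84
priority_fee(40_000, 1, False)    # bloated AND respent every block:   ~$64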

In short, decreasing the blocksize consensus can cause more issues for everyone and requires more coding and more kludge, whereas a fee priority makes frequent spammers pay more.
The fee priority also makes sure that people who are innocent and people who are guilty of spamming DON'T pay the same penalty, being fair to the innocent who care about the transactions they make, and penalising the ones who don't care and just want to respend as fast as possible but refuse to use LN.
legendary
Activity: 3948
Merit: 3191
Leave no FUD unchallenged
Regarding franky1's orphan risk because of rescanning nodes: I think there is no other way than what aklan said, i.e. to store the maxblocksize changes in the blockchain so that nodes are aware of the changes when they rescan. There would perhaps be another possibility: make a conditional decision ("if CheckedBlockSize > ActualMaxBlockSize and CheckedBlockHeight < (ActualBlockHeight - 2016) then AcceptBlock") so nodes can accept larger blocks when they rescan and the block is more than one difficulty period below the actual block height. But I don't know if this introduces new attack vectors, like nodes passing a fake ActualBlockHeight value.

There must be a simple fix, since (at least as far as I remember seeing) no one raised the issue when BIP106 was originally proposed, or, more particularly, when lukejr proposed reducing the blocksize.  I'm sure a dev wouldn't have made a proposal with a gaping hole in it.  Someone would have voiced concerns well before this point if it were a showstopper.  Obviously this wouldn't work as a soft fork, so if all nodes are upgraded, it stands to reason we can tell them not to reject blocks that were valid at the time.

As for blocks being newly appended to the chain at the moment of a reduction, miners could voluntarily operate a soft cap of .01 base and .03 witness under the current threshold if they wanted to play it safe.  Effectively they could operate two weeks in lieu of the actual limit.  Plus that's only an issue if the blocks are full to the brim at the time.
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
Regarding the problem of "decreasing" the maximum block size eventually: I have thought a bit about it. I'm not an expert, but I also think it would be desirable to decrease the maximum block size when blocks are far from being full, to disincentivize spam attacks. It would, however, not be a show-stopper, because the proposal is so conservative that a spam attack would be very, very expensive anyway.

Regarding franky1's orphan risk because of rescanning nodes: I think there is no other way than what aklan said, i.e. to store the maxblocksize changes in the blockchain so that nodes are aware of the changes when they rescan. There would perhaps be another possibility: make a conditional decision ("if CheckedBlockSize > ActualMaxBlockSize and CheckedBlockHeight < (ActualBlockHeight - 2016) then AcceptBlock") so nodes can accept larger blocks when they rescan and the block is more than one difficulty period below the actual block height. But I don't know if this introduces new attack vectors, like nodes passing a fake ActualBlockHeight value.
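For reference, the conditional idea as a minimal Python sketch (hypothetical helper names, not Bitcoin Core code): an oversized historical block is still accepted on rescan provided it sits more than one difficulty period behind the node's own view of the tip.

Code:
# Hypothetical sketch of the conditional acceptance rule above.
DIFFICULTY_PERIOD = 2016

def accept_on_rescan(checked_block_size, checked_block_height,
                     actual_max_block_size, actual_block_height):
    if checked_block_size <= actual_max_block_size:
        return True
    # Oversized by today's (reduced) rules, but deep enough in the chain
    # that a larger limit was in force when it was mined.
    return checked_block_height < (actual_block_height - DIFFICULTY_PERIOD)

The open question from the post stands: ActualBlockHeight has to come from the node's own chain state rather than from a peer, or a peer could lie about it.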
legendary
Activity: 924
Merit: 1000
If a 1MB block had one 250kB TX and a 2MB block had one 250kB TX, surely the file size of each block is the same? Or not? Empty space doesn't take up any bytes??

Is there any point in reducing the maxblocksize?

Hope that clears it up.  Smiley

You haven't answered my questions.

In the scenario you've described, both blocks would be the same size.  Empty and unused space indeed doesn't create any additional data requirements.  But I did give at least 3 reasons why there is a point in being able to reduce the maximum blocksize if the space isn't being used.  Unused space has the potential to be abused.  We want to limit the potential for abuse.

By the same token, you could ask if there is any point in having a maximum blocksize at all.  It essentially amounts to the same thing.  Smaller is generally considered safer.

That is what I want to know.
legendary
Activity: 3948
Merit: 3191
Leave no FUD unchallenged
If a 1MB block had one 250kB TX and a 2MB block had one 250kB TX, surely the file size of each block is the same? Or not? Empty space doesn't take up any bytes??

Is there any point in reducing the maxblocksize?

Hope that clears it up.  Smiley

You haven't answered my questions.

In the scenario you've described, both blocks would be the same size.  Empty and unused space indeed doesn't create any additional data requirements.  But I did give at least 3 reasons why there is a point in being able to reduce the maximum blocksize if the space isn't being used.  Unused space has the potential to be abused.  We want to limit the potential for abuse.

By the same token, you could ask if there is any point in having a maximum blocksize at all.  It essentially amounts to the same thing.  Smaller is generally considered safer.
legendary
Activity: 924
Merit: 1000
If a 1MB block had one 250kB TX and a 2MB block had one 250kB TX, surely the file size of each block is the same? Or not? Empty space doesn't take up any bytes??

Is there any point in reducing the maxblocksize?

Hope that clears it up.  Smiley

You haven't answered my questions.
legendary
Activity: 1778
Merit: 1008
A very detailed and clear response. Thanks!
legendary
Activity: 3948
Merit: 3191
Leave no FUD unchallenged
If a 1MB block had one 250kB TX and a 2MB block had one 250kB TX, surely the file size of each block is the same? Or not? Empty space doesn't take up any bytes??

Is there any point in reducing the maxblocksize?
I think it's meant to be like the rising and (rare, but possible) falling of the difficulty: it can adjust up or down as needed.

Though as I sit here typing this, I'm having a hard time thinking of a real reason it would need to get smaller. The difficulty, of course, needs to match the available hash power to provide security and keep the prescribed block times, but since we can already make and confirm smaller blocks if we want to, I don't know that a reduction in max size is needed.

I'm not a coder of any kind, though. Don't take my word for it.

There are actually a few reasons for that decision:

Partly it's the simple fact that we don't know what the future holds or what the levels of demand may be as time goes by.  So if we're aiming for the proposal to be adaptable to demand in real time, it makes sense that we don't want to arbitrarily limit the types of situations it can adapt to.  

Then, as previously mentioned, there are the disincentives to spam, or to game the system with artificial volume.  If demand isn't legitimate, a reduction will negate any fraudulent increases as soon as the attack can't be maintained.  We don't want to encourage spam.  That's a huge no-no.  While miners can certainly choose voluntarily to make smaller blocks, it should be noted there is a clear financial benefit to be gained from cramming in more transactions and collecting more fees as a result.  Gaming the system to reach a higher blocksize and squeeze in more transactions in this manner is also a no-no.  We want natural and organic growth, not manipulation.

Also, many deem fee pressure to be an important characteristic of Bitcoin.  In an ideal world it should have a fair amount of consistency and not fluctuate too wildly.  While we obviously don't want fees to be too high, at the same time, we don't want them to be too low, either.  If the space available exceeds demand, fees could potentially diminish, which could sway the alignment of incentives for miners.  This particular issue is a big problem with all of the "whole number" blocksize proposals, which generally involve at least doubling the blocksize and completely obliterating any kind of fee pressure.  As such, changes should be smaller and more frequent.

And lastly, the legitimate concerns over the costs of bandwidth for full nodes as the total blocksize increases.  We have to take every reasonable precaution to prevent any large increases that could potentially result in a drop in node count.  Plus, there have been enough instances in this increasingly ugly scaling debate where one side appears to be shouting over the other and not taking into consideration opposing views.  With this proposal, I'd hope those of both sides of the argument at least feel their voice is being heard.  

Hope that clears it up.  Smiley
legendary
Activity: 1778
Merit: 1008
I think it's meant to be like the rising and (rare, but possible) falling of the difficulty: it can adjust up or down as needed.

Though as I sit here typing this, I'm having a hard time thinking of a real reason it would need to get smaller. The difficulty, of course, needs to match the available hash power to provide security and keep the prescribed block times, but since we can already make and confirm smaller blocks if we want to, I don't know that a reduction in max size is needed.

I'm not a coder of any kind, though. Don't take my word for it.
legendary
Activity: 924
Merit: 1000
Else IF more than 90% of block's size, found in the first 2016 of the last difficulty period, is less than 50% MaxBlockSize
    THEN BaseMaxBlockSize = BaseMaxBlockSize -0.01MB
      WitnessMaxBlockSize = WitnessMaxBlockSize -0.03MB


If a 1MB block had one 250kB TX and a 2MB block had one 250kB TX, surely the file size of each block is the same? Or not? Empty space doesn't take up any bytes??

Is there any point in reducing the maxblocksize?
legendary
Activity: 3948
Merit: 3191
Leave no FUD unchallenged
How do you differentiate real demand from spam demand?

If someone like Ver decides to dump millions of dollars' worth of spam transactions in order to make the blockchain huge, how do you stop it? If it's automated, the blockchain will just adapt to this demand (even if it's fake), centralizing the nodes as a result.

I just don't see how flexible blocksize schemes aren't exploitable.

Define Spam first.

 Grin

I think that's another one of those things that everyone finds difficult to agree on.  The safest definition for me would be deliberate and repeated transactions with no intention to transfer any value, but it's not always easy to recognise such transactions if the culprit is determined to cover their tracks.  Some attackers are more blatant than others.  But equally it's easy to lose context and assume that all small value transactions or transactions with low fees are spam, but this isn't a safe assumption due to users in less economically wealthy parts of the world getting involved.  All we can really do is minimise the motivation to engage in deliberate spamming by making it expensive or difficult (or both) to do.

Was Litecoin's spam fix ever implemented in Bitcoin?  And if not, could we look at implementing that as part of this proposal?
hv_
legendary
Activity: 2534
Merit: 1055
Clean Code and Scale
I like the idea of adaptive blocksizes.  

I also favor the idea that capacity should always outpace demand, so I think the increases have to be substantially greater than this.  I think Ethereum does it in a smart way.  If I am correct, it is something like 20% more than a moving average of actual capacity used.  This ensures blocks keep getting bigger as needed.  
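As a hypothetical Python sketch of that kind of scheme (the 20% headroom and the window length are illustrative assumptions, not Ethereum's actual parameters or code):

Code:
# Illustrative sketch only - not Ethereum's gas-limit algorithm.
from collections import deque

WINDOW   = 2016            # blocks in the moving average (one difficulty period)
HEADROOM = 1.20            # keep the cap ~20% above recent real usage

recent_usage = deque(maxlen=WINDOW)   # bytes actually used per block

def next_max_block_size(current_max):
    """Target a cap slightly above the moving average of actual usage."""
    if not recent_usage:
        return current_max
    avg_used = sum(recent_usage) / len(recent_usage)
    return int(avg_used * HEADROOM)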

How do you differentiate real demand from spam demand?

If someone like Ver decides to dump millions of dollars' worth of spam transactions in order to make the blockchain huge, how do you stop it? If it's automated, the blockchain will just adapt to this demand (even if it's fake), centralizing the nodes as a result.

I just don't see how flexible blocksize schemes aren't exploitable.

Define Spam first.

 Grin
legendary
Activity: 1512
Merit: 1012
I vaguely remember reading about this proposal on the Development board (not sure if the OP is the same), and I think deploying it would be a very reasonable solution, one that would be of interest to both "sides" of this question. In addition, many people have already talked several times about dynamic blocks... And I think the only reason this hasn't been implemented yet is that we don't have enough development work behind it.

I'm all for scaling. I wouldn't mind seeing a system like this go live on Bitcoin.

Plus, this thread is a nice read in a forum where there's been a lot of hate lately Smiley
legendary
Activity: 1778
Merit: 1008
Could we track what the max block size was for a given difficulty period, perhaps? Either a database of "this difficulty period = this blocksize", or some sort of code that, when determining orphans, checks whether the blocksize was valid at the time.
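A hypothetical Python sketch of that lookup idea (invented names, not real node code): record the max block size in force at each difficulty-period boundary, and validate historical blocks against the limit that applied at their height rather than today's.

Code:
# Hypothetical sketch - not real node code.
DIFFICULTY_PERIOD = 2016

# first block height of a period -> max block size (bytes) in force then
max_size_history = {0: 1_000_000}

def record_adjustment(period_start_height, new_max_size):
    max_size_history[period_start_height] = new_max_size

def max_size_at(height):
    period_start = (height // DIFFICULTY_PERIOD) * DIFFICULTY_PERIOD
    # fall back to the most recent recorded adjustment at or before this period
    known = [h for h in max_size_history if h <= period_start]
    return max_size_history[max(known)]

def valid_at_its_time(block_size, height):
    """Accept a historical block if it obeyed the limit in force when it was mined."""
    return block_size <= max_size_at(height)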
legendary
Activity: 3948
Merit: 3191
Leave no FUD unchallenged
@DooMAD: I have, however, a slight update proposal:

Change
Code:
(TotalTxFeeInLastDifficulty > TotalTxFeeInLastButOneDifficulty) 

to

Code:
(TotalTxFeeInLastDifficulty > average(TotalTxFee in last X difficulty periods)) 

with X = 4 or more, I would propose X = 8.

The reason is that TotalTxFee can fluctuate. So a malicious person/group that wanted to increase the block size could produce a "full block spam attack" in a difficulty period just after a period with relatively low TotalTxFee.

Exceptional reasoning, I'm totally on board with that.  I was hoping we could find improvements that help raise disincentives to spam and this absolutely qualifies.  OP updated.  Thanks.   Smiley
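For clarity, a minimal Python sketch of the adjusted trigger (hypothetical names; the +0.01MB/+0.03MB increments are mirrored from the -0.01MB/-0.03MB decrease rule quoted elsewhere in the thread, and the OP holds the actual rule):

Code:
# Hypothetical sketch of the trigger - not the proposal's actual code.
X = 8   # difficulty periods to average over, as suggested above

def fee_condition(total_fee_last_period, total_fees_previous_periods):
    """Only treat demand as rising if last period's fees beat the recent average,
    so one spammy period after a quiet one doesn't trigger growth."""
    window = total_fees_previous_periods[-X:]
    if not window:
        return False
    return total_fee_last_period > sum(window) / len(window)

def next_limits(base_max_mb, witness_max_mb, fees_rising, blocks_mostly_full):
    if fees_rising and blocks_mostly_full:
        return base_max_mb + 0.01, witness_max_mb + 0.03
    return base_max_mb, witness_max_mb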


It's not about precedents. Or about how many people say it (seriously?)


It's about design. It's about logic. Don't talk to us about what everyone already thinks or has said; talk about what makes sense. Satoshi wouldn't have made Bitcoin if he'd listened to all the preceding people who said that decentralised cryptocurrency was an unsolvable problem; you don't solve design problems by pretending the problem doesn't exist.

Maybe it's just me, but I honestly don't see what's logical about taking a stab in the dark now with no way to accurately forecast future requirements.  Particularly if that stab in the dark could easily result in another contentious debate later.  If someone can convince me why a potential hard fork later is somehow better than an equally potential soft fork later, I'll reconsider my stance.


also
Code:
    THEN BaseMaxBlockSize = BaseMaxBlockSize -0.01MB
      WitnessMaxBlockSize = WitnessMaxBlockSize -0.03MB

This would cause orphan risks when nodes rescan the blockchain.

If one 2016-block period was mined under limit X and the next period the limit drops to Y (= X - 0.01), then with Y as the current rule all the X-sized blocks would get orphaned, because the blocks from the X period are above the current Y rule.

Is there some kind of workaround or fix that would still enable us to reduce dynamically while limiting the potential for orphans?  I have doubts a sufficient supermajority could be reached for the proposal if max sizes could only increase.  It needs to be possible to reduce if there's a lack of demand.