Topic: [POLL] Possible scaling compromise: BIP 141 + BIP 102 (Segwit + 2MB) - page 12

legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
I'm not exactly sure how to mitigate the DoS vector in that case. If that was mitigated in some way, I'd say a 10 MB upper limit for the next 5 years. I doubt anyone could expect that we'd need more than 30 TPS plus all the second-layer solutions so quickly.

OK, 10 MB looks good to me (it would be possible to handle at least 50 million users with that) - and it's also close to Franky's 8 MB. With Segwit, if I understand it correctly, that transaction capacity (30 tps) would be approximately equivalent to a 2-4 MB limit.
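To make the arithmetic behind these figures explicit, here is a rough back-of-the-envelope sketch in C++ (the ~550-byte average transaction size and the per-user transaction frequency are illustrative assumptions, not figures from the thread):
Code:
#include <cstdio>

int main() {
    // Assumptions (illustrative): ~550-byte average transaction and a
    // 600-second target block interval.
    const double block_bytes   = 10e6;    // 10 MB block size limit
    const double avg_tx_bytes  = 550.0;
    const double block_seconds = 600.0;

    const double tx_per_block  = block_bytes / avg_tx_bytes;   // ~18,000
    const double tps           = tx_per_block / block_seconds; // ~30 tps
    const double tx_per_month  = tps * 60 * 60 * 24 * 30;      // ~79 million

    // At roughly 1.6 on-chain transactions per user per month, ~79M tx/month
    // supports on the order of 50 million users.
    std::printf("tps: %.1f, tx/month: %.0fM, users at 1.6 tx/month: %.0fM\n",
                tps, tx_per_month / 1e6, tx_per_month / 1.6 / 1e6);
    return 0;
}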
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
So does this mean a soft fork bypasses consensus?
No. Soft forks are backwards compatible

Note that this requires believing that making nodes that currently operate in a trustless manner suddenly dependent upon others for security fits the definition of 'backwards compatible'. I think that definition of 'backwards compatible' is ludicrous. YMMV.

This is my biggest concern about segwit being implemented as a soft fork. All nodes are equal until they are not. If it was implemented as a hard fork, we wouldn't have this two tier network system, if I understand correctly.

Well, there is yet another effect which seems rarely discussed. Under The SegWit Omnibus Changeset, there are essentially two classes of bitcoins: those that have been created by legacy means, and those which have been created by SegWit. This is by definition a destruction of fungibility.

How important fungibility is to you is something only you can decide.
legendary
Activity: 4410
Merit: 4766
No, you misunderstand this completely. This is absurd. You can create a TX with 20k sigops maximum, you just can't do this with Core. You're confusing policy and consensus rules and you've misread the code. See this for example: https://github.com/bitcoin/bitcoin/pull/8438

this is your misunderstanding
the 20k limit (old v0.12) is the BLOCK LIMIT for sigops
the 4000 is the TRANSACTION limit
meaning a malicious spammer can make 5 TX of 4,000 sigops in v0.12 and FILL THE BLOCK'S sigop limit (no more txs allowed)

the 80k limit (v0.14) is the BLOCK LIMIT for sigops
the 16000 is the TRANSACTION limit
meaning a malicious spammer can make 5 TX of 16,000 sigops in v0.14 and FILL THE BLOCK'S sigop limit (no more txs allowed)

as for your link - https://github.com/bitcoin/bitcoin/pull/8438
Quote
Treat high-sigop transactions as larger rather than rejecting them

meaning they acknowledge they are still allowing transactions that can be used for quadratic-hashing attacks; they just treat them as larger instead of rejecting them.

they simply think it's not a problem. but say things move forward in the future and they raise it to 32,000 sigops per tx and 160,000 per block. that's still 5 tx per block, and because a malicious user with native keys will do it, the TIME to process 5 tx of 32,000 sigops compared to last year's 5 tx of 4,000 will have an impact...



the solution is: yes, increase the BLOCK sigop limit, but don't increase the TX sigop limit. keep it low, 16,000 maybe, but preferably 4,000, as a constant barrier against malicious creators of quadratic-hashing native-key transactions.
meaning if the block limit were 80,000, a malicious user would have to make 20 tx to fill the block's 80,000 limit, instead of just 5..
and because it's 4,000 x 20 instead of 16,000 x 5, the validation time improves (the quadratic cost grows with the sigops per tx, not the block total)
but they haven't done that
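For illustration, a minimal C++ sketch of the block-filling arithmetic in the post above (the version-to-limit mapping is taken from the post itself; whether it matches Core's actual consensus/policy split is exactly what the replies dispute):
Code:
#include <cstdio>

int main() {
    // Numbers as claimed in the post above, plus the proposed alternative.
    struct Params { const char* label; int block_sigops; int tx_sigops; };
    const Params cases[] = {
        { "v0.12-era (as claimed)",           20000,  4000 },
        { "v0.14-era (as claimed)",           80000, 16000 },
        { "proposed: raise block cap only",   80000,  4000 },
    };
    for (const Params& p : cases) {
        // Transactions needed to exhaust the block's sigop budget:
        int txs = p.block_sigops / p.tx_sigops;
        std::printf("%-34s %2d tx of %5d sigops fill the %d budget\n",
                    p.label, txs, p.tx_sigops, p.block_sigops);
    }
    return 0;
}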
sr. member
Activity: 476
Merit: 501
This is my biggest concern about segwit being implemented as a soft fork. All nodes are equal until they are not. If it was implemented as a hard fork, we wouldn't have this two tier network system, if I understand correctly.
Segwit is like any other soft fork before it. Nodes that do not update do not validate the new rules. Alternatively, in a HF, nodes that do not update are cut off from the network.

Did any soft fork that came before it create a two-tier network system? At least with a hard fork, miners will not create segwit blocks until the vast majority of nodes have upgraded. Those who find their nodes unable to sync will upgrade their nodes. With the two-tier network system introduced with the SWSF, nodes that have not been upgraded are being fed filtered data, so they are no longer full nodes. This appears to be a mechanism to bypass full-node consensus, if the miners agree to start creating segwit blocks. Miners that do not wish to upgrade find they have to, or risk having their blocks orphaned, so are basically forced to upgrade. Please someone correct my misunderstanding; otherwise I have a right to feel rather uncomfortable about this.
legendary
Activity: 2674
Merit: 2965
Terminated.
No, you misunderstand this completely. This is absurd. You can create a TX with 20k sigops maximum, you just can't do this with Core. You're confusing policy and consensus rules and you've misread the code.
admit there is a 2-tier system, not the word twisting
As soon as you admit to being wrong with your "numbers". We all know that day won't come. Roll Eyes
legendary
Activity: 4410
Merit: 4766
No, you misunderstand this completely. This is absurd. You can create a TX with 20k sigops maximum, you just can't do this with Core. You're confusing policy and consensus rules and you've misread the code.

admit there is a 2-tier system, not the word twisting
legendary
Activity: 2674
Merit: 2965
Terminated.
This is my biggest concern about segwit being implemented as a soft fork. All nodes are equal until they are not. If it was implemented as a hard fork, we wouldn't have this two tier network system, if I understand correctly.
Segwit is like any other soft fork before it. Nodes that do not update do not validate the new rules. Alternatively, in a HF, nodes that do not update are cut off from the network.

i used 0.12 as an example of how many quadratics were permissible prior to segwit
and
i used 0.14 as an example of how many quadratics were permissible post segwit
prior: 4000
post: 16000

but in actual fact v0.14 is not 4,000 prior to segwit; it's actually still 16,000 prior to segwit
check the code
No, you misunderstand this completely. This is absurd. You can create a TX with 20k sigops maximum, you just can't do this with Core. You're confusing policy and consensus rules and you've misread the code. See this for example: https://github.com/bitcoin/bitcoin/pull/8438
sr. member
Activity: 476
Merit: 501
So does this mean a soft fork bypasses consensus?
No. Soft forks are backwards compatible

Note that this requires believing that making nodes that currently operate in a trustless manner suddenly dependent upon others for security fits the definition of 'backwards compatible'. I think that definition of 'backwards compatible' is ludicrous. YMMV.

This is my biggest concern about segwit being implemented as a soft fork. All nodes are equal until they are not. If it was implemented as a hard fork, we wouldn't have this two tier network system, if I understand correctly.
legendary
Activity: 4410
Merit: 4766
sigop attack
v0.12 had a 4000 sigop per tx limit (read the code)
v0.14 had a 16000 sigop per tx limit (read the code)

so now check the code.
https://github.com/bitcoin/bitcoin/tree/0.14/src
core 0.14: MAX_BLOCK_SIGOPS_COST = 80000;
core 0.14: MAX_STANDARD_TX_SIGOPS_COST = MAX_BLOCK_SIGOPS_COST/5;
meaning
core 0.14: MAX_STANDARD_TX_SIGOPS_COST = 16000
You almost made me fall for this... I was too tired to check whether your numbers were true or not myself right away. That '80000' number is the Segwit number, i.e. it is scaled for the 4 MB weight: 80,000 / 4 = 20,000. Now if you apply 'MAX_BLOCK_SIGOPS_COST/5' to this number, you get... 4,000.  Roll Eyes

i used 0.12 as an example of how many quadratics were permissible prior to segwit
and
i used 0.14 as an example of how many quadratics were permissible post segwit
prior: 4000
post: 16000

but in actual fact v0.14 is not 4,000 prior to segwit; it's actually still 16,000 prior to segwit (for pools using these up-to-date versions, e.g. 0.14, today)
check the code
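To make the unit question in this exchange concrete, here is a small C++ sketch mirroring the quoted 0.14 constants (the cost-unit scaling by a factor of 4 is per BIP 141; the constants are from the code quoted above, while the interpretation is what is disputed):
Code:
#include <cstdio>

int main() {
    // Constants as they appear in Bitcoin Core 0.14 (consensus.h / policy.h).
    const int WITNESS_SCALE_FACTOR        = 4;
    const int MAX_BLOCK_SIGOPS_COST       = 80000;                     // consensus rule
    const int MAX_STANDARD_TX_SIGOPS_COST = MAX_BLOCK_SIGOPS_COST / 5; // relay policy

    // Post-Segwit, sigops are counted in "cost" units: one legacy sigop
    // counts as 4. Dividing by the scale factor recovers legacy-equivalent
    // numbers, which is the step the two posters disagree about.
    std::printf("block budget: %d cost = %d legacy sigops\n",
                MAX_BLOCK_SIGOPS_COST,
                MAX_BLOCK_SIGOPS_COST / WITNESS_SCALE_FACTOR);          // 20000
    std::printf("standard tx:  %d cost = %d legacy sigops\n",
                MAX_STANDARD_TX_SIGOPS_COST,
                MAX_STANDARD_TX_SIGOPS_COST / WITNESS_SCALE_FACTOR);    // 4000
    return 0;
}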
legendary
Activity: 2674
Merit: 2965
Terminated.
sigop attack
v0.12 had a 4000 sigop per tx limit (read the code)
v0.14 had a 16000 sigop per tx limit (read the code)

so now check the code.
https://github.com/bitcoin/bitcoin/tree/0.14/src
core 0.14: MAX_BLOCK_SIGOPS_COST = 80000;
core 0.14: MAX_STANDARD_TX_SIGOPS_COST = MAX_BLOCK_SIGOPS_COST/5;
meaning
core 0.14: MAX_STANDARD_TX_SIGOPS_COST = 16000
You almost made me fall for this... I was too tired to check whether your numbers were true or not myself right away. That '80000' number is the Segwit number, i.e. it is scaled for the 4 MB weight: 80,000 / 4 = 20,000. Now if you apply 'MAX_BLOCK_SIGOPS_COST/5' to this number, you get... 4,000.  Roll Eyes

The 20 MB I mentioned before was calculated in the straightforward traditional [non-Segwit] way. So to compare to my previous calculation, and because unfortunately Segwit is still not active, I would be more interested in the "traditionally-calculated" value. But you can obviously add an estimate for a post-Segwit size.
I'm not exactly sure how to mitigate the DoS vector in that case. If that was mitigated in some way, I'd say a 10 MB upper limit for the next 5 years. I doubt anyone could expect that we'd need more than 30 TPS plus all the second-layer solutions so quickly.
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
Let's say 5 years; maybe 10 years is too far away.
We also need to determine whether we are talking about a block size in the traditional sense or a post-Segwit 'base + weight' size (as the "new" block size). Which is it?
The 20 MB I mentioned before was calculated in the straightforward traditional [non-Segwit] way. So to compare to my previous calculation, and because unfortunately Segwit is still not active, I would be more interested in the "traditionally-calculated" value. But you can obviously add an estimate for a post-Segwit size.
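Since both size notions come up repeatedly in this thread, a minimal C++ sketch of the BIP 141 weight rule may help (the example byte counts are illustrative assumptions, not measurements):
Code:
#include <cstdio>

int main() {
    // Per BIP 141: weight = base_size * 3 + total_size, capped at 4,000,000
    // weight units. Non-witness bytes effectively count 4x, witness bytes 1x.
    const long MAX_BLOCK_WEIGHT = 4000000;

    long base_size    = 800000;                  // non-witness bytes (count 4x)
    long witness_size = 700000;                  // witness bytes (count 1x)
    long total_size   = base_size + witness_size;

    long weight = base_size * 3 + total_size;    // = base*4 + witness
    std::printf("traditional size: %.2f MB, weight: %ld of %ld (%s)\n",
                total_size / 1e6, weight, MAX_BLOCK_WEIGHT,
                weight <= MAX_BLOCK_WEIGHT ? "valid" : "too big");
    return 0;
}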
legendary
Activity: 4410
Merit: 4766
So does this mean a soft fork bypasses consensus?
No. Soft forks are backwards compatible

Note that this requires believing that making nodes that currently operate in a trustless manner suddenly dependent upon others for security fits the definition of 'backwards compatible'. I think that definition of 'backwards compatible' is ludicrous. YMMV.

something we can agree on.. needing segwit nodes as the 'upstream filters' (gmaxwell's own buzzword) is bad for security. plus it's not "backward compatible"

i prefer the term backward-trimmed (trimmable), or backwards 'filtered' (using gmaxwell's word), to make it clearer that old nodes are not getting fully validatable block data
not a perfect term, but at least it's slightly clearer about what segwit is "offering", compared to the half-truths, half-promises and word-twisting used to avoid giving a real answer.
legendary
Activity: 2674
Merit: 2965
Terminated.
The 'DoS' doesn't even require a protocol change to nullify. Indeed, there is a natural incentive already in the protocol that ensures it will never become a systemic problem. If large-time-to-verify blocks ever became A Thing, miners would employ parallel validation. This would ensure that such large-time-to-verify blocks get orphaned by faster-to-verify blocks.

Miners who gravitate to parallel validation will earn more income, and miners who do not employ parallel validation will go bankrupt over time. As will miners who create such DoS blocks.

This is already part of the protocol. No change is needed.
I've asked for a refresher on 'parallel validation':
Quote
many miners currently mine empty blocks on top of unvalidated (but PoW-correct) new blocks.  There's no reason to expect them to behave differently under BTU, so most miners would probably extend the chain with the high-validation-work block rather than create an alternative block at the same height.
Thus parallel validation doesn't get you anything unless a low-validation-work block is coincidentally produced at the same time as a high-validation-work block.
parallel validation only helps you in the rare case that there are two or more blockchains with the same PoW.  Miners are disincentivized to create such chains since one of them is certain to lose, so the incentives probably favor them extending a high-validation-work block rather than creating a competing low-validation-work block.
Imagine block A is at the tip of the chain.  Some miner then extends that chain with block B, which looks like it'll take a long time to verify.  As a miner, you can either attempt to mine block C on top of block B, mining without validation but creating chain ABC that certainly has the most PoW.  Or you can mine block B' that is part of chain AB', which will have less PoW than the chain of someone who creates ABC.
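For concreteness, a toy C++ sketch of the 'parallel validation' idea under debate (the Block type, the Validate stub, and the timings are invented for illustration; this is not how any shipping client is structured):
Code:
#include <chrono>
#include <cstdio>
#include <future>
#include <string>
#include <thread>

struct Block { std::string id; std::chrono::milliseconds verify_time; };

// Stand-in for full consensus validation; the sleep models a slow-to-verify
// (e.g. sigop-heavy) block versus a normal one. Not a Bitcoin Core API.
bool Validate(const Block& b) {
    std::this_thread::sleep_for(b.verify_time);
    return true;
}

int main() {
    Block slow{ "B (sigop-heavy)", std::chrono::milliseconds(500) };
    Block fast{ "B' (normal)",     std::chrono::milliseconds(50)  };

    // Validate both competing blocks concurrently...
    auto f_slow = std::async(std::launch::async, Validate, slow);
    auto f_fast = std::async(std::launch::async, Validate, fast);

    // ...and extend whichever finishes first; the laggard gets orphaned.
    // (Whether miners actually prefer B' over mining on top of B without
    // validating it is exactly what the quoted objection argues.)
    for (;;) {
        if (f_fast.wait_for(std::chrono::milliseconds(1)) == std::future_status::ready) {
            std::printf("extending %s\n", fast.id.c_str());
            break;
        }
        if (f_slow.wait_for(std::chrono::milliseconds(1)) == std::future_status::ready) {
            std::printf("extending %s\n", slow.id.c_str());
            break;
        }
    }
    return 0;
}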
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
So does this mean a soft fork bypasses consensus?
No. Soft forks are backwards compatible

Note that this requires believing that making nodes that currently operate in a trustless manner suddenly dependent upon others for security fits the definition of 'backwards compatible'. I think that definition of 'backwards compatible' is ludicrous. YMMV.
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
What is the reason for old txs using quadratic hashing instead of linear hashing, and why is it considered safe with segwit if not for normal transactions?
That's the way that it is currently implemented; a known inefficiency (O(n^2) time). This is one of the reasons for which Segwit is quite beneficial. They packed up a lot of improvements at once.

But is there any reason this could not be implemented for old txs?

The 'DoS' doesn't even require a protocol change to nullify. Indeed, there is a natural incentive already in the protocol that ensures it will never become a systemic problem. If large-time-to-verify blocks ever became A Thing, miners would employ parallel validation. This would ensure that such large-time-to-verify blocks get orphaned by faster-to-verify blocks.

Miners who gravitate to parallel validation will earn more income, and miners who do not employ parallel validation will go bankrupt over time. As will miners who create such DoS blocks.

This is already part of the protocol. No change is needed.
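To see why the quadratic behaviour matters for this DoS argument, here is a small C++ sketch counting bytes hashed under the legacy scheme versus BIP 143 (the 150-byte per-input size is an assumed approximation):
Code:
#include <cstdio>

int main() {
    // Rough model of bytes hashed during signature verification.
    // Legacy SIGHASH_ALL re-serializes (nearly) the whole transaction once
    // per input, so total work grows as O(n^2). BIP 143 (Segwit) hashes a
    // fixed-size per-input package plus three cached 32-byte digests
    // (hashPrevouts, hashSequence, hashOutputs), so it grows as O(n).
    const long bytes_per_input = 150;  // illustrative approximation
    const long input_counts[]  = { 100, 1000, 10000 };

    for (long n : input_counts) {
        long tx_size      = n * bytes_per_input;
        long legacy_bytes = n * tx_size;                     // O(n^2)
        long segwit_bytes = n * (bytes_per_input + 3 * 32);  // O(n)
        std::printf("%6ld inputs: legacy ~%6ld MB hashed, segwit ~%5ld KB\n",
                    n, legacy_bytes / 1000000, segwit_bytes / 1000);
    }
    return 0;
}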
legendary
Activity: 2674
Merit: 2965
Terminated.
you can by filling the base block.

EG
a block based on v0.12 fills the 1 MB block with sigop txs of 4,000 sigops each
a block based on v0.14 fills the 1 MB block with sigop txs of 16,000 sigops each
Are you trying to say that Bitcoin can be DOS'ed at 1 MB now? Roll Eyes

Is this because it will eventually only be possible to send to a segwit key, or is there some function in the two-tier network that the SWSF creates?
No. You can refuse to use Segwit if you do not want to.

So does this mean a soft fork bypasses consensus?
No. Soft forks are backwards compatible, therefore mitigating the risk of a network split.

Does flextrans require a new address key type as well?
Yes.
I believe Bitcoin's value comes mainly from its usability ....
 But that could lead to a long discussion, so here in this thread, let's focus on the block size issue. Wink
It sounded like I was talking to Roger Ver for a second, but okay.

Then I would encourage research on that topic - I think it's inevitable at some point to provide "lighter" IBD procedures. Maybe Electrum and other light wallets could serve as objects in such a study.
Then encourage it, but don't spread it around like it is trivial until we know for 'sure'.

No! Obviously the goal must be to allow end users to run their nodes on PCs or notebooks. That was only a comment about professional equipment today - because the power of pro equipment should be reached by consumer-level hardware at most a decade later. (Connectivity/bandwidth is another point; here you're right that upload bandwidth growth in particular is a major bottleneck.)
Noted. My bad.

Let's say 5 years; maybe 10 years is too far away.

(The 20 MB blocks were only an example to show the approximate relation between block size and possible user base; for now, I won't insist on this number)
We also need to determine whether we are talking about a block size in the traditional sense or a post-Segwit 'base + weight' size (as the "new" block size). Which is it?
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
If you don't believe that Bitcoin is digital gold, or you don't understand where the current value stems from, then you have to re-examine everything.
Maybe we have different opinions here. I believe Bitcoin's value comes mainly from its usability as a value-transfer (and later, also value-storage) platform for many use cases among many users ("network effect") and its advantage over similar cryptocurrencies ("altcoins"). But that could lead to a long discussion, so here in this thread, let's focus on the block size issue. Wink

In the case of IBD, I think that in that "drastic future" most users will end up downloading blockchain snapshots. That has some centralization risks, but I think they are manageable. [...]
You shouldn't throw in centralizing aspects like they are trivial changes. The impact of something like that, and the potential security concerns, are probably not properly researched.
Then I would encourage research on that topic - I think it's inevitable at some point to provide "lighter" IBD procedures. Maybe Electrum and other light wallets could serve as objects in such a study.

We're obviously talking about end users with consumer-level equipment. Professional users that use servers in well-connected datacenters should have no problems with 20 MB blocks, I think.
I don't understand why you would want me, as a user, to spend a lot of money to run my node in a datacenter. I use Bitcoin Core for everything: node, wallet, cold storage.

No! Obviously the goal must be to allow end users to run their nodes on PCs or notebooks. That was only a comment about professional equipment today - because the power of pro equipment should be reached by consumer-level hardware at most a decade later. (Connectivity/bandwidth is another point; here you're right that upload bandwidth growth in particular is a major bottleneck.)
Edit: What upper limit would you consider realistic?
In what time frame? Next 5, 10 years?
Let's say 5 years; maybe 10 years is too far away.

(The 20 MB blocks were only an example to show the approximate relation between block size and possible user base; for now, I won't insist on this number)
sr. member
Activity: 476
Merit: 501
-snip-
nothing stops a native tx from sigops spamming
You still don't understand it. Spamming with native keys after Segwit has been activated is useless. You can't DoS the network with them.

Is this because it will eventually only be possible to send to a segwit key, or is there some function in the two-tier network that the SWSF creates?

If segwit were implemented as a hard fork, could the transaction malleability and quadratic sigop spam attacks be solved for good?
No. The difference between SWSF and SWHF is negligible (aside from hard forks being dangerous without consensus). In order for something like that to happen, it would probably require a whole different BIP and approach.

So does this mean a soft fork bypasses consensus?

Could a native address automatically be a segwit address, negating the need for users to move UTXOs from native keys to segwit keys (which is going to cost a transaction fee and put unnecessary pressure on network capacity)?
I doubt it. Even the other attempt at fixing malleability with a hard fork called Flextrans (from the Classic dev, i.e. a BTU supporter) doesn't do that.

Does flextrans require a new address key type as well?
legendary
Activity: 4410
Merit: 4766
-snip-
nothing stops a native tx from sigops spamming
You still don't understand it. Spamming with native keys after Segwit has been activated is useless. You can't DoS the network with them.

you can by filling the base block.

EG
a block based on v0.12 fills the 1 MB block with sigop txs of 4,000 sigops each
a block based on v0.14 fills the 1 MB block with sigop txs of 16,000 sigops each

edit: here is the clincher: just 5 txs use up the block's max sigop count.. no more txs can be added

READ THE CODE, not the sales pitch by blockstreamers on reddit
legendary
Activity: 2674
Merit: 2965
Terminated.
-snip-
nothing stops a native tx from sigops spamming
You still don't understand it. Spamming with native keys after Segwit has been activated is useless. You can't DoS the network with them.

If segwit were implemented as a hard fork, could the transaction malleability and quadratic sigop spam attacks be solved for good?
No. The difference between SWSF and SWHF is negligible (aside from hard forks being dangerous without consensus). In order for something like that to happen, it would probably require a whole different BIP and approach.

Could a native address automatically be a segwit address, negating the need for users to move UTXOs from native keys to segwit keys (which is going to cost a transaction fee and put unnecessary pressure on network capacity)?
I doubt it. Even the other attempt at fixing malleability with a hard fork called Flextrans (from the Classic dev, i.e. a BTU supporter) doesn't do that.