
Topic: [POLL] Possible scaling compromise: BIP 141 + BIP 102 (Segwit + 2MB) - page 11. (Read 14392 times)

hero member
Activity: 994
Merit: 544
I think I misunderstood the below and thought you were talking about a post-Segwit block size increase.

Quote
3) 2 MB post Segwit (implies almost 100% Segwit usage) -> No DoS risk (linear hashing).

So that would be

3) 2 MB estimated effective capacity (1MB transaction block limit) post Segwit (implies almost 100% Segwit usage) -> No DoS risk (linear hashing).
No, that is not the case. Basically the first two points are about a 'base block size increase', and the latter two are just pure Segwit as it currently is. You can formulate the points in a better fashion, as long as they stay correct.

Using simple logic, these arguments are not worth debating. We cannot deny that Segwit came out into the open because it can help solve the block size issue; if it could not help solve the current problems on the Bitcoin network, it would not have been proposed. For now, instead of debating, why not just wait for consensus to occur so we can enjoy Bitcoin without these flaws?
legendary
Activity: 2674
Merit: 2965
Terminated.
So how would you reword point 3? The original version may be open to interpretation, which could lead to misunderstanding. Why do you think my wording is not correct?
I did not say it was incorrect. If you want to be even more specific, stop referring to it as 'block size' if you're talking about a scenario in which Segwit is activated. The block size is split into two parameters:
1) 1 MB base.
2) 4 MB weight.
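As a rough illustration of how those two parameters interact (illustrative C++ only, not Bitcoin Core source; the block mixes below are assumed):

Code:
// Illustrative sketch: block weight counts non-witness bytes four times and
// witness bytes once, capped at 4,000,000 weight units (BIP 141).
#include <cstdint>
#include <iostream>

static const uint64_t MAX_BLOCK_WEIGHT = 4000000;

// base_bytes  = serialized size without witness data
// total_bytes = serialized size including witness data (total >= base)
uint64_t BlockWeight(uint64_t base_bytes, uint64_t total_bytes)
{
    return 3 * base_bytes + total_bytes;
}

int main()
{
    // 100% legacy block: no witness data, base == total, so 1 MB hits the cap.
    std::cout << BlockWeight(1000000, 1000000) << " / " << MAX_BLOCK_WEIGHT << "\n";

    // Witness-heavy block (assumed mix): 700 kB of base data plus 1.2 MB of
    // witness data is ~1.9 MB on the wire and also sits exactly at the cap.
    std::cout << BlockWeight(700000, 1900000) << " / " << MAX_BLOCK_WEIGHT << "\n";
}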
sr. member
Activity: 476
Merit: 501
I think I misunderstood the below and thought you were talking about a post-Segwit block size increase.

Quote
3) 2 MB post Segwit (implies almost 100% Segwit usage) -> No DoS risk (linear hashing).

So that would be

3) 2 MB estimated effective capacity (1MB transaction block limit) post Segwit (implies almost 100% Segwit usage) -> No DoS risk (linear hashing).
No, that is not the case. Basically the first two points are about a 'base block size increase', and the latter two are just pure Segwit as it currently is. You can formulate the points in a better fashion, as long as they stay correct.

So how would you reword point 3? The original version may be open to interpretation, which could lead to misunderstanding. Why do you think my wording is not correct?
legendary
Activity: 2674
Merit: 2965
Terminated.
I think I misunderstood the below and thought you were talking about a post-Segwit block size increase.

Quote
3) 2 MB post Segwit (implies almost 100% Segwit usage) -> No DoS risk (linear hashing).

So that would be

3) 2 MB estimated effective capacity (1MB transaction block limit) post Segwit (implies almost 100% Segwit usage) -> No DoS risk (linear hashing).
No, that is not the case. Basically the first two points are about a 'base block size increase', and the latter two are just pure Segwit as it currently is. You can formulate the points in a better fashion, as long as they stay correct.
sr. member
Activity: 476
Merit: 501
5) 2 MB post Segwit (implies 100% native keys) -> higher DoS risk (quadratic hashing) - unless there are plans to limit native key space in blocks
No. You still don't understand Segwit. You cannot create a 2 MB block using 100% native keys when Segwit is activated. You can only create a 1 MB block if you're using 100% native keys.

I think I misunderstood the below and thought you were talking about a post-Segwit block size increase.

Quote
3) 2 MB post Segwit (implies almost 100% Segwit usage) -> No DoS risk (linear hashing).

So that would be

3) 2 MB estimated effective capacity (1MB transaction block limit) post Segwit (implies almost 100% Segwit usage) -> No DoS risk (linear hashing).
legendary
Activity: 2674
Merit: 2965
Terminated.
5) 2 MB post Segwit (implies 100% native keys) -> higher DoS risk (quadratic hashing) - unless there are plans to limit native key space in blocks
No. You still don't understand Segwit. You cannot create a 2 MB block using 100% native keys when Segwit is activated. You can only create a 1 MB block if you're using 100% native keys.

Has Core not done any research on this, then?
I'm saying that you and I don't have adequate data, and there is no exact data in this thread. There was an article somewhere about a block that would take longer than 10 minutes to validate at 2 MB.
sr. member
Activity: 476
Merit: 501
Yes and no. Writing it like that seems rather vague considering that we don't have exact data on it. It would be nice if someone actually did some in-depth research into this and tried to construct the worst kind of TX possible (validation time wise).

Has Core not done any research on this, then? - also check the edits above.
legendary
Activity: 2674
Merit: 2965
Terminated.
1) 1 MB (current) -> lower DoS risk (quadratic hashing).
2) 2 MB (bare block size increase) -> higher DoS risk (quadratic hashing).
3) 2 MB post Segwit (implies almost 100% Segwit usage) -> No DoS risk (linear hashing).
4) 1 MB post Segwit (implies 100% native keys) -> lower DoS risk (quadratic hashing); the same as the first line.

Is that a fair FTFY?
Yes and no. Writing it like that seems rather vague considering that we don't have exact data on it. It would be nice if someone actually did some in-depth research into this and tried to construct the worst kind of TX possible (validation time wise).
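As a back-of-the-envelope sketch of what such research would be measuring (the per-input and overhead sizes below are assumptions, not measurements):

Code:
// Back-of-the-envelope sketch (assumed sizes, not a benchmark): bytes hashed
// while verifying a transaction with n inputs under legacy vs BIP143 sighash.
#include <cstdint>
#include <iostream>

int main()
{
    const uint64_t input_bytes    = 41;  // assumed size of a bare input
    const uint64_t overhead_bytes = 60;  // assumed version/counts/outputs/locktime

    for (uint64_t n : {100ULL, 1000ULL, 5000ULL}) {
        uint64_t tx_size = overhead_bytes + n * input_bytes;

        // Legacy: each input's signature hash covers (roughly) the whole
        // transaction, so total hashing is ~ n * tx_size -> quadratic growth.
        uint64_t legacy_hashed = n * tx_size;

        // BIP143: per-input digests reuse cached midstates (hashPrevouts etc.),
        // so total hashing stays roughly proportional to tx_size -> linear.
        uint64_t segwit_hashed = 2 * tx_size; // rough constant-factor estimate

        std::cout << n << " inputs: legacy ~" << legacy_hashed
                  << " bytes hashed, segwit ~" << segwit_hashed << " bytes\n";
    }
}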
sr. member
Activity: 476
Merit: 501
I'd like to understand the reason spamming with native keys is useless after segwit activation.
This is quite simple. Take a look:
1) 1 MB (current) -> No DoS risk (quadratic hashing).
2) 2 MB (bare block size increase) -> DoS risk (quadratic hashing).
3) 2 MB post Segwit (implies almost 100% Segwit usage) -> No DoS risk (linear hashing).
4) 1 MB post Segwit (implies 100% native keys) -> No DoS risk (quadratic hashing); the same as the first line.


1) 1 MB (current) -> lower DoS risk (quadratic hashing).
2) 2 MB (bare block size increase) -> higher DoS risk (quadratic hashing).
3) 2 MB post Segwit (implies almost 100% Segwit usage) -> No DoS risk (linear hashing).
4) 1 MB post Segwit (implies 100% native keys) -> lower DoS risk (quadratic hashing); the same as the first line.

Is that a fair FTFY?

Also

5) 2 MB post Segwit (implies 100% native keys) -> higher DoS risk (quadratic hashing) - unless there are plans to limit native key space in blocks
legendary
Activity: 2674
Merit: 2965
Terminated.
I'd like to understand the reason spamming with native keys is useless after segwit activation.
This is quite simple. Take a look:
1) 1 MB (current) -> No DoS risk (quadratic hashing).
2) 2 MB (bare block size increase) -> DoS risk (quadratic hashing).
3) 2 MB post Segwit (implies almost 100% Segwit usage) -> No DoS risk (linear hashing).
4) 1 MB post Segwit (implies 100% native keys) -> No DoS risk (quadratic hashing); the same as the first line.

I'll check the remainder of your post later today.
sr. member
Activity: 476
Merit: 501
-snip-
nothing stops a native tx from sigops spamming
You still don't understand it. Spamming with native keys after Segwit has been activated is useless. You can't DoS the network with them.

I'd like to understand the reason spamming with native keys is useless after segwit activation.

However, it would seem that Core is mitigating the problem by putting in restrictive policies right now:

Note: Code is from the 0.14 branch; I have not backtracked to see when it was added. Edit: I checked, and it is in the 0.13.x branch, so maybe it's not something new.

policy.h

Code:
/** The maximum weight for transactions we're willing to relay/mine */
static const unsigned int MAX_STANDARD_TX_WEIGHT = 400000;

policy.cpp

Code:
    // Extremely large transactions with lots of inputs can cost the network
    // almost as much to process as they cost the sender in fees, because
    // computing signature hashes is O(ninputs*txsize). Limiting transactions
    // to MAX_STANDARD_TX_WEIGHT mitigates CPU exhaustion attacks.
    unsigned int sz = GetTransactionWeight(tx);
    if (sz >= MAX_STANDARD_TX_WEIGHT) {
        reason = "tx-size";
        return false;
    }

net_processing.cpp

Code:
    // Ignore big transactions, to avoid a
    // send-big-orphans memory exhaustion attack. If a peer has a legitimate
    // large transaction with a missing parent then we assume
    // it will rebroadcast it later, after the parent transaction(s)
    // have been mined or received.
    // 100 orphans, each of which is at most 99,999 bytes big is
    // at most 10 megabytes of orphans and somewhat more byprev index (in the worst case):
    unsigned int sz = GetTransactionWeight(*tx);
    if (sz >= MAX_STANDARD_TX_WEIGHT)
    {
        LogPrint("mempool", "ignoring large orphan tx (size: %u, hash: %s)\n", sz, hash.ToString());
        return false;
    }

wallet.cpp

Code:
        // Limit size
        if (GetTransactionWeight(wtxNew) >= MAX_STANDARD_TX_WEIGHT)
        {
            strFailReason = _("Transaction too large");
            return false;
        }

In other words, segwit activation is not needed for the change. It is effective right now. So what does segwit activation bring?
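For reference, a rough sketch of what the 400,000-weight ceiling quoted above works out to in byte terms (illustrative only; the transaction mixes are assumed):

Code:
// Illustrative sketch of the MAX_STANDARD_TX_WEIGHT policy ceiling quoted above.
// Weight counts non-witness bytes four times and witness bytes once, so the
// 400,000 relay/mining limit is roughly 100 kB for a pure legacy transaction.
#include <cstdint>
#include <iostream>

static const uint64_t MAX_STANDARD_TX_WEIGHT = 400000;

uint64_t TxWeight(uint64_t base_bytes, uint64_t total_bytes)
{
    return 3 * base_bytes + total_bytes;
}

int main()
{
    // Pure legacy transaction of 99,999 bytes stays just under the ceiling.
    std::cout << TxWeight(99999, 99999) << " (limit " << MAX_STANDARD_TX_WEIGHT << ")\n";

    // Assumed witness-heavy transaction: 50 kB base + 200 kB of witness data is
    // 250 kB on the wire, yet reaches the same weight ceiling.
    std::cout << TxWeight(50000, 250000) << " (limit " << MAX_STANDARD_TX_WEIGHT << ")\n";
}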
legendary
Activity: 2674
Merit: 2965
Terminated.
That isn't disputed by most SegWit supporters.
Source?

I think you are on the verge of understanding that issue.
I don't see why it is an issue. I see it as a non-issue, just as you see quadratic validation as a non-issue. Roll Eyes

OK... sure. I'm quite certain you are unable to poke a hole in my scenario there. Why don't you try? Or even ... why don't you ping Harding with what I posted, and have him see if he can poke holes in it?
It was not worth bothering, to be frank[1]; I just quickly went through it and saw your conclusion. I'm not going to be a messenger between you and someone with clearly superior understanding. Find a way to contact him yourself.

[1] - Looks like I'm turning into Franky. Roll Eyes
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
Do you understand that 'fungibility' is the property that no units of a thing have differing characteristics from other units?
So for you, being part of UTXO vs Segwit UTXO is an adequate characteristic to destroy fungibility? What happens when *all* (in theory) keys are Segwit UTXO? Fungibility suddenly returned?

I think you are on the verge of understanding that issue.

Way to make a technical rebuttal, Lauda. You're certainly on your game tonight.
I've come to realize that it is pointless to even attempt that, since you only perceive what you want to. You are going to come to the same conclusion each time, regardless of whether you're wrong or not.

OK... sure. I'm quite certain you are unable to poke a hole in my scenario there. Why don't you try? Or even ... why don't you ping Harding with what I posted, and have him see if he can poke holes in it?
full member
Activity: 182
Merit: 107
Do you understand that 'fungibility' is the property that no units of a thing have differing characteristics from other units?
So for you, being part of UTXO vs Segwit UTXO is an adequate characteristic to destroy fungibility?

Actually it is. That isn't disputed by most SegWit supporters.
legendary
Activity: 2674
Merit: 2965
Terminated.
Do you understand that 'fungibility' is the property that no units of a thing have differing characteristics from other units?
So for you, being part of UTXO vs Segwit UTXO is an adequate characteristic to destroy fungibility? What happens when *all* (in theory) keys are Segwit UTXO? Fungibility suddenly returned?

Way to make a technical rebuttal, Lauda. You're certainly on your game tonight.
I've come to realize that it is pointless to even attempt that, since you only perceive what you want to. You are going to come to the same conclusion each time, regardless of whether you're wrong or not.
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
Well, there is yet another effect which seems rarely discussed. Under The SegWit Omnibus Changeset, there are essentially two classes of bitcoins: those that have been created by legacy, and those which have been created by SegWit. This is by definition a destruction of fungibility.
No. It does not destroy fungibility.

Do you understand that 'fungibility' is the property that no units of a thing have differing characteristics from other units?

End result: Harding's concern is irrelevant. The quadratic hash time problem solves itself. No change to the protocol needed.
Definitely; everyone is an honest actor in this network and we are all living on a rainbow.

Way to make a technical rebuttal, Lauda. You're certainly on your game tonight.
legendary
Activity: 2674
Merit: 2965
Terminated.
this is your misunderstanding
the 20k limit (old v0.12) is the BLOCK LIMIT for sigops
the 4000 is the TRANSACTION limit
meaning a malicious spammer can make 5 TX of 4,000 sigops in v0.12 and FILL THE BLOCK'S sigop limit (no more TXs allowed)

the 80k limit (v0.14) is the BLOCK LIMIT for sigops
the 16000 is the TRANSACTION limit
meaning a malicious spammer can make 5 TX of 16,000 sigops in v0.14 and FILL THE BLOCK'S sigop limit (no more TXs allowed)
Nope. Wrong. You are confusing policy & consensus rules and Segwit. The 80k number is Segwit only. A non-Core client can create a TX with 20k maximum sigops, which is the maximum that the consensus rules allow (not the numbers that you're writing about, i.e. neither 4k nor 16k).
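To keep the numbers straight, a rough sketch of the constants in question (values as they appear in Bitcoin Core around 0.13/0.14; the conversion helper is illustrative only):

Code:
// Rough sketch of the sigop figures being argued about (constants as in
// Bitcoin Core around 0.13/0.14; the helper below is illustrative only).
#include <cstdint>
#include <iostream>

static const int64_t WITNESS_SCALE_FACTOR        = 4;
static const int64_t MAX_BLOCK_SIGOPS_COST       = 80000; // consensus, per block
static const int64_t MAX_STANDARD_TX_SIGOPS_COST = 16000; // relay/mining policy, per tx

// Legacy (non-witness) sigops are multiplied by the witness scale factor when
// converted to "sigop cost", so the old 20,000-per-block figure maps to 80,000.
int64_t LegacySigopsToCost(int64_t sigops) { return sigops * WITNESS_SCALE_FACTOR; }

int main()
{
    std::cout << LegacySigopsToCost(20000) << "\n";                          // 80000
    std::cout << MAX_STANDARD_TX_SIGOPS_COST / WITNESS_SCALE_FACTOR << "\n"; // 4000
    // The 16k/4k figures are per-transaction *policy* ceilings; consensus only
    // enforces the per-block budget, which is the distinction made above.
    std::cout << MAX_BLOCK_SIGOPS_COST << "\n";                              // 80000
}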

Well, there is yet another effect which seems rarely discussed. Under The SegWit Omnibus Changeset, there are essentially two classes of bitcoins: those that have been created by legacy, and those which have been created by SegWit. This is by definition a destruction of fungibility.
No. It does not destroy fungibility.

End result: Harding's concern is irrelevant. The quadratic hash time problem solves itself. No change to the protocol needed.
Definitely; everyone is an honest actor in this network and we are all living on a rainbow. Roll Eyes
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
This is a massive issue. I'm surprised at the lack of votes so far.

'Voting' is pointless. The only 'votes' that matter are tendered by people choosing which code they are running.

I'm 'voting' BU.
newbie
Activity: 17
Merit: 0
This is a massive issue. I'm surprised at the lack of votes so far.
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
The 'DoS' doesn't even require a protocol change to nullify. Indeed, there is a natural incentive already in the protocol that ensures it will never become a systemic problem. If large-time-to-verify blocks ever became A Thing, miners would employ parallel validation. This would ensure that such large-time-to-verify blocks are orphaned by faster-to-verify blocks.

Miners who gravitate to parallel validation will earn more income, and miners who do not employ parallel validation will go bankrupt over time. As will miners who create such DoS blocks.

This is already part of the protocol. No change is needed.
I've asked for a refresher on 'parallel validation':
Quote
many miners currently mine empty blocks on top of unvalidated (but PoW-correct) new blocks.  There's no reason to expect them to behave differently under BTU, so most miners would probably extend the chain with the high-validation-work block rather than create an alternative block at the same height.
Thus parallel validation doesn't get you anything unless a low-validation-work block is coincidentally produced at the same time as a high-validation-work block.
parallel validation only helps you in the rare case that there are two or more blockchains with the same PoW.  Miners are disincentivized to create such chains since one of them is certain to lose, so the incentives probably favor them extending a high-validation-work block rather than creating a competing low-validation-work block.
Imagine block A is at the tip of the chain. Some miner then extends that chain with block B, which looks like it'll take a long time to verify. As a miner, you can either attempt to mine block C on top of block B, mining without validation but creating chain ABC that certainly has the most PoW. Or you can mine block B' that is part of chain AB', which will have less PoW than someone who creates chain ABC.

Harding's concern would be founded, but only to the point that all miners would suddenly start performing only zero-transaction block mining, which of course is ludicrous.

What is not said is that miners who perform zero-transaction mining do so only until they are able to validate the block that they are mining atop. Once they have validated that block, they modify the block that they are mining to include a load of transactions. They cannot include the load of transactions before validation because, until it is validated, they have no idea which transactions they need to exclude from the block they are mining. For if they mine a block that includes a transaction that was mined in a previous block, their block would be orphaned as invalid.

So what would happen with parallel validation under such a scenario?

Miner A is mining at height N. As he is doing so, miner B solves a block that contains an aberrant quadratic-hash-time transaction (let us call this the 'ADoS block' (attempted denial of service)) at height N, and propagates it to the network.
Miner A, who implements parallel validation and zero-transaction mining, stops mining his height-N block. He spawns a thread to start validating the ADoS block at height N. He starts mining a zero-transaction block at height N+1 atop ADoS.
Miner C solves a normal validation time block C at height N and propagates it to the network.
When Miner A receives block C, he spawns another thread to validate block C. He is still mining the zero-transaction block atop ADoS.
A short time thereafter, Miner A finishes validation of block C. ADoS is still not validated. So Miner A builds a new block at height N+1 atop block C, full of transactions, and switches to mining that.
From the perspective of Miner A, he has orphaned Miner B's ADoS block.
Miner A may or may not win round N+1. But statistically, he has a much greater chance to win round N+1 than any other miner that does not perform parallel validation. Indeed, until the ADoS block is fully validated, it is at risk of being orphaned.
The net result is that miners have a natural incentive to operate in this manner, as it assures them a statistical advantage in the case of ADoS blocks. So if Miner A does not win round N+1, another miner that implements parallel validation assuredly will. End result: ADoS is orphaned.

End result: Harding's concern is irrelevant. The quadratic hash time problem solves itself. No change to the protocol needed.
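A minimal sketch of that scenario (purely illustrative; the block names, validation times, and tip-selection loop are assumptions, not any client's actual code):

Code:
// Minimal simulation of the parallel-validation scenario above: Miner A
// validates both candidate blocks at height N concurrently and builds the
// full block on whichever parent finishes validating first.
#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

struct Candidate {
    std::string name;
    int validate_ms;   // assumed time to fully validate the block
};

// Simulated validation: just sleeps for the assumed validation time.
std::string Validate(Candidate c)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(c.validate_ms));
    return c.name;
}

int main()
{
    // "ADoS" is the expensive-to-validate block; "C" is the normal block.
    std::vector<Candidate> tips = { {"ADoS", 2000}, {"C", 50} };

    // While these validations run, Miner A would be hashing a zero-transaction block.
    std::vector<std::future<std::string>> jobs;
    for (const auto& c : tips)
        jobs.push_back(std::async(std::launch::async, Validate, c));

    // Poll until one candidate is fully validated; that one becomes the parent
    // of the transaction-laden block, so here "ADoS" ends up orphaned.
    std::string parent;
    while (parent.empty()) {
        for (auto& job : jobs) {
            if (job.valid() &&
                job.wait_for(std::chrono::milliseconds(10)) == std::future_status::ready) {
                parent = job.get();
                break;
            }
        }
    }
    std::cout << "Building full block on top of: " << parent << "\n"; // expected: C
}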