
Topic: [POLL] Possible scaling compromise: BIP 141 + BIP 102 (Segwit + 2MB) - page 10. (Read 14409 times)

legendary
Activity: 2674
Merit: 2965
Terminated.
The steps come in an order, Lauda. The 1st step, bizarrely, comes first.

There's little point in talking about the subsequent steps if the 1st step is a significant problem in its own right.
So what is your point? Do Segwit and then nothing? That makes nothing better; it actually makes things worse.
legendary
Activity: 3430
Merit: 3080
The steps come in an order, Lauda. The 1st step, bizarrely, comes first.

There's little point in talking about the subsequent steps if the 1st step is a significant problem in its own right.
legendary
Activity: 2674
Merit: 2965
Terminated.
It is well known that you could attempt to manipulate it in order to fit a lower number of TXs. However, I was not talking about this part; I was talking about the other steps. The increase of the 'base size' helps in cases where users (or malicious actors) attempt to use a lot of "native keys". I will respond in that thread as well.

I think the system is very hard to game if you set:
1) A lower bound (the absolute minimum size when determining the maximum block size).
2) An upper bound (the absolute maximum size when determining the maximum block size).
3) Maximum movements per period (up and down).
4) Maximum total growth per year.

However, it may be very hard to gain consensus, as this would introduce a lot of newly 'chosen' parameters. A rough sketch of the four constraints is below.
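A minimal sketch of how those four constraints could compose, assuming entirely hypothetical parameter values and a made-up voting signal; this illustrates the clamping logic only, it is not a concrete proposal:

Code:
# Hypothetical dynamic block size rule combining the four constraints.
# All numeric values are illustrative, not proposed consensus parameters.

LOWER_BOUND = 1_000_000        # 1) absolute minimum base size (bytes)
UPPER_BOUND = 8_000_000        # 2) absolute maximum base size (bytes)
MAX_STEP = 0.10                # 3) max +/-10% movement per adjustment period
MAX_YEARLY_GROWTH = 2.0        # 4) at most 2x total growth per year

def next_limit(current, vote_up, year_start_limit):
    """Compute the next period's limit under all four constraints."""
    # 3) bounded per-period movement, direction set by (hypothetical) voting
    proposed = current * (1 + MAX_STEP if vote_up else 1 - MAX_STEP)
    # 4) cap cumulative growth within the year
    proposed = min(proposed, year_start_limit * MAX_YEARLY_GROWTH)
    # 1) and 2) absolute bounds
    return max(LOWER_BOUND, min(UPPER_BOUND, proposed))

print(next_limit(1_000_000, vote_up=True, year_start_limit=1_000_000))  # ~1.1 MB: one +10% step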
full member
Activity: 182
Merit: 107
2MB is an action that would "compromise" decentralization and therefore security, and it would occur as a result of political pressure rather than technical necessity.

That is not a compromise I am willing to make.

Do you have any scientific research to back this up?

Any centralization of mining is driven by power costs, not block size.
legendary
Activity: 2674
Merit: 2965
Terminated.
It can be seriously abused at Step 1, Lauda; the yearly maximums are part of the steps subsequent to Step 1.
Care to elaborate further on that with an example? I don't think we are on the same page.
legendary
Activity: 3430
Merit: 3080
Nope. Entirely unacceptable; all of that will be abused seven ways from Sunday.
I don't see how it could possibly be abused (size-wise) if you add a maximum yearly growth. The consensus rules would dictate that it can't be increased more than that.

It can be seriously abused at Step 1, Lauda; the yearly maximums are part of the steps subsequent to Step 1.
legendary
Activity: 2674
Merit: 2965
Terminated.
To summarize the last discussions: Would that be an acceptable base for a "real" BIP?

1) Segwit as the first measure;
2) adopt the DooMAD/Garzik/Upal proposal (10%+ or 10%- voting when conditions are met), modified by adding an "upper limit";
3) for the first year, a 1.5 MB upper base limit (equivalent to ~3 MB maximal "block unit" size, for now);
4) for years 2-5, a 3 MB upper base limit (equivalent to 6-12 MB [worst case] maximal "block unit" size)
It doesn't look like you entirely understand how the block size works after Segwit. It changes the block size into two parameters: base size and weight (currently 1:4). If you use a base size of 2 MB, you can get up to 4 MB worth of Segwit TXs, or a maximum of 8 MB(!) in case of extreme multisignature usage. Your 'proposal' needs to be rewritten. An 'upper base limit' of 1.5 MB is actually equivalent to a maximum of 6 MB, not 3 MB.
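The arithmetic behind that correction, as a rough sketch: under BIP 141 the weight cap is four times the base limit, and in the worst case (a block that is almost all witness data) the serialized block can approach the full weight cap in bytes. Function and variable names here are illustrative only:

Code:
# Worst-case serialized size implied by an 'upper base limit',
# assuming the BIP 141 ratio of 4 weight units per base byte.

def worst_case_total_mb(base_limit_mb):
    weight_limit = 4 * base_limit_mb   # weight cap scales with the base limit
    # weight = 3*base_size + total_size; if base_size is tiny (witness-heavy
    # multisig), total_size can approach the whole weight cap in bytes.
    return weight_limit

print(worst_case_total_mb(1.5))  # 6.0 MB, not 3 MB
print(worst_case_total_mb(2.0))  # 8.0 MB worst case for a 2 MB base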

Nope. Entirely unacceptable, all of that will be abused in seven ways from Sunday
I don't see how it could possibly be abused (size wise) if you add a maximum yearly growth. The consensus rules would dictate that it can't be increased more than that.

You initially denied the fact that it does ... now you are backtracking to 'not an issue'. Perhaps you should actually think about your claims before you make them.
I'm not backtracking on anything. This just shows open-mindedness, unlike what can be said for you and your kind. Money rules I guess.

IOW, you are happy to wallow in your ignorance. ::sigh:: Oh well - it certainly would not be a first.
Parallel validation is so useless that it isn't worth reading through its BIP. It's not about ignorance, it's about time efficiency.

I don't need to contact him. You are the one that appealed to his supposed authority.
I barely know who the person is. Stop using logical fallacies when you are clearly not adequately educated to properly do so.
legendary
Activity: 4410
Merit: 4766
Knowing that ultimately (due to not preventing native key use) Segwit's only real promise is a fee discount (for the Segwit key volunteers), it causes fungibility issues between TX types, depending on which cause preference or not (sigop/malleability-armed and expensive, or disarmed and cheap).

People will have a preference over which type of funds they receive and which they prefer to deal with, based on which keys are used. E.g. native (starting with 1) becomes 'nasty' and Segwit (starting with 3) becomes 'good money'; people will not want to risk funds coming from a 1-address.

Not only will users start having their preference, but pools will too. If you know you're dealing with funds coming from a native key, they may end up taking longer to confirm, or be rejected and never confirm if pools start ignoring native keys, etc.

As well as the things jbreher mentioned.
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
That isn't disputed by most SegWit supporters.
Source?

I think you are on the verge of understanding that issue.
I don't see why it is an issue. I see it as a non-issue, just as you see quadratic validation as a non-issue.

I said The SegWit Omnibus Changeset destroys fungibility. I did not say it was an issue. You initially denied the fact that it does ... now you are backtracking to 'not an issue'. Perhaps you should actually think about your claims before you make them.

OK... sure. I'm quite certain you are unable to poke a hole in my scenario there. Why don't you try? Or even ... why don't you ping Harding with what I posted, and have him see if he can poke holes in it?
I just quickly went through it and saw your conclusion.

IOW, you are happy to wallow in your ignorance. ::sigh:: Oh well - it certainly would not be a first.

Quote
I'm not going to be a messenger between you and someone with clearly superior understanding. Find a way to contact him yourself.

I don't need to contact him. You are the one that appealed to his supposed authority. Which, if accurately relayed by you in both directions (which may or may not have been the case), displays an incomplete analysis of the scenario.
legendary
Activity: 2674
Merit: 2965
Terminated.
So what is the definition of weight, and how does this relate to data storage requirements?
You should research Segwit yourself. I don't plan on rewriting what is already written in a lot of articles. Use Google.

Are you thinking that a quadratic sigop attack is just about limiting sigops so as not to cause validation delays when a solved block is relayed?
No.

E.g. if a block in v0.14 only allowed 80k sigops per block and 16k sigops per TX, it only takes 5 TXs to fill a block and not let anything else in.
It is 80k sigops ONCE Segwit is activated. It is scaled with the 4 MB weight, so it comes down to the same thing as before Segwit.
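For reference, a sketch of how that scaling works under BIP 141: the old 20,000-sigop block limit becomes an 80,000 "sigop cost" limit, and legacy sigops are counted at 4x cost, so the effective cap on legacy-only blocks is unchanged. Treat the snippet as illustrative:

Code:
# BIP 141 sigop accounting: legacy sigops are scaled by the witness
# factor of 4; witness sigops count at face value.

WITNESS_SCALE_FACTOR = 4
MAX_BLOCK_SIGOPS_COST = 20_000 * WITNESS_SCALE_FACTOR  # 80,000

def sigop_cost(legacy_sigops, witness_sigops=0):
    return legacy_sigops * WITNESS_SCALE_FACTOR + witness_sigops

# A legacy-only block hits the cap at the same 20k sigops as before:
assert sigop_cost(20_000) == MAX_BLOCK_SIGOPS_COST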

Thus a native key user can, with just 5 TXs, stop other TXs getting in, and thus all Segwit promises can't be met.
It is quite likely that transactions with such a high number of sigops would be above 100 kB; therefore, they'd be neither relayed nor mined by any miner using Bitcoin Core (a non-standard TX).

So effectively, what does Segwit actually offer that is a real feature with a real 100% guaranteed promise?
Miners can prioritize Segwit transactions (e.g. 20% vs. 80%). Simple solution.

That's not what I was looking for. I wanted to know how you managed to extrapolate the conclusion that "most Segwit supporters" don't dispute that.

This is inadequate data. I'd like to see worst-case numbers for every block size (e.g. 1 MB, 2 MB, ... 8 MB). This could be nicely represented in a table.

2MB is an action that would "compromise" decentralization and therefore security, and it would occur as a result of political pressure rather than technical necessity.
Without adequate consensus (this being an economic supermajority and 95% of the network), it does harm every part of the ecosystem.
sr. member
Activity: 476
Merit: 501
In my opinion Segwit should be a hard fork. It's not safe or desirable as a soft fork.
A dynamic base block solution is required that doesn't use arbitrary growth settings or invite further block size hard fork debates.
Miners aren't going to suddenly create gigantic blocks; they would be orphaned by the speed of their competitors.
Technology limitations will ultimately decide block size growth. Miners are not going to spend 10 minutes creating a block.
Let a natural fee market develop between on-chain transactions and off-chain service providers.
The two need implementing at the same time, in the same hard fork, or conflict-of-interest issues could re-occur.
legendary
Activity: 4410
Merit: 4766
To summarize the last discussions: Would that be an acceptable base for a "real" BIP?

1) Segwit as the first measure;
2) adopt the DooMAD/Garzik/Upal proposal (10%+ or 10%- voting when conditions are met), modified by adding an "upper limit";
3) for the first year, a 1.5 MB upper base limit (equivalent to ~3 MB maximal "block unit" size, for now);
4) for years 2-5, a 3 MB upper base limit (equivalent to 6-12 MB [worst case] maximal "block unit" size)

all that coded into a single BIP/pull request, to make it attractive for "big blocker" miners to accept it.

(I hope the terminology is OK)
3 and 4 = spoon-feeding numbers, with devs hard-coding the limits and users needing to download new versions (if a dev team even releases the limit).
If the last 2 years are anything to go by, forget it: 2 years SO FAR of debate just to get a 2MB REAL limit, even when all devs admit 4-8 MB is safe, so the 2015 2MB compromise was actually OK all along...
We can't keep having these "please, devs, release a version that has X" debates.
And we can't even have users set a limit and release it to the public in their own repo, due to "it's just a clone of Core but not peer reviewed, REKT it".

Instead:
Each node could have a speed-test mechanism. Its start-point is when it sees a new-height block; its end-point is after it has validated and relayed it out.
Then, over a scale of 2016 blocks, it combines the times to get a total score (recalculated every 2016 blocks). That way it can flag its capability,
and we can see what the network is capable of.

That way the network can know safe, capable growth amounts without spoon-feeding and 2-year debates. (A rough sketch of the idea follows.)
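A loose sketch of that mechanism, with entirely hypothetical names, just to pin down the idea of per-node timing over a 2016-block window:

Code:
# Hypothetical per-node speed test: time each block from first sight
# to validated-and-relayed, aggregate over a 2016-block window, and
# expose the result as a capability score.

from collections import deque

WINDOW = 2016  # measurement window, recalculated as it rolls

class SpeedScore:
    def __init__(self):
        self.samples = deque(maxlen=WINDOW)

    def record(self, t_first_seen, t_relayed):
        # start-point: new-height block seen; end-point: validated and relayed
        self.samples.append(t_relayed - t_first_seen)

    def capability(self):
        # total score over the window; None until a full window is measured
        if len(self.samples) < WINDOW:
            return None
        return sum(self.samples)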

Then, below that network (capability) upper level, a preference lower level:
at the GUI (options tab) and rpc-console (debug) level, or even via a downloadable .ini file patch, USERS can change the settings themselves without needing to recompile their independent client each time, or having to wait for devs to spoon-feed it.
That level would be adjustable by consensus. (A toy example of such a user-set preference is below.)
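A toy illustration of that kind of user-editable setting, using a made-up file name and key; nothing here corresponds to an actual Bitcoin Core option:

Code:
# Hypothetical: read a user's preferred block size limit from an
# .ini-style file so it can be changed without recompiling.

import configparser

config = configparser.ConfigParser()
config.read("nodelimits.ini")

# nodelimits.ini might contain:
# [limits]
# preferred_block_size = 2000000
preferred = config.getint("limits", "preferred_block_size",
                          fallback=1_000_000)
print(f"advertising preferred block size: {preferred} bytes")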



legendary
Activity: 3430
Merit: 3080
To summarize the last discussions: Would that be an acceptable base for a "real" BIP?

1) Segwit as the first measure;
2) adopt the DooMAD/Garzik/Upal proposal (10%+ or 10%- voting when conditions are met), modified by adding an "upper limit";
3) for the first year, a 1.5 MB upper base limit (equivalent to ~3 MB maximal "block unit" size, for now);
4) for years 2-5, a 3 MB upper base limit (equivalent to 6-12 MB [worst case] maximal "block unit" size)

all that coded into a single BIP/pull request, to make it attractive for "big blocker" miners to accept it.

(I hope the terminology is OK)

Nope. Entirely unacceptable; all of that will be abused seven ways from Sunday.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Has Core not done any research on this, then?
I'm saying that you and I don't have adequate data, and there is no exact data in this thread. There was some article somewhere about a block that takes longer than 10 minutes to validate at 2 MB.
https://rusty.ozlabs.org/?p=522
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
To summarize the last discussions: Would that be an acceptable base for a "real" BIP?

1) Segwit as the first measure;
2) adopt the DooMAD/Garzik/Upal proposal (10%+ or 10%- voting when conditions are met), modified by adding an "upper limit";
3) for the first year, a 1.5 MB upper base limit (equivalent to ~3 MB maximal "block unit" size, for now);
4) for years 2-5, a 3 MB upper base limit (equivalent to 6-12 MB [worst case] maximal "block unit" size)

all that coded into a single BIP/pull request, to make it attractive for "big blocker" miners to accept it.

(I hope the terminology is OK)
legendary
Activity: 1652
Merit: 1029
2MB is an action that would "compromise" decentralization and therefore security, and it would occur as a result of political pressure rather than technical necessity.

That is not a compromise I am willing to make.
full member
Activity: 182
Merit: 107
That isn't disputed by most SegWit supporters.
Source?

Quote
The existence of two UTXO types with different security and economic properties also deteriorates Bitcoin’s fungibility. Miners and fully validating nodes may decide not to relay, or include in blocks, transactions that spend to one type or the other. While on one hand this is a positive step towards enforceability (i.e. soft enforceability), it is detrimental to unsophisticated Bitcoin users who have funds in old or non-upgraded wallets. Furthermore, it is completely reasonable for projects such as the lightning network to reject forming bidirectional payment channels (i.e. a multisignature P2SH address) using non-SW P2SH outputs due to the possibility of malleability. Fundamentally this means that the face-value of Bitcoin will not be economically treated the same way depending on the type of output it comes from.

https://medium.com/the-publius-letters/segregated-witness-a-fork-too-far-87d6e57a4179#.mt4mf9jjh
legendary
Activity: 4410
Merit: 4766
Lauda,

Take no offense at this (think of it as a genuine question, requiring your critical thinking cap to be worn):
are you thinking that a quadratic sigop attack is just about limiting sigops so as not to cause validation delays when a solved block is relayed?

Because that's how I'm reading your thought process on your definition of a DoS attack.


Have you thought about this as a DoS attack?
Those limits, which you think defend against validation-time attacks on solved blocks, can actually be used as an attack by filling a block.

E.g. if a block in v0.12 only allowed 20k sigops per block and 4k sigops per TX, it only takes 5 TXs to fill a block and not let anything else in.
E.g. if a block in v0.14 only allowed 80k sigops per block and 16k sigops per TX, it only takes 5 TXs to fill a block and not let anything else in.

Thus a native key user can, with just 5 TXs, stop other TXs getting in, and thus all Segwit promises can't be met.
No boost to block size, as no Segwit TXs are being added to the block.

Remember, I'm not talking about delay in validation/propagation times after a block is solved. I'm just talking about filling blocks (a mempool attack leaving everyone waiting). The arithmetic is worked below.
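The fill arithmetic from the two examples above, spelled out; the figures are the ones quoted in this post, and the function name is illustrative:

Code:
# How many maximum-sigop transactions exhaust a block's sigop budget.

def txs_to_fill(block_sigop_limit, per_tx_sigop_limit):
    return block_sigop_limit // per_tx_sigop_limit

print(txs_to_fill(20_000, 4_000))   # v0.12 figures -> 5 TXs fill a block
print(txs_to_fill(80_000, 16_000))  # v0.14 figures -> 5 TXs fill a block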



So, knowing that the Segwit 'activation' does nothing to disarm native keys, meaning:
blocks can stay at 1 MB (either with as few as 5 native TXs (sigop bloat-fill) or thousands of native TXs (data bloat-fill)), and
malleation and sigops are not disarmed...

So effectively, what does Segwit actually offer that is a real feature with a real 100% guaranteed promise?
sr. member
Activity: 476
Merit: 501
if you're talking about a scenario in which Segwit is activated. The block size is split into two parameters:
1) 1 MB base.
2) 4 MB weight.

So what is the definition of weight, and how does this relate to data storage requirements?
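For reference, BIP 141's definition can be sketched as: weight = 3 x base size + total size, capped at 4,000,000 weight units. What nodes actually store on disk is the total serialized size, so weight is a limit metric rather than a storage figure. The snippet below just restates that formula:

Code:
# BIP 141 block weight: base size excludes witness data, total size
# includes it. The consensus cap is 4,000,000 weight units.

MAX_BLOCK_WEIGHT = 4_000_000

def block_weight(base_size, total_size):
    return 3 * base_size + total_size

# A legacy-only 1 MB block (no witness data) sits exactly at the cap:
assert block_weight(1_000_000, 1_000_000) == MAX_BLOCK_WEIGHT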