
Topic: [POLL] Possible scaling compromise: BIP 141 + BIP 102 (Segwit + 2MB) - page 15. (Read 14376 times)

legendary
Activity: 4270
Merit: 4534
you can.
native keys still work after segwit activates. otherwise 16 million coins would be locked and unspendable!!
Wrong. The DOS attack vector is not present at 1 MB, and you can't create a 2 MB block with native keys when Segwit is activated.

lauda please

native keys would fill the 1MB base block so that segwit can't get a chance.. thus there won't be a 2MB block..
EG
imagine there were 4500 users. so far they argue over a blocksize that can only fit ~2250
even if 4499 users moved to segwit,
1 user can make 2249 NATIVE transactions, meaning only 1 segwit transaction gets into the base. so the blocksize only becomes 1.000444MB
in short
even if 99.9% of users moved over to segwit, they are still subject to normal bloat from a malicious bloater filling the base block, which takes up the base block space and doesn't allow segwit key users in. thus the ratio of segwit base:witness is super low.. thus the total blocksize remains super low, but the base block is super filled with native bloat
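The arithmetic in that example can be sketched as a toy model (the ~2250-transactions-per-MB figure and the equal base/witness size per transaction are illustrative assumptions taken from the post, not protocol constants):

```python
# Toy model of the "native bloat" scenario described above.
# Assumptions (illustrative only): ~2250 average-size transactions fill
# the 1 MB base block, and each segwit transaction contributes witness
# data roughly equal in size to its base portion.

TXS_PER_MB = 2250

def effective_block_size_mb(segwit_txs, native_txs):
    """Total block size in MB: the base block, plus witness data
    contributed only by the segwit transactions that got in."""
    assert segwit_txs + native_txs <= TXS_PER_MB, "toy model: one base block"
    base_mb = (segwit_txs + native_txs) / TXS_PER_MB
    witness_mb = segwit_txs / TXS_PER_MB
    return base_mb + witness_mb

# 2249 native transactions crowd out all but one segwit transaction:
print(round(effective_block_size_mb(segwit_txs=1, native_txs=2249), 6))
# -> 1.000444
```

With the base block entirely segwit instead, the same model gives 2.0 MB, which is the other extreme of the argument.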
legendary
Activity: 2674
Merit: 2965
Terminated.
emphasis >2mb
> (should be UP TO) but only if 100% of users use segwit keys to get 2MB
don't downplay it as if nothing needs to be done by users to attain the 2MB.. users need to move funds to new keys to attain it.
There is nothing wrong with that. Users are incentivized to start using Segwit and plenty of providers are either already ready or are 'in-progress'.

lauda: compromise meaning lost, sold out, victim: 'you left your password on your girlfriend's phone, now your funds are compromised'
No. That is just one of the meanings, see here: http://www.dictionary.com/browse/compromise

segwit is not an agreed reduced level. it's a risk of screwing many over for the fortunes of the corporate elite
This is bullshit and you know it.

go on PROVE IT!! explain it
Everything is properly explained on the Bitcoin Core website. Do I really need to draw it out for you?

you can.
native keys still work after segwit activates. otherwise 16 million coins would be locked and unspendable!!
Wrong. The DOS attack vector is not present at 1 MB, and you can't create a 2 MB block with native keys when Segwit is activated.

~3000 upstream full-validation filter nodes
and
3000 hodgepodge downstream nodes that don't fully validate, may have witness data or may not, may be pruned or may not.
You can blame BU for their stubbornness in refusing to implement SWSF. A lot of the very outdated nodes are irrelevant IMO; they don't properly validate some newer soft forks anyway (+ potentially have security holes as they can be very outdated, e.g. <0.10.0).
legendary
Activity: 4270
Merit: 4534
And that 4MB is 1MB of transactional data space, and 3MB of segwit data space, the latter of which is mostly reserved for future use.

So don't mislead others into thinking that all of a sudden we will get a 4 fold increase in transactional capacity. We won't.
In theory you can get up to 14 TPS with Segwit. However, with realistic usage that is not the case (similarly with the current network having a theoretical capacity of 7 TPS). Segwit will definitely deliver >2 MB according to the latest usage patterns.
emphasis >2mb
> (should be UP TO, but you're saying more than) but only if 100% of users use segwit keys to get 2MB
don't downplay it as if nothing needs to be done by users to attain the 2MB..
also, factoring in native spam and users not using segwit keys, the entire base block won't be 100% segwit users, meaning not attaining 2MB
EG
imagine there were 4500 users. so far they argue over a blocksize that can only fit ~2250
even if 4499 users moved to segwit,
1 user can make 2249 NATIVE transactions, meaning only 1 segwit transaction gets in. so the blocksize only becomes 1.000444MB

segwit is not the compromise
It is.

lauda: compromise meaning lost, sold out, victim: 'you left your password on your girlfriend's phone, now your funds are compromised'
community: compromise meaning agreed reduced level

segwit is not an agreed reduced level. it's a risk of screwing many over for the fortunes of the corporate elite

activating segwit solves nothing.
It does.

go on PROVE IT!! explain it

because even after activation, segwit will still be contending with native key users
Nobody cares. You can't DOS the network with "native" keys post Segwit.

you can.
native keys still work after segwit activates. otherwise 16 million coins would be locked and unspendable!!

segwit also turns the network into a 2-tier network of upstream 'filters' and downstream nodes, rather than an equal network of nodes that all agree on the same thing.
This is only the case if the majority of nodes don't support Segwit. Ironically to your statement, the big majority is in favor of Segwit.
segwit activates by pool only.
meaning (if all pools were equal, for simple explanation)
19 out of 20 pools activate it.
1 pool gets disregarded.
but then the node count turns into
~3000 upstream full-validation filter nodes
and
3000 hodgepodge downstream nodes that don't fully validate, may have witness data or may not, may be pruned or may not.

which the upstream nodes won't sync from but "could" filter to (if they were not banlist-biased)
legendary
Activity: 2674
Merit: 2965
Terminated.
And that 4MB is 1MB of transactional data space, and 3MB of segwit data space, the latter of which is mostly reserved for future use.

So don't mislead others into thinking that all of a sudden we will get a 4 fold increase in transactional capacity. We won't.
In theory you can get up to 14 TPS with Segwit. However, with realistic usage that is not the case (similarly with the current network having a theoretical capacity of 7 TPS). Segwit will definitely deliver >2 MB according to the latest usage patterns.
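The rough arithmetic behind those TPS figures can be reproduced with illustrative assumptions (~250-byte average transaction, one block per 600 seconds; neither number is a protocol constant):

```python
# Back-of-the-envelope TPS figures, matching the ~7 and ~14 TPS
# claims above under illustrative assumptions.

AVG_TX_BYTES = 250        # assumed average transaction size
BLOCK_INTERVAL_S = 600    # one block every ten minutes

def tps(block_size_mb):
    """Transactions per second for a given effective block size."""
    return block_size_mb * 1_000_000 / AVG_TX_BYTES / BLOCK_INTERVAL_S

print(round(tps(1.0), 1))  # current 1 MB blocks   -> 6.7
print(round(tps(2.1), 1))  # ~2.1 MB segwit blocks -> 14.0
```

So the "14 TPS" figure corresponds to an effective block size of about 2.1 MB, not to the 4 MB theoretical weight ceiling.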

segwit is not the compromise
It is.

activating segwit solves nothing.
It does.

because even after activation, segwit will still be contending with native key users
Nobody cares. You can't DOS the network with "native" keys post Segwit.

segwit also turns the network into a 2-tier network of upstream 'filters' and downstream nodes, rather than an equal network of nodes that all agree on the same thing.
This is only the case if the majority of nodes don't support Segwit. Ironically to your statement, the big majority is in favor of Segwit.
legendary
Activity: 4270
Merit: 4534
for clarity

Which is bigger, 2 MB blocks or 4 MB blocks   Roll Eyes

And that 4MB is
1MB of transactional data space, and 3MB of buffer space that only partially fills depending on the % of segwit users in the base block
(0% segwit in 1MB base = 0 of the 3MB extra used (1MB total))
(10% segwit in 1MB base = 0.1MB of the 3MB used (1.1MB total))
(100% segwit in 1MB base = 1.1MB of the 3MB used (2.1MB total))

the latter of which (at least 1.9MB) is mostly reserved for future use.

So don't mislead others into thinking that all of a sudden we will get a 4 fold increase in transactional capacity. We won't.

FTFY
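The breakdown above amounts to a linear interpolation; a minimal sketch, taking the post's ~1.1 MB witness-at-full-adoption figure as an assumption rather than a protocol fact:

```python
# Sketch of the percentage breakdown above: total block size as a
# function of the fraction of base-block transactions using segwit keys.
# Assumption (per the post): at 100% segwit usage the witness data adds
# roughly 1.1 MB on top of the 1 MB base, scaling linearly in between.

BASE_MB = 1.0
WITNESS_AT_FULL_ADOPTION_MB = 1.1

def total_block_mb(segwit_fraction):
    """Total block size in MB for a given segwit adoption fraction."""
    return BASE_MB + segwit_fraction * WITNESS_AT_FULL_ADOPTION_MB

for frac in (0.0, 0.1, 1.0):
    print(f"{frac:.0%} segwit -> {total_block_mb(frac):.2f} MB total")
```

Under this model the 4 MB weight ceiling is never reached by ordinary payment traffic, which is the point being argued.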
legendary
Activity: 4270
Merit: 4534
They are just two very different approaches... I thought the activation of Segwit will then be followed by an "easier/better" future plan for a blocksize increase?

Perhaps we can compromise and buy time by allowing bigger blocks (eg. 2MB) to activate, and then decide if Segwit should be implemented?

or if having an organised hard consensus (meaning old nodes have to drop off anyway (the small minority outside the activation threshold))
dynamic blocks (using policy.h (lower bound) as the dynamic flagging scaler) and segwit keys, where the witness is appended to the tail of the tx,
without needing to have separation (of trees (blocks)).

that way ALL nodes validate the same thing.

(i'll get to the punchline later about the then-lack of need for segwit.. but want to see if people run scenarios in their head first to click their lightbulb moment into realising what segwit does or doesn't do)
sr. member
Activity: 476
Merit: 501
Which is bigger, 2 MB blocks or 4 MB blocks   Roll Eyes

And that 4MB is 1MB of transactional data space, and 3MB of segwit data space, the latter of which is mostly reserved for future use.

So don't mislead others into thinking that all of a sudden we will get a 4 fold increase in transactional capacity. We won't.
sr. member
Activity: 406
Merit: 250
As many have argued before, I think this will not work; on the other hand, it is damaging. I personally think segwit plus a different transformation would be more flexible, such as BIP 106. I have a great deal of faith in that transformation; we need a healthy and predictable change and should not take unexpected measures. But in the actual situation I think this is very difficult: the miners will always keep to their decision, so it is difficult to change.
legendary
Activity: 3430
Merit: 3074
They are just two very different approaches... I thought the activation of Segwit will then be followed by an "easier/better" future plan for a blocksize increase?

Perhaps we can compromise and buy time by allowing bigger blocks (eg. 2MB) to activate, and then decide if Segwit should be implemented?


Segwit IS the compromise, and it's more of a compromise towards big blocks than what you're suggesting. Roll Eyes


Which is bigger, 2 MB blocks or 4 MB blocks   Roll Eyes
hero member
Activity: 658
Merit: 501
This second compromise is nothing but a veiled attempt at setting a precedent that we force a HF on the community without consensus, giving the decision to either miners or developers instead of the users themselves. As we can see from this poll, consensus over a HF is nowhere near being found, and thus the HF proposal offered isn't anywhere near good enough to be considered. I don't even want to consider accepting politically motivated hard forks and just want to focus on what's right for bitcoin.
legendary
Activity: 2282
Merit: 1023
They are just two very different approaches... I thought the activation of Segwit will then be followed by an "easier/better" future plan for a blocksize increase?

Perhaps we can compromise and buy time by allowing bigger blocks (eg. 2MB) to activate, and then decide if Segwit should be implemented?
legendary
Activity: 4270
Merit: 4534
segwit is not the compromise

activating segwit solves nothing.
moving people to segwit keys after activation is then a 'percentage of a solution'

never a complete 100% solving of bugs, never 100% fixing, never 100% boosting, because even after activation segwit will still be contending with native key users

also, the 4MB segwit weight is not utilised.
AT VERY BEST the expectation is 2.1MB.. the other 1.9MB would be left empty.
segwit cannot re-segwit again to utilise the 1.9MB of extra weight.

the extra weight would be filled (from reading core/blockstream plans) with bloat data, to include confidential commitments appended onto the end of a tx (not extra tx capacity), bloating a tx that would, without confidential commitments, have been a lot leaner



segwit also turns into a 2 tier network of upstream 'filters' and downstream nodes. rather than a equal network of nodes that all agree on the same thing.

for the reddit crew.. in simple terms: segwit full node = full data.. downstream = 'tl;dr' nodes
legendary
Activity: 3430
Merit: 3074
just compromise already

Segwit IS the compromise. I've refrained from saying this up until recently, but I think 4MB is too big. I'd be much happier with a Segwit proposal that kept the size at 1MB, but in the hope that others would recognise that 4MB is meeting in the middle, I helped to promote Segwit hoping they would accept it. The fact they have rejected Segwit only demonstrates that bigger blocks have got nothing to do with it; it's about having power over the source code.
copper member
Activity: 2898
Merit: 1464
Clueless!
I have read this compromise proposal from "ecafyelims" at Reddit and want to know if there is support for it here in this forum.

Compromise: Let's merge BIP 102 (2MB HF) and BIP 141 (Segwit SF)

Quote from: Reddit user ecafyelims
Let's merge BIP 102 (2MB HF) and BIP 141 (Segwit SF) into a single HF (with overwhelming majority consensus).

Since Segwit changes how the blocksize is calculated to use weights, our goal with the merger would be 2MB of transactional data.

Segwit's weighting system measures the transaction weight as 3×(non-witness base data) + (base data with witness data). This weight is then limited to 4M, favoring witness data.

Transactions aren't all base or all witness. So, in practice, the blocksize limit is somewhere between 1MB (only base data) and 4MB (only witness data) with Segwit.

With this proposed merger, we will increase the Segwit weight limit from 4M to 8M. This would allow 2MB of base data, which is the goal of the 2MB HF.

It's a win-win solution. We get 2MB increase and we get Segwit.

I know this compromise won't meet the ideals of everyone, but that's why it's a compromise. No one wins wholly, but we're better off than where we started.

It's very similar to what was already proposed last year at the Satoshi Roundtable. What is the opinion of the Bitcointalk community?
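The weight rule quoted above can be checked in a few lines (a minimal sketch; the function name is illustrative, and the byte figures ignore per-block overhead):

```python
# BIP 141 weight rule as quoted: weight = 3*base + (base + witness),
# i.e. 4*base + witness, capped at 4,000,000; the proposal raises the
# cap to 8,000,000 so that 2 MB of base data fits.

def block_weight(base_bytes, witness_bytes):
    """Block weight: base data counts 4x, witness data counts 1x."""
    return 3 * base_bytes + (base_bytes + witness_bytes)

# A full 1 MB legacy (all-base) block hits exactly the current 4M cap:
print(block_weight(1_000_000, 0))  # -> 4000000
# 2 MB of pure base data needs the proposed 8M cap:
print(block_weight(2_000_000, 0))  # -> 8000000
```

This is why the quote says the ceiling is 4 MB "only witness data": witness bytes count once toward the weight, base bytes four times.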



At this point in time it is about POWER to move the future of BTC, imho. The devs of any flavor.. most are mega whales.. so it is the coding/power trip now...
as reasonable as this sounds... I just don't see it happening, because a price above 1k and a 1MB block size is just dandy for bitcoin core for all they care. NOT saying
I agree with either camp, but bitcoin core... sees BTC as a store of value.. so imho... it can sit at 1MB for years as long as the price reflects that store-of-value thinking

thus stalemate.. thus 1MB btc... so the only other option if I'm correct (hope I'm not) is an attempted BU fork and/or BU getting 51% of the folk to push their view

all very silly.. just compromise already.. it's NOT like, if we had another unexpected btc fork like back in the day, they would not pop out a hard fix anyway

(what do I know, I at one time drank the BFL kool-aid) but it just seems it is about status/power, and the devs of any flavor just really, really don't like the other camp

 
legendary
Activity: 4270
Merit: 4534
Having had a little thought about this concept of 'emergent consensus': is not the fact that different versions of nodes, or different node implementations, exist on the network today a form of 'emergent consensus'?

to answer your question..

basically that BU and core already have the variables..

nodes: consensus.h policy.h
pools: consensus.h policy.h

and that all nodes have the 2 limits, although not utilised to the best of their ability.. meaning at the non-mining level core does not care about policy.h

and the punchline i was going to reveal to Lauda about my example of dynamics.
BU uses
consensus.h (...) as the upper-bound limit (32MB (2009), then 1MB for years, and in the future going up as the hard limit, EG 16MB)
policy.h (...) as the more fluid value BELOW consensus.h that, if the node is in the minority, can be pushed up by EB or by the user manually without needing to wait for events, and which is signalled in their user agent, eg 2MB and dynamically going up

core, however, requires tweaking code and recompiling to change either of them each time
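The two-limit idea can be sketched as a toy model (Python, not actual Bitcoin source; `consensus.h`/`policy.h` here are just stand-in parameters, and the numbers are illustrative):

```python
# Toy model of the two-limit scheme described above: a hard upper
# bound (the consensus.h analogue, changed rarely) and a softer
# policy bound (the policy.h analogue, adjustable at runtime
# without recompiling).  All numbers are illustrative.

def accept_block(size_mb, policy_limit_mb=2.0, consensus_limit_mb=16.0):
    """A block above the consensus limit is never acceptable;
    below it, acceptance follows the current (fluid) policy limit."""
    if size_mb > consensus_limit_mb:
        return False
    return size_mb <= policy_limit_mb

print(accept_block(1.5))                        # within policy
print(accept_block(3.0))                        # over policy, rejected
print(accept_block(3.0, policy_limit_mb=4.0))   # policy raised at runtime
```

Raising `policy_limit_mb` models the EB-style adjustment without touching the hard bound; raising `consensus_limit_mb` would be the rare, coordinated event.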
sr. member
Activity: 476
Merit: 501
Having had a little thought about this concept of 'emergent consensus': is not the fact that different versions of nodes, or different node implementations, exist on the network today a form of 'emergent consensus'?
legendary
Activity: 3024
Merit: 1640
lose: unfind ... loose: untight
Exactly. There is no problem which requires solving. This merely eliminates the DoS potential that quadratic hash time exploits might incur, if there were not this obvious workaround already inherent in the protocol.

lol

blockstreamer: segwit solves quadratics, its a must, its needed. quadratics is a big deal and segwit promises to solve it
community: malicious users will stick to native keys, segwits promise=broke
blockstreamer: quadratics has never been a problem relax its no big deal

You're looking ridiculous again, franky1. Y'all might wanna reel you-self back in.
legendary
Activity: 4270
Merit: 4534
Exactly. There is no problem which requires solving. This merely eliminates the DoS potential that quadratic hash time exploits might incur, if there were not this obvious workaround already inherent in the protocol.

lol

blockstreamer: segwit solves quadratics, its a must, its needed. quadratics is a big deal and segwit promises to solve it
community: malicious users will stick to native keys, thus still quadratic spamming even with segwit active.. meaning segwit's promise=broke
blockstreamer: quadratics has never been a problem relax its no big deal

i now await the usual rebuttal rhetoric
"blockstream never made any contractual commitment nor guarantee to fix sigop spamming" - as they backtrack earlier promises and sale pitches
or
personal attack (edit: there we have it. P.S. personal attacks aimed at me sound like whistles in the wind)
legendary
Activity: 4270
Merit: 4534

i even made a picture to keep peoples attention span entertained
What software did you do this in? (out of curiosity)


i just quickly opened up microsoft excel and added some 'insert shape' objects and lines..
i use many different packages depending on what i need, some graphical, some just whatever office doc i happen to already have open
legendary
Activity: 3024
Merit: 1640
lose: unfind ... loose: untight
https://bitco.in/forum/threads/buip033-passed-parallel-validation.1545/

For those unwilling to click through:

Quote
BUIP033: Parallel Validation
Proposer: Peter Tschipper
Submitted on: 10/22/2016

Summary:

Essentially Parallel Validation is a simple concept. Rather than validating each block within the main processing thread, we instead create a separate thread to do the block validation. If more than one block arrives to be processed then we create yet another thread. There are currently up to 4 parallel block processing threads available, making a big-block DDOS attack impossible. Furthermore, if any attacker were somehow able to jam all 4 processing threads and another block arrived, then the processing for the largest block would be interrupted, allowing the smaller block to proceed, unless the larger block or blocks have the most proof of work. So only the blocks with the most proof of work and the smallest blocks will be allowed to finish in such a case.

If there are multiple blocks processing at the same time, when one of the blocks wins the race to complete, then the other threads of processing are interrupted and the winner will be able to update the UTXO and advance the chain tip. Although the other blocks that were interrupted will still be stored on disk in the event of a re-org.
Which effectively.. solves nothing.

Exactly. There is no problem which requires solving. This merely eliminates the DoS potential that quadratic hash time exploits might incur, if there was not this obvious workaround already inherent in the protocol.

Lesser implementations that have no embedded nullification of this exploit may wish to take note.
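The scheduling rule the BUIP describes can be sketched as a toy model (Python rather than BU's actual C++; the real validation work, the proof-of-work tie-break, and the chain-tip update are all omitted):

```python
import threading

MAX_SLOTS = 4  # BUIP033 allows up to 4 parallel validation threads

class ValidationSlot:
    """One in-flight block validation; a worker thread would poll
    the `cancelled` event and abort when it is set."""
    def __init__(self, block_size):
        self.block_size = block_size
        self.cancelled = threading.Event()

class ParallelValidator:
    def __init__(self):
        self.lock = threading.Lock()
        self.slots = []

    def submit(self, block_size):
        """Try to start validating a block.  Returns a slot, or None
        if every slot is busy with a block no larger than this one."""
        with self.lock:
            if len(self.slots) < MAX_SLOTS:
                slot = ValidationSlot(block_size)
                self.slots.append(slot)
                return slot
            # All slots busy: interrupt the largest in-flight block so a
            # smaller newcomer can proceed.  (The quoted BUIP also spares
            # blocks with the most proof of work; this sketch omits that.)
            largest = max(self.slots, key=lambda s: s.block_size)
            if block_size < largest.block_size:
                largest.cancelled.set()
                self.slots.remove(largest)
                slot = ValidationSlot(block_size)
                self.slots.append(slot)
                return slot
            return None
```

The point of the design is visible even in the sketch: an attacker's oversized block can occupy a slot, but it is the first thing evicted when an honest, smaller block arrives.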