
Topic: Segregated witness - The solution to Scalability (short term)? - page 3. (Read 23163 times)

legendary
Activity: 994
Merit: 1035

This is indeed true and is included in the FAQ backed by most of the developers, and it was something I was unaware of as well. I haven't done the math, but it appears that a 2 MB block heavy with P2SH can extend validation time to those lengths on certain nodes. It is likely a worst-case scenario, but it does illustrate how even a modest increase can bring down nodes in an already delicate environment where we have too much centralization.


If this is true, then SW is not a good idea, since it increases the effective block size; and when you have signatures and transactions separated, shouldn't verification take longer? If a 3.2MB block takes 10 minutes to verify, then SW will not work at all, since it bumps the limit to 4MB; attackers would only need to send out such specifically constructed blocks to stall the network.

The point of the FAQ is that simply increasing the block limit isn't enough, and ....

In 2MB blocks, a 2MB transaction can be constructed that may take over 10 minutes to validate which opens up dangerous denial-of-service attack vectors. Other lines of code would need to be changed to prevent these problems.

The developers are cognizant of these problems, which is why they are proposing something more complicated than simply changing one variable.... Meaning they intend to prevent that attack when they raise capacity by 1.6-4x with SepSig, specifically by changing those other lines of code.
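
To put rough numbers on why validation doesn't scale linearly, here is a back-of-the-envelope sketch (my own illustration, not from the FAQ; the input counts and per-input overhead are made-up figures). Under the current signature-hashing rules each input's signature check re-hashes nearly the whole transaction, so doubling a pathological transaction roughly quadruples the bytes that must be hashed, while a linear scheme of the kind proposed alongside SepSig avoids that blow-up:

Code:
# Rough model of signature-hashing work. Assumption: under legacy rules each
# input's signature check re-hashes (almost) the entire transaction, while a
# linear scheme hashes the shared data once plus a small per-input overhead.
# All numbers below are illustrative, not measurements.

def bytes_hashed_legacy(tx_size_bytes, num_inputs):
    return num_inputs * tx_size_bytes

def bytes_hashed_linear(tx_size_bytes, num_inputs, per_input_overhead=200):
    return tx_size_bytes + num_inputs * per_input_overhead

for mb in (1, 2):
    size = mb * 1_000_000          # transaction size
    inputs = mb * 1_000            # assume input count grows with size
    print(f"{mb} MB tx: "
          f"{bytes_hashed_legacy(size, inputs) / 1e9:.1f} GB hashed (legacy) vs "
          f"{bytes_hashed_linear(size, inputs) / 1e6:.1f} MB hashed (linear)")

In this toy model, doubling the transaction from 1 MB to 2 MB takes the legacy figure from roughly 1 GB to roughly 4 GB of hashing, which is the quadratic blow-up the FAQ is warning about.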
legendary
Activity: 1988
Merit: 1012
Beyond Imagination

This is indeed true and is included in the FAQ backed by most of the developers, and it was something I was unaware of as well. I haven't done the math, but it appears that a 2 MB block heavy with P2SH can extend validation time to those lengths on certain nodes. It is likely a worst-case scenario, but it does illustrate how even a modest increase can bring down nodes in an already delicate environment where we have too much centralization.


If this is true, then SW is not a good idea, since it increases the effective block size; and when you have signatures and transactions separated, shouldn't verification take longer? If a 3.2MB block takes 10 minutes to verify, then SW will not work at all, since it bumps the limit to 4MB; attackers would only need to send out such specifically constructed blocks to stall the network.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
Bitcoin itself is a huge "Economic Change Event" in the wider context of the existing monetary systems (I think this is where Jeff probably got the idea from) ... fees coming online for bitcoin TX are a storm in a teacup by comparison.

During the July and September coinwallet.eu attacks, all the blocks were full for at least a week, but you only needed to raise the fee to 0.0005 BTC to get a confirmation within 10 minutes. How is that a storm in a teacup?
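
For perspective, the arithmetic behind that fee (a rough sketch; the 250-byte transaction size and the 0.0001 BTC comparison fee are my own illustrative assumptions, not figures from the thread):

Code:
# Fee-rate arithmetic for the stress-test period. The 250-byte typical
# transaction size and the 0.0001 BTC baseline fee are assumptions for
# illustration only.

SATOSHI_PER_BTC = 100_000_000

def fee_rate_sat_per_byte(fee_btc, tx_size_bytes=250):
    return fee_btc * SATOSHI_PER_BTC / tx_size_bytes

print(fee_rate_sat_per_byte(0.0005))  # 200.0 sat/byte to jump the backlog
print(fee_rate_sat_per_byte(0.0001))  # 40.0 sat/byte with a common flat fee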
legendary
Activity: 1988
Merit: 1012
Beyond Imagination

To be clear, is this re-serialization totaling 1.25 GB something that the _current_ Bitcoin Core does when faced with this aberrant block, or are we comparing apples to oranges?

Got a link to the presentation?

F2Pool did this on their node; the video is from the September Scaling Bitcoin conference:

 https://www.youtube.com/watch?v=TgjrS-BPWDQ

https://scalingbitcoin.org/montreal2015/presentations/Day2/11-Friedenbach-scaling-bitcoin.pdf
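
As a rough sanity check on that 1.25 GB figure (my own estimate, assuming a single transaction of roughly 1 MB whose every input re-hashes approximately the full transaction under legacy sighash rules):

Code:
# Back-of-the-envelope check of the ~1.25 GB re-serialization figure quoted
# above. Assumption: one ~1 MB transaction where each input's signature check
# re-hashes roughly the whole transaction.

BYTES_REHASHED = 1.25e9    # figure quoted in the talk
TX_SIZE_BYTES = 1_000_000  # assumed ~1 MB transaction

implied_inputs = BYTES_REHASHED / TX_SIZE_BYTES
print(f"~{implied_inputs:.0f} inputs")  # on the order of a thousand-plus inputs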
legendary
Activity: 994
Merit: 1035
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012104.html

Three BIPs for SepSig are being developed:

CONSENSUS BIP: witness structures and how they're committed to blocks, cost metrics and limits, the scripting system (witness programs), and the soft fork mechanism. Draft - https://github.com/bitcoin/bips/pull/265

PEER SERVICES BIP: relay message structures, witnesstx serialization, and other issues pertaining to the p2p protocol such as IBD, synchronization, tx and block propagation, etc...

APPLICATIONS BIP: scriptPubKey encoding formats and other wallet interoperability concerns.

-------------------------------------------------------------

legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
   Other changes required: Even a single-line change such as increasing the maximum block size has effects on other parts of the code, some of which are undesirable. For example, right now it’s possible to construct a transaction that takes up almost 1MB of space and which takes 30 seconds or more to validate on a modern computer (blocks containing such transactions have been mined). In 2MB blocks, a 2MB transaction can be constructed that may take over 10 minutes to validate which opens up dangerous denial-of-service attack vectors. Other lines of code would need to be changed to prevent these problems.

The average non-developer likely assumes that 2MB blocks are a safe and conservative change to make, but a targeted DDoS attack exploiting that 10-minute validation delay (up from 30 seconds for 1MB) would be disastrous.

I must admit that I am guilty of such an assumption. Validation time being linear with block size seems rational on the surface, before looking into the matter.

Are you asserting that the worst case for a 1MB block size today is less than 30 seconds on the same hardware that would have a worst case of 10 minutes if the only variable changed is a blocksize doubled to 2MB?

What are the characteristics of such an aberrant block?



I heard that some new libraries can dramatically increase verification speed, so this might not be a large concern by then.

Thanks. To be clear, is this re-serialization totaling 1.25 GB something that the _current_ Bitcoin Core does when faced with this aberrant block, or are we comparing apples to oranges?

Got a link to the presentation?
legendary
Activity: 994
Merit: 1035
   Other changes required: Even a single-line change such as increasing the maximum block size has effects on other parts of the code, some of which are undesirable. For example, right now it’s possible to construct a transaction that takes up almost 1MB of space and which takes 30 seconds or more to validate on a modern computer (blocks containing such transactions have been mined). In 2MB blocks, a 2MB transaction can be constructed that may take over 10 minutes to validate which opens up dangerous denial-of-service attack vectors. Other lines of code would need to be changed to prevent these problems.

The average non-developer likely assumes that 2MB blocks are a safe and conservative change to make, but a targeted DDoS attack exploiting that 10-minute validation delay (up from 30 seconds for 1MB) would be disastrous.

I must admit that I am guilty of such an assumption. Validation time being linear with block size seems rational on the surface, before looking into the matter.

Are you asserting that the worst case for a 1MB block size today is less than 30 seconds on the same hardware that would have a worst case of 10 minutes if the only variable changed is a blocksize doubled to 2MB?

What are the characteristics of such an aberrant block?

This is indeed true and is included in the FAQ backed by most of the developers, and it was something I was unaware of as well. I haven't done the math, but it appears that a 2 MB block heavy with P2SH can extend validation time to those lengths on certain nodes. It is likely a worst-case scenario, but it does illustrate how even a modest increase can bring down nodes in an already delicate environment where we have too much centralization.

I would like to see the math as well.

---------------------------------------------------------------

https://bitcoinmagazine.com/articles/segregated-witness-part-why-you-should-care-about-a-nitty-gritty-technical-trick-1450827675

Segregated Witness, Part 2: Why You Should Care About a Nitty-Gritty Technical Trick


I heard that some new libraries can dramatically increase verification speed, so this might not be a large concern by then.

If you review their FAQ you can see this is precisely why they want to roll out the other libraries first before increasing the limit with a hardfork. They are well aware that LN won't be very useful at 1MB + 4MB SepSig.
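
For anyone wondering where the 1.6-4x range comes from, here is a rough capacity calculation (my own sketch, assuming the commonly described accounting in which witness bytes are discounted to a quarter of their size against a 1 MB-equivalent limit; the ~60% witness share of a typical transaction is an illustrative guess):

Code:
# Effective capacity under a witness discount. Assumptions: non-witness bytes
# count fully toward a 1 MB-equivalent limit, witness bytes count at 1/4, and
# the witness fraction of a typical transaction (~60%) is a guess.

LIMIT_BYTES = 1_000_000  # 1 MB-equivalent of discounted bytes

def effective_capacity_mb(witness_fraction):
    # Each raw transaction byte costs (1 - w) + w/4 discounted bytes.
    cost_per_byte = (1 - witness_fraction) + witness_fraction / 4
    return LIMIT_BYTES / cost_per_byte / 1e6

print(effective_capacity_mb(0.60))  # ~1.8 MB of transactions for a typical mix
print(effective_capacity_mb(1.00))  # 4.0 MB upper bound if every byte were witness data

That is why typical-usage estimates land in the 1.6-2x range while the theoretical upper bound is 4x.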
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
   Other changes required: Even a single-line change such as increasing the maximum block size has effects on other parts of the code, some of which are undesirable. For example, right now it’s possible to construct a transaction that takes up almost 1MB of space and which takes 30 seconds or more to validate on a modern computer (blocks containing such transactions have been mined). In 2MB blocks, a 2MB transaction can be constructed that may take over 10 minutes to validate which opens up dangerous denial-of-service attack vectors. Other lines of code would need to be changed to prevent these problems.

The average non-developer likely assumes that 2MB blocks are a safe and conservative change to make, but a targeted DDoS attack exploiting that 10-minute validation delay (up from 30 seconds for 1MB) would be disastrous.

I must admit that I am guilty of such an assumption. Validation time being linear with block size seems rational on the surface, before looking into the matter.

Are you asserting that the worst case for a 1MB block size today is less than 30 seconds on the same hardware that would have a worst case of 10 minutes if the only variable changed is a blocksize doubled to 2MB?

What are the characteristics of such an aberrant block?



I heard that some new libraries can dramatically increase verification speed, so this might not be a large concern by then.
legendary
Activity: 3920
Merit: 2349
Eadem mutata resurgo
I understand it and don't hate the core developers. Seg wit is a thoughtful solution, but not an elegant one. IMHO, it's just more spackle over the cracks and a further delay to increasing block size.  

Good observation

In change management, the first question is not whether a change is good or bad, but why we must make this change at all. It is always the motivation behind the change that is worth examining.

It seems SegWit was invented to temporarily circumvent the hard-coded 1MB block size limitation, so that traffic can still grow without triggering a fee event.

So the next question is: why are we so afraid of a fee event?

Jeff gave some answers here:
Quote
*Key observation: A Bitcoin Fee Event (see def. at top) is an Economic Change Event.*

An Economic Change Event is a period of market chaos, where large changes to prices and sets of economic actors occurs over a short time period.

A Fee Event is a notable Economic Change Event, where a realistic projection forsees higher fee/KB on average, pricing some economic actors (bitcoin projects and businesses) out of the system.

I don't think this so-called "chaos" is convincing enough, so the next question: who are these bitcoin projects and businesses, and is bitcoin's goal to benefit average people or to serve these projects/businesses?

Although institutions have large capital and influence in the industry, I don't think bitcoin's purpose is to become another payment network for banks (banks being the highest form of business; a business large enough will start doing banking).

In fact, businesses can always pass the fee cost on to customers, and those customers are not fee sensitive (statistics show that the majority of users come to bitcoin for long-term value storage and high-value international remittance, neither of which is very sensitive to fees or transaction frequency), so a higher fee will not affect businesses either. And large businesses can establish clearing channels to dramatically reduce fee costs; this is common practice in the financial world, and they don't need to change bitcoin's architecture to do that.

So I think the motivation behind changing bitcoin's architecture is still not convincing enough. Since no one has seen a fee event, it might not be the "chaos" predicted by Jeff, so people must see it with their own eyes to be convinced that it is a problem that really needs to be solved. What if it is not a problem at all? Banks are still closed during weekends and holidays; is that a problem for our financial system?

Even if a fee event negatively affects the majority of users' experience, the path to future scaling should still follow Satoshi's vision as much as possible. Anyway, this is his invention; no one except him has the right to change it into something totally different.


Bitcoin itself is a huge "Economic Change Event" in the wider context of the existing monetary systems (I think this is where Jeff probably got the idea from) ... fees coming online for bitcoin TX are a storm in a teacup by comparison.
legendary
Activity: 994
Merit: 1035
Since no one has seen a fee event, it might not be the "chaos" predicted by Jeff, so people must see it with their own eyes to be convinced that it is a problem that really needs to be solved. What if it is not a problem at all? Banks are still closed during weekends and holidays; is that a problem for our financial system?
There is already some evidence that a fee market has existed:

https://rusty.ozlabs.org/?p=564

Even wallets like Mycelium have four fee settings at the point of payment (low priority, economy, normal, priority) to address the fee market that has already occurred and is occurring.


Even if a fee event negatively affects the majority of users' experience, the path to future scaling should still follow Satoshi's vision as much as possible. Anyway, this is his invention; no one except him has the right to change it into something totally different. Anyone not satisfied with his design can just create their own alt.

Satoshi did build a lot of extensibility and op codes into the original design so bitcoin could grow, evolve, and use layers like the Lightning Network. While I do respect Satoshi, we shouldn't worship him and treat everything he has done as sacrosanct, as he made many mistakes. What is more important is respecting the investment contract we have all agreed to over the years: the core fundamentals that make bitcoin unique. Satoshi can always sign a message with his PGP key and jump in to comment if he has serious concerns as well.
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
Is there any way to stop or block segregated witness? From what I understand, it hits testnet in two days... Sad
No one answered your question, so I will.  The answer is yes, all you have to do is find unfixable problems with it and exploit them on testnet.  Doing the same with fixable problems will delay it at the very least while also ensuring it is more secure if it is ultimately deployed live.  Several potential attack vectors have been discussed in this thread, if any of them truly exist, you can "take advantage" of them on testnet and protect bitcoin at the same time.

Great answer. IF there are no killer problems in SegWit, I can be swayed to support it (my current position, as a conservative measure, is to try to uncover faults). If indeed we can throw all possible attacks at it on the testnet and it comes through unscathed, then what would be the downside to adopting it on the mainnet? (he asked rhetorically)
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
   Other changes required: Even a single-line change such as increasing the maximum block size has effects on other parts of the code, some of which are undesirable. For example, right now it’s possible to construct a transaction that takes up almost 1MB of space and which takes 30 seconds or more to validate on a modern computer (blocks containing such transactions have been mined). In 2MB blocks, a 2MB transaction can be constructed that may take over 10 minutes to validate which opens up dangerous denial-of-service attack vectors. Other lines of code would need to be changed to prevent these problems.

The average non-developer likely assumes that 2MB blocks are a safe and conservative change to make, but a targeted DDoS attack exploiting that 10-minute validation delay (up from 30 seconds for 1MB) would be disastrous.

I must admit that I am guilty of such an assumption. Validation time being linear with block size seems rational on the surface, before looking into the matter.

Are you asserting that the worst case for a 1MB block size today is less than 30 seconds on the same hardware that would have a worst case of 10 minutes if the only variable changed is a blocksize doubled to 2MB?

What are the characteristics of such an aberrant block?
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
I understand it and don't hate the core developers. Seg wit is a thoughtful solution, but not an elegant one. IMHO, it's just more spackle over the cracks and a further delay to increasing block size.  

Good observation

In change management, the first question is not whether a change is good or bad, but why we must make this change at all. It is always the motivation behind the change that is worth examining.

It seems SegWit was invented to temporarily circumvent the hard-coded 1MB block size limitation, so that traffic can still grow without triggering a fee event.

So the next question is: why are we so afraid of a fee event?

Jeff gave some answers here:
Quote
*Key observation: A Bitcoin Fee Event (see def. at top) is an Economic Change Event.*

An Economic Change Event is a period of market chaos, where large changes to prices and sets of economic actors occurs over a short time period.

A Fee Event is a notable Economic Change Event, where a realistic projection forsees higher fee/KB on average, pricing some economic actors (bitcoin projects and businesses) out of the system.

I don't think this so-called "chaos" is convincing enough, so the next question: who are these bitcoin projects and businesses, and is bitcoin's goal to benefit average people or to serve these projects/businesses?

Although institutions have large capital and influence in the industry, I don't think bitcoin's purpose is to become another payment network for banks (banks being the highest form of business; a business large enough will start doing banking).

In fact, businesses can always pass the fee cost on to customers, and those customers are not fee sensitive (statistics show that the majority of users come to bitcoin for long-term value storage and high-value international remittance, neither of which is very sensitive to fees or transaction frequency), so a higher fee will not affect businesses either. And large businesses can establish clearing channels to dramatically reduce fee costs; this is common practice in the financial world, and they don't need to change bitcoin's architecture to do that.

So I think the motivation behind changing bitcoin's architecture is still not convincing enough. Since no one has seen a fee event, it might not be the "chaos" predicted by Jeff, so people must see it with their own eyes to be convinced that it is a problem that really needs to be solved. What if it is not a problem at all? Banks are still closed during weekends and holidays; is that a problem for our financial system?

Even if a fee event negatively affects the majority of users' experience, the path to future scaling should still follow Satoshi's vision as much as possible. Anyway, this is his invention; no one except him has the right to change it into something totally different.
legendary
Activity: 994
Merit: 1035
Is there any way to stop or block segregated witness? From what I understand, it hits testnet in two days... Sad
No one answered your question, so I will.  The answer is yes, all you have to do is find unfixable problems with it and exploit them on testnet.  Doing the same with fixable problems will delay it at the very least while also ensuring it is more secure if it is ultimately deployed live.  Several potential attack vectors have been discussed in this thread, if any of them truly exist, you can "take advantage" of them on testnet and protect bitcoin at the same time.

Yes, that is another solution: potentially either stop SepSig or improve it as solutions are discovered. This should naturally be the first course of action, as all other implementations can learn from and/or use SepSig as well, because it is a valuable solution to many problems. I highly recommend everyone test and attack SepSig on the testnet for the betterment of our ecosystem. Please post any results in this thread and on the mailing lists.

The second solution I already mentioned can be carried out simultaneously if you wish -
Is there any way to stop or block segregated witness? From what I understand, it hits testnet in two days... Sad
This is rather simple to answer. Gavin is in support of SepSig (although he prefers a hardfork), and Hearn is no longer interested in XT or working with Bitcoin directly. So your first task is to amass a separate group of developers to maintain a GitHub fork, and then convince enough nodes and miners to adopt your version, whether it's Bitcoin XT, Bitcoin Unlimited, or something else. I encourage you to do this, as I think a diversity of choices and implementations is a good thing for bitcoin. I also encourage any supporters of alternate implementations to be proud of their work and not play the victim if their implementation doesn't get adopted en masse.

This discussion has sometimes gotten heated and bitter, but without evidence to the contrary, I will assume good faith and welcome other ideas and competing development teams that we can all learn from and share with.
hero member
Activity: 807
Merit: 500
Is there any way to stop or block segregated witness? From what I understand, it hits testnet in two days... Sad
No one answered your question, so I will.  The answer is yes, all you have to do is find unfixable problems with it and exploit them on testnet.  Doing the same with fixable problems will delay it at the very least while also ensuring it is more secure if it is ultimately deployed live.  Several potential attack vectors have been discussed in this thread, if any of them truly exist, you can "take advantage" of them on testnet and protect bitcoin at the same time.
legendary
Activity: 994
Merit: 1035

I understand it and don't hate the core developers. Seg wit is a thoughtful solution, but not an elegant one. IMHO, it's just more spackle over the cracks and a further delay to increasing block size.  
So? Nobody ever said it was elegant.
Yes.

Bitcoin has never been simple or elegant. Much of the work done by developers has been to clean up the messy tarball of code Satoshi created. Those wanting a simple increase in blocksize have good intentions but may be missing some finer nuances that make such an increase premature. It would be great to support other implementations now, however, because no one has a good idea of what is going to happen if and when a "Fee Market Event"* occurs. It would be great if we had working and tested backup solutions.

* There is good evidence that suggests we already have a fee market for tx's
legendary
Activity: 2674
Merit: 2965
Terminated.
Meh, people need to stop fucking around with The Protocol.

People are free to fork off and insert whatever 'solution' in their shitty altcoin.

There is no scaling problem for bitcoin that can be addressed by retardedly bending the rules.

Fuck the masses, fuck the industry, fuck the devs.

Bitcoin does not need them.
I once thought better of you; now you are acting just like that Veritas and Zara something guy. Segwit is not bad; it's actually very far from that.

I understand it and don't hate the core developers. Seg wit is a thoughtful solution, but not an elegant one. IMHO, it's just more spackle over the cracks and a further delay to increasing block size. 
So? Nobody ever said it was elegant.
legendary
Activity: 994
Merit: 1035
Is there any way to stop or block segregated witness? From what I understand, it hits testnet in two days... Sad
The only people who want to stop it so far are those who either hate the Core developers and anything they propose, or who lack an understanding of SegWit.

I understand it and don't hate the core developers. Seg wit is a thoughtful solution, but not an elegant one. IMHO, it's just more spackle over the cracks and a further delay to increasing block size.  

The delay is intentional, as the core developers need more time to test and optimize the protocol in anticipation of increasing the blocksize, which all but Luke and Peter appear to be in favor of.

This fact stood out to me and stressed the importance of carefully preparing for and studying all attack vectors and changes that need to be made beforehand:

   Other changes required: Even a single-line change such as increasing the maximum block size has effects on other parts of the code, some of which are undesirable. For example, right now it’s possible to construct a transaction that takes up almost 1MB of space and which takes 30 seconds or more to validate on a modern computer (blocks containing such transactions have been mined). In 2MB blocks, a 2MB transaction can be constructed that may take over 10 minutes to validate which opens up dangerous denial-of-service attack vectors. Other lines of code would need to be changed to prevent these problems.


The average non-developer likely assumes that 2MB blocks are a safe and conservative change to make, but a targeted DDoS attack exploiting that 10-minute validation delay (up from 30 seconds for 1MB) would be disastrous.


Is there any way to stop or block segregated witness? From what I understand, it hits testnet in two days... Sad

This is rather simple to answer. Gavin is in support of SepSig (although he prefers a hardfork), and Hearn is no longer interested in XT or working with Bitcoin directly. So your first task is to amass a separate group of developers to maintain a GitHub fork, and then convince enough nodes and miners to adopt your version, whether it's Bitcoin XT, Bitcoin Unlimited, or something else. I encourage you to do this, as I think a diversity of choices and implementations is a good thing for bitcoin. I also encourage any supporters of alternate implementations to be proud of their work and not play the victim if their implementation doesn't get adopted en masse.

legendary
Activity: 1260
Merit: 1002
Is there any way to stop or block segregated witness? From what I understand, it hits testnet in two days... Sad
The only people who want to stop it so far are those who either hate the Core developers and anything they propose, or who lack an understanding of SegWit.

I understand it and don't hate the core developers. Seg wit is a thoughtful solution, but not an elegant one. IMHO, it's just more spackle over the cracks and a further delay to increasing block size.  

Meh, people need to stop fucking around with The Protocol.

People are free to fork off and insert whatever 'solution' in their shitty altcoin.

There is no scaling problem for bitcoin that can be addressed by retardedly bending the rules.

Fuck the masses, fuck the industry, fuck the devs.

Bitcoin does not need them.
legendary
Activity: 1512
Merit: 1057
SpacePirate.io
Is there any way to stop or block segregated witness? From what I understand, it hits testnet in two days... Sad
The only people who want to stop it so far are those who either hate the Core developers and anything they propose, or who lack an understanding of SegWit.

I understand it and don't hate the core developers. Seg wit is a thoughtful solution, but not an elegant one. IMHO, it's just more spackle over the cracks and a further delay to increasing block size. 