Topic: Satoshi Nakamoto: "Bitcoin can scale larger than the Visa Network" - page 6. (Read 18415 times)

legendary
Activity: 2674
Merit: 3000
Terminated.
Fixing TX Malleability is beneficial to everyone. These *other benefits* include giving the ability to introduce consensus changes without hard forking. This is because we are told that a contentious hard fork is a terrible thing. How does anyone know this for sure!?
So being able to run multiple soft forks at once is a bad thing for you? The ability to introduce consensus changes without a HF? Source please.

Why does segregated witness change the tx fee calculation?
I don't really have an answer to this question. This might do:
My guess: To incentivize users to upgrade to segwit.
That is the carrot, and the rising fees of regular txs, the stick.

The same sigop and hash limits could, in theory, be used at any block size limit.
Replacing one limit with another is anything but a nice way of solving problems.
sr. member
Activity: 423
Merit: 250

Hmm, the link took me to a whole lot of Asian-looking characters; am I meant to use a translator?  As such, I couldn't find the quoted material.

Question:  Can the BIP109 magic be applied if we have the 1MB block size limit?  If not, why not?

Sorry, try this:
https://www.reddit.com/r/btc/comments/47f0b0/f2pool_testing_classic/d0deh29

It can, but it needs to be in a hard fork, so 2MB is useful anyway.



Yes, it has been available for a few days already. Note the signature-hashing limit is only reduced to 1.3 GB after the 2 MB hard fork is activated and the grace period is over; blocks exceeding it will become invalid the same way blocks over 1MB are invalid now.

Notable changes from Bitcoin Core version 0.12.0:


Quote
Bitcoin Classic 0.12.0 is based on Bitcoin Core version 0.12.0, and is compatible with its blockchain files and wallet.
For a full list of changes in 0.12.0, visit Core’s website here.
Additionally, this release includes all changes and additions made in Bitcoin Classic 0.11.2, most notably the increase of the block size limit from one megabyte to two megabytes.

    Opt-in RBF is set to disabled by default. In the next release, opt-in RBF will be completely removed.
    The RPC command "getblockchaininfo" now displays BIP109's (2MB) status.
    The chainstate obfuscation feature from Bitcoin Core is supported, but not enabled
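
For readers trying to picture what the BIP109 rules above actually enforce, here is a minimal sketch of the per-block accounting. The constants and the per-MB sigop scaling are my reading of BIP109, not Bitcoin Classic's actual code:

Code:
MAX_BLOCK_SIZE = 2_000_000            # bytes, once the BIP109 hard fork is active
MAX_SIGOPS_PER_MB = 20_000            # assumed: sigops scale per full MB of block size
MAX_BLOCK_SIGHASH = 1_300_000_000     # the "1.3 GB" limit discussed in this thread

def block_within_bip109_limits(block_size, total_sigops, total_bytes_hashed):
    """Return True if a block stays within the assumed BIP109 limits."""
    if block_size > MAX_BLOCK_SIZE:
        return False
    sigop_limit = MAX_SIGOPS_PER_MB * max(1, block_size // 1_000_000)
    if total_sigops > sigop_limit:
        return False
    # This is the limit that caps worst-case validation time at ~10 seconds.
    return total_bytes_hashed <= MAX_BLOCK_SIGHASH

# A 2 MB block that would hash 19.1 GB (the attack case) is rejected outright.
print(block_within_bip109_limits(2_000_000, 30_000, 19_100_000_000))  # False
print(block_within_bip109_limits(2_000_000, 30_000,    900_000_000))  # True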
hero member
Activity: 709
Merit: 503
The full math is here - David you would probably be interested in this if you haven't already seen it.

http://www.bitcoinunlimited.info/resources/1txn.pdf

The paper also describes how the sigops attack is mitigated through miners simply mining 1-tx blocks whilst validating, then pushing those out to other miners whilst they are still validating the 'poison' block. Rational miners will validate the smaller block, and they will also be able to mine another block on top of it, orphaning the poison block.

The attacker would get one shot, and would quickly be shut out. If you have enough hash rate to be mining blocks yourself, it's really much more profitable to behave!
Yummy; thanks.
hero member
Activity: 709
Merit: 503
Oh, I see, per https://github.com/bitcoin/bips/blob/master/bip-0109.mediawiki, it is just artificial.  The same sigop and hash limits could, in theory, be used at any block size limit.
legendary
Activity: 996
Merit: 1013

Why does segregated witness change the tx fee calculation?

My guess: To incentivize users to upgrade to segwit.
That is the carrot, and the rising fees of regular txs, the stick.
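
As background for this question and answer: segwit prices transactions by BIP141 "weight" rather than raw size (weight = 3 × base size + total size, and fees are quoted per virtual byte = weight / 4). A minimal sketch, with made-up transaction sizes purely for illustration:

Code:
import math

def vsize(base_size, total_size):
    """Virtual size per BIP141: weight = 3*base + total, vsize = weight/4 (rounded up)."""
    weight = 3 * base_size + total_size
    return math.ceil(weight / 4)

# Illustrative (made-up) sizes: the same 300-byte transaction, with and without
# 100 of those bytes moved into the segregated witness, at 50 sat/vbyte.
legacy_fee = vsize(300, 300) * 50   # all 300 bytes priced at full weight
segwit_fee = vsize(200, 300) * 50   # the 100 witness bytes count at 1/4 weight
print(legacy_fee, segwit_fee)       # 15000 vs 11250 satoshis

Because witness bytes carry a quarter of the weight of base bytes, moving signatures into the witness lowers the fee for the same raw size - the "carrot" mentioned above - while the pricing itself is just this one formula.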
legendary
Activity: 2576
Merit: 1087
Set an arbitrary limit which is way above what we need right now, but closes the attack vector.
agreed.

and last I heard that's exactly how the attack remains mitigated in Classic...

As a BU supporter though, we don't need limits!

IMHO the financial incentives are strong enough that block size (in terms of both bandwidth to transmit and CPU to process) is self-limiting. Propagation time is a combination of the two things and, to (over)simplify, propagation time vs orphan risk is enough to make sure miners don't do stupid things, unless they want to lose money.

The full math is here - David you would probably be interested in this if you haven't already seen it.

http://www.bitcoinunlimited.info/resources/1txn.pdf

The paper also describes how the sigops attack is mitigated through miners simply mining 1-tx blocks whilst validating, then pushing those out to other miners whilst they are still validating the 'poison' block. Rational miners will validate the smaller block, and they will also be able to mine another block on top of it, orphaning the poison block.

The attacker would get one shot, and would quickly be shut out. If you have enough hash rate to be mining blocks yourself, it's really much more profitable to behave!
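
To put rough numbers on the propagation-time-vs-orphan-risk point above, here is a toy model of my own (assuming block discovery is a Poisson process with a 600-second mean, and borrowing the 2 min 30 s worst-case validation figure quoted elsewhere in the thread). It is a sketch of the incentive argument, not anything taken from the paper itself:

Code:
import math

BLOCK_INTERVAL = 600.0   # average seconds between blocks

def orphan_probability(validation_time, block_interval=BLOCK_INTERVAL):
    """Toy model: chance that defenders mining cheap 1-tx blocks on the old tip
    find a block before the rest of the network finishes validating yours."""
    return 1.0 - math.exp(-validation_time / block_interval)

print(f"poison block (150 s to validate): ~{orphan_probability(150.0):.0%} orphan risk")
print(f"normal block   (2 s to validate): ~{orphan_probability(2.0):.1%} orphan risk")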
hero member
Activity: 709
Merit: 503
This is what BIP109 fixes and why the 2 MB hard fork is useful to activate as soon as possible. For more info on why reducing signature hashing to 1.3 GB in the BIP109 2 MB hard fork used by Bitcoin Classic is necessary:

http://8btc.com/forum.php?mod=viewthread&tid=29511&page=1#pid374998

Quote
The worst case block validation costs that I know of for a 2.2 GHz CPU for the status quo, SegWit SF, and the Classic 2 MB HF (BIP109) are as follows (estimated):

1 MB (status quo):  2 minutes 30 seconds (19.1 GB hashed)
1 MB + SegWit:      2 minutes 30 seconds (19.1 GB hashed)
2 MB Classic HF:              10 seconds (1.3 GB hashed)

SegWit makes it possible to create transactions that don't hash a lot of data, but it does not make it impossible to create transactions that do hash a lot of data.
Hmm, the link took me to a whole lot of Asian-looking characters; am I meant to use a translator?  As such, I couldn't find the quoted material.

Question:  Can the BIP109 magic be applied if we have the 1MB block size limit?  If not, why not?
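
The reason those worst-case numbers are so large is the quadratic scaling of legacy signature hashing: each of a transaction's signature checks hashes data on the order of the whole transaction, so total bytes hashed grow roughly with the square of the transaction size. A crude back-of-envelope model (my own approximation; the 19.1 GB figure above comes from a concrete worst-case construction, so the exact numbers differ):

Code:
# Crude model: each signature check in a legacy (pre-SegWit) transaction hashes
# on the order of the whole transaction, so doubling the transaction size roughly
# quadruples the hashing work.  (My own approximation, not the exact worst case.)
BYTES_PER_INPUT = 41   # rough serialized size of one input with an empty scriptSig

def approx_bytes_hashed(tx_size):
    n_inputs = tx_size // BYTES_PER_INPUT
    return n_inputs * tx_size          # each check hashes ~ the whole transaction

for size in (250_000, 500_000, 1_000_000):
    print(f"{size / 1e6:.2f} MB tx -> ~{approx_bytes_hashed(size) / 1e9:.1f} GB hashed")
# 0.25 MB -> ~1.5 GB, 0.50 MB -> ~6.1 GB, 1.00 MB -> ~24.4 GB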
legendary
Activity: 2576
Merit: 1087
You and I think very much alike.  Lauda, can you point us at a really big but totally legit/non-abusive transaction?
I don't think that there are many transactions that are so large in nature (both 'abusive' and non). This is the one that I'm aware of. However, you'd also have to define what you mean by "big". Do you mean something quite unusually big (e.g. 100kB) or something that fills up the entire block? I'd have to do a lot more analysis to try and find one (depending on the type).

Segwit isn't a solution designed to fix the block size limit. It's a solution to another problem that right now is undefined, and it is being sold as a solution to a problem that is being actively curated by those who refuse to remove a prior temporary hack.
TX malleability (e.g.) is 'undefined'? Segwit provides additional transaction capacity while carrying other benefits. How exactly is this bad?

What problem is it that requires signatures to be segregated into another data structure and not counted towards the fees? Nobody can give a straight answer to that very simple question. Why is witness data priced differently?
The question would have to be correct for one to be able to answer it. In this case, I have no idea what you are trying to ask.

Fixing TX Malleability is beneficial to everyone.

These *other benefits* include giving the ability to introduce consensus changes without hard forking. This is because we are told that a contentious hard fork is a terrible thing. How does anyone know this for sure!?

A hard fork is good. (Note the absence of the word contentious.) A hard fork establishes Nakamoto consensus, and is the only consensus vital to the ongoing successful operation of the bitcoin network. The incentives that drive this consensus mechanism are sound. The fear from those that do not see this is overwhelming. To subvert this is to destroy fundamental parts of bitcoin's architecture.

I thought you would understand what I meant when I asked the question, sorry if I have used the wrong terminology or something. I can make it a broader question, then perhaps we can investigate the specifics.

Why does segregated witness change the tx fee calculation?
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
Naively increasing the block size isn't the be-all answer.  Sure, when the workload (mempool) is just a bunch of classic small transactions with few inputs then it's great for low fees.  But when a transaction comes along with a huge number of inputs (innocent or malevolent) it will clog up the works, forcing everyone to perform a long computation to verify it.  One of these monsters can ruin your day if the calculation takes significantly longer than one block interval.  Or does it?  So, we're behind for a little while but then we catch up.  Or are we saying there are monsters out there that could take hours or even days to verify?
we can't allow any transactions, whether innocent or malevolent, to clog up the network. there's no debating this.

Is there a tendency over time for transactions to become bushier?  When the exchange rate is much larger, the Bitcoin amounts in transactions will tend to be smaller.  Does this lead to fragmentation?
yes i believe this is the case, one day, the coins will be way too fragmented.

some kind of "defragmentation" will need to take place at one point.

i don't believe this is a problem for us to worry about... it's too far in the future. (i'm guessing, i do a lot of guesswork)
hero member
Activity: 709
Merit: 503
what are the implications of this "quadratic TX validation" you guys are talking about?

we can't have TX with a huge amount of inputs? or something?
Exactly.  If/when a transaction comes in with zillions of inputs then everyone verifying it will be subjected to a long computation.

zillions of inputs!  Grin this i can understand


This is what BIP109 fixes and why the 2 MB hard fork is useful to activate as soon as possible. For more info on why reducing signature hashing to 1.3 GB in the BIP109 2 MB hard fork used by Bitcoin Classic is necessary:


http://8btc.com/forum.php?mod=viewthread&tid=29511&page=1#pid374998

Quote
The worst case block validation costs that I know of for a 2.2 GHz CPU for the status quo, SegWit SF, and the Classic 2 MB HF (BIP109) are as follows (estimated):

1 MB (status quo):  2 minutes 30 seconds (19.1 GB hashed)
1 MB + SegWit:      2 minutes 30 seconds (19.1 GB hashed)
2 MB Classic HF:              10 seconds (1.3 GB hashed)

SegWit makes it possible to create transactions that don't hash a lot of data, but it does not make it impossible to create transactions that do hash a lot of data.

Whoa.  Hmm, is there a 0.12 version of Classic yet?
hero member
Activity: 709
Merit: 503
Anyone know where someone is tracking average transaction size (# of inputs) over time?
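
I don't know of a public chart for this, but it is easy to measure against your own node. A sketch, assuming a Bitcoin Core node with RPC enabled and a getblock call new enough to accept verbosity=2 (the URL and credentials below are placeholders):

Code:
import requests

RPC_URL = "http://127.0.0.1:8332"        # placeholder: your node's RPC endpoint
AUTH = ("rpcuser", "rpcpassword")        # placeholder credentials

def rpc(method, *params):
    r = requests.post(RPC_URL, auth=AUTH,
                      json={"jsonrpc": "1.0", "id": "stats",
                            "method": method, "params": list(params)})
    r.raise_for_status()
    return r.json()["result"]

def average_inputs(start_height, end_height):
    """Average number of inputs per transaction over a range of blocks."""
    txs = inputs = 0
    for height in range(start_height, end_height + 1):
        block = rpc("getblock", rpc("getblockhash", height), 2)
        for tx in block["tx"]:
            txs += 1
            inputs += len(tx["vin"])
    return inputs / txs if txs else 0.0

print(average_inputs(400_000, 400_010))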
sr. member
Activity: 423
Merit: 250
what are the implications of this "quadratic TX validation" you guys are talking about?

we can't have TX with a huge amount of inputs? or something?
Exactly.  If/when a transaction comes in with zillions of inputs then everyone verifying it will be subjected to a long computation.

zillions of inputs!  Grin this i can understand


This is what BIP109 fixes and why the 2 MB hard fork is useful to activate as soon as possible. For more info on why reducing signature hashing to 1.3 GB in the BIP109 2 MB hard fork used by Bitcoin Classic is necessary and why SegWit does not help with this:


https://www.reddit.com/r/btc/comments/47f0b0/f2pool_testing_classic/d0deh29

Quote
The worst case block validation costs that I know of for a 2.2 GHz CPU for the status quo, SegWit SF, and the Classic 2 MB HF (BIP109) are as follows (estimated):

1 MB (status quo):  2 minutes 30 seconds (19.1 GB hashed)
1 MB + SegWit:      2 minutes 30 seconds (19.1 GB hashed)
2 MB Classic HF:              10 seconds (1.3 GB hashed)

SegWit makes it possible to create transactions that don't hash a lot of data, but it does not make it impossible to create transactions that do hash a lot of data.
hero member
Activity: 709
Merit: 503
Naively increasing the block size isn't the be-all answer.  Sure, when the workload (mempool) is just a bunch of classic small transactions with few inputs then it's great for low fees.  But when a transaction comes along with a huge number of inputs (innocent or malevolent) it will clog up the works, forcing everyone to perform a long computation to verify it.  One of these monsters can ruin your day if the calculation takes significantly longer than one block interval.  Or does it?  So, we're behind for a little while but then we catch up.  Or are we saying there are monsters out there that could take hours or even days to verify?

Is there a tendency over time for transactions to become bushier?  When the exchange rate is much larger, the Bitcoin amounts in transactions will tend to be smaller.  Does this lead to fragmentation?
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
Imagine this; you agree to sell something to someone and they will pay you in Bitcoins.  It turns out they have an address with a zillion little outputs in it.  So, they go to launch a send to you and find the fee is going to be huge (to cover the cost of all those inputs in a timely fashion).  The deal falls through; Bitcoin loses.

Now we can wonder how their address ended up so fragmented but what does it matter?  Maybe they were collecting a zillion little drips from faucets.  Whatever; they can't spend it like a large output.
the needs of the many outweigh the needs of the few, or the spammer.
the TX in question comes from a "stress test"; someone wanted to see how much SPAM bitcoin could swallow at once.
if the TX size was made to be >1MB what would've happened then?
finding a good limit shouldn't be very hard.


Set an arbitrary limit which is way above what we need right now, but closes the attack vector.
agreed.
hero member
Activity: 709
Merit: 503
Imagine this; you agree to sell something to someone and they will pay you in Bitcoins.  It turns out they have an address with a zillion little outputs in it.  So, they go to launch a send to you and find the fee is going to be huge (to cover the cost of all those inputs in a timely fashion).  The deal falls through; Bitcoin loses.

Now we can wonder how their address ended up so fragmented but what does it matter?  Maybe they were collecting a zillion little drips from faucets.  Whatever; they can't spend it like a large output.
hero member
Activity: 709
Merit: 503
Oh, I was wrong; get over it, I am.  Smiley  We can't just add together inputs.  Here's an example address https://blockchain.info/unspent?active=1Gx8ivf4xSCqNNtUXQxoyBFd4FeGZvwCHT&format=html with multiple outputs, 7 in this case.  To spend the entire lot would involve a transaction with 7 inputs, i.e. not one with just 1 input with the net amount.  Bummer.

So, then the question is what happened to 19MxhZPumMt9ntfszzCTPmWNQeh6j6QqP2 that it had so many tiny outputs in it?

Still the owner could have created multiple smaller transactions instead of one large one.
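
To make the "huge fee" scenario concrete, here is a rough estimate using approximate legacy P2PKH sizes (about 148 bytes per input, 34 per output, 10 bytes of overhead; ballpark figures, not exact): at a fixed fee rate, the fee grows almost linearly with the number of inputs being swept.

Code:
def estimated_fee(n_inputs, n_outputs=2, feerate_sat_per_byte=50):
    """Ballpark fee for a legacy P2PKH spend (approximate sizes, not exact)."""
    size = 10 + 148 * n_inputs + 34 * n_outputs
    return size * feerate_sat_per_byte

print(estimated_fee(1))      # ~11,300 satoshis for an ordinary spend
print(estimated_fee(7))      # the 7-output example above: ~55,700 satoshis
print(estimated_fee(1000))   # a "zillion little drips": ~7,403,900 satoshis

So an address holding thousands of faucet-sized outputs can easily cost more to sweep than the outputs are worth, which is exactly the deal-killer described above.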
legendary
Activity: 2674
Merit: 3000
Terminated.
You and I think very much alike.  Lauda, can you point us at a really big but totally legit/non-abusive transaction?
I don't think that there are many transactions that are so large in nature (both 'abusive' and non). This is the one that I'm aware of. However, you'd also have to define what you mean by "big". Do you mean something quite unusually big (e.g. 100kB) or something that fills up the entire block? I'd have to do a lot more analysis to try and find one (depending on the type).

Segwit isn't a solution designed to fix the block size limit. It's a solution to another problem that right now is undefined, and it is being sold as a solution to a problem that is being actively curated by those who refuse to remove a prior temporary hack.
TX malleability (e.g.) is 'undefined'? Segwit provides additional transaction capacity while carrying other benefits. How exactly is this bad?

What problem is it that requires signatures to be segregated into another data structure and not counted towards the fees? Nobody can give a straight answer to that very simple question. Why is witness data priced differently?
The question would have to be correct for one to be able to answer it. In this case, I have no idea what you are trying to ask.
legendary
Activity: 2576
Merit: 1087
Setting a block size limit of 1MB was, and continues to be, a hacky workaround.
It is certainly not a hacky workaround. It is a limit that was needed (it still is for the time being).

Theory drives development, but in practice sometimes hacky workarounds are needed.
If it can be avoided, not really.

The block size limit was a hacky workaround to the expensive-to-validate issue, an issue that is now mitigated by other, much better solutions, not least a well-incentivised distributed mining economy that is now smart enough to route around such an attack, making it prohibitively expensive to maintain.
So exactly what is the plan, replace one "hacky workaround" with another? Quite a lovely way forward. Segwit is being delivered and it will ease the validation problem and increase the transaction capacity. What is the problem exactly?

Problem: an attacker can create a block that is so expensive to validate that other miners would get stuck validating the block.
Hack: Set an arbitrary limit which is way above what we need right now, but closes the attack vector.
Solution: 1-transaction blocks.

Problem: the block size limit is causing transactions to get stuck in the mempool
Hack: raise the block size limit to 2MB
Solution: remove the block size limit

Segwit isn't a solution designed to fix the block size limit. It's a solution to another problem that right now is undefined, and it is being sold as a solution to a problem that is being actively curated by those who refuse to remove a prior temporary hack.

What problem is it that requires signatures to be segregated into another data structure and not counted towards the fees? Nobody can give a straight answer to that very simple question. Why is witness data priced differently?
hero member
Activity: 709
Merit: 503
that is the key

i say keep it simple stupid

if it has more than a zillion it's not getting included.
Smiley  You and I think very much alike.  Lauda, can you point us at a really big but totally legit/non-abusive transaction?