
Topic: Satoshi Nakamoto: "Bitcoin can scale larger than the Visa Network" - page 7. (Read 18415 times)

legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
Luada, again we owe you a debt of gratitude.  You do the work the rest of us are too lazy to do.  Now I am beginning to understand why the verification process scales quadratically; not that my understanding matters per se, but it is nice to know.
An example of such a transaction can be seen here (from last year).
Holy cow!
now does this constitute a legitimate TX?

can't we just ignore this type of TX?
Depends; sorry.  In a sense it is absolutely legit; it follows all of the rules.  But it easily could have been coded less abusively.

Let's first look at a much more classic transaction https://blockchain.info/tx/3a6a7d2456bfd6816ee1164e7c11307fa1c6855ee3116b6a1f8e6a14a98b04c4;

address (amount)
12kDK8snhBD6waJ2NaMB7QvSf4DzMcE9ad (0.21963496 BTC - Output) {call it what you want; this is an input}

1FmUPrnZTBymMT3ktgMx61hQ3JRyWD7NPY - (Unspent) 0.14193496 BTC
13dhrUuUe2MrZsufYGAZnwnyE7k97FRZ1v - (Unspent) 0.0776 BTC

Nice and easy; one input, two outputs; classic.

Now compare that to the big one;

19MxhZPumMt9ntfszzCTPmWNQeh6j6QqP2 (Brainwallet - dog) (0.00001 BTC - Output)
19MxhZPumMt9ntfszzCTPmWNQeh6j6QqP2 (Brainwallet - dog) (0.00001 BTC - Output)
...
19MxhZPumMt9ntfszzCTPmWNQeh6j6QqP2 (Brainwallet - dog) (0.00001 BTC - Output) {this is the 1598th occurrence}
...
{followed by zillions of more inputs}

This could have been coded as:

19MxhZPumMt9ntfszzCTPmWNQeh6j6QqP2 (Brainwallet - dog) (0.01598 BTC - Output)
...

and saved 1597 inputs in this batch alone.  OK, sure, we could ignore this one; but we'd need to figure out how to recognize such transactions without rejecting non-abusive ones.

that is the key

i say keep it simple stupid

if it has more than a zillion it's not getting included.
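For anyone wondering why a transaction like that is so painful to verify: under the legacy signature-hash scheme, each input hashes its own modified copy of the entire transaction, and the transaction itself grows with the number of inputs. A rough Python sketch (the byte sizes below are made-up averages for illustration, not exact serialization sizes):

```python
# Rough model of legacy (pre-SegWit) signature hashing.
# Assumption: each input adds ~180 bytes to the serialized transaction
# (a hypothetical average, not an exact size), and verifying each input
# hashes its own modified copy of the ENTIRE transaction.

BYTES_PER_INPUT = 180   # hypothetical average input size, bytes
BASE_TX_BYTES = 80      # hypothetical overhead + outputs, bytes

def bytes_hashed(n_inputs: int) -> int:
    """Total bytes run through the hash function to verify n inputs."""
    tx_size = BASE_TX_BYTES + n_inputs * BYTES_PER_INPUT
    return n_inputs * tx_size   # n copies of an O(n)-size tx -> O(n^2)

one_big = bytes_hashed(1598)   # the abusive batch of inputs above
one_in = bytes_hashed(1)       # the consolidated single-input version
print(one_big // one_in)       # well over a million times the hashing work
```

So consolidating those 1598 duplicate inputs into one doesn't just save space; it cuts the hashing work by far more than 1598x, because the cost grows with the square of the input count.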
hero member
Activity: 709
Merit: 503
Luada, again we owe you a debt of gratitude.  You do the work the rest of us are too lazy to do.  Now I am beginning to understand why the verification process scales quadratically; not that my understanding matters per se, but it is nice to know.
An example of such a transaction can be seen here (from last year).
Holy cow!
now does this constitute a legitimate TX?

can't we just ignore this type of TX?
Depends; sorry.  In a sense it is absolutely legit; it follows all of the rules.  But it easily could have been coded less abusively.

Let's first look at a much more classic transaction https://blockchain.info/tx/3a6a7d2456bfd6816ee1164e7c11307fa1c6855ee3116b6a1f8e6a14a98b04c4;

address (amount)
12kDK8snhBD6waJ2NaMB7QvSf4DzMcE9ad (0.21963496 BTC - Output) {call it what you want; this is an input}

1FmUPrnZTBymMT3ktgMx61hQ3JRyWD7NPY - (Unspent) 0.14193496 BTC
13dhrUuUe2MrZsufYGAZnwnyE7k97FRZ1v - (Unspent) 0.0776 BTC

Nice and easy; one input, two outputs; classic.

Now compare that to the big one;

19MxhZPumMt9ntfszzCTPmWNQeh6j6QqP2 (Brainwallet - dog) (0.00001 BTC - Output)
19MxhZPumMt9ntfszzCTPmWNQeh6j6QqP2 (Brainwallet - dog) (0.00001 BTC - Output)
...
19MxhZPumMt9ntfszzCTPmWNQeh6j6QqP2 (Brainwallet - dog) (0.00001 BTC - Output) {this is the 1598th occurrence}
...
{followed by zillions of more inputs}

This could have been coded as:

19MxhZPumMt9ntfszzCTPmWNQeh6j6QqP2 (Brainwallet - dog) (0.01598 BTC - Output)
...

and saved 1597 inputs in this batch alone.  OK, sure, we could ignore this one; but we'd need to figure out how to recognize such transactions without rejecting non-abusive ones.
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
Luada, again we owe you a debt of gratitude.  You do the work the rest of us are too lazy to do.  Now I am beginning to understand why the verification process scales quadratically; not that my understanding matters per se, but it is nice to know.
An example of such a transaction can be seen here (from last year).
Holy cow!
now does this constitute a legitimate TX?

can't we just ignore this type of TX?
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
what are the implications of this "quadratic TX validation" you guys are talking about?

we can't have TX with a huge amount of inputs? or something?
Exactly.  If/when a transaction comes in with zillions of inputs then everyone verifying it will be subjected to a long computation.

zillions of inputs!  Grin this i can understand
hero member
Activity: 709
Merit: 503
Luada, again we owe you a debt of gratitude.  You do the work the rest of us are too lazy to do.  Now I am beginning to understand why the verification process scales quadratically; not that my understanding matters per se, but it is nice to know.
An example of such a transaction can be seen here (from last year).
Holy cow!
legendary
Activity: 2674
Merit: 3000
Terminated.
If/when a transaction comes in with zillions of inputs then everyone verifying it will be subjected to a long computation.
An example of such a transaction can be seen here (from last year).
hero member
Activity: 709
Merit: 503
what are the implications of this "quadratic TX validation" you guys are talking about?

we can't have TX with a huge amount of inputs? or something?
Exactly.  If/when a transaction comes in with zillions of inputs then everyone verifying it will be subjected to a long computation.
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
what are the implications of this "quadratic TX validation" you guys are talking about?

we can't have TX with a huge amount of inputs? or something?
hero member
Activity: 709
Merit: 503
and libsecp256k1 offers 5x validation speeds.
Okay, I've asked around on IRC and got this:
Quote
in order to sign or verify the tx, each input has to construct a special version of the tx and hash it. So if there are n inputs, there are n×n hashes to be done; hence quadratic scaling.
The TL;DR I believe is: ECDSA operations are the most computationally expensive part of verifying normal, small transactions, and they scale linearly with the size (number of inputs). Whereas if a transaction in current Bitcoin has tons of inputs, the bottleneck moves over to the hashing/preparing of the data to be signed, because that time depends on the *square* of the number of inputs.
So usually it's ultra small, but it blows up for large n.
Why doesn't libsecp256k1 have an effect on this?
Quote
because libsecp256k1 is an ECC library so it's only the "ecdsa" part in the above.
Hopefully this helps, albeit I doubt that many are going to understand it. It certainly isn't easy.
Luada, again we owe you a debt of gratitude.  You do the work the rest of us are too lazy to do.  Now I am beginning to understand why the verification process scales quadratically; not that my understanding matters per se, but it is nice to know.
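The IRC summary above can be captured in a toy cost model (Python; the unit costs below are invented for illustration, not measurements): total verification time is a linear ECDSA term plus a quadratic hashing term, so a speedup that touches only the ECDSA part, like libsecp256k1's, fades as the input count grows.

```python
def verify_time(n_inputs, t_ecdsa=1.0, t_hash=0.01):
    """Toy model: n ECDSA verifications (linear) plus n*n hashing
    steps (quadratic). Unit costs are arbitrary placeholders."""
    return n_inputs * t_ecdsa + n_inputs ** 2 * t_hash

def verify_time_fast(n_inputs):
    """Same model with only the ECDSA term sped up ~5x (libsecp256k1)."""
    return verify_time(n_inputs, t_ecdsa=1.0 / 5)

for n in (2, 100, 5000):
    print(n, round(verify_time(n) / verify_time_fast(n), 2))
# The overall speedup starts near 5x for tiny transactions but decays
# toward 1x as the quadratic hashing term comes to dominate.
```

This is exactly why the library "is only the ecdsa part in the above": it shrinks the linear term and leaves the n² term untouched.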
hero member
Activity: 709
Merit: 503
ill yield to your 2309
Outstanding.  Thank you; I could not ask for more.
hero member
Activity: 709
Merit: 503
a fee of 5¢/transaction isn't so burdensome, is it?
5¢ makes things like mixing coins expensive
and suddenly any TX less than $5 is not as fun to do
also it's not so much the cost that bothers me, it's the fact that it makes newbies' lives harder.
we never used to have "My TX is taking forever to confirm" threads before, and i wish they would go away.
let's not forget that ~90% of TX on the network today are low value... i would assume the 5¢ fee is already prohibiting all kinds of legit TX that would otherwise take place on the blockchain.
Couldn't agree more.  As long as we are careful about the signature-verification scaling issue, about releasing multiple features around the same time, and about improving the wallet software to include a big enough fee to get transactions through in a timely fashion, I'm all for it.

My personal proposal would be to check on the progress of SegWit (give us the real story here).  If it is indeed really ready for primetime in April then, fine, roll with it.  If there's any chance of slipping then delay it until it is soup and in the meantime, put out a very simple block increase *with* a check to reject transactions with more than some limit of signatures/inputs.  Those needing to push through work with more inputs can split their work into multiple transactions.  Either way, I'll be happy.  Both ways should see the normal fees drop to earlier levels, reducing the pain/anxiety.
hero member
Activity: 709
Merit: 503
if 1000 has 33% increase=1333
2000 =2666
Yes, of course, you are correct; allowing for rounding, increasing 1000 by 33% is 1000 + 1000 * 33% = 1333 and increasing 2000 by 33% is 2000 + 2000 * 33% = 2666.

Hmm, I'll try to explain;

1200 MHz - 900 MHz = 300 MHz
300 MHz / 900 MHz ≈ 33% -- so we say we have increased the processor frequency by 33%.

Also,

900MHz / 1200MHz = 75% -- so we say instructions will take only 75% as long to run.

So, for a fixed amount of work, we can calculate how long it will take to run.  If a 2000 sigop transaction takes t amount of time on the 900MHz processor then we expect it to only take 0.75t on the faster one.  There are potential hazards in this assertion.  The software might not be strictly compute bound.  Also, not every instruction will necessarily get the same advantage from the speedup, e.g. branches, etc.

*But* this does not indicate how much faster more work will run unless the software scales linearly with frequency.  If the software does indeed scale linearly, then the time it takes to get a fixed amount of work done will decrease linearly and the amount of work that can get done in the same amount of time will increase linearly.

For some unexplained (to me yet) reason, signature verification does not scale linearly.  Instead it scales as the square;

1 → 1
2 → 4
3 → 9
4 → 16
5 → 25
...
n → n²
...
So, yes, one signature verification could indeed get done in 0.75t but 2000 will take 2000²*(0.75t).

Gosh, I am terribly sorry if I haven't explained this well.
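The arithmetic above, as a quick sanity check in Python:

```python
# Sanity-check the frequency arithmetic from the post.
old_mhz, new_mhz = 900, 1200

freq_increase = (new_mhz - old_mhz) / old_mhz   # 300/900 = 0.333... -> "33% faster"
runtime_ratio = old_mhz / new_mhz               # 0.75 -> a fixed job takes 0.75t

assert abs(freq_increase - 1 / 3) < 1e-12
assert runtime_ratio == 0.75

# Quadratic sigop verification: n sigops cost ~ n^2 * t, so doubling the
# sigops quadruples the work no matter how fast the processor gets.
def sigop_time(n, t=1.0):
    return n * n * t * runtime_ratio

print(sigop_time(2000) / sigop_time(1000))  # -> 4.0
```

The last line is the whole point: the 0.75 speedup factor cancels out, and only the n² growth remains.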
legendary
Activity: 2674
Merit: 3000
Terminated.
LN whitepaper is based on assumptions?
No. This was a part explaining how many users could theoretically use Bitcoin at a 1 MB block size limit under certain circumstances. (I quoted it because I mistakenly read it and spread false information; I've corrected it everywhere already, I hope.) Please read the white-paper before making assumptions.


and libsecp256k1 offers 5x validation speeds.
Okay, I've asked around on IRC and got this:
Quote
in order to sign or verify the tx, each input has to construct a special version of the tx and hash it. So if there are n inputs, there are n×n hashes to be done; hence quadratic scaling.
The TL;DR I believe is: ECDSA operations are the most computationally expensive part of verifying normal, small transactions, and they scale linearly with the size (number of inputs). Whereas if a transaction in current Bitcoin has tons of inputs, the bottleneck moves over to the hashing/preparing of the data to be signed, because that time depends on the *square* of the number of inputs.
So usually it's ultra small, but it blows up for large n.
Why doesn't libsecp256k1 have an effect on this?
Quote
because libsecp256k1 is an ECC library so it's only the "ecdsa" part in the above.
Hopefully this helps, albeit I doubt that many are going to understand it. It certainly isn't easy.


legendary
Activity: 4424
Merit: 4794
your assumption that quadratic won't be solved in april.
True enough, I did assume the same software running on both rasPi2 and 3 (which you did too to get to your 2660 number, so I guess that was fair, no?).

Do we want to then make an effort to recalculate the rest using my 2309 number and ignore the question about linear scaling with GHz?

we are currently throwing random scenario numbers around.
my assumption was rasp2 using old software where quadratics was an issue, compared to april rasp3 where it wasn't an issue
making not only a ghz performance increase (ill yield to your 2309) but then a multiple gain due to the code efficiency increase

what would be best is to use real bitcoin data as the ultimate goal rather than random scenario speculation
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
a fee of 5¢/transaction isn't so burdensome, is it?
5¢ makes things like mixing coins expensive
and suddenly any TX less than $5 is not as fun to do
also it's not so much the cost that bothers me, it's the fact that it makes newbies' lives harder.
we never used to have "My TX is taking forever to confirm" threads before, and i wish they would go away.
let's not forget that ~90% of TX on the network today are low value... i would assume the 5¢ fee is already prohibiting all kinds of legit TX that would otherwise take place on the blockchain.
sr. member
Activity: 448
Merit: 250
I don't understand why this wasn't implemented in the first place. Since Satoshi had already predicted this, why didn't he put the necessary code in? Sorry, I am no computer expert, so I hope someone could shed some light on this, thanks.
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
I assume hardware is always getting better because HELLO! and libsecp256k1 offers 5x validation speeds.
Ah, I shall research these but they both sound like software to me.  Hmm, unless "HELLO!" isn't software and you were just indicating that faster hardware is just so obvious.

Certainly faster hardware is coming at us; but a quadratic growth problem will *always* out scale even the greatest conceivable hardware.

A linear improvement in the software is always appreciated, but again it is *only* linear (even if it is a massive 5x) as compared to quadratic growth, i.e. n².  For example, suppose it takes t amount of time to process one sigop.  Then a transaction with n sigops will take approximately n²*t amount of time.  Now we unleash the mighty libsecp256k1 5x improvement.  So, we have n²*(t/5).  When n is small this is great news: 1²*t vs. 1²*(t/5), or 1t vs. t/5, gets us the full 5x advantage.  But 10²*t vs. 10²*(t/5), or 100t vs. 20t, is still taking 20t, and not 10t let alone 2t, to do those 10 sigops.  Moreover, 100 sigops works out to 10,000t vs. 2,000t; and who wants to compute 2,000t for just 100 sigops?  Honestly/sincerely I am utterly delighted at the 5x offered by libsecp256k1, but against quadratic growth it pales.

1)how insane would a TX have to be to make it so validation time is hindered?
2)can we simply ignore TX that are that crazy?
3) is it conceivable that a legit TX would be that crazy?

i have no clue my guess is

1) extremely insane
2) yes
3) no

but guessing is not cool, would be nice if some math could back up me up.
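To back the guesses with the numbers from the quoted post, here is its n²·t model in a few lines of Python (the per-sigop time t is an arbitrary unit, and the 5x is treated as a pure constant factor):

```python
def sigop_cost(n, t=1.0, speedup=1):
    """Per the quoted model: n sigops cost ~ n^2 * t; `speedup` models a
    constant-factor improvement such as libsecp256k1's ~5x."""
    return n ** 2 * t / speedup

# The worked examples from the post:
assert sigop_cost(1) == 1.0 and sigop_cost(1, speedup=5) == 0.2    # full 5x win
assert sigop_cost(10) == 100.0 and sigop_cost(10, speedup=5) == 20.0
assert sigop_cost(100, speedup=5) == 2000.0                        # still 2000t
# A constant factor never changes the growth rate:
assert sigop_cost(1000, speedup=5) / sigop_cost(100, speedup=5) == 100.0
```

So "how insane would a TX have to be" is a question of n: the cost curve is gentle for ordinary input counts and explodes for input counts in the thousands, which is why a simple input/sigop limit per transaction is a plausible stopgap.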
hero member
Activity: 709
Merit: 503
I think that it is entirely possible for the Bitcoin network to be scaled so that it can handle all the transactions that would be occurring during a normal day, but there has to be the implementation of 2MB blocks for right now.

I'm not sure why this has to be such an argument and why it can't be scaled. As long as Bitcoin remains as decentralized as it currently is, would that not be beneficial to the network?
Agreed; we just have to watch out for the quadratic-growth sigop verification issue.  Also, there is the issue of releasing multiple changes over a short period of time to be managed.  Also, a fee of 5¢/transaction isn't so burdensome, is it?  Finally, we *really* need *all* wallet software to automatically and by default provide a fee which will get the transaction into a block pretty quickly, and to set user expectations accordingly.
hero member
Activity: 709
Merit: 503
your assumption that quadratic won't be solved in april.
True enough, I did assume the same software running on both rasPi2 and 3 (which you did too to get to your 2660 number, so I guess that was fair, no?).

Do we want to then make an effort to recalculate the rest using my 2309 number and ignore the question about linear scaling with GHz?
legendary
Activity: 1218
Merit: 1007
I think that it is entirely possible for the Bitcoin network to be scaled so that it can handle all the transactions that would be occurring during a normal day, but there has to be the implementation of 2MB blocks for right now.

I'm not sure why this has to be such an argument and why it can't be scaled. As long as Bitcoin remains as decentralized as it currently is, would that not be beneficial to the network?