
Topic: Satoshi Nakamoto: "Bitcoin can scale larger than the Visa Network" - page 5. (Read 18415 times)

legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
Quote
By Moore's Law, we can expect hardware speed to be 10 times faster in 5 years and 100 times faster in 10. Even if Bitcoin grows at crazy adoption rates, I think computer speeds will stay ahead of the number of transactions.
Did it increase tenfold in 5 years? Not even close. Satoshi did not have adequate data here.

There is a lot of processing power going untapped right now, though it is typically found in GPUs:

Not in my GPUs.

I typically use Intel HD when my processor / board supports it.

When I build a Xeon box without an Intel HD GPU, then *if* I install a video card at all it is a GeForce 405 with 512 MB of RAM. Clearly not a powerhouse, as it gets all the power it needs from the PCIe bus, meaning it pulls at most about 75 W, the slot limit (that model is intended for OEMs but you can find it if you search; it also comes with 1 GB of RAM, but 512 MB suffices for me).

Even that low-power GPU is probably better than any CPU, no?
full member
Activity: 182
Merit: 107
Quote
By Moore's Law, we can expect hardware speed to be 10 times faster in 5 years and 100 times faster in 10. Even if Bitcoin grows at crazy adoption rates, I think computer speeds will stay ahead of the number of transactions.
Did it increase tenfold in 5 years? Not even close. Satoshi did not have adequate data here.

There is a lot of processing power going untapped right now, though it is typically found in GPUs:

Not in my GPUs.

I typically use Intel HD when my processor / board supports it.

When I build a Xeon box without an Intel HD GPU, then *if* I install a video card at all it is a GeForce 405 with 512 MB of RAM. Clearly not a powerhouse, as it gets all the power it needs from the PCIe bus, meaning it pulls at most about 75 W, the slot limit (that model is intended for OEMs but you can find it if you search; it also comes with 1 GB of RAM, but 512 MB suffices for me).
legendary
Activity: 2576
Merit: 1087
You're wrong; follow the thread URL. The hard sigop limit is the way to fight such attacks. All times are for worst-case blocks that can be used for attacks, on the same computer, to make the comparison fair.
I don't have time to follow that thread. So this is the worst-case scenario? If we assume that this is true, then SegWit seems pretty good: added capacity without an increase in the amount of hashed data and no additional limitations. Did I understand this correctly?

You didn't logic it correctly.

The block size limit was used to protect against "poison" blocks.

The block size limit is in the way of natural growth.

A different solution to "poison" blocks exists now, so the block size limit is redundant.

SegWit has nothing to do with any of this.
legendary
Activity: 2576
Merit: 1087
Thank you so much sgbett; that was a really great read. I must admit I didn't work through all of the math yet, but at first blush it appears OK until:
Quote
The Bitcoin network is naturally limited by block validation and construction times. This puts an upper limit on the network bandwidth of 60KB/sec to transmit the block data to one other peer.
Hmm, really?  There's no way ever to improve on block validation times?  Quantum computers?  Nothing?  That doesn't ring true.

I am totally with you. I read 60 KB/s more as a theoretical *minimum* rate we can achieve.
legendary
Activity: 1708
Merit: 1049
Quote
By Moore's Law, we can expect hardware speed to be 10 times faster in 5 years and 100 times faster in 10. Even if Bitcoin grows at crazy adoption rates, I think computer speeds will stay ahead of the number of transactions.
Did it increase tenfold in 5 years? Not even close. Satoshi did not have adequate data here.

There is a lot of processing power going untapped right now, though it is typically found in GPUs:

The fastest single-GPU Radeon card (not a 2x) of 2008 was the HD 4870, doing:

1.2 TFLOPS (single) / 0.24 TFLOPS (double) / 115 GB/s RAM

The fastest single-GPU Radeon card (not a 2x) of 2015 was the R9 Fury X, doing:

8.6 TFLOPS (single) / 0.53 TFLOPS (double) / 512 GB/s RAM

----
Data from: https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units
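As a rough sanity check of how those figures stack up against the "10x in 5 years" expectation, here is a small sketch; the numbers are just the ballpark figures quoted above, not precise benchmarks.

Code:
# Compare the quoted 2008 vs. 2015 Radeon figures against a "10x every 5 years" curve.
years = 2015 - 2008                    # 7 years between the HD 4870 and the Fury X
fp32_growth = 8.6 / 1.2                # single-precision TFLOPS ratio, ~7.2x
bandwidth_growth = 512 / 115           # memory bandwidth ratio, ~4.5x
expected = 10 ** (years / 5)           # "10x per 5 years" over the same span, ~25x

print(f"FP32 throughput growth over {years} years: {fp32_growth:.1f}x")
print(f"Memory bandwidth growth: {bandwidth_growth:.1f}x")
print(f"'10x every 5 years' would predict: {expected:.0f}x")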

---

And two images from NVIDIA (2-3 years old, but anyway):



sr. member
Activity: 423
Merit: 250
You're wrong; follow the thread URL. The hard sigop limit is the way to fight such attacks. All times are for worst-case blocks that can be used for attacks, on the same computer, to make the comparison fair.
I don't have time to follow that thread. So this is the worst-case scenario? If we assume that this is true, then SegWit seems pretty good: added capacity without an increase in the amount of hashed data and no additional limitations.

Yes, you're right, these are the worst-case blocks known for such attacks, and it can't be worse with SegWit, only better. But because blocks can still be created without any SegWit transactions, the attack threat remains the same with future SegWit as it is today.

SegWit is useful and Bitcoin Core should start cooperating with Bitcoin Classic to get both activated, SegWit and BIP109. Without this, both activations will be much more difficult.
legendary
Activity: 4424
Merit: 4794

1) More transaction capacity
2) Fixes TX malleability
3) New mechanism for adding OPcodes
4) More flexible security model (fraud proofs)
5) Potential bandwidth decrease for SPV nodes.


point 2 will be fixed, yet Blockstream introduced RBF as the new way to 'con' merchants; ultimately there is still a problem

point 3 actually reduces point 1. It's like having a 10-bedroom house where each room has a bunk bed, but then you let the neighbours' kids take the top bunk, such as the fat kid known as confidential payment codes (250 bytes of extra bloat per tx), while making your own kids get adopted by the neighbours (sidechains) and charging them 60 cents each time they want to come home for the day, hoping they eventually stay at the neighbours' because the family home is too crowded

point 5, along with no-witness mode and pruned mode, reduces the real full-node count, which makes point 4 more of a headache if there are fewer real, honest full nodes.

legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
Whatever; I've removed Core 0.12 and am now running Classic 0.12.  I am unsettled.
I thought better of you at times. I guess I was wrong.
There is no wrong choice.
He asked questions, thought about things, weighed the pros and cons, and made a choice: Classic.
legendary
Activity: 2674
Merit: 3000
Terminated.
You're wrong; follow the thread URL. The hard sigop limit is the way to fight such attacks. All times are for worst-case blocks that can be used for attacks, on the same computer, to make the comparison fair.
I don't have time to follow that thread. So this is the worst-case scenario? If we assume that this is true, then SegWit seems pretty good: added capacity without an increase in the amount of hashed data and no additional limitations. Did I understand this correctly?
sr. member
Activity: 423
Merit: 250
Quote
The worst case block validation costs that I know of for a 2.2 GHz CPU for the status quo, SegWit SF, and the Classic 2 MB HF (BIP109) are as follows (estimated):

1 MB (status quo):  2 minutes 30 seconds (19.1 GB hashed)
1 MB + SegWit:      2 minutes 30 seconds (19.1 GB hashed)
2 MB Classic HF:              10 seconds (1.3 GB hashed)
Two things:
1) These are apparently estimations, so this is inadequate data. Blocks differ in size and in the types of transactions in them.
2) The comparison makes little sense as they've added a hard limit in Classic.

You're wrong; follow the posts in the thread URL. The hard sigop limit is the way to fight such attacks. All times are for worst-case blocks that can be used for attacks, on the same computer, to make the comparison fair.
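For what it's worth, a quick back-of-the-envelope check shows how those quoted numbers hang together, assuming a single 2.2 GHz core hashes on the order of ~130 MB/s with SHA256 (an assumed throughput, not a measured benchmark):

Code:
# Rough consistency check of the quoted worst-case validation figures.
# Assumes ~130 MB/s of SHA256 throughput per core; this is an assumption,
# not a measurement of any particular CPU.
hash_rate = 130e6                                  # bytes hashed per second (assumed)
cases = {
    "1 MB status quo / 1 MB + SegWit": 19.1e9,     # bytes hashed, from the quote
    "2 MB Classic HF (sigop-limited)":  1.3e9,
}
for name, hashed in cases.items():
    print(f"{name}: ~{hashed / hash_rate:.0f} seconds")
# ~147 s vs. the quoted 2 min 30 s, and 10 s vs. the quoted 10 s.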
legendary
Activity: 2674
Merit: 3000
Terminated.
Quote
The worst case block validation costs that I know of for a 2.2 GHz CPU for the status quo, SegWit SF, and the Classic 2 MB HF (BIP109) are as follows (estimated):

1 MB (status quo):  2 minutes 30 seconds (19.1 GB hashed)
1 MB + SegWit:      2 minutes 30 seconds (19.1 GB hashed)
2 MB Classic HF:              10 seconds (1.3 GB hashed)
Two things:
1) These are apparently estimations, so this is inadequate data. Blocks differ in size and in the types of transactions in them.
2) The comparison makes little sense as they've added a hard limit in Classic.

When I play this out in my mind I see this:

1) eventually SegWit gets out the door and is adopted but if it doesn't reduce verification times then what was the point?
Segwit:
1) More transaction capacity
2) Fixes TX malleability
3) New mechanism for adding OPcodes
4) More flexible security model (fraud proofs)
5) Potential bandwidth decrease for SPV nodes.

Linear scaling of sighash operations

A major problem with simple approaches to increasing the Bitcoin blocksize is that for certain transactions, signature-hashing scales quadratically rather than linearly.
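To make the quadratic-vs-linear point concrete, here is a tiny illustrative sketch; the per-input byte counts are assumptions for illustration only, not exact serialization sizes:

Code:
# Why legacy signature hashing grows quadratically while segwit-style
# (BIP143) hashing grows linearly. Byte counts are rough assumptions.
def legacy_hashed_bytes(n_inputs, bytes_per_input=150):
    tx_size = n_inputs * bytes_per_input       # the whole tx grows with input count
    return n_inputs * tx_size                  # each input re-hashes roughly the whole tx

def segwit_hashed_bytes(n_inputs, digest_bytes=300):
    return n_inputs * digest_bytes             # each input hashes a bounded-size digest

for n in (100, 1000, 5000):
    print(n, "inputs:",
          f"legacy ~{legacy_hashed_bytes(n) / 1e6:.0f} MB hashed,",
          f"segwit-style ~{segwit_hashed_bytes(n) / 1e6:.2f} MB hashed")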
Whatever; I've removed Core 0.12 and am now running Classic 0.12.  I am unsettled.
I thought better of you at times. I guess I was wrong.
sr. member
Activity: 423
Merit: 250
Afaik Classic has no plans to adopt SegWit. Is this correct? Just how malleable are transactions, exactly?

Classic Roadmap proposal:
https://github.com/bitcoinclassic/documentation/blob/master/roadmap/roadmap2016.md

Phase 3 (Q3-Q4 2016)
Simplified version of Segregated Witness from Core, when it is available.
Incorporate segregated witness work from Core (assuming it is ready), but no special discount for segwit transactions, to keep fee calculation and economics simple.


When I play this out in my mind I see this:

1) eventually SegWit gets out the door and is adopted but if it doesn't reduce verification times then what was the point?
2) even if SegWit does reduce verification times it won't ultimately be enough and a hard limit on sigops will be required
3) the block size limit is adjusted up and up but validation times ultimately dominate the mining process
4) multiple block chain-based applications besides Bitcoin are required to handle the workload in a timely fashion


1) malleability fix + a trick to get a bit more transactions into a block by separating signatures into a special data structure for SegWit txs
2) SegWit does not help against these attacks
3) you can decrease SigOp limits further for future bigger block sizes, to be certain no attack is possible for any kind of block size increase
4) it is not, unless everyone needs to use a decentralized currency soon. Time + tech advancement allow on-chain scaling for Bitcoin unless we get worldwide demand within 10-20 years (it is not likely mankind replaces fiat so soon)
hero member
Activity: 709
Merit: 503
Whatever; I've removed Core 0.12 and am now running Classic 0.12.  I am unsettled.
hero member
Activity: 709
Merit: 503
The full math is here - David you would probably be interested in this if you haven't already seen it.

http://www.bitcoinunlimited.info/resources/1txn.pdf

The paper also describes how the sigops attack is mitigated by miners simply mining 1-tx blocks whilst validating, then pushing that out to other miners whilst they are still validating the 'poison' block. Rational miners will validate the smaller block, and they will also be able to mine another block on top of it, orphaning the poison block.

The attacker would get one shot, and would quickly be shut out. If you have enough hash rate to be mining blocks yourself, it's really much more profitable to behave!
Thank you so much sgbett; that was a really great read. I must admit I didn't work through all of the math yet, but at first blush it appears OK until:
Quote
The Bitcoin network is naturally limited by block validation and construction times. This puts an upper limit on the network bandwidth of 60KB/sec to transmit the block data to one other peer.
Hmm, really?  There's no way ever to improve on block validation times?  Quantum computers?  Nothing?  That doesn't ring true.
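For what it's worth, here is one way a figure of that order could arise; this is illustrative arithmetic under assumed numbers, not necessarily the paper's actual derivation:

Code:
# If validating and reconstructing a 1 MB block keeps a node busy for
# ~16 seconds (an assumed figure), then the rate at which verified block
# data can be handed on to a single peer is capped at roughly:
block_bytes = 1_000_000
busy_seconds = 16                       # assumed validation + construction time
print(block_bytes / busy_seconds / 1000, "KB/sec")   # ~62 KB/sec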
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner

Why does segregated witness change the tx fee calculation?

My guess: to incentivize users to upgrade to SegWit.
That is the carrot, and the rising fees of regular txs, the stick.

This is called a discount, e.g. for the same work (1 KB transactions) miners are supposed to get less in fees if it is a SegWit transaction within Bitcoin Core.

Fortunately miners are free to set their own fee policy, and hopefully there will be a full-node client available that requires the same fees for the same work (1 KB transactions), whether it is a normal or a SegWit transaction (if the soft-fork SegWit gets activated, which is uncertain).


Afaik Classic has no plans to adopt SegWit. Is this correct? Just how malleable are transactions, exactly?

it's part of their road map

when Core has it ready they will adopt it.

that's the plan anyway.
legendary
Activity: 1260
Merit: 1116

Why does segregated witness change the tx fee calculation?

My guess: to incentivize users to upgrade to SegWit.
That is the carrot, and the rising fees of regular txs, the stick.

This is called a discount, e.g. for the same work (1 KB transactions) miners are supposed to get less in fees if it is a SegWit transaction within Bitcoin Core.

Fortunately miners are free to set their own fee policy, and hopefully there will be a full-node client available that requires the same fees for the same work (1 KB transactions), whether it is a normal or a SegWit transaction (if the soft-fork SegWit gets activated, which is uncertain).


Afaik Classic has no plans to adopt SegWit. Is this correct? Just how malleable are transactions, exactly?
hero member
Activity: 709
Merit: 503
When I play this out in my mind I see this:

1) eventually SegWit gets out the door and is adopted but if it doesn't reduce verification times then what was the point?
2) even if SegWit does reduce verification times it won't ultimately be enough and a hard limit on sigops will be required
3) the block size limit is adjusted up and up but validation times ultimately dominate the mining process
4) multiple block chain-based applications besides Bitcoin are required to handle the workload in a timely fashion
sr. member
Activity: 423
Merit: 250

Why does segregated witness change the tx fee calculation?

My guess: to incentivize users to upgrade to SegWit.
That is the carrot, and the rising fees of regular txs, the stick.

This is called a discount, e.g. for the same work (1 KB transactions) miners are supposed to get less in fees if it is a SegWit transaction within Bitcoin Core.

Fortunately miners are free to set their own fee policy, and hopefully there will be a full-node client available that requires the same fees for the same work (1 KB transactions), whether it is a normal or a SegWit transaction (if the soft-fork SegWit gets activated, which is uncertain).
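For anyone who wants to see how that discount works in numbers, here is a minimal sketch of the BIP141 weight / virtual-size rule that produces it; the feerate and the base/witness split below are made-up example values:

Code:
import math

# BIP141: weight = base_size * 3 + total_size; fees are charged per "virtual byte"
# (weight / 4), so witness bytes cost a quarter of non-witness bytes.
def vsize(base_size, total_size):
    weight = base_size * 3 + total_size
    return math.ceil(weight / 4)

feerate = 50                                   # sat per vbyte, arbitrary example value

legacy = vsize(1000, 1000)                     # 1000-byte legacy tx, no witness data
segwit = vsize(600, 1000)                      # 1000-byte tx with 400 bytes of witness (assumed split)

print("legacy fee:", legacy * feerate, "sat")  # 50,000 sat
print("segwit fee:", segwit * feerate, "sat")  # 35,000 sat for the same size on the wire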
legendary
Activity: 996
Merit: 1013

Replacing one limit with another is anything but a nice way of solving problems.

Tend to agree... but there is a difference between a hard limit that wards off an attacker and a hard limit that restricts normal transacting.
legendary
Activity: 4424
Merit: 4794
Naively increasing the block size isn't the be-all answer. Sure, when the workload (mempool) is just a bunch of classic small transactions with few inputs, then it's great for low fees. But when a transaction comes along with a huge number of inputs (innocent or malevolent), it will clog up the works, forcing everyone to perform a long computation to verify it. One of these monsters can ruin your day if the calculation takes significantly longer than 1 block interval. Or does it? So, we're behind for a little while but then we catch up. Or are we saying there are monsters out there that could take hours or even days to verify?

Is there a tendency over time for transactions to become bushier? When the exchange rate is much larger, the Bitcoin amounts in transactions will tend to be smaller. Does this lead to fragmentation?

that's under the assumption that with a 2 MB buffer.. miners will allow themselves to jump to 1.995 MB of data instantly.

the real assumption, however, is that just like in 2013, miners knew they had suddenly become able to grow past the 500k bug and utilize the 1 MB buffer, but it took a couple of years for them to slowly grow,
and that was the decision of the miners.

we should not leave it to Blockstream to set a 1.1 MB limit every 2 years, knowing that miners will be at the max in maybe 4 months.
instead it should be a 2 MB buffer, and then let the miners have their own separate preferential rules to grow slowly and just ignore obvious spam transactions until they drop out of the mempool after 48 hours,
knowing that they can happily grow by 0.1 MB every 4+ months without needing to ask Blockstream for permission or receive abuse or insults

analogy:
knowing one day you are going to have 19 children in the next xx years (you already have 9 and live in a 10-bedroom house)
(1.9 MB of data in x years' time, currently at 900 KB of data with a 1 MB buffer)
would you go through the headache of 2 years of mortgages and legal stuff to get an 11-bedroom house, then another 2 years of headaches for a 12-bedroom house,
or would you:
go through one headache and get a 20-bedroom house, and then spend the next 20 years impregnating your wife 10 times, slowly gaining a child once every couple of years.

i know segwit tries to say, let's stay with 10 rooms and fit in some bunk beds.. so more kids can fit into the 10 rooms. but the problem is that Blockstream's other features, like confidential transaction codes, make all the kids obese, with twice the amount of clothing that needs storing too.. so the house becomes overcrowded and slow to get everyone ready in the morning.
which leads Blockstream, instead of expanding to a 20-bedroom house, to push some of the kids to get adopted by the neighbours (sidechains), and they are only allowed to visit the real family home if they pay rent