
Topic: Satoshi Nakamoto: "Bitcoin can scale larger than the Visa Network" - page 12. (Read 18415 times)

hv_
legendary
Activity: 2534
Merit: 1055
Clean Code and Scale
As hard as it might be to see, I really believe the crisis in front of us is one of perception as opposed to anything technical.  Perceptions are manageable while the real work of sorting through the technical issues is taken out of the limelight.

The network is working and it's still relatively cheap (5 cents per TX),
but we have threads titled "why is my TX not confirming??" or something to that effect.
Newbies are using bitcoin for the first time and are having a hard time, with their BTC tied up and seemingly never confirming, and they conclude that bitcoin is not all that it's cracked up to be....
Is this a problem?

I remember when I was a newbie, I would check over and over waiting for the first 6 confirmations. Everything went smoothly, but I didn't fully trust that it would go smoothly; I was afraid my money would get lost or something. Slowly my confidence in the system grew as I used it more and understood it more.

Not sure I would have been able to build any confidence had I started using bitcoin today....

Well said! The newbies are the masses.
hero member
Activity: 812
Merit: 500
He thought Moore's law would keep up with the scalability demands...

Moore's law has never applied to bandwidth (so if Satoshi really thought that, then clearly he wasn't a genius, was he?).

Or perhaps you think that, because Satoshi could never be wrong, if he said that "Moore's Law" applies to bandwidth then it actually does?


Bandwidth is not a technological issue; it's a matter of centralized market control by the big players in the Internet market. I pay $10 per month for a 1 Gbit in-house fiber optic connection without any issue. Except that the fiber optic is kinda sensitive, I've had it break a few times *lol*. No bandwidth issue. On the same computer that is also running a Bitcoin node I stream full HD games via Steam. So again, what did you say about bandwidth? I downloaded the blockchain at 25 Mbit/s; it finished in exactly three and a half hours.

My phone could hold the Bitcoin blockchain right now without any issues; my internet connection can handle it. If bandwidth is an issue for miners, then they should be out of business. Just as they don't want power outages, they don't want a bad internet connection. That's their business.
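
The figure above is easy to sanity-check: 25 Mbit/s sustained for about 3.5 hours moves roughly 39 GB. A minimal sketch of that arithmetic (illustrative only; it assumes the connection is the bottleneck and ignores validation time and protocol overhead):
Code:
# Rough download-time estimate for an initial sync (illustrative only).
def download_hours(chain_gb, mbit_per_s):
    megabits = chain_gb * 8000            # GB -> Mbit (1 GB ~ 8000 Mbit)
    return megabits / mbit_per_s / 3600   # Mbit / (Mbit/s) = seconds -> hours

print(round(download_hours(39, 25), 1))   # ~3.5 hours, matching the claim above
print(round(download_hours(39, 1), 1))    # ~86.7 hours (~3.6 days) at 1 Mbit/s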
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
As hard as it might be to see, I really believe the crisis in front of us is one of perception as opposed to anything technical.  Perceptions are manageable while the real work of sorting through the technical issues is taken out of the limelight.

The network is working and it's still relatively cheap (5 cents per TX),
but we have threads titled "why is my TX not confirming??" or something to that effect.
Newbies are using bitcoin for the first time and are having a hard time, with their BTC tied up and seemingly never confirming, and they conclude that bitcoin is not all that it's cracked up to be....
Is this a problem?

I remember when I was a newbie, I would check over and over waiting for the first 6 confirmations. Everything went smoothly, but I didn't fully trust that it would go smoothly; I was afraid my money would get lost or something. Slowly my confidence in the system grew as I used it more and understood it more.

Not sure I would have been able to build any confidence had I started using bitcoin today....
sr. member
Activity: 294
Merit: 250
Satoshi did plan for Bitcoin to compete with PayPal/Visa in traffic volumes.

With an initial block limit of 33.5 MB: yes.

And you do realise that if someone created such a block, no one else could verify it within the 10 minutes typically available before the next block, right?

(or do you just not care about the fact that this wouldn't work?)


Not true. With BIP 109 activated, there cannot be transactions with extremely long validation times. Actually, this limit of 1.3 GB on the data hashed for signature operations is recommended even for the 1 MB limit. Don't worry; leave solving problems to the experts, not to those saying it's impossible to scale on-chain, because all the evidence points to on-chain scaling being possible for many years, keeping up with demand while preserving the necessary decentralization, with full nodes running on home computers (though not necessarily every obsolete low-spec home computer!).
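
To make the BIP 109 limits mentioned above concrete, here is a minimal sketch of the two caps involved (illustrative only, not consensus code): a 2,000,000-byte block size limit plus a per-block cap of 1.3 billion bytes hashed while verifying signatures.
Code:
MAX_BLOCK_SIZE_BYTES = 2_000_000        # BIP 109 block size cap
MAX_SIGHASH_BYTES = 1_300_000_000       # BIP 109 cap on bytes hashed for signature checks

def within_bip109_limits(block_size_bytes, sighash_bytes):
    # sighash_bytes = total bytes hashed across all signature checks in the block
    return (block_size_bytes <= MAX_BLOCK_SIZE_BYTES
            and sighash_bytes <= MAX_SIGHASH_BYTES)

# The ~1 MB "spam clean-up" transaction discussed later in this thread hashed
# roughly 1.25 GB on its own, so a block built around it would only just pass.
print(within_bip109_limits(990_000, 1_250_000_000))   # True, but barely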
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
but miners can still generate these transactions.
But the block would get orphaned?
hero member
Activity: 709
Merit: 503
Thank you Lauda:  https://en.wikipedia.org/wiki/Quadratic_growth ... ouchie; so if a 1MB block with y transactions in it takes x seconds to validate, then 32 similar 1MB blocks will take about 32x seconds, but a 32MB block could be expected to take about 32² · x = 1024x seconds.  Or is the quadratic growth on something other than transaction count?
It can be a bit tough to understand (I feel you). The number of transactions is irrelevant, from what I understand. It is possible to construct a single transaction that would fill up a block. From the Core roadmap:
Quote
In 2MB blocks, a 2MB transaction can be constructed that may take over 10 minutes to validate which opens up dangerous denial-of-service attack vectors.
The validation time of the block itself is what has quadratic scaling. From one of the workshops last year (might help understand the problem):
Quote
So how bad is it? There was a block consisting of one transaction in the coinbase, it was broadcasted about one month ago. It was f2pool that was clearing up a bunch of spam on the blockchain. The problem is that this block takes 30 seconds to validate. It's a 990 kilobyte transaction. It contains about 500,000 signature operations, which each time serializes the entire 1 MB transaction out, moves buffers around, then serializes it, then hashes it, it creates 1.25 GB. The bitcoin serializer is not fast, it's about where 60% of the validation time was. So 30 seconds to validate and that's on a beefy computer, I don't know what that is on a raspberrypi, it's not linear it's quadratic scaling... If you are doing 8 MB, then it's 2 hours and 8 minutes. There are some ways that gavinandresen has proposed that this can be fixed. This transaction would be considered non-standard on the network now, but miners can still generate these transactions.
Sincerely, thank you Lauda:  I really do try hard to understand and your patience is much appreciated.  It did seem unreasonable for the scaling to hinge on transaction count.  Serialization is a classic scaling killer.  So, one very large transaction with numerous sigops leads to the quadratic growth.  Hmm, so blindly increasing the block size is asking for trouble.  How many of these troublesome transactions have been launched against us recently?  Or is it an unexploited vulnerability?

Perhaps we could increase the block size *and* constrain sigops/transaction until we get SegWit out the door?
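
A rough way to see where the quadratic growth in the quoted explanation comes from: with legacy signature hashing, each input's signature check re-hashes roughly the whole transaction, so the data hashed grows with inputs × transaction size. A toy sketch under that assumption (illustrative only; the 180-byte input size is a ballpark figure, not a protocol constant):
Code:
INPUT_BYTES = 180  # ballpark size of one spending input (an assumption, not a constant)

def approx_bytes_hashed(n_inputs):
    tx_size = n_inputs * INPUT_BYTES   # pretend the transaction is nothing but inputs
    return n_inputs * tx_size          # each signature check hashes ~the whole tx

for n in (1_000, 2_000, 4_000, 8_000):
    tx_mb = n * INPUT_BYTES / 1e6
    gb = approx_bytes_hashed(n) / 1e9
    print(f"~{tx_mb:.2f} MB transaction -> ~{gb:.1f} GB hashed")
Doubling the transaction size roughly quadruples the hashing work, which is why a single block-filling transaction, rather than the transaction count, is the problem; segwit's new signature-hashing scheme (BIP 143) is what removes this quadratic term.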
legendary
Activity: 4424
Merit: 4794
Thank you franky1:  I understand, but a 10% jump is at least something, and then if all goes well the stage would be set to jump to something more, e.g. 1.2MB.  Breaking the standoff seems more important to me at this time; the world is watching us.

since summer 2015 people have been asking for a buffer increase. the roadmap plans one for 2 years later (summer 2017), after a year of grace (starting summer 2016, we hope).
that means when miners get to 1.1 they then have to beg for a year and then wait a grace period of a year before getting to 1.2.

it's much easier to have a 2mb buffer and let the miners themselves slowly increase to 1.1 and then 1.2 when they are ready, as a soft rule within their own code below the hard rule of consensus.
after all, they are not going to risk pushing too fast, as their rewards would be at risk due to not only competition but also orphans. so even with a 2mb buffer we won't see miners pushing to 1.95 anytime soon, just like in 2013: when they realised they had 50% of growth potential they could fill, they didn't use it straight away.
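
A minimal sketch of the "soft rule under the hard rule" idea described above (purely illustrative; the numbers are the proposal's, not anything miners actually run):
Code:
CONSENSUS_HARD_LIMIT = 2_000_000   # bytes; the proposed 2 MB buffer

def node_accepts(block_size_bytes):
    # every full node enforces only the hard consensus cap
    return block_size_bytes <= CONSENSUS_HARD_LIMIT

def miner_build_target(preferred_soft_limit):
    # a miner never builds past its own preference, which sits under the hard cap
    return min(preferred_soft_limit, CONSENSUS_HARD_LIMIT)

print(miner_build_target(1_100_000))   # 1100000 -- today's soft preference
print(node_accepts(1_100_000))         # True
print(node_accepts(2_100_000))         # False -- over the hard consensus cap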
hero member
Activity: 709
Merit: 503
As hard as it might be to see, I really believe the crisis in front of us is one of perception as opposed to anything technical.  Perceptions are manageable while the real work of sorting through the technical issues is taken out of the limelight.
hv_
legendary
Activity: 2534
Merit: 1055
Clean Code and Scale
One thing that seems apparent to me is the lack of willingness to compromise.  Appeasement is a powerful marketing tool.  Could we reasonably raise the block size limit to 1.1MB without wrecking Bitcoin?  Wouldn't the good will generated be worth it?  Along the way we might learn something important.  I fully realize the 2MB being bandied about is already a giant compromise down from the 32MB or 8MB sizes being proposed before.  Is there something special about doubling?  It can be set to 1.1MB easily, right?
1.1mb is an empty gesture and solves nothing long term, meaning it's just a poke in the gut knowing that more growth would be needed soon.

the 2mb is not forcing miners to make 2mb blocks. it's a BUFFER to allow for growth without having to demand that core keep changing the rules every month.

meaning even with a 2mb buffer, miners can set a preferential limit of 1.1mb and still have 45% of growth potential (the 2mb hard limit) not even tapped into, and not need to beg core to alter it for months-years.

imagine it: a 2mb buffer and miners grow slowly month after month, growing by 0.1mb when they are happy to, without hindrance or demands.

just like in 2013 when miners were fully able to use the 1mb buffer: the miners did not jump to 0.95mb, they grew slowly over months and months, without having to ask core to change anything.
Thank you franky1:  I understand, but a 10% jump is at least something, and then if all goes well the stage would be set to jump to something more, e.g. 1.2MB.  Breaking the standoff seems more important to me at this time; the world is watching us.

Great, watching...
legendary
Activity: 2674
Merit: 3000
Terminated.
Thank you Lauda:  https://en.wikipedia.org/wiki/Quadratic_growth ... ouchie; so if a 1MB block with y transactions in it takes x seconds to validate, then 32 similar 1MB blocks will take about 32x seconds, but a 32MB block could be expected to take about 32² · x = 1024x seconds.  Or is the quadratic growth on something other than transaction count?
It can be a bit tough to understand (I feel you). The number of transactions is irrelevant, from what I understand. It is possible to construct a single transaction that would fill up a block. From the Core roadmap:
Quote
In 2MB blocks, a 2MB transaction can be constructed that may take over 10 minutes to validate which opens up dangerous denial-of-service attack vectors.
The validation time of the block itself is what has quadratic scaling. From one of the workshops last year (might help understand the problem):
Quote
So how bad is it? There was a block consisting of one transaction in the coinbase, it was broadcasted about one month ago. It was f2pool that was clearing up a bunch of spam on the blockchain. The problem is that this block takes 30 seconds to validate. It's a 990 kilobyte transaction. It contains about 500,000 signature operations, which each time serializes the entire 1 MB transaction out, moves buffers around, then serializes it, then hashes it, it creates 1.25 GB. The bitcoin serializer is not fast, it's about where 60% of the validation time was. So 30 seconds to validate and that's on a beefy computer, I don't know what that is on a raspberrypi, it's not linear it's quadratic scaling... If you are doing 8 MB, then it's 2 hours and 8 minutes. There are some ways that gavinandresen has proposed that this can be fixed. This transaction would be considered non-standard on the network now, but miners can still generate these transactions.
hero member
Activity: 709
Merit: 503
One thing that seems apparent to me is the lack of willingness to compromise.  Appeasement is a powerful marketing tool.  Could we reasonably raise the block size limit to 1.1MB without wrecking Bitcoin?  Wouldn't the good will generated be worth it?  Along the way we might learn something important.  I fully realize the 2MB being bandied about is already a giant compromise down from the 32MB or 8MB sizes being proposed before.  Is there something special about doubling?  It can be set to 1.1MB easily, right?
1.1mb is an empty gesture and solves nothing long term, meaning it's just a poke in the gut knowing that more growth would be needed soon.

the 2mb is not forcing miners to make 2mb blocks. it's a BUFFER to allow for growth without having to demand that core keep changing the rules every month.

meaning even with a 2mb buffer, miners can set a preferential limit of 1.1mb and still have 45% of growth potential (the 2mb hard limit) not even tapped into, and not need to beg core to alter it for months-years.

imagine it: a 2mb buffer and miners grow slowly month after month, growing by 0.1mb when they are happy to, without hindrance or demands.

just like in 2013 when miners were fully able to use the 1mb buffer: the miners did not jump to 0.95mb, they grew slowly over months and months, without having to ask core to change anything.
Thank you franky1:  I understand, but a 10% jump is at least something, and then if all goes well the stage would be set to jump to something more, e.g. 1.2MB.  Breaking the standoff seems more important to me at this time; the world is watching us.
legendary
Activity: 2576
Merit: 1087
You are way off topic. Let's zoom back out again.

You are an idiot (and most likely @franky1 who I've already put on ignore).

I am putting you on ignore as well, so feel free to post your rubbish (I don't think that anyone is actually taking you seriously anyway, but I don't actually care).


As you are not refuting anything I have said, I will assume it is because you cannot.

The only logical conclusion is that it is because you were in fact wrong, and so your comments about my lack of understanding are demonstrably false.

As I have now cleared up any misconceptions about what I do or do not know, I don't think there is anything left to discuss.

If you do come up with any factual evidence, I'll be happy to reopen a dialogue.
hero member
Activity: 709
Merit: 503
I have a question:  The total amount of work to verify N 1MB blocks is about the same as for a single N-MB block, right?  For example, 32 1MB blocks take about the same amount of work to verify as a single 32MB block, right?
No. The scaling of validation time is quadratic (look up quadratic growth if unsure what this means). In other words, 32 1 MB blocks != a single 32 MB block. Segwit aims to scale down the validation time and make it linear. Classic (BIP109) adds a sigops limitation to prevent this from happening (so not a solution, but a limitation on TX size IIRC). If anyone claims that this is false or whatever, that means they are saying that all the people who signed the Core roadmap are wrong/lying (the 2 MB block size limit is mentioned there IIRC).

It can be set to 1.1MB easily, right?
That would work.
Thank you Lauda:  https://en.wikipedia.org/wiki/Quadratic_growth ... ouchie; so if a 1MB block with y transactions in it takes x seconds to validate, then 32 similar 1MB blocks will take about 32x seconds, but a 32MB block could be expected to take about 32² · x = 1024x seconds.  Or is the quadratic growth on something other than transaction count?
legendary
Activity: 4424
Merit: 4794
One thing that seems apparent to me is the lack of willingness to compromise.  Appeasement is a powerful marketing tool.  Could we reasonably raise the block size limit to 1.1MB without wrecking Bitcoin?  Wouldn't the good will generated be worth it?  Along the way we might learn something important.  I fully realize the 2MB being bandied about is already a giant compromise down from the 32MB or 8MB sizes being proposed before.  Is there something special about doubling?  It can be set to 1.1MB easily, right?

1.1mb is an empty gesture and solves nothing long term, meaning it's just a poke in the gut knowing that more growth would be needed soon.

the 2mb is not forcing miners to make 2mb blocks. it's a BUFFER to allow for growth without having to demand that core keep changing the rules every month.

just like in 2013 when miners were fully able to use the 1mb buffer: the miners did not jump to 0.95mb, they grew slowly over months and months, without having to ask core to change it endlessly from 0.5 to 0.55 to 0.6 to 0.65.

meaning even with a 2mb buffer, miners can set a preferential limit of 1.1mb and still have 45% of growth potential (the 2mb hard limit) not even tapped into, and not need to beg core to alter anything for months-years, instead of a 1.1mb hard limit which requires endless debates.

imagine it: a 2mb buffer and miners grow slowly month after month, growing by 0.1mb when they are happy to, without hindrance or demands.

and while you are investigating validation times, please validate a 450kb block vs a 900kb block and see if the whole quadratic buzzword holds weight,
as it would be an interesting answer (a proxy for comparing 900kb vs 1800kb, which is not measurable on the bitcoin network yet)

legendary
Activity: 4424
Merit: 4794
I have a question:  The total amount of work to verify N 1MB blocks is about the same as for a single N-MB block, right?  For example, 32 1MB blocks take about the same amount of work to verify as a single 32MB block, right?  Just please ignore the live delivery of blocks for the moment.  Or is there some advantage to large blocks where fewer headers have to be processed?  Imagine a full node was off the air for a day or two and is just trying to catch up as fast as possible.  What block size facilitates that best?

a much easier way to get an initial baseline number to compare against is to start from scratch and time how long it takes to resync from 0 to the latest block,
..then do the maths

EG
someone pointed out to lauda that at a 1mb internet connection http://bitcoinstats.com/irc/bitcoin-dev/logs/2016/01/17#l1453064029.0
it would take 12 days to resync and validate 400,000 blocks

so basing it on a very slow connection is a good basis of capability,
which is basically 400,000 / 12 / 24 / 60 ≈ 23 blocks a minute, or roughly 2.6 seconds per block on average (download included).

that is the basic total propagation time, including download time, using a 1mb connection speed.

though it would be useful to work out how long it takes to validate the data without the connection speed hindering it,
and also to know the total propagation time at varying internet speeds too.

so i wish you luck with your investigations and i hope they give some conclusive results
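
Checking the arithmetic in that estimate (rough figures, taken from the linked 12-days-for-400,000-blocks claim):
Code:
blocks, days = 400_000, 12
minutes = days * 24 * 60

print(f"{blocks / minutes:.1f} blocks per minute")        # ~23.1
print(f"{minutes * 60 / blocks:.2f} seconds per block")   # ~2.59 s on average
Note the average is dominated by the many small early blocks; timing only recent ~1 MB blocks, and re-running against a fast local peer so bandwidth is not the bottleneck, would better isolate pure validation cost.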
legendary
Activity: 2674
Merit: 3000
Terminated.
I have a question:  The total amount of work to verify N 1MB blocks is about the same as for a single N-MB block, right?  For example, 32 1MB blocks take about the same amount of work to verify as a single 32MB block, right?
No. The scaling of validation time is quadratic (look up quadratic growth if unsure what this means). In other words, 32 1 MB blocks != a single 32 MB block. Segwit aims to scale down the validation time and make it linear. Classic (BIP109) adds a sigops limitation to prevent this from happening (so not a solution, but a limitation on TX size IIRC). If anyone claims that this is false or whatever, that means they are saying that all the people who signed the Core roadmap are wrong/lying (the 2 MB block size limit is mentioned there IIRC).

It can be set to 1.1MB easily, right?
That would work.
hero member
Activity: 709
Merit: 503
One thing that seems apparent to me is the lack of willingness to compromise.  Appeasement is a powerful marketing tool.  Could we reasonably raise the block size limit to 1.1MB without wrecking Bitcoin?  Wouldn't the good will generated be worth it?  Along the way we might learn something important.  I fully realize the 2MB being bandied about is already a giant compromise down from the 32MB or 8MB sizes being proposed before.  Is there something special about doubling?  It can be set to 1.1MB easily, right?
legendary
Activity: 3248
Merit: 1070
the signature concern is not even a concern; in the wiki link that i've posted it explains how, with some optimization, it is possible to increase the number of signatures verified per second

Aha - and that optimisation can do that 10x or 100x, can it?

(I am only rude to people who are rude to me btw, and if you want to be taken seriously then please dump your ad sig for a start)


what does my sig have to do with our discussion? there are many trolls, even worse ones, among those without signatures... and anyway i'm not paid for more than 100 posts per week, and guess what, this is my 200th+ post....

and anyway, yes, those optimizations can reach a 10x increase and even higher



obviously lauda is joking, because in 2017 they are going to implement the 2 MB hard fork anyway

there is no real valid concern against 2 MB; all i see is nonsense FUD
hero member
Activity: 709
Merit: 503
I would be willing to run a full node on a testnet to see if my system could handle larger blocks, i.e. verify a large block in less than the average time between blocks.

I have a question:  The total amount of work to verify N 1MB blocks is about the same as for a single N-MB block, right?  For example, 32 1MB blocks take about the same amount of work to verify as a single 32MB block, right?  Just please ignore the live delivery of blocks for the moment.  Or is there some advantage to large blocks where fewer headers have to be processed?  Imagine a full node was off the air for a day or two and is just trying to catch up as fast as possible.  What block size facilitates that best?

To me it seems fees tend to be inversely proportional to block size, i.e. with smaller blocks fees rise as folks compete to get into blocks, with larger blocks fees get smaller with less competition to get into blocks.  What does it cost a bad actor (if there is truly such a thing in this realm) to clog up the works?  I suppose we are looking for the right size of block to cause them to expend their resources most quickly.  Make the block size very small and the fee competition would rise high enough to deplete the bad actor very fast; everyone suffers higher fees until they are run out of town (so to speak).  Hmm, but if the block size is very small then even when there aren't any bad actors on the scene, regular legit users would be forced to compete.  At the other end of the spectrum; make the block size very large and with such low competition fees would diminish.  The real question here is what happens to the fees/MB across the spectrum of block sizes.
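
A toy way to see the fee/capacity relationship described above (purely illustrative; the backlog sizes and feerates are made up, and real demand is not a fixed list of transactions):
Code:
# Toy model of fee competition: a miner fills the block highest-feerate-first,
# so the lowest feerate that still gets in (the "going rate") falls as capacity
# grows and competition for space disappears.
import random

random.seed(1)
# pretend backlog: (size in bytes, feerate in satoshis per byte) for 5,000 pending txs
backlog = [(random.randint(300, 800), random.randint(5, 60)) for _ in range(5_000)]

def clearing_feerate(capacity_bytes):
    used, lowest = 0, None
    for size, feerate in sorted(backlog, key=lambda tx: -tx[1]):  # best feerate first
        if used + size > capacity_bytes:
            break                      # greedy fill: stop once the block is full
        used += size
        lowest = feerate
    return lowest

for cap in (250_000, 500_000, 1_000_000, 2_000_000):
    print(f"{cap/1e6:.2f} MB capacity -> ~{clearing_feerate(cap)} sat/byte to get in")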

Is there *anyone* preferring a block size smaller than 1MB right now?  I haven't heard of any but you never know.  I do think some miners artificially constrain the block size they produce to around 900KB or so (I'm not sure of their motivation).  Even if the block size were increased, such miners could still constrain the ones they produce, right?

A transaction cannot span multiple blocks, right?  I suppose the block size creates a functional limit on transaction sizes.  Or is the size of a transaction constrained some other way?
legendary
Activity: 4424
Merit: 4794
You're wasting your time with him. Obviously there is a good reason for which a 2 MB block size limit is dangerous;

and there is the hat trick:
2mb is dangerous???

yet strangely segwit's 4mb is not dangerous??

you can't have it both ways
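
For context on the "4mb" being compared here: segwit does not raise the base block size to 4 MB. BIP 141 defines a block weight of 3 × non-witness size + total size, capped at 4,000,000, so a block of ordinary non-witness data still tops out around 1 MB while witness-heavy blocks can be larger in raw bytes. A minimal sketch of that accounting (illustrative, not consensus code):
Code:
MAX_BLOCK_WEIGHT = 4_000_000   # BIP 141

def block_weight(stripped_size_bytes, total_size_bytes):
    # weight = 3 * size-with-witness-data-stripped + full serialized size
    return 3 * stripped_size_bytes + total_size_bytes

# A block of purely non-witness data (stripped size == total size) hits the
# cap at ~1 MB, so the "4 MB" figure is only reachable with witness-heavy blocks.
print(block_weight(1_000_000, 1_000_000))   # 4,000,000 -> right at the limit
print(block_weight(700_000, 1_900_000))     # 4,000,000 -> bigger raw block, same weight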