
Topic: Please run a full node - page 2. (Read 6650 times)

hero member
Activity: 770
Merit: 629
May 14, 2017, 11:32:29 PM

I must admit, for some reason I had thought that these times would be a lot closer to the 10 min average since pooling is supposed to "smooth out" the times.


Nope, they remain exponential.  The only thing pooling does is smooth out the GAINS, as compared to solo mining, for each of its customers (plus a bit of economy of scale, which is probably offset by the overhead the customers incur to prove their work: trustlessness comes at a price).

If you have a small amount of hash rate that would, on average, let you win a block per month, sometimes you might only win a block after 3 months, with no gains in between; sometimes you might win 3 blocks in one month.  This uncertainty of income is smoothed out by pooling together, where the pool will pay you regularly about one block per month, minus its fees and margins etc... and the pooling together also removes the hassle of having to assemble blocks yourself, check them, run a good network node, etc... (and at the same time, takes away your decision power over that).
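To make that concrete, here is a toy simulation (hypothetical numbers: a miner whose hash rate averages 1 block per month, and an assumed 2% pool fee). Because block finds are memoryless, solo monthly income is Poisson-distributed, with variance roughly as large as the mean; the pooled payout is near-constant.

```python
import math
import random

random.seed(42)

# Toy model: a miner whose hash rate wins, on average, 1 block/month.
# Block finds are memoryless, so monthly block counts are Poisson:
# some months 0 blocks, some months 3.
MONTHS = 10_000
MEAN_BLOCKS = 1.0
POOL_FEE = 0.02          # assumed, for illustration only

def poisson_sample(mean):
    """Knuth's method: count uniform draws until their product < e^-mean."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

solo = [poisson_sample(MEAN_BLOCKS) for _ in range(MONTHS)]
mean_solo = sum(solo) / MONTHS
var_solo = sum((x - mean_solo) ** 2 for x in solo) / MONTHS

# A pool pays out the expected value every month (minus its fee),
# so the payout variance collapses to ~0 while the mean barely drops.
pooled_monthly = MEAN_BLOCKS * (1 - POOL_FEE)

print(f"solo: mean {mean_solo:.2f}, variance {var_solo:.2f} blocks/month")
print(f"pooled: {pooled_monthly:.2f} blocks/month, every month")
```

The solo miner earns the same on average, but with month-to-month swings; the pool trades a small fee for predictability.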
hero member
Activity: 770
Merit: 629
May 14, 2017, 11:27:12 PM
@franky1.  One more trial.

Take an old piece of blockchain, say around block number 200 000 or so, but use today's actual difficulty. Take a given miner setup with a given hash rate, say 1/6 of the total hash rate for that difficulty, and compare two different experiments:

A) take the transactions of block 200 000, make your own block of it, and hash on it.  Regularly, you will find a solution, but you keep trying to find new solutions on that very same block.  Do this for a day.  ==> at what average rate do you think you will find solutions for this same block?

B) do the same as in A, but switch blocks every 30 seconds: that is, work 30 seconds on a block made of the transactions of block 200 000; then work 30 seconds on a block made of the transactions of block 200 001; then work 30 seconds on the transactions of block 200 002, etc...  Do this also for a day.
==> this time, at what average rate do you think you will find solutions for some of the blocks while you hash on them?

How do the rates in A and in B compare ?
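A sketch of the two experiments, with a toy "difficulty" where every hash attempt independently succeeds with probability p (that independence is the defining property of PoW hashing). Switching templates changes the data being hashed, but not the per-attempt odds, so both experiments find solutions at the same average rate:

```python
import random

random.seed(1)

P = 2e-4                 # toy per-attempt success probability
ATTEMPTS = 1_000_000     # one "day" of hashing at a fixed rate

def run(switch_every=None):
    solutions = 0
    template = 0
    for i in range(ATTEMPTS):
        if switch_every and i % switch_every == 0:
            template += 1          # experiment B: start on a different block
        # Each attempt is an independent lottery ticket; the odds do not
        # depend on which template we hash or how long we've been at it.
        if random.random() < P:
            solutions += 1
    return solutions

a = run()                  # experiment A: same block all day
b = run(switch_every=300)  # experiment B: new block template regularly
print(a, b)                # both land near ATTEMPTS * P = 200
```

Restarting on a new block costs nothing, because with PoW there is no accumulated progress to lose.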
sr. member
Activity: 686
Merit: 320
May 14, 2017, 11:26:29 PM
Some real block times over a few hours from yesterday. Each pool was working towards solving a block at each of those heights. Each pool was trying to solve a completely different "block", as the data they work on is different from any other pool's. I seriously don't know how franky1 could possibly think that a pool with 5 S9s (as an example) would be able to solve their unique block in the same average time as a pool with 1000 S9s. At this point I have to conclude he's simply incapable of admitting he's wrong and/or is trolling us.

466332 05:22
466331 35:01
466330 34:56
466329 09:24
466328 05:02
466327 11:12
466326 03:29
466325 13:08
466324 42:03
466323 05:09
466322 07:15
466321 01:17
466320 01:40
466319 24:02
466318 06:19
466317 14:06
466316 04:10
466315 00:52
466314 14:32
466313 07:36
466312 10:35
466311 08:05
466310 02:43
466309 03:03

I must admit, for some reason I had thought that these times would be a lot closer to the 10 min average since pooling is supposed to "smooth out" the times.
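Parsing the intervals above confirms both points: individual gaps swing from under a minute to over 40 minutes (the exponential spread), yet the mean still lands near the 10-minute target.

```python
# mm:ss intervals copied from the list above
raw = """05:22 35:01 34:56 09:24 05:02 11:12 03:29 13:08 42:03 05:09
07:15 01:17 01:40 24:02 06:19 14:06 04:10 00:52 14:32 07:36
10:35 08:05 02:43 03:03"""

seconds = [int(m) * 60 + int(s)
           for m, s in (t.split(":") for t in raw.split())]
mean = sum(seconds) / len(seconds)

print(f"{len(seconds)} intervals, min {min(seconds)} s, max {max(seconds)} s")
print(f"mean {mean:.1f} s ≈ {mean / 60:.1f} min")
```

The mean comes out a bit over 600 s, while single intervals range from under one minute to over 2500 s, which is exactly what an exponential distribution looks like.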
legendary
Activity: 4270
Merit: 4534
May 14, 2017, 04:22:47 PM
The natural frequency to find a block for the entire network (which is set by the difficulty level) is always 600 seconds on average.

you are right. but you're only seeing it in one dimension...

so lets just get back to the topic at hand..

running a node is just as important as running an asic. in fact, more important

having diverse codebases of nodes is as important as having multiple pools. in fact, more important
legendary
Activity: 4270
Merit: 4534
May 14, 2017, 04:19:42 PM
Moral of this topic:  franky1 isn't listening to a vast selection of technically proficient users explaining in detail why his perception of mining is wrong.

i understand more than you think. but people can't even get past the basics for me to even start confusing them further with the extra dimensions..
it would take a book to explain it all.. but some are stuck at the first paragraph.. so in this topic i'm only talking about their first-paragraph failures ..

ok.. let's word it this way, to confuse the matter by talking about some 2-dimensional stuff
(using some peoples rationale)
if it only takes 70ms (i'm laughing) to see a block, grab the block, validate the block, make a new (unsolved) block template, add transactions..
                                ...... before hashing

then why SPV??
why do (avoiding grey): see a block, grab the block, validate the block, make a new block, add transactions, start hashing
hint: it's more than 70ms to do all the tasks before hashing.
hint: the efficiency gains of doing spv are noticeable
hint: by doing spv, the gains are more than 5%, compared to a pool that does the full validation
hint: even OVERT asicboost can gain more than 5% efficiency by tinkering around with certain things too
hint: even COVERT asicboost can gain more than 5% efficiency by tinkering around with certain things too

remember, 5% of 10 minutes is 30 seconds.
there are ways to shave off more than 20% of the average block creation process (2 minutes) without buying 20% more hash power

once you realise there is much more to making a block than just hashing, the difference between each pool's "hash power" becomes negligible..

all those tasks sit beside the hashing time to make up the solved-block creation time..
they dilute the 'hash time' variance per block solution, thus making the "hashing time" negligible

tl;dr:
without buying more ASIC rigs,
an 11% hashpower pool can outperform a 13% hashpower pool, just by knowing some efficiency tricks
meaning arguing about the raw hashpower percentages misses the point



until dino and others can grasp the basic fact that pools don't just work on 1 block an hour.. there is no point going into the deeper-level stuff


third-level hint..
if a pool went at it alone, it could happily avoid all the latency, validation and propagation times (which would be more than 70ms if it was competing)
because going it alone means the previous block already belongs to them, so they already know the data.. and as such they gain time to create the next block by not having to relay, propagate, etc, etc..

totally separate matter,
but the bit i laugh at:
if it only takes 70ms to see a block, download a block, validate a block.. then why are the crybabies crying so much that "2mb blocks are bad".

look beyond the curtain, find the answers. piece the layers together, see the whole picture
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
May 14, 2017, 02:57:59 PM
Franky, I have run the scenarios in my head.  I could give one to you, but it would be a waste of time because we'd end up in a circular argument.

...because you would challenge the underlying assumption behind the scenario, which is this:

The natural frequency to find a block for the entire network (which is set by the difficulty level) is always 600 seconds on average.

Unless and until you accept that assumption, discussing scenarios is pointless.

legendary
Activity: 4270
Merit: 4534
May 14, 2017, 02:34:47 PM
(facepalm)

for the third time:
forget about the % of VISIBLE orphans (there are more than you think.)

Nonsense.

An orphan only becomes an orphan because another valid block beat it out.

Since the time between valid blocks is so much larger than the propagation/validation time
(which is seconds, not minutes), the proportion of orphans to valid blocks has to be tiny.

The only way that, say, 5 orphans would be created during 1 valid block is if they
all happened to be published within a few seconds of each other -- which, given
that valid blocks only occur about every 600 seconds, is quite unlikely.

(facepalm)
seems you're not gonna run any scenarios.. so you might as well just carry on with one-dimensional thinking and move on
it's like i open up a curtain, and all you want to talk about is the next wall.. you're not ready to see beyond the wall, and you're finding reasons to avoid looking beyond it..

might be best to let you have more time to immerse yourself in all the extra things behind the scenes.. which you're not ready to grasp just yet
hero member
Activity: 546
Merit: 500
May 14, 2017, 02:26:33 PM
Moral of this topic:  franky1 isn't listening to a vast selection of technically proficient users explaining in detail why his perception of mining is wrong.
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
May 14, 2017, 01:49:36 PM
Orphaning makes up a small percentage of blocks.  This is known both from actual data and common sense: if it takes milliseconds to validate a block and seconds to propagate one, compared with the fact that the entire network solves a block every 10 minutes, it's a very small ratio.

So it's better to ignore orphaning, to simplify the conversation.

(facepalm)

for the third time:
forget about the % of VISIBLE orphans (there are more than you think.)

Nonsense.

An orphan only becomes an orphan because another valid block beat it out.

Since the time between valid blocks is so much larger than the propagation/validation time
(which is seconds, not minutes), the proportion of orphans to valid blocks has to be tiny.

The only way that, say, 5 orphans would be created during 1 valid block is if they
all happened to be published within a few seconds of each other -- which, given
that valid blocks only occur about every 600 seconds, is quite unlikely.
legendary
Activity: 4270
Merit: 4534
May 14, 2017, 01:46:55 PM
moral of this topic:

run a full node, not just to:
make transactions without third party server permission
see transactions/value/balance without third party server permission
secure the network from pool attack
secure the network from cartel node(sybil) attack
secure the network from government shutdown of certain things
secure the data on the chain is valid
secure the rules
help with many other symbiotic things


but
to also be able to run tests and scenarios and see beyond the curtain of the immutable chain and see all the fascinating things behind the scene that all go towards making bitcoin much more then just a list of visible blocks/transactions
legendary
Activity: 4270
Merit: 4534
May 14, 2017, 12:36:31 PM
Orphaning makes up a small percentage of blocks.  This is known both from actual data and common sense: if it takes milliseconds to validate a block and seconds to propagate one, compared with the fact that the entire network solves a block every 10 minutes, it's a very small ratio.

So it's better to ignore orphaning, to simplify the conversation.

(facepalm)

for the third time:
forget about the % of VISIBLE orphans (there are more than you think.)
forget about counting accepted blocks over an hour and dividing by brand count (there are more than you think.)


instead JUST LOOK at the times to create a BLOCK:
height X to height X+1...
not
height of last visible brand z to height of next visible brand z / hour

what you don't realise is that more block attempts occur than people think.
EG: dino thought the only blocks a pool works on are the ones that get accepted (visible), hence the bad maths.

i did not bring up showing the orphans to talk about %,
just to display and wake people up to the fact that more blocks are being attempted in the background

look beyond the one-dimensional (literal) view.
actually run some scenarios!!


P.S
orphan % is only based on the blocks that actually got to a certain node..
EG blockchain.info lists
466252
465722
464681

blocktrail lists
466252
466161
463316

cryptoid.info lists
466253
466252
464792

again.. don't suddenly think you have to count orphans, or play percentage games..
just wake up and realise that pools make more block attempts than you thought.
think of it only as an illustration of opening the curtains on a window to a deeper world beyond the wall that the blockchain paints

then do tests, realising what would happen if those hidden attempts behind the curtain (all pools, every block height) worked out...
the times of every block height if they continued instead of staling, giving up, orphaning, etc....
you would see a big difference between
height X to height X+1...
vs
height of last visible by brand z to height of next visible by brand z / hour
sr. member
Activity: 462
Merit: 250
May 14, 2017, 12:35:22 PM
If you have a machine you can spare, please run a full node. The more nodes there are, the stronger the network is. Also, if you run a full node you can potentially mine your node for information of various kinds. You can tell if you have a full node by giving the following command:

Code:
bitcoin-cli getinfo

If the "connections" field is greater than 8, then you are running a full node, congratulations!

You can find information on how to run a full node on bitcoin.org here:

https://bitcoin.org/en/full-node


To run a full node, you need at least around 200GB of storage and unlimited bandwidth. These nodes won't make the network stronger; it's the miners that make the network stronger. Full nodes really help to secure the network if it uses PoS as its main consensus.
hero member
Activity: 770
Merit: 629
May 14, 2017, 12:27:24 PM
Orphaning makes up a small percentage of blocks.  This is known both from actual data and common sense:  If it takes milliseconds to validate a block and seconds to propagate one, compared with the fact that the entire network solves a block every 10 minutes, its a very small ratio.

So its better to ignore orphaning to simplify the conversation.

Franky1 is still locked into thinking that a mining pool that had 1/6 of the hash rate, and hence 1/6 of the blocks when he was in consensus with the others, is going to make blocks 6 times faster if he forks off on a hard fork, pleasing the full nodes and leaving his peers behind with their 5/6 of the hash rate.

Of course, he will make all the blocks on his own new forked chain; but he will make them 6 times slower too, so concerning "winning rewards", he's not going to make any more profit, if his chain gets adopted in the end, than when he was remaining on the consensus chain.  Franky1 thinks he will make 6 times more rewards because now "all the blocks are his", but he doesn't understand that our miner will make 6 times fewer blocks in the same time.

(back to square one....)

Caveat: that holds at least if our forking miner is not MODIFYING the difficulty or reward of the chain; but it's not very probable that the full nodes would be running code that has this changed...
hero member
Activity: 770
Merit: 629
May 14, 2017, 12:22:50 PM
because a miner with 10% of the hash power has NO INCENTIVE to step back from remaining in agreement with the other miners, simply because he's then hard-forking all by himself, and will make a 10 times shorter chain.

Your erroneous understanding of mining made (probably still makes) you think that that betraying miner is going to mine all by himself a fork of just the same length as the chain of the rest of the miners, and hence "reap in all the rewards, orphaning the 90% chain" because full nodes agree with him, and not with the miner consortium.  

But this is not the case: our dissident miner will make just as many blocks on his own little fork as he would have made on the consortium chain (*), with just as many rewards: so there's no incentive for him to leave the consortium,

(facepalm)

i'm starting to see where you have gone wrong...

you at one point say
"then hard-forking all by himself"
"going to mine all by himself"

but then you backtrack by bringing him back into the competition by talking about orphans.

if a pool went at it alone.. there would be no competition. no stales, no orphans, no giving up..

now can you see that it would get every block
now can you see that if he only got 1 block out of 6 in the "consortium competition", he will get 6 out of 6 "on his own"
now can you see that instead of timing an hour and dividing it by how many blocks were solved in competition.. you instead look at the ACTUAL TIME of a block from height to height+1... not height to height+6

I know you still think that.  See above. 

But it is still just as wrong as before Smiley

legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
May 14, 2017, 12:06:53 PM
Orphaning makes up a small percentage of blocks.  This is known both from actual data and common sense: if it takes milliseconds to validate a block and seconds to propagate one, compared with the fact that the entire network solves a block every 10 minutes, it's a very small ratio.

So it's better to ignore orphaning, to simplify the conversation.
legendary
Activity: 4270
Merit: 4534
May 14, 2017, 10:48:16 AM
because a miner with 10% of the hash power has NO INCENTIVE to step back from remaining in agreement with the other miners, simply because he's then hard-forking all by himself, and will make a 10 times shorter chain.

Your erroneous understanding of mining made (probably still makes) you think that that betraying miner is going to mine all by himself a fork of just the same length as the chain of the rest of the miners, and hence "reap in all the rewards, orphaning the 90% chain" because full nodes agree with him, and not with the miner consortium.  

But this is not the case: our dissident miner will make just as many blocks on his own little fork as he would have made on the consortium chain (*), with just as many rewards: so there's no incentive for him to leave the consortium,

(facepalm)

i'm starting to see where you have gone wrong...

you at one point say
"then hard-forking all by himself"
"going to mine all by himself"

but then you backtrack by bringing him back into the competition by talking about orphans.

if a pool went at it alone.. there would be no competition. no stales, no orphans, no giving up..

now can you see that it would get every block
now can you see that if he only got 1 block out of 6 in the "consortium competition", he will get 6 out of 6 "on his own"
now can you see that instead of timing an hour and dividing it by how many blocks were solved in competition.. you instead look at the ACTUAL TIME of a block from height to height+1... not height to height+6
sr. member
Activity: 686
Merit: 320
May 13, 2017, 12:14:29 PM
I was thinking maybe there was some unique thing that happens when you stick a bunch of miners in a pool that doesn't happen if they were all solo mining.

actually there are a few things which help.
in layman's terms (simplified, so don't nitpick literally):

say you had to go from "helloworld-0000001" to "helloworld-9999999", hashing each try, where the solution is somewhere in between
solo mining takes 10mill attempts, and each participant does this:
"helloworld-0000001" to "helloworld-9999999" hashing each try (very inefficient)
however, a pool gives each participant:
A: "helloworld-0000001" to "helloworld-2499999" hashing each try
B: "helloworld-2500000" to "helloworld-4999999" hashing each try
C: "helloworld-5000001" to "helloworld-7499999" hashing each try
D: "helloworld-7500000" to "helloworld-9999999" hashing each try

which is efficient...
which at 1-d makes people think that killing POOLS makes it take 4x longer...


but here is the failure...
pool U does "helloWORLD-0000001" to "helloWORLD-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool V does "HELLOworld-0000001" to "HELLOworld-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool W does "helloworld-0000001" to "helloworld-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool X does "HElloworld-0000001" to "HElloworld-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool Y does "HelloWorld-0000001" to "HelloWorld-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool Z does "HelLoWorLd-0000001" to "HelLoWorLd-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
it takes each pool a similar time to get to 9999999, and each would get a solution in between should they not give up
and if you take away pool W,X,Y guess what..
pool Z doing "HelLoWorLd-0000001" to "HelLoWorLd-9999999" hashing each try would NOT suddenly take 4x longer to get to 9999999
because Z is not working on a quarter of the nonce range of the other pools!!!!!!!!!!!!!!!!!

because the work pool Z is doing, 'HelLoWorLd', is not linked to the other 3 pools.

so 2 dimensionally
pool U does "helloWORLD-0000001" to "helloWORLD-9999999" 20min to get to 10mill (average 10min to win)
pool V does "HELLOworld-0000001" to "HELLOworld-9999999" 20min to get to 10mill (average 10min to win)
pool W does "helloworld-0000001" to "helloworld-9999999" 20min to get to 10mill (average 10min to win)
pool X does "HElloworld-0000001" to "HElloworld-9999999" 20min to get to 10mill (average 10min to win)
pool Y does "HelloWorld-0000001" to "HelloWorld-9999999" 20min to get to 10mill (average 10min to win)
pool Z does "HelLoWorLd-0000001" to "HelLoWorLd-9999999" 20min to get to 10mill (average 10min to win)

because they are not LOSING efficiency, pool Z doing "HelLoWorLd-0000001" to "HelLoWorLd-9999999" still takes 20min to get to 10mill (average 10min to win)


now do you want to know the mind-blowing part..
let's say we had 10 minutes of time
you would think that if pool W had 650peta and pool Z had 450peta,
pool Z = 14 minutes due to the hash difference

but
what if i told you that out of the 10 minutes, up to 2 minutes is wasted on propagation, latency, validation, utxo cache.. (note: not the hashing)
so
if pool W had 650peta
if pool Z had 450peta
pool Z = 11min33s due to other factors, because the hash calculation is not based on 10 minutes.. but on only ~8ish minutes (not literally) of hashing occurring per new block to get from 0-9999999 (not literally)

now imagine Z did spv mining.. to save the seconds-to-2-minutes of the non-hashing tasks (propagation, latency, validation, utxo cache.. (note: not the hashing))
Z averages under 11min:33sec

so if Z went at it alone, his average would be UNDER 11:33


so while some are arguing that out of 6 blocks
U wins once, V wins once, W wins once, X wins once, Y wins once, Z wins once..
they want you to believe it takes 60 minutes per pool to solve a block (facepalm), because they only see W having 1 block in an hour

if you actually asked each pool not to give up/stale/orphan.. you would see the average is 10 minutes (spv: 10min average, or 11:33 if they validate/propagate).. but only 1 out of 6 gets to win, thus only 1 gets to be seen.

but if you peel away what gets to be seen and play out scenarios on the pools that are not seen (scenarios where they didn't give up).. you would see it's not 60 minutes

There really isn't any efficiency there. How you assign the nonces doesn't matter, since which nonces will result in a solution is completely random. For example, the nonces that yield a solution for a given block could be 500000, 600000 and 7000000, in which case the distribution you've shown would result in them taking a very long time to get a solution. You could just give each miner the next sequential nonce range as they completed their work and it would be just as "efficient".

The only reason all the pools in your example take the same amount of time to get to 10mil nonces is that you're giving them the exact same hash rate, which isn't reality. They're also each trying to solve completely different blocks (same # but different data). The block they're solving for has their address in it and whatever transactions they decide to put in that block. They're not all racing towards the exact same nonce/solution.

Regardless: can you explain how the entire premise and math that has gone into bitcoin says that 4,300PH at 559,970,892,891 difficulty will yield an average block time of 10 minutes, and yet a pool with 20% of that hash rate will still get an average block time of 10 minutes? Can you provide the math that shows that?
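For reference, here is the math I'm referring to (the standard expected-time relation: at difficulty D, a valid block takes on average D × 2³² hash attempts, so a miner with hash rate H expects one block every D × 2³² / H seconds), with the numbers above plugged in. A sketch only; the real network retargets difficulty every 2016 blocks.

```python
D = 559_970_892_891        # difficulty quoted above
H_NETWORK = 4300e15        # ~4,300 PH/s, quoted above

def expected_block_time(difficulty, hashrate):
    """Average seconds per block: difficulty * 2**32 attempts at `hashrate` H/s."""
    return difficulty * 2**32 / hashrate

t_network = expected_block_time(D, H_NETWORK)
t_pool20 = expected_block_time(D, 0.20 * H_NETWORK)

print(f"whole network: {t_network:.0f} s (~{t_network / 60:.1f} min)")
print(f"20% pool alone: {t_pool20:.0f} s (~{t_pool20 / 60:.1f} min)")
# The 20% pool averages five times longer than the network -- not 10 minutes.
```

At these figures the whole network averages roughly 9.3 minutes per block, and a pool with 20% of the hash rate averages five times that, around 47 minutes.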

hero member
Activity: 770
Merit: 629
May 13, 2017, 11:18:51 AM
I was thinking maybe there was some unique thing that happens when you stick a bunch of miners in a pool that doesn't happen if they were all solo mining.

actually there are a few things which help.
in layman's terms (simplified, so don't nitpick literally):

say you had to go from "helloworld-0000001" to "helloworld-9999999", hashing each try, where the solution is somewhere in between
solo mining takes 10mill attempts, and each participant does this:
"helloworld-0000001" to "helloworld-9999999" hashing each try (very inefficient)
however, a pool gives each participant:
A: "helloworld-0000001" to "helloworld-2499999" hashing each try
B: "helloworld-2500000" to "helloworld-4999999" hashing each try
C: "helloworld-5000001" to "helloworld-7499999" hashing each try
D: "helloworld-7500000" to "helloworld-9999999" hashing each try

which is efficient...
which at 1-d makes people think that killing POOLS makes it take 4x longer...


but here is the failure...
pool U does "helloWORLD-0000001" to "helloWORLD-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool V does "HELLOworld-0000001" to "HELLOworld-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool W does "helloworld-0000001" to "helloworld-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool X does "HElloworld-0000001" to "HElloworld-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool Y does "HelloWorld-0000001" to "HelloWorld-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool Z does "HelLoWorLd-0000001" to "HelLoWorLd-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
it takes each pool a similar time to get to 9999999, and each would get a solution in between should they not give up
and if you take away pool W,X,Y guess what..
pool Z doing "HelLoWorLd-0000001" to "HelLoWorLd-9999999" hashing each try would NOT suddenly take 4x longer to get to 9999999
because Z is not working on a quarter of the nonce range of the other pools!!!!!!!!!!!!!!!!!

because the work pool Z is doing, 'HelLoWorLd', is not linked to the other 3 pools.

so 2 dimensionally
pool U does "helloWORLD-0000001" to "helloWORLD-9999999" 20min to get to 10mill (average 10min to win)
pool V does "HELLOworld-0000001" to "HELLOworld-9999999" 20min to get to 10mill (average 10min to win)
pool W does "helloworld-0000001" to "helloworld-9999999" 20min to get to 10mill (average 10min to win)
pool X does "HElloworld-0000001" to "HElloworld-9999999" 20min to get to 10mill (average 10min to win)
pool Y does "HelloWorld-0000001" to "HelloWorld-9999999" 20min to get to 10mill (average 10min to win)
pool Z does "HelLoWorLd-0000001" to "HelLoWorLd-9999999" 20min to get to 10mill (average 10min to win)
...


Your error is (again) that in your proposal, one is doing cumulative work.  If you are going to do an exhaustive search, here over 10 million potential solutions, and you have done 5 million of them without success, then you have INCREASED your probability of a good answer on the next try from 1 in 10 million to 1 in 5 million.  The more you work on a block, the higher the probability becomes that the next trial will be a winning one.

So having to reset, in this case, is a pain in the butt, because you lose the advantage of cumulative work.  This is because your statistical model (in 20 minutes, you have the answer for sure) is not that of the proof of work.  If you had been working for 19 minutes and 59 seconds on a block, you would KNOW that you will win in the second that follows: your probability to win is now 1, while the probability to win in the first second you started on that block was 1/1200.  In the bitcoin PoW function, this is never the case: the probability of winning after working for 10 hours on a block is exactly the same as the probability of winning in the first second.  This is because there is not "one answer in a set of 10 million" but "a gazillion answers out of a megasupergazillion".
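The difference between the two models can be sketched directly (toy N, for illustration only):

```python
import random

random.seed(7)

N, TRIALS = 1000, 1000

def exhaustive_attempts():
    """Model (1): ONE winner hidden in a space of N, scanned without repeats.
    Past work accumulates; success is certain within N attempts."""
    winner = random.randrange(N)
    return winner + 1            # attempts needed, scanning 0, 1, ..., winner

def memoryless_attempts():
    """Model (2), like PoW: every attempt is an independent 1/N lottery.
    After k failures, the odds on the next attempt are STILL 1/N."""
    k = 1
    while random.random() >= 1 / N:
        k += 1
    return k

avg_exh = sum(exhaustive_attempts() for _ in range(TRIALS)) / TRIALS
avg_mem = sum(memoryless_attempts() for _ in range(TRIALS)) / TRIALS
print(f"exhaustive ~N/2: {avg_exh:.0f}, memoryless ~N: {avg_mem:.0f}")
# Resetting mid-way loses progress in model (1); in model (2) there is
# no progress to lose, which is why a pool switching blocks loses nothing.
```

The averages come out near N/2 and N respectively, confirming that only the exhaustive model rewards "time already spent" on a block.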

I know it's in vain, but I am fascinated.
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
May 13, 2017, 10:50:28 AM
Block validation speed depends on a combination of software, hardware, optimisations, block complexity etc. An average current 1MB block is about 3000 inputs and 30,000 sigops, and on my pool's heavily customised coin daemon and server hardware it takes 70ms. This is done in parallel to the functioning of bitcoind, but it cannot process more than one block at a time; if it did, the memory requirements of doing so would blow all out of proportion for a sigop-heavy block. Older versions of the core client were much slower (such as 0.12, which BU is based on). This doesn't take into account the time to generate new work for the pool, which adds another 70ms.

70 milliseconds?

So, then... negligible, right?  As far as orphan rates.

-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
May 13, 2017, 10:48:37 AM
Block validation speed depends on a combination of software, hardware, optimisations, block complexity etc. An average current 1MB block is about 3000 inputs and 30,000 sigops, and on my pool's heavily customised coin daemon and server hardware it takes 70ms. This is done in parallel to the functioning of bitcoind, but it cannot process more than one block at a time; if it did, the memory requirements of doing so would blow all out of proportion for a sigop-heavy block. Older versions of the core client were much slower (such as 0.12, which BU is based on). This doesn't take into account the time to generate new work for the pool, which adds another 70ms.