Topic: Gold collapsing. Bitcoin UP. - page 151. (Read 2032248 times)

sr. member
Activity: 420
Merit: 262
July 04, 2015, 10:31:27 PM
I'd love to see you show with mathematics that under certain assumptions the mining will centralize. Why don't you give it a try?

Just model the relative ROI, even assuming the same cost per hash (or exacerbate it by applying the selfish-mining attack math), with a higher orphan rate for those who don't have as much bandwidth and verification resources as centralized mining (which can amortize those fixed per-block costs over a greater hashrate). If you want, insert IBLT to rectify that orphan-rate disparity, which is obfuscated centralization (e.g. in terms of your upthread definition of "entity", not "node").
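Here is a minimal sketch of the kind of model I mean (my own illustration: I assume block arrivals are Poisson, each miner's orphan probability grows with its total propagation-plus-verification delay, and revenue is proportional to hashrate share times the non-orphan probability; the shares and delay figures are invented for illustration):

```python
import math

T = 600.0  # average block interval, seconds

# Hypothetical miners: (hashrate share, propagation + verification delay in seconds).
# Both the shares and the delays are made-up numbers for illustration.
miners = {
    "centralized_pool": (0.30, 2.0),    # plenty of bandwidth, fast verification
    "small_miner":      (0.05, 20.0),   # limited bandwidth and verification resources
}

results = {}
for name, (share, delay) in miners.items():
    # With Poisson block arrivals, the chance that a competing block appears
    # during this miner's delay window is roughly 1 - exp(-delay / T).
    orphan_rate = 1.0 - math.exp(-delay / T)
    # Revenue per unit of hashrate, relative to a hypothetical zero-delay miner.
    results[name] = 1.0 - orphan_rate
    print(f"{name}: orphan rate ~ {orphan_rate:.4f}, relative revenue/hash ~ {results[name]:.4f}")

print("small miner's ROI relative to the pool:",
      round(results["small_miner"] / results["centralized_pool"], 4))
```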
legendary
Activity: 1162
Merit: 1007
July 04, 2015, 10:23:52 PM
Quote
Your egregious mathematical error (myopia) of course is that you assume k is the same for all miners. And this is why you totally miss the centralization caused by your nonproof.

I agree that in reality, k will not be constant across miners. What I posted was a simplified model so that the effect can be easily isolated and understood. It would be interesting to try to model a distribution of verification times, among other details. In any case, I suspect the important result will be the same.

I'd love to see you show with mathematics that under certain assumptions the mining will centralize. Why don't you give it a try?
sr. member
Activity: 420
Merit: 262
July 04, 2015, 10:14:58 PM
The fraction of time the miner does not know whether the most recent block was valid is clearly τ / T, which means the fraction of the time the miner does know is 1 - τ / T = (T - τ) / T.

You are attempting to develop an equation for the orphan rate relative to the propagation delay (which includes verification delay), but this can't be done without the context of how computation (block discovery) is distributed across the network, which some argue is modeled by a Poisson process.
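As a rough illustration of where that Poisson assumption enters, here is a tiny Monte Carlo sketch (my own, with an invented delay figure): the simulated orphan rate only converges to the closed-form 1 - exp(-delay/T) because inter-block times are assumed exponential.

```python
import math
import random

T = 600.0      # average block interval, seconds
delay = 15.0   # assumed propagation + verification delay, seconds (illustrative)
trials = 200_000

random.seed(0)
orphaned = sum(
    1 for _ in range(trials)
    # Time until some competitor finds the next block: exponential under a
    # Poisson model of block discovery across the rest of the network.
    if random.expovariate(1.0 / T) < delay
)

print("simulated orphan rate:", orphaned / trials)
print("closed form 1 - exp(-delay/T):", 1.0 - math.exp(-delay / T))
```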

... miners will be motivated to improve how quickly their nodes can perform the ECDSA operations needed to verify blocks.

I had argued upthread that bandwidth limitation (propagation delay) is the only justification for not expending more resources on verification instead of producing empty (0-txn) blocks (when blocks are mostly funded by txn fees, which may not be the case now):

This delay is a form of propagation delay and thus drives up the orphan rate for miners with fewer resources. Afaik, proportional increases in orphan rate are more costly than proportional decreases in hashrate, because the math compounds (with diminishing effect) on each subsequent block of the orphaned chain. Thus this action doesn't appear to make economic sense unless it is explained as a lack of bandwidth rather than a lack of desire to apply more of their resources to processing the txns than to hashrate. If bandwidth is the culprit, then it argues against larger block sizes.

This poll is inaccurate because voters can't change their vote!! Peter R posts nonsense, then the Yes votes go bonkers, and those voters can't change their vote after Peter R has been thoroughly refuted.
sr. member
Activity: 420
Merit: 262
July 04, 2015, 10:03:28 PM
Let τ be the time it takes to verify a typical block and let T be the average block time (10 min).  The fraction of time the miner does not know whether the most recent block was valid is clearly τ / T; the fraction of the time the miner does know is 1 - τ / T = (T - τ) / T.  We will assume that every miner applies the same policy of producing empty SPV blocks before they've verified, and blocks of size S' after they've verified.  

Under these conditions, the expectation value of the blocksize is equal to the expectation value of the blocksize during the time a miner doesn't know, plus the expectation value of the blocksize during the time he does know:

    Seffective = ~0 [(τ / T)]   +  S' [(T - τ) / T]          
                 = S' [(T - τ) / T]                          (Eq. 1)

The time, τ, it takes to process a block is not constant, but rather depends linearly** on the size of the block.  Approximating the size of the previous block as S', we get:

    τ = k S'

....

QED. We've shown that there exists a limit on the maximum value of the average blocksize, due to the time it takes to verify a block, irrespective of any protocol enforced limits.
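(For readers following along, the steps elided by the "...." above can be reconstructed from the stated definitions; this is a reconstruction under those definitions, not Peter R's exact text: substitute τ = k S' into Eq. 1 and maximize over S'.)

```latex
S_{\mathrm{effective}}(S') \;=\; S'\,\frac{T-\tau}{T} \;=\; S'\,\frac{T - k S'}{T}

\frac{\mathrm{d}S_{\mathrm{effective}}}{\mathrm{d}S'} \;=\; \frac{T - 2 k S'}{T} \;=\; 0
\quad\Longrightarrow\quad S' \;=\; \frac{T}{2k}

S_{\mathrm{effective}}^{\,\max} \;=\; \frac{T}{2k}\cdot\frac{T - T/2}{T} \;=\; \frac{T}{4k}
```

which reproduces the T / (4 k) figure quoted further down the page.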

Your egregious mathematical error (myopia) of course is that you assume k is the same for all miners. And this is why you totally miss the centralization caused by your nonproof.
legendary
Activity: 1512
Merit: 1057
SpacePirate.io
July 04, 2015, 09:57:50 PM
Am I being sensitive or is this an unnecessarily spiteful reply from Greg Maxwell?:

Quote
...
You've shown that you can throw a bunch of symbolic markup and mix in a lack of understanding and measurement and make a pseudo-scientific argument that will mislead a lot of people, and that you're willing to do so or too ignorant to even realize what you're doing.

I agree with part of the quote. Also, the blocksize debate needs to go away; people should stop throwing their two cents into a problem Gavin already has a good solution for.

What we really need solutions for...
-Greater adoption with ease of use (Generation X and earlier are still using PayPal because it's easier)
-Chinese mining pools are greed-based and harm Bitcoin and the blockchain
-When people think of bitcoins, they assume drugs and other shady activities

Enough.... sorry, no idea who you are, so no offense, but I'm tired of seeing these blocksize posts show up on Reddit.
sr. member
Activity: 420
Merit: 262
July 04, 2015, 09:38:54 PM
actually, there are two names for what he's done.

one, from a moral standpoint, and one from a legal standpoint.  i'll let you figure out what those names are.

Unethical and extortion? (my pops is an attorney; perhaps I inherited some of it ... and now you know why I won't go on Skype with you ...)
legendary
Activity: 1162
Merit: 1007
July 04, 2015, 09:17:54 PM
Am I being sensitive or is this an unnecessarily spiteful reply from Greg Maxwell?:

Quote
...
You've shown that you can throw a bunch of symbolic markup and mix in a lack of understanding and measurement and make a pseudo-scientific argument that will mislead a lot of people, and that you're willing to do so or too ignorant to even realize what you're doing.
It's a strategy (implemented unconsciously by many) to limit participation to a select few.  Unfortunately, it tends to create a situation where only similar personalities contribute, which is where we are today with the core devs, Gavin excepted.

Thanks.  That makes me feel better.

...in the last 27,027 blocks (basically since jan 1st 2015), f2pool-attributed blocks: 5241, of which coinbase-only: 139
For antpool, this is 4506 / 246.  

It looks like the average time these pools spend mining empty blocks before switching to non-empty blocks is 16 seconds (F2Pool) and 35 seconds (AntPool).  Like you said, why are these numbers so big if processing the blocks is so fast?
legendary
Activity: 1246
Merit: 1010
July 04, 2015, 09:06:05 PM
Am I being sensitive or is this an unnecessarily spiteful reply from Greg Maxwell?:

Quote
...
You've shown that you can throw a bunch of symbolic markup and mix in a lack of understanding and measurement and make a pseudo-scientific argument that will mislead a lot of people, and that you're willing to do so or too ignorant to even realize what you're doing.

It's a strategy (implemented unconsciously by many) to limit participation to a select few.  Unfortunately, it tends to create a situation where only similar personalities contribute, which is where we are today with the core devs, Gavin excepted.


I read his 21 ms validation number, but it's weird, because just weeks ago I was wondering why it was taking so long to sync a measly week of blockchain data, and I came to the conclusion that either the P2P code is complete garbage (compared to BitTorrent, for example) OR the validation cost is high (given my fan speed, I assumed it was validation).  And if validation is so fast, why would these pools have custom code to skip it?

It will be interesting to look at the stats-gathering mode he mentions.
sr. member
Activity: 384
Merit: 258
July 04, 2015, 08:53:15 PM
What this shows is that since the subtracted term, τ(1 - Pvalid), is strictly positive, the miner's expectation of revenue is maximized if the time to verify the previous block is minimized (i.e., if τ is as small as possible).
Actually, it is also maximized if Pvalid == 1 (or Pvalid as close as possible to 1).
How do we reach this result? My humble proposal: make a deal with a few mining pools. Participants never push invalid blocks to other participants, and blocks received from the cartel aren't checked before hashing on top of them.

Conclusion: As the average blocksize gets larger, the time to verify the previous block also gets larger. This means that miners will be motivated to improve how quickly their nodes can perform the ECDSA operations needed to verify blocks or that they will be more motivated to trick the system.
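A tiny numeric illustration of that Pvalid point (my own sketch; I am assuming the expectation takes the form R · (T - τ(1 - Pvalid)) / T, which is consistent with the "subtracted term" quoted above but is not necessarily Peter R's exact formula):

```python
T = 600.0  # average block time, seconds

def expected_revenue(block_reward, tau, p_valid):
    # Assumed form: revenue discounted by the expected time wasted mining
    # on top of a block that later proves invalid.
    return block_reward * (T - tau * (1.0 - p_valid)) / T

for tau in (15.0, 35.0):
    for p_valid in (0.99, 0.999, 1.0):
        r = expected_revenue(25.0, tau, p_valid)   # 25 BTC reward, illustrative
        print(f"tau = {tau:4.0f} s   Pvalid = {p_valid:.3f}   <revenue> = {r:.6f} BTC")
```

Under that assumed form the expectation becomes flat in τ once Pvalid = 1, which is exactly the incentive the cartel idea above exploits.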

EDIT:
Quote from: Peter R
Am I being sensitive or is this an unnecessarily spiteful reply from Greg Maxwell?
Well, he seems a bit upset for now ;) but I think his message is close to what I've tried to suggest with my comment.
We must analyze all the possibilities before jumping to a conclusion that backs our initial hypothesis. The point is valid for all of us, whatever our opinion on this blocksize issue.
legendary
Activity: 2002
Merit: 1040
July 04, 2015, 08:50:41 PM
Am I being sensitive or is this an unnecessarily spiteful reply from Greg Maxwell?:

Quote
...
You've shown that you can throw a bunch of symbolic markup and mix in a lack of understanding and measurement and make a pseudo-scientific argument that will mislead a lot of people, and that you're willing to do so or too ignorant to even realize what you're doing.

He sounds bitter.
legendary
Activity: 1162
Merit: 1007
July 04, 2015, 08:46:56 PM
Am I being sensitive or is this an unnecessarily spiteful reply from Greg Maxwell?:

Quote
...
You've shown that you can throw a bunch of symbolic markup and mix in a lack of understanding and measurement and make a pseudo-scientific argument that will mislead a lot of people, and that you're willing to do so or too ignorant to even realize what you're doing.
sr. member
Activity: 420
Merit: 262
July 04, 2015, 08:35:33 PM
The block size debate is iatrogenic: any 'cure' is worse than the illness.


Freemarket, in 2010 everyone could buy thousands of Bitcoin for almost nothing. What hindered it was, besides being relatively unknown at that point in time, that few people actually believed cryptocurrencies could be a thing. With Monero it's almost the same, the difference being that it's swimming in a sea of shitcoins and not many can see its potential. It's the second CryptoNote coin (the first was heavily premined), and it was launched with an MIT license, so there is absolutely no merit to claims that Monero stole anything; it's like saying Ubuntu stole code from Debian, or that Apple stole from FreeBSD. So even though Monero's market cap is low, few people will actually bother buying a large stack because it is not a 100% certain bet, but it's clear there is nothing close to Monero, as Zerocash/Zerocoin is vaporware and Bitcoin sidechains are like dragons.

My point about my personal preference, where I employed the word "clusterfuck" to describe the hundreds of CryptoNote clones and noted that Monero's marketing (on these forums) to some extent had to vilify other CN clones in order to assert its dominance over them, is that I would instead have preferred to add features to CN that would naturally assert dominance over the other CN clones. It felt to me like Monero used strong-armed community tactics to gain more critical mass than the other CN clones, yet not so much capabilities innovation (rather a lot of refinement, which I assume includes a lot of fine-grained performance innovations). And I am nearly certain this (the lack of outstanding capabilities other than the on-chain rings) is why Monero is not more widely adopted and will stunt Monero's growth (and I say this with specific knowledge of capabilities that I think will subsume Monero very soon). And that is precisely why I would not prematurely release those features in a whitepaper for thousands of clones to go implement simultaneously. And yet people criticize me for not spilling the beans before the software is cooked.

The marketing battle is not against the other "shitcoins", i.e. differentiating Monero from shit. Rather, the battle is against Bitcoin Core over who is going to own the chain that most of the BTC migrates to.

Also, most of the interest in altcoins is not ideological but speculative. We are in a down market until BTC bottoms this October, so Monero is mostly getting ideological investment, not speculative fever. This will turn after October, but it might be too late for Monero depending on the competition that might arise in the interim. However, I tend to think Monero will get a big boost after October in spite of any new competition, because it has a more stable codebase. As smooth pointed out, the greatest risk of breakage is implementation error. It would behove Monero to be the first CN coin to apply my suggested fix to ensure that combinatorial analysis of partially overlapping rings can't occur.

P.S. CN is very important.
legendary
Activity: 1246
Merit: 1010
July 04, 2015, 08:31:40 PM
If the txn input states that block B contains the UTXO, then the invalidity proof is simply to supply B, right?
That's one way to do it; however, even this can be shortened.

Right now, with all blocks < 1 MB, it's not really a big deal to supply the entire block to prove that the referenced transaction doesn't exist, but it'd be nice to not require the entire block, especially when blocks are larger.

By adding a rule that requires all the transactions in new blocks to be ordered by their hash, you don't need to supply the entire block to prove that the transaction doesn't exist.

It would be good to have that ordering requirement in place before blocks are allowed to grow, to make sure that fraud-proof size is bounded.

Makes sense... I'd recommend a quick line or two in your blog to explain that:

"In order to reduce the size of the fraud proof needed to show that a transaction input does not exist, additional information must be added to Bitcoin blocks to indicate the block which is the source of each outpoint used by every transaction in the block.

A node can provide the source block to the SPV client to prove or disprove the existence of this transaction.  But with a few more changes we can provide a subset of the source block.  This may become very important if block sizes increase.
"

legendary
Activity: 1764
Merit: 1002
July 04, 2015, 08:03:38 PM
I guess we could estimate it by looking at the ratio of empty blocks to non-empty blocks produced by F2Pool.

If someone wants to tabulate that data, I'll update my post.  

If by empty you mean coinbase-only, then in the last 27,027 blocks (basically since jan 1st 2015), f2pool-attributed blocks: 5241, of which coinbase-only: 139
For antpool, this is 4506 / 246.  Not sure if that's all the info you'd need, though.
See also: Empty blocks [bitcointalk.org]

is there a way for you to tell what % of blocks have been full over the last 3 weeks and compare that to the prior period going back to, say, Jan 1?

I'd include those in the 900+ and 720-750 kB ranges as being full.
legendary
Activity: 1162
Merit: 1007
July 04, 2015, 07:41:29 PM
I guess we could estimate it by looking at the ratio of empty blocks to non-empty blocks produced by F2Pool.

If someone wants to tabulate that data, I'll update my post.  

If by empty you mean coinbase-only, then in the last 27,027 blocks (basically since jan 1st 2015), f2pool-attributed blocks: 5241, of which coinbase-only: 139
For antpool, this is 4506 / 246.  Not sure if that's all the info you'd need, though.
See also: Empty blocks [bitcointalk.org]

Awesome!  Thanks!!

We can estimate the average effective time it takes to process the blocks, then, as

    τ ~= T (Nempty / Nnotempty)
      ~= T (Nempty / (Ntotal - Nempty))

F2Pool:

      ~= (10 min) x [139 / (5241 - 139)] = 16.3 seconds

AntPool:

      ~= (10 min) x [246 / (4506 - 246)] = 34.6 seconds
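For anyone who wants to re-run these numbers, a quick sketch of the calculation above using the counts quoted earlier in the thread:

```python
T = 600.0  # average block time, seconds

def estimated_tau(n_empty, n_total):
    # tau ~= T * N_empty / (N_total - N_empty), per the approximation above.
    return T * n_empty / (n_total - n_empty)

print("F2Pool:  %.1f s" % estimated_tau(139, 5241))   # ~16.3 s
print("AntPool: %.1f s" % estimated_tau(246, 4506))   # ~34.6 s
```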
  
legendary
Activity: 1400
Merit: 1013
July 04, 2015, 07:41:24 PM
If the txn input states that block B contains the UTXO, then the invalidity proof is simply to supply B, right?
That's one way to do it; however, even this can be shortened.

Right now, with all blocks < 1 MB, it's not really a big deal to supply the entire block to prove that the referenced transaction doesn't exist, but it'd be nice to not require the entire block, especially when blocks are larger.

By adding a rule that requires all the transactions in new blocks to be ordered by their hash, you don't need to supply the entire block to prove that the transaction doesn't exist.

It would be good to have that ordering requirement in place before blocks are allowed to grow, to make sure that fraud-proof size is bounded.
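To make the hash-ordering idea concrete, here is a rough sketch (my own illustration, not the gist's exact construction): if a block's transactions are sorted by txid, two adjacent leaves that bracket the claimed txid, together with their Merkle branches, are enough to show the transaction is not in the block, so the whole block never needs to be supplied.

```python
import bisect
import hashlib

def txid(data: bytes) -> bytes:
    # Double SHA-256, as Bitcoin uses for transaction ids.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Toy block whose consensus rule requires transactions sorted by txid.
block_txids = sorted(txid(bytes([i])) for i in range(8))

def nonexistence_witness(sorted_txids, target):
    """Return the adjacent txids that bracket `target` in the sorted block.

    Together with the Merkle branches for those two leaves (omitted here),
    this shows `target` appears nowhere in the block.
    """
    i = bisect.bisect_left(sorted_txids, target)
    if i < len(sorted_txids) and sorted_txids[i] == target:
        raise ValueError("transaction exists in the block")
    left = sorted_txids[i - 1] if i > 0 else None    # None: target sorts before every tx
    right = sorted_txids[i] if i < len(sorted_txids) else None
    return left, right

missing = txid(b"not in this block")
left, right = nonexistence_witness(block_txids, missing)
print("bracketing leaves:",
      left.hex()[:16] if left else None,
      right.hex()[:16] if right else None)
```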
legendary
Activity: 1246
Merit: 1010
July 04, 2015, 07:31:33 PM
Thoughts on how fraud proofs could make it possible for SPV clients to reject an invalid chain, even if the invalid chain contains the most PoW:

https://gist.github.com/justusranvier/451616fa4697b5f25f60

(some modifications to the Bitcoin protocol required)

Your modification to require the inputs to state which block they come from is a clever way to reduce the "addr does not exist" proof.  But I don't understand your subsequent complexity.  If the txn input states that block B contains the UTXO, then the invalidity proof is simply to supply B, right?
hero member
Activity: 686
Merit: 500
FUN > ROI
July 04, 2015, 07:23:37 PM
I guess we could estimate it by looking at the ratio of empty blocks to non-empty blocks produced by F2Pool.

If someone wants to tabulate that data, I'll update my post.  

If by empty you mean coinbase-only, then in the last 27,027 blocks (basically since jan 1st 2015), f2pool-attributed blocks: 5241, of which coinbase-only: 139
For antpool, this is 4506 / 246.  Not sure if that's all the info you'd need, though.
See also: Empty blocks [bitcointalk.org]
legendary
Activity: 1162
Merit: 1007
July 04, 2015, 07:07:18 PM
Some numbers:

Assume it takes on average 30 seconds to verify 1 MB of typical transactional data (k =0.5 min / MB).  Since T = 10 min, this means the maximum average blocksize (network capacity) is limited to:

    Seffective   =   T / (4 k)   =   (10 min) / (4 x 0.5 min / MB)
                   = 5 MB.

QED. We've shown that there exists a limit on the maximum value of the average blocksize, due to the time it takes to verify a block, irrespective of any protocol enforced limits.  

Great work Peter, but do we have any empirical evidence for the 30 seconds? Seems surprisingly high and I would have guessed just a few seconds.

No, I just made it up.  I think I'll change it to 15 seconds, as I agree it's probably too high.  I guess we could estimate it by looking at the ratio of empty blocks to non-empty blocks produced by F2Pool.

If someone wants to tabulate that data, I'll update my post.  
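For what it's worth, plugging both the original 30 s/MB guess and the revised 15 s/MB guess into the T / (4 k) expression quoted above (a quick check, nothing more):

```python
T = 10.0  # average block time, minutes

for secs_per_mb in (30.0, 15.0):
    k = secs_per_mb / 60.0        # verification cost, min / MB
    s_max = T / (4.0 * k)         # maximum average blocksize, MB
    print(f"k = {secs_per_mb:.0f} s/MB  ->  max S_effective = {s_max:.1f} MB")
```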
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
July 04, 2015, 07:02:59 PM
Some numbers:

Assume it takes on average 30 seconds to verify 1 MB of typical transactional data (k =0.5 min / MB).  Since T = 10 min, this means the maximum average blocksize (network capacity) is limited to:

    Seffective   =   T / (4 k)   =   (10 min) / (4 x 0.5 min / MB)
                   = 5 MB.

QED. We've shown that there exists a limit on the maximum value of the average blocksize, due to the time it takes to verify a block, irrespective of any protocol enforced limits.  

Great work Peter, but do we have any empirical evidence for the 30 seconds? Seems surprisingly high and I would have guessed just a few seconds.