
Topic: Gold collapsing. Bitcoin UP. - page 150. (Read 2032248 times)

legendary
Activity: 1162
Merit: 1007
July 05, 2015, 04:34:07 AM
Thanks Mixles!  Do you have any thoughts on why F2Pool and AntPool are mining so many empty blocks if the previous block can be validated so quickly?

They gain a 600ms advantage on the verification, then another few seconds on the download, depending on their bandwidth. 600 seconds average in a block, so if they manage to get an advantage of 10 seconds, that's 10/600 or ~1.7% greater profitability than other miners. There will be less incentive for that when the reward halvings happen (fees double relative to subsidy). *shrug* Maybe they are also making some political point, or simply can't be bothered dealing with the txns. But the recent orphans should have cost them a bit. They are playing their own sort of game theory, and not always winning.

I'm not sure I'm following.  Based on the ratio of empty blocks to non-empty blocks, it looks like the mean time to process a block is 16 sec and 35 sec for F2Pool and AntPool, respectively.  Are you suggesting they're "faking it"?
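The 16 s / 35 s figures can be back-of-the-envelope checked: with Poisson block arrivals, a pool that is "blind" (mining header-only) for t seconds after each block produces an empty block with probability 1 - e^(-t/600), so t can be inferred from the empty-block fraction. A minimal sketch in Python (the fractions below are illustrative inputs, not measurements from this thread):

```python
import math

BLOCK_INTERVAL = 600.0  # mean seconds between blocks

def mean_blind_time(empty_fraction):
    """Infer the mean time a pool mines empty blocks from the fraction of
    its blocks that are empty, assuming Poisson block arrivals:
    P(empty) = 1 - exp(-t / 600)  =>  t = -600 * ln(1 - P(empty))."""
    return -BLOCK_INTERVAL * math.log(1.0 - empty_fraction)

# Hypothetical empty-block fractions chosen to illustrate the inversion:
for pool, frac in [("F2Pool", 0.026), ("AntPool", 0.057)]:
    print(pool, round(mean_blind_time(frac), 1))
```

An empty-block fraction of ~2.6% maps to ~16 s of blind mining and ~5.7% to ~35 s, consistent with the numbers quoted above.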

member
Activity: 63
Merit: 11
July 05, 2015, 04:17:20 AM
Thanks Mixles!  Do you have any thoughts on why F2Pool and AntPool are mining so many empty blocks if the previous block can be validated so quickly?

They gain a 600ms advantage on the verification, then another few seconds on the download, depending on their bandwidth. 600 seconds average in a block, so if they manage to get an advantage of 10 seconds, that's 10/600 or ~1.7% greater profitability than other miners. There will be less incentive for that when the reward halvings happen (fees double relative to subsidy). *shrug* Maybe they are also making some political point, or simply can't be bothered dealing with the txns. But the recent orphans should have cost them a bit. They are playing their own sort of game theory, and not always winning.
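The head-start arithmetic above can be sketched directly (10/600 is closer to 1.7% than 2%). A toy Python calculation, assuming expected revenue is simply proportional to the fraction of each block interval spent mining on the latest tip:

```python
BLOCK_INTERVAL = 600.0  # average seconds per block

def revenue_edge(verify_saving_s, download_saving_s):
    """Extra expected revenue share from a per-block head start made up of
    skipped verification plus faster download (both in seconds)."""
    return (verify_saving_s + download_saving_s) / BLOCK_INTERVAL

# 0.6 s verification saving plus ~9.4 s download saving -> ~1.7% edge
print(f"{revenue_edge(0.6, 9.4):.1%}")
```

The split between verification and download savings is illustrative; only the 10-second total comes from the post above.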
legendary
Activity: 1162
Merit: 1007
July 05, 2015, 04:07:50 AM
...we are talking less than 100ms even without the fix.

Thanks Mixles!  Do you have any thoughts on why F2Pool and AntPool are mining so many empty blocks if the previous block can be validated so quickly?
member
Activity: 63
Merit: 11
July 05, 2015, 04:04:46 AM
Great point.  Does anyone know how Bitcoin Core currently works in this regard?  Is every transaction in a block (re-)verified?  Or are the transactions first hashed and, if matched to a TX in mempool, not re-verified, thus saving time?

It would be stupid to forward unverified txns, and it would also be stupid to verify them twice. Easy to fix, so there's no reason for it to verify twice.

https://github.com/bitcoin/bitcoin/pull/5835

Thanks.  So it looks like the fix was implemented this February, and that now TXs are not re-verified if they're already in mempool?

Not sure if any solution is merged yet (see my edit above). But we are talking less than 600ms even without the extra speedup, or 10ms with the speedup.
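The idea being discussed — skip re-running expensive script checks for transactions already validated on mempool acceptance — can be sketched as follows. This is a toy illustration, not Bitcoin Core's actual implementation (Core caches at the signature level rather than per whole transaction), and `verify_tx_scripts` is a hypothetical stand-in for full script/signature validation:

```python
# Toy sketch of validation caching: remember which txids passed full
# verification at mempool-acceptance time, and on block arrival run the
# expensive checks only for transactions not seen before.

verified_txids = set()  # populated when txns are accepted into the mempool

def verify_tx_scripts(tx):
    """Stand-in for expensive signature/script validation."""
    return True

def accept_to_mempool(tx):
    if verify_tx_scripts(tx):
        verified_txids.add(tx["txid"])

def connect_block(block_txs):
    """Validate a new block, skipping script checks for cached txids.
    Returns (valid, number_of_txs_fully_checked)."""
    checked = 0
    for tx in block_txs:
        if tx["txid"] not in verified_txids:
            if not verify_tx_scripts(tx):
                return False, checked
            checked += 1
    return True, checked
```

With most of a block's transactions already in the mempool, only the handful of unseen ones pay the full verification cost, which is why the per-block figure drops toward the hashing/lookup time.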
legendary
Activity: 1162
Merit: 1007
July 05, 2015, 04:03:07 AM
Great point.  Does anyone know how Bitcoin Core currently works in this regard?  Is every transaction in a block (re-)verified?  Or are the transactions first hashed and, if matched to a TX in mempool, not re-verified, thus saving time?

It would be stupid to forward unverified txns, and it would also be stupid to verify them twice. Easy to fix, so there's no reason for it to verify twice.

https://github.com/bitcoin/bitcoin/pull/5835

Thanks.  So it looks like the fix was implemented this February, and that now TXs are not re-verified if they're already in mempool?
member
Activity: 63
Merit: 11
July 05, 2015, 03:56:16 AM
Great point.  Does anyone know how Bitcoin Core currently works in this regard?  Is every transaction in a block (re-)verified?  Or are the transactions first hashed and, if matched to a TX in mempool, not re-verified, thus saving time?

It would be stupid to forward unverified txns, and it would also be stupid to verify them twice. Easy to fix, so there's no reason for it to verify twice.

About 600 milliseconds to verify a 1MB block, will be reduced to about 10 ms

https://github.com/bitcoin/bitcoin/pull/6077

https://github.com/bitcoin/bitcoin/pull/5835
legendary
Activity: 1162
Merit: 1007
July 05, 2015, 03:54:23 AM
I see no problem with SPV mining while verifying the previous block. It would also make my above suggestion pointless.

Yes.

Moreover, I don't see why the time spent verifying is very significant. The vast majority of transactions would have been verified by a full node before being included in the block, and then it is just a case of checking the hash. The verification time for a block would then be insignificant relative to its download time.

Great point.  Does anyone know how Bitcoin Core currently works in this regard?  Is every transaction in a block (re-)verified?  Or are the transactions first hashed and, if matched to a TX in mempool, not re-verified, thus saving time?
member
Activity: 63
Merit: 11
July 05, 2015, 03:52:01 AM
I see no problem with SPV mining while verifying the previous block. It would also make my above suggestion pointless.

Yes.

Moreover, I don't see why the time spent verifying would be significant. The vast majority of transactions would have been verified by a full node before being included in the block, and then it is just a case of checking the hash, and looking up that the txn was previously mempool verified. The verification time for a block should be insignificant relative to its download time (unless it's a flood attack of new txns by a miner). Or am I missing something?
legendary
Activity: 1162
Merit: 1007
July 05, 2015, 03:27:37 AM
We just need someone to figure out how to constantly feed them invalid blocks. ;)

I'm surprised to hear this from you, Holliday.  Can you explain why you think mining on the blockheader during the short time it takes to validate the block is bad?  It is clearly the profit-maximizing strategy, and I also believe it is ethically sound (unlike replace-by-fee).  

I personally don't see much wrong with replace-by-fee...

I'd like to hear Holliday's take on replace-by-fee. And DeathAndTaxes take on this too (where did he go??).
legendary
Activity: 1162
Merit: 1007
July 05, 2015, 03:25:45 AM
Clearly F2Pool messed up.  And they lost 100 BTC because of it.  But that wasn't for SPV mining during the short amount of time it takes to process the previous block; it was from not bothering to validate the previous block at all.

Right, and that's the harmful behavior which led to my response above.

SPV mining while the previous block is being verified is the profitable strategy, as shown here.  It also serves to produce "defensive blocks" that limit the effective blocksize without the need for a protocol-enforced blocksize limit, as discussed here.

I see no problem with SPV mining while verifying the previous block. It would also make my above suggestion pointless.

Cool.  Seems we agree then.
legendary
Activity: 1162
Merit: 1007
July 05, 2015, 03:15:17 AM
We just need someone to figure out how to constantly feed them invalid blocks. ;)

I'm surprised to hear this from you, Holliday.  Can you explain why you think mining on the blockheader during the short time it takes to validate the block is bad?  It is clearly the profit-maximizing strategy, and I also believe it is ethically sound (unlike replace-by-fee).  

Are they mining on the blockheader during the short time it takes to validate the block, or are they simply skipping validation entirely?

Miners are supposed to secure the network, ehh? If they are just blindly mining on whatever they are fed, they aren't doing a very good job, are they?

Clearly F2Pool messed up.  And they lost 100 BTC because of it.  But that wasn't for SPV mining during the short amount of time it takes to process the previous block; it was for not bothering to validate the previous block at all.

SPV mining while the previous block is being verified is the profitable strategy, as shown here.  It also serves to produce "defensive blocks" that limit the effective blocksize without the need for a protocol-enforced blocksize limit, as discussed here.
legendary
Activity: 4690
Merit: 1276
July 05, 2015, 03:13:11 AM
We just need someone to figure out how to constantly feed them invalid blocks. ;)

I'm surprised to hear this from you, Holliday.  Can you explain why you think mining on the blockheader during the short time it takes to validate the block is bad?  It is clearly the profit-maximizing strategy, and I also believe it is ethically sound (unlike replace-by-fee).  

I personally don't see much wrong with replace-by-fee, but I'm old-school and believe that everyone who uses native Bitcoin should do so based on confirmations, tuning the number of confirmations one needs based on the value and importance of a given transaction, operational aspects of the functioning system at a given time, etc. This was, to me, always a fundamental concept of Bitcoin.  Trying to mold Bitcoin into a real-time system has resulted in problems and hassles which simply do not need to exist, and that is even more the case when sidechains can proxy Bitcoin via more purpose-oriented instant-transaction solutions.

RBF has been described as 'vandalism'.  I don't see it that way as much as a timely and necessary lesson delivered in a way that will make more people perk up their ears and listen before it is too late (if it is not already).  Irrespective of Satoshi's writings, I consider SPV to be fraught with risks and dangers.

The only possible upside here is that perhaps these dick-head miners will pay the ultimate price and that we'll be more-or-less forced to entirely re-do (and more closely tie) the mining and transfer functions of the network.  In doing so, throwing sha256 out on its ass as a side-effect would be quite positive to my way of thinking.

legendary
Activity: 2968
Merit: 1198
July 05, 2015, 02:59:28 AM
My point about my personal preference where I employed the word "clusterfuck" to describe the hundreds of Cryptonote clones and that Monero's marketing (on these forums) to some extent had to vilify other CN clones in order to assert its dominance of CN clones

I can see why one might want to spin it that way. In fact Monero's dominance was largely assured by the fact that it was the first (even remotely) fairly launched cryptonote coin. The 82% ninja premine fraud was never going to fly. None of the others were sufficiently better in terms of features, leadership, community or anything else, to overcome that lead.

Quote
is instead I would have preferred to add features to CN that would naturally assert dominance over other CN clones.

Again, the dominance was never really in question, but it also isn't clear that "features" are needed (or at least that they don't need to be rushed out in some sort of race) or that a feature war is the way to go. A large part of the user base simply wants a non-transparent blockchain with a credibly reviewed, sufficiently refined, and well maintained (and therefore, for these three reasons, relatively secure and stable) code base.

Quote
It would behove Monero to be the first CN coin to apply my suggested fix to ensure combinatorial analysis of partially overlapping rings can't occur

There is an ongoing question (requiring actual analysis and research, not just forum posts) whether this change to the privacy model is needed or desired. It is being considered.
legendary
Activity: 1162
Merit: 1007
July 05, 2015, 02:28:42 AM
We just need someone to figure out how to constantly feed them invalid blocks. ;)

I'm surprised to hear this from you, Holliday.  Can you explain why you think mining on the blockheader during the short time it takes to validate the block is bad?  It is clearly the profit-maximizing strategy, and I also believe it is ethically sound (unlike replace-by-fee).  
legendary
Activity: 1750
Merit: 1036
Facts are more efficient than fud
July 05, 2015, 01:54:33 AM
Rarely do good critics make better artists.

Except I've already proven three times in my history (twice as main author and the last and most successful as the ONLY* author), that I am also builder of popular software.

https://www.google.com/search?q=neocept+wordup (late 80s and early 90s)
http://relativisticobserver.blogspot.com/2012/01/illustrated-evolution-of-painter-ui.html (mid 90s)
https://www.google.com/search?q=3Dize+coolpage (late 90s and early 00s)

I fell off the cliff in the mid-00s due mostly to what I can summarize as "Philippines" and family background. The details include a blinded eye (seeing light & dark only), a lost marriage, a mid-life crisis, a severe STD infection in the last week of May 2006 (leading to M.S. by now), the murder of my only non-step sister in the last week of May 2006, etc.

I really hate this being incited to talk about myself. I know people are going to use this against me. Besides I am interested in creating new things, not the past.

P.S. Armstrong's 8.6 cycle (1000 x Pi days) added to last week of May 2006, is Feb 2015. That was exactly when I began coding a social network which I did ship. First software I've shipped since 2006. Note in 2008 - 2011 period I got off into learning new programming languages such as Haxe, Haskell, and Scala and completely new ways to think about programming. In that period I messed around with numerous projects without shipping any. I also got acutely ill (ER, ICU) in May 2012 and chronically hence.

Edit: I am not really acting as a critic, i.e. I haven't changed what I've always done. I am doing engineering analysis for design work. I just happen to share it, which turns out to be critical against inferior engineering.

* except for the 2-3 weeks of coding for the Objects window, for which I paid $30,000 in 2001 (inflation-adjust that!) to a former programmer of Borland C (because I was face down in the bed with gas in my eye to hold the 100% detached retina in place so the lasered joins could solidify).

...inciting to talk about myself--LOL.

Until you offer your own coin for peer review, you're going to sound like that know-it-all brat on the basketball court who points out the flaws in everyone else's game, goes on endlessly about how great he is, but never picks up a ball and backs up the talk. Since you're American, you've probably heard this: put up or shut up.

Again, it's not that you are criticizing or refusing peer review, it's that you are simultaneously criticizing and refusing to put your project up for peer review--it may be a good (or even correct) strategy, but don't bitch when others point out its annoyance factor.
legendary
Activity: 1260
Merit: 1008
July 05, 2015, 01:44:22 AM
interesting, isn't it?

We will continue do SPV mining despite the incident, and I think so will AntPool and BTC China.

Another very good reason people should not mine on Chinese pools.  This is EXTREMELY bad for the Bitcoin network.
sr. member
Activity: 420
Merit: 262
July 05, 2015, 01:11:01 AM
Rarely do good critics make better artists.

Except I've already proven three times in my history (twice as main author and the last and most successful as the ONLY* author), that I am also builder of popular software.

https://www.google.com/search?q=neocept+wordup (late 80s and early 90s)
http://relativisticobserver.blogspot.com/2012/01/illustrated-evolution-of-painter-ui.html (mid 90s)
https://www.google.com/search?q=3Dize+coolpage (late 90s and early 00s)

I fell off the cliff in the mid-00s due mostly to what I can summarize as "Philippines" and family background. The details include a blinded eye (seeing light & dark only), a lost marriage, a mid-life crisis, a severe STD infection in the last week of May 2006 (leading to M.S. by now), the murder of my only non-step sister in the last week of May 2006, etc.

I really hate this being incited to talk about myself. I know people are going to use this against me. Besides I am interested in creating new things, not the past.

P.S. Armstrong's 8.6 cycle (1000 x Pi days) added to last week of May 2006, is Feb 2015. That was exactly when I began coding a social network which I did ship. First software I've shipped since 2006. Note in 2008 - 2011 period I got off into learning new programming languages such as Haxe, Haskell, and Scala and completely new ways to think about programming. In that period I messed around with numerous projects without shipping any, which maybe was a required gestation period for the new insights and to recharge artistic inspiration. I also got acutely ill (ER, ICU) in May 2012 and chronically hence.

Edit: I am not really acting as a critic, i.e. I haven't changed what I've always done. I am doing engineering analysis for design work. I just happen to share it, which turns out to be critical against inferior engineering.

* except for the 2-3 weeks of coding for the Objects window, for which I paid $30,000 in 2001 (inflation-adjust that!) to a former programmer of Borland C (because I was face down in the bed with gas in my eye to hold the 100% detached retina in place so the lasered joins could solidify).
legendary
Activity: 1750
Merit: 1036
Facts are more efficient than fud
July 04, 2015, 11:55:50 PM
And yet people criticize me for not spilling the beans before the software is cooked.


Wrong assertion of why people (at least me and some observable others) are criticizing you. Maybe, someone is, but certainly not everyone....

Gene Siskel, the better half of the movie critic duo Siskel and Ebert, was asked if he ever wanted to direct--since he was so knowledgeable and his love of films was evident. He stated that he did want to direct, but that as long as he was a critic, any directing efforts would be a conflict of interest. To put it more simply, he knew he could game his fans into believing his techniques were the best techniques, and didn't want the potential to delude himself or the audience--even if his efforts were a sincere and earnest attempt at making a great film.

Rarely do good critics make better artists. ATM Ben Jonson, the poet/critic, is the only example I can think of, but that is more of a tie than him being better at one or the other.
sr. member
Activity: 420
Merit: 262
July 04, 2015, 11:26:29 PM
I am reminded of Mike Hearn's words of wisdom:
Quote
Scaling Bitcoin can only be achieved by letting it grow, and letting people tackle each bottleneck as it arises at the right times. Not by convincing ourselves that success is failure.

One less bottleneck to expect with larger blocks.

This is an example of myopic, one-dimensional 'thinking' (illogic) that is promulgated widely by n00bs.

Centralization isn't a bottleneck. Monopolies don't restrict degrees-of-freedom. Neither are an outcome of increasing orphan rate by increasing blocksize relative to block period.
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
July 04, 2015, 11:01:12 PM
it looks like the average time these pools are mining empty blocks is 16 seconds (F2Pool) and 35 seconds (AntPool) before switching to non-empty blocks.  Like you said, why are these numbers so big if processing the blocks is so fast?

I think these metrics should specifically exclude the two pools above as they are the ones who were SPV mining and not switching to using a validated block-header as soon as possible.

Don't get put off posting by Greg's feedback, as it is easy to be silenced by someone-who-knows-best and let them run the show.
My big takeaway from his comment is how you got him into arguing from a position that big blocks are OK (handclap and kudos to you). In fact, I have learnt now that even 7.5GB blocks are today theoretically tolerable in at least one respect (validation where txs are pre-validated), although I suspect not in quite a number of other crucial respects.
Wasn't Gavin's doubling end-point 8GB in 2036? Effectively the same end-point!

I am reminded of Mike Hearn's words of wisdom:
Quote
Scaling Bitcoin can only be achieved by letting it grow, and letting people tackle each bottleneck as it arises at the right times. Not by convincing ourselves that success is failure.

One less bottleneck to expect with larger blocks.
