
Topic: Gold collapsing. Bitcoin UP. - page 142. (Read 2032266 times)

sr. member
Activity: 420
Merit: 262
July 07, 2015, 01:46:28 AM
A clinic full of subordinate eye frying minions would be another...

Many of my co-workers kept a stock market ticker within easy visual reach and referred to it about 60 times per hour as they performed the tasks of the day.  That's fine for code monkeys, but...

+1 for dry humor style points  Cheesy

If I were hiring cypherdoc to shill for me he would have been fired about a month ago when he reached an inflection point...

That was when I arrived in the thread.  Wink

As long as he sticks to technobabble and steers clear of excessive ad hominem (against me, for example, which got his HF case revealed by vokain), then afaics he bolsters his case by remaining the owner of the most read thread on bitcointalk, i.e. in the bitcoin universe.

Note I am no longer fighting with him and I assume he has declared a truce with me also. Best for both of us. I have no skin in the HF debacle (although I feel for vokain et al; they need to learn and bounce back[1]). If he has cojones, Slypherdoc will invest some of his 3000 BTC in NewCoin and perhaps make off like a $billionaire bandit (untraceable). As for justice, don't people who fall for scams bear some culpability too? The universe has a way of handing out justice. I lost my eye because of some decisions I had made. I had to own up to responsibility for it, i.e. involving the government in dispensing justice was neither justified nor efficient. Life is too short.

[1] the day after the event on Dec. 1, 1999 that cost me the sight in my eye, I forgave, in my mind and heart, the guys who did it to me and decided to move on. I don't have time for grudges. There is too much stuff I can still do. Until you render me incapable of doing anything, I will try my best to remain positive about life. I did, however, take up boxing as a form of self-defense training and exercise after that event. And I continue to do sprint training, because the smartest defense is often to run.


cypherdoc, who is paying you now? (KNC?)

Perhaps all those who placed preorders with HF.
sr. member
Activity: 420
Merit: 262
July 07, 2015, 01:33:04 AM
Quote from: domob
So you think that it would be best to simply put all that spam on the blockchain, and have everyone that ever wants to validate it go through that spam for eternity?  Wouldn't it be better to simply let the network and miners adjust their fee policies instead as they see fit and make sure that the spam is not even mined unless they pay "enough" fees?

Who decides what enough fees is?

Actually you ask precisely the correct question, if you want to find the correct answer for decentralization. Hint, hint. But that answer is more clever and complex than I expect you to find.

I think this whole blocksize debate comes down to ideology.  It's not a technical decision. Some people think we need to keep 1 degree of freedom of control to shape Bitcoin "correctly," while others think we should let destiny take the wheel.

You think that because you are blinded.

My well-informed, prescient suggestion is to wholeheartedly support the pegged side chain direction; otherwise, prepare to exchange your BTC for another monetary unit or accept centralization.
legendary
Activity: 1162
Merit: 1007
July 07, 2015, 01:06:02 AM
Quote from: domob
So you think that it would be best to simply put all that spam on the blockchain, and have everyone that ever wants to validate it go through that spam for eternity?  Wouldn't it be better to simply let the network and miners adjust their fee policies instead as they see fit and make sure that the spam is not even mined unless they pay "enough" fees?

Who decides what enough fees is?

I think this whole blocksize debate comes down to ideology.  It's not a technical decision. Some people think we need to keep 1 degree of freedom of control to shape Bitcoin "correctly," while others think we should let destiny take the wheel.
legendary
Activity: 1135
Merit: 1166
July 07, 2015, 12:35:05 AM
look at what we're facing with this latest spam attack.  note the little blip back on May 29, which was Stress Test 1.  Stress Test 2 is the blip in the middle, with the huge spikes of the last couple of days on the far right.  this looks to me to be the work of a non-economic spammer looking to disrupt new and existing users via stuck tx's, which coincides with the Grexit and trouble in general in the traditional fiat markets.  they want to discourage adoption of Bitcoin.  the fastest way to eliminate this attack on users is to lift the block size limit to alleviate the congestion and increase the expense of the spam:



So you think that it would be best to simply put all that spam on the blockchain, and have everyone that ever wants to validate it go through that spam for eternity?  Wouldn't it be better to simply let the network and miners adjust their fee policies instead as they see fit and make sure that the spam is not even mined unless they pay "enough" fees?
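
For rough context on the "increase the expense of the spam" point quoted above, here is a back-of-the-envelope sketch; the fee rate and exchange rate are assumed, illustrative figures, not measurements from the stress tests:
Code:
# Rough cost for a spammer to keep every block full, under assumed numbers.
def spam_cost_per_day(block_size_mb, fee_satoshi_per_byte, btc_usd):
    """Cost (USD/day) to fill every block at a given fee rate."""
    blocks_per_day = 144                      # ~one block every 10 minutes
    bytes_per_day = block_size_mb * 1_000_000 * blocks_per_day
    fee_btc = bytes_per_day * fee_satoshi_per_byte / 1e8
    return fee_btc * btc_usd

for limit_mb in (1, 8, 20):
    cost = spam_cost_per_day(limit_mb, fee_satoshi_per_byte=10, btc_usd=260)
    print(f"{limit_mb} MB blocks: ~${cost:,.0f}/day to keep them full")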
sr. member
Activity: 420
Merit: 262
July 07, 2015, 12:16:48 AM
I favor Adam Backamoto's extension block proposal.

The 1MB blocksize limit reminds me of the old 640k limit in DOS.

Rather than destroy Windows' interoperability with the rich/valuable legacy of 8088-based software, that limit was extended via various hacks, er, sublime software engineering.

Where can I find a specification for this extension blocks proposal, so I can determine how this proposal is differentiated from a pegged side chain?

Mike Hearn is so obviously for centralization that it is necessary to separate out his objective points, and it appears he may have a point about Lightning networks not being viable for much beyond very specialized interactions (which was also my upthread point about them).

I continue to think existing PoW designs are stuck between a rock and a hard place, thus I am very interested to read more detail about this proposal.


Edit: I found it, http://sourceforge.net/p/bitcoin/mailman/message/34157058/

The only way I see this differentiated from pegged side chains is that it could optionally be a one-way transfer of BTC to the extension block chain, which removes the need for reliance on federated servers.

I understand that if address formats are differentiated between the two chains, then it is claimed one could pay from one chain to the other, but I can't see how that can work, because the miners on the lower-bandwidth chain would be SPV miners on the extension block chain. Thus all the domino (orphaned-chain) insecurity ramifications of pegged side chains apply, which I argued upthread is untenable, unless you allow the BTC on each chain to have a different value. Afaics, the only reliable way to move pegged value back and forth between chains is, as I wrote upthread, with very long challenge periods on transferred funds and with the Blockstream SPV proofs.

Thus one can view this proposal as a trojan horse to require Blockstream's pegged chains (and either the federated servers or the soft fork for the added opcode that eliminates the federated servers). Clever.

However, I still like the proposal for the same reasons I liked pegged side chains. But pleeeaaaase do it correctly. None of the mumbo jumbo about instant transfers between pegged chains. Sheesh.

And note this does nothing to solve the decentralization problem; it just transfers the centralization to the extension block chain, because the 1MB chain can ultimately be 51% attacked as its relative block rewards wither while debasement diminishes and transactions move to the extension block chain (that is, unless extension block transactions carry extremely small fees and the 1MB chain sufficiently high fees, which is very unlikely because centralization by definition drives toward monopolistic pricing). Also because, per the math and economic reasoning above, solo mining is doomed anyway.
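
A rough sketch of that "withering block reward" arithmetic; the residual fee figure and the heights are assumptions for illustration, not predictions:
Code:
# If most fee-paying traffic migrates to the extension-block chain, the
# 1MB chain is left with little beyond the subsidy, which halves roughly
# every four years.  Fee figure below is an assumed illustration.
def subsidy(height, initial=50.0, halving_interval=210_000):
    """Block subsidy in BTC at a given height."""
    return initial / (2 ** (height // halving_interval))

fees_on_1mb_chain = 0.25   # assumed BTC/block remaining on the legacy chain

for year, height in [(2015, 365_000), (2020, 630_000), (2028, 1_050_000)]:
    total = subsidy(height) + fees_on_1mb_chain
    print(f"~{year} (height {height}): subsidy {subsidy(height):.3f}"
          f" + fees {fees_on_1mb_chain} = {total:.3f} BTC/block")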
legendary
Activity: 1764
Merit: 1002
July 07, 2015, 12:07:30 AM
looks like they may have turned off SPV mining.  that would be good.
legendary
Activity: 1764
Merit: 1002
July 07, 2015, 12:05:27 AM
ok, i'm not getting the bolded part.  this graph shows 37 MB worth of unconf tx's, no?:
No clue, no node I have access to is seeing that much-- they may have turned off the minfee rules (not unreasonable for a metrics thing)...

Even given that, again, 37MB doesn't explain your swap.

but turning off the minfee rules would only result in 0-fee tx's being included.

look at how much BTC value is included in that 37MB.
sr. member
Activity: 420
Merit: 262
July 06, 2015, 11:55:51 PM
If you don't want to be called LeBron until the day Lord Satoshi moves His Holy Coins, I suggest apologizing for accusing the core devs (who write the free code you run) of impropriety/malfeasance/obstructionism/etc.

And you have the audacity to assert I am delusional for having faith in Armstrong's investment in his models and the performance I have observed thereof. At least I am not blind to the existence of my faith, as opposed to claiming first-hand knowledge of the open source, which you don't have either in the case of your Lord; and I also doubt you've internalized all the source code that has been written by the ongoing devs.

Haters gonna hate...
legendary
Activity: 1764
Merit: 1002
July 06, 2015, 11:53:38 PM
Interesting!  
And this is why I like the empirical "block box" approach.  I don't care initially what the mechanism is.  I try to find a simple model that explains the effect, and then, later, ask what that mechanism might be.
But now why would the "latency of the mining process" depend on the size of the previous block?  That doesn't make sense to me, but we just showed empirically that F2Pool is indeed more likely to produce an empty block when the previous block is large.
I wouldn't expect the miner latency part to be size dependent: the miner can't even tell how big the prior block was.  I expect your function relating them to have a big constant term in it! (that's why I asked if you tried other regression approaches.)

I suppose there may be some dependence introduced by virtue of what percentage of the miners got the dummy work.  Would be pretty interesting to try to separate that.

Another trap of empirical analysis in this kind of discussion is that we can only measure how the system is-- but then we use that to project the future;  e.g.  say we didn't have ECDSA caching today, you might then measure that it was taking >2 minutes to verify a maximum size block... and yet 100 lines of code and that cost vanishes; which is bad news if you were counting on it to maintain incentives. Smiley

Miners can't tell as soon as a block header is published, but they can tell once they get the block; according to Peter's analysis, bigger blocks received take longer to process, so SPV mining fills the gap until the block has been processed. So yes, you're correct they can't know, but not knowing isn't a good explanation as to why we see a higher percentage of empty blocks after a big block.

ftfy
sr. member
Activity: 420
Merit: 262
July 06, 2015, 11:49:12 PM
Another trap of empirical analysis in this kind of discussion is that we can only measure how the system is-- but then we use that to project the future;  e.g.  say we didn't have ECDSA caching today, you might then measure that it was taking >2 minutes to verify a maximum size block... and yet 100 lines of code and that cost vanishes; which is bad news if you were counting on it to maintain incentives. Smiley

Good point, and more accurate is "we can only measure how we think it is". We don't even have absolute proof that someone isn't running a superior algorithm or hardware solution, although we can often make very strong educated inductions.
legendary
Activity: 1372
Merit: 1000
July 06, 2015, 11:42:38 PM
Interesting!  
And this is why I like the empirical "block box" approach.  I don't care initially what the mechanism is.  I try to find a simple model that explains the effect, and then, later, ask what that mechanism might be.
But now why would the "latency of the mining process" depend on the size of the previous block?  That doesn't make sense to me, but we just showed empirically that F2Pool is indeed more likely to produce an empty block when the previous block is large.
I wouldn't expect the miner latency part to be size dependent: the miner can't even tell how big the prior block was.  I expect your function relating them to have a big constant term in it! (that's why I asked if you tried other regression approaches.)

I suppose there may be some dependence introduced by virtue of what percentage of the miners got the dummy work.  Would be pretty interesting to try to separate that.

Another trap of empirical analysis in this kind of discussion is that we can only measure how the system is-- but then we use that to project the future;  e.g.  say we didn't have ECDSA caching today, you might then measure that it was taking >2 minutes to verify a maximum size block... and yet 100 lines of code and that cost vanishes; which is bad news if you were counting on it to maintain incentives. Smiley

Miners can't tell as soon as a block header is published, but they can tell once they get the block; according to Peter's analysis, bigger blocks take longer to propagate, so SPV mining fills the gap until the block is known. So yes, you're correct they can't know, but not knowing isn't a good explanation as to why we see a higher percentage of empty blocks after a big block.
sr. member
Activity: 420
Merit: 262
July 06, 2015, 11:30:58 PM
Conclusions are valid in substance but in terms of not getting blocks for days, that's not really a big problem for a pool (or solo mining farm) being run as a business. Most expenses (hosting, etc) are going to run monthly, possibly somewhat more often (paying workers), but not daily. Once you look at a one-month window it is about the same as Litecoin: almost certain to find a block in a month. With some ability to draw on reserves or credit in the event of a few bad months it should still be viable.

I think you missed the economic reasoning, which is that pools have to compete at near-zero margins, which means any losses due to overpaying some ephemeral miners during certain periods (e.g. any variant of pool hopping, or just miners who come and go for any reason) are bankruptcy relative to those with a larger hashrate share, which don't incur that variance in profitability.
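
To put rough numbers on that variance argument: the hashrate shares below are assumed for illustration, and block finds are modeled as a simple Poisson process:
Code:
# How revenue variance scales with a pool's hashrate share.
# Shares are assumed, illustrative figures.
import math

BLOCKS_PER_MONTH = 144 * 30   # ~one block every 10 minutes

def dry_month_probability(hashrate_share):
    """Probability a pool finds zero blocks in a month."""
    expected = hashrate_share * BLOCKS_PER_MONTH
    return math.exp(-expected)

def monthly_revenue_cv(hashrate_share):
    """Coefficient of variation (stddev / mean) of blocks found per month."""
    expected = hashrate_share * BLOCKS_PER_MONTH
    return 1.0 / math.sqrt(expected)

for share in (0.20, 0.05, 0.01, 0.001):
    print(f"share {share:6.1%}: P(zero blocks in a month) = "
          f"{dry_month_probability(share):.2e}, "
          f"monthly revenue CV = {monthly_revenue_cv(share):.1%}")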

Variance, such as in a unit-of-exchange, kills a currency, and it has the same concentrating effect on pools as they are currently defined in the current network design of PoW coins (which btw I have a fix for[1]).

And that centralizing effect is exacerbated by the in-band incentives (e.g. latency and orphan rate that are lower relative to hashrate) and out-of-band incentives (e.g. being paid to double-spend a 0-conf, etc.) that may apply to anyone running hidden monopolies via a Sybil attack, perhaps able to run at negative profit margins, thus driving smaller pools extinct. Again I challenge anyone to prove that pools are not Sybil attacked (meaning multiple pools effectively owned by the same controlling entity). Since it can't be disproved, Cypherdoc's assumption that miners are independent cannot be proven true, yet it can be falsified by the math and economic reasoning I presented above (although some will refuse to admit the math because they can't falsify it with physical evidence).

All of this gets worse as bandwidth requirements are scaled up.

[1] my improvement over CN need not be anonymity (which could just be copied from CN), but rather network characteristics.
legendary
Activity: 1764
Merit: 1002
July 06, 2015, 10:55:22 PM
ok, i'm not getting the bolded part.  this graph shows 37 MB worth of unconf tx's, no?:
No clue, no node I have access to is seeing that much-- they may have turned off the minfee rules (not unreasonable for a metrics thing)...

Even given that, again, 37MB doesn't explain your swap.

this is what they say on their website.  i should try to find out the exact #:

TradeBlock maintains an extensive bitcoin network data architecture with multiple nodes across geographies. With the ability to view and record every message broadcast to the network, including those that are not extensively relayed, unique insights regarding the network may be derived.
legendary
Activity: 1764
Merit: 1002
July 06, 2015, 10:45:24 PM
ok, i'm not getting the bolded part.  this graph shows 37 MB worth of unconf tx's, no?:
No clue, no node I have access to is seeing that much-- they may have turned off the minfee rules (not unreasonable for a metrics thing)...

Even given that, again, 37MB doesn't explain your swap.

yeah, i had noticed that.  strange...
legendary
Activity: 1372
Merit: 1000
July 06, 2015, 10:44:58 PM
why does the 0 tx block have to come "immediately" after a large block?

They don't.  Empty blocks can come after any sized block.  But I just showed that F2Pool is more likely to produce an empty block when the previous block was large, than when the previous block was not large.  

This makes sense to me because I expect that for large blocks, there's more time between when F2Pool has just enough information to begin hashing, and when they have processed the block and sent a new non-empty block template to their hashers to work on.  If this time is longer, then there's a better chance they get lucky and mine an empty block.  See what I mean?


So one could assume those empty blocks would never be mined or would be orphaned should a competitor be SPV mining.

I'd also assume this phenomenon is only viable while the block subsidy is high enough that transaction fees are inconsequential. In the near future this strategy would most likely be optimized to include the minimum set of low-risk, high-reward tx's, balanced against the risk of loss for the few seconds it would take to include them in a block.
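
For a sense of scale, here is a toy model of how a processing gap turns into an empty-block rate. The 15 s / 30 s gaps are the estimates discussed elsewhere in the thread, and memoryless block arrivals with a 600 s mean are a simplification:
Code:
# Fraction of a pool's own blocks expected to be empty, assuming the pool
# hashes on an empty template for `gap_seconds` after each new block.
import math

MEAN_BLOCK_INTERVAL = 600.0   # seconds

def expected_empty_fraction(gap_seconds):
    """Fraction of time spent on an empty template, which (at constant
    hashrate) is also the fraction of the pool's blocks that are empty."""
    return 1.0 - math.exp(-gap_seconds / MEAN_BLOCK_INTERVAL)

print(f"15 s gap: {expected_empty_fraction(15):.1%} of blocks empty")
print(f"30 s gap: {expected_empty_fraction(30):.1%} of blocks empty")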
staff
Activity: 4284
Merit: 8808
July 06, 2015, 10:33:50 PM
ok, i'm not getting the bolded part.  this graph shows 37 MB worth of unconf tx's, no?:
No clue, no node I have access to is seeing that much-- they may have turned off the minfee rules (not unreasonable for a metrics thing)...

Even given that, again, 37MB doesn't explain your swap.
legendary
Activity: 1162
Merit: 1007
July 06, 2015, 10:25:28 PM
I noted you posted a result of a classification, did you run the same data through a simple logistic regression with prior size as the treatment? The intercept in the model would be interesting.

Not yet, but I had the same idea!  Ugh…but right now I have to get back to non-bitcoin work...
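
For what it's worth, the suggested regression is only a few lines. A sketch on synthetic placeholder data follows; the real inputs would be per-block records of the previous block's size and whether F2Pool's block was empty, and the toy relationship below is invented purely so the code runs:
Code:
# Logistic regression of "is this block empty?" on previous block size.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
prev_size_kb = rng.uniform(0, 1000, size=20_000)          # previous block size
p_empty = 1 - np.exp(-(5 + 0.01 * prev_size_kb) / 600)    # toy relationship
is_empty = (rng.random(20_000) < p_empty).astype(float)

X = sm.add_constant(prev_size_kb)        # the intercept term asked about above
result = sm.Logit(is_empty, X).fit(disp=0)
print(result.params)   # [intercept, slope]; a large intercept relative to the
                       # slope would point to a size-independent latency term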
legendary
Activity: 1764
Merit: 1002
July 06, 2015, 10:24:19 PM
as you know, even Gavin talks about this memory problem from the UTXO set.  and yes, i read the Reddit thread that resulted, in which you participated, and i'm aware that the UTXO set can be dynamically cached according to need.
http://gavinandresen.ninja/utxo-uhoh

Gavin was insufficiently precise. There is a reddit thread full of people calling Gavin a fool ( Sad ) for saying "memory" when he should have said fast storage.  https://twitter.com/petertoddbtc/status/596710423094788097

Why do you think it's prudent to argue this with me?

i'm not really arguing about this with you.  you said the UTXO set is not in memory.  i'm saying it depends on how fast a node wants to verify tx's, via the dynamic caching setting they choose, which does get stored in memory.

Let me chime in here quickly, because I think Greg and I are talking about slightly different things.  My model was considering the time between the first moment that a pool could begin hashing on a blockheader, and when the previous block had been processed, a new non-empty block template constructed, and the hashers re-assigned to work on this non-empty block.  

It looks like this time, empirically, is 15 sec (F2Pool) and 30 sec (AntPool), based on these estimates.  

Here I suspect you're suffering from an excess of empiricism without adequately delving into the mechanism.   You can directly measure the time from input to minable on an actual node under your control and will observe the time is hundreds of times faster than your estimate. Why?   Miners don't magically know when their pool has new work; they'll get work in the first milliseconds and then grind on it some time before returning work.  Even if the pool long polls them, it takes time to replace work. So what I suspect you're actually measuring there is the latency of the mining process...  which is consistent with what we've experienced with P2Pool (5-20 second latencies from ASIC miners are common).

I noted you posted a result of a classification, did you run the same data through a simple logistic regression with prior size as the treatment? The intercept in the model would be interesting.

But indeed, these conversations have been conflating several separate issues (latency vs throughput, etc.). Tricky to avoid that since they're all relevant.

but you haven't verified, have you, that f2pool or Antpool has increased their minrelaytxfee to minimize their mempool?
I have, they'd previously cranked it down, and were producing small blocks and were flamed in public.  They've since turned it back up.

Quote
remember, this whole mempool discussion was based off you responding to Peter's mathematics post the other day, where you argued that block verification times were only 80ms for a 250 kB block b/c tx's had been pre-verified after being passed around to all nodes across the network and didn't require re-verification by miners on the relay network, and that this was therefore a refutation of his hypothesis of increasing block verification times (16-37 sec on avg) leading to SPV mining.
As PeterR points out, they only need to wait for verification to actually verify (which they're not doing today), though they may have to wait longer to include transactions-- though I point out that's not fundamental: e.g. no matter how big the backlog is, you can produce a template sufficient to completely fill a block while doing no more work than handling a mempool of twice the maximum block size (by using a tiered mempool, though no one has bothered to implement this yet-- no one has even been complaining about how long createnewblock takes, due to the ability to produce empty blocks without skipping transactions).
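
A minimal sketch of the tiered-mempool idea described above, assuming a simple fee-rate heap and ignoring dependencies between transactions; this is an illustration, not Bitcoin Core's actual mempool or createnewblock code:
Code:
# Keep only the top fee-rate transactions up to ~2x the maximum block size,
# so building a full block template never requires scanning a huge backlog.
import heapq

MAX_BLOCK_BYTES = 1_000_000
TIER_LIMIT = 2 * MAX_BLOCK_BYTES

class TieredMempool:
    def __init__(self):
        self.heap = []          # min-heap keyed by fee rate (sat/byte)
        self.total_bytes = 0

    def add(self, txid, size_bytes, fee_satoshis):
        fee_rate = fee_satoshis / size_bytes
        heapq.heappush(self.heap, (fee_rate, txid, size_bytes, fee_satoshis))
        self.total_bytes += size_bytes
        # Evict the cheapest transactions once the hot tier is over budget;
        # a real implementation would spill them to a cold tier instead.
        while self.total_bytes > TIER_LIMIT and self.heap:
            _, _, evicted_size, _ = heapq.heappop(self.heap)
            self.total_bytes -= evicted_size

    def block_template(self):
        """Greedy fill of one block from the hot tier, best fee rate first."""
        block, used = [], 0
        for fee_rate, txid, size, fee in sorted(self.heap, reverse=True):
            if used + size <= MAX_BLOCK_BYTES:
                block.append(txid)
                used += size
        return block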

staff
Activity: 4284
Merit: 8808
July 06, 2015, 10:23:56 PM
Interesting!  
And this is why I like the empirical "block box" approach.  I don't care initially what the mechanism is.  I try to find a simple model that explains the effect, and then, later, ask what that mechanism might be.
But now why would the "latency of the mining process" depend on the size of the previous block?  That doesn't make sense to me, but we just showed empirically that F2Pool is indeed more likely to produce an empty block when the previous block is large.
I wouldn't expect the miner latency part to be size dependent: the miner can't even tell how big the prior block was.  I expect your function relating them to have a big constant term in it! (that's why I asked if you tried other regression approaches.)

I suppose there may be some dependence introduced by virtue of what percentage of the miners got the dummy work.  Would be pretty interesting to try to separate that.

Another trap of empirical analysis in this kind of discussion is that we can only measure how the system is-- but then we use that to project the future;  e.g.  say we didn't have ECDSA caching today, you might then measure that it was taking >2 minutes to verify a maximum size block... and yet 100 lines of code and that cost vanishes; which is bad news if you were counting on it to maintain incentives. Smiley
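
For readers wondering what "100 lines of code" refers to: caching signature checks so a block's transactions aren't re-verified at block time. A toy sketch of the idea, not Bitcoin Core's actual sigcache:
Code:
# Remember (pubkey, sighash, signature) triples that already verified, so
# validating a block whose transactions were checked on relay costs set
# lookups instead of fresh ECDSA operations.  Illustration only; the
# ecdsa_verify callable is a stand-in for a real verification routine.
verified = set()

def check_sig(pubkey, sighash, signature, ecdsa_verify):
    """Return True if the signature is valid, consulting the cache first."""
    key = (pubkey, sighash, signature)
    if key in verified:
        return True
    if ecdsa_verify(pubkey, sighash, signature):   # the expensive step
        verified.add(key)
        return True
    return False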
legendary
Activity: 1162
Merit: 1007
July 06, 2015, 10:18:18 PM
...
So what I suspect you're actually measuring there is the latency of the mining process...  which is consistent with what we've experienced with P2Pool (5-20 second latencies from ASIC miners are common).

Interesting!  

And this is why I like the empirical "block box" approach.  I don't care initially what the mechanism is.  I try to find a simple model that explains the effect, and then, later, ask what that mechanism might be.

But now why would the "latency of the mining process" depend on the size of the previous block?  That doesn't make sense to me, but we just showed empirically that F2Pool (but not AntPool) is indeed more likely to produce an empty block when the previous block was large (suggesting that processing large blocks [including the mining process latency] takes longer for some reason).