
Topic: Gold collapsing. Bitcoin UP. - page 141. (Read 2032266 times)

legendary
Activity: 1764
Merit: 1002
July 07, 2015, 10:05:28 AM
Uber bad:

legendary
Activity: 1764
Merit: 1002
July 07, 2015, 10:04:21 AM
bad, bad:

legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
July 07, 2015, 09:58:55 AM
Coz someone linked a post earlier in this thread ...

For pools that normally mine full blocks as well as uncommon empty blocks, the empty blocks are the work the pool sends every block change.
They are of the opinion that because Luke-jr's eloipool is shit slow at handling block changes, they should send out empty work first and then full work soon after it.
Obviously the empty work is mined for only a small % of the average block change interval, so the blocks found on the empty work would also be a small % of all the blocks found by the pools that do this.
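A rough back-of-the-envelope of that % (illustrative numbers only, not measured from any pool):
Code:
# Sketch: what fraction of a pool's blocks land on the "empty" work,
# if empty work is pushed immediately at a block change and full work follows
# shortly after. Numbers below are illustrative assumptions, not measurements.

empty_work_window = 1.0     # seconds the pool hashes on empty work (assumed)
avg_block_interval = 600.0  # average seconds between blocks

# Block finding is memoryless, so the share of blocks found during the empty
# window is roughly the share of time spent hashing on empty work.
empty_block_fraction = empty_work_window / avg_block_interval
print(f"~{empty_block_fraction:.2%} of the pool's blocks would be empty")
# -> ~0.17% with these assumptions; scale the window up or down to taste.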

As I've explained in the empty blocks thread, when comparing eligius with its empty block change work and my pool https://kano.is/ with our full block change work, my pool beats eligius on average at sending out block change work.
During normal times, when the relay is working and there isn't a massive spam test running, my pool beats eloipool >90% of the time on block changes.
legendary
Activity: 1162
Merit: 1007
July 07, 2015, 09:00:36 AM
http://rusty.ozlabs.org/?p=515
Quote
The obvious place to look is CheckBlock: a simple 1MB block takes a consistent 10 milliseconds to validate, and an 8MB block took 79 to 80 milliseconds, which is nice and linear.  (A 17MB block took 171 milliseconds).

Weirdly, that’s not the slow part: promoting the block to the best block (ActivateBestChain) takes 1.9-2.0 seconds for a 1MB block, and 15.3-15.7 seconds for an 8MB block.  At least it’s scaling linearly, but it’s just slow.

Not quite Peter R's calculated value, but indicative that there may be issues hidden in the code that sum to a significant block delay.

The processing time, τ, in my model includes more than what Rusty is considering above.  It includes all delays from the moment the miner has enough information to begin mining (an empty block) on the block header, to the moment he's validated the previous block, created a new non-empty block template, and has his hashing power working on that new non-empty block.  I realize that the way I wrote my post implied that the validation times were the significant part of τ (which they appear not to be), but that doesn't actually change the results, provided the processing time (τ) does indeed tend to increase monotonically with the size of the previous block.  I've edited my post to make this clearer.
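To make the role of τ concrete, here is a minimal numerical sketch (a toy calculation, not the full model): if blocks arrive as a Poisson process with mean interval T = 600 s, a processing delay of τ seconds costs roughly the probability that another block appears within that window, ≈ 1 − exp(−τ/T).
Code:
import math

T = 600.0  # average block interval in seconds

def orphan_risk(tau):
    """Chance another block arrives while we're still processing (Poisson arrivals)."""
    return 1.0 - math.exp(-tau / T)

for tau in (1, 5, 15, 30):   # illustrative processing delays in seconds
    print(f"tau = {tau:>2} s  ->  ~{orphan_risk(tau):.2%} chance of losing the race")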

Last night, I showed, based on empirical data, that this delay (τ) is significantly larger for F2Pool when the previous block was large than when it was not.  For AntPool, the delay (τ) did not appear to be correlated with the size of the previous block.  

Greg mentioned that he thought the big τ values I'm measuring are due to the lag it takes to re-assign the hashpower to the new non-empty block.  
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
July 07, 2015, 08:18:37 AM
You people are getting vicious and inane.  Name calling like a pack of 8 year olds?  Huh

I know right?  If you're going to start referring to Bitcoin as "Cripplecoin" /r/buttcoin is over ==> there.
legendary
Activity: 1246
Merit: 1010
July 07, 2015, 08:10:46 AM
You people are getting vicious and inane.  Name calling like a pack of 8 year olds?  Huh

On another topic:

http://rusty.ozlabs.org/?p=515
Quote
The obvious place to look is CheckBlock: a simple 1MB block takes a consistent 10 milliseconds to validate, and an 8MB block took 79 to 80 milliseconds, which is nice and linear.  (A 17MB block took 171 milliseconds).

Weirdly, that’s not the slow part: promoting the block to the best block (ActivateBestChain) takes 1.9-2.0 seconds for a 1MB block, and 15.3-15.7 seconds for an 8MB block.  At least it’s scaling linearly, but it’s just slow.

Not quite Peter R's calculated value, but indicative that there may be issues hidden in the code that sum to a significant block delay.
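As a naive extrapolation (assuming the linear scaling Rusty measured simply continues, which it may not):
Code:
# Naive linear extrapolation of Rusty's measurements (assumes scaling stays linear,
# which is exactly the assumption worth questioning).
checkblock_ms_per_mb = 10.0          # ~10 ms for a 1MB block
activatebestchain_s_per_mb = 1.95    # ~1.9-2.0 s for a 1MB block

for size_mb in (1, 8, 20):
    print(f"{size_mb:>2} MB: CheckBlock ~{checkblock_ms_per_mb * size_mb:.0f} ms, "
          f"ActivateBestChain ~{activatebestchain_s_per_mb * size_mb:.1f} s")
# A 20MB block would spend ~39 s in ActivateBestChain alone if nothing else changes.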
legendary
Activity: 2968
Merit: 1198
July 07, 2015, 07:41:41 AM
I suppose you're thinking of 21 Inc's plan for zombie miners?

Somewhat. I'm simply thinking that a large number of smaller miners (whether of the variety envisioned by 21 Inc or otherwise) has an advantage because none of them particularly care about variance or actually prefer it, both for lottery reasons and because in general increased variance is a good way to reduce effective transaction costs.
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
July 07, 2015, 07:00:34 AM
ok, i'm not getting the bolded part.  this graph shows 37 MB worth of unconf tx's, no?:
No clue, no node I have access to is seeing that much-- they may have turned off the minfee rules (not unreasonable for a metrics thing)...

Even given that, again, 37MB doesn't explain your swap.

yeah, i had noticed that.  strange...

Maybe a bloated swap file is a surprise Easter egg 'feature' of XT nodes?

As I said weeks ago, good luck getting BTC core devs to help fix it when your Gavinsista troll fork goes haywire.   Tongue
sr. member
Activity: 420
Merit: 262
July 07, 2015, 06:48:52 AM
Conclusions are valid in substance but in terms of not getting blocks for days, that's not really a big problem for a pool (or solo mining farm) being run as a business. Most expenses (hosting, etc) are going to run monthly, possibly somewhat more often (paying workers), but not daily. Once you look at a one-month window it is about the same as Litecoin: almost certain to find a block in a month. With some ability to draw on reserves or credit in the event of a few bad months it should still be viable.

I think you missed the economic reasoning, which is that pools have to compete at near-zero margins, which means that any losses from overpaying ephemeral miners (e.g. through any variant of pool hopping, or just miners who come and go for any reason) mean bankruptcy relative to pools with a larger hashrate share which don't incur that variance in profitability.

That is not clear as long as the pools are differentiated (for example by geography/latency, support, value-added features). They have to compete, but they are providing a service and can charge for it. f2pool charges 4% for PPS. Presumably they have some (imperfect) defenses against pool hoppers that make that margin profitable. AntPool charges 2.5% for PPS and likewise must be able to cover losses. AntPool offers PPLNS, which isn't susceptible to hopping at all, but obviously shifts some variance to miners.

I agree variance favors large pools to a point (and I said so in my previous post). At some point though, possibly as little as 1% of total hash rate (where blocks are found essentially every month), the variance isn't necessarily significant to operations, and things like geographic diversity may dominate. At, say, 0.01% it would be catastrophic so this certainly puts a lower limit on pool size without some other form of centralization (sharing rewards between pools, which could include sybils).

But any margin they charge can be beaten by a pool that has less variance and thus doesn't have to account for that risk in its margins.

As to whether other (economic and marketing) factors such as geographical distance or jurisdiction could defend higher margins, I have not analyzed the pool market (as distinguished from the in-band economics), but one would think that a Sybil attack could put servers in multiple locations. This wouldn't impact the advantage of having them share risk by sharing block rewards, unless there was Coasian jurisdictional obstruction to doing so (which they could probably subvert anyway).

I don't know where the inflection point occurs at which the variance-risk advantage of increasing hashrate share becomes outweighed by other factors in relative profitability and marketing, but it looks unlikely to be as low as 1%, which is already somewhat centralized (at that share, at most 51 pools would be needed for a 51% attack). Perhaps I should endeavor to attempt some mathematical model. And it appears unlikely that we can optimize away all of the latency costs, so that is another factor giving higher profitability to higher hashrate, with a 1% share nowhere near diminishing returns on that vector in isolation. It is a complex model perhaps ... would need to spend more time thinking about it. Is anyone doing research in this area?
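As a crude first cut at such a model (Poisson block finding only; it ignores fees, pool hopping, and everything else), the relative swing in a pool's monthly revenue is roughly 1/√(expected blocks per month):
Code:
import math

BLOCKS_PER_MONTH = 144 * 30  # ~4320 blocks network-wide per month

def monthly_revenue_spread(share):
    """Relative std-dev of monthly block count for a pool with this hashrate share (Poisson)."""
    expected_blocks = share * BLOCKS_PER_MONTH
    return 1.0 / math.sqrt(expected_blocks)

for share in (0.20, 0.05, 0.01, 0.001):
    print(f"{share:.1%} of hashrate -> +/- ~{monthly_revenue_spread(share):.1%} swing in monthly revenue")
# A pool with 1% of the hashrate sees roughly +/-15% monthly swings,
# which matters a lot when margins are near zero.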

I'd be very surprised if we don't have 10 controlling entities (perhaps hidden) of pools that can do a 51% attack, especially as we scale up transaction volume to Visa scale.

At some much smaller fraction (e.g. <0.00001%) solo mining becomes a lottery with a low ticket cost and variance might be desired or irrelevant.

I suppose you're thinking of 21 Inc's plan for zombie miners?
legendary
Activity: 1153
Merit: 1012
July 07, 2015, 06:42:28 AM
i didn't say this full block spam attack we're undergoing wasn't affecting my node at_all.  sure, i'm in swap, b/c of the huge #unconf tx's but it hasn't shut down or stressed my nodes to any degree.  one of the arguments by Cripplecoiners was that these large block attacks would shut full nodes down from destabilization resulting in centralization.  i'm not seeing that.

The 1MB sanity check/safety limit is the reason you aren't seeing nodes destabilized by "huge #unconf tx's."

We would be "seeing that" if [email protected] and [email protected] had gotten their way and successfully rammed through their poorly researched, hilariously ill-conceived 20MB proposal.

Luckily, Team Gavincoin got fukkin' r3kt exactly as I foretold:

If you ask me, Gavin is a burden for Bitcoin, because he is constantly out for a power grab (the Bitcoin Foundation was his first attempt). His narcissistic behavior is unacceptable and his suggestions are unscientific (although he claims to be "chief scientist") and lack substance. He should be forced to surrender his development privileges.

Sadly, so many people follow him blindly, just because he enjoys the public spotlight.
legendary
Activity: 2968
Merit: 1198
July 07, 2015, 05:49:56 AM
Conclusions are valid in substance but in terms of not getting blocks for days, that's not really a big problem for a pool (or solo mining farm) being run as a business. Most expenses (hosting, etc) are going to run monthly, possibly somewhat more often (paying workers), but not daily. Once you look at a one-month window it is about the same as Litecoin: almost certain to find a block in a month. With some ability to draw on reserves or credit in the event of a few bad months it should still be viable.

I think you missed the economic reasoning, which is that pools have to compete at near-zero margins, which means that any losses from overpaying ephemeral miners (e.g. through any variant of pool hopping, or just miners who come and go for any reason) mean bankruptcy relative to pools with a larger hashrate share which don't incur that variance in profitability.

That is not clear as long as the pools are differentiated (for example by geography/latency, support, value-added features). They have to compete, but they are providing a service and can charge for it. f2pool charges 4% for PPS. Presumably they have some (imperfect) defenses against pool hoppers that make that margin profitable. AntPool charges 2.5% for PPS and likewise must be able to cover losses. AntPool offers PPLNS, which isn't susceptible to hopping at all, but obviously shifts some variance to miners.

I agree variance favors large pools to a point (and I said so in my previous post). At some point though, possibly as little as 1% of total hash rate (where blocks are found essentially every month), the variance isn't necessarily significant to operations, and things like geographic diversity may dominate. At, say, 0.01% it would be catastrophic so this certainly puts a lower limit on pool size without some other form of centralization (sharing rewards between pools, which could include sybils).

At some much smaller fraction (e.g. <0.00001%) solo mining becomes a lottery with a low ticket cost and variance might be desired or irrelevant.
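To put rough numbers on "almost certain to find a block in a month" (Poisson approximation, illustrative only):
Code:
import math

BLOCKS_PER_MONTH = 144 * 30  # ~4320 blocks network-wide per month

def p_at_least_one_block(share):
    """Poisson probability of finding >= 1 block in a month with this hashrate share."""
    return 1.0 - math.exp(-share * BLOCKS_PER_MONTH)

for share in (0.01, 0.001, 0.0001, 0.0000001):
    print(f"{share:.5%} of hashrate -> {p_at_least_one_block(share):.2%} chance of >= 1 block per month")
# 1% of the hashrate is effectively certain to find blocks every month;
# at 0.01% it drops to roughly a one-in-three chance, and far below that it's a lottery.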




sr. member
Activity: 420
Merit: 262
July 07, 2015, 04:24:46 AM
Luckily, Team Gavincoin got fukkin' r3kt exactly as I foretold:



Oh, I didn't realize we were modeling this in a video game.

legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
July 07, 2015, 04:02:34 AM
i didn't say this full block spam attack we're undergoing wasn't affecting my node at_all.  sure, i'm in swap, b/c of the huge #unconf tx's but it hasn't shut down or stressed my nodes to any degree.  one of the arguments by Cripplecoiners was that these large block attacks would shut full nodes down from destabilization resulting in centralization.  i'm not seeing that.

The 1MB sanity check/safety limit is the reason you aren't seeing nodes destabilized by "huge #unconf tx's."

We would be "seeing that" if [email protected] and [email protected] had gotten their way and successfully rammed through their poorly researched, hilariously ill-conceived 20MB proposal.

Luckily, Team Gavincoin got fukkin' r3kt exactly as I foretold:

legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
July 07, 2015, 03:47:59 AM
LN which doesn't exist yet and will arrive far too late to help with scaling when blocks are (nearer to) averaging 1MB.

The 1MB is proving a magnet for spammers as every day the average block size creeps up and makes their job easier. A lot of people have vested interest in seeing Bitcoin crippled. We should not provide them an ever-widening attack vector.

BTC blocks are always going to tend towards full because demand is infinite and supply is not.  Get used to it.

The Lightning Network will exist in due course.  Meanwhile, the Litecoin Network is entirely capable of absorbing excess transactions should BTC be pushed over capacity (whatever that means...where is JR when we need him to jump up and down demanding 'overcapacity' be rigidly defined in objective metrics?).

Don't fear spammers; they provide incentives to optimize.

Stop fretting about people desiring to see Bitcoin crippled; their attacks should be welcomed because antifragile systems grow stronger when, and only when, presented with adversity.

That min fee at 0.0005 is 14 cents, and most users consider this to be way too high, especially if BTC goes back to $1000 and this becomes 50 cents. I kicked off a poll about tx fees and 55% of users don't want to pay more than 1 cent, 80% of users think 5 cents or less is enough of a fee.

Stop the presses, the Free Shit Army wants moar free shit!  So what?

It costs much more than 14 cents worth of electricity/equipment/labor to process a BTC tx.  The block reward has done its job, and it's time to start the process of weaning the ecosystem (IE economy) off such giant subsidies by making each BTC tx do more to earn its keep.
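Rough arithmetic behind that claim (illustrative mid-2015 ballpark figures, not exact):
Code:
# Back-of-the-envelope: what the block subsidy "pays" per transaction.
# Illustrative mid-2015 ballpark assumptions, not precise figures.
block_reward_btc = 25.0       # subsidy per block at the time
btc_price_usd = 270.0         # rough spot price (assumption)
txs_per_block = 1500          # rough count for a near-full 1MB block (assumption)

subsidy_per_tx = block_reward_btc * btc_price_usd / txs_per_block
print(f"~${subsidy_per_tx:.2f} of block subsidy spent per transaction")   # ~$4.50
# versus the ~$0.14 minimum fee being complained about above.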

So, in 2010 Satoshi was forward-looking, when the 1MB limit was several orders of magnitude larger than actual block sizes. Yet today we are no longer forward-looking, nor do we care about peak volumes, and we get ready to fiddle while Rome burns.

 Roll Eyes

If you could calm down and stop exaggerating, that would be great.
sr. member
Activity: 420
Merit: 262
July 07, 2015, 03:37:05 AM
IBLT doesn't currently exist, and other mechanisms like the relay network protocol don't care about mempool synchronization levels.

IBLT does exist as it has been prototyped by Kalle and Rusty. It is just nowhere near ready for a pull request.
It has never relayed a _single_ block, not in a lab, not anywhere. It does _not_ exist. It certainly can and will exist-- though it's not yet clear how useful it will be over the relay network-- Gavin, for example, doesn't believe it will be useful "until blocks are hundreds of megabytes".

But don't think that I'm saying anything bad about it-- I'm not. Cypherdoc was arguing that mempools were (and had to be) the same, and cited IBLT as a reason---- but it cannot currently be a reason, because it doesn't exist.  Be careful about assigning virtue to the common-fate aspect of it-- as it can make censorship much worse. (OTOH, Rusty's latest optimizations reduce the need for consistency; and my network block coding idea-- which is what inspired IBLT, but is more complex-- basically eliminates consistency pressure entirely)

Quote
I recall that you had a tepid response summarizing the benefit of IBLT as a 2x improvement.  Of course this is hugely dismissive because it ignores a very important factor in scaling systems: required information density per unit time. Blocks having to carry in 1 second all the data that earlier took 600 seconds is a bottleneck in the critical path.
It depends on what you're talking about: if you're talking about throughput it's at best a 2x improvement, if you're talking about latency it's more.  But keep in mind that the existing, widely deployed block relay network protocol reduces the data sent per already-known transaction to _two bytes_.

Compression of the exchange of differential sets is interesting. Your extensive real-world experience in codecs is really evident at the page you linked, which is outside my current knowledge. I would need to devote some time to fully digest the specifics of your proposal (and perhaps Rusty's optimizations to IBLT). I do understand the concept that error-correcting codes allow us to reconstruct a signal in a noisy channel.
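For my own intuition, here is a toy sketch of the set-reconciliation idea behind IBLT (not Rusty and Kalle's actual prototype; the table size, hash count, and hash choice are arbitrary toy settings): each side summarizes its mempool in a small table of XOR-ed txids, and subtracting the two tables then "peeling" pure cells recovers the symmetric difference without shipping either mempool.
Code:
import hashlib

K, CELLS = 3, 64   # hashes per key and table size (toy values; must exceed the expected difference)

def h(key, salt):
    return int.from_bytes(hashlib.sha256(f"{salt}:{key}".encode()).digest()[:8], "big")

def cell_indices(key):
    """K distinct cells for this key, chosen deterministically."""
    idx, salt = [], 0
    while len(idx) < K:
        i = h(key, salt) % CELLS
        if i not in idx:
            idx.append(i)
        salt += 1
    return idx

class IBLT:
    def __init__(self):
        self.count = [0] * CELLS
        self.keysum = [0] * CELLS
        self.chksum = [0] * CELLS

    def insert(self, key, sign=1):
        for i in cell_indices(key):
            self.count[i] += sign
            self.keysum[i] ^= key
            self.chksum[i] ^= h(key, "chk")

    def subtract(self, other):
        d = IBLT()
        d.count = [a - b for a, b in zip(self.count, other.count)]
        d.keysum = [a ^ b for a, b in zip(self.keysum, other.keysum)]
        d.chksum = [a ^ b for a, b in zip(self.chksum, other.chksum)]
        return d

    def decode(self):
        """Peel 'pure' cells to recover (only_in_self, only_in_other)."""
        mine, theirs, progress = set(), set(), True
        while progress:
            progress = False
            for i in range(CELLS):
                c, key = self.count[i], self.keysum[i]
                if c in (1, -1) and self.chksum[i] == h(key, "chk"):
                    (mine if c == 1 else theirs).add(key)
                    self.insert(key, sign=-c)   # remove the recovered key everywhere
                    progress = True
        return mine, theirs

# Toy usage: two mempools sharing most txids, differing by a handful.
common = {h(n, "tx") for n in range(1000)}
a, b = IBLT(), IBLT()
for tx in common | {111, 222}:     # node A also has txids 111, 222
    a.insert(tx)
for tx in common | {333}:          # node B also has txid 333
    b.insert(tx)
print(a.subtract(b).decode())      # -> ({111, 222}, {333})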

As you've pointed out today, a large hashrate will still have a latency of 0 for all the blocks it mines itself. So unless these methods can bring latency down to a level which has a holistic (in- and out-of-band game theory) stable equilibrium in the economic cascade short of oligarchy centralization, they won't necessarily stop censorship.

Let's assume we can optimize away latency (which is more general to the economics than just orphan rate) in the current design of PoW cryptocurrency, yet we are still faced with the unavoidable centralization due to block reward variance (that I explained today) and that due to transaction rate exceeding the bandwidth of home internet connections which vary widely around the world and between wireless and wired.

And then if you were really serious about Ultimate Decentralization™ (the ™ is a joke), everyone who transacts would be a miner. In that case, the bandwidth of home connections could be a critical consideration depending on the design of the consensus mining system.
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
July 07, 2015, 03:22:21 AM
my reference to Peter's argument above said nothing about mempool; I was talking about block verification times. You're obfuscating again.
In your message to me you argued that f2pool was SPV mining because "the" mempool was big. I retorted that their mempool has nothing to do with it, and besides, they can make their mempool as small as they want. You argued that the mempools were the same; I pointed out that they were not. You responded claiming my responses were inconsistent with the points about verification delay; and I then responded that no-- those comments were about verification delay, not mempool. The two are unrelated.  You seem to have taken as axiomatic that mempool == verification delay, a position which is technically unjustified but supports your preordained conclusions; then you claim I'm being inconsistent when I simply point out that these things are very different and not generally related.

Yes, that's exactly what happened.  It's very kind of gmax to so diligently care for our enfeebled but honored elder by chewing his food, changing his diapers, wiping the drool from his chin, and forgiving his inability to form new memories (much less understand new concepts).  Gmax is a saint for lovingly patting Dr. Frappe on the head when he gets cranky, instead of reaching for the Thorazine.

It's possible to construct a contrived block today that takes many minutes to verify, even within the 1MB limit; though a miner that did that would mostly be hurting themselves unless they had some arrangement with most of the hashpower to accept their block.

I'm fascinated by the trolling and research possibilities of such 'contrived' blocks.  Generating them seems less expensive than the brute force high-fee crapflood attacks recently used by the Gavinistas, but it isn't in their interest to publicize them as they contraindicate their desired (>)>1MB blocks.

In order to optimize their contrivance, one would have to understand how best to exploit CreateNewBlock's existing inefficiencies.  OTOH, being able to generate such extra-obnoxious blocks provides an excellent tool for testing proposed optimizations.
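Rough arithmetic for why such blocks are even possible (a sketch of the well-known quadratic signature-hashing behavior of legacy SIGHASH_ALL, with assumed figures):
Code:
# Back-of-the-envelope for quadratic signature hashing (assumed figures).
# Legacy SIGHASH_ALL re-serializes roughly the whole transaction for each input checked,
# so bytes hashed grow with (number of inputs) x (transaction size).
tx_size_bytes = 1_000_000          # one transaction filling a 1MB block
bytes_per_input = 180              # rough size of a simple input (assumption)
num_inputs = tx_size_bytes // bytes_per_input

bytes_hashed = num_inputs * tx_size_bytes
print(f"inputs: ~{num_inputs}, data hashed: ~{bytes_hashed / 1e9:.1f} GB")
# Several GB of serialization and hashing for a single transaction; a block
# deliberately built around this behavior can push verification time much further.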
sr. member
Activity: 420
Merit: 262
July 07, 2015, 02:58:08 AM
Certainly the UTXO size is a major concern for the viability of the system, since it sets a lower bound on the resource requirements (amount of online storage) for a full node... but it is not held in memory and has no risk of running hosts out of RAM as you claim.

One of the costs of the UTXO set is that it must be downloaded by every ephemeral miner that wants to be a full node (well, I guess I am conflating the UTXO set and the entire block chain). This has been the source of some complaints against CN coins with large block chains. Now granted, given my points recently, solo mining is really pointless, especially for Bitcoin where you need to amortize an ASIC purchase. So this is a non-issue for Bitcoin (but only because Bitcoin is already centralized), whereas it is an issue for CPU-mineable coins such as the newer CN coins.

However, I must take issue with your assertion that an unbounded UTXO set could never cause a full node to run out of RAM. Afaics it is possible to envision a level of swapping at which the system cannot keep up with the transaction rate. That is however perhaps irrelevant if, for example, solo mining is not viable due to variance for small hashrates, and bandwidth (or latency) is the predominant bound for miners with fewer resources (or who wish to remain ephemeral for privacy or any other reasons).
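For a sense of scale, a crude estimate of the UTXO footprint with illustrative numbers (your point above being that the full set lives on disk, with only a cache in RAM):
Code:
# Crude UTXO footprint estimate with illustrative numbers (not measured).
num_utxos = 20_000_000        # assumed number of unspent outputs
bytes_per_utxo = 60           # rough serialized size: outpoint + amount + script (assumption)

total_mb = num_utxos * bytes_per_utxo / 1e6
print(f"~{total_mb:.0f} MB of UTXO data")   # ~1200 MB with these assumptions
# Fits on disk easily; the pressure point is how large a cache is kept in RAM
# and whether lookups can keep up with the transaction rate.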
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
July 07, 2015, 02:54:57 AM
Rather amazing that he still is posting.  For starters, his counsel should have advised him to stop posting here.

Frap.doc is the world's leading expert on everything; he is the LeBron of Dunning-Kruger.

What makes you think he'd respect some dumb old lawyer's advice, when he refuses to swallow basic technical facts charitably spoon-fed to him by several of the world's top 10 Bitcoin experts?

His saving grace is that the absurd, pro-forma, borderline barratry lawsuit against him is being laughed out of court; the bankruptcy lawyers just did it to look busy and thus justify their fees and the endless process.
sr. member
Activity: 420
Merit: 262
July 07, 2015, 02:11:17 AM
You think that because you are blinded.

My well-informed, prescient suggestion is to wholeheartedly support the pegged side chain direction; otherwise prepare to exchange your BTC for another monetary unit or accept centralization.

I've got to put this type of comment in the "give-up" file.

...

It is only a matter of time before Bitcoin can handle a large proportion of world transactions. The key thing is not to f*ck up the process in the meantime.

Give up if you don't care about centralization. I for one think a Rottweiler will bite the arms off those who don't care:


And I don't think centralized scaling can attain the network effects of decentralized scaling. Thus I think the heavy end of the scale goes to NewCoin by orders-of-magnitude (note NewCoin might be something created by Blockstream, not limiting it to Trapqoin by Shetty One Eye).

Make your choice: do you want NewCoin pegged to BTC or not? That may be the only choice you have, and you may not even have that choice, as it appears it may be inevitable (although I am concerned about TPTB's ability to attack the federated servers if the community is reticent about the necessary OP code soft fork, but the way around that is to build the OP code into NewCoin and aim for extracting all the BTC out of Core asap).

legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
July 07, 2015, 02:04:25 AM
You think that because you are blinded.

My well-informed, prescient suggestion is to wholeheartedly support the pegged side chain direction; otherwise prepare to exchange your BTC for another monetary unit or accept centralization.

I've got to put this type of comment in the "give-up" file.
It would have merit if the following weren't the case:

x=Software & design optimizations, e.g. Bitcoin Core is orders of magnitude faster in many respects than in 2009.
y=Moore's "Law" and Nielsen's "Law" and other relevant descriptions of the rate of improvement in computing technology

x*y > rate-of-growth-of-the-world-economy  (here's a visual)
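A toy compounding of those rates (the percentages are illustrative assumptions, not forecasts):
Code:
# Toy compounding comparison: capacity improvements vs world-economy growth.
# Percentages are illustrative assumptions, not forecasts.
tech_growth = 0.30      # x*y: combined software + hardware improvement per year (assumed)
economy_growth = 0.04   # world economic growth per year (assumed)

capacity, demand = 1.0, 1.0
for year in range(1, 11):
    capacity *= 1 + tech_growth
    demand *= 1 + economy_growth
    print(f"year {year:>2}: capacity x{capacity:5.1f}  vs  demand x{demand:4.2f}")
# After 10 years capacity has grown ~13.8x while demand grew ~1.5x,
# which is the point of the inequality above.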



It is only a matter of time before Bitcoin can handle a large proportion of world transactions. The key thing is not to f*ck up the process in the meantime.

others think we should let destiny take the wheel.

I think this is the only viable option.