
Topic: Gold collapsing. Bitcoin UP. - page 143. (Read 2032248 times)

legendary
Activity: 1764
Merit: 1002
July 06, 2015, 10:13:17 PM
look at what we're facing with this latest spam attack.  note the little blip back on May 29 which was Stress Test 1.  Stress Test 2 is the blip in the middle with the huge spikes of the last couple of days on the far right.  this looks to me to be the work of a non-economic spammer looking to disrupt new and existing users via stuck tx's which coincides with the Grexit and trouble in general in the traditional fiat markets.  they want to discourage adoption of Bitcoin.  the fastest way to eliminate this attack on users is to lift the block size limit to alleviate the congestion and increase the expense of the spam:

staff
Activity: 4284
Merit: 8808
July 06, 2015, 10:09:22 PM
as you know, even Gavin talks about this memory problem from UTXO.  and yes, i read the Reddit thread that resulted, in which you participated, and i'm aware that the UTXO can be dynamically cached according to needs.
http://gavinandresen.ninja/utxo-uhoh

Gavin was insufficiently precise. The reddit thread that resulted is full of people calling Gavin a fool ( Sad ) for saying "memory" when he should have been saying fast storage.  https://twitter.com/petertoddbtc/status/596710423094788097

Why do you think it's prudent to argue this with me?

Okay, lets take a bet. Since you're so confident; surely you'll grant me 1000:1 odds?-- I'll give my side away to a public cause.

The question is "Is the entire UTXO set kept in RAM in any version of Bitcoin Core ever released?"

I will bet 3 BTC and, with the 1000:1 odds, if you lose you'll pay 3000 BTC (which I will give to the hashfast liquidators, to return it to the forum members it was taken from; which will also save you some money in the ongoing lawsuit against you).

Sounds good?  How will we adjudicate?  If not, what is your counter-offer for the terms?

Let me chime in here quickly, because I think Greg and I are talking about slightly different things.  My model was considering the time between the first moment that a pool could begin hashing on a blockheader, and when the previous block had been processed, a new non-empty block template constructed, and the hashers re-assigned to work on this non-empty block.

It looks like this time, empirically, is 15 sec (F2Pool) and 30 sec (AntPool), based on these estimates.  

Here I suspect you're suffering from an excess of empiricism without adequately delving into the mechanism.  You can directly measure that time from input to mineable on an actual node under your control, and you will observe the time is hundreds of times faster than your estimate. Why?  Miners don't magically know when their pool has new work; they'll get work in the first milliseconds and then grind on it for some time before returning work.  Even if the pool long-polls them, it takes time to replace work. So what I suspect you're actually measuring there is the latency of the mining process...  which is consistent with what we've experienced with P2Pool (5-20 second latencies from ASIC miners are common).

I noted you posted the result of a classification; did you run the same data through a simple logistic regression with prior size as the treatment? The intercept in the model would be interesting.
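
For illustration, here is a minimal sketch of that regression (my own example, not anyone's actual analysis), assuming the block data has been exported to a hypothetical CSV f2pool_blocks.csv with columns prev_size_kb (size of the previous block) and is_empty (1 if the block contained only the coinbase):

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical export: one row per F2Pool block, with the previous block's
# size in kB and whether this block was empty (0/1).
df = pd.read_csv("f2pool_blocks.csv")

X = sm.add_constant(df["prev_size_kb"])   # intercept + prior-size "treatment"
fit = sm.Logit(df["is_empty"], X).fit()
print(fit.summary())                      # the 'const' row is the intercept

# Baseline empty-block probability implied by the intercept alone:
print("baseline P(empty):", 1 / (1 + np.exp(-fit.params["const"])))

A positive, significant coefficient on prev_size_kb would point the same way as the classification result, and the intercept gives the baseline empty-block rate.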

But indeed, these conversations have been conflating several separate issues (latency vs throughput, etc.). Tricky to avoid that since they're all relevant.

but you haven't verified that f2pool or Antpool have increased their minrelaytxfee to minimize their mempools, have you?
I have, they'd previously cranked it down, and were producing small blocks and were flamed in public.  They've since turned it back up.

Quote
remember, this whole mempool discussion was based on you responding to Peter's mathematics post the other day, where you argued that block verification times were only 80ms for a 250 kB block b/c tx's had been pre-verified as they were passed around to all nodes across the network and didn't require re-verification by miners on the relay network, and that this therefore refuted his hypothesis that increasing block verification times (16-37 sec on avg) lead to SPV mining.
As PeterR points out, they only need to wait for verification to actually verify (which they're not doing today), though they may have to wait longer to include transactions---- though I point out that's not fundamental: no matter how big the backlog is, you can produce a template sufficient to completely fill a block while doing no more work than handling a mempool of twice the maximum block size (by using a tiered mempool, though no one has bothered to implement this yet-- no one has even been complaining about how long createnewblock takes, due to the ability to produce empty blocks without skipping transactions).
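
To make the tiered-mempool idea concrete, here is a rough Python sketch (my own illustration, not Bitcoin Core code; the class and constants are hypothetical): block templates are built only from a small fee-sorted top tier capped at roughly twice the maximum block size, so template construction cost stays bounded no matter how large the backlog grows.

MAX_BLOCK_BYTES = 1_000_000
TIER_LIMIT = 2 * MAX_BLOCK_BYTES          # "hot" tier capped at ~2x one block

class TieredMempool:
    # Illustrative two-tier mempool: templates are built only from the
    # small fee-sorted top tier, so the work per template is bounded by
    # the tier size rather than by the total backlog.
    def __init__(self):
        self.top = []        # (feerate, txid, size) tuples, kept small
        self.top_bytes = 0
        self.backlog = []    # everything else; would refill the top tier

    def add(self, txid, size, feerate):
        self.top.append((feerate, txid, size))
        self.top_bytes += size
        if self.top_bytes > TIER_LIMIT:
            self.top.sort(reverse=True)           # best feerate first
            while self.top_bytes > TIER_LIMIT:
                demoted = self.top.pop()          # cheapest entry
                self.top_bytes -= demoted[2]
                self.backlog.append(demoted)

    def build_template(self):
        # Greedy fill from the top tier only.
        chosen, used = [], 0
        for feerate, txid, size in sorted(self.top, reverse=True):
            if used + size <= MAX_BLOCK_BYTES:
                chosen.append(txid)
                used += size
        return chosen

Promoting backlog transactions back into the top tier as blocks drain it is omitted here for brevity.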
legendary
Activity: 1162
Merit: 1007
July 06, 2015, 09:59:41 PM
GAH! I'm not saying it's a good setting-- I'm just giving a concrete example that nodes (and miners) can control their mempool sizes, as this was at odds with cypherdoc's expectations-- instead he thought miners might be suffering because of large mempools-- and I pointed out that if their mempool was too big they could simply reduce it, and he said he didn't believe me. I don't know how I could have made it more clear, but I hope it's clear now. Smiley



yes, thanks for reminding me of that minrelaytxfee as an adjustable variable to screen the mempool.  i had read about that the other day on Reddit but forgot.  i'm not a tech guy after all. Wink

but you haven't verified that f2pool or Antpool have increased their minrelaytxfee to minimize their mempools, have you?  remember, this whole mempool discussion was based on you responding to Peter's mathematics post the other day, where you argued that block verification times were only 80ms for a 250 kB block b/c tx's had been pre-verified as they were passed around to all nodes across the network and didn't require re-verification by miners on the relay network, and that this therefore refuted his hypothesis that increasing block verification times (16-37 sec on avg) lead to SPV mining.

Let me chime in here quickly, because I think Greg and I are talking about slightly different things.  My model was considering the time between the first moment that a pool could begin hashing on a blockheader, and when the previous block had been processed, a new non-empty block template constructed, and the hashers re-assigned to work on this non-empty block.

It looks like this time, empirically, is 16 sec (F2Pool) and 35 sec (AntPool), based on these estimates.  
legendary
Activity: 1764
Merit: 1002
July 06, 2015, 09:56:24 PM
GAH! I'm not saying it's a good setting-- I'm just giving a concrete example that nodes (and miners) can control their mempool sizes, as this was at odds with cypherdoc's expectations-- instead he thought miners might be suffering because of large mempools-- and I pointed out that if their mempool was too big they could simply reduce it, and he said he didn't believe me. I don't know how I could have made it more clear, but I hope it's clear now. Smiley



yes, thanks for reminding me of that minrelaytxfee as an adjustable variable to screen the mempool.  i had read about that the other day on Reddit but forgot.  i'm not a tech guy after all. Wink

but you haven't verified that f2pool or Antpool have increased their minrelaytxfee to minimize their mempools, have you?  remember, this whole mempool discussion was based on you responding to Peter's mathematics post the other day, where you argued that block verification times were only 80ms for a 250 kB block b/c tx's had been pre-verified as they were passed around to all nodes across the network and didn't require re-verification by miners on the relay network, and that this therefore refuted his hypothesis that increasing block verification times (16-37 sec on avg) lead to SPV mining.
legendary
Activity: 1162
Merit: 1007
July 06, 2015, 09:45:25 PM
why does the 0 tx block have to come "immediately" after a large block?

They don't.  Empty blocks can come after any sized block.  But I just showed that F2Pool is more likely to produce an empty block when the previous block was large, than when the previous block was not large.  

This makes sense to me because I expect that for large blocks, there's more time between when F2Pool has just enough information to begin hashing, and when they have processed the block and sent a new non-empty blocktemplate to their hashers to work on.  If this time is longer, then there's a better chance they get lucky and mine an empty block.  See what I mean?


i think so which is also why these 0 tx blocks usually come within a minute of a large block?

Yes, exactly.  And we've just shown that, in the case of F2Pool, the effect is real.  We're not imagining it. 
legendary
Activity: 1764
Merit: 1002
July 06, 2015, 09:41:59 PM
why does the 0 tx block have to come "immediately" after a large block?

They don't.  Empty blocks can come after any sized block.  But I just showed that F2Pool is more likely to produce an empty block when the previous block was large, than when the previous block was not large.  

This makes sense to me because I expect that for large blocks, there's more time between when F2Pool has just enough information to begin hashing, and when they have processed the block and sent a new non-empty blocktemplate to their hashers to work on.  If this time is longer, then there's a better chance they get lucky and mine an empty block.  See what I mean?


i think so which is also why these 0 tx blocks usually come within a minute of a large block?
staff
Activity: 4284
Merit: 8808
July 06, 2015, 09:41:27 PM
Clean and synched mempools make for a cleaner blockchain, else garbage in - garbage out. Most mempools are synched because node owners don't usually mess with tx policy. They accept the defaults.
The blockchain itself contains substantial counter-evidence. Any block over 750k is running with changed settings, as are a substantial chunk of the transactions.  I think this is all well and good, but it's not the case that it's all consistent.

IBLT doesn't currently exist, and other mechanisms like the relay network protocol don't care about mempool synchronization levels.

IBLT does exist as it has been prototyped by Kalle and Rusty. It is just nowhere near ready for a pull request.
It has never relayed a _single_ block, not in a lab, not anywhere. It does _not_ exist. It certainly can and will exist-- though it's not yet clear how useful it will be over the relay network-- Gavin, for example, doesn't believe it will be useful "until blocks are hundreds of megabytes".

But don't think that I'm saying anything bad about it-- I'm not. Cypherdoc was arguing that mempools were (and had to be) the same, and cited IBLT as a reason---- but it cannot currently be a reason, because it doesn't exist.  Be careful about assigning virtue to the common-fate aspect of it-- as it can make censorship much worse. (OTOH, rusty's latest optimizations reduce the need for consistency; and my network block coding idea-- which is what inspired IBLT, but is more complex-- basically eliminates consistency pressure entirely)

Quote
I recall that you had a tepid response summarizing the benefit of IBLT as a 2x improvement.  Of course this is hugely dismissive because it ignores a very important factor in scaling systems: required information density per unit time. Blocks having to carry in 1 second all the data which earlier took 600 seconds is a bottleneck in the critical path.
It depends on what you're talking about: if you're talking about throughput it's at best a 2x improvement; if you're talking about latency it's more.  But keep in mind that the existing, widely deployed block relay network protocol reduces the data sent per already-known transaction to _two bytes_.

Quote
That min fee of 0.0005 BTC is about 14 cents, and most users consider this to be way too high, especially if BTC goes back to $1000 and this becomes 50 cents. I kicked off a poll about tx fees: 55% of users don't want to pay more than 1 cent, and 80% of users think 5 cents or less is enough of a fee.
https://bitcointalksearch.org/topic/just-what-is-a-fair-fee-to-send-a-bitcoin-transaction-827209
GAH! I'm not saying it's a good setting-- I'm just giving a concrete example that nodes (and miners) can control their mempool sizes, as this was at odds with cypherdoc's expectations-- instead he thought miners might be suffering because of large mempools-- and I pointed out that if their mempool was too big they could simply reduce it, and he said he didn't believe me. I don't know how I could have made it more clear, but I hope it's clear now. Smiley

legendary
Activity: 1764
Merit: 1002
July 06, 2015, 09:23:35 PM
why does the 0 tx block have to come "immediately" after a large block?

They don't.  Empty blocks can come after any sized block.  But I just showed that F2Pool is more likely to produce an empty block / "defensive block" when the previous block was large than they are when the previous block was small or medium. 



it might be interesting to see if Antminer becomes statistically significant after 2 blocks instead of 1.
legendary
Activity: 1162
Merit: 1007
July 06, 2015, 09:22:04 PM
why does the 0 tx block have to come "immediately" after a large block?

They don't.  Empty blocks can come after any sized block.  But I just showed that F2Pool is more likely to produce an empty block when the previous block was large, than when the previous block was not large.  

This makes sense to me because I expect that for large blocks, there's more time between when F2Pool has just enough information to begin hashing, and when they have processed the block and sent a new non-empty blocktemplate to their hashers to work on.  If this time is longer, then there's a better chance they get lucky and mine an empty block.  See what I mean?
legendary
Activity: 1764
Merit: 1002
July 06, 2015, 09:18:13 PM
1.  Is F2Pool/AntPool more likely to produce an empty block when the previous block is large?

2.  Is F2Pool/AntPool more likely to produce an empty block when mempool swells?

I think the answer to Q1 will be "yes." But I don't see why the answer to Q2 would be yes for any reason other than the previous block is more likely to be large when mempool swells (i.e., mempool is not the cause, just correlated).

OK, based on the data from JohhnyBravo, it looks like the answer to Q1 is "YES" for F2Pool but "NO" for AntPool.  Here's a rough summary based on the last 100 days of blocks:



If anyone's interested in how I did this, I'll give you a brief walk through.  I first put the data into three bins:
(1) blocks produced by the Miner after a small block (0 kB - 333 kB),
(2) blocks produced by the Miner after a medium block (334 kB - 666 kB), and
(3) blocks produced by the Miner after a large block (667 kB - 1000 kB).  
I also removed any data points where the Miner found a block while mining upon his own previous block.  

For a "null hypothesis," I assumed that getting an empty block was the outcome of a repeated Bernoulli trial. I used a Bernouli trial with P_empty = 3.5% for F2Pool and P_empty = 6.3% for AntPool.  

I then asked, if the null hypothesis is true, then what are the chances of getting, e.g., in the case of F2Pool, 34 (or more) empty blocks out of 619 "large-block" trials?  We'd only expect 619 x 0.035 = 22 empty blocks!

The sum of repeated Bernoulli trials has a Binomial distribution, so I integrated the corresponding Binomial distribution between 34 and 619 to determine the chances of getting this many (or more) empty blocks by dumb luck.  As we can see, there's only a 0.4% chance of this happening.  This suggests we reject the null hypothesis in favour of the idea that "Yes, F2Pool is actually more likely to produce an empty block when the previous block was large."

This also means that the effect I modelled on the weekend is real, at least for miners behaving like F2Pool.  I'm curious though, why AntPool's data is so different.

why does the 0 tx block have to come "immediately" after a large block?
legendary
Activity: 2968
Merit: 1198
July 06, 2015, 09:09:04 PM
furthermore, you ignore the obvious fact that hashers are independently minded and will leave any pool that abuses its power via all the shenanigans you dream up to scare everyone about how bad Bitcoin is.

From Meni Rosenfeld's paper, the probability that a pool (or any solo miner) will receive any payout for a day of mining is:

1 - e^(-(% of network hashrate) x 144), where there are 144 blocks per day

Thus a pool which has only 1% of the network hashrate has only a 76% chance of winning any blocks for the day. And that probability is reset the next day. Thus a pool with only 1% of the network hashrate could go days without winning a block.

This makes it very difficult to design a payout scheme (cf. the schemes Meni details) for ephemeral SPV pool miners (which can come and go as often as they like) that is equitable and yet doesn't also place the pool at risk of bankruptcy, while also allowing for the fact that running a pool is an extremely high-competition, substitutable-good, low-profit-margin business model (unless you economy-of-scale up, and especially if you use monopolistic tactics).

In short, it is nearly implausible economically to run a pool that has only 1% of the network hashrate.

Thus you can pretty well be damn sure that the pools are Sybil attacked and are lying about their controlling stakes, such that multiple 1% pools must be sharing the same pot of income from blocks in order to make the economics and math work.

QED.

Edit: note that with 2.5 minute blocks (i.e. Litecoin), it improves considerably:

1 - e^(-(% of network hashrate) x 576), where there are 576 blocks per day

Thus a pool which has only 1% of the network hashrate has a 99.7% chance of winning any blocks for the day.

However one must then factor in that latency (and thus orphan rate) becomes worse, and higher hashrate profits more than lower hashrate given any significant amount of latency in the network, as gmaxwell pointed out upthread. So it is not clear that Litecoin gained any advantage in terms of decentralization with the faster block period.

Conclusions are valid in substance but in terms of not getting blocks for days, that's not really a big problem for a pool (or solo mining farm) being run as a business. Most expenses (hosting, etc) are going to run monthly, possibly somewhat more often (paying workers), but not daily. Once you look at a one-month window it is about the same as Litecoin: almost certain to find a block in a month. With some ability to draw on reserves or credit in the event of a few bad months it should still be viable.
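
As a quick numerical check of the formula quoted above (the approximation 1 - e^(-share x blocks per period)), here is a small Python sketch comparing the daily and monthly odds for a 1% pool:

from math import exp

def p_any_block(hashrate_share, blocks_per_period):
    # P(at least one block) under the approximation used above.
    return 1 - exp(-hashrate_share * blocks_per_period)

share = 0.01                           # a 1% pool
print(p_any_block(share, 144))         # per day, 10-min blocks  (~0.76)
print(p_any_block(share, 576))         # per day, 2.5-min blocks (~0.997)
print(p_any_block(share, 144 * 30))    # per 30-day month        (~1.0)

This is consistent with the point above: days without a block are common for a 1% pool, but a whole month without one is vanishingly unlikely.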


legendary
Activity: 1764
Merit: 1002
July 06, 2015, 09:02:12 PM
just look at this.  pitiful.  just shameful that core dev allows attacking spammers, emboldened by Stress Tests 1&2, to disrupt new and existing users who can be found complaining all over Reddit with stuck tx's.  this is exactly the dynamic that Mike Hearn was talking about.  look at that level of unconf tx's, 51000, never seen before and the highly disruptive 2.90 TPS:

https://www.reddit.com/r/Bitcoin/comments/3cbpwe/new_transaction_record_just_reached_147_txs/csu5leg


legendary
Activity: 1162
Merit: 1007
July 06, 2015, 09:00:49 PM
1.  Is F2Pool/AntPool more likely to produce an empty block when the previous block is large?

2.  Is F2Pool/AntPool more likely to produce an empty block when mempool swells?

I think the answer to Q1 will be "yes." But I don't see why the answer to Q2 would be yes for any reason other than the previous block is more likely to be large when mempool swells (i.e., mempool is not the cause, just correlated).

OK, based on the data from JohhnyBravo, it looks like the answer to Q1 is "YES" for F2Pool but "NO" for AntPool.  Here's a rough summary based on the last 100 days of blocks:



If anyone's interested in how I did this, I'll give you a brief walk through.  I first put the data into three bins:
(1) blocks produced by the Miner after a small block (0 kB - 333 kB),
(2) blocks produced by the Miner after a medium block (334 kB - 666 kB), and
(3) blocks produced by the Miner after a large block (667 kB - 1000 kB).  
I also removed any data points where the Miner found a block while mining upon his own previous block.  

For a "null hypothesis," I assumed that getting an empty block was the outcome of a repeated Bernoulli trial. I used a Bernouli trial with P_empty = 3.5% for F2Pool and P_empty = 6.3% for AntPool.  

I then asked, if the null hypothesis is true, then what are the chances of getting, e.g., in the case of F2Pool, 34 (or more) empty blocks out of 619 "large-block" trials?  We'd only expect 619 x 0.035 = 22 empty blocks!

The sum of repeated Bernoulli trials has a Binomial distribution, so I integrated the corresponding Binomial distribution between 34 and 619 to determine the chances of getting this many (or more) empty blocks by dumb luck.  As we can see, there's only a 0.4% chance of this happening.  This suggests we reject the null hypothesis in favour of the idea that "Yes, F2Pool is actually more likely to produce an empty block when the previous block was large."
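
For anyone who wants to reproduce that tail probability, here is a minimal sketch using the binomial distribution directly (scipy; the counts are the ones quoted above):

from scipy.stats import binom

n, p = 619, 0.035          # "large-block" trials and F2Pool's baseline empty rate
observed = 34              # empty blocks actually seen after large blocks

expected = n * p                          # ~21.7 empties expected under the null
p_value = binom.sf(observed - 1, n, p)    # P(X >= 34) under Binomial(619, 0.035)
print(expected, p_value)                  # compare with the ~0.4% figure above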

This also means that the effect I modelled on the weekend is real, at least for miners behaving like F2Pool.  I'm curious though, why AntPool's data is so different.
legendary
Activity: 1764
Merit: 1002
July 06, 2015, 08:43:31 PM
no, memory is not just used for 1MB blocks.  it's also used to store the mempools plus the UTXO set.  large block attacks
Again, you're wrong on the technology. The UTXO set is not held in ram. (There is caching, but it's arbitrary in size, controlled by the dbcache argument).

as you know, even Gavin talks about this memory problem from UTXO.  and yes, i read the Reddit thread that resulted, in which you participated, and i'm aware that the UTXO can be dynamically cached according to needs.
http://gavinandresen.ninja/utxo-uhoh

Quote
have the potential to collapse a full node by overloading the memory.  at least, that's what they've been arguing.
"They" in that case is sketchy nutballs advocating these "stress tests", and _you_ arguing that unconfirmed transactions are the real danger.

Super weird that you're arguing that the Bitcoin network is overloaded at the current average space usage in blocks, while you're calling your system "under utilized" when you're using a similar proportion of your disk and enough of your ram to push you deeply into swap.

i didn't say this full block spam attack we're undergoing wasn't affecting my node at all.  sure, i'm in swap, b/c of the huge #unconf tx's, but it hasn't shut down or stressed my nodes to any degree.  one of the arguments by Cripplecoiners was that these large block attacks would shut full nodes down from destabilization resulting in centralization.  i'm not seeing that.
sr. member
Activity: 420
Merit: 262
July 06, 2015, 08:09:08 PM
furthermore, you ignore the obvious fact that hashers are independently minded and will leave any pool that abuses its power via all the shenanigans you dream up to scare everyone about how bad Bitcoin is.

From Meni Rosenfeld's paper, the probability that a pool (or any solo miner) will receive any payout for a day of mining is:

1 - e^(-(% of network hashrate) x 144), where there are 144 blocks per day

Thus a pool which has only 1% of the network hashrate has only a 76% chance of winning any blocks for the day. And that probability is reset the next day. Thus a pool with only 1% of the network hashrate could go days without winning a block.

This makes it very difficult to design a payout scheme (cf. the schemes Meni details) for ephemeral SPV pool miners (which can come and go as often as they like) that is equitable and yet doesn't also place the pool at risk of bankruptcy, while also allowing for the fact that running a pool is an extremely high-competition, substitutable-good, low-profit-margin business model (unless you economy-of-scale up, and especially if you use monopolistic tactics).

In short, it is nearly implausible economically to run a pool that has only 1% of the network hashrate.

Thus you can pretty well be damn sure that the pools are Sybil attacked and are lying about their controlling stakes, such that multiple 1% pools must be sharing the same pot of income from blocks in order to make the economics and math work.

QED.

Edit: note that with 2.5 minute blocks (i.e. Litecoin), it improves considerably:

1 - e^(-(% of network hashrate) x 576), where there are 576 blocks per day

Thus a pool which has only 1% of the network hashrate has a 99.7% chance of winning any blocks for the day.

However one must then factor in that latency (and thus orphan rate) becomes worse, and higher hashrate profits more than lower hashrate given any significant amount of latency in the network, as gmaxwell pointed out upthread. So it is not clear that Litecoin gained any advantage in terms of decentralization with the faster block period.
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
July 06, 2015, 07:49:01 PM
There is no requirement that mempools be in sync -- in fact, they're not, and the whole purpose of the blockchain is to synchronize nodes.  The mempools of nodes with identical fee and filtering policies which are similarly positioned on the network will be similar, but any change in their policies will make them quite different.

Clean and synched mempools make for a cleaner blockchain, else garbage in - garbage out. Most mempools are synched because node owners don't usually mess with tx policy. They accept the defaults. Pools like Eligius with very different policies are the outliers. IBLT will help by incentivising node owners to converge to the same policies.

IBLT doesn't currently exist, and other mechanisms like the relay network protocol don't care about mempool synchronization levels.

IBLT does exist, as it has been prototyped by Kalle and Rusty. It is just nowhere near ready for a pull request. Since this block propagation efficiency was identified there could have been a lot of work done in Core Dev to advance it further (though I fully accept that other major advances like headers-first were in train and drew down finite resources). I recall that you had a tepid response summarizing the benefit of IBLT as a 2x improvement.  Of course this is hugely dismissive because it ignores a very important factor in scaling systems: required information density per unit time. Blocks having to carry in 1 second all the data which earlier took 600 seconds is a bottleneck in the critical path.
It is the LN which doesn't exist yet and will arrive far too late to help with scaling when blocks are (nearer to) averaging 1MB.

the 1MB was either forward-looking, set too high, or only concerned about the peak (and assuming the average would be much lower) ... or a mixture of these cases.

So, in 2010 Satoshi was forward-looking, when the 1MB was several orders of magnitude larger than block sizes. Yet today we are no longer forward-looking, nor do we care about peak volumes, and we get ready to fiddle while Rome burns. The 1MB is proving a magnet for spammers as every day the average block size creeps up and makes their job easier. A lot of people have a vested interest in seeing Bitcoin crippled. We should not provide them an ever-widening attack vector.

To further make the point about mempools, here is what the mempool looks like on a node with mintxfee=0.0005 / minrelaytxfee=0.0005 set:


$ ~/bitcoin/src/bitcoin-cli  getmempoolinfo
{
    "size" : 301,
    "bytes" : 271464
}



That min fee of 0.0005 BTC is about 14 cents, and most users consider this to be way too high, especially if BTC goes back to $1000 and this becomes 50 cents. I kicked off a poll about tx fees: 55% of users don't want to pay more than 1 cent, and 80% of users think 5 cents or less is enough of a fee.
https://bitcointalksearch.org/topic/just-what-is-a-fair-fee-to-send-a-bitcoin-transaction-827209
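
For reference, the arithmetic behind those figures, as a sketch assuming a BTC price of roughly $280 (about where it traded in early July 2015):

MIN_FEE_BTC = 0.0005
for price_usd in (280, 1000):             # ~July 2015 price vs a $1000 BTC
    cents = MIN_FEE_BTC * price_usd * 100
    print(f"${price_usd}/BTC -> {cents:.0f} cents per tx")   # 14 and 50 cents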

Maybe this is naive and unrealistic long-term, and a viable fees market (once the reward is lower) could push this up a little. Or is this another case where the majority of users are wrong yet again?

Peter made the point that Bitcoin is at a convergence of numerous disciplines, of which no-one is an expert in all. I suggest that while your technical knowledge is absolutely phenomenal, your grasp of the economic incentives in the global marketplace is much weaker.
While Cypherdoc might have had errors in judgment in the Hashfast matter (I know zero about this, and have zero interest in it), his knowledge of the financial marketplace is also phenomenal, and he correctly assesses how Bitcoin can be an economic force for good, empowering people trapped in dysfunctional 3rd-world economies. He is right that Bitcoin has to scale, and cheaply for users, to maintain a virtuous feedback cycle of ecosystem growth, hashing power growth and SoV.
Lots of people will not pay fees of 14c per tx when cheaper alternatives like LTC are out there. I see the recent spike in it (disclaimer: I don't have any) as the market "pricing in" that BTC tx throughput is going to be artificially capped. While BTC tx throughput will always be capped by technology, we should not be capping it at some lower level in the misguided belief that this "helps".


legendary
Activity: 4690
Merit: 1276
July 06, 2015, 07:28:49 PM
...
Just from knowing a little about database tuning and ram vs. disk-backed memory, I have always wondered if people have made projections about performance of the validation process under different scenarios and whether they can/will become problematic.  One thing I've always wondered is whether it would be possible to structure transactions such that they would load validation processes too heavily on cue, particularly if it is a common case to push more and more data out of the dbcache.  Any thoughts on this that can be quickly conveyed?

Most of the thought has just been of the form "the utxo set size needs to be kept down", with an emphasis on the minimum resources to run a full node over the long term.  The database itself has n log n behavior, though if the working set is too large the performance falls off--and the fall-off is only enormous for non-SSD drives.  Maybe the working set size is owed more attention, but my thinking there is that user tolerance for resource consumption kicks in long before that's a serious issue.

When you talk about "would it be possible", do you mean an attack?  It's possible to construct a contrived block today that takes many minutes to verify, even within the 1MB limit; though a miner that did that would mostly be hurting themselves unless they had some arrangement with most of the hashpower to accept their block.

Thanks for the input.  Yes, as an attack.  Say, for instance, one primed the blockchain with a lot of customized high-overhead transactions over a period of time.  Then, when one wished to create a disruption, act on all of them at once, thereby upsetting those who were doing real validation.

The nature of the blockchain being what it is, I see an attack being most productive at creating a period of unusability of Bitcoin rather than a full scale failure (excepting a scenario where secret keys could be compromised through a flaw in the generation process which would, of course, be highly devastating.)

I was unaware that even today it would be possible to formulate transactions of the verification complexity that you mention.  It would be interesting to know if anyone is watching the blockchain for transactions which seem to be deliberately designed this way.

sr. member
Activity: 420
Merit: 262
July 06, 2015, 07:22:55 PM
Without reading every page in this thread, I'll add my two cents worth here.

I can't see a reason why Gold can't rise along with Bitcoin at the moment, just at different rates. Whereas Bitcoin can approach $1000 again by the end of the year (nearly 4x the current price), similarly Gold can approach $2000 by the end of the year (nearly 2x the current price). Neither Bitcoin nor Gold is undermined by debt, compared to all the trillions of dollars in stocks and bonds which are leveraged to general confidence in elite lending strategies.

Yeah, I don't think it makes sense to come up with the idea that Bitcoin and precious metals would be mutually exclusive. I'm pretty sure that both will rise. Even if gold might ultimately be replaced by Bitcoin I doubt that this process will be fast enough to obstruct the general upward momentum of gold in a collapsing world economy.

After all, Bitcoin's concept is like virtual gold: The supply is limited, it's very difficult to counterfeit and you have to put in substantial effort to obtain it.

The key phase shift between gold and cryptocoin will likely come after 2017, when gold will be much easier for the Rottweilers to expropriate, steal, plunder, declare as Civil Asset Forfeiture, etc. See my upthread discussion with OROBTC (or just go to his profile and read his posts as he posts infrequently).
staff
Activity: 4284
Merit: 8808
July 06, 2015, 07:09:19 PM
for each block in the Blockchain, which will help answering Q1.  Does anyone know where I can get comprehensive data on the typical node's mempool size versus time to help answer Q2?
No idea, I'm not aware of anything that tracks that-- also, what does "typical" mean? Do you mean stock unmodified Bitcoin Core?

I expect correlation between empty blocks and mempool size-- though not for the reason you were expecting here: Createnewblock takes a long time, easily as much as 100ms, as it sorts the mempool multiple times-- and no one has bothered optimizing this at all because the standard mining software will mine empty blocks while it waits for the new transaction list. So work generated in the first hundred milliseconds or so after a new block will usually be empty. (Of course miners stay on the initial work they got for a much longer time than 100ms.)

This is, however, unrelated to SPV mining-- in that case everything is still verified. As many people have pointed out (even in this thread) the interesting thing here isn't empty blocks, it's the mining on an invalid chain.

And before someone runs off with an argument that this aspect of the behavior instead defines some kind of upper limit-- optimizing the mempool behavior would be trivial if anyone cared to; presumably people will care to when the fees they lose are non-negligible.  Beyond eliminating the inefficient copying and such, the simple expedient of running a two-stage pool, where block creation is done against a smaller pool that contains only enough transactions for 2 blocks (which is refilled from a bigger one), would eliminate virtually all the cost. Likewise, as I pointed out up-thread, incrementing your minfee can make your mempool as small as you like (the data I captured before was at a time when nodes with a default fee policy had 2.5 MB mempools).
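
For anyone curious about the createnewblock cost mentioned above, here is a rough way to time it against your own node (a sketch, assuming a local bitcoind reachable by bitcoin-cli; post-segwit versions require the {"rules": ["segwit"]} argument shown, while 2015-era nodes accept a bare getblocktemplate call):

import json, subprocess, time

def time_getblocktemplate(runs=10):
    # Rough timing of template construction, including RPC overhead.
    samples = []
    for _ in range(runs):
        t0 = time.time()
        out = subprocess.check_output(
            ["bitcoin-cli", "getblocktemplate", '{"rules": ["segwit"]}'])
        samples.append(time.time() - t0)
    tmpl = json.loads(out)
    print("transactions in template:", len(tmpl["transactions"]))
    print("min / avg call time (s):", min(samples), sum(samples) / len(samples))

time_getblocktemplate()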

First, nice try pretending UTXO is not potentially a memory problem. We've had long debates about this on this thread so you are just being contrary.
Uh. I don't care what the consensus of the "Gold collapsing" thread is, the UTXO set is not stored in memory. It's stored on disk, in the .bitcoin/chainstate directory.  (And as you may note, a full node at initial startup uses much less memory than the current size of the UTXO.) Certainly the UTXO size is a major concern for the viability of the system, since it sets a lower bound on the resource requirements (amount of online storage) for a full node... but it is not held in memory and has no risk of running hosts out of ram as you claim.

Just from knowing a little about database tuning and ram vs. disk-backed memory, I have always wondered if people have made projections about performance of the validation process under different scenarios and whether they can/will become problematic.  One thing I've always wondered is whether it would be possible to structure transactions such that they would load validation processes too heavily on cue, particularly if it is a common case to push more and more data out of the dbcache.  Any thoughts on this that can be quickly conveyed?
Most of the thought has just been of the form "the utxo set size needs to be kept down", with an emphasis on the minimum resources to run a full node over the long term.  The database itself has n log n behavior, though if the working set is too large the performance falls off--and the fall-off is only enormous for non-SSD drives.  Maybe the working set size is owed more attention, but my thinking there is that user tolerance for resource consumption kicks in long before that's a serious issue.

When you talk about "would it be possible", do you mean an attack?  It's possible to construct a contrived block today that takes many minutes to verify, even within the 1MB limit; though a miner that did that would mostly be hurting themselves unless they had some arrangement with most of the hashpower to accept their block.
sr. member
Activity: 420
Merit: 262
July 06, 2015, 06:50:13 PM
Without reading every page in this thread, I'll add my two cents worth here.

I can't see a reason why Gold can't rise along with Bitcoin at the moment, just at different rates. Whereas Bitcoin can approach $1000 again by the end of the year (nearly 4x the current price), similarly Gold can approach $2000 by the end of the year (nearly 2x the current price). Neither Bitcoin nor Gold is undermined by debt, compared to all the trillions of dollars in stocks and bonds which are leveraged to general confidence in elite lending strategies.

mymy you are severely out of context considering the last 1000 or so pages you should have read here. Tongue

anyway, do not forget about how the gold market is rigged, rotten from its heart by the FED Masters, who nonetheless deem it worth accumulating/stealing shit tons of it @FortKnox.

bitcorn and popcoin is cheap now too tho Wink

I believe both of you are so incredibly removed from reality, that it boggles my mind.

Let me try to help you, and I mean that sincerely.

We are coming into a low for private assets[1] because for the moment the contagion in Europe is driving international capital flows (capital follows capital due to the wealth effect where Δ flow != Δ mcap) into the short end of the bond curve in the core EU economies in particular Germany (and away from the long-end and peripheral EU bond markets). October will be the bottom for private assets[1], after which they will begin to rise again as they did after their 2008 implosion (dollar and US stocks were making a phase transition from public to private alignment over this period).

So you will see a radical bottom in gold and BTC roughly this Sept or Oct, probably south of $850 and $150. I am thinking possibly double digits for BTC, with $100 as a psychological barrier that is necessary to shake out all the fools who bought at $600.

New all-time highs for private assets will come in 2016 or 2017. By the end of 2017, the dollar and US stocks will fall away from private assets as the influx of safe-haven capital will have peaked and the strong dollar will have choked off the US economy. Then the USA will go over the cliff in 2018, being the last economy still standing in the world, taking us into a global economic abyss of epic deflation.

Private assets will skyrocket after 2017, but they will also be hunted by the governments like mad dog Rottweilers munching on your arm (former US Treasury official, "we will burn the fingertips of goldbugs up to their armpits").

As for the manipulation thesis, it is utter nonsense. For example, Armstrong totally annihilated Fekete's backwardation mumbo-jumbo, especially since Armstrong is the one who taught the Arabs to lease their gold to earn income to work around the anti-usury provision of the Islamic religion. Numerous other essays from Armstrong have explained why the nutter tinfoil hats are delusional about the manipulation argument. I don't have time to repeat all that. Do yourself a research favor.


[1] which includes the dollar, US stocks, gold, bitcoin, because these are all aligned now as safe haven juxtaposed against the European contagion, i.e. the dollar and USA will receive a stampede of capital fleeing Europe after the October 2015 BIG BANG explosion of the sovereign debt (loaned from EU banks) contagion in Europe.