
Topic: Gold collapsing. Bitcoin UP. - page 85.

legendary
Activity: 1414
Merit: 1000
July 29, 2015, 03:38:43 PM
notice how the extra fees paid by full blocks actually strengthened the mining hashrate during the last attacks.  probably the extra revenue from tx fees encouraged miners to bring on more hashrate.  that's a good thing and could actually be even better if they were allowed to harvest/clear all the additional fees in the bloated mempools:



your willingness to connect two dots is astounding
legendary
Activity: 1372
Merit: 1000
July 29, 2015, 03:36:13 PM
Lightning network and sidechains are not magical things, they're orthogonal and not alternatives to relay improvements. But since you bring them up: They're both at a _further_ level of development than IBLT at the moment; though given the orthogonality it's irrelevant except to note how misdirected your argument is...

I appreciate the further detail that you describe regarding the relay situation. So it seems that you are not optimistic that block propagation efficiency can have a major benefit for all full nodes in the near future. Yet block propagation overhead is probably the single biggest argument for retaining the 1MB cap as long as possible, to limit volume growth to a certain extent.

I'm not sure about that.  My opinion is that the biggest argument for retaining the cap (or, at least, not introducing a 20x plus exponential increase or something like that) is the burden on full nodes due to (cumulative) bandwidth, processing power and, last but not least, disk space required.  Of course, for miners the situation may be different - but from my own point of view (running multiple full nodes for various things but not mining) the block relay issue is not so important.  As gmaxwell pointed out, except for reducing latency when a block is found it does relatively little, at best halving the total bandwidth required.  Compare this to the proposed 20x or 8x increase in the block size.

Bandwidth is always a much bigger concern than blockchain usage on disk. TB disks are very cheap, v0.11 has pruning.
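A rough back-of-envelope sketch of the scale involved (illustrative only; the 10-minute block interval is the only hard number here, and the 2x relay-overhead factor is purely an assumption):

Code:
# Yearly full-node burden at different block sizes (rough, assumed figures).
BLOCKS_PER_YEAR = 6 * 24 * 365   # one block per ~10 minutes -> ~52,560 blocks
RELAY_FACTOR = 2                 # crude assumption: tx relay + block relay overhead

for block_mb in (1, 8, 20):
    disk_gb = block_mb * BLOCKS_PER_YEAR / 1024    # chain growth per year
    bandwidth_gb = disk_gb * RELAY_FACTOR          # very rough total bandwidth
    print(f"{block_mb:>2} MB blocks: ~{disk_gb:,.0f} GB/yr on disk, "
          f"~{bandwidth_gb:,.0f} GB/yr bandwidth")

At 20 MB the chain alone approaches a terabyte per year, and bandwidth grows at least as fast, which is the concern being raised here.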

Block (1) validation time as well as (2) propagation time are both issues for mining security and avoiding excessive forks/orphans which prolong effective confirmations.
They are separate issues however, and you can have either problem without the other in different blocks, or both together.

Both of these are exacerbated by block size.

As eager as I am for a block size increase for scalability, I'm not convinced that it is yet warranted given the risks...  The developers are aware of the issues.  They aren't goofing off.  It is just better to get it right than to get it done.
Validation time, in itself, is not an issue; miners have approximately 10 minutes to validate. The issue is not the time, it's that you can hack the protocol and validate nothing, or force your competitor to waste time validating to gain an advantage (this is also a temporary hack and, if anything, highlights where development should occur).

Propagation time is a feature, not an issue (a symbiotic byproduct) that adds to the incentives that make bitcoin work.

There are hypothetical issues with how these features could be abused. The problem is that the developers are not separating the hypothetical issues from the features and working on that; some are instead working to change the features.
hero member
Activity: 644
Merit: 504
Bitcoin replaces central, not commercial, banks
July 29, 2015, 03:34:45 PM

Greg has made it abundantly clear that he is a Bear on Bitcoin.  he should step down as a core dev.

and what's this about him not thinking Bitcoin can be used on small devices?  smartphones are KEY to Bitcoin's long term success.

Yeah... maybe you need to read this again. What did I tell you about putting words in people's mouths?

"A bear on Bitcoin"  Cheesy So much non sense!
legendary
Activity: 1764
Merit: 1002
July 29, 2015, 03:31:43 PM

Greg has made it abundantly clear that he is a Bear on Bitcoin.  he should step down as a core dev.

and what's this about him not thinking Bitcoin can be used on small devices?  smartphones are KEY to Bitcoin's long term success.
hero member
Activity: 644
Merit: 504
Bitcoin replaces central, not commercial, banks
July 29, 2015, 03:28:21 PM
Now it's Frap.doc's turn to get rekt:

Piling every proof-of-work quorum system in the world into one dataset doesn't scale.

Bitcoin users might get increasingly tyrannical about limiting the size of the chain so it's easy for lots of users and small devices.


 Cheesy

i find it amusing that you bash Satoshi on the one hand, but when you find a morsel of a quote of his (which you've misread) you latch onto it as gospel.

if you read the actual link you posted, Satoshi was talking about how Bitcoin should not be combined with BitDNS.  he's not even talking about tx's.  he wanted them to be separate with their own fates.  he also drew an extreme example of how BitDNS might want to include other huge datasets while Bitcoin might want to keep it small, as an example of how the decision making might diverge between the two.  not that Bitcoin users wanted to keep a small blockchain.


Bitcoin users might get increasingly tyrannical about limiting the size of the chain so it's easy for lots of users and small devices.


 Huh

Are you really trying to spin this one again?
legendary
Activity: 1764
Merit: 1002
July 29, 2015, 03:22:56 PM
Now it's Frap.doc's turn to get rekt:

Piling every proof-of-work quorum system in the world into one dataset doesn't scale.

Bitcoin users might get increasingly tyrannical about limiting the size of the chain so it's easy for lots of users and small devices.


 Cheesy

i find it amusing that you bash Satoshi on the one hand, but when you find a morsel of a quote of his (which you've misread) you latch onto it as gospel.

if you read the actual link you posted, Satoshi was talking about how Bitcoin should not be combined with BitDNS.  he's not even talking about tx's.  he wanted them to be separate with their own fates.  he also drew an extreme example of how BitDNS might want to include other huge datasets while Bitcoin might want to keep it small, as an example of how the decision making might diverge between the two.  not that Bitcoin users wanted to keep a small blockchain.
legendary
Activity: 1246
Merit: 1010
July 29, 2015, 03:21:55 PM
anyone concerned about the BTC lower high?
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
July 29, 2015, 03:08:23 PM
-
#rekt

   >It was _well_ .... understood that the users of Bitcoin would wish to protect its decentralization by limiting the size of the chain to keep it verifiable on small devices.

No it wasn't. That is something you invented yourself much later. "Small devices" isn't even defined anywhere, so there can't have been any such understanding.

Hearn #rekt confirmed:

Piling every proof-of-work quorum system in the world into one dataset doesn't scale.

Bitcoin users might get increasingly tyrannical about limiting the size of the chain so it's easy for lots of users and small devices.


 Cheesy
newbie
Activity: 28
Merit: 0
July 29, 2015, 02:58:50 PM
he's backed off the 100kB tx limit in deference to limiting the #sigops and simplifying the hashing process.  for details, someone provide the link to his ML post.

Thanks for the update! The effect on transaction validation time should be essentially the same, though... it should be enough to make 'overly complicated' blocks impossible for the time being.
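For what it's worth, the general idea behind limiting hashing work rather than transaction size can be sketched like this (a toy illustration only, not Gavin's actual rule; the budget constant and the quadratic cost model are assumptions):

Code:
# Toy sketch: cap the worst-case signature-hashing work a block may demand,
# rather than capping individual transaction size.
MAX_BLOCK_SIGHASH_BYTES = 1_300_000_000   # hypothetical per-block budget

def block_acceptable(txs):
    """txs: list of (tx_size_bytes, n_inputs) pairs for every tx in the block."""
    # assumed legacy cost model: each input re-hashes roughly the whole tx
    work = sum(n_inputs * tx_size for tx_size, n_inputs in txs)
    return work <= MAX_BLOCK_SIGHASH_BYTES

This bounds validation time for "overly complicated" blocks without forbidding large but simple transactions.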
legendary
Activity: 1764
Merit: 1002
July 29, 2015, 02:52:01 PM

1-3mb are ok.  8mb is too much right now (yes the miners are wrong, there's bad stuff they haven't seen yet).
Getting blocks that take >10mins to validate is not a good thing.


I believe Gavin wants to limit TXN size to 100kiB at the same time the bigger block hard fork comes into effect. Shouldn't this very much remove your worries about an increase in blocksize?

And, yes, I agree, faster validation is even better than putting another artificial limit into the code base.


he's backed off the 100kB tx limit in deference to limiting the #sigops and simplifying the hashing process.  for details, someone provide the link to his ML post.
legendary
Activity: 1764
Merit: 1002
July 29, 2015, 02:50:22 PM
i think this proves that miners are perfectly capable of handling bigger blocks as evidenced by the "no delays" in confirmation intervals.  if anything i find it interesting that they get sped up to <10 min during these attacks.  with bigger blocks, we'd clear out that mempool in an instant:



Well, no.
Bigger blocks just make the "attacks" marginally more expensive.  
Individually, I could also swamp the mempool at any of the proposed sizes.

The "attacks" are providing some funding, so I don't really mind them all that much.  This tragedy of the commons is not quite so tragic as the UTXO set growth and chain size would be with a similar attack on larger blocks, especially if combined with a validation time attack (TX with many inputs and outputs in a tangled array).
Very large blocks would be a much more vulnerable environment in which to conduct such an attack.  We'll get to the point of ameliorating these vulnerabilities, and the right people are working on it.

Patience...  Bitcoin is still in beta, it's so young.  Let's give it the chance to grow up without breaking it along the way.

so looking at the peak of the last attack, ~40 BTC per day worth of tx fees were harvested.  let's say we go to 20MB blocks tomorrow.  since real tx's only amount to ~500kB per block worth of data, you'd have to essentially fill the 20MB all by yourself and spend roughly 20 × 40 BTC = 800 BTC per day, give or take.  at $290/BTC that would equal $232,000 per day.  and then you'd have to sustain that to even dent the network; say for a month.  that would cost you $7,000,000 for one month.  you sure you can do that all by yourself?



Why would you doubt it?

b/c i'm assuming it would be cost prohibitive for you?  and pretty much anyone else, i might add, although i'm sure you'd argue not for a gvt; which actually might be true, which is why i advocate NO LIMIT, as that would throw a huge amount of uncertainty onto just how successfully one could jack the mempool in that scenario: it removes user disruption and thus greatly increases the financial risk ANY attacker would be subject to.
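Making that arithmetic explicit (a sketch that just re-runs the numbers assumed in this post: ~40 BTC/day of fees at roughly full blocks today, a hypothetical 20 MB cap, $290/BTC, a 30-day campaign):

Code:
# Re-running the cost estimate above with the poster's own assumptions.
ATTACK_FEES_BTC_PER_DAY = 40   # fee take observed during the recent "attack"
PROPOSED_BLOCK_MB = 20         # hypothetical larger cap
BTC_USD = 290                  # spot price used in the post
DAYS = 30                      # sustain the attack for a month

daily_btc = ATTACK_FEES_BTC_PER_DAY * PROPOSED_BLOCK_MB   # ~800 BTC/day
daily_usd = daily_btc * BTC_USD                           # ~$232,000/day
monthly_usd = daily_usd * DAYS                            # ~$7,000,000/month
print(f"~{daily_btc:.0f} BTC/day, ~${daily_usd:,.0f}/day, ~${monthly_usd:,.0f}/month")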

Quote
But in your example of 20MB, there are much easier and cheaper ways to DoS the network, as mentioned.

i know; your normal-sized, convoluted multi-input block would have to, btw, be constructed as a non-standard tx block that is self-mined by a miner, which makes it unlikely b/c you'd have to believe a miner would be willing to risk his reputation and financial viability by attacking the network in such a way, which has repercussions. that's also good news in that it removes a regular spammer or gvt attacker (non-miner) from doing this. and finally, that type of normal-sized but convoluted-validation block will propagate VERY slowly, which risks orphaning, which makes that attack also unlikely.
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
July 29, 2015, 02:37:10 PM
The introduction of sidechains, LN and whatever other solutions is a complicated solution. It's also a solution motivated by a desire to fix a perceived economic issue, rather than sticking to the very simple issue at hand. It is the very opposite of what you are claiming to be important, that software should be kept simple.

That is a contradiction.

The simple solution is to remove the artificial cap. A cap that was put in place to prevent DDOS.

Your reference to CVE-2013-2292 is just distraction. It is a separate issue, one that exists now and would continue to exist with a larger block size.

Bloating Layer 1 is a complicated solution; scaling at Layer 2+ is an elegant one.

You still don't understand Tanenbaum's maxim.  Its point isn't 'keep software simple FOREVER NO MATTER WHAT.'  That is your flawed simpleton's interpretation.

"Fighting features" means ensuring a positive trade-off in terms of security and reliability, instead of carelessly and recklessly heaping on additional functionality without the benefit of an adversarial process which tests their quality and overall impact.

One does not simply "remove the artificial cap."  You may have noticed some degree of controversy in regard to that proposal.  Bitcoin is designed to strenuously resist (i.e. fight) hard forks.  Perhaps you were thinking of WishGrantingUnicornCoin, which leaps into action the moment anyone has an idea and complies with their ingenious plan for whatever feature or change they desire.

Like DoS, CVE-2013-2292, as an issue that exists now, is fairly successfully mitigated by the 1MB cap.  It is not a separate concern because larger blocks exacerbate the problem in a superlinear manner.  You don't get to advocate 8MB blocks, but then wave your hands around eschewing responsibility when confronted with the immediate entailment of purposefully constructed 8MB tx taking 64 times longer to process than a 1MB one.  The issue is intrinsic to larger blocks, which is why Gavin proposed a 100k max tx size be married to any block size increase.
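The 64x figure comes from the quadratic scaling of legacy signature hashing: verifying each input re-hashes (roughly) the whole transaction, so a tx 8x larger does about 8 × 8 = 64x the hashing work. A simplified model (the per-input size is an assumption, not consensus code):

Code:
# Simplified model of legacy sighash cost for a transaction packed with inputs.
def sighash_bytes(tx_size_bytes, bytes_per_input=180):
    """Approximate total bytes hashed to verify every input of one transaction."""
    n_inputs = tx_size_bytes // bytes_per_input   # fill the tx with inputs
    return n_inputs * tx_size_bytes               # each input hashes ~the whole tx

ratio = sighash_bytes(8_000_000) / sighash_bytes(1_000_000)
print(f"8 MB / 1 MB hashing work: ~{ratio:.0f}x")   # ~64x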

Fully parsed, what you are claiming is

Quote
The simple solution is to [s]remove the artificial cap[/s] hard fork Bitcoin.

Do you realize how naive that makes you look?
newbie
Activity: 28
Merit: 0
July 29, 2015, 02:36:56 PM

1-3mb are ok.  8mb is too much right now (yes the miners are wrong, there's bad stuff they haven't seen yet).
Getting blocks that take >10mins to validate is not a good thing.


I believe Gavin wants to limit TXN size to 100kiB at the same time the bigger block hard fork comes into effect. Shouldn't this very much remove your worries about an increase in blocksize?

And, yes, I agree, faster validation is even better than putting another artificial limit into the code base.
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
July 29, 2015, 02:36:30 PM
i think this proves that miners are perfectly capable of handling bigger blocks as evidenced by the "no delays" in confirmation intervals.  if anything i find it interesting that they get sped up to <10 min during these attacks.  with bigger blocks, we'd clear out that mempool in an instant:



Well, no.
Bigger blocks just make the "attacks" marginally more expensive.  
Individually, I could also swamp the mempool at any of the proposed sizes.

The "attacks" are providing some funding, so I don't really mind them all that much.  This tragedy of the commons is not quite so tragic as the UTXO set growth and chain size would be with a similar attack on larger blocks, especially if combined with a validation time attack (TX with many inputs and outputs in a tangled array).
Very large blocks would be a much more vulnerable environment in which to conduct such an attack.  We'll get to the point of ameliorating these vulnerabilities, and the right people are working on it.

Patience...  Bitcoin is still in beta, it's so young.  Let's give it the chance to grow up without breaking it along the way.

so looking at the peak of the last attack, ~40 BTC per day worth of tx fees were harvested.  let's say we go to 20MB blocks tomorrow.  since real tx's only amount to ~500kB per block worth of data, you'd have to essentially fill the 20MB all by yourself and spend roughly 20 × 40 BTC = 800 BTC per day, give or take.  at $290/BTC that would equal $232,000 per day.  and then you'd have to sustain that to even dent the network; say for a month.  that would cost you $7,000,000 for one month.  you sure you can do that all by yourself?



Why would you doubt it?

But in your example of 20MB, there are much easier and cheaper ways to DoS the network, as mentioned.
legendary
Activity: 1764
Merit: 1002
July 29, 2015, 02:32:48 PM
i think this proves that miners are perfectly capable of handling bigger blocks as evidenced by the "no delays" in confirmation intervals.  if anything i find it interesting that they get sped up to <10 min during these attacks.  with bigger blocks, we'd clear out that mempool in an instant:



Well, no.
Bigger blocks just make the "attacks" marginally more expensive.  
Individually, I could also swamp the mempool at any of the proposed sizes.

The "attacks" are providing some funding, so I don't really mind them all that much.  This tragedy of the commons is not quite so tragic as the UTXO set growth and chain size would be with a similar attack on larger blocks, especially if combined with a validation time attack (TX with many inputs and outputs in a tangled array).
Very large blocks would be a much more vulnerable environment in which to conduct such an attack.  We'll get to the point of ameliorating these vulnerabilities, and the right people are working on it.

Patience...  Bitcoin is still in beta, it's so young.  Let's give it the chance to grow up without breaking it along the way.

so looking at the peak of the last attack, ~40 BTC per day worth of tx fees were harvested.  let's say we go to 20MB blocks tomorrow.  since real tx's only amount to ~500kB per block worth of data, you'd have to essentially fill the 20MB all by yourself and spend roughly 20 × 40 BTC = 800 BTC per day, give or take.  at $290/BTC that would equal $232,000 per day.  and then you'd have to sustain that to even dent the network; say for a month.  that would cost you $7,000,000 for one month.  you sure you can do that all by yourself?

legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
July 29, 2015, 02:12:00 PM
notice how the extra fees paid by full blocks actually strengthened the mining hashrate during the last attacks.  probably the extra revenue from tx fees encouraged miners to bring on more hashrate.  that's a good thing and could actually be even better if they were allowed to harvest/clear all the additional fees in the bloated mempools:



Yes Smiley

More "attacks" please.

Just not really big blocks yet.

1-3mb are ok.  8mb is too much right now (yes the miners are wrong, there's bad stuff they haven't seen yet).
Getting blocks that take >10mins to validate is not a good thing.

Fortunately, with better code optimization we may get that validation time down even more, along with the other advances that will make this safer.

We'll get there.
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
July 29, 2015, 02:09:03 PM
i think this proves that miners are perfectly capable of handling bigger blocks as evidenced by the "no delays" in confirmation intervals.  if anything i find it interesting that they get sped up to <10 min during these attacks.  with bigger blocks, we'd clear out that mempool in an instant:



Well, no.
Bigger blocks just make the "attacks" marginally more expensive.  
Individually, I could also swamp the mempool at any of the proposed sizes.

The "attacks" are providing some funding, so I don't really mind them all that much.  This tragedy of the commons is not quite so tragic as the UTXO set growth and chain size would be with a similar attack on larger blocks, especially if combined with a validation time attack (TX with many inputs and outputs in a tangled array).
Very large blocks would be a much more vulnerable environment in which to conduct such an attack.  We'll get to the point of ameliorating these vulnerabilities, and the right people are working on it.

Patience...  Bitcoin is still in beta, it's so young.  Let's give it the chance to grow up without breaking it along the way.
legendary
Activity: 1764
Merit: 1002
July 29, 2015, 02:05:32 PM
notice how the extra fees paid by full blocks actually strengthened the mining hashrate during the last attacks.  probably the extra revenue from tx fees encouraged miners to bring on more hashrate.  that's a good thing and could actually be even better if they were allowed to harvest/clear all the additional fees in the bloated mempools:

legendary
Activity: 1764
Merit: 1002
July 29, 2015, 02:02:02 PM
clearing out the mempool quickly would solidify the attacker's losses and vindicate the previous tx validation work already done by all full nodes in the network.