
Topic: Permanently keeping the 1MB (anti-spam) restriction is a great idea ... - page 8. (Read 105082 times)

hero member
Activity: 714
Merit: 500
However, if block size is increased, there's really no reason why most miners won't include as many transactions as possible, since it doesn't really cost them anything. Transactors will no longer be required to pay to have their transactions included in the blockchain, and eventually profit-seeking miners will leave.

It costs mining pools nothing *now* to process transactions.  And in fact they are not even required to fill blocks up with transactions, there are empty blocks produced all the time.

Minimum fees can be set regardless of block size.
I don't think that is true. Looking at the last blocks, they all had transactions in them.
Could you show me one of these recent 0-transaction blocks?
hero member
Activity: 938
Merit: 1000
www.multipool.us
However, if block size is increased, there's really no reason why most miners won't include as many transactions as possible, since it doesn't really cost them anything. Transactors will no longer be required to pay to have their transactions included in the blockchain, and eventually profit-seeking miners will leave.

It costs mining pools nothing *now* to process transactions.  And in fact they are not even required to fill blocks up with transactions, there are empty blocks produced all the time.

Minimum fees can be set regardless of block size.
hero member
Activity: 938
Merit: 1000
www.multipool.us
If the block size remains at 1 MB, most of humanity will be forced by necessity to transact off chain.

(or on another chain)
legendary
Activity: 3948
Merit: 3191
Leave no FUD unchallenged
"anti-fork" people, or should I say "pro-bitcoin" people, really sound a lot saner to me in their phrasing; there is some tangible dementia and hostility in most of the "let's fork bitcoin" folks.

You want to point fingers about hostility?  I think you'll find this discussion started because MP accused Gavin of being a scammer and said there was no way under any circumstances that he would accept a fork.  If we had set off on a more polite note, something along the lines of "let's weigh up the pros and cons and come to some sort of compromise", then maybe things would have gone a little differently.
legendary
Activity: 4760
Merit: 1283
...
The original client did have a 33.5MB constraint on message length.  It could be viewed as an implicit limit on block size, as the current protocol transmits complete blocks as a single message.  There is nothing to indicate that Satoshi either intended this to be permanent or that it was a foundational part of the protocol.  It is a simplistic constraint that prevents an attack where a malicious or bugged client sends nodes an incredibly long message which needs to be received before it can be processed and invalidated.  Imagine your client having to download an 80TB message before it could determine that the message was invalid, and then doing that a dozen times before banning that node.  Sanity checks are always a good idea to remove edge cases.

What's the problem with that?  You've never heard of Moore's law?  It's the universal faith-based psycho-engineering solution in these parts, and it seems to work fine on the lion's share of the participants.  Normally simply uttering the two words is sufficient to win any argument (hollow though the victory may be.)

legendary
Activity: 1316
Merit: 1000
Si vis pacem, para bellum
Could it become like those congressional bills where all the crap legislation gets stuck inside the fine print of the guns 'n drugs 'n terrorists 'n kids cover page?

While in theory that is the most sensible idea, in practice, adding other changes will only slow down the implementation of what already has proved to be very contentious (yet shouldn't have been).


Heh.  It would be a good idea, for example, to add code that sweeps "dust" more than 8 years old (ie, starting with the very oldest dust, at the beginning of next year) into the miners' pockets.  If something has been sitting there for 8 years and it's too small to pay for the fees that would be needed to spend it, then it's useless to its present owner, burdensome to the whole network to keep track of, and ought to be aggregated and paid to miners for network security. 

But if you think the blocksize discussion is contentious?  Sweeping dust would make the ultraconservatives and ultralibertarians here absolutely foam at the mouth.  I'll not even suggest such a thing because the discussion would go absolutely off the rails.

I'd cheerfully go even further and "sweep" *any* output that's been sitting there for >20 years (ie, lost keys) into a "mining fund," then have the coinbase of each block take 0.001% of the current mining fund balance in addition to the block subsidy.  Anybody whose keys aren't actually lost can avoid the haircut by moving her funds from one pocket to another in a self-to-self transaction.



Sweep the dust that's too small to transact, but anything substantial should NEVER be swept, imo.
Some people, maybe even Satoshi, have left the early blocks in a will to their kids or grandchildren, etc.

We have to remember that dust today might be enough to buy a Lamborghini in 50 years.

Others may be young and keeping cold storage for retirement while working on other projects away from Bitcoin.

I don't think wallets should be "repossessed" under any circumstances, even if they appear to have been abandoned for years.

If we did that, it would only be a matter of time before someone's wallet got repossessed and the owner later tried to claim it.

hero member
Activity: 836
Merit: 1030
bits of proof
That is why the 1MB limit is the single most important threat to Bitcoin, because ossification is rapidly approaching.
The limit can irreparably damage Bitcoin's future growth, its winner-take-all honey-badger image.

The block limit increase may eventually pass as an intentional hard fork for good reasons, but I think it is not relevant for the long term fate of Bitcoin, the digital gold.

It would be naive to assume that the block size limit is the last feature we need to change to ensure eternal prosperity for Bitcoin. We will hit other limits or yet unknown issues and the problem will repeat, but by that time we will have even less, likely zero, chances to orchestrate a hard fork.

The block chain as we know it bootstrapped the digital gold, but is unlikely to be its final home. We have to restore the ability to innovate and enable individuals to express their opinion by moving their digital gold to a side chain they prefer for whatever reason.

I know it would be simpler if one were not forced to compare alternatives but could simply HODL, but that is unfortunately orthogonal to innovation and to the diverse needs of global adoption.

legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
Hard forks will become virtually impossible in the future as too many opinionated developers bash each other. My only hope is that the important scalability changes are made before there are too many dependencies. Ossification is rapidly approaching.

That is why the 1MB limit is the single most important threat to Bitcoin, because ossification is rapidly approaching. The limit can irreparably damage Bitcoin's future growth, its winner-take-all honey-badger image. Its default status as an "electronic gold" store-of-value is badly tarnished if it can be allowed to run into a brick wall when there were years of warnings, threads and discussions beforehand. For non-technical supporters and investors the fear will remain that other major problems are latent and ignored.

Some people argue against increasing the limit because they "want Bitcoin the way it is now", that "it is working fine". The reality is counter-intuitive. The way it is now is that it has no effective block limit, and allowing the average block size to approach 1MB would introduce a massive untested change into every full node, simultaneously. I say untested because the only comparable event was the 250KB soft-limit, which rapidly went wrong, and the major mining pools had to come to the rescue and increase their default block limit. With the 1MB cap the number of nodes which would need to quickly upgrade is several thousand times larger. Chaos.

People are worried about government action, yet for every government which tries to ban Bitcoin another will give it free rein, for every government that wants to over-regulate it, another will decide upon regulation-lite. The threat is from within, not without.

Off-chain solutions are developing. An Overstock purchase by a Coinbase account holder does not hit the blockchain. There must be many business flows which are taking volume away from the main-chain. But this is not enough to stabilize the average block size.

Challenges are also opportunities. A successful hard fork, executed as Gavin suggests with a block version supermajority, will take many months and allow a smooth transition to a scaling state. Hard forks on the wish list, and ideas like stale dust reverting to the miners (if there is majority consensus), will be seen as achievable. Ossification might be delayed a little to allow other hard-fork improvements if this first challenge is successfully handled.

full member
Activity: 236
Merit: 100
Satoshi stated in the white paper that consensus rules can be changed if needed.  

He didn't even need to state that, anyone can start an alternative network that forks from the existing one. Whether the majority follows the changed fork or the unchanged fork is immaterial.


Quote
There are four universal truths about open source peer to peer networks:
a) It is not possible to prevent a change to the protocol.
b) A change in the protocol will lead to two mutually incompatible networks.
c) These parallel networks will continue to exist until one network is abandoned completely.
d) There is no mechanism to force users to switch networks so integration is only possible through voluntary action.

There is a concept called the tyranny of the minority.  It isn't possible for a protocol to prevent changes without the explicit approval of every single user, but even if it were, that would not be an anti-fragile system.  A bank could purchase a single satoshi, hang on to it, and use that as a way to prevent any improvements to the protocol, ensuring it will eventually fail.  The earliest fork fixed a bug which allowed the creation of billions of coins.  There is no evidence it had universal support.  The person who used it to create additional coins saw the new version erase coins that the prior network declared valid.  Still, a fork is best if either a negligible number of people support it or a negligible number of people oppose it.  The worst case scenario would be a roughly 50-50 split and both forks continuing to co-exist.

What you mean is, you can't prevent other people from using a different protocol.

They can't change YOUR protocol, but they can leave you all alone with few other people (or none) to talk to.

Still you have a choice. You can accept a smaller consensus. Let's say someone forked bitcoin to change the rules so that instead of 21 million coins, an infinite number would be produced (25 coins per block forever, for example). And let's say 90% of bitcoin users accepted that fork.

If you don't want to accept this change, all it means is, your consensus size shrunk, probably back to 2011 levels. Personally I would accept this before infinite coins.

Just because you lose 90% of the consensus doesn't mean you'll eventually lose 100% of it.
donator
Activity: 1218
Merit: 1079
Gerald Davis
OP, Weren't you vehemently against raising the limit few years ago? I think I remember a lot of intellectual technical discussion around here involving you, gmaxwell and others regarding this matter. Around v0.0.9?

I don't recall that being the case but I could be wrong (on other issues my opinion has changed over time Smiley ).  I do recall around that time (I assume you mean v0.9.0, not v0.0.9) the backlog of unconfirmed transactions was growing.  The reason is that while the soft limit had been raised, the default value hard-coded in bitcoind was 250KB and most miners never changed it.  The only miners which were targeting a different size were targeting smaller, not larger, blocks.  Even as the backlog grew they felt no urgency in changing it.  Some uninformed people advocated raising the 1MB cap as a way to "fix" the problem.  I tried (and mostly failed) to explain that miners could already make blocks at least 4x larger and were opting not to.  Changing the max only allows larger blocks if miners decide to make them.  The developers ended up forcing the issue by first raising the default block size so that it didn't remain fixed at 250KB and then removing the default size altogether.  Today bitcoind requires you to set an explicit block size.  If you don't set one you can't mine.

Quote
I am personally very much against hard forks of such. However I am all in for new crypto with new parameter that considers how a previous crypto has lagged behind. From my point of view a hard fork with a lot of publicity to adhere to and "update" to keep up with is simply an act of how a few control the mass. Whether it is for a good reason or bad or whatever, It really breaks the original principles of decentralization and fairness.

I have to disagree.  Bitcoin is a protocol and protocols change over time.  Constantly reinventing the wheel means a lot of wasted effort.  You end up with dozens of half-used systems instead of one powerful one.  Look at TCP/IP or even HTML.  Yeah they have a lot of flaws, and they have a lot of early assumptions baked in which produce inefficiency.  Evolution is also hard.  Look at the mess of browser standards or how long the migration to IPv6 has taken.  Despite the problems the world is better for having widely deployed protocols instead of constantly starting over.

In the crypto space, however, 'starting over' means a chance at catching lightning in a bottle and becoming insanely rich.  That has led to a lot of attempts but not much has come from it so far.  Alternates are an option but they shouldn't be the first option.  The first option should be evolution, but if a proposal fails and a developer strongly believes that in the long run the network can't be successful without it, then it should be explored in an alternate system.  There are some things about Bitcoin which may be impossible to change and for those an alternate might be the only choice.  The block size isn't one of them.

Jumping specifically to the block size.  The original client had no* block size limit. The fork occurred when it was capped down to 1MB.  The 1MB has in some circles become a holy text but there isn't anything which indicates it had any significance at that time.  It was a crude way to limit the damage caused by spam or a malicious attacker.  It is important to understand that the cap doesn't even stop spam (either malicious or just wasteful).  The cost of mining, the dust limit and the min fee to relay low priority txns are what reduced spam by making it less economical.

The cap still allowed the potential for abuse but the scope of that abuse is limited.  If 10% of the early blocks were maxed out it would still have added 5GB per year to the blockchain size.  That would have been bad but it would have been survivable, and it would have required the consent of 10% of the hashrate.  Without the cap a single malicious entity could have increased the cost of joining the network by a lot more.  Imagine if, before you even knew about Bitcoin, when the only client was the full node, joining the network had required downloading 100GB or 500GB.  Would Bitcoin even be here today if that had happened?
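A quick back-of-the-envelope check of the 5GB figure (a minimal sketch; the 10% abuse fraction is the hypothetical from the paragraph above):

```python
# Rough check: one block every ~10 minutes, so if 10% of blocks had been
# maxed out at the 1MB cap, the chain would grow by roughly 5GB per year.
BLOCK_INTERVAL_MIN = 10
BLOCKS_PER_YEAR = 365 * 24 * 60 // BLOCK_INTERVAL_MIN  # 52,560 blocks
MAX_BLOCK_BYTES = 1_000_000
abuse_fraction = 0.10  # hypothetical share of maxed-out blocks

extra_bytes_per_year = BLOCKS_PER_YEAR * MAX_BLOCK_BYTES * abuse_fraction
print(f"{extra_bytes_per_year / 1e9:.2f} GB/year")  # ~5.26 GB/year
```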

Satoshi stated in the white paper that consensus rules can be changed if needed.  He also directly stated that the block limit was temporary and could be increased in the future, phased in at a higher block height.  Now I am not saying everything Satoshi said is right, but I have to disagree that the 'original principles of decentralization and fairness' preclude changes to the protocol.  Some people may today believe that any change invalidates the original principles, but that was never stated as an original principle.  The protocol has always been changeable by an 'economic majority', and likewise that majority can't prevent the minority from continuing to operate the unmodified protocol.  It is impossible to design an open protocol which can't be changed; however, as a practical matter a fork (any fork) will only be successful if a supermajority of users, miners, developers, companies, service providers, etc. support the change.

There are four universal truths about open source peer to peer networks:
a) It is not possible to prevent a change to the protocol.
b) A change in the protocol will lead to two mutually incompatible networks.
c) These parallel networks will continue to exist until one network is abandoned completely.
d) There is no mechanism to force users to switch networks so integration is only possible through voluntary action.

There is a concept called the tyranny of the minority.  It isn't possible for a protocol to prevent changes without the explicit approval of every single user, but even if it were, that would not be an anti-fragile system.  A bank could purchase a single satoshi, hang on to it, and use that as a way to prevent any improvements to the protocol, ensuring it will eventually fail.  The earliest fork fixed a bug which allowed the creation of billions of coins.  There is no evidence it had universal support.  The person who used it to create additional coins saw the new version erase coins that the prior network declared valid.  Still, a fork is best if either a negligible number of people support it or a negligible number of people oppose it.  The worst case scenario would be a roughly 50-50 split and both forks continuing to co-exist.

The original client did have a 33.5MB constraint on message length.  It could be viewed as an implicit limit on block size, as the current protocol transmits complete blocks as a single message.  There is nothing to indicate that Satoshi either intended this to be permanent or that it was a foundational part of the protocol.  It is a simplistic constraint that prevents an attack where a malicious or bugged client sends nodes an incredibly long message which needs to be received before it can be processed and invalidated.  Imagine your client having to download an 80TB message before it could determine that the message was invalid, and then doing that a dozen times before banning that node.  Sanity checks are always a good idea to remove edge cases.
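The sanity check being described amounts to rejecting a message based on its declared length before ever downloading the payload; a minimal sketch (the constant is the ~33.5MB cap mentioned above; the function name is illustrative, not the client's actual code):

```python
# Sketch of a message-length sanity check: reject a message whose declared
# payload length exceeds the cap, before downloading the payload itself.
MAX_MESSAGE_BYTES = 33_554_432  # ~33.5MB, the early client's message cap

def message_length_ok(declared_length: int) -> bool:
    """Check a message header's length field before reading the body."""
    return 0 <= declared_length <= MAX_MESSAGE_BYTES

print(message_length_ok(1_000_000))    # True: a normal-sized block message
print(message_length_ok(80 * 10**12))  # False: the 80TB pathological message
```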
hero member
Activity: 544
Merit: 500
Ok, so say we go ahead with a hard fork to a > 1MB max block size ... are we also going to see a couple of other hard forking issues put through all at the same time or will this one be done alone?

Could it become like those congressional bills where all the crap legislation gets stuck inside the fine print of the guns 'n drugs 'n terrorists 'n kids cover page?

While in theory that is the most sensible idea, in practice, adding other changes will only slow down the implementation of what already has proved to be very contentious (yet shouldn't have been).


Hard forks are a thing of the protocol's youth stage. Hard forks will become virtually impossible in the future as too many opinionated developers bash each other. My only hope is that the important scalability changes are made before there are too many dependencies. Ossification is rapidly approaching.



legendary
Activity: 924
Merit: 1132
Could it become like those congressional bills where all the crap legislation gets stuck inside the fine print of the guns 'n drugs 'n terrorists 'n kids cover page?

While in theory that is the most sensible idea, in practice, adding other changes will only slow down the implementation of what already has proved to be very contentious (yet shouldn't have been).


Heh.  It would be a good idea, for example, to add code that sweeps "dust" more than 8 years old (ie, starting with the very oldest dust, at the beginning of next year) into the miners' pockets.  If something has been sitting there for 8 years and it's too small to pay for the fees that would be needed to spend it, then it's useless to its present owner, burdensome to the whole network to keep track of, and ought to be aggregated and paid to miners for network security. 

But if you think the blocksize discussion is contentious?  Sweeping dust would make the ultraconservatives and ultralibertarians here absolutely foam at the mouth.  I'll not even suggest such a thing because the discussion would go absolutely off the rails.

I'd cheerfully go even further and "sweep" *any* output that's been sitting there for >20 years (ie, lost keys) into a "mining fund," then have the coinbase of each block take 0.001% of the current mining fund balance in addition to the block subsidy.  Anybody whose keys aren't actually lost can avoid the haircut by moving her funds from one pocket to another in a self-to-self transaction.
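For scale, the 0.001%-per-block figure implies a geometric decay of the fund; a minimal sketch of what that rate would mean (the parameter is the post's, the half-life framing is mine):

```python
import math

# Each block's coinbase takes 0.001% (1e-5) of the remaining fund balance,
# so the fund decays geometrically: balance_n = balance_0 * (1 - 1e-5)**n.
TAKE_PER_BLOCK = 0.001 / 100   # 0.001% of the fund per block
BLOCKS_PER_YEAR = 52_560       # one block per ~10 minutes

half_life_blocks = math.log(0.5) / math.log(1 - TAKE_PER_BLOCK)
print(f"half-life: {half_life_blocks:,.0f} blocks "
      f"(~{half_life_blocks / BLOCKS_PER_YEAR:.1f} years)")
# roughly 69,315 blocks: the fund pays out half its balance in ~1.3 years
```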

donator
Activity: 1218
Merit: 1079
Gerald Davis
According to my data the average transaction size is higher than expected, and slowly growing.

Indeed, and this fits with expectation for a couple of reasons.  The first is that P2PkH ("normal") transactions are the most compact.  For a given number of inputs and outputs any other script is going to be larger.  Even a relatively simple 2-of-3 multisig script is about 50% larger than a Pay2PubKeyHash script, and there is a lot more potential than just straight multikey transactions.  The good news is that complex scripts are where all the interesting innovation is going to come from.  Without scripting Bitcoin wouldn't be programmable money; it would just be digital cash.  Now a digital cash system is interesting, but scripting is what allows doing interesting things in a trustless manner.

Adding to that, the average number of inputs and outputs per transaction has risen over time.  As the number of users increases, the total number of outputs will also increase.  While individual transactions will vary, across the overall blockchain the numbers of inputs and outputs will be roughly the same.  We can think of an input and an output as a single roundtrip script.  The other elements of the transaction are relatively small and fixed in size, so:

Average txn size = (Script Complexity) * (Number of IO)

The bottom line is both elements of transaction size are increasing slowly over time.  Unless adoption dies off (in which case all this is academic) there is no reason to believe that trend will change anytime soon.  The good news is that average transaction size will probably not go above 800 or so bytes so I think a very conservative estimate of throughput is at least 2.0 tps per MB.
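The throughput bound follows directly from the block interval and average transaction size; a minimal sketch using the byte figures discussed in the thread:

```python
# Transactions per second per MB of block space: how many average-sized
# txns fit in 1MB, spread over the ~600 second average block interval.
BLOCK_INTERVAL_SEC = 600
BYTES_PER_MB = 1_000_000

def tps_per_mb(avg_txn_bytes: int) -> float:
    return BYTES_PER_MB / avg_txn_bytes / BLOCK_INTERVAL_SEC

for size in (500, 600, 800):
    print(f"{size} B/txn -> {tps_per_mb(size):.2f} tps per MB")
# 500 -> 3.33, 600 -> 2.78, 800 -> 2.08: consistent with the 2-4 tps range
```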

Quote
The full data table is available here, though as image and only until block 327500, but it should provide a ballpark: somewhere around 500-600 byte per transaction.
Nice to see other data points which line up with the OP.  500 bytes per txn would equate to a max of 3.3 tps per MB, and 600 bytes would be 2.7 tps per MB.  In the OP I expanded that to cover the likely transaction size growth and to also provide a lower bound for an unrealistic but at least possible txn size.  Rounding it all out, that is why most of the arguments revolved around a sustainable 2 to 4 tps (per MB).  If you read anything on the topic of scalability which uses values outside that range, I would question the knowledge of the author.  I see lots of well-intentioned articles which cling to the gospel of 7 tps.

* As a side note, in an optimal world there would have been no scripts in the output portion of the txn.  The output would just contain a 20 byte field for the ScriptHash.  All transactions would involve paying to a ScriptHash and all addresses would be encoded ScriptHashes.  Even today the most common script would probably be P2PkH, but it would "work" the same way as any other arbitrary script.  We are probably never going to change this but it could be done as a hardfork.  In addition to simplicity and consistency it would make the UTXO set smaller, with compact fixed-length records and without any loss of functionality.  Related to that, store outputs as a merkle tree and you could have intra-transaction pruning as well.
legendary
Activity: 1400
Merit: 1013
The hysteria has reached even the bitcoin-development mailing list.

could you share some of it here please?
http://www.mail-archive.com/[email protected]/msg06992.html

Note the degree to which nobody is even willing to mention solutions to the presented problem other than trusted third parties (micropayment channels, etc)

That unwillingness to discuss alternate solutions is strong evidence that the proposals and supporting comments are agenda-driven, by an agenda that is not being openly stated.
legendary
Activity: 1260
Merit: 1002
The hysteria has reached even the bitcoin-development mailing list.

could you share some of it here please?
legendary
Activity: 1400
Merit: 1013
The hysteria has reached even the bitcoin-development mailing list.

It's amazing how many high-profile names are extremely interested in forcing as many Bitcoin users as possible into trackable relationships with trusted third parties.

Might be worth a "Meet the Developers Who Want to Destroy Your Privacy" exposé.
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
Ok, so say we go ahead with a hard fork to a > 1MB max block size ... are we also going to see a couple of other hard forking issues put through all at the same time or will this one be done alone?

Could it become like those congressional bills where all the crap legislation gets stuck inside the fine print of the guns 'n drugs 'n terrorists 'n kids cover page?

While in theory that is the most sensible idea, in practice, adding other changes will only slow down the implementation of what already has proved to be very contentious (yet shouldn't have been).
legendary
Activity: 1652
Merit: 1016
It is you who are actually scared and do not support bitcoin the way it is.

The pro-progress crowd are actually supporting bitcoin because the block size cap is and always was temporary.

You haven't thought your comment through properly. Cheesy
legendary
Activity: 1904
Merit: 1007
I only see hysteria amongst the pro side.. Roll Eyes
Its you and his royal clueless roadstress that spreads fud and do nothing but fagocitating the debate with 'future' fear of bitcoin not scaling..

Proof of fud spread? I'm only advocating for a neutral bitcoin network, without any constraints, for anyone to use. Nothing else.

Quote
It is you who are actually scared and do not support bitcoin the way it is.

Please tell me how you support bitcoin.

Quote
Please go ahead and fork that shit up.

Global consensus?! You wish..

Ok.
hero member
Activity: 658
Merit: 500
I only see hysteria amongst the pro side.. Roll Eyes
Its you and his royal clueless roadstress that spreads fud and do nothing but fagocitating the debate with 'future' fear of bitcoin not scaling..

It's interesting that you use this word. A phagocyte is a cell from our immune system that eats bacteria and other harmful creatures. So, you're saying that the pro side is actually defending against those who want to harm Bitcoin.