Topic: Permanently keeping the 1MB (anti-spam) restriction is a great idea ...

legendary
Activity: 1904
Merit: 1007
If the block size increases exponentially, the number of nodes may drop significantly simply because they require too much disk space; the network may grow overall, but there would not be many full nodes (nodes sharing the blocks).

Have you read all the posts in this thread? That much disk space will not be needed! We have blockchain pruning.
hero member
Activity: 772
Merit: 501
We see huge concentration in the mining environment: 4-5 parties control 75%+ of the mining power (pools and large operations).

If the block size increases exponentially, the number of nodes may drop significantly simply because they require too much disk space; the network may grow overall, but there would not be many full nodes (nodes sharing the blocks).

The dominance of large mining pools is not due to the cost of running a node. There are thousands of full nodes, and yet the top 4-5 pools direct the majority of the network hashrate, as you note. The reason large pools have high hashrates is that pool size reduces the payout variance to miners, so miners prefer to use large pools.

So you need to be clear what you're trying to solve, and what is the cause of the problem. Because keeping the block size at 1 MB forever is not going to reduce the dominance of mining pools.
hero member
Activity: 532
Merit: 500
Well, another great post by D&T.

I am not a big fan of increasing the block size to 20 MB.

Having said that.

We see huge concentration in the mining environment: 4-5 parties control 75%+ of the mining power (pools and large operations).

If the block size increases exponentially, the number of nodes may drop significantly simply because they require too much disk space; the network may grow overall, but there would not be many full nodes (nodes sharing the blocks).

With centralized mining and few nodes, Bitcoin will be weaker.

I understand Gavin A.'s reasons for wanting to increase the block size, but we need a solution to the problem, and an analysis that includes all the players in the Bitcoin ecosystem.

Lately I have often read that "the value is in the blockchain". That is true, but without truly decentralized mining and a large constellation of nodes, in line with the system's decentralized P2P design, there is no blockchain or Bitcoin (as a currency).

Core developers are a crucial part of all this, but miners also invest a lot of money (more than all the VCs combined), and nodes add real decentralization to the system. We need an improvement that will produce more miners (not pools), more nodes, and a healthy Bitcoin network.

Regards

Juan
hero member
Activity: 688
Merit: 500
ヽ( ㅇㅅㅇ)ノ ~!!
How do we decide what level of security is "required" for the network? What amount of hashing, what $$ spent on mining hardware... Is there a knowable, correct, answer?

Available hashing power must provide more reward when used to earn transaction fees than when used to attack... right?

How do we know that existing coin minting even does this? (apart from it vaguely seeming like a very large amount, and no attacks really having happened)
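
(Editor's note: one back-of-envelope way to frame that question is to compare an attacker's expected payoff with the revenue honest hashing earns. A minimal Python sketch; the BTC price is an assumption, not a number from the thread:)

    # Rough daily security-budget arithmetic; all figures illustrative.
    BLOCKS_PER_DAY = 24 * 6       # one block per ~10 minutes -> 144/day
    SUBSIDY_BTC    = 25           # block subsidy at the time of this thread
    BTC_PRICE_USD  = 250.0        # assumed spot price

    daily_security_usd = BLOCKS_PER_DAY * SUBSIDY_BTC * BTC_PRICE_USD
    print(f"~${daily_security_usd:,.0f} paid to honest hashing per day")
    # Attacking stays irrational only while its expected payoff is below
    # (roughly) the ongoing revenue an attacker would put at risk.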
newbie
Activity: 39
Merit: 0
Thank you for this interesting summary, D&T; I always enjoy reading your posts. They have been very helpful since the early days for understanding the inner workings of the technology. I am convinced.
legendary
Activity: 924
Merit: 1132
Satoshi didn't have a 1MB limit in it.

Are you saying that the original plan was to have no limit at all? What would happen if we were just to remove the limit now, instead of making it bigger?

The limit wasn't part of Satoshi's original plan, but keep in mind that the original plan was at that time very much a work in progress.  I was only involved for about a month back then.  When Hal and Satoshi started talking about involving more people, I dropped it.
hero member
Activity: 658
Merit: 500
Satoshi didn't have a 1MB limit in it.

Are you saying that the original plan was to have no limit at all? What would happen if we were just to remove the limit now, instead of making it bigger?
hero member
Activity: 700
Merit: 501
...

20 MB is nice for a start; it will last a year or two, maybe three, but it will need to be upgraded eventually as well. We are dangerously close to our limit with 1 MB, and we can't stay at 1 MB.
Gavin's proposal automatically scales the limit by 40% per year after the 20 MB raise, then stops after 20 years - last time I read up on it.

After 20 years this puts us at ~16,000 MB, or ~32,000 tx/second. That's more than the VISA network, and I think it will last a long time.
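
(Editor's note: a quick sanity check of the figures just quoted, as a minimal Python sketch; the ~2 tx/s-per-MB ratio is the poster's implied conversion, not a precise constant:)

    # 40% growth per year, compounded for 20 years after the 20 MB raise.
    size_mb = 20.0
    for _ in range(20):
        size_mb *= 1.40
    print(round(size_mb))        # ~16,700 MB
    print(round(size_mb * 2))    # ~33,500 tx/s at ~2 tx/s per MB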

I also support Gavin and OP.

Gavin is right; we should do this now before it's too late.
Here's a video for newbies that explains it quickly:
https://www.youtube.com/watch?v=U1KNbgoF-ZY
legendary
Activity: 924
Merit: 1132
For what it's worth: 

I'm the guy who went over the blockchain stuff in Satoshi's first cut of the bitcoin code.  Satoshi didn't have a 1MB limit in it. The limit was originally Hal Finney's idea.  Both Satoshi and I objected that it wouldn't scale at 1MB.  Hal was concerned about a potential DoS attack though, and after discussion, Satoshi agreed.  The 1MB limit was there by the time Bitcoin launched.  But all 3 of us agreed that 1MB had to be temporary because it would never scale.

Several attempted "abuses" of the blockchain under the 1MB limit have proved Hal right about needing the limit at least for launching purposes.  A lot of people wanted to piggyback extraneous information onto the blockchain, and before miners (and the community generally) realized that blockchain space was a valuable resource they would have allowed it.  The blockchain would probably be several times as big a download now if that limit hadn't been in place, because it would have a lot of random 1-satoshi transactions that exist only to encode information for altcoins etc.

At this point I don't think random schmoes who would allow just any transaction are getting a lot of blocks. The people who have made a major investment in hashing power are doing the math to figure out which transactions are worthwhile to include, because block propagation time (and therefore the risk of orphaned blocks) is proportional to block size. So at this point I think blockchain bloat as such is no longer likely to be a problem, and the 1MB limit is no longer necessary. It has been more or less replaced by a profitability limit that motivates people not to waste blockchain bandwidth, and miners are now reliably dropping transactions that don't pay fees.
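
(Editor's note: the "profitability limit" above can be made concrete with a small model. A sketch assuming Poisson block arrivals and a marginal propagation delay of ~80 ms per kB, roughly in line with 2013 measurements; all figures are illustrative:)

    import math

    # Each extra kilobyte delays propagation, and a competing block found
    # during that delay can orphan yours, costing the whole reward.
    BLOCK_REWARD_BTC = 25.0
    BLOCK_INTERVAL_S = 600.0
    DELAY_S_PER_KB   = 0.08      # assumed marginal delay per kilobyte

    def orphan_prob(extra_delay_s: float) -> float:
        # Chance a competing block lands during the extra delay, given
        # Poisson block arrivals at rate 1/600 per second.
        return 1.0 - math.exp(-extra_delay_s / BLOCK_INTERVAL_S)

    breakeven = BLOCK_REWARD_BTC * orphan_prob(DELAY_S_PER_KB)
    print(f"break-even fee: ~{breakeven:.5f} BTC per extra kB")
    # A profit-maximizing miner only includes transactions paying above this.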
member
Activity: 112
Merit: 10

1) Max block size design: is switching from a hard-coded 1 MB max block size to a hard-coded 20 MB the smartest way to proceed?
2) Timing: is now the best time to do that? There appears to be no real argument to rush. Tons of innovation is going on, and we are months away from a first sidechain implementation. Perhaps in 6 months or a year from now we will have ideas we haven't thought of yet.


Double the max block size every two years, which roughly keeps in line with Moore's law.
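
(Editor's note: this schedule and Gavin's are nearly the same curve, since 1.40^2 = 1.96 ≈ 2; doubling every two years is almost exactly 40% growth per year.)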
member
Activity: 554
Merit: 11
Great post, Death&Taxes.

However, I have never heard anyone anywhere support keeping the max block size at 1 MB permanently. So your post is somewhat misleading and avoids the most interesting questions, which are:
1) Max block size design: is switching from a hard-coded 1 MB max block size to a hard-coded 20 MB the smartest way to proceed?
2) Timing: is now the best time to do that? There appears to be no real argument to rush. Tons of innovation is going on, and we are months away from a first sidechain implementation. Perhaps in 6 months or a year from now we will have ideas we haven't thought of yet.

From the comments here and on reddit, I don't see a real consensus on the subject. So IMO it's better to keep discussing it. We can reasonably expect to see competing proposals to Gavin's, with a different max-block-size design and perhaps more appropriate timing.
sr. member
Activity: 342
Merit: 250
For the entirety of Bitcoin's history, it has produced blocks smaller than the protocol limit.

Why didn't the average size of blocks shoot up to 1 MB and stay there the instant Satoshi added a block size limit to the protocol?

I'm not sure what you're getting at. Clearly there just hasn't been demand for 1 MB worth of transactions per block thus far, but that could change relatively soon; hence the debate over lifting the 1 MB cap before we get to that point. If the block limit were suddenly to drop to 50 kB, I think we'd start seeing a whole lot of 50 kB blocks, no?
Justus is, I believe, pointing out that until very recently bitcoin has effectively had no block size limit, as blocks near the protocol limit were almost non-existent.  More recently we tend to get a few a day, mostly from F2Pool.

Those claiming we'll have massive runaway blocks full of one satoshi / free transactions have never adequately explained why it wasn't true historically when the average block size was 70k, and why people still felt the need to pay fees then.

Anyone who has tried to send free / very-low-fee transactions recently and had it backfire knows they have to think long and hard about taking that risk if they want confirmation in a reasonable time, and that's the way it should be and likely always will be. Each incremental transaction increases miner risk, and therefore has a cost; that's natural and good, and enough for an equilibrium to be found.

Heck, suppose the cap were completely removed, and some major pools concerned about spam (aren't we all?) stated that, for their own values of X, Y and Z, they would not relay blocks larger than (say) 500 kB that pay total fees of less than X satoshis per kilobyte, and would not even build on such blocks paying fees of less than Y per kilobyte unless they had managed to become Z blocks deep. That would have a huge deterrent effect, making it expensive to try to spam the network (see the sketch after this post). Not many people are willing to risk 25 BTC to make a point, never mind willing to continue doing so repeatedly. X, Y and Z wouldn't need to be uniform across pools, and of course could change with time and technology. An equilibrium would be found, and blocks would achieve a natural growth rate that no central planner can properly plan.

I agree, and I never meant to suggest otherwise. Bitcoin still has effectively no block size limit, and if the block limit became 1 GB tomorrow, it most likely wouldn't result in blocks being any larger in the foreseeable future. I corrected myself because I at first said that having no block limit would result in the greatest overall transaction fees being paid, but I don't think that's true. Given the tragedy-of-the-commons issue surrounding blockchain size, the marginal cost to any individual miner of including a transaction in his block is only the negligible increase in the risk of an orphaned block. If a miner doesn't include the transactions with fees above that marginal cost, they can be profitably taken by the next miner to create a block. That's not necessarily how it has to work; miners may attempt to employ strategies (like you mentioned) where that wouldn't be the case, but there's no guarantee they would succeed.
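
(Editor's note: the X/Y/Z relay policy a couple of posts up can be sketched in a few lines of Python; the thresholds and function names here are hypothetical, not from any real client:)

    # A pool-local soft policy: small blocks always pass, big ones must
    # pay their way, and a buried block is accepted rather than forked.
    SOFT_SIZE_KB = 500     # blocks at or below this size are never penalized
    X_RELAY      = 1000    # min fee density (sat/kB) to relay a larger block
    Y_BUILD      = 2000    # min fee density (sat/kB) to mine on top of one
    Z_DEPTH      = 3       # accept anyway once buried this many blocks deep

    def should_relay(size_kb: int, fee_sat_per_kb: int) -> bool:
        return size_kb <= SOFT_SIZE_KB or fee_sat_per_kb >= X_RELAY

    def should_build_on(size_kb: int, fee_sat_per_kb: int, depth: int) -> bool:
        if size_kb <= SOFT_SIZE_KB or fee_sat_per_kb >= Y_BUILD:
            return True
        return depth >= Z_DEPTH   # don't fork away a well-buried block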
hero member
Activity: 544
Merit: 500
Great effort in posting; you hit the nail right on the head. But I think the title is slightly misleading; perhaps a question mark at the end instead would suffice. I think a decision has to be made immediately, because we are really close to reaching the limit with the current 1MB restriction.
hero member
Activity: 772
Merit: 501
...

20 MB is nice for a start; it will last a year or two, maybe three, but it will need to be upgraded eventually as well. We are dangerously close to our limit with 1 MB, and we can't stay at 1 MB.
Gavin's proposal automatically scales the limit by 40% per year after the 20 MB raise, then stops after 20 years - last time I read up on it.

After 20 years this puts us at ~16,000 MB, or ~32,000 tx/second. That's more than the VISA network, and I think it will last a long time.

I also support Gavin and OP.

+1. At that point we'll (hopefully) have a ton of sidechains that can handle any further growth in demand for peer-to-peer electronic cash transactions.
hero member
Activity: 815
Merit: 1000
...

20 MB is nice for a start; it will last a year or two, maybe three, but it will need to be upgraded eventually as well. We are dangerously close to our limit with 1 MB, and we can't stay at 1 MB.
Gavin's proposal automatically scales the limit by 40% per year after the 20 MB raise, then stops after 20 years - last time I read up on it.

After 20 years this puts us at ~16,000 MB, or ~32,000 tx/second. That's more than the VISA network, and I think it will last a long time.

I also support Gavin and OP.
member
Activity: 109
Merit: 10
So basically a solution should:

- increase the block size limit (~20 MB)
- use high-speed super nodes (in order to propagate within a 2-minute interval)
- use some kind of multi-threaded P2P protocol
- increase the reward for miners.

Good.

What next?
staff
Activity: 4242
Merit: 8672
In the end, the whole block of transactions must have reached the most distant end of the network within 3 minutes so that newly discovered blocks can be built upon it. Ideally, you need to transmit 20 MB of data in 1-2 minutes. Maybe it is possible to use multi-threaded P2P downloading to accelerate the data transfer.
Blocks are just transaction data, almost all of which has already been relayed through the network. All one has to send is a tiny set of indexes indicating which of the transactions in circulation were included, and in what order; there are already alternative transports that do this (or even less: just a difference from a deterministic idealized list). The data still has to be sent over the network, so being more efficient here doesn't fundamentally improve scaling (just a constant factor), but it gets block size pretty much entirely out of the critical path for miners.
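
(Editor's note: the index-based relay described above can be sketched as follows; the names are hypothetical, and real transports differ in detail, e.g. later designs salt the short identifiers:)

    import hashlib

    # Peers already hold nearly all of a block's transactions, so announce
    # a block as short per-transaction identifiers instead of full data.
    ID_LEN = 6   # 6-byte identifier vs a ~250-byte average transaction

    def short_id(txid: bytes) -> bytes:
        return hashlib.sha256(txid).digest()[:ID_LEN]

    def encode_block(ordered_txids: list) -> bytes:
        # ~6 bytes per transaction instead of the whole serialized block.
        return b"".join(short_id(t) for t in ordered_txids)

    def decode_block(payload: bytes, mempool: dict):
        # mempool maps short_id -> full transaction already received.
        ids = [payload[i:i + ID_LEN] for i in range(0, len(payload), ID_LEN)]
        missing = [i for i in ids if i not in mempool]
        # Only the (rare) missing transactions must be fetched in full.
        return [mempool.get(i) for i in ids], missing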
sr. member
Activity: 266
Merit: 250
In the end, the whole block of transactions must have reached the most distant end of the network within 3 minutes so that newly discovered blocks can be built upon it. Ideally, you need to transmit 20 MB of data in 1-2 minutes. Maybe it is possible to use multi-threaded P2P downloading to accelerate the data transfer.

Bigger pools already use a separate block propagation mechanism between themselves. I am not sure how much it can handle, but obviously it's optimized for miners' needs.
legendary
Activity: 1988
Merit: 1012

Of course, 28 minutes is still long. That is based on 2013 data.
This data is massively outdated: it's from before signature caching and ultraprune, each of which was easily an order-of-magnitude (or two) improvement in the transaction-dependent parts of propagation delay. It's also prior to the block relay network, not to mention further optimizations that have been proposed but not yet written.

I don't actually think hosts are faster; I'd take a bet that they're slower on average, since performance improvements have made it possible to run nodes on smaller hosts than were viable before (e.g. crazy people running bitcoind on a Raspberry Pi). But we've had software improvements that massively eclipsed anything you would have gotten from hardware improvements. Repeating that level of software improvement is likely impossible, though there is still some room to improve.

There are indeed risks of massively increased orphan rates in the short term with larger blocks (though far, far lower than those numbers suggest); that's one of the unaddressed things in current larger-block advocacy. The block relay network (and the possibility of efficient set reconciliation) more or less shows that the issues there are not very fundamental, though they may be practically important.

In the end, the whole block of transactions must have reached the most distant end of the network within 3 minutes so that newly discovered blocks can be built upon it. Ideally, you need to transmit 20 MB of data in 1-2 minutes. Maybe it is possible to use multi-threaded P2P downloading to accelerate the data transfer.
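
(Editor's note: the raw bandwidth behind the "20 MB in 1-2 minutes" target is modest per hop; a one-line check:)

    # 20 MB = 160 megabits; spread over a 120-second window:
    print(20 * 8 / 120)   # ~1.3 Mbit/s sustained, per relay hop
    # Multiply by the hop count to the farthest node; index-based relay
    # (sketched earlier in the thread) shrinks the payload by ~40x or more.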
legendary
Activity: 1400
Merit: 1013
until very recently bitcoin has effectively had no block size limit, as blocks near the protocol limit were almost non-existent.