
Topic: The MAX_BLOCK_SIZE fork - page 5. (Read 35619 times)

legendary
Activity: 1106
Merit: 1004
February 06, 2013, 08:48:47 AM
Absolutely. Of course, that sadly means that we won't be able to ever trust a block until it gets past that point (which I think should be 2-4 blocks).

Would it really?
Yes. You wouldn't be able to trust that a majority of the network acknowledged a block until it gets past the point where all clients are required to accept it as part of the chain.

Imagine that only 10% of the network accepts blocks over 10 MB and 100% accepts blocks less than 1 MB. What if that 10% got lucky and generated two 11 MB blocks in a row? Well, the other 90% would just ignore them because they are too large. So, those blocks get orphaned because the rest of the network found three small blocks. If you just accepted the 11 MB blocks as a confirmation and sent goods because of it, you could be screwed if there was a double-spend.

I understand, I just don't think that's a serious enough issue to also motivate a change in the 10-minute interval. Instead, if relaying orphans becomes normal practice, nodes would be able to see whether there's another branch in which their transactions don't exist. If your transaction is currently in all branches being mined, you can be certain that you got your confirmation.
So, to counter the problem you raise, I think that relaying orphans is good enough. Why wouldn't it be?

I just think miners should be able to create their own limits together with multiple "tolerance levels"...That would push towards a consensus. Miners with limits too different from the average would end up losing work.

This doesn't make sense. Given any set of network parameters, there is always a single global optimum strategy for miners to maximize their revenue by prioritizing transactions. Tolerances for block sizes are not something that miners will have a wide variety of opinions on - the goal is always to make money through fees (and the subsidy, but that doesn't change based on which tx are included). Besides, why on earth would we want to waste hashing power by causing more orphans?

There would be some variety, surely. In the blocks they produce themselves, miners will seek to optimize the ratio of time-to-propagate to revenue in fees, while blocks they receive from other miners they would rather have as small as possible. These parameters are not the same for different miners, particularly time-to-propagate, as it strongly depends on how many connections you can keep established and on your bandwidth/network lag.

Plus, if there is a "global optimal max size", it's quite pretentious to claim you can come up with the "optimal formula" to calculate it. Even if you could, individual peers would never have all the data necessary to feed into such a formula, as it would have to take into account the hardware resources of every miner and of the network as a whole. That's impracticable. Such a maximum size must be established via a decentralized/spontaneous order. It's pretty much like economic central planning versus free markets, actually.
legendary
Activity: 1064
Merit: 1001
February 06, 2013, 04:02:16 AM
Problem is, how do you measure the number of bitcoins transmitted?...This also opens it up to manipulation...So while I think the “fees as % of transfer” is a nice number to work with in theory, in practice it’s not really available.

Whoops! You're right of course, and I was expecting at least a hole or two. Here's an alternative:

1) Block size adjustments happen at the same time that network difficulty adjusts (every 210,000 blocks?)

2) On a block size adjustment, the size either stays the same or is increased by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.

3) The block size is increased if more than 50% of the blocks in the previous interval have a sum of transaction fees greater than 50BTC minus the block subsidy. The 50BTC constant and the threshold percentage are baked in.

Example:

A block size adjustment arrives, and the current subsidy is 12.5BTC. The last 210,000 blocks are analyzed, and it is determined that 62% of them have over 37.5BTC in transaction fees. The maximum block size is increased by 10% as a result.

Instead of targeting a fixed percentage of fees (1.5% in my original proposal), this targets a fixed block value (measured in BTC). This scheme still creates scarcity while allowing the max block size to grow. One interesting property is that during growth phases, blocks will reward 50BTC regardless of the subsidy. If transaction volume declines, fees will be reduced. Hopefully this will be the result of Bitcoin gaining purchasing power (correlating roughly to the fiat exchange rate). For this reason, the scheme does not allow the block size to shrink, or else the transaction fees might become too large with respect to purchasing power.
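Here is a minimal sketch of the adjustment rule as described above; the function and field names (blocks, total_fees) are placeholders for illustration, not actual client code:

Code:
# Sketch of the revised adjustment rule (illustration only, not a patch).
# `blocks` is the list of blocks in the last interval, each assumed to carry
# a total_fees field in BTC; `subsidy` is the current block subsidy in BTC.
TARGET_BLOCK_VALUE = 50.0   # baked-in constant (BTC)
THRESHOLD = 0.50            # fraction of blocks that must reach the target
GROWTH = 0.10               # fixed growth percentage per adjustment

def adjust_max_block_size(max_block_size, blocks, subsidy):
    fee_target = TARGET_BLOCK_VALUE - subsidy
    full_value_blocks = sum(1 for b in blocks if b.total_fees > fee_target)
    if full_value_blocks > THRESHOLD * len(blocks):
        return int(max_block_size * (1 + GROWTH))   # grow by 10%
    return max_block_size                           # the size never shrinks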

Another desirable property is that a client can display a reasonable upper limit for the default fee given the size of the desired transaction. It is simply 50BTC divided by the block size in bytes, multiplied by the size of the desired transaction.
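The fee estimate works out to a one-liner; a sketch, with a hypothetical helper and sizes in bytes:

Code:
# Upper limit for the default fee: 50 BTC spread over the max block size,
# pro-rated by the size of the transaction being sent (illustration only).
def suggested_max_fee(tx_size_bytes, max_block_size_bytes):
    return 50.0 / max_block_size_bytes * tx_size_bytes

# e.g. a 500-byte transaction with a 1,000,000-byte max block:
# 50 / 1000000 * 500 = 0.025 BTC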

Someone could mine otherwise empty blocks with enormous looking fees... which, since they mined the block, they get back, costing them nothing.  They could then work to expand the block size for whatever nefarious reason.

I believe this problem is solved with the new proposal. If someone mines a block with a huge fee, it still counts as just one block. This would be a problem if the miner could produce 50% of the blocks in the interval with that property, but this is equivalent to a 51% attack and therefore irrelevant.

The expected behavior of miners and clients is a little harder to analyze than with the fixed fee percentage; can someone help me with a critique?

legendary
Activity: 1064
Merit: 1001
February 06, 2013, 03:52:08 AM
Without a sharp constraint on the maximum blocksize there is currently _no_ rational reason to believe that Bitcoin would be secure at all once the subsidy goes down...Limited blockspace creates a market for transaction fees, the fees fund the mining needed to make the chain robust against hostile reorganization.

I agree that there needs to be scarcity. I believe that tying the scarcity to the average amount of tx fees ensures both that the block size can grow and that there will always be a market for tx fees.

I strongly disagree with the idea that changing the max block size is a violation of the "Bitcoin currency guarantees"...It's not totally clear that an unlimited max block size would work.

I agree. It seems obvious that if the max block size is left at 1MB, and there are always non-free transactions getting left out of blocks, then transaction fees will keep increasing to a high level.

Each node could automatically set its max block size to a calculated value based on disk space and bandwidth

Not really a fan of this idea. Disk space and bandwidth should have little to do with determining the max block size. Disk space should be largely a non-issue: if the goal is to make Bitcoin more useful as a payment network, we should not be hamstrung by temporary limitations in storage space. If bandwidth is an issue then we have bigger problems than max block size - it means the overlay network (messages sent between peers) is congested and we need some sort of throttling scheme. If the goal is to make Bitcoin accommodate as much transaction volume as possible, the sensible choice is for nodes to demote themselves to thin clients if they can't keep up.

I just think miners should be able to create their own limits together with multiple "tolerance levels"...That would push towards a consensus. Miners with limits too different from the average would end up losing work.

This doesn't make sense. Given any set of network parameters, there is always a single global optimum strategy for miners to maximize their revenue by prioritizing transactions. Tolerances for block sizes are not something that miners will have a wide variety of opinions on - the goal is always to make money through fees (and the subsidy, but that doesn't change based on which tx are included). Besides, why on earth would we want to waste hashing power by causing more orphans?

If the blocks never get appreciably bigger than they do now, well any half-decent laptop made in the past few years can handle being a full node with no problem.

If Bitcoin's transaction volume never exceeds an average of 1 MB per block then we have bigger problems, because transaction fees will tend towards zero. There's no incentive to pay a fee if transactions always get included. To maintain fees, transaction space must be scarce. To keep fees low, the maximum block size must grow, and in a decentralized fashion that doesn't create extra orphans.

the best proposal I've heard is to make the maximum block size scale based on the difficulty.

Disagree. If this causes the maximum block size to increase to such a size that there is always room for more transactions, then we will end up killing off the fees (no incentive to include a fee).

newbie
Activity: 24
Merit: 1
February 06, 2013, 03:41:49 AM
misterbigg - interesting idea, and I agree with your stance, but here are some problems.  It seems intuitively clear that, "Hm, if transaction fees are 3% of total bitcoins transmitted, that's too high; the potential block space needs to expand."

Problem is, how do you measure the number of bitcoins transmitted?

Address A has 100BTC.  It sends 98BTC to address B and 1BTC to address C.  How many bitcoins were spent?

Probably 1, right?  That is the assumption the blockchain.info site makes.  But maybe it's actually sending 98.  Or none.  Or 99, to two separate parties.  So we can't know the actual transfer amount.  We can assume the maximum.  But then that means anyone with a large, fairly concentrated balance (address-wise) is going to skew that "fee %" statistic way down.  In the above transaction, you know that somewhere between 0 and 100BTC were transferred.  The fee was 1BTC.  Was the fee 100% or 1%?
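To make the ambiguity concrete, a tiny illustration (not tied to any real transaction format):

Code:
# The fee is unambiguous (inputs minus outputs), but the "amount transferred"
# is not, because we cannot tell which output is change.
inputs = [100.0]                       # address A
outputs = [98.0, 1.0]                  # addresses B and C
fee = sum(inputs) - sum(outputs)       # 1 BTC, known for certain
# "transferred" could be 1 (98 is change), 98 (1 is change), 99, or 0,
# so "fee as % of transfer" lands anywhere from about 1% to 100%.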

This also opens it up to manipulation.  Someone could mine otherwise empty blocks with enormous looking fees... which, since they mined the block, they get back, costing them nothing.  They could then work to expand the block size for whatever nefarious reason.

So while I think the "fees as % of transfer" is a nice number to work with in theory, in practice it's not really available.  If we want to maintain scarcity of transactions in the blockchain while still having a way to expand it, I think (total fee) / (block reward) is a good metric, because it scales with time and maintains miner incentive.  While in its simplistic form it is also somewhat open to manipulation, you could just take an average over 10 blocks or so, and if an attacker is publishing 10 blocks in a row you've got way bigger problems. (Also, I don't think a temporary block-size-increase attack is really that damaging... within reason, we can put up with occasional spam.  Heck, we've all got a gig of S.Dice gambling on our drives right now.)
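A sketch of that metric with a trailing average, just to illustrate the idea (the block list and field names are made up):

Code:
# (total fee) / (block reward) averaged over a trailing window, so that a
# single block stuffed with fake fees cannot move the number much.
def fee_to_reward_ratio(blocks, subsidy, window=10):
    recent = blocks[-window:]
    total_fees = sum(b.total_fees for b in recent)
    total_reward = subsidy * len(recent)
    return total_fees / total_reward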
legendary
Activity: 1064
Merit: 1001
February 06, 2013, 02:55:26 AM
I really like the idea of dynamic block sizes, but I don't know enough about economics to know what magic numbers are needed.

Thanks. I used 10% and 1.5% as examples but they are not based on any calculations. My intuition tells me that the 10% figure doesn't matter a whole heck of a lot; it mostly controls the rate of convergence. The penalty for a number that is too high is that the size would overshoot and there wouldn't be any scarcity. I think that this would only happen if the percentage was huge, like 50% or more. If this number is too low, it would just take longer to converge, and there would be a temporary period when miners generated above-average profits.

As for the fee percentage, I would imagine that number matters more. Too high, and the block size might never increase. Too low, and there might never be scarcity in the blocks.

What are the "correct" values? I have no idea.
legendary
Activity: 1428
Merit: 1000
February 06, 2013, 02:47:46 AM
legendary
Activity: 1064
Merit: 1001
February 06, 2013, 02:45:52 AM
There is something about the artificial scarcity of transaction space in a block that appeals to me. My gut tells me that miners should always have to make a choice about which transactions to keep and which ones to drop. That choice will probably always be based on the fees per kilobyte, so as to maximize the revenue per block. This competition between transactions solves the problem where successive reductions in block subsidies are not balanced by a corresponding increase in transaction fees.
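That selection behaviour is essentially a greedy fill; a rough sketch (transaction objects with fee and size fields are assumed, not any real mempool API):

Code:
# Greedy "fees per kilobyte" selection: take the best-paying transactions
# first until the block is full (illustration only).
def select_transactions(mempool, max_block_size):
    chosen, used = [], 0
    for tx in sorted(mempool, key=lambda t: t.fee / t.size, reverse=True):
        if used + tx.size <= max_block_size:
            chosen.append(tx)
            used += tx.size
    return chosen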

If the block size really needs to increase, here's an idea for doing it in a way that balances scarcity versus transaction volume:

DEPRECATED DUE TO VULNERABILITIES (see the more recent post)

1) Block size adjustments happen at the same time that network difficulty adjusts (every 210,000 blocks?)

2) On a block size adjustment, the size either stays the same or goes up by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.

3) The block size is increased if the sum of miner's fees excluding block subsidies for all blocks since the last adjustment would exceed a fixed percentage of the total coins transmitted (say, 1.5%). This percentage is also a baked-in constant.

4) There should be no client or miner limits on the number of kilobytes of free transactions in a block - if there's space left in the block after including all the paid transactions, there's nothing wrong with filling up the remaining space with as many free tx as possible.

Example:

When an adjustment period arrives, clients add up the miner's fees exclusive of subsidies, and add up the total coins transferred. If the percentage of miner's fees is 1.5% or more of the total coins transferred then the max block size is permanently increased by 10%.
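For comparison, the deprecated rule in code form (same caveats as before: placeholder names, illustration only):

Code:
# Deprecated rule: grow the limit by 10% if total fees over the interval were
# at least 1.5% of total coins transferred (vulnerable to self-paid fees).
FEE_TARGET = 0.015
GROWTH = 0.10

def adjust_max_block_size(max_block_size, total_fees, total_transferred):
    if total_fees >= FEE_TARGET * total_transferred:
        return int(max_block_size * (1 + GROWTH))
    return max_block_size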

This scheme offers a lot of nice properties:

- Consensus is easy to determine

- The block size will tend towards a size that accommodates the total transaction volume over a 24 hour period

- The average transaction fees are capped and easily calculated in the client when sending money. A fee of 1.5% should get included after several blocks. A fee greater than 1.5% will get included faster. Fees under 1.5% will get included more slowly.

- Free transactions will eventually get included (during times of the day or week where transaction volume is at a low)

- Since the percentage of growth is capped, any increase in transaction volume that exceeds the growth percentage will eventually get accommodated but miners will profit from additional fees (due to competition) until the blocks reach the equilibrium size. Think of this as a 'gold rush'.

legendary
Activity: 1064
Merit: 1001
February 06, 2013, 02:18:35 AM
Bitcoin works great as a store of value, but should we also require that it work great as a payment network?

It seems that the debate over whether the maximum block size should be increased is really a question of whether or not the Bitcoin protocol should be improved so that it serves both purposes. Specifically:

1) Transactions should confirm quickly (less time between blocks)

2) Transaction fees should be low

3) There should be no scarcity for transaction space in blocks

Open questions:

Right now there's about what, 1.4 SatoshiDICEs worth of transaction volume?

Should Bitcoin scale to support 20 times the volume of SatoshiDICE?

Should Bitcoin scale to support 1000 times the volume of SatoshiDICE?

Should we allow the blockchain to grow without a bound on the rate (right now it is 1 megabyte per 10 minutes, or 144MB/day)?

Is it reasonable to require that Bitcoin should always be able to scale to include all possible transactions?

Is it a requirement that Bitcoin eventually be able to scale to accommodate the volume of any existing fiat payment system (or the sum of the volumes of more than one existing payment system)?

Will it ever be practical to accept transactions to ship physical goods with 0 confirmations?

Will the time for acceptance of a transaction into a block ever be on the order of seconds?

How would one implement a "Bitcoin vending machine" which can dispense a product immediately without the risk of fraud?

Can't we just leave parameters like 10 minutes / 1 megabyte alone (since changing them requires a hard fork) and build new market-specific payment networks that use Bitcoin as the back end, processing transactions in bulk at a lower frequency (say, once per 10 minutes)?

Aren't high transaction fees a good thing, since they make mining more profitable resulting in greater total network hashrate (more security)?

legendary
Activity: 1204
Merit: 1015
February 05, 2013, 06:22:02 PM
Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.
...let's lower that constant. ...
In summary, I propose that to avoid the tragedy of the commons problem, instead of limiting the available space, we limit the available time allowed for the block to propagate instead. Now THAT is a Bitcoin 2.0 (or rather, 1.0)

For the rest of us who are catching up, are you proposing something that seems far more radical than eliminating the 1 MB limit?
Quite possibly. However, if we think of the 10-minute constant as something that doesn't actually have to stay fixed, we can adjust it so that, at the time we remove the 1 MB limit, the largest block miners would practically want to make is about 1 MB. Basically, this would protect us from jumping straight from a 1 MB limit one day to a practical 50 MB limit the next (or whatever is currently practical with the 10-minute constant). I mainly want people to remember that changing the block time is also something that can be on the table.

Can you please clarify: are you proposing reducing the 10-minute average block creation time?
Yes.

If so, what happens to the 25 BTC reward, which would be excessive and need a pro-rata reduction for the increased block frequency?
Just like you said, it would have a pro-rata reduction for increased block frequency. Sorry, I assumed that was obvious, since changing anything about the total currency created is absolutely off the table.
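The arithmetic, spelled out as a sketch (10 minutes being the current interval):

Code:
# Pro-rata reduction: scale the subsidy by the new block interval so that
# coins created per day (and the 21M total) stay unchanged.
def scaled_subsidy(current_subsidy, new_interval_min, old_interval_min=10.0):
    return current_subsidy * new_interval_min / old_interval_min

# e.g. 5-minute blocks: 25 * 5 / 10 = 12.5 BTC per block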

EDIT: Oh, and of course, there must be tolerance levels too (if I'm X blocks behind the chain I once rejected, I'll give up and start building on top of it). You don't want to create that many chain forks! Smiley
Absolutely. Of course, that sadly means that we won't be able to ever trust a block until it gets past that point (which I think should be 2-4 blocks).

Would it really?
Yes. You wouldn't be able to trust that a majority of the network acknowledged a block until it gets past the point where all clients are required to accept it as part of the chain.

Imagine that only 10% of the network accepts blocks over 10 MB and 100% accepts blocks less than 1 MB. What if that 10% got lucky and generated two 11 MB blocks in a row? Well, the other 90% would just ignore them because they are too large. So, those blocks get orphaned because the rest of the network found three small blocks. If you just accepted the 11 MB blocks as a confirmation and sent goods because of it, you could be screwed if there was a double-spend.
legendary
Activity: 1106
Merit: 1004
February 05, 2013, 03:04:34 AM
EDIT: Oh, and of course, there must be tolerance levels too (if I'm X blocks behind the chain I once rejected, I'll give up and start building on top of it). You don't want to create that many chain forks! Smiley
Absolutely. Of course, that sadly means that we won't be able to ever trust a block until it gets past that point (which I think should be 2-4 blocks).

Would it really? I've never seen any actual analysis, but I'd say that honest splits would mostly carry the same transactions, with the obvious exception of coinbase and "a few others". Has anyone ever done an analysis of how many transactions (in relative terms) are actually lost in a reorg and need to get reconfirmed?

Btw, interested nodes could attempt to download, and perhaps even relay, all sides of a split. If you see that your transaction is in all of them, you know it actually had its first confirmation for good. Relaying orphans sounds like a less radical change than changing the 10-minute interval...
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
February 05, 2013, 02:08:27 AM
Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.
...let's lower that constant. ...
In summary, I propose that to avoid the tragedy of the commons problem, instead of limiting the available space, we limit the available time allowed for the block to propagate instead. Now THAT is a Bitcoin 2.0 (or rather, 1.0)

For the rest of us who are catching up, are you proposing something that seems far more radical than eliminating the 1 MB limit? Can you please clarify: are you proposing reducing the 10-minute average block creation time? If so, what happens to the 25 BTC reward, which would be excessive and need a pro-rata reduction for the increased block frequency?

legendary
Activity: 1204
Merit: 1015
February 05, 2013, 01:46:12 AM
EDIT: Oh, and of course, there must be tolerance levels too (if I'm X blocks behind the chain I once rejected, I'll give up and start building on top of it). You don't want to create that many chain forks! Smiley
Absolutely. Of course, that sadly means that we won't be able to ever trust a block until it gets past that point (which I think should be 2-4 blocks). So, to mitigate the damage that will cause to the practical confirmation time...
Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.
...let's lower that constant. Additionally, by lowering the block-creation-time constant, you increase the chances of there being natural orphans by a much larger factor than you are lowering the constant (5 minute blocks would on average have 4x as many orphans as 10 minute blocks over the same time period). Currently, we see that as a bad thing since it makes the network weaker against an attacker. So, the current block time was set so that the block verification time network-wide would be mostly negligible. Let's make it so that it's not.
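The 4x figure follows from a back-of-the-envelope estimate; roughly (the propagation time and numbers below are purely illustrative):

Code:
# If per-block orphan chance is roughly propagation_time / block_interval,
# then halving the interval doubles that chance AND doubles the blocks per
# hour, so expected orphans per hour go up about 4x.
prop_time = 10.0                      # seconds to propagate a block (assumed)
for interval in (600.0, 300.0):       # 10-minute vs 5-minute blocks
    p_orphan = prop_time / interval
    blocks_per_hour = 3600.0 / interval
    print(interval, p_orphan * blocks_per_hour)
# ~0.1 orphans/hour at 600 s vs ~0.4 at 300 s: four times as many.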

To miners, orphans are lost money, so instead of using such a large block-time constant that orphans rarely happen in the first place, force the job of controlling the orphan rate onto the miners. To avoid orphans, they'd then be forced to use such block-ignoring features. In turn, the smaller the block-time constant we pick, the more than proportionally smaller the blocks would have to be. Currently, I suspect that a 50 MB block made up of pre-verified transactions would be no big deal for the current network. However, a 0.2 MB block on a 2.35-seconds-per-block network (yes, an extreme example) absolutely would be a big deal (especially because at that speed even an empty block with just a coinbase is a problem).

There are also some side benefits: because miners would strongly avoid transactions most of the network hasn't seen, only high-fee transactions would be likely to make it into the very next block, but most transactions would still make it in eventually. It might even encourage high-speed relay networks to appear, which would take a cut of the transaction fees miners make in exchange for letting them join.

In summary, I propose that to avoid the tragedy of the commons problem, instead of limiting the available space, we limit the available time allowed for the block to propagate instead. Now THAT is a Bitcoin 2.0 (or rather, 1.0)
sr. member
Activity: 294
Merit: 250
February 05, 2013, 01:15:06 AM
Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That will stop anyone from generating 1-GB blocks, because they will become orphans anyway. An equilibrium will be reached and block space will still be scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).
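As a rough illustration of the quoted default (a sketch only, not actual client code; the verify callback and syncing flag are placeholders):

Code:
import time

VERIFY_LIMIT_SYNCING = 60.0    # seconds while catching up with the chain
VERIFY_LIMIT_CAUGHT_UP = 5.0   # seconds once caught up

def accept_block(block, verify, syncing):
    limit = VERIFY_LIMIT_SYNCING if syncing else VERIFY_LIMIT_CAUGHT_UP
    start = time.monotonic()
    ok = verify(block)          # download and other I/O time should count too
    elapsed = time.monotonic() - start
    return ok and elapsed <= limit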



does this still involve a fork?
legendary
Activity: 1792
Merit: 1111
February 04, 2013, 11:50:09 PM
1M MAX_BLOCK_SIZE is obviously an arbitrary and temporary limit. Imagine that Bitcoin had been invented in 1996 instead of 2009, when 99% of normal internet users connected through telephone lines at 28.8 kbit/s, or 3.6 kB/s. Transferring a typical 200 kB block of today would take about a minute, and the system would fail due to a very high stale rate and many branches in the chain. If a "1996 Satoshi" had used a 25 kB MAX_BLOCK_SIZE, would we still stick with it till the end of Bitcoin?
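Spelling out the arithmetic behind that figure:

Code:
modem_kbit_s = 28.8
kB_per_s = modem_kbit_s / 8          # 3.6 kB/s
block_kB = 200
print(block_kB / kB_per_s)           # roughly 56 seconds per 200 kB block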
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
February 04, 2013, 11:40:50 PM
#99
probably something to aim to have in place before 1.0 is released... and since we're closing in on 0.8... Cheesy

We're only 2 minor releases away!!!.... from 0.10

1.0 or 1.10?  Smiley
legendary
Activity: 1904
Merit: 1002
February 04, 2013, 11:34:12 PM
#98
probably something to aim to have in place before 1.0 is released... and since we're closing in on 0.8... Cheesy

We're only 2 minor releases away!!!.... from 0.10
legendary
Activity: 1246
Merit: 1016
Strength in numbers
February 04, 2013, 03:49:43 PM
#97
Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That will stop anyone from generating 1-GB blocks, because they will become orphans anyway. An equilibrium will be reached and block space will still be scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).



This rule would apply to blocks until they are 1 deep, right? Do you envision no check-time or size rule for blocks that are built on? Or a different, much more generous rule?
hero member
Activity: 756
Merit: 522
February 04, 2013, 03:12:55 PM
#96
Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That will stop anyone from generating 1-GB blocks, because they will become orphans anyway. An equilibrium will be reached and block space will still be scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).

Spoken like a true Gavin. No objections.
legendary
Activity: 1778
Merit: 1008
February 04, 2013, 02:25:34 PM
#95
probably something to aim to have in place before 1.0 is released... and since we're closing in on 0.8... Cheesy
legendary
Activity: 1106
Merit: 1004
February 04, 2013, 02:25:00 PM
#94
There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

That's nice. Just don't forget to include total download time in the "time to verify", as well as any other I/O time. Bandwidth will be a significant bottleneck once blocks start getting larger.

EDIT: Oh, and of course, there must be tolerance levels too (if I'm X blocks behind the chain I once rejected, I'll give up and start building on top of it). You don't want to create that many chain forks! Smiley