Topic: Alert: chain fork caused by pre-0.8 clients dealing badly with large blocks

member
Activity: 112
Merit: 10
Admin at blockbet.net
Not sure why everyone is so panicked.

When I have a lot of money invested in Bitcoin and I get an error message I don't understand... well, let's just say I got a bit worried. Not everybody knows the technical aspects that well. I'm glad they addressed the issue quickly and in a professional manner, though.
legendary
Activity: 2053
Merit: 1356
aka tonikt
Transactions are valid on both branches of the fork.  If you send 1000 BTC to a merchant on v0.8, it is also seen by merchants on v0.7, so when you attempt to double spend, the second transaction is rejected by the merchant and/or every relay node.
But the real world is not as perfect as it should be. Smiley
It's all about statistics and chances - probability.

If you have X connections from your node to the network, and you broadcast transaction A to half of them while at the same time broadcasting transaction B (spending the same 1000 BTC) to the other half - then you have a significant chance that both A and B will be mined, each in a different branch. And by "significant chance" I mean: significant enough to cause the panic...
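To make that "significant chance" concrete, here is a rough Monte Carlo sketch of the attacker's odds. The probabilities and trial count are invented for illustration; the real chance depended on how hash power and relay topology were split between 0.7 and 0.8 nodes.
Code:
# Rough Monte Carlo sketch of the double-spend chance described above.
# p_a is the (assumed) chance that tx A is the version first seen by the
# miner of the next 0.8 block; p_b is the same for tx B and the next 0.7 block.
import random

def estimate(trials=100_000, p_a=0.5, p_b=0.5):
    wins = 0
    for _ in range(trials):
        if random.random() < p_a and random.random() < p_b:
            wins += 1
    return wins / trials

print(f"estimated chance per attempt: {estimate():.1%}")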
donator
Activity: 1218
Merit: 1079
Gerald Davis
You are missing the fact that it was a great opportunity to double spend any coins.
First: you send 1000 BTC paying a merchant that uses Bitcoin 0.8.
Second: you pay the same 1000 BTC to another merchant who has an older client and is thus "looking" at the alternate branch, where the 1000 BTC has not been spent yet.

Transactions are valid on both branches of the fork.  If you send 1000 BTC to a merchant on v0.8, it is also seen by merchants on v0.7, so when you attempt to double spend, the second transaction is rejected by the merchant and/or every relay node.

The fork was on BLOCKS, not transactions.  All the transactions (except coinbase txs, which are unspendable for 100 blocks) on the v0.8 branch are visible on the v0.7 branch and vice versa.
legendary
Activity: 2053
Merit: 1356
aka tonikt
So the solution is to continue to increase the block size as demand provokes this issue then?
I guess.. because what else? Are you going to appeal to people to make fewer transactions, hoping that it would solve the problem? Smiley
I don't believe you can, e.g., convince SatoshiDice to scale back their lucrative business just because other people's transactions are getting queued...
Why should they care anyway, if they pay the same fees as you do?
Since we don't have another solution at hand, scaling up the storage limits seems to be the only option ATM.
Unless we're OK with increasing the fees?

I'm totally OK with increasing fees.
I would not mind it, either.

Whatever the community decides, I'm fine with it, as long as it solves the issue of transactions with proper fees waiting hours for the first confirmation.
The network really seems to be getting stuck already and I don't think it will just get better by itself.
legendary
Activity: 1078
Merit: 1002
Bitcoin is new, makes sense to hodl.
Well, maybe having the concept of a "block" is the root design flaw in the first place.
legendary
Activity: 4760
Merit: 1283
So the solution is to continue to increase the block size as demand provokes this issue then?
I guess.. because what else? Are you going to appeal to people to make fewer transactions, hoping that it would solve the problem? Smiley
I don't believe you can, e.g., convince SatoshiDice to scale back their lucrative business just because other people's transactions are getting queued...
Why should they care anyway, if they pay the same fees as you do?
Since we don't have another solution at hand, scaling up the storage limits seems to be the only option ATM.
Unless we're OK with increasing the fees?

I'm totally OK with increasing fees.  I thought that was, by design, the way to modulate load.

If Mike is saying something like "yes, that's the way we modulate load, and yes, transactions that didn't make the cut are eventually purged, but BDB was not up to the task of processing under the currently configured garbage regime", then I'm totally down with that and feel that switching to LevelDB is an appropriate response.  But I'm not sure that is what he is saying, which is why I asked.

I'm also saying that working on the 'garbage collection' mechanism and/or its tuning strikes me as a high-priority line of development, and this seems like an opportune time to do it.  As always, as a user I hope for the highest degree of transparency as things progress.
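As a side note, here is a minimal sketch of the fee-based "garbage regime" idea being discussed: keep a bounded pool of unconfirmed transactions ordered by fee per byte, and purge the cheapest ones when the pool overflows. This is purely illustrative - the pool size and fee figures are made up, and it is not how bitcoind's actual memory pool was implemented.
Code:
import heapq

class TinyMempool:
    """Toy pool that keeps only the best-paying unconfirmed transactions."""
    def __init__(self, max_entries=5):
        self.max_entries = max_entries
        self.heap = []                     # min-heap of (fee_per_byte, txid)

    def add(self, txid, fee_btc, size_bytes):
        heapq.heappush(self.heap, (fee_btc / size_bytes, txid))
        while len(self.heap) > self.max_entries:
            rate, victim = heapq.heappop(self.heap)   # purge the cheapest tx
            print(f"purged {victim} paying {rate:.8f} BTC/byte")

pool = TinyMempool()
for i in range(8):
    pool.add(f"tx{i}", fee_btc=0.0001 * (i + 1), size_bytes=250)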

legendary
Activity: 1400
Merit: 1005
It's official, I guess SD just broke Bitcoin.  Thanks, eric, for all those unspendable outputs and massive block sizes.  I'm sure your apologists will claim that this event is good for Bitcoin because we stress-tested it.  Fuck that noise.  If it had taken longer to reach this point, everyone would have been running 0.8 or newer, and the issue caused by old clients could not have happened.

I blame SD.  SD pushed our beta product way too far.  Shame on eric and his greedy little BS company.  I hope its stock tanks.  I hope miners filter out the 1Dice addresses.  Fuck that noise!
To be clear, those outputs are perfectly spendable if other coins are sent at the same time.

I present this example:
https://blockchain.info/tx/0895a6fa923d399f5079c5a444a70a7543b5c34ebe4a5d21ae522350042b311e

This was a ZERO FEE transaction.  The default Bitcoin-QT software included transactions of varying size, all the way down to 0.00000003 BTC.  As far as I know, I don't have any unspent Satoshis in either of those addresses, but this just proves the point that tiny amounts are most definitely spendable in the right situation.
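A toy illustration of that point, with an invented fee threshold: a 3-satoshi output cannot justify a fee on its own, but it becomes worth spending once it rides along with a bigger coin in the same transaction.
Code:
FEE_BTC = 0.0005                      # assumed flat fee, for the example only

def worth_spending(coins_btc):
    """A spend only makes sense if the inputs at least cover the fee."""
    return sum(coins_btc) > FEE_BTC

dust = 0.00000003                     # 3 satoshis
print(worth_spending([dust]))         # False: the dust alone can't pay the fee
print(worth_spending([dust, 0.05]))   # True: a larger coin carries it along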
hero member
Activity: 504
Merit: 500
WTF???
The result is that 0.7 (by default; it can be tweaked manually) will not accept "too large" blocks (we don't yet know exactly what causes it, but it is very likely caused by many transactions in the block).
The "manual tweak" is exactly two lines. Anyone can apply it, because no recompilation is necessary. All it takes is creating a short text file and restarting the Bitcoin client.

https://bitcointalksearch.org/topic/no-recompilation-fix-for-the-lock-table-is-out-of-available-lock-entries-152208


Block size isn't the problem; it's BDB.

The limitation is not related to block size, but rather to the number of transactions accessed/updated by a block.

In general, the default for 0.8 (250k) is fine.  Raising that to 500k is likely just as fine.

But the question is not simple to answer, because the 0.7 BDB lock issue becomes more complicated as BDB transactions grow larger.
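For a feel of why transaction count rather than byte size is what matters, here is a back-of-the-envelope sketch. Every constant in it is a guess (the real number of BDB page locks depends on index layout and page fill), but it shows how locks scale with the number of inputs and outputs a block touches, not with its size in bytes.
Code:
def locks_needed(num_txs, avg_inputs=2, avg_outputs=2, locks_per_record=2):
    """Very rough estimate of BDB lock entries used while connecting a block."""
    records_touched = num_txs * (avg_inputs + avg_outputs)
    return records_touched * locks_per_record

for n in (500, 1000, 2000, 5000):
    print(f"{n:5d} txs -> roughly {locks_needed(n):6d} locks")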
legendary
Activity: 2128
Merit: 1073
The result is that 0.7 (by default; it can be tweaked manually) will not accept "too large" blocks (we don't yet know exactly what causes it, but it is very likely caused by many transactions in the block).
The "manual tweak" is exactly two lines. Anyone can apply it, because no recompilation is necessary. All it takes is creating a short text file and restarting the Bitcoin client.

https://bitcointalksearch.org/topic/no-recompilation-fix-for-the-lock-table-is-out-of-available-lock-entries-152208

Edit: I'm including the fix here because some users can't easily click through with their mobile browsers:

Just create a file named "DB_CONFIG" in the ".bitcoin" or "AppData/Roaming/Bitcoin" directory, containing the following:
Code:
set_lg_dir database
set_lk_max_locks 40000
legendary
Activity: 2053
Merit: 1356
aka tonikt
Not sure why everyone is so panicked.  We only orphaned 25 blocks and the only danger was that you would accept the coins minted in those blocks (all transactions using other coins would eventually end up in the other chain as well).  If we just followed the network rules and waited 100 blocks to accept minted coins then there was actually no danger at all.  What am I missing? 
You are missing the fact that it was a great opportunity to double spend any coins.
First: you send 1000 BTC paying a merchant that uses Bitcoin 0.8.
Second: you pay the same 1000 BTC to another merchant who has an older client and is thus "looking" at the alternate branch, where the 1000 BTC has not been spent yet.
This is not so simple: the transaction you send to the first merchant is seen by all bitcoind nodes running 0.7.

So unless I missed something about tx management in Bitcoin nodes, to mount a double spend you must both:
  • have this fork happening, so that 6 confirmations on the 0.8 fork don't really mean anything
  • propagate both transactions at the same time, targeting 0.8 nodes with the first and 0.7 nodes with the second, more or less blindly hoping that the first reaches a node run by the next 0.8 miner to find a block and the second a node run by the next 0.7 miner to find one.
Yes, you are right.
I am not saying that it was easy - and that is why we don't know of anyone who managed to take advantage of it during the incident.
But it was definitely possible - and thus the panic.
legendary
Activity: 2053
Merit: 1356
aka tonikt
So the solution is to continue to increase the block size as demand provokes this issue then?
I guess.. because what else? Are you going to appeal to people to do less transactions, hoping that it would solve the problem? Smiley
I don't believe you can i.e. convince satoshidice to drop down their lucrative business, just because other ppl's transactions are getting queued...
Why should they care anyway, if they pay the same fees as you do?
Since we don't have other solution at hand, scaling up the storage limits seems to be the only option ATM.
Unless we're OK with increasing the fees?
hero member
Activity: 896
Merit: 1000
Not sure why everyone is so panicked.  We only orphaned 25 blocks and the only danger was that you would accept the coins minted in those blocks (all transactions using other coins would eventually end up in the other chain as well).  If we just followed the network rules and waited 100 blocks to accept minted coins then there was actually no danger at all.  What am I missing?  
You are missing the fact that it was a great opportunity to double spend any coins.
First: you send 1000 BTC paying a merchant that uses Bitcoin 0.8.
Second: you pay the same 1000 BTC to another merchant who has an older client and is thus "looking" at the alternate branch, where the 1000 BTC has not been spent yet.
This is not so simple: the transaction you send to the first merchant is seen by all bitcoind nodes running 0.7.

So unless I missed something about tx management in Bitcoin nodes, to mount a double spend you must both:
  • have this fork happening, so that 6 confirmations on the 0.8 fork don't really mean anything
  • propagate both transactions at the same time, targeting 0.8 nodes with the first and 0.7 nodes with the second, more or less blindly hoping that the first reaches a node run by the next 0.8 miner to find a block and the second a node run by the next 0.7 miner to find one.
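Here is a tiny model of the relay rule that makes this blind targeting necessary: a node that has already accepted one spend of an output rejects any conflicting spend of the same output. This is a simplification for illustration, not actual bitcoind code.
Code:
class Node:
    def __init__(self):
        self.spent_outpoints = set()   # outpoints used by txs already accepted

    def accept(self, tx):
        # tx = (txid, list of outpoints it spends)
        txid, inputs = tx
        if any(op in self.spent_outpoints for op in inputs):
            return False               # conflict with an already-seen spend
        self.spent_outpoints.update(inputs)
        return True

node = Node()
tx_a = ("A", [("prev_tx", 0)])         # pays the first merchant
tx_b = ("B", [("prev_tx", 0)])         # tries to pay the second merchant
print(node.accept(tx_a))               # True
print(node.accept(tx_b))               # False: the same coins are already spent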
sr. member
Activity: 308
Merit: 258
I believe he means that if you have a constant rate of 6 blocks/hour and a fixed maximum number of transactions per block, then as the number of transactions goes up it eventually exceeds the "bandwidth" limit (which is 6 * max-tx-per-block per hour), and instead of being mined at the time they are announced, transactions get queued to be mined later...
Which is exactly what I have been observing for the last few weeks - even transactions with a proper fee needed hours to be mined.
And this is very bad.
Bitcoin really needs to start handling bigger blocks - otherwise our transactions will soon need ages to get confirmed.
The network will just jam if we stay at the old limit.
I don't believe the design specs for Bitcoin will allow this; it just isn't possible to scale it without a complete redesign of how the internals work. That is kind of the reason this problem has shown up now, while other theoretical problems will come in the near future.  Sad
legendary
Activity: 4760
Merit: 1283
...

Also, I'm afraid it's very easy to say "just test for longer" but the reason we started generating larger blocks is that we ran out of time. We ran out of block space and transactions started stacking up and not confirming (ie, Bitcoin broke for a lot of its users). Rolling back to the smaller blocks simply puts us out of the frying pan and back into the fire.

We will have to roll forward to 0.8 ASAP. There isn't any choice.

I would like to understand with better precision what you mean by this.  Can you point to a particularly enlightening bit of documentation or discussion about this issue?
I believe he means that if you have a constant rate of 6 blocks/hour and a fixed maximum number of transactions per block, then as the number of transactions goes up they will eventually exceed the limit and, instead of being mined at the time they are made, they get queued...
Which is exactly what I have been observing for the last few weeks - even transactions with proper fees are mined hours later.

Bitcoin really needs to start handling bigger blocks - otherwise soon our transactions will need ages to get confirmed.
The network will get stuck if we stay at the limit.

So the solution is to continue to increase the block size as demand provokes this issue, then?  It does not strike me as a particularly good strategy, for a lot of reasons.  Among them, I can envision demand-side growth being exponential and much faster than processing capacity can be developed, even by highly capitalized and proficient entities.

The only up-side to this solution is that in the not-too-distant future only a handful of large entities will have problems, because only they will be forming the critical-infrastructure part of the Bitcoin network.  In fact, it could be legitimately argued that we are already at that point due to the makeup of the mining pools.  At the time of this last issue they seemed cooperative and in favor of the dev team's desired 'fix' - or at least the consensus of the current dev team.  What happens in future 'events' will be interesting to observe.

legendary
Activity: 2053
Merit: 1356
aka tonikt
Not sure why everyone is so panicked.  We only orphaned 25 blocks and the only danger was that you would accept the coins minted in those blocks (all transactions using other coins would eventually end up in the other chain as well).  If we just followed the network rules and waited 100 blocks to accept minted coins then there was actually no danger at all.  What am I missing? 
You are missing the fact that it was a great opportunity to double spend any coins.
First: you send 1000 BTC paying a merchant that uses Bitcoin 0.8.
Second: you pay the same 1000 BTC to another merchant who has an older client and is thus "looking" at the alternate branch, where the 1000 BTC has not been spent yet.
newbie
Activity: 13
Merit: 0
Not sure why everyone is so panicked.  We only orphaned 25 blocks and the only danger was that you would accept the coins minted in those blocks (all transactions using other coins would eventually end up in the other chain as well).  If we just followed the network rules and waited 100 blocks to accept minted coins then there was actually no danger at all.  What am I missing?  Seems to me like the devs and the pools worked together to quickly address the issue and that there is a plan to move forward with a permanent fix. Just my $.02.
legendary
Activity: 826
Merit: 1001
rippleFanatic
So my question is: who did this affect negatively, and who took the hit because of this?

Someone had to lose a good amount of BTC yesterday.

> Eleuthria: ~1500 BTC lost in 24 hours from this

http://bitcoinstats.com/irc/bitcoin-dev/logs/2013/03/12

He lost BTCGuild's entire hot wallet in under 60 seconds.  But that was due to the way his pool software messed up while upgrading to 0.8 (miners were suddenly being credited at difficulty=1); it was a separate issue from the blockchain fork.
legendary
Activity: 2053
Merit: 1356
aka tonikt
...

Also, I'm afraid it's very easy to say "just test for longer" but the reason we started generating larger blocks is that we ran out of time. We ran out of block space and transactions started stacking up and not confirming (ie, Bitcoin broke for a lot of its users). Rolling back to the smaller blocks simply puts us out of the frying pan and back into the fire.

We will have to roll forward to 0.8 ASAP. There isn't any choice.

I would like to understand with better precision what you mean by this.  Can you point to a particularly enlightening bit of documentation or discussion about this issue?
I believe he means that if you have a constant rate of 6 blocks/hour and a fixed maximum number of transactions per block, then as the number of transactions goes up it eventually exceeds the "bandwidth" limit (which is 6 * max-tx-per-block per hour), and instead of being mined at the time they are announced, transactions get queued to be mined later...
Which is exactly what I have been observing for the last few weeks - even transactions with a proper fee needed hours to be mined.
And this is very bad.
Bitcoin really needs to start handling bigger blocks - otherwise our transactions will soon need ages to get confirmed.
The network will just jam if we stay at the old limit.
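A rough worked version of that "bandwidth" arithmetic. The average transaction size and the demand figure are assumptions; the 250 KB figure is the 0.8 default soft limit mentioned earlier in the thread.
Code:
AVG_TX_BYTES = 250            # assumed average transaction size
BLOCK_LIMIT_BYTES = 250_000   # 0.8's default soft block limit
BLOCKS_PER_HOUR = 6

txs_per_block = BLOCK_LIMIT_BYTES // AVG_TX_BYTES
capacity_per_hour = BLOCKS_PER_HOUR * txs_per_block
print(f"~{txs_per_block} txs per block, ~{capacity_per_hour} txs per hour")

demand_per_hour = 8_000       # hypothetical demand exceeding capacity
backlog_growth = max(0, demand_per_hour - capacity_per_hour)
print(f"the queue grows by ~{backlog_growth} txs every hour at that demand")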
legendary
Activity: 938
Merit: 1001
bitcoin - the aerogel of money
I'm kind of glad this happened.

Firstly, it's better now than 3-5 years from now. We want evolutionary pressure that gradually leads to a battle-hardened Bitcoin, but we don't want extinction events.

Secondly, it illustrates an important principle of Bitcoin in practice:

Social convention trumps technical convention

The implication is that even if a fatal protocol flaw is discovered in the future, and even if there is a 51% attack, people will not lose their bitcoins, as long as the community reaches consensus on how to change the rules.
hero member
Activity: 728
Merit: 500
I blame SD.  SD pushed our beta product way too far.  Shame on eric and his greedy little BS company.  I hope its stock tanks.  I hope miners filter out the 1Dice addresses.  Fuck that noise!

This is witch-hunting, and unfair to SD, which provided and still provides a great service to this community.