There was, indeed, testing on the testnet with a full (1 MB) block, and it was accepted by both the 0.7 and 0.8 versions. Testnet coverage was not the issue here.
Slush's pool should have produced an equally valid block. However, the block was structured in such a way that it exposed a problem in 0.7 that had never been discovered. Not only was this an extremely difficult problem to catch, but finding it would not have been accelerated by a mixed testnet. Introducing 0.8 into the equation would actually have delayed finding the bug, since it would have meant less time spent testing edge cases on 0.7.
Source?
@dree12: Do you mean it was done on purpose? Source?
It wasn't done on purpose. The bad block was produced by the slush pool (mining.bitcoin.cz) after its operator upgraded to 0.8 and set his maximum block size to 1 MB. The problem was that the Berkeley DB (BDB) lock settings (max_locks) used in pre-0.8 clients were not sufficient to handle a large and complex block (one with many txins and txouts). BDB was replaced with LevelDB in 0.8.
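For reference, Berkeley DB reads lock limits from a `DB_CONFIG` file placed in the database environment directory, which is how operators of stuck 0.7 nodes could raise the limit without recompiling. A sketch, with illustrative values (the exact figures needed depend on the block being processed and are not the officially recommended numbers):

```
# DB_CONFIG, placed in the Bitcoin data directory's database environment.
# set_lk_max_locks and set_lk_max_objects are standard Berkeley DB
# parameters; the values below are illustrative assumptions, chosen to be
# well above the pre-0.8 defaults.
set_lk_max_locks 120000
set_lk_max_objects 120000
```

After adding the file, the node must be restarted so the new environment settings take effect.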
From the #bitcoin-dev IRC channel:
<@gmaxwell> samurai1200: there is a target blocksize. It defaults to 250k. With txn slow last week due to SD being 90+% of all blocks, Gavin and Mike went around and nagged pools to increase their target sizes.. so its not surprising that someone was running with a big setting. The target size is just a commandline option.
<@gmaxwell> Arguably any setting over 500 is exposing undertested parts of bitcoin, because prior to 0.7.x (?) the target size was hard coded at 500.
<@gmaxwell> The soft limit change was slush manually setting his target blocksize to 1MB.
<@gmaxwell> One mistake we made here in hindsight was exposing the target going over 500kb without doing more testing with >500kb blocks.
<@gmaxwell> BlueMatt: Can you get a blocktester test in that tries to get a extreme maximum number of distinct inputs+outputs? Like 4000 in and 5000 out?
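The kind of test gmaxwell asks for can be motivated with a rough Python illustration. It assumes, as a deliberate simplification, that lock demand grows roughly with the number of distinct inputs plus outputs a block touches (real BDB locking is per-page and more complicated), and the lock budget is a hypothetical parameter, not the actual pre-0.8 value:

```python
def locks_needed(num_inputs: int, num_outputs: int) -> int:
    """Crude estimate: assume each spent input and each created output
    touches the on-disk index once, so lock demand scales with both.
    This is a simplification of real Berkeley DB page-level locking."""
    return num_inputs + num_outputs

def block_is_risky(num_inputs: int, num_outputs: int, lock_budget: int) -> bool:
    """True if processing the block could exhaust the node's lock budget.
    lock_budget is a hypothetical per-node configuration value."""
    return locks_needed(num_inputs, num_outputs) > lock_budget

# The extreme case gmaxwell suggests: 4000 inputs and 5000 outputs
# needs an estimated 9000 locks; whether that exhausts a node depends
# entirely on how its BDB environment was configured.
print(locks_needed(4000, 5000))
```

The point of such a blocktester case is exactly this: whether a block validates can depend on a node-local database setting rather than on consensus rules, which is what made the 0.7/0.8 split possible.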
From the bitcoin-development mailing list:
I've just seen many reports of 0.7 nodes getting stuck around block 225430,
due to running out of lock entries in the BDB database. 0.8 nodes do not
seem to have a problem.
In any case, if you do not have this block:
2013-03-12 00:00:10 SetBestChain: new best=000000000000015aab28064a4c521d6a5325ff6e251e8ca2edfdfe6cb5bf832c height=225439 work=853779625563004076992 tx=14269257 date=2013-03-11 23:49:08
you're likely stuck. Check debug.log and db.log (look for 'Lock table is
out of available lock entries').
If this is a widespread problem, it is an emergency. We risk having
(several) forked chains with smaller blocks, which are accepted by 0.7
nodes. Can people contact pool operators to see which fork they are on?
Blockexplorer and blockchain.info seem to be stuck as well.
Immediate solution is upgrading to 0.8, or manually setting the number of
lock objects higher in your database. I'll follow up with more concrete
instructions.
If you're unsure, please stop processing transactions.
--
Pieter