Topic: Is bitcoin-qt able to scale with large numbers of unconfirmed txIn?

staff
Activity: 4284
Merit: 8808
That's actually kind of awesome.  How do you set that kind of test up?  You feed a script of RPC commands to add tx?  
It's actually quite easy right now.  Use the invalidateblock rpc to kill an old block. It will disconnect the blocks after it, one at a time backing off the tip... and put the transactions in the mempool. (Some set of the transactions will fall out because they're descended from disconnected coinbases, but most will be in there.)

Use reconsiderblock to fix your node...

This may get broken in 0.11 since Gavin has a PR to cap the mempool size during reorgs.
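The reorg mechanics described above can be sketched as a toy model (this is illustrative Python, not Bitcoin Core code): invalidating a block disconnects every block above it, one at a time from the tip, and each disconnected block's transactions go back into the mempool.

```python
# Toy model of invalidateblock's effect on the mempool (not Core's code).
chain = [["tx1", "tx2"], ["tx3"], ["tx4", "tx5"]]  # blocks, oldest first
mempool = set()

def invalidate_block(height):
    """Disconnect blocks from the tip down to `height`, refilling the mempool."""
    while len(chain) > height:
        block = chain.pop()       # back off the tip one block at a time
        mempool.update(block)     # its transactions return to the mempool

invalidate_block(1)               # keep only the first block
print(sorted(mempool))            # ['tx3', 'tx4', 'tx5']
```

With a real node the same effect comes from `bitcoin-cli invalidateblock <hash>` on a sufficiently old block, followed by `reconsiderblock <hash>` to reconnect the chain.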
sr. member
Activity: 315
Merit: 250
That's actually kind of awesome.  How do you set that kind of test up?  You feed a script of RPC commands to add tx? 

Or a modified DoS test script:
https://github.com/bitcoin/bitcoin/blob/cdae53e456ad35216e33a90f1681aade546cc431/src/test/DoS_tests.cpp

Eg.: BOOST_AUTO_TEST_CASE(DoS_mapOrphans)
legendary
Activity: 924
Merit: 1132
Not that we're aware of; all the involved algorithms should be log() scaling.

Good.  That's as it should be. I'm betting the problem was my browser open on the blockchain.info page that was showing me how many tx there were....   Cry

4000 is not very large either. I've fairly recently tested with a good month's worth of blockchain in mempool at once. (Which does indeed cause some slowness, but it's orders of magnitude more than you're talking about...)

That's actually kind of awesome.  How do you set that kind of test up?  You feed a script of RPC commands to add tx? 
sr. member
Activity: 315
Merit: 250
I'm seeing some possibly unstable behavior when the unconfirmed tx pool gets very large.  Right now there are 4000 unconfirmed tx
Not that we're aware of; all the involved algorithms should be log() scaling. 4000 is not very large either. I've fairly recently tested with a good month's worth of blockchain in mempool at once. (Which does indeed cause some slowness, but it's orders of magnitude more than you're talking about...)

The current default limit is 10k TXs in the mempool, isn't it?
staff
Activity: 4284
Merit: 8808
I'm seeing some possibly unstable behavior when the unconfirmed tx pool gets very large.  Right now there are 4000 unconfirmed tx
Not that we're aware of; all the involved algorithms should be log() scaling. 4000 is not very large either. I've fairly recently tested with a good month's worth of blockchain in mempool at once. (Which does indeed cause some slowness, but it's orders of magnitude more than you're talking about...)
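A quick way to see why a few thousand entries is trivial for log-scaling structures: a sorted index answers lookups with a binary search, so cost grows with log(n), not n. The sketch below uses Python's bisect as a stand-in for an ordered index; it is an illustration, not Core's actual mempool data structure.

```python
# Illustration of log-scaling lookups: a sorted structure answers
# "cheapest tx at or above a target feerate" with a binary search.
import bisect

fees = []  # sorted list of (feerate, txid), a stand-in for an ordered index
for i in range(4000):
    bisect.insort(fees, (i % 97, f"tx{i}"))  # O(log n) search per insert

# locate the first tx at feerate >= 50 in O(log n) comparisons
idx = bisect.bisect_left(fees, (50, ""))
print(fees[idx][0])  # 50
```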
legendary
Activity: 1106
Merit: 1026
It's probably worth mentioning: don't restart the client in between, while you have many unconfirmed transactions.

Quote
During startup, when adding pending wallet transactions, which spend outputs of
other pending wallet transactions, back to the memory pool, and when they are
added out of order, it appears as if they are orphans with missing inputs.

Those transactions are then rejected and flagged as "conflicting" (= not in the
memory pool, not in the block chain).

https://github.com/bitcoin/bitcoin/pull/5511
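The ordering problem the PR describes can be shown with a toy acceptance check (illustrative Python, not Core's actual code): a child transaction that spends a still-pending parent looks like an orphan with missing inputs if it is added before the parent.

```python
# Toy illustration of the startup bug in PR 5511: adding dependent wallet
# transactions back to the mempool out of order makes a child that spends a
# pending parent look like an orphan with missing inputs.
confirmed_outputs = {"coin0"}   # outputs already confirmed in the chain
mempool = {}                    # txid -> list of inputs it spends

def accept_to_mempool(txid, inputs):
    """Accept only if every input is confirmed or provided by a pooled tx."""
    available = confirmed_outputs | set(mempool)
    if not set(inputs) <= available:
        return "rejected-as-orphan"   # missing inputs -> flagged conflicting
    mempool[txid] = inputs
    return "accepted"

# Child spends the parent: child-first fails, parent-first succeeds.
print(accept_to_mempool("child", ["parent"]))   # rejected-as-orphan
print(accept_to_mempool("parent", ["coin0"]))   # accepted
print(accept_to_mempool("child", ["parent"]))   # accepted
```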
sr. member
Activity: 315
Merit: 250
Edit:  That must be it.  We just got a block that packed up over a thousand of them (block 350476) and my CPU load is back down to something near normal.

CPU usage was probably from running your own validation on the new block and not the size of the pool.

If the unconfirmed tx pool gets too large, does it do something which doesn't scale?

If it gets really large, some older or lower-fee unconfirmed transactions can be dropped from the mempool.
The clients that sent those transactions may still retry and add them back to the network by broadcasting them again.
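That eviction behavior can be sketched with a toy capped pool (illustrative Python with a hypothetical cap, not Core's actual eviction policy): once the pool is over its limit, the lowest-feerate transaction falls out, and its sender is free to rebroadcast it later.

```python
# Toy sketch of capped-mempool eviction: over the cap, drop the cheapest tx.
import heapq

CAP = 3
pool = []  # min-heap of (feerate, txid), cheapest tx on top

def add_tx(feerate, txid):
    """Add a tx; if the pool exceeds CAP, evict and return the cheapest txid."""
    heapq.heappush(pool, (feerate, txid))
    if len(pool) > CAP:
        return heapq.heappop(pool)[1]
    return None

dropped = None
for fee, tx in [(5, "a"), (1, "b"), (9, "c"), (7, "d")]:
    dropped = add_tx(fee, tx) or dropped

print(dropped)  # 'b' -- the lowest-fee tx fell out of the pool
```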
legendary
Activity: 924
Merit: 1132
I'm seeing some possibly unstable behavior when the unconfirmed tx pool gets very large.  Right now there are 4000 unconfirmed tx showing on block explorer, and bitcoin-qt seems to be using up a whole lot of CPU.  If the unconfirmed tx pool gets too large, does it do something which doesn't scale?

Edit:  That must be it.  We just got a block that packed up over a thousand of them (block 350476) and my CPU load is back down to something near normal.