Author

Topic: Problems with send buffer limits in 0.3.20 (Read 1723 times)

legendary
Activity: 1596
Merit: 1100
March 03, 2011, 04:50:47 PM
#10
With a 10MB limit, someone can create 10 full blocks within a 500-block span to disable getblocks uploading for almost the entire network. This is probably an even more effective attack than whatever the limit is designed to protect against.

That won't disable getblocks uploading.

But even so, the ideal would be to simply stop reading until the write buffer clears...
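Roughly what I mean, sketched against the select() loop in net.cpp (names are approximate, this is not a patch):

    // Building the fd sets for select(): ask for writability whenever we
    // have queued data, but if this peer's outgoing buffer is backed up,
    // stop asking to read from its socket.  TCP back-pressure then stalls
    // the peer, instead of us disconnecting it or letting vSend grow forever.
    if (!pnode->vSend.empty())
        FD_SET(pnode->hSocket, &fdsetSend);
    if (pnode->vSend.size() < nSendBufferLimit)   // whatever cap we settle on
        FD_SET(pnode->hSocket, &fdsetRecv);

Once the write side drains back below the cap, the peer's socket goes back into the read set and its requests get serviced again.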

administrator
Activity: 5222
Merit: 13032
With a 10MB limit, someone can create 10 full blocks within a 500-block span to disable getblocks uploading for almost the entire network. This is probably an even more effective attack than whatever the limit is designed to protect against.
legendary
Activity: 1652
Merit: 2311
Chief Scientist
Please help test:  https://github.com/bitcoin/bitcoin/pull/95

Sets the -maxsendbuffer and -maxreceivebuffer limits to 10MB each (so a possible maximum of 2GB of memory if you had 100 connections).

I tested by running a 0.3.20 node to act as server, then ran a client with:
  -connect={server_ip} -noirc -nolisten
... to make sure I was downloading the block chain from that 0.3.20 node.
legendary
Activity: 1652
Merit: 2311
Chief Scientist
Couldn't peers theoretically need to send 500 MB in response to a getblocks request? The limit should perhaps be MAX_BLOCK_SIZE*500.

500MB per connection times 100 connections would be 50 GB.  That re-opens the door to a memory exhaustion denial-of-service attack, which is the problem -maxsendbuffer fixes.

As transaction volume grows I think there will be lots of things that need optimization/fixing.  One simple fix would be to request fewer blocks as they get bigger, to stay inside the sendbuffer limit...
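The serving side could enforce the same thing; roughly like this (just a sketch of the idea, not the pull request, and the exact hook in the getdata handling would need care):

    // While answering a "getdata" for a batch of blocks, stop queueing once
    // the outgoing buffer is near the limit; the peer can re-request the
    // rest after it has drained what we already sent.
    BOOST_FOREACH(const CInv& inv, vInv)
    {
        if (pfrom->vSend.size() >= SendBufferSize())
            break;                              // don't queue past the limit
        if (inv.type == MSG_BLOCK && mapBlockIndex.count(inv.hash))
        {
            CBlock block;
            block.ReadFromDisk(mapBlockIndex[inv.hash]);
            pfrom->PushMessage("block", block);
        }
    }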

(ps: I've been re-downloading the current block chain connected to a -maxsendbuffer=10000 0.3.20 node, and the workaround works)
administrator
Activity: 5222
Merit: 13032
Couldn't peers theoretically need to send 500 MB in response to a getblocks request? The limit should perhaps be MAX_BLOCK_SIZE*500.
legendary
Activity: 1652
Merit: 2311
Chief Scientist
Oops.

My fault-- I DID test downloading the entire production block chain with a 0.3.20 client, but I wasn't careful to make sure I downloaded it from another 0.3.20 client.

Workaround:  if you are running 0.3.20, run with -maxsendbuffer=10000
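That is, e.g.:

    bitcoind -maxsendbuffer=10000

(the equivalent maxsendbuffer=10000 line in bitcoin.conf should also work).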
sr. member
Activity: 334
Merit: 250
so much spam in that block, sigh.
administrator
Activity: 5222
Merit: 13032
That's pretty bad. Good thing you caught this before everyone upgraded and new nodes were no longer able to connect.
legendary
Activity: 1596
Merit: 1100
I feel like send limiting is perhaps not that essential. If we really see BitCoin nodes OOMing because they tried to send data too fast that implies there's a bug elsewhere. For instance getdata requests have a size limit for exactly this kind of reason (it might be too large, but we can tweak that).

Ultimately, the goal is flow control.  Your OS has a buffer for outgoing data.  When that gets full, we need to stop sending more data, and wait for empty buffer space.

The worst-case buffer size against a hacker is zero. The worst-case "normal" buffer size is 8k.

Since bitcoin needs to send more data than that in a single message, an implementation must choose: (a) store a pointer into the middle of the object being sent, for later resumption of the transfer, or (b) provide an application buffer that keeps a copy of all outgoing data until it has been transmitted. Satoshi chose (b), but placed no limit on the size of that outgoing buffer.
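A minimal sketch of what (b)-with-a-cap looks like, in plain BSD-socket terms rather than the actual net.cpp code (the 10MB figure is just the number being discussed in this thread):

    #include <sys/socket.h>
    #include <vector>

    struct Connection
    {
        int hSocket;                                  // non-blocking socket
        std::vector<unsigned char> vSend;             // application-level outgoing buffer
        static const size_t nSendBufferMax = 10 * 1000 * 1000;

        // Approach (b): copy every outgoing message into vSend, but refuse to
        // queue more once the cap is hit instead of growing without bound.
        // (The caller then has to back off -- or, as today, drop the peer.)
        bool QueueData(const unsigned char* pch, size_t nBytes)
        {
            if (vSend.size() + nBytes > nSendBufferMax)
                return false;
            vSend.insert(vSend.end(), pch, pch + nBytes);
            return true;
        }

        // Drain vSend whenever select() reports the socket writable.  send()
        // takes only what the OS buffer will accept; the rest waits here.
        void FlushSend()
        {
            while (!vSend.empty())
            {
                ssize_t n = send(hSocket, &vSend[0], vSend.size(), 0);
                if (n <= 0)
                    break;      // OS buffer full (EWOULDBLOCK) or error
                vSend.erase(vSend.begin(), vSend.begin() + n);
            }
        }
    };

Against a hostile peer that never reads, the OS-side buffer never drains, so vSend hits the cap almost immediately; that is the memory-exhaustion case the limit is meant to stop.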

It does sound like the limits are tighter than they should be.

legendary
Activity: 1526
Merit: 1134
0.3.20 has a new feature that disconnects nodes if vSend gets larger than 256kb, controllable via a flag.

This has the unfortunate bug that it's no longer possible to download the production block chain. I suggest people do not upgrade to 0.3.20 for now, unless you run with -maxsendbuffer=2000 as a flag.

It dies when sending the block chain between blocks 51501 and 52000 on the production network. The problem is this block:

http://blockexplorer.com/block/00000000186a147b91a88d37360cf3a525ec5f61c1101cc42da3b67fcdd5b5f8

It's 200kb. During block chain download, this block plus the others in that run of 500 blocks pushes vSend over 256kb, which results in a disconnect.
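The check involved is roughly this (names approximate, I'm not quoting the code):

    // 0.3.20: drop any peer whose outgoing buffer has grown past the limit,
    // rather than waiting for it to drain.
    if (pnode->vSend.size() > nMaxSendBuffer)     // 256kb by default
        pnode->fDisconnect = true;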

I feel like send limiting is perhaps not that essential. If we really see BitCoin nodes OOMing because they tried to send data too fast that implies there's a bug elsewhere. For instance getdata requests have a size limit for exactly this kind of reason (it might be too large, but we can tweak that).