This first set of data is irrelevant to the question; I just wanted to show dstat output before getting to the log files and stripping out the extraneous info.
------sockets------ --net/eth0- -dsk/total- ---load-avg--- ----interrupts---
tot tcp udp raw frg| recv send| read writ| 1m 5m 15m | 51 52 53
986 810 3 0 0| 23k 17k| 0 0 |0.35 0.32 1.45| 92 74 0
986 810 3 0 0| 594k 7441k| 0 448k|0.32 0.31 1.44|1998 2183 0
986 810 3 0 0| 878k 54M| 0 0 |0.32 0.31 1.44|8278 11k 0
986 810 3 0 0|1062k 63M| 0 0 |0.32 0.31 1.44|9655 13k 0
986 810 3 0 0|1349k 72M| 0 0 |0.32 0.31 1.44| 11k 15k 0
986 810 3 0 0|1232k 50M| 24M 0 |0.32 0.31 1.44|9095 11k 0
986 810 3 0 0| 632k 23M| 44M 60k|0.38 0.32 1.44|4751 6215 0
986 810 3 0 0| 412k 19M| 44M 0 |0.38 0.32 1.44|3383 5325 0
986 810 3 0 0| 472k 17M| 19M 0 |0.38 0.32 1.44|4018 5333 0
986 810 3 0 0| 281k 16M| 16M 0 |0.38 0.32 1.44|2698 4440 0
986 810 3 0 0| 411k 18M| 20M 0 |0.38 0.32 1.44|3845 5218 0
986 810 3 0 0| 438k 18M| 18M 0 |0.35 0.32 1.43|3619 4957 0
986 810 3 0 0| 377k 17M| 12M 64k|0.35 0.32 1.43|3746 4934 0
986 810 3 0 0| 370k 16M| 12M 32k|0.35 0.32 1.43|3559 4908 0
986 810 3 0 0| 369k 18M| 16M 0 |0.35 0.32 1.43|3005 5266 0
986 810 3 0 0| 453k 17M| 13M 0 |0.35 0.32 1.43|3799 5294 0
986 810 3 0 0| 291k 16M|8688k 0 |0.32 0.31 1.43|2423 4368 0
986 810 3 0 0| 275k 15M|9200k 0 |0.32 0.31 1.43|2349 4285 0
986 810 3 0 0| 323k 15M| 15M 0 |0.32 0.31 1.43|2135 4029 0
986 810 3 0 0| 372k 15M| 10M 0 |0.32 0.31 1.43|3383 4571 0
986 810 3 0 0| 563k 12M| 10M 0 |0.32 0.31 1.43|3560 4088 0
986 810 3 0 0| 418k 15M| 14M 16k|0.37 0.32 1.42|3943 4697 0
986 810 3 0 0| 228k 13M|9852k 0 |0.37 0.32 1.42|2258 3587 0
986 810 3 0 0| 373k 15M| 11M 0 |0.37 0.32 1.42|3357 4305 0
986 810 3 0 0| 789k 14M| 11M 0 |0.37 0.32 1.42|2523 4063 0
986 810 3 0 0| 335k 13M|6556k 0 |0.37 0.32 1.42|2242 3832 0
986 810 3 0 0| 357k 14M|7280k 0 |0.34 0.32 1.42|3092 4362 0
986 810 3 0 0| 432k 12M|5376k 8192B|0.34 0.32 1.42|3945 4610 0

^^ This is from the Philadelphia node; the other two had the blockchain in ramfs (this was the only node with any significant disk activity).
*****
OK, now this data is all from one block:
Germany (nogleg), an i7-4770 w/ 32 GB of RAM, blockchain in ramfs.
Columns: TCP sockets open (maybe a handful or two aren't bitcoind connections), eth0 bytes received, eth0 bytes sent, timestamp (GMT).
629,656147,721483,06-03 04:11:28
629,528982,44778852,06-03 04:11:29
629,1026082,78608445,06-03 04:11:30
629,1108072,92873237,06-03 04:11:31
629,1057960,67983803,06-03 04:11:32
630,460841,28231563,06-03 04:11:33
630,163400,10998207,06-03 04:11:34
630,210372,8074017,06-03 04:11:35
630,126797,7413886,06-03 04:11:36

(Capped out at 88.57 MiB/s, or ~93 MB/s if we want to use fake numbers, like hdd manufacturers do.)
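For reference, here's a minimal sketch of the kind of one-second logger that could produce lines in this format; the interface name and the /proc-based counting are assumptions, not necessarily how this data was actually collected:

#!/usr/bin/env python3
# Log "sockets,rx_bytes,tx_bytes,timestamp" once per second.
import time
from datetime import datetime, timezone

IFACE = "eth0"

def eth_counters(iface=IFACE):
    """Return (rx_bytes, tx_bytes) totals for one interface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])   # rx bytes, tx bytes
    raise RuntimeError("interface %s not found" % iface)

def tcp_socket_count():
    """Count TCP socket entries in /proc/net/tcp (first line is a header)."""
    with open("/proc/net/tcp") as f:
        return sum(1 for _ in f) - 1

prev_rx, prev_tx = eth_counters()
while True:
    time.sleep(1)
    rx, tx = eth_counters()
    stamp = datetime.now(timezone.utc).strftime("%m-%d %H:%M:%S")
    print("%d,%d,%d,%s" % (tcp_socket_count(), rx - prev_rx, tx - prev_tx, stamp))
    prev_rx, prev_tx = rx, tx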
This is Philadelphia, PA, an E3-1270v1 w/ 16 GB of RAM and an SSD:
496,222221,244847,06-03 04:11:28
496,694519,6537351,06-03 04:11:29
496,867406,60435521,06-03 04:11:30
496,979387,67685208,06-03 04:11:31
496,820277,49585630,06-03 04:11:32
496,469354,19855676,06-03 04:11:33
496,126834,5740728,06-03 04:11:34
496,173320,4679675,06-03 04:11:35

This is Houston, TX, which is by far the fastest machine (quad Opteron 6274, 64 GB RAM -- ~1800 c/m yam and nearly 300 khash scrypt, hoho) -- blockchain in ramfs. I don't want to use tmpfs since I'd rather have it crash & burn than start using swap space (so I have to remove the disk space check every time, since it doesn't work with ramfs... well, I'd remove it regardless, since it's a small hit to efficiency).
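I believe the reason that check misbehaves on ramfs is that ramfs doesn't report a size at all (statvfs shows zero total/available blocks), so any "do we have enough free space" test fails there. A quick sketch to illustrate; the ramfs mount point is an assumption:

import os

# Compare what statvfs() reports for a normal filesystem vs. a ramfs mount.
# The ramfs path below is an assumption -- adjust to wherever the blockchain
# actually lives.
for path in ("/", "/mnt/ramfs"):
    st = os.statvfs(path)
    total = st.f_frsize * st.f_blocks
    avail = st.f_frsize * st.f_bavail
    print("%s: total=%d bytes, available=%d bytes" % (path, total, avail))
# ramfs typically reports 0 total/available blocks, so an "at least N bytes
# free?" check can never pass on it.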
I modified the source to allow 64 scriptcheck threads instead of 16, but I'm unsure if it made any difference (it was "running" 64 threads, at least). The clock on this machine had a couple of seconds of drift. I took it offline after around 36 hours because of its inadequate upstream. The rest were shut down at the end of day GMT on March 8th. I started Philadelphia & nogleg back up for approx. 23 hrs on March 12th (I wanted to confirm correlating changes in tx propagation times on http://bitcoinstats.com/network/propagation).
Data for Houston, TX server:
530,183301,6583325,06-03 04:11:30
530,277817,12304357,06-03 04:11:31
530,271737,12301635,06-03 04:11:32
530,303381,12300903,06-03 04:11:33
530,298444,12300672,06-03 04:11:34
530,313658,12298154,06-03 04:11:35
529,320124,12292110,06-03 04:11:36
528,288593,12293872,06-03 04:11:37
528,263904,12300140,06-03 04:11:38
528,278275,12303968,06-03 04:11:39
526,281720,12291286,06-03 04:11:40
527,279488,12289710,06-03 04:11:41
527,287642,12291841,06-03 04:11:42
526,290481,12278198,06-03 04:11:43
527,192947,9499450,06-03 04:11:44
527,83951,3973766,06-03 04:11:45

Now my question is... you can see this machine is on a 100 Mbps uplink and that it kept sending at that max speed for maybe 8 seconds or so after the other two had stopped. I haven't looked through the code (& I'm not even sure I'd be able to tell if I did) or run a test, but would it be possible to delay someone receiving a block by deliberately decreasing your upstream bandwidth (or just by having poor upstream to begin with)? i.e. I can receive and process any block very quickly, then I limit my upstream to, say, 3 Mbps. That should make sure everyone that requests the block from me receives only a small amount (~5 kB/s). Does this mean they won't request the block from another source?
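Just to put rough numbers on that scenario (the block size and peer count here are assumptions, only meant to show the order of magnitude):

# Back-of-the-envelope math for the throttling question above.
BLOCK_BYTES = 350 * 1024        # assume a roughly 350 kB block
UPLINK_BPS  = 3e6 / 8           # the hypothetical 3 Mbps cap, in bytes/s
PEERS       = 75                # peers pulling the block at the same time

per_peer = UPLINK_BPS / PEERS   # ~5 kB/s each, matching the figure above
print("per-peer rate: %.1f kB/s" % (per_peer / 1024))
print("time to push the block to each peer: %.1f min" % (BLOCK_BYTES / per_peer / 60))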
I could see it requesting from 2 or 3 sources simultaneously, but more than that and you'd exacerbate the bandwidth issues. The client should probably learn which sources are best over time, if it doesn't already. I'm sure there's some relay in place between the major pools (if not, there should be), but I'm guessing at least a quarter of blocks come from solo miners or small private pools... which is problematic if someone can slow the transfer to a trickle without the peer breaking off the request.
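Roughly the sort of thing I have in mind, as a toy sketch (this is not how the reference client actually behaves, just the heuristic): keep a smoothed estimate of how long each peer takes to deliver a block and prefer the quickest few for the next request.

from collections import defaultdict

ALPHA = 0.3                              # smoothing factor, arbitrary
avg_secs = defaultdict(lambda: 60.0)     # pessimistic default per peer

def record_block_delivery(peer, seconds):
    """Update the running estimate after a peer finishes sending a block."""
    avg_secs[peer] = (1 - ALPHA) * avg_secs[peer] + ALPHA * seconds

def pick_sources(peers, n=3):
    """Pick the n historically fastest peers to request the next block from."""
    return sorted(peers, key=lambda p: avg_secs[p])[:n]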
I used to think those Swiss nodes were doing that (I had them blocked, the vbitcoin ETH Zurich nodes, as they never sent data and only received), but maybe not... or maybe they were, but since they weren't sending any data at all, my client requested the block from a secondary source semi-quickly?