
Topic: How to initially sync Bitcoin as fast as possible (Read 1016 times)

hero member
Activity: 714
Merit: 1298
The sync from scratch was going well at about 15% a day,


Thanks,"15% a day" speaks volumes.

In my experience, IBD on a machine with 16 GB RAM and an external SSD (Samsung 870 EVO) goes much faster. Four weeks ago it took less than 36 hours to download the full blockchain from scratch. A properly tuned bitcoin.conf (dbcache = 1/4 of RAM and blocksonly=1) speeds up IBD.
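
For reference, a minimal bitcoin.conf along those lines might look like this (an illustrative sketch for a 16 GB machine; dbcache takes its value in MiB):

Code:
# bitcoin.conf - illustrative IBD tuning for a 16 GB RAM machine
dbcache=4096    # roughly 1/4 of RAM, value in MiB
blocksonly=1    # skip relay of unconfirmed transactions during IBD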
legendary
Activity: 1697
Merit: 1074
~


Interesting solution for IBD; I've never encountered it before.

With 16 GB of RAM in mind, what size should be assigned to the RAMdisk? And how should dbcache be set in this case: dbcache = 1/4 of the RAMdisk, or 1/4 of (RAM - RAMdisk)?

And another question: what is the gain in comparison with an M.2 NVMe SSD?

I assigned 10GB to dbcache and 4GB to the RAMdisk. A big RAMdisk is not needed because it stores only a few blk and rev files; the rest are links from those files to persistent storage.
There are many accesses to the latest blk and rev files during synchronization.
Writes to the chainstate are made periodically, or when you interrupt the sync, so that work is amortized over the download activity.
I said "gain" because RAM is generally faster than any SSD, but it also depends on the machine you are using. I do not have a machine with an M.2 connector to test.

I just synced the blockchain with this setup. I had to reindex from 49% because the system crashed, since I am using 2x 500GB HDDs in a docking station connected to the notebook through USB. The sync from scratch was going well at about 15% a day, but the reindex process was really slow: I put only the index folder and chainstate on the RAMdisk, and it did not help much.
The script needs some adjustments, such as removing set -e; it is not needed, since if the script exits, the RAMdisk will fill up and bitcoind will stop after that.

So instead of reindexing, downloading from scratch is much faster, and you can make checkpoints by interrupting the program at 30%, for example, and saving the chainstate and index folders and some of the recent blk and rev files. You can put them back in case any critical event occurs further on in the process.
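
A minimal sketch of taking such a checkpoint, assuming bitcoind has already been shut down cleanly; the paths are illustrative and should be adjusted to your own layout:

Code:
#!/bin/bash
# Illustrative manual checkpoint; run only after a clean bitcoind shutdown.
DATADIR=/home/alice/bcdata/BTC
CKPT=/home/alice/bcdata/BTC-checkpoint

mkdir -p "$CKPT"
cp -a "$DATADIR/chainstate"   "$CKPT/"   # UTXO database
cp -a "$DATADIR/blocks/index" "$CKPT/"   # block index
# also keep the 8 most recently written blk and rev files
ls -t "$DATADIR"/blocks/blk*.dat "$DATADIR"/blocks/rev*.dat \
    | head -n 16 | xargs -r cp -a -t "$CKPT/"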

At the beginning of 2023 I was trying to run a full node, but it didn't work out because of some problems with mergerfs. The initial sync dragged on for about 4 months. But now, with this strategy, I can complete it on the same machine in less than one week.



...
I wouldn't help yogg with anything if I were you.
(Yogg hasn't been around since he swept all the keys and coins from the coldkey wallets from the collectibles section)
And why are you necro-posting? This topic is years old now.

I searched this forum using many terms related to "speeding up initial sync", and every time it returned this thread as the only relevant one. But it lacked a satisfactory answer, so I just gave one.
hero member
Activity: 714
Merit: 1298
~


Interesting solution for IBD; I've never encountered it before.

With 16 GB of RAM in mind, what size should be assigned to the RAMdisk? And how should dbcache be set in this case: dbcache = 1/4 of the RAMdisk, or 1/4 of (RAM - RAMdisk)?

And another question: what is the gain in comparison with an M.2 NVMe SSD?
legendary
Activity: 3360
Merit: 4727
diamond-handed zealot
This is still a valid avenue of inquiry.
hero member
Activity: 1438
Merit: 513
Answering the OP: I just found a way to sync as fast as possible using a RAMdisk. It works even if you have a slow HDD or NAS. A RAMdisk is faster than an SSD but adds some complexity, like linking files.

I observed which files are written frequently during sync, and saw that it is only the blk and rev .dat files in the blocks folder.
bitcoind has the -blocksdir option to specify the blocks folder; you just need to point it at a folder on the RAMdisk.
I wrote the following script, which checks for completed blk and rev files in the ramdisk blocks folder, moves them to persistent storage, and creates a link to them, because these .dat files are used to build the chainstate after completing the initial sync (or after a successful shutdown before it).

With this strategy, the initial sync has only network speed as its bottleneck. At least for me; I get at most 10 Mb/s.

I am using a single-core notebook from 2011 with Debian 11, 16GB of RAM, and two 500GB HDDs merged using mergerfs.
mergerfs was always using CPU before; with this strategy it uses almost none, except when a .dat file is completed.
I will post the results here after finishing the initial sync.

Code:
#!/bin/bash


# first of all, create and mount a ramdisk, e.g.
# sudo mkdir -p -m 755 /mount/point/folder
# sudo mount -o size=12G -t tmpfs tmpdir /mount/point/folder
# adjust the variables below

# point only the blocks folder at the ramdisk, i.e. -blocksdir=/mount/point/folder
# use it on the first run
# run this script BEFORE starting bitcoind
# otherwise, move some of the latest blk and rev files to the ramdisk and
# symlink the remaining blk and rev files from ramfs -> hdd/ssd
# make sure your pc doesn't lose power
# I have a UPS for that
# with this setup, the sync will be limited by factors other than media speed
# shut down bitcoind with ctrl + c, then move all non-link entries to hdd/ssd

# move files starting from the BNBER-th most recent one
# BNBER=7 leaves the 6 most recent blk and 6 most recent rev files on the ramdisk
BNBER=7
ACTUALDIR=/home/alice/bcdata/BTC/blocks
RAMDISK=/home/alice/temp/BTC/blocks
set -e

while true
do
    # iterate only over regular files, excluding links and the most recent ones
    for blkfile in $(find "$RAMDISK"/blk*.dat -type f -printf "%f\n" | sort -r | tail -n +$BNBER)
    do
        mv "$RAMDISK/$blkfile" "$ACTUALDIR/$blkfile"
        ln -s "$ACTUALDIR/$blkfile" "$RAMDISK/$blkfile"
    done
    for revfile in $(find "$RAMDISK"/rev*.dat -type f -printf "%f\n" | sort -r | tail -n +$BNBER)
    do
        mv "$RAMDISK/$revfile" "$ACTUALDIR/$revfile"
        ln -s "$ACTUALDIR/$revfile" "$RAMDISK/$revfile"
    done

    sleep 10m
done




I wouldn't help yogg with anything if I were you.
(Yogg hasn't been around since he swept all the keys and coins from the coldkey wallets from the collectibles section)
And why are you necro-posting? This topic is years old now.
legendary
Activity: 1697
Merit: 1074
Answering the OP: I just found a way to sync as fast as possible using a RAMdisk. It works even if you have a slow HDD or NAS. A RAMdisk is faster than an SSD but adds some complexity, like linking files.

I observed which files are written frequently during sync, and saw that it is only the blk and rev .dat files in the blocks folder.
bitcoind has the -blocksdir option to specify the blocks folder; you just need to point it at a folder on the RAMdisk.
I wrote the following script, which checks for completed blk and rev files in the ramdisk blocks folder, moves them to persistent storage, and creates a link to them, because these .dat files are used to build the chainstate after completing the initial sync (or after a successful shutdown before it).

With this strategy, the initial sync has only network speed as its bottleneck. At least for me; I get at most 10 Mb/s.

I am using a single-core notebook from 2011 with Debian 11, 16GB of RAM, and two 500GB HDDs merged using mergerfs.
mergerfs was always using CPU before; with this strategy it uses almost none, except when a .dat file is completed.
I will post the results here after finishing the initial sync.

Code:
#!/bin/bash


# first of all, create and mount a ramdisk, e.g.
# sudo mkdir -p -m 755 /mount/point/folder
# sudo mount -o size=4G -t tmpfs tmpdir /mount/point/folder
# adjust the variables in this script

# point -blocksdir at the ramdisk
# link all files from persistent storage
# into the blocks folder on the ramdisk
# run this script in the background to periodically flush
# blk and rev files to persistent storage and link them
# make sure your pc doesn't lose power
# I have a UPS for that
# You can make a checkpoint by stopping bitcoind and making a
# copy of the chainstate and index folders and some blk and rev files
# After finishing the sync,
# just copy all the real files from the ramdisk to persistent storage

# move files starting from the BNBER-th most recent one
# BNBER=6 leaves the 5 most recent blk and 5 most recent rev files on the ramdisk
BNBER=6
ACTUALDIR=/home/alice/bcdata/BTC/blocks
RAMDISK=/home/alice/temp/BTC/blocks

mkdir -p "$RAMDISK"
while true
do
    # iterate only over regular files, not folders or links
    for blkfile in $(find "$RAMDISK"/blk*.dat -type f -printf "%f\n" | sort -r | tail -n +$BNBER)
    do
        mv "$RAMDISK/$blkfile" "$ACTUALDIR/$blkfile"
        ln -s "$ACTUALDIR/$blkfile" "$RAMDISK/$blkfile"
    done
    for revfile in $(find "$RAMDISK"/rev*.dat -type f -printf "%f\n" | sort -r | tail -n +$BNBER)
    do
        mv "$RAMDISK/$revfile" "$ACTUALDIR/$revfile"
        ln -s "$ACTUALDIR/$revfile" "$RAMDISK/$revfile"
    done
    sleep 10m
done


legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
Since you mentioned Windows 10, I wonder if you've disabled its automatic updates, since they could interfere with the benchmark. Sometimes it's quite annoying, since it takes all the bandwidth.

No automatic updates were running.

But it was fully updated / patched up to the day I started the sync, which was why I started a bit later in the day on the 13th. Patch Tuesday was the day before, and I wanted to make sure it was fully done with updates before I began.

One of the next tests is definitely going to run over June 9th, so running it again after the updates will give an interesting bit of data on how much they really hurt performance.

achow101 came in and cleaned up a bit, so I will just post the updates in this post.

-Dave



So the 2nd sync just finished to the same block as the last one. Same as before, except instead of just the 4 local nodes I let it connect to the net.
It came in about 2 hours faster. Over 5 days, that seems like a small enough difference that it does not matter.

The SSD has not arrived yet, and I am not motivated to drive to the office to add more RAM to the machine, so I am going to leave it at 8GB installed but set the dbcache to 4GB. I'm going to do it with the GUI, not even the conf file, to better imitate how a user might do it.

Let's see how it plays out.




And the results for the 4GB dbcache are in.
A bit under 3 days, so a major improvement. And since most PCs these days come with 8GB of RAM, having it at a high setting for a few days for an initial sync is probably not going to be that bad for most people.

Start:
2020-05-23T21:42:20Z UpdateTip: new best=00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048 height=1 version=0x00000001 log2_work=33.000022 tx=2 date='2009-01-09T02:54:25Z' progress=0.000000 cache=0.0MiB(1txo)

End:
2020-05-26T18:17:06Z UpdateTip: new best=00000000000000000007d614c0b374fb92737b358386cfe10318d8d99702c02d height=630881 version=0x20000000 log2_work=91.961602 tx=531173082 date='2020-05-18T23:20:04Z' progress=0.995235 cache=3079.8MiB(21238640txo)

Shut it down, added 8GB of RAM, set the dbcache to 8192, deleted the AppData\Roaming\Bitcoin folder, and started again.
Let's see how it goes.



So the 8GB dbcache with 16GB of RAM really made it faster. A bit under 24 hours start to finish.

2020-05-26T19:53:45Z UpdateTip: new best=00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048 height=1 version=0x00000001 log2_work=33.000022 tx=2 date='2009-01-09T02:54:25Z' progress=0.000000 cache=0.0MiB(1txo)

2020-05-27T19:26:51Z UpdateTip: new best=00000000000000000007d614c0b374fb92737b358386cfe10318d8d99702c02d height=630881 version=0x20000000 log2_work=91.961602 tx=531173082 date='2020-05-18T23:20:04Z' progress=0.994597 cache=4882.4MiB(31913169txo)



As of now the SSD STILL has not come in, so testing is paused for a bit...

-Dave
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
5 Days? That's not so terrible. Not exactly great, but not super terrible. Undecided

Just to confirm... that test was using Windows 10 as the OS, wasn't it? You're not using Ubuntu or some other flavour of Linux, are you? Huh

Yeah, Win 10. I am going for the "average" user experience. I figure most people are running home PCs and are going to install Core on that.
*nix people probably have more customized installs, so it's a bit tougher to get a baseline / tweak settings.

Just my view.

-Dave
HCP
legendary
Activity: 2086
Merit: 4361
5 Days? That's not so terrible. Not exactly great, but not super terrible. Undecided

Just to confirm... that test was using Windows 10 as the OS, wasn't it? You're not using Ubuntu or some other flavour of Linux, are you? Huh
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
@DaveF

I didn't mean to be disrespectful, bro; I just wanted to remind you that that is not how things work in the software world. Once you encounter a stupid situation, you need to ask, "Why should I be here, facing such a nightmare, after all?"

Sorry, if I said it in the worst way ever.

I did not take offense; it's just that, yeah, it's broken, and as of now we have a limited set of tools to fix it.

So I figured this is a good test to do with what we have. And it's not very time-intensive: I can start it and walk away. Even if I had not checked in before I went to sleep, I could have had the answer in the morning.

-Dave



 
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
@DaveF

I didn't mean to be disrespectful, bro; I just wanted to remind you that that is not how things work in the software world. Once you encounter a stupid situation, you need to ask, "Why should I be here, facing such a nightmare, after all?"

Sorry, if I said it in the worst way ever.
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
So 5 days start to finish.
Congratulations! You are the champion of dealing with stupid nightmares  Cheesy

Feel free to go to GitHub and open some issues.
Or clone it and make your own version of Core.

What I decided to do here after yogg started the thread was take a few of what I felt were some of the more common scenarios and see how long they take to sync.

AND then make 1 change to the conf file (dbcache) and see how long it takes.

Feel free to do your own testing; just remember, much like the run I just did and am re-doing for verification, when you make a change you NEED a baseline.

-Dave

legendary
Activity: 1456
Merit: 1175
Always remember the cause!
So 5 days start to finish.
Congratulations! You are the champion of dealing with stupid nightmares  Cheesy
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
If you have >4GB of RAM, consider increasing your dbcache setting.

If you store the blockchain on an HDD: stop the Bitcoin client, defragment, and start the Bitcoin client again. You will be surprised by the difference!



If you read above, that is part of the test. :-)
Start stock, then increase the dbcache, but I want to get the "I downloaded and installed this app and ran it" experience first.
But it does kind of make you wonder why they don't do some probing at install time and either create the conf file or at least suggest one with some settings.

Stay safe.

-Dave



And so the 1st test begins:

2020-05-13T22:44:18Z UpdateTip: new best=00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048 height=1 version=0x00000001 log2_work=33.000022 tx=2 date='2009-01-09T02:54:25Z' progress=0.000000 cache=0.0MiB(1txo)

-Dave

And it's done

2020-05-18T23:20:09Z UpdateTip: new best=00000000000000000007d614c0b374fb92737b358386cfe10318d8d99702c02d height=630881 version=0x20000000 log2_work=91.961602 tx=531173082 date='2020-05-18T23:20:04Z' progress=1.000000 cache=307.0MiB(2275078txo) warning='68 of last 100 blocks have unexpected version'

So 5 days start to finish.

Going to re-do this test while allowing the machine access to the internet instead of just the local machines. I want to make sure that it takes about the same amount of time. I will only time it to block 630,881 to keep everything consistent.

See you at the end of the week :-)
-Dave
member
Activity: 84
Merit: 22
If you have >4GB of RAM, consider increasing your dbcache setting.

If you store the blockchain on an HDD: stop the Bitcoin client, defragment, and start the Bitcoin client again. You will be surprised by the difference!

HCP
legendary
Activity: 2086
Merit: 4361
And so the 1st test begins:
Really going to be interesting to see how it all plays out. There are so many ideas about what has the biggest effect on speed/time when it comes to syncing...

Personally, I run my Bitcoin node on an old i5-3570k with 8 gigs of RAM, and the blocks live on an old HDD. I used to have ADSL2+ (I would get 1.5MB/sec downloads on a good day)... now I have fibre... I also recently migrated the "chainstate" folder to my SSD. I don't really have many issues aside from the looming prospect of running out of storage space Tongue

Mind you, I haven't had to do a full sync from the genesis block in quite a while! Wink
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
Is there a log entry when it finishes syncing and is on the current block?
Yeah... try searching for the first "UpdateTip:" record in the debug.log that says "progress=1.000000", after the node was started up.

Code:
2020-04-19T14:59:15Z UpdateTip: new best=000000000000000000058a121d528381db4b076ef68c30b769066f92bab1cd77 height=626706 version=0x20400000 log2_work=91.873412 tx=522195769 date='2020-04-19T14:14:27Z' progress=0.999981 cache=34.9MiB(256790txo) warning='65 of last 100 blocks have unexpected version'
2020-04-19T14:59:16Z UpdateTip: new best=0000000000000000000f07e447142f811d24c5c1784b23661ea28f2fe3ffcecc height=626707 version=0x20000000 log2_work=91.873432 tx=522197017 date='2020-04-19T14:15:40Z' progress=0.999981 cache=35.6MiB(262972txo) warning='64 of last 100 blocks have unexpected version'
2020-04-19T14:59:18Z UpdateTip: new best=00000000000000000005c9bb64ec93eaf645a027ee0c3e2c5e0653a374a5c782 height=626708 version=0x20000000 log2_work=91.873452 tx=522197418 date='2020-04-19T14:16:19Z' progress=0.999981 cache=36.3MiB(268132txo) warning='64 of last 100 blocks have unexpected version'
2020-04-19T14:59:19Z UpdateTip: new best=0000000000000000000ffd6e3f9cd137309333609a60045e0a27e18a2ad38947 height=626709 version=0x20000000 log2_work=91.873472 tx=522200472 date='2020-04-19T14:49:32Z' progress=0.999996 cache=37.2MiB(275481txo) warning='63 of last 100 blocks have unexpected version'
2020-04-19T15:00:46Z UpdateTip: new best=0000000000000000000ec003631af9681a550b0fa08cfbff8aef4832e51a63e4 height=626710 version=0x27ffe000 log2_work=91.873492 tx=522203095 date='2020-04-19T15:00:22Z' progress=1.000000 cache=38.0MiB(282765txo) warning='63 of last 100 blocks have unexpected version'

You can see that it was "fully synced" at 2020-04-19T15:00:46Z

Excellent. The spinning drive is installed and the OS is updating now; both the OS and Core will be on this same drive for the 1st test.

There are going to be 4 nodes on a gigabit switch that it will connect to.
1 is a RPi 4 running mynode https://mynodebtc.com/
1 is a RPi 4 running raspiblitz https://raspiblitz.com/
1 is a Core i7 7700k with a stupid amount of ram with everything on an SSD
1 is a Core i5 8600t with 16GB of ram and everything on an SSD

I figure the 4 of those should be able to spit out blocks as fast as the new PC can take them.

The test PC is an i5-9400 with 8GB of RAM to start for this 1st test.
I figure it's a good baseline: not the newest & fastest, but something you can walk into your local MicroCenter / local PC store and buy for a reasonable amount ($449), not counting the fast spinning drive.

Will post the numbers when done.

Stay safe.

-Dave



And so the 1st test begins:

2020-05-13T22:44:18Z UpdateTip: new best=00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048 height=1 version=0x00000001 log2_work=33.000022 tx=2 date='2009-01-09T02:54:25Z' progress=0.000000 cache=0.0MiB(1txo)

-Dave
HCP
legendary
Activity: 2086
Merit: 4361
Is there a log entry when it finishes syncing and is on the current block?
Yeah... try searching for the first "UpdateTip:" record in the debug.log that says "progress=1.000000", after the node was started up.

Code:
2020-04-19T14:59:15Z UpdateTip: new best=000000000000000000058a121d528381db4b076ef68c30b769066f92bab1cd77 height=626706 version=0x20400000 log2_work=91.873412 tx=522195769 date='2020-04-19T14:14:27Z' progress=0.999981 cache=34.9MiB(256790txo) warning='65 of last 100 blocks have unexpected version'
2020-04-19T14:59:16Z UpdateTip: new best=0000000000000000000f07e447142f811d24c5c1784b23661ea28f2fe3ffcecc height=626707 version=0x20000000 log2_work=91.873432 tx=522197017 date='2020-04-19T14:15:40Z' progress=0.999981 cache=35.6MiB(262972txo) warning='64 of last 100 blocks have unexpected version'
2020-04-19T14:59:18Z UpdateTip: new best=00000000000000000005c9bb64ec93eaf645a027ee0c3e2c5e0653a374a5c782 height=626708 version=0x20000000 log2_work=91.873452 tx=522197418 date='2020-04-19T14:16:19Z' progress=0.999981 cache=36.3MiB(268132txo) warning='64 of last 100 blocks have unexpected version'
2020-04-19T14:59:19Z UpdateTip: new best=0000000000000000000ffd6e3f9cd137309333609a60045e0a27e18a2ad38947 height=626709 version=0x20000000 log2_work=91.873472 tx=522200472 date='2020-04-19T14:49:32Z' progress=0.999996 cache=37.2MiB(275481txo) warning='63 of last 100 blocks have unexpected version'
2020-04-19T15:00:46Z UpdateTip: new best=0000000000000000000ec003631af9681a550b0fa08cfbff8aef4832e51a63e4 height=626710 version=0x27ffe000 log2_work=91.873492 tx=522203095 date='2020-04-19T15:00:22Z' progress=1.000000 cache=38.0MiB(282765txo) warning='63 of last 100 blocks have unexpected version'

You can see that it was "fully synced" at 2020-04-19T15:00:46Z
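
If you'd rather not scroll, a one-liner along these lines finds that record (a sketch; the debug.log location depends on your datadir):

Code:
# print the first UpdateTip line that reports progress=1.000000
grep -m 1 'UpdateTip.*progress=1.000000' ~/.bitcoin/debug.log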
copper member
Activity: 1666
Merit: 1901
Amazon Prime Member #7
Bonehead question.
Is there a log entry when it finishes syncing and is on the current block?
You would have to write a script that makes RPC calls at various time intervals to get the best block according to your local blockchain.

You probably won't have to sync the entire blockchain to measure performance; you would just need to measure how many blocks each implementation can verify and store per time interval.

You will just need to make sure you have captured periods of blockchain congestion in your test.
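
A minimal sketch of such a polling script, assuming bitcoin-cli is on the PATH and reads its connection settings from bitcoin.conf:

Code:
#!/bin/bash
# Log the node's best block height once a minute (illustrative sketch).
while true
do
    echo "$(date -u +%FT%TZ) height=$(bitcoin-cli getblockcount)"
    sleep 60
done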
F2b
hero member
Activity: 2135
Merit: 926
Bonehead question.
Is there a log entry when it finishes syncing and is on the current block?

One of the drives should be in later today, and I just realized that, since I am not in the office on a regular basis anymore, I might not know it's done until hours (a day or more?) after it finished.

-Dave
I don't think so.
However, you can maybe look at the 'progress' variable (I don't know if it's really precise: I just looked at my testnet node's log, and it looks like the last ~30 synced blocks all have progress=1.000000 even though the syncing wasn't completely finished; not sure, though).
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
Bonehead question.
Is there a log entry when it finishes syncing and is on the current block?

One of the drives should be in later today, and I just realized that, since I am not in the office on a regular basis anymore, I might not know it's done until hours (a day or more?) after it finished.

-Dave
legendary
Activity: 3360
Merit: 4727
diamond-handed zealot
Really?

post deleted?

I put it here because I had an INTEREST in the results and I wanted to be reminded to find my way back here.

Do I have to waste EVERYONE'S time by writing a goddamned essay about what a great idea it is to gather comparative empirical data with hardware variations to test the effect on blockchain sync times and that, yes, I would like to know the results of said experiment in order for my post to be deemed "on topic"?

Actually, I feel that "subbed for Dave's results" was a pretty efficient way of conveying that...ffs  Roll Eyes
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
So I am going to start some testing soon. *

*As in as soon as the drives I ordered come in.
Went to the office after spending all day in the field and found that the only Samsung / Crucial SSDs we have around were 256GB, and the 1TB ones are all off-brand.
The spinning ones were all for DVRs / NAS, so nothing that most people would have in their machines.

Ordered a WD and a Samsung.
Will begin testing when they show up.

Stay safe.

-Dave
staff
Activity: 4284
Merit: 8808
Is there a description of how a node pulls data from other nodes? Just did a quick look and could not find it.
Is it more like BitTorrent, where it pulls from all the nodes it can see and gets a bit from each, as much as they want to give, or does it have a bit of logic and pull from local nodes on the same subnet first?
It will pull the history from all nodes that it's connected out to, mostly as fast as they'll give it, up to a 1000-block reordering window. Peers that stall the process get disconnected.  You can add a connection to a local host and it will speed things up for you-- but it won't go attempting to connect to something local on its own.
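
For example, pinning a LAN peer can be done with an addnode entry (a sketch; the address is illustrative):

Code:
# bitcoin.conf - keep a connection to a local node during IBD
addnode=192.168.1.50:8333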
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange

It would be better if you don't waste the bandwidth of public full nodes. You could set up local full nodes that:
1. Have a better specification than the testing system
2. Are located on the same local network as the testing device, and configure the testing system to only connect to the local full nodes

I could do that; I have a bunch of nodes here & at home.

Is there a description of how a node pulls data from other nodes? Just did a quick look and could not find it.
Is it more like BitTorrent, where it pulls from all the nodes it can see and gets a bit from each, as much as they want to give, or does it have a bit of logic and pull from local nodes on the same subnet first?

I have faster machines at home but then it's going through the internet.

So I could list only the local ones and that would be fine, but I figured having some data come across the wire from outside the network would also be a fair test.

-Dave
legendary
Activity: 2464
Merit: 3158
Would anyone be interested in me running a test like this:

Worth doing or just a waste of time?

That's a good idea!

Maybe the tests you listed are a bit extensive, but what I am interested in seeing are these variants.
It would make 4 syncs in total.

8GB RAM OS & blockchain on same spinning drive
8GB RAM OS & blockchain on same SSD

These use cases are the most common, imo.

Then duplicate the same 5 tests with 32 GB RAM and dbcache set to 16GB.

Yes, those would be the two other variants interesting to see.
On my desktop (with an overkill 32GB of RAM; I had those 4x8GB lying around ... Roll Eyes ) it took roughly 24hr to sync.
I set dbcache to 24GB, but Bitcoin ended up gulping only half of that. My datadir is on an HDD.
The block count went down super fast as the memory use went up.
At some points, it completely stopped for hours. Even the countdown in the Qt wallet was not updating and was stuck for longer than displayed.
I was also using txindex, and the HDD I/O activity showed that Bitcoin was processing the blocks it had downloaded.

The trial with SSD + extended RAM should be faster than ~20hr.

Also, another parameter is your connection speed.
I'm lucky to have a fast one.
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
Would anyone be interested in me running a test like this:


Base system stays the same:

8GB RAM OS & blockchain on same spinning drive
8GB RAM OS on one spinning drive blockchain on another spinning drive
8GB RAM OS on one spinning drive blockchain on an SSD
8GB RAM OS & blockchain on same SSD
8GB RAM OS on one SSD blockchain on another SSD

Then duplicate the same 5 tests with 32 GB RAM and dbcache set to 16GB.
All clean sync from 0 to current day.

Same PC, just swapping drives & adding RAM and the 1 conf change.

The downside would be that I would have to run them sequentially, so the 1st run would probably have about 3+ weeks fewer blocks than the 10th run.
There would also be some bandwidth variations, but I am sitting on a multi-gigabit fiber run, so probably not that much.

Worth doing or just a waste of time?

-Dave

staff
Activity: 4284
Merit: 8808
As for the topic, you can suggest setting "assumevalid=(latest block's hash)" for devices with a slow processor, like an RPi.
For your specs, there's not much of a difference.
Please don't suggest that.

If you just want to blindly trust some host on the internet that you got a block hash from, just use them as your node remotely.

Skipping an extra month of validation or whatever doesn't make a big difference in initial sync time.

Plus, if you set the value to too recent a block, you'll just cause it to disable assumevalid entirely and your sync will be slower.

Verifying scripts is just one part of validation; there are still a lot of CPU resources spent on the rest.
legendary
Activity: 2618
Merit: 6452
Self-proclaimed Genius
If the CPU validating each block individually is the bottleneck, then an SSD won't do much.
This is where 'assumevalid' comes into play.
Your CPU isn't actually verifying the signatures in all blocks unless you've changed it from the default (block height 453354) to '0'.
That's why syncing is faster at the start (aside from the other reason: most of the early blocks aren't full).

As for the topic, you can suggest setting "assumevalid=(latest block's hash)" for devices with a slow processor, like an RPi.
For your specs, there's not much of a difference.
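
For reference, the syntax of that knob in bitcoin.conf; the special value 0 disables the assumption entirely and forces full script verification of every historical block:

Code:
# bitcoin.conf - disable assumevalid and verify all scripts (slower sync)
assumevalid=0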
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
SSD doesn't really make much of a difference so long as the dbcache is big enough (8GB - 10GB or so).

With a big dbcache, validation is a write-only process on the disk. Smiley

Not that SSDs aren't a lot better in general.  But if your choice is an anaemic 500GB SSD that, with Bitcoin and your other usage, will be out of space in a year, or a 10TB 7200RPM HDD, well... slow is better than not running.

We could probably go around and around on this a lot based on system configuration and other factors.

If you have a *good* 7200 RPM drive AND enough spare RAM for an 8GB cache AND the blockchain is on a separate drive from the OS / other apps, then yeah.
On the other hand, if you are on lower-end equipment and need to upgrade a bit: the difference between a 512GB SSD and a 1TB SSD is ~$50, while the cost of going from 8GB to 16GB of RAM is about $40.
On a lower-end system you are going to get more overall performance from the SSD than from the extra 8GB of RAM, for only $10 more.

Obviously there are 1000s of different ways things can be done and seen, which ties back to this thread.

Other speed improvements for syncing (and overall system performance) can at times be obtained with other system tweaks.

Things to check:
Are your system / chipset drivers up to date?
Do you have the latest drivers for your drive controller or is it the one that came with your Dell 18 months ago?
Same with drive firmware. Sometimes it fixes bugs, other times it improves performance. Other times it lowers performance but increases security.
etc.

There are countless gaming sites that spend hours showing you how to get every last bit of performance out of your system (NOT counting the graphics-card tweaks), and much of that can be applied here.

So yeah, my SSD suggestion was a bit off the cuff, but for the initial sync it's probably neck and neck in performance between that and more RAM.
Or... do both.

yogg has his specific system and can do things differently than others, so that's one thing, but looking at all configurations and what would improve them is a bit more tricky.

Once fully synced, it works very well on an HDD. Actually, I will put some old HDDs to use and keep backups of the blockchain, just in case I jam it again with my crazy experimentation. Tongue

Just run a cron job once a day to copy it out, as sketched below. Since I have so many VMs and RPi units running here, I constantly detonate my stuff while testing.
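
Something like this crontab entry would do it (an illustrative sketch; the paths are made up, and ideally the copy runs while bitcoind is stopped or right after a clean shutdown, so you don't snapshot an inconsistent chainstate):

Code:
# crontab: mirror the Bitcoin datadir to a backup disk at 4am daily
0 4 * * * rsync -a --delete /home/dave/.bitcoin/ /mnt/backup/bitcoin/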


Stay safe.

-Dave
legendary
Activity: 2464
Merit: 3158
For the use case of syncing the blockchain, an SSD is faster but will wear out sooner, depending on the number of write cycles it's rated for (drive writes per day).

If you're going to use an SSD for this, get one with SLC NAND, because it has several times more write cycles than the newer NAND types. It'll be more expensive because SLC has fewer bits per cell, but you won't be limited by the speed of an HDD.

Yep, I imagine it's faster for the part where the dbcache gets written to hard storage, but what would be the bottleneck in that case?
Assuming a 12GB dbcache, is it the CPU processing speed that prevents it from syncing faster, or the disk I/O bandwidth?

If the CPU validating each block individually is the bottleneck, then an SSD won't do much.



Once fully synced, it works very well on an HDD. Actually, I will put some old HDDs to use and keep backups of the blockchain, just in case I jam it again with my crazy experimentation. Tongue
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
For the use case of syncing the blockchain, an SSD is faster but will wear out sooner, depending on the number of write cycles it's rated for (drive writes per day).

If you're going to use an SSD for this, get one with SLC NAND, because it has several times more write cycles than the newer NAND types. It'll be more expensive because SLC has fewer bits per cell, but you won't be limited by the speed of an HDD.
staff
Activity: 4284
Merit: 8808
SSD doesn't really make much of a difference so long as the dbcache is big enough (8GB - 10GB or so).

With a big dbcache, validation is a write-only process on the disk. Smiley

Not that SSDs aren't a lot better in general.  But if your choice is an anaemic 500GB SSD that, with Bitcoin and your other usage, will be out of space in a year, or a 10TB 7200RPM HDD, well... slow is better than not running.
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
Let me put something in bold, large, red type:

USE AN SSD

There is a lot of disk I/O going on, and having an SSD for the blockchain will save you a ton of time.*

*Use a good SSD; somewhere in one of my other posts I noted that a cheap generic SSD was slower than a 7200RPM spinning drive, but even a halfway decent SSD is better.

Stay safe.

-Dave
legendary
Activity: 2464
Merit: 3158
Hey there,

Lately I jammed up the blockchain of my Bitcoin Core full node.
Don't ask why; I am not sure what happened. It was txindexed, etc ...

When I tried to open it again, there was something wrong with the index, so I went on with the provided instructions.
At some point I realized that the folder with the data weighed a mere few hundred MBs: all the blockchain data was gone.
I have good practices when it comes to handling crypto funds, so they were not at risk, even in the case of total deletion of that Bitcoin folder.

I have another full node on a server, but I can't access its HDD.
I started zipping the blockchain directory on my server, to download it afterwards, but that is also very time-consuming.



I need a full node to work with.
Syncing can take days, if not weeks.
Luckily, based on my trials, there are a couple of options / settings / things you can do to improve the synchronization speed.

1) Don't put your "bitcoin folder" on an external drive.
The input/output speed of the external device is a bottleneck, and BTC will take ages to sync that way.
I use an HDD rack to easily swap drives when needed. I keep this kind of data (blockchains) on drives I swap in as needed.
Despite the rack receiver being wired to my motherboard with a SATA cable, when I used it as the data directory for Bitcoin Core, the synchronization speed was insanely slow.
I ended up using an HDD that is wired directly to my motherboard with SATA, cutting out every intermediate interface.

2) Have lots of RAM.
I'm lucky to have 32GB of RAM on my desktop computer.
There is an option in bitcoin.conf called dbcache.
By default, it limits Bitcoin Core's RAM usage and leaves enough for the rest of your processes.
I put 24GB as the value here; I am only syncing up on that computer for now.

While looking at the resource manager, I noticed that the Bitcoin Core process took up to 12GB of RAM.
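
For example (an illustrative entry; note that dbcache takes its value in MiB, so 24GB is roughly 24576):

Code:
# bitcoin.conf - illustrative large dbcache for initial sync
dbcache=24576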

3) Have a fast connection.
I'm lucky: I have a 1Gb fiber connection.
My RAM was filling up with raw block data, and then Bitcoin Core processed it.



In the end, it took ~20hr to fully sync from scratch, on a Windows machine.
It was indicating an average speed of 12% progress per hour, which works out to ~8hr for the block sync itself.
As I need txindex, the data displayed in Bitcoin Core stopped changing while the index files were being created by the process: it hadn't crashed, even though the displayed info stood still.

A few observations:
- Launch the initial sync and let it run all the way through.
I noticed that if you close Bitcoin Core and launch it later to keep syncing, the speed drops drastically.

- Sometimes it seems like the process crashed because nothing is happening.
As long as it hasn't actually given an error saying it crashed, your Bitcoin Core client might simply be writing data to the disk, which doesn't update the displayed data.

- Weirdly, in my case syncing was faster on Windows than on Linux.  Shocked



Thanks for reading.
If you have any other advice, please share it. Smiley