
Topic: How to initially sync Bitcoin as fast as possible (Read 1023 times)

hero member
Activity: 714
Merit: 1298
The sync from scratch was going well with about 15% a day,


Thanks, "15% a day" speaks volumes.

My experience shows that IBD on a machine with 16 GB RAM and an external SSD (Samsung 870 EVO) goes much faster. Four weeks ago it took me less than 36 hours to download the full blockchain from scratch. A properly tuned bitcoin.conf (dbcache = 1/4 of RAM and blocksonly=1) speeds up IBD.
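For reference, such a config might look like this (the values are illustrative for a ~16 GB machine; dbcache is specified in MiB):

```
# bitcoin.conf sketch for IBD on a ~16 GB machine (values are examples)
dbcache=4096      # roughly 1/4 of RAM, in MiB
blocksonly=1      # don't relay loose transactions during IBD
```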
legendary
Activity: 1700
Merit: 1075
~


Interesting solution for IBD; I've never encountered it before.

With 16 GB of RAM in mind, what size should be assigned to the RAMdisk? And how should dbcache be set in this case: dbcache = 1/4 of the RAMdisk, or 1/4 of (RAM - RAMdisk)?

And another question: what is the gain compared with an M.2 NVMe SSD?

I assigned 10GB for dbcache and 4GB for the RAMdisk. A big RAMdisk is not needed, because it will store only a few blk and rev files; the rest will be links pointing to those files on persistent storage.
There are many accesses to the latest blk and rev files during synchronization.
The writes to chainstate are made periodically, or when you interrupt the sync, so that work is amortized by the download activity.
I said "gain" because RAM is generally faster than any SSD. But it depends on the machine you are using, too. I do not have machines with an M.2 connector to test.

I just synced the blockchain with this setup. I had to reindex from 49% because the system crashed, since I am using 2x 500GB HDDs in a docking station connected through USB to a notebook. The sync from scratch was going well at about 15% a day, but the reindex process was really slow, because I had put only the index folder and chainstate on the RAMdisk, and that did not help much.
The script needs some adjustments, such as removing set -e; it is not needed, since if the script exits, the RAMdisk will fill up and bitcoind will stop after that.

So instead of reindexing, downloading from scratch is much faster, and you can make checkpoints by interrupting the program at 30%, for example, and saving the chainstate and index folders and some of the recent blk and rev files, to put back in case any critical event occurs further on in the process.

At the beginning of 2023 I tried to run a full node, but it didn't work because of some problems with mergerfs. The initial sync lasted about 4 months. But now, with this strategy, I can complete it on the same machine in less than one week.
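A minimal sketch of that checkpoint step (the paths here are illustrative, with mktemp standing in for a real datadir; you would snapshot only after cleanly stopping bitcoind):

```shell
# Illustrative checkpoint: snapshot chainstate and the block index
# after a clean bitcoind shutdown. mktemp stands in for real paths,
# e.g. DATADIR=/home/alice/bcdata/BTC in the setup described above.
DATADIR=$(mktemp -d)
mkdir -p "$DATADIR/chainstate" "$DATADIR/blocks/index"
CKPT=$(mktemp -d)/checkpoint
mkdir -p "$CKPT"
cp -a "$DATADIR/chainstate" "$CKPT/"
cp -a "$DATADIR/blocks/index" "$CKPT/"
```

Restoring is the reverse: copy the saved folders back before restarting bitcoind.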



...
I wouldn't help yogg with anything if I were you.
(Yogg hasn't been around since he swept all the keys and coins from the coldkey wallets from the collectibles section)
And why are you necro-posting? This topic is years old now.

I searched this forum using many terms like "speeding up initial sync", and every time it returned this thread as the only relevant one. But it lacked a satisfactory answer, so I just gave one.
hero member
Activity: 714
Merit: 1298
~


Interesting solution for IBD; I've never encountered it before.

With 16 GB of RAM in mind, what size should be assigned to the RAMdisk? And how should dbcache be set in this case: dbcache = 1/4 of the RAMdisk, or 1/4 of (RAM - RAMdisk)?

And another question: what is the gain compared with an M.2 NVMe SSD?
legendary
Activity: 3374
Merit: 4738
diamond-handed zealot
This is still a valid avenue of inquiry.
hero member
Activity: 1439
Merit: 513
Answering the OP: I just found a way to sync as fast as possible using a RAMdisk. It will work even if you have a slow HDD or NAS. A RAMdisk is faster than an SSD, but requires some complexity, like linking files.

I observed which files are written frequently during sync, and I saw that it is only the blk and rev .dat files in the blocks folder.
bitcoind has the -blocksdir option to specify the blocks folder; you just need to point it to a folder on the RAMdisk.
I wrote the following script, which checks for completed blk and rev files in the ramdisk blocks folder, moves them to persistent storage, and creates links to them, because these .dat files are used to build the chainstate after completing the initial sync (or after a clean shutdown before it).

With this strategy, the initial sync will have only network speed as the bottleneck. At least for me; I get at most 10 Mb/s.

I am using a single-core notebook from 2011 with Debian 11, 16GB of RAM, and two 500GB HDDs merged using mergerfs.
mergerfs was always using CPU before; with this strategy it uses almost none, only when a .dat file is completed.
I will post the results here after finishing the initial sync.

Code:
#!/bin/bash


# first of all, create and mount a ramdisk, e.g.
# sudo mkdir -p -m 755 /mount/point/folder
# sudo chown alice /mount/point/folder
# sudo mount -o size=12G -t tmpfs tmpdir /mount/point/folder
# adjust the variables below

# point only the blocks folder to the ramdisk, i.e. -blocksdir=/mount/point/folder
# use it on the first run
# run this script BEFORE starting bitcoind
# otherwise, move some of the last blk and rev files to the ramdisk
# symlink all blk and rev files from ramfs -> hdd/ssd
# make sure your pc doesn't run out of power
# I have a UPS for that
# with this setup, the sync will be limited by factors other than media speed
# shut down bitcoind with ctrl + c, and move all non-link elements to hdd/ssd

# move the blk/rev files starting from the BNBER-th newest
# with BNBER=7, the 6 newest blk and 6 newest rev files stay on the ramdisk
BNBER=7
ACTUALDIR=/home/alice/bcdata/BTC/blocks
RAMDISK=/home/alice/temp/BTC/blocks
set -e

while true
do
    # iterate only over regular files, excluding links and the newest ones
    for blkfile in $(find "$RAMDISK"/blk*.dat -type f -printf "%f\n" | sort -r | tail -n +$BNBER)
    do
        mv "$RAMDISK/$blkfile" "$ACTUALDIR/$blkfile"
        ln -s "$ACTUALDIR/$blkfile" "$RAMDISK/$blkfile"
    done
    for blkfile in $(find "$RAMDISK"/rev*.dat -type f -printf "%f\n" | sort -r | tail -n +$BNBER)
    do
        mv "$RAMDISK/$blkfile" "$ACTUALDIR/$blkfile"
        ln -s "$ACTUALDIR/$blkfile" "$RAMDISK/$blkfile"
    done

    sleep 10m
done




I wouldn't help yogg with anything if I were you.
(Yogg hasn't been around since he swept all the keys and coins from the coldkey wallets from the collectibles section)
And why are you necro-posting? This topic is years old now.
legendary
Activity: 1700
Merit: 1075
Answering the OP: I just found a way to sync as fast as possible using a RAMdisk. It will work even if you have a slow HDD or NAS. A RAMdisk is faster than an SSD, but requires some complexity, like linking files.

I observed which files are written frequently during sync, and I saw that it is only the blk and rev .dat files in the blocks folder.
bitcoind has the -blocksdir option to specify the blocks folder; you just need to point it to a folder on the RAMdisk.
I wrote the following script, which checks for completed blk and rev files in the ramdisk blocks folder, moves them to persistent storage, and creates links to them, because these .dat files are used to build the chainstate after completing the initial sync (or after a clean shutdown before it).

With this strategy, the initial sync will have only network speed as the bottleneck. At least for me; I get at most 10 Mb/s.

I am using a single-core notebook from 2011 with Debian 11, 16GB of RAM, and two 500GB HDDs merged using mergerfs.
mergerfs was always using CPU before; with this strategy it uses almost none, only when a .dat file is completed.
I will post the results here after finishing the initial sync.

Code:
#!/bin/bash


# first of all, create and mount a ramdisk, e.g.
# sudo mkdir -p -m 755 /mount/point/folder
# sudo mount -o size=4G -t tmpfs tmpdir /mount/point/folder
# adjust the variables in this script

# point -blocksdir to the ramdisk
# link all files from persistent storage
# into the blocks folder on the ramdisk
# run this script in the background to periodically flush
# blk and rev files to persistent storage and link them
# make sure your pc doesn't run out of power
# I have a UPS for that
# You can make a checkpoint by stopping bitcoind and making a
# copy of the chainstate and index folders and some blk and rev files
# After finishing the sync,
# just copy all the real files from the ramdisk to persistent storage

# move the blk/rev files starting from the BNBER-th newest
# with BNBER=6, the 5 newest blk and 5 newest rev files stay on the ramdisk
BNBER=6
ACTUALDIR=/home/alice/bcdata/BTC/blocks
RAMDISK=/home/alice/temp/BTC/blocks

mkdir -p $RAMDISK
while true
do
    # iterate only over regular files, not folders nor links
    for blkfile in $(find "$RAMDISK"/blk*.dat -type f -printf "%f\n" | sort -r | tail -n +$BNBER)
    do
        mv "$RAMDISK/$blkfile" "$ACTUALDIR/$blkfile"
        ln -s "$ACTUALDIR/$blkfile" "$RAMDISK/$blkfile"
    done
    for blkfile in $(find "$RAMDISK"/rev*.dat -type f -printf "%f\n" | sort -r | tail -n +$BNBER)
    do
        mv "$RAMDISK/$blkfile" "$ACTUALDIR/$blkfile"
        ln -s "$ACTUALDIR/$blkfile" "$RAMDISK/$blkfile"
    done
    sleep 10m
done


legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
Since you mentioned Windows 10, I wonder if you've disabled its automatic update, since it could interfere with the benchmark. Sometimes it's quite annoying since it takes all the bandwidth.

No automatic updates were running.

But it was fully updated/patched up to the day I started the sync, which was why I started a bit later in the day on the 13th. Patch Tuesday was the day before, and I wanted to make sure it was fully done with updates before I began.

One of the next tests is definitely going to run over June 9th, so running it again after the updates will give an interesting bit of data on how much they really kill performance.

achow101 came in and cleaned up a bit; I will just do the updates in this post.

-Dave



So the 2nd sync just finished, to the same block as the last one. Same as before, except that instead of just the 4 local nodes I let it connect to the net.
It came out about 2 hours less. Over 5 days, that seems like a small enough difference that it does not matter.

The SSD has not arrived yet, and I am not motivated to drive to the office to add more RAM to the machine, so I am going to leave it at 8GB installed, but I will set the dbcache to 4GB. Going to do it with the GUI, not even the conf file, to better imitate how a user might do it.

Let's see how it plays out.




And the results for the 4GB db cache are in.
A bit under 3 days, so a major improvement. And since most PCs these days come with 8GB of RAM, having it set that high for a few days for an initial sync is probably not going to be that bad for most people.

Start:
2020-05-23T21:42:20Z UpdateTip: new best=00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048 height=1 version=0x00000001 log2_work=33.000022 tx=2 date='2009-01-09T02:54:25Z' progress=0.000000 cache=0.0MiB(1txo)

End:
2020-05-26T18:17:06Z UpdateTip: new best=00000000000000000007d614c0b374fb92737b358386cfe10318d8d99702c02d height=630881 version=0x20000000 log2_work=91.961602 tx=531173082 date='2020-05-18T23:20:04Z' progress=0.995235 cache=3079.8MiB(21238640txo)
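
The elapsed time between those two UpdateTip timestamps can be checked directly (GNU date assumed):

```shell
# Elapsed wall-clock time between the start and end UpdateTip lines above
start=$(date -u -d '2020-05-23T21:42:20Z' +%s)
end=$(date -u -d '2020-05-26T18:17:06Z' +%s)
echo "$(( (end - start) / 3600 )) hours"   # → 68 hours, i.e. a bit under 3 days
```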

Shut it down, added 8GB of RAM, set the dbcache to 8192, deleted the AppData\Roaming\Bitcoin folder, and started again.
Let's see how it goes.



So the 8GB dbcache with 16GB of RAM really made it faster: a bit under 24 hours start to finish.

2020-05-26T19:53:45Z UpdateTip: new best=00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048 height=1 version=0x00000001 log2_work=33.000022 tx=2 date='2009-01-09T02:54:25Z' progress=0.000000 cache=0.0MiB(1txo)

2020-05-27T19:26:51Z UpdateTip: new best=00000000000000000007d614c0b374fb92737b358386cfe10318d8d99702c02d height=630881 version=0x20000000 log2_work=91.961602 tx=531173082 date='2020-05-18T23:20:04Z' progress=0.994597 cache=4882.4MiB(31913169txo)



As of now the SSD STILL has not come in, so testing is paused for a bit.....

-Dave
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
5 Days? That's not so terrible. Not exactly great, but not super terrible. Undecided

Just to confirm... that test was using Windows 10 as the OS, wasn't it? You're not using Ubuntu or some other flavour of Linux, are you? Huh

Yeah, Win 10. I am going for the "average" user experience. I figure most people are running home PCs and are going to install Core on that.
*nix people are probably going to have more customized installs, so it's a bit tougher to get a baseline / tweak settings.

Just my view.

-Dave
HCP
legendary
Activity: 2086
Merit: 4363
5 Days? That's not so terrible. Not exactly great, but not super terrible. Undecided

Just to confirm... that test was using Windows 10 as the OS, wasn't it? You're not using Ubuntu or some other flavour of Linux, are you? Huh
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
@DaveF

I didn't mean to be disrespectful, bro; I just wanted to remind you that that is not how things work in the software world. Once you encounter a stupid situation, you need to ask: "Why should I be here, facing such a nightmare, after all?"

Sorry, if I said it in the worst way ever.

I did not take offense; it's just that, yeah, it's broken, and as of now we have a limited set of tools to fix it.

So I figured this is a good test to do with what we have. And it's not very time-intensive: I can start it and walk away. Even if I had not checked in before I went to sleep, I could have had the answer in the morning.

-Dave



 
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
@DaveF

I didn't mean to be disrespectful, bro; I just wanted to remind you that that is not how things work in the software world. Once you encounter a stupid situation, you need to ask: "Why should I be here, facing such a nightmare, after all?"

Sorry, if I said it in the worst way ever.
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
So 5 days start to finish.
Congratulations! You are the champion of dealing with stupid nightmares  Cheesy

Feel free to go to GitHub and open some issues.
Or clone it and make your own version of Core.

What I decided to do here, after yogg started the thread, was take a few of what I felt were the more common scenarios and see how long they take to sync.

AND then make 1 change to the conf file (dbcache) and see how long it takes.

Feel free to do your own testing; just remember, much like the run I just did and am re-doing for verification, when you make a change you NEED a baseline.

-Dave

legendary
Activity: 1456
Merit: 1175
Always remember the cause!
So 5 days start to finish.
Congratulations! You are the champion of dealing with stupid nightmares  Cheesy
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
If you have >4GB of RAM, consider increasing your dbcache setting.

If you store the blockchain on an HDD: stop the Bitcoin client, defragment, then start the Bitcoin client again. You will be surprised by the difference!



If you read above, that is part of the test. :-)
Start stock, then increase the dbcache; but I want to get the "I downloaded and installed this app and ran it" experience first.
But it does make you wonder why they don't do some probing on install and either create the conf file or at least suggest one with some settings.

Stay safe.

-Dave



And so the 1st test begins:

2020-05-13T22:44:18Z UpdateTip: new best=00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048 height=1 version=0x00000001 log2_work=33.000022 tx=2 date='2009-01-09T02:54:25Z' progress=0.000000 cache=0.0MiB(1txo)

-Dave

And it's done

2020-05-18T23:20:09Z UpdateTip: new best=00000000000000000007d614c0b374fb92737b358386cfe10318d8d99702c02d height=630881 version=0x20000000 log2_work=91.961602 tx=531173082 date='2020-05-18T23:20:04Z' progress=1.000000 cache=307.0MiB(2275078txo) warning='68 of last 100 blocks have unexpected version'

So 5 days start to finish.

Going to re-do this test while allowing the machine access to the internet instead of just the local machines. Want to make sure that it takes about the same amount of time. Will only time it to block 630,881 to keep everything consistent.

See you the end of the week :-)
-Dave
member
Activity: 84
Merit: 22
If you have >4GB of RAM, consider increasing your dbcache setting.

If you store the blockchain on an HDD: stop the Bitcoin client, defragment, then start the Bitcoin client again. You will be surprised by the difference!

HCP
legendary
Activity: 2086
Merit: 4363
And so the 1st test begins:
Really going to be interesting to see how it all plays out. There are so many ideas about what has the biggest effect on speed/time when it comes to syncing...

Personally, I run my Bitcoin node on an old i5-3570k with 8 gigs of RAM, and the blocks live on an old HDD. I used to have ADSL2+ (would get 1.5MB/sec downloads on a good day)... now I have fibre... I also recently migrated the "chainstate" folder to my SSD. I don't really have many issues, aside from the looming prospect of running out of storage space Tongue

Mind you, I haven't had to do a full sync from the genesis block in quite a while! Wink
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
Is there a log entry when it finishes syncing and is on the current block?
Yeah... try searching for the first "UpdateTip:" record in the debug.log that says "progress=1.000000", after the node was started up.

Code:
2020-04-19T14:59:15Z UpdateTip: new best=000000000000000000058a121d528381db4b076ef68c30b769066f92bab1cd77 height=626706 version=0x20400000 log2_work=91.873412 tx=522195769 date='2020-04-19T14:14:27Z' progress=0.999981 cache=34.9MiB(256790txo) warning='65 of last 100 blocks have unexpected version'
2020-04-19T14:59:16Z UpdateTip: new best=0000000000000000000f07e447142f811d24c5c1784b23661ea28f2fe3ffcecc height=626707 version=0x20000000 log2_work=91.873432 tx=522197017 date='2020-04-19T14:15:40Z' progress=0.999981 cache=35.6MiB(262972txo) warning='64 of last 100 blocks have unexpected version'
2020-04-19T14:59:18Z UpdateTip: new best=00000000000000000005c9bb64ec93eaf645a027ee0c3e2c5e0653a374a5c782 height=626708 version=0x20000000 log2_work=91.873452 tx=522197418 date='2020-04-19T14:16:19Z' progress=0.999981 cache=36.3MiB(268132txo) warning='64 of last 100 blocks have unexpected version'
2020-04-19T14:59:19Z UpdateTip: new best=0000000000000000000ffd6e3f9cd137309333609a60045e0a27e18a2ad38947 height=626709 version=0x20000000 log2_work=91.873472 tx=522200472 date='2020-04-19T14:49:32Z' progress=0.999996 cache=37.2MiB(275481txo) warning='63 of last 100 blocks have unexpected version'
2020-04-19T15:00:46Z UpdateTip: new best=0000000000000000000ec003631af9681a550b0fa08cfbff8aef4832e51a63e4 height=626710 version=0x27ffe000 log2_work=91.873492 tx=522203095 date='2020-04-19T15:00:22Z' progress=1.000000 cache=38.0MiB(282765txo) warning='63 of last 100 blocks have unexpected version'

You can see that it was "fully synced" at 2020-04-19T15:00:46Z
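
A quick way to pull that first fully-synced line out of debug.log (the two-line sample log built here is just a stand-in; in practice you would point grep at your node's real debug.log):

```shell
# Build a tiny stand-in for debug.log, then find the first
# UpdateTip entry that reports progress=1.000000
cat > /tmp/debug_sample.log <<'EOF'
2020-04-19T14:59:19Z UpdateTip: new best=... progress=0.999996 cache=37.2MiB
2020-04-19T15:00:46Z UpdateTip: new best=... progress=1.000000 cache=38.0MiB
EOF
grep -m 1 'progress=1.000000' /tmp/debug_sample.log
```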

Excellent. The spinning drive is installed and the OS is updating now; both the OS and Core will be on this same drive for the 1st test.

There are going to be 4 nodes on a gigabit switch that it will connect to.
1 is a RPi 4 running mynode https://mynodebtc.com/
1 is a RPi 4 running raspiblitz https://raspiblitz.com/
1 is a Core i7 7700k with a stupid amount of ram with everything on an SSD
1 is a Core i5 8600t with 16GB of ram and everything on an SSD

I figure the 4 of those should be able to spit out blocks as fast as the new PC can take them.

The test PC is a i5-9400 with 8GB of ram to start for this 1st test.
I figure it's a good baseline: not the newest & fastest, but something you can walk into your local MicroCenter / local PC store and buy for a reasonable amount ($449), not counting the fast spinning drive.

Will post the numbers when done.

Stay safe.

-Dave



And so the 1st test begins:

2020-05-13T22:44:18Z UpdateTip: new best=00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048 height=1 version=0x00000001 log2_work=33.000022 tx=2 date='2009-01-09T02:54:25Z' progress=0.000000 cache=0.0MiB(1txo)

-Dave
HCP
legendary
Activity: 2086
Merit: 4363
Is there a log entry when it finishes syncing and is on the current block?
Yeah... try searching for the first "UpdateTip:" record in the debug.log that says "progress=1.000000", after the node was started up.

Code:
2020-04-19T14:59:15Z UpdateTip: new best=000000000000000000058a121d528381db4b076ef68c30b769066f92bab1cd77 height=626706 version=0x20400000 log2_work=91.873412 tx=522195769 date='2020-04-19T14:14:27Z' progress=0.999981 cache=34.9MiB(256790txo) warning='65 of last 100 blocks have unexpected version'
2020-04-19T14:59:16Z UpdateTip: new best=0000000000000000000f07e447142f811d24c5c1784b23661ea28f2fe3ffcecc height=626707 version=0x20000000 log2_work=91.873432 tx=522197017 date='2020-04-19T14:15:40Z' progress=0.999981 cache=35.6MiB(262972txo) warning='64 of last 100 blocks have unexpected version'
2020-04-19T14:59:18Z UpdateTip: new best=00000000000000000005c9bb64ec93eaf645a027ee0c3e2c5e0653a374a5c782 height=626708 version=0x20000000 log2_work=91.873452 tx=522197418 date='2020-04-19T14:16:19Z' progress=0.999981 cache=36.3MiB(268132txo) warning='64 of last 100 blocks have unexpected version'
2020-04-19T14:59:19Z UpdateTip: new best=0000000000000000000ffd6e3f9cd137309333609a60045e0a27e18a2ad38947 height=626709 version=0x20000000 log2_work=91.873472 tx=522200472 date='2020-04-19T14:49:32Z' progress=0.999996 cache=37.2MiB(275481txo) warning='63 of last 100 blocks have unexpected version'
2020-04-19T15:00:46Z UpdateTip: new best=0000000000000000000ec003631af9681a550b0fa08cfbff8aef4832e51a63e4 height=626710 version=0x27ffe000 log2_work=91.873492 tx=522203095 date='2020-04-19T15:00:22Z' progress=1.000000 cache=38.0MiB(282765txo) warning='63 of last 100 blocks have unexpected version'

You can see that it was "fully synced" at 2020-04-19T15:00:46Z
copper member
Activity: 1666
Merit: 1901
Amazon Prime Member #7
Bonehead question.
Is there a log entry when it finishes syncing and is on the current block?
You would have to write a script that makes RPC calls at various time intervals to get the best block according to your local blockchain.

You probably won't have to sync the entire blockchain to measure performance; you would just need to measure how many blocks each implementation can verify and store per time interval.

You will just need to make sure you capture periods of blockchain congestion in your test.
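
A sketch of that polling idea (`rate_per_min` and the sample heights are hypothetical; in real use the two heights would come from `bitcoin-cli getblockcount` calls spaced some seconds apart):

```shell
# Hypothetical helper: blocks verified per minute between two polls.
# h1 and h2 are block heights sampled `secs` seconds apart, e.g. via
#   h1=$(bitcoin-cli getblockcount); sleep 600; h2=$(bitcoin-cli getblockcount)
rate_per_min() {
  local h1=$1 h2=$2 secs=$3
  echo $(( (h2 - h1) * 60 / secs ))
}
rate_per_min 100000 100600 600   # → 60 blocks/min
```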
F2b
hero member
Activity: 2140
Merit: 926
Bonehead question.
Is there a log entry when it finishes syncing and is on the current block?

One of the drives should be in later today, and I just realized that, since I am not in the office on a regular basis anymore, I might not know it's done until hours (a day or more?) after it finished.

-Dave
I don't think so.
However, you can maybe look at the 'progress' variable (I don't know if it's really precise: I just looked at my testnet node's log, and it looks like the last synced blocks (around 30) have progress=1.000000 even though the syncing wasn't completely finished; not sure though).