
Topic: How to initially sync Bitcoin as fast as possible - page 2. (Read 950 times)

legendary
Activity: 3458
Merit: 6231
Crypto Swap Exchange
Bonehead question.
Is there a log entry when it finishes syncing and is on the current block?

One of the drives should be in later today and I just thought about the fact that since I am not in the office on a regular basis anymore I might not know it's done till hours (day or more?) after it finished.
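For what it's worth, Bitcoin Core doesn't write a single "sync finished" line, but every UpdateTip entry in debug.log carries a progress= field that hits 1.000000 once you're at the tip (and `bitcoin-cli getblockchaininfo` reports `"initialblockdownload": false` after IBD ends). A quick sketch against a made-up log excerpt:

```shell
# Made-up debug.log excerpt; the real file lives in the node's datadir.
cat > /tmp/sample_debug.log <<'EOF'
UpdateTip: new best=00000000000000000002a7... height=681000 progress=0.999987 cache=512.3MiB(3800000txo)
UpdateTip: new best=00000000000000000001b3... height=681001 progress=1.000000 cache=512.4MiB(3800050txo)
EOF

# Count UpdateTip lines that report the node is at the tip.
grep -c 'progress=1.000000' /tmp/sample_debug.log
# prints: 1
```

So a cron job that greps the tail of debug.log for `progress=1.000000` (or polls getblockchaininfo) could mail you when it's done.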

-Dave
legendary
Activity: 3066
Merit: 4195
diamond-handed zealot
Really?

post deleted?

I put it here because I had an INTEREST in the results and I wanted to be reminded to find my way back here.

Do I have to waste EVERYONE'S time by writing a goddamned essay about what a great idea it is to gather comparative empirical data with hardware variations to test the effect on blockchain sync times and that, yes, I would like to know the results of said experiment in order for my post to be deemed "on topic"?

Actually, I feel that "subbed for Dave's results" was a pretty efficient way of conveying that...ffs  Roll Eyes
legendary
Activity: 3458
Merit: 6231
Crypto Swap Exchange
So I am going to start some testing soon. *

*As in as soon as the drives I ordered come in.
Went to the office after spending all day in the field and found the only Samsung / Crucial SSDs we have around were 256GB and the 1TBs are all off-brand.
The spinning ones were all for DVRs / NAS so nothing that most people would have in their machines.

Ordered a WD and a Samsung.
Will begin testing when they show up.

Stay safe.

-Dave
staff
Activity: 4158
Merit: 8382
Is there a description of how a node pulls data from other nodes? I did a quick look and could not find it.
Is it more like BitTorrent, where it pulls from all the nodes it can see and gets as much from each as they want to give, or does it have a bit of logic and pull from local nodes on the same subnet first?
It will pull the history from all the nodes that it's connected out to, mostly as fast as they'll give it, up to a 1,000-block reordering window. Peers that stall the process get disconnected.  You can add a connection to a local host and it will speed things up for you-- but it won't go attempting to connect to something local on its own.
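As a sketch, that local-peer hint is an ordinary bitcoin.conf entry (the 192.168.1.20 address is a placeholder for whatever box on your LAN runs an already-synced node):

```
# bitcoin.conf on the syncing machine
# Placeholder LAN address of an already-synced node:
addnode=192.168.1.20:8333
```

The node still connects out to the wider network as usual; addnode just guarantees the local peer is among them.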
legendary
Activity: 3458
Merit: 6231
Crypto Swap Exchange

It would be better if you don't waste the bandwidth of public full nodes. You could set up local full nodes that:
1. Have a better specification than the testing system
2. Are located on the same local network as the testing device, and configure the testing system to only connect to the local full nodes

I could do that I have a bunch of nodes here & at home.

Is there a description of how a node pulls data from other nodes? I did a quick look and could not find it.
Is it more like BitTorrent, where it pulls from all the nodes it can see and gets as much from each as they want to give, or does it have a bit of logic and pull from local nodes on the same subnet first?

I have faster machines at home but then it's going through the internet.

So I could only list the local ones and that would be fine, but figured having some data come across the wire from outside the network would also be a fair test.

-Dave
legendary
Activity: 2464
Merit: 3158
Would anyone be interested in me running a test like this:

Worth doing or just a waste of time?

That's a good idea !

Maybe the tests you listed are a bit extensive, but what I am interested to see are these variants.
It would make 4 syncs in total.

8GB RAM OS & blockchain on same spinning drive
8GB RAM OS & blockchain on same SSD

These use cases are the most common, imo.

Then duplicate the same 5 tests with 32 GB RAM and dbcache set to 16GB.

Yes, that would be the two other variants interesting to see.
On my desktop (with an overkill 32GB of RAM, I had those 4x8GB lying around ... Roll Eyes ) it took roughly 24hr to sync.
I set dbcache to 24GB but Bitcoin ended up gulping only half of that. My datadir is on an HDD.
The remaining block count went down super fast as the memory use went up.
At some points, it completely stopped for hours. Even the countdown in the Qt wallet was not updating and stayed stuck for longer than displayed.
I was also using txindex, and the HDD I/O activity showed that bitcoin was processing the blocks it downloaded.

The trial with SSD + extended RAM should be faster than ~20hr.

Also, another parameter is your connection speed.
I'm lucky to have a fast one :

legendary
Activity: 3458
Merit: 6231
Crypto Swap Exchange
Would anyone be interested in me running a test like this:


Base system stays the same:

8GB RAM OS & blockchain on same spinning drive
8GB RAM OS on one spinning drive blockchain on another spinning drive
8GB RAM OS on one spinning drive blockchain on an SSD
8GB RAM OS & blockchain on same SSD
8GB RAM OS on one SSD, blockchain on another SSD

Then duplicate the same 5 tests with 32 GB RAM and dbcache set to 16GB.
All clean sync from 0 to current day.

Same PC, just swapping drives & adding RAM and the 1 conf change.

Downside would be that I would have to run them sequentially, so the 1st run would probably have about 3+ weeks fewer blocks than the 10th run.
There would also be some bandwidth variations, but I am sitting on a multi-gigabit fiber run so probably not that much.

Worth doing or just a waste of time?

-Dave

staff
Activity: 4158
Merit: 8382
As for the topic, you can suggest setting "assumevalid=(latest block's hash)" for devices with a slow processor, like an RPi.
For your specs, there's not much of a difference.
Please don't suggest that.

If you just want to blindly trust some host on the internet that you got a block hash from, just use them as your node remotely.

Skipping an extra month of validation or whatever doesn't make a big difference in initial sync time.

Plus, if you set the value to a block that is too recent you'll just cause it to disable AV entirely and your sync will be slower.

Verifying scripts is just one part of validation; there is still a lot of CPU time spent on the rest.
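For completeness, assumevalid is an ordinary bitcoin.conf (or command-line) option, and the two meaningful overrides of the built-in default look like this (`<blockhash>` is a placeholder, not a real value):

```
# bitcoin.conf
# Full script verification back to genesis (slowest, most paranoid):
assumevalid=0

# Or skip script checks up to a specific block (replace <blockhash>
# with a hash you have independently verified):
assumevalid=<blockhash>
```

Leaving the option out entirely uses the default hash baked into the release, which is what the posts above recommend.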
legendary
Activity: 2394
Merit: 5531
Self-proclaimed Genius
If the CPU validating each block individually is the bottleneck, then an SSD won't do much.
This is where 'assumevalid' comes into play.
Your CPU isn't actually verifying the signatures of all the blocks unless you've changed it from the default (block height 453354) to '0'.
That's why syncing is faster at start (aside from the reason: most of the early blocks aren't full).

As for the topic, you can suggest setting "assumevalid=(latest block's hash)" for devices with a slow processor, like an RPi.
For your specs, there's not much of a difference.
legendary
Activity: 3458
Merit: 6231
Crypto Swap Exchange
SSD doesn't really make much of a difference so long as the dbcache is big enough (8GB - 10GB or so).

With a big dbcache validation is a write only process on the disk. Smiley

Not that SSDs aren't a lot better in general.  But if your choice is an anaemic 500GB SSD that, with Bitcoin and your other usage, will be out of space in a year, or a 10TB 7200RPM HDD, well... slow is better than not running.

We could probably go around and around on this a lot based on system configuration and other factors.

If you have a *good* 7200 RPM drive AND enough spare RAM for an 8GB cache AND the blockchain is on a separate drive from the OS / other apps, then yeah.
On the other hand, if you are on lower-end equipment and need to upgrade a bit: the difference between a 512GB SSD and a 1TB SSD is ~$50, while the cost of going from 8GB to 16GB of RAM is about $40.
On a lower-end system you are going to get more overall performance from an SSD than from the extra 8GB of RAM for only $10 more.

Obviously there are 1000s of different ways things can be done and seen. Which ties back to this thread.

Other speed improvements for syncing (and overall system performance) can at times be obtained with other system tweaks.

Things to check:
Are your system / chipset drivers up to date?
Do you have the latest drivers for your drive controller or is it the one that came with your Dell 18 months ago?
Same with drive firmware. Sometimes it fixes bugs, other times it improves performance. Other times it lowers performance but increases security.
etc.

There are countless gaming sites that spend hours showing you how to get every last bit of performance out of your system (NOT counting the graphics card tweaks) that can be applied here.

So yeah, my SSD suggestion was a bit off the cuff, but it's probably neck and neck in initial-sync performance between that and RAM.
Or...do both.

yogg has his specific system and can do things differently than others, so that's one thing, but looking at all configurations and what would improve them is a bit more tricky.

Once fully synced, it works very well on an HDD. Actually I will put some old HDDs to use and have backups of the blockchain, just in case I jam it again with my crazy experimentation. Tongue

Just run a cron job once a day to copy it out. Since I have so many VMs and RPi units running here I constantly detonate my stuff while testing.
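A minimal sketch of that nightly copy (the paths and schedule are placeholders; note that rsyncing a running node only gives a fuzzy snapshot, so stopping bitcoind first, or accepting a crash-consistent copy, is part of the deal):

```
# crontab fragment: copy blocks + chainstate to a backup disk at 03:30 daily
30 3 * * * rsync -a --delete /home/user/.bitcoin/blocks /home/user/.bitcoin/chainstate /mnt/backup/bitcoin/
```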


Stay safe.

-Dave
legendary
Activity: 2464
Merit: 3158
For the use case of syncing the blockchain an SSD is faster but will wear out sooner depending on the number of write cycles it's graded for (drive writes per day).

If you're going to use an SSD for this get one with SLC NAND because it has several times more write cycles than the newer NAND types. It'll be more expensive because SLC has fewer bits per cell but you won't be limited by the speed of an HDD.

Yep, I imagine it's faster for the part where the dbcache gets written to hard storage, but what would be the bottleneck here, in that case?
Assuming a 12GB dbcache, is it the CPU processing speed that prevents it from syncing faster, or the disk I/O bandwidth?

If the CPU validating each block individually is the bottleneck, then an SSD won't do much.



Once fully synced, it works very well on an HDD. Actually I will put some old HDDs to use and have backups of the blockchain, just in case I jam it again with my crazy experimentation. Tongue
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
For the use case of syncing the blockchain an SSD is faster but will wear out sooner depending on the number of write cycles it's graded for (drive writes per day).

If you're going to use an SSD for this get one with SLC NAND because it has several times more write cycles than the newer NAND types. It'll be more expensive because SLC has fewer bits per cell but you won't be limited by the speed of an HDD.
staff
Activity: 4158
Merit: 8382
SSD doesn't really make much of a difference so long as the dbcache is big enough (8GB - 10GB or so).

With a big dbcache validation is a write only process on the disk. Smiley

Not that SSDs aren't a lot better in general.  But if your choice is an anaemic 500GB SSD that, with Bitcoin and your other usage, will be out of space in a year, or a 10TB 7200RPM HDD, well... slow is better than not running.
legendary
Activity: 3458
Merit: 6231
Crypto Swap Exchange
Let me bold large red color something:

USE AN SSD

There is a lot of disk I/O going on and having an SSD for the blockchain will save you a ton of time.

*Use a good SSD; somewhere in one of the other posts I noted that a cheap generic SSD was slower than a 7200RPM spinning drive, but even a halfway decent SSD is better.

Stay safe.

-Dave
legendary
Activity: 2464
Merit: 3158
Hey there,

Lately I have jammed my Bitcoin Core full node blockchain.
Don't ask why, I am not sure what happened. It was txindexed etc ...

When I tried to open it again, there was something wrong with the index, so I went on with the provided instructions.
At some point I realized that the folder with the data weighed a mere few hundred MB: all blockchain data was gone.
I have good practices when it comes to handling crypto funds, so they were not at risk even in the case of total deletion of that bitcoin folder.

I have another full node on a server, but can't access the HDD.
I started zipping the blockchain directory on my server, to download it afterwards, but that also is very time-consuming.



I need a full node to work with.
Syncing can take days, if not weeks.
Based on my trials, luckily there are a couple options / settings / things you can do to enhance the synchronization speed.

1) Don't put your "bitcoin folder" on an external drive.
The input/output speed of the external device is a bottleneck and BTC will take ages to sync that way.
I use an HDD rack to easily swap drives when needed. I hold this kind of data (blockchains) on such swappable drives.
Despite the rack receiver being wired to my motherboard with a SATA cable, when I used it as the data directory for Bitcoin Core, the speed for synchronization was insanely slow.
I ended up using an HDD that is directly wired to my motherboard with SATA, cutting out every interface.

2) Have lots of RAM.
I'm lucky to have 32GB of RAM on my desktop computer.
There is an option in bitcoin.conf called dbcache.
By default, it limits the usage of RAM for Bitcoin Core and leaves enough for the rest of your processes.
I put 24GB as the value here. I am only syncing up on that computer for now.

While looking at the resources manager, I noticed that the Bitcoin Core process took up to 12GB in RAM.
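One detail worth flagging for anyone copying this: dbcache takes its value in MiB, not GB, so a ~24GB cache is written as a plain number. A sketch of the relevant bitcoin.conf lines (values illustrative, not a recommendation):

```
# bitcoin.conf
# dbcache is specified in MiB; ~24 GB would be:
dbcache=24576
# Full transaction index, as used in this post:
txindex=1
```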

3) Have a fast connection.
I'm lucky : I have a 1Gb fiber connection.
My RAM was filling up with raw block data, then Bitcoin Core processed it.



In the end, it took ~20hr to fully sync from scratch. It was on a Windows machine.
It was indicating an average speed of 12% progress per hour, which would make it ~8hr if that rate had held.
As I need txindex, the data displayed in Bitcoin Core wasn't changing, but the index files were being created by the process: it didn't crash despite the displayed info remaining still.

A few observations :
- Launch the initial sync and let it sync all the way through.
I noted that if you close Bitcoin Core and launch it later to keep syncing, the speed drops drastically.

- Sometimes it seems like the process crashed because nothing is happening.
As long as it didn't throw an error saying it indeed crashed, your Bitcoin Core client might simply be writing data to the disk, which doesn't affect the displayed data.

- Weirdly, in my case syncing was faster on Windows than on Linux.  Shocked



Thanks for reading.
If you have any other advice, please share them. Smiley