
Topic: 🍑 PeachBitcoin [CHALLENGE] Run A Bitcoin Node: 14 Days To 14 Merits - page 147. (Read 35110 times)

sr. member
Activity: 1007
Merit: 279
Payment Gateway Allows Recurring Payments
Great initiative @NotATether! I always wanted to try Bitcoin Core but heard that it consumes a lot of data, so I was somewhat hesitant. Now, to take part in your challenge, I am going to experience Bitcoin Core too, and hopefully learn something new as well.


Day 1 For Me:

It is my first time with Bitcoin Core, in fact with any node, so please guide me if I am doing it wrong.
legendary
Activity: 2268
Merit: 18771
The workload is equivalent to downloading a 600GB file in your browser and scanning it for viruses.
It's a lot more than that. My (laptop) sync is almost done, and my disk %util is now consistently around 96%. It has written 4.58 TB to disk in 42 hours.
When it's done, I'll create a progress graph.
Loyce is correct. The blocks folder will be ~550 GB. The chainstate, however, is what is constantly being read and written to.

The total size of the chainstate will only be ~9 GB once all is said and done, but it is updated after every block to remove all spent UTXOs and add all newly created UTXOs. Ideally this would be stored entirely within your RAM and only flushed to disk when you shut down your node, but few people run Core with a big enough dbcache, so the chainstate file on your SSD is constantly being updated, especially during the IBD. Once you are fully synced and running, Core will cache as much of the chainstate as possible in your RAM (up to your dbcache limit), and will optimize what it caches (i.e. it will preferentially cache newer UTXOs, which are more likely to be spent, rather than ones which have been dormant for years, specifically to minimize how much needs to be written to the chainstate file).

The more RAM you allocate, and the less often you shut down your node, the less your chainstate will be written to.
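To put that in perspective, here is a rough back-of-the-envelope sketch (integer shell arithmetic; the ~9 GB chainstate figure is from the post above, and the dbcache values are only illustrative):

```shell
# Rough sketch: what fraction of a ~9 GiB chainstate fits in RAM
# at the default dbcache (450 MiB) vs. an increased one (4096 MiB).
chainstate_mib=$((9 * 1024))
echo "default dbcache: $((450 * 100 / chainstate_mib))% cached"
echo "dbcache=4096:    $((4096 * 100 / chainstate_mib))% cached"
```

With the default cache only a few percent of the UTXO set stays in RAM, which is why the chainstate file gets hammered during IBD.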
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
It's specified in megabytes and it can be a maximum of 16GB.

Actually it's specified in mebibytes, according to this from Bitcoin's GitHub:

Quote
-dbcache= - the UTXO database cache size, this defaults to 450. The unit is MiB (1024).
The minimum value for -dbcache is 4.

It doesn't really make a big difference, I just mention it for the sake of accuracy.

Well, yeah. A megabyte is technically 1000 kilobytes, but the only people who use that measurement are disk and memory manufacturers; everyone else treats it as a factor of 1024, even though that unit is technically a mebibyte.
hero member
Activity: 560
Merit: 1060
It's specified in megabytes and it can be a maximum of 16GB.

Actually it's specified in mebibytes, according to this from Bitcoin's GitHub:

Quote
-dbcache= - the UTXO database cache size, this defaults to 450. The unit is MiB (1024).
The minimum value for -dbcache is 4.

It doesn't really make a big difference, I just mention it for the sake of accuracy.
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
The workload is equivalent to downloading a 600GB file in your browser and scanning it for viruses.
It's a lot more than that. My (laptop) sync is almost done, and my disk %util is now consistently around 96%. It has written 4.58 TB to disk in 42 hours.
When it's done, I'll create a progress graph.

Quote
the process of syncing your Core Node one time doesn't affect your disk life much if at all.
For a modern disk, that's correct. My 2 TB Samsung has 1200 TBW.
One of my old SSDs can only write 36 TB. It won't instantly kill it, but it takes a significant chunk out of its life.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
With this speed it is quick enough, but I want to learn:

How do I set dbcache?
How much of my disk's life does the sync process use up?

Use the -dbcache setting on the command line, or add dbcache=the value to your bitcoin.conf file. It's specified in megabytes and it can be a maximum of 16GB, although most people should use 2 or 4 GB of dbcache; there is little benefit in making it higher.
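For example, both ways of setting it (a minimal sketch; 4096 is just an illustrative value, and the config path assumes the default datadir):

```shell
# Option 1: pass it on the command line (value in MiB)
# ./bitcoind -dbcache=4096

# Option 2: persist it in bitcoin.conf (default datadir assumed)
echo "dbcache=4096" >> ~/.bitcoin/bitcoin.conf
```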

Are you using an HDD or SSD?

HDDs are designed to handle petabytes, or at the very least many terabytes, of total read/write I/O. SSDs a bit less than that, but regardless, syncing your Core node once barely affects your disk's life, if at all. Having said that, if your disk is failing SMART tests, do not use any disk-intensive apps like running a node, and get it replaced.
member
Activity: 97
Merit: 43
Yes, everyone can join. No, it will not kill your disk. The workload is equivalent to downloading a 600GB file in your browser and scanning it for viruses. Will that break your disk? No, no matter whether it is an HDD or SSD.
With this speed it is quick enough, but I want to learn:

How do I set dbcache?
How much of my disk's life does the sync process use up?

I have a Core i7 and 8 GB RAM, and I don't think I'll do anything with dbcache; I just want to know.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
Yes, everyone can join. No, it will not kill your disk. The workload is equivalent to downloading a 600GB file in your browser and scanning it for viruses. Will that break your disk? No, no matter whether it is an HDD or SSD.
member
Activity: 97
Merit: 43
Can I join it?

I have actually been running a pruned node since 3 Jan, but on my desktop, because my laptop has been away for repairs for a few days. Syncing on the desktop is very slow.

Today, I copied the prune node data to my laptop, and it syncs very quickly.

Code:
{
  "chain": "main",
  "blocks": 420024,
  "headers": 824850,
  "bestblockhash": "00000000000000000165e59d1ba94b2b386a4d80aa318077941ac020c979e432",
  "difficulty": 213398925331.3239,
  "time": 1468095911,
  "mediantime": 1468095039,
  "verificationprogress": 0.1489870150749924,
  "initialblockdownload": true,
  "chainwork": "0000000000000000000000000000000000000000001f2d93b0bc10f3ee0455d7",
  "size_on_disk": 3332447503,
  "pruned": true,
  "pruneheight": 416588,
  "automatic_pruning": true,
  "prune_target_size": 4999610368,
  "warnings": ""
}

Reading LoyceV's posts, I am worried that this will kill my laptop and its disk. Should I continue?
hero member
Activity: 560
Merit: 1060
Great initiative NotATether.

I have obviously been running nodes since I started my Bitcoin journey, but since I have less than 1000 merits, I will participate with one of my nodes.

DAY 1:

Code:
{
  "chain": "main",
  "blocks": 824839,
  "headers": 824839,
  "bestblockhash": "00000000000000000000e205f4449a5ddbbe5aa29964d00b05c6b88e4ad3757c",
  "difficulty": 73197634206448.34,
  "time": 1704704580,
  "mediantime": 1704699308,
  "verificationprogress": 0.9999952006027929,
  "initialblockdownload": false,
  "chainwork": "0000000000000000000000000000000000000000649ccedf7c72035b2f18ad28",
  "size_on_disk": 612733635152,
  "pruned": false,
  "warnings": ""
}
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Another experiment:
Code:
./bitcoind -dbcache=4096 -prune=550 -datadir=/dev/shm/bitcoin
I'll keep hourly status-snapshots again.

I'm running this on a 6-core Xeon (E-2236) with 32 GB RAM and 1 GBit internet (and HDD, but I won't be using that).
This took 11 hours and 10 minutes to sync. Unfortunately, I didn't store data from getblockchaininfo, and somehow bitcoin-cli can't connect to the daemon now, so I can't tell how the progress estimate changed over time.
full member
Activity: 728
Merit: 151
Defend Bitcoin and its PoW: bitcoincleanup.com
Day 4

Code:

{
  "chain": "main",
  "blocks": 824835,
  "headers": 824835,
  "bestblockhash": "00000000000000000003d06c35c82f095a9f411d21fb53ebda255fb46be0a1ad",
  "difficulty": 73197634206448.34,
  "time": 1704699350,
  "mediantime": 1704698646,
  "verificationprogress": 0.9999996401959509,
  "initialblockdownload": false,
  "chainwork": "0000000000000000000000000000000000000000649bc493d635715d3be2c244",
  "size_on_disk": 4957545000,
  "pruned": true,
  "pruneheight": 822352,
  "automatic_pruning": true,
  "prune_target_size": 4999610368,
  "warnings": ""
}
sr. member
Activity: 1680
Merit: 288
Eloncoin.org - Mars, here we come!
I'm in! I started about 4 hours ago and this is my progress. The time left shows just 3 days, but it's not always so; sometimes it resets and goes back to an average of 12 days or so.


These are my settings. I set pruning to 20 GB (I think it should be more, because the external SSD I am using has over 500 GB of space).

legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Which SSD do you use and what is its TBW?
It's an older 128 GB disk, with 70 TBW. It's not something I worry about now, since that disk has served me for many years and still has enough left, but I was surprised that just one blockchain download will now take 10-20% of its "life". It has written another 0.3 TB since my previous post.
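Rough integer arithmetic on those numbers (a sketch; the ~4.58 TB + 0.3 TB written so far is rounded up to 5 TB, against the 70 TBW rating):

```shell
written_tb=5   # ~4.58 TB + 0.3 TB written so far, rounded up
tbw=70         # rated endurance of the 128 GB disk
echo "$((written_tb * 100 / tbw))% of rated TBW used by this sync so far"
```

That lands at roughly 7% already, with the sync still running, so a final 10-20% is plausible.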



Another experiment:
Code:
./bitcoind -dbcache=4096 -prune=550 -datadir=/dev/shm/bitcoin
I'll keep hourly status-snapshots again.

I'm running this on a 6-core Xeon (E-2236) with 32 GB RAM and 1 GBit internet (and HDD, but I won't be using that).
I expect this one not to slow down when chainstate grows, because it won't be low on memory (and even though dbcache isn't that high, that doesn't matter when using a RAM-drive for storage).
hero member
Activity: 714
Merit: 1010
Crypto Swap Exchange
I checked something else: writing 20-50 MB/s for a few days is very destructive for an SSD! So far I've written 2.6 TB in less than a day, which is already more than 10% of all writes this SSD has seen in its entire life! My current estimate is that this download will cost 10-20% of this old disk's TBW (TeraBytes Written, a design lifespan for SSDs). This is destructive for old SSDs combined with little RAM!

Which SSD do you use and what is its TBW?

Now that you made me curious about the wear of my own SSD in my Umbrel node (mine is a Samsung SSD 860 Evo 1TB), I checked S.M.A.R.T. attributes:
Code:
=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   094   094   000    Old_age   Always       -       25807
 12 Power_Cycle_Count       0x0032   099   099   000    Old_age   Always       -       787
177 Wear_Leveling_Count     0x0013   087   087   000    Pre-fail  Always       -       184
179 Used_Rsvd_Blk_Cnt_Tot   0x0013   100   100   010    Pre-fail  Always       -       0
181 Program_Fail_Cnt_Total  0x0032   100   100   010    Old_age   Always       -       0
182 Erase_Fail_Count_Total  0x0032   100   100   010    Old_age   Always       -       0
183 Runtime_Bad_Block       0x0013   100   100   010    Pre-fail  Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0032   059   047   000    Old_age   Always       -       41
195 Hardware_ECC_Recovered  0x001a   200   200   000    Old_age   Always       -       0
199 UDMA_CRC_Error_Count    0x003e   100   100   000    Old_age   Always       -       0
235 Unknown_Attribute       0x0012   099   099   000    Old_age   Always       -       41
241 Total_LBAs_Written      0x0032   099   099   000    Old_age   Always       -       202678239705

This SSD has endured two full initial blockchain downloads and 24/7 operation as an Umbrel node for about two years now. Total Power_On_Hours indicates almost three years of use, but in its first year it was not serving as storage for a Bitcoin node.

Attribute ID# 177 Wear_Leveling_Count is to be read as the remaining percentage of endurance. So after something over 94 TiB written in total (1 TiB is 2^40 bytes), my SSD is at 87% remaining endurance. I'm not worried at all now, though I was a bit worried when I read LoyceV's post.
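As a cross-check, the TiB-written figure can be derived from attribute 241 (a sketch; it assumes the drive reports 512-byte LBAs, which Samsung SATA drives do):

```shell
lbas=202678239705    # Total_LBAs_Written raw value from the SMART output above
bytes=$((lbas * 512))
echo "$((bytes / 1024 / 1024 / 1024 / 1024)) TiB written"
```

That comes out at roughly 94 TiB, matching the figure quoted above.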

I hope this is not off-topic here.
sr. member
Activity: 322
Merit: 318
The Alliance Of Bitcointalk Translators - ENG>BAN
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
I'm currently at 2020-12-31 (63.38% progress). CPU-usage is around 40%, memory-usage at 23%, chainstate is 5.0 GB and bandwidth continuously varies between 0 and 100%.
System load is 2.6. iostat -dx /dev/sda 5 shows disk %util around 60%, and iotop shows 1-3 MB/s of reads and 20-50 MB/s of writes. Memory usage of bitcoind is 12.0%.
Update: with 5.6 GB chainstate, progress (and bandwidth consumption) is going down. Disk reads are increasing while CPU-consumption is decreasing.
I checked something else: writing 20-50 MB/s for a few days is very destructive for an SSD! So far I've written 2.6 TB in less than a day, which is already more than 10% of all writes this SSD has seen in its entire life! My current estimate is that this download will cost 10-20% of this old disk's TBW (TeraBytes Written, a design lifespan for SSDs). This is destructive for old SSDs combined with little RAM!
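For scale, sustained writes at the middle of that 20-50 MB/s range add up quickly (a sketch using awk for the fractional result):

```shell
rate_mb=30                    # mid-range of the observed 20-50 MB/s
seconds_per_day=$((24 * 3600))
awk -v r="$rate_mb" -v s="$seconds_per_day" \
    'BEGIN { printf "%.1f TB per day\n", r * s / 1e6 }'
```

That is about 2.6 TB per day, consistent with the figure reported above.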
hero member
Activity: 714
Merit: 1010
Crypto Swap Exchange
Some months ago I had some issues with my Raspi Umbrel node, which I run for testing purposes only, as I'm not a big fan of Umbrel (1 TB SATA SSD, Raspi 4B with 8 GB RAM), and I decided to throw away all the Umbrel apps (i.e. delete the Bitcoin, Electrs and Lightning node containers on it). I didn't want to set up Umbrel completely from scratch though, because I also wanted to test how fast a Raspi (and in particular mine) can perform the initial blockchain download (IBD) under slightly optimized conditions.

After deleting all Umbrel containers I made sure the blockchain files were truly gone. Then I re-installed the Umbrel Bitcoin Core container and opted for ~4.5 GB of dbcache size; according to debug.log, Umbrel passed dbcache=4883 to bitcoind.

Quote from: debug.log
2023-06-22T18:00:42Z Bitcoin Core version v25.0.0 (release build)
...
... Cache configuration:
... * Using 2.0 MiB for block index database
... * Using 610.1 MiB for transaction index database
... * Using 533.9 MiB for basic block filter index database
... * Using 8.0 MiB for chain state database
... * Using 3729.0 MiB for in-memory UTXO set (plus up to 286.1 MiB of unused mempool space)

My Umbrel is configured to only communicate via Tor outside of my local network. My internet connection has stable 100MBit/s for download and 40MBit/s upload bandwidth. My Raspi isn't overclocked but it is actively cooled with a fan which keeps it under 40°C even at higher workloads.

I will list the timestamps of UpdateTip log entries at intervals of 50k blocks as processed by my Raspi Umbrel below:
Quote from: debug.log

2023-06-22T18:01:53Z UpdateTip: new best=00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048 height=1 version=0x00000001 log2_work=33.000022 tx=2 date='2009-01-09T02:54:25Z' progress=0.000000 cache=0.0MiB(1txo)
...
2023-06-22T18:12:36Z UpdateTip: new best=000000001aeae195809d120b5d66a39c83eb48792e068f8ea1fea19d84a4278a height=50000 version=0x00000001 log2_work=48.367456 tx=50780 date='2010-04-10T16:22:18Z' progress=0.000060 cache=6.7MiB(33635txo)
...
2023-06-22T18:19:45Z UpdateTip: new best=000000000003ba27aa200b1cecaad478d2b00432346c3f1f3986da1afd33e506 height=100000 version=0x00000001 log2_work=58.648173 tx=216577 date='2010-12-29T11:57:43Z' progress=0.000256 cache=17.5MiB(104639txo)
...
2023-06-22T18:29:47Z UpdateTip: new best=0000000000000a3290f20e75860d505ce0e948a1d1d846bec7e39015d242884b height=150000 version=0x00000001 log2_work=67.003783 tx=1718407 date='2011-10-20T13:44:51Z' progress=0.002032 cache=165.1MiB(1240022txo)
...
2023-06-22T18:50:39Z UpdateTip: new best=000000000000034a7dedef4a161fa058a2d67a173a90155f3a2fe6fc132e0ebf height=200000 version=0x00000002 log2_work=68.741562 tx=7316696 date='2012-09-22T10:45:59Z' progress=0.008650 cache=345.2MiB(2621907txo)
...
2023-06-22T19:35:19Z UpdateTip: new best=000000000000003887df1f29024b06fc2200b55f8af8f35453d7be294df2d214 height=250000 version=0x00000002 log2_work=71.012098 tx=21491097 date='2013-08-03T12:36:23Z' progress=0.025408 cache=991.4MiB(7315303txo)
...
2023-06-22T20:33:26Z UpdateTip: new best=000000000000000082ccf8f1557c5d40b21edabb18d2d691cfbf87118bac7254 height=300000 version=0x00000002 log2_work=78.499549 tx=38463789 date='2014-05-10T06:32:34Z' progress=0.045474 cache=1541.7MiB(11807717txo)
...
2023-06-22T21:58:11Z UpdateTip: new best=0000000000000000053cf64f0400bb38e0c4b3872c38795ddde27acb40a112bb height=350000 version=0x00000002 log2_work=82.531372 tx=63960713 date='2015-03-30T22:17:14Z' progress=0.075615 cache=2593.3MiB(19434736txo)
...
2023-06-23T05:55:40Z UpdateTip: new best=000000000000000004ec466ce4732fe6f1ed1cddc2ed4b328fff5224276e3f6f height=400000 version=0x00000004 log2_work=84.183059 tx=112477766 date='2016-02-25T16:24:44Z' progress=0.132956 cache=2921.5MiB(20675352txo)
...
2023-06-23T13:17:17Z UpdateTip: new best=0000000000000000014083723ed311a461c648068af8cef8a19dcd620c07a20b height=450000 version=0x20000000 log2_work=85.875657 tx=190757161 date='2017-01-25T22:11:29Z' progress=0.225462 cache=3542.6MiB(25890028txo)
...
2023-06-23T20:47:40Z UpdateTip: new best=00000000000000000024fb37364cbf81fd49cc2d51c09c75c35433c3a1945d04 height=500000 version=0x20000000 log2_work=87.684014 tx=283595039 date='2017-12-18T18:35:25Z' progress=0.335151 cache=1905.8MiB(12482102txo)
...
2023-06-24T03:21:22Z UpdateTip: new best=000000000000000000223b7a2298fb1c6c75fb0efc28a4c56853ff4112ec6bc9 height=550000 version=0x20000000 log2_work=90.011241 tx=356588225 date='2018-11-14T02:35:41Z' progress=0.421371 cache=3917.7MiB(28934852txo)
...
2023-06-24T12:21:01Z UpdateTip: new best=00000000000000000007316856900e76b4f7a9139cfbfba89842c8d196cd5f91 height=600000 version=0x20000000 log2_work=91.230115 tx=466297405 date='2019-10-19T00:04:21Z' progress=0.550934 cache=1862.6MiB(12118385txo)
...
2023-06-24T23:39:25Z UpdateTip: new best=0000000000000000000060e32d547b6ae2ded52aadbc6310808e4ae42b08cc6a height=650000 version=0x20000000 log2_work=92.317149 tx=571876797 date='2020-09-25T23:39:25Z' progress=0.675557 cache=2766.8MiB(19500715txo)
...
2023-06-25T12:39:54Z UpdateTip: new best=0000000000000000000590fc0f3eba193a278534220b2b37e9849e1a770ca959 height=700000 version=0x3fffe004 log2_work=93.063032 tx=669566382 date='2021-09-11T04:14:32Z' progress=0.790797 cache=2384.8MiB(16335145txo)
...
2023-06-25T23:48:00Z UpdateTip: new best=0000000000000000000592a974b1b9f087cb77628bb4a097d5c2c11b3476a58e height=750000 version=0x3119a000 log2_work=93.683772 tx=757720472 date='2022-08-18T18:02:52Z' progress=0.894757 cache=1966.1MiB(12875717txo)
...
2023-06-26T16:46:31Z UpdateTip: new best=00000000000000000000a2c9c72ff100692ba39a89d5b84417a8b9d0e947db39 height=796033 version=0x2fffe000 log2_work=94.265140 tx=856792482 date='2023-06-26T16:46:05Z' progress=1.000000 cache=3589.5MiB(22517925txo)

If there's interest, I can also post UpdateTip logs at some constant time elapsed intervals, maybe every 6h would be convenient?

As you can see, the IBD to the current tip at block height 796033 on 2023-06-26T16:46:31Z (received time, not block time) took a little less than 95 hours in total on such "weak" hardware as a Raspi 4B with 8 GB RAM and a 1 TB SATA SSD (UASP active for the USB3-SATA adapter).
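The ~95 hour figure can be checked against the first and last UpdateTip timestamps above (a sketch using GNU date; the -d syntax differs on BSD/macOS):

```shell
# Elapsed IBD time between the height=1 and height=796033 UpdateTip entries
start=$(date -u -d '2023-06-22T18:01:53Z' +%s)
end=$(date -u -d '2023-06-26T16:46:31Z' +%s)
echo "$(( (end - start) / 3600 )) hours"
```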
full member
Activity: 728
Merit: 151
Defend Bitcoin and its PoW: bitcoincleanup.com
Day 3

Code:
{
  "chain": "main",
  "blocks": 824738,
  "headers": 824738,
  "bestblockhash": "0000000000000000000015b6e3b72063bc3dcefb37b60fee1ba3ab59c988b931",
  "difficulty": 73197634206448.34,
  "time": 1704629572,
  "mediantime": 1704626490,
  "verificationprogress": 0.9999967297732044,
  "initialblockdownload": false,
  "chainwork": "000000000000000000000000000000000000000064828ae956f89d0ef207422b",
  "size_on_disk": 4913725693,
  "pruned": true,
  "pruneheight": 822275,
  "automatic_pruning": true,
  "prune_target_size": 4999610368,
  "warnings": ""
}
sr. member
Activity: 1680
Merit: 288
Eloncoin.org - Mars, here we come!
My system:
Intel i3 laptop with 8 GB RAM, no swap, SSD, VPN and about 70 Mbit/s fibre (shared with the rest of the house).

I haven't tried this with 8 GB RAM since the Ordinal spam largely increased chainstate. If this doesn't lead to surprises, I expect it to be done on Monday. If it does lead to surprises, I'll try the same with 16 GB RAM.

Oh wow, I felt it would require a higher-specification system. My system is actually higher-spec than yours; it's a gaming system from barely a few years back, so I believe if you can do it with those specifications then I can too. The difference is just the internet speed. I'll just start today anyway and see how it goes.