Author

Topic: Fun & learning Bitcoin blockchain downloaded on 1TB Silicon Power 2.5 SSD (Read 343 times)

legendary
Activity: 1512
Merit: 7340
Farewell, Leo
Does it mean that what was defined earlier for the average block size doesn't matter anymore and won't matter in the future either?
The manner in which that limit was defined is extremely important: through a softfork. An old client never sees a 4MB block. The block size limit is still the same. What SegWit did was segregate the digital signatures and place them in a separate, extended part of the block, while retaining the validity of transactions as old clients see them. The size limit increase wasn't arbitrary.
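For reference, the rule SegWit introduced (BIP 141) fits in a few lines. A minimal sketch in Python, with made-up sizes for illustration:

Code:
MAX_BLOCK_WEIGHT = 4_000_000  # the consensus limit, in weight units

def block_weight(base_size, total_size):
    # base_size: bytes of the block with witness data stripped (what old clients see)
    # total_size: bytes including the witness data
    return base_size * 3 + total_size

# A block with no witness data: base == total, so the old 1MB cap is unchanged.
print(block_weight(1_000_000, 1_000_000))  # 4000000, exactly at the limit

# A witness-heavy block can approach 4MB on disk and still be valid.
print(block_weight(100_000, 3_700_000))    # 4000000, also at the limit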

If we find another option to do that again in the future, then good. It'll be down to the user's choice to opt in to or out of the softfork; they'll still remain part of the Bitcoin network, enforcing the old rules as much as is practically possible. But as I see it, the proposed way is through a hardfork, essentially tinkering with the block size limit, arbitrarily one could say.
full member
Activity: 1092
Merit: 227
Even these estimations may not hold for more than 10 years. Grin
They won't. Take into account that arbitrarily tinkering with the block size limit would result in more reckless usage of the chain space, meaning perhaps even more Ordinals and more transactions in general.

Can you explain what you mean by I/O jamming?
It's a usual phenomenon when hardware has to deal with lots of data, as with a blockchain. There are limitations (read and write speeds), background processes that might interfere, and the drive's health as a whole. I think it might also have to do with the architecture of the drive; Bitcoin Core doesn't work best on everything.

Does it mean that what was defined earlier for the average block size doesn't matter anymore and won't matter in the future either? Like one post above shows, the average block size has grown above 1 MB, and there are also a few blocks of around 2 MB.

Also, as far as I have read, if the block size rises then it will definitely cause problems at the mining end. For example, if x time is required to process the transactions in one block of 1 MB, then things could get worse in the future if the block size grows to, say, 10 MB per block!

Secondly, this could be a problem when downloading with Core. If it already needs a 1 TB SSD, then what will happen in the future?
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
Even these estimations may not hold for more than 10 years. Grin
They won't. Take into account that arbitrarily tinkering with the block size limit would result in more reckless usage of the chain space, meaning perhaps even more Ordinals and more transactions in general.

Can you explain what you mean by I/O jamming?
It's a usual phenomenon when hardware has to deal with lots of data, as with a blockchain. There are limitations (read and write speeds), background processes that might interfere, and the drive's health as a whole. I think it might also have to do with the architecture of the drive; Bitcoin Core doesn't work best on everything.
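If anyone wants a quick sanity check of a drive's write path without extra tools, here's a crude sketch in Python (it writes and removes a hypothetical 256MB test file; a dedicated benchmark like CrystalDiskMark is more thorough):

Code:
import os, time

CHUNK = b"\0" * (4 * 1024 * 1024)   # 4MB chunks
TOTAL = 256 * 1024 * 1024           # 256MB total
path = "write_test.bin"             # put this on the drive under test

start = time.time()
with open(path, "wb") as f:
    for _ in range(TOTAL // len(CHUNK)):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())            # make sure data actually reaches the drive
elapsed = time.time() - start
os.remove(path)

print(f"~{TOTAL / elapsed / 1e6:.0f} MB/s sequential write")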
full member
Activity: 896
Merit: 193
web developer for hire
I'm more inclined to think some software on your system is corrupt or the OS is hosed. Any SSD will continue to run under heavy load; worst case it will just throttle, not cause something to toss an error.
It isn't corrupt, and it should've throttled but doesn't. I can't explain why I get to a 935MB download in Fulcrum before the getblock errors. I'm running Bitcoin Core, Electrum & EPS on the same SSD without experiencing a problem. When Fulcrum's involved, it affects Bitcoin Core's functioning.

Have you taken a look at the drive with either

CrystalDiskInfo: https://crystalmark.info/en/download/
or
Hard Disk Sentinel: https://www.hdsentinel.com/

Both are well known in the PC / Server world for letting you know about disk issues that the OS may not.
I've used CrystalDiskMark today; there isn't anything to worry about. Silicon Power's got SP Toolbox for checking disk status, and it isn't showing anything wrong; the disk health's good.

As I posted in the other thread, I am running Core and Fulcrum on an older machine with no issues, and the cheapest 1TB drive that I could find. It happened to be an SP. Obviously, in the 'real world' this means nothing. I could have gotten lucky and gotten the best one they ever made.
I've been reading Silicon Power 2.5" SSD A55 reviews, so I wouldn't be surprised if it's my SSD. You're using an M.2 from Silicon Power, which has faster speeds. Its reviews aren't bad, so you shouldn't have a problem.

https://www.gadgetreview.com/silicon-power-a55-review
https://wccftech.com/review/silicon-power-a55-512gb-review-is-it-any-good/
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
Have you taken a look at the drive with either

CrystalDiskInfo: https://crystalmark.info/en/download/
or
Hard Disk Sentinel: https://www.hdsentinel.com/

Both are well known in the PC / Server world for letting you know about disk issues that the OS may not.

As I posted in the other thread, I am running Core and Fulcrum on an older machine with no issues, and the cheapest 1TB drive that I could find. It happened to be an SP. Obviously, in the 'real world' this means nothing. I could have gotten lucky and gotten the best one they ever made.

-Dave
legendary
Activity: 4354
Merit: 3614
what is this "brake pedal" you speak of?
There are getblock errors after a 935MB download when Fulcrum runs with Bitcoin Core on the same SSD. Bitcoin Core gets stuck; it doesn't shut down without Task Manager.

So I'm confirming the Silicon Power SSD A55 runs Bitcoin Core with the I/O issues already posted. If you're running intensive downloads simultaneously, you shouldn't buy this SSD because it isn't able to handle it.

I'm more inclined to think some software on your system is corrupt or the OS is hosed. Any SSD will continue to run under heavy load; worst case it will just throttle, not cause something to toss an error.

full member
Activity: 896
Merit: 193
web developer for hire
I've tested Fulcrum, but it wasn't a success. There are getblock errors after a 935MB download when Fulcrum runs with Bitcoin Core on the same SSD. Bitcoin Core gets stuck; it doesn't shut down without Task Manager.

So I'm confirming the Silicon Power SSD A55 runs Bitcoin Core with the I/O issues already posted. If you're running intensive downloads simultaneously, you shouldn't buy this SSD because it isn't able to handle it.
full member
Activity: 896
Merit: 193
web developer for hire
Although there's nothing wrong with your SSD since it's fast enough to handle Bitcoin Core.
My SSD isn't defective; it's good and fast enough to handle Bitcoin Core. The I/O isn't consistent, but I'm using it for my node. It hasn't got the highest reviews, but it's working.

At the very least, it shouldn't be caused by the SSD though. For reference, I use a 3.5" HDD to run 3 instances of Bitcoin Core (mainnet, testnet and signet) without noticeable performance issues.
Its performance is stable now; I don't know why it wasn't performing faster or better during the download.
copper member
Activity: 2156
Merit: 983
Part of AOBT - English Translator to Indonesia
Short answer,
1. Before SegWit activation (which happened on 24 August 2017), the Bitcoin block size limit was 1MB.
2. After SegWit activation, the Bitcoin block size limit is now 4 million weight units. This allows blocks to be bigger than 1MB, with a theoretical limit of 4MB. A SegWit transaction has a lower weight compared with a non-SegWit transaction of the same size, which is why there are blocks bigger than 1MB.

Ok, thanks for the answer,  Wink

Is that why blocks with the same number of transactions sometimes have different sizes, like in my images shown above? Is it because there are people still using the original (legacy) addresses and there are people who are using native SegWit, right?
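For illustration of why that happens, a rough sketch in Python using commonly cited approximate input sizes (exact byte counts vary per transaction, so treat these as ballpark figures):

Code:
P2PKH_INPUT = 148    # approx. vbytes for a legacy-address input
P2WPKH_INPUT = 68    # approx. vbytes for a native-SegWit input
BASE = 10 + 2 * 34   # approx. tx overhead plus two typical outputs

def tx_vsize(n_inputs, input_size):
    return n_inputs * input_size + BASE

print(tx_vsize(2, P2PKH_INPUT))   # ~374 vbytes for a 2-input legacy tx
print(tx_vsize(2, P2WPKH_INPUT))  # ~214 vbytes for the same shape in native SegWit

So two blocks holding the same number of transactions can differ in size simply because of the address-type mix inside them.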
hero member
Activity: 406
Merit: 443
None, you'd have to make several assumptions and do a quick calculation. For example, I'll make a prediction with these details:
1. Each block will have an average size of 1.86MB[1]
There is an additional challenge in this estimation, which is that all blocks have become full since ordinal inscriptions appeared. Therefore, the average should be estimated over blocks from the last two months, while taking into account the possibility of any future soft fork.
Personally, I prefer 2-3 megabytes per block, or about 300-432 MB a day, as the daily blockchain increase.
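A quick check of that arithmetic, assuming the usual ~144 blocks per day:

Code:
blocks_per_day = 144        # ~6 blocks/hour * 24 hours
print(blocks_per_day * 2)   # 288 MB/day at 2 MB per block
print(blocks_per_day * 3)   # 432 MB/day at 3 MB per block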

Even these estimations may not hold for more than 10 years. Grin
copper member
Activity: 2156
Merit: 983
Part of AOBT - English Translator to Indonesia
Thanks for the answer guys, I just looked up the data from the link that you provided and saw this. Bitcoin blocks have a 1 MB limit, but from the data below the average is now 1.6 MB, right? I don't really understand, and from the year 2010 it keeps peaking higher, so another question is: if the block is designed to hold 1 MB, why does the graph show more than 1 MB?

Since I know nothing Cry silly me, I also looked at mempool.space and found that blocks can reach more than 1 MB; just by looking at it you can easily spot 2 MB of data. Can you guys explain it to me? Grin

legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
For those who want to run a full node, the option of 500 GB is not available, given that we have reached 484 GB.
Just curious guys, is there any website or thread predicting the full size of the Bitcoin blockchain when all 21 million bitcoin have been mined? Grin

None, you'd have to make several assumptions and do a quick calculation. For example, I'll make a prediction with these details:
1. Each block will have an average size of 1.86MB[1]
2. Current block height is 793386
3. Current blockchain size is 486.89GB (498575.36 MB)[2]
4. Block height 6930000 is when the mining reward reaches 0[3]

Code:
Total blocks will be mined = 6930000 - 793386 = 6136614 blocks
Blockchain size growth = 6136614 blocks * 1.86MB = 11414102.04 MB (11146.58 GB)
Blockchain size after 21M BTC mined = 486.89GB + 11146.58 GB = 11633.47 GB (11.36 TB)
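The same back-of-the-envelope estimate as a reusable sketch in Python, using the numbers above (tweak the inputs as conditions change):

Code:
def size_when_fully_mined(avg_block_mb, height_now, size_now_gb, final_height=6_930_000):
    remaining_blocks = final_height - height_now
    growth_gb = remaining_blocks * avg_block_mb / 1024
    return size_now_gb + growth_gb

print(size_when_fully_mined(1.86, 793_386, 486.89))  # ~11633.47 GB, i.e. ~11.36 TB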

But there's no way we won't see an increase in the maximum block size within a century.

[1] https://www.statoshi.info/d/000000002/blocks?orgId=1&from=1675021380854&to=1686147220976&viewPanel=4
[2] https://blockchair.com/bitcoin/
[3] https://en.bitcoin.it/wiki/Controlled_supply#Projected_Bitcoins_Long_Term
legendary
Activity: 2170
Merit: 1789
Just curious guys, is there any website or thread predicting the full size of the Bitcoin blockchain when all 21 million bitcoin have been mined? Grin
I can find some tools that track blockchain size, such as this one[1], but I can't find any that predict the size when all BTC has been mined. You can probably make a manual calculation with fixed variables such as 1 MB per new block, ~6 new blocks per hour, and so on.
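With those fixed variables the arithmetic is a one-liner (a sketch; actual averages are higher these days):

Code:
# 1 MB per block * ~144 blocks/day (~6/hour) = 144 MB/day
print(1 * 144 * 365 / 1024)  # ~51.3 GB of blockchain growth per year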

The SP A55 doesn't have DRAM, so that's the reason it's having I/O limits.
Maybe we can visualize it in detail if you show the comparison between before and after this problem comes up. For example, my external SSD's speed decreased by a lot (from 200MB/s write to 30MB/s after I copied a ton of data in one go). AFAIK, the SSD speed should come back up after you trim it or clear the cache, even if it doesn't have dedicated DRAM. CMIIW.

[1] https://www.blockchain.com/explorer/charts/blocks-size
copper member
Activity: 2156
Merit: 983
Part of AOBT - English Translator to Indonesia
For those who want to run a full node, the option of 500 GB is not available, given that we have reached 484 GB.

Just curious guys, is there any website or thread predicting the full size of the Bitcoin blockchain when all 21 million bitcoin have been mined? Grin

Bitcoin has now already reached almost 500GB, and I know in 2022 ETH was at around ~700GB. The size keeps getting bigger; is there any chart? I just want to see it. Smiley
legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
I couldn't buy a 1TB Samsung SSD, so I've used a Silicon Power A55; it's the cheapest SLC 2.5" SSD I found.

FYI, your SSD actually only uses SLC for its cache. The flash memory type used to store your data is TLC[2]. If you're looking for an SSD which actually uses SLC to store your data permanently, I expect it would be very expensive. Although there's nothing wrong with your SSD since it's fast enough to handle Bitcoin Core.

--snip--

I mean, after reading the OP, I am shocked, as they also got stuck at some point even after utilizing an SSD with 1 TB of storage. Having the Core data grow more and more day by day could be concerning for an individual. Do I need to replace the entire SSD if someday the blockchain goes beyond that space?

At the very least, it shouldn't be caused by the SSD though. For reference, I use a 3.5" HDD to run 3 instances of Bitcoin Core (mainnet, testnet and signet) without noticeable performance issues.

[1] https://www.silicon-power.com/web/product-ace_a55
[2] https://www.techpowerup.com/ssd-specs/silicon-power-ace-a55-1-tb.d1272
full member
Activity: 896
Merit: 193
web developer for hire
Can you explain what you mean by I/O jamming? I'm not familiar with the term. Is it similar to how your SSD speed becomes slower after a certain time has passed when you copy/paste data?
Transfer speeds for the 465GB blockchain slowed going from the Samsung SSD to the Silicon Power SSD, but the I/O stops or slows during downloads.

Assuming there is no problem with your connection, I believe one of the possible reasons is that the RAM on your A55 SSD is full. AFAIK, low disk space might also cause an SSD to become slow, although I doubt that's the case if you have a 1 TB drive. CMIIW.
The SP A55 doesn't have DRAM, so that's the reason it's having I/O limits.
full member
Activity: 1092
Merit: 227
I shifted the 95%-synced blockchain from my Samsung 500GB SSD to my SP 1TB and restarted Qt. It took hours to get to 100% because of I/O jamming. What's strange is that it didn't I/O jam when transferring from the Samsung 500GB SSD.
Can you explain what you mean by I/O jamming? I'm not familiar with the term. Is it similar to how your SSD speed becomes slower after a certain time has passed when you copy/paste data?
As I understand it, it is the delay that occurs when data moves between input and output devices: the I/O device sends an interrupt signal to the CPU, and in the user's case the memory was nearly full, with only about 55 megabytes of free space, which can cause a complete stall of the system while it waits for the data to be transferred, because the CPU is kept waiting.

For those who want to run a full node, the option of 500 GB is not available, given that we have reached 484 GB.

That is definitely reserved space for caches, and if it's not the system drive then it should have shown 500 GB/500 GB.

I am not sure how it is possible to smoothly run Bitcoin Core on such low disk space. Moreover, it will keep using more and more space as the blockchain grows day by day and transactions keep building up every day.

I mean, after reading the OP, I am shocked, as they also got stuck at some point even after utilizing an SSD with 1 TB of storage. Having the Core data grow more and more day by day could be concerning for an individual. Do I need to replace the entire SSD if someday the blockchain goes beyond that space?
hero member
Activity: 406
Merit: 443
I shifted the 95%-synced blockchain from my Samsung 500GB SSD to my SP 1TB and restarted Qt. It took hours to get to 100% because of I/O jamming. What's strange is that it didn't I/O jam when transferring from the Samsung 500GB SSD.
Can you explain what you mean by I/O jamming? I'm not familiar with the term. Is it similar to how your SSD speed becomes slower after a certain time has passed when you copy/paste data?
As I understand it, it is the delay that occurs when data moves between input and output devices: the I/O device sends an interrupt signal to the CPU, and in the user's case the memory was nearly full, with only about 55 megabytes of free space, which can cause a complete stall of the system while it waits for the data to be transferred, because the CPU is kept waiting.

For those who want to run a full node, the option of 500 GB is not available, given that we have reached 484 GB.
legendary
Activity: 2170
Merit: 1789
I shifted the 95%-synced blockchain from my Samsung 500GB SSD to my SP 1TB and restarted Qt. It took hours to get to 100% because of I/O jamming. What's strange is that it didn't I/O jam when transferring from the Samsung 500GB SSD.
Can you explain what you mean by I/O jamming? I'm not familiar with the term. Is it similar to how your SSD speed becomes slower after a certain time has passed when you copy/paste data?

Assuming there is no problem with your connection, I believe one of the possible reasons is that the RAM on your A55 SSD is full. AFAIK, low disk space might also cause an SSD to become slow, although I doubt that's the case if you have a 1 TB drive. CMIIW.
full member
Activity: 896
Merit: 193
web developer for hire
How much percent blockchain on 500GB SSD?
Raspberry Pi micro SDXC Bitcoin blockchain

For the next part of my Bitcoin fun & learning, I've downloaded the full blockchain onto a 1 TB SSD. I faced I/O jamming, where it stopped downloading or became slow. I couldn't buy a 1TB Samsung SSD, so I've used a Silicon Power A55; it's the cheapest SLC 2.5" SSD I found. Shutting down Qt and restarting got it moving. On my Samsung 500GB SSD there weren't any I/O problems.
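One thing that may help with I/O stalls during the initial download is letting Core keep more of its database in RAM so it flushes to disk less often. A minimal bitcoin.conf sketch; the 2000 MB figure is only an assumption that the machine has RAM to spare:

Code:
# bitcoin.conf - a sketch, adjust to your hardware
dbcache=2000   # MB of UTXO cache (default is 450); a bigger cache means fewer disk flushes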

I shifted the 95%-synced blockchain from my Samsung 500GB SSD to my SP 1TB and restarted Qt. It took hours to get to 100% because of I/O jamming. What's strange is that it didn't I/O jam when transferring from the Samsung 500GB SSD.

In my next fun & learning, I'm going to use VirtualBox to run Bitcoin Qt with Electrum Personal Server on Windows.
