
Topic: challenges of blockchain growth and potential solutions (Read 239 times)

hero member
Activity: 714
Merit: 1298
Hm, challenges. Two weeks ago I acquired a 2 TB Samsung 870 EVO (for $119 from Amazon) to accompany my 1 TB SSD (same manufacturer).
At the moment, the 2 TB drive is used as a backup for the 1 TB one, which holds a synchronized copy of the blockchain.
In a year or so these two SSDs will probably reverse roles.
legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
    - Block Size Limit: What is the community's stance on potentially adjusting the block size limit to manage growth? Are there ongoing discussions on this topic?

There are a few such discussions on this forum, but IIRC there are none on the bitcoin-dev mailing list.

    - Second-Layer Solutions: How effective are second-layer solutions like the Lightning Network in relieving the main blockchain? Are there experiences or concerns that we can share?

LN is only useful for those who sometimes create Bitcoin transactions, since you still need 2 on-chain TXs (1 to open the LN channel and 1 to close it). Bitcoin sidechains remain unpopular.

Quote
Pruning: For those already implementing pruning, how has this impacted your node's resource usage? Any insights or recommendations for others considering this approach?
If you cannot buy a larger disk, then you should apply this now, and set for example 500 GB (or whatever) as your pruning size. But note that with default settings, your node will serve only the last 288 blocks (or around that number) to the network.

No matter how many of the latest blocks you store, a pruned Bitcoin Core node only serves the last 288 blocks, to prevent fingerprinting.
copper member
Activity: 813
Merit: 1944
Quote
I have yet to see any serious voices wanting to reduce the blocksize just to limit the blockchain growth.
Maybe I am not a "serious voice", but I hope I can tell you something interesting: the chain of signatures is preserved only back to the nearest coinbase transaction. This means it is possible to migrate transactions from one chain to another and discard the history, as long as you preserve the latest coinbase transactions and all unspent coins derived from them. In other words, some history is prunable, and you can remove some data without affecting signatures.

Another interesting thought is that "Reclaiming Disk Space", described in the whitepaper, is only implemented as pruning. But this is not the ultimate solution; more could be done to bring it into the Initial Block Download, and to include a proof that the transaction history is correct, instead of preserving that history as plaintext. This also means that if Ordinals abuse the chain too much, some OP_NOPs could be reduced into "data proofs", which would prove that a signature behind some huge OP_NOP, expressed as a given Ordinal, is correct, without storing that data (and even without processing it, because OP_NOPs have no additional meaning on the consensus level).

Quote
I don't think such 'ideas' would have any community support, especially not with the current fees.
Of course, if you preserve the existing code and just reduce the current maximum block size, then nobody will agree to that. However, imagine that future transactions could be combined, with a lot of users hidden behind a small number of signatures. What then? In that case, people might agree to limit the block size, but first users should get features to have their transactions joined, so they can compete with Ordinals in a more honest way than today.
legendary
Activity: 2394
Merit: 6581
be constructive or S.T.F.U
Mostly negative.

He is asking about 'reducing the blocksize,' not increasing it. The points you listed relate to increasing the blocksize. I think the last person to bring up anything of that nature was Luke, with his proposal to reduce the blocksize to 300 kB, with a growth function that takes it to a maximum of around 30 MB. That was actually a very good proposal that unfortunately never saw the light of day.

But yeah, honestly, I have yet to see any serious voices wanting to reduce the blocksize just to limit the blockchain growth. I don't think such 'ideas' would have any community support, especially not with the current fees. Cheesy
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
- Block Size Limit: What is the community's stance on potentially adjusting the block size limit to manage growth? Are there ongoing discussions on this topic?
Mostly negative.

- Lots of development is invested into migrating financial transactions to second layers.
- Most of the Bitcoin Core developers support that conservative approach.
- This specific scaling solution of tinkering with the block size limit has already happened on Bitcoin Cash (and other forks). The fact that it isn't recognized raises even more concern.
legendary
Activity: 2394
Merit: 6581
be constructive or S.T.F.U
It's even cheaper than that. I can find an 8 TB hard drive on Amazon right now for under $130. Even overestimating growth at 150 GB a year (so every block around 2.8 MB), that's 53 years to fill 8 TB at a cost of around $2 per year.

You're correct, Leo. Running your node on your local computer is undoubtedly more cost-effective. The reason I used cloud prices is that they offer a more accurate representation of the total cost. Running a node involves more than just 'disk space'; it includes factors like electricity, bandwidth, and wear and tear on other hardware components, affecting their lifespan. Additionally, the heat generated by your PC might contribute to your AC bill. All these small details, along with the host's profit margin, are factored into cloud prices. Assuming a 10% profit for the host, if running your node on the cloud costs $10, doing it locally would roughly amount to $9.
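The back-of-the-envelope numbers in this exchange can be checked with a short Python snippet. The drive price, growth rate, and 10% hosting margin are the thread's own illustrative figures, not measurements:

```python
# Sanity check of the storage-cost arithmetic from the posts above:
# an 8 TB drive for ~$130 against ~150 GB of chain growth per year.
DRIVE_TB = 8
DRIVE_PRICE_USD = 130
GROWTH_GB_PER_YEAR = 150

years_to_fill = DRIVE_TB * 1000 / GROWTH_GB_PER_YEAR   # ~53 years
cost_per_year = DRIVE_PRICE_USD / years_to_fill        # ~$2.44/year

# Cloud vs. local: if cloud hosting costs $10 and the host's margin is 10%,
# the underlying local cost (power, bandwidth, wear) is roughly $9.
cloud_cost = 10
local_estimate = cloud_cost * (1 - 0.10)

print(f"{years_to_fill:.0f} years, ${cost_per_year:.2f}/yr, local ~${local_estimate:.2f}")
```

Either way the conclusion is the same: raw disk space is a rounding error in the cost of running a node.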
sr. member
Activity: 322
Merit: 300
The rapid growth of the Bitcoin blockchain raises concerns among full-node operators. Here are some brief points:

    Block Size Limit:
    The community is discussing adjusting the block size limit.

    Second-Layer Solutions (Lightning Network):
    The Lightning Network addresses scalability by moving transactions off-chain; its effectiveness depends on user adoption, and it faces its own challenges.

    Pruning:
    Pruning reduces storage space by discarding older block data after it has been validated.

    Optimizations:
    Ongoing efforts to optimize Bitcoin and improve scalability.
legendary
Activity: 2268
Merit: 18711
-snip-
It's even cheaper than that. I can find an 8 TB hard drive on Amazon right now for under $130. Even overestimating growth at 150 GB a year (so every block around 2.8 MB), that's 53 years to fill 8 TB at a cost of around $2 per year.

The limiting factor for running a node going forwards has never been the size of the blockchain, but rather things like the size of the UTXO set and the speed of verification.

    - Optimizations: Are there any ongoing efforts or developments within the Bitcoin protocol to optimize resource usage and improve scalability?
There are proposals for how to limit the growing size of the UTXO set, such as utreexo. If you want to limit the growing size of the blockchain itself, then you simply run a pruned node.

Off-chain solutions like the Lightning Network have been very effective in taking load off the main chain and allowing users to make smaller transactions at a much cheaper rate, securely.
Have they? Lightning's total capacity isn't even 5,000 BTC, and each channel open and close requires a not insignificant on-chain footprint.
copper member
Activity: 813
Merit: 1944
Quote
If that were really the case, then of course it would be a great story and I would go through with it.
Nothing stops a full node operator from receiving payments for their services. However, note that if you only serve data that other full nodes serve for free, people will simply ignore your node and fetch the data from the nodes that offer it for free.

This means that if you want to receive money from people, you should give them a reason to pay. For example, you can run a Lightning node and collect routing fees, or provide additional data features, like scanners, behind a paywall. There are many options.
hero member
Activity: 630
Merit: 731
Bitcoin g33k
My suggestion is very simple: anyone running a full unpruned node on a 2 TB SSD gets a small monthly payment. If miners can get a block worth hundreds of thousands of dollars, 2 TB SSD full nodes can get 5 dollars a month with 95% uptime.

If that were really the case, then of course it would be a great story and I would go through with it. Unfortunately, this is not the reality, which is why many people (including those I know) refuse to run an unpruned full node. But it's a great suggestion which I second
legendary
Activity: 4256
Merit: 8551
'The right to privacy matters'
Hi all, wish you a Happy and Healthy New Year 2024!

Lately I've been contemplating the rapid growth of the Bitcoin blockchain and its potential impact on full-node operators. With the current blockchain size surpassing 500 GB (538.09 GB on Jan/01/2024), there's a concern that some of us may be approaching capacity limits for continuing to host such large amounts of data.

Therefore, I'd like to take this opportunity to initiate an open discussion on this matter and find out what solutions or considerations might already be in place. Here are some thoughts that have crossed my mind:

    - Block Size Limit: What is the community's stance on potentially adjusting the block size limit to manage growth? Are there ongoing discussions on this topic?

    - Second-Layer Solutions: How effective are second-layer solutions like the Lightning Network in relieving the main blockchain? Are there experiences or concerns that we can share?

    - Pruning: For those already implementing pruning, how has this impacted your node's resource usage? Any insights or recommendations for others considering this approach?

    - Optimizations: Are there any ongoing efforts or developments within the Bitcoin protocol to optimize resource usage and improve scalability?

Here are some current stats as of today:

Last Value from Jan 1 2024, 22:03 EST:
538.09 GB

Value from 1 Year Ago:
446.05 GB

Change from 1 Year Ago:
20.63%

Average Growth Rate:
118.8%

I believe that by pooling our collective knowledge and experiences, we can better understand the current landscape and potentially contribute to the ongoing evolution of the Bitcoin ecosystem. Please share your thoughts, insights, or any information you may have on this topic. Looking forward to a fruitful discussion!

Cheers,
citb0in

Spend money and get a 2 TB SSD to run a full node.
2 TB SSDs are cheap enough.

My suggestion is very simple: anyone running a full unpruned node on a 2 TB SSD gets a small monthly payment. If miners can get a block worth hundreds of thousands of dollars, 2 TB SSD full nodes can get 5 dollars a month with 95% uptime.


legendary
Activity: 2394
Merit: 6581
be constructive or S.T.F.U
  - Block Size Limit: What is the community's stance on potentially adjusting the block size limit to manage growth? Are there ongoing discussions on this topic?

Some people want larger blocks; I have not seen anyone discuss smaller blocks for ages, unless it's done as part of some sarcasm, like this one:

That's exactly the reason we need to decrease the blocksize to 100 kb to kill all possible spam!  Wink


Quote
I've been pondering the relationship between ordinals and the increasing block size in Bitcoin. How do you think the ordinal approach could impact the scalability of the blockchain?

Ordinals do slightly contribute to a larger blocksize, although by nature those transactions are not large; more transactions = a larger blocksize (although not always). But Ordinals are not to blame: by consensus, a block can be as large as 4 MB. While that is possible on paper, it would be very difficult to construct a long run of blocks at 4 MB, or even 3.5 MB, unless they contain some weird transactions. The worst-case scenario would be 576 MB of added size daily, roughly 200 GB a year, which is obviously very unlikely to happen, since most blocks are "natural" and won't fill 4 MB worth of transactions. Take last year's average, which is roughly 1.75 MB, call it 2 MB, and that works out to an average of about 100 GB added per year.
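The growth figures above can be checked in a few lines of Python; the ~144 blocks/day figure is the usual approximation of one block every ten minutes, and the 4 MB / 2 MB block sizes are the post's own assumptions:

```python
# Worst-case vs. typical chain growth, using the post's assumptions:
# ~144 blocks/day, the 4 MB consensus ceiling, and a ~2 MB observed average.
BLOCKS_PER_DAY = 144

worst_daily_mb = 4 * BLOCKS_PER_DAY                   # 576 MB/day
worst_yearly_gb = worst_daily_mb * 365 / 1000         # ~210 GB/year
typical_yearly_gb = 2 * BLOCKS_PER_DAY * 365 / 1000   # ~105 GB/year

print(worst_daily_mb, round(worst_yearly_gb), round(typical_yearly_gb))
# 576 210 105
```

So "roughly 200 GB a year" worst case and "about 100 GB per year" at current averages both hold up.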

For cloud storage, you are looking at an average of $10-$20 a year. The VPS I use charges $2/month for an extra 200 GB SSD, so that works out to about $12 a year per 100 GB of growth (you can certainly find cheaper offers); for plain disk storage, which is good enough for this use case, it's probably 1/3 the price.

As it stands right now, you can get a decent VPS with 1 TB of storage for $12 a month. I suppose many folks who run a full node have not faced disk issues yet, and storage will only become cheaper over time, so +100 GB a year is not a concern IMO.



 
legendary
Activity: 4466
Merit: 3391
    - Block Size Limit: What is the community's stance on potentially adjusting the block size limit to manage growth? Are there ongoing discussions on this topic?

The maximum block size ensures that the size of the block chain (and associated data) grows linearly while technological advances continue to cut costs exponentially.
hero member
Activity: 630
Merit: 731
Bitcoin g33k
Thanks for sharing your thoughts.

I've been pondering the relationship between Ordinals and the increasing block size in Bitcoin. How do you think the Ordinals approach could impact the scalability of the blockchain? Are there any innovative proposals or ongoing discussions in the community regarding this? Is it only me who wants those "dirtynals" stopped?
legendary
Activity: 2114
Merit: 2248
Playgram - The Telegram Casino
   - Block Size Limit: What is the community's stance on potentially adjusting the block size limit to manage growth? Are there ongoing discussions on this topic?
There are varying opinions on this; I am personally against increasing the blocksize.
Yes, there have been many discussions on it, most of them from way back on the forum.

   - Second-Layer Solutions: How effective are second-layer solutions like the Lightning Network in relieving the main blockchain? Are there experiences or concerns that we can share?
Second-layer off-chain solutions like the Lightning Network have been very effective in taking load off the main chain and allowing users to make smaller transactions at a much cheaper rate, securely.
copper member
Activity: 813
Merit: 1944
Quote
Block Size Limit: What is the community's stance on potentially adjusting the block size limit to manage growth? Are there ongoing discussions on this topic?
If you control some large mining pool, then you can shrink it in the blocks produced by your own pool. But I expect it will be hard to change it at the consensus level.

Quote
Second-Layer Solutions: How effective are second-layer solutions like the Lightning Network in relieving the main blockchain? Are there experiences or concerns that we can share?
It is good if you can process N payments off-chain and then summarize them on-chain, but it is unlikely that this alone will be sufficient.
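The "N payments off-chain, one settlement on-chain" idea can be sketched as a toy in Python. This is a deliberately simplified illustration, not the actual Lightning protocol (real channels use signed commitment transactions, HTLCs, and penalty mechanisms):

```python
# Toy model of a payment channel: two parties lock funds with one on-chain
# transaction, exchange many payments by updating balances locally, and only
# the final state returns to the chain with one closing transaction.

def settle(balance_a: int, balance_b: int, payments: list[int]) -> tuple[int, int]:
    """Apply off-chain payments (positive = A pays B, negative = B pays A)
    and return the final channel state that would be settled on-chain."""
    for amount in payments:
        balance_a -= amount
        balance_b += amount
    return balance_a, balance_b

# 1 on-chain TX to open the channel, any number of off-chain updates,
# 1 on-chain TX to close: four payments collapse into a single settlement.
final_a, final_b = settle(100_000, 0, [500, -200, 1_000, 300])
print(final_a, final_b)  # 98400 1600
```

The point of the sketch: no matter how many entries the payments list has, the on-chain footprint stays at two transactions, which is why each channel amortizes well only for users who transact repeatedly.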

Quote
Pruning: For those already implementing pruning, how has this impacted your node's resource usage? Any insights or recommendations for others considering this approach?
If you cannot buy a larger disk, then you should apply this now, and set for example 500 GB (or whatever) as your pruning size. But note that with default settings, your node will serve only the last 288 blocks (or around that number) to the network.
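One practical wrinkle when following this advice: Bitcoin Core's `prune=` option in bitcoin.conf is specified in MiB (with a minimum of 550), while people usually think in GB. A small helper to convert a disk budget into a config value; the 500 GB figure is just the post's example:

```python
# Convert a disk budget in GB into a value for bitcoin.conf's prune= option.
# Bitcoin Core expects the pruning target in MiB and rejects values below 550.

def prune_target_mib(budget_gb: float) -> int:
    mib = int(budget_gb * 1000**3 / 1024**2)  # GB (decimal) -> MiB (binary)
    return max(mib, 550)                      # enforce Core's minimum

print(prune_target_mib(500))  # 476837 -> put "prune=476837" in bitcoin.conf
```

Leave some headroom below the disk's actual capacity, since the UTXO set, indexes, and undo data consume space on top of the pruned block files.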

Quote
Optimizations: Are there any ongoing efforts or developments within the Bitcoin protocol to optimize resource usage and improve scalability?
Yes, work on a UTXO-based model is in progress; you can check "assumeutxo" and similar features.

Quote
Here are some current stats as of today:
Because of Ordinals and other stuff like that, just assume 4 MB per block. It is better to overestimate than to underestimate and buy too small a disk.
hero member
Activity: 630
Merit: 731
Bitcoin g33k
Hi all, wish you a Happy and Healthy New Year 2024!

Lately I've been contemplating the rapid growth of the Bitcoin blockchain and its potential impact on full-node operators. With the current blockchain size surpassing 500 GB (538.09 GB on Jan/01/2024), there's a concern that some of us may be approaching capacity limits for continuing to host such large amounts of data.

Therefore, I'd like to take this opportunity to initiate an open discussion on this matter and find out what solutions or considerations might already be in place. Here are some thoughts that have crossed my mind:

    - Block Size Limit: What is the community's stance on potentially adjusting the block size limit to manage growth? Are there ongoing discussions on this topic?

    - Second-Layer Solutions: How effective are second-layer solutions like the Lightning Network in relieving the main blockchain? Are there experiences or concerns that we can share?

    - Pruning: For those already implementing pruning, how has this impacted your node's resource usage? Any insights or recommendations for others considering this approach?

    - Optimizations: Are there any ongoing efforts or developments within the Bitcoin protocol to optimize resource usage and improve scalability?

Here are some current stats as of today:

Last Value from Jan 1 2024, 22:03 EST:
538.09 GB

Value from 1 Year Ago:
446.05 GB

Change from 1 Year Ago:
20.63%

Average Growth Rate:
118.8%
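For what it's worth, the year-over-year change reported above can be reproduced directly from the two values:

```python
# Reproduce the "Change from 1 Year Ago" figure from the two reported sizes.
current_gb = 538.09   # Jan 1, 2024
year_ago_gb = 446.05  # Jan 1, 2023

change_pct = (current_gb - year_ago_gb) / year_ago_gb * 100
print(f"{change_pct:.2f}%")  # 20.63%
```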

I believe that by pooling our collective knowledge and experiences, we can better understand the current landscape and potentially contribute to the ongoing evolution of the Bitcoin ecosystem. Please share your thoughts, insights, or any information you may have on this topic. Looking forward to a fruitful discussion!

Cheers,
citb0in