
Topic: What is the technical reason why bitcoin can't scale?

member
Activity: 266
Merit: 20
core has no power.
Hmm, you must have been sleeping
sure, attack me personally, but don't provide anything with substance...


Sorry actual reality does not have enough substance for you.

Enjoy feeding your unicorn at the chocolate falls, where bitcoin shines eternal.

Have a Great Day.

 
full member
Activity: 154
Merit: 177
core has no power.
Hmm, you must have been sleeping
sure, attack me personally, but don't provide anything with substance...

when Core threatened the miners that they would brick all of their millions of asics by changing the PoW algo, costing the miners billion$.
https://news.bitcoin.com/bitcoin-developers-changing-proof-work-algorithm/
Quote
Bitcoin Community Members and Developers Propose Changing Bitcoin’s Proof-of-Work
a threat and a proposition are nothing real (i wouldn't take anything from bitcoin[dot]com seriously)

https://www.nasdaq.com/articles/why-viabtc-rejects-segwit-soft-fork-in-favor-of-block-size-hard-fork%3A-interview-with-haipo
Quote
Do you think that the Bitcoin Core development team should be fired?

I believe that the Bitcoin Core developers currently have too much power; there's no system of checks and balances for them.
They can decide to make massive changes to Bitcoin based on their own personal preferences, and then force those changes on to the users.
do you think, i believe.... again - no substance. and no, core can't force changes onto the users!!

The btc users would have loved being able to mine btc on their home PCs again, but as it was just a blackmail threat to get their way on segwit,
do they? who are "the btc users"? i don't want to mine on my home pc again, but i guess i don't count as "a btc user" then...

after all, if paypal offered payments for micro-pennies but said it will charge $80 per deposit and $80 per withdrawal,
people will just use venmo
are you comparing bitcoin with paypal and venmo?

i edited it to clear up any misunderstanding

im comparing bitcoin to dollars.. and paypal to sidechains/altnets like LN
im comparing an altcoin to euro.. and venmo to an altcoin sidechain/altnet


Did you just call the Lightning Network an "altcoin network"? Because that's a misinformed way to describe it. It needs an onchain transaction to open and fund a channel in Lightning. In actuality, those Lightning transactions are Bitcoin transactions that have not yet been broadcast to the Bitcoin network.
i think it is a good idea to ignore franky1 on anything he says about the lightning network
legendary
Activity: 2898
Merit: 1823
after all, if paypal offered payments for micro-pennies but said it will charge $80 per deposit and $80 per withdrawal,
people will just use venmo
are you comparing bitcoin with paypal and venmo?

i edited it to clear up any misunderstanding

im comparing bitcoin to dollars.. and paypal to sidechains/altnets like LN
im comparing an altcoin to euro.. and venmo to an altcoin sidechain/altnet


Did you just call the Lightning Network an "altcoin network"? Because that's a misinformed way to describe it. It needs an onchain transaction to open and fund a channel in Lightning. In actuality, those Lightning transactions are Bitcoin transactions that have not yet been broadcast to the Bitcoin network.
legendary
Activity: 4214
Merit: 4458
the main problem with not increasing the block size* is that the limited number of transactions allowed ends up covering all the mining costs

e.g. if it costs $200k to mine a block, and only 2500 transactions are allowed,
then each transaction ends up costing $80
whereas allowing say 10,000 transactions by removing the kludgy math of the /4 wall means transactions could be diluted to $20, while still allowing the new dev-sought data limit of 4mb acceptance

keeping the capacity (the real politics) at ~2500 just makes for the argument of "onchain fees are high, so use an altnet"
yet
users would not want to pay $80 to lock an asset to / release an asset from another network

thus those proposing the onchain stifling end up shooting themselves in the foot
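The fee arithmetic in this post can be checked with a few lines. A minimal sketch; the $200k block cost and the transaction counts are the post's illustrative numbers, not live network data:

```python
def avg_fee_to_cover(block_cost_usd: float, txs_per_block: int) -> float:
    """Average fee each transaction must contribute to cover one block's mining cost."""
    return block_cost_usd / txs_per_block

# ~2,500 transactions sharing a $200k block cost, vs ~10,000 sharing the same cost
print(avg_fee_to_cover(200_000, 2_500))   # 80.0
print(avg_fee_to_cover(200_000, 10_000))  # 20.0
```

The point being argued is simply that fee-per-transaction is total miner revenue divided by capacity, so quadrupling the transaction count quarters the average fee needed.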

after all, if paypal offered payments for micro-cents of all dollar-fiat payments, but said it will charge $80 per deposit and $80 per withdrawal if people use the dollar with them,
people will just use venmo measured in euro, and convert to dollar elsewhere
or
would convert dollar to euro and then deposit with a paypal that charges a $1 deposit and $1 withdrawal
thus no one uses dollar in paypal

the other silly economic politics is:
"reduce onchain utility 'coz average joe needs to store the blockchain"
which is countered by
"offer custodial channels offchain 'coz average joe won't run full nodes 24/7 for their once-a-year unlock"
meaning average joes are still not adopting bitcoin full nodes, and all that are running full nodes become central custodians.. again self-defeating the pretend purpose of their game

*for transaction count stagnation, not for silly political bloat of data excuses

The most bizarre step in the long, slow, drawn out attempts to scale bitcoin is segwit.
Segwit itself increased the limit on the size of a block by roughly 4 times, which, oddly enough, is exactly that ... a block size increase ...
the kludgy math to increase the data bloat did not result in an increase in transaction capacity. thus its "we increased the block size" missed the whole point of the real reason the community wanted a blocksize increase

it's like a woman asking for bigger pants so that when she gets pregnant she has room to comfortably incubate kids..
the husband gave her bigger pants but got a vasectomy, refusing to give her the kids she actually wants the pants for. oh, and now he just wants to make her fat to fit into the pants, to then have an excuse to leave her for another woman

Actually the scaling issue is, in my opinion, a case of ignoring the fact that networks get better and storage gets larger.

While many like to say that increasing the block size or decreasing the time between blocks isn't a long term solution, it is indeed obvious that with increased world bandwidth and increased average storage, block size increases and time reductions can be handled incrementally.

well, you will now hear the conservative stagnators say,
by analogy,
"can't do livestreams on the internet because of 1999 floppy disks"

legendary
Activity: 4466
Merit: 1798
Linux since 1997 RedHat 4
Actually the scaling issue is, in my opinion, a case of ignoring the fact that networks get better and storage gets larger.

While many like to say that increasing the block size or decreasing the time between blocks isn't a long term solution, it is indeed obvious that with increased world bandwidth and increased average storage, block size increases and time reductions can be handled incrementally.

The most bizarre step in the long, slow, drawn out attempts to scale bitcoin is segwit.
Segwit itself increased the limit on the size of a block by roughly 4 times, which, oddly enough, is exactly that ... a block size increase ...
legendary
Activity: 4214
Merit: 4458
here's the thing though..
even your 5 Mb/s
is more than needed to stream HD movies..
far more than a block needs
https://www.download-time.com/
you can download 500 MB in 5 minutes
1 GB in 10 minutes
10 GB in 100 minutes (1 hour 40 min)
100 GB in 1000 minutes (16 hours 40 min)
300 GB in 3000 minutes (under 50 hours)

so even in your less-than-ideal country only offering 5 Mb/s
you're not waiting weeks to download the blockchain..
(unlike the unoptimised method 7 years ago that felt like weeks to download just under 50 GB)
yep, things have moved forward in the last 7-12 years
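The table above is just size divided by bandwidth. A sketch to reproduce it; note the quoted figures are internally consistent at ~100 MB per minute (~1.67 MB/s effective throughput), which is an assumption about the effective rate, not a measured one:

```python
def download_minutes(size_gb: float, rate_mb_per_s: float) -> float:
    """Minutes to download size_gb gigabytes at rate_mb_per_s megabytes per second."""
    return size_gb * 1000 / rate_mb_per_s / 60

EFFECTIVE_RATE = 100 / 60  # ~1.67 MB/s, i.e. ~100 MB per minute

print(round(download_minutes(1, EFFECTIVE_RATE)))    # 10 (minutes for 1 GB)
print(round(download_minutes(300, EFFECTIVE_RATE)))  # 3000 (minutes, ~50 hours, for 300 GB)
```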
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
seems someone is trying just a little too hard to push excuses not to evolve
residential internet in the 90's was 56 kb/s dialup
in the 00's it was 0.5 Mb/s broadband, 10x dialup
in the 10's it was 5 Mb/s adsl, 10x broadband

which is where satoshi based his limits in 2010

in 2020 it's now 50 Mb/s fibre/5g
so the internet has sped up 10x since satoshi's limit, yet the limit has sped up 0x
the transaction count per block has not increased above the proposed 4700tx possibility
we have never had a single day of satoshi's proposed 4700tx (7tx/s)

so we are actually performing at below half the potential suggested in 2010

ISP consumer router speeds do not scale with technical advancements in routing technology; for example, in the US nearly all ISPs throttle your internet speeds to, say, 7 Mbit/s despite having some of the fastest network stacks in the world.

This also includes so-called "5G" packages that throttle your connection from the datacenter and give you mediocre speed.
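The ~7 tx/s figure quoted in this thread is just per-block transaction capacity divided by the 600-second target block interval. A quick sketch (4,700 is the poster's claimed per-block maximum, not a protocol constant):

```python
BLOCK_INTERVAL_S = 600  # Bitcoin's 10-minute target block time

def tx_per_second(txs_per_block: int) -> float:
    """Throughput upper bound implied by a per-block transaction count."""
    return txs_per_block / BLOCK_INTERVAL_S

print(round(tx_per_second(4700), 1))  # 7.8 tx/s at the claimed maximum
print(round(tx_per_second(2500), 1))  # 4.2 tx/s at the observed average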
legendary
Activity: 4214
Merit: 4458
seems someone is trying just a little too hard to push excuses not to evolve
residential internet in the 90's was 56 kb/s dialup
in the 00's it was 0.5 Mb/s broadband, 10x dialup
in the 10's it was 5 Mb/s adsl, 10x broadband

which is where satoshi based his limits in 2010

in 2020 it's now 50 Mb/s fibre/5g
so the internet has sped up 10x since satoshi's limit, yet the limit has sped up 0x
the transaction count per block has not increased above the proposed 4700tx possibility
we have never had a single day of satoshi's proposed 4700tx (7tx/s)

so we are actually performing at below half the potential suggested in 2010
...

no one is screaming the need for 100x by midnight. not even 10x.
but at least true progress above the average ~2500 tx per block that we have seen for 12 years

devs deem 4mb (weight) safe, so allowing full utility of 4x legacy, with no kludgy /4 wall crap, would at least get things moving to where they should be 12 years into a project

and also including a better fee mechanism would help too
after all, devs have made legacy a premium by encoding a 4x treatment of legacy transactions, so how about code for a ?x treatment of transactions spending outputs under 6 blocks old, to stop spammers
to segregate/split the spammers from the occasional user

as for not wanting to store other people's transactions from 30 years ago (the 2051 scenario mentioned in the post above): well, don't be a full noder if you don't care about other people's transactions.
after all, what becomes your acceptable value that you do "want" to store of other people's spending habits?

is your limit that you only want to have
transactions of 1% of bitcoin price ($330 today) be stored in the blockchain
transactions of 0.1% of bitcoin price ($33 today)
transactions of 0.01% of bitcoin price ($3.30 today)

so do you want the "dust" limit raised to 0.0001 btc ($3.30 today) to avoid 'coffee spends'
..
let's word your presumption of desired storage another way:
how much would you consider it worth spending on computer equipment to give you full node utility for, say, the average PC upgrade period of 5 years

..
if you want to be more biased about unwanted spends,
don't make it about the amount spent,
make it about the confirm age
be biased and hate transactions where a UTXO that's only 1-3 confirms old is being spent
hate spam.. then you might have a talking point worthy of discussion
copper member
Activity: 1624
Merit: 1899
Amazon Prime Member #7
Is it technically infeasible to have a secure network while at the same time having faster transactions, without second layer solutions?

With the internet, connection speeds increased from 56kbps dialup to adsl to cable to adsl2 to now 500mbps. We could see progress being made and changes to the technology.
Internet speeds have increased a lot over the past 30 years, but I don't see residential speeds increasing by a lot from what is available currently. In the 90's and early 2000's, if you were using a residential internet connection to perform some task, generally speaking the internet connection would represent the bottleneck; for example, if you were downloading a song, the song would download at a rate slower than your ability to listen to it, or if you were visiting a webpage with many high-resolution pictures, you would have to wait for the pictures to load. Today, if you are using a residential internet connection, you can download a movie in HD in a matter of seconds or minutes, and your bottleneck is your computer's ability to process and save the information received via the internet. I also don't see things like video or image quality increasing to the extent they have in the past, because there comes a point at which the human eye will not see the difference in quality of an image/video if the quality is increased.

The point I am trying to make in the above is that there will not be demand to justify the investment required to increase residential internet speeds by amounts speeds have increased in the past.

There is also the issue of processing speed. When a block is received by a node, the node needs to validate (or have validated) every transaction in the block, and it needs to do so very quickly so it knows which transactions to accept or reject, and if it should accept or reject a future block. The processing capacity of chips has improved, similar to how internet speeds have improved, and there is demand for increased processing capacity of chips, however, it has become more difficult to further increase the processing capacity of chips, and this will likely be the bottleneck that prevents bitcoin from scaling without a 2nd layer. There is the potential that advances in technology will make verifying on-chain transactions more efficient, however, there are theoretical limits to this technology.

Finally, in the year 2051, why should I care that you paid $5 worth of coin on a Starbucks coffee today in 2021? Why should I have to store this transaction, along with the millions of other people that bought coffee at Starbucks on a given day? Granted, there is a setting in bitcoin core that allows you to only save x number of recent blocks. If you have a billion people using bitcoin every day, a billion people will need to store every transaction that every other person makes. With LN, or other second-layer solutions, only a small number of users will need to even be aware of any particular transaction
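The "setting in bitcoin core" mentioned above is block-file pruning: a pruned node still downloads and validates every block during initial sync and keeps the full UTXO set, but discards old raw block data afterwards. A minimal configuration sketch (550 MiB is, to my knowledge, the smallest value Bitcoin Core accepts for this option):

```ini
# bitcoin.conf: discard raw block/undo files after validation,
# retaining at least the most recent 550 MiB of block files
prune=550
```

Note a pruned node cannot serve historical blocks to peers or rescan arbitrarily old wallet transactions, which is the tradeoff being discussed.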
jr. member
Activity: 113
Merit: 1
In some places, a single TCP connection may take more than 5 minutes to establish, and the line usually drops. Maybe it is shorter now.
hero member
Activity: 667
Merit: 1529
Quote
Unless you're telling me the CPU hashes at the rate of 1 per second?
No, but something around 1024 hashes per second is realistic. But because people still want to mine on their CPUs, they create more and more complex algorithms that slow down everything, instead of focusing on decentralizing mining without changing the mining algorithm, so I expect future CPU-based altcoins will slow down hashing even more, as soon as some people start developing specialized devices where it will be profitable.

Edit: Actually, it depends. If only the mining algorithm is changed, so only block headers are hashed in a different way, then that slows everything down to 1024 hashes per second. But if you have 1024 hashes inside the merkle tree to compute, and if transaction hashes use the same algorithm as in mining, then everything gets worse, and the more transactions there are, the more time is spent on verification. So, I expect that when there are many transactions in a block, it can actually take one second per 1024 transactions (or maybe 512 transactions, because they are hashed twice, or maybe even 256, because the internal nodes in the merkle tree double that), so one second per block just for hashing is possible.
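The "hashed twice" point above refers to Bitcoin's use of double SHA-256, both for block headers and for merkle tree nodes. A minimal sketch of the merkle-root computation (real Bitcoin uses txids, which are themselves double-SHA-256 hashes, and has specific byte-order conventions that are ignored here):

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash leaves, then fold pairwise up to a single root.
    Bitcoin duplicates the last node when a level has an odd count."""
    level = [dsha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"tx1", b"tx2", b"tx3"]).hex())
```

This also shows why hashing cost scales with transaction count: a block with n transactions needs roughly 2n double-SHA-256 invocations for the tree alone, which is the doubling the poster describes.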
legendary
Activity: 2954
Merit: 4158
As I said, the 10-minute interval has nothing to do with decentralization, or better, it's not directly related. You should think of it another way: if the network generated a new block every 5 minutes instead of 10, how many blocks should you wait? Wouldn't that be 12? Probably yes, but there is another thing to count too.
That is actually equivalent security only (given similar conditions except block time). The math, as in the whitepaper, involves the calculation in terms of the number of confirmations, not time. The probability of an attacker catching up from X blocks behind decreases exponentially with increased X, given a hashrate of <51%. People often approximate it as 1 hour, just because 6 conf = ~1 hr.
The orphaned blocks. The shorter the generation time, the more orphaned blocks there will be. By that, I conclude that 6 10-minute-interval blocks provide better security than 12 5-minute-interval blocks.
Orphan blocks are child blocks received by a client that has not yet received the parent block. You are most likely referring to stale blocks instead.
It depends. As said, it is a tradeoff between getting stales and a faster block interval. I don't think faster blocks are necessary, but if you need on-chain capacity right now then a bigger block would be the most direct solution.

Stale blocks occur when miners aren't mining on the same chain as a result of seeing different blocks at the same height due to a delay in propagation, etc. To prevent this, the miners just have to be well connected; not a problem, we already know miners have internal networks of sorts and in certain cases SPV-mine on one another. Ensuring good connectivity is in their best interest. Validation of blocks takes less than a second for miners, and propagation throughout at least 50% of the network takes less than 10 seconds; I know a few research papers referenced the propagation time[1], not sure about the accuracy, but I'm compelled to think that it is rather fast. So long as you can propagate blocks efficiently and ensure that miners receive information in a timely manner, there shouldn't be a far greater increase in stale rates. As long as the majority of your miners are well connected to each other, stale rates wouldn't become that big of a factor.

Propagation of blocks or information in the network has improved tremendously throughout the years. Compact block helps with decreasing the delay with the propagation, FIBRE used to do so as well.
It depends on mining algorithm, because sha256d is quite fast, Scrypt is a bit slower, but take yescrypt and see how long it would take, even if two nodes are connected on localhost.
I can't find any benchmarks on that. I'm assuming that since hashing the (relatively) few block headers is about the least resource intensive task, a small increase in the time shouldn't affect it too much. Signature and script validation are far more time consuming. Unless you're telling me the CPU hashes at the rate of 1 per second?

[1] https://sites.cs.ucsb.edu/~rich/class/cs293b-cloud/papers/bitcoin-delay says 6.5s for median, but it was quite a few years ago.

If the actual median is 6.5s, then at a 600s interval the stale rate would be roughly 6.5/600 = 1.08%; at a 300s interval, 2.15%, assuming the majority of the miners also see the block at ~6.5s. That is twice the probability, but the pros would still outweigh the cons. Bitcoin's landscape and some of its functions have changed throughout the years; not all assumptions still hold true. Note that the percentage also assumes that the economic majority receives the block around 6.5s; the actual time could be even lower for <50% of the sample size.
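The "decreases exponentially" claim above is the attacker-catch-up calculation from section 11 of the whitepaper. A direct transcription of that formula (q is the attacker's share of hashrate, z the number of confirmations the defender waits):

```python
from math import exp, factorial

def attacker_success(q: float, z: int) -> float:
    """Whitepaper section 11: probability that an attacker controlling a
    fraction q of hashrate ever catches up from z blocks behind."""
    p = 1.0 - q
    lam = z * (q / p)  # expected attacker progress while we mine z blocks
    total = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        total -= poisson * (1 - (q / p) ** (z - k))
    return total

# With 10% attacker hashrate and 6 confirmations, the whitepaper's table
# gives ~0.0002428:
print(attacker_success(0.10, 6))
```

This is why the reply above stresses that security is measured in confirmations, not wall-clock time: halving the block interval and doubling the confirmation count leaves this number essentially unchanged (stale rates aside).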
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
Quote
What is the technical reason why bitcoin transactions can't scale?
Because they cannot be joined. There are many transaction chains like Alice -> Bob -> Charlie -> Daniel. That could be replaced with an Alice -> Daniel transaction.
The problem is that these transactions do not very often converge into one. Most of the time the amounts will change in an unpredictable way (e.g. Alice -> Bob: 1 BTC, Bob -> Charlie 0.3, then Charlie -> Daniel 0.8 with another input, and so on). The transactions could be joined into one anyway, but the size advantage then becomes small because the joined transaction would have lots of inputs and outputs.

Lightning is just made to solve that: due to the interconnection between a large number of nodes representing different roles (merchant, consumer, exchange, etc.), it has enough flexibility to allow the joining of a large number of transactions even with unpredictable amounts, and the "joining", i.e. the channel-closing transaction, only involves two parties (and in ideal cases is not even needed).

But the basic problem is verification, like you already mentioned. We could bring it down to the phrase: "Every Bitcoin user has to verify that every other user, with each transaction (even a coffee), did own their money and did not double spend, and we have to take into account up to 12-year-old data" (fortunately most of them only once, with the initial blockchain download). This becomes much more problematic with bigger blocks, as this old study already showed - even if we have better hardware now in 2021, 10 MB blocks (a bit more than double the current maximum) would be the maximum that could be handled relatively comfortably on average PCs. And no, limiting the verification process to miners does not solve the problem, because then miners could impose their conditions and modifications at will.

There were attempts like the Mini-blockchain scheme (which is, with some modifications, used in a handful of altcoins) to limit verification work to a relatively recent dataset (e.g. 1 week), but they have their own tradeoffs. Basically these approaches consider everything which is older than a certain block height as "finalized", but this opens attack vectors on the "finalized data", above all when the network interconnection is unstable and single users could be sybil-attacked. I nevertheless consider these attempts interesting.
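The Alice -> Bob -> Charlie -> Daniel point can be made concrete by netting a chain of payments: pure intermediaries cancel out, but only when the amounts line up. A toy sketch that just nets balance changes (it does not model real transactions, inputs, or signatures):

```python
from collections import defaultdict

def net_transfers(payments: list[tuple[str, str, float]]) -> dict[str, float]:
    """Collapse (payer, payee, amount) records into net balance changes.
    Parties who net to exactly zero (pure pass-through hops) drop out."""
    balance: dict[str, float] = defaultdict(float)
    for payer, payee, amount in payments:
        balance[payer] -= amount
        balance[payee] += amount
    return {who: amt for who, amt in balance.items() if amt != 0}

# A 1 BTC payment passed along unchanged: only the endpoints remain.
chain = [("Alice", "Bob", 1.0), ("Bob", "Charlie", 1.0), ("Charlie", "Daniel", 1.0)]
print(net_transfers(chain))  # {'Alice': -1.0, 'Daniel': 1.0}

# With the post's unpredictable amounts, intermediaries do NOT cancel,
# so the "joined" transaction still carries their outputs.
uneven = [("Alice", "Bob", 1.0), ("Bob", "Charlie", 0.25)]
print(net_transfers(uneven))
```

This is the structural reason cut-through works well in a payment-channel network (balances are netted continuously) but rarely applies to independent on-chain transactions.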
legendary
Activity: 978
Merit: 1080
Because they cannot be joined. There are many transaction chains like Alice -> Bob -> Charlie -> Daniel. That could be replaced with an Alice -> Daniel transaction.

They can be joined in Mimblewimble, stripping out all intermediate outputs.
Which is particularly nice since Mimblewimble uses confidential transactions in which each output comes with a large rangeproof. Thanks to cancellation of spent outputs, an Initial Block Download only needs to download rangeproofs for the UTXO set.

It still leaves a ~100 Byte kernel as a permanent record of each transaction though.
legendary
Activity: 4214
Merit: 4458
i know people love positive PR and try getting any criticism or factual negatives deleted: but let's see how long my post lasts

What is the technical reason why bitcoin transactions can't scale?
There has been improvement with the adoption of Segwit and Taproot in which the transaction fees are reduced up to 50% while the Taproot transactions will be more efficient with also 2.5x faster block validation and 30% to 75% savings on multisig.

but segwit stalled out at an average of 1800-2500 tx per block


  • The scaling issue requires increasing the size of the chain.
  • The decentralization occurs when it's not increasing.

getting people onto sidechains/altnets where they mainly use lite phone apps is actually going to cause a lot of people not to run full archive nodes for bitcoin, especially when their bitcoin is locked up for more than a month. they see no purpose in running full nodes every day

The biggest bottleneck is not downloading time, you can download all blocks quite quickly. The verification is what takes the most of the time. If you take some altcoins, especially CPU-based with slow and complicated mining algorithms on purpose, you can clearly see that. But in Bitcoin, ECDSA verification also takes a lot of time. And when it comes to processing speed, we stopped at around 8 GHz and going faster is very hard.

mhm, and yet the altnet proposed as the bitcoin solution, which wishes people to migrate over long term, says people can create and swap public keys, then create multisig addresses, then create transactions and validate signatures in the millions per second... thus debunking your theory that computers can't handle large-capacity transaction verification

in short:
the 1mb 4-weight limit is maths from 2009, when the internet was 0.5mb/s.. 12 years later the internet is 100x that
hard drives were 250gb, but are now 10x that at 2tb
.
even if we went to an old-style 4mb limit to allow an average 8-10k transaction count (near true 4x capacity growth)
it would take 8-10 years to fill a 2tb hard drive, and most people update their computers every 5 years.
heck, you can get a 4tb hard drive for less than one week's grocery bill, and that would give you more than a decade's buffer

..
last criticism of some posters' false beliefs:
some posters in this topic still believe that changes to bitcoin's code in the last 5 years have made transactions cheaper. yet if you read the code you will see they made legacy transactions a 4x premium. thus segwit transactions are not 'discounted' in any way; they are miscounted, at least.
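The hard-drive arithmetic in this post checks out under its own assumptions. A sketch; it assumes every block is completely full at 4 MB and the ~144 blocks per day implied by the 10-minute target:

```python
BLOCKS_PER_DAY = 24 * 60 // 10  # 144 blocks at a 10-minute target interval

def years_to_fill(drive_tb: float, block_mb: float) -> float:
    """Years until constant block_mb-per-block growth fills a drive_tb drive."""
    mb_per_year = block_mb * BLOCKS_PER_DAY * 365
    return drive_tb * 1_000_000 / mb_per_year

# 4 MB blocks add ~210 GB/year, so a 2 TB drive lasts about 9.5 years:
print(round(years_to_fill(2, 4), 1))  # 9.5
```

Real growth would be lower than this worst case, since not every block is full; that is the direction the post's "8-10 years" range leans on.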
legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
With the internet, connection speeds increased from 56kbps dialup to adsl to cable to adsl2 to now 500mbps. We could see progress being made and changes to the technology.

I'll just refer to my older post.

You're missing the point; the bitcoin community mostly agrees that running a full node should be cheap, which means block size is limited by hardware and internet growth rates.

IMO blockchain size and internet connection aren't the worst part, but rather CPU, RAM and storage speed:
1. Can the CPU verify transactions and blocks in real time?
2. How much RAM is needed to store all cached data?
3. Can the storage handle intensive read/write? Ethereum is already suffering from this problem



Quote
why we are not seeing many changes to bitcoin
Because every big change has to be done in a soft-fork way, discussed and developed for months, and later activated by miners signalling their readiness for that change.

Additionally, new features (such as SegWit and Taproot) take a while before being supported by wallets and various services.
hero member
Activity: 667
Merit: 1529
Quote
It shouldn't make that much of a difference. Most algorithms don't actually take far longer than Bitcoin's, nor does hashing take the bulk of the verification time.
It depends on mining algorithm, because sha256d is quite fast, Scrypt is a bit slower, but take yescrypt and see how long it would take, even if two nodes are connected on localhost.
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
Why is it that decentralisation is at odds with scaling? Is it a hardware bottleneck? If miners were 100x more powerful (or 100x less powerful) than they are currently, would that change the transaction time? Or would it still be 10 minutes (on average)?
As I said, the 10-minute interval has nothing to do with decentralization, or better, it's not directly related. You should think of it another way: if the network generated a new block every 5 minutes instead of 10, how many blocks should you wait? Wouldn't that be 12? Probably yes, but there is another thing to count too. The orphaned blocks. The shorter the generation time, the more orphaned blocks there will be. By that, I conclude that 6 10-minute-interval blocks provide better security than 12 5-minute-interval blocks.

Here, read this: Why was the target block time chosen to be 10 minutes?
legendary
Activity: 2954
Merit: 4158
The verification is what takes the most of the time. If you take some altcoins, especially CPU-based with slow and complicated mining algorithms on purpose, you can clearly see that.
It shouldn't make that much of a difference. Most algorithms don't actually take far longer than Bitcoin's, nor does hashing take the bulk of the verification time.
But in Bitcoin, ECDSA verification also takes a lot of time. And when it comes to processing speed, we stopped at around 8 GHz and going faster is very hard.
IPC has been increasing on a year-on-year basis. It is becoming less of a problem, and my validation from scratch takes about 7 hours. The issue here isn't about that, though; the whole point of having more bandwidth is so that blocks can be relayed through the network quickly, thus reducing forks. The main concern with a block interval that is too fast is that if propagation is too slow, the network would be less secure due to the possibility of different forks on it.


It is all about compromise. If you increased the block size to match the current levels of major payment systems, the block size would be absurd. It is possible to have faster block times and a larger block size; other altcoins have done it and are still surviving well right now. Segwit is in effect a block size increase as well.

This email is quite old[1] but this was in the middle of the block wars.

[1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/011865.html
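The "Segwit is in effect a block size increase" point (and the "/4 wall" complained about elsewhere in the thread) comes from the BIP 141 weight rule: weight = 3 × base size + total size, capped at 4,000,000, so witness bytes count a quarter as much as base bytes. A sketch with sizes in bytes (illustrative, not parsing real transactions):

```python
MAX_BLOCK_WEIGHT = 4_000_000  # consensus cap since segwit (BIP 141)

def weight(base_size: int, total_size: int) -> int:
    """BIP 141 weight: base bytes count 4x, witness-only bytes count 1x."""
    return base_size * 3 + total_size

# Pure-legacy data: every byte is a base byte (base == total),
# so each byte costs 4 weight units and the cap allows 1,000,000 bytes.
print(MAX_BLOCK_WEIGHT // weight(1, 1))  # 1000000
# Witness-only data costs 1 weight unit per byte, so witness-heavy
# blocks can approach 4,000,000 total bytes.
print(MAX_BLOCK_WEIGHT // weight(0, 1))  # 4000000
```

This is why segwit raised the data limit toward ~4 MB while leaving legacy-style capacity at the old 1 MB equivalent, which is the point both sides of this thread keep circling.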
member
Activity: 159
Merit: 72
  • The scaling issue requires increasing the size of the chain.
  • The decentralization occurs when it's not increasing.
Why is it that decentralisation is at odds with scaling? Is it a hardware bottleneck? If miners were 100x more powerful (or 100x less powerful) than they are currently, would that change the transaction time? Or would it still be 10 minutes (on average)?

And if it is 10 minutes (on average), I'm curious why 10 minutes is the optimal parameter and how we would even measure the level of security/decentralisation for different parameters (e.g. 1 min vs 5 min vs 10 min)?