Topic: rpietila Wall Observer - the Quality TA Thread ;) - page 36.

legendary
Activity: 1106
Merit: 1007
Hide your women
I draw the opposite conclusion. If larger blocks have a higher probability of being orphaned, miners will make their blocks as small as possible, even significantly under the maxblocksize limit. This is why the debate seems so silly. We don't need a software-coded block size limit at all. Blocks that are too big will get orphaned. This is happening NOW. It's why so few blocks are anywhere near the 1 MB limit.

Exactly, and there are more than a few blocks with only 1 tx (the coinbase), and regularly blocks with fewer than 100 tx.

The problem is that fees need to get much, much higher than 1 or 2 mBTC per tx to make it worthwhile for miners.

7 TPS is a joke. We need a PREDICTABLE SCHEDULE of block size increases or a complete removal of a size limit.

A block size increase in itself is not going to increase TPS, because of the orphan risk.

To compete on a global scale with just PayPal, bitcoin would need on the order of 700 TPS, or a 100 MB block size, at which point the risk of a block getting orphaned would be very, very high.

IMHO the key to the scalability battle is not the block size, it's the fee structure and block propagation. There need to be ways to minimize orphan-block risk, maybe by allowing fork merging: a block could be allowed to have multiple next & previous blocks. This way, if two blocks come at the same time, rather than one orphaning the other, the next block could accept both.

Currently, if blocks B & C are both mined on top of block A, then only B or C will be accepted, with block D being mined on either B or C.

The proposal would be that D could accept both B & C, and then half the B & C rewards would be burned by D (so the total reward is unchanged).

So the block chain becomes a block tree. Then processing power becomes the bottleneck, because you have to validate blocks D, E, F, and G, then 8 blocks for the next block reward, etc.
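To make the proposed reward rule concrete, here is a minimal sketch (Python; all names are made up, and this illustrates the idea above rather than any real protocol code):

Code:
def merged_parent_rewards(parent_rewards):
    """Reward credited to each of N merged parents under the proposed
    fork-merging rule: each keeps only 1/N of what it claimed, the rest
    is burned, so total issuance stays at one block subsidy."""
    n = len(parent_rewards)
    return [r / n for r in parent_rewards]

# Blocks B and C are both mined on A, each claiming the full 25 BTC
# subsidy; merging block D credits each with 12.5, so the total is
# still 25, the same as for a single block.
print(merged_parent_rewards([25.0, 25.0]))  # [12.5, 12.5]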
legendary
Activity: 1100
Merit: 1032
I draw the opposite conclusion. If larger blocks have a higher probability of being orphaned, miners will make their blocks as small as possible, even significantly under the maxblocksize limit. This is why the debate seems so silly. We don't need a software-coded block size limit at all. Blocks that are too big will get orphaned. This is happening NOW. It's why so few blocks are anywhere near the 1 MB limit.

Exactly, and there are more than a few blocks with only 1 tx (the coinbase), and regularly blocks with fewer than 100 tx.

The problem is that fees need to get much, much higher than 1 or 2 mBTC per tx to make it worthwhile for miners.

7 TPS is a joke. We need a PREDICTABLE SCHEDULE of block size increases or a complete removal of a size limit.

A block size increase in itself is not going to increase TPS, because of the orphan risk.

To compete on a global scale with just PayPal, bitcoin would need on the order of 700 TPS, or a 100 MB block size, at which point the risk of a block getting orphaned would be very, very high.

IMHO the key to the scalability battle is not the block size, it's the fee structure and block propagation. There need to be ways to minimize orphan-block risk, maybe by allowing fork merging: a block could be allowed to have multiple next & previous blocks. This way, if two blocks come at the same time, rather than one orphaning the other, the next block could accept both.

Currently, if blocks B & C are both mined on top of block A, then only B or C will be accepted, with block D being mined on either B or C.

The proposal would be that D could accept both B & C, and then half the B & C rewards would be burned by D (so the total reward is unchanged).
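For what it's worth, the back-of-envelope arithmetic behind the 7 TPS and 700 TPS / 100 MB figures above can be made explicit (a sketch in Python; the ~250 bytes per average transaction is an assumption, not something stated in the thread):

Code:
AVG_TX_BYTES = 250      # assumed average transaction size
BLOCK_INTERVAL_S = 600  # target block time in seconds

def tps_for_block_size(block_bytes):
    return block_bytes / AVG_TX_BYTES / BLOCK_INTERVAL_S

def block_size_for_tps(tps):
    return tps * AVG_TX_BYTES * BLOCK_INTERVAL_S

print(tps_for_block_size(1000000))    # ~6.7 TPS at the 1 MB limit
print(block_size_for_tps(700) / 1e6)  # ~105 MB for PayPal-scale 700 TPS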

hero member
Activity: 544
Merit: 500
Right, OK, so my misunderstanding on this scaling debate comes down to one question.

Assuming vast quantities of transactions and no restrictions on block size, where is the primary processing/storage bottleneck? Is it the temporary size of the mempool (pre-processing), or the permanent size of the blockchain (post-processing)?

Storage (a hierarchy, in general), CPU power, and bandwidth are all potential bottlenecks, and the future critical factor will depend on the path of technological evolution, which is difficult to predict. Most expect bandwidth, I believe.


Great timing from Mike. Pretty much provides the answers I was looking for, namely that a fee market can and might develop around mempool storage.

https://medium.com/@octskyward/mempool-size-limiting-a3f604b72a4a
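The gist of the linked article, as a rough sketch (Python; an illustration only, not Bitcoin Core's actual implementation): cap the mempool at a byte limit and evict the lowest fee-rate transactions first, so spam has to keep outbidding real traffic just to stay in memory.

Code:
import heapq

class Mempool:
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used = 0
        self.heap = []  # (fee_rate, txid, size): min-heap by fee rate

    def add(self, txid, size, fee):
        heapq.heappush(self.heap, (fee / size, txid, size))
        self.used += size
        evicted = []
        while self.used > self.max_bytes:
            rate, victim, vsize = heapq.heappop(self.heap)
            self.used -= vsize
            evicted.append(victim)  # lowest fee rate goes first
        return evicted

pool = Mempool(max_bytes=1000)
pool.add("spam1", 600, fee=100)         # fits while the pool is empty
print(pool.add("real1", 600, fee=600))  # ['spam1'] is pushed out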

legendary
Activity: 1106
Merit: 1007
Hide your women
We have some MAJOR bearish divergence on the short-term charts. It's going to slam down HARD this weekend. Sell now if you didn't already, because you missed my nearly spot-on call on the top of the recent pump and didn't sell then like I advised. Set buys @ 225.

I'm optimistic. I'm going for $229

Of course, with so few shorts to halt the dip, we may go much lower.
full member
Activity: 188
Merit: 100
We have some MAJOR bearish divergence on the short-term charts. It's going to slam down HARD this weekend. Sell now if you didn't already, because you missed my nearly spot-on call on the top of the recent pump and didn't sell then like I advised. Set buys @ 225.
legendary
Activity: 1106
Merit: 1007
Hide your women
Storage (a hierarchy, in general), CPU power, and bandwidth are all potential bottlenecks, and the future critical factor will depend on the path of technological evolution, which is difficult to predict. Most expect bandwidth, I believe.

Bandwidth is already a problem, because larger blocks already have a higher likelihood of being orphaned, and IMHO that's the point which makes the blocksize discussion mostly hypothetical. Larger blocks are already unprofitable for miners; if they make large blocks, it's out of goodwill.

Storage is the second issue, but that is fixable in the long run through pruning: there is no point in storing and replicating historic transactions that have long been over-confirmed. Historic transactions are irrelevant for bitcoin as a store of value or a means of payment. They are useful for validating, but that could be replaced by automatic checkpoints every few months, for instance, and everything historic beyond the last checkpoint could be pruned, keeping only the UTXO set.

Yes, but if orphaned blocks are the only limitation, miners will simply demand a higher fee to cover the probability of an orphaned block. This will create a fee market. If the price of bandwidth drops, then the fee will be driven down. All of this works provided that there is a block reward.

I draw the opposite conclusion. If larger blocks have a higher probability of being orphaned, miners will make their blocks as small as possible, even significantly under the maxblocksize limit. This is why the debate seems so silly. We don't need a software-coded block size limit at all. Blocks that are too big will get orphaned. This is happening NOW. It's why so few blocks are anywhere near the 1 MB limit.

We don't pay miner fees to get transactions confirmed. We pay to get them confirmed quickly. That is a market that exists TODAY. We need larger block size limits to scale. We need to scale to get wider adoption. Wider adoption is another way of saying more decentralization (or more accurately, distribution) of users. Why is mining centralization the only centralization that smallblockers care about?

7 TPS is a joke. We need a PREDICTABLE SCHEDULE of block size increases or a complete removal of a size limit. I am not opposed to Garzik's temporary 2 MB limit so that we can see what happens and evaluate the effects, but this needs to happen soon. Our competition is not sitting by idly watching this play out. They are making plans to grab Bitcoin's market share. In some cases, they are already cutting into it.
legendary
Activity: 2282
Merit: 1050
Monero Core Team
Storage (a hierarchy, in general), CPU power, and bandwidth are all potential bottlenecks, and the future critical factor will depend on the path of technological evolution, which is difficult to predict. Most expect bandwidth, I believe.

Bandwidth is already a problem, because larger blocks already have a higher likelihood of being orphaned, and IMHO that's the point which makes the blocksize discussion mostly hypothetical. Larger blocks are already unprofitable for miners; if they make large blocks, it's out of goodwill.

Storage is the second issue, but that is fixable in the long run through pruning: there is no point in storing and replicating historic transactions that have long been over-confirmed. Historic transactions are irrelevant for bitcoin as a store of value or a means of payment. They are useful for validating, but that could be replaced by automatic checkpoints every few months, for instance, and everything historic beyond the last checkpoint could be pruned, keeping only the UTXO set.

Yes, but if orphaned blocks are the only limitation, miners will simply demand a higher fee to cover the probability of an orphaned block. This will create a fee market. If the price of bandwidth drops, then the fee will be driven down. All of this works provided that there is a block reward.
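One way to see this fee-for-orphan-risk tradeoff is with a toy model (a sketch, not consensus code; the exponential comes from assuming Poisson block arrivals, and the bandwidth and propagation numbers below are made up): with a 600-second mean block interval, a block that takes t seconds to propagate is orphaned with probability about 1 - e^(-t/600), so each marginal transaction must pay at least for the expected block value it puts at risk.

Code:
import math

BLOCK_INTERVAL = 600.0  # mean time between blocks, seconds

def orphan_prob(propagation_s):
    # Probability a competing block is found while ours propagates,
    # assuming Poisson block arrivals.
    return 1.0 - math.exp(-propagation_s / BLOCK_INTERVAL)

def breakeven_fee(block_value_btc, tx_bytes, bandwidth_bps, base_prop_s):
    """Minimum fee at which adding one transaction pays for the extra
    orphan risk it creates, under this toy propagation model."""
    extra_s = tx_bytes / bandwidth_bps
    extra_risk = orphan_prob(base_prop_s + extra_s) - orphan_prob(base_prop_s)
    return block_value_btc * extra_risk

# 25 BTC block value, 250-byte tx, 1 MB/s effective relay, 10 s base
# propagation: roughly 1e-5 BTC, i.e. about 0.01 mBTC per transaction.
print(breakeven_fee(25.0, 250, 1e6, 10.0))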
legendary
Activity: 1100
Merit: 1032
Storage (a hierarchy, in general), CPU power, and bandwidth are all potential bottlenecks, and the future critical factor will depend on the path of technological evolution, which is difficult to predict. Most expect bandwidth, I believe.

Bandwidth is already a problem, because larger blocks already have a higher likelihood of being orphaned, and IMHO that's the point which makes the blocksize discussion mostly hypothetical. Larger blocks are already unprofitable for miners; if they make large blocks, it's out of goodwill.

Storage is the second issue, but that is fixable in the long run through pruning: there is no point in storing and replicating historic transactions that have long been over-confirmed. Historic transactions are irrelevant for bitcoin as a store of value or a means of payment. They are useful for validating, but that could be replaced by automatic checkpoints every few months, for instance, and everything historic beyond the last checkpoint could be pruned, keeping only the UTXO set.
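A minimal sketch of that pruning idea (Python; all names made up, illustrating the proposal rather than how any real node prunes):

Code:
def prune(blocks, utxo_set, checkpoint_height):
    """What a pruned node would retain: blocks is a list of
    (height, data) pairs, utxo_set maps (txid, vout) to an unspent
    output. History at or below the checkpoint is summarized by the
    UTXO set alone."""
    recent = [(h, data) for h, data in blocks if h > checkpoint_height]
    return {"utxo": utxo_set, "recent_blocks": recent}

chain = [(1, "block1"), (2, "block2"), (3, "block3")]
utxo = {("tx9", 0): {"value": 50}}
print(prune(chain, utxo, checkpoint_height=2))
# Only block 3 and the UTXO set survive; blocks 1-2 are discarded.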
legendary
Activity: 3920
Merit: 11299
Self-Custody is a right. Say no to "Non-custodial"
I think you are confusing the UTXO set and unconfirmed transactions. The UTXO set is the set of unspent outputs of confirmed transactions.

Edited the original post; is my question clearer now?

Why edit the original post rather than clarifying the matter in a subsequent post?

A lot of this blocksize debate seems to be speculative, with a lot of potential solutions and NONE of them being really detrimental to the overall health of bitcoin, except possibly for the creation of FUD and downward price manipulation.
legendary
Activity: 2968
Merit: 1198
Right, OK, so my misunderstanding on this scaling debate comes down to one question.

Assuming vast quantities of transactions and no restrictions on block size, where is the primary processing/storage bottleneck? Is it the temporary size of the mempool (pre-processing), or the permanent size of the blockchain (post-processing)?

Storage (a hierarchy, in general), CPU power, and bandwidth are all potential bottlenecks, and the future critical factor will depend on the path of technological evolution, which is difficult to predict. Most expect bandwidth, I believe.
legendary
Activity: 1904
Merit: 1002
Right, OK, so my misunderstanding on this scaling debate comes down to one question.

Assuming vast quantities of transactions and no restrictions on block size, where is the primary processing/storage bottleneck? Is it the temporary size of the mempool (pre-processing), or the permanent size of the blockchain (post-processing)?

The bottleneck is definitely storage, not processing, with current technology. There are some things in the pipeline that should help quite a bit with fast access to large data sets, and they should begin rolling out over the next two years.

The UTXO set is the larger concern. However, not all transactions increase the UTXO set, and some decrease it (lots of inputs to a single output).
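That last point is easy to make concrete (a toy illustration in Python, nothing more): a transaction's net effect on the UTXO set size is simply the outputs it creates minus the inputs it spends.

Code:
def utxo_delta(num_inputs, num_outputs):
    # Net change in UTXO set size caused by one transaction.
    return num_outputs - num_inputs

print(utxo_delta(1, 34))   # +33: a dust-spray spam tx grows the set
print(utxo_delta(34, 1))   # -33: consolidating many inputs shrinks it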
hero member
Activity: 644
Merit: 504
Bitcoin replaces central, not commercial, banks
Right, OK, so my misunderstanding on this scaling debate comes down to one question.

Assuming vast quantities of transactions and no restrictions on block size, where is the primary processing/storage bottleneck? Is it the temporary size of the mempool (pre-processing), or the permanent size of the blockchain (post-processing)?

I suppose it would have to be the UTXO set stored in RAM.
hero member
Activity: 544
Merit: 500
Right, OK, so my misunderstanding on this scaling debate comes down to one question.

Assuming vast quantities of transactions and no restrictions on block size, where is the primary processing/storage bottleneck? Is it the temporary size of the mempool (pre-processing), or the permanent size of the blockchain (post-processing)?
legendary
Activity: 2968
Merit: 1198
First, mintxfee is not a solution against miners themselves creating large blocks, because they are paying the fee to themselves. Second, miners can also offer fee rebates to large customers if the mintxfee is above what the fee market would dictate. So the mintxfee on the blockchain would look like it is providing some protection, but it actually would not be, and indeed it would be encouraging centralization, since it's easier to sign up with one big pool to get your rebates than with 100 little ones.
hero member
Activity: 544
Merit: 500
I think you are confusing the UTXO set and unconfirmed transactions. The UTXO set is the set of unspent outputs of confirmed transactions.

Edited the original post; is my question clearer now?
legendary
Activity: 2590
Merit: 3015
Welt Am Draht

Excuse my ignorance, but what solution is implemented in LTC?


Here's what the man himself has to say - https://www.reddit.com/r/Bitcoin/comments/3ci25k/the_current_spam_attack_on_bitcoin_is_not/

"I know this is post is going to be controversial, but here goes... Smiley
This spam attack is not economically feasible on the Litecoin network. I will explain why.

Here's one of the txns that is spamming the network: https://blockchain.info/tx/1ec8370b2527045f41131530b8af51ca15a404e06775e41294f2f91fa085e9d5
For creating 34 economically-unfeasible-to-redeem UTXOs, the spammer only had to pay 0.000299 BTC ($0.08). In order to clean up all these spammy UTXOs, you needed a nice pool to mine this huge transaction for free. And the only reason the pool was able to was because the spammer sent these coins to simple brain wallets! If these were random addresses, they would stick around in the UTXO set forever! (or until each BTC is worth a lot)

The reason why Litecoin is immune to this attack is because Litecoin was attacked in a similar fashion (though to a much smaller degree) years ago. And I noticed this flaw in Bitcoin and patched it in Litecoin. There's code in Bitcoin that says if someone sends a tiny amount of coins to an output, make sure that he pays the mintxfee. This makes sense because you wouldn't want someone creating "dust" spam by sending small amounts of coins. BUT the code still only enforces a single mintxfee even if you send to many small outputs. The fix is simple: require a mintxfee for each tiny output.
Because of this fix, Litecoin's UTXO set is much more manageable than Bitcoin's. But the pull request for this that I created against the bitcoin codebase was rejected 3 years ago: https://github.com/bitcoin/bitcoin/pull/1536

One of the reasons why I created Litecoin was because it was hard for someone like me (who was a nobody back then) to make any changes to Bitcoin. Having a different set of developers take the code in a different direction can only be good for the resiliency of the whole cryptocurrency movement. And that is why there is value in altcoins."
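The difference between the two fee policies he describes can be sketched like so (a toy illustration in Python; the parameter values and names are made up, see the linked pull request for the real patch):

Code:
MIN_TX_FEE = 0.0001  # assumed mintxfee, in coin units
DUST_LIMIT = 0.0001  # outputs below this count as "tiny"

def fee_old_policy(output_values):
    """Old behaviour: one mintxfee if any output is tiny."""
    return MIN_TX_FEE if any(v < DUST_LIMIT for v in output_values) else 0.0

def fee_per_tiny_output(output_values):
    """The described fix: one mintxfee for *each* tiny output."""
    return MIN_TX_FEE * sum(1 for v in output_values if v < DUST_LIMIT)

spam = [0.00001] * 34  # 34 dust outputs, like the spam tx linked above
print(fee_old_policy(spam))        # 0.0001: one flat fee
print(fee_per_tiny_output(spam))   # 0.0034: 34x more expensive to spam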
donator
Activity: 2772
Merit: 1019
My take on the issue:

Any solution has to make spam expensive. Anything else is just adding further problems.
That's why I like the solution implemented in Litecoin.

Excuse my ignorance, but what solution is implemented in LTC?
legendary
Activity: 1904
Merit: 1002

Thanks for the info. The idea is to make the cost of sending the spam comparable to the cost of cleaning up the spam, which makes a lot of sense. Min TX fees can work as an anti-spam measure, and can be applied without hard-forking the coin if enough nodes simply refuse to relay transactions without the min TX fee.


Forgive the ignorance, as I'm not a coder.

If the unconfirmed transactions' UTXO are stored in the mempool (effectively distributed cloud storage), surely this provides a natural free-market limit and restricts spam?

For example, if I'm a full node or solo miner, I clearly can't store every tiny unconfirmed transaction. Therefore, just limit my mempool (storage space) with a filter to a certain size, filtering on, say, fee-paying transactions, or only including transactions over a certain quantity of satoshis.

Larger miners, pools, or super nodes will be able to afford broader limits on their filters and store more UTXOs. As enough transactions get stored in the UTXO set, eventually a fee market will emerge. Some transactions are worth storing until they get confirmed, while others are not and should drop out.

Certain companies, say NASDAQ, who are just using a single satoshi as a coloured coin (hence zero fee), will clearly have to sponsor miners to store and confirm these.

What am I missing?

I think you are confusing the UTXO set and unconfirmed transactions. The UTXO set is the set of unspent outputs of confirmed transactions.
legendary
Activity: 1162
Merit: 1007
Larger miners, pools, or super nodes will be able to afford broader limits on their filters and store more UTXOs. As enough transactions get stored in the UTXO set, eventually a fee market will emerge. Some transactions are worth storing until they get confirmed, while others are not and should drop out.

It depends on what you mean by UTXOs "dropping out."

It is fine to do as you say and move UTXOs that aren't worth storing in expensive "hot" storage (RAM and the like) into cheaper colder storage (a hard disk).  However, a full node must be able to do an exhaustive search to determine whether a given UTXO exists or not.  Otherwise, the node will be forked off the network. 

It is fine if this "exhaustive search" takes longer for certain spammy UTXOs; in fact, it is actually a good thing, because it slows down the propagation of blocks that contain such spammy UTXOs and thus increases the chance that the block is orphaned. This helps to create a fee market.
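The hot/cold arrangement he describes could look something like this (a rough sketch in Python; an illustration, not an actual node implementation): the lookup stays authoritative because the cold tier is still searched exhaustively, just more slowly.

Code:
class TieredUtxoSet:
    def __init__(self):
        self.hot = {}   # frequently used outputs, kept in fast storage
        self.cold = {}  # stand-in for on-disk storage of spammy outputs

    def lookup(self, outpoint):
        if outpoint in self.hot:  # fast path for well-behaved UTXOs
            return self.hot[outpoint]
        # Slow but exhaustive fallback: a missing UTXO is definitively
        # missing, so the node never gets forked off the network.
        return self.cold.get(outpoint)

utxos = TieredUtxoSet()
utxos.cold[("spammytx", 0)] = {"value": 1}
print(utxos.lookup(("spammytx", 0)))  # found, just slower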
hero member
Activity: 544
Merit: 500

Thanks for the info. The idea is to make the cost of sending the spam comparable to the cost of cleaning up the spam, which makes a lot of sense. Min TX fees can work as an anti-spam measure, and can be applied without hard-forking the coin if enough nodes simply refuse to relay transactions without the min TX fee.


Forgive the ignorance, as I'm not a coder.

If the unconfirmed transactions are stored in the mempool (effectively distributed cloud storage), surely this provides a natural free-market limit and restricts spam?

For example, if I'm a full node or solo miner, I clearly can't store every tiny unconfirmed transaction. Therefore, I would just add a filter to limit my mempool size, filtering on, say, fee-paying transactions, or only including transactions over a certain quantity of satoshis.

Larger miners, pools, or super nodes will be able to afford broader limits on their filters and store more (or all) unconfirmed transactions. As enough transactions get stored in the mempool, eventually a fee market will emerge. Some transactions are worth storing until they get confirmed, while others are not and should drop out.

Certain companies, say NASDAQ, who are just using a single satoshi as a coloured coin (hence zero fee), will clearly have to sponsor miners to store and confirm these.

What am I missing?

edited to remove stupidity.