
Topic: Can't we avoid reorgs once and for all? (Read 260 times)

legendary
Activity: 2268
Merit: 18509
January 03, 2023, 04:17:48 PM
#32
We are still working on finding a block at H+1, but more energy has likely been used than needed.
On average, I don't think there has.

The difficulty is always the same (for this difficulty period). The difficulty does not change based on the previous block's hash. It does not matter whether every miner is building on top of the same block (as usually happens), or whether literally every miner in the world is trying to build on top of a block unique to them. The previous block hash makes absolutely no difference to the difficulty. It will (on average) require the exact same number of hashes and the exact same amount of energy to find the next block, regardless of the presence of a chain split.
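
A rough sketch of that point (the difficulty and efficiency figures are made-up examples, and the usual "difficulty × 2^32 expected hashes per block" approximation is assumed):

Code:
# Sketch: the expected work to find the next block is a function of the difficulty
# alone. The previous block hash never enters the calculation, so a chain split
# does not change the expected number of hashes or the expected energy.
def expected_hashes(difficulty: float) -> float:
    # Standard approximation: difficulty 1 corresponds to ~2^32 expected hashes.
    return difficulty * 2**32

def expected_energy_kwh(difficulty: float, joules_per_terahash: float) -> float:
    joules = expected_hashes(difficulty) / 1e12 * joules_per_terahash
    return joules / 3.6e6  # 1 kWh = 3.6e6 J

if __name__ == "__main__":
    d = 35_000_000_000_000          # illustrative difficulty, not a real value
    print(f"{expected_hashes(d):.3e} expected hashes per block")
    print(f"{expected_energy_kwh(d, 30):.3e} kWh at an assumed 30 J/TH")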

Now when the next block is found, you could say that all the energy spent mining on top of the now stale block was not needed. But you could equally say that all the energy spent mining by the pools which did not find the next block was not needed.
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
January 03, 2023, 03:53:30 PM
#31
That exactly means it isn't waste
Some people got paid while others didn't, even though both incurred a cost. The former are the miners; the latter is the pool administrator. For the former it wasn't waste, but for the latter it was.

The difficulty between both chains is the same, not different.
It's the same value, but not "the same" in the sense that the chains are different. Anyway, that's just a play on words; we both mean the same thing.

2) If there are multiple new blocks found nearly simultaneously and both are being mined upon, are the attempts on top of the losing block wasted?

2) should be answered in the negative, since even with two tips at height h (and thus an undetermined winner at height h), all the hashpower is still working on finding a block at height h+1, and the expected time to find one is no longer than if there is a unique tip at height h.
We are still working on finding a block at H+1, but more energy has likely been used than needed.
legendary
Activity: 2268
Merit: 18509
January 03, 2023, 03:23:53 PM
#30
That doesn't mean it isn't waste.
Waste from the point of view of a mining pool is very different to waste from the point of view of the network. I'm sure mining pools probably do see stale blocks as waste. Every hash which does not earn a mining pool money is waste, regardless of whether it is because of a stale block or just the 99.999...% of hashes which are unsuccessful. But those hashes are not wasted from the network's point of view.

And as garlonicon pointed out, this is different again when considering miners instead of mining pools, since miners earn money for unsuccessful hashes too.

Stale blocks create a situation where we have two chains, therefore two potential difficulties.
The difficulty between both chains is the same, not different. And the combined hash rate across both chains for that difficulty will still mean the next block is found in 10 minutes (give or take the usual caveats).

tromp has put this very well I think. Despite the split at height H, all the hash rate on both sides of the split is working on the block at height H+1, just as it would be if there was no split.

But I would answer 1) in the positive...
In the situation you give in point 1), then yes: if a miner is attempting to mine on a chain which is not the main chain (as they would if they were not aware of the latest block), that work is wasted. But in a chain split as being discussed here, we don't know which chain is the main chain yet, and so the work of both chains contributes to the security of the network.
hero member
Activity: 789
Merit: 1909
January 02, 2023, 06:43:15 PM
#29
Quote
That doesn't mean it isn't waste.
That exactly means it isn't waste, if you define "waste" as "not being paid for producing hashes". You have one lucky miner that submits a share where the whole block is valid, and 6.25 BTC plus fees are collected. And you have a lot of miners, each of whom submitted shares that did not meet the network target, or were stale, or were skipped one way or another. But how are the rewards actually split? The lucky miner is not getting 6.25 BTC plus fees. That miner just provided some reward, which is split between all miners, and the lucky miner could for example get only 0.01 BTC, because 6.24 BTC plus fees were split between the other miners, even though none of them mined any accepted block.

Quote
The administrator of a pool decides to do various things to have the business running. An attractive policy is to pay miners for blocks, whether these are stale or not.
The whole reason why pools are needed at all is that miners cannot collect 0.01 BTC on-chain for mining a valid 7 BTC coinbase (6.25 BTC base + 0.75 BTC in fees) at 700 times lower difficulty. They have to use pools, because that is not something supported directly by the mining protocol.

So, if you think that another method of splitting rewards is better, then you can simply introduce it by forming your own pool. You can collect hashes, calculate the chainwork for them, count the total chainwork in your pool, and then split rewards based on that. There are many methods of rewarding miners. If you think that anything is "wasted", and your "chainwork-based method" is better, then call it Pay-Per-Chainwork (PPC? PPCW?), and form a pool.
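
To make the suggestion concrete, here is a minimal sketch of what such a chainwork-proportional ("PPCW") payout could look like; the function, the miner names and the amounts are invented for illustration, not an existing pool scheme:

Code:
# Hypothetical "Pay-Per-Chainwork" payout: split a found block's reward in
# proportion to the chainwork (summed share difficulty) each miner submitted.
from decimal import Decimal

def ppcw_split(share_work: dict[str, int], block_reward: Decimal) -> dict[str, Decimal]:
    total = sum(share_work.values())
    return {miner: block_reward * Decimal(work) / Decimal(total)
            for miner, work in share_work.items()}

# The "lucky" miner who actually found the block still only gets a payout
# proportional to the work they submitted, as described above.
print(ppcw_split({"lucky_miner": 1_000, "miner_b": 400_000, "miner_c": 299_000},
                 Decimal("7.00")))   # lucky_miner ends up with 0.01 BTC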

As long as you have two (or more) blocks at the same height, and all of them are valid, you can locally pick whichever block you prefer and work on it. The only thing you cannot do is force other nodes to think that your block is the only valid block, and make that a consensus rule. But when it comes to picking the block to work on, you can locally use any method you want and stay in the same network. You will have the same chance of picking it wrong as other nodes, as long as you apply it only to a single block, and keep the actual chainwork rules for two or more blocks.
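
A small sketch of that local freedom, using a simplified tip structure (an illustration, not Bitcoin Core's actual fork-selection code):

Code:
# Among equally valid tips at the same height (same cumulative work) a node may apply
# any *local* preference for which one to mine on; the most-work rule still takes over
# as soon as one side gains more chainwork.
from dataclasses import dataclass

@dataclass
class Tip:
    block_hash: str
    cumulative_work: int
    first_seen: float   # local arrival time; differs from node to node

def choose_tip(tips: list[Tip]) -> Tip:
    best_work = max(t.cumulative_work for t in tips)
    candidates = [t for t in tips if t.cumulative_work == best_work]
    if len(candidates) == 1:
        return candidates[0]          # consensus part: more cumulative work wins
    # Local-policy part: tie-break however you like (first-seen here); other nodes
    # may legitimately pick the other block, and neither choice can be forced on them.
    return min(candidates, key=lambda t: t.first_seen)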
legendary
Activity: 978
Merit: 1080
January 02, 2023, 05:09:25 PM
#28
But looking forward, all the work which attempts to find a block, regardless of whether or not that block is accepted, is contributing to the difficulty of finding a block and therefore the security of the network.

I think everyone would agree with that, if we assume that the miner is honest.
I.e. they follow the longest chain rule in attempting to find a block that builds on the block with the most cumulative difficulty.

The remaining questions are:

1) If a new block has been found but not propagated to a specific miner yet, is that miner's attempt wasted?

2) If there are multiple new blocks found nearly simultaneously and both are being mined upon, are the attempts on top of the losing block wasted?

2) should be answered in the negative, since even with two tips at height h (and thus an undetermined winner at height h), all the hashpower is still working on finding a block at height h+1, and the expected time to find one is no longer than if there is a unique tip at height h.

But I would answer 1) in the positive...
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
January 02, 2023, 02:48:48 PM
#27
So, if you think that any work is "wasted", then note that in pools it is not. If miners are rewarded for "invalid" blocks (because of not meeting the difficulty), then I am pretty sure they are also rewarded for stale blocks.
That doesn't mean it isn't waste. The administrator of a pool decides to do various things to have the business running. An attractive policy is to pay miners for blocks, whether these are stale or not. But stale blocks don't create money. The money comes from the business's budget. Stale blocks constitute an expense for the pool that doesn't contribute in any way. That's the definition of waste.

But looking forward, all the work which attempts to find a block, regardless of whether or not that block is accepted, is contributing to the difficulty of finding a block and therefore the security of the network.
It isn't. Difficulty determines the security of one chain. Stale blocks create a situation where we have two chains, therefore two potential difficulties. The security of the network is essentially split in half for a moment, and the next miner decides which part will be sacrificed.

I thought it was apparent enough that the block interval comes with an opportunity cost. That cost defines the security that would have been provided if there were a greater block interval. But in the end, it's a tradeoff. Lower security for faster confirmations. Maybe it is technically wrong to call stale blocks waste, because you actually gain something, be it time or security accordingly.
legendary
Activity: 2268
Merit: 18509
January 02, 2023, 01:32:24 PM
#26
And energy spent on a chain that is not the longest one difficulty-wise is wasted energy.
This is true if it is external hash power coming from elsewhere and attempting to 51% attack the main chain, for example. But (and I could well be wrong) I don't think it is true if it is hash power which is already mining the main chain that temporarily breaks off to attempt to mine a fork before rejoining the main chain.

In my previous example where the hash rate splits evenly in two, it doesn't matter that both halves are attempting to build on top of a different block. The total hash rate hasn't changed, and so the next block will still arrive in 10 minutes on average. If half of the network's work were truly wasted, then the next block would take 20 minutes to arrive. But because we haven't pre-determined which fork will win, the work of both forks is contributing to the security of the network.
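
A quick Monte Carlo sketch of that claim, under the standard assumption that block finding is a memoryless (Poisson) process; the hash rate figures are only labels for the three scenarios:

Code:
# Time to the *next* block depends only on the total hash rate that is mining,
# not on whether that hash rate is split across two competing tips.
import random

TRIALS = 200_000
TARGET_MIN = 10.0   # expected block time with the full 200 EH/s

def next_block_minutes(rates):
    # Each group finds its next block after an exponential waiting time;
    # the network's next block is whichever comes first.
    return min(random.expovariate(r) for r in rates)

full  = sum(next_block_minutes([1.0 / TARGET_MIN]) for _ in range(TRIALS)) / TRIALS
split = sum(next_block_minutes([0.5 / TARGET_MIN, 0.5 / TARGET_MIN])
            for _ in range(TRIALS)) / TRIALS
half  = sum(next_block_minutes([0.5 / TARGET_MIN]) for _ in range(TRIALS)) / TRIALS

print(f"all 200 EH/s on one tip:      ~{full:.1f} min")    # ~10
print(f"100 + 100 EH/s on two tips:   ~{split:.1f} min")   # ~10
print(f"only 100 EH/s mining at all:  ~{half:.1f} min")    # ~20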

All the work put in the invalid chain is wasted work, because the same work could have been used to provide security.
But it did provide security. If the work on the stale chain had found a successful hash first, then it would be the main chain. Just as if the work on any of the failed candidate blocks had found a successful hash first, then that candidate block would be on the main chain. If we look at a pool like BTC.com for example - they have an estimated 3% of the hashrate, but haven't found a block in almost 100 blocks. Does that mean all their work on their now invalid candidate blocks was wasted?
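
As a back-of-the-envelope check on how plausible that dry spell is, assuming each block is an independent ~3% chance for that pool:

Code:
# Chance that a pool with ~3% of the hash rate finds none of the last 100 blocks.
p_no_block = (1 - 0.03) ** 100
print(f"{p_no_block:.1%}")   # roughly 5%, so unlucky but not remarkable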

I think the confusion here is stemming from our frame of reference. If you look at the chain in retrospect, then all the work that didn't find a block can be called "wasted". But looking forward, all the work which attempts to find a block, regardless of whether or not that block is accepted, is contributing to the difficulty of finding a block and therefore the security of the network.
hero member
Activity: 789
Merit: 1909
January 02, 2023, 12:41:31 PM
#25
Quote
And since block interval is what determines that percentage, adjusting it would adjust the waste as well.
But it is already adjusted on other networks. If you want to make it shorter, then note that in pools, miners submit their shares every 30 seconds or something like that. Also, they submit "invalid" blocks, in the sense that the Proof of Work does not meet the network difficulty, but they are still rewarded with some fraction of the coinbase.
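
A simplified sketch of that share-versus-block distinction; the targets, header bytes and byte handling are toy values for illustration, not real pool parameters:

Code:
# A share only has to meet the pool's much easier target, while a real block must
# meet the network target. Most shares are therefore "invalid" as blocks but still paid.
import hashlib

NETWORK_TARGET = 1 << 200   # hypothetical: very hard to hit
SHARE_TARGET   = 1 << 244   # hypothetical: easy enough that shares arrive often

def classify(header: bytes) -> str:
    h = int.from_bytes(hashlib.sha256(hashlib.sha256(header).digest()).digest(), "big")
    if h <= NETWORK_TARGET:
        return "valid block (and also a share)"
    if h <= SHARE_TARGET:
        return "share only: no PoW for the network, but the pool still rewards it"
    return "neither"

for nonce in range(100_000):
    result = classify(b"example header|" + nonce.to_bytes(4, "big"))
    if result != "neither":
        print(f"nonce {nonce}: {result}")
        break
else:
    print("no share found in this small range")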

Another thing is making the block time longer, for example one hour. For compression and Initial Block Download, I also thought about two weeks per "package". And for sidechains, it is proposed to be three months per on-chain update.

So, if you think that any work is "wasted", then note that in pools it is not. If miners are rewarded for "invalid" blocks (because of not meeting the difficulty), then I am pretty sure they are also rewarded for stale blocks.
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
January 02, 2023, 10:14:14 AM
#24
But the stale block from such a chain split is not invalid. It is perfectly valid, and could indeed have been the accepted block if a different miner had found the next block.
It's the old chain that is invalid. The block might be valid, but another miner successfully challenged that chain. Transactions in that block are not confirmed anymore; at least not until someone else confirms them. And energy spent on a chain that is not the longest one difficulty-wise is wasted energy.

All the work built on top of it is still contributing to the security of the network.
All the work put in the invalid chain is wasted work, because the same work could have been used to provide security. Energy spent on such a chain doesn't help anywhere.

It might be wasted in the sense that it was later decided that this hash power was mining on top of a stale block, but it is not wasted in the sense of failing to contribute to the security of the network at the time.
No, but we acknowledge that a small percentage of blocks are stale. Therefore, energy spent on that small percentage is wasted. We might not know which blocks are stale, and thus we don't know at the time we mine them that we're wasting energy, but that's what we do.

And since block interval is what determines that percentage, adjusting it would adjust the waste as well.
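
A rough model of that relationship, using the common approximation that a competing block appears during propagation with probability 1 - exp(-delay/interval); the 6-second propagation delay is an assumed figure:

Code:
# The stale (orphan) rate grows as the block interval shrinks relative to the time
# it takes a block to propagate across the network.
from math import exp

def stale_rate(block_interval_s: float, propagation_delay_s: float = 6.0) -> float:
    return 1 - exp(-propagation_delay_s / block_interval_s)

for interval in (600, 60, 10):   # 10 minutes, 1 minute, 10 seconds
    print(f"{interval:>4}s interval -> ~{stale_rate(interval):.1%} stale blocks")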
legendary
Activity: 2268
Merit: 18509
January 02, 2023, 08:10:39 AM
#23
Building on top of a valid block, regardless of the invalid hashes, is the work. Building on top of an invalid block, while part of the process, is and should be considered waste.
This would be true only if the block being built upon is invalid. That would indeed be wasted work, as it is impossible for that work to find the next block and therefore it isn't contributing to the security of the network. But the stale block from such a chain split is not invalid. It is perfectly valid, and could indeed have been the accepted block if a different miner had found the next block. All the work built on top of it is still contributing to the security of the network.

It might be wasted in the sense that it was later decided that this hash power was mining on top of a stale block, but it is not wasted in the sense of failing to contribute to the security of the network at the time.
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
January 02, 2023, 07:38:32 AM
#22
We do not consider those hashes to be wasted, so why would a hash that ends up being reorged out be considered wasted? It's no more or less wasteful than all of the hashes that weren't valid.
It's wasted in the sense that in a parallel system, where you had a 20 minute interval and the exact same hash rate, you'd have a lower chance of marking valid blocks as invalid.

Reorgs aren't bad because they waste energy. They're bad because they mean that low confirmation numbers cannot be relied upon.
Unreliable low confirmation numbers mean less security. Less security with the same hash rate, when that very hash rate only contributes security-wise, is waste.

They all went to securing the network, regardless of the method by which they were unsuccessful.
Building on top of a valid block, regardless of the invalid hashes, is the work. Building on top of an invalid block, while part of the process, is and should be considered waste. Just as executing a 51% attack without the necessary hash rate is discouraged, because the attacker is likely to spend energy beating the air, ergo to waste energy.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
January 02, 2023, 06:50:23 AM
#21
The analogy with the dice is likely flawed, and I find it more complicated than needed. Mining works simply enough; over-simplifying might have the opposite result.

Energy is only wasted if two or more miners are searching for the hash in the same search space.

E.g. miners A and B are both calculating the SHA256d in the region xxxxxxxxxxxxx[10000000-20000000]xxxxxxx... where the x's are constant digits and might be different between A and B within one hash. But in each of A's and B's hashes, this sequence of digits is always going to be the same, effectively making A and B adjust only the digits in brackets.

When you have A and B searching in such a small range, there will of course be overlap in the search space fairly often, which is why most mining software and pools try to give miners unique and sufficiently large ranges to work on.
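
A minimal sketch of that range assignment, with a simplified extranonce layout (not the actual Stratum job format):

Code:
# A pool can hand each worker a distinct extranonce prefix, so no two workers ever
# hash the same header and their search spaces cannot overlap.
def assign_extranonces(worker_ids, extranonce_bytes: int = 4):
    assignments = {}
    for i, worker in enumerate(worker_ids):
        # Each worker gets a unique fixed prefix and enumerates the remaining
        # nonce space (the "digits in brackets") on its own.
        assignments[worker] = i.to_bytes(extranonce_bytes, "big").hex()
    return assignments

print(assign_extranonces(["miner_A", "miner_B", "miner_C"]))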
legendary
Activity: 2268
Merit: 18509
January 02, 2023, 06:03:38 AM
#20
Therefore, one of the two winners' work (for rolling, whatever) is wasted.
I don't think it is.

The single winning hash itself is wasted, sure, because that block is eventually made stale* when the other winning block at the same height is built upon. But all the hashes which failed to find a winning block are not wasted.

As an example, consider that we have 200 EH/s at present. With 200 EH/s, the average block time is 10 minutes. Two miners simultaneously find a block at the same height and broadcast them. Half the network, 100 EH/s, attempts to build on Block A, and 100 EH/s attempts to build on Block B. Within 10 minutes on average, someone will successfully mine another block on top of one of those blocks, and the other block will be discarded.

Now, let's take the situation where half the network build on Block A, and the other half of the network do nothing. We now only have 100 EH/s instead of 200 EH/s, and it takes twice as long to find the next block. All the miners who were trying to mine on top of the now discarded Block B did not waste their work any more than any other miner who did not find the winning hash of the next block wasted their work. They all went to securing the network, regardless of the method by which they were unsuccessful.



*Although commonly used, orphan is the wrong term here.
legendary
Activity: 978
Merit: 1080
January 02, 2023, 03:32:10 AM
#19
When there are chain splits, every single hash computed on the losing side of the split is not wasted. They could have been valid blocks, and could have made that side the winning side.
They *are* valid blocks. But they got orphaned, and orphaning is waste. Ideally you want all valid blocks to be building on each other sequentially, preserving every valid block, with the cumulative diff fully reflecting the hashrate. With orphans, the cumulative diff underestimates the hashrate.

This is exactly why shortening the block interval too much, let's say to much shorter than a minute, is bad in PoW: it increases the orphan rate, and thereby the waste.
staff
Activity: 3374
Merit: 6530
Just writing some code
January 01, 2023, 10:20:19 PM
#18
Therefore, one of the two winners' work (for rolling, whatever) is wasted.
What makes you say that?

Every single hash takes the same amount of energy. Every single hash has the same probability of being a valid proof of work. The vast majority of hashes are not, and so are discarded. We do not consider those hashes to be wasted, so why would a hash that ends up being reorged out be considered wasted? It's no more or less wasteful than all of the hashes that weren't valid.

When there are chain splits, every single hash computed on the losing side of the split is not wasted. They could have been valid blocks, and could have made that side the winning side. Just because it ends up losing does not mean it is wasted.

Reorgs aren't bad because they waste energy. They're bad because they mean that low confirmation numbers cannot be relied upon.
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
January 01, 2023, 05:39:18 PM
#17
The analogy with the dice is likely flawed, and I find it more complicated than needed. Mining works simply enough; over-simplifying might have the opposite result.

Approximately 800 rolls after the new player joined the game, they get lucky and roll 4 sixes.  They get $10.  Are any of your rolls "wasted", just because someone else won their game?
No, they are not. But this isn't the issue I'm raising. What I'm saying is: if both of you roll the 4 dice at the same time, and win at the same time, nobody wins... yet. Someone will have to win afterwards, and use one of the previous winners as a reference (prevBlockHash). Therefore, one of the two winners' work (for rolling, whatever) is wasted.

Let's get back to Bitcoin mining, because it's easier to formulate. If you drop the time interval to 1 minute, it's rational to assume that there will be more cases where two miners mine a block and broadcast it at the same time. This means it's more likely to have energy spent on blocks that are going to be orphaned, ergo invalid, which is the definition of waste for Bitcoin.

I'm not saying that spending energy for finding that one hash is waste. I'm saying that spending energy on orphaned blocks is waste.
legendary
Activity: 3388
Merit: 4615
January 01, 2023, 04:23:22 PM
#16
    meaning that both miner B and miners from group B wasted their computational power.

    No.

    This is a common misconception.  There is no "wasted" computational power from failure to beat some other miner (or pool) in proof-of-work.

    Here's an analogy that might perhaps help you to see why:

    Instead of generating hashes, we'll roll dice.  Instead of building blocks and paying ourselves with inflation and fees, we'll simply win a game prize. Instead of a hash target, we'll have winning dice configurations.

    Here's the game rules:
    • You are given 4 balanced, fair, six-sided dice to roll (this is your "hashing algorithm")
    • You get to roll those dice once every second (this is the "hashpower" of your dice rolling ability)
    • The operator of the game will pay you $10 every time you roll sixes on all 4 dice simultaneously (This is your target)
    • Statistically, this will happen once every 1296 rolls, or approximately every 21 minutes (this is your "average block time")

    To start with, you are playing alone. Let's say you get lucky and get all sixes after 1100 rolls of the dice. How many of the previous 1099 rolls were "wasted"? Could you ever have gotten to the 1100th roll without first rolling those other 1099 times? You can't win the $10 without playing the game, and you can't play the game without rolling the dice a whole lot of times.

    Ok, you've won once. Let's say you continue to play the game and this time you're a bit unlucky. It takes you 1400 rolls of the dice to get all sixes this time.  How many of the previous 1399 rolls were "wasted"?  Again, the only way to get the 4 sixes is to roll the dice, and you're not going to get them on every roll. You HAVE to roll MANY times to get to the target. That's the way the game works. The losing rolls aren't "wasted"; they are just part of the way the game (system) works. I could, perhaps, accept an argument that they are ALL wasted, since you put in effort and got nothing those times. That's an arguable point of view that the game itself shouldn't be played at all, but I can't see any argument that some of the losing rolls were "wasted", and some were not.

    Now, let's add a second player to the game.  Here's where people start to get lost when they think about this.  Someone else's win in this game has no effect on your ability to win.  There are now two of you playing, each rolling their own 4 dice every second.  When the other player gets 4 sixes, that person gets $10. When you get 4 sixes, you get $10.  Their win of $10 doesn't eliminate your ability to win $10.  It doesn't "reset" anything for you.  There is nobody "counting dice rolls" and making sure that you only get paid when you've rolled at least 1296 times since the last win that anyone had. There isn't even anyone counting rolls and making sure that you've rolled 1296 times since YOUR last win.  It's just: 4 sixes pop up, you win.

    So, you're both rolling dice, playing your own games.  As soon as you get 4 sixes, you're gonna get $10. As soon as they get 4 sixes they're gonna get $10. Approximately 800 rolls after the new player joined the game, they get lucky and roll 4 sixes.  They get $10.  Are any of your rolls "wasted", just because someone else won their game? Aren't you still just participating in your own game? Don't you still need to be rolling the dice in order to get your 4 sixes?

    300 rolls later you get lucky and get 4 sixes. That's only 1100 rolls you've made since the game started with the new player. You got your prize 196 rolls earlier than the average for your game, just like the very first time you played the game.  So, how many of your rolls before your win of YOUR game were wasted?  The 800 that you rolled while you were waiting for the other player to win their game? The 299 losing rolls that you rolled in your game since the other player won their game?  The full 1099 losing rolls that you made since YOUR last win?  How are they "wasted" if they had to happen in order for you to get to your winning roll?

    Are you seeing yet that there are two ways of looking at this:
    • There are NO WASTED rolls (hashes) that are part of the process.  That's just what you need to do to get to your winning roll (hash)
    • EVERY roll (hash) that doesn't win is a "waste", since you put in the effort to make that roll (hash) and that particular roll (hash) didn't pay anything

    In either case, the fact that someone else happened to have a winning roll (hash) has no bearing on which of your rolls (hashes) towards your prize (block) are wasted.  Either all of them are, or none of them are (depending on your point of view) regardless of what the other players in the game happen to be doing. Failure to get your 4 sixes before someone else gets their 4 sixes has no bearing on whether your rolls were wasted or not.
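
For anyone who wants to check the analogy numerically, here is a small simulation sketch; it assumes the game exactly as described above (4 sixes with probability 1/1296 per roll) and shows that adding other players does not change how many rolls you need on average:

Code:
# Each roll of 4 fair dice comes up all sixes with probability (1/6)**4 = 1/1296,
# so on average you need ~1296 rolls. Other players roll their own dice; their wins
# never enter your game, so your average is unchanged.
import random

P_WIN = (1 / 6) ** 4   # probability of 4 sixes on a single roll

def rolls_until_win(rng: random.Random) -> int:
    rolls = 1
    while rng.random() >= P_WIN:
        rolls += 1
    return rolls

def average_rolls(num_other_players: int, games: int = 2000) -> float:
    rng = random.Random(1)
    total = 0
    for _ in range(games):
        for _ in range(num_other_players):
            rolls_until_win(rng)    # their game runs alongside yours, nothing more
        total += rolls_until_win(rng)
    return total / games

for others in (0, 1, 5):
    print(f"{others} other player(s): ~{average_rolls(others):.0f} rolls per win on average")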
    staff
    Activity: 3374
    Merit: 6530
    Just writing some code
    December 31, 2022, 07:55:07 PM
    #15
    isn't there a way to avoid reorgs without incentivizing miners to be selfish? Couldn't there be a rule (that perhaps makes system more subjective in terms of consensus) which would make nodes reject those blocks, and decide objectively between 2 blocks (without waiting for the next one to be built on top)?
    No. Given the same chain, two blocks found for the same height in that chain must be treated equally. Any measure that allows one block to be chosen over the other just means that miners have different targets to meet in order to get their block in. Using the actual work means miners might start trying for lower hashes than the target requires. Using something like size means miners will stuff transactions with crap, or not include any transactions at all. Two blocks built on the same parent must be able to be treated equally (prior to any children) to avoid gaming the system.

    Isn't a soft fork enough? I'm thinking of it as limiting the protocol rules further. Not violating the old rules.
    No. Your idea would mean that it is possible to reorg to a less work chain (under the current rules), which would not be compatible with non-upgraded nodes.
    legendary
    Activity: 2268
    Merit: 18509
    December 31, 2022, 09:52:16 AM
    #14
    Couldn't there be a rule (that perhaps makes system more subjective in terms of consensus) which would make nodes reject those blocks, and decide objectively between 2 blocks (without waiting for the next one to be built on top)?
    I don't think so. Or at least, not without fundamentally changing what bitcoin is.

    At the moment, the split is resolved when one of the two competing blocks has more work built on top of it. That is the basis of proof of work. If you come up with some different mechanism to resolve the split, then it is no longer proof of work, but proof of something else. Further, what if your proposed mechanism resolved the split in favor of Block A, but some miner running outdated code or different software or whatever accidentally built upon Block B before anyone else built upon Block A? Do we now ignore this chain with more work?
    hero member
    Activity: 789
    Merit: 1909
    December 31, 2022, 08:18:00 AM
    #13
    Quote
    I'm still thinking though: isn't there a way to avoid reorgs without incentivizing miners to be selfish?
    Even if there were, the question is: if you have a reorg-resistant chain, is it good or not? Because sometimes reorgs are needed, for example to fix bugs like the Value Overflow Incident. Imagine that someone sends a transaction that is valid under the current consensus, but was never intended to be valid. What then? How do you fix the chain without reorgs?

    Also note that there are some altcoins with some kind of "automatic lock-in" of the chain, for example after 10 confirmations it cannot be reorged. Then they are not in a better situation; it is actually worse, because they cannot fix things if something happens unnoticed and needs fixing.

    Another thing is that chain reorganization is the only way to remove malicious data from the chain. How will you do that without reorgs when needed? Leave a hash, and force nodes to assume they don't need to know the content?

    Quote
    My thought was: low block intervals with zero cost.
    It should be handled by some second layers. If you are worried that miners produce "almost valid" blocks, or that anything is "wasted", then think about pooled mining instead of solo mining; in pools, those shares are not "wasted". And if you are worried about pool centralization, then think about decentralized pools, not about getting rid of them.

    Quote
    The cost of reorgs isn't high in bitcoin, because of 10 minutes block interval, but if you were to drop it to 1 minute interval, you'd notice lots of orphan blocks.
    It is only a matter of setting the difficulty for the second network. Currently, miners in pools submit their shares more often than every 10 minutes, so if they, for example, send them every 30 seconds, their mining process is unaffected by the Bitcoin block time, as long as it is longer than their share time. They would be affected only when trying to make it slower and submit something, for example, every hour; then they have to check things every 10 minutes to make sure they work on top of the latest block. Also, for that reason, the longer the block time, the more layers can be attached under that chain, where there is always one hash per block, to move it forward.

    Quote
    Isn't a soft fork enough? I'm thinking of it as limiting the protocol rules further. Not violating the old rules.
    It is similar to the first change, where we went from "the longest chain" to "the heaviest chain". As long as the network is small, it can be easily deployed. But changing it now is hard. As hard as fixing the off-by-one error in the difficulty calculation. You can run your rules in your local mining pool and reward miners according to their hashes, but that's all; you have to be compatible with the current system. Unless you break SHA-256 or something like that, because then you need two difficulties and chain rehashing, and then you can easily produce a backward-compatible chain and create any new restrictions for some new hash function on top of that.
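
To pin down the "heaviest chain" idea mentioned here, a small sketch that computes cumulative work from each block's compact target; the nBits values are made-up examples:

Code:
# The chain that wins is the one with the most cumulative work, derived from each
# block's encoded target (nBits), not the one with the most blocks.
def target_from_nbits(nbits: int) -> int:
    exponent = nbits >> 24
    mantissa = nbits & 0x007FFFFF
    return mantissa << (8 * (exponent - 3))

def block_work(nbits: int) -> int:
    # Conceptually the same formula Bitcoin uses: expected hashes ~ 2^256 / (target + 1).
    return (1 << 256) // (target_from_nbits(nbits) + 1)

def chain_work(nbits_list) -> int:
    return sum(block_work(n) for n in nbits_list)

# A shorter chain of harder blocks outweighs a longer chain of easier ones.
short_heavy = [0x17034219] * 3     # three high-difficulty blocks (example nBits)
long_light  = [0x1D00FFFF] * 100   # one hundred difficulty-1 blocks
print(chain_work(short_heavy) > chain_work(long_light))   # True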