Topic: Why don't we prune to scale?

legendary
Activity: 1372
Merit: 1252
February 06, 2019, 11:04:21 PM
#29
@aliashraf, so what is your proposed solution for syncing with the blockchain using a pruned-mode node that only downloads the latest X MB of blocks? (X being secure based on what calculations, again? You have to give the user a minimum safe requirement, and this X value will change depending on the current hashrate and, I guess, other variables.) Because, as far as I remember, this wasn't possible and no one was proposing a realistic solution. Would it require a hardfork?
Given the current ultra-right-wing dominance in bitcoin politics, in which nobody gives a damn about progressive ideas and the cause and it is all about keeping bitcoin uncompromised to protect whales' assets, I see no future for radical improvement proposals other than hard forking. But as far as the technical discussion goes, no, it doesn't require such a fork, because UTXO commitments could be implemented using special coinbase outputs that legacy nodes wouldn't reject, and a UTXO set would become confirmed once enough commitments are achieved.

In my SDUC proposal, a very steep migration path is considered, in which we can start even with a minority of the hashrate at the beginning and simply wait for the more occasional confirmations.

By "enough" I mean the height that is considered safe by the user, not by the protocol. In pruning, the user chooses the height according to his security requirements and his understanding of chain-rewrite threats; it would be the same for UTXO commitments.


The problem is, if you start adding "progressive ideas" into Bitcoin instead of treating it as a sort of rule that we all must follow, you may give politicians space to start molding the protocol at will.

Your idea seems interesting. However, regarding "By enough I mean the height that is considered safe by the user, not by the protocol": how does the user know what's safe enough for him? People are typically too lazy and underrate risk for the sake of convenience. If people start using unsafe heights, couldn't this undermine the overall safety of the network?

Anyway I hope you get this all coded and presented as a soft fork.

By analogy, we need to distinguish between constitutional and statutory rules. Bitcoin has no legislative or judicial body, and that creates governance problems which make it very hard to deal with political issues, but it doesn't mean that such issues do not exist.

Bitcoin is one of the most political tech events in recent history: it was built around a hacker idea of resistance and as a libertarian movement, and progress has always been a core principle of such movements. Discussing radical improvements is not what makes bitcoin political; on the contrary, the political nature of bitcoin turns a purely technical improvement proposal into a political one.

As for users choosing their own level of security, it is already very common: right now users choose how many confirmations to wait for, and they choose their pruning height.

For a wallet that is not expected to make multi-million-dollar transactions, I'd rather run a pruned node with something like a 200-block height; if UTXO commitments were mandatory (a hard fork) I wouldn't raise that parameter by more than 2x-3x, but for a soft implementation (SDUC) I'd go for a 1k-2k chain height. A miner with a huge investment may tune it to 100,000+ blocks, while a home miner would stick with 1,000 or fewer. Bitcoin whales who hold/trade large amounts of bitcoin, exchanges, ... might choose to keep chains in the 10k-50k range. Historians, researchers, watchdogs, ... may keep the full history by tuning to a very large number or simply turning pruning off.

Thanks for the support, by the way.

As long as it's doable with a soft fork, then I guess it will be no more controversial than pruned nodes already are. A hardfork, however... we all know what happens with hardforks.

In any case you may add your proposal here:

https://bitcoinhardforkresearch.github.io/

Whenever there is an actual hardfork that somehow ends up without two Bitcoins, some of this stuff could be added, but like I said, I just don't see a successful hardfork happening anytime soon, if ever.
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
February 06, 2019, 04:04:56 AM
#28
@aliashraf, so what is your proposed solution for syncing with the blockchain using a pruned-mode node that only downloads the latest X MB of blocks? (X being secure based on what calculations, again? You have to give the user a minimum safe requirement, and this X value will change depending on the current hashrate and, I guess, other variables.) Because, as far as I remember, this wasn't possible and no one was proposing a realistic solution. Would it require a hardfork?
Given the current ultra-right-wing dominance in bitcoin politics, in which nobody gives a damn about progressive ideas and the cause and it is all about keeping bitcoin uncompromised to protect whales' assets, I see no future for radical improvement proposals other than hard forking. But as far as the technical discussion goes, no, it doesn't require such a fork, because UTXO commitments could be implemented using special coinbase outputs that legacy nodes wouldn't reject, and a UTXO set would become confirmed once enough commitments are achieved.

In my SDUC proposal, a very steep migration path is considered, in which we can start even with a minority of the hashrate at the beginning and simply wait for the more occasional confirmations.

By "enough" I mean the height that is considered safe by the user, not by the protocol. In pruning, the user chooses the height according to his security requirements and his understanding of chain-rewrite threats; it would be the same for UTXO commitments.


The problem is, if you start adding "progressive ideas" into Bitcoin instead of treating it as a sort of rule that we all must follow, you may give politicians space to start molding the protocol at will.

Your idea seems interesting. However, regarding "By enough I mean the height that is considered safe by the user, not by the protocol": how does the user know what's safe enough for him? People are typically too lazy and underrate risk for the sake of convenience. If people start using unsafe heights, couldn't this undermine the overall safety of the network?

Anyway I hope you get this all coded and presented as a soft fork.

By analogy, we need to distinguish between constitutional and statutory rules. Bitcoin has no legislative or judicial body, and that creates governance problems which make it very hard to deal with political issues, but it doesn't mean that such issues do not exist.

Bitcoin is one of the most political tech events in recent history: it was built around a hacker idea of resistance and as a libertarian movement, and progress has always been a core principle of such movements. Discussing radical improvements is not what makes bitcoin political; on the contrary, the political nature of bitcoin turns a purely technical improvement proposal into a political one.

As for users choosing their own level of security, it is already very common: right now users choose how many confirmations to wait for, and they choose their pruning height.
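As a side note on how this is configured today: Bitcoin Core's prune= option in bitcoin.conf takes a disk-space target in MiB rather than a block count, so a user who thinks in terms of a retained depth has to convert. A minimal sketch of that conversion, where the ~1.2 MiB average block size is an assumption for illustration only:

Code:
# Rough helper: translate a desired retained depth (in blocks) into the MiB
# target that Bitcoin Core's `prune=` option in bitcoin.conf expects.
# The ~1.2 MiB average block size is an early-2019 assumption, not a constant.
AVG_BLOCK_MIB = 1.2

def prune_target_mib(depth_blocks: int) -> int:
    # Bitcoin Core enforces a 550 MiB minimum for pruned nodes.
    return max(550, int(depth_blocks * AVG_BLOCK_MIB))

print(prune_target_mib(2000))   # e.g. keep roughly the last 2,000 blocks -> 2400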

For a wallet that is not expected to make multi-million-dollar transactions, I'd rather run a pruned node with something like a 200-block height; if UTXO commitments were mandatory (a hard fork) I wouldn't raise that parameter by more than 2x-3x, but for a soft implementation (SDUC) I'd go for a 1k-2k chain height. A miner with a huge investment may tune it to 100,000+ blocks, while a home miner would stick with 1,000 or fewer. Bitcoin whales who hold/trade large amounts of bitcoin, exchanges, ... might choose to keep chains in the 10k-50k range. Historians, researchers, watchdogs, ... may keep the full history by tuning to a very large number or simply turning pruning off.

Thanks for the support, by the way.
legendary
Activity: 1372
Merit: 1252
February 06, 2019, 12:26:27 AM
#27
@aliashraf, so what is your proposed solution for syncing with the blockchain using a pruned-mode node that only downloads the latest X MB of blocks? (X being secure based on what calculations, again? You have to give the user a minimum safe requirement, and this X value will change depending on the current hashrate and, I guess, other variables.) Because, as far as I remember, this wasn't possible and no one was proposing a realistic solution. Would it require a hardfork?
Given the current ultra-right-wing dominance in bitcoin politics, in which nobody gives a damn about progressive ideas and the cause and it is all about keeping bitcoin uncompromised to protect whales' assets, I see no future for radical improvement proposals other than hard forking. But as far as the technical discussion goes, no, it doesn't require such a fork, because UTXO commitments could be implemented using special coinbase outputs that legacy nodes wouldn't reject, and a UTXO set would become confirmed once enough commitments are achieved.

In my SDUC proposal, a very steep migration path is considered, in which we can start even with a minority of the hashrate at the beginning and simply wait for the more occasional confirmations.

By "enough" I mean the height that is considered safe by the user, not by the protocol. In pruning, the user chooses the height according to his security requirements and his understanding of chain-rewrite threats; it would be the same for UTXO commitments.


The problem is, if you start adding "progressive ideas" into Bitcoin instead of treating it as a sort of rule that we all must follow, you may give politicians space to start molding the protocol at will.

Your idea seems interesting. However, regarding "By enough I mean the height that is considered safe by the user, not by the protocol": how does the user know what's safe enough for him? People are typically too lazy and underrate risk for the sake of convenience. If people start using unsafe heights, couldn't this undermine the overall safety of the network?

Anyway I hope you get this all coded and presented as a soft fork.
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
February 04, 2019, 02:53:17 AM
#26
@aliashraf, so what is your proposed solution for syncing with the blockchain using a pruned-mode node that only downloads the latest X MB of blocks? (X being secure based on what calculations, again? You have to give the user a minimum safe requirement, and this X value will change depending on the current hashrate and, I guess, other variables.) Because, as far as I remember, this wasn't possible and no one was proposing a realistic solution. Would it require a hardfork?
Given the current ultra-right-wing dominance in bitcoin politics, in which nobody gives a damn about progressive ideas and the cause and it is all about keeping bitcoin uncompromised to protect whales' assets, I see no future for radical improvement proposals other than hard forking. But as far as the technical discussion goes, no, it doesn't require such a fork, because UTXO commitments could be implemented using special coinbase outputs that legacy nodes wouldn't reject, and a UTXO set would become confirmed once enough commitments are achieved.

In my SDUC proposal, a very steep migration path is considered, in which we can start even with a minority of the hashrate at the beginning and simply wait for the more occasional confirmations.

By "enough" I mean the height that is considered safe by the user, not by the protocol. In pruning, the user chooses the height according to his security requirements and his understanding of chain-rewrite threats; it would be the same for UTXO commitments.
legendary
Activity: 1372
Merit: 1252
February 04, 2019, 12:16:49 AM
#25
@aliashraf, so what is your proposed solution for syncing with the blockchain using a pruned-mode node that only downloads the latest X MB of blocks? (X being secure based on what calculations, again? You have to give the user a minimum safe requirement, and this X value will change depending on the current hashrate and, I guess, other variables.) Because, as far as I remember, this wasn't possible and no one was proposing a realistic solution. Would it require a hardfork?
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
January 31, 2019, 06:10:56 AM
#24
It's always going to be a tradeoff. The most hardcore users aren't going to buy that cutting off a chunk of the blockchain is safe. As long as it's optional, let other people do it, but don't ruin it for those who want to sync from the genesis block. In any case, from what I can remember reading when I asked the same thing a while ago, what I think I was told is that you basically couldn't sync without doing it from scratch anyway. Correct me if I'm wrong, but even to this day you cannot run a node in pruned mode unless you sync from scratch (you can save a ton of space by not storing the entire blockchain, since it prunes as it downloads, but it must still be downloaded and validated from scratch). I think at 1MB this is not a problem; however, with bigger blocksizes I wonder whether syncing from scratch would become a problem in the future, but so far it is not an issue.
As I've discussed extensively above-thread and elsewhere, it is absolutely possible for nodes to decide the length of the chain they want to download (during bootstrap) and maintain, because bitcoin is essentially a consensus mechanism for the state of a ledger, and as time passes this consensus gets stronger and harder to forge. Taking advantage of this inherent property, it would be trivial to download an old (and secure) enough representation of the state (the UTXO set at a given height) instead of rewinding the tape to the big bang.

But not rewinding the tape to the big bang is, like it or not, a tradeoff. As technology progresses, it shouldn't be a problem to keep being able to download the tape from the big bang, again assuming the blocksize isn't raised to the point where this becomes impossible.
You can always find a way to over-secure a system by reducing its utility and/or spending more. The naivest approach to securing bitcoin would be demanding "infinite work" (which is impractical anyway) to reach some kind of "finality".
Bitcoin is secure against long-range attacks because of its incentive model; there is no finality in bitcoin. What we have is just an accumulation of work to levels that make it unprofitable for adversaries to commit long-range attacks. There is no absolute security in bitcoin.

As for technological progress, I don't think it is good practice to complicate things, impose artificial bottlenecks on a system, and pray for technology to help; and I don't see any potential for technology to win the race against blockchain growth and the consequent bandwidth requirements, or to fix the current situation of slow and expensive bootstrapping of bitcoin full nodes.

Quote
I would need to see numbers on the percentage of potential exploits that become possible if you start cutting the tape. Exploits or failures to reach consensus can happen over a long enough timeline, and if that happens you want to have as many copies of the entire blockchain as possible, not to mention being able to download it from scratch.
Potential exploits always exist for bitcoin, but they are way too expensive to be practical. A pruned node with something like a 2,000-block height is practically as secure as a conventional full node. Supporting a fast bootstrap protocol for pruned nodes has nothing to do with their security, as long as the base UTXO set is confirmed by enough work, the same factor that justifies pruning in the first place.
legendary
Activity: 1372
Merit: 1252
January 30, 2019, 11:23:34 PM
#23
It's always going to be a tradeoff. The most hardcore users aren't going to buy that cutting off a chunk of the blockchain is safe. As long as it's optional, let other people do it, but don't ruin it for those who want to sync from the genesis block. In any case, from what I can remember reading when I asked the same thing a while ago, what I think I was told is that you basically couldn't sync without doing it from scratch anyway. Correct me if I'm wrong, but even to this day you cannot run a node in pruned mode unless you sync from scratch (you can save a ton of space by not storing the entire blockchain, since it prunes as it downloads, but it must still be downloaded and validated from scratch). I think at 1MB this is not a problem; however, with bigger blocksizes I wonder whether syncing from scratch would become a problem in the future, but so far it is not an issue.
As I've discussed extensively above-thread and elsewhere, it is absolutely possible for nodes to decide the length of the chain they want to download (during bootstrap) and maintain, because bitcoin is essentially a consensus mechanism for the state of a ledger, and as time passes this consensus gets stronger and harder to forge. Taking advantage of this inherent property, it would be trivial to download an old (and secure) enough representation of the state (the UTXO set at a given height) instead of rewinding the tape to the big bang.



But not rewinding the tape to the big bang is, like it or not, a tradeoff. As technology progresses, it shouldn't be a problem to keep being able to download the tape from the big bang, again assuming the blocksize isn't raised to the point where this becomes impossible.

I would need to see numbers on the percentage of potential exploits that become possible if you start cutting the tape. Exploits or failures to reach consensus can happen over a long enough timeline, and if that happens you want to have as many copies of the entire blockchain as possible, not to mention being able to download it from scratch.
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
January 30, 2019, 07:49:28 PM
#22
So let's say we implement it in a way that miners first commit to a UTXO set by including its hash in a block, and n blocks after that (choose a high enough n so that it is practically impossible to change that block) we attach the UTXO set to the block. This way, when the hash of the UTXO set is put into a block, everybody can verify whether it is correct. If it's not, nodes will reject that block; if it is correct, nodes will accept it.

SDUC-compatible nodes could verify it.  Non-SDUC-compatible nodes (i.e. all the current nodes on the network) would not have the functionality to verify it because they won't know of the existence of this new hash, let alone be able to verify it. The only way it could work in a trustless fashion is if a significant proportion of the network were to adopt it simultaneously.
OR
A strong enough portion of miners could adopt it, and SDUC nodes would wait for enough commitments to a UTXO set to accumulate before considering it a confirmed UTXO set, e.g. 1,000 commitments within 3,000 blocks in a scenario where 33% of miners are compatible. SDUC nodes would keep querying the blockchain (in reverse, top-down order) until they find a UTXO set with enough accumulated commitments.
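To make that counting rule concrete, here is a minimal sketch of the reverse scan; get_block and extract_commitment are hypothetical placeholders rather than an existing API, and the thresholds are just the example numbers from this post:

Code:
# Minimal sketch of the reverse commitment scan described above.
# Assumptions: get_block(height) returns a block object and
# extract_commitment(block) returns the UTXO-set hash committed in its
# coinbase, or None if the miner is not SDUC-compatible. Both are
# hypothetical placeholders.
from collections import Counter

REQUIRED_COMMITMENTS = 1000   # e.g. 1,000 commitments ...
SCAN_WINDOW = 3000            # ... within 3,000 blocks (33% of miners compatible)

def find_confirmed_utxo_commitment(tip_height, get_block, extract_commitment):
    counts = Counter()
    # Walk the chain from the tip downwards (reverse, top-down order).
    for height in range(tip_height, max(tip_height - SCAN_WINDOW, 0), -1):
        commitment = extract_commitment(get_block(height))
        if commitment is None:
            continue                                  # non-SDUC miner
        counts[commitment] += 1
        if counts[commitment] >= REQUIRED_COMMITMENTS:
            # Enough accumulated commitments: treat this UTXO-set hash as
            # confirmed and fetch the matching snapshot from peers instead
            # of downloading older blocks.
            return commitment
    return None   # not enough commitments yet; keep scanning an older window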

A more aggressive approach would be implementing UTXO commitments as a UASF once a majority of the mining hashrate is signaling support.

PS: If you know that these ideas have been discussed before, could you give me a source where I could find the discussion, or a specific term for this kind of proposal that I can search for?
Check this: https://bitcointalksearch.org/topic/m.46662809
legendary
Activity: 3724
Merit: 3063
Leave no FUD unchallenged
January 30, 2019, 07:04:58 PM
#21
So let's say we implement it in a way that miners first commit to a UTXO set by including its hash in a block, and n blocks after that (choose a high enough n so that it is practically impossible to change that block) we attach the UTXO set to the block. This way, when the hash of the UTXO set is put into a block, everybody can verify whether it is correct. If it's not, nodes will reject that block; if it is correct, nodes will accept it.

SDUC-compatible nodes could verify it.  Non-SDUC-compatible nodes (i.e. all the current nodes on the network) would not have the functionality to verify it because they won't know of the existence of this new hash, let alone be able to verify it.  The only way it could work in a trustless fashion is if a significant proportion of the network were to adopt it simultaneously.
newbie
Activity: 21
Merit: 1
January 30, 2019, 06:39:36 PM
#20
Wow these are a lot of answers, thank you very much!
So, I get that the miners could introduce a wrong hash of the UTXO set, but I don't quite understand how that could be done without anyone noticing. So let's say we implement it in a way that miners first commit to a UTXO set by including its hash in a block, and n blocks after that (choose a high enough n so that it is practically impossible to change that block) we attach the UTXO set to the block. This way, when the hash of the UTXO set is put into a block, everybody can verify whether it is correct. If it's not, nodes will reject that block; if it is correct, nodes will accept it. Then, when the UTXO set is attached to the block n blocks after the one with its hash in it, nodes could delete the now-obsolete blocks and the history can't be changed.
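As a rough illustration of the verification step in such a scheme (purely a sketch; the canonical serialization and the data layout are assumptions, not an existing format), a node that has built its own UTXO set could check a block's committed hash like this:

Code:
# Sketch: checking a block's UTXO-set commitment against the node's own UTXO set.
# `utxo_set` is assumed to be a dict {(txid, vout): (amount, script)} the node
# built itself; `committed_hash` is the hash the miner put in the block.
import hashlib

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def utxo_set_hash(utxo_set: dict) -> bytes:
    # Serialize the UTXO set in a canonical (sorted) order so every node
    # hashing the same set gets the same digest.
    blob = b"".join(
        txid + vout.to_bytes(4, "little") + amount.to_bytes(8, "little") + script
        for (txid, vout), (amount, script) in sorted(utxo_set.items())
    )
    return sha256d(blob)

def verify_commitment(utxo_set: dict, committed_hash: bytes) -> bool:
    # If the hashes differ, the block's commitment is wrong and the block
    # should be rejected, exactly as described above.
    return utxo_set_hash(utxo_set) == committed_hash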

PS: If you know that these ideas have been discussed before, could you give me a source where I could find the discussion, or a specific term for this kind of proposal that I can search for?
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
January 30, 2019, 09:46:39 AM
#19
So, I got into an argument with a friend and we were discussing the scaling problems of the blockchain.
As the blockchain gets bigger and bigger, why don't we just save the state of the blockchain (by which I mean all the utxos) every year and hash the preceding blocks, so we know that this is the correct state and erase these blocks afterwards?


How do you know the hash of the preceding blocks is the true hash? You will have to trust someone. Bitcoin is all about not having to trust anyone.
A long enough chain is valid by virtue of its own accumulated work; that is how pruning makes sense. Pruned nodes are doing fine right now in the bitcoin ecosystem without any trust-related vulnerability, because unlike SPV wallets they don't place trust in any source.

With the current state of the art, to run a pruned bitcoin full node you need to download the full blockchain, starting from the genesis block (bootstrap); that is how your node generates and maintains a UTXO set and hence becomes eligible to remove a considerable amount of the history, depending on your choice of pruning height.
Above-thread I've tried to support UTXO commitment ideas, which involve adding hash commitments to the state of the database in blocks.
As a result, pruned nodes would no longer need to download the full history, because they would be able to fetch the actual UTXO set that has enough commitments directly; no trust issue arises this way, thanks to the hash commitments.
legendary
Activity: 3640
Merit: 1571
January 30, 2019, 08:25:13 AM
#18
So, I got into an argument with a friend and we were discussing the scaling problems of the blockchain.
As the blockchain gets bigger and bigger, why don't we just save the state of the blockchain (by which I mean all the utxos) every year and hash the preceding blocks, so we know that this is the correct state and erase these blocks afterwards?


How do you know the hash of the preceding blocks is the true hash? You will have to trust someone. Bitcoin is all about not having to trust anyone.
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
January 30, 2019, 01:56:11 AM
#17
It's always going to be a tradeoff. The most hardcore users aren't going to buy that cutting off a chunk of the blockchain is safe. As long as it's optional, let other people do it, but don't ruin it for those who want to sync from the genesis block. In any case, from what I can remember reading when I asked the same thing a while ago, what I think I was told is that you basically couldn't sync without doing it from scratch anyway. Correct me if I'm wrong, but even to this day you cannot run a node in pruned mode unless you sync from scratch (you can save a ton of space by not storing the entire blockchain, since it prunes as it downloads, but it must still be downloaded and validated from scratch). I think at 1MB this is not a problem; however, with bigger blocksizes I wonder whether syncing from scratch would become a problem in the future, but so far it is not an issue.
As I've discussed extensively above-thread and elsewhere, it is absolutely possible for nodes to decide the length of the chain they want to download (during bootstrap) and maintain, because bitcoin is essentially a consensus mechanism for the state of a ledger, and as time passes this consensus gets stronger and harder to forge. Taking advantage of this inherent property, it would be trivial to download an old (and secure) enough representation of the state (the UTXO set at a given height) instead of rewinding the tape to the big bang.
legendary
Activity: 1372
Merit: 1252
January 29, 2019, 11:25:29 PM
#16
It's always going to be a tradeoff. The most hardcore users aren't going to buy that cutting off a chunk of the blockchain is safe. As long as it's optional, let other people do it, but don't ruin it for those who want to sync from the genesis block. In any case, from what I can remember reading when I asked the same thing a while ago, what I think I was told is that you basically couldn't sync without doing it from scratch anyway. Correct me if I'm wrong, but even to this day you cannot run a node in pruned mode unless you sync from scratch (you can save a ton of space by not storing the entire blockchain, since it prunes as it downloads, but it must still be downloaded and validated from scratch). I think at 1MB this is not a problem; however, with bigger blocksizes I wonder whether syncing from scratch would become a problem in the future, but so far it is not an issue.
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
January 29, 2019, 04:04:23 PM
#15
I maintain that your "need for trust" objection is not correct, though I admit that there is a price to pay for such a huge utility gain; unlike what you say, it is not about trust, it is about the desired security level against long-range attacks
A large part of the desired security level against long range attacks is reducing the need for trust. I do not think that there is enough utility to be gained from cutting off blockchain history and using an initial state to warrant the decrease in security. Of course, this is just a difference of opinion and no one is going to convince the other that they are correct. So I will stop responding to you.
Good chat. It is obviously up to you whether to continue this discussion; I respect your choice either way.

The cost of a long-range attack increases linearly with its range. Security against such an attack is achieved by the basic game-theoretic assumption of bitcoin: rational behavior of participants. Trust is not relevant here; neither the legacy protocol nor the version supporting UTXO commitments is based on trust or secures the network using trust.


staff
Activity: 3458
Merit: 6793
Just writing some code
January 29, 2019, 03:12:42 PM
#14
I maintain that your "need for trust" objection is not correct, though I admit that there is a price to pay for such a huge utility gain; unlike what you say, it is not about trust, it is about the desired security level against long-range attacks
A large part of the desired security level against long range attacks is reducing the need for trust. I do not think that there is enough utility to be gained from cutting off blockchain history and using an initial state to warrant the decrease in security. Of course, this is just a difference of opinion and no one is going to convince the other that they are correct. So I will stop responding to you.
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
January 29, 2019, 02:46:33 PM
#13
There is absolutely _no difference_ between booting from the hash of the genesis block and the hash of a UTXO set at a given height in terms of resistance to forgery; both are immune to forgery as long as the node we are booting from is able to convince us about the raw data each hash commits to.
That is completely wrong. First of all, nodes do not just have a hard coded hash of the genesis block. The genesis block is small, so it is hard coded into the software itself.
It is irrelevant; there is no need for any hard-coded copy of the raw genesis block. It is just a small, I mean very small, optimization, not important at the protocol level.

But a UTXO set as initial state, that's very different. Hardcoding a UTXO set is unfeasible because it is large. Because it is large, you can't visually inspect it and make sure that there are no unexpected UTXOs there.
There is no need to hard code either the raw genesis block or a hypothetical UTXO set. For the latter we don't even need the hash. Please check the link to SDUC above.
According to my proposed algorithm, nodes do not check any initial UTXO set; they just pick an arbitrary number of blocks as their pruning length and can tune how many commitments a UTXO set needs to accumulate to be considered secure. A straightforward calculation shows that an SDUC-compatible node with a pruned chain of about 10,000 blocks is secure against attacks that would cost up to half a billion USD at current prices and hashrates. Far more than enough for most ordinary applications.
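For what it's worth, the half-a-billion figure can be reproduced with a back-of-the-envelope calculation; the inputs below (12.5 BTC block subsidy and a roughly $3,500 BTC price, early-2019 values, with fees and hardware costs ignored) are assumptions for illustration only:

Code:
# Back-of-the-envelope cost of rewriting a 10,000-block pruned chain.
# Assumption: an attacker must at least forgo the block rewards of the honest
# chain being replaced; fees, hardware and electricity are ignored.
PRUNE_DEPTH = 10_000        # blocks kept by the SDUC node
BLOCK_SUBSIDY = 12.5        # BTC per block (2016-2020 halving period)
BTC_PRICE_USD = 3_500       # rough early-2019 price, assumption

attack_cost = PRUNE_DEPTH * BLOCK_SUBSIDY * BTC_PRICE_USD
print(f"~${attack_cost:,.0f}")   # ~$437,500,000, i.e. roughly half a billion USD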


Miners couldn't collude to insert malicious UTXO commitments into blocks, for the same reason that they couldn't do it with any other form of malicious data: full nodes will reject such blocks and commit to the alternative, healthy chain.
New nodes coming online won't know. Under this scheme, a new node that comes online is unable to verify that their initial UTXO set is correct. They can be forked off onto a separate chain which has an invalid initial UTXO set and the new node would have no idea.
Not necessarily; even current SPV nodes do not commit to an invalid chain. It is absolutely possible for nodes to download the block headers before starting to count the commitments made by the latest blocks to a candidate UTXO set; note that in this scheme I've proposed a reverse verification algorithm, in which we verify the blockchain from the latest blocks downwards.
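A headers-first check of the kind mentioned here could look roughly like the following sketch (the header fields are assumptions, and real consensus code also checks difficulty retargeting, timestamps, and more):

Code:
# Sketch: verify a downloaded header chain before counting commitments.
# Each header is assumed to expose .hash (raw 32-byte double-SHA256 digest),
# .prev_hash and .target (the proof-of-work target decoded from nBits);
# full validation checks much more than this.
def verify_header_chain(headers) -> bool:
    for prev, cur in zip(headers, headers[1:]):
        if cur.prev_hash != prev.hash:
            return False              # broken hash linkage
        # The raw digest is interpreted as a little-endian integer and must
        # not exceed the target for the proof of work to be valid.
        if int.from_bytes(cur.hash, "little") > cur.target:
            return False
    return True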

Speaking of centralization, let's take a look at the trade-offs involved:

Current Situation
pros:
  • secure against complete chain rewrite attacks
  • nothing else
You forget a few.
  • Almost no trust in any third parties required. The only trust necessary is in developers for software whose code can also be independently audited
  • The entire history from the beginning of Bitcoin is verifiable
OK, but the first one is obvious and common to both methods, and the second one is not an advantage; it is rather the disaster I pointed to as a limitation.

  • downward compatibility and software bloat because of the need for re-running the protocol from root
Have you looked at Bitcoin Core in regards to the consensus rules? There's very little bloat because the protocol really hasn't changed that much. Those things that would cause bloat have been reduced to simple if statements or outright removed (BIP 34 style activation has been entirely removed and those soft forks are activated by block height/block hash or enforced from genesis).
The activation of forks based on block height/hash is itself an example of software bloat, and your argument is not correct anyway, because we are not speaking of just the last ten years; think more strategically.

  • inherent resistance to change and evolution in time
Why is that a con? The blockchain is supposed to be immutable, after all. Not only that, but Bitcoin does evolve, just slowly and cautiously, as you should when dealing with a system that handles large amounts of money.
Securing money requires evolution and adaptation. When you impose backward compatibility on nodes, it acts like an artificial inertia that complicates things and increases the risks; that is why bitcoin has evolved slowly. To be honest, that hasn't happened intentionally or because you guys have been cautious; it has always been about downward compatibility. I understand there is a natural inertia as well, and there should be, but it should only be about the need to guarantee a few things: 1) inflation policy, 2) security, 3) immutability (of the data, not of the protocol or the code), 4) decentralization and 5) censorship resistance.

  • Great decentralization effects thanks to the realization of fast and cheap, fully functional nodes
There's more to decentralization than just more nodes. More nodes does not mean more decentralization. Take Ethereum for example. It has many light nodes, but I would not say it is decentralized. They are centralized under the Ethereum developers who can just say "we're having a hard fork at this time" and then say "sorry guys, hard fork canceled" and the entire network just does as they say. That is not decentralization, but they have many nodes.
Ethereum nodes are irrelevant; typically they are not full nodes, and the whole Ethereum thing with its governance issues is a joke, I admit, but still irrelevant here.
More full nodes means more decentralization, not absolute decentralization, because we have pools and miners to take care of as well.

  • Improved software quality because of a significant reduction in the need for downward compatibility.
Just about everything that is currently needed for consensus validation would be needed in the UTXO commitment model. In fact, this would cause more bloat because now we have to handle a new rule: UTXO commitments. All of the other things needed for backwards compatibility like non-segwit outputs still need to be maintained.
Handling UTXO commitments is trivial and opens the door to a much cleaner approach to downward compatibility, because obsolete transaction types would no longer need to be validated. Spending old UTXOs still needs a downward-compatible script-processing engine, I accept that, but it is not true of the block-validation routine.

cons:
  • vulnerability to ultra-long/complete chain rewrite attacks which are not practical anyway
  • nothing else
You forgot a few:
  • Increased trust in third parties
And if you are talking about UTXO commitments in every block, there's more cons:
  • Additional time to validate and produce blocks due to the need to hash the UTXO set
  • Issues with commitments themselves where they do not necessarily commit to all UTXOs or collide and commit to UTXOs that do not exist



Now I'm not saying the UTXO commitments are universally bad. It is an active area of research and someone may come up with a breakthrough which allows us to do this kind of pruning trustlessly. However, as it is now, UTXO commitments introduce more trust in others, and pruning the blockchain so that it is just an initial UTXO set from which blocks are built on is simply introducing too much trust to the system.

But UTXO commitments will certainly be useful even if they introduce more trust. Instead of pruning the blockchain to some initial state, they would allow for a faster sync. You can do a sort of Hybrid sync where you use a UTXO set initially and check it against UTXO commitment in blocks so that you can be synced very quickly. Then in the background the blockchain can be downloaded and checked. This would allow the best of both worlds: you can have a faster sync and reduce your node's resource usage, and the full history is still verifiable and you can verify that the UTXO set you received initially is correct. This would be optional: for those who want to have a faster sync and don't care that they are trusting others, they can use the UTXO commitments. For those who want a faster sync and are okay with initially trusting but later verifying the UTXO set, they can use the hybrid sync. And for those who want to have the least amount of trust in others and to verify everything, they can do the full sync of the blockchain.

Now you are speaking a bit more rigorously. I maintain that your "need for trust" objection is not correct, though I admit that there is a price to pay for such a huge utility gain; unlike what you say, it is not about trust, it is about the desired security level against long-range attacks: when users feel there is a more-than-one-billion-dollar incentive for adversaries to attack, they need to tune for 20k+ commitments and a correspondingly larger pruning height.
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
January 29, 2019, 12:15:52 PM
#12
Additionally, the UTXO set is kinda big. It isn't really something that you want to package with the software. But you need to get it somehow. Well, now you need to trust that whoever gave you the software (either packaged or over the network from another node) hasn't changed the UTXO set. Changing the UTXO set would not be as obvious as changing the genesis block. You could simply add an extra UTXO and basically no one would notice. It wouldn't be noticed until the UTXO was spent, and if done at the right time (when no nodes with the full history remain), would be completely unnoticed.
I understand you are also trying to educate a newbie here, and it is great to point out the challenges, but your conclusion is unreasonably biased. There is absolutely _no difference_ between booting from the hash of the genesis block and the hash of a UTXO set at a given height in terms of resistance to forgery; both are immune to forgery as long as the node we are booting from is able to convince us about the raw data each hash commits to.

If the change results in a contentious fork, which it almost certainly would, participants on the newly-formed network will believe whatever the newly-forked nodes tell them.  There would be ample opportunity for collusion during the execution of a fork.  People have had lots of practice spotting fairly obvious things like premines and changes to the capitalisation that some forks have previously attempted to sneak in, but they may not notice something as subtle as a UTXO change. 
I've proposed Soft Delayed UTXO Commitment (SDUC), which I don't want to get into right now, but take a look at it if you would; you'll find there are definite possibilities for dealing with any challenge involved, as the UTXO commitment can be delayed and processed carefully to achieve a super-soft migration:
staff
Activity: 3458
Merit: 6793
Just writing some code
January 29, 2019, 12:08:13 PM
#11
There is absolutely _no difference_ between booting from the hash of the genesis block and the hash of a UTXO set at a given height in terms of resistance to forgery; both are immune to forgery as long as the node we are booting from is able to convince us about the raw data each hash commits to.
That is completely wrong. First of all, nodes do not just have a hard coded hash of the genesis block. The genesis block is small, so it is hard coded into the software itself. Furthermore, because the genesis block is small, one can visually inspect the hard coded value and see that it is correct. Moreover, it is extremely obvious if you have the wrong genesis block: none of the blocks that you have will show up on any node you connect to or on any block explorer!

But a UTXO set as initial state, that's very different. Hardcoding a UTXO set is unfeasible because it is large. Because it is large, you can't visually inspect it and make sure that there are no unexpected UTXOs there. If it is wrong and omits a UTXO or has an extra one, you can't tell that you have the wrong UTXO set until a UTXO you don't have, or an extra UTXO you do have, is spent, causing a hard fork. Sure, hard coding a hash would help, but you are still trusting that the UTXO set that matches that hash is correct.

These are two different things, to say that they are the same is disingenuous.

Miners couldn't collude to insert malicious UTXO commitments into blocks, for the same reason that they couldn't do it with any other form of malicious data: full nodes will reject such blocks and commit to the alternative, healthy chain.
New nodes coming online won't know. Under this scheme, a new node that comes online is unable to verify that their initial UTXO set is correct. They can be forked off onto a separate chain which has an invalid initial UTXO set and the new node would have no idea.

Speaking of centralization, let's take a look at the trade-offs involved:

Current Situation
pros:
  • secure against complete chain rewrite attacks
  • nothing else
You forget a few.
  • Almost no trust in any third parties required. The only trust necessary is in developers for software whose code can also be independently audited
  • The entire history from the beginning of Bitcoin is verifiable

  • downward compatibility and software bloat because of the need for re-running the protocol from root
Have you looked at Bitcoin Core in regards to the consensus rules? There's very little bloat because the protocol really hasn't changed that much. Those things that would cause bloat have been reduced to simple if statements or outright removed (BIP 34 style activation has been entirely removed and those soft forks are activated by block height/block hash or enforced from genesis).

  • inherent resistance to change and evolution in time
Why is that a con? The blockchain is supposed to be immutable, after all. Not only that, but Bitcoin does evolve, just slowly and cautiously, as you should when dealing with a system that handles large amounts of money.

  • Great decentralization effects thanks to the realization of fast and cheap, fully functional nodes
There's more to decentralization than just more nodes. More nodes does not mean more decentralization. Take Ethereum for example. It has many light nodes, but I would not say it is decentralized. They are centralized under the Ethereum developers who can just say "we're having a hard fork at this time" and then say "sorry guys, hard fork canceled" and the entire network just does as they say. That is not decentralization, but they have many nodes.

  • Improved software quality because of a significant reduction in the need for downward compatibility.
Just about everything that is currently needed for consensus validation would be needed in the UTXO commitment model. In fact, this would cause more bloat because now we have to handle a new rule: UTXO commitments. All of the other things needed for backwards compatibility like non-segwit outputs still need to be maintained.

cons:
  • vulnerability to ultra-long/complete chain rewrite attacks which are not practical anyway
  • nothing else
You forgot a few:
  • Increased trust in third parties
And if you are talking about UTXO commitments in every block, there's more cons:
  • Additional time to validate and produce blocks due to the need to hash the UTXO set
  • Issues with commitments themselves where they do not necessarily commit to all UTXOs or collide and commit to UTXOs that do not exist



Now I'm not saying the UTXO commitments are universally bad. It is an active area of research and someone may come up with a breakthrough which allows us to do this kind of pruning trustlessly. However, as it is now, UTXO commitments introduce more trust in others, and pruning the blockchain so that it is just an initial UTXO set from which blocks are built on is simply introducing too much trust to the system.

But UTXO commitments will certainly be useful even if they introduce more trust. Instead of pruning the blockchain to some initial state, they would allow for a faster sync. You can do a sort of Hybrid sync where you use a UTXO set initially and check it against UTXO commitment in blocks so that you can be synced very quickly. Then in the background the blockchain can be downloaded and checked. This would allow the best of both worlds: you can have a faster sync and reduce your node's resource usage, and the full history is still verifiable and you can verify that the UTXO set you received initially is correct. This would be optional: for those who want to have a faster sync and don't care that they are trusting others, they can use the UTXO commitments. For those who want a faster sync and are okay with initially trusting but later verifying the UTXO set, they can use the hybrid sync. And for those who want to have the least amount of trust in others and to verify everything, they can do the full sync of the blockchain.
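A very rough sketch of that hybrid flow might look like the following (this is not an existing Bitcoin Core feature; the node object and all the helper callables are hypothetical placeholders): serve the wallet from the snapshot right away, then verify the full history in the background.

Code:
# Sketch of the hybrid sync described above; fetch_snapshot, commitment_at,
# utxo_hash and full_ibd_rebuild are hypothetical callables supplied by the
# node implementation, not real Bitcoin Core APIs.
import threading

def hybrid_sync(node, fetch_snapshot, commitment_at, utxo_hash, full_ibd_rebuild):
    snapshot, height = fetch_snapshot(node)
    # 1. Cheap check: the snapshot must match the UTXO commitment in the chain.
    if utxo_hash(snapshot) != commitment_at(node, height):
        raise ValueError("snapshot does not match the on-chain commitment")
    node.start_wallet(snapshot)          # node is usable almost immediately

    # 2. In the background, download and validate the full history and confirm
    #    it reproduces exactly the snapshot we provisionally trusted.
    def background_verify():
        rebuilt = full_ibd_rebuild(node, up_to_height=height)
        if utxo_hash(rebuilt) != utxo_hash(snapshot):
            node.alert("initial UTXO snapshot was invalid")
    threading.Thread(target=background_verify, daemon=True).start()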
legendary
Activity: 3724
Merit: 3063
Leave no FUD unchallenged
January 29, 2019, 10:46:12 AM
#10
Additionally, the UTXO set is kinda big. It isn't really something that you want to package with the software. But you need to get it somehow. Well, now you need to trust that whoever gave you the software (either packaged or over the network from another node) hasn't changed the UTXO set. Changing the UTXO set would not be as obvious as changing the genesis block. You could simply add an extra UTXO and basically no one would notice. It wouldn't be noticed until the UTXO was spent, and if done at the right time (when no nodes with the full history remain), would be completely unnoticed.
I understand you are also trying to educate a newbie here, and it is great to point out the challenges, but your conclusion is unreasonably biased. There is absolutely _no difference_ between booting from the hash of the genesis block and the hash of a UTXO set at a given height in terms of resistance to forgery; both are immune to forgery as long as the node we are booting from is able to convince us about the raw data each hash commits to.

If the change results in a contentious fork, which it almost certainly would, participants on the newly-formed network will believe whatever the newly-forked nodes tell them.  There would be ample opportunity for collusion during the execution of a fork.  People have had lots of practice spotting fairly obvious things like premines and changes to the capitalisation that some forks have previously attempted to sneak in, but they may not notice something as subtle as a UTXO change. 