
Topic: Semi-Full Bitcoin Node. Downloading from ONLY pruned nodes. (Read 471 times)

newbie
Activity: 7
Merit: 0
1) It is possible to print money in a 51% attack if other users don't have the full history. The 51% attacker outruns the whole chain by more than the month that everyone does store, so that NO-ONE has the history. Then you can do what you like. Not very likely, I agree.. (outrunning a month with 51% takes years)

Oh really?
I don't believe that. In particular, I do not believe he could spend my coins, even if that were included in the subset of things "that he liked to do".

Furthermore, it is also not true that outrunning a month with 51% of the total hash power takes years. If prepared correctly (and not script-kiddie style) it takes roughly one month plus a short overshoot.

Think of it this way. We both keep tossing coins, every day each of us tosses once and we count the number of heads that each of us accumulates.
So when will I, for the first time after a month, have more heads than you?

Well, after one month, on average each of us is expected to have 15 heads. But when will I have more heads than you? It isn't that hard:
- Within 1 month and 1 day: I have more heads than you with 50% chance
- Within 1 month and 2 days: same story! Every additional day I get (on average) a fresh 50% chance to overturn the full month, because the numbers of heads we have each accumulated stay close to each other over time. You just need to be ahead once, even for a very brief moment, to pull it off.

When you look at the maths, your advantage might be even bigger with 51% (as opposed to 49%).
Since it's not the "longest" chain (as frequently claimed) but the chain with the most work that survives, your difficulty will drop less on retargets than it does for the 49% of the network, meaning that your chain might be heavier even without containing noticeably more blocks. So even when your chain is not longer, it might still win! I am just too tired to do the math now.
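To illustrate the race described above, here is a toy Monte Carlo in Python (my own sketch with made-up parameters, mirroring the coin-toss model rather than real mining with difficulty retargets). The attacker wins each day's toss with probability 0.51 and the honest side with 0.49, and we measure how many days past the first month the attacker is first strictly ahead.

Code:
import random

def extra_days_to_overtake(p_attacker=0.51, p_honest=0.49, month_days=30, horizon_days=3650):
    """Each day the attacker and the honest network each win a 'block' with
    their own probability (the coin-toss model above). Return how many days
    past the first month the attacker is first strictly ahead, or None if it
    never happens within the horizon."""
    attacker = honest = 0
    for day in range(1, horizon_days + 1):
        attacker += random.random() < p_attacker
        honest += random.random() < p_honest
        if day >= month_days and attacker > honest:
            return day - month_days
    return None

runs = 1000
caught = sorted(r for r in (extra_days_to_overtake() for _ in range(runs)) if r is not None)
print(f"overtook within the horizon in {len(caught)}/{runs} runs")
if caught:
    print("median extra days past the month:", caught[len(caught) // 2])

This is only the simplified model from the post; it says nothing about difficulty retargets or the economics of keeping 51% of the hash power for that long.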
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
Discussing a HashCash-like improvement for bitcoin, I brought it up as a necessary step:
...  I'm thinking of a hybrid approach by giving space to wallets for participating in consensus without eliminating block miners. So many radical changes would be necessary for this to happen, on top of them getting rid of blockchain bloat and spv wallets,  interchangeability of fee and work, defining total work of a block in a more general way, ....
SPV wallets constitute the most stupid part of bitcoin. They should be eliminated completely, and unlike the OP I don't believe in a "semi-full node" replacement either. What he suggests, snapshotting the UTXO, is the key to this agenda.

Using such a snapshot has been proposed by many people before, and mostly ignored because it was considered one of those "dangerous" proposals that need a hard fork to be implemented, and in this weird community, bitcoin, hard forking is cursed, ... long story.

AFAIK @eurekafag was the first person to say something about it, in July 2010(!); he used the term snapshotting (which is why I used it above, to show my respect). The topic got no attention, but another user, @Bytecoin, rephrased it two days later and posted a more comprehensive proposal.

Satoshi Nakamoto was still around but never commented on it; Gavin Andresen didn't get it, and neither did @Theymos ... just two and a half pages of non-productive discussion. Obviously in mid-2010 there were few blocks, few UTXOs and so many other problems and priorities.

Almost one year later, in July 2011, Gregory Maxwell made a contribution to this subject. He basically proposed something that was later termed UTXO Commitment. It was the Merkle era, people were excited about the magical power of Merkle trees, and Maxwell proposed that full nodes maintain a Merkle hash tree of the UTXO set, enabling them to locate an unspent output efficiently, while miners include the root of such a tree in the coinbase transaction (others later proposed including it directly in the block header). This way, 'lite clients' would be able to ask for proof that any tx input is committed to the UTXO Merkle root included in recent blocks.
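To illustrate the mechanism (my own toy Python, not Maxwell's actual construction or any real Bitcoin serialization): a full node builds a Merkle tree over hashed UTXO entries and publishes the root, and a lite client verifies a membership branch against that root.

Code:
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root_and_proof(leaves, index):
    """Binary Merkle tree over already-hashed leaves; returns (root, proof)
    where proof is the list of sibling hashes linking leaves[index] to the
    root. Odd levels duplicate the last node, bitcoin-style."""
    level, proof = list(leaves), []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify(leaf, index, proof, root):
    """What a lite client would do: hash the branch back up and compare."""
    node = leaf
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# Pretend each UTXO is serialized as "txid:vout:amount" (a made-up format).
utxos = [f"{'ab' * 32}:{i}:{1000 + i}".encode() for i in range(5)]
leaves = [h(u) for u in utxos]
root, proof = merkle_root_and_proof(leaves, 3)
print(verify(leaves[3], 3, proof, root))  # True

In the proposal itself the root would sit in the coinbase (or header) and the proof would come from a full node, but the verification step on the client side is essentially this.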

Basically, Maxwell's proposal needs a hard fork because full nodes MUST validate the UTXO Merkle root once it is provided:
What if the coinbase TXN included the merkle root for a tree over all open transactions, and this was required by the network to be accurate if it is provided.
'A hard fork?! Better to forget about it, or at most put it, with all due respect, on the long hard-fork wish list' - that was, and still is, how proposals get handled in the bitcoin community. A few replies, again non-productive, and Maxwell's proposal gained no more steam.

In August 2012, Andrew Miller published a concrete proposal (and reference implementation) for a Merkle tree of unspent outputs (UTXOs) on bitcointalk: again, no serious discussion.
Andrew explicitly mentioned that his proposal "belongs to Hardfork Wishlist".

Peter Todd went further and proposed TXO Commitments, by which he meant committing the Merkle hash root of the state with each transaction; he also introduced a new concept, 'delayed commitment', which is a key feature, imo.

I hate this hardfork phobia in bitcoin; bcash was not bad because it was a hard fork, it was bad because of the wrong technical direction they chose, imo. But I agree that a hardfork is not a decision a community should make very frequently, and if there is a way to avoid it without too many sacrifices, it is better avoided.

So the question is not whether the OP's idea is good (of course it is); the question is whether it can be implemented without a hardfork.
This is a bump for this thread, for a special purpose:
proving Gregory Maxwell wrong here:
I'm preparing a draft for this, but I'm really sick of doing work on problems that you guys in the team are not interested in.
And I'm really sick of your insulting lectures when you can't even bother to keep up on the state of the art in what's already been proposed. Tongue Please spare me the excuses about how its other people's fault that you're not doing other things. You seem to have plenty of time to throw mud...

There are already existing designs for an assumevalid 'pruned sync'...  but not enough hours in a day to implement everything immediately, and there are many components that have needed their own research (e.g. rolling hashes like the ecmh, erasure codes to avoid having snapshots multiplying storage requirements, etc.).

If you want to help--great! But no one needs the drama.  It's hard enough balancing all the trade-offs, considerations, and boring engineering without having to worry about someone being bent that other people aren't going out of their way to make them feel important. It's hard even to just understand all the considerations that other engineers express. So no one has time for people who come in like a bull in a china shop and don't put in that effort towards other people's work. It doesn't matter who you are, no one can guarantee that any engineering effort will work out or that its results will be used even if it does.  The history of Bitcoin (as is the case in all other engineering fields) is littered with ideas that never (yet) made it to widespread use-- vastly more of them from the very people you think aren't listening to you than from you. That's just how engineering goes in the real world. Don't take it personally.

And there it is, the drama; the usual Greg Maxwell ...
I don't think it is relevant to accuse people of not having done enough research when they are asking such a question, but the above quote and this whole thread tell you everything about Maxwell's claim. I've been contributing to the subject for a while, and nobody could reasonably accuse me of being a noob here.

Speaking of software engineering ... a system that takes an age to boot is a natural candidate for applying engineering principles, imo. I understand that consensus-based, decentralized, public protocols are a hell of a new domain, but why should we stick with 'how engineering goes in the real world' of bitcoin when it has led us here?


Now, to stay more on-topic, I was just wondering what a proper answer to my question would look like Huh
What do you think about the importance of the bootstrap problem in bitcoin? Do you think a hypothetical safe approach to it could ever exist, and if so, what do you think about the importance and priority of upgrading bitcoin to support such a fast-sync feature?

For Gregory Maxwell, as a prominent figure and a pro, it would just look more professional, I presume, to say:
"Yes, we do agree that a hypothetical solid solution to the bootstrap nightmare in bitcoin is of much interest and deserves to be considered as an implementation priority". No, really, doesn't it look more professional?

But what do you think about my highlighted question?
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
..fun from a coding point of view

This is very true..


And it is useless:
In legacy blockchains, by committing to the previous block they are committing to the UTXO as well, so what's the point of a spare commitment?

Actually it is a bad practice:
It is known that redundancy puts an information system at risk of inconsistency.

I agree that if you had all the transactions the MMR commitment per block would be 'spare', since you can always work it out anyway, but in this particular system you do not always have the transactions. And the MMR commitment in the block header cannot be reproduced from a list of block headers alone. But by adding it, you can start validating blocks immediately - with just the header list. So it is not redundant as it adds an ability that wasn't there before. Whether or not you think it is a useful ability is another point.
Who talked about block headers? Oh ... it was me, sorry, but that was about fresh bootstrapping. When an SDUC node starts fresh it needs to find the chain with the most work, hence it downloads headers and queries coinbase txns top-down to find the most recent UTXO snapshot it can rely on. Thereafter it should query and download the whole blocks.

I've prepared an illustration below:
UTXO snapshots are generated every 1,000 blocks. Legacy miners who don't support SDUC remain silent, but SDUC miners commit to as many previous UTXO snapshots as they can. Depending on the ratio of SDUC-compatible miners, the snapshots become consolidated enough to be considered a replacement for the history down to the genesis block. By 'enough' I mean the security level a node deliberately chooses. Every 10,000 commitments gives roughly 1 billion dollars of security as of this writing.
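To make that scan concrete, here is a rough Python sketch (my own toy code; the committed_roots field and the block objects are made up, not part of any client) of how a freshly bootstrapping node could pick the most recent snapshot that has accumulated enough commitments:

Code:
def pick_trusted_snapshot(blocks_newest_first, required_commitments):
    """blocks_newest_first: blocks walked from the tip downwards, each exposing
    a committed_roots list of UTXO-snapshot Merkle roots pulled out of its
    coinbase. Returns the most recent root with at least required_commitments
    commitments, or None if no snapshot qualifies yet."""
    counts, first_seen = {}, {}
    for depth, block in enumerate(blocks_newest_first):
        for root in block.committed_roots:
            counts[root] = counts.get(root, 0) + 1
            first_seen.setdefault(root, depth)
    confirmed = [r for r, c in counts.items() if c >= required_commitments]
    if not confirmed:
        return None
    # "most recent" = the confirmed root first seen closest to the tip
    return min(confirmed, key=lambda r: first_seen[r])

The node would then download and verify the full blocks only from that snapshot up to the tip, as described above.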
hero member
Activity: 718
Merit: 545
..fun from a coding point of view

This is very true..


And it is useless:
In legacy blockchains, by committing to the previous block they are committing to the UTXO as well, so what's the point of a spare commitment?

Actually it is a bad practice:
It is known that redundancy puts an information system at risk of inconsistency.

I agree that if you had all the transactions the MMR commitment per block would be 'spare', since you can always work it out anyway, but in this particular system you do not always have the transactions. And the MMR commitment in the block header cannot be reproduced from a list of block headers alone. But by adding it, you can start validating blocks immediately - with just the header list. So it is not redundant as it adds an ability that wasn't there before. Whether or not you think it is a useful ability is another point.

-----------------------------

Actually - I am thinking that a system like this will HAVE to be used at some point.. Are you expected to validate a 1000 year chain of transactions if you want to sync a full node 1000 years from now ? That would take years (and it is already impossible to sync certain chains). Validating the longest header chain via POW would still be easy though.

.. Clearly 1000 years is a long way off  Tongue .. but 10 to 20 years isn't. And that could already be too much.
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
I'm afraid you could be distracted from the cause, as I mentioned above:
I understand real-time MMR refresh is low cost and (more importantly) fun from a coding point of view and I like it too but it is not the protocol we desperately need and it would be a pain to have it soft, if ever.

A peaceful transition requires spontaneous UTXO commitments to be re-committed hundreds of times before they are viable as a replacement for the history. It is true that nodes could draw conclusions from such aggregated, continuous UTXO commits, but it is not an elegant choice to make.

And it is useless:
In legacy blockchains, by committing to the previous block they are committing to the UTXO as well, so what's the point of a spare commitment?

Actually it is a bad practice:
It is known that redundancy puts an information system at risk of inconsistency. Imagine a simple general ledger system that stores, for every single transaction, the balances of the accounts involved; it is possible, but not recommended, as any designer/programmer knows.

I maintain that we don't need a rolling UTXO scheme for the purpose of pruning and should instead focus on consolidating one snapshot every few thousand blocks.
hero member
Activity: 718
Merit: 545
I already have an implementation written and functioning as part of a larger system. Works well.

Actually there are many benefits to having it real time. A couple..

1) You end up needing the information all the time anyway. Might as well calculate once at the beginning as a one-time hit of creating a block, and allow everyone to use it for ever-more, rather than constantly re-evaluating from x blocks back.

2) You can validate and participate in the network without needing any information other than the longest chain of block headers. The next block can be validated entirely from its own information and the information embedded in the header of the previous block.

3) Re-org MMR calculations are simple.

there are more..

I think intuitively, obviously, you want each block to commit the current MMR state, rather than some delayed commitment. That makes each block far more useful. A straight state machine, where the next block of data relies only on the previous one and no other extraneous information.
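A minimal sketch of that state-machine rule in Python (my own toy code, using a plain hash of the sorted UTXO set as a stand-in for the real MMR root): a block is valid only if applying its transactions to the set committed by the previous block reproduces the value it commits to.

Code:
import hashlib

def commit_utxo_set(utxos):
    """Stand-in for the MMR root: hash of the sorted, serialized outpoint set."""
    data = b"".join(sorted(f"{txid}:{vout}".encode() for txid, vout in utxos))
    return hashlib.sha256(data).hexdigest()

def apply_block(utxo_set, block_txns):
    """Each txn is (spent_outpoints, created_outpoints); outpoints are (txid, vout)."""
    new_set = set(utxo_set)
    for spends, creates in block_txns:
        for outpoint in spends:
            if outpoint not in new_set:
                raise ValueError(f"missing or already spent input: {outpoint}")
            new_set.remove(outpoint)
        new_set.update(creates)
    return new_set

def block_is_valid(prev_utxo_set, block_txns, committed_root):
    """(state after block n-1) + (block n txns) must equal the commitment in
    block n, otherwise the block is invalid."""
    return commit_utxo_set(apply_block(prev_utxo_set, block_txns)) == committed_root

A real implementation would of course commit an actual MMR or Merkle root rather than a flat hash, so that individual membership proofs stay small.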
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
You don't need to refresh the UTXO snapshot for every single block; why should you?

Suppose you have two recent UTXO snapshots, say as of 2,500 and 1,500 blocks below the current height. You might just use either of them (plus the 2,500 or 1,500 blocks since) to decide on the validity of a txn. It is up to you: how confirmed you expect a given UTXO Merkle root to be, and how many blocks have actually committed to it .... that simple!

Many people have suggested a commitment to the latest state in each block, which is not helpful at all. Actually it would be a distraction from what we are looking for: fast bootstrapping and the elimination of spv wallets by having full nodes with the least possible amount of resources.

Committing to the latest state (whether in txns or in blocks) would be useful for validation purposes but not for pruning. Here we can delay the commitment by as much as a few thousand blocks, because we can afford to keep that many blocks on any commodity device.

Please note that committing to the same stack of UTXO Merkle roots is crucial for the algorithm, because that is how commitments accumulate and the snapshots become consolidated.

I suggest we settle this issue before proceeding any further, if you don't mind.
hero member
Activity: 718
Merit: 545
Nice.

I hadn't thought too much about how to do it with a soft fork, and had been banking on just hard-forking in the best I could come up with..

Sooo.. with that in mind - this is the version I have settled on after much playing. (I need to think more about yours..)

I started with the delayed commitment - ( do you mean ..INSERT and UPDATE of items.. ? ) and you pick a step counter that starts a new epoch. Here once every 1000 blocks. But you always get into difficulties at the boundaries, and re-orgs can bounce you from one MMR root to another, making providing proofs slightly more complex (you just send both/all), and other vagaries.

You not only embed the root hash of the MMR into the block, you add all the MMR peaks, so that you have all the information required to add data as well as update.  

After a while I realised that the delay was complicating some matters and not helping in others.

What I _actually_ wanted was much simpler. I want the current MMR state embedded in every block. Real-time.

Much better. Every block HAS to commit to the correct MMR, so that (block n-1 MMR) + (block n txns) = (block n MMR), or it's invalid. Everyone DOES agree anyway - an ordered UTXO is the same for everyone - so now the Miners have to commit to it.

I use an overlapping set of the last 50 MMR states, block-time ordered, reconstructing up-to-date proofs for inputs to check txn validity, given an original MMR proof from the user that references any of the previous 50.. works well..
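For illustration, a toy peaks-only accumulator in Python in the spirit of an MMR (my own sketch, not the implementation described above): keeping just the peaks is enough to keep appending, exactly like carries in binary addition, and the peaks can be 'bagged' into a single root for the header.

Code:
import hashlib

def h(a: bytes, b: bytes = b"") -> bytes:
    return hashlib.sha256(a + b).digest()

class PeaksOnlyAccumulator:
    """peaks[i] is the root of a perfect subtree of 2**i leaves, or None.
    Appending a leaf merges equal-height peaks like binary-addition carries."""
    def __init__(self):
        self.peaks = []

    def append(self, leaf: bytes):
        carry, i = h(leaf), 0
        while i < len(self.peaks) and self.peaks[i] is not None:
            carry = h(self.peaks[i], carry)   # merge two equal-height peaks
            self.peaks[i] = None
            i += 1
        if i == len(self.peaks):
            self.peaks.append(carry)
        else:
            self.peaks[i] = carry

    def root(self) -> bytes:
        """'Bag' the peaks into a single commitment, highest peak first."""
        acc = b""
        for peak in reversed(self.peaks):
            if peak is not None:
                acc = h(acc, peak) if acc else peak
        return acc

acc = PeaksOnlyAccumulator()
for i in range(5):
    acc.append(b"txo-%d" % i)
print(acc.root().hex())

Embedding the peaks (not just the bagged root) in each block is what lets the next block keep appending without any other state, which is the point made above.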
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
I do agree that MMR is the most powerful platform for implementing UTXO commitments. Actually, I have been investigating the subject for a while and have a couple more points to share, but for the time being I'm curious whether you have any idea how we could avoid a hard fork for this? Just asking because I've got one  Wink

lol.. I'm ashamed to say I hadn't actually thought about that bit..   I suppose there are the usual suspects to do it softly softly.. you could either stuff it in the coinbase - or as an OP_RETURN in the first transaction in the block...  I have a feeling your method will be more cunning.
Sure it is.  Cheesy

Quote
Would a block definitely be considered invalid if the commitment was wrong or missing ? (I should think yes) -
No, you shouldn't. And here is the trick:
The very interesting point about UTXO commitments is their potential to be 'delayed' (and yes, I'm borrowing this term from Peter Todd), i.e. you have time to decide about their validity. To be more precise, you MUST wait for a minimum number of commitments before you start pruning the history, right? Otherwise you risk being committed to a short-to-middle-range double-spend attack with no bridges left behind to handle a chain rewrite: you would reject any (implied) rewrite that goes beyond the snapshot you are committed to, because there is no history and no genesis left.

You should wait for something like 10,000 commitments, imo. Once you reach that threshold you are ready to get rid of the history, because it takes something like 8 billion dollars (nowadays) to rewrite the Bitcoin blockchain over that range, and that is why UTXO commitments work after all.

Another interesting issue here is your free will: you could choose a more realistic and effective strategy by pruning after 1,000 blocks, once you are confident that nobody is going to commit a 1-billion-dollar attack against the network, yeah?
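As a crude back-of-the-envelope check (my own sketch; the 12.5 BTC subsidy and the $8,000 price are assumed figures for scale, not numbers from this thread, and fees, hardware and energy are ignored), the forgone block subsidies alone already put a rewrite over that many blocks in the billion-dollar range:

Code:
def rewrite_opportunity_cost_usd(blocks, subsidy_btc, btc_price_usd):
    """Crude lower bound: only the block subsidies an attacker forgoes by
    mining a private chain instead of extending the public one."""
    return blocks * subsidy_btc * btc_price_usd

# Assumed figures, purely for scale.
print(rewrite_opportunity_cost_usd(10_000, 12.5, 8_000))  # 1_000_000_000.0

The exact dollar figure obviously moves with the price and with whatever you count as attack cost; the point is only that the cost scales linearly with the number of commitments you wait for.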

Now we are ready to walk through the algorithm (it is the alpha version, published for the first time, feel free to suggest improvements), which I purposely call Soft Delayed Utxo Commitment; a rough code sketch follows right after the spec.

 
Soft Delayed Utxo Commitment
1- An SDUC-compatible node takes a snapshot of the UTXO set every 1,000 blocks and generates a Merkle root for it, using a deterministic method that allows insertions and deletions of items, such that applying the last 1,000 blocks to the most recent snapshot always yields the same new snapshot as applying the last 2,000 blocks to the snapshot before it (if any).

2- An SDUC node is configurable (and queryable) for the number of commitments it requires before committing permanently and irreversibly to a UTXO snapshot via its Merkle root. It is never allowed to be less than 1,000 commitments.

3- We define the commitments of a UTXO snapshot as the number of blocks that have embedded a commit to its Merkle root.

4- SDUC mining nodes commit to a UTXO snapshot by embedding its Merkle hash root in the coinbase transaction of their blocks as a special input. They are free to commit to as many UTXO Merkle roots as they wish (by introducing more special inputs in the coinbase), but the roots should be stacked properly, with the last UTXO Merkle root being interpreted as a reference to the state of the system after the block numbered floor(#BlockHeight/1000), the next item below it referring to the state at floor(#BlockHeight/2000), and so on.

5- In the networking layer, we add proper message formats for SDUC nodes to identify each other and consolidate their chains and snapshots.

6- SDUC nodes bootstrap in a different way compared to legacy nodes:
        
  • phase 1: the SDUC node acts like an spv node and downloads block headers.
  • phase 2: the SDUC node spots at least one SDUC peer that meets its commitment expectations, i.e. the peer has proof of the desired number_of_commitments for each UTXO commitment it presents and that the bootstrapping SDUC node is interested in. Thereafter it consolidates its chain with that peer, top-down, down to (at least) the first fully confirmed UTXO snapshot - confirmed from the bootstrapping SDUC node's point of view, obviously.

7- SDUC nodes are free to ignore all history beyond their most recent UTXO snapshot, which is held to be confirmed by virtue of at least number_of_commitments blocks committing to that specific snapshot.

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------        
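A rough Python sketch of the bookkeeping in steps 1-4 and 7 (my own toy code; the class and method names are made up and the coinbase serialization of the stacked roots is omitted):

Code:
SNAPSHOT_INTERVAL = 1000

class SducState:
    """Tracks snapshot roots taken every SNAPSHOT_INTERVAL blocks and how many
    later blocks have committed to each of them via their coinbase."""
    def __init__(self, number_of_commitments=10_000):
        if number_of_commitments < 1000:
            raise ValueError("spec: never less than 1,000 commitments")
        self.number_of_commitments = number_of_commitments
        self.snapshot_roots = {}      # snapshot height -> UTXO Merkle root
        self.commitment_count = {}    # UTXO Merkle root -> committing blocks

    def on_new_block(self, height, current_utxo_root, coinbase_committed_roots):
        # Step 1: record a snapshot root every 1,000 blocks.
        if height % SNAPSHOT_INTERVAL == 0:
            self.snapshot_roots[height] = current_utxo_root
        # Steps 3-4: every root stacked in this coinbase counts as one commitment.
        for root in coinbase_committed_roots:
            self.commitment_count[root] = self.commitment_count.get(root, 0) + 1

    def prune_height(self):
        """Step 7: the most recent snapshot that reached the configured
        threshold; history below it may be discarded (None if none yet)."""
        confirmed = [ht for ht, root in self.snapshot_roots.items()
                     if self.commitment_count.get(root, 0) >= self.number_of_commitments]
        return max(confirmed) if confirmed else None

Whether a non-mining node trusts a given snapshot remains entirely its own choice of number_of_commitments, which is what keeps the scheme soft.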

Implementation is straightforward, but for such a BIP to be adopted in Bitcoin Core (or any other bitcoin clone) there are two options:
1- Forking the source code and building an alternative client.
Generally, I hate this approach; as dissatisfied as I am with the conservative atmosphere among Core devs, I strongly believe in keeping development centralized and open. It also wouldn't work here because of the miners and the need for at least some percentage of them (like 5%, I suppose) running the client software; you just can't bring software out of nowhere and ask miners to run it, given the stakes they have put into their business.

2- The hard but more promising way: convincing Doomad and cellard not to ruin this topic and having a productive discussion, cooling down Gregory Maxwell and convincing him to contribute instead of arguing or showing no interest, formalizing a BIP, working hard on the implementation details, testing, and praying for the BIP to be merged.

At the end of the day we have a soft-soft migration path: SDUC nodes grow smoothly, without any conflict or chain split, because every node is in a sense an SDUC node. Legacy nodes can be seen as simply having the number_of_commitments parameter set to a very large number, for owners who prefer and can afford to keep the whole blockchain history, while a growing number of nodes that need more robust and efficient management of their resources use more reasonable values. They could coexist in peace for a very long time, even forever.

Quote
but maybe users of the scheme could craft specific transactions that they share for each other only.. via the blocks.. and we don't have to fork at all.
kinda ... you are super smart dude, we should hang out a bit more  Wink

Quote
What I am more curious about is a solution to storing the old pruned data from the blocks.. in a distributed way. With all these file-store coins (I'll be honest I am not 100% up on how they are functioning), would it not be possible for the network to store JUST this one large file.. ?  
Not a big deal, imo. There will always be nodes with a very large number_of_commitments set, and we are super safe. For a hypothetical scenario in which we are short of such nodes, your solution works; nothing will be lost, and anybody would be able to rebuild the blockchain from the ground up, perhaps using special software.
legendary
Activity: 3122
Merit: 2178
Playgram - The Telegram Casino
What I am more curious about is a solution to storing the old pruned data from the blocks.. in a distributed way. With all these file-store coins (I'll be honest I am not 100% up on how they are functioning), would it not be possible for the network to store JUST this one large file.. ?  

Nice thinking.

Challenge being that storage coins expect to be paid for their services.

That is, miners (or whatever the terminology is for users providing storage space) expect to receive a fee, usually in the form of the respective native token. Who'd pay for that? We'd be back to relying on people voluntarily hosting a full node, but with extra steps involved. The effective cost of hosting a full node in terms of bandwidth and hard disk space stays the same and would likewise increase the fewer nodes are involved (in this case, storage coin nodes responsible for hosting the blockchain).
hero member
Activity: 718
Merit: 545
I do agree that MMR is the most powerful platform for implementing UTXO commitments. Actually, I have been investigating the subject for a while and have a couple more points to share, but for the time being I'm curious whether you have any idea how we could avoid a hard fork for this? Just asking because I've got one  Wink

lol.. I'm ashamed to say I hadn't actually thought about that bit..   I suppose there are the usual suspects to do it softly softly.. you could either stuff it in the coinbase - or as an OP_RETURN in the first transaction in the block...  I have a feeling your method will be more cunning.

Would a block definitely be considered invalid if the commitment was wrong or missing ? (I should think yes) - but maybe users of the scheme could craft specific transactions that they share for each other only.. via the blocks.. and we don't have to fork at all.

-----------------

What I am more curious about is a solution to storing the old pruned data from the blocks.. in a distributed way. With all these file-store coins (I'll be honest I am not 100% up on how they are functioning), would it not be possible for the network to store JUST this one large file.. ?  
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
....
I hate this hardfork phobia in bitcoin; bcash was not bad because it was a hard fork, it was bad because of the wrong technical direction they chose, imo. But I agree that a hardfork is not a decision a community should make very frequently, and if there is a way to avoid it without too many sacrifices, it is better avoided.

So the question is not whether the OP's idea is good (of course it is); the question is whether it can be implemented without a hardfork.

Good breakdown.

I would also add Bram Cohen's UTXO Merkle Set Proposal : https://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2017-07-08-bram-cohen-merkle-sets/

It uses 1 bit per txn output to store spent or unspent. It's super simple and gives a 256x space advantage over a regular list of 32-byte hashes, and you provide the proofs yourself when you want to spend (unlike MMR proofs, they don't change, but they are bigger).

( I was using a system where I stored using Bram's first and then MMR for later, but ended up going for just MMR. )
I do agree that MMR is the most powerful platform for implementing UTXO commitments. Actually, I have been investigating the subject for a while and have a couple more points to share, but for the time being I'm curious whether you have any idea how we could avoid a hard fork for this? Just asking because I've got one  Wink
hero member
Activity: 718
Merit: 545

Gentlemen - Please stop. Thank you.

--------------------------------------------------------------

Discussing a HashCash-like improvement for bitcoin, I brought it up as a necessary step:
...  I'm thinking of a hybrid approach by giving space to wallets for participating in consensus without eliminating block miners. So many radical changes would be necessary for this to happen, on top of them getting rid of blockchain bloat and spv wallets,  interchangeability of fee and work, defining total work of a block in a more general way, ....
SPV wallets constitute the most stupid part of bitcoin. They should be eliminated completely, and unlike the OP I don't believe in a "semi-full node" replacement either. What he suggests, snapshotting the UTXO, is the key to this agenda.

Using such a snapshot has been proposed by many people before, and mostly ignored because it was considered one of those "dangerous" proposals that need a hard fork to be implemented, and in this weird community, bitcoin, hard forking is cursed, ... long story.

AFAIK @eurekafag was the first person to say something about it, in July 2010(!); he used the term snapshotting (which is why I used it above, to show my respect). The topic got no attention, but another user, @Bytecoin, rephrased it two days later and posted a more comprehensive proposal.

Satoshi Nakamoto was still around but never commented on it; Gavin Andresen didn't get it, and neither did @Theymos ... just two and a half pages of non-productive discussion. Obviously in mid-2010 there were few blocks, few UTXOs and so many other problems and priorities.

Almost one year later, in July 2011, Gregory Maxwell made a contribution to this subject. He basically proposed something that was later termed UTXO Commitment. It was the Merkle era, people were excited about the magical power of Merkle trees, and Maxwell proposed that full nodes maintain a Merkle hash tree of the UTXO set, enabling them to locate an unspent output efficiently, while miners include the root of such a tree in the coinbase transaction (others later proposed including it directly in the block header). This way, 'lite clients' would be able to ask for proof that any tx input is committed to the UTXO Merkle root included in recent blocks.

Basically, Maxwell's proposal needs a hard fork because full nodes MUST validate the UTXO Merkle root once it is provided:
What if the coinbase TXN included the merkle root for a tree over all open transactions, and this was required by the network to be accurate if it is provided.
'A hard fork?! Better to forget about it, or at most put it, with all due respect, on the long hard-fork wish list' - that was, and still is, how proposals get handled in the bitcoin community. A few replies, again non-productive, and Maxwell's proposal gained no more steam.

In August 2012, Andrew Miller published a concrete proposal (and reference implementation) for a Merkle tree of unspent outputs (UTXOs) on bitcointalk: again, no serious discussion.
Andrew explicitly mentioned that his proposal "belongs to Hardfork Wishlist".

Peter Todd went further and proposed TXO Commitments, by which he meant committing the Merkle hash root of the state with each transaction; he also introduced a new concept, 'delayed commitment', which is a key feature, imo.

I hate this hardfork phobia in bitcoin; bcash was not bad because it was a hard fork, it was bad because of the wrong technical direction they chose, imo. But I agree that a hardfork is not a decision a community should make very frequently, and if there is a way to avoid it without too many sacrifices, it is better avoided.

So the question is not whether the OP's idea is good (of course it is); the question is whether it can be implemented without a hardfork.

Good breakdown.

I would also add Bram Cohen's UTXO Merkle Set Proposal : https://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2017-07-08-bram-cohen-merkle-sets/

It uses 1 bit per txn output to store spent or unspent. It's super simple and gives a 256x space advantage over a regular list of 32-byte hashes, and you provide the proofs yourself when you want to spend (unlike MMR proofs, they don't change, but they are bigger).

( I was using a system where I stored using Bram's first and then MMR for later, but ended up going for just MMR. )
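Just to illustrate the space argument (my own toy Python, not Bram Cohen's actual Merkle-set structure): one bit per output in creation order versus a 32-byte hash per output is where the 256x comes from.

Code:
class SpentBitfield:
    """One bit per transaction output, in output-creation order:
    0 = unspent, 1 = spent."""
    def __init__(self, n_outputs):
        self.bits = bytearray((n_outputs + 7) // 8)

    def mark_spent(self, index):
        self.bits[index // 8] |= 1 << (index % 8)

    def is_spent(self, index):
        return bool(self.bits[index // 8] & (1 << (index % 8)))

bf = SpentBitfield(n_outputs=1_000_000)
bf.mark_spent(42)
print(bf.is_spent(42), bf.is_spent(43))   # True False
print(len(bf.bits), 1_000_000 * 32)       # 125000 bytes vs 32000000 bytes of hashes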
 
legendary
Activity: 3948
Merit: 3191
Leave no FUD unchallenged
Is that before or after you eliminate ASICs, mining pools, off-chain development, free will, etc?  This seems to be a common theme with you.  Is there anything you'd leave intact if it were up to you?
Not much, with the exception of free will (why should you mention this?)

Because most of the "improvements" you propose for Bitcoin involve depriving people of their right to do something which they already currently do.  You think you can just ban all the things you don't like, as though you were some sort of dictator.  That's not progress, that's oppression.  Something which is generally considered the opposite of progression.  It's also a mentality which is largely impotent in a permissionless system, so good luck with that.
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
SPV wallets constitute the most stupid part of bitcoin. They should be eliminated completely

Is that before or after you eliminate ASICs, mining pools, off-chain development, free will, etc?  This seems to be a common theme with you.  Is there anything you'd leave intact if it were up to you?
Not much, with the exception of free will (why should you mention this?); the other ones are pure garbage, plus spv wallets. But fortunately for you and other respected "investors", it is not up to me, and you can sell your shit to people. Oh wait, you can't anymore? Sorry, but it is not my fault.
legendary
Activity: 3948
Merit: 3191
Leave no FUD unchallenged
SPV wallets constitute the most stupid part of bitcoin. They should be eliminated completely

Is that before or after you eliminate ASICs, mining pools, off-chain development, free will, etc?  This seems to be a common theme with you.  Is there anything you'd leave intact in the horrific scenario where it was left up to you to decide these things?
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
Discussing a HashCash-like improvement for bitcoin, I brought it up as a necessary step:
...  I'm thinking of a hybrid approach by giving space to wallets for participating in consensus without eliminating block miners. So many radical changes would be necessary for this to happen, on top of them getting rid of blockchain bloat and spv wallets,  interchangeability of fee and work, defining total work of a block in a more general way, ....
SPV wallets constitute the most stupid part of bitcoin. They should be eliminated completely, and unlike the OP I don't believe in a "semi-full node" replacement either. What he suggests, snapshotting the UTXO, is the key to this agenda.

Using such a snapshot has been proposed by many people before, and mostly ignored because it was considered one of those "dangerous" proposals that need a hard fork to be implemented, and in this weird community, bitcoin, hard forking is cursed, ... long story.

AFAIK @eurekafag was the first person to say something about it, in July 2010(!); he used the term snapshotting (which is why I used it above, to show my respect). The topic got no attention, but another user, @Bytecoin, rephrased it two days later and posted a more comprehensive proposal.

Satoshi Nakamoto was still around but never commented on it; Gavin Andresen didn't get it, and neither did @Theymos ... just two and a half pages of non-productive discussion. Obviously in mid-2010 there were few blocks, few UTXOs and so many other problems and priorities.

Almost one year later, in July 2011, Gregory Maxwell made a contribution to this subject. He basically proposed something that was later termed UTXO Commitment. It was the Merkle era, people were excited about the magical power of Merkle trees, and Maxwell proposed that full nodes maintain a Merkle hash tree of the UTXO set, enabling them to locate an unspent output efficiently, while miners include the root of such a tree in the coinbase transaction (others later proposed including it directly in the block header). This way, 'lite clients' would be able to ask for proof that any tx input is committed to the UTXO Merkle root included in recent blocks.

Basically, Maxwell's proposal needs a hard fork because full nodes MUST validate the UTXO Merkle root once it is provided:
What if the coinbase TXN included the merkle root for a tree over all open transactions, and this was required by the network to be accurate if it is provided.
'A hard fork?! Better to forget about it, or at most put it, with all due respect, on the long hard-fork wish list' - that was, and still is, how proposals get handled in the bitcoin community. A few replies, again non-productive, and Maxwell's proposal gained no more steam.

In August 2012, Andrew Miller published a concrete proposal (and reference implementation) for a Merkle tree of unspent outputs (UTXOs) on bitcointalk: again, no serious discussion.
Andrew explicitly mentioned that his proposal "belongs to Hardfork Wishlist".

Peter Todd went further and proposed TXO Commitments, by which he meant committing the Merkle hash root of the state with each transaction; he also introduced a new concept, 'delayed commitment', which is a key feature, imo.

I hate this hardfork phobia in bitcoin; bcash was not bad because it was a hard fork, it was bad because of the wrong technical direction they chose, imo. But I agree that a hardfork is not a decision a community should make very frequently, and if there is a way to avoid it without too many sacrifices, it is better avoided.

So the question is not whether the OP's idea is good (of course it is); the question is whether it can be implemented without a hardfork.
hero member
Activity: 718
Merit: 545
Thanks for the input.

1) It is possible to print money in a 51% attack if other users don't have the full history. The 51% attacker outruns the whole chain by more than the month that everyone does store, so that NO-ONE has the history. Then you can do what you like. Not very likely, I agree.. (outrunning a month with 51% takes years)

2) You can still verify the longest chain via POW even with this maximal pruning. It is not a blind trust-the-peer situation.  

3) A peer cannot tamper / alter / change the data he is providing you - because.. Hashing!. Either it is the chain data or it isn't. At that stage I would simply go with the chain with the most POW.

4) The man in the middle attack - where the attacker cuts me off from the valid network, so I only see their chain, is a concern even without the pruning. 

5) As long as you keep up with the network, log in once a month, you have the _same_ security as non-pruned bitcoin - as you still validate the whole chain.

I think what this does is change the requirements for the network from - everyone needs a big hard drive - to - everyone needs a small hard drive and to log in once a month. Fair enough.

I agree that if you miss your 1 month window.. you'll need to place trust in the longest POW chain, but that seems like a given anyway.

------------------------

EDIT : Transactions from years back would still be available as you could provide the merkle proofs that linked to the block headers with the original data. You'd have to store them yourself though.
legendary
Activity: 3122
Merit: 2178
Playgram - The Telegram Casino
The impression I get is that people either decide to run a full node on purpose or just go straight for a SPV wallet. Running a "semi-full" node (eg. Bitcoin Core with pruning enabled) seems to be the exception. Accordingly I doubt that providing the ability to run a semi-full node increases the overall node count much. However I'm just extrapolating from anecdotal observations without having anything substantial to back this claim up, so don't take my word for it.

I think the problem at hand is that the fewer full nodes there are, the more traffic they need to bear. This in turn will make running a full node even harder, causing more full nodes to drop off, further increasing the traffic on the remaining nodes, until only a handful of very costly full nodes are left. And every new pruned node that comes online needs these full nodes to bootstrap, otherwise it won't even become a semi-full node.
legendary
Activity: 3038
Merit: 4418
Crypto Swap Exchange
If a Semi-Full Bitcoin node only stored the complete UTXO, the last month's worth of Blocks in full, and the rest ONLY as block headers, how bad could it be ? You'd still have a complete record of the POW & User Balances, and the last month's data complete.
Not very useful. The inability of the client to independently validate everything defeats the purpose of trustlessness in Bitcoin, and it is no different from an SPV client.
I know - if you don't have the whole chain, haven't verified every transaction since genesis - you cannot independently be sure that the whole chain is valid.  But is this actually a serious threat ?
Yes. If you are not 100% sure of the information that you're fed, the only thing you can do is to trust the person who provided you the information, which is risky.
A 51% attack could in theory print money - but that would need to go on for over a month (much more than a month actually at 51% only) and is easily recognisable. I just don't see it.
51% attacks can't print money out of thin air. They still need UTXOs to spend and have to follow the network rules regarding the block rewards. You simply have to overtake the other chain, and that wouldn't be noticeable at all until the attack is over. 51% attacks use long block reorgs to evade detection.
Are we saying there is even the remotest chance that a block from a month ago has any txn errors ? With all the users that are currently running bitcoin having somehow _missed_ it ? So why do I need to download and verify it ?
Because you can't be sure that whoever you are connected to is not malicious.
Connecting to a network of these kinds of nodes absolutely does not have the _same_ security as a full blown Bitcoin node, but it's not far off. And if it meant that many more people ran semi-full nodes, I think it could be a bonus.

A user would only need to log on once a month minimum to catch up - before data was discarded. Seems squarely within the realms of possibility.

(I know you can do this already with Bitcoin - I am wondering if pruned nodes was the default install, and the majority of the nodes on the network used this, could the network  still thrive)

It will definitely not thrive. It is simply not possible for the network to run only on pruned nodes. Without full nodes that keep all the data, it would be impossible for any node to retrieve the exact transaction data for any transaction back in time. If this continues, the problem would only get worse. It would be inherently difficult for anyone to prove that they made a transaction 2 years ago, should a contract last that long.


Unless, as Bob said, there is enough redundancy among the historical nodes; but it is simply impossible for those nodes to have sufficient redundancy, because no one wants to run them, and if they do, there is a high cost to it.