Topic: Distributed node

legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
May 12, 2014, 01:17:41 PM
#13
My requirements are met just fine with Electrum.
I was asking more for the general welfare of the
ecosystem.
donator
Activity: 1218
Merit: 1079
Gerald Davis
May 12, 2014, 01:05:30 PM
#12
But we still need (or will need) a copy of the
blockchain in order to participate
as a node?  or not?  

It depends on what you mean by "a copy".

To validate transactions, engage in mining, and support SPV clients, you only need a pruned version of the blockchain.  Today all "full nodes" keep more than just the pruned version.  There are reasons some people would want to keep the unpruned version of the blockchain, and in order for new nodes to obtain it there has to be a sufficient number of nodes which have a copy and accept connections.  The vocabulary will probably evolve to highlight this difference (full node vs archive node?) much like we identify the difference between full nodes and lite nodes today.
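To make the distinction concrete, here is a rough Python sketch with hypothetical data structures (these are not Bitcoin Core's real internals): a pruned node keeps the unspent-output set plus a short tail of recent blocks, which is enough to validate new blocks, while an archive node additionally keeps every historical block so it can serve them to bootstrapping peers.

Code:
# Hypothetical illustration of pruned vs. archive nodes; these are not
# Bitcoin Core's actual data structures, just a sketch of the distinction.
from dataclasses import dataclass, field

@dataclass
class PrunedNode:
    utxo_set: dict = field(default_factory=dict)        # (txid, vout) -> output; enough to validate new blocks
    recent_blocks: dict = field(default_factory=dict)   # height -> raw block; small tail kept for reorgs

    def can_validate_new_blocks(self) -> bool:
        return True  # full validation only needs the UTXO set and recent history

    def can_serve_block(self, height: int) -> bool:
        return height in self.recent_blocks  # older blocks were thrown away

@dataclass
class ArchiveNode(PrunedNode):
    all_blocks: dict = field(default_factory=dict)       # height -> raw block, back to the genesis block

    def can_serve_block(self, height: int) -> bool:
        return True  # keeps everything, so it can feed the full chain to new nodes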

If your requirements are even smaller than what a pruned version of the database can provide, then you should be looking at SPV clients.  They still provide better security than a "distributed node" and they only need to maintain a copy of the block headers locally.  The headers will require less than 50 MB per year to store.  They request the data necessary to validate a transaction in real time, and only for the transactions they are interested in.  The data they receive from full nodes can be verified against the block headers to ensure they aren't being spoofed.
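As a quick back-of-the-envelope check on that storage figure (assuming the usual Bitcoin parameters of 80-byte block headers and a ~10-minute block target; the 50 MB above is a generous upper bound):

Code:
# Rough check on SPV header storage, assuming 80-byte headers and a
# ~10-minute block interval (the standard Bitcoin parameters).
HEADER_SIZE_BYTES = 80
BLOCKS_PER_DAY = 24 * 60 // 10            # ~144 blocks per day
BLOCKS_PER_YEAR = BLOCKS_PER_DAY * 365    # ~52,560 blocks per year

bytes_per_year = HEADER_SIZE_BYTES * BLOCKS_PER_YEAR
print(f"~{bytes_per_year / 1_000_000:.1f} MB of headers per year")  # ~4.2 MB, well under 50 MB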

staff
Activity: 4284
Merit: 8808
May 12, 2014, 01:02:30 PM
#11
But we still need (or will need) a copy of the
blockchain in order to participate
as a node?  or not? 

It sounded like you said more advanced pruning
would allow block validation without the whole
blockchain, but it hasn't been released yet
due to synchronization and DB issues.
Right, yes. You'll know about that when it's available.
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
May 12, 2014, 12:34:45 PM
#10
But we still need (or will need) a copy of the
blockchain in order to participate
as a node?  or not? 

It sounded like you said more advanced pruning
would allow block validation without the whole
blockchain, but it hasn't been released yet
due to synchronization and DB issues.
 
Sorry if it's not clear to me yet.  I'm still
early on the learning curve here.
staff
Activity: 4284
Merit: 8808
May 12, 2014, 12:20:50 PM
#9
I assume there will be some big news about it when it is released, allowing more casual users to run nodes on their PC.  Is that right?
It won't be noticeable at all except during initial sync, and even there only after a number of other improvements which make a much bigger difference. I only brought it up to further emphasize that computation isn't a major issue.
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
May 12, 2014, 11:45:00 AM
#8
I assume there will be some big news about it when it is released, allowing more casual users to run nodes on their PC.  Is that right?
legendary
Activity: 1792
Merit: 1111
May 12, 2014, 11:29:53 AM
#7
This is great. However, if there is a bug in the new code (or in openssl), we may have the BIP50-style fork again.
This is why the code wasn't deployed a year ago. (even though alt implementations have been rushing it out...)

Just about any change to consensus code has consistency risks; due care is being taken.

It's just like fixing the firmware of the ISS with astronauts inside. We are already too big to fail.
staff
Activity: 4284
Merit: 8808
May 12, 2014, 11:11:54 AM
#6
This is great. However, if there is a bug in the new code (or in openssl), we may have the BIP50-style fork again.
This is why the code wasn't deployed a year ago. (even though alt implementations have been rushing it out...)

Just about any change to consensus code has consistency risks; due care is being taken.
legendary
Activity: 1792
Merit: 1111
May 12, 2014, 11:08:03 AM
#5
As far as computation goes, at runtime the computation load of Bitcoin is very low already and sipa has written faster ecdsa code that can do over 40k verifies per second on a quad core 3.2GHz i7 desktop (about 6x faster than openssl). It's not needed at runtime but it will be nice for the initial sync.

This is great. However, if there is a bug in the new code (or in openssl), we may have the BIP50-style fork again.

To minimize the risk, the majority of miners would have to validate blocks with both the existing and the new code. Miners should not build on top of a block unless it passes BOTH. Actually, I'm not sure the 6x improvement outweighs the risk.
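A minimal Python sketch of that dual-validation rule, with the two verifier callables as hypothetical stand-ins for the existing OpenSSL-based path and the new signature code (this is not an actual Bitcoin Core API):

Code:
# Sketch of the "validate with both codes" rule described above; the two
# verifier callables are hypothetical stand-ins, not real Bitcoin Core APIs.
from typing import Callable

def should_build_on(block: bytes,
                    verify_old: Callable[[bytes], bool],
                    verify_new: Callable[[bytes], bool]) -> bool:
    """Only extend a block if BOTH the existing and the new validation code accept it."""
    old_ok = verify_old(block)   # existing, battle-tested path
    new_ok = verify_new(block)   # faster replacement being introduced
    if old_ok != new_ok:
        # A disagreement means one implementation has a consensus bug;
        # refusing to build on the block avoids extending a bad fork.
        return False
    return old_ok

# Example: both verifiers accept the block, so a miner may build on it.
assert should_build_on(b"example-block", lambda b: True, lambda b: True)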
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
May 12, 2014, 07:23:42 AM
#4
completing the pruning requires first improving the synchronization behavior, second making sure all the database corruption bugs are gone

So these things are still in progress...
staff
Activity: 4284
Merit: 8808
May 12, 2014, 04:26:10 AM
#3
A DHT is precisely the wrong technology for this: they generally have fairly poor security properties (or at least require a lot of complexity to make Sybil-resistant) and don't solve the problems we need solved here.

If you're looking to reduce storage, this is explained in section seven of the Bitcoin whitepaper. The system was designed from the beginning so that nodes do not need to store all the data in order to validate new blocks.  Bitcoin Core v0.8 reorganized the system to facilitate supporting this, but completing the pruning requires first improving the synchronization behavior, second making sure all the database corruption bugs are gone (since corruption will mean a re-download instead of just a reindex), and third some P2P enhancements so that nodes which can serve new blocks but not old ones can be distinguished.
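For a feel of the mechanism section seven relies on, here is a toy Python Merkle-branch sketch: once the root is committed in a block header, individual transactions (or whole subtrees of spent ones) can be discarded and later proven against that root with a short branch of hashes. This is illustrative only; Bitcoin's real tree uses double SHA-256 rather than the single SHA-256 here.

Code:
# Toy Merkle-branch construction and verification, illustrating the pruning
# idea in section 7 of the whitepaper. Illustrative only: Bitcoin's real tree
# uses double SHA-256; this sketch keeps just the core idea.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last hash on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_branch(leaves: list[bytes], index: int) -> list[bytes]:
    """Sibling hashes needed to link leaves[index] to the root."""
    branch, level = [], [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        branch.append(level[index ^ 1])    # the sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return branch

def verify_branch(leaf: bytes, index: int, branch: list[bytes], root: bytes) -> bool:
    node = h(leaf)
    for sibling in branch:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(txs)
proof = merkle_branch(txs, 2)
assert verify_branch(b"tx2", 2, proof, root)   # tx2 proven without keeping the other transactions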
 
As far as computation goes, at runtime the computation load of Bitcoin is very low already and sipa has written faster ecdsa code that can do over 40k verifies per second on a quad core 3.2GHz i7 desktop (about 6x faster than openssl). It's not needed at runtime but it will be nice for the initial sync.
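To put the quoted throughput in perspective, a rough calculation (the per-block signature count below is an assumed, illustrative number, not measured data):

Code:
# Rough feel for the quoted 40k verifies/sec figure. The signatures-per-block
# value is an assumption for illustration, not a measured statistic.
VERIFIES_PER_SECOND = 40_000      # quoted figure for the new ECDSA code
SIGNATURES_PER_BLOCK = 4_000      # assumption: a busy block with a few thousand inputs

new_code = SIGNATURES_PER_BLOCK / VERIFIES_PER_SECOND
old_code = new_code * 6           # openssl is quoted as roughly 6x slower
print(f"new code: ~{new_code:.2f} s of signature checks per block")   # ~0.1 s
print(f"openssl:  ~{old_code:.2f} s of signature checks per block")   # ~0.6 s

Either way the per-block cost is a fraction of a second against a ~10-minute block interval, which is why the speedup matters mainly for the initial sync, where hundreds of thousands of blocks are verified back to back.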

If you search around the forum you'll see these things have been discussed many times before.
newbie
Activity: 25
Merit: 0
May 12, 2014, 04:19:08 AM
#2
This sounds possible: some form of DHT used to store the blockchain across nodes.
I am new here and have heard it was explored in the past, but I'm not sure to what extent, or what priority it has for the core devs.
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
May 11, 2014, 11:27:07 PM
#1
If the number of nodes is dropping, would it be possible to let people run 1/10th of a node?
That would require fewer resources.

I know it sounds crazy, but why couldn't a node's computing and storage needs themselves be distributed among several machines?