.....
How does this work with the initial election of ServiceNodes and the final payment made to that node?
With the model outlined you would have to wait for a certain number of Bitcoin confirmations before a node was paid, so wouldn't you end up with a backlog of payments?
Let's go back a few years.
The concept of Masternodes was not actually invented by DASH (Darkcoin back then). Dash took an existing concept and added random selection and random election (I know these things).
The term Master Node is derived from pieces of existing concepts, like this snippet from December 2010, where master node selection was discussed in detail. It relates to selecting a cloud server from a group and making it the master (masternodes / servicenodes in terms of our crypto world are cloud servers too). How to select which server is made the master and which are the slaves was the issue being discussed in this example back from 2010:
+Majority election for the cloud
+===============================
+
+This package helps you elect a master node among a set of identical nodes (say, electing a master database server among the servers running the database in your cloud, or the master load-balancer server).
+
+See `test/test_majority.coffee` for usage example.
+
+
+Consensus protocol
+------------------
+
+The idea is to elect an online node with a minimal ID as a master. (If you use IP addresses as node IDs, which you should, then the node with the lexicographically smallest IP address wins.)
+
+To implement this idea, there are a few complications to take care of:
+
+* Getting the nodes to agree which one of them is a node with a minimal ID.
+
+ Solution: Each node broadcasts its vote on who should be elected. Each node also receives the broadcasts, counts votes and picks a winner based on a majority consensus. The incoming broadcasts are also used to judge whether peer nodes are online or offline.
+
+* If there's a “network split” (a condition where two or more groups of nodes are online, but cannot communicate across groups), we have to avoid picking several master nodes.
+
+ Solution: There must be a known list of all nodes, and for a winner to be elected, more than N/2 votes are required (where N is the total number of nodes, not a number of nodes that voted).
+
+* When nodes with small IDs come back online, master may be re-elected even though the old one is perfectly capable of functioning. In the worst case, if the node with a minimal ID is unstable (goes offline and back online often), the whole system may be disrupted by frequent re-elections.
+
+ Solution: Only re-elect a master when the old one is offline. To do it, there are two separate elections going on: one which goes as described and picks a “candidate” (the node to become a master when the current master goes offline), and another one that picks the actual master.
+
+ A node always gives its candidate vote to the online node with the smallest ID. The master vote is kept unchanged until the current master goes offline, in which case the *currently elected candidate* becomes the new master vote (i.e. a node only updates its master vote to the candidate which has been elected by the candidate election prior to that moment).
+
+ Both master and candidate elections happen on every broadcast (e.g. every 5 seconds), but master votes only change when the old master goes away, so the master is not re-elected until necessary.
+
+* Which nodes to consider offline?
+
+ For now, a node is considered offline if no broadcast has been received from it during the period between two broadcasts. This is an area that could obviously use some improvement, but even this simple rule is good enough.
+
+The algorithm is very simple, avoids special-casing the “current node” in any way (it also assumes that a node receives its own broadcasts), and cannot possibly re-elect a master unless a majority of nodes decide that such re-election is needed.
+
+
+Status
+------
+
+This code HAS NOT been used in a real-world application yet — it was certainly written with such use in mind, but I still need to cover some more basic stuff in my cloud stack before I get to the monitoring service.
+
+
+Installation
+------------
+
+Will be released as an NPM package once it gets some real-world testing.
+
+
+Thanks
+------
+
+Martin v. Löwis, Spike Gronim and Nick Johnson for the answers on my StackOverflow question [How to elect a master node among the nodes running in a cluster?](http://stackoverflow.com/questions/4523185/how-to-elect-a-master-node-among-the-nodes-running-in-a-cluster)
+
+
+License
+-------
+
+MIT license.
I love the function name in here: 'pickWinner'
+pickWinner = (threshold, votes) ->
+ counts = {}
+ for vote in votes
+ counts[vote] = (counts[vote] || 0) + 1
+ if counts[vote] >= threshold
+ return vote
+ return null
+
+startMajorityElector = (options) ->
+ { nodeId: myNodeId, pingInterval, obtainListOfNodes, broadcast, becomeMaster, giveUpMaster, trace } = options
+
+ lastReplies = {}
+ nextReplies = {}
+
+ consensusMasterId = null
+ consensusCandidateId = null
+
+ myMasterVote = null
+ myCandidateVote = null
+
+ pingAllNodes = ->
+ [lastReplies, nextReplies] = [nextReplies, {}]
+
+ allNodes = obtainListOfNodes()
+
+ allNodesIndexed = {}
+ for node in allNodes
+ allNodesIndexed[node.nodeId] = node
+
+ # compute consensus votes
+
+ # ignore replies from unlisted nodes
+ legitimateReplies = (reply for nodeId, reply of lastReplies when allNodesIndexed[nodeId])
+
+ masterVotes = (reply.masterVote for reply in legitimateReplies)
+ candidateVotes = (reply.candidateVote for reply in legitimateReplies)
+
+ threshold = allNodes.length / 2 # thank gods, in JavaScript 3 / 2 == 1.5
+
+ previousConsensusMasterId = consensusMasterId
+ consensusMasterId = pickWinner(threshold, masterVotes)
+ consensusCandidateId = pickWinner(threshold, candidateVotes)
+
+ if previousConsensusMasterId != consensusMasterId
+ if previousConsensusMasterId == myNodeId
+ giveUpMaster()
+ else if consensusMasterId == myNodeId
+ becomeMaster()
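Just to show how simple the core really is, here's a tiny usage sketch of pickWinner on its own (the node IDs are made up, and it assumes pickWinner from the snippet above is in scope):

# Three known nodes, so the majority threshold is 3 / 2 = 1.5 votes.
votes = ["10.0.0.1", "10.0.0.1", "10.0.0.2"]
console.log pickWinner(3 / 2, votes)                       # -> "10.0.0.1"

# If only two of the three nodes reply and they disagree, nobody wins.
console.log pickWinner(3 / 2, ["10.0.0.1", "10.0.0.2"])    # -> null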
Now, we want to do the same thing. Which ServiceNode gets paid is something that has only this year become decentralized: the process was originally a reference node controlled by the dev, which wasn't great as it was centralized, but that's now gone.
So we can untie the functions of ServiceNodes (functions such as providing data mining services, escrow, etc.) from the election and reward-payment process. In our case we want a random group of servers to check whether a ServiceNode is actually running a full Bitcoin node.
So what we could do is pay all nodes in sequence, as the existing SPR process does, but if a particular ServiceNode fails to confirm a hash from the Bitcoin blockchain, it gets bumped off the payments list. Its score can be marked down so badly that it needs to prove itself again.
So the need to wait for Bitcoin confirmations can be removed from some of the processes, which avoids a payments queue / backlog forming that would otherwise become a pot of money and therefore a target for hackers.
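To make that concrete, here's a rough sketch of the check-and-bump idea in the same CoffeeScript style as the snippet above. Everything here is hypothetical - requestBlockHash, knownBlockHash and the scoring numbers are placeholders, not an actual SPR implementation:

verifyServiceNode = (node, blockHeight, knownBlockHash, callback) ->
  # Ask the ServiceNode for the hash of a randomly chosen block height and
  # compare it against the hash our own full Bitcoin node reports.
  node.requestBlockHash blockHeight, (err, reportedHash) ->
    callback(not err and reportedHash == knownBlockHash)

FAILURE_PENALTY = 10   # made-up scoring numbers
BUMP_THRESHOLD = 20

updatePaymentsList = (paymentsList, node, passed) ->
  if passed
    node.score = 0
    return paymentsList
  node.score = (node.score || 0) + FAILURE_PENALTY
  if node.score >= BUMP_THRESHOLD
    # Bumped: the node drops out of the payment rotation until it proves itself again.
    return (n for n in paymentsList when n isnt node)
  paymentsList

The point is that the check is cheap for the quorum doing it, while a failed check immediately affects the payment rotation instead of leaving confirmations (and money) queued up.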
This proof of running a Bitcoin node in the current proposals opens up the potential of reintroducing the original Satoshi whitepaper model - nodes using PoW to validate something. In our case, to validate that they actually exist.
PoW in Hashcash was created to fight email spam - a user sending an email had to generate some PoW before the email could be sent. This is basically what we are doing - PoW to generate a hash of a message from the SPR network that only that node will know about. If the node fails to generate the inputs into the hash, that's proof the node is fake; or, more likely, an attacker won't want to keep an army of Sybil nodes if each one has to carry out some PoW in order to get paid once a day or once a week.
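As a minimal sketch of that Hashcash-style check (again just an illustration - the challenge format and difficulty are invented, and a real scheme would tie the challenge to data only that node holds):

crypto = require 'crypto'

# The node must find a nonce so that SHA-256(challenge + nonce) starts with
# `difficulty` zero hex characters - cheap to verify, costly to produce.
solveChallenge = (challenge, difficulty) ->
  prefix = '0'.repeat(difficulty)
  nonce = 0
  loop
    digest = crypto.createHash('sha256').update("#{challenge}:#{nonce}").digest('hex')
    return {nonce, digest} if digest[0...difficulty] == prefix
    nonce += 1

# The verifiers recompute a single hash, so checking an answer costs almost nothing.
verifyChallenge = (challenge, difficulty, nonce, digest) ->
  expected = crypto.createHash('sha256').update("#{challenge}:#{nonce}").digest('hex')
  expected == digest and digest[0...difficulty] == '0'.repeat(difficulty)

A node that can't produce a valid nonce either never saw the challenge or didn't do the work - exactly the signal we want before paying it.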
I think there is a whitepaper in there somewhere.