I haven't heard of Secure Proof of Stake before. Is this random validation the new ingredient here? Should this part of the protocol help reduce manipulation, the probability of forking, and bad behavior among the validators?
Part 1. Intro
The consensus protocol starts by randomly sampling a smaller consensus group out of all eligible validators in the shard (to reduce communication overhead), using a randomness source derived from the previous block's signature. This randomness source is unpredictable before the previous block is signed. The sampling itself is deterministic: every node can compute the list of validators in the consensus group, and the first node selected is the block proposer.
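A minimal Go sketch of what such deterministic sampling could look like (the names and the hash-based index derivation are my assumptions for illustration, not the actual implementation, which would also factor in stake and rating):

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// selectConsensusGroup deterministically samples groupSize validators out of
// the eligible list, seeded by the previous block's signature. The first
// validator picked is the block proposer. Hypothetical sketch only.
func selectConsensusGroup(eligible []string, prevBlockSig []byte, groupSize int) []string {
	group := make([]string, 0, groupSize)
	picked := make(map[int]bool)
	for counter := 0; len(group) < groupSize; counter++ {
		// Derive an index from hash(prevBlockSig || counter); every node
		// computes the same indices, so the sampling is deterministic.
		buf := make([]byte, 8)
		binary.BigEndian.PutUint64(buf, uint64(counter))
		h := sha256.Sum256(append(append([]byte{}, prevBlockSig...), buf...))
		idx := int(binary.BigEndian.Uint64(h[:8]) % uint64(len(eligible)))
		if !picked[idx] {
			picked[idx] = true
			group = append(group, eligible[idx])
		}
	}
	return group
}

func main() {
	eligible := []string{"nodeA", "nodeB", "nodeC", "nodeD", "nodeE"}
	group := selectConsensusGroup(eligible, []byte("sig-of-previous-block"), 3)
	fmt.Println("proposer:", group[0], "other validators:", group[1:])
}
```

Because the seed is fixed once the previous block is signed, every node arrives at the same group without any extra communication.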
The block proposer aggregates transactions into a new block and sends it to the validators in the consensus group for verification. Each validator verifies the block, processes its transactions and, if everything checks out, participates in the pBFT consensus. Each validator votes by sending its signature share for a multisignature scheme. If the proposer collects signatures from at least 2/3 + 1 of the consensus group members, the block is considered validated, the aggregated signature is added to the block, and the block is disseminated across the entire shard. The next consensus group is then randomly sampled using the new signature.
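The 2/3 + 1 threshold check could be sketched like this (the raw-bytes signature shares are a stand-in for a real multisignature scheme such as BLS):

```go
package main

import "fmt"

// quorumReached reports whether the proposer holds signature shares from at
// least 2/3 + 1 of the consensus group, the pBFT validation threshold
// described above. A real implementation would verify and aggregate
// multisignature shares instead of counting raw bytes.
func quorumReached(shares map[string][]byte, groupSize int) bool {
	required := 2*groupSize/3 + 1
	return len(shares) >= required
}

func main() {
	shares := map[string][]byte{
		"nodeA": []byte("shareA"),
		"nodeB": []byte("shareB"),
		"nodeC": []byte("shareC"),
	}
	// For a group of 4, the threshold is 2*4/3 + 1 = 3 shares.
	fmt.Println("block validated:", quorumReached(shares, 4))
}
```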
Part 2. (this one goes into some other topics too, but gives you a bit more perspective/context)
Secure Proof of Stake was designed to resist known security problems such as Sybil attacks, rogue-key attacks, Nothing at Stake attacks and others. We achieve this by combining the following aspects:
Randomness source
The source of randomness is composed of the last block's aggregated signature and the round number. Since it is a collective signature, every node that participates in the process alters the final signature data. Even if the block proposer can control which transactions are included in a block, it cannot influence the signature in a predictable way, because the aggregated signature is created by multiple parties.
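A sketch of how such a randomness source could be derived (the plain SHA-256 construction over both inputs is my assumption for illustration):

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// randomnessSource derives the per-round randomness from the last block's
// aggregated signature and the round number. The key property is that no
// single party can steer the aggregated signature, so the output is
// unpredictable before the previous block is signed.
func randomnessSource(lastAggSig []byte, round uint64) []byte {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, round)
	h := sha256.Sum256(append(append([]byte{}, lastAggSig...), buf...))
	return h[:]
}

func main() {
	fmt.Printf("round randomness: %x\n", randomnessSource([]byte("agg-sig"), 42))
}
```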
Shard reorganization
After each epoch, a third of the nodes from each shard are redistributed uniformly and non-deterministically across the other shards, to prevent collusion. This adds bootstrapping time for the redistributed nodes, but the pruning mechanism decreases this time to a feasible amount.
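Illustratively, the epoch reshuffling could look like the following sketch (the shard/node types and the randomness source are invented; it assumes at least two shards):

```go
package main

import (
	"fmt"
	"math/rand"
)

// reshuffle moves a third of each shard's nodes into a different shard,
// chosen at random, mirroring the epoch redistribution described above.
func reshuffle(shards map[int][]string, rng *rand.Rand) {
	type move struct {
		node string
		from int
	}
	var moving []move
	ids := make([]int, 0, len(shards))
	for id, nodes := range shards {
		ids = append(ids, id)
		rng.Shuffle(len(nodes), func(i, j int) { nodes[i], nodes[j] = nodes[j], nodes[i] })
		third := len(nodes) / 3
		for _, n := range nodes[:third] {
			moving = append(moving, move{node: n, from: id})
		}
		shards[id] = nodes[third:]
	}
	for _, m := range moving {
		dst := ids[rng.Intn(len(ids))]
		for dst == m.from { // redistribute across the *other* shards only
			dst = ids[rng.Intn(len(ids))]
		}
		shards[dst] = append(shards[dst], m.node)
	}
}

func main() {
	shards := map[int][]string{
		0: {"a1", "a2", "a3", "a4", "a5", "a6"},
		1: {"b1", "b2", "b3", "b4", "b5", "b6"},
	}
	reshuffle(shards, rand.New(rand.NewSource(7)))
	fmt.Println(shards)
}
```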
Consensus group selection
After each round, a new set of validators is selected using the last committed block's signature, the current round and the eligible nodes list. In case of network desynchronization due to delays in message propagation, the protocol has a recovery mechanism: the same members are chosen, based on the signature of the last block, but with a different block proposer that varies with the round r. This avoids forking and allows synchronization on the last block. The small time window (one round) in which the validator group is known minimizes the attack surface.
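The proposer rotation on recovery could be as simple as the following sketch (the modulo rotation rule is an assumption):

```go
package main

import "fmt"

// fallbackProposer rotates the proposer within an already-fixed consensus
// group when round r has to be retried, so recovery never requires
// resampling the group. Hypothetical rotation rule.
func fallbackProposer(group []string, r uint64) string {
	return group[r%uint64(len(group))]
}

func main() {
	group := []string{"nodeA", "nodeB", "nodeC"}
	for r := uint64(0); r < 4; r++ {
		fmt.Printf("round %d -> proposer %s\n", r, fallbackProposer(group, r))
	}
}
```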
Node rating
Besides stake, a node's rating influences its chances of being selected for the consensus group. If the block proposer is honest and its block gets committed to the blockchain, its rating increases; otherwise, its rating decreases. Each potential validator is thus incentivized to be honest, run the most up-to-date client software and keep its service availability high, ensuring the network functions as designed.
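As a toy model of this incentive, rating updates and their effect on selection odds might look like this (the step sizes and the weight formula are invented for the sketch):

```go
package main

import "fmt"

// node carries the two quantities that influence selection: stake and rating.
type node struct {
	name   string
	stake  uint64
	rating int64
}

// applyBlockResult adjusts the rating after the node acted as proposer.
func (n *node) applyBlockResult(committed bool) {
	if committed {
		n.rating++ // honest proposal, block committed
	} else {
		n.rating-- // block rejected or proposer misbehaved
	}
}

// selectionWeight combines stake and rating into the relative chance of
// joining the consensus group; the exact formula is an assumption.
func selectionWeight(n node) uint64 {
	if n.rating <= 0 {
		return 0
	}
	return n.stake * uint64(n.rating)
}

func main() {
	n := node{name: "nodeA", stake: 1000, rating: 10}
	n.applyBlockResult(true)
	fmt.Println(n.name, "weight:", selectionWeight(n))
}
```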
Shard redundancy
Nodes distributed in sibling shards on the tree's lowest level keep track of each other's blockchain data and application state. With shard redundancy in place, when the number of nodes in the network decreases, some sibling shards may need to be merged; the targeted nodes immediately initiate the shard-merging process.
Threat model
Elrond assumes a Byzantine adversarial model, where at least 2/3 + 1 of the eligible nodes are honest (untampered code, synchronized). The protocol permits adversaries that hold stake or a good rating, delay or send conflicting messages, compromise other nodes, have bugs or collude among themselves, but as long as 2/3 + 1 of the eligible validators in a shard are honest and uncompromised, the protocol can achieve consensus.
The protocol assumes highly adaptive adversaries, which nevertheless cannot adapt faster than a round's timeframe. The computational power of an adversary is bounded, so the cryptographic assumptions granted by the security level of the chosen primitives hold firmly within the complexity class of problems solvable by a Turing machine in polynomial time.
The network of honest nodes is assumed to form a well-connected graph, and their messages are assumed to propagate within a bounded time ∆.
Attack vector prevention
1) Sybil attacks: mitigated through stake locking when joining the network, so generating a new identity has a cost equal to the minimum stake;
2) Nothing at Stake: removed by requiring multiple signatures (not just the proposer's) and by slashing stake if a consensus group tries to append blocks on different forks. The per-block reward, compared to the locked stake, discourages such behavior;
3) Long-range attacks: mitigated by our pruning mechanism and by using a randomly selected consensus group every round (not just a single proposer);
4) DDoS attacks: the consensus group is randomly sampled every round (a few seconds), so there is almost no time to mount a targeted DDoS. We are also implementing a built-in filtering mechanism for duplicate or malicious messages.
Other attack vectors we have taken into consideration include the single-shard takeover attack, transaction censorship, double spending, bribery attacks, etc.