I understood that, but I don't see how it will work in practice: who will decide to make the first share for free? What happens during network disconnects or miner downtimes? If you allow for them, how do you protect yourself from leechers who will simulate these events?
I expect only those who want to further reduce their variance would take the risk.
You can also keep simple balance-based bookkeeping for each address you've had work trades with. A configurable tolerance value (a percentage would probably do) can then be used to decide when to give up on a node entirely. The system should also prioritize cooperating with the nodes whose ratio is most in your favor, as in the sketch below. This is clearly not 100% efficient, but that's not the real question to answer. The question is: is it good enough? If it is, I can leave further improvements to the self-interested miners with the skills to make them. I think a good enough system can be built.
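A minimal sketch of the bookkeeping I have in mind; the class and method names are made up for illustration, and the 5% tolerance is just an example value:

```python
# Minimal sketch of per-address balance bookkeeping (names and the
# tolerance value are illustrative, not from any implementation).

class PartnerLedger:
    def __init__(self, tolerance=0.05):
        self.tolerance = tolerance     # e.g. give up at a 5% deficit
        self.sent = {}                 # address -> shares we worked for them
        self.received = {}             # address -> shares they worked for us

    def record_sent(self, address):
        self.sent[address] = self.sent.get(address, 0) + 1

    def record_received(self, address):
        self.received[address] = self.received.get(address, 0) + 1

    def is_leech(self, address):
        sent = self.sent.get(address, 0)
        recv = self.received.get(address, 0)
        if sent == 0:
            return False
        deficit = (sent - recv) / sent
        return deficit > self.tolerance   # beyond tolerance: stop cooperating

    def best_partners(self):
        # Prioritize the nodes whose ratio is most in our favor.
        def ratio(addr):
            return self.received.get(addr, 0) / max(self.sent.get(addr, 1), 1)
        return sorted(self.sent, key=ratio, reverse=True)
```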
The cost of getting this started, even assuming a 99% leech ratio, is a very minimal investment. If someone doesn't reciprocate, no problem; you just ignore them for the rest of the session. Once you've found enough reciprocating partners, even those minimal losses stop.
This needs to be modeled and computed. As I said, there's not only bootstrapping to handle but also error recovery.
Yes, true. But first I need to design the system to a point where modeling is possible.
I mentioned two above: bootstrap leeches, and share-ignoring masquerading as errors or downtime. If a node has more network errors or downtime than yours does, do you blacklist it? If you do, and everyone does, nobody can participate. If you don't, you accept working for others for free, even if only for a small amount. Imagine you protect yourself from, say, any 5% leech (a node that doesn't reciprocate for 5% of your shares) by dropping any node with such behaviour. What do you expect will happen if someone decides to bombard each of the participating nodes in turn with a DoS attack (a simple SYN flood, for example)? Every node will start to blacklist the DoSed nodes, and you will end up with a fully disconnected setup where you'll have to rebuild trust between every pair of nodes from scratch.
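A toy simulation of that rotating attack, with assumed numbers (10 nodes, a 5% tolerance, and the attacker giving each node an equal slice of downtime):

```python
# Toy simulation of the rotating-DoS scenario (all parameters assumed).

NODES = list(range(10))
TOLERANCE = 0.05   # drop any peer that misses >5% of our shares

# missed[a][b]: fraction of a's shares that b failed to reciprocate.
missed = {a: {b: 0.0 for b in NODES if b != a} for a in NODES}

# The attacker DoSes each node in turn for an equal slice of time;
# while node v is down, every peer sees v miss that slice of shares.
for victim in NODES:
    for a in NODES:
        if a != victim:
            missed[a][victim] += 1.0 / len(NODES)

blacklisted = [(a, b) for a in NODES for b in missed[a]
               if missed[a][b] > TOLERANCE]
print(f"{len(blacklisted)} of {len(NODES) * (len(NODES) - 1)} "
      f"directed links dropped")
# With 10 nodes, each victim misses 10% of every peer's shares, which
# exceeds the 5% tolerance: every single link gets dropped.
```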
This is a good point. It means the protocol will need some way to find out a) whether someone has blacklisted your node and b) if so, which shares you missed. I'll have to ponder this. A blacklist is ineffective, though: addresses are plentiful and easy to change, so it will have to be a whitelist-based system.
Point b above is challenging: it needs timestamping, because otherwise you can't verify that the shares could ever have been valid blocks. However, this is a timestamping service... I wonder if it would make sense to make each separate ping-pong of share exchanges a sharechain in itself. That way, when your share goes unreciprocated by someone, you can save that share along with all (or just a few with the lowest hashes?) of the reciprocating shares you got in response to it from others, and use those to prove the time. This should be enough to make falsifying the evidence far more expensive than any benefit gained from it.
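Roughly the evidence structure I'm picturing; the names and the five-witness cap are made up:

```python
# Sketch of the non-reciprocation evidence bundle (structure hypothetical).

import hashlib
from dataclasses import dataclass, field

@dataclass
class Share:
    miner_address: str
    prev_share_hash: str   # links shares into a per-exchange mini-chain
    nonce: int

    def hash(self) -> str:
        data = f"{self.miner_address}{self.prev_share_hash}{self.nonce}"
        return hashlib.sha256(data.encode()).hexdigest()

@dataclass
class NonReciprocationProof:
    unanswered_share: Share    # the share that went unreciprocated
    witness_shares: list = field(default_factory=list)

    def add_witness(self, share: Share):
        # Only shares built on top of ours prove ours existed before them;
        # that chaining is what timestamps the unanswered share.
        assert share.prev_share_hash == self.unanswered_share.hash()
        # Keep only the lowest-hash replies: the more work a witness
        # embodies, the more expensive this evidence is to falsify.
        self.witness_shares.append(share)
        self.witness_shares.sort(key=lambda s: s.hash())
        del self.witness_shares[5:]   # keep the 5 lowest hashes (arbitrary)
```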
The problem is that your proof can go missing for perfectly innocuous reasons that happen regularly to all participants: cooperation between them will thus eventually stop. If you allow for a margin of error, you give cheaters a way in. Any way you look at it, the problem is that by eliminating something like the sharechain, you eliminate any way for participants to reliably verify that a proof of work really exists and to agree on it. So you break anything that relies on this missing agreement.
Kind of ironic that I'm now thinking of sharechain-based solutions when I advertised this as not having a sharechain.
Still, it's not one central sharechain, so it doesn't create the same limitations and vulnerabilities (mainly the 51% attack). This doesn't need a network-wide consensus, after all.
I'm not sure what a centralized option would bring versus p2pool, for example (it is trivial to set up a separate p2pool pool if you want to mine amongst friends). Any central server would be a SPOF, so why not just use p2pool?
A SPOF is only a problem when you're limited to one central server. This would most definitely not be. In addition, using such a server rather than joining a p2p network where you'd have to route other people's shares reduces the bandwidth your node needs. Because of this, there would be economic incentives for running a switch server. The most obvious way of accepting payments in this case would be as coinbase outputs to the switch's wallet address.
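For instance, a found block's coinbase could be split along these lines; the addresses, weights, and the 1% fee are all made up for the example:

```python
# Illustration of paying the switch via a coinbase output (addresses,
# share weights, reward, and the 1% fee are assumptions).

block_reward = 25.0          # BTC, assumed
switch_fee = 0.01            # the switch operator's cut, assumed

share_counts = {             # shares each miner contributed
    "addr_miner_a": 60,
    "addr_miner_b": 30,
    "addr_miner_c": 10,
}

total_shares = sum(share_counts.values())
miner_pot = block_reward * (1 - switch_fee)

# The coinbase output list: the switch's cut first, then each miner
# paid in proportion to their shares.
outputs = [("addr_switch", block_reward * switch_fee)]
outputs += [(addr, miner_pot * n / total_shares)
            for addr, n in share_counts.items()]

for addr, amount in outputs:
    print(f"{addr}: {amount:.8f} BTC")
```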
Even the p2p network could support requesting extra work, paid to the node's address, in exchange for share routing. I expect this would solve the "network collapse" problem you outlined while still giving users a choice about how much expected value they're willing to sacrifice for lower variance. It would, however, complicate the protocol, and I don't currently have an idea of how to implement it for the p2p model. This is a very general networking problem, though, so I wouldn't be surprised if a good enough solution already exists somewhere.
If you select based on addresses, you will try to have as many addresses as possible in a single share (if I understand correctly, that's a direct consequence of wanting the lowest variance and thus the most other participants). A single share will then be broadcast by every node owning an address in it to all other nodes: that is O(n^2) traffic. You could fine-tune this by only forwarding shares you mined yourself (or delegate that work to someone else, which means accepting to forward shares for others too: same traffic in the end), but even then it's still O(n) traffic. This is not good; "network collapse"-level not good (if you see this project used by more than small, loosely connected teams of 10 nodes).
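Putting rough numbers on those message counts (naive broadcast, one share):

```python
# Back-of-the-envelope message counts for a single share under the two
# forwarding strategies described above (naive assumptions).

for n in (10, 100, 1000):
    full_broadcast = n * n   # every address owner rebroadcasts to everyone
    miner_only = n           # only the share's miner broadcasts it
    print(f"n={n:5d}: O(n^2) broadcast = {full_broadcast:>9,} msgs, "
          f"O(n) miner-only = {miner_only:>5,} msgs")
```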
Yes, there is an obvious cost to reducing variance that needs to be paid as increased bandwidth. This effectively limits the variance improvement to whatever percentage you're willing to pay for it. I'd personally prefer variance that corresponds to a payment every two days or so. Too much variance reduction also increases the Bitcoin transaction fee costs when spending the coins. (This is the reason I don't mine on p2pool: the payment size I'd get with my hashrate is too small.)
So I don't quite agree with you that network traffic would be a problem, especially since it's possible to assign a cost to the transport layer.
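For a sense of scale on the payment-size point above, a rough calculation under assumed figures:

```python
# Rough payment-size illustration (every figure here is an assumption).

block_reward = 25.0          # BTC per block
my_fraction = 1e-4           # my hashrate as a fraction of the pool's
pool_blocks_per_day = 10     # blocks the pool finds per day

per_block = block_reward * my_fraction           # paid on every block
two_day = per_block * pool_blocks_per_day * 2    # accumulated for two days

# Maximal variance reduction means many tiny payments whose transaction
# fees eat a large relative chunk; batching over two days avoids that.
print(f"per-block payout: {per_block:.6f} BTC")   # 0.002500 BTC
print(f"two-day payout:   {two_day:.6f} BTC")     # 0.050000 BTC
```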
To make this usable, you want O(1) network traffic if you can, and O(log(n)) in the worst case. So here you have another show-stopper until you find a way to reduce this traffic to something not linearly dependent on the number of miners you want to reciprocate with (note: small miners are the most impacted, as they need more links to others than large miners do to achieve the same variance).
I guess p2pool is O(1) thanks to the difficulty adjustment, which averages the rate to one share per 10 seconds.
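Something like the standard retargeting rule applied to share difficulty; a sketch only, since p2pool's actual constants and smoothing differ (the 10 s target is from the discussion, the clamping is my assumption):

```python
# Sketch of share-difficulty retargeting that keeps global share traffic
# constant regardless of how many miners join (illustrative, not
# p2pool's exact algorithm).

TARGET_INTERVAL = 10.0   # seconds between shares, network-wide

def retarget(difficulty: float, observed_interval: float) -> float:
    """Scale difficulty so shares keep arriving every TARGET_INTERVAL s."""
    adjustment = observed_interval / TARGET_INTERVAL
    # Clamp the step to avoid oscillation on noisy measurements.
    adjustment = max(0.5, min(2.0, adjustment))
    return difficulty / adjustment

# If total hashrate doubles, shares start arriving every 5 s, so the
# difficulty doubles and restores one share per 10 s: the share traffic
# stays O(1) in the number of miners.
print(retarget(1000.0, 5.0))   # -> 2000.0
```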
I know I sound overly negative, but all my dev instincts tell me the direction you want to go is a dead end, and all the common questions you ask yourself when designing P2P networks (error handling, network scaling, robustness to attacks, handling of partitioning events, ...) have no clear answers here, only difficult or arguably impossible problems to solve. This isn't even taking into account the acceptance factor: miners cringe when you mention a potential single-digit percentage loss of revenue, and giving away work for free to be accepted into such a scheme will simply make them run away.
I believe this is worth investigating, even if it turns out to be a dead end. I'm currently studying computer science and am planning to write my Master of Science thesis on this subject. I'm doing a preliminary design for my Bachelor's degree first, though. Thank you for taking the time to write down your thoughts. I appreciate it.