In fact, at the current #115 level, everything seems to conspire to keep it from finding the key: hardware failures, data loss and a tedious restore after a storm, ... Nevertheless, together with Jean_Luc_Pons I managed to rebuild and improve this arduous process, and the work continues. It has taught a lot of lessons in preparation for #120.
What's more, the 50% threshold had already been exceeded when I mentioned it recently. Within the next hour I will post the exact statistics of the current situation. What is certain is that this time we are not as lucky as before, and the work is still in progress.
Did you implement the feedback from server to clients? If not, the clients keep performing useless work, moving the dead "zombie" kangaroos:
If you have, say, 100 different machines working independently, each machine respawns only the dead kangaroos it found itself (machine 1 respawns only the dead kangaroos detected by machine 1). After merging all 100 files, the server finds many more dead kangaroos, but it sends no signal back to the clients to respawn them. The clients keep working with these "zombies" because they get no feedback from the server. That also means that at the next merge the server receives the same kangaroos again and kills them again, while the zombies keep jumping on the client side. And so on.
For wider ranges like #115 or beyond, this could make the work very inefficient.
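To make the idea concrete, here is a minimal sketch of what such server-side feedback could look like. This is not Jean_Luc_Pons's actual server code; the record format (dp_x, kangaroo_id, herd, client_id) and all names are hypothetical, purely to illustrate the kill-list mechanism described above.

```python
# Illustrative sketch only -- not the real Kangaroo merge code.
# Assumed record format per distinguished point (DP):
#   (dp_x, kangaroo_id, herd, client_id), herd = TAME or WILD.
from collections import defaultdict

TAME, WILD = 0, 1

def merge_and_build_kill_lists(records, seen=None):
    """Merge DPs from all clients and collect per-client kill lists.

    Two kangaroos of the *same* herd landing on the same DP are a
    useless same-herd collision: the newer one is "dead" and should
    be respawned. Instead of silently discarding it, the server
    records it so the owning client can be told to respawn it.
    """
    if seen is None:
        seen = {}                # dp_x -> (kangaroo_id, herd, client_id)
    kill = defaultdict(list)     # client_id -> [kangaroo_id, ...]
    for dp_x, kid, herd, client in records:
        prev = seen.get(dp_x)
        if prev is None:
            seen[dp_x] = (kid, herd, client)
        elif prev[1] == herd:
            # Same-herd collision: without feedback, this kangaroo
            # keeps jumping as a "zombie" on its client forever.
            kill[client].append(kid)
        else:
            # Tame/wild collision: this is the event that solves the key.
            print(f"cross-herd collision at DP {dp_x:#x} -> key candidate")
    return kill, seen
```

On the client side, any kangaroo appearing in its kill list would be respawned at a fresh random position instead of being resumed from the saved work file, so the same dead kangaroo never shows up in the next merge.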
Anyway, they had an agreement, and they can't think beyond their creation + GPUs.