Hi!
It's just an idea for energy saving in any proof-of-work chain.
Half-overlapped 'serialing'

The basic idea is that m2 (miner 2) re-computes what m1 (miner 1) has already computed. I'm interested in the case where m1 does not tell m2 the result, but gives m2 the same puzzle, and m2 does the same amount of work himself. That is the overlapping. The miners build a serially-nested hash chain on top of the normal PoW chain.
In other words, it is an additional puzzle chain for slowing down the PoW chain, and it looks like this:
exact same work
    |
    V
[-b1-]    [-b3-]    etc.
   [-b2-]    [-b4-]
b1-> arrow is the hashing done by the previous miner
->b2 arrow is the hashing done by the current miner
b2-> arrow is the hashing done by the current miner
->b3 arrow is the hashing done by the next miner
and so on
The puzzle making takes N computations, and the puzzle solving also takes N computations. The puzzles are there for energy saving; they are not the Proof of Work.
h(x) = hash(x)
Blocks:
b1 b2 b3 b4
Their miners:
m1 m2 m3 m4
So in this picture each miner happens to mine one PoW block; in other words, m1, m2, etc. are the successful miners.
Steps:
m1
1. x1 = 0
2. Miner m1 computes h(h(h(x1)))..., i.e. serial, nested hashing, N times. The result is the value b1_r.
3. b1_r_h = h(b1_r + 1).
Alternative method for step 3:
3a. b1_r_h = h(b1_r).
I think the keyed hash looks easier to remember, but hashing hardware might benefit from needing fewer distinct instructions, so 3a might be faster.
x1 is the ever-ongoing x; it is partially kept secret by publishing only b1_r_h instead of b1_r. The next successful miner has to find the pre-image b1_r.
4. x1 and b1_r_h are stored in the block b1.
5. Then m1 mines the actual block, using b1_r_h for the top of the usual Merkle hashing.
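Here is a minimal sketch of m1's steps in Python. It assumes sha256() as h(), an illustrative N, and reads the '+ 1' of step 3 as appending a 0x01 byte; the exact encodings are not fixed by the description above:

    import hashlib

    N = 1_000_000  # illustrative only; the real N would be a consensus parameter

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def serial_hash(x: bytes, n: int) -> bytes:
        # h(h(h(...x...))), nested n times; inherently sequential,
        # since each step needs the previous result
        for _ in range(n):
            x = h(x)
        return x

    # m1, steps 1-5
    x1 = (0).to_bytes(32, "big")   # step 1: genesis seed x1 = 0
    b1_r = serial_hash(x1, N)      # step 2: N serial, nested hashes
    b1_r_h = h(b1_r + b"\x01")     # step 3: commitment; b1_r itself stays secret
    # step 4: x1 and b1_r_h are stored in block b1
    # step 5: b1_r_h goes at the top of the usual Merkle hashing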
m2
1. The block b2 will only be valid if it has used b1_r as x for the h(h(h(x))).
2. Since miner m2 does not know (and, as fits BFT, cannot know) b1_r, he re-computes what m1 did, N times.
3. m2 then knows b1_r and performs the equal-work-check: b1_r_h == h(b1_r + 1).
4. m2 now proceeds exactly as m1 did, starting with x2 = b1_r.
5. Here the connection between two blocks is made for the first time, because b1 was the genesis block and just used zero as x1.
6. From here on everything just repeats.
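Continuing the sketch for m2, with the same assumed encodings, reusing h, serial_hash, and N from above:

    # m2, steps 1-4
    b1_r_recovered = serial_hash(x1, N)           # step 2: redo m1's N serial hashes
    assert b1_r_h == h(b1_r_recovered + b"\x01")  # step 3: equal-work-check
    x2 = b1_r_recovered                           # step 4: continue where m1 left off
    b2_r = serial_hash(x2, N)
    b2_r_h = h(b2_r + b"\x01")
    # x2 and b2_r_h are stored in block b2, and the pattern repeats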
The equal-work-check is the check that every client also needs in the end, and the user community should do the same checks, slowly. They should do so probabilistically, or through a centralized validation service with a trust mechanism, so that user clients don't get denial-of-service attacked with noise blocks and have faster access to the transaction data.
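One possible reading of the probabilistic variant, as a sketch; the block fields and the sample rate are hypothetical, and h, serial_hash, and N come from the sketches above:

    import random

    def probabilistic_audit(blocks, sample_rate=0.01):
        # hypothetical client-side audit: fully re-run the serial work
        # for a random subset of blocks instead of for every block
        for block in blocks:
            if random.random() < sample_rate:
                r = serial_hash(block.x, N)
                if h(r + b"\x01") != block.r_h:
                    raise ValueError("equal-work-check failed")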
N amount of serial work will be done twice by each miner. N can be as big as you want, but it introduces a risk: a faster single-thread worker gains more time for parallelism when it spends less time on the serial work amount N. The benefit of N is to get in-between hardware-off phases, so to speak.
Why
Assumption: in 'normal BTC', by game theory, blocks next to each other are not likely to be mined by the same party.
Conclusion: so in 'half-overlapped serialing BTC' the current miner is not likely to give his secret to the next miner, because they don't cooperate. If they do cooperate, it will not save energy, but there should be no risk of a 51% attack by a slow chain, because there is no way to skip the serialing block between two conventional blocks.
The puzzle making takes as long as the puzzle solving. Adjacent miners will, by BTC rules and probability, not know the (additional) secret.
And it cannot be guessed with parallelism, because h() is not a reduced-entropy hash as used for PoW targets; it's the hard one, e.g. sha256(). The so-called serialing (not to be confused with the normal chaining of the underlying PoW chain) consists of nested hash computations. Ideally h() is simpler than a cryptographically secure hash and is hard to optimize in hardware beyond a certain limit.
The overlapping double work is the key idea. It is there to mitigate the worst case: everyone knows all secrets. In that case there should be no fast path for checking validity, as there would be when using Makwa or LCS35, or even discrete-logarithm puzzles. With the latter, parallelism is an option, and with the other two the fast verification path makes calculating an unwanted new chain faster by too big a factor.
My conclusion
So, a blockchain can be slowed down. And it can be made so that there is no way to create an unwanted blockchain too efficiently by being the adjacent miners yourself.
This is because the attacker (conspiring, cooperating miners) will in effect still have to compute h() N times in series, and will only be faster than non-cooperating miners by this factor:

2                       // no re-computation needed
* single_thread_factor  // faster hardware
So single-thread performance will benefit the attacker, but ideally it's not sufficient, as there is no way to become faster at the serial part than the non-conspiring general public by more than the proposed factor (2 * stf).
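For illustration, with assumed numbers: if the attacker's single-thread hardware is 1.5 times faster, the conspiring miners can traverse the serialing part at most 2 * 1.5 = 3 times faster than non-cooperating miners, and no amount of additional parallel hardware improves that bound.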
In the best scenario, the parallelism should outweigh the serial work, so fast but limited hardware would be the ideal base for it. The risk is there, but with a fail-proof approach it should come down to comparing single-thread performances, or to proving that a physical device has a speed limit, for the wanted effect to take place.
In an old post I had a basic formula for how likely the attack is given some speed increase x - I can't find it, too tired. Whether an attack is possible would still have to be investigated. There is a risk, but its size should be found out.
I don't see how this could be easily attacked or parallelized. What it does do is make chain validation and clients slow, but that work could be delegated and trusted, and it should have no environmental footprint.
Maybe it can be attacked and my post is dumb, but I tried.
My older scheme fell apart much quicker. I'm not sure about the pseudo code and the hash chaining. But the 'm1 doesn't tell m2 the secret' part should work. It's not a must, but a risk. The idea is just that each time it holds, energy is being saved.
I have had a few other ideas, but this one looks solid. If anyone wants to help, you're welcome, in particular with calculating the attack risk given single-thread speed factors between user and attacker. Maybe some good mitigation system could be found.
Thank you.