
Topic: [ANN][MOTO] Motocoin - page 63. (Read 178225 times)

sr. member
Activity: 434
Merit: 250
June 11, 2014, 12:50:03 PM
I'm not working on anything. Many proposals (including your proposal that we were discussing here) are flawed because people don't understand how it works.

My "N block head" proposal is not intended to address DeepCrypto's difficulty-warp attack.  This attack vector, specifically, is what I'm asking about.

Quote
Here are some good proposals that are at least possible, but it is not clear how well they will protect us from bots:
1. Increase map size.

We've already established that this will cut out the current generation of bots, but will do little/nothing in the long term.

Quote
2. Add computational difficulty.

This will need to be done in some form, regardless.  There is really no other way to restrict block interval at all AFAICT, whether it is coupled to targettime or not.

Quote
3. Add several coins to collect.

Assuming these coins would have to be collected in a specific order, this would at least introduce a modal state to the challenge problem, which should make things significantly more difficult for bots.

Quote
4. Change map generation algorithm to prevent fall-thru maps.

I'm not sure how feasible this is and am not sure how well it would actually deter bots in the long run.

~~~

Anyway, deterring bots was not at all what I was asking about.  I couldn't care less what the developers are doing about the bots themselves.  I only care about the two real issues at hand:

1. Anyone willing to spend the energy could retroactively fork the whole chain back to block 1 at any moment with the difficulty warp.
2. Human players can't even attempt to compete in the targettime competition because of the block frequency.

What, if anything, is being done to address these?  I'm glad that you're continuing to think about the bot cat-and-mouse game we're going to play over the next few years, but that seems premature while the network is not secured from attack, and while the coin risks becoming worthless because the whole premise (human mineability) just "doesn't actually work": humans can't even attempt to participate in the mining challenge itself.

sr. member
Activity: 434
Merit: 250
June 11, 2014, 12:39:51 PM
This would impact bots not one bit and would give humans four times as much wall-clock time to solve.  What am I missing?

Bots can generate 4, or 10, or more blocks in the time it takes for a human to solve one block. It's just no match. How would 4x wall-clock time help? Or... how could you prevent bots from doing it faster?

Again, these are two entirely separate issues.  The intent of this patch would not be to slow the bots, but to give humans the same relative wall-clock time to solve.  I'm using N=4 because it is simple to think about, and because currently the bots appear to be roughly 2 to 4 times as productive as humans.

If bots were 100 times as productive as humans, on average, then N "should" be 100, and things start to get a little goofy.  The other (primary and unrelated) goal of slowing down the bots is something that needs to be addressed separately.

As I said we could make N scale dynamically against the block rate, as well, to keep the ratio of "bot time" and "human time" roughly in check.  However, right now I think even a fixed N=4 would be a huge improvement, and would roughly align with the (current) 2-4x overproduction of the bots.  I do expect this overproduction to increase in time, but currently it seems to be fairly consistent judging from the spreadsheet and current block rates.
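
To make the dynamic-N idea concrete, here is a minimal sketch (a hypothetical illustration; the constant names and the target window are assumptions, not anything from the actual client):

Code:
// Hypothetical sketch: scale the allowed seed depth N with the observed
// block rate, so the wall-clock window humans get (roughly N * average
// block interval) stays near a fixed target.
#include <algorithm>
#include <cstdint>

static const int64_t TARGET_HUMAN_WINDOW_SEC = 240; // assumed target window
static const int64_t MIN_SEED_DEPTH = 1;
static const int64_t MAX_SEED_DEPTH = 16;

int ComputeSeedDepth(int64_t nAvgBlockIntervalSec)
{
    if (nAvgBlockIntervalSec <= 0)
        return (int)MAX_SEED_DEPTH; // degenerate measurement: be permissive
    int64_t n = TARGET_HUMAN_WINDOW_SEC / nAvgBlockIntervalSec;
    return (int)std::max(MIN_SEED_DEPTH, std::min(MAX_SEED_DEPTH, n));
}

With a 60-second average interval this yields N=4, matching the fixed value discussed above.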

Quote
a bot can find, in less than 1 second, a solution that takes 25 secs of in-game time to reach the coin, because it does not need to play in real time and is not limited by human reflexes.

I don't think any bot can currently find solutions consistently in under 1 second.  This could change, but that really just means an increase in the value of N to maintain the ratio of "bot reflex time" to "human reflex time".
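
For intuition, here is a self-contained toy sketch of why replay speed is the bots' edge (stub physics and a made-up tick rate; none of this is the real motogame code): a deterministic fixed-timestep game can be re-simulated as fast as the CPU allows, with no vsync and no reflexes in the loop.

Code:
#include <cstddef>
#include <cstdio>
#include <vector>

struct World { double x; };

// Stub physics step; the real game would integrate bike dynamics here.
void StepPhysics(World& w, int input) { w.x += input * 0.004; }

int main()
{
    const int TICKS_PER_SEC = 250;                  // assumed tick rate
    std::vector<int> inputs(25 * TICKS_PER_SEC, 1); // 25 "seconds" of inputs
    World world = { 0.0 };
    // Replay the whole run back-to-back: a human needs 25 real seconds,
    // the machine needs only as long as these iterations take.
    for (std::size_t i = 0; i < inputs.size(); ++i)
        StepPhysics(world, inputs[i]);
    std::printf("final x = %f\n", world.x);
    return 0;
}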

Quote
Another problem: say there are 1000 blocks in the chain (each using the previous one for its map) and you choose to generate a map from block 998 and solve a new block that is 1001. And for the next 5 blocks, everyone just uses the last of the chain. So blocks 999 and 1000 were never confirmed. All transactions they contained would need to be included in block 1001 or 1002. This would have to be enforced somehow by the protocol. Possible, but messy and not easy to code.
It's not as simple as a chain, because block 1001 points to block 998, but so does block 999.

Block 1001 would confirm block 998.  Block 1002 would have work over block 1001, which refers to block 1000 as previous, which in turn refers to block 999 as previous, which in turn refers to block 998, and so on.  So block 1002 would confirm not only block 1001 but every block already in the chain before it.  Block 999 might never actually be used for work input at all, but as long as some block 1000+ *was* used for work input (and eventually one of them would have to be, since N is finite) it would be confirmed.  Block 999 could never be changed once block 1000+ was worked over without also forming a whole new chain after it.  If we are on block 1010 and I want to rewrite history in block 999 (which was never directly worked over) I need to create not only a new block 999 but 12 new blocks after it, to win longest chain, and I need to do so faster than the rest of the network can create new blocks ahead of me.  (Or, in other words, I need to 51% attack.)

When you prove work over a block you are confirming and securing not only that block's contents, but the contents of every block preceding it as well.  As long as a successor of a block is (eventually) proved over, that block is (eventually) proved over.
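
A minimal illustration of that transitive confirmation (types and names here are stand-ins, not the actual client structures): counting how much work covers a block is just a walk along the previous-block pointers from the tip.

Code:
// Illustrative only: proving work over any block transitively confirms
// every ancestor, because each block commits to its predecessor's hash.
struct Block {
    const Block* pprev; // previous block in the chain (null at genesis)
    // ... header fields, tx set, work inputs ...
};

// How many blocks' worth of work cover pblock on the chain ending at ptip?
int ConfirmationsCovering(const Block* pblock, const Block* ptip)
{
    int n = 0;
    for (const Block* p = ptip; p != nullptr; p = p->pprev) {
        ++n;
        if (p == pblock)
            return n; // pblock and all its ancestors are covered n deep
    }
    return 0; // pblock is not on this chain
}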



full member
Activity: 204
Merit: 100
June 11, 2014, 12:33:29 PM
Are the devs even working on it?  We have multiple proposals for mitigation of the attack now; is the problem in selecting the course of action or in executing it?
Quote
If the developers are not committing to resolving these issues then they need to let us know.

I'd still like some response to this.  Am I wasting my time by even continuing to discuss this with you instead of just implementing and executing a hard fork of your network myself?
I'm not working on anything. Many proposals (including your proposal that we were discussing here) are flawed because people don't understand how it works.
Here are some good proposals that are at least possible, but it is not clear how well they will protect us from bots:
1. Increase map size.
2. Add computational difficulty.
3. Add several coins to collect.
4. Change map generation algorithm to prevent fall-thru maps.
sr. member
Activity: 434
Merit: 250
June 11, 2014, 12:24:50 PM
Are the devs even working on it?  We have multiple proposals for mitigation of the attack now; is the problem in selecting the course of action or in executing it?
Quote
If the developers are not committing to resolving these issues then they need to let us know.

I'd still like some response to this.  Am I wasting my time by even continuing to discuss this with you instead of just implementing and executing a hard fork of your network myself?
hero member
Activity: 583
Merit: 505
CTO @ Flixxo, Riecoin dev
June 11, 2014, 12:20:09 PM
So blocks 999 and 1000 were never confirmed. All transactions they contained would need to be included in block 1001 or 1002. This would have to be enforced somehow by the protocol. Possible, but messy and not easy to code.

Now I think maybe you can just discard those 2 blocks, but if they were mined by humans those miners would be very angry. You would now have 4x the stales/orphans!
hero member
Activity: 583
Merit: 505
CTO @ Flixxo, Riecoin dev
June 11, 2014, 12:15:24 PM
This would impact bots not one bit and would give humans four times as much wall-clock time to solve.  What am I missing?

Bots can generate 4, or 10, or more blocks in the time it takes for a human to solve one block. It's just no match. How would 4x wall-clock time help? Or... how could you prevent bots from doing it faster?
a bot can find, in less than 1 second, a solution that takes 25 secs of in-game time to reach the coin, because it does not need to play in real time and is not limited by human reflexes.


Another problem: say there are 1000 blocks in the chain (each using the previous one for its map) and you choose to generate a map from block 998 and solve a new block that is 1001. And for the next 5 blocks, everyone just uses the last of the chain. So blocks 999 and 1000 were never confirmed. All transactions they contained would need to be included in block 1001 or 1002. This would have to be enforced somehow by the protocol. Possible, but messy and not easy to code.
It's not as simple as a chain, because block 1001 points to block 998, but so does block 999.
sr. member
Activity: 434
Merit: 250
June 11, 2014, 12:10:39 PM
That the ONLY important information is the information that is protected by proof-of-work. There is NO sense in appending unprotected information to your newly mined block, because its integrity depends on relaying nodes, but this is not proof-of-relay-node-ownership, this is proof-of-work. Moreover, it can be freely overwritten by future miners. So, PLEASE, stop using sentences like "let's use the previous block as seed and add some transactions to the new block without protecting them"; just say "let's use the previous block as seed without storing any new information in the new block". This makes things simpler. Use of unprotected information only complicates your description and the network protocol, but is useless.

It isn't useless; those transactions need to get into the chain somehow, even if they are not immediately confirmed.  Are you suggesting that everyone should just greedy-mine their own coinbases and never include any other TX, generally?  (EDIT: by 'everyone' here I mean human miners.  The scenario would still work if these human miners did "retroactively mine" only their coinbase onto the chain; tx processing would just be slower and entirely reliant on bots and "1 block human solvers" to secure.)

The relaying nodes can change the normal transactions in the block if they wish, but this results in a different block with a different hash, and amounts to a fork.

Quote
I can change a block but leave its proof-of-work. This changed block will be confirmed by my newly mined block while the original block has no confirmations, so there is no competition.

How would this be any different from just rewriting via a 51% attack?  You still need the "newly mined block" you work on to refer back to your modified block, so you still need to be confident that you can produce that block.  What you've just stated is basically the same as what I've said all along... the network would be 1/4 as strong as it is right now because the head of the chain would be subject to a 4-block fork with the same effort that a 1-block fork takes currently. (Solving 1 block.)  Aside from this known adjustment in the security threshold, what changes?

sr. member
Activity: 434
Merit: 250
June 11, 2014, 12:01:20 PM
Where/how do you see a graph structure arising?

Having re-read most of the thread I think I might now see where you're getting confused.

What you're saying would be true if, like in other coins, the work function and the block hash were isomorphic (the same function), but in MOTO this is *not* the case.  Unlike other coins where the block hash is the output of the work function, in MOTO the block hash is part of the input to the work function, and is just an "ordinary" hash.  (It has no other relation to the work function.)  Changing any TX in a block that is on the network but not confirmed by application of work would create a new/different block with a new/different block hash, resulting in a fork and a subsequent race for confirmation.

However, in re-reading the thread I also realized that I left one point somewhat vague... saying we'd take the retroactive block as the input for the map generation is an oversimplification.  We would need to include a reference to the new coinbase as input to the map generation as well, so that the produced work couldn't be stolen/replicated.  (Otherwise I could just submit any solution found (whether found by me or broadcast by someone else) as "the next few blocks' solutions", each with a new coinbase tx.)  I kind of took this for granted as implied, but realized that it might not be as self-evident as I might have thought.
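
A tiny sketch of that binding (the hash type and combiner below are stand-ins, not the real MOTO serialization): the map seed commits to both the chosen seed block and the miner's own coinbase, so a published solution cannot be re-submitted under a different coinbase.

Code:
#include <cstddef>
#include <functional>
#include <string>

// Placeholder for the real 256-bit hash type.
using Hash = std::size_t;

// Bind the seed block and the new coinbase into one map seed.  Any
// collision-resistant hash works; std::hash is for illustration only.
Hash ComputeMapSeed(Hash hashSeedBlock, Hash hashCoinbaseTx)
{
    return std::hash<std::string>{}(std::to_string(hashSeedBlock) + "|" +
                                    std::to_string(hashCoinbaseTx));
}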

Does any of this clarify?
member
Activity: 106
Merit: 10
June 11, 2014, 11:55:55 AM
Anyone who is good at playing this game, could you share your skills?
full member
Activity: 204
Merit: 100
June 11, 2014, 11:55:12 AM
What exactly am I misconceiving?
That the ONLY important information is the information that is protected by proof-of-work. There is NO sense in appending unprotected information to your newly mined block, because its integrity depends on relaying nodes, but this is not proof-of-relay-node-ownership, this is proof-of-work. Moreover, it can be freely overwritten by future miners. So, PLEASE, stop using sentences like "let's use the previous block as seed and add some transactions to the new block without protecting them"; just say "let's use the previous block as seed without storing any new information in the new block". This makes things simpler. Use of unprotected information only complicates your description and the network protocol, but is useless.

Quote
Quote
Otherwise what impact on semantics do you see?
Instead of a blockchain we will have some blockmesh or blocknet.

No, you still would not be allowed to reference arbitrary blocks (only the last N-1 existing blocks in the chain) so block association would remain linear and we would still have a chain.  Where/how do you see a graph structure arising?
We have a chain of N blocks, block (N-1) is based on block (N-2), and block N is based on block (N-2). Now we want to generate block (N+1); if we use only the previous block (that is, N) as a seed then block (N-1) will not get a new confirmation and someone in the future may replace it. So, if we want to generate the new block (N+1), we should use both blocks, N and (N-1), as its parents. That is, each block can have several children and several parents. Notice that in fact there is no ordering between blocks (N-1) and N: because they depend only on block (N-2), we can freely swap them.

Quote
Quote
there is a disincentive to ignore those transactions as the only way to do so is to ignore that block
Obviously wrong, you can just ignore those transactions without ignoring that block. What stops you from doing so?

How could you ignore the transactions without rejecting the block?  Changing the tx set would result in a new block hash, so your new block would be in competition with the existing block to be referenced as previous by the next block to be mined.  You could fork the head of the chain but not modify it arbitrarily, and this is no different from what is true on BTC.
I can change a block but leave its proof-of-work. This changed block will be confirmed by my newly mined block while the original block has no confirmations, so there is no competition.
sr. member
Activity: 434
Merit: 250
June 11, 2014, 10:43:06 AM
HunterMinerCrafter, you have some sort of misconception in your head. I tried to explain it to you but have so far failed.

What exactly am I misconceiving?

Quote
Quote
by using 2 bits of nonce to specify which of the top 4 blocks to generate map from, and adjusting confirmation semantics accordingly
If you specify as part of the nonce that you are generating the map from block (Top-4), that means that by the time you generated that map you already knew that there were 4 more blocks. Then why didn't you generate the map from the last one?

As I said, you could if you wanted to.  Human miners, right now, are forced to always use the most current block, so their map resets with each new block.  Under this scenario, they would have the option of continuing to work on the same map, once generated, for up to 4 blocks, giving the human player four times as much opportunity to attempt a solution.  (The "stolen" bits of nonce would not be used in the map seeding... so when I generate my map they would be '00'.  As I worked on my map, and the first new block came in, I would increment to '01' and continue working on the same map.  Two more blocks come in and I increment to '10' and then '11', and continue working on my same map.  If I find a solution now I submit my block with the nonce value used, my work inputs, and the depth bits in nonce set to '11'.  If I find my solution before the third block I'd submit everything the same, just with my bits being '10' instead.  If I don't find a solution by the time the fourth block comes around, I'm forced to reset my map and return to working on the head of the chain.)
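
A short sketch of the miner-side bookkeeping being described, for N=4 (all names hypothetical; this is not a patch against the actual client):

Code:
#include <cstdint>

struct MinerState {
    uint32_t nSeedHeight; // height of the block the current map was seeded from
    uint32_t nDepthBits;  // 0..3, carried in 2 bits "stolen" from the nonce
};

// Called whenever a new chain tip at height nTipHeight is seen.
// Returns true if the miner may keep playing the same map.
bool OnNewTip(MinerState& st, uint32_t nTipHeight)
{
    uint32_t depth = nTipHeight - st.nSeedHeight;
    if (depth <= 3) {
        st.nDepthBits = depth;   // same map; '00' -> '01' -> '10' -> '11'
        return true;
    }
    st.nSeedHeight = nTipHeight; // too old: forced map reset at the new tip
    st.nDepthBits = 0;
    return false;
}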

This would impact bots not one bit and would give humans four times as much wall-clock time to solve.  What am I missing?

Quote
Quote
Otherwise what impact on semantics do you see?
Instead of a blockchain we will have some blockmesh or blocknet.

No, you still would not be allowed to reference arbitrary blocks (only the last N-1 existing blocks in the chain) so block association would remain linear and we would still have a chain.  Where/how do you see a graph structure arising?

Quote
Quote
there is a disincentive to ignore those transactions as the only way to do so is to ignore that block
Obviously wrong, you can just ignore those transactions without ignoring that block. What stops you from doing so?

How could you ignore the transactions without rejecting the block?  Changing the tx set would result in a new block hash, so your new block would be in competition with the existing block to be referenced as previous by the next block to be mined.  You could fork the head of the chain but not modify it arbitrarily, and this is no different from what is true on BTC.  The only difference is that in Bitcoin the "chain head" which can be "forked without additional effort" is only one block long, whereas our chain head would become N blocks long.  With the same work that it would currently take to create a 1-block fork by generating 1 block you could now create an N-block fork by generating 1 block.  This is precisely why I say that the only impact is that the strength/security of confirmations is reduced by a factor of up to N.  In practice, most blocks will continue to be mined by bots, who will not have any reason not to mine at the true head of the chain, so most of the time the reduction in security will not even approach N.  (It will only really do so when miners submit blocks with those nonce bits > 0!)

EDIT: fixed typos and added some words for clarity.
full member
Activity: 204
Merit: 100
June 11, 2014, 10:18:07 AM
HunterMinerCrafter, you have some sort of misconception in your head. I tried to explain it to you but have so far failed.

Quote
by using 2 bits of nonce to specify which of the top 4 blocks to generate map from, and adjusting confirmation semantics accordingly
If you specify as part of the nonce that you are generating the map from block (Top-4), that means that by the time you generated that map you already knew that there were 4 more blocks. Then why didn't you generate the map from the last one?

Quote
Otherwise what impact on semantics do you see?
Instead of a blockchain we will have some blockmesh or blocknet.

Quote
there is a disincentive to ignore those transactions as the only way to do so is to ignore that block
Obviously wrong, you can just ignore those transactions without ignoring that block. What stops you from doing so?
sr. member
Activity: 434
Merit: 250
June 11, 2014, 10:12:36 AM
You still don't understand what I'm talking about. If you just attached some transactions to your block without securing them then no one cares about them. Relaying nodes may change them while relaying, and the next miner can just ignore them and add any transactions he wants.
Sure.  And?  Transactions would be malleable and not considered confirmed at all until N confirmations.  Each additional N confirmations would offer what is currently 1 block worth of added integrity.  An N block fork would then be equivalent to what a 1 block fork is currently, and N block forks would become as common as 1 block forks are currently.  Other than "scaling the security factor down by N" what is the problem?
In fact, this scheme doesn't change anything: miners still add transactions to a block, but instead of saying that these transactions belong to block N we say that they belong to block (N-1). With all this discussion I forgot what the supposed benefit of it is. If it is supposed to limit the set of possible maps then it obviously won't work, because miners can add arbitrary transactions (e.g. send funds to themselves) to the previous block and change their seed as many times as they want.

(Here is where you "got it" before... you were just confusing the two issues at that time.)
sr. member
Activity: 434
Merit: 250
June 11, 2014, 09:53:42 AM
I thought we covered this already...

Under this scenario you *could* just generate the map from the top block, but you are not *forced* to until the block you are working on becomes too old.  Yes, your submitted work would include the hash of the block you think is top (as normal) as the previous block reference, you just wouldn't be forced to use this most recent block as map seed.  As such, forced map resets would be 4 times less frequent.  We could use any number for the allowed depth, really, and could even have this number adjust dynamically based on block rates, to keep map refresh frequency relatively consistent with network hash rate.
As I tried to explain to you, no one cares about what you submitted if you didn't protect it with some proof-of-work. Therefore, it is unnecessary to complicate the protocol by sending useless stuff (like the previous block hash in your example).

Argh, you "got it" once already, I swear!

It would be like writing the new transactions into the ledger, but not signing next to them - instead signing next to all the old transactions.  When someone else came in later with some new transactions (also potentially "still unconfirmed") they would be signing next to your old transactions, giving them their confirmation.

Yes, transactions in these blocks still need to be treated as if they are actually still sitting, unconfirmed, in the txpool, but there is a disincentive to ignore those transactions as the only way to do so is to ignore that block, and ignoring that block means you need to come up with two new blocks to overcome it with a longer chain.

EDIT: Also, this doesn't require any extra data to be sent to the network.  The previous block hash is already there, and we would just steal 2 bits (for N=4, anyway) out of the nonce to store the seed depth.  Block size wouldn't increase at all.
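
The bit-stealing itself could look like this (a sketch; the exact nonce layout is an assumption): the top 2 bits of the existing 32-bit nonce carry the seed depth, leaving 30 bits of ordinary nonce space.

Code:
#include <cstdint>

inline uint32_t PackNonce(uint32_t nonce30, uint32_t depth2)
{
    return ((depth2 & 0x3u) << 30) | (nonce30 & 0x3FFFFFFFu);
}

inline uint32_t SeedDepthFromNonce(uint32_t nonce) { return nonce >> 30; }
inline uint32_t PlainNonce(uint32_t nonce)         { return nonce & 0x3FFFFFFFu; }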
sr. member
Activity: 434
Merit: 250
June 11, 2014, 09:49:57 AM

Are the devs even working on it?  We have multiple proposals for mitigation of the attack now; is the problem in selecting the course of action or in executing it?

Quote
I agree, no sense.

I've detailed it in my last post; let me know if you need more explanation/clarification.

Quote
This is some huge and complex change to the whole blockchain semantics.

Not really.  It ultimately just changes the meaning of "one confirm" to mean "1-to-N blocks deep" instead of "1 block deep", resulting in effective network security being reduced by a factor of up to N. (And arguably only for the last N blocks!)

Otherwise what impact on semantics do you see?

Quote
I still don't understand how you are going to add transactions if you generate the map based only on previous blocks.

We covered this earlier as well.  In most blocks (assuming bots maintain the majority of hashing) there will likely be no effective change, since botters will likely always submit their blocks with a map seed based on the "new" block itself, thus securing the transactions at that moment.  In the case where a new block is submitted based on a map seed 1 block deep, the new transactions are not at all secured ("yet") and the additional confirmation strength is given to the prior block.  The new transactions in this new block may not actually be confirmed at all for 3 more blocks, but will be eventually.  Similarly for 2 deep, and so on.

Quote
But anyway, this would require a lot of time to work out this scheme and a lot of changes to the client.

Not really.  The work structure given to motogame would need to be expanded to include data for the prior N blocks, the g_HasNextWork flag semantics would need to be modified somewhat, work validation would need to check against the correct block data, and confirmation numbers would need to be divided by N.  What else would need to be done?  I could probably work up this patch in under a day of dedicated time.
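
As a rough sketch of the expanded work structure and the confirmation adjustment (field names are guesses, not the real motogame definitions):

Code:
#include <array>
#include <cstdint>

static const int SEED_DEPTH_MAX = 4; // N

struct WorkInput {
    // Hashes of the last N blocks, so the game can seed a map from any
    // of them instead of only the tip (uint64_t is a placeholder type).
    std::array<uint64_t, SEED_DEPTH_MAX> recentBlockHashes;
    uint32_t nTipHeight;
};

// Validation side: an N-block fork now costs what a 1-block fork used to,
// so effective confirmations shrink by up to a factor of N.
int EffectiveConfirmations(int nRawConfirmations)
{
    return nRawConfirmations / SEED_DEPTH_MAX;
}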

If the developers are not committing to resolving these issues then they need to let us know.



full member
Activity: 204
Merit: 100
June 11, 2014, 09:42:18 AM
I thought we covered this already...

Under this scenario you *could* just generate the map from the top block, but you are not *forced* to until the block you are working on becomes too old.  Yes, your submitted work would include the hash of the block you think is top (as normal) as the previous block reference, you just wouldn't be forced to use this most recent block as map seed.  As such, forced map resets would be 4 times less frequent.  We could use any number for the allowed depth, really, and could even have this number adjust dynamically based on block rates, to keep map refresh frequency relatively consistent with network hash rate.
As I tried to explain to you, no one cares about what you submitted if you didn't protect it with some proof-of-work. Therefore, it is unnecessary to complicate the protocol by sending useless stuff (like the previous block hash in your example).
sr. member
Activity: 434
Merit: 250
June 11, 2014, 09:25:27 AM
by using 2 bits of nonce to specify which of the top 4 blocks to generate map from, and adjusting confirmation semantics accordingly
Ah, now I see how wrong it is. If you measure distance from the top block then you should include the hash of the block that you think is top, but if you know its hash then you can just generate the map from it.

I thought we covered this already...

Under this scenario you *could* just generate the map from the top block, but you are not *forced* to until the block you are working on becomes too old.  Yes, your submitted work would include the hash of the block you think is top (as normal) as the previous block reference, you just wouldn't be forced to use this most recent block as map seed.  As such, forced map resets would be 4 times less frequent.  We could use any number for the allowed depth, really, and could even have this number adjust dynamically based on block rates, to keep map refresh frequency relatively consistent with network hash rate.
sr. member
Activity: 434
Merit: 250
June 11, 2014, 09:18:44 AM
One thing I have been looking into for difficulty control is the possibility of setting the Perlin function itself to scale with difficulty.  Perlin complexity scales exponentially with the dimension of the seed, so this might be a nice place to inject computational difficulty.  I think that it would make more sense to increase the complexity of frame calculation as opposed to increasing the complexity of map generation (so that we never get into a situation where a user can't start a game because they don't have sufficient hardware to generate a map in reasonable time) but I'm not really sure if this is a better approach or not.  I could imagine a similar situation where users' framerates are throttled to something annoyingly low simply because they can't calculate the Perlin noise quickly enough to keep up.

What was that? I only see a bunch of words that make no sense. The Perlin function has no difficulty scaling. What is "Perlin complexity" supposed to mean?

Uhhhh, wut?

Perlin complexity is 2^N, where N is the dimension of the seed.  This is well known; it is even on the Wikipedia page under the "Complexity" heading.

Right now we are calculating a two-dimensional Perlin coherency from a two-dimensional seed, so we have a constant complexity of 2^2.  There is nothing preventing us from calculating a two-dimensional coherency by first calculating a three-dimensional coherency from a three-dimensional seed, and applying a (modular) combiner to "flatten" the resulting noise into a 2d coherency.  (Arguably we could also scatter the 3d coherency back onto a 2d plane, and re-seed a second Perlin round from the resulting (now 2d) noise, but this gives us an overall polynomial, rather than exponential, complexity, instead of the nice, clean "2^N curve like bitcoin has" outcome.)

Quote
Perlin noise computation is not related to the map size; it is always calculated from the 4 nearest points.

It is calculated from 4 points only in a 2d space.  It would be calculated from 8 points in a 3d space, 16 points in a 4d space, 32 points in a 5d space, 64 points in a 6d space, and so on.  See where the 2^N comes from, now?
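
To make the 2^N count concrete, here is a self-contained sketch (the per-corner gradient contribution is stubbed out, and this is not the motogame implementation): interpolating noise at a point in N dimensions visits every corner of the surrounding unit hypercube, and an N-cube has 2^N corners.

Code:
#include <cmath>
#include <cstdio>
#include <vector>

// Stub for the gradient dot-product at an integer lattice corner.
double CornerContribution(const std::vector<int>& corner,
                          const std::vector<double>& p)
{
    double d = 0.0;
    for (std::size_t i = 0; i < p.size(); ++i)
        d += p[i] - corner[i]; // stand-in for gradient . offset
    return d;
}

double NoiseND(const std::vector<double>& p)
{
    const std::size_t dim = p.size();
    double sum = 0.0;
    // Enumerate all 2^dim corners of the unit hypercube around p.
    for (unsigned mask = 0; mask < (1u << dim); ++mask) {
        std::vector<int> corner(dim);
        double weight = 1.0;
        for (std::size_t i = 0; i < dim; ++i) {
            int bit = (mask >> i) & 1;
            double f = p[i] - std::floor(p[i]);
            corner[i] = (int)std::floor(p[i]) + bit;
            weight *= bit ? f : 1.0 - f; // linear weights per axis
        }
        sum += weight * CornerContribution(corner, p);
    }
    return sum; // cost grows as 2^dim: 4 corners in 2d, 8 in 3d, 16 in 4d...
}

int main()
{
    std::printf("%f\n", NoiseND({0.3, 0.7, 0.1})); // 3d: 8 corner evaluations
    return 0;
}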

Quote
And we aren't looking for a place to inject computational difficulty; it is not a PoW currency.

Huh?  It is most certainly a PoW currency (just with a very "fancy" work function) and we certainly are looking to inject (in some form) computational difficulty.  (Any proposed solution that doesn't result in increased computational difficulty doesn't do anything to hamper bots.  The only real metric for the bots' performance is the computational difficulty of the work challenge!)  I'm just proposing that we might be able to do it directly/explicitly via a complexity curve, instead of trying to come up with some way of "indirectly" coupling difficulty and targettime, likely by playing with block acceptance semantics, as we've proposed previously.  I'm just trying to offer more options.  I'm not saying that this would necessarily be better in the long run than doing it indirectly (I think there are pros and cons both ways, and multiple trade-offs to be taken into consideration); I'm simply pointing out the possibility.


full member
Activity: 204
Merit: 100
June 11, 2014, 08:50:54 AM
by using 2 bits of nonce to specify which of the top 4 blocks to generate map from, and adjusting confirmation semantics accordingly
Ah, now I see how wrong it is. If you measure distance from the top block then you should include the hash of the block that you think is top, but if you know its hash then you can just generate the map from it.
full member
Activity: 204
Merit: 100
June 11, 2014, 08:42:25 AM
Is there any ETA
No.

One thing I have been looking into for difficulty control is the possibility of setting the Perlin function itself to scale with difficulty.  Perlin complexity scales exponentially with the dimension of the seed, so this might be a nice place to inject computational difficulty.  I think that it would make more sense to increase the complexity of frame calculation as opposed to increasing the complexity of map generation (so that we never get into a situation where a user can't start a game because they don't have sufficient hardware to generate a map in reasonable time) but I'm not really sure if this is a better approach or not.  I could imagine a similar situation where users' framerates are throttled to something annoyingly low simply because they can't calculate the Perlin noise quickly enough to keep up.

What was that? I only see a bunch of words that make no sense. The Perlin function has no difficulty scaling. What is "Perlin complexity" supposed to mean? Perlin noise computation is not related to the map size; it is always calculated from the 4 nearest points.
I agree, no sense.

As for the map reset, I still think that simply allowing the users something like 4 blocks of "slack time" (by using 2 bits of nonce to specify which of the top 4 blocks to generate map from, and adjusting confirmation semantics accordingly) is the best approach.
This is some huge and complex change to the whole blockchain semantics. I still don't understand how you are going to add transactions if you generate the map based only on previous blocks. But anyway, this would require a lot of time to work out this scheme and a lot of changes to the client.