Firstly, I like the general idea, and I had thought that pruned nodes work somewhat like this, at least in terms of storage. I'm not entirely sure I understand every detail of your suggestion though. E.g., not that it's of importance here, so feel free to ignore this question: How is it easy for governments to target companies unless they are all in the same country?
You're right, it is not 100% easy for them, but consider this: when Snowden was on the run, the USA was able to get the Bolivian PRESIDENT's plane grounded because they thought MAYBE Snowden was on board.
If Bitcoin were eroding taxation, national governments might indeed take global action.
That said, I also wrote that centralized Bitcoin hosting would still be better than the fiat we have now.
5.5:
I don't agree with the permanent blacklisting. You can either blacklist the ECDSA key, which is useless because the cheating node can just create a new one, or you can blacklist the IP. The cheating node will get a new IP, and that one will get banned as well. If someone spoofed IPs they could completely isolate a node or even render the entire network useless.
"Proof of work burn done"? So I would need to pay (or burn) bitcoin in order to run a node?
Proof of work does not involve burning Bitcoin, just hashing for a minute or two as DDoS prevention. Very standard, the same as in hashcash or Bitcoin mining. I wrote "work burn" because you would be "wasting" CPU/GPU power.
This wasted work is what prevents a spam attack using many newly generated keys.
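To make the "work burn" concrete, here is a minimal hashcash-style sketch in Python. The function names and the difficulty constant are my own illustrative assumptions, not part of the proposal; in practice the difficulty would be tuned so the grind takes a minute or two on typical hardware.

```python
import hashlib

# Assumed example difficulty; would be tuned so minting takes minutes.
DIFFICULTY_BITS = 22

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def mint_ticket(pubkey: bytes) -> int:
    """Grind a nonce once per key, proving CPU work was spent."""
    nonce = 0
    while True:
        digest = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return nonce
        nonce += 1

def verify_ticket(pubkey: bytes, nonce: int) -> bool:
    """Cheap check by any peer: a single hash, no grinding."""
    digest = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS
```

Since the work is bound to the key, generating a fresh key to dodge a ban means paying the grinding cost all over again.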
5.7:
I am not entirely sure, but how do the tables you use to store information about other nodes work if you consider home-run nodes? They will, e.g., get new IP addresses daily and might be offline for the majority of the day.
If nodes change IP they would have to announce their presence somehow when coming online. Apart from that, nodes can store key/IP mappings along with some anti-DDoS trust information.
Each node would know many other nodes for each chunk, so some being offline temporarily would not be an issue.
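A minimal sketch of such a per-chunk peer table, assuming a long-lived public key as the stable identifier (the field names and the trust heuristic are illustrative assumptions):

```python
import time
from dataclasses import dataclass

@dataclass
class PeerInfo:
    ip: str          # last announced address; home nodes re-announce daily
    last_seen: float
    trust: float = 0.0  # e.g. successful responses minus failures

class ChunkPeerTable:
    def __init__(self):
        self.peers: dict[bytes, PeerInfo] = {}  # pubkey -> info

    def announce(self, pubkey: bytes, ip: str) -> None:
        """Called when a node (re)joins, possibly from a new IP."""
        info = self.peers.get(pubkey)
        if info:
            info.ip, info.last_seen = ip, time.time()
        else:
            self.peers[pubkey] = PeerInfo(ip, time.time())

    def online_candidates(self, max_age: float = 3600.0) -> list[PeerInfo]:
        """Peers seen recently; with 200+ nodes per chunk,
        a few offline home nodes do not matter."""
        now = time.time()
        return [p for p in self.peers.values() if now - p.last_seen < max_age]
```

The key point is that the key, not the IP, carries the trust score, so a daily IP change costs a node nothing but a re-announce.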
Also, why not validate the entire block yourself but only keep your chunk, instead of requesting data from several peers that might not be responsive? Block validation is crucial for propagation, and if it becomes a slow process we might see a significant increase in orphans.
At 1600 TPS the blocks are ~330 MB each; it would be slower to wait for every node to download all of that. A block that size wouldn't even be able to cross the firewall of China within 10 min. if sent directly from one node to another (according to Chinese miners, not me).
Much faster to have a "slower" but massively parallel process.
On average you would check 512 transactions. If you got all their inputs, say an average of 3, that would only be 4 * 330 bytes * 512 = 0.68 MB. This represents most of the data you would need to get per block, so maybe 1-2 MB over 10 minutes, not a huge load.
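For the record, a quick back-of-envelope check of that figure (the 3-inputs average is the assumption from above):

```python
# Per-block polling load for one swarm node (numbers from the text).
chunk_txs  = 512                 # one chunk = 512 transactions
avg_inputs = 3                   # assumed average inputs per TX
tx_size    = 330                 # bytes, average transaction size
records    = 1 + avg_inputs      # the TX itself plus its inputs
load_bytes = records * tx_size * chunk_txs
print(load_bytes / 1e6)          # ~0.68 MB, spread over ~10 minutes
```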
Finding online nodes would not take much time.
6.1:
I'm not sure I can follow your arguments on bandwidth usage. 156 GB per year will not be evenly distributed.
See above: I estimate 1-2 MB on average per block over 10 minutes. Even if this sometimes fluctuated up to 10 MB, that would not cripple a normal PC connection at all.
You also, as far as I can tell, did not account for transaction relaying, which is part of a full node's work. If we assume 10^6 TX per block, this is no small part to handle.
Correct, I forgot. However, we can just relay based on script range: if the TX claims an output script with a hash in the range 000-0CF, you send it to the nodes handling that script range.
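A rough sketch of that routing rule, assuming a 12-bit hash prefix (three hex digits, matching the 000-0CF example) as the range key; the function names and data structures are my own illustration:

```python
def hash_prefix(script_hash: bytes, prefix_bits: int = 12) -> int:
    """Interpret the first bits of the script hash as a range key
    (12 bits = three hex digits, e.g. 0x000 through 0xFFF)."""
    return int.from_bytes(script_hash[:2], "big") >> (16 - prefix_bits)

def relay_targets(script_hash: bytes, peers_by_range: dict) -> list:
    """peers_by_range maps (lo, hi) prefix intervals to peer lists.
    The TX is forwarded only to peers covering its script range,
    instead of flooding every node with every transaction."""
    key = hash_prefix(script_hash)
    return [
        peer
        for (lo, hi), peer_list in peers_by_range.items()
        if lo <= key <= hi
        for peer in peer_list
    ]
```

This way each node's relay traffic scales with its chunk range rather than with the full 10^6 TX per block.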
Other things I might have missed/don't understand:
- How are chunk sections selected? Do I pick them by hand? Are they random?
Always intervals of 512. This gives a nice trade-off in terms of the data you need to store and the data you need to poll for.
512 also means that your entire chunk will equal exactly one node in the block's merkle tree.
When running a new node, the software should select a chunk range that has the fewest other swarm nodes hosting it.
The range 51200-51712, for example.
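A sketch of that selection logic; the coverage counts would in practice come from peer announcements, and the function and parameter names are assumptions:

```python
CHUNK = 512  # fixed interval: one chunk = one block merkle tree node

def pick_chunk_range(coverage: dict[int, int], max_tx: int) -> tuple[int, int]:
    """coverage maps a range's start index to how many swarm nodes
    host it; pick the least-covered range to even out the swarm."""
    starts = range(0, max_tx, CHUNK)
    best = min(starts, key=lambda s: coverage.get(s, 0))
    return (best, best + CHUNK)

# e.g. pick_chunk_range(coverage, 10**6) might return (51200, 51712)
```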
- How do you suggest starting with an "all trust" approach, considering the current state of the network is "don't trust anyone"?
Let's say you are a Chinese mining operation and there are multiple openings in and out of China, but each is limited to, say, half the block size per 10 min.
So you run two swarm nodes that trust each other (because you operate both) and place one near each of the firewall openings. Think one at the border with Russia and one at the south-west border.
And ta-da, now Chinese miners are ready for bigger block sizes even though their individual connections suck.
- What if there are not enough swarm nodes to cover all chunk ranges? Or, less crucially, what if there are not enough swarm nodes to cover all chunk ranges all the time?
Each range should always have 200+ nodes covering it at a minimum.
As for ensuring coverage, you could do any of:
A. Limit blocks by TX number instead of size.
B. Nodes can cover more chunks, or all of them, if they choose, so just have some that cover the range from 1 mil. to infinity even if it is mostly empty.
C. Calculate the maximum number of TX based on the block limit and the min. TX size (see the quick calculation below).
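A quick sketch of option C; the block limit and minimum TX size here are assumed example values, not fixed protocol numbers:

```python
# Option C: derive an upper bound on how many chunk ranges must be covered.
block_limit = 330_000_000                  # bytes, the ~330 MB block from above
min_tx_size = 100                          # assumed minimum TX size in bytes
max_txs     = block_limit // min_tx_size   # 3,300,000 TX upper bound
chunks      = -(-max_txs // 512)           # ceiling division: ranges to cover
print(max_txs, chunks)                     # 3300000 TX -> 6446 chunk ranges
```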