“i always like to play around with peoples game theory of their alts, find the flaws and find solutions”
Thanks, franky1.
Let's review the points one by one.
Do you agree that the “heroic deed” is really scarce? It is hard to accomplish and it has a cost, so it is totally different from an “air drop”. Everyone, in order to be a participant and get paid, needs to spend at least one hour helping the project. To have 45 million participants we need 45 million hours of contribution. That is a huge number. Can we achieve this level of participation in one year? Absolutely not. We need at least 10 years to reach 45 million real participants in the project.
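As a rough back-of-the-envelope check (my own arithmetic, assuming one hour per participant):

45,000,000 hours / 10 years = 4,500,000 hours per year
4,500,000 / 365 ≈ 12,300 hours per day
12,300 / 8 ≈ 1,540 people contributing full-time, every single day, for a decade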
- What kind of participation can people do in this software (system or community)?
There is no limit on activities, but we intentionally start with “develop” and over time expand it to a wider range of activities.
“Develop” refers to every activity we need to develop our system and its community: software development, testing, design, documentation, manuals, translation, tutorials and educational material, etc.
These kinds of activities can be measured and evaluated fairly.
We will definitely reach 45 million hours of participation from supporters (people with different skills) over the next 10 years. This is not today's problem at all.
I still have to write more technical documents to explain what exactly happens under the hood. But for now, since you emphasized blockchain space “bloat”, I want to mention two other facts that make the situation even worse:
1. My proposed data structure for recording data is a directed acyclic graph (DAG), not a linked list like Bitcoin's! In this design each node can publish unlimited blocks, with no equivalent of Bitcoin's famous 10-minute gap between blocks.
2. My proposed DAG records not only transactions, but also text documents (e.g. decentralized weblogs, wikis, forums), media files (e.g. decentralized podcasts and video channels) and literally every kind of file, all in one DAG. A rough sketch of such a block follows.
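To make the shape of the data structure concrete, here is a minimal sketch of one block in such a DAG. This is just my reading of the description above, not the actual design; all field names (creator, ancestors, doc_type, payload) are illustrative assumptions.

```python
# A minimal sketch of one block in the proposed DAG ("blockgraph").
# Field names are illustrative assumptions, not the actual design.
import hashlib
import json
from dataclasses import dataclass


@dataclass
class Block:
    creator: str          # node that published the block
    ancestors: list[str]  # hashes of parent blocks (a DAG, not a single chain)
    doc_type: str         # "transaction", "wiki_page", "weblog_post", "video", ...
    payload: bytes        # the recorded content itself

    def hash(self) -> str:
        # Canonical header serialization so every node derives the same hash.
        header = json.dumps({
            "creator": self.creator,
            "ancestors": sorted(self.ancestors),
            "doc_type": self.doc_type,
            "payload": hashlib.sha256(self.payload).hexdigest(),
        }, sort_keys=True)
        return hashlib.sha256(header.encode()).hexdigest()
```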
Considering these two features, the blockgraph will bloat even more rapidly. What is the solution?
First of all, users have to pay to record data on the blockgraph, and nodes receive this money in their wallets. The cost differs by data type; e.g. a transaction, a wiki page, a weblog post and a video stream have different prices.
Even if users pay to record data, the blockgraph will still bloat quickly, but there are solutions for that too. We can solve it with the simple “supply and demand” rule of the free market.
In my design, data is divided into two classes:
A: Essential data
B: Optional data
The essential data is the core transaction data, which is compressed and small. It is necessary, and every node has to record and maintain it (either the full transaction history or a pruned version is fine), whereas the optional data is bulky and each node may or may not record it.
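Combining the two ideas above (per-type pricing and the essential/optional split), a node's fee schedule might look something like the sketch below. The type names and fee numbers are made-up assumptions, purely for illustration.

```python
# Illustrative sketch of the two data classes and per-type recording fees.
# The fee values and type names are hypothetical, not the real schedule.
from enum import Enum


class DataClass(Enum):
    ESSENTIAL = "essential"  # every node must record and maintain it
    OPTIONAL = "optional"    # each node may record it or not


# doc_type -> (data class, recording fee in UBL per KB); made-up numbers
FEE_SCHEDULE = {
    "transaction": (DataClass.ESSENTIAL, 0.01),
    "wiki_page":   (DataClass.OPTIONAL,  0.05),
    "weblog_post": (DataClass.OPTIONAL,  0.05),
    "video":       (DataClass.OPTIONAL,  0.50),
}


def recording_fee(doc_type: str, size_kb: float) -> float:
    """Fee a user pays to record a document of the given type and size."""
    _, fee_per_kb = FEE_SCHEDULE[doc_type]
    return fee_per_kb * size_kb
```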
BTW, if a given node needs some optional data that does not exist on its local machine, it can purchase this information from other nodes. That node can in turn sell the data to other nodes, if there is demand for it. In such a mechanism, some nodes may prefer to act as long-term data backers and make a passive income by selling data, while the others maintain only the necessary data.
It is a free market for data. As we know, storage nowadays is very cheap, so most probably a large percentage of nodes will prefer to store the entire blockgraph (including transactions, wiki pages, weblog posts, even video podcasts) on their passive hard drives and earn money by selling that information.
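As a sketch of how that market could work on the wire: a node missing an optional document asks its peers for quotes, buys from the cheapest seller, then verifies the bytes against the hash already recorded on the blockgraph. The peer.quote / peer.sell calls here are stand-ins I invented, not a real protocol.

```python
# Sketch of the optional-data market. Peer APIs are assumptions.
import hashlib


def fetch_optional_data(doc_hash: str, peers: list) -> bytes:
    # Ask every peer whether it stores the document and at what price.
    quotes = []
    for peer in peers:
        price = peer.quote(doc_hash)  # hypothetical API; None = not stored
        if price is not None:
            quotes.append((price, peer))

    if not quotes:
        raise LookupError(f"no peer currently sells document {doc_hash}")

    # Buy from the cheapest long-term data backer.
    price, seller = min(quotes, key=lambda q: q[0])
    payload = seller.sell(doc_hash, payment=price)  # pay in UBL, receive bytes

    # Verify the purchase against the hash recorded on the blockgraph.
    if hashlib.sha256(payload).hexdigest() != doc_hash:
        raise ValueError("seller returned corrupted data")
    return payload
```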
At this point we can also use CDNs (or better, call them BDNs) to provide fast, reliable, distributed storage across the globe. It would be easy to write a plugin that connects our nodes to any commercial CDN company, and vice versa.
Recorded data can have an expiry date as well, so the recorder may renew the data rent regularly.
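A possible shape for that rent mechanism, again with invented names and a hypothetical price constant:

```python
# Sketch of rent-based expiry for optional data: storage is paid for in
# advance and the record lapses unless the rent is renewed.
import time

RENT_PER_KB_PER_DAY = 0.001  # hypothetical UBL price


class StoredRecord:
    def __init__(self, doc_hash: str, size_kb: float, days_paid: int):
        self.doc_hash = doc_hash
        self.size_kb = size_kb
        self.expires_at = time.time() + days_paid * 86400

    def expired(self) -> bool:
        return time.time() > self.expires_at

    def renew(self, days: int) -> float:
        """Extend the rent; returns the fee the recorder pays."""
        self.expires_at += days * 86400
        return RENT_PER_KB_PER_DAY * self.size_kb * days
```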
Nodes can manage which kinds of data they maintain. In addition, customized applications can be implemented as plugins working on top of our software via APIs. Such an app would use blockgraph space only as a proof of existence, and the partners would share the bulk data among themselves. For example, a supply-chain application could be a plugin to the software: business partners install it, record only the hashes of goods allocations or statistics on the blockgraph, and transfer the real bulk data via FTP or whatever else between partners.
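The proof-of-existence pattern itself is simple enough to sketch. Here record_on_blockgraph is a stand-in for whatever recording API the node software would expose:

```python
# Sketch of the proof-of-existence pattern: record only the hash of a bulky
# document on the blockgraph, while the document itself travels between
# partners off-chain (FTP, etc.).
import hashlib


def anchor_document(document: bytes, record_on_blockgraph) -> str:
    """Store only the fingerprint on-chain; ship the bytes off-chain."""
    doc_hash = hashlib.sha256(document).hexdigest()
    record_on_blockgraph(doc_type="proof_of_existence", payload=doc_hash)
    return doc_hash


def verify_document(document: bytes, anchored_hash: str) -> bool:
    """Any partner can later prove the document existed unmodified."""
    return hashlib.sha256(document).hexdigest() == anchored_hash
```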
At the end of the day, nodes use UBL either to trade real goods and services in smart contracts, or to trade the data (encrypted or not) recorded on the blockgraph.
These are just a few naive solutions that are practical today and that we already know. We will definitely improve them and innovate far more solutions for using and managing this mass of data.
Another point, about the “coinbase block”: every 12 hours one coinbase block is created by the software. It contains “only” the information about all participants' dividends. Since the system is decentralized and all nodes have the same information, all nodes can rationally create the same coinbase block. There is no block reward in this system either, so we actually do not even need to broadcast the coinbase block. Each node independently creates the coinbase block and adds it to the DAG. The coinbase block hash will be the same across the entire network, and the outcome for all nodes will be the same transaction records and the same final balances. The details of this mechanism need another document, which I'll post ASAP.
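Until that document is out, here is a minimal sketch of the idea that every node can derive the identical coinbase block locally. The dividend rule (proportional to contributed hours) and all names are my assumptions for illustration; the key point is the canonical serialization, which makes the hash identical on every honest node with no broadcast.

```python
# Sketch of a deterministic coinbase block: every node sees the same
# participation ledger, so every node computes the same block and hash.
import hashlib
import json


def build_coinbase(cycle_id: str, participation: dict[str, float],
                   minted_amount: float) -> dict:
    """participation maps each participant's address to contributed hours."""
    total_hours = sum(participation.values())
    # Dividend proportional to hours; everyone computes the identical split.
    dividends = {
        addr: minted_amount * hours / total_hours
        for addr, hours in sorted(participation.items())
    }
    block = {"cycle": cycle_id, "dividends": dividends}
    # Canonical serialization => identical hash on every honest node.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block
```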
“The tweak to the evaluation of payment…”
Thanks for your “active participation” and not just virtue buzz. Indeed, I can say you are now doing a kind of “heroic deed”, since the first mission of the system is to survive and improve itself. I guess you dedicated around 30 minutes to 1 hour to reading my posts and writing your notes, so you can now re-estimate how hard it will be to accumulate 45 million hours of participation in the system (technical or non-technical) and how long this process will take.
BTW, in my design we spend the whole output of an account in one transaction and return the remaining amount to a new account (like Bitcoin). In this system there is no balance for an account; each address can have unspent outputs (UTXOs) or spent outputs, the same as Bitcoin. Therefore the “slot” solution won't work for this system. Meanwhile, because of the way the system is designed, we can have very different types of transactions simultaneously in one block. That is, while we have M-of-N multi-signature transactions as the common classic transaction type, we can also have IoT-friendly transactions for micropayments, MimbleWimble-format transactions, and “in-jar” transactions for more privacy, all in one block. Depending on the case (in terms of functionality, transaction fee, privacy and security level, and so on), users can choose which type of transaction they want to make. Some of them will be really small and lightweight transactions, while others are longer, more secure, and cost more. I'll explain more technical details on this subject step by step.
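For readers less familiar with the UTXO model, here is a minimal sketch of the full-spend-with-change pattern described above. It mirrors Bitcoin's model; the field names are illustrative, not the final design.

```python
# Sketch of the full-spend UTXO model: a transaction consumes entire unspent
# outputs and returns the change to a fresh address; no per-account balance.

def build_transaction(utxos: list[dict], pay_to: str, amount: float,
                      fee: float, change_address: str) -> dict:
    """utxos: [{'id': ..., 'value': ...}] owned by the sender, spent whole."""
    total_in = sum(u["value"] for u in utxos)
    change = total_in - amount - fee
    if change < 0:
        raise ValueError("selected outputs do not cover amount + fee")

    outputs = [{"address": pay_to, "value": amount}]
    if change > 0:
        # The remainder becomes a brand-new output at a fresh address.
        outputs.append({"address": change_address, "value": change})

    return {"inputs": [u["id"] for u in utxos], "outputs": outputs}
```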
“...more then the 90mill daily creation you limited to(2x45m)…”
Again, I will have to post a separate article just to cover the coinbase mechanism, but for now I should note that the system is designed to support the entire world population (currently 8 billion), not just 45 million. If you have technical skills (mathematical, statistical or software development), let me know; I will send you a more detailed draft technical document (or even some code snippets) in advance.