I cloned the Qeditas source code from IOHK's GitHub and have been using it to play with some ideas. Here is my most up-to-date branch:
https://github.com/aliibrahim80/qeditas/tree/sha256hashval

In addition to the work I've done, I have merged in mariashka's modifications to the "mathdata" code. She verified the code in Coq and factored the OCaml code into three modules: logic, mathdata and checking. Her branch is here:
https://github.com/input-output-hk/qeditas/tree/verified-mathdata

I don't think it was ever pointed out on this thread, but there were problems with the initial distribution database. The balances all appear to be correct, but the wrong hashes were used to refer to assets, and this propagated all the way up to the root of the initial ledger ctree. mariashka wrote code to fix this, and using it I obtained a corrected version. After making further changes (using only sha256 instead of sha256+ripemd160), I had to regenerate the database. Here is a copy of the version of the database (about 1GB) with the initial ledger tree I am currently using:
https://mega.nz/#!JGx1VSwY!04XmXU0Zv0iye4Sa2LhSvp7ERh_HiJAKn_2IsQsYkN4

Here is a summary of the changes I have made so far:
The first thing I did was to add a proof-of-burn component to the consensus code. It is only simulated for now, but the idea is to occasionally burn some proof-of-work coins in order to embed the current block/state into a proof-of-work block chain. This would ensure no one could privately create a long-range attack chain. It would also supply a source of "randomness" for the stake modifier. (The previous approach of letting stakers choose bits for future stake modifiers seemed unsafe to me.)
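To make the idea concrete, here is a minimal OCaml sketch; the record type, field names and the hash used are made up for illustration and are not the actual interfaces:

```ocaml
(* Hypothetical record for a burn recorded on the proof-of-work chain. *)
type pow_burn = {
    burn_height : int64;       (* height of the PoW block containing the burn tx *)
    burn_txid : string;        (* txid of the burn tx on the PoW chain *)
    committed_block : string;  (* hash of the Qeditas block/state being embedded *)
  }

(* Derive fresh stake-modifier bits from the burn txid rather than from
   staker-chosen bits: the PoW chain's randomness is outside any
   individual staker's control. *)
let stake_modifier_of_burn (b : pow_burn) : string =
  (* Digest.string (MD5) is just a stand-in here; the real code uses sha256. *)
  Digest.string (b.burn_txid ^ b.committed_block)
```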
I added some JSON support along with a "createtx" command that takes a JSON representation of a tx and converts it to the internal representation.
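As a rough illustration of what the conversion involves, here is a sketch; the json type and the field names ("vin", "asset", "addr", "amount") are invented for the example and do not necessarily match my actual format:

```ocaml
(* Simple JSON representation, just enough for the example. *)
type json =
  | JString of string
  | JNum of float
  | JList of json list
  | JObj of (string * json) list

type txin = { prev_asset : string }             (* hashval of the spent asset *)
type txout = { addr : string; amount : int64 }
type tx = txin list * txout list

let assoc k = function
  | JObj l -> List.assoc k l
  | _ -> raise Not_found

(* Convert the json form of a tx into the internal representation. *)
let tx_of_json (j : json) : tx =
  let vin = match assoc "vin" j with JList l -> l | _ -> failwith "bad vin" in
  let vout = match assoc "vout" j with JList l -> l | _ -> failwith "bad vout" in
  let parse_in o =
    match assoc "asset" o with
    | JString h -> { prev_asset = h }
    | _ -> failwith "bad input"
  in
  let parse_out o =
    match assoc "addr" o, assoc "amount" o with
    | JString a, JNum v -> { addr = a; amount = Int64.of_float v }
    | _ -> failwith "bad output"
  in
  (List.map parse_in vin, List.map parse_out vout)
```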
One potentially major problem I found was that multiple threads could call the sha256 and ripemd160 code at the same time, corrupting intermediate results (race conditions). I added locking that should avoid these races.
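The fix is ordinary mutual exclusion. A minimal sketch, assuming (as was the case) that the hash code keeps its intermediate state in module-level mutable variables; compile with the threads library:

```ocaml
(* One lock guarding all entry points into the shared hash state. *)
let sha256_mutex = Mutex.create ()

(* Run f while holding the lock, releasing it even on exceptions, so
   only one thread at a time can touch the hashing state. *)
let with_sha256_lock (f : unit -> 'a) : 'a =
  Mutex.lock sha256_mutex;
  try
    let r = f () in
    Mutex.unlock sha256_mutex;
    r
  with e ->
    Mutex.unlock sha256_mutex;
    raise e
```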
In an effort to make things faster and safer, I changed the "hashval" type from 160 bits (5 int32s) to 256 bits (8 int32s). These values are now calculated with sha256 alone, where previously the 160-bit hashvals were computed with sha256 followed by ripemd160. This roughly halves the amount of hashing to be done and does seem to make the code noticeably faster. I also thought it would be safer: using 160 bits could make the system vulnerable to a "birthday paradox" attack (finding a collision in only about 2^80 tries). Since hashvals are used to identify objects and propositions, such collisions could cause serious problems; for example, someone could try to find a collision between two sets which can be demonstrated to be different. I added an explicit hashval for such identifiers to the Owns and Rights preassets and use it to identify the corresponding object or proposition, instead of relying on the address (which is still 160 bits) where the asset is contained.
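In OCaml terms, the change to the type looks roughly like this (illustrative, not the exact declarations):

```ocaml
(* before: 160 bits (5 int32s), computed as ripemd160 (sha256 data) *)
type hashval160 = int32 * int32 * int32 * int32 * int32

(* after: 256 bits (8 int32s), computed with a single sha256 pass;
   addresses themselves remain 160 bits *)
type hashval = int32 * int32 * int32 * int32 * int32 * int32 * int32 * int32
```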
I completely removed the proof-of-storage component from the consensus algorithm, mainly because, to be honest, it was easier to delete than to understand.
A commitment to the blockdelta (including the merkle root of the txs and their signatures) was added to the blockheader. Surprisingly, there was no such commitment before: given a blockheader, there was no way to identify a definite blockdelta corresponding to it. If a node were supplied a blockdelta that did not support the blockheader, there would be no way to know whether the header should be ignored or whether the wrong blockdelta had been supplied (perhaps as an attack).
In addition, the header now contains a hash of the signature of the previous block header (except for the genesis block). Again, this is to prevent attackers from later supplying false signatures in an effort to make previously valid blocks appear invalid.
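Together, the two header changes amount to something like the following sketch (field names are hypothetical; hashval is the 256-bit type sketched above):

```ocaml
type blockheader = {
    prevblockhash : hashval option;  (* hash of the previous header; None for genesis *)
    prevsighash : hashval option;    (* hash of the previous header's signature; None for genesis *)
    deltahash : hashval;             (* commits to the blockdelta, incl. the merkle root of txs and signatures *)
    (* ... staking and other consensus fields ... *)
  }
```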
Aside from minor bug fixes, the rest of the work went into debugging the networking code. Every message sent and received is now logged to dedicated files (in binary form), which helps me figure out what happened when things go wrong on the network.
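The logging itself is nothing fancy; roughly the following, with the per-peer, per-direction file naming being hypothetical:

```ocaml
(* Append one raw network message to a binary log file; dir is "in" or
   "out". Length-prefixed so the log can be replayed later. *)
let log_message ~(peer : string) ~(dir : string) (msg : string) : unit =
  let fname = Printf.sprintf "netlog_%s_%s.bin" peer dir in
  let oc = open_out_gen [Open_append; Open_creat; Open_binary] 0o644 fname in
  output_binary_int oc (String.length msg);
  output_string oc msg;
  close_out oc
```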
I like the idea behind Qeditas, but the current state of the code is slow and buggy. Fortunately, my impression is that most of the bugs are in parts of the code (e.g., the networking code) that do not correspond to the more important parts with a corresponding Coq verification. The slowness could probably be compensated for by significantly increasing the average block time (e.g., from 10 minutes to 240 minutes, i.e., 6 blocks a day). A very slow block time like this would give people time to react to problems if they appeared while the system is running. It would also mean more information could be burned into a proof-of-work chain without rising to a level that chain's users might consider "spam."
I am not sure how many of my changes IOHK wants to take, but of course they are welcome to pick and choose, take all or none. Since I'm making major changes, it might make more sense for me to create a renamed/rebranded independent fork. I am open to discussion about either having Qeditas adopt my changes or about renaming.