Update: The coin is basically done. Other people can come in and polish up the hundreds of other things now. There are two developers working on the frontend and API, which will speed it along. Nothing that is absolutely critical and non-incremental remains to be done. There are still hundreds of small changes, improvements, and refactorings to do, but they can be ignored for now.
After the IPO receipts are done, the schedule is cleared for the consensus implementation and the meshnet/darknet.
This is the CJDNS networking stack. Skywire is similar, but simpler.
The Skywire networking is designed to be "modular" in the sense that it can be torn apart:
- the routing algorithm can be swapped out or changed; the network is source-routed (see the packet sketch after this list)
- the TUN adapter is just an application on top of the base layer
- multi-homing and multiplexing are handled at a higher level and can be changed or controlled by the application
- settlement and payments for bandwidth are their own layer
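As a minimal sketch of what source routing means here (hypothetical types, not the actual Skywire wire format): the sender chooses the entire path, so a forwarding node just pops the next hop and passes the packet along, and the algorithm that picks the route can be replaced without touching the forwarding layer.

```go
package main

import "fmt"

// PubKey identifies a node in the network's own namespace
// (independent of IPv4/IPv6). 32 bytes is an assumption.
type PubKey [32]byte

// Packet is a hypothetical source-routed packet: the sender
// chooses the entire path, so intermediate nodes keep no
// routing tables; they only pop the next hop and forward.
type Packet struct {
	Route   []PubKey // remaining hops, chosen by the sender
	Payload []byte
}

// Forward pops the next hop. A real node would hand the packet
// to the transport for that peer; here we just return it.
func Forward(p Packet) (next PubKey, rest Packet, ok bool) {
	if len(p.Route) == 0 {
		return PubKey{}, p, false // we are the destination
	}
	return p.Route[0], Packet{Route: p.Route[1:], Payload: p.Payload}, true
}

func main() {
	var a, b PubKey
	a[0], b[0] = 1, 2
	p := Packet{Route: []PubKey{a, b}, Payload: []byte("hello")}
	for {
		next, rest, ok := Forward(p)
		if !ok {
			fmt.Printf("delivered: %s\n", rest.Payload)
			break
		}
		fmt.Printf("forward to node %x...\n", next[0])
		p = rest
	}
}
```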
There are a large number of pieces. The objective is to get them working at the ghetto level, even if they have to be centralized at first. People can fix or improve them later; then we can move on to the next part. The crypto/blockchain had to be perfect, but now we can shift back to cowboy development.
We built most of the infrastructure for internal usage, not for some abstract purpose. We tried doing the ICO/exchange with an automated bot over Bitmessage, Tox, and XMPP, and I was not happy with any of the solutions. I just wanted to be able to put in a public key for a node, connect to that node, and send bytes. We had:
- reliability issues. Direct connections from Russia/China/Hong Kong through Europe and North America were failing, and connections from the US to China were constantly connecting and disconnecting. The reliability of direct routes between nodes in the IP namespace seems to be decreasing and was never 100%. Many users have reported that CJDNS has higher reliability than IP because it does not rely on direct paths
- connecting all nodes to a VPN was failing because of DNS failures. Dependence upon DNS resulted in dropped and timed-out connections, with failures requiring a restart by hand ("RTNETLINK answers: Operation not permitted", "Linux route delete command failed: external program exited with error status: 2")
- Bitmessage could not send more than one message every five minutes, and was completely taken out by a flood attack during the period we were testing it. Message latency was in minutes, when we needed real time
- Tox worked very well, but the programmatic API for golang was lacking and did not work for a full month. After the API was fixed it worked, but then connections between North America and China kept connecting and disconnecting
So the requirements are (an API sketch follows this list):
- multi-hop (unix-to-unix copy style, source-routed)
- link layer encryption between nodes
- no reliance upon the DNS system
- our own namespace independent of IPv4/IPv6 (which is used only for transport)
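A sketch of the API I want, under assumed names (Network and Dial are illustrative, not the real codebase): the only address is a public key, and whatever transport sits underneath is invisible. The loopback implementation is a toy stand-in so the example runs.

```go
package main

import (
	"fmt"
	"net"
)

type PubKey [32]byte

// Network abstracts the transport: the only address is a public
// key. IPv4/IPv6, if used at all, is hidden underneath, and DNS
// never enters the picture.
type Network interface {
	Dial(node PubKey) (net.Conn, error)
}

// loopback is a toy implementation for illustration: every Dial
// returns one end of an in-memory pipe with an echo server on
// the other end, standing in for the remote node.
type loopback struct{}

func (loopback) Dial(node PubKey) (net.Conn, error) {
	c1, c2 := net.Pipe()
	go func() {
		buf := make([]byte, 64)
		n, _ := c2.Read(buf)
		c2.Write(buf[:n])
	}()
	return c1, nil
}

func main() {
	var n Network = loopback{}
	var key PubKey // the remote node's identity
	conn, err := n.Dial(key)
	if err != nil {
		panic(err)
	}
	conn.Write([]byte("ping"))
	buf := make([]byte, 64)
	k, _ := conn.Read(buf)
	fmt.Println(string(buf[:k])) // "ping"
}
```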
There is someone implementing the basic protocol now. Once we have length-prefixed packets flowing over it (a framing sketch follows the list below), we need to:
- add a golang TUN adapter, so we can tunnel a VPN connection over it and use it like a VPN
- add a wifi controller library to scan for local nodes, connect automatically, and form links (fixed-connectivity meshnet functionality)
- find a routing algorithm that will scale. While the network is under 10,000 nodes, any routing algorithm is fine
- get coin payments for transit set up, so people can earn coins for providing reliable links out of difficult countries
- build a "pirate box" you can plug into ethernet that connects to a bunch of nodes in the network, starts relaying traffic, and provides an open-access wifi connection over Skywire
Once the software is useful, there will be non-speculative capital inflows.
I don't think end-users care about software quality as much as usability and whether it works. Badly structured code does the same thing with the same reliability, but costs developers 5x the mental effort and frustration. However, if you are building the foundation for a larger system, quality is important. I think we should go on a rampage of implementing new things and ignore technical debt/refactoring for a while. As soon as something is working, move on to the next thing instead of improving or refactoring already-working components. Then refactor and move things around later.
Then there are loose ends that are frustrating. I want to get the ECC cryptography completely in golang, to enable cross-compilation for windows/osx/arm/linux, but this will require significant testing and will require auditing and fuzzing again.
This is a very incremental process. I am moving towards being able to run a brain wallet on a Raspberry Pi with a remote libbitcoin node, check balances, and process Bitcoin/Skycoin transactions from it. I am also moving towards being able to run Skywire on a Raspberry Pi, connect a laptop to its ethernet port, and have all the laptop's traffic tunneled over Skywire, in hardware.
Rant: I think the redecentralize movement is heading towards secession from the existing internet.
Computing is moving away from spatially localized storage and processing, towards computations that exist in an abstract mathematical space that governments cannot control or shut down any more than they can regulate the integers. If f is a computation, X and Y are data with f(X) = Y, the program f has serialization F, and H is a hash function from data to a hash, then any two computers computing f(X) will arrive at the same Y. The computation can therefore be described by a tuple of three hashes, (H(F), H(X), H(Y)).
If someone gives you the hash H(Y), you can look in your local store for that data, and if you do not have it, you can go to the network to find someone who does (a distributed key-value store, or distributed hash table). If you want to verify the computation, you fetch F and X from the hashes H(F) and H(X), compute f(X), and verify that H(f(X)) = H(Y). This is a functional or algebraic representation of procedural computation.
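A toy illustration in Go, with sha256 standing in for H and a trivial computation standing in for f: the computation is fully named by three hashes, and anyone holding F and X can re-derive Y and check it.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// H hashes a byte serialization.
func H(b []byte) [32]byte { return sha256.Sum256(b) }

// f is a toy computation; F below is its "serialization"
// (in a real system, F would be bytecode or source).
func f(x []byte) []byte {
	out := make([]byte, len(x))
	for i, c := range x {
		out[i] = c + 1
	}
	return out
}

func main() {
	F := []byte("f: add one to every byte") // stand-in serialization of f
	X := []byte("some input data")
	Y := f(X)

	// The computation is named by a tuple of three hashes.
	triple := [3][32]byte{H(F), H(X), H(Y)}
	fmt.Printf("(H(F), H(X), H(Y)) = (%x, %x, %x)\n",
		triple[0][:4], triple[1][:4], triple[2][:4])

	// Verification: recompute f(X) and compare hashes.
	fmt.Println("verified:", H(f(X)) == H(Y))
}
```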
There is a set of all data, which contains the Xs and Ys and the serializations of the computations applied to them, and a network of N computers, each of which holds a subset of the Xs and performs a subset of the computations. There is a "global" computer, which is the union of all the data and serialized computations (functions that can be applied to the data) across all the computers in the network, and there is a projection from the global computer down to localized computation (a single CPU/memory/communication unit) which holds only a subset of the state.
Urbit and Nock were the first systems to head in that direction, but so was the Pirate Bay after it was seized: the data delocalized from physical space. It is now moving to BitTorrent Sync and eventually IPFS.
When you make a request to the Pirate Bay, there is a database D which is a collection or set of objects, D = (X1, X2, ...). The URL is a function f (a deterministic, procedural computation in a subset of C) that, when applied to D with some parameter, returns a webpage. You may even augment the data structure storing D with extra structure, such as an index, to allow efficient lookup of entries or computation of the returned page. If the user has a copy of D, then they can evaluate f locally instead of against a remote server, and the result is the same.
The database D starts as the null set of data objects, and then a succession of transactions, creating and modifying data objects, is applied against it. This is essentially the "blockchain". A snapshot of the database can be stored by serializing D to a byte string and hashing it. So you can either start with the null set and apply each transaction against the state since genesis, or start at a snapshot and apply the transactions after the snapshot. To propagate transactions or snapshots peer-to-peer across a network where each node is pseudo-anonymous, you need a criterion for accepting a transaction, to prevent malicious nodes from spamming junk or conflicting transactions. This is something like "the hash of the transaction is signed by the owner of the database". If there is a total ordering of transactions (maps from one state to a new state), this is equivalent to the blockchain.
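Here is that picture in miniature (toy types, not the real Skycoin structures): the database is a fold of transactions over the null state, and a snapshot is just the hash of the deterministically serialized state.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
	"strings"
)

// DB is the set of data objects; here just keyed strings.
type DB map[string]string

// Tx maps the database from one state to the next.
type Tx func(DB) DB

// Snapshot serializes the state deterministically and hashes it.
func Snapshot(db DB) [32]byte {
	keys := make([]string, 0, len(db))
	for k := range db {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic ordering before hashing
	var sb strings.Builder
	for _, k := range keys {
		fmt.Fprintf(&sb, "%s=%s;", k, db[k])
	}
	return sha256.Sum256([]byte(sb.String()))
}

// set returns a transaction that writes one object.
func set(k, v string) Tx {
	return func(db DB) DB {
		out := DB{}
		for key, val := range db {
			out[key] = val
		}
		out[k] = v
		return out
	}
}

func main() {
	// Start from the null set and apply a total ordering of
	// transactions: this replay is the "blockchain".
	log := []Tx{set("a", "1"), set("b", "2"), set("a", "3")}
	db := DB{}
	for _, tx := range log {
		db = tx(db)
	}
	snap := Snapshot(db)
	fmt.Printf("snapshot: %x\n", snap[:8])
	// Any node replaying the same log from genesis (or from a
	// snapshot) arrives at the same state and the same hash.
}
```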
Modifying the database is the application of a function f that maps the database from its existing state into its new state (also called a transaction). In this abstract functional space, where data is identified by the hash of its serialization and functions consume multiple data objects in the database and output multiple new data objects, the set of data objects and the set of transactions on them form a directed bipartite graph: an arrow from a data object to a transaction if the transaction consumed that object, and an arrow from a transaction to a data object if the transaction created that object.
In other words, Bitcoin and the Pirate Bay are basically the same mathematical object. Each is just a distributed NoSQL database in a functional space, where transactions are functions on the collection of data objects. "Consensus" is the problem of resolving writes by two separate transactions that "spend", consume, or input the same data object, such that if one node admits one transaction over the other, all other nodes will admit the same transaction.
There are two types of maps/functions (illustrated in the sketch after this list):
- maps that are actions on the state of the program itself, which when applied to the program/node/computer map it from an existing state to a new state. These are procedural computations: database writes, lines/statements within a function in a C program, functions called within a C program, database transactions, Bitcoin transactions, POST URLs. This is the action of a computation.
- maps that are functional and return one or more values from the state, but do not modify it (do not map the state to a new state). These are pure functions: functions that return JSON from a database (a URL handler), compilers (functions that map source code to an executable), checking the balance of an address, GET URLs. These functions describe the application of a procedural program to serialized data or state, and the returned result.
- there are also composite functions, such as a function that computes a value from existing values and assigns it to a new variable. The calculation of the value is functional, but the assignment operator is an action on the program state.
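A minimal sketch of the distinction, using a toy key/value state:

```go
package main

import "fmt"

// State is the node's state: a trivial key/value store.
type State map[string]int

// Deposit is an action: it maps the state to a new state
// (like a database write or a POST URL).
func Deposit(s State, addr string, amount int) State {
	out := State{}
	for k, v := range s {
		out[k] = v
	}
	out[addr] += amount
	return out
}

// Balance is pure: it returns a value from the state without
// modifying it (like a GET URL or a balance check).
func Balance(s State, addr string) int { return s[addr] }

func main() {
	s := State{}
	s = Deposit(s, "alice", 10) // action: state -> new state
	s = Deposit(s, "bob", 5)

	// Pure query: same state in, same answer out, no mutation.
	fmt.Println("alice:", Balance(s, "alice")) // 10

	// Composite: computing the value is functional, but the
	// assignment back into the state is an action.
	total := Balance(s, "alice") + Balance(s, "bob")
	s = Deposit(s, "treasury", total)
	fmt.Println("treasury:", Balance(s, "treasury")) // 15
}
```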
A node/computer has a state S, and a computation f is just a map f: S -> S. The application of a function that could be called on the node is just a map from the state to a new state. If two functions commute, f*g = g*f, i.e. f(g(x)) = g(f(x)) for all x in S, then this corresponds to the notion of parallelism or concurrency. The execution of a program can be considered a series of actions applied to the state: f1, f2, f3, f4, and so on.
- if two lines of a C program are treated as functions on the state of the computer, mapping it to a new state, and the functions f, g commute, then which operation is carried out first does not affect the program state. It suggests they can be carried out in parallel (computed on the same clock cycle) if the program is suitably restricted.
- two goroutines may each wake when data comes in. Call the actions of the respective goroutines f1 and f2. If f1*f2 = f2*f1, i.e. the actions of the goroutines on the program state can be shown to commute, then the program state does not depend on the order in which the two goroutines execute.
- at the source level, if there are two C source files with macros, then commutation of the actions means that the evaluation of the macros does not depend on the order of inclusion of the source files.
If an execution of a program is a series of actions on the state, f1, f2, f3, f4, then there is a partial ordering defined by whether swapping two actions changes the composite map. This is a dependency graph for data flows, and its structure is the ultimate limitation on automatic parallelism. It is instructive to draw the diagram for a normal program, then for a program with goroutines, then for a GPU with a two-stage vertex/texture pipeline that takes in a texture and a list of vertices and returns a 2D pixel array. For the GPU shader, you can draw the dependency graph at different levels of granularity, with projections between the levels. Programs appear to have a strict hierarchy of granularity enforced by the memory model, and actions at each level correspond to a different granularity of computation.
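A concrete commutation check (toy state, illustration only): two actions commute when applying them in either order yields the same state. Writes to disjoint keys commute and could run in parallel; writes to the same key generally do not.

```go
package main

import "fmt"

type State map[string]int

// Action maps the state to a new state.
type Action func(State) State

// write returns an action that sets one key.
func write(key string, val int) Action {
	return func(s State) State {
		out := State{}
		for k, v := range s {
			out[k] = v
		}
		out[key] = val
		return out
	}
}

func equal(a, b State) bool {
	if len(a) != len(b) {
		return false
	}
	for k, v := range a {
		if b[k] != v {
			return false
		}
	}
	return true
}

// commute reports whether f*g = g*f on the given state
// (a full proof would have to check all states).
func commute(f, g Action, s State) bool {
	return equal(f(g(s)), g(f(s)))
}

func main() {
	s := State{}
	f := write("x", 1)
	g := write("y", 2)
	h := write("x", 3)

	// Disjoint keys: order does not matter, so the two actions
	// could run in parallel (e.g. as two goroutines).
	fmt.Println(commute(f, g, s)) // true

	// Same key: order matters, so there is a real dependency.
	fmt.Println(commute(f, h, s)) // false
}
```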
The summary is that I think there is a language with the same syntax as C (a hyper-minimalist version of C) which formalizes what Bitcoin/Skycoin are in a categorical aesthetic. A function has a representation as machine code (an executable form, or action), but also a textual form (a serialization as bytes) and an algebraic form (a declarative object in a functional space that can be reasoned about). A Skycoin transaction is both JSON and a serialized struct, with an isomorphism for converting between the machine- and human-readable forms.
If f maps Skycoin transactions as structs to Skycoin transactions as JSON, and g maps from JSON back to structs, then the compositions f*g and g*f are both the identity: g(f(x)) = x for all valid x.
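In Go, that round trip is just a marshal/unmarshal pair. The struct below is a toy stand-in for a transaction, not the real Skycoin type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Tx is a toy transaction with both a machine form (the struct,
// which also has a byte serialization) and a human-readable
// form (the JSON).
type Tx struct {
	From   string `json:"from"`
	To     string `json:"to"`
	Amount uint64 `json:"amount"`
}

func main() {
	x := Tx{From: "alice", To: "bob", Amount: 100}

	// f: struct -> json
	j, err := json.Marshal(x)
	if err != nil {
		panic(err)
	}

	// g: json -> struct
	var y Tx
	if err := json.Unmarshal(j, &y); err != nil {
		panic(err)
	}

	// g(f(x)) = x for all valid x: the two forms are isomorphic.
	fmt.Println(string(j))
	fmt.Println("round trip identity:", x == y) // true
}
```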
Also, although transactions are structs with a representation as bytes (a reification for transport over the wire), they are actually functions. A transaction is a function applied against the unspent output set that consumes outputs and creates new outputs. A block is just an array of transactions (which must commute with each other). The fact that the transactions in a block commute means that no two of them can spend the same outputs. If the output set is ordered, it also means that a transaction cannot spend an output created in the current block.
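Sketching that as code (a toy unspent-output set, not the real Skycoin implementation): a transaction is a function on the output set, and two transactions that try to consume the same output conflict, so they cannot both appear in a block.

```go
package main

import "fmt"

// Outputs is the unspent output set: output id -> amount.
type Outputs map[string]uint64

// Tx consumes inputs and creates new outputs; this is the
// function the transaction applies to the unspent output set.
type Tx struct {
	Inputs  []string
	Creates Outputs
}

// Apply maps the output set to a new output set, failing if any
// input has already been spent.
func Apply(u Outputs, tx Tx) (Outputs, error) {
	out := Outputs{}
	for k, v := range u {
		out[k] = v
	}
	for _, in := range tx.Inputs {
		if _, ok := out[in]; !ok {
			return nil, fmt.Errorf("output %s already spent or missing", in)
		}
		delete(out, in)
	}
	for id, amt := range tx.Creates {
		out[id] = amt
	}
	return out, nil
}

func main() {
	u := Outputs{"o1": 10, "o2": 5}
	t1 := Tx{Inputs: []string{"o1"}, Creates: Outputs{"o3": 10}}
	t2 := Tx{Inputs: []string{"o1"}, Creates: Outputs{"o4": 10}}

	u2, _ := Apply(u, t1)
	// t2 spends the same output as t1, so it cannot be applied
	// after t1: conflicting transactions do not commute, and a
	// block may include at most one of them.
	if _, err := Apply(u2, t2); err != nil {
		fmt.Println("conflict:", err)
	}
}
```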
So when I look at
- Golang (reflection, reification, CSP, multiple return)
- Docker (determinism, minification and reification/serialization of execution environment)
- IPFS (spatial delocalization of data)
- Nock/Urbit
- Skycoin
- LLVM
- NixOS (linux with a declarative/functional package manager, deterministic package builds)
- Tox (public key hashes as communication endpoints)
- Cloud computing / CoreOS (creation and deletion operators for computation devices in a set (spawn instance, shutdown instance), standardized computation/storage objects as communication endpoints)
- etc...
I am seeing a definite move of computing towards a categorical aesthetic. In each of these projects, existing things are being torn apart and reconstructed at a new level of modularity, through simplification.
Many people are distracted and fixated on the wrong things.
For instance, people were fixated on a single aspect of cloud computing, such as "charging by the hour" for computing resources, when there were multiple aspects that were more important.
Aspect 1: Creation/Deletion operators in cloud computing (sketched after this list)
- there is a set of computer/memory/communication objects (call them nodes)
- there is an operator that creates a new computer/memory/communication object and adds it to the set
- there is an operator that destroys an existing computer/memory/communication object and removes it from the set
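As a sketch (hypothetical names, illustration only), the whole aspect reduces to two operators on a set of nodes:

```go
package main

import "fmt"

// Node is a standardized compute/memory/communication object.
type Node struct{ ID int }

// Cloud is just a set of nodes with creation and deletion
// operators: Aspect 1 in miniature.
type Cloud struct {
	next  int
	nodes map[int]Node
}

func NewCloud() *Cloud { return &Cloud{nodes: map[int]Node{}} }

// Create adds a new node to the set (spawn instance).
func (c *Cloud) Create() Node {
	c.next++
	n := Node{ID: c.next}
	c.nodes[n.ID] = n
	return n
}

// Destroy removes an existing node from the set (shutdown instance).
func (c *Cloud) Destroy(id int) { delete(c.nodes, id) }

func main() {
	c := NewCloud()
	a := c.Create()
	b := c.Create()
	c.Destroy(a.ID)
	fmt.Println("remaining:", len(c.nodes), "including", b.ID)
}
```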
Aspect 2: Agoric Computing
- renting out computation/storage by hour/minute. Fungibility of resources
Aspect 3: Separation of Storage and Computing
- where data is stored for processing and where it is written out (S3, NAS) is independent of the computing nodes
- spatial delocalization of data
For Bitcoin, the three aspects were
Aspect 1: Spatial Delocalization of Data
- each node has a copy of the blockchain and ledger.
- The blockchain exists as a monad in an abstract mathematical space, independent of any of the spatially existing nodes in physical reality.
Aspect 2: Determinism of Computation
- each node can derive the same balances and state, given the same data (the blockchain) as input
Aspect 3: Consensus
- each node runs a process through which every node arrives at the same blockchain (the consensus chain) via communication
Bitcoin accomplishes #2 as best it can within the language limitations of C/C++. Bitcoin accomplishes #3 through a resource-intensive process that is still spatially localized. So Skycoin is a movement of Bitcoin closer towards the category-theory aesthetic.
Bitcoin/Skycoin is in some ways a movement towards an algebraic or categorical aesthetic for money, just as Nock, Urbit, and others are a movement towards a categorical aesthetic for computation.