It is currently all in a single file of 666 lines!
spooky
It does the onion routing and tokenizing, and hopefully routes properly without leaking any private info or exploding the packet traffic
James
packets.h and udp.h are the key
As suggested by jl777, I am posting the workings of onion routing and its potential pitfalls. Onion routing is a method for sending data from one place to another by bouncing it across multiple servers under multiple layers of encryption.
Process:
1. the data to send is encrypted on multiple levels by the client
2. the client sends the encrypted packet to the first of many PrivacyServers
3. the privacy server does checks on the packet
4. the server then decrypts the packet one level
5. the server checks if the packet is now completely decrypted and meant for this location
6. if it still has layers of encryption to go, it locates a new privacy server to send to and repeats the process
7. once the packet reaches the end user, the data is available, having traveled seemingly untraceably and anonymously
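To make the per-hop control flow concrete, here is a minimal, self-contained sketch of the steps above. It assumes nothing from the real packets.h/udp.h; the XOR "layers" and hop keys are toy stand-ins for real public-key encryption, just to show the wrap, peel, check, and forward logic:
```c
/* toy sketch only: XOR layers stand in for real per-hop encryption */
#include <stdio.h>
#include <stdint.h>

#define NUMHOPS 3
#define PAYLOADLEN 16

/* stand-in for one layer of encryption/decryption (XOR is symmetric) */
void xor_layer(uint8_t *buf,int len,uint8_t hopkey)
{
    int i;
    for (i=0; i<len; i++)
        buf[i] ^= hopkey;
}

int main()
{
    uint8_t packet[PAYLOADLEN] = "hello supernet!";
    uint8_t hopkeys[NUMHOPS] = { 0x11, 0x22, 0x33 };
    int hop;
    /* step 1: the client wraps one layer per hop, innermost layer for the last hop */
    for (hop=NUMHOPS-1; hop>=0; hop--)
        xor_layer(packet,PAYLOADLEN,hopkeys[hop]);
    /* steps 3-6: each privacy server peels exactly one layer, then forwards */
    for (hop=0; hop<NUMHOPS; hop++)
    {
        xor_layer(packet,PAYLOADLEN,hopkeys[hop]);  /* step 4: decrypt one level */
        if ( hop < NUMHOPS-1 )
            printf("hop %d: still encrypted, forward to next privacy server\n",hop);
    }
    printf("final hop: payload is (%s)\n",(char *)packet);  /* step 7: data available */
    return(0);
}
```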
Faults to work out:
- the packets could be monitored for size and followed by tracing the unique size of the packet through the network
A) ~ mitigated by either padding every packet to a fixed maximum size or adding a random salt to the packet at each level (see the padding sketch after this list)
- the packets could be followed by the timing of each hop
B) ~ mitigated by adding random wait times to the system (a delay sketch appears after the reply below)
- a code fault could somehow allow access to previous senders, or to the public keys of other levels or privacy servers
C) ~ mitigated by further code analysis for errors
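As a sketch of mitigation A, the following pads every packet to a fixed maximum size so its length carries no information on the wire. MAX_PACKET_SIZE and the use of rand() are illustrative assumptions, not values from the actual code; a real build would pick a size that fits the UDP MTU and use a proper CSPRNG:
```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define MAX_PACKET_SIZE 1400  /* assumed fixed on-wire size, fits a typical UDP MTU */

/* the true payload length travels inside the encrypted layers, not on the wire */
int pad_packet(uint8_t buf[MAX_PACKET_SIZE],const uint8_t *payload,int len)
{
    int i;
    if ( len > MAX_PACKET_SIZE )
        return(-1);
    memcpy(buf,payload,len);
    for (i=len; i<MAX_PACKET_SIZE; i++)
        buf[i] = (uint8_t)rand();  /* placeholder: use a CSPRNG in practice */
    return(MAX_PACKET_SIZE);       /* every packet leaves at the same size */
}

int main()
{
    uint8_t wire[MAX_PACKET_SIZE];
    printf("on-wire size: %d\n",pad_packet(wire,(const uint8_t *)"small msg",9));
    return(0);
}
```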
If anyone wants to help find flaws to iron out, your help is appreciated; otherwise, this is just a rundown of what SuperNET can do
I think B) assumes the attacker is able to monitor all packet traffic, but an attacker would only be able to decrypt packets that it is routing (or receiving). Unless the traffic level is so low that individual packets can be traced globally, it will be hard to correlate the packets even for a single transmission. That being said, a random delay is a good idea, but probably not over such a large range.
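Along those lines, a bounded random delay might look like the sketch below. The millisecond bounds and the usleep() call are illustrative choices, not values from the actual code:
```c
#include <stdlib.h>
#include <unistd.h>

#define MIN_DELAY_MS 10
#define MAX_DELAY_MS 250  /* keep the range modest so routing stays responsive */

/* small random wait before relaying, to decorrelate hop timing */
void random_forward_delay(void)
{
    int ms = MIN_DELAY_MS + rand() % (MAX_DELAY_MS - MIN_DELAY_MS + 1);
    usleep((useconds_t)ms * 1000);
}

int main()
{
    random_forward_delay();  /* e.g. called once per packet before forwarding */
    return(0);
}
```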
It is important to note that there are two types of nodes: the public servers, which publish their IP addresses and pubkeys, and the private nodes, which only communicate with the public servers. The part I need the most help with is seeing whether the probabilistic routing will truly shield a private node's IP address from the public servers, and of course whether it will be able to route successfully most of the time.
My feeling is that, based on network topology, different fanout levels need to be used, in addition to some integration of historical probabilities. When the network is relatively small, we can err on the side of larger values, but at larger scale it will be important to be as efficient as possible. One approach I am thinking of is to just use an adaptively adjusted fanout factor, e.g. shrink it if it worked, expand it if it didn't. This would end up jittering around the optimum level, and once there is enough history, a bit of headroom can be added above the critical value.
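Here is a toy model of that adaptive scheme as a starting point: shrink the fanout when a route succeeds, expand it when one fails, keeping a bit of headroom above the critical value. The bounds, headroom, and simulated 90% success rate are assumptions for illustration, not SuperNET parameters:
```c
#include <stdio.h>
#include <stdlib.h>

#define MIN_FANOUT 1
#define MAX_FANOUT 16
#define HEADROOM   1   /* extra margin kept above the critical value */

int adjust_fanout(int fanout,int routed_ok)
{
    if ( routed_ok != 0 )
    {
        if ( fanout > MIN_FANOUT + HEADROOM )
            fanout--;   /* it worked: try to be more efficient */
    }
    else if ( fanout < MAX_FANOUT )
        fanout++;       /* it failed: cast a wider net */
    return(fanout);
}

int main()
{
    int i,fanout = 8;
    for (i=0; i<100; i++)
        fanout = adjust_fanout(fanout,(rand() % 100) < 90);  /* simulated 90% success */
    printf("fanout settled near %d\n",fanout);  /* jitters around the optimum */
    return(0);
}
```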
It would be nice if somebody could model this more rigorously
James