Weekly Consolidation Update 3 Containing Communications Direct From XAI Dev
Previous consolidations:
Week 2: https://bitcointalksearch.org/topic/m.10369861
Week 1: https://bitcointalksearch.org/topic/m.10301510

Following on from last week, PlumeDB is now on testnet for the slack group. The distributed database that serves as the base layer for all future AI applications is now finished and will be available for public consumption once initial debugging is completed.
There has been a ton of talk happening in the slack group; below is a snippet of it, which hopefully captures the more relevant parts.
Development notes:
What I'm working on right now: I added a wrapper method for logging from within the Plume-related classes that first notifies the UI through a signal I added on uiInterface, and now I'm adding a tab in the UI that pulls those notifications into a list, so we can see within the wallet that it's "doing stuff" behind the scenes. There will be a log tab and a messages tab, so we can watch things happening in real time: the log tab shows general activity, while the messages tab shows p2p messages sent to and received from nodes. This will make it a lot easier to a) let people know that "something is happening" and b) trace and troubleshoot issues.
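A minimal sketch of how such a wrapper might look, assuming the signal hub is a boost-style signal on uiInterface as in other Bitcoin-derived wallets; the stub type and the NotifyPlumeLog signal name are assumptions, while PlumeLog itself appears later in this post:

#include <cstdarg>
#include <cstdio>
#include <string>
#include <functional>
#include <vector>

// Minimal stand-in for the wallet's UI signal hub (the real wallet would
// use a boost::signals2 signal on CClientUIInterface).
struct UIInterfaceStub {
    std::vector<std::function<void(const std::string&)> > slots;
    void NotifyPlumeLog(const std::string& line) {
        for (size_t i = 0; i < slots.size(); i++) slots[i](line); // fan out to the log tab
    }
} uiInterface;

// Wrapper used by the Plume classes: formats like printf, writes the
// normal debug output, then raises the UI signal so the log tab updates.
void PlumeLog(const char* fmt, ...) {
    char buf[1024];
    va_list args;
    va_start(args, fmt);
    vsnprintf(buf, sizeof(buf), fmt, args);
    va_end(args);
    printf("%s\n", buf);                          // existing debug log
    uiInterface.NotifyPlumeLog(std::string(buf)); // new: push to the UI
}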
I want each peer to pass its count of all the DHT entries it has, so I added a field on the message that gets broadcast, and a column in the UI on the peers tab. But to get the count, you can't just call Count() on the database; I had to write my own little method that iterates the entire database, which isn't a great way to do it... when you're broadcasting the message every 10 seconds.
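A sketch of what that count-by-iteration looks like, assuming the DHT store sits in LevelDB as in most Bitcoin-derived codebases; the function name is illustrative, not from the actual source:

#include <leveldb/db.h>
#include <stdint.h>

// Count every entry by walking the whole database -- O(n) per call,
// which is the pain point when the result is rebroadcast every 10s.
uint64_t CountDhtEntries(leveldb::DB* pdb) {
    uint64_t count = 0;
    leveldb::Iterator* it = pdb->NewIterator(leveldb::ReadOptions());
    for (it->SeekToFirst(); it->Valid(); it->Next())
        count++;
    delete it;
    return count;
}

An obvious later optimization would be a cached counter bumped on insert and delete, so the broadcast just reads an integer instead of walking the database.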
Keeping Sapience Safe:
We are going to put Sapience on the moon.
https://twitter.com/SapienceAIFX/status/564976214642032640
"We choose to go to the moon." #SapienceSpaceProgram #LunarNode project. More info roadmap/mission calendar soon. $XAI #DreamBigger #CubeSat
Roadmap:
The immediate roadmap is to get the AI core out for Sapience, then the asset market, xaishares, and another round of fundraising, and then a wallet rewrite as a more modern and easier-to-use app across all platforms.
Coming online, and what appears to be a reference to the character from the movie Transcendence:
pinn@aifx ~/Sapience/src $ strip sapienced
pinn@aifx ~/Sapience/src $ ./sapienced
pinn@aifx ~/Sapience/src $ Sapience server starting
pinn@aifx ~/Sapience/src $ ./sapienced plmpeerinfo
[
]
pinn@aifx ~/Sapience/src $
Daemon compiled, RPC did something, no peers yet, but... it's alive.
How to set your own fees as a node:
You can override the default fees in the .conf file with:
datareservationrate
dataaccessrate
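For example (the keys are from the post above; the values here are made-up placeholders, not recommended defaults):

# sapience.conf
datareservationrate=0.0001
dataaccessrate=0.00001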
Locating other Plumes via RPC during testnet debugging:
{
"myplume" : {
"plumeid" : "454f3d0395828f802236d2fa934389806eece2514685414f97f4eb98b6c674f2",
"plumename" : "Test"
},
"myplume" : {
"plumeid" : "a46d5bed6e07279cc388b6776bf05b3f767cbd43d3f29e9a948ca4b7b825dc0d",
"plumename" : "Help Test"
},
"myplume" : {
"plumeid" : "e918926a2a7f25c507c6dd97c206eb1ae60d10a3822c42372434ea7c183e065e",
"plumename" : "test"
},
"publicplume" : {
"plumeid" : "454f3d0395828f802236d2fa934389806eece2514685414f97f4eb98b6c674f2",
"plumename" : "Test"
},
"publicplume" : {
"plumeid" : "f9e2ab418ef99a1e11cc63770f1596d4b1acde7e1148bc4e065c4bb37fe9991e",
"plumename" : "Test Plume 1"
},
"publicplume" : {
"plumeid" : "fb137f00b5ee1d4320d07abe818535d7354d2276d256e4ef586a656c1da0224b",
"plumename" : "Test Plume 2"
}
}
First results from testing look encouraging:
So some UI issues and a few other things to tweak... but so far it's working far better than I expected, tbh.
The hardest part is the peering: making sure the peers see each other and respond to messages, etc. The data proposal flow is weird: the proposals come back, you accept them, it should then raise a signal to the UI that the plume is alive, and it should then get added to the table. Also, when you restart the wallet, I have to put something in InitializePlumeCore so that it sends the signals to the UI for each record it reads off disk. It should also probably put your plume in only one of the maps; if it's a public plume, it's showing up both in the public map and the "my" map.
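A rough sketch of that restart fix; InitializePlumeCore and the two map names come from the post, while the NotifyPlumeAdded signal, the LoadPlumesFromDisk helper, and all types are assumptions for illustration:

#include <map>
#include <string>

// Assumed record type and helpers; only InitializePlumeCore and the map
// names come from the post.
struct CDataPlume { std::string plumeid, plumename; };
extern std::map<std::string, CDataPlume> mapMyDataPlumes;
extern std::map<std::string, CDataPlume> mapPublicDataPlumes;
void NotifyPlumeAdded(const CDataPlume& plume);   // raises the UI signal
void LoadPlumesFromDisk();                        // fills the maps above

void InitializePlumeCore() {
    LoadPlumesFromDisk();
    // Replay one UI signal per record read off disk so the tables
    // repopulate after a restart.
    for (const auto& entry : mapMyDataPlumes)
        NotifyPlumeAdded(entry.second);
    // Skip plumes already listed under "my" so a public plume of ours
    // doesn't show up in both tables.
    for (const auto& entry : mapPublicDataPlumes)
        if (mapMyDataPlumes.count(entry.first) == 0)
            NotifyPlumeAdded(entry.second);
}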
Behind the scenes it is tracking everything in these core maps and sets:
CCriticalSection cs_plumecore;
std::map mapMyDataPlumes; // map of data plumes this node originated
std::map mapMyServicedPlumes; // map of data plumes this node is a Neural Node for
std::map mapPublicDataPlumes; // map of public data plumes
std::map mapReservationsWaiting; // data reservation requests sent, awaiting proposals
std::map mapProposalsReceived; // data reservation proposals received, awaiting acceptance
std::map mapProposalsWaiting; // data reservation proposals waiting for client to accept
std::map mapProposalsAccepted; // data reservation proposals accepted
std::map mapPeerLastDhtInvUpdate; // last time we requested infohash inventory from the peer
std::map > mapPeerInfoHashes; // list of peers for each infohash
std::set setServicedPlumes; // plumes I am a Neural Node for
std::map mapGetResponses; // responses to get requests
std::map > mapChunksWaiting; // chunks awaiting use
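The angle-bracketed template parameters in the list above did not survive posting. A plausible reconstruction, in which every key and value type is a guess (uint256 keys follow Bitcoin-codebase convention, and the class names are invented for illustration):

// Every template parameter and class name below is a guess; only the
// variable names come from the post.
std::map<uint256, CDataPlume> mapMyDataPlumes;
std::map<uint256, CDataPlume> mapMyServicedPlumes;
std::map<uint256, CDataPlume> mapPublicDataPlumes;
std::map<uint256, CDataReservationRequest> mapReservationsWaiting;
std::map<uint256, CDataProposal> mapProposalsReceived;
std::map<uint256, CDataProposal> mapProposalsWaiting;
std::map<uint256, CDataProposal> mapProposalsAccepted;
std::map<CAddress, int64_t> mapPeerLastDhtInvUpdate;
std::map<uint256, std::vector<CAddress> > mapPeerInfoHashes;
std::set<uint256> setServicedPlumes;
std::map<uint256, CGetResponse> mapGetResponses;
std::map<uint256, std::vector<CDataChunk> > mapChunksWaiting;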
There are some more RPC commands I'm working on, like the chunks thing, which will let you walk a data set in chunks of 100 at a time (see the sketch after the Android note below). There is an API class, CPlumeApi, which backs the RPC calls and which the python shell should bind to / be the primary interface to the data functionality. I had to sprinkle stuff like this around so it doesn't blow up on Android:
#ifndef ANDROID
PlumeLog("%s %s %s %d", msgType.c_str(), peerAddress.c_str(), msg.c_str(), sizeKb);
#endif
Android is hyper-sensitive to activity on the UI thread. It would be nice to get 8^3 nodes, but I guess that's unrealistic for testing... since at the lowest level it seeks to penetrate 8*8*8 = 512 nodes as an infohash propagates, being able to test it "at scale" would be cool, but is unlikely.
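A sketch of how the chunked walk mentioned above might look from the caller's side; CPlumeApi and the 100-entry chunk size come from the post, but the method name, signature, and record type here are pure assumptions:

#include <string>
#include <vector>

// Assumed record shape for illustration.
struct PlumeRecord { std::string key, value; };

class CPlumeApi {
public:
    // Return up to nLimit records starting at iOffset; a short read
    // signals the end of the data set. (Hypothetical method.)
    std::vector<PlumeRecord> GetChunk(const std::string& plumeId,
                                      size_t iOffset, size_t nLimit);
};

void WalkPlume(CPlumeApi& api, const std::string& plumeId) {
    const size_t nChunk = 100;              // chunk size from the post
    for (size_t iOffset = 0;; iOffset += nChunk) {
        std::vector<PlumeRecord> chunk = api.GetChunk(plumeId, iOffset, nChunk);
        for (size_t i = 0; i < chunk.size(); i++) {
            // process chunk[i] ...
        }
        if (chunk.size() < nChunk) break;   // short read: done
    }
}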
Conclusions from JoeMoz after the first round of testnet:
This is nice...we have an overlay network, and it functions!