Get to know Joe Mozelesky, lead developer of Sapience:
The following is a selection of information that Joe shared throughout January 2015 related to Sapience development.
Potential End User Case Scenarios: Pattern analysis and fraud detection algorithms... that is the kind of thing you could run on XAI.
The other real-world application I can think of for this stuff... I used to do a lot of work on power grids. I could see real-world use cases for both capacity-need prediction and preventative-maintenance predictive analytics for generation owners. I did some work for a company that operates huge wind farms... there's another application: predicting supply based on weather forecasts combined with turbine efficiency, etc.
I could see setups where you have applications built on StreamInsight or Oracle's CEP engine with an adapter plugged in that simultaneously sends the data over to the XAI network, feeding it real-time streamed data to run continuous analytics on: financial trading engines, etc.
Then there's more mundane stuff... like building a feed from your accounting system to send inventory and sales data into it, and using AI to make re-order recommendations for purchasing. Depending on your business you might combine that with weather data, data from futures/commodities markets, new housing starts, etc.
Supply chain stuff, like having an AI that analyzes news feeds and can predict shortages of particular raw materials, then sends you alerts saying: hey, in two months supplier A potentially isn't going to be able to make quota because of a shortage of material XYZ, so start bringing another factory online or substitute a different raw material.
There's all kinds of applications where traditionally a mid-market company might be using desktop BI (business intelligence) software and basic rules engines to do basic analytics, simply because they don't have access to more powerful tech.
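The re-order recommendation idea above boils down to the classic reorder-point calculation. A minimal sketch, assuming a simple demand-over-lead-time model (all function names and numbers here are illustrative, not part of Sapience):

```python
def reorder_point(avg_daily_sales, lead_time_days, safety_stock):
    """Classic reorder-point formula: expected demand over the supplier
    lead time, plus a safety-stock buffer."""
    return avg_daily_sales * lead_time_days + safety_stock

def recommend_reorder(on_hand, avg_daily_sales, lead_time_days, safety_stock):
    """Recommend re-ordering when stock falls to or below the reorder point."""
    return on_hand <= reorder_point(avg_daily_sales, lead_time_days, safety_stock)

# Example: 12 units/day, 5-day lead time, 20 units of safety stock
# gives a reorder point of 80 units, so 75 on hand triggers a recommendation.
should_order = recommend_reorder(on_hand=75, avg_daily_sales=12,
                                 lead_time_days=5, safety_stock=20)
```

An AI-driven version would replace the static `avg_daily_sales` input with a forecast fed by the external signals mentioned above (weather, commodities, housing starts).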
Accessibility Ideas: What would be really cool is if somebody manages to miniaturize a pico projector to fit in an Android Wear smartwatch... so you could say something like "Sapience, visualize" and it projects an AI network activity map onto the wall, etc. I have to look into how hard it is to do Android notifications from Qt... in addition to being able to say AI in your pocket, it would be pretty cool to say AI on your wrist. Those wearables consume notifications over Bluetooth.
Related to Android Qt development: The only thing I can think to try is rebuilding it with an older toolchain... I built it with 4.9, but maybe I'll try 4.8. It seems to be a memory access violation when it's trying to invoke a method, which makes me think it's the linker issue; apparently there is a bug where the linker does not order the loads correctly, although when watching LogCat it did load gnustl first... so the call table must be wrong/off. It's trying to invoke the uint256 constructor and bombing out with the access violation, so it must have the wrong address for it. To build the APK you have to build the dependencies using the same Android toolchain... that requires jumping through various hoops. I built most of them manually, but to try again I'd probably build them all using the ndk-build system in the Android NDK, e.g. boost, OpenSSL, miniupnpc, BDB.
I had mixed and matched stlport and gnustl in leveldb. I rebuilt the dependencies using the Android NDK standalone toolchain and made sure all of them referenced gnustl, using the standalone toolchain deployed via the shell script in the NDK (as opposed to trying to build with all the paths set into the NDK itself). I also used the latest leveldb directly from GitHub. Leveldb was pulled out of the .pro file and compiled as a separate dependency.
https://github.com/CedricProfit/SapienceAndroid
Talking Android Wallet v1.0 Ready: https://play.google.com/store/apps/details?id=com.blockchainsingularity.apps.sapience_qt
Quotient PoS: I added some auto-optimization of block sizes, so there is a new setting on the profit explorer tab where you can input your preferred "block" size, and when staking it will split your blocks down if they are bigger. It won't directly divide them up into chunks of your block size; I originally coded that and it breaks all of the PoS checks and balances. The PoS code all assumes a maximum of 2 outputs, so it will split in 2, and combine in a way that seeks your preferred size. It's nice anyway: you can just set it to 367 or whatever the magic number is, and forget about it / let it run.
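The split-in-two behaviour described above can be sketched roughly like this. This is a hedged illustration, not the actual wallet code: the recursive halving and the 2x-preferred cutoff are my own simplification of the "max 2 outputs per stake" rule (the real wallet splits one block per stake event, not all at once):

```python
def optimize_block(amount, preferred):
    """Repeatedly split a stake amount in two (PoS allows at most 2 outputs
    per split) until every piece is at most ~2x the preferred block size,
    so each resulting piece stays at or above the preferred size."""
    pending = [amount]
    done = []
    while pending:
        block = pending.pop()
        if block > 2 * preferred:
            pending.extend([block / 2, block / 2])  # split in two
        else:
            done.append(block)
    return done
```

So a 1000-coin block with the "magic number" preferred size of 367 would split once into two 500-coin pieces, each already within range.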
I've been working on some other code to allow you to break down your blocks all at once. Originally I was going to have this happen automatically on a thread, but I realized this would saturate the capacity of the network if 1,000 wallets are all generating 100 tps. From the Bitcoin debate with Gavin we know the theoretical maximum is 7 tps network-wide on Bitcoin... we have 1/10th the block time, so theoretically we can handle 70 tps network-wide.
So I need to think about the best way to do it: either throttle it so it only does so many per minute, have a button... or just leave it and let people do it manually in coin control. Leaving it alone might be fine, as the staking mechanism will slowly optimize it over time anyway. I've been running it this week on my wallet just to make sure everything is OK, and it's been working well for me.
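If the throttled option were chosen, the "only so many per minute" idea is a simple rate limiter. A minimal sketch under my own assumptions (class name and the injectable clock/sleep are invented here to make the logic testable; the real wallet would run this on its staking thread in C++):

```python
import time

class MinuteThrottle:
    """Allow at most max_per_minute operations, spacing them evenly.
    clock() returns the current time; sleep(s) blocks for s seconds.
    Both are injectable so the behaviour can be tested without waiting."""

    def __init__(self, max_per_minute, clock=time.monotonic, sleep=time.sleep):
        self.interval = 60.0 / max_per_minute  # seconds between operations
        self.clock = clock
        self.sleep = sleep
        self.next_ok = 0.0

    def acquire(self):
        """Block until the next operation is allowed to proceed."""
        now = self.clock()
        if now < self.next_ok:
            self.sleep(self.next_ok - now)
            now = self.next_ok
        self.next_ok = now + self.interval
```

With 1,000 wallets each capped at a handful of splits per minute, the aggregate load stays far below the ~70 tps ceiling mentioned above.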
Thinking Ahead, XAI: One of the things I have been thinking about solutions for is dealing with "bad" or unwanted data on the network. The idea is that on Sapience you can load private data just for running your own algorithm scripts on, but also mark some data sets as public and share them. The only issue with that is people who decide to "troll" the network and load crap onto it... I was thinking a simple solution at first is to provide an interface so that on your node you can just ignore/reject data you don't want to host parts of.
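The per-node ignore/reject interface could amount to a local blocklist consulted before a node agrees to host a piece of a data set. A hypothetical sketch (class and method names are mine, not from the Sapience codebase):

```python
class DataPolicy:
    """Per-node policy: a local blocklist of publishers and dataset ids
    this node refuses to host fragments of. Purely local; other nodes
    keep their own policies."""

    def __init__(self):
        self.blocked_publishers = set()
        self.blocked_datasets = set()

    def block_publisher(self, publisher_id):
        self.blocked_publishers.add(publisher_id)

    def block_dataset(self, dataset_id):
        self.blocked_datasets.add(dataset_id)

    def accepts(self, publisher_id, dataset_id):
        """Return True if this node is willing to host part of the dataset."""
        return (publisher_id not in self.blocked_publishers
                and dataset_id not in self.blocked_datasets)
```

Because each node decides independently, a "troll" data set simply finds fewer and fewer nodes willing to hold its fragments.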
As far as specific algos... what I am working towards is that the Sapience platform will work sort of like Pine scripts in TradingView, where there is a set of generalized services and specific algos or operations are scripted via a kind of markup... this way it is adaptable and not limited by what is hardcoded.
What I am trying to get to is something that is sort of a cross between AIML and OpenCog, etc.
Performance with latency etc. may present some challenges: preventing things from stalling out, and staying resilient when there are long-running operations across multiple nodes and one or a couple of them go down.
Operating Costs: Micro-fees... sort of like Ethereum, where there is a cost associated with each operation within a "contract". What I am thinking is that a "run" would get associated with a transaction and signed by the submitter, and the transaction amount will have to be enough to cover the sum of the operation fees. We will need to think about what fees make sense for both data handling and computation time/operations. If you are running analytics off a real-time feed from your e-commerce website, for instance, you might want to feed a stream and only need each "unit" of data to have a lifetime of a few minutes, vs. loading some trade data for backtesting, where you might want to store it for a couple of days or longer.
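The "transaction must cover the sum of the operation fees" rule can be sketched as follows. Every operation name and fee value here is invented for illustration; the actual fee schedule was still an open question at this point:

```python
# Hypothetical per-operation fee schedule (made-up names and amounts).
OP_FEES = {
    "load_data": 0.001,       # fee per dataset fragment loaded
    "store_per_hour": 0.0005, # fee per fragment per hour of lifetime
    "compute_step": 0.0002,   # fee per scripted operation executed
}

def run_cost(operations):
    """Sum the fees for a 'run': operations is a list of (op_name, count)."""
    return sum(OP_FEES[name] * count for name, count in operations)

def transaction_covers(tx_amount, operations):
    """A signed run is valid only if its transaction covers the total fees."""
    return tx_amount >= run_cost(operations)
```

Under this model, short-lived streaming units accrue little `store_per_hour` cost, while multi-day backtesting data pays for its longer lifetime.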
Thoughts about how Sapience will pan out, and believing in his project: I really believe this is going to be a game-changing platform... because right now your only real alternative options are Microsoft and Google, at high cost... there's a handful of startups, but they aren't live yet.
Is XAI like SkyNet? Heh, that thing isn't even in the same ballpark as what XAI is putting together... I was looking at it, and the token calls itself decentralized, yet it's a combination of a local server running in the end user's environment plus their proprietary centralized "cluster"/DB of data. It's interesting for what it is, but it's not really solving the same problem.
Plagiarism Concerns: My biggest fear when I start publishing the data architecture design etc. on the wiki is that it's going to get ripped off or show up regurgitated somewhere... I guess there's not much that can be done about that, though, other than just building a better product.
Data: The data platform for Sapience is pretty generalized, so although I'm building it specifically to back-end the AI stuff, in practical application the XAI platform could be leveraged for all kinds of things down the road; it all comes down to writing adapters, basically... another analogy is that it's like object-oriented programming, or component-based development. As far as data throughput... right now, the way the existing low-level stack in the Bitcoin p2p code works, I would say expect something closer to DSL speeds, but there is a lot of room for improvement; the existing stack is pretty basic and has arbitrary caps/throttling in it. That's why I was saying in the last video that we can circle back around to do optimization later... because there is plenty of room for it.
For AI work, the speed cap isn't too much of an issue, because essentially we are dealing with operating on streams anyway.
For me it is easy to think about it that way, because I used to do a ton of real-time streamed-data work with Pi. So you might have a lot of data, but you are generally only operating on small pieces of it incrementally at any given time. I can give you a real-world example where you could do some powerful stuff with a platform like XAI: I had one client that makes huge pumps used in nuclear reactors. They capture real-time streamed data from these pumps over wireless mesh networks and funnel all of it to monitoring servers, where they then do manual analysis. What if you were streaming that data into a predictive-analytics application that could use fuzzy logic to make preventative-maintenance recommendations before you had failures or degradation?
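The pump-monitoring idea above is exactly the "operate on small pieces incrementally" pattern: keep a rolling window over the stream and flag readings that drift from recent behaviour. A minimal sketch, assuming a simple rolling mean/standard-deviation detector rather than the fuzzy logic mentioned (class name, window size, and threshold are my own invention):

```python
from collections import deque

class DriftMonitor:
    """Incremental anomaly detector for one streamed sensor channel.
    Keeps a rolling window of recent readings and flags any new reading
    that deviates from the rolling mean by more than `threshold` standard
    deviations, as a candidate for preventative maintenance."""

    def __init__(self, window=20, threshold=3.0, min_samples=5):
        self.readings = deque(maxlen=window)
        self.threshold = threshold
        self.min_samples = min_samples

    def observe(self, value):
        """Feed one reading; return True if it looks anomalous."""
        alert = False
        if len(self.readings) >= self.min_samples:
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = var ** 0.5
            if std > 0 and abs(value - mean) > self.threshold * std:
                alert = True
        self.readings.append(value)
        return alert
```

Only the small window lives in memory at any moment, which matches the point above about throughput caps mattering less for stream-oriented AI work.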
Idea for some extra reading: If you're interested in some light reading on the tech concepts behind the Sapience design, some of these papers are good reads, this one in particular:
http://users.monash.edu.au/~asadk/paper/full_text/A%20Multi%20Feature%20Pattern%20Recognition%20for%20P2P-based%20System%20Using%20In-network%20Associative%20Memory%20-%2085th%20Solomonoff%20Memorial%20Melbourne%20Australia%2030Nov%2002Dec%202011.pdf
http://www-ai.cs.uni-dortmund.de/LEHRE/SEMINARE/SS09/AKTARBEITENDESDM/LITERATUR/sam08_decisiontree_bhaduriKargupta.pdf
One of the Decentralized AI endgames: I could imagine a year or two from now XAI becoming essentially like a decentralized Azure/AWS.