1. Who is going to write your code? Looking at your team, I see no one labelled as a developer. In my world, developers and software engineers are perhaps the most critical resources.
We have a 12-person dev team building the platform now. We list only the project principals in the whitepaper.
Our CTO and VP of Engineering run the dev team.
2. If I understand your concept correctly, the training data is to be produced by advanced video rendering, which is well suited to GPUs. There is also a huge amount of rendering power distributed in the mining community, just as you say. However, mining rigs are mostly ill suited for such workloads, since rendering requires considerable bandwidth between CPU and GPU, and typical mining setups today connect their GPUs over cheap USB risers with very little bandwidth. Any rendering benchmark will show this is detrimental to performance.
Sure, rigs can be rebuilt, but it's very hard to fit more than 3 or 4 GPUs with proper PCIe bandwidth. Only higher-end systems will work well, and mining rigs mostly use the cheapest possible hardware (except for the GPUs). A lack of CPU and memory resources may also be a bottleneck, depending on how the code is written. So, if you are serious about tapping into distributed GPU resources, you need to explain the requirements so the community can adapt in time.
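(To make the gap concrete, here is a rough, back-of-envelope calculation of per-frame upload time over a single PCIe 3.0 lane — the effective link behind most USB risers — versus a full x16 slot. The ~0.985 GB/s per-lane figure is the nominal PCIe 3.0 rate after 128b/130b encoding, not measured throughput, and the frame size is just an illustrative uncompressed 1080p RGBA image.)

```python
# Rough per-frame host-to-GPU transfer times for an uncompressed
# 1920x1080 RGBA frame over different PCIe link widths.
frame_bytes = 1920 * 1080 * 4   # ~8.3 MB uncompressed RGBA
lane_bps = 0.985e9              # nominal PCIe 3.0 rate per lane, bytes/s

times_ms = {}
for lanes, label in [(1, "x1 (USB riser)"), (16, "x16 (full slot)")]:
    times_ms[lanes] = frame_bytes / (lanes * lane_bps) * 1e3
    print(f"{label}: ~{times_ms[lanes]:.2f} ms per frame")
```

At ~8 ms per frame just for the upload, a riser-connected GPU spends a large share of its time waiting on the bus, which is exactly why rendering benchmarks punish x1 links.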
In our case we don’t need a lot of bandwidth. Synthetic data images typically do not require “Hollywood-level” rendering. In our experiments, the mining hardware proved more than sufficient, even outperforming similar setups on Amazon by a hefty margin.
It is true that not every system will be a fit, but we are getting around this issue with some clever software engineered into our nodes. As our platform matures toward its first release in Q1 2018, we will release the specs to the community.
3. Massive amount of images (or perhaps even video) hints at massive amounts of data. Can you present any calculations on needed internet bandwidth and storage requirements for this. It looks like you have some experimental projects to draw experience from?
As noted above, we will release the specs to the community as the platform matures toward its first release in Q1 2018.
Right now we are working out the use cases with a couple of friendly mining farms (each with several thousand mining rigs of all kinds). We want to be disciplined about what we release to the community. Many of these problems may not apply, depending on how we engineer the node.
For example, one scheme bundles the data generation and model training tasks in the node. Once a data example is generated, the network runs a learning step on it and the data piece is destroyed; then the next example is produced and another step runs. This gets around the need to store massive amounts of data (at the cost of slower learning, mitigated by running the process on more nodes).
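Roughly, the scheme looks like this — a toy sketch in which a random linear-regression example stands in for the rendered image and a single SGD step stands in for the network's training epoch (all names here are illustrative, not our actual node code):

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_W = np.array([1.0, -2.0, 0.5, 3.0])  # hidden "ground truth"

def render_example():
    """Stand-in for the synthetic-data renderer: returns one
    (features, label) pair instead of a rendered image."""
    x = rng.normal(size=4)
    return x, x @ TRUE_W

def train_step(w, x, y, lr=0.05):
    """One SGD step of linear regression on a single example."""
    err = x @ w - y
    return w - lr * err * x  # gradient of 0.5*err**2 w.r.t. w

w = np.zeros(4)
for _ in range(2000):
    x, y = render_example()  # generate one synthetic example
    w = train_step(w, x, y)  # learn from it immediately
    del x, y                 # discard it: nothing is stored on disk
```

The model converges toward the ground-truth weights even though no example outlives a single step — storage stays near zero, and throughput comes from running many such loops in parallel across nodes.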
5. The distributed computing part of the platform looks a lot like it would match the BOINC architecture (I'm sure you know of BOINC).
Why not just piggyback on BOINC and get immediate access to an open platform for distributed computing, tailored to sending compute jobs to a large number of users (with GPUs)? BOINC handles all those issues with getting back scientifically correct results, users who don't complete their work packages, etc.
I noticed they even have a cryptocurrency these days, meant to reward contributors on a voluntary basis.
We are absolutely looking at BOINC, GridCoin, Golem, and others. Our goal is to cast our web of nodes as wide as possible.
We will also be writing containers for Amazon, Google Cloud, and Azure.
What’s immediately apparent, though, is that our platform customers will derive greater immediate economic value from mining farms, so that is the priority for now.
Very cool project! What is the minimum Ether for the presale?
Thanks! You can buy NEUROTOKENS using ETH or BTC.