Machine Learning on SONM is now live! Are you involved with neural networks? From now on, you can run ML algorithms on SONM directly from Docker containers.
Machine Learning can automate a variety of complex professional tasks: image, video, voice, and text recognition, for example. A serious task like image recognition usually takes a long time: a single CPU can spend days or even weeks executing such algorithms. This becomes an efficiency problem for an application, which is why people want to use several video cards and distribute the execution of the algorithm between them. But where can anyone get a cheap GPU in the era of mining?
The SONM platform provides an opportunity to rent equipment for exactly the time required to solve expensive and complex tasks.
This raises the question: is it possible to pack everything required to train a neural network on a GPU into a Docker container and run the calculations on the SONM platform straight “out of the box”?
Let’s try!
TensorFlow with GPU support requires CUDA to perform calculations on video cards, so an official container with the CUDA and cuDNN libraries is required.
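As a rough sketch, such a container could build on NVIDIA’s official CUDA image and install the GPU build of TensorFlow on top of it. The image tag, package versions, and the `train.py` script below are illustrative assumptions, not the exact setup used in this article:

```dockerfile
# Sketch only: base image tag and versions are assumptions.
# nvidia/cuda images ship with the CUDA toolkit and cuDNN preinstalled.
FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04

# Python and pip, needed to install TensorFlow
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# GPU-enabled TensorFlow build, which links against the CUDA/cuDNN
# libraries provided by the base image
RUN pip3 install --no-cache-dir tensorflow-gpu

# train.py is a hypothetical training script you supply yourself
COPY train.py /app/train.py
CMD ["python3", "/app/train.py"]
```

With an image like this, the training workload starts automatically when the container is launched, which is exactly what running “out of the box” on a rented machine requires.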