Hello, community!
We continued studying artificial neural networks last week. We attended a lecture on artificial intelligence by Sergey Shumsky, which was part of the OpenTalks.AI conference. I noted two very interesting points in his presentation.

The first point is the use of neural networks for translating text from one language to another. It has turned out to be very effective, with results close to translations made by humans, and the main idea behind it is very simple. Neural networks are now widely used for encoding-decoding tasks. Usually this is a combination of two networks: one is trained to compress (encode) the input data, and the second is trained to restore (decode) the original data. This training proceeds "without a teacher" (unsupervised): the pair of networks is simply required to make the output match the input. For text translation the idea is almost the same: one network encodes a text in the source language, and the second decodes the resulting representation, but into a different language (see the short code sketch below).

The second interesting point is the progress of artificial intelligence in the game of Go. In March 2016, AlphaGo (developed by the Google DeepMind team) won a match against Lee Sedol with a score of 4-1. Lee Sedol is a famous professional Go player who holds 9 dan, the highest rank in Go. The AlphaGo algorithm used in that match was based on a neural network trained on the games of professional Go players. In October 2017, a new version, AlphaGo Zero, was released; it was not trained on human games at all and learned only by playing against itself. This version achieved much better results: humans can no longer win a single game against it. AlphaGo Zero was trained using only 50 GPUs for several weeks. Reviews by professional players show that it developed interesting strategies of its own, yet even the AlphaGo developers do not fully understand its logic for now.

As I mentioned in the previous report, neural network computations can be efficiently parallelized in many ways. Even by our very rough and pessimistic estimate of the FREED project's capacity, we would be able to perform such computations in just a few minutes.
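To make the encoder-decoder idea above a bit more concrete, here is a minimal autoencoder sketch in Keras (one of the frameworks mentioned below). The layer sizes, the 784-dimensional input, and the random placeholder data are illustrative assumptions only, not taken from our actual experiments:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 784   # e.g. a flattened 28x28 image (an assumption for the example)
latent_dim = 32   # size of the compressed representation

inputs = keras.Input(shape=(input_dim,))
encoded = layers.Dense(latent_dim, activation="relu")(inputs)     # encoder: compress
decoded = layers.Dense(input_dim, activation="sigmoid")(encoded)  # decoder: restore

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Training "without a teacher": the network learns to reproduce its own input,
# so the input data also serves as the target.
x = np.random.rand(1000, input_dim).astype("float32")  # placeholder data
autoencoder.fit(x, x, epochs=5, batch_size=64, verbose=0)
```

A translation system follows the same encode-decode pattern, except that the decoder is trained to produce text in the target language instead of reproducing the input.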
We are also slowly moving on to practical research: our first neural networks have begun training with widely used libraries and frameworks such as NVIDIA cuDNN, Microsoft Cognitive Toolkit, TensorFlow, Theano, and Keras. These tools differ greatly in their aims and internals, and I will write about them in more detail a bit later, because even a brief review of these technologies calls for a fairly long description. As the author of the Keras framework wrote, "The ideas behind deep learning are simple, so why should their implementation be painful?" And he is right: it really is simple! Follow our news; this is only the beginning!
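As an illustration of that quote, here is a minimal sketch of how a small network is defined in Keras; the architecture and the 10-class setup are assumptions chosen just for the example:

```python
from tensorflow import keras
from tensorflow.keras import layers

# The whole model definition fits in a few readable lines.
model = keras.Sequential([
    keras.Input(shape=(784,)),               # flattened 28x28 input (assumed)
    layers.Dense(128, activation="relu"),    # one hidden layer
    layers.Dense(10, activation="softmax"),  # e.g. a 10-class classifier
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # prints a readable overview of layers and parameter counts
```

After that, training comes down to a single model.fit(...) call on your data.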
—
Sincerely yours, Head of R&D team
Dmitry Plotnikov
http://storage5.static.itmages.com/i/18/0219/h_1519022596_5367187_0a9094bfcb.jpg