Neurala has highly parallel GPU-based neural networks for better AI and self-driving robotics

Neurala has developed massively parallel Artificial Neural Networks (ANN) running on graphics processing units (GPU). The invention is seen as an important foundation for real-time artificial intelligence and robotics applications.
Humans outperform computers in many natural tasks, including vision and language processing, because the brain efficiently processes many inputs, learns, and recognizes patterns. Computers, however, process only one input at a time on each CPU core and then make sequential calculations. Therefore, even fast CPUs cannot match the power of the human brain.
Neurala’s breakthrough, which dates back to 2006, was to see that GPUs, originally designed for computer games and 3D graphics, could be used to process multiple inputs simultaneously and to simulate neural networks. Cutting-edge artificial intelligence and ANNs are dramatically accelerated on GPUs, which can execute many more operations per clock cycle than a computer’s central processing unit (CPU). As a result, ANNs that perform interesting tasks can be written to run in real time on a low-cost graphics card of the kind found in many consumer products.
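The data-parallel idea described above can be sketched in a few lines. This is a hypothetical illustration, not Neurala's code: a fully connected network layer evaluated for a whole batch of inputs in one batched matrix multiply. On a GPU that single operation is spread across thousands of cores; NumPy stands in here purely to show the structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(batch, weights, bias):
    """Evaluate one neural-network layer for an entire batch of inputs
    in a single matrix multiply, followed by a ReLU activation."""
    return np.maximum(0.0, batch @ weights + bias)

inputs = rng.standard_normal((256, 128))   # 256 inputs processed together
weights = rng.standard_normal((128, 64))   # 128 -> 64 unit layer
bias = np.zeros(64)

activations = forward(inputs, weights, bias)
print(activations.shape)  # (256, 64): every input advanced in one step
```

A CPU looping over the 256 inputs one at a time does the same arithmetic sequentially; the batched form is what lets GPU hardware process all inputs simultaneously.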
“Our invention makes it possible for robots and other devices to use artificial intelligence in situations in which execution time is critical. It will be fundamental for our effort to build brains for robots that interact with the world and with humans in real-time,” said Massimiliano Versace, CEO and co-founder of Neurala.
The robot's brain processes visual information in real time, enabling it to do more than simply navigate from one spot to another. This means robots could one day be trusted to make their own decisions when navigating changing terrain on Mars. Neurala's GPU-based networks are already ten times faster than comparable CPU-based networks. Applications include:

* self-driving flying drones
* self-driving cars
* mostly self-guided ground robots
Applied to navigating robots on Mars
Surface exploration of planetary environments with current robotic technologies relies heavily on human control and power-hungry active sensors to perform even the most elementary low-level functions. Ideally, a robot should be capable of autonomously exploring and interacting with an unknown environment without relying on human input or suboptimal sensors. Behaviors such as exploring unknown environments, memorizing the locations of obstacles or objects, building and updating a representation of the environment, and returning to a safe location are all tasks that animals perform efficiently every day.

Phase I of this NASA STTR focused on the design of an adaptive robotic multi-component neural system that captures the behavior of several brain areas responsible for perceptual, cognitive, emotional, and motor behaviors. The system uses passive, potentially unreliable sensors (analogous to animal visual and vestibular systems) to learn while navigating unknown environments and to build usable, correctable representations of those environments without requiring a Global Navigation Satellite System (GNSS).

In Phase I, Neurala and the Boston University Neuromorphics Lab constructed a virtual robot, or animat, to be developed and tested in an extraterrestrial virtual environment. The animat used passive sensors to perform a spatial exploration task. Starting from a recharging base, it autonomously planned where to go based on past exploration and its current motivation, developed and corrected an internal map of the environment with the locations of obstacles, selected the shortest path back to its recharging base before battery depletion, and then exported the resulting explored map in a human-readable format.
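The animat's return-to-base step amounts to a shortest-path search over the obstacle map it has learned. A minimal sketch of that idea, assuming a simple occupancy grid and breadth-first search (this is not the animat's actual neural algorithm, just the classical equivalent of the behavior described):

```python
from collections import deque

def shortest_path(grid, start, base):
    """Breadth-first search over an occupancy grid (0 = free, 1 = obstacle),
    returning the shortest list of cells from start back to base."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == base:
            # Reconstruct the path by walking parent links back to start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # base unreachable from start

# Toy learned map: the robot must detour around the obstacle row.
grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
path = shortest_path(grid, start=(2, 0), base=(0, 0))
print(len(path) - 1)  # number of moves back to base
```

Because BFS explores cells in order of distance, the first time the base is reached the reconstructed path is guaranteed to be the shortest one through the known free cells.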
http://nextbigfuture.com/2014/02/neurala-has-highly-parallel-gpu-based.html