AI firm DeepMind, acquired by Google in 2014 and now part of its parent company Alphabet, has often made headlines with unusual robots in the past. Now its creative minds are developing a robot designed to learn the skills it needs by itself. The developers call the principle "learning by playing". The sensors and mechanics are not pre-programmed for specific tasks; instead, the robot must work out for itself which of its abilities will help it complete an assigned task. For each successfully completed task, the robot is rewarded through a points system. In this way the prototype has already learned to identify building blocks by color and pick them up. As it does so, the artificial intelligence tries to prioritize the most efficient action.
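The reward-driven trial and error described above is the core idea of reinforcement learning. The following is a minimal, illustrative sketch of that principle, not DeepMind's actual system: a toy agent with three made-up actions discovers, purely from point rewards, which action succeeds.

```python
import random

random.seed(0)

# Toy "block" task: the agent must discover which of three actions
# (hypothetical names, invented for illustration) earns a reward.
ACTIONS = ["push_block", "grasp_block", "wave_gripper"]
REWARDED = "grasp_block"  # only a successful grasp scores a point

def reward(action):
    """Points system: 1 point for the successful action, 0 otherwise."""
    return 1.0 if action == REWARDED else 0.0

# Learned action values, starting from no knowledge at all.
q = {a: 0.0 for a in ACTIONS}
alpha, epsilon = 0.1, 0.2  # learning rate and exploration rate

for episode in range(500):
    # Occasionally try something at random ("playing"),
    # otherwise exploit what has been learned so far.
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(q, key=q.get)
    # Nudge the value estimate toward the observed reward.
    q[a] += alpha * (reward(a) - q[a])

best = max(q, key=q.get)
print(best)  # after training, the agent prefers the rewarded action
```

After enough "play", the value table concentrates on the action that actually earns points, which is the sense in which such a robot prioritizes the most efficient action without being explicitly programmed for it.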
To allow the robot to learn by itself, DeepMind had to start with very simple tasks, but future reprogramming, for instance when the field of application changes, should be unnecessary. Once the robot has "grown up", it should have learnt, much as humans do, what its grippers can be used for. For now, then, the DeepMind model still lags behind existing robotic solutions, but its broad range of applications and flexibility could soon let it outshine them all.