DeepMind robot finds out for itself what it can do
Until now, robots have been programmed for specific tasks. At best they can also learn, but they remain bound by their source code. In the future, they could learn like babies.
AI firm DeepMind, part of Google’s Alphabet group since 2014, has often made headlines with unusual robots in the past. Now its creative minds are developing a robot designed to learn the skills it needs by itself. The developers call the principle “learning by playing”. Sensors and mechanics are not pre-programmed; instead, the robot should recognize for itself which of its abilities will be of use in completing assigned tasks. For each successful task, the robot is rewarded through a points system. In this way, the prototype has already learned to identify building blocks by color and pick them up. As it does so, the artificial intelligence tries to prioritize the most efficient action.
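The reward-driven “points system” described above follows the general idea of reinforcement learning. The sketch below is purely illustrative and not DeepMind’s actual method: the colors, actions, and reward values are invented assumptions, and the learner is a minimal tabular Q-value update with epsilon-greedy exploration.

```python
import random

# Illustrative sketch of reward-driven learning ("learning by playing").
# Task, states, actions, and rewards are all invented for illustration.

def train(n_episodes=2000, epsilon=0.1, alpha=0.5, seed=0):
    rng = random.Random(seed)
    colors = ["red", "green", "blue"]                 # blocks the robot can see
    actions = ["pick_red", "pick_green", "pick_blue"] # grasps it can try
    # Q[state][action]: expected reward ("points") for each action
    Q = {c: {a: 0.0 for a in actions} for c in colors}

    for _ in range(n_episodes):
        state = rng.choice(colors)       # a block of random color appears
        if rng.random() < epsilon:       # explore: try something at random
            action = rng.choice(actions)
        else:                            # exploit: best action found so far
            action = max(Q[state], key=Q[state].get)
        # +1 point for grasping the block of the requested color
        reward = 1.0 if action == "pick_" + state else 0.0
        # Nudge the value estimate toward the observed reward
        Q[state][action] += alpha * (reward - Q[state][action])
    return Q

Q = train()
# After training, the greedy choice for each color is the matching grasp
policy = {s: max(Q[s], key=Q[s].get) for s in Q}
```

Through trial, error, and points alone, the learner settles on the correct grasp for each color, mirroring in miniature how the prototype discovers which actions earn rewards without being explicitly programmed for the task.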
To allow the robot to learn by itself, DeepMind had to start with very simple tasks, but future reprogramming, for example when the field of application changes, will be unnecessary. Once the robot has “grown up”, it should have learnt, much as humans do, what it can use its grippers for. For now, the DeepMind model remains inferior to existing robotic solutions, but its broad applicability and flexibility should soon let it outshine them.