Mimic Robotics develops AI-controlled, highly mobile robotic hands, Dyna Robotics aims to automate repetitive, stationary tasks with intelligent robotic arms, and Physical Intelligence builds vision-language-action models designed to control a wide variety of robot types via a common “brain.”

Europe's new robotics hope relies on physical AI

On November 3, Handelsblatt reported under the headline “Europe's new robotics hope” that the Zurich-based startup Mimic Robotics had raised €20 million from investors to develop adaptive robotic hands that can be mounted on common industrial robotic arms. Mimic Robotics' generative AI model draws on comprehensive data sets of human hand movements to imitate complex gripping and manipulation tasks. With the funding, the three founders, Stephan-Daniel Gravert, Elvis Nava, and Stefan Weirich, are to roll out their “frontier physical AI” in industries such as manufacturing, assembly, and logistics and bring their AI hands into initial series production processes, where, with human-like dexterity, they will take on complex tasks that previously could not be sufficiently automated.

According to the developers, their dexterous, powerful, and precise robotic hand hardware is intended to bridge the gap between humans and robots: the humanoid design enables maximum compatibility, allowing the hand to cope reliably with both difficult working conditions and delicate objects. The founding team sees the hand as the ideal platform for their AI models to perform human-like manipulation tasks. Possible applications include the assembly of small parts, where conventional grippers reach their limits, or the picking and repackaging of sensitive goods in warehouses.
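The core idea, learning manipulation behavior from recorded human hand movements rather than hand-coding it, can be illustrated with a deliberately minimal sketch. The nearest-neighbour retrieval below is purely illustrative (it is not Mimic Robotics' actual model, which the article describes only as generative AI), and all names and values are made up:

```python
# Illustrative sketch: imitate human demonstrations by looking up the
# recorded action whose state best matches the current one.

def nearest_demo_action(state, demos):
    """Return the action from the (state, action) demo pair closest to `state`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best_state, best_action = min(demos, key=lambda d: sq_dist(d[0], state))
    return best_action

# Hypothetical (state, action) pairs from human hand-motion recordings,
# e.g. fingertip positions -> grip command; the numbers are invented.
demos = [((0.0, 0.0), "open"), ((1.0, 1.0), "close")]
print(nearest_demo_action((0.9, 0.8), demos))  # close
```

A real system would replace the lookup with a learned model that generalizes across states, but the data flow, human demonstrations in, gripping behavior out, is the same.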

AI enables robots to perform complex folding processes

Dyna Robotics, based in the US, is likewise developing a foundation model for robots, one designed specifically for long-duration tasks that also demand a high degree of manipulation skill. Like Mimic Robotics, the young company aims to establish physical AI that is trained in practice and improves with use. Its DYNA-1 model is currently being tested directly in real-world environments, and each new installation is intended to feed further data into a continuous reinforcement learning cycle.

In a demonstration run, a robot equipped with DYNA-1 folded more than 900 napkins fully autonomously in just over 24 hours with a success rate of around 99%, an impressive display of robust, repeatable manipulation over long periods. The technology could therefore also automate laundry and textile handling in large laundries or hotels, as well as complex packaging and folding processes in the consumer goods industry. Assistance tasks in canteens or service areas, where many similar movements are required, are also conceivable. What is most compelling about Dyna Robotics' approach, however, is that users no longer have to develop a separate control program for each task: an established model merely needs to be fine-tuned for new workflows.
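The shift from per-task programming to fine-tuning can be sketched in a few lines. Everything below is a toy stand-in, the class, the "adaptation" rule, and the napkin data are invented for illustration and do not reflect Dyna Robotics' actual API or training method:

```python
# Hypothetical sketch: one pretrained policy is adapted to a new workflow
# from a handful of demonstrations instead of being reprogrammed.

class PretrainedPolicy:
    """Stand-in for a manipulation foundation model like DYNA-1."""

    def __init__(self):
        self.task_head = {}  # per-task parameters, learned from demos

    def fine_tune(self, demonstrations):
        # Toy adaptation rule: average the demonstrated action per observation.
        grouped = {}
        for obs, action in demonstrations:
            grouped.setdefault(obs, []).append(action)
        self.task_head = {obs: sum(a) / len(a) for obs, a in grouped.items()}

    def act(self, obs):
        # Fall back to a neutral action for observations never demonstrated.
        return self.task_head.get(obs, 0.0)

# Invented demo data for a "fold napkin" workflow: observation label -> action.
napkin_demos = [("corner_detected", 1.0), ("corner_detected", 0.8), ("flat", 0.0)]

policy = PretrainedPolicy()
policy.fine_tune(napkin_demos)
print(policy.act("corner_detected"))
```

The point is the workflow, not the math: deploying the same `PretrainedPolicy` on a new task means collecting demonstrations and calling `fine_tune`, not writing a new controller.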

Steps toward “open-world” capabilities

Physical Intelligence (PI) from San Francisco is pursuing an even more general approach. The company builds vision-language-action models designed to control a wide variety of robot types via a common “brain.” With π₀ (pi-zero), PI has already introduced a general robotics foundation model that has learned from camera, motion, and text data and can be instructed in natural language. In April of this year, π₀.₅ followed, a model designed to generalize better, even to previously unknown environments; with it, PI wants to take a step toward so-called “open-world” capabilities. Elements of π₀ have since been released as open source to accelerate research and industrial applications. That PI's approach is considered highly promising is evident not least from the startup's investor list, which includes Amazon founder Jeff Bezos, OpenAI, and the venture capital firms Thrive Capital and Lux Capital.
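The interface of such a vision-language-action model, a text instruction plus sensor observations in, low-level robot actions out, can be sketched as follows. The class and method names are illustrative assumptions, not PI's actual open-source API, and the returned actions are placeholders:

```python
# Hypothetical sketch of a vision-language-action (VLA) interface:
# one shared "brain" maps (instruction, observation) -> action trajectory.

from dataclasses import dataclass, field


@dataclass
class Observation:
    camera_image: list = field(default_factory=list)  # stand-in for pixels
    joint_angles: list = field(default_factory=list)  # robot proprioception


class VisionLanguageActionModel:
    """Toy stand-in for a model like pi-zero; returns a placeholder trajectory."""

    def predict_actions(self, instruction: str, obs: Observation) -> list:
        # A real VLA model conditions on the image, the text, and the robot
        # state; here we just emit a fixed-horizon chunk of zero actions with
        # one target value per joint, to show the shape of the interface.
        horizon = 3
        return [[0.0] * len(obs.joint_angles) for _ in range(horizon)]


model = VisionLanguageActionModel()
obs = Observation(joint_angles=[0.1, -0.2, 0.3])
actions = model.predict_actions("pick up the cup and place it on the tray", obs)
print(len(actions), len(actions[0]))  # 3 3
```

Because the instruction is plain text and the observation format is robot-agnostic, the same model object could in principle drive very different hardware, which is exactly the "common brain" idea the article describes.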

In the future, OEMs could integrate the PI model into their own platforms, for example in mobile transport robots, collaborative arms, or service robots in production.

The time has come: AI agents are doing physical work

Mimic Robotics, Dyna Robotics, and Physical Intelligence thus represent three complementary approaches to physical AI: highly specialized, adaptive gripping models; a foundation model optimized for long-duration tasks; and a universal “robotics operating system” for many platforms. Together, they paint a picture of a near future in which AI agents not only analyze data but also take on physical work in factories, warehouses, and service environments with increasing autonomy.