The Google DeepMind Robotics team demonstrated this week how the RT-2 robot, powered by Google's Gemini 1.5 Pro model, can carry out natural language commands and move around an office space.

Image source: Google DeepMind

DeepMind Robotics published a paper titled "Mobility VLA: Multimodal Instruction Navigation with Long-Context VLMs and Topological Graphs," accompanied by a series of videos showing the robot performing various tasks in a 9,000 sq ft (836 m²) office space.

In one video, a Google employee asks the robot to take him somewhere to draw. "OK," it replies, "give me a minute. Thinking with Gemini…" The robot then leads the person to a wall-sized whiteboard.

In a second video, another employee asks the robot to follow directions on a whiteboard. The employee draws a simple map showing how to reach the Blue Zone. Once again, the robot thinks for a moment before following the drawn route to a location that turns out to be a robotics testing area. "I have successfully followed the instructions on the board," the robot reports.

Before the videos were recorded, the robots were familiarized with the space using an approach called Multimodal Instruction Navigation with demonstration Tours (MINT). Thanks to this, a robot can navigate the office relative to landmarks referenced in spoken commands. DeepMind Robotics then built a hierarchical Vision-Language-Action (VLA) system "that combines environmental awareness with the power of common sense." With these pieces combined, the robot can respond to written and drawn instructions as well as gestures, and navigate the space accordingly.
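Conceptually, the paper's pipeline splits navigation into two levels: a long-context VLM matches the user's instruction against frames of the recorded demonstration tour to pick a goal, and a conventional planner then finds a route through a topological graph built from that tour. The sketch below illustrates that split; the function names, the stubbed-out query_vlm, and the toy five-node graph are illustrative assumptions, not DeepMind's actual code.

```python
# Minimal sketch of the two-level idea described above.
# All names here (query_vlm, TOUR_GRAPH, etc.) are illustrative
# placeholders, not DeepMind's actual API.

from collections import deque

# Topological graph built offline from the demonstration tour:
# each node is a tour frame; edges link physically adjacent frames.
TOUR_GRAPH = {
    0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3],
}

def query_vlm(instruction: str, num_frames: int) -> int:
    """High level: a long-context VLM is shown the whole tour video plus
    the user's (possibly multimodal) instruction and returns the index of
    the tour frame that best matches the goal. Stubbed out here."""
    # A real prompt would interleave all tour frames with the instruction.
    return num_frames - 1  # pretend the goal is the last frame

def plan_path(graph: dict, start: int, goal: int) -> list:
    """Low level: ordinary breadth-first search over the topological graph
    yields a sequence of waypoint frames for the robot to traverse."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return []

goal = query_vlm("Take me somewhere to draw", num_frames=len(TOUR_GRAPH))
print("Waypoints:", plan_path(TOUR_GRAPH, start=0, goal=goal))
```

The appeal of this design is that the expensive VLM call happens once per instruction to choose a destination, while the actual motion reduces to cheap graph search over places the robot has already seen.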

According to Google, the robots successfully followed the instructions given to them in about 90% of roughly 50 interactions with employees.
