Google's new robot AI can fold delicate origami and close zipper bags without damage

On Wednesday, Google DeepMind announced two new AI models designed to control robots: Gemini Robotics and Gemini Robotics-ER. The company claims these models will help robots of many shapes and sizes understand and interact with the physical world more effectively and delicately than previous systems, paving the way for applications such as humanoid robots.

It's worth noting that while hardware for robot platforms seems to advance at a steady pace (well, maybe not always), creating an AI model that can pilot those robots autonomously through novel scenarios with safety and precision has so far been out of reach. What the industry calls "embodied AI" is a moonshot goal for Nvidia, for example, and it remains something of a holy grail that could turn robots into general-purpose workers in the physical world.

Along those lines, Google's new models build on the Gemini 2.0 large language model foundation, adding capabilities specifically for robotic applications. Gemini Robotics includes what Google calls "vision-language-action" (VLA) abilities, allowing it to process visual information, understand language commands, and generate physical movements. By contrast, Gemini Robotics-ER focuses on "embodied reasoning" with enhanced spatial understanding, letting roboticists connect it to their existing robot control systems.

For example, with Gemini Robotics, you can ask a robot to "pick up the banana and put it in the basket," and it will use a camera view of the scene to recognize the banana, guiding a robotic arm to perform the action successfully. Or you might say, "fold an origami fox," and it will use its knowledge of origami and how to fold paper carefully to carry out the task.

https://www.youtube.com/watch?

Gemini Robotics: Bringing AI to the physical world.

In 2023, we covered Google's RT-2, which represented a notable step toward more generalized robotic capabilities by using Internet data to help robots understand language commands and adapt to new scenarios, doubling performance on unseen tasks compared to its predecessor. Two years later, Gemini Robotics appears to have made another substantial leap forward, not just in understanding what to do but in executing the kind of complex physical manipulation that RT-2 explicitly could not handle.

While RT-2 was limited to repurposing physical movements it had already practiced, Gemini Robotics reportedly demonstrates significantly enhanced dexterity that enables previously impossible tasks like folding origami and packing snacks into zip-top bags. This shift from robots that merely understand commands to robots that can perform delicate physical tasks suggests DeepMind may have begun to solve one of robotics' biggest challenges: getting robots to turn their "knowledge" into careful, precise movements in the real world.

Better generalized results

According to DeepMind, the new Gemini Robotics system demonstrates much stronger generalization, or the ability to perform novel tasks it was not specifically trained to do, compared to its previous AI models. In its announcement, the company claims Gemini Robotics "more than doubles performance on a comprehensive generalization benchmark compared to other state-of-the-art models." Generalization matters because robots that can adapt to new scenarios without specific training for each situation could one day work in unpredictable real-world environments.

That matters because skepticism remains about how useful humanoid robots currently are, or how capable they really are. Tesla unveiled its Optimus Gen 3 robot last October, claiming the ability to complete many physical tasks, yet concerns persist over the authenticity of its autonomous AI capabilities after the company admitted that several of the robots in its impressive demo were remotely controlled by humans.

Here, Google is attempting to build the real thing: a generalist robot brain. With that goal in mind, the company announced a partnership with Austin, Texas-based Apptronik to "build the next generation of humanoid robots with Gemini 2.0." While the model was trained primarily on a bimanual robot platform called ALOHA 2, Google states that Gemini Robotics can also control different types of robots, from Franka robotic arms to more sophisticated humanoid systems such as Apptronik's Apollo.

https://www.youtube.com/watch?

Gemini Robotics: Dexterous skills.

While the humanoid robot angle is a relatively new application for Google's generative AI models (at least in this LLM-driven cycle of the technology), it's worth noting that Google previously acquired several robotics companies in 2013 and 2014 (including Boston Dynamics, which makes humanoid robots) before later selling them off. The new partnership with Apptronik appears to be a fresh approach to humanoid robots rather than a direct continuation of those earlier efforts.

Other companies have been hard at work on humanoid robots as well, such as Figure AI (which secured major funding for its humanoid robots in March 2024) and the aforementioned former Alphabet subsidiary Boston Dynamics (which showed off a flexible new robot last April), but a capable AI "driver" for such machines has yet to fully emerge. On that front, Google is also granting limited access to Gemini Robotics-ER through a "trusted tester" program to companies such as Boston Dynamics, Agility Robotics, and Enchanted Tools.

Safety and limitations

For safety considerations, Google says it takes a holistic approach that maintains traditional robot safety measures such as collision avoidance and force limits. The company describes developing a "Robot Constitution" framework inspired by Isaac Asimov's Three Laws of Robotics, and releasing a dataset called "ASIMOV" to help researchers evaluate the safety implications of robotic actions.

The new ASIMOV dataset represents Google's attempt to create standardized ways of assessing robot safety beyond simply preventing physical harm. The dataset appears designed to help researchers test how well AI models understand the potential consequences of actions a robot might take in different scenarios. According to Google, the dataset will help researchers rigorously measure the safety implications of robotic actions in real-world scenarios.

The company has not announced availability timelines or specific commercial applications for the new AI models, which remain in a research phase. While Google's demonstration videos show progress in AI-driven robotic capabilities, the controlled research environments still leave open questions about how these systems would actually perform in messy, unpredictable real-world settings.
