
Scientists have trained a four-legged robot to play badminton against a human opponent, moving around the court to sustain rallies of up to 10 shots.
By tightly coordinating whole-body movement with visual perception, the robot, named ANYmal, learned to adapt how it moved to reach the shuttlecock and successfully return it over the net, thanks to artificial intelligence (AI).
This shows that four-legged robots could serve as opponents in “complex and dynamic sports scenarios,” the researchers wrote in a study published May 28 in the journal Science Robotics.
ANYmal is a four-legged robot that weighs 110 pounds (50 kilograms) and stands about 1.5 feet (0.5 meters) tall. Having four legs allows ANYmal and similar quadruped robots to travel across difficult terrain and clamber up and down obstacles.
Researchers have previously added arms to these dog-like machines and taught them to fetch specific objects or open doors by grabbing the handle. But coordinating limb control with visual perception in a dynamic environment remains a challenge in robotics.
Related: Watch a “robot dog” scramble through a parkour course with the help of AI
“Sports is a good application for this type of research because you can gradually increase the competitiveness or difficulty,” Yuntao Ma, a former robotics researcher at ETH Zürich now with the company Light Robotics, told Live Science.
Teaching a new dog new tricks
In this research, Ma and his team attached a dynamic arm carrying a badminton racket, angled at 45 degrees, to the standard ANYmal robot.
With the arm added, the robot stood 5 feet, 3 inches (1.6 meters) tall and had 18 joints: three in each of its four legs and six in the arm. The researchers designed a unified control system that coordinates arm and leg movements.
The team also added a stereo camera, which contains two lenses stacked one above the other, to the right of center on the front of the robot’s body. The lenses allowed it to process visual information about incoming shuttlecocks in real time and work out where they were heading.
The robot was then taught to become a badminton player through reinforcement learning. In this type of machine learning, the robot explores its environment, learning by trial and error to detect and track the shuttlecock, move toward it and swing the racket.
To do this, the researchers first created a simulation environment consisting of a badminton court with a virtual robot standing at its center. Virtual shuttlecocks were launched from the opponent’s side of the court, and the robot was tasked with tracking their position and estimating their flight path.
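Shuttlecocks decelerate sharply because of aerodynamic drag, so estimating where one will land is more involved than fitting a simple parabola. As a rough illustration only (the study's actual perception pipeline is not described here; the drag constant, time step and launch numbers below are assumptions), a landing point could be predicted by integrating a simple drag model forward:

```python
import numpy as np

G = 9.81       # gravity, m/s^2
K_DRAG = 0.22  # assumed drag constant (1/m); shuttlecocks decelerate hard
DT = 0.002     # integration time step, s

def predict_landing(pos, vel):
    """Integrate a simple quadratic-drag model until the shuttle reaches the floor."""
    pos, vel = np.asarray(pos, float), np.asarray(vel, float)
    while pos[2] > 0.0:
        speed = np.linalg.norm(vel)
        # Gravity plus drag opposing the velocity, proportional to speed^2.
        acc = np.array([0.0, 0.0, -G]) - K_DRAG * speed * vel
        vel += acc * DT
        pos += vel * DT
    return pos[:2]  # (x, y) landing point on the court

# Hypothetical launch from the far side, 3 m up, moving toward the robot.
landing = predict_landing([0.0, 6.7, 3.0], [0.0, -8.0, 2.0])
```

Because drag grows with the square of speed, the predicted landing point falls well short of where a drag-free projectile would come down.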
Next, the researchers built a rigorous training regimen to teach ANYmal how to hit the shuttlecocks, with a virtual coach scoring the robot on a variety of properties, including racket position, racket angle and swing speed. Crucially, rewards for swings were based on timing, to encourage accurate, well-timed hits.
The shuttlecock could land anywhere on the court, so the robot was also rewarded if it moved across the court efficiently and did not waste energy on unnecessary speed. ANYmal’s goal was to maximize the total reward it collected across all trials.
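The reward scheme described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual reward function; the term names and weights are assumptions chosen only to show how timing can be weighted most heavily while efficient movement is also rewarded:

```python
def swing_reward(racket_pos_err, racket_angle_err, swing_speed_err, timing_err):
    """Score a swing: smaller errors mean higher reward, timing weighted most."""
    return (
        -1.0 * racket_pos_err      # racket position at contact
        - 0.5 * racket_angle_err   # racket face angle
        - 0.5 * swing_speed_err    # swing speed vs. target
        - 2.0 * timing_err         # hitting at the right instant matters most
    )

def locomotion_reward(dist_covered, energy_used):
    """Reward efficient travel across the court, penalizing wasted effort."""
    return 1.0 * dist_covered - 0.1 * energy_used

def step_reward(swing_terms, loco_terms):
    # The policy is trained to maximize this summed reward over all trials.
    return swing_reward(*swing_terms) + locomotion_reward(*loco_terms)
```

A perfect swing (all errors zero) scores zero penalty, and any court coverage achieved without excess energy adds positive reward on top.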
Over 50 million trials of this simulation training, the researchers trained a neural network that could control the movement of all 18 joints to travel across the court and strike the shuttlecock.
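Conceptually, such a policy is a function from the robot's observations to targets for all 18 joints at once. The sketch below is purely illustrative (the paper's network architecture, input features and layer sizes are not given here, so all of them are assumptions), with randomly initialized weights standing in for the trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM = 48   # assumed: joint states, base pose, shuttlecock estimate, ...
ACT_DIM = 18   # 3 joints per leg x 4 legs + 6 arm joints
HIDDEN = 128   # assumed hidden-layer width

# Random weights stand in for parameters learned over millions of trials.
W1 = rng.normal(0, 0.1, (OBS_DIM, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, ACT_DIM))

def policy(obs):
    """One forward pass: observation vector -> one bounded target per joint."""
    h = np.tanh(obs @ W1)   # shared hidden layer couples legs and arm
    return np.tanh(h @ W2)  # tanh keeps each joint target in [-1, 1]

targets = policy(rng.normal(size=OBS_DIM))  # one command for all 18 joints
```

Because a single network produces every joint command from the same observations, leg and arm motion come out of one computation rather than separate controllers, which is the coordination the article describes.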
Fast learner
After the simulation runs, the scientists transferred the neural network to the physical robot and put ANYmal through its paces in the real world.
Here, the robot was trained to find and track bright orange shuttlecocks launched by another machine, which let the researchers control the shuttlecocks’ speed, angle and landing spot. ANYmal had to move around the entire court to hit the shuttlecock with a swing quick enough to return it over the net and toward the center of the court.
The researchers found that, after intensive training, the robot could track shuttlecocks and return them accurately at swing speeds of about 39 feet per second (12 meters per second), which the researchers noted is nearly half the average speed of amateur human players.
ANYmal also adjusted its movement patterns based on how far it had to travel to the shuttlecock and how much time it had to get there. The robot did not need to move when the shuttlecock landed just 1.6 feet (0.5 meters) away, but at about 5 feet (1.5 meters), ANYmal walked to reach the shuttlecock, moving all four legs. At about 7 feet (2.2 meters), the robot galloped toward the shuttlecock, producing a flight phase that extended the arm’s reach by 3 feet (1 m) toward the target.
“Controlling the robot to look at the shuttlecock is not trivial at all,” Ma said. If the robot keeps its gaze fixed on the shuttlecock, it cannot move very quickly; but if it looks away, it will not know where to go. “This trade-off has to happen in a fairly smart way,” he said.
Ma was surprised by how well the robot learned to move all 18 joints in a coordinated way. It is a particularly difficult task because the motor at each joint learns independently, yet the final movement requires them all to work together.
The team also found that the robot automatically began returning to the center of the court after each hit, similar to how human players position themselves for incoming shuttlecocks.
However, the researchers noted that the robot did not account for its opponent’s movements, an important cue human players use to predict shots. Incorporating estimates of human poses would help improve ANYmal’s performance, the team said in the study. Ma added that they could also fit a neck joint to let the robot keep the shuttlecock in view for longer.
Ma believes this research will eventually have applications beyond sports. For example, it could support debris removal during disaster relief efforts, where a robot would need to balance dynamic visual perception with agile movement.