Meta AI Researchers are Training Robots to Learn Like 3-year-olds

By StudyFinds Research

Babies and toddlers learn by exploring their surroundings, and now robots can too. In a collaboration between Carnegie Mellon University and Meta, scientists have drawn inspiration from the way infants learn to create an innovative approach to teaching robots. The result is RoboAgent, an artificial intelligence agent designed to emulate a toddler's learning process and acquire manipulation skills comparable to those of a three-year-old child.

“We aimed to create a single AI agent capable of a wide range of skills in novel situations, similar to how human babies learn,” explains Vikash Kumar, from Carnegie Mellon’s School of Computer Science’s Robotics Institute. “RoboAgent leverages passive observations and limited active play, just like infants who keenly watch, imitate, and replay to learn.”

RoboAgent showcases proficiency in 12 manipulation skills across various scenarios, demonstrating a dynamic learning platform adaptable to changing environments. Unlike prior research conducted in simulations, this project successfully operated in real-world environments using notably less data.

“RoboAgents exhibit a greater complexity of skills than previous attempts,” states Abhinav Gupta, an associate professor at the Robotics Institute, in a university release. “Our agent demonstrates a diverse skill set that surpasses any real-world robotic agent’s achievements. It combines efficiency, scalability, and adaptability to unseen situations.”

RoboAgent's learning architecture is the core of its effectiveness and efficiency. Rather than choosing a single action at each time step, as traditional policies do, it makes decisions over temporal chunks of movements. This policy structure lets the agent reason even from limited experience and act toward specified goals.
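The chunked decision-making described above can be sketched as a simple control loop. This is a minimal illustration, not RoboAgent's actual implementation: the policy, the chunk size, and all names here are hypothetical placeholders standing in for a learned model.

```python
# Sketch of temporal action chunking: the policy predicts a short
# sequence (chunk) of actions and executes it before deciding again,
# instead of predicting one action per time step.

CHUNK_SIZE = 4  # hypothetical horizon: actions predicted per decision


def chunked_policy(observation, chunk_size=CHUNK_SIZE):
    """Toy stand-in for a learned policy: returns a chunk of actions."""
    # A real agent would run a neural network here; this deterministic
    # placeholder just makes the control loop runnable.
    return [f"action_{observation}_{t}" for t in range(chunk_size)]


def run_episode(initial_obs, total_steps=12):
    """Run a fixed-length episode, deciding once per chunk."""
    executed, obs, decisions = [], initial_obs, 0
    while len(executed) < total_steps:
        chunk = chunked_policy(obs)      # one decision...
        decisions += 1
        for action in chunk:             # ...covers several time steps
            executed.append(action)
            if len(executed) == total_steps:
                break
        obs = len(executed)              # stand-in for a fresh observation

    return executed, decisions


actions, num_decisions = run_episode(initial_obs=0)
# With chunks of 4, a 12-step episode needs only 3 policy decisions.
```

The point of the sketch is the ratio: fewer decisions per episode means each decision can amortize more reasoning, which is one way a policy can remain effective with limited training experience.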

RoboAgent’s learning process draws inspiration from the way children accumulate knowledge. Just as parents guide their offspring, researchers teleoperated the robot to provide valuable self-experiences. However, RoboAgent’s learning scope goes beyond its immediate environment.

“To overcome limitations, RoboAgent learns from internet videos, similar to how babies acquire behaviors by observing their surroundings,” says Mohit Sharma, a Ph.D. student in robotics. “These videos help RoboAgent learn how humans interact with objects and utilize skills to complete tasks. It extracts valuable lessons from different scenarios and applies them to new challenges.”

The team’s ambitious project aims to enhance robots’ adaptability in diverse settings.

“RoboAgent’s learning could lead us closer to a universal robot capable of a range of tasks in various environments,” states Shubham Tulsiani, an assistant professor from the Robotics Institute. “This platform could make robots more useful in unstructured spaces such as homes, hospitals, and public areas.”

The project’s impact is further amplified by its open-source approach. The team is sharing its trained models, codebase, hardware drivers, and an extensive dataset, RoboSet, which is the largest publicly accessible robotics dataset on standard hardware. The goal is to foster collaboration and development within the robotics community, paving the way for a versatile and foundational general robotic agent in the future.


Source: Study Finds

Top image credit: Pexels
