MIT Unveils New Robot Training Model Inspired by Large Language Models

Elliot Kim

November 01, 2024 · 2 min read

MIT researchers have introduced a novel approach to training robots, drawing inspiration from the massive datasets used to train large language models (LLMs). The new method, dubbed Heterogeneous Pretrained Transformers (HPT), enables robots to learn from vast amounts of data and adapt more readily to new tasks.

The traditional approach to robot training, known as imitation learning, often fails when the environment or task changes even slightly. To overcome this limitation, the MIT team turned to the large-scale data strategy behind LLMs such as GPT-4. By using a transformer to combine data from different sensors and environments into a shared representation, the HPT architecture allows robots to learn from a wide range of data sources and generalize to new situations.
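To make the idea concrete, here is a minimal, illustrative PyTorch sketch of an HPT-style model: per-modality "stems" tokenize heterogeneous inputs such as camera images and joint states into a shared embedding space, a shared transformer trunk fuses the tokens, and a small head decodes robot actions. All class names, dimensions, and layer sizes here are assumptions for illustration, not the architecture from the MIT paper.

```python
import torch
import torch.nn as nn

class HPTStyleModel(nn.Module):
    """Illustrative sketch of an HPT-style architecture (hypothetical
    dimensions): modality-specific stems map heterogeneous sensor data
    into a shared token space, a shared transformer trunk fuses the
    tokens, and a small head decodes actions."""

    def __init__(self, d_model=256, n_heads=4, n_layers=4, action_dim=7):
        super().__init__()
        # Stem 1: 64x64 RGB image -> 8x8 grid of patch tokens.
        self.vision_stem = nn.Sequential(
            nn.Conv2d(3, d_model, kernel_size=8, stride=8),
            nn.Flatten(2),                            # (B, d_model, 64)
        )
        # Stem 2: 14-D proprioceptive state (joint angles) -> one token.
        self.proprio_stem = nn.Linear(14, d_model)
        # Shared trunk: a standard transformer encoder over all tokens.
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, n_layers)
        # Head: task/embodiment-specific action decoder.
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, image, proprio):
        img_tokens = self.vision_stem(image).transpose(1, 2)  # (B, 64, d)
        prop_token = self.proprio_stem(proprio).unsqueeze(1)  # (B, 1, d)
        tokens = torch.cat([img_tokens, prop_token], dim=1)   # shared space
        fused = self.trunk(tokens)
        return self.action_head(fused.mean(dim=1))  # pool tokens -> action

model = HPTStyleModel()
actions = model(torch.randn(2, 3, 64, 64), torch.randn(2, 14))
print(actions.shape)  # torch.Size([2, 7])
```

The appeal of this kind of design is that the stems and heads can be swapped per robot or sensor suite, while the pretrained trunk is shared across all of them.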

The ultimate goal of this research is a "universal robot brain" that can be downloaded and used without any training, according to David Held, an associate professor at Carnegie Mellon University. The work, funded in part by the Toyota Research Institute, could have significant implications for the robotics industry, enabling robots to learn and adapt at an unprecedented scale.
