RT-2

Creator
Seonglae Cho
Created
2023 Mar 1 13:45
Editor
Seonglae Cho
Edited
2024 Mar 12 12:09
Refs
robotics-transformer2.github.io
https://robotics-transformer2.github.io/assets/rt2.pdf
RT-2: New model translates vision and language into action
Introducing Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalised instructions for robotic control, while retaining web-scale capabilities. This work builds upon Robotic Transformer 1 (RT-1), a model trained on multi-task demonstrations which can learn combinations of tasks and objects seen in the robotic data. RT-2 shows improved generalisation capabilities and semantic and visual understanding, beyond the robotic data it was exposed to. This includes interpreting new commands and responding to user commands by performing rudimentary reasoning, such as reasoning about object categories or high-level descriptions.
https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action
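The mechanism behind "translating vision and language into action" in the linked paper is that RT-2 represents each robot action as a short string of text tokens, so the vision-language model can emit actions the same way it emits words: an 8-dimensional action (terminate flag, 6-DoF end-effector displacement, gripper) with each dimension discretized into 256 bins. Below is a minimal sketch of the de-tokenization step, assuming that tokenization scheme; the function name and the per-dimension value ranges are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

# RT-2 (per the paper) discretizes each of 8 action dimensions into 256 bins
# and emits them as a string of integer tokens. This sketch decodes such a
# token sequence back into a continuous action vector.

N_BINS = 256
# ASSUMED per-dimension ranges, ordered as:
# [terminate, dx, dy, dz, droll, dpitch, dyaw, gripper]
ACTION_LOW = np.array([0.0, -0.1, -0.1, -0.1, -0.5, -0.5, -0.5, 0.0])
ACTION_HIGH = np.array([1.0, 0.1, 0.1, 0.1, 0.5, 0.5, 0.5, 1.0])

def detokenize_action(token_ids: list[int]) -> np.ndarray:
    """Map 8 integer bin indices (0..255) to a continuous action vector."""
    bins = np.asarray(token_ids, dtype=np.float64)
    assert bins.shape == (8,) and bins.max() < N_BINS
    # Take each bin's center and rescale it into that dimension's range.
    fraction = (bins + 0.5) / N_BINS
    return ACTION_LOW + fraction * (ACTION_HIGH - ACTION_LOW)

# Example: a model output string like "1 128 91 241 5 101 127 217"
# (the form shown in the RT-2 materials) is parsed to integers first.
tokens = [int(t) for t in "1 128 91 241 5 101 127 217".split()]
print(detokenize_action(tokens))
```

Encoding actions this way is what lets web-scale pretraining transfer to control: fine-tuning only has to teach the model a new "vocabulary" of action tokens rather than a separate action head.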

Copyright Seonglae Cho