MuTT: A Multimodal Trajectory Transformer for Robot Skills

Claudius Kienle, Benjamin Alt, Onur Celik, Philipp Becker, Darko Katic, Rainer Jakel, Gerhard Neumann

Published in the International Conference on Intelligent Robots and Systems (IROS), 2024

Abstract:

High-level robot skills represent an increasingly popular paradigm in robot programming. However, configuring the skills' parameters for a specific task remains a manual and time-consuming endeavor. Existing approaches for learning or optimizing these parameters often require numerous real-world executions or do not work in dynamic environments. To address these challenges, we propose MuTT, a novel encoder-decoder transformer architecture designed to predict environment-aware executions of robot skills by integrating vision, trajectory, and robot skill parameters. Notably, we pioneer the fusion of vision and trajectory, introducing a novel trajectory projection. Furthermore, we demonstrate MuTT's efficacy as a predictor when combined with a model-based robot skill optimizer. This approach enables the optimization of robot skill parameters for the current environment, without requiring real-world executions during optimization. Designed for compatibility with any representation of robot skills, MuTT demonstrates its versatility across three comprehensive experiments, showing superior performance across two different skill representations.
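The trajectory projection mentioned above can be pictured as a ViT-style patch embedding applied along the time axis. The sketch below is an illustrative assumption, not the paper's exact implementation: a trajectory of T timesteps with D dimensions per step is split into non-overlapping patches of P timesteps, and each flattened patch is linearly projected to an E-dimensional token that a transformer encoder can fuse with vision and skill-parameter tokens. All names and sizes here are hypothetical.

```python
import numpy as np

def project_trajectory(traj: np.ndarray, patch_len: int, w: np.ndarray) -> np.ndarray:
    """Map a (T, D) trajectory to (T // patch_len) tokens of dimension E.

    Hedged sketch of a trajectory-to-token projection; the actual MuTT
    projection may differ (e.g. overlap, normalization, positional terms).
    """
    t, d = traj.shape
    assert t % patch_len == 0, "pad the trajectory so patch_len divides T"
    # Flatten each temporal patch into a single vector of length patch_len * D.
    patches = traj.reshape(t // patch_len, patch_len * d)
    # Linear projection into the transformer's embedding space.
    return patches @ w  # shape: (num_patches, E)

rng = np.random.default_rng(0)
T, D, P, E = 64, 7, 8, 32            # e.g. 64 steps of 7-DoF poses, patches of 8
w = rng.standard_normal((P * D, E))  # stand-in for a learned projection matrix
tokens = project_trajectory(rng.standard_normal((T, D)), P, w)
print(tokens.shape)  # (8, 32)
```

The resulting token sequence has the same interface as image-patch tokens, which is what makes joint attention over vision and trajectory straightforward in an encoder-decoder transformer.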