Chamzas, C., Lippi, M., Welle, M.C., Varava, A., Kavraki, L.E., Kragic, D. (2022). Comparing Reconstruction- and Contrastive-based Models for Visual Task Planning. In IEEE International Conference on Intelligent Robots and Systems (pp. 12550-12557). IEEE. doi:10.1109/iros47612.2022.9981533
Comparing Reconstruction- and Contrastive-based Models for Visual Task Planning
Lippi, Martina
2022-01-01
Abstract
Learning state representations enables robotic planning directly from raw observations such as images. Several methods learn state representations by utilizing losses based on the reconstruction of the raw observations from a lower-dimensional latent space. The similarity between observations in the space of images is often assumed and used as a proxy for estimating similarity between the underlying states of the system. However, observations commonly contain task-irrelevant factors of variation which are nonetheless important for reconstruction, such as varying lighting and different camera viewpoints. In this work, we define relevant evaluation metrics and perform a thorough study of different loss functions for state representation learning. We show that models exploiting task priors, such as Siamese networks with a simple contrastive loss, outperform reconstruction-based representations in visual task planning in the presence of task-irrelevant factors of variation.
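The "simple contrastive loss" the abstract refers to can be sketched as follows. This is a minimal NumPy illustration of the standard pairwise contrastive objective for a Siamese setup (pull same-state pairs together, push different-state pairs at least a margin apart), not the paper's implementation; the embeddings and margin value are hypothetical.

```python
import numpy as np

def contrastive_loss(z1, z2, same_state, margin=1.0):
    """Pairwise contrastive loss on two latent embeddings.

    same_state=True  -> attract: loss = d^2
    same_state=False -> repel:   loss = max(0, margin - d)^2
    where d is the Euclidean distance between z1 and z2.
    """
    d = np.linalg.norm(z1 - z2)  # distance in latent space
    if same_state:
        return d ** 2
    return max(0.0, margin - d) ** 2

# Hypothetical 2-D latent vectors for illustration only.
z_a = np.array([0.0, 0.0])
z_b = np.array([0.1, 0.0])  # same underlying state (e.g. same scene, different lighting)
z_c = np.array([2.0, 0.0])  # different underlying state

print(contrastive_loss(z_a, z_b, same_state=True))   # small: similar pair is close
print(contrastive_loss(z_a, z_c, same_state=False))  # 0.0: already beyond the margin
```

The key property for task planning is that observations differing only in task-irrelevant factors (lighting, viewpoint) can be labeled as same-state pairs, so the latent space ignores them, whereas a reconstruction loss must encode them to reproduce the image.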


