Learning to See before Learning to Act: Visual Pre-training
We find that pre-training on vision tasks significantly improves generalization and sample efficiency for learning to manipulate objects.

Does having visual priors (e.g., the ability to detect objects) facilitate learning to perform vision-based manipulation (e.g., picking up objects)? We study this question under the framework of transfer learning, where a model is first trained on a passive vision task and then adapted to perform an active manipulation task. We find that pre-training on vision tasks significantly improves generalization and sample efficiency for learning to manipulate objects. However, realizing these gains requires careful selection of which parts of the model to transfer. Our key insight is that the outputs of standard vision models correlate strongly with the affordance maps commonly used in manipulation. We therefore explore directly transferring model parameters from vision networks to affordance-prediction networks, and show that this can enable successful zero-shot adaptation, where a robot can pick up certain objects with zero robotic experience.
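The parameter-transfer idea can be sketched in a few lines of PyTorch. The architecture, layer sizes, and names below are illustrative assumptions, not the authors' actual model: a small convolutional encoder stands in for a pre-trained vision network, and its weights are copied directly into an affordance-prediction network that shares the same trunk but has a different head.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a conv encoder "pre-trained" on a passive vision task
# (e.g., per-pixel classification) is copied directly into an affordance
# network. Shapes and layer names are illustrative only.

def make_encoder():
    # shared convolutional trunk used by both networks
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    )

class VisionModel(nn.Module):
    """Passive vision task: per-pixel class scores."""
    def __init__(self, num_classes=21):
        super().__init__()
        self.encoder = make_encoder()
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        return self.head(self.encoder(x))

class AffordanceNet(nn.Module):
    """Active task: a 1-channel per-pixel affordance map."""
    def __init__(self):
        super().__init__()
        self.encoder = make_encoder()
        self.head = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        return torch.sigmoid(self.head(self.encoder(x)))

vision = VisionModel()   # pretend this was pre-trained on a vision dataset
afford = AffordanceNet()

# Direct transfer: copy only the encoder parameters (the heads differ in
# shape between the two tasks). strict=False tolerates the missing head keys.
encoder_state = {k: v for k, v in vision.state_dict().items()
                 if k.startswith("encoder.")}
afford.load_state_dict(encoder_state, strict=False)

x = torch.randn(1, 3, 64, 64)
affordance_map = afford(x)   # shape (1, 1, 64, 64), values in [0, 1]
```

In the zero-shot setting, the affordance network would be used exactly like this: no robotic fine-tuning, just the transferred encoder plus a head, with high-affordance pixels selected as pick locations.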


Authors
PhD student at MIT CSAIL