Learning to See before Learning to Act: Visual Pre-training
We find that pre-training on vision tasks significantly improves generalization and sample efficiency for learning to manipulate objects.
computer-vision transfer-learning robotics affordance research
Objectives & Highlights

Does having visual priors (e.g. the ability to detect objects) facilitate learning to perform vision-based manipulation (e.g. picking up objects)? We study this question under the framework of transfer learning, where the model is first trained on a passive vision task and then adapted to perform an active manipulation task. We find that pre-training on vision tasks significantly improves generalization and sample efficiency for learning to manipulate objects. However, realizing these gains requires careful selection of which parts of the model to transfer. Our key insight is that the outputs of standard vision models correlate highly with the affordance maps commonly used in manipulation. We therefore explore directly transferring model parameters from vision networks to affordance prediction networks, and show that this can result in successful zero-shot adaptation, where a robot can pick up certain objects with zero robotic experience.
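The parameter-transfer idea can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's actual architecture: a toy convolutional backbone stands in for a pre-trained vision model (e.g. a segmentation network), and its weights are copied into an affordance prediction network that shares the same backbone but has a one-channel per-pixel output head.

```python
import torch
import torch.nn as nn

def make_backbone():
    # Placeholder backbone; the real networks in the paper are much deeper.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    )

# Vision network with e.g. 21 segmentation classes, and an affordance
# network predicting a single-channel pick-up score per pixel.
vision_net = nn.Sequential(make_backbone(), nn.Conv2d(32, 21, 1))
afford_net = nn.Sequential(make_backbone(), nn.Conv2d(32, 1, 1))

# Zero-shot transfer: copy the shared backbone weights from the
# (pre-trained) vision model; the task-specific head is left as-is.
afford_net[0].load_state_dict(vision_net[0].state_dict())

x = torch.randn(1, 3, 64, 64)                 # dummy RGB observation
affordance_map = torch.sigmoid(afford_net(x)) # per-pixel pick-up scores
print(affordance_map.shape)                   # torch.Size([1, 1, 64, 64])
```

In practice the robot would pick the pixel with the highest affordance score as its grasp or suction point.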

Takeaways & Next Steps

With a small amount of robotic experience, we can further fine-tune the affordance model to achieve better results: with just 10 minutes of suction experience or 1 hour of grasping experience, our method achieves a ~80% success rate at picking up novel objects.
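The fine-tuning step described above can be sketched as follows. Everything here is illustrative: the tensor sizes, loss, and the choice to freeze the transferred backbone and update only the affordance head are assumptions, not the paper's exact recipe, and the random tensors merely stand in for a handful of labeled robot trials.

```python
import torch
import torch.nn as nn

# Transferred backbone (pre-trained features) plus a fresh affordance head.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
head = nn.Conv2d(16, 1, 1)
model = nn.Sequential(backbone, head)

for p in backbone.parameters():  # keep the transferred features fixed
    p.requires_grad = False

head_before = head.weight.detach().clone()
opt = torch.optim.Adam(head.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Tiny batch of (image, per-pixel success label) pairs standing in for
# a few minutes of real suction/grasping trials.
images = torch.randn(8, 3, 32, 32)
labels = torch.rand(8, 1, 32, 32).round()

for step in range(5):
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```

Because only the head is in the optimizer, the backbone's pre-trained weights survive fine-tuning untouched, which is one common way to avoid overfitting when labeled robot experience is scarce.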


PhD student at MIT CSAIL
Similar projects
GradCAM for the BreaKHis Dataset
An NBDev package for fine-tuning ResNets to visualize gradient-weighted class activation for the BreaKHis dataset.
Made With ML Top Resources
A tagged and curated collection of trending tutorials, toolkits and research.
Transformer OCR
Rectification-free OCR using spatial attention from Transformers.
What does a CNN see?
First super clean notebook showcasing @TensorFlow 2.0. An example of end-to-end DL with interpretability.