Carl Vondrick

Ph.D. Student
Massachusetts Institute of Technology
Email: vondrick@mit.edu

Resume · GitHub · Scholar

About Me

I am a Ph.D. student at MIT in computer science.

My research studies computer vision and machine learning. Scene understanding models excel with large amounts of labeled data, but labeling is expensive to scale. My work explores how to efficiently leverage human effort and unlabeled data to create more powerful perception systems.

I work at CSAIL, where I am advised by Antonio Torralba. I completed my undergraduate degree at UC Irvine, where I was advised by Deva Ramanan. I have spent some summers at Google and Google X.

Thank you to Google and the NSF for supporting my research!

Papers by Project

Leveraging Unlabeled Video and Text

Although they lack annotations, unlabeled video and text are abundantly available and contain rich signals about the world. How do we use this resource to develop more powerful perceptual systems?

Generating Videos with Scene Dynamics
Carl Vondrick, Hamed Pirsiavash, Antonio Torralba
NIPS 2016
Coming Soon

Learning Sound Representations from Unlabeled Video
Yusuf Aytar, Carl Vondrick, Antonio Torralba
NIPS 2016
Coming Soon

Anticipating Visual Representations with Unlabeled Video
Carl Vondrick, Hamed Pirsiavash, Antonio Torralba
CVPR 2016
Paper · NPR · CNN · AP · Wired · Late Show with Stephen Colbert · MIT News

Predicting Motivations of Actions by Leveraging Text
Carl Vondrick, Deniz Oktay, Hamed Pirsiavash, Antonio Torralba
CVPR 2016
Paper · Data

Cross-Modal Transfer

Objects and events manifest in many modalities (e.g., natural images, cartoons, sound, text). How can we represent concepts agnostic to their modality? How can we transfer between modalities?

Learning Aligned Cross-Modal Representations from Weakly Aligned Data
Lluis Castrejon*, Yusuf Aytar*, Carl Vondrick, Hamed Pirsiavash, Antonio Torralba
CVPR 2016
Paper · Project Page · Demo

See also: Learning Sound Representations from Unlabeled Video


Human Activity Understanding

The ability to visually understand people is important for human-machine interaction. How can we train machines to better understand people's activities and intentions?

Where are they looking?
Adria Recasens*, Aditya Khosla*, Carl Vondrick, Antonio Torralba
NIPS 2015
Paper · Project Page · Demo

Assessing the Quality of Actions
Hamed Pirsiavash, Carl Vondrick, Antonio Torralba
ECCV 2014
Paper · Code + Data

See also: Anticipating Visual Representations with Unlabeled Video

See also: Predicting Motivations of Actions by Leveraging Text


Diagnosing Computer Vision Models

To improve computer vision models, it is instructive to understand and diagnose their failures. We are interested in analyzing and visualizing computer vision models. How much training data do we need? What bottlenecks prevent us from effectively capitalizing on big data?

Visualizing Object Detection Features
Carl Vondrick, Aditya Khosla, Hamed Pirsiavash, Tomasz Malisiewicz, Antonio Torralba
IJCV 2016
Paper · Project Page · Slides · MIT News

Do We Need More Training Data?
Xiangxin Zhu, Carl Vondrick, Charless C. Fowlkes, Deva Ramanan
IJCV 2015
Paper · 10x Data

Learning Visual Biases from Human Imagination
Carl Vondrick, Hamed Pirsiavash, Aude Oliva, Antonio Torralba
NIPS 2015
Paper · Project Page · Technology Review

HOGgles: Visualizing Object Detection Features
Carl Vondrick, Aditya Khosla, Tomasz Malisiewicz, Antonio Torralba
ICCV 2013
Paper · Project Page · Slides · MIT News

Do We Need More Training Data or Better Models for Object Detection?
Xiangxin Zhu, Carl Vondrick, Deva Ramanan, Charless C. Fowlkes
BMVC 2012
Paper · Slides · 10x Data


Efficient Video Annotation

Large labeled datasets have enabled significant advances in image understanding. Progress in video understanding has lagged behind, however, possibly because labeled video is much more expensive to collect. We seek better methods for annotating video efficiently; our research developed a system that can annotate massive video datasets for a fraction of the usual cost.

Efficiently Scaling Up Crowdsourced Video Annotation
Carl Vondrick, Donald Patterson, Deva Ramanan
IJCV 2012
Paper · Slides · Data + Code

Video Annotation and Tracking with Active Learning
Carl Vondrick, Deva Ramanan
NIPS 2011
Paper · Slides · Code

A Large-scale Benchmark Dataset for Event Recognition
Sangmin Oh, et al.
CVPR 2011
Paper · Slides · Data

Efficiently Scaling Up Video Annotation with Crowdsourced Marketplaces
Carl Vondrick, Deva Ramanan, Donald Patterson
ECCV 2010
Paper · Data + Code

I'm normally not a praying man, but if you're up there, please save me, Superman. — Homer Simpson