I work in the interdisciplinary area of computational perception and cognition, at the interface of computer science and cognitive science. My research spans image memorability (what makes an image memorable?), visualizations (how can visual displays of data be made effective?), and saliency (how can computational models predict human attention?). At the intersection of these three areas are applications to user interfaces, visual content design, and educational tools. My goal is to further the understanding of how humans attend to, remember, and process visual stimuli, contributing to cognitive science while using this understanding to build computational applications.

A computational understanding of image memorability

Images carry the attribute of memorability: a predictive measure of whether an image will later be remembered or forgotten. Understanding how image memorability works and what affects it has numerous applications, from better user interfaces and design to smarter image search and educational tools. I am interested in gaining a better understanding of memorability from the ground up: to what extent is memorability consistent across individuals? How quickly can an image be forgotten? How can we model the effects of image context on memorability (can we make an image more memorable by changing its context)? Can we use people's eye movements and pupil dilations to make predictions about memorability? Read on for the answers to some of these questions.
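
As a concrete illustration of the first question, one standard analysis quantifies inter-observer consistency by splitting observers into random halves, scoring each image by its hit rate within each half, and correlating the two rankings. The sketch below is a minimal version of this analysis, not the exact procedure from the papers that follow; the data layout and function name are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

def split_half_consistency(hits, n_splits=25, seed=0):
    """Estimate how consistent memorability is across individuals.

    hits: (n_observers, n_images) binary array, where hits[i, j] = 1
    if observer i correctly recognized image j on its repeat showing.
    Returns the mean Spearman rank correlation between image scores
    computed from two random halves of the observers.
    """
    rng = np.random.default_rng(seed)
    n_obs = hits.shape[0]
    rhos = []
    for _ in range(n_splits):
        perm = rng.permutation(n_obs)
        half1, half2 = perm[: n_obs // 2], perm[n_obs // 2:]
        # Memorability score of an image = hit rate within one half.
        scores1 = hits[half1].mean(axis=0)
        scores2 = hits[half2].mean(axis=0)
        rho, _ = spearmanr(scores1, scores2)
        rhos.append(rho)
    return float(np.mean(rhos))
```

A high average correlation would indicate that memorability is largely an intrinsic, observer-independent property of the image.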

Bylinskii, Z., Isola, P., Bainbridge, C., Torralba, A., and Oliva, A.
"Intrinsic and Extrinsic Effects on Image Memorability"
Vision Research 2015 (in press)
[paper pdf]    [supplemental material]    [website]   
S.M. Thesis
Bylinskii, Z. "Computational Understanding of Image Memorability"
MIT Master's Thesis 2015
[thesis pdf]    [presentation slides]    [poster]   
Conference Posters and Abstracts
Bylinskii, Z., Isola, P., Torralba, A., and Oliva, A.
"How you look at a picture determines if you will remember it"
IEEE CVPR Scene Understanding Workshop (SUNw) 2015
[abstract]    [poster]   
Bylinskii, Z., Isola, P., Torralba, A., and Oliva, A.
"Modeling Context Effects on Image Memorability"
IEEE CVPR Scene Understanding Workshop (SUNw) 2015
[abstract]    [poster]   
Bylinskii, Z., Isola, P., Torralba, A., and Oliva, A.
"Quantifying Context Effects on Image Memorability"
Vision Sciences Society (VSS) 2015
Vo, M., Gavrilov, Z., and Oliva, A.
"Image Memorability in the Eye of the Beholder: Tracking the Decay of Visual Scene Representations"
Vision Sciences Society (VSS) 2013
[abstract]    [supplement]   

What makes a visualization memorable, comprehensible, and effective?

A collaboration with Harvard University's visualization group, this line of work aims to understand how people interact with and perceive data visualizations (graphs, charts, infographics, etc.). We are interested in answering questions such as: which visualizations are easily remembered, and why? What information can people extract from visualizations? How can we measure comprehension? Does chart junk help or hinder understanding and memorability? What do people pay the most attention to?

Borkin, M.A.*, Bylinskii, Z.*, Kim, N.W., Bainbridge, C.M., Yeh, C.S., Borkin, D., Pfister, H., and Oliva, A.
"Beyond Memorability: Visualization Recognition and Recall"
IEEE Transactions on Visualization and Computer Graphics (Proceedings of InfoVis) 2015
[paper pdf]    [supplemental material]    [website]    [teaser video]    [media coverage]
* = equal contribution
Borkin, M., Vo, A., Bylinskii, Z., Isola, P., Sunkavalli, S., Oliva, A., and Pfister, H.
"What Makes a Visualization Memorable?"
IEEE Transactions on Visualization and Computer Graphics (Proceedings of InfoVis) 2013
[paper pdf]    [supplemental material]    [website]    [media coverage]
Conference Posters and Abstracts
Bylinskii, Z., and Borkin, M.A.
"Eye Fixation Metrics for Large Scale Analysis of Information Visualizations"
First Workshop on Eye Tracking and Visualization (ETVIS) in conjunction with IEEE VIS 2015
[paper pdf]    [presentation slides]    [data+code]   

Kim, N.W., Bylinskii, Z., Borkin, M.A., Oliva, A., Gajos, K.Z., and Pfister, H.
"A Crowdsourced Alternative to Eye-tracking for Visualization Understanding"
CHI Extended Abstracts (CHI'15 EA) 2015
[paper pdf]    [poster]   

Saliency Benchmarking

Saliency is an information measure over images that can be used to determine the most important parts of an image. Applications span from image compression, video parsing, and computational photography to robot navigation, user interfaces, and object detection. Many new saliency models are developed every year, and tracking progress in the field is becoming increasingly difficult. I am interested in exploring evaluation methodologies for objectively comparing saliency models. This requires an analysis of datasets, metrics, models, and the surrounding design considerations. I am currently running the MIT Saliency Benchmark, offering a benchmark dataset and a regularly updated results page with the latest models and metrics.
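
To make the evaluation problem concrete, below is a minimal implementation of one widely used metric, Normalized Scanpath Saliency (NSS): the average value of the normalized saliency map at the locations humans actually fixated. The function name and input conventions here are my own choices for illustration.

```python
import numpy as np

def nss(saliency_map, fixation_map):
    """Normalized Scanpath Saliency.

    saliency_map: 2D float array, the model's predicted saliency.
    fixation_map: 2D binary array of the same shape, with 1 at
    pixels fixated by human observers.
    Higher is better; 0 corresponds to chance-level prediction.
    """
    # Normalize the prediction to zero mean and unit variance,
    # then average it over the fixated locations.
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(s[fixation_map.astype(bool)].mean())
```

Different metrics reward different properties (e.g., NSS penalizes false positives in a way that AUC-style metrics largely do not), which is one reason a benchmark must report several of them side by side.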

Bylinskii, Z., DeGennaro, E., Rajalingham, R., Ruda, H., Zhang, J., and Tsotsos, J.K.
"Towards the quantitative evaluation of visual attention models"
Vision Research 2015 (in press)
[paper pdf]   
Benchmark Website
Bylinskii, Z., Judd, T., Borji, A., Itti, L., Durand, F., Oliva, A., and Torralba, A.
"MIT Saliency Benchmark"
Available at: http://saliency.mit.edu as of June 2014

Other Computer Vision Projects

Are all training examples equally valuable?

When learning a new concept, not all training examples may prove equally useful: some may have higher or lower training value than others. We make two observations: (1) some examples are better than others for training detectors or classifiers, and (2) in the presence of better examples, some examples may negatively impact performance, and removing them may be beneficial. We propose an approach for measuring the training value of an example, and use it to rank and greedily sort examples. Our experiments show that the performance of current state-of-the-art detectors and classifiers can be improved by training on a subset of the examples, rather than the whole training set.
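
A minimal sketch of the greedy idea (not the exact procedure of the paper below): treat an example's training value as the change in held-out performance when it is added, and grow the training set one example at a time. The helper name and the use of a linear SVM as a stand-in classifier are my own assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def greedy_select(X_train, y_train, X_val, y_val, k):
    """Greedily pick k training examples by their held-out value.

    At each step, add the candidate whose inclusion yields the highest
    validation accuracy: a simple proxy for its training value.
    Returns the selected indices and the accuracy after each addition.
    """
    # Seed with one example per class so the classifier can be fit.
    selected = [int(np.flatnonzero(y_train == c)[0]) for c in np.unique(y_train)]
    remaining = [i for i in range(len(y_train)) if i not in selected]
    curve = []
    while len(selected) < k and remaining:
        accs = []
        for i in remaining:
            idx = selected + [i]
            clf = LinearSVC().fit(X_train[idx], y_train[idx])
            accs.append(clf.score(X_val, y_val))
        best = remaining[int(np.argmax(accs))]
        selected.append(best)
        remaining.remove(best)
        curve.append(max(accs))
    return selected, curve
```

If the accuracy curve peaks before the full set is used, training on that subset outperforms training on all examples, which is the effect described above.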

Lapedriza, A., Pirsiavash, H., Bylinskii, Z., and Torralba, A.
"Are all training examples equally valuable?"
arXiv (1311.6510 [cs.CV]) 2013
[paper pdf]

Detecting Reduplication in Videos of American Sign Language

A framework is proposed for the detection of reduplication in digital videos of American Sign Language (ASL). In ASL, reduplication is used for a variety of linguistic purposes, including overt marking of plurality on nouns, aspectual inflection on verbs, and nominalization of verbal forms. Reduplication involves the repetition, often partial, of the articulation of a sign. In this paper, the Apriori algorithm for mining frequent patterns in data streams is adapted to find reduplication in videos of ASL.
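
To give a flavor of the adaptation, the toy sketch below mines frequent contiguous patterns from a stream of discrete symbols (e.g., per-frame codewords quantizing hand motion) using Apriori-style candidate pruning; a reduplicated sign would then surface as a frequent pattern whose occurrences are adjacent in time. This is a simplified illustration with hypothetical inputs, not the full pipeline of the paper.

```python
from collections import Counter

def frequent_subsequences(symbols, min_support=2, max_len=6):
    """Apriori-style mining of frequent contiguous patterns.

    symbols: a sequence of hashable tokens (e.g., motion codewords).
    Returns a dict mapping each frequent pattern (tuple) to its count.
    """
    # Frequent length-1 patterns.
    counts = Counter((s,) for s in symbols)
    current = {p: c for p, c in counts.items() if c >= min_support}
    frequent = {}
    length = 1
    while current and length < max_len:
        frequent.update(current)
        length += 1
        counts = Counter()
        for i in range(len(symbols) - length + 1):
            pat = tuple(symbols[i:i + length])
            # Apriori pruning: a pattern can only be frequent if both
            # of its length-(L-1) sub-patterns are frequent.
            if pat[:-1] in frequent and pat[1:] in frequent:
                counts[pat] += 1
        current = {p: c for p, c in counts.items() if c >= min_support}
    frequent.update(current)
    return frequent
```

The Apriori pruning step is what keeps the search tractable: candidate patterns are only extended from patterns already known to be frequent.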

Gavrilov, Z., Sclaroff, S., Neidle, C., and Dickinson, S.
"Detecting Reduplication in Videos of American Sign Language"
Proc. Eighth International Conf. on Language Resources and Evaluation (LREC) 2012
[paper pdf]    [poster]

Skeletal Part Learning for Efficient Object Indexing

The goal of this project is to construct an indexing and matching framework operating on graph encodings of object shapes. A parts-based indexing mechanism has greater robustness to occlusion and part articulation, while the graph-based representation provides angle and size invariance. The idea is to match object graphs pairwise to extract common recurring subgraphs, which then constitute the part vocabulary. Given a novel query object, its graph can be matched to the parts, which vote for object hypotheses. Classifiers can additionally be used to learn associations between object categories and object-to-part similarity values.
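
A toy sketch of the voting step (the graph matching itself is abstracted away, and all names here are hypothetical): each vocabulary part records the categories whose graphs it was extracted from, and the parts matched in a query cast weighted votes for those categories.

```python
from collections import defaultdict

class PartIndex:
    """Toy part-based index: matched parts vote for object categories."""

    def __init__(self):
        # part id -> set of categories whose graphs contained the part
        self.part_to_categories = defaultdict(set)

    def add_part(self, part_id, category):
        self.part_to_categories[part_id].add(category)

    def query(self, matched_part_ids):
        """Rank categories by votes from the query's matched parts."""
        votes = defaultdict(float)
        for pid in matched_part_ids:
            cats = self.part_to_categories.get(pid)
            if not cats:
                continue
            # A part shared by many categories is less discriminative,
            # so its single vote is split among them.
            for cat in cats:
                votes[cat] += 1.0 / len(cats)
        return sorted(votes.items(), key=lambda kv: -kv[1])
```

Because only the parts actually matched in the query drive the vote, occluded or articulated regions simply contribute fewer votes rather than breaking the match outright.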

Gavrilov, Z., Macrini, D., Zemel, R., and Dickinson, S.
"Skeletal Part Learning for Efficient Object Indexing"
Undergraduate Research Project 2013

I have been funded by the Natural Sciences and Engineering Research Council of Canada via the Undergraduate Summer Research Award (2010-2012), the Julie Payette Research Scholarship (2013), and the Doctoral Postgraduate Scholarship (2014-present).