"There is something inside of me. What is it?"
- Vincent van Gogh
In July 2015 I will be starting as an assistant professor in the Department of Psychology and Center for Brain Science at Harvard.
Please contact me if you are interested in doing graduate or postdoctoral work in my lab.
I grew up in Chicago, and did my undergraduate studies at Columbia University in New York, where I majored in neuroscience and behavior.
After graduating in 2007, I spent a year at the Center for Neural Science at New York University, where I studied reinforcement learning in humans
and monkeys. I received my Ph.D. in psychology and neuroscience from Princeton University in 2013. I'm currently a postdoctoral fellow at MIT in Josh Tenenbaum's Computational Cognitive Science Group.
My research aims to understand how richly structured knowledge about the environment is acquired, and how this knowledge aids adaptive behavior. I use a combination of behavioral, neuroimaging and computational techniques to pursue these questions.
One prong of this research focuses on how humans and animals discover the hidden states underlying their observations, and how they represent these states. In some cases, these states correspond to complex data structures, such as graphs, grammars or programs, which strongly constrain how agents infer which actions will lead to reward. A second prong teases apart the interactions between different learning systems. Evidence suggests the existence of at least two systems: a "goal-directed" system that builds an explicit model of the environment, and a "habitual" system that learns state-action response rules. These systems are subserved by separate neural pathways that compete for control of behavior, but they may also cooperate with one another.
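The goal-directed/habitual distinction is commonly formalized as model-based versus model-free reinforcement learning. The sketch below is a minimal toy illustration of that contrast, not an implementation of any model from my papers; the environment, function names, and parameters are all invented for illustration.

```python
import numpy as np

# Toy deterministic environment: states 0-3, actions 0/1; state 3 is a
# rewarding terminal state. T and R play the role of the agent's "model".
T = {(0, 0): 1, (0, 1): 2, (1, 0): 3, (1, 1): 0, (2, 0): 0, (2, 1): 3}
R = {(1, 0): 1.0, (2, 1): 1.0}
GAMMA = 0.9

def model_based_values(n_iters=50):
    """Goal-directed system: value iteration on an explicit model (T, R)."""
    Q = {sa: 0.0 for sa in T}
    V = {s: 0.0 for s in range(4)}  # V[3] stays 0 (terminal)
    for _ in range(n_iters):
        for (s, a), s2 in T.items():
            Q[(s, a)] = R.get((s, a), 0.0) + GAMMA * V[s2]
        for s in range(3):
            V[s] = max(Q[(s, 0)], Q[(s, 1)])
    return Q

def model_free_values(n_episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Habitual system: Q-learning from raw experience, no explicit model."""
    rng = np.random.default_rng(seed)
    Q = {sa: 0.0 for sa in T}
    for _ in range(n_episodes):
        s = 0
        while s != 3:
            # epsilon-greedy action selection
            a = int(rng.integers(2)) if rng.random() < eps else int(Q[(s, 1)] > Q[(s, 0)])
            s2, r = T[(s, a)], R.get((s, a), 0.0)
            target = r if s2 == 3 else r + GAMMA * max(Q[(s2, 0)], Q[(s2, 1)])
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q
```

Both systems converge to similar action values here, but only the model-based system can revalue actions immediately when the reward structure changes; the model-free system must relearn from experience, which is one behavioral signature used to dissociate the two.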
You can read my CV here.
How can we unlearn fear? Gradually, it turns out. Footnote (December 30, 2013).
The exploitative economics of academic publishing. Footnote (March 18, 2014).
Copyright notice: The documents distributed here have been provided as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by these copyrights. These works may not be reposted without the explicit permission of the copyright holder.
Gershman, S.J., Malmaud, J., & Tenenbaum, J.B. (in preparation). Structured representations of utility in combinatorial domains.
Gershman, S.J., Monfils, M.-H., Norman, K.A., & Niv, Y. (in preparation). The computational nature of memory reconsolidation.
Huys, Q.J.M., Lally, N., Faulkner, P., Seifritz, E., Gershman, S.J., Dayan, P., & Roiser, J.P. (submitted). The interplay of approximate planning strategies.
Niv, Y., Daniel, R., Geana, A., Gershman, S.J., Leong, Y.C., Radulescu, A., & Wilson, R.C. (submitted). Reinforcement learning in multidimensional environments relies on attention mechanisms.
Gershman, S.J. & Hartley, C.A. (submitted). Individual differences in learning predict the return of fear.
Gershman, S.J. (submitted). Do learning rates adapt to the distribution of rewards?
Kulkarni, T.D., Saeedi, A., & Gershman, S.J. (submitted). Variational particle approximations. [Ising model code] [github code]
Gershman, S.J., Radulescu, A., Norman, K.A., & Niv, Y. (in press). Statistical computations underlying the dynamics of memory updating. PLoS Computational Biology.
Stachenfeld, K.L., Botvinick, M.M., & Gershman, S.J. (in press). Design principles of the hippocampal cognitive map. Advances in Neural Information Processing Systems 27.
Gershman, S.J. & Niv, Y. (in press). Novelty and inductive generalization in human reinforcement learning. Topics in Cognitive Science.
Gershman, S.J., Frazier, P.I., & Blei, D.M. (in press). Distance dependent infinite latent feature models. IEEE Transactions on Pattern Analysis and Machine Intelligence. [Supplementary Materials] [code]
Austerweil, J.L., Gershman, S.J., Tenenbaum, J.B., & Griffiths, T.L. (in press). Structure and flexibility in Bayesian models of cognition. In J.R. Busemeyer, J.T. Townsend, Z. Wang, & A. Eidels (Eds.), Oxford Handbook of Computational and Mathematical Psychology. Oxford University Press.
Gershman, S.J. (2014). The penumbra of learning: A statistical theory of synaptic tagging and capture. Network: Computation in Neural Systems, 25, 97-115.
Soto, F.A., Gershman, S.J., & Niv, Y. (2014). Explaining compound generalization in associative and causal learning through rational principles of dimensional generalization. Psychological Review, 121, 526-558.
Gershman, S.J., Blei, D.M., Norman, K.A., & Sederberg, P.B. (2014). Decomposing spatiotemporal brain patterns into topographic latent sources. NeuroImage, 98, 91-102. [code]
Gershman, S.J. & Goodman, N.D. (2014). Amortized inference in probabilistic reasoning. Proceedings of the 36th Annual Conference of the Cognitive Science Society.
Tsividis, P., Gershman, S.J., Tenenbaum, J.B., & Schulz, L. (2014). Information selection in noisy environments with large action spaces. Proceedings of the 36th Annual Conference of the Cognitive Science Society.
Feng, S.F., Schwemmer, M., Gershman, S.J., & Cohen, J.D. (2014). Multitasking vs. multiplexing: Toward a normative account of limitations in the simultaneous execution of control-demanding behaviors. Cognitive, Affective, and Behavioral Neuroscience, 14, 129-146.
Gershman, S.J. (2014). Dopamine ramps are a consequence of reward prediction errors. Neural Computation, 26, 467-471.
Gershman, S.J., Markman, A.B., & Otto, A.R. (2014). Retrospective revaluation in sequential decision making: a tale of two systems. Journal of Experimental Psychology: General, 143, 182-194.
Gershman, S.J., Moustafa, A.A., & Ludvig, E.A. (2014). Time representation in reinforcement learning models of the basal ganglia. Frontiers in Computational Neuroscience. doi: 10.3389/fncom.2013.00194
Gershman, S.J. (2013). Computation with dopaminergic modulation. In Jaeger D., Jung R. (Eds.) Encyclopedia of Computational Neuroscience. Springer.
Gershman, S.J. (2013). Bayesian behavioral data analysis. In Jaeger D., Jung R. (Eds.) Encyclopedia of Computational Neuroscience. Springer.
Gershman, S.J., Jones, C.E., Norman, K.A., Monfils, M.-H., & Niv, Y. (2013). Gradual extinction prevents the return of fear: Implications for the discovery of state. Frontiers in Behavioral Neuroscience. doi: 10.3389/fnbeh.2013.00164. [article in Footnote magazine]
Detre, G.J., Natarajan, A., Gershman, S.J., & Norman, K.A. (2013). Moderate levels of activation lead to forgetting in the think/no-think paradigm. Neuropsychologia, 51, 2371-2388. [Supplementary Materials] [code]
Christakou, A., Gershman, S.J., Niv, Y., Simmons, A., Brammer, M., & Rubia, K. (2013). Neural and psychological maturation of decision-making in adolescence and young adulthood. Journal of Cognitive Neuroscience, 25, 1807-1823.
Gershman, S.J. & Niv, Y. (2013). Perceptual estimation obeys Occam's razor. Frontiers in Psychology. doi: 10.3389/fpsyg.2013.00623.
Gershman, S.J., Schapiro, A.C., Hupbach, A., & Norman, K.A. (2013). Neural context reinstatement predicts memory misattribution. Journal of Neuroscience, 33, 8590-8595.
Otto, A.R., Gershman, S.J., Markman, A.B., & Daw, N.D. (2013). The curse of planning: Dissecting multiple reinforcement learning systems by taxing the central executive. Psychological Science, 24, 751-761. [Supplementary Materials]
Gershman, S.J., Jäkel, F.J., & Tenenbaum, J.B. (2013). Bayesian vector analysis and the perception of hierarchical motion. Proceedings of the 35th Annual Conference of the Cognitive Science Society.
Wingate, D., Diuk, C., O'Donnell, T., Tenenbaum, J.B., & Gershman, S.J. (2013). Compositional policy priors. MIT CSAIL Technical Report 2013-007.
Gershman, S.J. (2013). Memory modification in the brain: computational and experimental investigations. Ph.D. Thesis, Princeton University, Department of Psychology.
Gershman, S.J. & Niv, Y. (2012). Exploring a latent cause model of classical conditioning. Learning & Behavior, 40, 255-268. [Supplementary Materials]
Gershman, S.J., Hoffman, M.D., & Blei, D.M. (2012). Nonparametric variational inference. Proceedings of the 29th International Conference on Machine Learning. [code]
Gershman, S.J., Moore, C.D., Todd, M.T., Norman, K.A., & Sederberg, P.B. (2012). The successor representation and temporal context. Neural Computation, 24, 1553-1568.
Gershman, S.J. & Blei, D.M. (2012). A tutorial on Bayesian nonparametric models. Journal of Mathematical Psychology, 56, 1-12. [correction]
Gershman, S.J. & Daw, N.D. (2012). Perception, action and utility: the tangled skein. In M. Rabinovich, K. Friston, P. Varona (Ed.), Principles of Brain Dynamics: Global State Interactions. MIT Press.
Gershman, S.J., Vul, E., & Tenenbaum, J.B. (2012). Multistability and perceptual inference. Neural Computation, 24, 1-24.
Gershman, S.J., Blei, D.M., Pereira, F., & Norman, K.A. (2011). A topographic latent source model for fMRI data. NeuroImage, 57, 89-100.
Sederberg, P.B., Gershman, S.J., Polyn, S.M., & Norman, K.A. (2011). Human memory reconsolidation can be explained using the Temporal Context Model. Psychonomic Bulletin and Review, 18, 455-468.
Daw, N.D., Gershman, S.J., Seymour, B., Dayan, P., & Dolan, R.J. (2011). Model-based influences on humans' choices and striatal prediction errors. Neuron, 69, 1204-1215. [Supplementary Materials]
Gershman, S.J. & Wilson, R.C. (2010). The neural costs of optimal control. Advances in Neural Information Processing Systems 23.
Gershman, S.J., Cohen, J.D., & Niv, Y. (2010). Learning to selectively attend. Proceedings of the 32nd Annual Conference of the Cognitive Science Society.
Gershman, S.J. & Niv, Y. (2010). Learning latent structure: Carving nature at its joints. Current Opinion in Neurobiology, 20, 1-6.
Gershman, S.J., Blei, D.M., & Niv, Y. (2010). Context, learning, and extinction. Psychological Review, 117, 197-209.
Gershman, S.J., Vul, E., & Tenenbaum, J.B. (2009). Perceptual multistability as Markov chain Monte Carlo inference. Advances in Neural Information Processing Systems 22.
Socher, R., Gershman, S.J., Perotte, A., Sederberg, P.B., Blei, D.M., & Norman, K.A. (2009). A Bayesian analysis of dynamics in free recall. Advances in Neural Information Processing Systems 22. [code+data]
Gershman, S.J., Pesaran, B., & Daw, N.D. (2009). Human reinforcement learning subdivides structured action spaces by learning effector-specific values. Journal of Neuroscience, 29, 13524-13531. [Supplementary Materials]
Disclaimer: Unless otherwise noted, the software provided below is for academic research purposes only. I provide no guarantees whatsoever. All software is written in Matlab, unless otherwise specified.