"There is something inside of me. What is it?"
- Vincent van Gogh
I grew up in Chicago, and did my undergraduate studies at Columbia University in New York, where I majored in neuroscience and behavior.
After graduating in 2007, I spent a year at the Center for Neural Science at New York University, where I studied reinforcement learning in humans
and monkeys. I received my Ph.D. in psychology and neuroscience from Princeton University in 2013. I'm currently a postdoctoral fellow at MIT in Josh Tenenbaum's Computational Cognitive Science Group.
The focus of my work is on computational models of learning and memory. In particular, I'm interested in the extent to which
we can understand the brain as performing statistical inference. I test the predictions of these models in humans and animals using behavioral and brain-imaging methods. You can read my CV here.
Copyright notice: The documents distributed here have been provided as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by these copyrights. These works may not be reposted without the explicit permission of the copyright holder.
Gershman, S.J., Monfils, M.-H., Norman, K.A., & Niv, Y. (in preparation). The computational nature of memory reconsolidation.
Gershman, S.J., Moustafa, A.A., & Ludvig, E.A. (submitted). Time representation in reinforcement learning models of the basal ganglia.
Gershman, S.J., Radulescu, A., Norman, K.A., & Niv, Y. (submitted). Statistical computations underlying the dynamics of memory updating.
Soto, F.A., Gershman, S.J., & Niv, Y. (submitted). Explaining compound generalization in associative and causal learning through rational principles of dimensional generalization.
Gershman, S.J., Blei, D.M, Norman, K.A., & Sederberg, P.B. (submitted). Decomposing spatiotemporal brain images into topographic latent sources. [code]
Gershman, S.J. & Niv, Y. (submitted). Novelty and inductive generalization in human reinforcement learning.
Gershman, S.J., Frazier, P.I., & Blei, D.M. (submitted). Distance dependent infinite latent feature models. [Supplementary Materials] [code]
Feng, S.F., Schwemmer, M., Gershman, S.J., & Cohen, J.D. (in press). Multitasking vs. multiplexing: Toward a normative account of capacity constraints in cognitive control. Cognitive, Affective, and Behavioral Neuroscience.
Gershman, S.J. (in press). The penumbra of learning: A statistical theory of synaptic tagging and capture. Network: Computation in Neural Systems.
Gershman, S.J. (in press). Dopamine ramps are a consequence of reward prediction errors. Neural Computation.
Austerweil, J.L., Gershman, S.J., Tenenbaum, J.B., & Griffiths, T.L. (in press). Structure and flexibility in Bayesian models of cognition. In J.R. Busemeyer, J.T. Townsend, Z. Wang, & A. Eidels (Eds.), Oxford Handbook of Computational and Mathematical Psychology. Oxford University Press.
Gershman, S.J., Markman, A.B., & Otto, A.R. (in press). Retrospective revaluation in sequential decision making: a tale of two systems. Journal of Experimental Psychology: General.
Gershman, S.J. (2013). Computation with dopaminergic modulation. In D. Jaeger & R. Jung (Eds.), Encyclopedia of Computational Neuroscience. Springer.
Gershman, S.J. (2013). Bayesian behavioral data analysis. In D. Jaeger & R. Jung (Eds.), Encyclopedia of Computational Neuroscience. Springer.
Gershman, S.J., Jones, C.E., Norman, K.A., Monfils, M.-H., & Niv, Y. (2013). Gradual extinction prevents the return of fear: Implications for the discovery of state. Frontiers in Behavioral Neuroscience. doi: 10.3389/fnbeh.2013.00164.
Detre, G.J., Natarajan, A., Gershman, S.J., & Norman, K.A. (2013). Moderate levels of activation lead to forgetting in the think/no-think paradigm. Neuropsychologia, 51, 2371-2388. [Supplementary Materials] [code]
Christakou, A., Gershman, S.J., Niv, Y., Simmons, A., Brammer, M., & Rubia, K. (2013). Neural and psychological maturation of decision-making in adolescence and young adulthood. Journal of Cognitive Neuroscience, 25, 1807-1823.
Gershman, S.J. & Niv, Y. (2013). Perceptual estimation obeys Occam's razor. Frontiers in Psychology, 23, doi: 10.3389/fpsyg.2013.00623.
Gershman, S.J., Schapiro, A.C., Hupbach, A., & Norman, K.A. (2013). Neural context reinstatement predicts memory misattribution. Journal of Neuroscience, 33, 8590-8595.
Otto, A.R., Gershman, S.J., Markman, A.B., & Daw, N.D. (2013). The curse of planning: Dissecting multiple reinforcement learning systems by taxing the central executive. Psychological Science, 24, 751-761. [Supplementary Materials]
Gershman, S.J., Jäkel, F.J., & Tenenbaum, J.B. (2013). Bayesian vector analysis and the perception of hierarchical motion. Proceedings of the 35th Annual Conference of the Cognitive Science Society.
Wingate, D., Diuk, C., O'Donnell, T., Tenenbaum, J.B., & Gershman, S.J. (2013). Compositional policy priors. MIT CSAIL Technical Report 2013-007.
Gershman, S.J. (2013). Memory modification in the brain: Computational and experimental investigations. Ph.D. thesis, Princeton University, Department of Psychology.
Gershman, S.J. & Niv, Y. (2012). Exploring a latent cause model of classical conditioning. Learning & Behavior, 40, 255-268. [Supplementary Materials]
Gershman, S.J., Hoffman, M.D., & Blei, D.M. (2012). Nonparametric variational inference. Proceedings of the 29th International Conference on Machine Learning. [code]
Gershman, S.J., Moore, C.D., Todd, M.T., Norman, K.A., & Sederberg, P.B. (2012). The successor representation and temporal context. Neural Computation, 24, 1553-1568.
Gershman, S.J. & Blei, D.M. (2012). A tutorial on Bayesian nonparametric models. Journal of Mathematical Psychology, 56, 1-12. [correction]
Gershman, S.J. & Daw, N.D. (2012). Perception, action and utility: the tangled skein. In M. Rabinovich, K. Friston, & P. Varona (Eds.), Principles of Brain Dynamics: Global State Interactions. MIT Press.
Gershman, S.J., Vul, E., & Tenenbaum, J.B. (2012). Multistability and perceptual inference. Neural Computation, 24, 1-24.
Gershman, S.J., Blei, D.M., Pereira, F., & Norman, K.A. (2011). A topographic latent source model for fMRI data. NeuroImage, 57, 89-100.
Sederberg, P.B., Gershman, S.J., Polyn, S.M., & Norman, K.A. (2011). Human memory reconsolidation can be explained using the Temporal Context Model. Psychonomic Bulletin and Review, 18, 455-468.
Daw, N.D., Gershman, S.J., Seymour, B., Dayan, P., & Dolan, R.J. (2011). Model-based influences on humans' choices and striatal prediction errors. Neuron, 69, 1204-1215. [Supplementary Materials]
Gershman, S.J. & Wilson, R.C. (2010). The neural costs of optimal control. Advances in Neural Information Processing Systems 23.
Gershman, S.J., Cohen, J.D., & Niv, Y. (2010). Learning to selectively attend. Proceedings of the 32nd Annual Conference of the Cognitive Science Society.
Gershman, S.J. & Niv, Y. (2010). Learning latent structure: Carving nature at its joints. Current Opinion in Neurobiology, 20, 1-6.
Gershman, S.J., Blei, D.M., & Niv, Y. (2010). Context, learning, and extinction. Psychological Review, 117, 197-209.
Gershman, S.J., Vul, E., & Tenenbaum, J.B. (2009). Perceptual multistability as Markov chain Monte Carlo inference. Advances in Neural Information Processing Systems 22.
Socher, R., Gershman, S.J., Perotte, A., Sederberg, P.B., Blei, D.M., & Norman, K.A. (2009). A Bayesian analysis of dynamics in free recall. Advances in Neural Information Processing Systems 22. [code+data]
Gershman, S.J., Pesaran, B., & Daw, N.D. (2009). Human reinforcement learning subdivides structured action spaces by learning effector-specific values. Journal of Neuroscience, 29, 13524-13531. [Supplementary Materials]
Disclaimer: Unless otherwise noted, the software provided below is for academic research purposes only. I provide no guarantees whatsoever. All software is written in MATLAB, unless otherwise specified.