Questions from the first workshop: ********************************** The goal of this workshop is not to review recent research, but to plan for the future. Rather than asking you to give your standard stump speech, we would like to ask you to reflect on a set of questions about what cortical computation might be, such as the ones listed below, or to propose an alternative set of questions. As you will see, we (the organizers, Marcus and Marblestone) have recently (in a Perspective in Science, Gary Marcus, Adam Marblestone, and Thomas Dean. "The atoms of neural computation." Science 346.6209 (2014): 551-552, http://www.sciencemag.org/content/346/6209/551, and a Supplement in arXiv) put forward a kind of opening bid: a proposal -- incomplete and indubitably flawed -- for a candidate set of cortical computations (Table 1 in the Supplement). You might like our list; you might hate it; we are not looking for assent, we are looking for progress -- which might consist in a better list of candidates, or a set of specific ideas about how to assess them empirically, or even an argument that the whole notion of trying to assemble such a list is wrong-headed, ideally coupled with a more compelling alternative suggestion. The goal is not to ratify our particular list, but to encourage a group discussion on how we as a field can better connect neurophysiology with ideas drawn from computation and higher-level cognition. On the first day, each of you will have 10 minutes to say something, deep, provocative, or outlandish, then we will have 10 minutes for group discussion of your remarks. There will also be plenty of time for general discussion. On the second day, we will form breakout groups, driven by the ideas that get the most traction on the first day; in our ideal world, the workshop will lead to a set of white papers outlining new directions towards bringing theory, experiment, computation, cognition and behavior together. Some sample questions that you might contemplate in your ten minutes (and throughout the weekend), though you should by no means consider the following list to be exhaustive, include the following. To make the discussion interesting and future-oriented, we encourage you to go out on a limb and speculate! 1) If you had to make your own list of the set of elementary computations that collectively underlie “mid-level” and “high-level” cognition in the mammalian brain, e.g., language understanding, visual scene parsing, navigation, planning and goal-directed rule application, what would be in it? (Is the Table of Computations in http://arxiv.org/abs/1410.8826 a useful starting point?) 2) To the extent that the kinds of computations we proposed -- or an alternative set that you propose -- seem plausible, how might these computations be implemented neurally? How could we test this? Be as specific as possible, and feel free to focus on a subset of the primitives we discussed (e.g., those involved in variable binding) or to suggest alternatives that you think we neglected. 3) To what extent is the cortex computationally uniform vs. diverse, and how do its functions rely on interaction with non-cortical brain areas? In other words, what is the computational division of labor between the various areas of cortex and the other non-cortical areas with which it interacts, such as the basal ganglia, cerebellum, thalamus and hippocampus. For example, is the cortex in and of itself capable of “binding symbolic variables”, or must this occur via interaction with other systems? (Some relevant references on possible neural implementations of variable binding are noted in the above-mentioned table) 4) How could a proposal about an inventory of computational primitives in the brain be tested experimentally and what tool-sets would be required to do so? 5) What would it take to move from a theory of computational primitives to a “whole brain” model or simulation? What new techniques do we need? How can we better foster theoretical development as a community? Questions from the second workshop: ********************************** The goal of this workshop is not to review recent research, but to plan for the future, and to crystallize / evolve our emerging ideas about vexing open questions. Rather than asking you to give your standard stump speech, we would like to ask you to reflect on a set of questions about how to strategically improve our understanding of brain computation. On the first day, each of you will have 15 minutes to say something, deep, provocative, or outlandish, then we will have 15 minutes for group discussion of your remarks. There will also be plenty of time for general discussion. On the second day, we will form breakout groups, driven by the ideas that get the most traction on the first day; in our ideal world, the workshop will lead to a set of white papers outlining new directions towards bringing theory, experiment, computation, cognition and behavior together. Some sample themes that you might contemplate in your fifteen minutes (and throughout the weekend), though you should by no means consider the following list to be exhaustive, include the following. To make the discussion interesting and future-oriented, we encourage you to go out on a limb and speculate, especially if you can do so in rich detail! It would be helpful if individuals or groups (particularly groups with the potential to reach convergence across different camps) could prepare some pre-written notes. 1) Suggest a plausible computational account of cognitive neuroanatomy, and how it could be tested or refined. Current preliminary examples include the SPAUN [1] and LEABRA [2] models. While differing in many important details, both models share certain core ideas [3], such as the basal ganglia as a switching network, trained by reinforcement learning, which gates information flow in the cortex via the thalamus, sensory cortex as a learned hierarchy of increasingly abstract feature representations, and prefrontal cortex as a working memory system that stores information transiently in the form of recurrent activity patterns. What is missing in such models, and how can empirical neuroscience disambiguate between such architectures? 2) What can we learn from artificial intelligence, that can inform our understanding of the brain, and vice versa? For example, rather than building a system from discrete, biologically-derived elementary mesoscale computations (e.g., the Table of Computations in [4]), current deep learning [5] systems begin with a neural network that is relatively unstructured microscopically, and then train this network using supervised back-propagation (e.g., Deep Q-Learning [6]). Does the brain use a form of supervised back-propagation to derive its detailed microstructure, and if so, how is such supervised learning implemented biologically? Can the apparent (partial) success of LSTM recurrent network models on certain syntactic tasks (e.g., translation) tell us something useful about how human language works? Do the recent progress in neural networks appended with eternal addressable memory units (e.g., Neural Turing Machines [7] and Memory Networks [8]) have implications for our models of the cortex or other structures? Conversely, how can maps and understandings of the biological brain function be used to improve artificial intelligence models? Could an understanding of the objective functions used by the brain [9] in learning/optimization be used to enable the development of ethical AI agents [10]? 3) Can we reverse engineer how the brain binds symbolic variables and executes discrete, variable-containing rules? While there appears to be some broad consensus about how the brain might perform pattern recognition tasks (e.g., via feature hierarchies), there appears to be much persistent disagreement about whether and how the brain performs symbolic variable binding -- the transitory or permanent tying together of two bits of information: a variable (such as an X or Y in algebra, or a placeholder, like subject or verb in a sentence) and an arbitrary instantiation of that variable (say, a single number, symbol, vector, or word). Yet, some have argued that understanding of this binding operation could be a crucial bridge between neural computation and higher-level cognition. Relevant references on possible neural implementations of variable binding -- such as DPANN [11], circular convolution, and pointers -- are noted in [4]. 4) How to create a fruitful interplay between hypothesis testing, mapping, perturbation, analysis and modeling. How does the nature, quantity and quality of brain data impact the types of understandings we could achieve? For example, would you rather have 100 molecularly annotated [12] Drosophila connectomes, or one full connectome of a mouse, but without the molecules? Are either of these particularly useful without an activity map, or without a causal map, and why? What is your ideal neuroscience dataset (assuming effectively unlimited technology), and what specifically would you do with it, to better understand the brain? Is it possible to form usefully-detailed theoretical hypotheses of neural computation, today, without more detailed maps to inspire or constrain such ideas? If so, what hypotheses are you anxiously waiting to test, once a certain form of big data flows in? Are dimensionality reduction approaches useful in the absence of specific pre-defined hypotheses? Are there analytical methods that can combine the best of both worlds, i.e., encoding interesting hypotheses, but in such a way that they can be flexibly [13] modified and tested by big data? Where and how should behavior and cognition fit in? 5) How can we synergize and integrate existing and nascent methodologies to study the brain systematically? With static, dynamic and molecular mapping, optogenetics, machine learning, simulation, theory and other toolsets, what experimental and analytical systems do we really need to build, to enable an understanding of the above fundamental questions? Be as specific as possible. [1] Eliasmith, Chris, et al. "A large-scale model of the functioning brain." Science. 338.6111 (2012): 1202-1205. [2] OReilly, Randall C., Thomas E. Hazy, and Seth A. Herd. "The leabra cognitive architecture: how to play 20 principles with nature and win!." The Oxford Handbook of Cognitive Science (2012). [3] http://lesswrong.com/lw/md2/the_brain_as_a_universal_learning_machine/ [4] Marcus, Gary F., Adam H. Marblestone, and Thomas L. Dean. "Frequently Asked Questions for: The Atoms of Neural Computation." arXiv preprint arXiv:1410.8826 (2014). http://arxiv.org/abs/1410.8826 [5] LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature 521.7553 (2015): 436-444. [6] Mnih, Volodymyr, et al. "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529-533. [7] Graves, Alex, Greg Wayne, and Ivo Danihelka. "Neural Turing Machines." arXiv preprint arXiv:1410.5401 (2014). [8] Weston, Jason, Sumit Chopra, and Antoine Bordes. "Memory networks." arXiv preprint arXiv:1410.3916 (2014). [9] O'Reilly, Randall C., Thomas E. Hazy, Jessica Mollick, Prescott Mackie, and Seth Herd. "Goal-driven cognition in the brain: a computational framework."arXiv preprint arXiv:1404.7591 (2014). [10] http://www.givewell.org/labs/causes/ai-risk [11] Hayworth, Kenneth J. "Dynamically partitionable autoassociative networks as a solution to the neural binding problem." Frontiers in Computational Neuroscience 6 (2012). [12] Marblestone, Adam H., Evan R. Daugharthy, Reza Kalhor, Ian D. Peikon, Justus M. Kebschull, Seth L. Shipman, Yuriy Mishchenko et al. "Rosetta Brains: A Strategy for Molecularly-Annotated Connectomics." arXiv preprint arXiv:1404.5103 (2014). [13] Jonas, Eric, and Konrad Kording. "Automatic discovery of cell types and microcircuitry from neural connectomics." eLife 4 (2015): e04250.