Computational Neuroscience
Full course description
This course introduces you to the philosophy, methods and tools of computational neuroscience. You will gain insight into information processing in the brain at several scales (from the spiking behaviour of single neurons to whole-brain dynamics) and levels of abstraction (from cognitive capacities to their biophysical implementation). In parallel, you will receive hands-on training in how to design, simulate and interpret mathematical models of brain processing.
This course is closely related to Fundamental and Systems Neuroscience (year 1) in that computational neuroscience constitutes the theoretical underpinning of systems neuroscience. It addresses similar scientific questions with a stronger emphasis on mathematical modelling (often from the perspective of dynamical systems theory) and computer simulations. The emphasis on dynamical models and their numerical simulation relates the present elective to the MSB courses Dynamical Systems and Nonlinear Dynamics (year 1) and Scientific Computing (years 1 & 2).
The human brain is often regarded as the most complex object in the known universe. It is not surprising therefore that studying the brain and its function is a challenging task, which requires several perspectives and complementary insights. If we want to understand neural systems, we need to describe them at three levels of abstraction. At the first level, we need to identify their function: what do these systems do and why? At the second level, we need to identify potential mechanisms underlying a certain function: how do neural systems realize functions algorithmically? Finally, we need to identify the hardware (i.e. biological) implementation of these algorithms: what are the physical and biological building blocks underlying neural information processing?
Computational neuroscience integrates knowledge across these three levels as it studies information processing carried out by neural structures in terms of biologically constrained models of brain structure and function. While all three levels are equally important in general, the specific question addressed by any one study may place different weights on each level. Computational neuroscience also addresses questions at several spatio-temporal scales from subthreshold activity exhibited by single neurons to whole-brain dynamics.
You will get an overview of the models and techniques employed in computational neuroscience, as well as of the philosophy underlying the three levels of abstraction (the tri-level hypothesis). Finally, you will gain hands-on experience in designing and simulating models at distinct levels of abstraction and spatio-temporal scales.
Course objectives
The intended learning outcomes of this course are that students can:
1. Describe and compare the three levels of abstraction put forth by the tri-level hypothesis and how they influence each other
2. Describe and implement mathematical models at the micro-level (point neurons), meso-level (neural circuits & neural mass models) and macro-level (cortical networks)
3. Describe and implement learning rules applicable at different spatio-temporal scales / levels of abstraction such as spike-timing dependent plasticity
4. Conduct a computational neuroscience study from formulating a research question to interpreting simulation results
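As a flavour of what implementing a model at the micro-level involves, the following sketch simulates a leaky integrate-and-fire (LIF) point neuron with forward-Euler integration. All parameter values are illustrative assumptions, not values prescribed by the course materials; a textbook treatment is given in Gerstner et al. (2014), listed below.

```python
import numpy as np

# Illustrative LIF parameters (assumed, not from the course materials)
tau_m = 10.0      # membrane time constant (ms)
v_rest = -70.0    # resting potential (mV)
v_reset = -75.0   # post-spike reset potential (mV)
v_thresh = -55.0  # spiking threshold (mV)
r_m = 10.0        # membrane resistance (MOhm)
dt = 0.1          # integration step (ms)
t_max = 100.0     # simulation length (ms)

def simulate_lif(i_ext):
    """Forward-Euler integration of dV/dt = (-(V - v_rest) + R*I) / tau_m.

    Returns the membrane-potential trace and the spike times (ms) for a
    constant external current i_ext (nA).
    """
    steps = int(t_max / dt)
    v = np.full(steps, v_rest)
    spikes = []
    for t in range(1, steps):
        dv = (-(v[t - 1] - v_rest) + r_m * i_ext) / tau_m
        v[t] = v[t - 1] + dt * dv
        if v[t] >= v_thresh:      # threshold crossing: emit spike, reset
            v[t] = v_reset
            spikes.append(t * dt)
    return v, spikes

v, spikes = simulate_lif(i_ext=2.0)
print(f"{len(spikes)} spikes in {t_max} ms")
```

With a suprathreshold constant current the model fires regularly; reducing `i_ext` below the rheobase current silences it, which makes this a convenient first exercise in relating model parameters to observable behaviour.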
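Likewise, the learning rules mentioned in objective 3 can be made concrete with a minimal pair-based spike-timing dependent plasticity (STDP) window. The amplitudes and time constants below are common illustrative choices, not values specified by the course.

```python
import numpy as np

# Illustrative STDP parameters (assumed, not from the course materials)
a_plus = 0.01     # potentiation amplitude
a_minus = 0.012   # depression amplitude
tau_plus = 20.0   # potentiation time constant (ms)
tau_minus = 20.0  # depression time constant (ms)

def stdp_dw(delta_t):
    """Synaptic weight change for a single pre/post spike pair.

    delta_t = t_post - t_pre: a positive value (pre fires before post)
    yields potentiation; a non-positive value yields depression.
    """
    if delta_t > 0:
        return a_plus * np.exp(-delta_t / tau_plus)
    return -a_minus * np.exp(delta_t / tau_minus)

print(stdp_dw(10.0))   # pre leads post -> positive weight change
print(stdp_dw(-10.0))  # post leads pre -> negative weight change
```

The asymmetry of this exponential window (causal pairings strengthen, anti-causal pairings weaken) is what allows such rules to operate across the spatio-temporal scales discussed in the course.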
Recommended reading
The provided literature accompanies the Lectures and Computer Exercises. In the spirit of RBL, you will conduct your own literature search for Peer Group meetings.
• Frigg, R., & Hartmann, S. (2012). Models in Science. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy.
• Kaplan, D. M. (2011). Explanation and description in computational neuroscience. Synthese, 183(3), 339–373. https://doi.org/10.1007/s11229-011-9970-0
• GitHub Guides
• Gerstner, W., Kistler, W. M., Naud, R., & Paninski, L. (2014). Neuronal Dynamics. Cambridge: Cambridge University Press.
• Potjans, T. C., & Diesmann, M. (2014). The Cell-Type Specific Cortical Microcircuit: Relating Structure and Activity in a Full-Scale Spiking Network Model. Cerebral Cortex, 24(3), 785–806. https://doi.org/10.1093/cercor/bhs358
• Martí, D., Deco, G., Giudice, P. D., & Mattia, M. (2006). Reward-biased probabilistic decision-making: Mean-field predictions and spiking simulations. Neurocomputing, 69(10–12), 1175–1178. https://doi.org/10.1016/j.neucom.2005.12.069
• Background: Platt, M. L., & Glimcher, P. W. (1999). Neural correlates of decision variables in parietal cortex. Nature, 400(6741), 233–238. https://doi.org/10.1038/22268
• Gancarz, G., & Grossberg, S. (1998). A neural model of the saccade generator in the reticular formation. Neural Networks, 11(7), 1159–1174. https://doi.org/10.1016/S0893-6080(98)00096-3
• Rolls, E. T. (2016). Pattern separation, completion, and categorisation in the hippocampus and neocortex. Neurobiology of Learning and Memory, 129, 4–28. https://doi.org/10.1016/j.nlm.2015.07.008
• Lange, Senden, Radermacher, and De Weerd (in press). Interfering with a memory without erasing its trace.
• Background: Schoups, A., Vogels, R., Qian, N., & Orban, G. (2001). Practising orientation identification improves orientation coding in V1 neurons. Nature, 412, 549–553. https://doi.org/10.1038/35087601
• Background: Teich, A. F., & Qian, N. (2003). Learning and Adaptation in a Recurrent Model of V1 Orientation Selectivity. Journal of Neurophysiology, 89(4), 2086–2100. https://doi.org/10.1152/jn.00970.2002