Spring 2021

All Neural Dynamics Forum talks during Spring 2021 will take place online via Zoom unless specified otherwise; joining details are below.

Meeting ID: 932 7630 6396
Password: 815933
https://zoom.us/s/93276306396


1PM June 25th – Professor Steve Coombes (University of Nottingham)
* Note: Forum will take place in person outdoors at Royal Fort Gardens 

Next generation neural field modelling

Neural mass models have been actively used since the 1970s to model the coarse-grained activity of large populations of neurons and synapses. They have proven especially fruitful for understanding brain rhythms. However, although motivated by neurobiological considerations, they are phenomenological in nature and cannot hope to recreate some of the rich repertoire of responses seen in real neuronal tissue. In this talk I will discuss a simple spiking neuron network model that has recently been shown to admit an exact mean-field description of its synaptic interactions. This has many of the features of a neural mass model, coupled to an additional dynamical equation that describes the evolution of population synchrony. I will show that this next-generation neural mass model is ideally suited to understanding beta-rebound. This is readily observed in MEG recordings, whereby motor action causes a drop in power in the beta band, attributed to a loss of network synchrony. Existing neural mass models are unable to capture this phenomenon (event-related de-synchrony) since they track only firing rate and no notion of network coherence. I will spend the latter part of my talk discussing patterns and waves in a spatially continuous, non-local extension of this model, highlighting its usefulness for large-scale cortical modelling.
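For readers new to this model class: the exact mean-field reduction alluded to here, derived for networks of quadratic integrate-and-fire neurons by Montbrió, Pazó and Roxin, closes the dynamics in terms of the population firing rate r and mean voltage v. A minimal sketch follows; the parameter values are illustrative choices of ours, not taken from the talk.

```python
import numpy as np

def simulate_nextgen_mass(T=40.0, dt=1e-3, delta=1.0, eta=-5.0, J=15.0, I=0.0):
    """Euler-integrate the exact mean-field ('next generation') neural mass
    equations for a network of quadratic integrate-and-fire neurons:
        dr/dt = delta/pi + 2*r*v
        dv/dt = v**2 + eta + J*r - (pi*r)**2 + I
    where r is the population firing rate and v the mean membrane voltage."""
    n = int(T / dt)
    r, v = 0.1, -2.0                     # illustrative initial condition
    rs = np.empty(n)
    sync = np.empty(n)                   # |Z|: within-population synchrony
    for k in range(n):
        dr = delta / np.pi + 2.0 * r * v
        dv = v * v + eta + J * r - (np.pi * r) ** 2 + I
        r += dt * dr
        v += dt * dv
        rs[k] = r
        # Conformal map from (r, v) to the Kuramoto order parameter Z
        W = np.pi * r + 1j * v
        sync[k] = abs((1 - W.conjugate()) / (1 + W.conjugate()))
    return rs, sync
```

Because (r, v) maps onto a Kuramoto order parameter Z, this description carries an explicit measure of within-population synchrony, exactly the quantity a classical neural mass model lacks when trying to capture event-related de-synchrony.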


1PM June 11th – Dr Michael Proulx (Reader in Psychology, University of Bath)

Understanding spatial cognition through virtual and augmented reality

Spatial knowledge is key to almost everything we do. The study of spatial cognition can now take advantage of advances in computational modelling, Virtual Reality, and Augmented Reality, with important implications for theory and application. I will explore these issues through a few case studies from our research, including: the use of eye-tracking with interactive virtual environments; visual impairments and pain as routes to exploring multisensory experiences of space; and the use of motion-tracking and augmented reality to assess the presentation of visual information in tactile or auditory displays for blind or blindfolded users. These approaches and immersive technologies hold great potential for advances in fundamental and translational neuroscience.


1PM June 4th – Professor Peter Ashwin (Professor of Mathematics, University of Exeter)

Excitable Networks in Continuous Time Recurrent Neural Networks

Continuous time recurrent neural networks (CTRNNs) are systems of coupled ordinary differential equations that are simple enough to be insightful for describing learning and computation, from both biological and machine-learning viewpoints. We describe a direct constructive method for realising finite-state, input-dependent computations on an arbitrary directed graph: the constructed system has an excitable network attractor. The resulting CTRNN has intermittent dynamics: trajectories spend long periods of time close to steady states, with rapid transitions between them. Depending on parameters, transitions between states can be either excitable (input or noise must exceed a threshold to induce the transition) or spontaneous (transitions occur without input or noise). In the excitable case, we show that the threshold for excitability can be made arbitrarily sensitive.
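In the standard formulation, a CTRNN couples units through equations of the form tau_i * dy_i/dt = -y_i + sum_j w_ij * sigma(y_j + b_j) + I_i. The sketch below uses illustrative hand-picked weights, not the graph construction from the talk, to show the basic ingredient: a two-unit network with coexisting stable steady states, selected by the initial condition.

```python
import numpy as np

def ctrnn_step(y, W, b, I, tau=1.0, dt=0.01):
    """One Euler step of the CTRNN equations
        tau * dy_i/dt = -y_i + sum_j W[i, j] * sigma(y_j + b_j) + I_i,
    with sigma the logistic function."""
    s = 1.0 / (1.0 + np.exp(-(y + b)))
    return y + (dt / tau) * (-y + W @ s + I)

# Two mutually inhibiting units form a bistable (winner-take-all) network.
# These weights are illustrative, not the construction described in the talk.
W = np.array([[0.0, -6.0],
              [-6.0, 0.0]])
b = np.array([3.0, 3.0])
I = np.zeros(2)

y = np.array([1.0, 0.0])   # unit 0 starts ahead and wins the competition
for _ in range(5000):      # integrate to t = 50; settles onto a steady state
    y = ctrnn_step(y, W, b, I)
```

Input or noise strong enough to cross the relevant threshold can then kick the state from one attractor toward another; the talk's construction wires many such excitable connections along the edges of a prescribed directed graph.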


1PM May 28th – Professor Andre Fenton (Professor of Neural Science, New York University)

Memory, learning to learn, and control of cognitive representations

Biological neural networks can represent information in the collective action potential discharge of neurons, and store that information in the synaptic connections that both comprise the network and govern its function. The strength and organization of synaptic connections adjust during learning, but many cognitive neural systems are multifunctional, making it unclear how continuous activity alternates between transient, discrete cognitive functions such as encoding current information and recollecting past information without changing the connections amongst the neurons. This lecture will first summarize our investigations of the molecular and biochemical mechanisms that change synaptic function to persistently store spatial memory in the rodent hippocampus. I will then report on how entorhinal cortex-hippocampus circuit function changes during cognitive training that creates memory, as well as during learning to learn, in mice. I will then describe how the hippocampus system operates like a competitive winner-take-all network that, based on the dominance of its current inputs, self-organizes into either the encoding or the recollection information-processing mode. We find no evidence that distinct cells are dedicated to these two functions; rather, activation of the hippocampus information-processing mode is controlled by a subset of dentate spike events within the network of learning-modified, entorhinal-hippocampal excitatory and inhibitory synapses.


1PM May 14th – Dr Sarah Morgan (Senior Research Associate, University of Cambridge Brain Mapping Unit)

What can brain MRI tell us about schizophrenia?

Schizophrenia is an extremely debilitating disease, which affects approximately 1% of the population during their lifetime. However, to date there are no known biomarkers for schizophrenia, the biological mechanisms underpinning the disease are unclear, and there has been correspondingly little progress on new therapeutics. In this talk, I will discuss how MRI brain imaging can begin to address these challenges, using data from the Psyscan project (http://psyscan.eu/). In the first part of the talk, I will show how morphometric similarity mapping and imaging transcriptomics can shed fresh light on structural brain differences in schizophrenia (Morgan et al, PNAS 2019). In the second part, I will examine the extent to which MRI data can be used to distinguish patients with schizophrenia from healthy volunteers. We find that fMRI can do this with high accuracy, based on a reproducible pattern of cortical features associated with neurodevelopment (Morgan*, Young* et al, BP:CNNI 2020). Overall, we begin to see how MRI can give us a more integrative understanding of schizophrenia, which might inform future treatments.


1PM May 7th – Professor Daniel Wolpert (Professor of Neuroscience, Zuckerman Mind Brain Behaviour Institute, Columbia University)

Computational principles underlying the learning of sensorimotor repertoires

Humans spend a lifetime learning, storing and refining a repertoire of motor memories appropriate for the multitude of tasks we perform. However, it is unknown what principle underlies the way our continuous stream of sensorimotor experience is segmented into separate memories, and how we adapt and use this growing repertoire. I will review our work on how humans learn to make skilled movements, focussing on the role of context in activating motor memories and how statistical learning can lead to multimodal object representations. I will then present a principled theory of motor learning based on the key insight that memory creation, updating, and expression are all controlled by a single computation – contextual inference. Unlike dominant theories of single-context learning, our repertoire-learning model accounts for key features of motor learning that had no unified explanation and predicts novel phenomena, which we confirm experimentally. These results suggest that contextual inference is the key principle underlying how a diverse set of experiences is reflected in motor behavior.
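As a rough illustration of the "single computation" idea: a learner can create, update, and express memories in proportion to an inferred posterior over contexts. The toy below is deliberately simplified and is not the model presented in the talk; the function name, parameters, and scalar memories are all invented for illustration.

```python
import numpy as np

def trial_update(m, prior, obs, obs_sd=0.5, lr=0.2):
    """One trial of a toy 'contextual inference' learner: infer posterior
    responsibilities p(context | observation), then update every context's
    memory toward the observation in proportion to its responsibility."""
    like = np.exp(-0.5 * ((obs - m) / obs_sd) ** 2)  # Gaussian likelihood per context
    post = prior * like
    post = post / post.sum()                         # posterior responsibilities
    m = m + lr * post * (obs - m)                    # responsibility-weighted update
    return m, post

# Hypothetical demo: context 1's memory already matches a +1 perturbation,
# so it is inferred as responsible, expressed, and updated, while context
# 0's memory is left almost untouched (protection by contextual inference).
m0 = np.array([0.0, 1.0])
prior = np.array([0.5, 0.5])
m1, post = trial_update(m0, prior, obs=1.0)
```

The design point this sketch makes is that a single inference step simultaneously decides which memory is expressed and how much each memory is updated, rather than treating these as separate mechanisms.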


2PM April 30th – Tali Sharot (UCL)

How People Decide What They Want to Know: Information-Seeking and the Human Brain

The ability to use information to adaptively guide behavior is central to intelligence. A vital research challenge is to establish how people decide what they want to know. In this talk I will present our recent research characterizing three key motives of information seeking. We find that participants automatically assess (i) how useful information is in directing action, (ii) how it will make them feel, and (iii) how it will influence their ability to predict and understand the world around them. They then integrate these assessments into a calculation of the value of information that guides information-seeking or its avoidance. These diverse influences are captured by separate brain regions along the dopamine reward pathway and are differentially modulated by pharmacological manipulation of dopamine function. The findings yield predictions about how information-seeking behavior will alter in disorders in which the reward system malfunctions. We test these predictions using a linguistic analysis of participants’ web searches ‘in the wild’ to quantify their motives for seeking information and relate these to reported psychiatric symptoms. Finally, using controlled behavioral experiments, we show that the three motives for seeking information appear early in development, following roughly linear trajectories.


2PM April 16th – Bard Ermentrout (University of Pittsburgh)

A Robust Neural Integrator Based on the Interactions of Three Time Scales

Neural integrators are circuits that are able to code analogue information such as spatial location or amplitude. Storing amplitude requires the network to have a large number of attractors. In classic models with recurrent excitation, such networks require very careful tuning to behave as integrators and are not robust to small mistuning of the recurrent weights. In this talk, I introduce a circuit with recurrent connectivity that is subjected to a slow subthreshold oscillation (such as the theta rhythm in the hippocampus). I show that such a network can robustly maintain many discrete attracting states, and that the firing rates of the neurons in these states are much closer to those seen in recordings from animals. I show that the mechanism can be explained by the instability regions of the Mathieu equation. I then extend the model in various ways and, for example, show that in a spatially distributed network it is possible to code location and amplitude simultaneously. Finally, I show that the resulting mean-field equations are equivalent to a certain discontinuous differential equation.
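For reference, the Mathieu equation x'' + (delta + eps*cos t) x = 0 is the canonical linear oscillator with periodically modulated stiffness, and its (delta, eps) plane is organised into instability tongues rooted at the resonances delta = n^2/4. A small Floquet-theory sketch follows; the parameter values in the tests are illustrative, not taken from the talk.

```python
import numpy as np

def mathieu_stable(delta, eps, steps=4000):
    """Floquet stability test for the Mathieu equation
        x'' + (delta + eps*cos(t)) * x = 0.
    Integrates two independent solutions over one forcing period T = 2*pi
    (RK4) to build the monodromy matrix M; all solutions remain bounded
    iff |trace(M)| <= 2 (equality marks the edge of an instability tongue)."""
    T = 2.0 * np.pi
    dt = T / steps

    def f(t, y):  # y = (x, x'); so y' = (x', -(delta + eps*cos(t)) * x)
        return np.array([y[1], -(delta + eps * np.cos(t)) * y[0]])

    M = np.zeros((2, 2))
    for col, y0 in enumerate([np.array([1.0, 0.0]), np.array([0.0, 1.0])]):
        y, t = y0.copy(), 0.0
        for _ in range(steps):
            k1 = f(t, y)
            k2 = f(t + dt / 2, y + dt / 2 * k1)
            k3 = f(t + dt / 2, y + dt / 2 * k2)
            k4 = f(t + dt, y + dt * k3)
            y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += dt
        M[:, col] = y
    return abs(np.trace(M)) <= 2.0
```

Sweeping (delta, eps) with this test traces out the tongue boundaries; in the integrator model described above, the slow subthreshold oscillation plays an analogous parametric role.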


1PM April 9th – Paul Anastasiades (University of Bristol)

Circuit organisation of the rodent prefrontal thalamo-cortical system

Interactions between the thalamus and prefrontal cortex (PFC) play a critical role in cognitive function and arousal, and are disrupted in neuropsychiatric disorders. The PFC is reciprocally connected with the ventromedial (VM) and mediodorsal (MD) thalamus, both higher-order nuclei with properties distinct from those of the classically studied sensory relay nuclei. To understand the circuits linking PFC and thalamus, we use anatomical tracing, electrophysiology, optogenetics, and 2-photon Ca2+ imaging to determine how VM and MD target specific cell types and subcellular compartments of mouse PFC. Focusing on cortical layer 1, we find that the two thalamic nuclei target distinct sublayers, with VM engaging NDNF+ cells in L1a and MD driving VIP+ cells in L1b. These separate populations of L1 interneurons participate in different inhibitory networks in superficial layers by targeting either PV+ or SOM+ interneurons. NDNF+ cells mediate a unique form of thalamus-evoked inhibition at PT cells, selectively blocking VM-evoked dendritic Ca2+ spikes. Together, our findings reveal how two thalamic nuclei differentially communicate with the PFC through distinct L1 microcircuits, and how this inhibition is critical for controlling PFC output back to the thalamus.
