Spring 2019/2020

June, 19th – Isabelle Ferezou

Title: A mesoscopic view of tactile sensory information processing in the cerebral cortex

Abstract: Since the first description of its remarkable cellular organization by Woolsey and Van der Loos (1970), the representation of the whiskers in the rodent primary somatosensory cortex (S1) has become a major model for studying cortical processing of tactile sensory information. In layer 4, neurons form clusters, called barrels, that share the same topology as the whiskers on the snout of the animal, with each neuronal column associated with a barrel receiving its inputs primarily from the corresponding whisker.

A huge amount of information has been collected on the whisker sensory system over the past 50 years; however, it is still largely unknown how the system integrates distributed information to build a global percept of the tactile scene. Working at a mesoscopic scale, which allows visualizing how information flows through cortical columns and propagates onward to other cortical areas, is a real asset for addressing this question. Voltage-sensitive dye imaging, which benefits from sub-columnar spatial resolution and millisecond time resolution, reveals how, upon tactile stimulation of a given whisker, information is rapidly transmitted to the corresponding column in S1 and then, within the next couple of milliseconds, to the secondary somatosensory cortex and the primary motor cortex. Using this method, we described with unprecedented precision the topography of the whisker representation, as well as the lateral propagation of sensory inputs within these cortical areas, thus providing insights into the neuronal dynamics at play in the integration of complex multi-whisker inputs in the cortical network.

 

June, 12th – Pradeep Dheerendra

Title: Dynamics underlying auditory object boundary detection and segregation

Abstract: A visual object might be easy to define and understand, but objects perceived via audition are also important. Auditory object analysis involves detecting, segregating and representing spectro-temporal regularities in the acoustic environment as stable perceptual units. The auditory system thus transforms the acoustic waveform into an object-based representation. This talk focuses on two fundamental aspects of auditory object processing: detection of auditory object boundaries and auditory segregation. In the first study, I present the dynamics underlying the detection of the emergence of a new auditory object in an ongoing auditory scene, measured with MEG. I found a slow drift signal at the object boundary, which I think might be a precision signal. In the second study, I present the brain basis of human auditory figure-ground analysis in a macaque model, using fMRI and psychophysics. This work has provided spatial priors for macaque neurophysiology.

June, 5th – Alex Cayco Gajic

Title: High-dimensional representations in cerebellar granule cells

Abstract: The cerebellum is thought to learn sensorimotor relationships to coordinate movement. Sensory and motor information is sent to a large number of cerebellar granule cells, which comprise the vast majority of neurons in the brain. Theoretically, this large anatomical expansion is thought to help pattern separation by representing sensorimotor information in a high-dimensional granule cell population code. However, how the granule cell population activity encodes sensory and motor information, and whether granule cell populations can support high-dimensional representations, is poorly understood. To address this, we used a high-speed random-access 3D 2-photon microscope to simultaneously monitor the Ca2+ activity in hundreds of granule cell axons of spontaneously behaving animals. We find that granule cell population activity transitions between separate, orthogonal coding spaces representing periods of quiet wakefulness vs. active movement, and that the granule cell representation is higher dimensional than has previously been observed.
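The talk's central claim concerns the dimensionality of the granule cell population code. The abstract does not say which measure was used, so the sketch below uses the participation ratio, a common choice in the population-coding literature; the function name and data shapes are illustrative assumptions.

```python
import numpy as np

def participation_ratio(activity):
    """Effective dimensionality of population activity.

    activity: (samples, neurons) array, e.g. time points x recorded axons.
    PR = (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues);
    it equals n for n equally strong, independent dimensions and 1 when a
    single dimension dominates.
    """
    cov = np.cov(activity, rowvar=False)
    eig = np.linalg.eigvalsh(cov)
    eig = np.clip(eig, 0.0, None)  # guard against tiny negative eigenvalues
    return eig.sum() ** 2 / (eig ** 2).sum()
```

For instance, isotropic activity across five neurons yields a participation ratio close to 5, while perfectly correlated neurons yield a ratio close to 1, which is the sense in which a population code can be called "high-dimensional".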

May, 29th – Lucia Prieto Godino

Title: Evolution of olfactory systems on the fly

Abstract: Sensory systems encode the world around us to guide context-dependent appropriate behaviours that are often species-specific. This must involve evolutionary changes in the way that sensory systems extract environmental features and/or in the downstream sensory-motor transformations implemented. However, we still know little about how evolution shapes neural circuits. We are studying the olfactory system of Drosophila and tsetse flies across multiple species spanning a wide range of ecological niches and divergence times. We find divergent odour-guided behaviour towards host odours. To elucidate the cellular, circuit and molecular basis behind this behavioural evolution we are employing a multidisciplinary approach, including field work, the development of genetic tools across species, calcium imaging, single cell transcriptomics and reconstruction of central olfactory circuits at synaptic resolution. I will discuss the progress we have made in our efforts to understand how evolution tinkers with neural circuits as animals adapt to different environments.

 

May, 22nd – Bradley Love

Title: A clustering account of spatial and non-spatial concept learning

Abstract: How do we learn to categorise novel items and what is the brain basis of these acts? For example, after a child is told an animal is a dog, how does that experience shape how she classifies future items? I will present model-based fMRI results concerning how people learn categories from examples and touch on parallel findings with monkey single-unit recordings. Our analyses indicate that the medial temporal lobe (MTL), including the hippocampus, plays an important role in both learning and recognition. Successful cognitive models, which explain both behavioural and brain measures, learn to selectively weight (i.e., attend to) stimulus aspects that are task relevant. This form of weighting, or top-down attention, can be viewed as a compression process. I will discuss how the medial prefrontal cortex (mPFC) and the hippocampus coordinate to build low-dimensional representations of learned concepts, as well as how the dimensionality of visual representations along the ventral stream is altered by the learning task. Finally, this general learning mechanism offers a straightforward account of spatial learning, including place and grid cell activity in both human and rodent studies.

 

May, 15th – Grace Lindsay

Title: Modelling the influence of feedback in the visual system

Abstract: Cortico-cortical feedback is common in the visual system and is believed to be involved in processes such as perceptual inference, attention, and learning. In this talk I will demonstrate how convolutional neural networks can be used to explore how such feedback works. In the first half of the talk, I will focus on the signals from prefrontal areas that are believed to control top-down feature attention. In the second half, I’ll discuss ongoing work on how local feedback connections help process noisy images.

 

May, 8th – bank holiday

 

May, 1st –


April, 24th –

 

April, 17th – Easter

 

April, 10th – Bank holiday

 

April, 3rd – Timothy O’Leary


March, 27th – Silvia Maggi

 

Winter 2019/2020

March, 20th –

 

March, 13th – Krasimira Tsaneva-Atanasova

Title: The Origin of GnRH Pulse Generation: An Integrative Mathematical-Experimental Approach

Abstract: The gonadotropin-releasing hormone (GnRH) pulse generator controls the pulsatile secretion of the gonadotropic hormones LH and FSH and is critical for fertility. The hypothalamic arcuate kisspeptin neurons are thought to represent the GnRH pulse generator, since their oscillatory activity is coincident with LH pulses in the blood, a proxy for GnRH pulses. However, the mechanisms underlying GnRH pulse generation remain elusive. We developed a mathematical model of the kisspeptin neuronal network and confirmed its predictions experimentally, showing how LH secretion is frequency-modulated as we increase the basal activity of the arcuate kisspeptin neurons in vivo using continuous optogenetic stimulation. Our model provides a quantitative framework for understanding the reproductive neuroendocrine system and opens new horizons for fertility regulation.

 

March, 6th – Matthias Hennig

Title: SpikeInterface: A project for reproducible next generation electrophysiology

Abstract: Many electrophysiologists would agree that spike sorting is somewhat of a dark art, with many secrets, black-box algorithms (occasionally probably written in blood), heuristics and superstitions. With exciting new large scale probes and arrays now shipped to many labs and producing terabytes of recordings, reliable and reproducible analysis becomes increasingly hard to achieve. In this talk I will show (and attempt to live-demo) SpikeInterface, a project that aims to bring together the many efforts that have been put into spike sorting by many groups over the past decade and beyond. This project not only wraps many sorters, tools, and file formats, but also provides new methods for assessing the quality of sorted spikes based on comparison between sorters and with ground truth data. We found a surprisingly low agreement between sorters, and show that this is due to high false positive rates that cannot be corrected for using common heuristics. Here I will suggest methods and workflows to remedy and improve this situation, which are often implemented with a few lines of code.

https://github.com/SpikeInterface

https://www.biorxiv.org/content/10.1101/796599v1

This project is joint work with: Alessio P. Buccino, Cole L. Hurwitz, Jeremy Magland, Samuel Garcia, Joshua H. Siegle, Roger Hurwitz
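The cross-sorter agreement the abstract refers to can be illustrated in isolation. The sketch below is not the library's code: it is a standalone version of the standard agreement score (matched spikes divided by the union of spikes from both trains), with a greedy matcher and a function name of my own choosing.

```python
import numpy as np

def agreement_score(t_a, t_b, delta=0.4):
    """Agreement between two sorted units' spike trains (times in ms).

    Spikes are greedily matched within +/- delta ms, then the score is
    matches / (n_a + n_b - matches): 1.0 for identical trains, 0.0 when
    no spikes coincide. Low scores across sorters for the "same" unit are
    the kind of disagreement the talk describes.
    """
    t_a, t_b = np.sort(np.asarray(t_a)), np.sort(np.asarray(t_b))
    i = j = matches = 0
    while i < len(t_a) and j < len(t_b):
        d = t_a[i] - t_b[j]
        if abs(d) <= delta:
            matches += 1
            i += 1
            j += 1
        elif d < 0:
            i += 1  # spike in A too early to match; advance A
        else:
            j += 1  # spike in B too early to match; advance B
    denom = len(t_a) + len(t_b) - matches
    return matches / denom if denom else 1.0
```

In practice one would compute this for every pair of units across two sorters and inspect the resulting agreement matrix, which is the comparison SpikeInterface automates.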


February, 28th – Mara Cercignani

Title: MRI for In Vivo Imaging of the Effects of Inflammation on the CNS

Abstract: Recent evidence supports a role for inflammation in several psychiatric disorders such as Alzheimer’s disease and major depression. One of the mechanisms underpinning CNS inflammation is the activation of microglia, which can be imaged using Translocator Protein (TSPO) PET. This technique, however, is costly and difficult to implement. This talk will present some of the results obtained in our lab using non-invasive, quantitative MRI approaches to assess the effects of inflammation on the brain.

February, 21st – Arno Onken

 

February, 14th – Marcus Kaiser

Title: Structure and Dynamics of Human Connectomes: Applications for Informing Diagnosis and Treatment of Brain Disorders

Abstract:

Our work on connectomics over the last 15 years has shown a small-world, modular, and hub architecture of brain networks [1,2]. Small-world features enable the brain to rapidly integrate and bind information while the modular architecture, present at different hierarchical levels, allows separate processing of various kinds of information (e.g. visual or auditory) while preventing wide-scale spreading of activation [3]. Hub nodes play critical roles in information processing and are involved in many brain diseases [4].

After discussing the organisation of brain networks, I will show how connectivity in combination with machine learning and computer simulations can identify the progression towards dementia before the onset of symptoms informing interventions that can delay disease progression [5].

For epilepsy patients, connectome-based simulations can also be used to predict the outcome of surgical interventions as well as alternative target regions [6]. I will also present recent results on local changes in epilepsy, concerning structural connectivity within brain regions, which are more indicative of surgery outcome than connectivity between brain regions. In addition, we also developed models of tissue within a brain region (http://www.vertexsimulator.org). Such models can observe the effects of invasive [7] or non-invasive electrical brain stimulation.

I will finally outline how these models could, in the future, inform invasive interventions, such as optogenetic stimulation in epilepsy patients (http://www.cando.ac.uk), or non-invasive interventions using electrical, magnetic or focused ultrasound stimulation.

[1] Martin, Kaiser, Andras, Young. Is the Brain a Scale-free Network? SfN Abstract, 2001.

[2] Sporns, Chialvo, Kaiser, Hilgetag. Trends in Cognitive Sciences, 2004.

[3] Kaiser et al. New Journal of Physics, 2007.

[4] Kaiser et al. European Journal of Neuroscience, 2007.

[5] Peraza et al. Alzheimer’s & Dementia: Diagnosis, Assessment & Disease Monitoring, 2019.

[6] Sinha et al. Brain, 2017.

[7] Thompson et al. Wellcome Open Research, 2019.

February, 7th – Liad Baruchin

Title: The early developing brain undergoes many changes in its basic neuronal connectivity.

Abstract: In our lab, looking at the barrel cortex, we find that circuits involving VIP+ and SST+ interneurons change completely from birth to adulthood. Currently, I am investigating how these interneuronal populations are involved in early sensory perception. To do that, I am using a genetic model in which either SST+ or VIP+ interneurons are completely silenced. Using silicon probes, I can record from different layers across the barrel field and see how silencing these neuronal populations affects the neuronal response to passive whisking. In this talk I will present my most recent results, which show that these neuronal populations differentially affect the cortical processing of whisking speed and paired-pulse adaptation.


January, 31st – Eleni Vasilaki

Title: Sparse Reservoir Computing (SpaRCe) for neuromorphic devices

Abstract: In this talk I will present fundamental ideas about biological learning in fruit flies, and how these are related to Machine Learning. Inspired by the architecture of small brains, and within the framework of Echo State Networks, I will discuss the importance of neuron selectivity to specific stimuli. I will then introduce a per-neuron threshold in the reservoir as an efficient mechanism to achieve sparseness in the neuronal representation. The threshold is adapted via a gradient rule on an error function, structurally identical to learning thresholds via backpropagation; yet a simple mathematical analysis of its consequences for this specific architecture shows that it leads to neuronal selectivity. I will show in simulations that, within this context, our approach outperforms imposing sparseness on the weights via an L1 norm. I will also discuss how such learning architectures can be exploited in the context of neuromorphic engineering.
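A minimal sketch of the idea described here, assuming a standard echo state network with a rectifying per-neuron threshold and gradient learning of both thresholds and readout. All sizes, scalings and function names are illustrative choices of mine, not values or code from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 3 inputs, 100 reservoir neurons, 2 outputs.
n_in, n_res, n_out = 3, 100, 2
W_in = rng.normal(0.0, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(inputs):
    """Drive the fixed random reservoir; return the state at each step."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

def sparse_code(x, theta):
    """Per-neuron threshold: only the part of |x_i| above theta_i passes,
    so raising theta_i silences weakly driven neurons (sparseness)."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def train_step(x, target, theta, W_out, lr=0.05):
    """One gradient-descent step on thresholds and readout, squared error."""
    a = sparse_code(x, theta)
    err = W_out @ a - target
    grad_a = W_out.T @ err
    active = np.abs(x) > theta           # da_i/dtheta_i = -sign(x_i) if active
    theta = theta + lr * np.sign(x) * grad_a * active
    W_out = W_out - lr * np.outer(err, a)
    return theta, W_out
```

The point of the construction is visible even before training: with a nonzero threshold, many reservoir neurons are exactly silent for a given stimulus, whereas the raw tanh states almost never are.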

January, 24th – Miguel Maravall

Title: Tactile sequence learning induces selectivity to multiple task variables in the mouse barrel cortex.

Abstract: Sequential temporal patterning is a key feature of natural signals, used by the brain to decode stimuli and perceive them as sensory objects. To explore the neuronal underpinnings of sequence recognition and determine if neurons adjust temporal integration as a result of learning, we developed a task in which mice had to discriminate between sequential stimuli constructed from distinct vibrations delivered to the vibrissae (whiskers), assembled in different orders.

Optogenetic inactivation experiments showed that both primary somatosensory ‘barrel’ cortex (S1bf) and secondary somatosensory cortex are involved in the task, consistent with a serial flow of sensory input to decision-making stages. Two-photon imaging in superficial layers of S1bf of well-trained animals revealed heterogeneous neurons with selectivity to task variables including sensory input, the animal’s action decision, and trial outcome (rewards and their departure from prediction). A large fraction of neurons were activated preceding goal-directed licking, thus predicting the animal’s learned response to a target sequence rather than the sequence itself. These neurons were absent in naïve animals. Therefore, in S1bf learning resulted in neurons that embodied the learned association between the presence of the target sequence and licking, instead of neurons that categorically responded to the sequence or integrated features over time.

 

January, 17th – Petra Vertes

Title: Maps, Models and Maths: New strategies for understanding the biological basis of mental ill-health.

Abstract: The last 20 years have witnessed extraordinarily rapid progress in neuroscience, including breakthrough technologies such as optogenetics and the collection of unprecedented amounts of neuroimaging, genetic and other data. However, the translation of this progress into improved understanding and treatment of mental health symptoms has been comparatively slow. One central challenge has been to reconcile different scales of investigation, from genes and molecules to cells, circuits, tissue, whole-brain and ultimately behaviour. In this talk I will describe several strands of work using mathematical, statistical, and bioinformatic methods to bridge these gaps. First, I will describe my work on linking neuroimaging data to the Allen Brain Atlas (a brain-wide, whole-genome map of gene expression) and how we can apply these tools in the nascent field of imaging transcriptomics to further our understanding of schizophrenia and other neuropsychiatric disorders. Next, I will discuss parallel efforts for using network science and control theory for linking microscopic function (i.e. the role of individual cells) to large-scale behaviour in C. elegans.

January, 10th – Mark Walton

Title: Regulation of dopamine during reward-guided decision making: tracking reward prediction in action

Abstract: It is widely accepted that the activity of many dopamine neurons and dopamine release in parts of the striatum represent predictions of future rewards, which in turn can be used to shape decision making. Nonetheless, the precise content and function of these dopamine signals during reward-guided behaviours remains a matter of great controversy. I’ll present ongoing work to examine how dopaminergic correlates of reward prediction and choice, recorded in rodents performing reward-guided decision making tasks, are modulated by action requirements, task structure and context. These data – along with others’ – suggests that dopamine activity can be shaped by a mixture of influences over different timescales and across different parts of striatum.


 

Autumn 2019/2020

December, 13th – Marc Goodfellow

Title: Modelling pathological brain dynamics

Abstract:

Disorders of the brain can often result in alterations to its large-scale dynamics. An example is epilepsy, in which electrographic measurements display abnormal rhythms, particularly during seizures. Understanding why these dynamics are generated is challenging, particularly in the clinical setting, but better insight could help to improve diagnosis and treatment. In this talk I will discuss a particular approach to this problem, using mathematical models of large-scale brain networks to understand pathological dynamics. I will demonstrate how the study of such models can lead to new insight into the generation of seizures, and how models can be combined with clinical data to generate predictions for the surgical treatment of epilepsy.

December, 6th – Armin Lak

Title: Dopaminergic and prefrontal basis of learning from sensory confidence and reward value

Abstract:

Deciding between stimuli requires combining their learned value with one’s sensory confidence. We trained mice in a visual task that probes this combination. Mouse choices reflected not only present confidence and past rewards but also past confidence. Their behaviour conformed to a model that combines signal detection with reinforcement learning. In the model, the predicted value of the chosen option is the product of sensory confidence and learned value. We found precise correlates of this variable in the pre-outcome activity of midbrain dopamine neurons and of medial prefrontal cortical neurons. However, only the latter played a causal role: inactivating medial prefrontal cortex before outcome strengthened learning from the outcome. Dopamine neurons played a causal role only after outcome, when they encoded reward prediction errors graded by confidence, influencing subsequent choices. These results reveal neural signals that combine learned value with sensory confidence before choice outcome and guide subsequent learning.


November, 29th – Nothing!!

 

November, 22nd – Bernhard Staresina

Title: Memory consolidation during sleep: Mechanisms and representations

Abstract:

In this talk, I will first present direct recordings from the human hippocampus during natural sleep. Analyses focus on the question how different sleep signatures (slow oscillations, spindles and ripples) interact and may facilitate hippocampal-neocortical information transfer. I will then turn to memory representations being reactivated during sleep. Using targeted memory reactivation, we show that sleep spindles seem to facilitate content-specific consolidation.

November, 15th – Jacques-Donald Tournier

Title: Multi-shell diffusion MRI and its applications in the neonatal brain

Abstract:

Recent advances in MRI acquisition now allow the routine acquisition of large amounts of so-called multi-shell diffusion MRI data within reasonable time frames. This opens up exciting new possibilities, but also brings additional challenges. This talk will present new methods for the acquisition and analysis of such data, both at the single-subject and at the group level. The talk will focus primarily (but not exclusively) on applications within the neonatal brain, using data acquired as part of the developing human brain connectome project.

November, 8th – Dan Goodman

Title: The Reluctant Machine Learner

Abstract:

The unique quality of the brain is that it can perform difficult tasks.

The traditional approach to modelling in neuroscience, though, has focussed on simple tasks, because those were the only ones we could model. Recently, that has all changed with the advent of powerful new methods from machine learning that can recognise some images better than humans, for example. I will argue that we have to study the brain solving difficult tasks, and therefore we have to be using techniques from machine learning because these are the only known methods that enable us to do that. However, that doesn’t mean that the brain is at all like the current best known machine learning models. Those models miss out on a lot of important points, like temporal dynamics and spiking neurons. Moreover, they make mistakes that humans would never make and require vastly more data than we do to learn. Despite these issues, neuroscience has a lot to gain from adopting machine learning methods, and I’ll talk about a couple of ongoing projects in my lab that attempt to use machine learning methods in a way that is more compatible with traditional neural modelling: modelling speech recognition in the auditory system; and trying to understand the computational role of the heterogeneity observed in real brains.

 

November, 1st – Christina Buetfuring

Title: Decision coding by layer 2/3 neurons in primary somatosensory cortex

Abstract:

Sensory information enables us to make informed choices that are critical for survival. While primary sensory areas provide information on sensory stimuli, behaviourally-relevant decision-making variables have been shown to be represented in higher-order association cortices. Therefore, sensory coding and decision-making are typically studied under the assumption of anatomical separation. Neurons in the superficial layers of the whisker region of primary somatosensory cortex (S1), barrel cortex, not only receive somatotopically mapped bottom-up inputs from the thalamorecipient layer 4 but also lateral projections from neighbouring barrels and top-down projections from higher cortical areas. Therefore, layer 2/3 (L2/3) neurons in barrel cortex are a prime candidate for providing an intersection of sensory processing and decision-making in complex behavioural tasks. Previous work using electrophysiological recordings in monkeys, rats and mice has not found conclusive choice activity in S1, but was limited to low numbers of neurons. Studies using two-photon calcium imaging found that some behavioural aspects modulate activity in L2/3 barrel cortex neurons. It is unclear, however, whether the signal difference across trial types in those studies reflects choice-related signals or a modulation of activity by action-related variables such as motivation or movement preparation. Here, we used two-photon calcium imaging of neurons in L2/3 mouse barrel cortex during a cued texture discrimination task with two lickports to determine whether these neurons can code for behaviourally-relevant decision variables. We found neurons carrying information about the stimulus irrespective of the behavioural outcome ('stimulus neurons') as well as neurons whose activity carried information about the choice to be made ('decision neurons'). Choice-related activity in decision neurons is not driven by signals related to motor output, but instead follows stimulus presentation. Furthermore, ambiguous population coding of decision neurons predicts miss trials, and an improvement in categorical coding in decision neurons coincides with learning the stimulus-choice association. Our identification of neurons encoding stimulus and behaviourally-relevant decision signals within the same circuit suggests a direct involvement of L2/3 S1 in the decision-making process.

Location: GEOG BLDG G.11N SR1


October, 25th – first year student projects

October, 18th – first year student projects

 

October, 11th – Cian O’Donnell

Title: Neural variability in Autism

Abstract:

Autistic people often have sensory processing deficits, and we would like to understand why. One clue comes from the observation that Autistic people's EEG and fMRI responses to sensory stimuli are more variable than those of neurotypical people. We used in vivo two-photon calcium imaging of populations of layer 2/3 cortical neurons in young wild-type and Fragile-X Syndrome mouse models to search for three aspects of such variability at a cellular level: 1) across single trials from identical stimuli in the same animal, 2) across animals of the same age, and 3) longitudinally across days in the same animals. I will present what we found. Work with Beatriz Mizusaki (Univ of Bristol), Nazim Kourdougli, Anand Suresh, and Carlos Portera-Cailliau (Univ of California, Los Angeles).

Location: PHYS BLDG 3.34

October, 4th – Dimitris Pinotsis

Abstract:

In this talk, I will discuss how deep neural networks can reveal semantic and biophysical properties of memory representations in the brain (neural ensembles or cell assemblies).

First, I will consider a flexible decision-making paradigm and show that deep neural networks allow us to understand the sensory domains and semantics different brain areas prefer (motion vs color) and code (sensory signals vs abstract categories) respectively. These results will also suggest a way for studying sensory and categorical representations in the brain by combining behavioural and neural network models.

Then, I will show that deep neural networks can also reveal cortical connectivity in neural ensembles and explain a well-known behavioral effect in psychophysics, known as the oblique effect. This work will also introduce a new mathematical approach for identifying neural ensembles that exploits a combination of machine learning, biophysics and brain imaging.