Winter & Spring 2023

20th January 2023

Claudia Clopath

1:00 PM online: https://bristol-ac-uk.zoom.us/j/94138286231?pwd=MlRURE1SWjR6OTZCR1Fnak9QbGxhUT09

Meeting ID: 941 3828 6231 Passcode: 277162

Theory of neural perturbome

To unravel the functional properties of the brain, we need to untangle how neurons interact with each other and coordinate in large-scale recurrent networks. One way to address this question is to measure the functional influence of individual neurons on each other by perturbing them in vivo. Application of such single-neuron perturbations in mouse visual cortex has recently revealed feature-specific suppression between excitatory neurons, despite the presence of highly specific excitatory connectivity, which was deemed to underlie feature-specific amplification. Here, we studied which connectivity profiles are consistent with these seemingly contradictory observations, by modeling the effect of single-neuron perturbations in large-scale neuronal networks. Our numerical simulations and mathematical analysis revealed that, contrary to the prima facie assumption, neither inhibition dominance nor broad inhibition alone was sufficient to explain the experimental findings; instead, strong and functionally specific excitatory–inhibitory connectivity was necessary, consistent with recent findings in the primary visual cortex of rodents. Such networks had a higher capacity to encode and decode natural images, and this was accompanied by the emergence of response gain nonlinearities at the population level. Our study provides a general computational framework to investigate how single-neuron perturbations are linked to cortical connectivity and sensory coding, and paves the way to mapping the perturbome of neuronal networks in future studies.
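As a toy illustration of the kind of model analysis described above (a sketch of standard linear-response theory, not the speaker's actual model), the steady-state effect of perturbing one neuron in a linearised rate network r = Wr + h is the corresponding column of (I − W)⁻¹:

```python
import numpy as np

# Hypothetical sketch: in a linear rate network r = W r + h, the steady-state
# effect of a small input perturbation to neuron j on every neuron in the
# network is column j of the matrix (I - W)^{-1}.
rng = np.random.default_rng(0)
N = 200
g = 0.8                                           # coupling strength (< 1 keeps the network stable)
W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # random recurrent weights

influence = np.linalg.inv(np.eye(N) - W)          # linear "perturbome" matrix

dh = np.zeros(N)
dh[0] = 1.0                                       # perturb neuron 0 only
response = influence @ dh                         # network-wide steady-state effect

# The response to perturbing neuron 0 is exactly column 0 of the matrix.
assert np.allclose(response, influence[:, 0])
```

Real analyses of this kind work with structured (feature-tuned, excitatory–inhibitory) weight matrices rather than the unstructured random W used here.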


13th January 2023

Kyle Wedgwood

1:00 PM BIOMED BLDG C44 and online: https://bristol-ac-uk.zoom.us/j/94138286231?pwd=MlRURE1SWjR6OTZCR1Fnak9QbGxhUT09

Meeting ID: 941 3828 6231 Passcode: 277162

Closed-Loop Interrogation of the Dynamics of Neuroendocrine Cells

This talk will discuss how mathematical modelling can be embedded within experimental protocols to study electrical behaviour in neurons and neuroendocrine cells in which delays play an important role. We discuss three examples, the first of which explores the capability of a neuron that is synaptically coupled to itself to store and repeat patterns of precisely timed spikes, which we regard as single-cell ‘memories’. Drawing on analogies from semiconductor lasers, we append a delayed self-coupling term to the oft-studied Morris-Lecar model of neuronal excitability and use bifurcation analysis to predict the number and type of memories the neuron can store. These results highlight the delay period as an important parameter controlling the storage capacity of the cell. We then use the dynamic clamp protocol to introduce self-coupling to a mammalian cell and confirm the existence of the spiking patterns predicted by the model analysis. The second example covers preliminary work investigating the origin of pulsatile secretion in corticotrophs in the pituitary gland. Such pulsatility has previously been conjectured to be strongly coupled to the delay between secretion from the corticotrophs and feedback from the adrenal glands. Here, we combine Ca2+ imaging, mathematical modelling and dynamic perfusion to explore how delays influence the behaviour of this combined system. The final example will explore how techniques combining control theory and bifurcation analysis with dynamic clamp can be used to probe single-cell electrical excitability.
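The delayed self-coupling idea in the first example can be sketched numerically. The following is a minimal, hypothetical illustration assuming a standard Type-II Morris-Lecar parameter set and a smooth sigmoidal feedback current g_fb·f(V(t − D)); the actual coupling form used in the talk may differ:

```python
import numpy as np

# Minimal sketch: a Morris-Lecar neuron with a delayed self-coupling current,
# integrated with Euler's method and a circular delay buffer. Parameters are
# a standard Type-II set; the feedback form is an illustrative assumption.
C, g_L, g_Ca, g_K = 20.0, 2.0, 4.0, 8.0
V_L, V_Ca, V_K = -60.0, 120.0, -84.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
I_app = 90.0          # constant drive
g_fb, D = 1.0, 50.0   # feedback strength and delay (ms)

dt, T = 0.05, 1000.0
steps = int(T / dt)
delay_steps = int(D / dt)

def m_inf(V): return 0.5 * (1 + np.tanh((V - V1) / V2))
def w_inf(V): return 0.5 * (1 + np.tanh((V - V3) / V4))
def tau_w(V): return 1.0 / np.cosh((V - V3) / (2 * V4))

V_hist = np.full(delay_steps, -60.0)  # buffer holding the last D ms of voltage
V, w = -60.0, w_inf(-60.0)
trace = np.empty(steps)

for n in range(steps):
    V_delayed = V_hist[n % delay_steps]                  # V(t - D)
    I_fb = g_fb * 0.5 * (1 + np.tanh(V_delayed / 10.0))  # smooth delayed feedback
    dV = (I_app + I_fb - g_L * (V - V_L)
          - g_Ca * m_inf(V) * (V - V_Ca)
          - g_K * w * (V - V_K)) / C
    dw = phi * (w_inf(V) - w) / tau_w(V)
    V_hist[n % delay_steps] = V        # overwrite the oldest sample with V(t)
    V, w = V + dt * dV, w + dt * dw
    trace[n] = V
```

In the talk's setting, bifurcation analysis of this kind of delay system (rather than brute-force simulation) is what predicts the number and type of storable spiking patterns.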


6th January 2023

Anne Skeldon (Professor of Mathematics)

1:00 PM BIOMED BLDG C44 and online: https://bristol-ac-uk.zoom.us/j/91296637128?pwd=U1lkcEVobkZQVlUrTGIwUWNDUkNJdz09

Meeting ID: 912 9663 7128 Passcode: 017491

Sleep regulation: physiological mechanisms and the design of light interventions for improved sleep

In this talk I will give a brief overview of the fundamental mechanisms that are believed to underpin sleep-wake regulation (sleep homeostasis, circadian rhythmicity, light) and high-level phenomenological and neuronal models that capture these mechanisms. Using data collected in 20 people living with schizophrenia and 21 healthy (unemployed) controls, I will then discuss how data and models can be used to uncover the relative contributions of physiological and environmental factors driving different sleep phenotypes. The talk will highlight how personalised models could be used to co-design light interventions with patients.


 

Winter & Spring 2021/2022


10th June 2022

Carsen Stringer (Group Leader, HHMI Janelia Research Campus)

1:00 PM online: https://bristol-ac-uk.zoom.us/j/92144801092?pwd=YVNKeGFFRW1kb09iQnU0MkRIRzV5UT09

Meeting ID: 921 4480 1092  Passcode: 225286

Making sense of large-scale neural and behavioral data 

Large-scale neural recordings contain high-dimensional structure that cannot be easily captured by existing data visualization methods. We therefore developed an embedding algorithm called Rastermap, which captures complex temporal and highly nonlinear relationships between neurons, and provides useful visualizations by assigning each neuron to a location in the embedding space. We applied Rastermap to a variety of datasets, including spontaneous neural activity, neural activity during a virtual reality task, widefield neural imaging data from a 2AFC task, and artificial neural activity from an agent playing Atari games. We found within these datasets unique subpopulations of neurons encoding abstract elements of decision-making, the environment and behavioral states. To interrogate behavioral representations in the mouse brain, we developed a fast deep-learning model for tracking 13 distinct points on the mouse face recorded from arbitrary camera angles. The model was just as accurate as state-of-the-art pose estimation tools while being several times faster, making it a powerful tool for closed-loop behavioral experiments. Next, we aligned facial keypoints across mice in order to train a universal model to predict neural activity from behavior. The universal mouse model could predict neural activity as well as a model fit to a single mouse, showing that neural representations of behaviors are conserved across mice. The latent states extracted from the universal model contained interpretable mouse behaviors.

6th May 2022

Paul Cisek (Professor, University of Montréal)

2PM BIOMED BLDG C44 and online: https://bristol-ac-uk.zoom.us/j/92144801092?pwd=YVNKeGFFRW1kb09iQnU0MkRIRzV5UT09

Meeting ID: 921 4480 1092  Passcode: 225286

Neural dynamics of embodied decisions

Decision-making is not a single thing, but rather a variety of processes that have been gradually elaborated and refined over the brain’s long evolutionary history. In the first part of my talk, I will briefly describe that history, emphasizing the kinds of decision processes that dominated animal behavior for hundreds of millions of years. In particular, I will focus on theories of how animals make real-time decisions between action opportunities available in a dynamically changing world. I will then summarize experimental studies suggesting that these kinds of action decisions unfold in parallel across cortical and subcortical circuits, laying the foundations for more advanced aspects of primate behavior.


29th April 2022

Karan Grewal (Research Scientist, Numenta)

4:15 PM BIOMED BLDG C44 and online: https://bristol-ac-uk.zoom.us/j/92144801092?pwd=YVNKeGFFRW1kb09iQnU0MkRIRzV5UT09

Meeting ID: 921 4480 1092 Passcode: 225286

Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments

A key challenge for AI is to build systems that can adapt to changing task contexts and learn continuously. Although standard deep learning systems achieve state-of-the-art results on static benchmarks, they often struggle in dynamic scenarios where the task changes over time, a phenomenon known as catastrophic forgetting. In this talk, I will discuss how biophysical properties of dendrites in the brain and local inhibitory systems enable networks to dynamically restrict and route information in a context-specific manner. Specifically, I will highlight the performance of a deep learning architecture that embodies properties of dendrites on two separate benchmarks requiring task-based adaptation: Meta-World (a multi-task reinforcement learning environment where a robotic agent must learn to solve a variety of manipulation tasks simultaneously) and permutedMNIST (a continual learning benchmark in which the model’s prediction task changes throughout training). Analysis on both benchmarks demonstrates the emergence of overlapping but distinct and sparse subnetworks, allowing the system to fluidly learn multiple tasks with minimal forgetting. This work sheds light on how biological properties of neurons can inform deep learning systems to address dynamic scenarios that are typically impossible for traditional ANNs to solve.


22nd April 2022

H Freyja Ólafsdóttir (Assistant Professor of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour) 

1PM BIOMED BLDG C44 and online: https://bristol-ac-uk.zoom.us/j/92144801092?pwd=YVNKeGFFRW1kb09iQnU0MkRIRzV5UT09

Meeting ID: 921 4480 1092 Passcode: 225286

Hippocampal-entorhinal circuits for spatial memory

The hippocampus is important for spatial and episodic memory. Place cells – the principal cells of the hippocampus – represent information about an animal’s spatial location. Yet, during sleep and rest, place cells spontaneously recapitulate (‘replay’) past trajectories. Replay has been hypothesised to serve a variety of functions in memory. In my talk I will describe recent work I carried out which showed replay may support a dual function: underpinning both spatial planning and the consolidation of new memories. Namely, we found that during rest periods hippocampal place cells and grid cells from the deep medial entorhinal cortex (dMEC, the principal cortical output region of the hippocampus) replayed coherently. Importantly, putative dMEC replay lagged place cell replay by ~11 ms, suggesting the replay coordination may reflect consolidation. Moreover, in a separate study we found that replay occurring just before movement to, or upon arrival at, a reward site preferentially depicted locations and trajectories consistent with the animals’ current task demands, perhaps indicative of spatial planning. However, we also found replay could dynamically ‘switch’ between a planning and a consolidation mode in relation to engagement with task demands, and we found planning-like replay predicted the accuracy of imminent spatial decisions. Finally, I will discuss unpublished work showing how the formation of hippocampal-dMEC cell assemblies during encoding periods may underlie hippocampal-dMEC replay coordination, and ongoing work where we employ an ontogenetic approach to elucidate the neural circuit mechanisms of spatial memory.


8th April 2022

Mark D Humphries (Professor of Computational Neuroscience, University of Nottingham)

1PM BIOMED BLDG C44 and online: https://bristol-ac-uk.zoom.us/j/92144801092?pwd=YVNKeGFFRW1kb09iQnU0MkRIRzV5UT09

Meeting ID: 921 4480 1092 Passcode: 225286

Simultaneous and separable latent encoding of arm movement direction and kinematics in motor cortex

Little is known about if and how multiple features of movement are simultaneously encoded by population activity in motor cortex. Using neural activity from dorsal premotor cortex (PMd) and motor cortex (M1) as monkeys performed a sequential arm movement task, in this talk I will show that the direction and kinematics of arm movements are simultaneously but separably encoded in the low-dimensional trajectories of population activity. Trajectories of population activity encoded the direction of arm movement, with the distances between neural trajectories proportional to the difference in angle between the directions they encoded. By contrast, different durations of arm movements in the same direction were encoded by how long the neural trajectory took to traverse. A recurrent neural network (RNN) model of our results suggested that direction and duration could be independently controlled by, respectively, rotating the inputs to motor cortex and scaling the effective neuron time constant within motor cortex. Our results propose a mechanism for the simultaneous yet independent control of multiple arm movement features by motor cortex.


1st April 2022

Youssef Mohamed (Doctoral Student, KTH Royal Institute of Technology)

1PM BIOMED BLDG C44 and online: https://bristol-ac-uk.zoom.us/j/92144801092?pwd=YVNKeGFFRW1kb09iQnU0MkRIRzV5UT09

Meeting ID: 921 4480 1092 Passcode: 225286

Human-Robot Interaction and Social Robotics

How robots perceive humans is as important as how humans perceive robots. Hence, developing systems that are able to understand and rationalize our actions is the first step to creating more socially aware robots. Nonetheless, doing so can be “complicated”, as human actions can sometimes be illogical, but patterns can be detected and inferences can be made using AI. In this seminar we will discuss state-of-the-art approaches to detecting human intentions and internal states, and how those systems are able to do so. Furthermore, we will also bring up some of the moral and societal concerns that those systems invoke.


25th March 2022

Dr Alexandra Keinath (Postdoctoral Fellow, McGill University)

2PM BIOMED BLDG C44 and online: https://bristol-ac-uk.zoom.us/j/92144801092?pwd=YVNKeGFFRW1kb09iQnU0MkRIRzV5UT09

Meeting ID: 921 4480 1092 Passcode: 225286

Dynamic maps of a dynamic world: how hippocampal and entorhinal representations cope with an ever-changing experience of an unstable world 

Extensive research has revealed that the hippocampus and entorhinal cortex maintain a rich representation of space through the coordinated activity of place cells, grid cells, and other spatial cell types. Frequently described as a ‘cognitive map’ or a ‘hippocampal map’, these maps are thought to support episodic memory through their instantiation and retrieval. Though often a useful and intuitive metaphor, a map typically evokes a static representation of the external world. However, the world itself as well as our experience of it are intrinsically dynamic. Here I will present three projects where we address how hippocampal and entorhinal representations adapt to, incorporate, and overcome these dynamics. In the first project, I will describe how boundaries dynamically anchor entorhinal grid cells and human spatial memory alike when the shape of a familiar environment is changed. In the second project, I will describe how the hippocampus maintains a representation of the recent past even in the absence of disambiguating sensory and explicit task demands, a representation which causally depends on intrinsic hippocampal circuitry. In the third project, I will describe how the hippocampus preserves a stable representation of context despite ongoing representational changes across a timescale of weeks. Together, these projects yield new insight into the dynamic and adaptive nature of our hippocampal and entorhinal representations and set the stage for exciting future work building on these techniques and paradigms.


18th March 2022

Dr Petra Fischer (Lecturer, University of Bristol)

1PM BIOMED BLDG C44

Coordination of neural activity during action choice

Life is a series of action choices. One can ignore external events or react to them. I will discuss two projects that show cortical gamma synchronization during rapid evaluation of external events and adjustment of an ongoing action. Our behavioural task is simple, but can be used as a powerful tool to understand how neural activity is coordinated within cortico-subcortical circuits to produce fundamental components of flexible behaviour. I will discuss the implications of brief periods of neural synchronization for coordination of neural activity and ideas for future projects.


11th March 2022

Darinka Trübutschek (Research Associate, Max Planck Institute for Empirical Aesthetics)

1PM Meeting ID: 921 4480 1092 Passcode: 225286

https://bristol-ac-uk.zoom.us/j/92144801092?pwd=YVNKeGFFRW1kb09iQnU0MkRIRzV5UT09

Decoding our thoughts: Tracking the contents of non-conscious working memory

Our daily and intellectual lives depend on our ability to hold information in mind for immediate use. Despite a rich research history, cracking the neuro-cognitive code of such working memory remains one of the most important challenges for neuroscience to date. According to the predominant theoretical stance, maintaining information in working memory requires conscious, effortful activity sustained over the entire delay period. However, this might reflect only the tip of the iceberg. Recent studies have cast doubt on these long-standing assumptions, showing that (1) even subliminal stimuli may be stored for several seconds in non-conscious working memory, and that (2) memories might also be retained in the absence of accompanying neural activity in activity-silent working memory, by means of slowly decaying synaptic changes. During this talk, I will present evidence from a series of behavioral and magnetoencephalography experiments that help to reconcile these diverging frameworks. Specifically, while the short-term maintenance of information may be entirely decoupled from both conscious experience and persistent neural activity, the hallmark feature of working memory – the ability to manipulate information – requires both. As such, these findings provide strong evidence against a genuinely non-conscious ‘working’ memory, and instead point towards the existence of an activity-silent short-term memory. Moreover, they offer a theoretical framework for how active and/or conscious and activity-silent and/or non-conscious brain processes may interact to support working memory.


25th February 2022

Dr Charlotte Horne (Senior Research Associate, University of Bristol)

1PM BIOMED BLDG C44

The role of cognitive control in treatment-resistant schizophrenia and the relationship with sleep loss

In part 1 of the talk, I will present recent work completed during my post-doc at King’s College London. Approximately one third of patients with schizophrenia fail to respond to antipsychotic medication – termed ‘treatment resistance’ – and the underlying mechanisms remain unclear. Here, I present data suggesting poor cognitive control may be a mechanism underlying treatment-resistant schizophrenia. We investigated effective connectivity within a network of interacting regions responsible for cognitive control, sensory and reward processing using Dynamic Causal Modelling (DCM) whilst patients performed an fMRI reward learning task. Treatment-resistant patients had reduced top-down connectivity compared to treatment-responsive patients, which may be underpinned by a glutamatergic abnormality (as measured using MR spectroscopy). Our findings suggest that treatment resistance may represent a subtype of schizophrenia with a distinct underlying mechanism that is not targeted by current medication.

In part 2 of the talk, I will (informally) present some of my current interests around the causal role of sleep loss in the development of mental illness (e.g. depressive and psychotic symptoms), in particular the effects of sleep/circadian disruption on cognitive control and what makes some people more vulnerable to these effects. I hope to create more of an informal discussion around these ideas, as I am really keen to hear more about similar research taking place in Bristol and to collaborate.


18th February 2022

Karan Grewal (Research Scientist, Numenta)

4PM Meeting ID: 921 4480 1092 Passcode: 225286

https://bristol-ac-uk.zoom.us/j/92144801092?pwd=YVNKeGFFRW1kb09iQnU0MkRIRzV5UT09

Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments

A key challenge for AI is to build systems that can adapt to changing task contexts and learn continuously. Although standard deep learning systems achieve state-of-the-art results on static benchmarks, they often struggle in dynamic scenarios where the task changes over time, a phenomenon known as catastrophic forgetting. In this talk, I will discuss how biophysical properties of dendrites in the brain and local inhibitory systems enable networks to dynamically restrict and route information in a context-specific manner. Specifically, I will highlight the performance of a deep learning architecture that embodies properties of dendrites on two separate benchmarks requiring task-based adaptation: Meta-World (a multi-task reinforcement learning environment where a robotic agent must learn to solve a variety of manipulation tasks simultaneously) and permutedMNIST (a continual learning benchmark in which the model’s prediction task changes throughout training). Analysis on both benchmarks demonstrates the emergence of overlapping but distinct and sparse subnetworks, allowing the system to fluidly learn multiple tasks with minimal forgetting. This work sheds light on how biological properties of neurons can inform deep learning systems to address dynamic scenarios that are typically impossible for traditional ANNs to solve.


11th February 2022

Dr Lillian J Brady (Postdoctoral Research Fellow, Vanderbilt University)

3PM Meeting ID: 937 0313 1980 Passcode: 422725

https://bristol-ac-uk.zoom.us/j/93703131980?pwd=MFBNbDU4bFcxeTRRL3NxdlU2SVZ4UT09

Sex differences in cholinergic regulation of dopamine release through nicotinic receptors mediate sexually dimorphic behavior

Dopamine release dynamics in the mesolimbic dopamine pathway, which connects the ventral tegmental area (VTA) to the nucleus accumbens (NAc), are an essential component of the process that controls motivation and reward-seeking behavior in substance use disorder. In the NAc specifically, tonic and phasic dopamine release is known to play a critical role in converting information about environmental reward-predictive cues into anticipated rewarding outcomes, and is heavily modulated by the activity of cholinergic (ChAT) interneurons signaling through nicotinic acetylcholine receptors (nAChRs). Using operant conditioning and fast-scan cyclic voltammetry combined with pharmacology, we defined sex differences in ChAT regulation of dopamine release underlying sex-specific motivational strategies for non-drug rewards. We find critical differences in cholinergic regulation of dopamine terminals that underlie distinct differences in behavioral strategies between males and females.


26th November 2021

Dr Emma Cahill (Lecturer, University of Bristol)

1PM BIOMED BLDG C44

BLA, BLA, BLA… what about the BNST!?: Role of the amygdala and extended amygdala circuits for the detection of threat cues in rats

Learning which threats to avoid is key for survival. We can model how a rodent learns about threats using Pavlovian associative conditioning tasks, where the rodent learns that a specific conditioned stimulus is predictive of an aversive event. The imminence of a threat is assessed either by the physical proximity of the threat cue (close/distant) or by a psychological prediction of its likelihood (a recognisable predictable/unpredictable event). There is a wealth of data regarding the neurochemical mechanisms that take place within the amygdala nuclei, basolateral (BLA) and central (CeN), for this associative learning to occur. However, the so-called extended amygdala, the bed nuclei of the stria terminalis (BNST), seems to play a role specifically in the detection of ambiguous threat cues, more akin to anxiety-like responses. In this talk, I will present some recent data on the activation of the amygdala and extended amygdala after modifications of the threat cue. In one set of experiments, the predictability of the CS-US relationship was modified by changing the reinforcement contingency during training. In a second set of experiments, the salience of the CS was modified at test to reduce its detectability. There remain many open questions regarding the conditions under which the specific subnuclei of the BNST become recruited and how they regulate interconnected amygdala circuits to influence responding to threats.


19th November 2021

Anne Collins (Assistant Professor, University of California, Berkeley)

3PM Meeting ID: 937 0313 1980 Passcode: 422725

https://bristol-ac-uk.zoom.us/j/93703131980?pwd=MFBNbDU4bFcxeTRRL3NxdlU2SVZ4UT09

Executive contributions to reinforcement learning computations in humans

The study of the neural processes that support reinforcement learning has been highly successful. It has characterized a simple brain network (including cortico-basal ganglia loops and dopaminergic signalling) that enables animals to learn to make valuable choices using valenced outcomes. However, increasing evidence shows that the story is more complex in humans, where additional processes also contribute importantly to learning. In this talk, I will show three examples of how prefrontal-dependent executive processes are essential to reinforcement learning in humans, operating both in parallel to the brain’s reinforcement learning network and by feeding information into this network.
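The basic reinforcement learning computation the talk builds on can be sketched with a delta-rule learner choosing among actions with valenced outcomes (the task and parameters below are illustrative, not from the talk):

```python
import numpy as np

# Minimal delta-rule sketch of the core reinforcement learning computation:
# action values are nudged toward received outcomes by a reward prediction
# error, and choices follow a softmax over values. Illustrative example only.
rng = np.random.default_rng(0)
n_actions, alpha, beta = 3, 0.3, 5.0
p_reward = np.array([0.2, 0.5, 0.8])   # hidden reward probability per action
Q = np.zeros(n_actions)                # learned action values

for t in range(2000):
    probs = np.exp(beta * Q) / np.exp(beta * Q).sum()  # softmax choice rule
    a = rng.choice(n_actions, p=probs)
    r = float(rng.random() < p_reward[a])              # binary valenced outcome
    Q[a] += alpha * (r - Q[a])                         # prediction-error update
```

The talk's point is that in humans this simple value-learning loop is supplemented by prefrontal executive processes (e.g. working memory), which such a bare model omits.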


12th November 2021

Dr Abhishek Banerjee (Senior Lecturer, University of Newcastle)

1PM BIOMED BLDG C42

Prefrontal reprogramming of sensory cortex: Cellular and computational principles 

Animals adapt their behaviour in response to variable changes in reward reinforcement. Value-based decision-making involves multiple cognitive maps across distributed brain areas. It is less clear which brain regions are essential, and how flexible re-mapping of neural responses guides adaptive behaviour. In this talk, I will highlight behavioural-neural interactions between orbitofrontal and somatosensory circuits that implement flexible decision-making. I will present further evidence of how some of these functions are disrupted in autism spectrum disorders, arguing for a new conceptual framework based on computational psychiatry to understand cognitive pathophysiology in neurological disorders.


22nd October 2021

Dr Michele Veldsman (Research Scientist, University of Oxford)

1PM GEOG BLDG G.11N SR1

MRI Markers of vascular cognitive impairment

Cerebrovascular risk factors increase the likelihood of dementia. High cerebrovascular burden leads to vascular dementia, accounting for around 20% of dementia cases. Less well appreciated is that up to 70% of patients with Alzheimer’s disease (AD) also have cerebrovascular disease pathology at post-mortem. The impact of mixed pathologies is likely greatly underestimated. Studies of neurodegenerative dementias rarely control for cerebrovascular burden beyond age and obvious magnetic resonance imaging (MRI) markers like white matter hyperintensities (WMHs). In this talk, I will show work investigating MRI markers of cerebrovascular burden in healthy ageing in 22,000 people from the UK Biobank. I will demonstrate new methods for estimating the spatial distribution of cerebrovascular risk-related WMHs and their impact on cognition. I will also present work on the importance of microstructural integrity of normal-appearing white matter and integrity of grey matter in distributed brain networks for the preservation of cognitive function in healthy ageing and after ischaemic stroke. Together, I will build up a picture of the important MRI markers of cerebrovascular burden that may act as transdiagnostic markers of cognitive impairment.

Autumn 2021

All Neural Dynamics Forum talks during Autumn 2021 will take place online through Zoom unless specified otherwise, details as below.

Meeting ID: 932 7630 6396
Password: 815933
https://zoom.us/s/93276306396


1PM September 24th – Dr Mike Ambler (Clinical Lecturer, University of Bristol)

GEOG BLDG G.11N SR1

Torpor TRAP: from mice to Mars

Torpor is a naturally occurring hypothermic, hypometabolic state employed by a wide range of species in response to a paucity of food. It can be brief, as seen in daily heterotherms, or prolonged, as seen in seasonal hibernators. Understanding the neural control of torpor might allow synthetic torpor states to be induced in species for which it is not an extant behaviour, which might have useful clinical or space-travel applications. I will discuss the approach to finding the neural switch for a behaviour about which little is known. I will present data from the mouse (a daily heterotherm) in which the use of targeted recombination in active populations (TRAP) allows dissection of the role of specific hypothalamic nuclei in the induction of torpor. From this work, the preoptic area of the hypothalamus (POA) emerges as a central region capable of triggering torpor in the mouse. Finally, I will show that in the rat, which does not naturally enter torpor, chemogenetic activation of the POA induces a state with remarkable similarities to torpor.


2PM September 17th – Xiaosi Gu (Icahn School of Medicine at Mount Sinai)

The Social Brain: From Models To Mental Health

Given the complex and dynamic nature of our social relationships, the human brain needs to quickly learn and adapt to new social situations. The breakdown of any of these computations could lead to social deficits, as observed in many psychiatric disorders. In this talk, I will present our recent neurocomputational and intracranial work that attempts to model both 1) how humans dynamically adapt beliefs about other people and 2) how individuals can exert influence over social others through model-based forward thinking. Lastly, I will present our findings of how impaired social computations might manifest in different disorders such as addiction, delusion, and autism. Taken together, these findings reveal the dynamic and proactive nature of human interactions as well as the clinical significance of these high-order social processes.

Spring 2021

All Neural Dynamics Forum talks during Spring 2021 will take place online through Zoom unless specified otherwise, details as below.

Meeting ID: 932 7630 6396
Password: 815933
https://zoom.us/s/93276306396


1PM June 25th – Professor Steve Coombes (University of Nottingham)
* Note: Forum will take place in person outdoors at Royal Fort Gardens 

Next generation neural field modelling

Neural mass models have been actively used since the 1970s to model the coarse-grained activity of large populations of neurons and synapses. They have proven especially fruitful for understanding brain rhythms. However, although motivated by neurobiological considerations, they are phenomenological in nature, and cannot hope to recreate some of the rich repertoire of responses seen in real neuronal tissue. In this talk I will discuss a simple spiking neuron network model that has recently been shown to admit an exact mean-field description for synaptic interactions. This has many of the features of a neural mass model, coupled to an additional dynamical equation that describes the evolution of population synchrony. I will show that this next generation neural mass model is ideally suited to understanding beta-rebound. This is readily observed in MEG recordings, whereby motor action causes a drop in the beta power band attributed to a loss of network synchrony. Existing neural mass models are unable to capture this phenomenon (event-related desynchronisation), since they do not track any notion of network coherence (only firing rate). I will spend the latter part of my talk discussing patterns and waves in a spatially continuous non-local extension of this model, highlighting its usefulness for large-scale cortical modelling.
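One concrete example of such an exact mean-field reduction is the Montbrió–Pazó–Roxin model for heterogeneous quadratic integrate-and-fire networks, whose population firing rate r and mean membrane potential v obey two coupled ODEs. A minimal simulation sketch, with illustrative parameter values (not necessarily those used in the talk):

```python
import numpy as np

# Sketch of the exact mean-field ("next generation") equations of Montbrió,
# Pazó & Roxin (2015): r is the population firing rate, v the mean membrane
# potential, Delta the spread of neuronal excitabilities, J the coupling.
tau, Delta, eta_bar, J = 1.0, 1.0, -5.0, 15.0

def rhs(r, v, I_ext):
    dr = (Delta / (np.pi * tau) + 2.0 * r * v) / tau
    dv = (v * v + eta_bar + J * tau * r + I_ext - (np.pi * tau * r) ** 2) / tau
    return dr, dv

dt, steps = 1e-3, 40_000
r, v = 0.1, -2.0
rates = np.empty(steps)
for n in range(steps):
    I_ext = 3.0 if 10_000 <= n < 20_000 else 0.0  # transient stimulus pulse
    dr, dv = rhs(r, v, I_ext)
    r, v = r + dt * dr, v + dt * dv
    rates[n] = r
```

Unlike a classical neural mass model, the pair (r, v) jointly tracks within-population synchrony, which is what lets this family of models capture phenomena such as beta-rebound.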


1PM June 11th – Dr Michael Proulx (Reader in Psychology, University of Bath)

Understanding spatial cognition through virtual and augmented reality

Spatial knowledge is key for almost everything we do. The study of spatial cognition is now able to take advantage of advances in computational modelling, Virtual Reality and Augmented Reality with important implications for theory and application. I will explore these issues through a few case studies of our research, including: the use of eye-tracking with interactive virtual environments; visual impairments and pain to explore multisensory experiences of space; and using motion-tracking and augmented reality to assess the presentation of visual information in tactile or auditory displays to the blind or blindfolded. These approaches and immersive technologies hold great potential for advancements in fundamental and translational neuroscience.


1PM June 4th – Professor Peter Ashwin (Professor of Mathematics, University of Exeter)

Excitable Networks in Continuous Time Recurrent Neural Networks

Continuous time recurrent neural networks (CTRNN) are systems of coupled ordinary differential equations that are simple enough to be insightful for describing learning and computation, from both biological and machine learning viewpoints. We describe a direct constructive method of realising finite state input-dependent computations on an arbitrary directed graph. The constructed system has an excitable network attractor, and the resulting CTRNN exhibits intermittent dynamics: trajectories spend long periods of time close to steady-state, with rapid transitions between states. Depending on parameters, transitions between states can either be excitable (inputs or noise needs to exceed a threshold to induce the transition), or spontaneous (transitions occur without input or noise). In the excitable case, we show the threshold for excitability can be made arbitrarily sensitive.
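For readers unfamiliar with the model class, the standard CTRNN equations can be sketched numerically as below. This is a generic illustration with arbitrary parameters, not the specific graph construction described in the talk.

```python
import numpy as np

def ctrnn_step(y, W, theta, tau, I, dt):
    """One Euler step of the standard CTRNN equations:
    tau_i dy_i/dt = -y_i + sum_j W_ij * sigma(y_j + theta_j) + I_i
    where sigma is the logistic function."""
    sigma = 1.0 / (1.0 + np.exp(-(y + theta)))  # bounded firing rates
    return y + dt * (-y + W @ sigma + I) / tau

# Tiny 3-unit network with arbitrary (illustrative) parameters.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 1.5, size=(3, 3))   # recurrent weights
theta = np.zeros(3)                     # biases
tau = np.ones(3)                        # time constants
I = np.zeros(3)                         # external input

y = rng.normal(0.0, 0.1, size=3)
trace = []
for _ in range(5000):
    y = ctrnn_step(y, W, theta, tau, I, dt=0.01)
    trace.append(y.copy())
trace = np.asarray(trace)  # (time, units); bounded because sigma is bounded
```

Because the nonlinearity is bounded, trajectories remain bounded regardless of the weights, which is what makes the slow drift between near-steady states described in the abstract possible.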


1PM May 28th – Professor Andre Fenton (Professor of Neural Science, New York University)

Memory, learning to learn, and control of cognitive representations

Biological neural networks can represent information in the collective action potential discharge of neurons, and store that information amongst the synaptic connections between the neurons that both comprise the network and govern its function. The strength and organization of synaptic connections adjust during learning, but many cognitive neural systems are multifunctional, making it unclear how continuous activity alternates between transient, discrete cognitive functions such as encoding current information and recollecting past information, without changing the connections amongst the neurons. This lecture will first summarize our investigations of the molecular and biochemical mechanisms that change synaptic function to persistently store spatial memory in the rodent hippocampus. I will then report on how entorhinal cortex-hippocampus circuit function changes during cognitive training that creates memory, as well as learning to learn in mice. I will then describe how the hippocampus system operates like a competitive winner-take-all network that, based on the dominance of its current inputs, self-organizes into either the encoding or recollection information processing modes. We find no evidence that distinct cells are dedicated to those two distinct functions; rather, activation of the hippocampus information processing mode is controlled by a subset of dentate spike events within the network of learning-modified, entorhinal-hippocampus excitatory and inhibitory synapses.


1PM May 14th – Dr Sarah Morgan (Senior Research Associate, University of Cambridge Brain Mapping Unit)

What can brain MRI tell us about schizophrenia?

Schizophrenia is an extremely debilitating disease, which affects approximately 1% of the population during their lifetime. However, to date there are no known biomarkers for schizophrenia, the biological mechanisms underpinning the disease are unclear, and there has been correspondingly little progress on new therapeutics. In this talk, I will discuss how MRI brain imaging can begin to address these challenges, using data from the Psyscan project (http://psyscan.eu/). In the first part of the talk, I will show how morphometric similarity mapping and imaging transcriptomics can shed fresh light on structural brain differences in schizophrenia (Morgan et al, PNAS 2019). In the second part, I will examine the extent to which MRI data can be used to distinguish patients with schizophrenia from healthy volunteers. We find that fMRI can do this with high accuracy, based on a reproducible pattern of cortical features associated with neurodevelopment (Morgan*, Young* et al, BP:CNNI 2020). Overall, we begin to see how MRI can give us a more integrative understanding of schizophrenia, which might inform future treatments.


1PM May 7th – Professor Daniel Wolpert (Professor of Neuroscience, Zuckerman Mind Brain Behaviour Institute, Columbia University)

Computational principles underlying the learning of sensorimotor repertoires

Humans spend a lifetime learning, storing and refining a repertoire of motor memories appropriate for the multitude of tasks we perform. However, it is unknown what principle underlies the way our continuous stream of sensorimotor experience is segmented into separate memories and how we adapt and use this growing repertoire. I will review our work on how humans learn to make skilled movements focussing on the role of context in activating motor memories and how statistical learning can lead to multimodal object representations. I will then present a principled theory of motor learning based on the key insight that memory creation, updating, and expression are all controlled by a single computation – contextual inference. Unlike dominant theories of single-context learning, our repertoire-learning model accounts for key features of motor learning that had no unified explanation and predicts novel phenomena, which we confirm experimentally. These results suggest that contextual inference is the key principle underlying how a diverse set of experiences is reflected in motor behavior.


2PM April 30th – Tali Sharot (UCL)

How People Decide What They Want to Know: Information-Seeking and the Human Brain

The ability to use information to adaptively guide behavior is central to intelligence. A vital research challenge is to establish how people decide what they want to know. In this talk I will present our recent research characterizing three key motives of information seeking. We find that participants automatically assess (i) how useful information is in directing action, (ii) how it will make them feel, and (iii) how it will influence their ability to predict and understand the world around them. They then integrate these assessments into a calculation of the value of information that guides information-seeking or its avoidance. These diverse influences are captured by separate brain regions along the dopamine reward pathway and are differentially modulated by pharmacological manipulation of dopamine function. The findings yield predictions about how information-seeking behavior will alter in disorders in which the reward system malfunctions. We test these predictions using a linguistic analysis of participants’ web searches ‘in the wild’ to quantify their motive for seeking information and relate those to reported psychiatric symptoms. Finally, using controlled behavioral experiments we show that the three motives for seeking information appear early in development, following roughly linear trajectories.


2PM April 16th – Bard Ermentrout (University of Pittsburgh)

A Robust Neural Integrator Based on the Interactions of Three Time Scales

Neural integrators are circuits that are able to code analogue information such as spatial location or amplitude. Storing amplitude requires the network to have a large number of attractors. In classic models with recurrent excitation, such networks require very careful tuning to behave as integrators and are not robust to small mistuning of the recurrent weights.   In this talk, I introduce a circuit with recurrent connectivity that is subjected to a slow subthreshold oscillation (such as the theta rhythm in the hippocampus). I show that such a network can robustly maintain many discrete attracting states. Furthermore, the firing rates of the neurons in these attracting states are much closer to those seen in recordings of animals.  I show the mechanism for this can be explained by the instability regions of the Mathieu equation.  I then extend the model in various ways and, for example, show that in a spatially distributed network, it is possible to code location and amplitude simultaneously. I show that the resulting mean field equations are equivalent to a certain discontinuous differential equation.
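The Mathieu-equation mechanism mentioned above can be illustrated with a small numerical sketch (parameters here are illustrative, not from the talk): the Mathieu equation x'' + (δ + ε·cos t)·x = 0 has instability "tongues" near δ = n²/4, so a parametrically forced mode grows exponentially inside a tongue and stays bounded outside it.

```python
import numpy as np

def mathieu_max_amplitude(delta, eps, x0=1e-3, v0=0.0, dt=0.01, T=100.0):
    """Integrate x'' + (delta + eps*cos(t)) * x = 0 with RK4 and return
    the maximum |x| reached. Exponential growth of the amplitude signals
    an instability tongue (parametric resonance)."""
    def f(t, s):
        x, v = s
        return np.array([v, -(delta + eps * np.cos(t)) * x])
    s = np.array([x0, v0])
    t, max_amp = 0.0, abs(x0)
    for _ in range(int(T / dt)):
        k1 = f(t, s)
        k2 = f(t + dt / 2, s + dt / 2 * k1)
        k3 = f(t + dt / 2, s + dt / 2 * k2)
        k4 = f(t + dt, s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        max_amp = max(max_amp, abs(s[0]))
    return max_amp

stable = mathieu_max_amplitude(delta=0.60, eps=0.2)    # away from tongues: bounded
unstable = mathieu_max_amplitude(delta=0.25, eps=0.2)  # first tongue (delta = 1/4): grows
```

In the integrator circuit of the talk, the slow subthreshold oscillation plays the role of the periodic coefficient, and whether a network state grows or decays depends on which side of a tongue boundary it sits.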


1PM April 9th – Paul Anastasiades (University of Bristol)

Circuit organisation of the rodent prefrontal thalamo-cortical system

Interactions between the thalamus and prefrontal cortex (PFC) play a critical role in cognitive function and arousal and are disrupted in neuropsychiatric disorders. The PFC is reciprocally connected with ventromedial (VM) and mediodorsal (MD) thalamus, both higher-order nuclei with properties distinct from those of the classically studied sensory relay nuclei. To understand the properties of the circuits linking PFC and thalamus we use anatomical tracing, electrophysiology, optogenetics, and 2‐photon Ca2+ imaging, determining how VM and MD target specific cell types and subcellular compartments of mouse PFC. Focusing on cortical layer 1, we find thalamic nuclei target distinct sublayers, with VM engaging NDNF+ cells in L1a, and MD driving VIP+ cells in L1b. These separate populations of L1 interneurons participate in different inhibitory networks in superficial layers by targeting either PV+ or SOM+ interneurons. NDNF+ cells mediate a unique form of thalamus-evoked inhibition at PT cells, selectively blocking VM-evoked dendritic Ca2+ spikes. Together, our findings reveal how two thalamic nuclei differentially communicate with the PFC through distinct L1 micro‐circuits and how inhibition is critical for controlling PFC output back to thalamus.

Winter 2020/2021

All Neural Dynamics Forum talks during Winter 2020/2021 will take place online through Zoom, details as below.

​Meeting ID: 932 7630 6396
Password: 815933
https://zoom.us/s/93276306396


1PM March 26th – Steve Brown (University of Zurich)

Cellular and circuit-based mechanisms underlying the daily regulation of sleep

Mammalian sleep and wake follows a complex daily pattern influenced by both a circadian clock controlling vigilance according to time of day, and a sleep homeostat controlling vigilance according to prior wake history. In this lecture, we shall consider mechanisms underlying both central and local control of these processes, and how they in turn intimately control global metabolism.


1PM March 19th – Yaara Erez (University of Cambridge)

Towards personalised neuroimaging in neurosurgery: linking brain structure and function

The importance of quality of life of patients following neurosurgery for brain tumors has been increasingly recognized in recent years. Emphasizing the balance between oncological and functional outcome, an emerging discipline at the forefront of research and patient care focuses on cognitive function. In current surgical standard practice, focal electrical stimulation on the exposed brain while patients are awake is used for mapping areas critical for motor function as well as language to prevent irreversible damage as a result of tissue removal. However, some cognitive functions are harder to map with standard stimulation alone. In the talk, I will present my work aimed at developing techniques and tools for mapping cognitive function in neurosurgery. I will focus on a particularly challenging aspect of cognition – executive functions – how we set and achieve goals, make plans, and prioritize tasks, which are essential to all aspects of our everyday life. Because of the complex nature of these functions and the distributed neural systems that support them, there are currently no established techniques for their functional mapping in neurosurgery. I will introduce a novel method for mapping executive function during awake neurosurgery using electrocorticography (ECOG) – recording directly from the surface of the brain – while patients perform cognitive tasks. I will show evidence for the feasibility and utility of this method as a first step towards establishing its foundations. Critical to bridging the translational gap and bringing neuroimaging into use in neurosurgery is our understanding of the functional role of the neural networks associated with cognitive functions and our ability to identify them in individuals. I will therefore present supporting findings for these using functional MRI (fMRI) data in healthy human volunteers. Finally, I will discuss some open questions related to developing neuroimaging tools for personalised medicine in neurosurgery.


1PM March 12th – Alexandra Constantinescu (UCL)

How do our brains form maps of the world?

Navigating our mental world is thought to be similar to navigating in the real world. In this talk, I will present behavioural and fMRI studies investigating how spatial and non-spatial memories are organized into 2D cognitive maps using grid cell-like codes in the entorhinal and medial prefrontal cortices. First, I will show a paradigm for navigation in an abstract “bird space”. Second, I will present how humans can learn long lists of words using the memory palace technique and a virtual reality task inspired by Harry Potter. And third, I will talk about a new method we’re developing for analysing human grid-like codes in more detail, using a big data approach and 7T submillimeter fMRI. Our findings have implications in understanding the remarkable capacity of humans to generalize experiences to novel situations.


1PM March 5th – Narender Ramnani (Royal Holloway)

Cerebellum and Cognition

The cerebellum is well-known for its contribution to the control of skilled movement. The mechanisms include connectivity with the motor system and the ability of its remarkable circuitry to store motor memories, including those relating to simple conditioned motor responses acquired through Pavlovian conditioning. However, some cerebellar circuitry communicates with the prefrontal cortex – including areas that have important roles in cognitive function but little to do with motor control. In this lecture I draw from theoretical neurobiology, anatomy, brain evolution and neuroimaging to address the ways in which cerebellar circuits might contribute to the skilled execution of cognitive operations, such as the instrumental learning of contingencies that link decisions with their antecedents and consequences.


2PM February 26th – Keith Doelling (Institut Pasteur in Paris)

Temporal prediction of natural rhythms in speech and music

The ability to predict the onset of future events is a critical feature for survival. Knowing in advance when some stimulus might occur improves our ability to detect, process and react to it. The neuroscientific field has broken down temporal prediction into two separate and distinct mechanisms: interval timing, the measurement and prediction of single time intervals, and rhythmic timing, the synchronization with repeated sequential intervals. This talk will probe this formulation by asking how far it can get us when dealing with realistic stimuli. Rarely in the natural world (even in music) are rhythms perfectly isochronous and rarer still are temporal intervals presented in isolation. Here we test the extension, particularly, of rhythmic processing models into more naturalistic settings in two parts. First, I will show in a series of studies that neural responses to naturalistic stimuli like speech and music are well modeled as an oscillator synchronized to quasi-rhythmic input. Second, I will present work comparing such a model with behavioral responses of participants to ambiguous rhythms, suggesting that a neural oscillator may act as a kind of rhythmic prior to improve sensory perception of quasi-rhythmic stimuli. Together, the work will present a clear direction for the study of temporal prediction in more realistic environments. It will highlight computational modeling as well as behavioral research as a critical avenue for the elucidation of neural mechanisms underlying the temporal prediction of music and of the environment at-large.


12 PM February 12th – Oliver J Robinson (UCL)

The translational cognitive neuroscience of anxiety.

Anxiety can be a normal adaptive process, but it can also become a clinical state. At both ends of the spectrum anxiety significantly alters the ways individuals make decisions and behave. However, our understanding of the mechanisms underlying such symptoms is at present limited and does not contribute to treatment development or clinical decision-making. In this talk I will outline our recent work which attempts to better understand anxiety through a combination of computational modelling of behaviour and neuroimaging of adaptive and pathological anxiety.


1PM January 29th – Anil Seth (University of Sussex)

Consciousness, complexity, and hallucination

What happens in the brain during hallucination, and how can the study of hallucination shed light on ‘normal’ conscious perception? I will describe a number of research projects applying neurodynamical analyses (e.g., complexity, Granger causality) and computational models to shed light on the brain basis of hallucinatory perception. These projects include analyses of human neuroimaging data recorded during the psychedelic state, stroboscopic-induced hallucinations, and the use of computational models of predictive perception to model diverse hallucinatory forms. I will contextualise these analyses within the framework of ‘computational neurophenomenology’ – the attempt to account for phenomenological properties of perceptual experience in terms of (models of) their underlying neural mechanisms.


1PM January 22nd – Daniel Bush (UCL)

Theta Oscillations and Phase Coding in the Mammalian Hippocampus

The mammalian hippocampus is implicated in spatial and episodic memory function. In the rodent, hippocampal network dynamics can be characterised by oscillatory activity in the 6-12Hz theta band during active behaviour, and in the 150-250Hz ripple band during quiescent waking and sleep. During these periods, hippocampal place cells encode behavioural trajectories on a compressed timescale as theta sweeps and replay events, respectively. I will present a series of MEG and intracranial EEG experiments showing that human hippocampal theta oscillations also play a role in spatial coding, functional connectivity and memory. Next, I will present theoretical work that describes how oscillatory activity can support the phase coding of information in the central nervous system. Finally, using rodent place cell recordings, I will demonstrate that the temporal code for location within a place field is preserved across different network states. In sum, these results indicate that the mammalian hippocampus consistently uses phase coding in the service of memory encoding and retrieval.


2PM  January 15th – Robb Rutledge (Yale University)

A Computational and Neural Model for Mood Dynamics

The happiness of individuals is an important metric for societies, but we know little about how daily life events are aggregated into subjective feelings. We have shown that happiness depends on the history of rewards and expectations, a result we have now replicated in thousands of individuals using smartphone-based data collection and quantified in relation to major depression (including in our new smartphone app https://thehappinessproject.app). Using fMRI, we show how happiness relates to neural activity in the ventral striatum and ventromedial prefrontal cortex. Computational modelling shows precisely how feelings vary across individuals in relation to a wide variety of factors including expectations, intrinsic reward, social comparison, and reinforcement learning.
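The published computational model of momentary happiness (Rutledge et al., PNAS 2014) is a weighted sum of exponentially discounted certain rewards, expected values, and reward prediction errors. A minimal sketch of that form is below; the weights and trial history are illustrative, not fitted values.

```python
def momentary_happiness(CR, EV, RPE, w0, w1, w2, w3, gamma):
    """Happiness(t) = w0 + w1*sum_j gamma^(t-j)*CR_j
                         + w2*sum_j gamma^(t-j)*EV_j
                         + w3*sum_j gamma^(t-j)*RPE_j
    CR: certain rewards from chosen sure options,
    EV: expected values of chosen gambles,
    RPE: reward prediction errors (outcome minus expectation),
    gamma in (0, 1): forgetting factor (recent trials weigh more)."""
    t = len(CR)
    disc = [gamma ** (t - 1 - j) for j in range(t)]
    term = lambda xs, w: w * sum(d * x for d, x in zip(disc, xs))
    return w0 + term(CR, w1) + term(EV, w2) + term(RPE, w3)

# Illustrative three-trial history: gamble, sure option, gamble.
h = momentary_happiness(
    CR=[0.0, 2.0, 0.0], EV=[1.5, 0.0, 1.0], RPE=[0.5, 0.0, -1.0],
    w0=0.5, w1=0.4, w2=0.3, w3=0.6, gamma=0.8)
```

The key feature is the forgetting factor: happiness reflects a recency-weighted history of rewards and expectations rather than cumulative earnings.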


 

Autumn 2020

All Neural Dynamics Forum talks during Autumn 2020 will take place online through Zoom, details as below.

​Meeting ID: 932 7630 6396
Password: 815933
https://zoom.us/s/93276306396


1PM December 18th Tobias U. Hauser (UCL)

Do we need a developmental computational psychiatry?

Many psychiatric disorders arise during adolescence, a time when the brain undergoes fundamental reorganisation. However, it is unclear whether and how the emergence of mental health problems is linked to aberrant neurocognitive development. In my talk, I will discuss why it is critical to understand (aberrant) cognitive and brain development if we want to better understand how mental health problems arise. I will present findings showing how psychiatric traits are linked to adolescent brain myelination, and illustrate why computational neuroscience approaches could be important in understanding psychiatric disorders.


1PM December 11th Aleks Domanski (University of Bristol)

Investigating population-level multisensory integration for predictive coding in the primary visual cortex

Why can you hear your friend more clearly in a noisy bar by watching their lips move as they speak?
During sensory processing under ambiguous conditions, integration of multisensory information may improve the extraction of input statistics and increase the accuracy of predictions about upcoming information. The computational principles underlying such a boost in predictive coding and their vulnerabilities to disruption, notably in Autistic individuals, remain poorly understood. Alongside primary sensory afferents, the recurrently connected network of primary visual cortex (V1) receives modulatory input from other sensory and frontal brain structures. Indeed, previous work demonstrates that visually tuned neurons in V1 also respond to tones and noise bursts. Recurrently connected ensembles of neurons conveying mixed combinations of audio-visual variables could, as demonstrated in higher-order brain circuits, provide a powerful computational substrate to facilitate the nonlinear classification of ambiguous sensory input. However, this is unexplored in the context of viewing more dynamically complex naturalistic scenes.
Here, I will examine a large population calcium imaging dataset (1,000–2,000 cells) from mouse V1 to study how past and current multisensory input statistics are integrated by the circuit during ambiguous natural movie viewing to improve the fidelity of predictive coding.

9AM December 4th Jihwan Myung (Taipei Medical University)

Multiple circadian clocks that are not always synchronized
 
Circadian clocks are biological clocks that maintain near 24-h periodicity with high precision. These clocks synchronize and make a robust clock when coupled. An interesting but often ignored feature of these clocks is that they do not always synchronize completely—sometimes by design. A functionally relevant case of close-to-synchronization can be found in the central clock called the suprachiasmatic nucleus (SCN), where its subpopulations deviate from phase-locking as the day-length increases, as if the degree of synchrony served as a mechanism of seasonal time encoding. We then discovered that robust circadian clocks exist outside the SCN and they are not phase-locked with the SCN clock. Since the datasets needed to make these observations have high fidelity in time, i.e., the variables are dynamic and not static, experimenters need to understand the theoretical background on oscillator systems when designing experiments and interpreting data. Conversely, theoreticians equally need to appreciate the complexity of the biological system and imperfections in experimental approaches. We discuss some cases of circadian phase coordination and close-to-synchronization behaviors at the molecular, cellular, and tissue levels, and how these can be studied by experimental and theoretical approaches.
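The synchronize-or-drift behaviour of coupled clocks can be illustrated with the textbook two-oscillator Kuramoto model (a generic sketch, not the speaker's model): two clocks with slightly different intrinsic frequencies phase-lock only when the coupling exceeds half their frequency difference.

```python
import numpy as np

def phase_difference(omega1, omega2, K, dt=0.01, T=500.0):
    """Euler-integrate two Kuramoto phase oscillators:
    d(theta_i)/dt = omega_i + K * sin(theta_j - theta_i).
    Returns the final unwrapped phase difference theta1 - theta2.
    Locking condition: K > |omega1 - omega2| / 2."""
    th1, th2 = 0.0, 0.0
    for _ in range(int(T / dt)):
        d = th2 - th1
        th1 += dt * (omega1 + K * np.sin(d))
        th2 += dt * (omega2 - K * np.sin(d))
    return th1 - th2

# Frequency difference 0.1, so the critical coupling is 0.05.
locked = phase_difference(omega1=1.05, omega2=0.95, K=0.2)     # locks: bounded difference
drifting = phase_difference(omega1=1.05, omega2=0.95, K=0.01)  # drifts: difference grows
```

Just below the locking threshold the oscillators stay "close to synchronized" for long stretches before slipping a cycle, which is the regime the abstract emphasizes as biologically meaningful.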

9AM November 20th – Woo-Young Ahn (Seoul National University)

Deep digital phenotypes using computational modeling, machine learning, and mobile technology

Machine learning has the potential to facilitate the development of computational methods that improve the measurement of cognitive and mental functioning, and adaptive design optimization (ADO) is a promising machine-learning method that might lead to rapid, precise, and reliable markers of individual differences. In this talk, I will present a series of studies that utilized ADO in the area of decision-making and for the development of ADO-based digital phenotypes for addictive behaviors. Lastly, I will introduce an open-source Python package, ADOpy, which we developed to make ADO accessible even to researchers with limited backgrounds in Bayesian statistics or cognitive modeling.

https://ccs-lab.github.io/team/young-ahn/


1PM November 13th – Naoki Masuda (University at Buffalo)

Recurrence analysis of dynamic functional brain networks of individuals with epilepsy

Functional brain networks have been suggested to vary over time. We propose a new method to characterize the dynamics of functional brain networks using so-called recurrence plots (RPs) and their quantification. RPs and recurrence quantification analysis (RQA) were originally proposed for single nonlinear time series and have been applied to a range of dynamical systems and empirical data including neural signals. Here we extend these methods to dynamic networks, where recurrence is defined at the level of the functional networks, i.e., a network recurs to a past network if the distance between the two networks is sufficiently small. For resting-state magnetoencephalographic dynamic functional networks, we found that functional networks tended to recur more rapidly in people with epilepsy than in healthy controls. For stereo electroencephalography (sEEG) data, we found that dynamic functional networks involved in epileptic seizures emerged before seizure onset, and RQA allowed us to detect seizures. The proposed methods can also be used to investigate dynamic functional networks underlying brain function in health and in other neurological disorders.
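The core construction is simple to sketch. The version below assumes a Frobenius-norm distance between functional connectivity matrices and a fixed threshold; the authors' actual distance measure and thresholding may differ.

```python
import numpy as np

def network_recurrence_plot(networks, threshold):
    """networks: array of shape (T, N, N), one functional connectivity
    matrix per time window. Returns a boolean (T, T) recurrence plot:
    R[i, j] is True when network j 'recurs' to network i, i.e. their
    Frobenius distance is below the threshold."""
    T = len(networks)
    flat = networks.reshape(T, -1)
    # Pairwise Frobenius distances between connectivity matrices.
    dists = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    return dists < threshold

def recurrence_rate(R):
    """Fraction of recurrent pairs, excluding the trivial diagonal;
    one of the simplest RQA measures."""
    T = len(R)
    return (R.sum() - T) / (T * (T - 1))

# Toy example: a slowly drifting sequence of random 8-node networks.
rng = np.random.default_rng(1)
nets = np.cumsum(rng.normal(0.0, 0.1, size=(50, 8, 8)), axis=0)
R = network_recurrence_plot(nets, threshold=2.0)
rr = recurrence_rate(R)
```

The group comparison in the abstract then reduces to comparing RQA measures (such as the recurrence rate, or times between recurrences) across patients and controls.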

http://www.buffalo.edu/cas/math/people/faculty/naoki-masuda.html


1PM November 6th – Rick Adams (UCL)

Beyond E/I imbalance – clarifying the fundamental circuit dysfunction in schizophrenia using biophysical modelling of multiple imaging paradigms

Subjects with a diagnosis of schizophrenia show consistent differences from controls in neuroimaging paradigms such as resting state (rsEEG and rsfMRI), mismatch negativity (MMN) and 40 Hz auditory steady state response (ASSR). The underlying circuit changes causing these group differences are unclear, however, and it is not known whether the same abnormalities could underlie group differences in all paradigms. Nevertheless, it is widely hypothesised that schizophrenia involves a loss of synaptic gain – e.g. due to NMDA receptor dysfunction – and disrupted ‘balance’ between excitatory and inhibitory transmission in cortical circuits. Here we analyse a neuroimaging dataset containing data from controls (n=107), subjects diagnosed with schizophrenia (Scz, n=108) and their first degree relatives (n=57) each undergoing rsEEG, rsfMRI, MMN and 40 Hz ASSR paradigms. We use a variety of dynamic causal modelling approaches to estimate synaptic gain and other circuit parameters in auditory and frontal areas. We find some striking commonalities across paradigms, not just in synaptic gain in the Scz group, but also in relationships with symptoms and cognitive function. The potential for development of a model-based biomarker of synaptic gain is discussed.

https://iris.ucl.ac.uk/iris/browse/profile?upi=RAADA06


1PM October 30th – Michael J. Frank (Brown University)

Striatal dopamine computations in learning about agency

The basal ganglia and dopaminergic systems are well studied for their roles in reinforcement learning and reward-based decision making. Much work focuses on “reward prediction error” (RPE) signals conveyed by dopamine and used for learning. Computational considerations suggest that such signals may be enriched beyond the classical global and scalar RPE computation, to support more structured learning in distinct sub-circuits (“vector RPEs”). Such signals allow an agent to assign credit to the level of action selection most likely responsible for the outcomes, and hence to enhance learning depending on the generative task statistics. I will present experimental data from mice showing spatiotemporal dynamics of dopamine terminal activity and release across the dorsal striatum in the form of traveling waves that support learning about agency.

http://ski.cog.brown.edu/


Spring 2019/2020

June, 19th – Isabelle Ferezou

Title: A mesoscopic view of tactile sensory information processing in the cerebral cortex

Abstract: Since the first description of its remarkable cellular organization by Woolsey and Van der Loos (1970), the whisker representation in the rodent primary somatosensory cortex (S1) has become a major model for studying the cortical processing of tactile sensory information. In its layer 4, neurons form clusters, called barrels, that share the same topology as the whiskers on the snout of the animal, with each barrel-associated neuronal column receiving input primarily from its corresponding whisker.

A huge amount of information has been collected over the past 50 years on the whisker sensory system; however, it remains largely unknown how this system integrates distributed information to build a global percept of the tactile scene. Working at a mesoscopic scale that allows visualizing how information flows through cortical columns and further propagates to other cortical areas is a real asset for addressing this question. Voltage sensitive dye imaging, which benefits from sub-columnar spatial resolution and millisecond time resolution, reveals how, upon tactile stimulation of a given whisker, information is rapidly transmitted to its corresponding column in S1, but also, within the next couple of milliseconds, to the secondary somatosensory cortex and then to the primary motor cortex. Using this method, we described with unprecedented precision the topography of the whisker representation, as well as the lateral propagation of sensory inputs within these cortical areas, thus providing insights into the neuronal dynamics at play in the integration of complex multi-whisker inputs in the cortical network.

 

June, 12th – Pradeep Dheerendra

Title: Dynamics underlying auditory object boundary detection and segregation

Abstract: A visual object might be easy to define and understand, but objects perceived via audition are also important. Auditory object analysis involves detecting, segregating and representing spectro-temporal regularities in the acoustic environment as stable perceptual units. The auditory system thus accomplishes the transformation of the acoustic waveform into an object-based representation. This talk focuses on two fundamental aspects of auditory object processing, viz. detection of auditory object boundaries and auditory segregation. In the first study, I present the dynamics underlying the detection of the emergence of a new auditory object in an ongoing auditory scene using MEG. I found a slow drift signal at the object boundary which I think might be the precision signal. In the second study, I present the brain basis underlying human auditory figure-ground analysis in a macaque model using fMRI and psychophysics. This has provided spatial priors for macaque neurophysiology.

June, 5th – Alex Cayco Gajic

Title: High-dimensional representations in cerebellar granule cells

Abstract: The cerebellum is thought to learn sensorimotor relationships to coordinate movement. Sensory and motor information is sent to a large number of cerebellar granule cells, which comprise the vast majority of neurons in the brain. Theoretically, this large anatomical expansion is thought to help pattern separation by representing sensorimotor information in a high-dimensional granule cell population code. However, how the granule cell population activity encodes sensory and motor information, and whether granule cell populations can support high-dimensional representations, is poorly understood. To address this, we used a high-speed random-access 3D 2-photon microscope to simultaneously monitor the Ca2+ activity in hundreds of granule cell axons of spontaneously behaving animals. We find that granule cell population activity transitions between separate, orthogonal coding spaces representing periods of quiet wakefulness vs. active movement, and that the granule cell representation is higher dimensional than has previously been observed.
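One common way to quantify the "dimensionality" of a population code discussed here is the participation ratio of the covariance eigenspectrum. This is a standard measure, sketched generically below, and not necessarily the exact analysis used in the talk.

```python
import numpy as np

def participation_ratio(activity):
    """activity: (timepoints, neurons) matrix of population activity.
    PR = (sum_i lambda_i)^2 / sum_i lambda_i^2, where lambda_i are the
    eigenvalues of the neuron-by-neuron covariance matrix. PR ranges
    from 1 (all variance along one dimension) to N (variance spread
    equally across all N dimensions)."""
    cov = np.cov(activity, rowvar=False)
    evals = np.linalg.eigvalsh(cov)
    return evals.sum() ** 2 / (evals ** 2).sum()

rng = np.random.default_rng(2)
# Low-dimensional data: 100 'neurons' all driven by one latent signal.
latent = rng.normal(size=(500, 1))
low_d = latent @ rng.normal(size=(1, 100)) + 0.01 * rng.normal(size=(500, 100))
# High-dimensional data: independent activity in every neuron.
high_d = rng.normal(size=(500, 100))

pr_low = participation_ratio(low_d)    # close to 1
pr_high = participation_ratio(high_d)  # a large fraction of 100
```

A claim like "granule cell representations are higher dimensional than previously observed" corresponds to a larger value of such a measure for the recorded population activity.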

May, 29th – Lucia Prieto Godino

Title: Evolution of olfactory systems on the fly

Abstract: Sensory systems encode the world around us to guide context-dependent appropriate behaviours that are often species-specific. This must involve evolutionary changes in the way that sensory systems extract environmental features and/or in the downstream sensory-motor transformations implemented. However, we still know little about how evolution shapes neural circuits. We are studying the olfactory system of Drosophila and tsetse flies across multiple species spanning a wide range of ecological niches and divergence times. We find divergent odour-guided behaviour towards host odours. To elucidate the cellular, circuit and molecular basis behind this behavioural evolution we are employing a multidisciplinary approach, including field work, the development of genetic tools across species, calcium imaging, single-cell transcriptomics and reconstruction of central olfactory circuits at synaptic resolution. I will discuss the progress we have made in our efforts to understand how evolution tinkers with neural circuits as animals adapt to different environments.

 

May, 22nd – Bradley Love

Title: A clustering account of spatial and non-spatial concept learning

Abstract: How do we learn to categorise novel items and what is the brain basis of these acts? For example, after a child is told an animal is a dog, how does that experience shape how she classifies future items? I will present model-based fMRI results concerning how people learn categories from examples and touch on parallel findings with monkey single-unit recordings. Our analyses indicate that the medial temporal lobe (MTL), including the hippocampus, plays an important role in both learning and recognition. Successful cognitive models, which explain both behavioural and brain measures, learn to selectively weight (i.e., attend) to stimulus aspects that are task relevant. This form of weighting, or top-down attention, can be viewed as a compression process. I will discuss how the medial prefrontal cortex (mPFC) and the hippocampus coordinate to build low-dimensional representations of learned concepts, as well as how the dimensionality of visual representations along the ventral stream is altered by the learning task. Finally, this general learning mechanism offers a straightforward account of spatial learning, including place and grid cell activity in both human and rodent studies.

 

May, 15th – Grace Lindsay

Title: Modelling the influence of feedback in the visual system

Abstract: Cortico-cortical feedback is common in the visual system and is believed to be involved in processes such as perceptual inference, attention, and learning. In this talk I will demonstrate how convolutional neural networks can be used to explore how such feedback works. In the first half of the talk, I will focus on the signals from prefrontal areas that are believed to control top-down feature attention. In the second half, I’ll discuss ongoing work on how local feedback connections help process noisy images.

 

May, 8th – bank holiday

 

May, 1st –


April, 24th –

 

April, 17th – Easter

 

April, 10th – Bank holiday

 

April, 3rd – Timothy O’Leary


March, 27th – Silvia Maggi

 

Winter 2019/2020

March, 20th –

 

March, 13th – Krasimira Tsaneva-Atanasova

Title: The Origin of GnRH Pulse Generation: An Integrative Mathematical-Experimental Approach

Abstract: The gonadotropin-releasing hormone (GnRH) pulse generator controls the pulsatile secretion of the gonadotropic hormones LH and FSH and is critical for fertility. The hypothalamic arcuate kisspeptin neurons are thought to represent the GnRH pulse generator, since their oscillatory activity is coincident with LH pulses in the blood, a proxy for GnRH pulses. However, the mechanisms underlying GnRH pulse generation remain elusive. We developed a mathematical model of the kisspeptin neuronal network and confirmed its predictions experimentally, showing how LH secretion is frequency-modulated as we increase the basal activity of the arcuate kisspeptin neurons in vivo using continuous optogenetic stimulation. Our model provides a quantitative framework for understanding the reproductive neuroendocrine system and opens new horizons for fertility regulation.

 

March, 6th – Matthias Hennig

Title: SpikeInterface: A project for reproducible next generation electrophysiology

Abstract: Many electrophysiologists would agree that spike sorting is somewhat of a dark art, with many secrets, black-box algorithms (occasionally probably written in blood) and heuristics and superstitions. With exciting new large-scale probes and arrays now shipped to many labs and producing terabytes of recordings, reliable and reproducible analysis becomes increasingly hard to achieve. In this talk I will show (and attempt to live-demo) SpikeInterface, a project that aims to bring together the many efforts that have been put into spike sorting by many groups over the past decade and beyond. This project not only wraps many sorters, tools, and file formats, but also provides new methods for assessing the quality of sorted spikes based on comparison between sorters and with ground truth data. We found a surprisingly low agreement between sorters, and show that this is due to high false positive rates that cannot be corrected for using common heuristics. Here I will suggest methods and workflows to remedy and improve this situation, which can often be implemented with a few lines of code.

https://github.com/SpikeInterface

https://www.biorxiv.org/content/10.1101/796599v1

This project is joint work with: Alessio P. Buccino, Cole L. Hurwitz, Jeremy Magland, Samuel Garcia, Joshua H. Siegle, Roger Hurwitz
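The sorter-to-sorter comparison above rests on an agreement score between matched units: matched spikes divided by the number of distinct events across both trains. A simplified greedy matcher sketching that metric (not the library's implementation; the spike times and the 0.4 ms tolerance are invented for illustration):

```python
import numpy as np

def agreement_score(st1, st2, delta=0.4):
    """Agreement between two sorters' spike trains for one unit pair.

    st1, st2: sorted spike times (ms) for the same putative unit
    from two sorters. delta: maximum time difference (ms) for two
    spikes to count as the same event.
    Score = n_matched / (n1 + n2 - n_matched):
    1.0 for identical trains, 0.0 for fully disjoint ones.
    """
    i = j = matched = 0
    while i < len(st1) and j < len(st2):
        if abs(st1[i] - st2[j]) <= delta:
            matched += 1          # same event found by both sorters
            i += 1
            j += 1
        elif st1[i] < st2[j]:
            i += 1                # spike only found by sorter 1
        else:
            j += 1                # spike only found by sorter 2
    return matched / (len(st1) + len(st2) - matched)

a = np.array([10.0, 25.0, 40.0, 55.0])
b = np.array([10.1, 25.2, 41.5, 55.0, 70.0])
score = agreement_score(a, b)  # 3 matches, 6 distinct events -> 0.5
```

Low scores from such pairwise comparisons are what the talk interprets as evidence of sorter-specific false positives.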


February, 28th – Mara Cercignani

Title: MRI for In Vivo Imaging of the Effects of Inflammation on the CNS

Abstract: Recent evidence supports a role for inflammation in several psychiatric disorders such as Alzheimer’s disease and major depression. One of the mechanisms underpinning CNS inflammation is the activation of microglia, which can be imaged using Translocator Protein (TSPO) PET. This technique, however, is costly and difficult to implement. This talk will present some of the results obtained in our lab using non-invasive, quantitative MRI approaches to assess the effects of inflammation on the brain.

February, 21st – Arno Onken

 

February, 14th – Marcus Kaiser

Title: Structure and Dynamics of Human Connectomes: Applications for Informing Diagnosis and Treatment of Brain Disorders

Abstract:

Our work on connectomics over the last 15 years has shown a small-world, modular, and hub architecture of brain networks [1,2]. Small-world features enable the brain to rapidly integrate and bind information while the modular architecture, present at different hierarchical levels, allows separate processing of various kinds of information (e.g. visual or auditory) while preventing wide-scale spreading of activation [3]. Hub nodes play critical roles in information processing and are involved in many brain diseases [4].

After discussing the organisation of brain networks, I will show how connectivity in combination with machine learning and computer simulations can identify the progression towards dementia before the onset of symptoms, informing interventions that can delay disease progression [5].

For epilepsy patients, connectome-based simulations can also be used to predict the outcome of surgical interventions as well as alternative target regions [6]. I will also present recent results on local changes in epilepsy, concerning structural connectivity within brain regions, which are more indicative of surgery outcome than connectivity between brain regions. In addition, we have developed models of tissue within a brain region (http://www.vertexsimulator.org). Such models can be used to observe the effects of invasive [7] or non-invasive electrical brain stimulation.

I will finally outline how these models could, in the future, inform invasive interventions, such as optogenetic stimulation in epilepsy patients (http://www.cando.ac.uk) or non-invasive interventions using electrical, magnetic or focused ultrasound stimulation.

[1] Martin, Kaiser, Andras, Young. Is the Brain a Scale-free Network? SfN Abstract, 2001.

[2] Sporns, Chialvo, Kaiser, Hilgetag. Trends in Cognitive Science, 2004.

[3] Kaiser et al. New Journal of Physics, 2007.

[4] Kaiser et al. European Journal of Neuroscience, 2007.

[5] Peraza et al. Alzheimer’s & Dementia: Diagnosis, Assessment & Disease Monitoring, 2019.

[6] Sinha et al. Brain, 2017.

[7] Thompson et al. Wellcome Open Research, 2019.

February, 7th – Liad Baruchin

Title: The early developing brain undergoes many changes in its basic neuronal connectivity.

Abstract: Specifically, in our lab, looking at the barrel cortex, we find that circuits involving VIP+ and SST+ INs completely change from birth to adulthood. Currently, I am investigating how these interneuronal populations are involved in early sensory perception. To do that, I am using a genetic model in which either SST+ or VIP+ interneurons are completely silenced. Thus, using silicon probes I can record from different layers across the barrel field and see how silencing these neuronal populations affects the neuronal response to passive whisking. In this talk I will present my most recent results, which show that these neuronal populations differentially affect the cortical processing of whisking speed and paired-pulse adaptation.


January, 31st – Eleni Vasilaki

Title: Sparse Reservoir Computing (SpaRCe) for neuromorphic devices

Abstract: In this talk I will present fundamental ideas about biological learning in fruit flies, and how these are related to Machine Learning. Inspired by the architecture of small brains, and within the framework of Echo State Networks, I will discuss the importance of neuron selectivity to specific stimuli. I will then introduce a threshold per reservoir neuron as an efficient mechanism to achieve sparseness in the neuronal representation. The threshold is adapted via a gradient rule on an error function, structurally identical to threshold learning via backpropagation. And yet, a simple mathematical analysis of its consequences for this specific architecture shows that it leads to neuronal selectivity. I will show in simulations that, within this context, our approach is advantageous in terms of performance compared with imposing sparseness of weights via an L1 norm. I will also discuss how such learning architectures can be exploited in the context of neuromorphic engineering.
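The core idea, a learned threshold per reservoir neuron, trained by gradient descent alongside the readout, can be sketched in a toy NumPy example. This is not the published SpaRCe model: the network sizes, learning rates and two-pattern task here are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, n_in, n_out = 100, 3, 2

# Fixed random reservoir (echo state network); spectral radius < 1
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = 0.5 * rng.standard_normal((n_res, n_in))

def run_reservoir(u_seq):
    x = np.zeros(n_res)
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ u)
    return x

theta = np.zeros(n_res)           # one learned threshold per neuron
W_out = np.zeros((n_out, n_res))  # linear readout weights

def sparse_code(x):
    # Soft-threshold the state: small activations are silenced
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def error(states, targets):
    return np.mean([np.sum((W_out @ sparse_code(x) - t) ** 2)
                    for x, t in zip(states, targets)])

def train_step(x, target, lr=0.005):
    global theta, W_out
    y = sparse_code(x)
    err = W_out @ y - target      # gradient of the squared error
    active = np.abs(x) > theta    # subgradient of the soft threshold
    grad_theta = -(W_out.T @ err) * np.sign(x) * active
    W_out -= lr * np.outer(err, y)
    theta = np.maximum(theta - lr * grad_theta, 0.0)

# Toy task: classify two fixed input sequences
seqs = [rng.standard_normal((20, n_in)) for _ in range(2)]
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
states = [run_reservoir(s) for s in seqs]

err_before = error(states, targets)
for _ in range(300):
    for x, t in zip(states, targets):
        train_step(x, t)
err_after = error(states, targets)  # error decreases with training
```

The thresholds share the readout's error signal, which is what makes the rule "structurally identical to threshold learning via backpropagation" in the abstract's sense.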

January, 24th – Miguel Maravall

Title: Tactile sequence learning induces selectivity to multiple task variables in the mouse barrel cortex.

Abstract: Sequential temporal patterning is a key feature of natural signals, used by the brain to decode stimuli and perceive them as sensory objects. To explore the neuronal underpinnings of sequence recognition and determine if neurons adjust temporal integration as a result of learning, we developed a task in which mice had to discriminate between sequential stimuli constructed from distinct vibrations delivered to the vibrissae (whiskers), assembled in different orders.

Optogenetic inactivation experiments showed that both primary somatosensory ‘barrel’ cortex (S1bf) and secondary somatosensory cortex are involved in the task, consistent with a serial flow of sensory input to decision-making stages. Two-photon imaging in superficial layers of S1bf of well-trained animals revealed heterogeneous neurons with selectivity to task variables including sensory input, the animal’s action decision, and trial outcome (rewards and their departure from prediction). A large fraction of neurons were activated preceding goal-directed licking, thus predicting the animal’s learned response to a target sequence rather than the sequence itself. These neurons were absent in naïve animals. Therefore, in S1bf learning resulted in neurons that embodied the learned association between the presence of the target sequence and licking, instead of neurons that categorically responded to the sequence or integrated features over time.

 

January, 17th – Petra Vertes

Title: Maps, Models and Maths: New strategies for understanding the biological basis of mental ill-health.

Abstract: The last 20 years have witnessed extraordinarily rapid progress in neuroscience, including breakthrough technologies such as optogenetics and the collection of unprecedented amounts of neuroimaging, genetic and other data. However, the translation of this progress into improved understanding and treatment of mental health symptoms has been comparatively slow. One central challenge has been to reconcile different scales of investigation, from genes and molecules to cells, circuits, tissue, whole-brain and ultimately behaviour. In this talk I will describe several strands of work using mathematical, statistical, and bioinformatic methods to bridge these gaps. First, I will describe my work on linking neuroimaging data to the Allen Brain Atlas (a brain-wide, whole-genome map of gene expression) and how we can apply these tools in the nascent field of imaging transcriptomics to further our understanding of schizophrenia and other neuropsychiatric disorders. Next, I will discuss parallel efforts for using network science and control theory for linking microscopic function (i.e. the role of individual cells) to large-scale behaviour in C. elegans.

January, 10th – Mark Walton

Title: Regulation of dopamine during reward-guided decision making: tracking reward prediction in action

Abstract: It is widely accepted that the activity of many dopamine neurons and dopamine release in parts of the striatum represent predictions of future rewards, which in turn can be used to shape decision making. Nonetheless, the precise content and function of these dopamine signals during reward-guided behaviours remains a matter of great controversy. I’ll present ongoing work to examine how dopaminergic correlates of reward prediction and choice, recorded in rodents performing reward-guided decision making tasks, are modulated by action requirements, task structure and context. These data – along with others’ – suggest that dopamine activity can be shaped by a mixture of influences over different timescales and across different parts of striatum.


 

Autumn 2019/2020

December, 13th – Marc Goodfellow

Title: Modelling pathological brain dynamics

Abstract:

Disorders of the brain can often result in alterations to its large-scale dynamics. An example is epilepsy, in which electrographic measurements display abnormal rhythms, particularly during seizures. Understanding why these dynamics are generated is challenging, particularly in the clinical setting, but better insight could help to improve diagnosis and treatment. In this talk I will discuss a particular approach to this problem, using mathematical models of large-scale brain networks to understand pathological dynamics. I will demonstrate how the study of such models can lead to new insight into the generation of seizures, and how models can be combined with clinical data to generate predictions for the surgical treatment of epilepsy.

December, 6th – Armin Lak

Title: Dopaminergic and prefrontal basis of learning from sensory confidence and reward value

Abstract:

Deciding between stimuli requires combining their learned value with one’s sensory confidence. We trained mice in a visual task that probes this combination. Mouse choices reflected not only present confidence and past rewards but also past confidence. Their behaviour conformed to a model that combines signal detection with reinforcement learning. In the model, the predicted value of the chosen option is the product of sensory confidence and learned value. We found precise correlates of this variable in the pre-outcome activity of midbrain dopamine neurons and of medial prefrontal cortical neurons. However, only the latter played a causal role: inactivating medial prefrontal cortex before outcome strengthened learning from the outcome. Dopamine neurons played a causal role only after outcome, when they encoded reward prediction errors graded by confidence, influencing subsequent choices. These results reveal neural signals that combine learned value with sensory confidence before choice outcome and guide subsequent learning.


November, 29th – Nothing!!

 

November, 22nd – Bernhard Staresina

Title: Memory consolidation during sleep: Mechanisms and representations

Abstract:

In this talk, I will first present direct recordings from the human hippocampus during natural sleep. Analyses focus on how different sleep signatures (slow oscillations, spindles and ripples) interact and may facilitate hippocampal-neocortical information transfer. I will then turn to memory representations being reactivated during sleep. Using targeted memory reactivation, we show that sleep spindles seem to facilitate content-specific consolidation.

November, 15th – Jacques-Donald Tournier

Title: Multi-shell diffusion MRI and its applications in the neonatal brain

Abstract:

Recent advances in MRI acquisition now allow the routine acquisition of large amounts of so-called multi-shell diffusion MRI data within reasonable time frames. This opens up exciting new possibilities, but also brings additional challenges. This talk will present new methods for the acquisition and analysis of such data, both at the single-subject and at the group level. The talk will focus primarily (but not exclusively) on applications within the neonatal brain, using data acquired as part of the developing human brain connectome project.

November, 8th – Dan Goodman

Title: The Reluctant Machine Learner

Abstract:

The unique quality of the brain is that it can perform difficult tasks.

The traditional approach to modelling in neuroscience, though, has focussed on simple tasks, because those were the only ones we could model. Recently, that has all changed with the advent of powerful new methods from machine learning that can, for example, recognise some images better than humans. I will argue that we have to study the brain solving difficult tasks, and therefore we have to use techniques from machine learning, because these are the only known methods that enable us to do that. However, that doesn’t mean that the brain is at all like the current best known machine learning models. Those models miss out on a lot of important points, like temporal dynamics and spiking neurons. Moreover, they make mistakes that humans would never make and require vastly more data than we do to learn. Despite these issues, neuroscience has a lot to gain from adopting machine learning methods, and I’ll talk about a couple of ongoing projects in my lab that attempt to use machine learning methods in a way that is more compatible with traditional neural modelling: modelling speech recognition in the auditory system; and trying to understand the computational role of the heterogeneity observed in real brains.

 

November, 1st – Christina Buetfuring

Title: Decision coding by layer 2/3 neurons in primary somatosensory cortex

Abstract:

Sensory information enables us to make informed choices that are critical for survival. While primary sensory areas provide information on sensory stimuli, behaviourally-relevant decision-making variables have been shown to be represented in higher-order association cortices. Therefore, sensory coding and decision-making are typically studied under the assumption of anatomical separation. Neurons in the superficial layers of the whisker region of primary somatosensory cortex (S1), barrel cortex, not only receive somatotopically mapped bottom-up inputs from the thalamorecipient layer 4 but also lateral projections from neighbouring barrels and top-down projections from higher cortical areas. Therefore, layer 2/3 (L2/3) neurons in barrel cortex are a prime candidate for providing an intersection of sensory processing and decision-making in complex behavioural tasks. Previous work using electrophysiological recordings in monkeys, rats and mice has not found conclusive choice activity in S1 but was limited to low numbers of neurons. Studies using two-photon calcium imaging found that some behavioural aspects modulate activity in L2/3 barrel cortex neurons. It is unclear, however, whether the signal difference across trial types in those studies reflects choice-related signals or a modulation of activity by action-related variables such as motivation, movement preparation, etc. Here, we used two-photon calcium imaging of neurons in L2/3 mouse barrel cortex during a cued texture discrimination task with two lickports to determine whether these neurons can code for behaviourally-relevant decision variables. We found neurons carrying information about the stimulus irrespective of the behavioural outcome (‘stimulus neurons’) as well as neurons whose activity carried information about the choice to be made (‘decision neurons’). Choice-related activity in decision neurons is not driven by signals related to motor output, but instead follows stimulus presentation.
Furthermore, ambiguous population coding of decision neurons predicts miss trials and an improvement in categorical coding in decision neurons coincides with learning the stimulus-choice association. Our identification of neurons encoding stimulus and behaviourally-relevant decision signals within the same circuit suggests a direct involvement of L2/3 S1 in the decision-making process.

Location: GEOG BLDG G.11N SR1


October, 25th – first year student projects

October, 18th – first year student projects

 

October, 11th – Cian O’Donnell

Title: Neural variability in Autism

Abstract:

Autistic people often have sensory processing deficits, and we would like to understand why. One clue comes from the observation that Autistic people’s EEG and fMRI responses to sensory stimuli are more variable than those in neurotypical people. We used in vivo two-photon calcium imaging of populations of layer 2/3 cortical neurons in young wild-type and Fragile-X Syndrome mouse models to search for three aspects of such variability at a cellular level: 1) across single trials from identical stimuli in the same animal, 2) across animals of the same age, and 3) longitudinally across days in the same animals. I will present what we found. Work with Beatriz Mizusaki (Univ of Bristol), Nazim Kourdougli, Anand Suresh, and Carlos Portera-Cailliau (Univ of California, Los Angeles).

Location: PHYS BLDG 3.34

October, 4th – Dimitris Pinotsis

Abstract:

In this talk, I will discuss how deep neural networks can reveal semantic and biophysical properties of memory representations in the brain (neural ensembles or cell assemblies).

First, I will consider a flexible decision-making paradigm and show that deep neural networks allow us to understand the sensory domains and semantics different brain areas prefer (motion vs color) and code (sensory signals vs abstract categories) respectively. These results will also suggest a way for studying sensory and categorical representations in the brain by combining behavioural and neural network models.

Then, I will show that deep neural networks can also reveal cortical connectivity in neural ensembles and explain a well-known behavioral effect in psychophysics, known as the oblique effect. This work will also introduce a new mathematical approach for identifying neural ensembles that exploits a combination of machine learning, biophysics and brain imaging.

 

Spring 2019

June, 28th – Thomas Wills

Geog Sciences G.11N SR1


June, 14th – Shana Silverstein

Geog Sciences G.11N SR1


June, 7th – Denize Atan

Geog Sciences G.11N SR1


May, 24th – Tim Howe

Geog Sciences G.11N SR1

Title: Extending evidence for REM-associated Replay in Hippocampal CA1 Place Cells.

Abstract: During periods of inactivity, hippocampal CA1 neurons with spatial receptive fields (“place cells”) reactivate in patterns that recapitulate previously experienced spatial sequences, a phenomenon known as replay. CA1 replay is most prominently associated with sharp-wave ripple (SWR) events during non-REM sleep or quiet wake, and has been implicated in the consolidation of episodic memory. Replay has also been reported during REM sleep [1]; however, evidence for this phenomenon is substantially less extensive than for replay during non-REM. Non-REM replay occurs on a temporally compressed timescale (approximately 8 times faster than during active behaviour) around brief, discrete SWR events. REM replay appears less temporally compressed and occurs during extended periods of elevated theta power, necessitating alternative detection methods to those established for SWR-associated replay. Using tetrode recordings from adult rat dorsal CA1, we present data that corroborate the existence of REM replay. The activity of multiple place cells was recorded simultaneously while rats performed simple goal-directed maze tasks, and during subsequent extended rest periods in a sleep box. Replay was detected using a moving-window correlation algorithm (from [1]), and confirmed with complementary approaches including hidden Markov model (HMM) and Bayesian trajectory decoding. Extending evidence for REM replay paves the way for analyses exploring its experience-dependence, extra-hippocampal correlates and functional contributions.

[1] Louie K & Wilson MA (2001) Neuron 29: 145-156
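The moving-window correlation idea can be illustrated with a simplified sketch: within each candidate window, correlate spike times with the spiking cells' place-field positions, then slide the window. This is an illustrative reduction, not the analysis pipeline from the talk; the window length, step, and synthetic spikes below are invented for the example.

```python
import numpy as np

def window_replay_score(field_pos, spike_times, spike_cells):
    """Sequence score for one candidate window.

    field_pos: place-field centre of each cell along the track.
    spike_times / spike_cells: times and cell identities of spikes
    in the window. Pearson r near +1 means spikes sweep through the
    fields in track order (forward replay), near -1 reverse replay,
    near 0 no sequential structure.
    """
    pos = field_pos[spike_cells]
    return np.corrcoef(spike_times, pos)[0, 1]

def scan_for_replay(field_pos, spike_times, spike_cells,
                    win=0.2, step=0.05, t_end=None, min_spikes=5):
    """Moving-window scan: return (window_start, score) pairs."""
    if t_end is None:
        t_end = spike_times.max()
    results = []
    t = 0.0
    while t + win <= t_end:
        m = (spike_times >= t) & (spike_times < t + win)
        if m.sum() >= min_spikes:  # skip nearly empty windows
            results.append((t, window_replay_score(
                field_pos, spike_times[m], spike_cells[m])))
        t += step
    return results

# Synthetic example: a compressed forward sweep through cells 0..9,
# plus two stray spikes outside the sweep
field_pos = np.arange(10.0)
sweep_t = 0.30 + 0.01 * np.arange(10)
spikes_t = np.concatenate([np.array([0.02, 0.11]), sweep_t])
spikes_c = np.concatenate([np.array([7, 2]), np.arange(10)]).astype(int)

scores = scan_for_replay(field_pos, spikes_t, spikes_c, t_end=0.6)
best = max(s for _, s in scores)  # the sweep scores near +1
```

Windows crossing a significance threshold (assessed against shuffles in practice) would then be flagged as candidate replay events.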


May, 17th – Anna Schapiro (University of Pennsylvania)

Geog Sciences G.11N SR1

http://sleepandcognition.org/anna-schapiro.html

Title: Learning and consolidating patterns in experience

Abstract: There is a fundamental tension between storing discrete traces of individual experiences, which allows recall of particular moments in our past without interference, and extracting regularities across these experiences, which supports generalization and prediction in similar situations in the future. This tension is resolved in classic memory systems theories by separating these processes anatomically: the hippocampus rapidly encodes individual episodes, while the cortex slowly extracts regularities over days, months, and years. This framework fails, however, to account for the full range of human learning and memory behavior, including: (1) how we often learn regularities quite quickly—within a few minutes or hours, and (2) how these memories transform over time and as a result of sleep. I will present evidence from fMRI and patient studies suggesting that the hippocampus, in addition to its well-established role in episodic memory, is in fact also responsible for our ability to rapidly extract regularities. I will then use computational modeling of the hippocampus to demonstrate how these two competing learning processes can coexist in one brain structure. Finally, I will present empirical and simulation work showing how these initial hippocampal memories are replayed during offline periods to help stabilize and integrate them into cortical networks. Together, the work provides insight into how structured information in our environment is initially encoded and how it then transforms over time.


May, 10th – Andrea Martin (MPI for Psycholinguistics)

Geog Sciences G.11N SR1

Title: On neural systems, oscillations, and compositionality 

Abstract: There continues to be vibrant controversy about the fundamental relationship between the information in biological signals and the neural systems that represent and process them. Compositionality is a property of a system such that the meanings of complex entities are derived from the meanings of constituent entities and their structural relations. It is a crucial part of what enables human thought and language to “make infinite use of finite means,” but also part of what makes human thought and language difficult to account for within extant theories of cognition, artificial intelligence, and human neurobiology. I focus on this foundational puzzle and discuss the computational requirements, including the role of neural oscillations, for what I believe is necessary in order to compose structures and meanings within the constraints of a neurophysiological system.


May, 3rd – Camin Dean

Geog Sciences G.11N SR1

Cancelled


April, 18th – Bridget Lumb

Geog Sciences G.11N SR1

Postponed


April, 11th – Chris Bailey

43 Woodland Rd G.10 LR

Cancelled


April, 4th – Naoki Masuda

43 Woodland Rd G.10 LR

Title: Atypical intrinsic neural timescale in autism
Abstract: How long neural information is stored in a local brain area reflects functions of that region and is often estimated by the magnitude of the autocorrelation of intrinsic neural signals in the area. Here we investigated such intrinsic neural timescales in high-functioning adults with autism. By analysing resting-state fMRI data, we identified shorter neural timescales in the sensory/visual cortices and a longer timescale in the right caudate in autism. The shorter intrinsic timescales in sensory/visual areas were correlated with the severity of autism, whereas the longer timescale in the caudate was associated with cognitive rigidity. Moreover, the intrinsic timescale was correlated with local grey matter volume. This study shows that functional and structural atypicality in local brain areas is linked to higher-order cognitive symptoms in autism. The talk is based on our recent paper: Takamitsu Watanabe, Geraint Rees & Naoki Masuda.

March, 29th- Sam Berens (University of York)

43 Woodland Road, LR G.10

Title: Learning and memory in an uncertain world
Abstract:
We often need to pick up on subtle patterns and learn complex associations in our environment, even when it’s unclear which pieces of information are important. How is this achieved? I will discuss some of my recent behavioural and fMRI work exploring how we are able to acquire knowledge under uncertain conditions and in the absence of feedback (so-called ‘unsupervised learning’). These studies test various computational models of learning, investigate whether some types of information are preferentially retained or consolidated, and examine the role of metacognitive learning intentions.

http://samberens.co.uk/


March, 22nd – Gareth Barker (University of Bristol)

43 Woodland Rd G.10 LR

Title: There and back again: Investigations into associative recognition memory network function.

Abstract: Associative recognition memory, our ability to form an association between an object and its spatio-temporal context, is critical for everyday memory function. A network of brain regions critical for associative recognition memory has been identified; however, how these brain regions function as a network during associative recognition memory formation is poorly understood at present. We investigated the role of connections between three key nodes in the network, the hippocampus, medial prefrontal cortex and nucleus reuniens of the thalamus, by using a combination of optogenetic and chemogenetic approaches.

By manipulating specific connections within this thalamo-cortico-hippocampal memory network, we have revealed that distinct types of associations rely on anatomically distinct projections and have identified distinct, but interleaving circuits for associative recognition memory encoding and retrieval.


March, 15th- Rui Ponte Costa (University of Bristol)

43 Woodland Rd G.10 LR

http://ruipcosta.weebly.com/

Title: Powerful learning via cortical microcircuits

Abstract: Cortical circuits exhibit intricate excitatory and inhibitory motifs, whose computational functions remain poorly understood. I will start out by introducing our work on how state-of-the-art recurrent neural networks used in machine learning may be implemented by cortical microcircuits. In addition, our new results suggest that such biologically plausible recurrent networks exhibit better learning of long-term dependencies. However, learning in such networks relies on solving the credit assignment problem using the classical backpropagation algorithm that appears to be biologically implausible. I will finish my talk discussing our recent work on a biologically plausible solution to the credit assignment problem using well-known properties of cortical microcircuits, which approximates the backpropagation algorithm. Overall, our work demonstrates how cortical microcircuits may enable powerful learning in the brain.


March, 8th- Helen Barron (University of Oxford)

43 Woodland Rd G.10 LR

https://www.mrcbndu.ox.ac.uk/people/dr-helen-barron

Title: Inhibitory engrams in memory storage and recall

Abstract: Memories are thought to be represented in the brain by activity in groups of neurons described as memory engrams. Although memory engrams are typically thought to be made up of excitatory neurons, several recent studies suggest that inhibitory neurons also contribute. Indeed, by matching their excitatory counterparts, selective inhibitory interneurons may facilitate a stable storage system that allows memories to lie quiescent unless the balance between excitation and inhibition is perturbed. Here I will present a set of studies that show evidence for selective neocortical inhibition in the human brain using ultra-high field 7T MRI and brain stimulation. I will show that matched excitatory-inhibitory engrams provide a stable storage mechanism for neocortical associations, and protect memories from interference. Finally, I will explore how neocortical memory engrams might interact with the hippocampus during recall, to selectively perturb excitatory-inhibitory balance.


March, 1st- Natalie Doig (University of Oxford)

43 Woodland Rd G.10 LR

https://www.mrcbndu.ox.ac.uk/people/dr-natalie-doig

Title: Structure is Function: Cellular and Network Substrates of Basal Ganglia Dynamics

Abstract: In order to fully understand how the dynamic functions of the nervous system are realised, we must evaluate its structure through static measures. In this talk I will discuss two studies that employed a range of neuroanatomical methods to reveal specific cellular and network principles of the organisation of the basal ganglia. In the first study, I will discuss the use of modern trans-synaptic tracing techniques to examine the cell-type-selective connections between nuclei of the basal ganglia. Second, I will highlight the features of a novel connection between the dorsal hippocampus and the nucleus accumbens that shapes memory-guided appetitive behaviour. Using these examples, I would like to promote a discussion on the advantages and disadvantages of specific neuroanatomical techniques and what they can tell us about the substrates underlying the neural dynamics of the basal ganglia.


February, 22nd- Maria Wimber (University of Birmingham)

43 Woodland Rd G.10 LR

http://www.memorybham.com/maria-wimber/

Title: Tracking the temporal dynamics of memory reactivation in the human brain

Abstract: Our memories are not static. Each attempt to retrieve a past event can adaptively change the underlying memory space. Here I discuss my work on the neurocognitive mechanisms that enable the selective retrieval of episodic memories. I present behavioural and electrophysiological (M/EEG) work that provides insight into how a memory trace unfolds in time during retrieval, on a sub-trial scale. Further, I show evidence from a series of fMRI studies in which we track the representational changes that occur in a memory trace over time and across repeated retrievals. The latter findings demonstrate that retrieval adaptively modifies memories by strengthening behaviourally relevant and weakening behaviourally irrelevant, interfering components. Together, this work sheds light on the neural dynamics of the retrieval process and informs theories of adaptive memory.


February, 15th- Jim Dunham (University of Bristol)

43 Woodland Road, G.10 LR

Computing pain – Real-time signal processing in human pain nerves.


January, 18th-  Quentin Huys (UCL)

Geog Sciences G.11N SR1

Perceptual conditioning

https://www.quentinhuys.com/research.html


January, 11th – Vitor Lopes dos Santos (Oxford)

Life sciences G14

Neural oscillations

In this talk, I would like to discuss general concepts regarding the study of neuronal oscillations. What does it take for an event to be defined as an oscillation? What is there beyond frequency? Why are oscillations important (are they?)? I will use CA1 oscillations as my main case study, particularly my recently published results (Lopes dos Santos et al. 2018), to discuss these points and more. My aim is to engage in an informal debate shaped by the thoughts of the audience as much as my own.