Autumn 2015

2nd October: Mike Ashby

Room: SM3

Spatial clustering of new synapses


9th October: Naoki Masuda

Room: SM3

Energy landscape of human brain activity during bistable perception

Individual differences in the structure of parietal and prefrontal cortex predict the stability of bistable visual perception. However, the mechanisms linking such individual differences in brain structure to behaviour remain elusive. Here we demonstrate a systematic relationship between the dynamics of brain activity, cortical structure and behaviour underpinning bistable perception. Using fMRI in humans, we find that activity dynamics during bistable perception are well described as fluctuations between three spatially distributed energy minima in an energy landscape constructed from fMRI data. Transitions between these energy minima predicted behaviour: participants whose brain activity tended to occupy the visual-area-dominant energy minimum exhibited more stable perception, while those whose activity transitioned to the frontal-area-dominant energy minimum reported more frequent perceptual switches. These brain activity dynamics are also correlated with individual differences in grey matter volume of the corresponding brain areas.
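Such landscapes are typically built by fitting a pairwise maximum-entropy (Ising-type) model to binarised regional activity and then locating the states from which no single-region flip lowers the energy. A minimal sketch of that last step (illustrative only: in the study the parameters h and J are fitted to the fMRI data, and the region count is larger):

```python
import itertools

import numpy as np

def energy(state, h, J):
    """Pairwise maximum-entropy energy: E(s) = -h.s - (1/2) s.J.s."""
    return -h @ state - 0.5 * state @ J @ state

def local_minima(h, J):
    """Enumerate all 2^N binarised activity patterns (+1 = active,
    -1 = inactive) and keep those whose energy cannot be lowered by
    flipping the state of any single region."""
    n = len(h)
    minima = []
    for bits in itertools.product([-1.0, 1.0], repeat=n):
        s = np.array(bits)
        e = energy(s, h, J)
        if all(energy(np.where(np.arange(n) == i, -s, s), h, J) >= e
               for i in range(n)):
            minima.append((bits, e))
    return minima
```

With uniform positive couplings this toy landscape has exactly two minima (all regions active, all inactive); fitted brain data instead yield a handful of spatially distributed minima between which the measured activity hops.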


16th October: Aleks Domanski

Room: SM3

Capture, dissection and manipulation of complex cortical network dynamics in models of Autism

Cortical network function critically depends on many physiological parameters that develop in concert during early postnatal critical periods, notably intrinsic neuronal properties, synaptic function and appropriately balanced synaptic connectivity. Perturbation of normal network development processes by genetic insults associated with Autism can lead to complex network effects but it is unclear to what extent these individual factors contribute to the overall pathophysiology. Combining slice electrophysiology in a mouse model of Autism with single-cell and network simulation, I will dissect the mechanisms by which changes in multiple electrophysiological parameters lead to complex network-level effects, ask which are dominant drivers towards pathological network states, and provide insight into potential scenarios for pharmacological rescue.


23rd October: Alex Cope, Green Brain Project (University of Sheffield)

Room: SM3

Modeling the honeybee

The honeybee, with a brain consisting of 1 million neurons (100,000 times fewer than the human brain), is nevertheless capable of complex tasks normally considered the domain of advanced vertebrates. By studying this versatile insect we can therefore gain insight into the neural basis of such tasks.


30th October: Jon Hanley

Room: SM1

Predicting plasticity from protein-protein interactions

A multitude of interconnecting and highly regulated protein complexes are at the core of cell biology. Synaptic plasticity involves processes such as receptor trafficking, changes in dendritic spine morphology, and rapid, local regulation of protein synthesis. These processes are all underpinned by dynamic protein-protein interactions that are regulated by numerous signalling pathways. I will present a couple of examples of such protein complexes that we are studying, and discuss their importance in determining the outcome of plasticity-inducing stimuli.


6th November: Mark Humphries, University of Manchester

Room: SM3

Population dynamics in a locomotion neural network converge to a recurrent attractor 

Many neural systems are thought to implement some form of cyclical or periodic attractor, in which neural activity “rotates” over time to drive some periodic process, such as locomotion or heading direction. However, we lack direct evidence that such neural attractors exist. We tested the hypothesis that the crawling motor program of the sea-slug Aplysia is directly implemented by a periodic attractor in its pedal ganglion network. Evoking the locomotion program caused population activity to rapidly settle into a low-dimensional, slowly decaying rotational orbit. These recurrent dynamics were consistent between programs evoked in the same animal, indicating convergence on the same basin of attraction, but could differ considerably between animals. Despite this heterogeneity we could decode a specific portion of the motor program directly from the low-dimensional dynamics. Collectively our results support the hypothesis that Aplysia’s pedal ganglion is a cyclical attractor network.


13th November: Laurence Hunt, UCL

Room: SM3

Bridging microscopic and macroscopic choice dynamics in prefrontal cortex

The significance of prefrontal cortex for reward-guided choice is well known from both human imaging and animal neurophysiology studies. However, dialogue between human and animal research remains limited by difficulties in relating observations made across different techniques. A unified modelling framework may help reconcile these data. We have previously used attractor network models to demonstrate that varying decision values can elicit changes in local field potential (LFP) dynamics, causing value correlates observable with human magnetoencephalography (Hunt et al., Nature Neuroscience 2012). Extended, hierarchical forms of such models can also predict human functional MRI signal in different frames of reference during a multi-attribute decision process (Hunt et al., Nature Neuroscience 2014). In light of this framework, we have recently sought to relate simultaneously recorded LFP and single-unit data from prefrontal cortex of macaque monkeys performing a cost-benefit decision task. I will discuss key findings from these studies, which help us to relate value correlates in human neuroimaging studies to their cellular origins.


20th November: Denize Atan

Room: SM3

From neural stem cells to neural networks: finding light in the dark

Human cognition and behaviour are functions of the neuronal networks that comprise the mammalian brain. Of these cognitive abilities, the formation of new episodic memories is critically dependent on the hippocampal formation. The hippocampal dentate gyrus is one of a small number of brain regions in which neurogenesis continues throughout adulthood, and where neurogenesis is important for learning and memory. Transcription factors play key roles in directing neural differentiation and circuit assembly through their precise spatial, temporal, and cell-type-specific control of gene expression. In this talk, I will describe how our recent investigations of gene regulation and expression have taken us on a journey from events that occur at a molecular level in differentiating neurons, to hippocampal circuit formation, network dynamics, learning and memory, and finally population genetics.


27th November: Marc Goodfellow (Exeter)

Room: SM3

The role of networks in seizure generation

Epilepsy is characterised by the repeated occurrence of seizures, which are periods of pathological brain activity that arise spontaneously from a predominantly healthy functional state. Since the goal of epilepsy treatment is to abolish or reduce the tendency of the brain to transition into seizures (its ictogenicity), it is important to better understand these transitions, and how we might interact with the brain to abate them. However, seizure dynamics emerge in, and affect, large-scale brain networks, and the network paradigm for ictogenesis introduces unfamiliar challenges and new opportunities to understand epilepsy.

In this talk I will introduce a mathematical model-based approach to quantify ictogenicity in brain networks. I will demonstrate how this approach can be used to quantify differences in brain networks between patients with generalised epilepsies and healthy controls. I will also describe how we can extend this approach to quantify the contribution of each component of a network to seizure generation. This quantification is based upon the effect that a treatment-specific perturbation has on network ictogenicity. Using exemplar networks I will explore how the apparent ictogenicity of nodes can vary according to network structure and the presence or absence of “pathological” nodes (seizure foci). I will explain how this approach can potentially provide an insightful and principled way to interpret and describe generalised or focal seizure dynamics, and may enhance our strategies for the classification and treatment of epilepsies.


4th December: 3 Short Talks

Room: SM3

1. Bridget Lumb
Dynamic alterations to prefrontal-midbrain-spinal cord networks and their contribution to pain chronification
2. Hans Reul
Glucocorticoid action in the brain
3. Clea Warburton
Mechanisms controlling hippocampal gene activation and memory formation

11th December: Rafal Bogacz (University of Oxford)

Room: SM2

Learning in cortical networks through error back-propagation

To learn efficiently from feedback, cortical networks need to update synaptic weights at multiple levels of the cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is error back-propagation. It has been successfully used in both machine learning and modelling of the brain’s cognitive functions. However, in the back-propagation algorithm, the change in synaptic weights is a complex function of the weights and activities of neurons not directly connected with the synapse being modified. Hence it has not been known whether it can be implemented in biological neural networks. This talk will discuss relationships between the back-propagation algorithm and the predictive coding model of information processing in the cortex, in which changes in synaptic weights are based only on the activity of pre-synaptic and post-synaptic neurons. It will be shown that when the predictive coding model is used for supervised learning, it performs very similar computations to the back-propagation algorithm. Furthermore, for certain parameters, the weight change in the predictive coding model converges to that of the back-propagation algorithm. This suggests that it is possible for cortical networks with simple Hebbian synaptic plasticity to implement efficient learning algorithms in which synapses in areas on multiple levels of the hierarchy are modified to minimize the error on the output.
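The approximate equivalence can be checked numerically on a toy linear network (dimensions, learning rates and the small output error are all assumptions for illustration): clamp the output to the target, relax the hidden activity to minimise the predictive coding energy, and compare the resulting local Hebbian update with the back-propagation update.

```python
import numpy as np

rng = np.random.default_rng(0)
n0, n1, n2 = 3, 4, 2
W1 = 0.1 * rng.standard_normal((n1, n0))
W2 = 0.1 * rng.standard_normal((n2, n1))
x0 = rng.standard_normal(n0)
y = W2 @ (W1 @ x0) + 0.01 * rng.standard_normal(n2)  # target near the prediction

# Back-propagation update direction for L = ||y - W2 W1 x0||^2 / 2
delta2 = y - W2 @ (W1 @ x0)
dW1_bp = np.outer(W2.T @ delta2, x0)

# Predictive coding: clamp the output to y, then relax hidden activity x1
# by gradient descent on F = ||x1 - W1 x0||^2/2 + ||y - W2 x1||^2/2
x1 = W1 @ x0                      # initialise at the feed-forward prediction
for _ in range(500):
    eps1 = x1 - W1 @ x0           # hidden-layer prediction error
    eps2 = y - W2 @ x1            # output-layer prediction error
    x1 -= 0.1 * (eps1 - W2.T @ eps2)
eps1 = x1 - W1 @ x0
dW1_pc = np.outer(eps1, x0)       # purely local: error x presynaptic activity

cos = dW1_bp.ravel() @ dW1_pc.ravel() / (
    np.linalg.norm(dW1_bp) * np.linalg.norm(dW1_pc))
```

When the output error is small, the two update directions nearly coincide (cosine similarity close to 1), even though the predictive coding update uses only locally available quantities.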

Summer 2015

8th May: Roland Jones, University of Bath

Room: F40 (Medical Sciences)

Extracting background synaptic conductances from spontaneous membrane potential fluctuations in cortical neurones in vitro.

Cortical neurones are embedded in a dense network and are the target of thousands of individual synapses continuously releasing excitatory (glutamate) and inhibitory (GABA) transmitters. This is thought to aid signal detection through stochastic resonance, where sub-threshold synaptic events are pushed above threshold due to the presence of background activity. Thus, the relative level of inhibitory and excitatory background activity is a reflection of network activity and is instrumental in determining the excitability of any given neurone. We have studied release of both glutamate and GABA in the entorhinal cortex (EC) using whole-cell patch clamp recording of spontaneous synaptic currents in vitro, but this approach does not lend itself well to relating the level of background activity to cellular excitability. Rudolph et al. (2004) proposed a method of estimating global background synaptic conductances from measurements of fluctuations in membrane potential derived from intracellular recordings, applied to the high-conductance states characteristic of cortical networks in vivo. We asked whether we can meaningfully estimate global background excitatory and inhibitory conductances under the quiescent conditions of EC slices in vitro, and used a variety of pharmacological manipulations to validate this approach and relate changes in background activity to cellular excitability.


15th May: Casimir Ludwig

Room: SM3 (Mathematics)

Information sampling for perceptual decision-making

The choice of an appropriate course of action depends on the state of the environment: different states call for different actions. Frequently, the state of the environment has to be inferred from noisy sensory information. I am interested in the way humans sample information, how their sampling strategy depends on the quality of information, and how the sampling strategy influences their decision-making. I will talk about experimental and theoretical work that tests an “information foraging” account of sampling behaviour.


22nd May: Jon Witton

Room: SM3 (Mathematics)

Studying hippocampal network function in mouse models of cognitive disease

29th May: Cian O’Donnell, Salk Institute

Room: SM3 (Mathematics)

Rogue states: altered dimensionality of neural circuit activity in Fragile-X mice

Brain disorders such as autism and schizophrenia have been hypothesized to arise from an imbalance in excitation/inhibition in neural circuits. Why or how such an imbalance would be detrimental for neural coding remains unclear. We approached the problem by analysing two-photon in vivo neural population recordings from the cortex of both wild-type and Fragile-X Syndrome model mice at different stages of development. We developed a new statistical model for the probability distribution of all 2^N possible neural population activity patterns that required only N^2 parameters, where N is the number of neurons. Using this model we found that the dimensionality of population activity was lower in young Fragile-X than wild-type mice, but surprisingly switched in adulthood so that Fragile-X dimensionality was higher than wild-type. Finally we used a computational model of layer 2/3 somatosensory cortex to show which neural circuit components can give rise to these alterations in dimensionality. Our findings show how small changes in neural circuit parameters can have dramatic consequences for information processing.
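The talk's dimensionality measure is defined on the fitted pattern-probability model, but the flavour of the analysis can be illustrated with a common covariance-based proxy, the participation ratio of population activity (an assumption for illustration, not the study's exact measure):

```python
import numpy as np

def participation_ratio(activity):
    """Effective dimensionality of a (timepoints x neurons) recording:
    PR = (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues).
    PR approaches N for independent neurons and 1 for one shared signal."""
    lam = np.linalg.eigvalsh(np.cov(activity, rowvar=False))
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
independent = rng.standard_normal((5000, 20))     # 20 uncorrelated neurons
shared = rng.standard_normal((5000, 1)) + 0.1 * rng.standard_normal((5000, 20))
```

For the independent population the participation ratio is close to 20; for the population dominated by a single shared signal it is close to 1, so shifts in this number track the kind of dimensionality changes the abstract describes.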


5th June: Claire Mitchell, University of Leicester

Room: SM3 (Mathematics)

A high-speed, super-resolution multiphoton microscope for imaging neuronal processes

The newest generation of fluorescent calcium indicators makes it easier than ever to optically interrogate neurons for non-invasive, spatially resolved electrophysiology. Current fluorescence imaging techniques, however, can be limited in depth penetration, speed and/or resolution. By taking a laser scanning microscope and exchanging the point detector for a camera, it is possible to increase the resolution by a factor of two, a technique known as image scanning microscopy. This talk will discuss the physical principle behind this resolution increase and describe how our multiphoton implementation of image scanning microscopy can achieve high-speed, super-resolution imaging at depth by using acousto-optic devices. I will then present some recent results: imaging neuronal structures in zebrafish in vivo and super-resolved calcium imaging in mouse hippocampal slices.


12th June: Alain Nogaret, University of Bath

Room: AIMS 2A/B

Construction of accurate neuron models from the assimilation of electrophysiological data.

I will report on recent results using nonlinear optimization to construct accurate single-compartment models of biological neurons. Our variational approach, based on interior-point optimization, has been successfully applied to extract model parameters from electrophysiological recordings of real neurons from the zebra finch forebrain nucleus HVC. The method automatically estimates the 72 model parameters and allows accurate predictions of the actual neuron's response to arbitrarily complex current stimulation protocols. These results provide an important foundation for building biologically realistic network models, both computational and in analogue hardware, for biomedical implants.
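The flavour of such data assimilation can be shown on a deliberately tiny example: a passive single-compartment membrane whose leak conductance is recovered from a noisy voltage trace by minimising the squared model-data mismatch (a grid search stands in for the interior-point optimiser, and all numbers are made-up illustrations):

```python
import numpy as np

def simulate(g_leak, i_inj, dt=0.1, c_m=1.0, e_leak=-70.0):
    """Forward-Euler integration of a passive membrane:
    C dV/dt = -g_leak (V - E_leak) + I(t)."""
    v = np.empty(len(i_inj))
    v[0] = e_leak
    for t in range(1, len(i_inj)):
        v[t] = v[t-1] + dt * (-g_leak * (v[t-1] - e_leak) + i_inj[t-1]) / c_m
    return v

# "Recorded" trace generated from a known ground-truth conductance plus noise
rng = np.random.default_rng(1)
i_inj = np.where(np.arange(1000) > 100, 0.5, 0.0)   # step-current protocol
v_obs = simulate(0.05, i_inj) + 0.1 * rng.standard_normal(1000)

# Estimation: scan candidate conductances, minimise the squared mismatch
grid = np.linspace(0.01, 0.2, 96)
sse = [np.sum((simulate(g, i_inj) - v_obs) ** 2) for g in grid]
g_hat = grid[int(np.argmin(sse))]
```

The real problem is far harder (72 coupled parameters and stiff spiking dynamics), hence the interior-point machinery; this sketch only conveys the idea of matching model output to a recorded trace.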

Spring 2015

30th Jan: Thilo Gross

Room: SM3 (Mathematics)
Criticality as an ingredient for Information Processing (not just in the Brain)

Consider a computer. At its core we find a microprocessor that is essentially a complex circuit of semiconductor elements. If I took some hundred billion of these elements and linked them up randomly, would I get a usable computer? The answer is no. What if I took an equal number of neural cells and wired them up randomly, would I get a functional brain? Again, the answer is no. However, comparing the two systems, it is striking that for the neural cells it is much less important that I get the connectivity right. To build a functional microprocessor, every element needs to be connected to a precisely prescribed set of other elements. One wrong connection can ruin everything. By contrast, even the random neuron network can already show brain-like dynamics, and our own brains remain functional while undergoing changes to their connectivity in the course of development and aging. So unlike the computer, the brain does not rely on a carefully designed and hard-wired microscopic connectivity. Instead it is, within reason, able to do its job regardless of the underlying configuration. One can therefore ask: what is the common property of the configurations that allow information processing to take place?

In this talk I focus on criticality, a concept from physics that is a necessary ingredient for all information processing. I will show that the hallmarks of criticality are found not only in neural recordings, but also in other systems, such as insect swarms and fish schools, that need to process information collectively. From the perspective of neuroscience, recognizing the necessity for criticality may lead to a new way of looking at data and to new diagnostic tools.
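A standard toy model of this idea (an illustration of criticality in general, not material from the talk) is a branching process: each active unit activates a Poisson-distributed number of units with mean m, and cascades of activity stay small when m < 1, run away when m > 1, and span the widest range of sizes near the critical point m = 1.

```python
import numpy as np

def avalanche_size(m, rng, cap=10_000):
    """Total activity triggered by one seed unit in a branching process
    where each active unit activates Poisson(m) units in the next step;
    capped so that supercritical runs stay finite."""
    active, total = 1, 1
    while active > 0 and total < cap:
        active = rng.poisson(m * active)
        total += active
    return total

rng = np.random.default_rng(0)
sub = np.mean([avalanche_size(0.50, rng) for _ in range(2000)])
near_critical = np.mean([avalanche_size(0.95, rng) for _ in range(2000)])
```

The expected avalanche size is 1/(1 - m), so activity dies out quickly deep in the subcritical regime (sub is around 2), while near the critical point avalanches of all scales appear (near_critical is around 20); empirical avalanche-size distributions are one of the hallmarks of criticality referred to above.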


6th Feb: Bob Merrison-Hort

Room: SM3 (Mathematics)
Using a computational model to understand swimming and synchrony in the Xenopus tadpole spinal cord

Experimental recordings demonstrate that in addition to alternating left/right “swimming” patterns, the hindbrain/spinal cord of immobilised Xenopus tadpoles can also generate transient bouts of synchrony. During these synchrony bouts, central pattern generator (CPG) neurons on both sides of the body fire in-phase at approximately double the frequency of swimming. Investigating hypotheses about the neuronal mechanism underlying synchrony is difficult in real animals, so we instead use a large-scale computational model of the spinal cord. This “virtual spinal cord” combines synaptic connectivity obtained from a model of axon growth with a physiological model of neuronal activity based on the Hodgkin-Huxley equations. Under normal circumstances, the model produces stable, realistic swimming behaviour in response to simulated touch input. However, by applying a suitable perturbation during swimming we found that the model could temporarily switch to a synchronous mode of firing, similar to that seen in experimental recordings. Normally the synchrony regime appears to be unstable, but we found that small increases in the commissural axonal delay stabilise synchrony, and can produce behaviour that is tri-stable between quiescence, swimming and synchrony. These results suggest that the system is close to a bifurcation. I will discuss the possible biological significance of this, and present preliminary results from computational experiments that attempt to use a reduced model to analyse the behaviour of the system more formally.


13th Feb: Garrett Greene

Room: SM3 (Mathematics)
Stabilising the World: Using retinal non-linearities to stabilise visual percepts.

Fixational eye movements (FEM) are unconscious, involuntary and unpredictable eye movements, which continually shift our gaze even during attempted fixation (holding the gaze steady). These movements cause continual motion of the visual image on the retina, leading to an ambiguity between real-world motion and eye motion. Despite this, our perception of the visual world is stable, and these eye movements are rarely, if ever, perceived. Hence there must exist a mechanism to stabilise the visual image and distinguish visual motion in the outside world from that caused by FEM.
I present a model for such a mechanism which takes advantage of the non-linear response of certain types of retinal ganglion cells which are ubiquitous in the mammalian retina. This model allows for the correction of motion percepts under FEM, without the need for explicit eye movement information. Furthermore, the model offers an explanation for a set of well-known visual illusions, which can be understood as failure modes of this correction mechanism.


20th Feb: Mark Rogers

Room: C44 (Medical Sciences)
Generation and Analysis of Next-Generation Sequencing Data

Deep transcriptome sequencing with next-generation sequencing technologies is providing unprecedented opportunities for researchers to probe the transcriptomes of many species. Two important goals in these studies are (a) to predict changes in gene expression between different conditions and (b) to assess the extent of alternative splicing, a process that increases transcriptome and proteome diversity, and plays a key role in regulating gene expression and protein function.

A number of tools have been developed that can make these predictions automatically, but the old adage “garbage in, garbage out” still applies. Hence to make accurate predictions, one should understand the methods used to obtain data, the limitations of sequence alignment tools, and aspects of experimental design that will have the biggest impact on statistical power.


27th Feb: Ullrich Bartsch

Room: SM3 (Mathematics)
Macro and microstructure of sleep in health and disease

Sleep is a process that exhibits complex dynamics on multiple time scales. There are lifelong changes in sleep patterns (napping in the young and the old), diurnal rhythms of sleep and wake, and distinct patterns of sleep stages during the night. Moreover, one can describe sleep on a micro scale, where prominent oscillations during specific sleep stages occur and synchronise with millisecond precision.

The observation of these dynamics through methods such as actigraphy and electrophysiology allows a detailed characterisation of sleep dynamics. This may in turn inform models of underlying neuronal processes with potential applications in preclinical and clinical research. Indeed, various psychiatric diseases, such as schizophrenia, depression and Alzheimer’s, are associated with dramatic changes in sleep patterns, yet little is known about how much abnormal sleep contributes to symptoms experienced by patients. More recently, sleep’s role in overnight memory consolidation has been emphasised, which suggests sleep disturbance as a candidate mechanism for cognitive symptoms in psychiatry.

I will present some background on how sleep could be viewed from a dynamical systems perspective and how this could benefit the analysis of preclinical and clinical sleep data. I will show some preliminary analysis on global brain states defined as spectral clusters during sleep in an animal model of schizophrenia. I will also present some recent analysis describing the coordination of sleep oscillations during NREM sleep in patients diagnosed with schizophrenia, and how the microstructure of sleep reveals changes in network connectivity related to overnight memory consolidation.


6th March: Helen Scott

Room: SM3 (Mathematics)
A high content imaging siRNA screen for novel modulators of mitophagy

The selective autophagic removal of damaged mitochondria, known as mitophagy, has been implicated in numerous neurodegenerative diseases. Therefore, elucidating the details of this process and its (mis)regulation may lead to therapeutic targets. An imaging-based assay has been developed and used to screen a druggable-genome siRNA library for novel genes involved in and/or regulating mitophagy. The presentation will focus on the methods used to extract quantitative data from the images and to reveal ‘hit’ genes.


13th March: Steve Coombes

Room: SM3 (Mathematics)
Next generation neural field models

I will introduce neural field models — that is, mathematical theories of brain dynamics in which the interaction of billions of neurons is treated as a continuum process. To date such models have had a major impact in understanding a variety of neural phenomena, including electroencephalogram rhythms, geometric visual hallucinations, mechanisms for short term memory, feature selectivity in the visual cortex, binocular rivalry, and anaesthesia. They have also found applications in autonomous robotic behaviour, embodied cognition, and dynamic causal modelling. Since their inception as integro-differential equations in the 1970s, by Wilson, Cowan, Nunez and Amari, solid mathematical progress has been made in understanding their behaviour. This has included the use of powerful tools from functional analysis and dynamical systems, including geometric singular perturbation analysis, Evans functions, numerical bifurcation techniques, and homogenisation theory. However, the pace of model evolution has been relatively slow and, from a biological point of view, modern day models are almost as minimal as their ancestors. I would like to discuss the development of next generation neural field models that will be more relevant to current challenges in neuroscience, such as large scale brain dynamics for neuroimaging, feature based computation for vision and motion, and solving spatial navigation problems via reward learning.
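The classical starting point for such models is the Amari-type integro-differential equation, in which the activity u(x, t) at cortical position x evolves under distance-dependent synaptic interaction (the standard textbook form, not notation taken from the talk):

```latex
\tau \frac{\partial u(x,t)}{\partial t} = -u(x,t)
    + \int_{\Omega} w(|x-y|)\, f\bigl(u(y,t)\bigr)\, \mathrm{d}y
```

Here w is the synaptic connectivity kernel (often locally excitatory with lateral inhibition) and f is a nonlinear firing-rate function; the bump, wave and pattern solutions of this equation underpin the applications listed above.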


20th March: Lucia Marucci

Room: C44 (Medical Sciences)
Modelling and engineering dynamics of the Wnt pathway in mouse Embryonic Stem Cells

Complex non-linear dynamics have recently been reported as a signature of pluripotency: Embryonic Stem Cells (ESCs) and cells reprogrammed to a stem-like state (induced Pluripotent Stem Cells, iPSCs) show heterogeneous expression levels and temporal fluctuations of a number of genes. In this talk, I will focus on the dynamics of the Wnt/β-catenin pathway, highly implicated in both pluripotency and somatic cell reprogramming. A combined experimental and modelling approach revealed bistable or oscillatory dynamics of the pathway depending on the culture condition, with important biological implications. I will also suggest a synthetic biology strategy to engineer and control these dynamics in live cells.

Autumn 2014

3rd Oct: Colin Campbell

Large scale data integration in the context of biomedical datasets

Many problems in the biomedical sciences require the integration of large and disparate types of data. We consider three recently completed projects involving data integration. First, we consider the prediction of sequence variants in both the coding and non-coding regions of the human genome, to predict if the variant is functional in disease. Utilising multiple kernel learning, the constructed predictor gives state-of-the-art performance in predicting the status of non-coding variants. Second, we consider a model for the prediction of breast cancer progression using a weighted combination of a variety of different data sources, from genomic data through to clinical measures. Apart from building a model for predicting disease progression, an accurate predictor implicitly indicates those features which most heavily influence disease progression and mortality risk. Next, we consider a variant of Canonical Correlation Analysis for finding correlated linear combinations of features in paired datasets (in this case SNP and disease phenotype data from the British Women’s Heart Survey). This led to the discovery of novel single nucleotide variants apparently associated with several cardiovascular disease phenotypes. We then discuss future directions.


10th Oct: Conor Houghton

Calculating mutual information for spike trains and other data with distances but no coordinates

Many important data types, such as the spike trains recorded from neurons in typical electrophysiological experiments, have a natural notion of distance or similarity between data points, even though there is no obvious coordinate system. In this talk a simple estimator is presented for calculating the mutual information between data sets of this type.
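The talk's estimator is its own construction, but the general idea of distance-based information estimation can be illustrated with the well-known Kraskov-Stögbauer-Grassberger (KSG) nearest-neighbour estimator, which likewise needs only pairwise distances, shown here for two real-valued variables:

```python
import numpy as np

def psi(n):
    """Digamma at positive integer arguments: psi(n) = -gamma + H_{n-1}."""
    n = np.asarray(n)
    harm = np.concatenate(([0.0], np.cumsum(1.0 / np.arange(1, int(n.max())))))
    return -0.5772156649015329 + harm[n - 1]

def ksg_mi(x, y, k=5):
    """KSG estimator (algorithm 1): mutual information in nats from
    nearest-neighbour counts, using only pairwise distances."""
    n = len(x)
    dx = np.abs(x[:, None] - x[None, :])
    dy = np.abs(y[:, None] - y[None, :])
    dz = np.maximum(dx, dy)                  # max-norm in the joint space
    np.fill_diagonal(dz, np.inf)             # a point is not its own neighbour
    eps = np.sort(dz, axis=1)[:, k - 1]      # distance to the k-th neighbour
    nx = np.sum(dx < eps[:, None], axis=1) - 1   # marginal counts within eps,
    ny = np.sum(dy < eps[:, None], axis=1) - 1   # excluding the point itself
    return float(psi(k) + psi(n) - np.mean(psi(nx + 1) + psi(ny + 1)))

# Correlated Gaussians: true MI = -0.5 * ln(1 - 0.8^2), about 0.51 nats
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = 0.8 * x + 0.6 * rng.standard_normal(1000)
mi = ksg_mi(x, y)
```

Because the estimator touches the data only through distances, the same scheme extends to spike trains once a spike-train metric supplies the pairwise distances.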


17th Oct: Sergey Kasparov

Glio-centric view of the brain

24th Oct: Thelma Lovick

To pee or not to pee? – investigating a midbrain network that controls urinary voiding

For successful micturition (urinary voiding) the bladder must contract whilst the urethral sphincter simultaneously relaxes to enable urine to be expelled.  A spino-midbrain-spinal network is engaged to coordinate the event, which occurs only when the individual judges it is safe and socially acceptable to do so, implying that the controlling network can be switched on and off.  Using a rat model we will show that the functional integrity of the midbrain periaqueductal grey (PAG) is key to successful voiding and that the PAG contains a number of neuronal cell types whose firing is synchronised with different components of a void.  By considering the patterned activity of these neurones, it may be possible to model a functional in vivo network that integrates and controls a distinct physiological event.


31st Oct: 2 talks

James Hodge, Edgar Buhl and Krasi Tsaneva-Atanasova

Turning back the hands of time

Animals contain molecular clocks that keep time in dedicated clock neural circuits in their brains. These encode time-of-day information by changing action potential firing and membrane currents. When animals get old, their circadian (~24 hour) rhythms become weaker, with sleep moving earlier in the day and becoming fragmented. These senescent changes and the mechanisms of circadian rhythms are well conserved between flies and humans, but occur in about a month in flies. We will compare fly clock electrophysiology and behaviour in young and old flies, determine which properties and currents change, and develop a computational model.

Risto Kauppinen

MRI in the context of ‘neural dynamics’

MRI is the gold-standard imaging technique in both basic and clinical brain research. Water, the ubiquitous molecule that samples ‘magnetic environments’ in vivo, is exploited by MRI to form images. MRI directly probes the ‘dynamics of water’ in the brain through relaxation (T1 and T2) and thermal translations (diffusion) to generate images of brain macro- and micro-anatomy. In contrast, ‘brain activity’ is revealed indirectly by MRI, through mechanisms that are only partially characterized. Mathematical modeling of MRI signals yields both anatomical and functional connectivity maps with unprecedented value for understanding the functional organization of the brain. This NDF presentation summarizes ongoing MRI projects at CRIC with a strong link to mathematical modeling.


7th Nov: Dave Lester, University of Manchester

How to build an exascale supercomputer for computational neuroscience

In this short talk I will outline the challenges faced by chip designers when they contemplate the next generation of supercomputers: the most pressing of which is how to keep the energy budget manageable. The SpiNNaker team at Manchester has taken its inspiration from the proven energy efficiency of the brain, and has already produced chips and systems for neuroscience and robotic applications. These systems are now undergoing thorough testing and evaluation before the next generation system is produced as part of the HBP project. I will discuss the conclusions so far.


14th Nov: David Murphy

Transcriptomic approaches to understanding complex biological systems

The research interests of the Murphy lab are focused on the role of hypothalamic structures in the neurohumoral and behavioural control of salt and water homeostasis. We have used Affymetrix microarray gene profiling to catalogue gene expression in these brain regions in euhydrated and dehydrated male Sprague Dawley (SD) rats. These gene catalogues were then subjected to robust statistical analysis to identify genes that are differentially regulated as a consequence of dehydration. We have exclusive access to ReLyter, a proprietary cutting-edge Java tool developed by Source BioScience LifeSciences to make use of the statistical and machine learning functions within WEKA. The resulting gene networks allow identification of nodal genes with multiple links. Analysis of our transcriptome data from dehydrated rats revealed a putative network around Gonadotrophin inducible transcription factor 1 (Giot1), which is robustly up-regulated in the dehydrated hypothalamus. Rats are normally averse to 2% (w/v) NaCl. However, this aversion is overcome if 2% (w/v) NaCl is their only fluid source. An initial decline in fluid consumption is followed by a progressive increase in drinking over the course of the 7-day stimulus, concurrent with an increase in the excretion of large volumes of urine. However, hypothalamic injection of a lentiviral vector that expresses a Giot1 shRNA completely blocked fluid intake following the onset of salt loading. These data suggest that Giot1 is a crucial component of the brain mechanisms that regulate salt and water balance.


21st Nov: Padraig Gleeson, UCL

The Open Source Brain Initiative, enabling collaborative model development in computational neuroscience

Computational modelling is important for understanding how brain function and dysfunction emerge from lower level neurophysiological mechanisms. However, computational neuroscience has been hampered by poor accessibility, transparency, validation and reuse of models. The Open Source Brain (OSB) initiative (http://www.opensourcebrain.org) has been created to address these issues. The initiative aims to create a repository of neuronal and network models from multiple brain regions and species that will be in accessible, standardised formats and work across multiple simulators. OSB will create a collaborative space to facilitate model creation and sharing, where both computational and experimental researchers can contribute to their development. This talk will introduce the aims of the OSB initiative, describe the current functionality of the website and the range of models already available, and present future plans for the project.


28th Nov: Simon O’Connor, Biocomputations Group, University of Hertfordshire

Tools and Techniques for producing Detailed Biophysical Neuron Models

In this talk I will go through the software and techniques that were used to construct a gap junction connected olfactory bulb mitral cell model (O’Connor, Angelo and Jacob 2012). This will include the digitisation of morphology from fixed slice preparation slides; the fitting of passive parameters to multiple dual patch clamp recordings; and handling of ion channels in Genesis, Neuron and the move towards standardisation within the modelling community.


5th Dec: Tom Shimizu, FOM Institute AMOLF, Amsterdam

What can bacteria teach us about the motility of nematodes?

Motility provides a rich yet well-defined set of problems for studying the physiological bases of behavior. Given the inherently uncertain nature of natural environments, tasks such as ‘exploration’ and ‘exploitation’ of resources resemble computational search/optimization problems, and as such involve the generation and tuning of random variables. We know very little about how organisms with a nervous system generate and modulate random behavior, but much has been learned in recent years about how bacteria achieve this to optimize behavioral performance. In this talk, I will review some of the highlights in the bacterial arena, emphasizing what we have learned about their motile strategy and its mechanistic implementation in these simple cells. I will conclude with our nascent efforts to frame nematode motility as a similar problem – of generating and biasing a random walk.


12th Dec: Tim Vogels (Oxford)

The dance of excitation and inhibition (and some other interesting stories)

The first part of my talk will investigate the electrical filtering abilities of dendritic spine necks.
Most excitatory inputs in the mammalian brain are made on dendritic spines. Spines are thought to compartmentalize calcium gradients, and have been hypothesized to serve also as electrical compartments. The latter hypothesis necessitates relatively high spine neck resistances. Because of the small size of spines, spine neck resistance is difficult to assess directly in experiments, and it has thus been discussed at somewhat above average temperatures in the field. I will show some modeling work that aims to deduce resistance estimates from two recent datasets showing negative correlations between spine neck length and somatically recorded EPSPs, which thus seem to imply high spine neck resistances. Using numerical simulations, we explore the parameter regimes for the spine neck resistance and other variables (such as synaptic conductance changes) necessary to explain these datasets. Since we use NEURON for the above-mentioned simulations, and since NEURON and its accompanying ModelDB database of previously published models are (sometimes) notoriously difficult to get comfortable with, I will show some recent meta-analysis of publicly available model ion channels. Our work visualizes the family relations of over 3000 unique models, and uses the similarity of their performance in standard protocols as a means to suggest a handful of truly useful channels for everyday use.
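The spine-neck part of the argument rests on a simple passive intuition that can be written down as a steady-state voltage divider (a toy sketch, not the NEURON simulations themselves; all parameter values below are illustrative assumptions):

```python
def passive_spine_response(g_syn, r_neck, r_dend, e_syn=0.06):
    """Steady-state depolarisation (V) in a passive spine/dendrite divider.

    g_syn:  synaptic conductance (S) in the spine head
    r_neck: spine neck resistance (ohm)
    r_dend: dendritic input resistance (ohm)
    e_syn:  synaptic driving force (V)

    The synapse sees r_neck and r_dend in series, so a high neck
    resistance boosts the local spine-head EPSP while attenuating
    the voltage that reaches the dendrite (and hence the soma).
    """
    r_total = r_neck + r_dend
    v_spine = e_syn * g_syn * r_total / (1 + g_syn * r_total)
    v_dend = v_spine * r_dend / r_total
    return v_spine, v_dend

# Illustrative values: 1 nS synapse, 100 MOhm dendritic input resistance.
high_neck = passive_spine_response(1e-9, 500e6, 100e6)  # 500 MOhm neck
no_neck = passive_spine_response(1e-9, 0.0, 100e6)      # neck resistance zero
```

With the 500 MΩ neck the spine head depolarises strongly (about 22 mV here) while the dendritic EPSP shrinks relative to the zero-neck case, which is the qualitative pattern behind the reported negative correlation between neck length and somatic EPSP size.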
The last part of my talk will visit the stabilizing performance of inhibitory synaptic plasticity in recurrent cortical networks and introduce a class of cortical architectures with very strong and random excitatory recurrence that is stabilized by intricate, fine-tuned inhibition. I will show that excitation and inhibition in such networks dance with each other to transiently amplify specific activity states that can be used to reliably execute multidimensional movement patterns. The intriguing similarity to recent experimental observations, along with the tightly balanced excitation and inhibition, suggests inhibitory control of complex excitatory recurrence as a generic organizational principle in cortex.

Summer 2014

16th May: Liz Coulthard

Learning from patients: effects of dopamine on memory consolidation

In this talk, I will present recent data on the role of dopamine in memory consolidation in patients with Parkinson’s disease and discuss how our conclusions from patient studies can be used to make inference about cognition in people without brain disorders. If the attendees feel it is helpful, we can also explore the range of well-characterised local clinical populations suitable for research aimed at understanding neurobiology of cognition.


23rd May: Ulrik Beierholm, University of Birmingham

How hard to work: testing a model of dopamine and reward related vigour

The question of how humans and animals make behavioural choices has been the subject of an overwhelming number of studies. However, how vigorously to execute a choice has gathered less attention, with very few developed theoretical ideas. A study by Niv et al. (2006) suggested, based on the theory of average-reward reinforcement learning, that vigour should be optimally controlled by the opportunity cost of time as measured by the average rate of reward. The study further suggested that the average rate of reward could be physiologically encoded by tonic dopamine in the brain. I will explain the underlying theory and present the results of testing these ideas.
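The core of that account fits in a few lines. As a hedged sketch (my own minimal parameterisation, not Niv et al.'s full model): if acting with latency tau incurs a vigour cost C/tau, and every second spent costs the average reward rate R in foregone reward, then the optimal latency balances the two:

```python
import math

def optimal_latency(vigour_cost, avg_reward_rate):
    """Latency minimising total cost C/tau + R*tau.

    Differentiating and setting the derivative to zero gives
    tau* = sqrt(C / R): the higher the average reward rate (tonic
    dopamine, on this account), the shorter the optimal latency,
    i.e. the more vigorous the response.
    """
    return math.sqrt(vigour_cost / avg_reward_rate)

# Quadrupling the average reward rate halves the optimal latency.
slow = optimal_latency(1.0, 1.0)   # tau* = 1.0
fast = optimal_latency(1.0, 4.0)   # tau* = 0.5
```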


6th June: Ian Forsythe, University of Leicester

Getting excited by inhibition in theory, or is it just impossible physiology?

We will start by considering the physiological response to sound in the medial nucleus of the trapezoid body (MNTB): this receives an excitatory input from the contralateral ear (via the cochlear nucleus and the calyx of Held). The MNTB in turn provides an inhibitory output to several other nuclei of the ipsilateral superior olivary complex. Recording from one of these nuclei shows suppression of spontaneous activity during a sound and a strong burst of AP firing at the end of a sound, or during a brief gap in a sound. How would you design and implement a molecular circuit for such a response? The answer is an exquisite example of counterintuitive minimal ‘design’, and we’ll consider what it’s good for.


20th June: Jo Sadowski

Sharp wave-ripples shape plasticity in the hippocampus

Synaptic plasticity in the hippocampus is known to be important for learning and memory. However, little is known about the potential for naturally occurring spike patterns to induce plasticity. Using a combination of in vivo and in vitro electrophysiology, I investigate the requirements for plasticity induction in the hippocampus with place cell spike patterns recorded in awake animals. My work has identified a potentially crucial role for hippocampal sharp wave-ripple oscillations in tuning the magnitude of LTP induced by behaviourally relevant spike patterns.


27th June: Jeff Bowers

Why do neurons in the cortex respond so selectively to words, objects, and faces?

A classic finding in neuroscience is that single neurons in cortex often respond to information (e.g., an image of a face) in a highly selective manner. An obvious question is why do neurons respond in this way? I’ll describe a series of neural network simulations that show that models learn to code information in a selective manner when they are trained to store many things at the same time in short-term memory.

Spring 2014

21st Feb: Rich Gardner

Spike timing and phase dynamics in sleep spindle oscillations

In the mammalian forebrain, the state of non-REM sleep is accompanied by an array of neuronal oscillations that span widely different timescales. Some of these oscillations are believed to facilitate the transfer of long-term memory traces from the hippocampal region to neocortical areas – a process known as offline ‘systems’ consolidation. Spindle oscillations, which are 0.5-3 second periods of 8-15 Hz activity in the thalamocortical network, are widely believed to play an important role in this process, but precisely how is unknown. During my PhD, I have used tetrode recordings to characterize thalamocortical network activity during spindle oscillations. I found that multiple aspects of thalamic and cortical neuronal firing showed consistent temporal patterns across each spindle epoch, which may explain the stereotyped waxing-waning time course of these oscillations, but could also have a bearing on the functional role of spindles in memory processing. I will also briefly talk about my current work, which aims to investigate the specific role that spindles might play in the hippocampus-neocortex dialogue that occurs during sleep after learning.


28th Feb: Laura Atherton

Unravelling the mechanisms behind hippocampal place cell replay activity in sharp wave ripples

Sharp wave ripples (SWRs) are a type of oscillation that occurs in the hippocampal formation during, for example, periods of slow wave sleep and immobility. During these events, place cell activity which is present in the hippocampus during awake behaviour tends to be replayed in a time-compressed manner. Such activity is believed to facilitate spatial memory consolidation by aiding the transfer of labile spatial memories from the hippocampus to more stable neocortical sites. However, it is currently unknown what determines which particular place cells participate, or are active, within a given SWR. Is there a plasticity-dependent bias for certain cells to participate? Are there different intrinsic properties between the participating and non-participating cells? Is it a combination of the two, or something else entirely? Unfortunately I can’t answer any of these questions yet, but I will present the work I have been doing during the first year of my PhD to address them, from both an experimental and a computational perspective.


7th Mar: Emma Robinson

Emotions, decision-making and depression

Our research group is interested in the pathology of major depressive disorder and its treatment with antidepressants. Despite the fact that the first drug treatments for depression were discovered in the 1950s and we have a detailed understanding of where in the brain they act, we still lack a clear understanding of how these effects relate to the emotional symptoms seen in depression. We have also failed so far to provide an explanation for how or why major depressive disorder develops. In this talk, I will present a novel hypothesis about the cause and treatment of depression and identify some of the challenges we face in trying to test this theory. I will focus on two areas which are of specific interest. The first relates to emotions and decision-making, and the second relates to gene-by-environment interactions underlying vulnerability to depression.


21st Mar: Innes Cuthill

Animal Camouflage: Evolutionary Biology meets Computational Neuroscience, Art and War

Animal camouflage provides some of the most striking examples of the workings of natural selection; it has also long been an inspiration for military camouflage design, with the pioneers of camouflage theory being both artists and natural historians. While the general benefits of camouflage are obvious, understanding the precise means by which the viewer is fooled represents a challenge. This is because animal camouflage is an adaptation to the eyes and mind of another animal, often with a visual system different from (and sometimes superior to) that of humans. A full understanding of the mechanisms of camouflage therefore requires an interdisciplinary investigation of the perception and cognition of non-human species, involving the collaboration of biologists, neuroscientists, perceptual psychologists and computer scientists. I review the various forms of camouflage from this perspective, illustrated by the recent upsurge of experimental studies of long-held, but largely untested, theories of defensive colouration.


4th Apr: Claudia Clopath

Receptive field formation by interacting excitatory and inhibitory plasticity

Cortical neurons receive a balance of excitatory and inhibitory currents. This E/I balance is thought to be essential for the proper functioning of cortical networks, because it ensures their stability and provides an explanation for the irregular spiking activity observed in vivo. Although the balanced state is a relatively robust dynamical regime of recurrent neural networks, it is not clear how it is maintained in the presence of synaptic plasticity on virtually all synaptic connections in the mammalian brain. We recently suggested that activity-dependent Hebbian plasticity of inhibitory synapses could be a self-organization mechanism by which inhibitory currents can be adjusted to balance their excitatory counterpart (Vogels et al. 2011). The E/I balance not only generates irregular activity, it also changes neural response properties to sensory stimulation. In particular, it can lead to a sharp stimulus tuning in spiking activity although subthreshold inputs are broadly tuned, it can change the neuronal input-output relation and cause pronounced onset activity due to the delay of inhibition with respect to excitation. This control of neuronal output by the interplay of excitation and inhibition suggests that activity-dependent excitatory synaptic plasticity should be sensitive to the E/I balance and should in turn be indirectly controlled by inhibitory plasticity. Because we expected that excitatory plasticity is modulated by inhibitory plasticity, the question under which conditions excitatory Hebbian learning rules can establish receptive fields needs to be re-evaluated in the presence of inhibitory plasticity. In particular, it is of interest under which conditions neurons can simultaneously develop a stimulus selectivity and a cotuning of excitatory and inhibitory inputs. To address these questions, we analyse the dynamical interaction of excitatory and inhibitory Hebbian plasticity. 
We show analytically that the relative degree of plasticity of the excitatory and inhibitory synapses is an important factor for the learning dynamics. When the excitatory plasticity rate is increased with respect to the inhibitory one, the system undergoes a Hopf bifurcation, losing stability. We also find that the stimulus tuning of the inhibitory input neurons has a strong impact on receptive field formation. When the stimulus tuning of the inhibitory input neurons has the same width as that of the excitatory input neurons, stimulus selectivity is prevented, but if the inhibitory input neurons are not tuned, selectivity emerges. This latter scenario, together with our analysis, suggests that the sliding threshold of BCM rules may not be implemented on a cellular level but rather by plastic inhibition arising from interneurons without stimulus tuning. If the stimulus tuning of the inhibitory input neurons is broader than that of the excitatory inputs, consistent with experimental findings, we observe a local BCM behavior that leads to a stimulus selectivity on the spatial scale of the inhibitory tuning width. This work is done in collaboration with Tim Vogels and Henning Sprekeler.

Autumn 2013

18th Oct: Amelia Burroughs

Using microscopy to map the spatial distribution of synapses onto single cells

The precise arrangement of excitatory and inhibitory synapses onto individual neurons remains largely unknown. Many theories exist regarding the optimal distribution of synapses throughout the dendritic tree, but this has been difficult to test experimentally with any accuracy. Here we use a new microscopy technique (Focused Ion Beam Scanning Electron Microscopy) to map individual synaptic connections on Purkinje cells and stellate cells of the cerebellum. We can conclude that the number of excitatory connections far outweighs the number of inhibitory synapses. Inhibition may be specifically located to act as a regulatory mechanism preventing excessive excitation reaching the soma. This occurs at a spatial scale just greater than a single dendrite.


25th Oct: Denize Atan

An eye on the brain

Neurodevelopmental disorders are globally prevalent with a huge healthcare burden on society. Embryologically, the neural retina shares its origin with the CNS, and so predictably, many of the same processes and genes implicated in the development of the CNS are also required for normal retinal development. Transcription factors (TFs) play key roles in directing neural circuit assembly through their precise spatial, temporal, and cell type-specific control of gene expression, and this is highlighted by the large number of mouse mutants that have been identified in which loss of a particular TF results in a specific defect in neural connectivity in the retina and/or brain. The importance of properly constructed neuronal networks is particularly pertinent to the epilepsy syndromes, in which imbalanced excitatory (glutamatergic) and inhibitory (GABAergic) inputs lead to cortical hyperexcitability. Although we suspect that many cases are genetic in aetiology, it is often difficult to make a definitive genetic diagnosis. This is because standard technology has a low sensitivity, and it is estimated that four times as many genes as have already been associated with neurodevelopmental disorders remain to be discovered. In this talk, I will discuss how the genes that influence the development of neural circuits in the eye have provided insights into similar processes that occur during the development of the brain.


1st Nov: Jonathan Lawry

The Practical Applications of Vagueness

This talk will explore the potential benefits of embedding vagueness as part of formal knowledge representation in intelligent systems. By focusing on the utility of vagueness in multi-agent communications, natural language generation, consensus modelling and decision making, we will investigate different aspects of the phenomenon and outline how these are captured by a number of distinct theories.


8th Nov: Jeff Orchard

From Spikes to Dynamics

Sure, neurons fire spikes. But what do all those spikes mean, and how do they perform computations? In this talk, I will describe a framework for thinking about neural computations, and show you how to turn a system-level description of a network into a network of spiking neurons. The populations of neurons are connected in a way that does your computation for you. Even though you’re working with spiking neurons, you can think about your network in the data domain, and ignore the neural details (if you wish). I’ll also show lots of computer demonstrations.


15th Nov: Dimitris Bampasakis, University of Hertfordshire

Short-term depression of inhibitory Purkinje cell synapses enhances gain modulation in the cerebellar nuclei

Information in neurons can be encoded by their action potential rate, thus making the transformation of input to output rate, the input-output (I-O) relationship, a core computational function. Introduction of a second input, often called modulatory input, can modify this I-O relationship in ways that correspond to different arithmetic operations. Here, we examine the modulation of the slope of the I-O relationship, also referred to as gain modulation. Gain modulation can be based on a wide variety of biophysical mechanisms, with short-term depression (STD) of excitatory synapses being one of them. Commonly, gain modulation is studied by examining the effect of tonic or synaptic inhibition on the excitatory I-O relationship. However, some projection neurons, like cerebellar Purkinje cells (PCs), are inhibitory. Therefore, the opposite scenario, in which the effect of inhibition on output rate is being modulated by an excitatory input, may occur as well. As a previous study found that inhibitory synaptic input variability can change the output rate of neurons in the cerebellar nuclei (CN), the question arises how excitatory input can modulate this relationship. Considering the excitatory input from mossy fibres (MF) onto CN neurons as modulatory, we investigated the effects on gain control exerted by STD of the inhibitory synapses that PCs make on a model CN neuron. We found that STD at the inhibitory PC-CN synapse enhanced gain modulation. Thus, like STD at excitatory synapses, STD at inhibitory synapses can enable neurons to perform multiplicative operations on their inputs.


22nd Nov: Nathan Lepora

Probabilistic neuroscience: Evolution or Revolution?

Over the last few decades, probabilistic/statistical methods have become increasingly influential in neuroscience, as well as in other disciplines such as computer science and robotics. In this talk, I describe how some of this progress is impacting our theoretical conceptualization of brain function. In particular, I focus on perception and learning in the cortico-basal ganglia network, and describe how the probabilistic modelling framework can give a basis for understanding neural function/dysfunction in health and disease.


29th Nov: Matt Jones

Losing control under ketamine

Pharmacology is complicated. Drugs are dirty, and their systemic effects in animals, volunteers and patients arise from a bewildering array of direct and indirect actions on pre- and post-synaptic receptors throughout the CNS. In collaboration with Rosalyn Moran (Virginia Tech), we have been attempting to disentangle the effects of the psychotomimetic ketamine on limbic-cortical theta and gamma network oscillations using Dynamic Causal Modelling (DCM). Setting our findings in the context of predictive coding theories suggests that NMDA receptor antagonism by ketamine in hierarchical and reciprocal networks may result in failure of top-down connections from frontal cortical regions to signal predictions to hippocampus, thereby disrupting error signalling. Given that theta and gamma rhythm abnormalities are also evident in schizophrenic patients, this approach may represent a framework for the study of the synaptic bases of schizotypal cognitive disturbance. Or not. Please help me to decide.


13th Dec: Ute Leonards, Adeline Paiement and Peta Sharples

Automatic 3D and 4D modelling from sparse and misaligned tomographic data

This talk will present and illustrate with an example how computer vision can enhance the analysis of medical images. It will focus on the automatic modelling of organs from MRI and CT datasets that suffer from misalignments and gaps between the images. We will first present the challenges raised by such datasets, and notably the three inter-dependent issues of registration, segmentation, and interpolation. We then propose an integrated framework to solve these issues and produce accurate, fully automatic models. At the end of the talk, Ute Leonards will introduce a new project that may be an application and an extension of this framework, namely the detection of longitudinal cerebral modifications after childhood traumatic brain injury.

Summer 2013

10th May: Barak Pearlmutter

The Slow Axon Blockade Hypothesis for DBS

Deep brain stimulation (DBS) can ameliorate essential and Parkinsonian tremor. The detailed mechanism by which this is achieved is unclear, but clues to the mechanism may lie in the known destabilising influence of time delays upon closed-loop systems. We hypothesise that DBS tends to stabilise the system and reduce tremor oscillations by reducing time delays in motor control feedback loops. We posit that the reduction is associated with a partial blockade of axonal pathways by antidromic activation, with the blockade being less complete for axons with higher propagation velocities. The inverse relationship between blockade effectiveness and propagation velocity is due to the blocking pulses clearing the axon faster when their velocity is higher, leaving a larger fraction of the time for signalling activity. Two mathematical models have been used to illustrate the idea: a biomechanical model of arm movement and a random neuronal network. Both models exhibit changes of behaviour under simulated slow axon blockade that agree with several experimental observations of DBS. The hypothesis in general accounts for a variety of known features of DBS, especially regarding the target area and the stimulation frequency, and makes a number of testable predictions.
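The stability argument at the heart of this hypothesis can be illustrated with a minimal delay-differential sketch (my own toy model, not the authors' biomechanical or network models; all parameters are arbitrary). A pure negative feedback loop dx/dt = -k*x(t - tau) is stable for k*tau < pi/2 and oscillates unstably beyond that, so anything that shortens the effective loop delay, as the slow axon blockade hypothesis proposes DBS does, pushes the loop back toward stability:

```python
from collections import deque

def simulate_delayed_feedback(delay, k=1.0, dt=0.01, steps=4000):
    """Euler-integrate dx/dt = -k * x(t - delay) from a constant history.

    The loop is stable (damped oscillation) for k*delay < pi/2 and
    unstable (growing, tremor-like oscillation) beyond that point.
    """
    n_delay = int(round(delay / dt))
    # history[0] is x(t - delay); history[-1] is x(t).
    history = deque([1.0] * (n_delay + 1), maxlen=n_delay + 1)
    trace = []
    for _ in range(steps):
        x_now = history[-1]
        x_delayed = history[0]
        history.append(x_now - dt * k * x_delayed)  # Euler step
        trace.append(history[-1])
    return trace

short_delay = simulate_delayed_feedback(1.0)  # k*delay = 1.0 < pi/2: decays
long_delay = simulate_delayed_feedback(2.0)   # k*delay = 2.0 > pi/2: grows
```

Running both cases shows the short-delay loop ringing down towards zero while the long-delay loop develops a growing oscillation, which is the qualitative behaviour the hypothesis attributes to tremor with and without the blockade of slow axonal pathways.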


17th May: Martin Homer

The mathematical modelling of oscillatory dynamics in the accessory olfactory bulb

24th May: Jack Mellor

Modulation of hippocampal synaptic transmission and network oscillations

The hippocampus is sometimes referred to as a very large random access synaptic space where new information can be rapidly compared to existing memories. Networks of associated neurons form rapidly and interact with other networks by the processes of synaptic plasticity and network oscillations. We have been exploring how these processes interact and can be modulated by neuromodulatory input from the cholinergic system.

Spring 2013

25th Jan: Clea Warburton

Recognition memory is our ability to distinguish between familiar objects or places, i.e. those which we have encountered before, and novel objects or places that we have never come across before. This form of memory is central to our ability to recall day-to-day events, and is notably lost in cases of amnesia following head trauma or neurodegeneration.

Recognition memory is not a unitary process, but rather may be sub-divided depending on the type of information to be remembered. We and others have shown that recognition memory for objects is mediated by the perirhinal cortex in the medial temporal lobe, while the recognition of places is dependent on the hippocampus. Of particular interest to my laboratory are two facets of recognition memory which allow us to remember whether we have encountered an object within a particular location (object-in-place memory) or in a particular sequence (temporal order or serial recognition memory).

Data from our lab demonstrate that recognition memory is dependent upon a number of key brain regions, namely the perirhinal cortex, medial prefrontal cortex and hippocampus; more importantly, we have evidence showing that these regions form components of an integrated memory system. Further, we have examined the role of synaptic plasticity in the formation of different forms of recognition memory and revealed the importance of a number of neurotransmitter and intracellular signalling mechanisms in the formation of memories within the neural circuit we have identified.


1st Feb: John Grogan

Dopamine’s effects on reinforcement learning and memory

Dopamine and the basal ganglia have been implicated in reinforcement learning and memory, although there is disagreement on whether dopamine during learning or retrieval is the deciding factor. This talk will focus on an experiment I am running that attempts to separate out these effects using Parkinson’s Disease patients, and the computational models fit to the data.


8th Feb: Krasimira Tsaneva-Atanasova

Gonadotrophin-releasing hormone (GnRH) is a hormone released from the brain to control the secretion of reproductive hormones. Pulsatile GnRH can increase fertility (e.g. in IVF programmes) whereas sustained GnRH reduces fertility (and is used to treat hormone-dependent cancer), but the ways in which the GnRH receptor and its intracellular signalling cascade decode these kinetic aspects of stimulation are essentially unknown. In addition, we know little about the intracellular mechanisms that govern frequency modulation of gonadotropin secretion, much less how such fine-tuning is regulated by different signal inputs. We develop a signalling pathway model of GnRH-dependent transcriptional activation in order to dissect the dynamic mechanisms of differential regulation of the gonadotropin subunit genes. The model incorporates key signalling molecules, including extracellular-signal regulated kinase (ERK) and calcium-dependent activation of Nuclear Factor of Activated T-Cells (NFAT), as well as translocation of activated/inactivated ERK and NFAT across the nuclear envelope. In silico experiments designed to probe transcriptional effects downstream of ERK and NFAT reveal that interaction between transcription factors is sufficient to account for frequency discrimination.


15th Feb: Ullrich Bartsch

Neural trajectories of working memory: a graphical approach to cognition

To this day the very nature of neural computation still remains elusive. Recently it was proposed that cortical networks operate near the edge of chaos, where transient non-linear network dynamics constitute a fundamental principle of neuronal computing. One implementation of this principle is known as liquid state machine, or reservoir computing (for a recent review see Buonomano and Maass, 2009).

There is only limited evidence from in vivo electrophysiology to corroborate this type of computing in biological networks. Transient dynamics have been identified during encoding of odours in projection neurons in bees and during working memory tasks in cortical networks in rodents.

Inspired by the concept of reservoir computing, I will present some preliminary analysis of extracellular unit recordings in rats during a spatial working memory task. This newly developed analysis aims to embed recorded neural activity into a low-dimensional neural state space by calculating distances between spike trains and then applying multidimensional scaling. This allows neural dynamics during the task to be visualised in a time-resolved manner inside a meaningful coordinate system.
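The first step of such a pipeline, a distance between spike trains, can be sketched with the classic Victor-Purpura metric (one common choice; the abstract does not say which spike-train distance is used, so this is an illustrative assumption). The distance is the minimum cost of editing one train into the other, where inserting or deleting a spike costs 1 and moving a spike by Δt costs q·Δt; the resulting pairwise distance matrix is what multidimensional scaling would then embed:

```python
def victor_purpura(train_a, train_b, q):
    """Victor-Purpura spike-train distance via dynamic programming.

    train_a, train_b: sorted lists of spike times (s)
    q: cost per second of shifting a spike; as q -> 0 the metric
       reduces to a pure spike-count difference.
    """
    n, m = len(train_a), len(train_b)
    # cost[i][j]: distance between the first i spikes of a and first j of b
    cost = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        cost[i][0] = float(i)        # delete i spikes
    for j in range(m + 1):
        cost[0][j] = float(j)        # insert j spikes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            shift = q * abs(train_a[i - 1] - train_b[j - 1])
            cost[i][j] = min(cost[i - 1][j] + 1.0,        # delete a spike
                             cost[i][j - 1] + 1.0,        # insert a spike
                             cost[i - 1][j - 1] + shift)  # move a spike
    return cost[n][m]
```

Computing this distance for every pair of trials (or time windows) yields the symmetric distance matrix that classical multidimensional scaling embeds into a low-dimensional state space.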

At this stage this is merely a tool for visualising network dynamics over time. One preliminary result is the separation of neuronal trajectories during the working memory period of the task. The dynamics resemble winnerless competition type computation, with brief periods of high synchrony between recorded units.

I would like to use the forum as an opportunity to present these very preliminary results, discuss the usefulness of this approach and most importantly spur a discussion about the nature of neuronal computing.


22nd Feb: Alan Winfield

The Thinking Robot

Press headlines frequently refer to robots that think like humans, have feelings, or even behave ethically, but is there any basis of truth in such headlines, or are they simply sensationalist hype? Computer scientist E. W. Dijkstra famously wrote that “the question of whether machines can think is about as relevant as the question of whether submarines can swim”, but the question of robot thought is one that cannot so easily be dismissed. In this talk, I will outline the state-of-the-art in robot intelligence, attempt to answer the question “how intelligent are present-day intelligent robots?”, and describe efforts to design robots that are not only more intelligent but also have a sense of self. But if we should be successful in designing such robots, would they think like animals, or even humans? Are there risks, or ethical issues, in attempting to design robots that think?


1st Mar: Eoin Lynch

Parameter estimation of an auditory spiking neuron model

Spiking neuron models can accurately reproduce the spike trains of cortical neurons in response to somatically injected currents. An evolutionary optimisation method is presented here for fitting generic spiking neuron models to spike train data. The method is initially tested on in vitro spike train recordings from cortical neurons responding to known, in vivo-like current injection. An extended model, consisting of a cascade of a receptive-field-like structure, which estimates the somatically injected current, and a spiking neuron model, is then optimised using the method to characterise the spike train responses of neurons in the zebra finch auditory forebrain responding to natural auditory stimuli in vivo.
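As a sketch of the general approach, not the specific model or optimiser from the talk, the following fits the two parameters of a leaky integrate-and-fire neuron to a synthetic target spike train using a simple truncation-selection evolutionary loop. All parameter values and the fitness function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
LO, HI = np.array([0.005, 0.2]), np.array([0.1, 1.2])  # bounds on (tau_m, v_th)

def lif_spike_times(tau_m, v_th, I=1.5, dt=1e-3, T=0.5):
    """Leaky integrate-and-fire neuron driven by a constant injected current."""
    v, spikes = 0.0, []
    for step in range(int(T / dt)):
        v += dt / tau_m * (-v + I)
        if v >= v_th:                 # threshold crossing: record spike, reset
            spikes.append(step * dt)
            v = 0.0
    return np.array(spikes)

def fitness(params, target):
    """Penalise spike-count mismatch plus squared spike-timing error."""
    spikes = lif_spike_times(*params)
    n = min(len(spikes), len(target))
    if n == 0:
        return -1e6
    return -(np.sum((spikes[:n] - target[:n]) ** 2)
             + abs(len(spikes) - len(target)))

def evolve(target, pop_size=28, n_gen=60, sigma=np.array([0.005, 0.05])):
    pop = rng.uniform(LO, HI, size=(pop_size, 2))
    for _ in range(n_gen):
        scores = np.array([fitness(p, target) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 4:]]       # truncation selection
        children = np.repeat(parents, 4, axis=0)
        children += rng.normal(0.0, sigma, size=children.shape)  # Gaussian mutation
        pop = np.clip(children, LO, HI)
    scores = np.array([fitness(p, target) for p in pop])
    return pop[np.argmax(scores)]

target = lif_spike_times(0.02, 0.8)   # synthetic "data" from known parameters
best = evolve(target)
```

For constant current injection, spike timing alone does not uniquely determine both parameters (a ridge of equally good solutions exists), which is one reason richer, in vivo-like stimuli are valuable for fitting.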


8th Mar: Casimir Ludwig

Control over fixation duration

Human vision relies critically on sampling the visual environment during brief periods of stable fixation. During any one fixation, the observer essentially performs three tasks: (i) analyse the visual information at the current point of gaze; (ii) analyse peripheral visual information in order to decide where to fixate next; (iii) decide when to shift gaze to the next target location. In this seminar I will focus on this third, temporal component. I will discuss models based on integrating sensory evidence to a decision criterion. In this regard, one critical question is whether fixation duration is controlled by the quality of sensory evidence at all, and if so, whether this evidence comes from the current point of fixation or from potential target locations in the periphery.
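A toy version of such an evidence-integration account (my illustration, not the speaker's model): evidence accumulates noisily to a criterion, and fixation duration is the first-passage time. If evidence quality controls duration, higher-quality evidence (a larger drift rate) should yield shorter fixations.

```python
import numpy as np

rng = np.random.default_rng(1)

def fixation_durations(drift, threshold=1.0, noise=1.0,
                       dt=0.002, n_trials=500, t_max=5.0):
    """First-passage times of a drift-diffusion process to a single criterion."""
    n_steps = int(t_max / dt)
    steps = drift * dt + noise * np.sqrt(dt) * rng.normal(size=(n_trials, n_steps))
    paths = np.cumsum(steps, axis=1)
    crossed = paths >= threshold
    # index of first crossing; trials that never cross are censored at t_max
    first = np.where(crossed.any(axis=1), crossed.argmax(axis=1), n_steps - 1)
    return (first + 1) * dt

good_evidence = fixation_durations(drift=3.0)   # high-quality sensory evidence
poor_evidence = fixation_durations(drift=1.0)   # low-quality sensory evidence
```

Comparing the resulting duration distributions against empirical fixation durations is one way to test whether, and from where, evidence quality exerts control.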


15th Mar: Simon Farrell

Clustering in working memory and episodic memory

I’ll present part of a programme of work that suggests some common principles and mechanisms that underlie working memory and episodic memory. I’ll talk about some data from serial recall (a prototypical short-term memory task) and free recall (a standard episodic memory task), and discuss a model that gives a fine-grained account of these data. The essential idea of the model is that longer sequences of information are segregated into clusters of serially ordered information, and that free recall and serial recall primarily differ in the strategies employed to access those clusters. Depending on time I’ll talk about individual differences, effects of ageing, amnesia, and the question of why memory should behave in this fashion.


26th Apr: L.J.B. Briant

High blood pressure (BP), or neurogenic hypertension, is known to be related to dysfunction of the sympathetic nervous system (SNS). To investigate how SNS dysfunction can cause a chronic rise in BP, we have constructed a model of the pathway of transmission from the SNS to the smooth muscle cells (SMCs) that are responsible for the contraction of arteries. The differential equations describe the pathway from spike generation in the nerve cells to the calcium-mediated contractile response of the SMCs.

Data from hypertensive rats indicate that the phase and amplitude of the respiratory component of the sympathetic input to SMCs change in this disease state. We use the model to show that changing the respiratory component of the input influences the contractile force generated in the SMC.
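The actual model involves spike generation and calcium dynamics; purely to illustrate why respiratory modulation can matter, here is a deliberately crude rate-based caricature (all names and values are my assumptions): a respiratory-modulated drive feeds a low-pass "calcium" variable, and force is a steep Hill-type function of calcium, so increasing the modulation amplitude raises the mean force even though the mean drive is unchanged.

```python
import numpy as np

def mean_contractile_force(mod_amp, r0=1.0, f_resp=1.0, tau_c=0.2,
                           k=0.5, K=1.0, n=4, dt=1e-3, T=20.0):
    """Respiratory-modulated drive -> low-pass 'calcium' -> Hill-type force."""
    t = np.arange(0.0, T, dt)
    rate = r0 * (1.0 + mod_amp * np.sin(2 * np.pi * f_resp * t))  # sympathetic drive
    c, forces = k * r0, []
    for r in rate:
        c += dt / tau_c * (-c + k * r)            # calcium follows drive with a lag
        forces.append(c ** n / (K ** n + c ** n)) # steep contraction nonlinearity
    return np.mean(forces[len(forces) // 2:])     # average after the transient

flat = mean_contractile_force(0.0)        # unmodulated input
modulated = mean_contractile_force(0.6)   # respiratory-modulated input
```

Because the force nonlinearity is convex at this operating point, oscillation around the same mean calcium level produces a larger mean force; this is the qualitative effect, not the quantitative result, from the talk.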

Autumn 2012

12th Oct: David Coyle

Intelligent machines and human agency

Intelligent computing systems – particularly in research contexts – are becoming increasingly complex. These systems have the potential to infer human intentions and then provide assistance or act on these inferences. This raises important questions regarding the ownership and control of actions when humans use, and interact with, new technologies. In the neuroscience literature, the sense of agency is defined as the experience of being in control of one's own actions and, through this control, affecting the external world and having responsibility for the consequences of our actions. This talk will describe research I have undertaken over the past two years, in collaboration with psychiatrists and cognitive neuroscientists at the Behavioural and Clinical Neuroscience Institute in Cambridge. We have applied neurocognitive experimental techniques to investigate people's experience of agency when interacting with intelligent computer interfaces, and with changing modalities of human-computer interaction. I will discuss two specific experiments, but would also like to discuss the ways in which interdisciplinary collaborations between human-computer interaction and neuroscience researchers offer new opportunities to extend both disciplines. David Coyle is Lecturer in Human-Computer Interaction in the Department of Computer Science at the University of Bristol.


19th Oct: Alan Roberts

How to build the connectome of a small CNS controlling rhythmic activity

The brainstem and spinal neurons and networks controlling swimming in hatchling frog tadpoles have been defined in some detail. A simple axon growth model can match real neuron populations and generate a synaptic connection map or “connectome”. When this connectome is mapped onto a functional model it can swim in response to brief “sensory” stimuli. Our knowledge of this system allows detailed questions to be asked about the crucial features of the neurons and network controlling a rhythmic activity, in particular a population of electrically coupled pacemaker neurons.


26th Oct: Jonathan Brooks

Exploring brainstem – spinal cord connectivity during distraction based analgesia

Distraction based analgesia is a robust finding from human and experimental animal studies. The amount of pain perceived by a subject can be modified by attention related processes, which allow continued performance at tasks during the experience of pain. The first point at which painful stimuli are processed in the central nervous system lies in the dorsal horn of spinal cord, and key areas in the brainstem (peri-aqueductal grey matter) and rostral ventromedial medulla have been shown to influence these incoming pain-related signals. I will present data from a recent functional magnetic resonance imaging (fMRI) study which explored these interactions through a 2×2 factorial design (factors: task difficulty – hard or easy; applied temperature – high or low). The question remains whether (in man) there is a tight coupling between the brainstem and spinal cord activity that facilitates the suppression of incoming nociceptive (i.e. pain-related) signals.


2nd Nov: Ruth Betterton

Brain waves: Gamma Oscillations in the Hippocampus

A neuronal oscillation can be broadly described as the synchronised firing of a population of cells. Gamma oscillations (30-100 Hz) are associated with a variety of cognitive functions, including attention, sensory processing, and learning and memory. We developed an in vitro preparation to study the properties of gamma oscillations within CA3 of the rat hippocampus. This system enabled concomitant local field potential and whole-cell electrophysiological recordings. Simulations run in a computational model reproduced many of the properties observed in the slice preparation and in in vivo work by others. Future work will extend both models to investigate the cholinergic modulation of these processes.


9th Nov: Rafal Bogacz

New evidence for Bayes’ theorem being hardwired in the basal ganglia

This talk will present the results of two experiments testing predictions of a model which assumes that, during decision making, the cortico-basal-ganglia circuit computes the probabilities that the considered alternatives are correct, according to Bayes' theorem. The talk will start with a review of the model. It will then present results of an experiment from the lab of Peter Magill in Oxford on the microcircuitry of the globus pallidus (Mallet et al., 2012, Neuron), and an experiment performed by Chrystalina Antoniades in Oxford on the effect of deep brain stimulation on patients' representations of probabilities.
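The normative computation the model attributes to the circuit is iterated Bayesian updating of the probability that each alternative is correct. In its bare form, with illustrative numbers rather than anything from the model itself:

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Posterior over alternatives after one piece of evidence."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Three alternatives, uniform prior; each observation favours the first.
p = np.full(3, 1.0 / 3.0)
for _ in range(5):
    p = bayes_update(p, np.array([0.6, 0.3, 0.1]))
```

The model's claim is that quantities equivalent to these normalised posteriors are represented and updated within the cortico-basal-ganglia loop.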


16th Nov: Tom Jahans-Price

Hippocampal-prefrontal information coding during spatial decision-making

In order to investigate information processing during decision-making, we introduce a computational model describing a maze-based task in which rats have to choose between a left or right turn depending on the direction of their previous turn (Jones & Wilson 2005, PLoS Biology 3 e402). The model uses differential equations to describe the behaviour and interactions of populations of neurons, and integrates sensory input with working memory and rule-learning to produce learning and performance that accurately recapitulate behavioural data from rats. The model predicts the occurrence of turn- and memory-dependent activity in neuronal networks subserving task performance.
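The kind of population-rate competition such models rely on can be caricatured by two mutually inhibiting populations representing the left and right turn, with a small memory-dependent bias in the input tipping the competition. This is a sketch under my own assumptions, not the authors' equations:

```python
import numpy as np

def decide(bias, w_inh=1.2, tau=0.05, dt=1e-3, T=1.0):
    """Two mutually inhibiting rate populations; the more strongly driven one wins."""
    r = np.array([0.1, 0.1])                      # firing rates: [left, right]
    drive = np.array([0.5 + bias, 0.5 - bias])    # input biased by working memory
    for _ in range(int(T / dt)):
        inp = drive - w_inh * r[::-1]             # cross-inhibition between populations
        r = r + dt / tau * (-r + np.maximum(inp, 0.0))
    return "left" if r[0] > r[1] else "right"
```

With strong enough cross-inhibition the network settles into a winner-take-all state, giving the turn-selective, gradually building activity that the full model predicts in PFC.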

We tested these model predictions using a new software toolbox (Maze Query Language, MQL) to analyse activity of prefrontal cortical (PFC) and hippocampal (CA1) neurons recorded from 6 adult rats during task performance. The firing rates of CA1 neurons discriminated context (i.e. precise trajectory between reward points on a given trial) but were not turn-selective. In contrast, we found a subset of PFC neurons selective for turn-direction and/or trajectory that displayed a gradual build-up of activity before the decision turn; turn-selectivity in PFC was significantly reduced during error trials. We found some PFC neurons selective for turn, some selective for context, and some conjunctively encoding both.

These analyses complement data from neurophysiological recordings in non-human primates indicating that firing rates of cortical neurons correlate with integration of sensory evidence during perceptual decision-making. Further analyses of the rodent data will allow us to link this cortical processing to input from subcortical structures including hippocampus and striatum.


23rd Nov: Stafford Lightman and Jamie Walker

Neuroendocrine Dynamics

Oscillating levels of adrenal glucocorticoid hormones are essential for optimal gene expression, and for maintaining physiological and behavioural responsiveness to stress. The biological basis for these oscillations is not known, but a neuronal pulse generator within the hypothalamus has remained a popular hypothesis. We have used mathematical modelling combined with experiments to show that pulsatile hypothalamic activity is not required for generating ultradian glucocorticoid oscillations, and that the oscillations are generated by a sub-hypothalamic pituitary-adrenal system, which functions as a deterministic peripheral hormone oscillator with a characteristic ultradian frequency. We will present these findings and discuss some of the new challenges that follow on from our results.


30th Nov: Alex Pavlides

A key pathology of Parkinson's disease is the occurrence of persistent beta oscillations. We investigate a model of the circuit composed of the subthalamic nucleus and globus pallidus, which receives delayed feedback. This feedback models the closed-loop structure of the basal ganglia. I will show how the network's stability and oscillation frequency are influenced by the delayed feedback, and discuss how this model builds on earlier work.
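The basic mechanism, a delayed inhibitory loop that destabilises a steady firing rate into oscillation once gain and delay are large enough, can be illustrated with a single rate variable. This is a toy sketch with illustrative parameters, not the subthalamic nucleus-globus pallidus model from the talk:

```python
import numpy as np

def delayed_feedback(gain, delay, tau=0.01, dt=1e-4, T=2.0, x0=0.1):
    """Simulate tau * dx/dt = -x + f(-gain * x(t - delay)), f a sigmoid."""
    f = lambda u: 1.0 / (1.0 + np.exp(-u))
    d = int(delay / dt)
    x = np.full(int(T / dt) + d, x0)     # constant history before t = 0
    for i in range(d, len(x)):
        x[i] = x[i - 1] + dt / tau * (-x[i - 1] + f(-gain * x[i - 1 - d]))
    return x[d:]

stable = delayed_feedback(gain=2.0, delay=0.02)        # weak feedback: settles
oscillating = delayed_feedback(gain=30.0, delay=0.02)  # strong delayed feedback
```

With the illustrative time constants above, the strong-feedback case oscillates at a frequency in the rough vicinity of the beta band, and both the loop gain and the delay shift the stability boundary and the frequency, which is the qualitative behaviour the talk examines in the full circuit model.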


7th Dec: Roland Baddeley