Tatyana O. Sharpee1

Correspondence: Tatyana O. Sharpee - sharpee@snl.salk.edu

BMC Neuroscience 2016, 17(Suppl 1):A1

Neural circuits are notorious for the complexity of their organization. Part of this complexity is related to the number of different cell types that work together to encode stimuli. I will discuss theoretical results that point to functional advantages of splitting neural populations into subtypes, both in feedforward and recurrent networks. These results outline a framework for categorizing neuronal types based on their functional properties. Such a classification scheme could augment classification schemes based on molecular, anatomical, and electrophysiological properties.

A2 Mesoscopic modeling of propagating waves in visual cortex

Alain Destexhe1,2

Correspondence: Alain Destexhe - destexhe@unic.cnrs-gif.fr

BMC Neuroscience 2016, 17(Suppl 1):A2

Propagating waves are large-scale phenomena widely seen in the nervous system, in both anesthetized and awake or sleeping states. Recently, the presence of propagating waves at the scale of microns–millimeters was demonstrated in the primary visual cortex (V1) of macaque monkey. Using a combination of voltage-sensitive dye (VSD) imaging in awake monkey V1 and model-based analysis, we showed that virtually every visual input is followed by a propagating wave (Muller et al., Nat Comm 2014). The wave was confined within V1, and was consistent and repeatable for a given input. Interestingly, two propagating waves always interact in a suppressive fashion, and sum sublinearly. This is in agreement with the general suppressive effect seen in other circumstances in V1 (Bair et al., J Neurosci 2003; Reynaud et al., J Neurosci 2012).

To investigate possible mechanisms for this suppression we have designed mean-field models that can be directly compared with the VSD experiments. Because the VSD signal is primarily caused by the summed voltage of all membranes, it represents an ideal case for mean-field models. However, standard mean-field models are based on neuronal transfer functions such as the well-known sigmoid function, or on functions estimated from very simple models. Any error in the transfer function may result in incorrect predictions by the corresponding mean-field model. To address this caveat, we have obtained semi-analytic forms of the transfer function of more realistic neuron models. We found that the same mathematical template can capture the transfer function for models ranging from the integrate-and-fire (IF) model and the adaptive exponential (AdEx) model up to Hodgkin–Huxley (HH) type models, all with conductance-based inputs.
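As an illustration, the sketch below implements a generic semi-analytic transfer function of the kind described above, mapping the mean, standard deviation and autocorrelation time of the subthreshold membrane potential to an output rate through an error-function threshold crossing. The polynomial coefficients P and the example values are hypothetical placeholders that would be fitted to single-neuron simulations (IF, AdEx or HH with conductance-based inputs); this is a sketch of the approach, not the published formulation.

```python
# Hedged sketch of a semi-analytic transfer-function template: firing rate as a
# function of the subthreshold membrane-potential statistics. The effective
# threshold is a phenomenological polynomial whose coefficients P are placeholders.
import numpy as np
from scipy.special import erfc

def transfer_function(mu_V, sigma_V, tau_V, P):
    """Firing rate (Hz) from subthreshold statistics.

    mu_V, sigma_V : mean and std of the membrane potential (mV)
    tau_V         : autocorrelation time of the V fluctuations (s)
    P             : coefficients of the phenomenological effective threshold
    """
    # effective threshold expanded to first order in the fluctuation statistics
    V_thre_eff = P[0] + P[1] * mu_V + P[2] * sigma_V + P[3] * tau_V * 1e3
    # rate of upward fluctuations crossing the effective threshold
    return 0.5 / tau_V * erfc((V_thre_eff - mu_V) / (np.sqrt(2.0) * sigma_V))

# example: rate as a function of mean depolarization for fixed fluctuation statistics
P_fit = [-45.0, 0.1, 0.5, -0.2]          # hypothetical fitted coefficients
rates = transfer_function(np.linspace(-70.0, -50.0, 5), 4.0, 0.02, P_fit)
print(rates.round(2))
```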

Using these transfer functions we have built “realistic” mean-field models for networks with two populations of neurons, the regular-spiking (RS) excitatory neurons, showing spike frequency adaptation, and the fast-spiking (FS) inhibitory neurons. This mean-field model can reproduce the propagating waves in V1, due to horizontal interactions, as shown previously using IF networks. This mean-field model also reproduced the suppressive interactions between propagating waves. The mechanism of suppression was based on the preferential recruitment of inhibitory cells over excitatory cells by afferent activity, which acted through the conductance-based shunting effect of the two waves onto one another. The suppression was negligible in networks with identical models for excitatory and inhibitory cells (such as IF networks). This suggests that the suppressive effect is a general phenomenon due to the higher excitability of inhibitory neurons in cortex, in line with previous models (Ozeki et al., Neuron 2009).

Work done in collaboration with Yann Zerlaut (UNIC) for modeling, Sandrine Chemla and Frederic Chavane (CNRS, Marseille) for in vivo experiments. Supported by CNRS and the European Commission (Human Brain Project).

A3 Dynamics and biomarkers of mental disorders

Mitsuo Kawato1

Correspondence: Mitsuo Kawato - kawato@hip.atr.co.jp

BMC Neuroscience 2016, 17(Suppl 1):A3

Current diagnoses of mental disorders are made in a categorical way, as exemplified by DSM-5, but many difficulties have been encountered with such categorical regimes: the high percentage of comorbidities, the usage of the same drug for multiple disorders, the lack of any validated animal model, and the fact that no epoch-making drug has been developed in the past 30 years. NIMH started the RDoC (Research Domain Criteria) initiative to overcome these problems [1], and some successful results have been obtained, including common genetic risk loci [2] and common neuroanatomical changes for multiple disorders [3], as well as psychosis biotypes [4].

In contrast to the currently dominant molecular biology approach, which basically assumes one-to-one mapping between genes and disorders, I postulate the following dynamics-based view of psychiatric disorders. Our brain is a nonlinear dynamical system that can generate spontaneous spatiotemporal activities. The dynamical system is characterized by multiple stable attractors, only one of which corresponds to a healthy or typically developed state. The others are pathological states.

The most promising research approach within the above dynamical view is to combine resting-state functional magnetic resonance imaging, machine learning, big data, and sophisticated neurofeedback. Yahata et al. developed an ASD biomarker using only 16 of 9,730 functional connections; it did not generalize to MDD or ADHD but generalized moderately to schizophrenia [5]. Yamashita's regression model of working-memory ability from functional connections [6] generalized to schizophrenia and reproduced the severity of working-memory deficits of four psychiatric disorders (in preparation).

With the further development of machine learning algorithms and accumulation of reliable datasets, we hope to obtain a comprehensive landscape of many psychiatric and neurodevelopmental disorders. Guided by this full-spectrum structure, a tailor-made neurofeedback therapy should be optimized for each patient [7].

Correspondence: Vladislav Sekulić - vlad.sekulic@utoronto.ca

BMC Neuroscience 2016, 17(Suppl 1):F1

The theta rhythm (4–12 Hz) is a prominent network oscillation observed in the mammalian hippocampus and is correlated with spatial navigation and mnemonic processing. Inhibitory interneurons of the hippocampus fire action potentials at specific phases of the theta rhythm, pointing to distinct functional roles of interneurons in shaping this rhythmic activity. One hippocampal interneuron type, the oriens-lacunosum/moleculare (O-LM) cell, provides direct feedback inhibition and regulation of pyramidal cell activity in the CA1 region. O-LM cells express the hyperpolarization-activated, mixed-cation current (Ih) and, in vitro, demonstrate spontaneous firing at theta that is impaired upon blockade of Ih. Work using dynamic clamp has shown that in the presence of frequency-modulated artificial synaptic inputs, O-LM cells exhibit a spiking resonance at theta frequencies that is not dependent on Ih [1]. However, due to the somatic injection limitation of dynamic clamp, the study could not examine the potential contributions of putative dendritic Ih or the integration of dendritically-located synaptic inputs. To overcome this, we have used a database of previously developed multi-compartment computational models of O-LM cells [2].

We situated our O-LM cell models in an in vivo-like context by injecting Poisson-based synaptic background activities throughout their dendritic arbors. Excitatory and inhibitory synaptic weights were tuned to produce similar baseline activity prior to modulation of the inhibitory synaptic process at various frequencies (2–30 Hz). We found that models with dendritic inputs expressed enhanced resonant firing at theta frequencies compared to models with somatic inputs. We then performed detailed analyses on the outputs of the models with dendritic inputs to further elucidate these results with respect to Ih distributions. The ability of the models to be recruited at the modulated input frequencies was quantified using the rotation number, or average number of spikes across all input cycles. Models with somatodendritic Ih were recruited at >50 % of the input cycles for a wider range of theta frequencies (3–9 Hz) compared to models with somatic Ih only (3–4 Hz). Models with somatodendritic Ih also exhibited a wider range of theta frequencies for which phase-locked output (vector strength >0.75) was observed (4–12 Hz), compared to models with somatic Ih (3–5 Hz). Finally, the phase of firing of models with somatodendritic Ih given 8–10 Hz modulated input was delayed 180–230° relative to the time of release from inhibitory synaptic input.
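For reference, a minimal sketch of the two recruitment measures used here, computed from spike times and the modulation frequency (both assumed to be available from the simulations); the example spike times are synthetic.

```python
# Hedged sketch of the rotation number (mean spikes per input cycle) and the vector
# strength (phase locking of spikes to the modulated input; 0 = none, 1 = perfect).
import numpy as np

def rotation_number(spike_times, f_mod, t_start, t_stop):
    """Average number of spikes per input cycle within [t_start, t_stop)."""
    n_cycles = (t_stop - t_start) * f_mod
    spikes = spike_times[(spike_times >= t_start) & (spike_times < t_stop)]
    return len(spikes) / n_cycles

def vector_strength(spike_times, f_mod):
    """Length of the mean resultant vector of spike phases."""
    phases = 2.0 * np.pi * f_mod * spike_times      # phase of each spike in its cycle
    return np.abs(np.mean(np.exp(1j * phases)))

# example with synthetic spike times loosely locked to an 8 Hz input
rng = np.random.default_rng(0)
t_spk = np.arange(0.0, 10.0, 1.0 / 8) + 0.005 * rng.standard_normal(80)
print(rotation_number(t_spk, 8.0, 0.0, 10.0), round(vector_strength(t_spk, 8.0), 3))
```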

O-LM cells receive phasic inhibitory inputs at theta frequencies from a subpopulation of parvalbumin-positive GABAergic interneurons in the medial septum (MS) timed to the peak of hippocampal theta, as measured in the stratum pyramidale layer [3]. Furthermore, O-LM cells fire at the trough of hippocampal pyramidal layer theta in vivo [4], an approximate 180° phase delay from the MS inputs, corresponding to the phase delay in our models with somatodendritic Ih. Our results suggest that, given dendritic synaptic inputs, O-LM cells require somatodendritic Ih channel expression to be precisely recruited during the trough of hippocampal theta activity. Our strategy of leveraging model databases that encompass experimental cell type-specificity and variability allowed us to reveal critical biophysical factors that contribute to neuronal function within in vivo-like contexts.

Acknowledgements: Supported by NSERC of Canada, an Ontario Graduate Scholarship, and the SciNet HPC Consortium.

Correspondence: Daniel K. Wójcik - d.wojcik@nencki.gov.pl

BMC Neuroscience 2016, 17(Suppl 1):F2

Extracellular recordings of electric potential, with a century-old history, remain a popular tool for investigating brain activity on all scales, from single neurons through populations to whole brains, in animals and humans, in vitro and in vivo [1]. The specific information available in a recording depends on the physical setting of the system (brain + electrode). Smaller electrodes are usually more selective and are used to capture local information (spikes from single cells or LFP from populations), while larger electrodes are used for subdural recordings (on the cortex, ECoG), on the scalp (EEG), and also as depth electrodes in humans (SEEG). The advantages of the extracellular electric potential are the ease of recording and its stability. Its problem is interpretation: since the electric field is long-range, one can observe neural activity several millimeters from its source [2–4]. As a consequence, every recording reflects the activity of many cells, populations, and regions, depending on the level on which we focus. One way to overcome this problem is to reconstruct the distribution of current sources (CSD) underlying the measurement [5], which is typically done to identify activity at the systems level from multiple LFP recordings on regular grids [6].

We recently proposed a kernel-based method for CSD estimation from multiple LFP recordings with arbitrarily placed probes (i.e. not necessarily on a grid), which we called the kernel Current Source Density method (kCSD) [7]. In this overview we present the original proposition as well as two recent developments, skCSD (single-cell kCSD) and kESI (kernel Electrophysiological Source Imaging). skCSD assumes that we know which part of the recorded signal comes from a given cell and that we have access to the morphology of the cell. This could be achieved by patching a cell, driving it externally while recording the potential on a multielectrode array, injecting a dye, and reconstructing the morphology. In this case we know that the sources must be located on the cell, and this information can be used to improve the estimation. In kESI we consider simultaneous recordings with subdural ECoG (strip and grid electrodes) and with depth electrodes (SEEG). Such recordings are obtained in some epileptic patients being prepared for surgical removal of the epileptogenic zone. When an MR scan of the patient's head is available and the positions of the electrodes as well as the shape of the brain are known, the idea of kCSD can be used to constrain the possible distribution of sources, facilitating localization of the epileptic foci.
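The core idea of kernel/basis-source estimation can be sketched as follows: the CSD is expanded in Gaussian basis sources, each basis source is mapped to the electrode potentials through a forward model, and the expansion coefficients are obtained by regularized regression. The one-dimensional forward model, parameters and data below are illustrative placeholders, not the published kCSD formulation.

```python
# Hedged, one-dimensional sketch of basis-source CSD estimation via ridge regression.
import numpy as np

elec_x = np.array([0.1, 0.3, 0.4, 0.7, 0.9])       # arbitrary electrode positions (mm)
lfp    = np.array([0.2, 0.5, 0.4, -0.3, -0.1])      # measured potentials (a.u.)

src_x  = np.linspace(0.0, 1.0, 50)                   # positions of Gaussian basis sources
width  = 0.05                                        # basis source width (mm)

def forward(elec, src):
    """Toy mapping from a unit basis source at `src` to the potential at `elec`."""
    return 1.0 / (np.abs(elec - src) + width)        # placeholder point-source kernel

F = np.array([[forward(e, s) for s in src_x] for e in elec_x])      # n_elec x n_src
lam = 1e-2                                                          # regularization strength
a = np.linalg.solve(F.T @ F + lam * np.eye(len(src_x)), F.T @ lfp)  # basis amplitudes

x_eval = np.linspace(0.0, 1.0, 200)                  # positions at which the CSD is estimated
csd = sum(a_k * np.exp(-(x_eval - s) ** 2 / (2 * width ** 2)) for a_k, s in zip(a, src_x))
print(csd[:5].round(3))
```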

Correspondence: Jae Kyoung Kim - jaekkim@kaist.ac.kr

BMC Neuroscience 2016, 17(Suppl 1):F3

In mammals, circadian (~24 h) rhythms are mainly regulated by a master circadian clock located in the suprachiasmatic nucleus (SCN) [1]. The SCN consists of ~20,000 neurons, each of which generates its own rhythm via an intracellular transcriptional negative feedback loop involving PER-CRY and BMAL1-CLOCK. These individual rhythms are synchronized through intercellular coupling via neurotransmitters including VIP [2]. In this talk, I will discuss how the period synchronized via this coupling signal depends strongly on the mechanism of intracellular transcriptional repression [3, 4]. Specifically, using mathematical modeling and phase response curve analysis, we find that the synchronized period of the SCN stays close to the population mean of the cells' intrinsic periods (~24 h) if transcriptional repression occurs via protein sequestration. However, the synchronized period is far from the population mean when repression occurs via Hill-type regulation (e.g. phosphorylation-based repression). These results reveal a novel relationship between the two major functions of the SCN: intracellular rhythm generation and intercellular synchronization of rhythms. Furthermore, this relationship provides an explanation for why protein sequestration is commonly used in the circadian clocks of multicellular organisms, which have a coupled master clock, but not in unicellular organisms [4].
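The qualitative difference between the two repression mechanisms can be sketched as follows; the functions are generic and the parameter values illustrative, not those of the published SCN model.

```python
# Hedged sketch contrasting the two transcriptional repression mechanisms discussed
# above. Protein sequestration: the repressor stoichiometrically binds and inactivates
# the activator, so transcription is proportional to the free-activator fraction.
# Hill-type repression: a phenomenological sigmoid (e.g. phosphorylation-based).
import numpy as np

def repression_sequestration(R, A_total=1.0, Kd=1e-3):
    """Free-activator fraction when repressor R binds activator 1:1 with affinity Kd."""
    b = A_total - R - Kd
    free_A = 0.5 * (b + np.sqrt(b ** 2 + 4.0 * A_total * Kd))
    return free_A / A_total

def repression_hill(R, K=0.5, n=4):
    """Phenomenological Hill-type repression."""
    return 1.0 / (1.0 + (R / K) ** n)

R = np.linspace(0.0, 2.0, 9)
print(np.round(repression_sequestration(R), 3))   # sharp, near-linear cutoff at R ~ A_total
print(np.round(repression_hill(R), 3))            # graded sigmoid
```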

Correspondence: Irene Elices - irene.elices@uam.es

BMC Neuroscience 2016, 17(Suppl 1):O1

Found in all nervous systems, central pattern generators (CPGs) are neural circuits that produce flexible rhythmic motor patterns. Their robust and highly coordinated spatio-temporal activity is generated in the absence of rhythmic input. Several invertebrate CPGs are among the best-known neural circuits, as their neurons and connections have been identified and mapped. The crustacean pyloric CPG is one of these flagship neural networks [1, 2]. Experimental and computational studies of CPGs typically examine their rhythmic output in periodic spiking-bursting regimes. Aiming to understand the fast rhythm negotiation of CPG neurons, here we present experimental and theoretical analyses of pyloric CPG activity in situations where irregular yet coordinated rhythms are produced. In particular, we focus on two sources of rhythm irregularity: intrinsic damage in the preparation, and irregularity induced by ethanol. The analysis of non-periodic regimes can unveil important properties of the robust dynamics controlling rhythm coordination in this system.

Adult male and female shore crabs (Carcinus maenas) were used for the experimental recordings. The isolated stomatogastric ganglion was kept in Carcinus maenas saline. Membrane potentials were recorded intracellularly from the LP and PD cells, two mutually inhibitory neurons that form a half-center oscillator in the pyloric CPG. Extracellular electrodes allowed monitoring of the overall CPG rhythm. Conductance-based models of the pyloric CPG neurons and their associated graded synapses, as described in [3, 4], were also used in this dual experimental and theoretical study.

Irregularity and coordination of the CPG rhythms were analyzed using measures characterizing the cells' instantaneous waveform, period, duty cycle, plateau, hyperpolarization, and temporal structure of the spiking activity, as well as measures describing the instantaneous phases among neurons in the irregular rhythms and their variability. Our results illustrate the strong robustness of the circuit in keeping LP/PD phase relationships under both intrinsic and induced irregularity conditions, while allowing a large variety of burst waveforms, durations, and hyperpolarization periods in these neurons. In spite of being electrically coupled to the pacemaker cell of the circuit, the PD neurons showed wide flexibility to participate with larger burst durations in the CPG rhythm (and a larger increase in variability), while the LP neuron was more restricted in sustaining long bursts in the conditions analyzed. The conductance-based models were used to explain the role of asymmetry in the dynamics of the neurons and synapses in shaping the irregular activity observed experimentally. Taking into account the overall experimental and model analyses, we discuss the presence of preserved relationships in the non-periodic but coordinated bursting activity of the pyloric CPG, and their role in the fast rhythm-negotiating properties of this circuit.
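A minimal sketch of the kind of cycle-by-cycle measures described above, computed from burst onset/offset times that are assumed to have been extracted from the recordings; the example times are synthetic.

```python
# Hedged sketch of instantaneous period, duty cycle and cross-neuron burst phase.
import numpy as np

def cycle_measures(on_ref, off_ref, on_other):
    """Period and duty cycle of the reference neuron, and phase of the other neuron's
    burst onsets within each reference cycle."""
    period = np.diff(on_ref)                          # cycle-by-cycle period
    duty = (off_ref[:-1] - on_ref[:-1]) / period      # fraction of the cycle spent bursting
    phase = []
    for start, T in zip(on_ref[:-1], period):
        inside = on_other[(on_other >= start) & (on_other < start + T)]
        phase.append((inside[0] - start) / T if inside.size else np.nan)
    return period, duty, np.array(phase)

# toy example: PD as reference, LP bursting roughly in anti-phase
pd_on  = np.array([0.0, 1.1, 2.1, 3.3, 4.2])
pd_off = pd_on + 0.4
lp_on  = pd_on[:-1] + 0.55
period, duty, lp_phase = cycle_measures(pd_on, pd_off, lp_on)
print(period, duty.round(2), lp_phase.round(2))
```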

Acknowledgements: We acknowledge support from MINECO DPI2015-65833-P, TIN2014-54580-R, TIN-2012-30883 and ONRG grant N62909-14-1-N279.

Correspondence: Jee Hyun Choi - jeechoi@kist.re.kr

BMC Neuroscience 2016, 17(Suppl 1):O2

Particular behaviors are associated with different spatio-temporal patterns of cortical EEG oscillations. A recent study suggests that the cortically-projecting, parvalbumin-positive (PV+) inhibitory neurons in the basal forebrain (BF) play an important role in the state-dependent control of cortical oscillations, especially ~40 Hz gamma oscillations [1]. However, the cortical topography of the gamma oscillations which are controlled by BF PV+ neurons and their relationship to behavior are unknown. Thus, in this study, we investigated the spatio-temporal patterns and the functional role of the cortical oscillations induced or entrained by BF PV+ neurons by combining optogenetic stimulation of BF PV+ neurons with high-density EEG [2, 3] in channelrhodopsin-2 (ChR2) transduced PV-cre mice. First, we recorded the spatio-temporal responses in the cortex with respect to the stimulation of BF PV+ neurons at various frequencies. The topographic response patterns were distinctively different depending on the stimulation frequencies, and most importantly, stimulation of BF PV+ neurons at 40 Hz (gamma band frequency) induced a preferential enhancement of gamma band oscillations in prefrontal cortex (PFC) with a statistically significant increase in intracortical connectivity within PFC. Second, optogenetic stimulation of BF PV+ neurons was applied while the mice were exposed to auditory stimuli (AS) at 40 Hz. The time delay between optogenetic stimulation and AS was tested and the phase response to the AS was characterized. We found that the phase responses to the click sound in PFC were modulated by the optogenetic stimulation of BF PV+ neurons. More specifically, the advanced activation of BF PV+ neurons by π/2 (6.25 ms) with respect to AS sharpened the phase response to AS in PFC, while the anti-phasic activation (π, 12.5 ms) blunted the phase response. Interestingly, like PFC, the primary auditory cortex (A1) also showed sharpened phase response for the π/2 advanced optogenetic BF PV+ neuron activation during AS. Considering that no direct influence of BF PV+ neurons on A1 was apparent in the response to stimulation of BF PV+ neurons alone, the sharpened phase response curve of A1 suggests a top-down influence of the PFC. This result implies that the BF PV+ neurons may participate in regulating the top-down influence that PFC exerts on primary sensory cortices during attentive behaviors, and supports the idea that the modulating activities of BF PV+ neurons might be a potential target for restoring top-down cognitive functions as well as abnormal frontal gamma oscillations associated with psychiatric disorders.

Acknowledgements: This research was supported by the Department of Veterans Affairs, the Korean National Research Council of Science & Technology (No. CRC-15-04-KIST), NIMH R01 MH039683 and Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2015R1D1A1A01059119). The contents of this report do not represent the views of the US Department of Veterans Affairs or the United States government.

O3 Modeling auditory stream segregation, build-up and bistability

James Rankin1, Pamela Osborn Popp1, John Rinzel1,2

1Center for Neural Science, New York University, New York 10003, NY; 2Courant Institute of Mathematical Sciences, New York University, New York 10012, NY

Correspondence: James Rankin - james.rankin@nyu.edu

BMC Neuroscience 2016, 17(Suppl 1):O3

With neuromechanistic modelling and psychoacoustic experiments we study the perceptual dynamics of auditory streaming (cocktail party problem). The stimulus is a sequence of two interleaved tones, A and B in a repeating triplet pattern: ABA_ABA_ (‘_’ is a silent gap). Initially, subjects hear a single integrated pattern, but after some seconds they hear segregated A_A_A_ and _B___B__ streams (build-up of streaming segregation). For long presentations, build-up is followed by irregular alternations between integrated and segregated (auditory bistability). We recently presented [1] the first neuromechanistic model of auditory bistability; it incorporates common competition mechanisms of mutual inhibition, slow adaptation and noise [2]. Our competition network is formulated to reside downstream of primary auditory cortex (A1). Neural responses in macaque A1 to triplet sequences [3] encode stimulus features and provide the inputs to our network (Fig. 1A). In our model recurrent excitation with an NMDA-like timescale links responses across gaps between tones and between triplets. It captures the dynamics of perceptual alternations and the stimulus feature dependence of percept durations. To account for build-up we incorporate early adaptation of A1 responses [3] (Fig. 1B, upper). Early responses in A1 are broadly tuned and do not reflect the frequency difference between the tones; later responses show a clear tonotopic dependence. This adaptation biases the initial percept towards integration, but occurs faster (~0.5 s) than the gradual build-up process (~5–10 s). The low initial probability of segregation gradually builds up to the stable probability of later bistable alternations (Fig. 1B, lower). During build-up, a pause in presentation may cause partial reset to integrated [4]. Our extended model shows this behavior assuming that after a pause A1 responses recover on the timescale of early adaptation. Moreover, the modeling results agree with our psychoacoustic experiments (compare filled and open circles in Fig. 1B, lower).

Fig. 1

A Model schematic: tone inputs IA and IB elicit pulsatile responses in A1, which are pooled as inputs to a three-population competition network. Central unit AB encodes integrated, peripheral units A and B encode segregated. Mutual inhibition between units and recurrent excitation are incorporated with adaptation and noise. B A1 inputs show early initial adaptation, also if a pause is present. Build-up function shows proportion segregated increasing over time, here shown for three tone-frequency differences, DF, with no pause (dashed) or with a pause (solid curves). Time-snapshots from model (filled circles) agree with data (empty circles with SEM error bars, N = 8)

Conclusions For the first time, we offer an explanation of the discrepancy in the timescales of early A1 responses and the more gradual build-up process. Recovery of A1 responses can explain resetting for stimulus pauses. Our model offers, to date, the most complete account of the early and late dynamics for auditory streaming in the triplet paradigm.
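A minimal sketch of the competition architecture described above (three units with mutual inhibition, slow adaptation and noise); the parameters, inputs and noise level are illustrative placeholders rather than those of the published model.

```python
# Hedged sketch of a three-population competition network: unit 0 encodes the
# integrated percept (AB), units 1 and 2 the segregated percepts (A, B).
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-3, 60.0                            # time step and duration (s)
n = int(T / dt)
tau_r, tau_a = 0.01, 2.0                      # rate and adaptation time constants (s)
g_inh, g_adapt, sigma = 2.0, 1.2, 0.2         # inhibition, adaptation, noise strength
inputs = np.array([1.1, 1.0, 1.0])            # drive to AB, A, B
W = g_inh * (np.ones((3, 3)) - np.eye(3))     # all-to-all mutual inhibition

r = np.zeros(3)                               # firing rates
a = np.zeros(3)                               # slow adaptation variables
dominant = np.zeros(n, dtype=int)
for t in range(n):
    drive = inputs - W @ r - g_adapt * a + sigma * rng.standard_normal(3)
    r += dt / tau_r * (-r + np.clip(drive, 0.0, None))
    a += dt / tau_a * (-a + r)
    dominant[t] = np.argmax(r)

print((dominant != 0).mean())                 # fraction of time a segregated unit dominates
```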

Correspondence: Alejandro Tabas - atabas@bournemouth.ac.uk

†Equal contribution

BMC Neuroscience 2016, 17(Suppl 1):O4

Auditory evoked fields (AEFs) observed in MEG experiments systematically present a transient deflection known as the N100m, elicited around 100 ms after tone onset in the antero-lateral Heschl's gyrus. The exact latency of the N100m is correlated with the perceived pitch of a wide range of stimuli [1, 2], suggesting that this transient component reflects the processing of pitch in auditory cortex. However, the biophysical substrate of such a precise relationship remains an enigma. Existing models of pitch, focused on perceptual phenomena, do not explain the mechanism generating cortical evoked fields during pitch processing in biophysical detail. In this work, we introduce a model of interacting neural ensembles describing, for the first time to our knowledge, how cortical pitch processing gives rise to the observed human neuromagnetic responses and why their latency strongly correlates with pitch.

To provide realistic cortical input, we used a recent model of the auditory periphery and realistic subcortical processing stages. Subcortical processing was based on a delay-and-multiply operation carried out in the cochlear nucleus and inferior colliculus [3], resulting in realistic patterns of neural activation in response to the stimulus periodicities. Subcortical activation is transformed into a tonotopic, receptive-field-like representation [4] by a novel cortical circuit composed of functional blocks, each characterised by a best frequency. Each block consists of an excitatory and an inhibitory population, modelled using mean-field approximations [5]. Blocks interact with each other through local AMPA- and NMDA-driven excitation and GABA-driven global inhibition [5].

The excitation-inhibition competition of the cortical model describes a general pitch-processing mechanism that explains the N100m deflection as a transient state in the cortical dynamics. The deflection is rapidly triggered by a rise in the activity elicited by the subcortical input, peaks after the inhibition overcomes the input, and stabilises when the model dynamics reach equilibrium, around 100 ms after onset. As a direct consequence of the connectivity structure among blocks, the time necessary for the system to reach equilibrium depends on the encoded pitch of the tone. The model quantitatively predicts the observed latencies of the N100m, in agreement with available empirical data [1, 2] across a series of stimuli (see Fig. 2), suggesting that this mechanism potentially accounts for the N100m dynamics.

Fig. 2

N100m predictions in comparison with available data [1, 2] for a range of pure tones (A) and HCTs (B)
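The transient-versus-equilibrium mechanism can be sketched with a single excitatory-inhibitory block driven by a step input: the excitatory response peaks when the slower inhibition overcomes the drive and then settles to equilibrium. All parameters below are illustrative, and the sketch omits the tonotopic block structure and the pitch dependence of the full model.

```python
# Hedged sketch: deflection of a single E-I block, loosely analogous to the N100m.
import numpy as np

dt, T = 1e-4, 0.3
t = np.arange(0.0, T, dt)
tau_e, tau_i = 0.010, 0.060           # excitatory and (slower) inhibitory time constants (s)
w_ei, w_ie, drive = 1.5, 2.0, 1.0     # E->I coupling, I->E coupling, step input

e = np.zeros_like(t)
i = np.zeros_like(t)
for k in range(1, len(t)):
    e[k] = e[k-1] + dt / tau_e * (-e[k-1] + max(drive - w_ie * i[k-1], 0.0))
    i[k] = i[k-1] + dt / tau_i * (-i[k-1] + w_ei * e[k-1])

peak_latency_ms = 1e3 * t[np.argmax(e)]
print(round(peak_latency_ms, 1))      # peak of the transient deflection (ms)
```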

Correspondence: Hamish Meffin - hmeffin@unimelb.edu.au

BMC Neuroscience 2016, 17(Suppl 1):O5

Retinal implants can restore vision to patients suffering photoreceptor loss by stimulating surviving retinal ganglion cells (RGCs) via an array of microelectrodes implanted within the eye [1]. However, the acuity offered by existing devices is low, limiting the benefits to patients. Improvements may come by increasing the number of electrodes in new devices and providing patterned vision, which necessitates stimulation using multiple electrodes simultaneously. However, simultaneous stimulation poses a number of problems due to cross-talk between electrodes and uncertainty regarding the resulting activation pattern.

Here, we present a model and methods for estimating the responses of RGCs to simultaneous electrical stimulation. Whole cell in vitro patch clamp recordings were obtained from 25 RGCs with various morphological types in rat retina. The retinae were placed onto an array of 20 stimulating electrodes. Biphasic current pulses with 500 µs phase duration and 50 µs interphase gap were applied simultaneously to all electrodes at a frequency of 10 Hz, with the amplitude of current on each electrode sampled independently from a Gaussian distribution.

A linear-nonlinear model was fit to the responses of each RGC using spike-triggered covariance analyses on 80 % of the recorded data. The analysis revealed a single significant principal component, corresponding to the electrical receptive field of each cell, with the second-largest principal component having negligible effect on the neural response (Fig. 3a). This indicates that interactions between electrodes are approximately linear in their influence on the cells' responses.

Fig. 3

a Spike-triggered covariance showing the full set of stimuli (black dots) projected onto the first two principal components. Stimuli causing a spike formed two clusters: net cathodic-first pulses (blue) and net anodic-first pulses (red). b Electrical receptive fields superimposed on the electrode array are shown for the cathodic-first (blue) and anodic-first (red) clusters

Furthermore, the spike-triggered ensemble showed two clusters (red and blue in Fig. 3a) corresponding to stimulation that had a net effect that was either anodic first or cathodic first. The electrical receptive fields for both anodic first and cathodic first stimulation were highly similar (Fig. 3b). They consisted of a small number (1–4) of electrodes that were close to the cell body (green dot).

The remaining 20 % of data were used to validate the model. The average model prediction root-mean-square error was 7 % over the 25 cells. The accuracy of the model indicates that the linear-nonlinear model is appropriate to describe the responses of RGCs to electrical stimulation.
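A minimal sketch of the spike-triggered covariance analysis described above, run on synthetic data: electrode currents are drawn independently from a Gaussian, spiking depends symmetrically on the projection onto a hypothetical receptive field, and the dominant eigenvector of the covariance change recovers that field.

```python
# Hedged sketch of spike-triggered covariance for multi-electrode stimulation.
import numpy as np

rng = np.random.default_rng(1)
n_stim, n_elec = 5000, 20
stim = rng.standard_normal((n_stim, n_elec))               # current amplitude per electrode per pulse

true_rf = np.zeros(n_elec); true_rf[[3, 4]] = [1.0, 0.6]   # hypothetical receptive field
proj = stim @ true_rf
p_spike = 1.0 / (1.0 + np.exp(-(np.abs(proj) - 2.0)))      # symmetric (anodic/cathodic) nonlinearity
spikes = rng.random(n_stim) < p_spike

dC = np.cov(stim[spikes].T) - np.cov(stim.T)               # covariance change of the spike-triggered ensemble
eigvals, eigvecs = np.linalg.eigh(dC)
receptive_field = eigvecs[:, np.argmax(np.abs(eigvals))]   # dominant (significant) component

print(np.round(receptive_field, 2))                        # should weight electrodes 3 and 4 most strongly
```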

Acknowledgements: This research was supported by the Australian Research Council (ARC). MI, HM, and SC acknowledge support through the Centre of Excellence for Integrative Brain Function (CE140100007), TK through ARC Discovery Early Career Researcher Award (DE120102210) and HM and TK through the ARC Discovery Projects funding scheme (DP140104533).

Correspondence: Veronika Koren - veronika.koren@bccn-berlin.de

BMC Neuroscience 2016, 17(Suppl 1):O6

Linking sensory coding and behavior is a fundamental question in neuroscience. We have addressed this issue in the visual cortex (areas V1 and V4) of behaving monkeys trained to perform a visual discrimination task in which two successive images were either rotated with respect to each other or were the same. We hypothesized that the animal's performance in the visual discrimination task depends on the quality of stimulus coding in visual cortex. We tested this hypothesis by investigating the functional relevance of neuronal correlations in areas V1 and V4 in relation to behavioral performance. We measured two types of correlations: noise (spike count) correlations and correlations in spike timing. Surprisingly, both methods showed that correct responses are associated with significantly higher correlations in V4, but not V1, during the delay period between the two stimuli. This suggests that pair-wise interactions during the spontaneous activity preceding the arrival of the stimulus set the stage for subsequent stimulus processing and importantly influence behavioral performance.

Experiments were conducted in two adult monkeys previously trained for the task. After 300 ms of fixation, the target stimulus, consisting of a naturalistic image, is shown for 300 ms, and after a random delay period (500–1200 ms), a test stimulus is shown for 300 ms. The test can either be identical to the target stimulus (match) or rotated with respect to the target (non-match). The monkey responded by pressing a button and was rewarded for a correct response with fruit juice. Two linear arrays with 16 recording channels each were used to record population activity in areas V1 and V4. The difficulty of the task was calibrated individually to yield 70 % correct responses on average. The analysis was conducted on the non-match condition, comparing activity in trials with correct responses to trials where the monkey responded incorrectly. Noise correlations were assessed as pair-wise correlations of spike counts (method 1) and of spike timing (method 2). For method 1, z-scores of spike counts of binned spike trains were computed in individual trials; r_sc was computed as the Pearson correlation coefficient of the z-scores across all available trials, balanced across correct/incorrect conditions. For method 2, cross-correlograms were computed, from which the cross-correlograms from shuffled trials were subtracted. The resulting function was summed around zero lag and normalized by the sum of the autocorrelograms [1].
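A minimal sketch of the spike-count correlation measure (method 1); the spike counts below are synthetic placeholders for a single neuron pair, and the per-bin, per-condition bookkeeping of the full analysis is omitted.

```python
# Hedged sketch of r_sc: Pearson correlation of z-scored spike counts across trials.
import numpy as np

def r_sc(counts_1, counts_2):
    """Noise correlation of one neuron pair from trial-wise spike counts."""
    z1 = (counts_1 - counts_1.mean()) / counts_1.std(ddof=1)
    z2 = (counts_2 - counts_2.mean()) / counts_2.std(ddof=1)
    return np.corrcoef(z1, z2)[0, 1]

rng = np.random.default_rng(0)
shared = rng.normal(size=200)                                   # shared trial-to-trial fluctuation
n1 = rng.poisson(np.clip(10 + 2 * shared, 0.1, None)).astype(float)
n2 = rng.poisson(np.clip(12 + 2 * shared, 0.1, None)).astype(float)
print(round(r_sc(n1, n2), 2))
```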

While firing rates of single units or of the population did not differ significantly between correct and incorrect responses, noise correlations during the delay period were significantly higher in V4 pairs, computed both with the r_sc method (p = 0.0005 in monkey 1, sign-rank test) and with the r_ccg method (p = 0.0001 and p = 0.0280 in monkeys 1 and 2, respectively, 50 ms integration window). This result is robust to changes in the length of the bin (method 1) and in the length of the summation window (method 2). In agreement with [2], we confirm the importance of the spontaneous activity preceding the stimulus for performance, and suggest that higher correlations in V4 might be beneficial for successful read-out and reliable transmission of information downstream.

Correspondence: Maria Psarrou - m.psarrou@herts.ac.uk

BMC Neuroscience 2016, 17(Suppl 1):O7

Gain modulation is a brain-wide principle of neuronal computation that describes how neurons integrate inputs from different presynaptic sources. A gain change is a multiplicative operation that is defined as a change in the sensitivity (or slope of the response amplitude) of a neuron to one set of inputs (driving input) which results from the activity of a second set of inputs (modulatory input) [1, 2].

Different cellular and network mechanisms have been proposed to underlie gain modulation [2–4]. It is well established that input features such as synaptic noise and plasticity can contribute to multiplicative gain changes [2–4]. However, the effect of neuronal morphology on gain modulation is relatively unexplored. Neuronal inputs to the soma and dendrites are integrated in a different manner: whilst dendritic saturation can introduce a strong non-linear relationship between dendritic excitation and somatic depolarization, the relationship between somatic excitation and depolarization is more linear. The non-linear integration of dendritic inputs can enhance the multiplicative effect of shunting inhibition in the presence of noise [3].

Neurons in the cerebellar nuclei (CN) provide the main gateway from the cerebellum to the rest of the brain. Understanding how inhibitory inputs from cerebellar Purkinje cells interact with excitatory inputs from mossy fibres to control output from the CN is at the center of understanding cerebellar computation. In the present study, we investigated the effect of inhibitory modulatory input on CN neuronal output when the excitatory driving input was delivered at different locations in the CN neuron. We used a morphologically realistic conductance based CN neuron model [5] and examined the change in output gain in the presence of distributed inhibitory input under two conditions: (a) when the excitatory input was confined to one compartment (the soma or a dendritic compartment) and, (b), when the excitatory input was distributed across particular dendritic regions at different distances from the soma. For both of these conditions, our results show that the arithmetic operation performed by inhibitory synaptic input depends on the location of the excitatory synaptic input. In the presence of distal dendritic excitatory inputs, the inhibitory input has a multiplicative effect on the CN neuronal output. In contrast, excitatory inputs at the soma or proximal dendrites close to the soma undergo additive operations in the presence of inhibitory input. Moreover, the amount of the multiplicative gain change correlates with the distance of the excitatory inputs from the soma, with increasing distances from the soma resulting in increased gain changes and decreased additive shifts along the input axis. These results indicate that the location of synaptic inputs affects in a systematic way whether the input undergoes a multiplicative or additive operation.
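A minimal sketch of how multiplicative and additive components can be separated in practice: linear fits to the input-output curves with and without the modulatory input, where a change in slope indicates a gain (multiplicative) change and a change in intercept an additive shift. The rate curves below are synthetic placeholders, not output of the CN neuron model.

```python
# Hedged sketch: separating gain changes from additive shifts with linear fits.
import numpy as np

drive = np.linspace(0.0, 1.0, 11)                    # driving (excitatory) input strength
rate_control = 80.0 * drive + 5.0                    # output rate without modulatory input (Hz)
rate_inhib   = 45.0 * drive + 2.0                    # output rate with distributed inhibition (Hz)

slope_c, offset_c = np.polyfit(drive, rate_control, 1)
slope_i, offset_i = np.polyfit(drive, rate_inhib, 1)

gain_change = slope_i / slope_c                      # < 1: multiplicative (divisive) gain change
additive_shift = offset_i - offset_c                 # shift along the output axis
print(round(gain_change, 2), round(additive_shift, 2))
```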

Correspondence: Yuguo Yu - yuyuguo@fudan.edu.cn
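
BMC Neuroscience 2016, 17(Suppl 1):O8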

Accurate estimation of action potential (AP)-related metabolic cost is essential for understanding energetic constraints on brain connections and signaling processes. Most previous energy estimates of the AP were obtained using the Na+-counting method [1, 2], which seriously limits accurate assessment of metabolic cost of ionic currents that underlie AP generation. Moreover, the effects of axonal geometry and ion channel distribution on energy consumption related to AP propagation have not been systematically investigated.

To address these issues, we return to the cable theory [3] that underlies our HH-type cortical axon model [4], which was constructed based on experimental measurements. Based on the cable equation that describes how ion currents flow along the cable as well as analysis of the electrochemical energy in the equivalent circuit, we derived the electrochemical energy function for the cable model,
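which, reconstructed here from the description of its terms (the published expression may differ in detail), has the form

∂E/∂t = gNamax·m^3·h·(V − VNa)^2 + gKmax·n^4·(V − VK)^2 + gL·(V − VL)^2 + (d/(4·Ra))·(∂V/∂x)^2

(in the last, axial term, d denotes the axon diameter and Ra the axial resistivity; these two symbols are our additions),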

where gNamax (in the range 50–650 mS/cm2), gKmax (5–100 mS/cm2), and gL = 0.033 mS/cm2 are the maximal sodium, maximal potassium, and leak conductances per unit membrane area, respectively; and VNa = 60 mV, VK = −90 mV, and VL = −70 mV are the reversal potentials of the sodium, potassium, and leak channels, respectively. The gating variables m, h, and n are dimensionless activation and inactivation variables that describe the activation and inactivation processes of the sodium and potassium channels [4]. This equation describes the AP-related energy consumption rate per unit membrane area at any axonal distance and any time. The individual terms on the right-hand side of the equation represent the contributions of the sodium, potassium, leak, and axial currents, respectively. We then employed this cable energy function to calculate energy consumption for unbranched axons and for axons with several degrees of branching (branching level, BL). Calculations based on this function distinguish the contributions of each term to the total energy consumption.

Our analytical approach predicts an inhomogeneous distribution of metabolic cost along an axon with either uniformly or nonuniformly distributed ion channels. The results show that the Na+-counting method severely underestimates energy cost in the cable model by 20–70 %. AP propagation along axons that differ in length may require over 15 % more energy per unit of axon area than that required by a point model. However, actual energy cost can vary greatly depending on axonal branching complexity, ion channel density distributions, and AP conduction states. We also infer that the metabolic rate (i.e. energy consumption rate) of cortical axonal branches as a function of spatial volume exhibits a 3/4 power law relationship.

Acknowledgements: Dr. Yu thanks the National Natural Science Foundation of China (31271170, 31571070) and the Shanghai Program of Professor of Special Appointment (Eastern Scholar SHH1140004) for their support.

Correspondence: Eli Shlizerman - shlizee@uw.edu

BMC Neuroscience 2016, 17(Suppl 1):O9

Modeling neuronal systems involves incorporating two layers: a static map of neural connections (the connectome), and the biophysical processes that describe neural responses and interactions. Such a model is called the 'dynome' of a neuronal system, as it integrates a dynamical system with the static connectome. Being closer to reproducing the activity of a neuronal system, investigation of the dynome has more potential to reveal neuronal pathways of the network than the static connectome alone [1]. However, since the two layers of the dynome are considered simultaneously, novel tools have to be developed for dynome studies. Here we present a visualization methodology, called the 'interactome', that allows one to explore the dynome of a neuronal system interactively and in real time, by viewing the dynamics overlaid on a graph representation of the connectome.

We apply our methodology to the nervous system of the worm Caenorhabditis elegans (C. elegans), whose connectome is almost fully resolved [2] and for which a computational model of neural dynamics and interactions (gap junctional and synaptic), based on biophysical experimental findings, was recently introduced [3]. Integrated together, the C. elegans dynome defines a unique set of neural dynamics of the worm. To visualize the dynome, we propose a dynamic force-directed graph layout of the connectome. The layout is implemented using the D3 visualization platform [4] and is designed to communicate with an integrator of the dynome. The two-way communication protocol between the layout and the integrator allows for stimulation (current injection) of any subset of neurons at any time point (Fig. 4B). It also allows the response of the network to be viewed simultaneously on top of the layout, by resizing graph nodes (neurons) according to their voltage. In addition, we support structural changes in the connectome, such as ablation of neurons and connections.

Fig. 4

A Visualization of C. elegans dynome, B communication diagram between the dynome and the layout, C snapshots of visualization of C. elegans during the PLM/AVB excitations (forward crawling)

Our visualization and communication protocols thereby display the stimulated network in an interactive manner and permit exploration of the different regimes that the stimulations induce. Indeed, with the interactome we are able to recreate various experimental scenarios, such as stimulation of forward crawling (PLM/AVB neurons and/or ablation of AVB), and show that its visualization assists in identifying patterns of neurons in the stimulated network. As connectomes and dynomes of additional neuronal systems are being resolved, the interactome will enable exploration of their functionality and inference of the underlying neural pathways [5].
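As a rough stand-in for the D3-based layout, the sketch below uses Python (networkx/matplotlib) to render a force-directed layout of a placeholder graph with node sizes and colours scaled by placeholder membrane voltages; it illustrates the idea of overlaying dynamics on the connectome layout, not the actual interactome implementation.

```python
# Hedged sketch: force-directed layout with node size/colour encoding voltage.
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
G = nx.gnp_random_graph(30, 0.1, seed=0, directed=True)   # placeholder connectome graph
voltage = rng.uniform(-70.0, -20.0, G.number_of_nodes())  # placeholder voltages (mV)

pos = nx.spring_layout(G, seed=0)                         # force-directed layout
sizes = 20 + 10 * (voltage - voltage.min())               # node size encodes depolarization
nx.draw(G, pos, node_size=sizes, arrowsize=5, node_color=voltage, cmap="viridis")
plt.savefig("interactome_snapshot.png", dpi=150)
```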

Justas Birgiolas1, Richard C. Gerkin1, Sharon M. Crook1,2

Correspondence: Justas Birgiolas - justas@asu.edu

BMC Neuroscience 2016, 17(Suppl 1):O10

Objectively evaluating and selecting computational models of biological neurons is an ongoing challenge in the field. Models vary in morphological detail, channel mechanisms, and synaptic transmission implementations. We present the results of an automated method for evaluating computational models against property values obtained from published cell electrophysiology studies. Seven published deterministic models of olfactory bulb mitral cells were selected from ModelDB [1] and simulated using NEURON’s Python interface [2]. Passive and spike properties in response to step current stimulation pulses were computed using the NeuronUnit [3] package and compared to their respective, experimentally obtained means of olfactory bulb mitral cell properties found in the NeuroElectro database [4].

Results reveal that across all models, the resting potential and input resistance property means deviated the most from their experimentally measured means (R_input: t test p = 0.02; V_rest: Wilcoxon test p = 0.01). The time constant, spike half-width, spike amplitude, and spike threshold properties, in order of decreasing average deviation, matched the experimental data well (p > 0.05) (Fig. 5 top).

Fig. 5

The average deviations of models and cell electrophysiology properties as measured in multiples of the 95 % CI bounds of experimental data means. Dashed line represents 1 CI bound threshold. Top rows show average deviations across all models for each cell property. Bottom rows show deviations across all cell properties for each model

In three models, the property deviations were, on average, outside the 95 % CI of the experimental means (Fig. 5 bottom), but these averages were not significant (t test p > 0.05). All other models were within the 95 % CI, while the model of Chen et al. had the lowest deviation [5].

Overall, the majority of these olfactory bulb mitral cell models display some properties that are not significantly different from their experimental means. However, the resting potential and input resistance properties significantly differ from the experimental values. We demonstrate that NeuronUnit provides an objective method for evaluating the fitness of computational neuroscience cell models against publicly available data.
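A minimal sketch of the deviation metric used in Fig. 5: the distance of a model property from the experimental mean expressed in multiples of the half-width of the 95 % confidence interval of that mean. The values below are hypothetical placeholders, not NeuroElectro data.

```python
# Hedged sketch: model-vs-experiment deviation in units of 95% CI half-widths.
import numpy as np
from scipy import stats

def deviation_in_ci_units(model_value, exp_values):
    """|model - experimental mean| in units of the 95% CI half-width of the mean."""
    exp_values = np.asarray(exp_values, dtype=float)
    mean = exp_values.mean()
    sem = stats.sem(exp_values)
    ci_half = sem * stats.t.ppf(0.975, len(exp_values) - 1)
    return abs(model_value - mean) / ci_half

# e.g. a model resting potential of -58 mV against hypothetical experimental samples
print(round(deviation_in_ci_units(-58.0, [-65, -62, -60, -66, -63, -61]), 2))
```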

Acknowledgements: The work of JB, RG, and SMC was supported in part by R01MH1006674 from the National Institutes of Health.

O11 Cooperation and competition of gamma oscillation mechanisms

1Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen (Medical Centre), The Netherlands; 2Department for Biophysics, Faculty of Science, Radboud University Nijmegen, The Netherlands; 3Department for Neuroinformatics, Faculty of Science, Radboud University Nijmegen, The Netherlands; 4Center for Theoretical Neuroscience, Columbia University, New York, NY, USA

Correspondence: Atthaphon Viriyopase - a.viriyopase@science.ru.nl

BMC Neuroscience 2016, 17(Suppl 1):O11

Two major mechanisms that underlie gamma oscillations are InterNeuronal Gamma ("ING"), which is related to tonic excitation of reciprocally coupled inhibitory interneurons (I-cells), and Pyramidal-InterNeuron Gamma ("PING"), which is mediated by coupled populations of excitatory pyramidal cells (E-cells) and I-cells. ING and PING are thought to serve different biological functions. Using computer simulations and analytical methods, we [1] therefore investigate which mechanism (ING or PING) will dominate the dynamics of a network when ING and PING interact, and how the dominant mechanism may switch.

We find that ING and PING oscillations compete: the mechanism generating the higher oscillation frequency "wins". It determines the frequency of the network oscillations and suppresses the other mechanism. The network oscillation frequency (green lines, corresponding to the full network topology in Fig. 6C) is plotted in Fig. 6D for networks with type-I phase-response-curve interneurons and in Fig. 6E for networks with type-II phase-response-curve interneurons. We explain our simulation results by a theoretical model that allows a full theoretical analysis.

Fig. 6

Oscillations in full and reduced networks of reciprocally coupled pyramidal cells and interneurons. A, B Illustrate topologies of reduced networks that generate "pure" ING and "pure" PING, respectively, while C highlights the topology of a "full" network that could in principle generate either ING or PING oscillations or mixtures of both. D, E Frequency of the pure ING rhythm generated by the reduced network in A (blue line), the pure PING rhythm generated by the reduced network in B (red line), and rhythms generated by the full network in C (green line), as a function of the mean current to I-cells I0,I and of the mean current to E-cells I0,E, respectively. D Results for networks with type-I interneurons, while E shows results for networks with type-II interneurons. Pyramidal cells are modeled as type-I Hodgkin–Huxley neurons

Our study suggests experimental approaches to decide whether oscillatory activity in networks of interacting excitatory and inhibitory neurons is dominated by ING or PING oscillations and whether the participating interneurons belong to class I or II. Consider as an example networks with type-I interneurons in which the external drive to the E-cells, I0,E, is kept constant while the external drive to the I-cells, I0,I, is varied. For both ING- and PING-dominated oscillations the frequency of the rhythm increases when I0,I increases (cf. Fig. 6D). Observing such an increase therefore does not allow one to determine the underlying mechanism. However, the absolute value of the first derivative of the frequency with respect to I0,I does allow a distinction, as it is much smaller for PING than for ING (cf. Fig. 6D). In networks with type-II interneurons, the non-monotonic dependence near the ING-PING transition may be a characteristic hallmark for detecting the oscillation character (and the interneuron type): a decrease (increase) of the frequency when increasing I0,E indicates ING (PING), cf. Fig. 6E. These theoretical predictions are in line with experimental evidence [2].

Correspondence: Yuri Dabaghian - dabaghian@rice.edu

BMC Neuroscience 2016, 17(Suppl 1):O12

A physiological interpretation of biological rhythms, e.g., of the local field potentials (LFP), depends on the mathematical and computational approaches used for their analysis. Most existing mathematical methods of LFP analysis are based on breaking the signal into a combination of simpler components, e.g., into the sinusoidal harmonics of Fourier analysis or the wavelets of wavelet analysis. However, a common feature of all these methods is that their prime components are presumed from the outset, and the goal of the subsequent analysis reduces to identifying the combination that best reproduces the original signal.

We propose a fundamentally new method, based on a number of deep theorems of complex function theory, in which the prime components of the signal are not presumed a priori, but discovered empirically [1]. Moreover, the new method is more flexible and more sensitive to the signal’s structure than the standard Fourier method.

Applying this method reveals a fundamentally new structure in the hippocampal LFP signals of rats and mice. In particular, our results suggest that the LFP oscillations consist of a superposition of a small, discrete set of frequency-modulated oscillatory processes, which we call "oscillons". Since these structures are discovered empirically, we hypothesize that they may capture the signal's actual physical structure, i.e., the pattern of synchronous activity in neuronal ensembles. Proving this hypothesis would greatly advance a principled theoretical understanding of neuronal synchronization mechanisms. We anticipate that it will reveal new information about the structure of the LFP and other biological oscillations, which should provide insights into the underlying physiological phenomena and the organization of brain states that are currently poorly understood, e.g., sleep and epilepsy.

Acknowledgements: The work was supported by the NSF 1422438 grant and by the Houston Bioinformatics Endowment Fund.

O13 Direction-specific silencing of the Drosophila gaze stabilization system

Anmo J. Kim1,†, Lisa M. Fenk1,†, Cheng Lyu1, Gaby Maimon1

Correspondence: Anmo J. Kim - anmo.kim@gmail.com

† Authors contributed equally

BMC Neuroscience 2016, 17(Suppl 1):O13

Many animals, including insects and humans, stabilize the visual image projected onto their retina by following a rotating landscape with their head or eyes. This stabilization reflex, also called the optomotor response, can pose a problem, however, when the animal intends to change its gaze. To resolve this paradox, von Holst and Mittelstaedt proposed that a copy of the motor command, or efference copy, could be routed into the visual system to transiently silence this stabilization reflex when an animal changes its gaze [1]. Consistent with this idea, we recently demonstrated that a single identified neuron associated with the optomotor response receives silencing motor-related inputs during rapid flight turns, or saccades, in tethered, flying Drosophila [2].

Here, we expand on these results by comprehensively recording from a group of optomotor-mediating visual neurons in the fly visual system: three horizontal system (HS) and six vertical system (VS) cells. We found that the amplitude of motor-related inputs to each HS and VS cell correlates strongly with the strength of each cell’s visual sensitivity to rotational motion stimuli around the primary turn axis, but not to the other axes (Fig. 7). These results support the idea that flies send rotation-axis-specific efference copies to the visual system during saccades—silencing the stabilization reflex only for a specific axis, but leaving the others intact. This is important because saccades consist of stereotyped banked turns, which involve body rotations around all three primary axes of rotation. If the gaze stabilization system is impaired for only one of these axes, then the fly is expected to attempt to maintain gaze stability, through a combination of head and body movements, for the other two. This prediction is consistent with behavioral measurements of head and body kinematics during saccades in freely flying blow flies [3]. Together, these studies provide an integrative model of how efference copies counteract a specific aspect of visual feedback signals to tightly control the gaze stabilization system.

Fig. 7

The amplitudes of saccade-related potentials (SRPs) to HS and VS cells are strongly correlated with each cell's visual sensitivity to rightward yaw motion stimuli. A Experimental apparatus. B Maximal-intensity z-projections of the lobula plate to visualize HS- or VS-cell neurites that are marked by a GAL4 enhancer trap line. C, D The amplitudes of saccade-related potentials (SRPs) were inversely correlated with visual responses when measured under rightward yaw motion stimuli, but not under clockwise roll motion stimuli. Each sample point corresponds to one cell type. Error bars indicate SEM

O14 What does the fruit fly think about values? A model of olfactory associative learning

Chang Zhao1, Yves Widmer2, Simon Sprecher2, Walter Senn1

1Department of Physiology, University of Bern, Bern, 3012, Switzerland; 2Department of Biology, University of Fribourg, Fribourg, 1700, Switzerland

Correspondence: Chang Zhao - zhao@pyl.unibe.ch

BMC Neuroscience 2016, 17(Suppl 1):O14

Associative learning in the fruit fly olfactory system has been studied from the molecular to the behavioral level [1, 2]. Fruit flies are able to associate conditional stimuli, such as an odor, with unconditional aversive stimuli, such as electric shock, or with appetitive stimuli, such as sugar or water. The mushroom body in the fruit fly brain is considered to be crucial for olfactory learning [1, 2]. Behavioral experiments show that this learning cannot be explained simply by an additive Hebbian (i.e. correlation-based) learning rule; instead, it depends on the timing between the conditional and unconditional stimulus presentations. Yarali and colleagues suggested a dynamic model at the molecular level to explain event timing in associative learning [3]. Here, we present new experiments together with a simple phenomenological model of learning, showing that associative olfactory learning in the fruit fly represents value learning that is incompatible with Hebbian learning.

In our model, the information about the conditional odor stimulus is conveyed by Kenyon cells from the projection neurons to the mushroom body output neurons; the information about the unconditional shock stimulus is conveyed to the mushroom body output neurons by dopaminergic neurons through direct or indirect pathways. The mushroom body output neurons encode the internal value (v) of the odor (o) through the synaptic weights (w) that convey the odor information, v = w∙o. The synaptic strength is updated according to the value learning rule Δw = η(s − v)õ, where s represents the (internal) strength of the shock stimulus, õ represents the synaptic odor trace, and η is the learning rate. The value associated with the odor determines the probability of escaping from that odor. This simple model reproduces the behavioral data and shows that olfactory conditioning in the fruit fly is in fact value learning. In contrast to the prediction of Hebbian learning, the escape probability after repeated odor-shock pairings is much lower than the escape probability after a single pairing with a correspondingly stronger shock.
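A minimal sketch of this value-learning rule and its key behavioral prediction; the mapping from value to escape probability and all parameter values are illustrative assumptions, not the fitted model.

```python
# Hedged sketch of the value-learning rule: v = w*o, dw = eta * (s - v) * o_trace.
import numpy as np

def train(n_pairings, shock_strength, eta=0.5, odor=1.0, odor_trace=1.0):
    """Return the learned value of the odor after repeated odor-shock pairings."""
    w = 0.0
    for _ in range(n_pairings):
        v = w * odor                               # current internal value of the odor
        w += eta * (shock_strength - v) * odor_trace
    return w * odor

def escape_probability(v, scale=1.0):
    return 1.0 - np.exp(-scale * max(v, 0.0))      # assumed monotone value-to-behavior mapping

# value learning saturates at the shock strength: repeated pairings with a weak shock
# yield a lower escape probability than a single pairing with a correspondingly stronger shock
print(round(escape_probability(train(6, shock_strength=1.0)), 2),
      round(escape_probability(train(1, shock_strength=6.0)), 2))
```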

Correspondence: Geir Halnes - geir.halnes@nmbu.no

BMC Neuroscience 2016, 17(Suppl 1):O15

The local field potential (LFP) in the extracellular space (ECS) of the brain is a standard measure of population activity in neural tissue. Computational models that simulate the relationship between the LFP and its underlying neurophysiological processes are commonly used in the interpretation of such measurements. Standard methods, such as volume conductor theory [1], assume that ionic diffusion in the ECS has negligible impact on the LFP. This assumption can be challenged during extended periods of intense neural signalling, during which local ion concentrations in the ECS can change by several millimolars. Such concentration changes are indeed often accompanied by shifts in the ECS potential, which may be partially evoked by diffusive currents [2]. However, it is hitherto unclear whether putative diffusion-generated potential shifts are too slow to be picked up in LFP recordings, which typically use electrode systems with cut-off frequencies at ~0.1 Hz.

To explore possible effects of diffusion on the LFP, we developed a hybrid simulation framework: (1) The NEURON simulator was used to compute the ionic output currents from a small population of cortical layer-5 pyramidal neurons [3]. The neural model was tuned so that simulations over ~100 s of biological time led to shifts in ECS concentrations by a few millimolars, similar to what has been seen in experiments [2]. (2) In parallel, a novel electrodiffusive simulation framework [4] was used to compute the resulting dynamics of the potential and ion concentrations in the ECS, accounting for the effect of electrical migration as well as diffusion. To explore the relative role of diffusion, we compared simulations where ECS diffusion was absent with simulations where ECS diffusion was included.

Our key findings were: (i) ECS diffusion shifted the local potential by up to ~0.2 mV. (ii) The power spectral density (PSD) of the diffusion-evoked potential shifts followed a 1/f² power law. (iii) Diffusion effects dominated the PSD of the ECS potential for frequencies up to ~10 Hz (Fig. 8). We conclude that for large, but physiologically realistic ECS concentration gradients, diffusion could affect the ECS potential well within the frequency range considered in recordings of the LFP.
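As an illustration of this kind of spectral analysis (not the authors' code), the sketch below computes the PSD of an ECS potential trace with Welch's method and estimates the low-frequency power-law exponent; the two placeholder traces merely stand in for simulations with and without ECS diffusion.

```python
import numpy as np
from scipy.signal import welch

def psd_and_slope(phi, fs, fmin=0.1, fmax=10.0):
    """Compute the power spectral density of an ECS potential trace and
    estimate the power-law exponent over a low-frequency band.
    phi: potential (mV), fs: sampling rate (Hz)."""
    f, pxx = welch(phi, fs=fs, nperseg=int(fs * 20))   # ~20 s segments
    band = (f >= fmin) & (f <= fmax)
    # Fit log10(P) = a + b*log10(f); b close to -2 indicates a 1/f^2 spectrum.
    b, a = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
    return f, pxx, b

# Placeholder traces standing in for simulations with and without ECS diffusion:
fs = 100.0
n = 10000
rng = np.random.default_rng(0)
phi_diff = np.cumsum(rng.normal(0, 1e-3, n))      # Brownian-like trace -> ~1/f^2
phi_nodiff = rng.normal(0, 1e-3, n)               # white-noise control
for name, phi in [("diffusion", phi_diff), ("no diffusion", phi_nodiff)]:
    _, _, slope = psd_and_slope(phi, fs)
    print(name, "spectral slope ~", round(slope, 2))
```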

Fig. 8

Power spectrum of ECS potential in a simulation including ECS diffusion (blue line) and a simulation without ECS diffusion (red line). Units for frequency and power are Hz and mV²/Hz, respectively

Yasunori Yamada1

1IBM Research - Tokyo, Japan

Correspondence: Yasunori Yamada - ysnr@jp.ibm.com

BMC Neuroscience 2016, 17(Suppl 1):O16

Brain connectivity studies have revealed fundamental properties of normal brain network organization [1]. In parallel, they have reported structural connectivity abnormalities in brain diseases such as Alzheimer’s disease (AD) [1, 2]. However, how these structural abnormalities affect the information processing and cognitive functions involved in brain diseases is still poorly understood. To deepen our understanding of this causal link, I developed two large-scale cortical models with normal and abnormal structural connectivity, derived from diffusion tensor imaging of aging APOE-4 non-carriers and carriers in the USC Multimodal Connectivity Database [2, 3]. Possession of the APOE-4 allele is one of the major risk factors for later development of AD, and it is associated with known structural connectivity abnormalities characterized by lower network communication efficiency in terms of local interconnectivity and the balance between integration and interconnectivity [2]. The two cortical models share all other parameters and consist of 2.4 million spiking neurons and 4.8 billion synaptic connections. First, I demonstrate the biological relevance of the models by confirming that they reproduce normal patterns of cortical spontaneous activity in terms of the following distinctive properties observed in vivo [4]: low firing rates of individual neurons that approximate log-normal distributions, irregular spike trains following a Poisson distribution, a network balance between excitation and inhibition, and greater depolarization of the average membrane potentials. Next, to investigate how the difference in structural connectivity affects cortical information processing, I compare the cortical response properties to an input during spontaneous activity between the two models. The results show that the model with abnormal structural connectivity exhibited a reduced degree of cortical response as well as a smaller number of cortical regions responding to the input (Fig. 9), suggesting that the structural connectivity abnormality observed in APOE-4 carriers might reduce cortical information propagation and impair information integration. Indeed, imaging studies support this suggestion by reporting structural abnormalities with lower network communication efficiency in the structural connectivity of both APOE-4 carriers and AD patients [1, 2]. This computational approach, which allows manipulations and detailed analyses that are difficult or impossible in human studies, can help provide a causal understanding of how cognitive deficits in patients with brain diseases are associated with their underlying structural abnormalities.

Fig. 9

Responses to input to the left V1 in the two cortical models with normal/abnormal structural connectivity. A Average firing rates. B–D Cortical regions and cortical areas that significantly responded to the input

Acknowledgements: This research was partially supported by the Japan Science and Technology Agency (JST) under the Strategic Promotion of Innovative Research and Development Program.

O17 Spatial coarse-graining the brain: origin of minicolumns

Moira L. Steyn-Ross1, D. Alistair Steyn-Ross1

1School of Engineering, University of Waikato, Hamilton 3240, New Zealand

Correspondence: Moira L. Steyn-Ross - msr@waikato.ac.nz

BMC Neuroscience 2016, 17(Suppl 1):O17

The seminal experiments of Mountcastle [1] over 60 years ago established the existence of cortical minicolumns: vertical column-like arrays of approximately 80–120 neurons aligned perpendicular to the pial surface, penetrating all six cortical layers. Minicolumns have been proposed as the fundamental unit for cortical organisation. Minicolumn formation is thought to rely on gene expression and thalamic activity, but exactly why neurons cluster into columns of diameter 30–50 μm containing approximately 100 neurons is not known.

In this presentation we describe a mechanism for the formation of minicolumns via gap-junction diffusion-mediated coupling in a network of spiking neurons. We use our recently developed method of cortical “reblocking” (spatial coarse-graining) [2] to derive neuronal dynamics equations at different spatial scales. We are able to show that for sufficiently strong gap-junction coupling, there exists a minimum block size over which neural activity is expected to be coherent. This coherence region has cross-sectional area of order (40–60 μm)², consistent with the areal extent of a minicolumn. Our scheme regrids a 2D continuum of spiking neurons using a spatial rescaling theory, established in the 1980s, that systematically eliminates high-wave-number modes [3]. The rescaled neural equations describe the bulk dynamics of a larger block of neurons giving “true” (rather than mean-field) population activity, encapsulating the inherent dynamics of a continuum of spiking neurons stimulated by incoming signals from neighbors, and buffeted by ion-channel and synaptic noise.

Our method relies on a perturbative expansion. In order for this coarse-graining expansion to converge, we require not only a sufficiently strong level of inhibitory gap-junction coupling, but also a sufficiently large blocking ratio B. The latter condition establishes a lower bound for the smallest “cortical block”: the smallest group of neurons that can respond to input as a collective and cooperative unit. We find that this minimum block-size ratio lies between 4 and 6. In order to relate this 2D geometric result to the 3D extent of a 3-mm-thick layered cortex, we project the cortex onto a horizontal surface and count the number of neurons contained within each l × l grid micro-cell. Setting l ≈ 10 μm and assuming an average of one interneuron per grid cell, a blocking ratio at the mid-value B = 5 implies that the side-length of a coherent “macro-cell” will be L = Bl = 50 μm containing ~25 inhibitory plus 100 excitatory neurons (assuming an i to e abundance ratio of 1:4) in cross-sectional area L². Thus the minicolumn volume will contain roughly 125 neurons. We argue that this is the smallest diffusively-coupled population size that can support cooperative dynamics, providing a natural mechanism defining the functional extent of a minicolumn.
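The arithmetic behind this estimate is summarized in the short sketch below; the numbers are taken directly from the text, with the one-interneuron-per-cell and 1:4 abundance assumptions being the authors' stated ones.

```python
# Worked example of the macro-cell size estimate from the text.
l = 10e-6                                  # micro-cell side length (10 um)
B = 5                                      # mid-value blocking ratio
L = B * l                                  # macro-cell side length -> 50 um
inhib_per_cell = 1                         # assumed one interneuron per micro-cell
n_inhib = B**2 * inhib_per_cell            # 25 inhibitory neurons per macro-cell
n_excit = 4 * n_inhib                      # i:e abundance ratio of 1:4 -> 100 excitatory
print(L * 1e6, "um;", n_inhib + n_excit, "neurons per minicolumn")  # 50 um; 125
```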

We propose that minicolumns might form in the developing brain as follows: Inhibitory neurons migrate horizontally from the ganglionic eminence to form a dense gap-junction coupled substrate that permeates all layers of the cortex [4]. Progenitor excitatory cells ascend vertically from the ventricular zone, migrating through the inhibitory substrate of the cortical plate. Thalamic input provides low-level stimulus to activate spiking activity throughout the network. Inhibitory diffusive coupling allows a “coarse graining” such that neurons within a particular areal extent respond collectively to the same input. The minimum block size prescribed by the coarse graining imposes constraints on minicolumn geometry, leading to the spontaneous emergence of cylindrical columns of coherent activity, each column centered on an ascending chain of excitatory neurons and separated from neighboring chains by an annular surround of inhibition. This smallest aggregate is preferentially activated during early brain development, and activity-based plasticity then leads to the formation of tangible structural columns.

Correspondence: Jorge F. Mejias - jorge.f.mejias@gmail.com

BMC Neuroscience 2016, 17(Suppl 1):O18

Visual cortical areas in the macaque are organized according to an anatomical hierarchy, which is defined by specific patterns of anatomical projections in the feedforward and feedback directions [1, 2]. Recent macaque studies also suggest that signals ascending through the visual hierarchy are associated with gamma rhythms, and top-down signals with alpha/low beta rhythms [3–5]. It is not clear, however, how oscillations presumably originating in local populations can give rise to such frequency-specific large-scale interactions in a mechanistic way, or what role anatomical projection patterns might play in this.

To address this question, we build a large-scale cortical network model with laminar structure, grounding our model on a recently obtained anatomical connectivity matrix with weighted directed inter-areal projections and information about their laminar origin. The model spans several spatial scales—the local or intra-laminar microcircuit, inter-laminar circuits, inter-areal interactions and the large-scale cortical network—and a wide range of temporal scales—from slow alpha oscillations to gamma rhythms. At each level, the model is constrained anatomically and then tested against electrophysiological observations, which provides useful information on the mechanisms modulating oscillatory activity at different scales. As we ascend from the local to the inter-laminar and inter-areal levels, the model allows us to explore the sensory-driven enhancement of gamma rhythms, inter-laminar phase-amplitude coupling, the relationship between alpha waves and local inhibition, and the frequency-specific inter-areal interactions in the feedforward and feedback directions [3, 4], revealing a possible link with the predictive coding framework.

When we embed our modeling framework into the anatomical connectivity matrix of 30 areas (which includes novel areas not present in previous studies [2, 6]), the model gives insight into the mechanisms of large-scale communication across the cortex, accounts for an anatomical and functional segregation of feedforward and feedback interactions, and predicts the emergence of functional hierarchies, which recent studies have found in macaque [4] and human [5]. Interestingly, the functional hierarchies observed experimentally are highly dynamic, with areas moving across the hierarchy depending on the behavioral context [4]. In this regard, our model provides a strong prediction: we propose that these hierarchical jumps are triggered by laminar-specific modulations of the input into cortical areas, suggesting a strong link between hierarchy dynamics and context-dependent computations driven by specific inputs.

Correspondence: Alexandra Kruscha - alexandra.kruscha@bccn-berlin.de

BMC Neuroscience 2016, 17(Suppl 1):O19

Synchronous firing of neurons is a prominent feature in many brain areas. Here, we are interested in the information transmission by the synchronous spiking output of a noisy neuronal population that receives a common time-dependent sensory stimulus. Earlier experimental [1] and theoretical [2] work revealed that synchronous spikes preferentially encode fast (high-frequency) components of the stimulus, i.e. synchrony can act as an information filter. In these studies a rather strict measure of synchrony was used: the entire population has to fire within a short time window. Here, we generalize the definition of the synchronous output, requiring only that a certain fraction γ of the population be active simultaneously—a setup that seems to be of more biological relevance. We characterize the information transfer as a function of this fraction and the population size, using the spectral coherence function between the stimulus and the partial synchronous output. We present two different analytical approaches to derive this frequency-resolved measure (one better suited to small population sizes, the other applicable to larger populations). We show that there is a critical synchrony fraction, namely the probability that a single neuron spikes within the predefined time window, which maximizes the information transmission of the synchronous output. At this value, the partial synchronous output acts as a low-pass filter, whereas deviations from this critical fraction lead to an increasingly pronounced band-pass filtering effect. We confirm our analytical findings by numerical simulations of leaky integrate-and-fire neurons. We also show that these findings are supported by experimental recordings of P-unit electroreceptors of weakly electric fish, where the filtering effect of the synchronous output occurs in real neurons as well.

O20 Decoding context-dependent olfactory valence in Drosophila

Laurent Badel1, Kazumi Ohta1, Yoshiko Tsuchimoto1, Hokto Kazama1

1RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, 351-0198, Japan

Correspondence: Laurent Badel - laurent@brain.riken.jp

BMC Neuroscience 2016, 17(Suppl 1):O20

Many animals rely on olfactory cues to make perceptual decisions and navigate the environment. In the brain, odorant molecules are sensed by olfactory receptor neurons (ORNs), which convey olfactory information to the central brain in the form of sequences of action potentials. In many organisms, axons of ORNs expressing the same olfactory receptor converge to one or a few glomeruli in the first central region (the antennal lobe in insects and the olfactory bulb in fish and mammals) where they make contact with their postsynaptic targets. Therefore, each glomerulus can be considered as a processing unit that relays information from a specific type of receptor. Because different odorants recruit different sets of glomeruli, and most glomeruli respond to a wide array of odors, olfactory information at this stage of processing is contained in spatiotemporal patterns of glomerular activity. How these patterns are decoded by the brain to guide odor-evoked behavior, however, remains largely unknown.

In Drosophila, attraction and aversion to specific odors have been linked to the activation of one or a few glomeruli (reviewed in [1]) in the antennal lobe (AL). These observations suggest a “labeled-line” coding strategy, in which individual glomeruli convey signals of specific ethological relevance, and their activation triggers the execution of hard-wired behavioral programs. However, because these studies used few odorants, and a small fraction of glomeruli were tested, it is unclear how the results generalize to broader odor sets, and whether similar conclusions hold for each of the ~50 glomeruli of the fly AL. Moreover, how compound signals from multiple glomeruli are integrated is poorly understood.

Here, we combine optical imaging, behavioral and statistical techniques to address these questions systematically. Using two-photon imaging, we monitor Ca²⁺ activity in the AL in response to 84 odors. We next screen behavioral responses to the same odorants. Comparing these data allows us to formulate a decoding model describing quantitatively how olfactory behavior is determined by glomerular activity patterns. We find that a weighted sum of normalized glomerular responses recapitulates the observed behavior and predicts responses to novel odors, suggesting that odor valence is not determined solely by the activity of a few privileged glomeruli. This conclusion is supported by genetic silencing and optogenetic activation of individual ORN types, which are found to evoke modest biases in behavior, in agreement with model predictions. Finally, we test the model prediction that the relative valence of a pair of odors depends on the identity of other odors presented in the same experiment. We find that the relative valence indeed changes, and may even switch, suggesting that perceptual decisions can be modulated by the olfactory context. Surprisingly, our model correctly captures both the direction and the magnitude of the observed changes. These results indicate that the valence of olfactory stimuli is decoded from AL activity by pooling contributions over a large number of glomeruli, and highlight the ability of the olfactory system to adapt to the statistics of its environment, similarly to the visual and auditory systems.
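A minimal sketch of such a weighted-sum decoding model is given below, with the weights fit by ordinary least squares; the response matrix, normalization, and noise level are all illustrative assumptions rather than the study's actual pipeline.

```python
import numpy as np

def fit_valence_model(R, valence):
    """R: (n_odors, n_glomeruli) glomerular response matrix,
    valence: (n_odors,) behavioral preference index.
    Returns weights w such that valence ~= R_norm @ w."""
    R_norm = R / (R.sum(axis=1, keepdims=True) + 1e-12)   # normalize each odor's pattern
    w, *_ = np.linalg.lstsq(R_norm, valence, rcond=None)  # least-squares weights
    return w, R_norm

rng = np.random.default_rng(0)
n_odors, n_glom = 84, 37
R = rng.random((n_odors, n_glom))                         # placeholder imaging responses
true_w = rng.normal(0, 1, n_glom)
valence = (R / R.sum(axis=1, keepdims=True)) @ true_w + 0.05 * rng.normal(size=n_odors)

w_hat, R_norm = fit_valence_model(R, valence)
pred = R_norm @ w_hat
print("correlation between predicted and observed valence:",
      np.corrcoef(pred, valence)[0, 1].round(3))
```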

P1 Neural network as a scale-free network: the role of a hub

B. Kahng1

1Department of Physics and Astronomy, Seoul National University, 08826, Korea

Correspondence: B. Kahng - bkahng@snu.ac.kr

BMC Neuroscience 2016, 17(Suppl 1):P1

Recently, increasing attention has been drawn to human neuroscience in the network science community. This is because recent fMRI and anatomical experiments have revealed that the neural networks of the normal human brain are scale-free. Thus, knowledge accumulated across a broad range of network science can naturally be applied to neural networks to understand the functions and properties of normal and disordered human brain networks. In particular, the degree exponent of the human neural network constructed from fMRI data turned out to be approximately two. This value has a particularly important meaning in scale-free networks, because near this exponent the number of connections to the neighbors of a hub becomes largest, and the functional role of the hub therefore becomes extremely important. In this talk, we present the role of the hub in pattern recognition and dynamical problems in association with neuroscience.
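As a small illustration of why a degree exponent near two makes the hub dominant (our own toy example, not part of the abstract), the sketch below samples power-law degree sequences and measures the largest hub's share of all link ends.

```python
import numpy as np

def hub_share(gamma, n=100000, kmin=1, seed=0):
    """Sample node degrees from P(k) ~ k^(-gamma) and return the fraction of
    all link ends attached to the single largest hub."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    k = kmin * (1 - u) ** (-1.0 / (gamma - 1.0))   # inverse-transform power-law sampling
    return k.max() / k.sum()

for gamma in (2.1, 2.5, 3.0):
    print(gamma, round(hub_share(gamma), 4))
# The hub's share of connections grows sharply as the exponent approaches two.
```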

Nicoladie D. Tam1

1Department of Biological Sciences, University of North Texas, Denton, TX 76203, USA

Correspondence: Nicoladie D. Tam - nicoladie.tam@unt.edu

BMC Neuroscience 2016, 17(Suppl 1):P2

This study focuses on the relationship between the emotional response, decision-making, and the hemodynamic responses in the prefrontal cortex. It is based on a computational emotional model that hypothesizes that the emotional response is proportional to the discrepancy between expectancy and actuality. Previous studies have shown that emotional responses are related to decisions [1, 2]. Specifically, the emotional responses of the happy [3], sad [4], angry [5], and jealous [6] emotions are proportional to the discrepancy between what one wants and what one gets [1, 3–7].

Methods Human subjects were asked to perform the classical behavioral economic experiment called the Ultimatum Game (UG) [8]. This experimental paradigm elicits the interrelationship between decision and emotion in human subjects [3–6]. The hemodynamic responses of the prefrontal cortex were recorded while the subjects performed the UG experiment.

Results The results showed that the hemodynamic response, which corresponds to neural activation and deactivation based on the metabolic activities of the neural tissues, is proportional to the emotional intensity and to the discrepancy between expectancy and actuality. This validates the hypothesis of the proposed emotional theory [9–11] that the intensity of emotion is proportional to the disparity between the expected and the actual outcomes. These responses are also related to fairness perception [7] with respect to the survival functions [9, 10], similar to the responses established experimentally for the happy emotion [1] and for fairness [12]. This is consistent with the computational relationship between decision and fairness [13].

Tam D. EMOTION-II model: a theoretical framework for happy emotion as a self-assessment measure indicating the degree-of-fit (congruency) between the expectancy in subjective and objective realities in autonomous control systems. Open Cybern Syst J. 2007;1:47–60.

Nicoladie D. Tam1, Luca Pollonini2, George Zouridakis3

1Department of Biological Sciences, University of North Texas, Denton, TX 76203, USA; 2College of Technology, the University of Houston, TX, 77204, USA; 3Departments of Engineering Technology, Computer Science, and Electrical and Computer Engineering, University of Houston, Houston, TX, 77204, USA

Correspondence: Nicoladie D. Tam - nicoladie.tam@unt.edu

BMC Neuroscience 2016, 17(Suppl 1):P3

We aim to extract intended movement directions from hemodynamic signals recorded with a noninvasive optical imaging technique, such that a brain-computer interface (BCI) can be built to control a wheelchair based on the optical signals recorded from the brain. Real-time detection of neurodynamic signals can be obtained using functional near-infrared spectroscopy (fNIRS), which detects both oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) levels in the underlying neural tissues. In addition to the advantage of real-time monitoring of hemodynamic signals over fMRI (functional magnetic resonance imaging), fNIRS can also detect brain signals from human subjects in motion without movement artifacts. Previous studies have shown that hemodynamic responses are correlated with movement directions based on the temporal profiles of the oxy-Hb and deoxy-Hb levels [1–5]. In this study, we apply a phase space analysis to the hemodynamic response to decode movement directions, instead of the temporal analysis used in the previous studies.

Methods In order to decode the movement directions, human subjects were asked to execute movements in two orthogonal directions, front-back and right-left, while the optical hemodynamic responses were recorded in the motor cortex of the dominant hemisphere. We aim to decode the intended movement directions without any a priori assumption about how arm movement directions are correlated with the hemodynamic signals. Therefore, we used phase space analysis to determine how the trajectories of oxy-Hb and deoxy-Hb are related to each other during these arm movements.

Results The results show that there are subpopulations of cortical neurons whose activity is related to the intended movement directions. Specifically, in the phase space analysis of the oxy-Hb and deoxy-Hb levels, opposite movement directions are represented by hysteresis loops traversed in opposite directions in the phase space. Since oxy-Hb represents oxygen delivery and deoxy-Hb represents oxygen extraction by the underlying brain tissue, the phase space analysis provides a means to differentiate movement direction by the ratio between oxygen delivery and oxygen extraction. In other words, the oxygen demands of the subpopulations of neurons in the underlying tissue differ depending on the movement direction. This also corresponds to the opposite patterns of neural activation and deactivation during execution of opposite movement directions. Thus, phase space analysis can be used as an analytical tool to differentiate movement directions based on the hysteresis trajectory of the hemodynamic variables.
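A minimal sketch of this phase-space idea is given below: the signed (shoelace) area of the closed oxy-Hb vs. deoxy-Hb trajectory flips sign when the hysteresis loop is traversed the opposite way. The signed-area criterion and the synthetic traces are our illustrative assumptions, not the authors' published measure.

```python
import numpy as np

def signed_loop_area(oxy, deoxy):
    """Signed area enclosed by the (oxy-Hb, deoxy-Hb) trajectory (shoelace formula).
    Opposite traversal directions of the hysteresis loop give opposite signs."""
    x, y = np.asarray(oxy), np.asarray(deoxy)
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

# Two synthetic trials standing in for opposite movement directions:
t = np.linspace(0, 2 * np.pi, 200)
oxy = np.sin(t)
deoxy_fwd = 0.5 * np.sin(t - 0.8)     # deoxy lags oxy  -> one traversal direction
deoxy_bwd = 0.5 * np.sin(t + 0.8)     # deoxy leads oxy -> opposite traversal
print(signed_loop_area(oxy, deoxy_fwd) > 0, signed_loop_area(oxy, deoxy_bwd) > 0)
```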

P4 Modeling jamming avoidance of weakly electric fish

Jaehyun Soh1, DaeEun Kim1

Correspondence: DaeEun Kim - daeeun@yonsei.ac.kr

BMC Neuroscience 2016, 17(Suppl 1):P4

Weakly electric fish generate an electric field with the electric organ in their tail. They detect objects by sensing this field with electroreceptors on their body surface. Objects in the vicinity of the fish distort the field the fish generates, and the fish detect this distortion to recognize their surroundings. Weakly electric fish generally produce species-dependent electric organ discharge (EOD) signals, with frequencies ranging from 50–600 Hz to above 800 Hz. The EOD signals can be disturbed by signals of similar frequency emitted by neighboring weakly electric fish. When they detect such interference, the fish change their EOD frequencies to avoid the jamming signals. This is called the jamming avoidance response (JAR).

The electroreceptors of the fish read other electric fish’s EODs while also sensing the fish’s own EOD. Therefore, when two weakly electric fish are close enough and emit similar frequencies, their EOD-based sensing ability is impaired because of signal jamming [1, 2]. A fish lowers its EOD frequency when it detects a jamming signal of slightly higher frequency, and otherwise raises it. This response is shown in Fig. 10. The fish shift their EOD frequency almost immediately, without trial and error.

Fig. 10

Jamming avoidance response

How the fish avoid jamming has been studied for a long time, but the corresponding neural mechanisms have not yet been fully revealed. The JAR of Eigenmannia can be analyzed with Lissajous graphs of amplitude modulation versus differential phase modulation. The relative intensity of the signals at each patch of skin can indicate whether the interfering signal frequency is higher or lower than the fish’s own [3].

We suggest a jamming avoidance algorithm for EOD signals, especially for wave-type fish. We explore the diagram of amplitude modulation versus phase modulation and analyze its shape; the phase and amplitude differences contribute to the estimation of the jamming situation. From this, the frequency of the jamming signal can be detected and used to guide the jamming avoidance response, providing a measure that predicts the JAR. However, what type of neural structure implements this in weakly electric fish remains an open question, and further study of this subject is needed.
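The sketch below illustrates one simple way such an amplitude-versus-phase analysis could decide the sign of the frequency shift: the rotation sense of the amplitude-modulation vs. phase-modulation trajectory of the combined signal indicates whether the jamming frequency is above or below the fish's own. This is an illustrative simplification, not the authors' algorithm.

```python
import numpy as np

def jar_decision(f_own, f_jam, fs=20000, dur=0.5, a_jam=0.3):
    """Decide whether to raise or lower the EOD frequency from the rotation
    sense of the amplitude-modulation vs. phase-modulation trajectory."""
    t = np.arange(0, dur, 1 / fs)
    own = np.exp(1j * 2 * np.pi * f_own * t)           # analytic own EOD
    mix = own + a_jam * np.exp(1j * 2 * np.pi * f_jam * t)
    am = np.abs(mix) - 1.0                              # amplitude modulation
    pm = np.angle(mix / own)                            # differential phase modulation
    # Signed area of the (pm, am) Lissajous figure gives the rotation sense.
    rotation = 0.5 * np.sum(pm * np.roll(am, -1) - am * np.roll(pm, -1))
    return "lower own EOD" if rotation < 0 else "raise own EOD"

print(jar_decision(400.0, 404.0))   # jamming slightly higher -> fish should lower
print(jar_decision(400.0, 396.0))   # jamming slightly lower  -> fish should raise
```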

Acknowledgements: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2014R1A2A1A11053839).

P5 Synergy and redundancy of retinal ganglion cells in prediction

Minsu Yoo1, S. E. Palmer1,2

1Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA; 2Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, USA

Correspondence: Minsu Yoo - minsu@uchicago.edu

BMC Neuroscience 2016, 17(Suppl 1):P5

Recent work has shown that retinal ganglion cells (RGCs) of salamanders predict future sensory information [1]. It has also been shown that these RGCs carry significant information about the future state of their own population firing patterns [2]. From the perspective of downstream neurons in the visual system that do not have independent access to the visual scene, the correlations in the RGC firing itself may be important for predicting the future visual input. In this work, we explore the structure of the generalized correlation in firing patterns in the RGC population, with a particular focus on coding efficiency. From the perspective of efficient neural coding, we might expect neurons to code for their own future state independently (decorrelation across cells), and to have very little predictive information extending forward in time (decorrelation in the time domain).

In this work, we quantify whether neurons in the retina code for their own future input independently, redundantly, or synergistically, and how long these correlations persist in time. We use published extracellular multi-electrode data from the salamander retina in response to repeated presentations of a natural movie [1]. We find significant mutual information in the population firing that is almost entirely independent except at very short time delays, where the code is weakly redundant (Fig. 11). We also find that the information persists at delays of up to a few hundred milliseconds. In addition, we find that individual neurons vary widely in the amount of predictive information they carry about the future population firing state. This heterogeneity may contribute to the diversity of predictive information we find across groups in this experiment.
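A minimal sketch of the underlying plug-in estimate of mutual information between binary population words at times t and t + Δt is given below; the raster is a placeholder, and the quadratic-extrapolation bias correction used in the study is omitted for brevity.

```python
import numpy as np
from collections import Counter

def plugin_mi(words_t, words_dt):
    """Plug-in mutual information (bits) between two sequences of binary
    population words, e.g. patterns at times t and t + dt."""
    def entropy(seq):
        counts = np.array(list(Counter(seq).values()), dtype=float)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))
    joint = list(zip(words_t, words_dt))
    return entropy(words_t) + entropy(words_dt) - entropy(joint)

# Binarize spike trains into 16.67 ms bins and form 5-cell words:
rng = np.random.default_rng(0)
binned = (rng.random((5, 20000)) < 0.05).astype(int)   # placeholder 5-cell raster
delay = 6                                              # ~100 ms at 16.67 ms bins
words = [tuple(col) for col in binned.T]
mi = plugin_mi(words[:-delay], words[delay:])
print("predictive information at ~100 ms delay:", round(mi, 4), "bits")
```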

Fig. 11

Predictive information in the retinal response is coded for independently. Red the mutual information between the binary population firing patterns at times t and t + Δt, for 1000 randomly selected groups of 5 cells from our 31-cell population. Time is binned in 16.67 ms bins, and the (rare) occurrence of two spikes in a bin is recorded as a ‘1’. Blue the sum of the mutual information between a single cell response at time t and the future response of the group at time t + Δt. Error bars indicate the standard error of the mean across groups. All information quantities are corrected for finite-size effects using quadratic extrapolation [3]

The results in this study may provide useful information for building a model of the RGC population that can explain why redundant coding is only observed at short delays, or what makes one RGC more predictive than another. Building this type of model will illustrate how the retina represents the future.

P6 A neural field model with a third dimension representing cortical depth

Viviana Culmone1, Ingo Bojak1

Correspondence: Viviana Culmone - v.culmone@pgr.reading.ac.uk

BMC Neuroscience 2016, 17(Suppl 1):P6

Neural field models (NFMs) characterize the average properties of neural ensembles as a continuous excitable medium. So far, NFMs have largely ignored the extension of the dendritic tree and its influence on the neural dynamics [1]. As shown in Fig. 12A, we implement a 3D-NFM, including the dendritic extent through the cortical layers, starting from a well-known 2D-NFM [2]. We transform the equation for the average membrane potential h_e of the point-like soma in the 2D-NFM [2] into a full cable equation form that adds the dendritic (vertical) dimension.

Fig. 12

A The 3D-NFM adds a dendritic dimension to the 2D one [1]. One single macrocolumn has inhibitory (I) and excitatory (E) subpopulations. B (Top) Discretization of the dendrite. (Bottom) Equilibrium membrane potential along the dendrite for two different synaptic inputs. C PSDs of h_e for the 2D- and 3D-NFM. Increasing the synaptic input recovers the lost alpha rhythm

The 3D-NFM is modeled considering the dendritic tree as a single linear cable. Figure 12B shows the resulting resting potential along the extended dendrite for synaptic input in two different locations. Naively keeping the parameters of the 2D-NFM for the 3D-NFM results in a power spectral density (PSD) without an alpha rhythm resonance, see Fig. 12C. However, increasing the synaptic input by a factor fsyn can compensate for the dispersion along the dendrite and recovers the peak in the alpha band. We study the influence of varying the distribution of synaptic inputs along the dendritic (vertical) dimension and of changing the (horizontal) area of the simulated cortical patch. We also provide an outlook on how to compare our results with local field potential recordings from real cortical tissues. We expect that 3D-NFMs will be used widely in the future for describing such experimental data, and that the methods used to extend the specific 2D-NFM used here [2] will generalize to other 2D-NFMs.

Andrea Ferrario1, Robert Merrison-Hort1, Roman Borisyuk1

Correspondence: Andrea Ferrario - andrea.ferrario@plymouth.ac.uk

BMC Neuroscience 2016, 17(Suppl 1):P7

Our previous results [1, 2] describe a computational anatomical model of the Xenopus tadpole spinal cord which includes about 1400 neurons of seven types allocated on two sides of the body. This model is based on a developmental approach, where axon growth is simulated and synapses are created (with some probability) when axons cross dendrites. A physiological model of spiking neurons with the generated connectivity of about 85,000 synapses produces a very reliable swimming pattern of anti-phase oscillations in response to simulated sensory input [2].

Using the developmental model we generate 100 different sets of synaptic connections (“connectomes”), and use this information to create a generalized probabilistic model. The probabilistic model provides a new way to easily generate tadpole connectomes and, remarkably, these connectomes produce similar simulated physiological behavior to those generated using the more complex developmental approach (e.g. they swim when stimulated). Studying these generated connectivity graphs allows us to analyze the structure of connectivity in a typical tadpole spinal cord.

Many complex neuronal networks have been found to have “small world” properties, including those of the nematode worm C. elegans [3, 6], cat and macaque cortex, and the human brain [4]. Small-world networks lie between regular and random networks, and are characterized by a high value of the clustering coefficient C and a relatively small value of the average path length L when compared with Erdős-Rényi and degree-matched graphs of similar size. We used graph theory tools to calculate the strongly connected component of each network, which was then used to measure C and L. For the degree-matched network, these computations were based on finding the probabilistic generating function [5]. By comparing these measures with those of degree-matched random graphs, we found that the tadpole network can be considered a small-world graph. This is also true for the subgraph consisting only of neurons on one side of the body, which displays properties very similar to those of the C. elegans network. Another important subgraph, comprising only the two main neuron types in the central pattern generator (CPG) network, also shows small-world properties, but is less similar to the C. elegans network.
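A minimal sketch of this kind of small-world analysis, using networkx on the largest strongly connected component and a degree-matched configuration-model control, is shown below; the placeholder random graph merely stands in for a generated tadpole connectome.

```python
import networkx as nx

def small_world_stats(G):
    """Clustering coefficient C and average path length L of the largest
    strongly connected component, compared with a degree-matched random graph."""
    def stats(D):
        scc = max(nx.strongly_connected_components(D), key=len)
        H = D.subgraph(scc).copy()
        C = nx.average_clustering(H.to_undirected())   # clustering of the undirected skeleton
        L = nx.average_shortest_path_length(H)         # directed average path length
        return C, L
    C, L = stats(G)
    # Degree-matched control via the directed configuration model:
    R = nx.DiGraph(nx.directed_configuration_model(
        [d for _, d in G.in_degree()], [d for _, d in G.out_degree()], seed=0))
    R.remove_edges_from(nx.selfloop_edges(R))
    C_r, L_r = stats(R)
    return C, L, C_r, L_r

# Placeholder "connectome": a directed random graph stands in for a generated one.
G = nx.gnp_random_graph(200, 0.05, directed=True, seed=1)
print(small_world_stats(G))   # small-world if C >> C_rand while L ~ L_rand
```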

Our approach allows us to study the general properties of the architecture of the tadpole spinal cord, even though in reality the actual network varies from individual to individual (unlike in C. elegans). This allows us to develop ideas about the organizing principles of the network, as well as to make predictions about the network’s functionality that can be tested first in computer simulations and later in real animal experiments. In this work we combine several graph theory techniques in a novel way to analyze the structure of a complex neuronal network where not all biological details are known. We believe that this approach can be applied widely to analyze other animals’ nervous systems.

P8 The recognition dynamics in the brain

Chang Sub Kim1

Correspondence: Chang Sub Kim - cskim@jnu.ac.kr

BMC Neuroscience 2016, 17(Suppl 1):P8

Over the years, extensive research effort has been devoted to understanding the brain’s cognitive function under a unified principle and to formulating the corresponding computational scheme of the brain [1]. The free-energy principle (FEP) claims that the brain’s operation in perception, learning, and action rests on an internal mechanism of avoiding aberrant events encountered in its habitable environment. The suggested theoretical measure for this biological process is the informational free energy (IFE). The computational actualization of the FEP is carried out via the gradient descent method (GDM) of machine learning theory.

The information content of cognitive processes is encoded in biophysical matter as spatiotemporal patterns of the neuronal correlates of external causes. Therefore, any realistic attempt to account for brain function must conform to the laws of physics and their underlying principles. Notwithstanding its grand simplicity, however, the FEP framework embraces some extra-physical constructs. Two major such constructs are the generalized motions, which are non-Newtonian objects, and the GDM used to execute the brain’s computational mechanism of perception and active inference. The GDM is useful for finding mathematical solutions to optimization problems, but it is not derived from a physical principle.

In this work, we cast the FEP of brain science into the framework of the principle of least action (PLA) in physics [2]. The goal is to remove the extra-physical constructs embedded in the FEP and to reformulate the GDM within the standard mechanics arena. Previously, we suggested setting up the minimization scheme of the IFE in the Lagrangian mechanics formalism [3], which contained only preliminary results. In the present formulation we specify the IFE as the information-theoretic Lagrangian and thus formally define the informational action (IA) as the time integral of the IFE. The PLA then prescribes that the viable brain minimizes the IA when encountering uninhabitable events, by selecting an optimal path among all possible dynamical configurations in the brain’s neuronal network. Specifically, the minimization yields the mechanistic equations of motion of the brain states, which are algorithms for inverting sensory inputs to infer their external causes. The resulting Hamilton–Jacobi–Bellman-type equation prescribes the brain’s recognition dynamics, which does not require the extra-physical concept of higher-order motions. Finally, a neurobiological implementation of the algorithm is presented which complies with the hierarchical, operative structure of the brain. In doing so, we adopt the local field potential and the local concentrations of ions in the Hodgkin–Huxley model as the effective brain states [4]. Thus, the brain’s recognition dynamics is operatively implemented in a neuro-centric picture. We hope that our formulation, which conveys a wealth of structure as an interpretive and mechanistic description of how the brain’s cognitive function may operate, will provide helpful guidance for future simulations.

Hodgkin A, Huxley A. A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol. 1952;117:500–44.

P9 Multivariate spike train analysis using a positive definite kernel

Taro Tezuka1

1Faculty of Library, Information and Media Science, University of Tsukuba, Tsukuba, 305-0821, Japan

Correspondence: Taro Tezuka - tezuka@slis.tsukuba.ac.jp

BMC Neuroscience 2016, 17(Suppl 1):P9

Multivariate spike trains, obtained by recording multiple neurons simultaneously, are a key to uncovering information representation in the brain [1]. Other expressions used to refer to the same type of data include “multi-neuron spike train” [2] and “parallel spike train” [3]. One approach to analyzing spike trains is to use kernel methods, which are among the most powerful machine learning methods. Kernel methods rely on defining a symmetric positive-definite kernel suited to the given data. This work proposes a general way of extending kernels on univariate (or single-unit) spike trains to multivariate spike trains.

In this work, the mixture kernel, which naturally extends a kernel defined on univariate spike trains, is proposed and evaluated. There are many univariate spike train kernels proposed [4–9], and the mixture kernel is applicable to any of these kernels. Considered abstractly, a multivariate spike train is a set of time points at which different types of events occurred. In other words, it is a sample taken from a marked point process. The method proposed in this paper is therefore applicable to other data with the same structure.

The mixture kernel is defined as a linear combination of symmetric positive-definite kernels on the components of the target data structure, in this case univariate spike trains. The name “mixture kernel” derives from the common use of the word “mixture” to indicate a linear combination in physics and machine learning, for example in Gaussian mixture models. One can prove that the mixture kernel is symmetric positive-definite if the coefficient matrix of the mixture is symmetric positive-semidefinite.
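A minimal sketch of this construction is shown below, writing the mixture kernel as K(X, Y) = Σ_ij A_ij k(x_i, y_j) over units i, j; the Gaussian spike-count kernel used for k is only a placeholder for any of the univariate spike train kernels cited above.

```python
import numpy as np

def count_kernel(s1, s2, sigma=2.0):
    """Placeholder univariate spike-train kernel: Gaussian kernel on spike counts.
    Any symmetric positive-definite univariate kernel could be used instead."""
    return np.exp(-((len(s1) - len(s2)) ** 2) / (2 * sigma ** 2))

def mixture_kernel(X, Y, A, k=count_kernel):
    """Mixture kernel K(X, Y) = sum_ij A_ij * k(x_i, y_j) over units i, j.
    X, Y: lists of per-unit spike-time arrays; A: symmetric PSD coefficient matrix."""
    n = len(X)
    return sum(A[i, j] * k(X[i], Y[j]) for i in range(n) for j in range(n))

rng = np.random.default_rng(0)
n_units = 4
B = rng.normal(size=(n_units, n_units))
A = B @ B.T                                   # symmetric positive semidefinite coefficients
def trains():                                 # random multivariate spike train (one array per unit)
    return [np.sort(rng.uniform(0, 1, rng.poisson(8))) for _ in range(n_units)]
X, Y = trains(), trains()
print(mixture_kernel(X, Y, A))
```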

The performance of the mixture kernel was evaluated by kernel ridge regression, estimating the value of the parameter used to generate synthetic spike train data, and also the stimulus given to the animal while the spike trains were recorded. For synthetic data, multivariate spike trains were generated using homogeneous Poisson processes. For real data, the pvc-3 data set [2] from the CRCNS (Collaborative Research in Computational Neuroscience) data sharing website was used, which consists of 10-unit multivariate spike trains recorded from the primary visual cortex of a cat.

Acknowledgement: This work was supported in part by JSPS KAKENHI Grant Numbers 21700121, 25280110, and 25540159.

P10 Synchronization of burst periods may govern slow brain dynamics during general anesthesia

Pangyu Joo1

1Physics, POSTECH, Pohang, 37673, Republic of Korea

Correspondence: Pangyu Joo - pangyu32@postech.ac.kr

BMC Neuroscience 2016, 17(Suppl 1):P10

Researchers have utilized the electroencephalogram (EEG) as an important key to studying brain dynamics under general anesthesia. Representative EEG features in deep anesthesia are slow wave oscillations and burst suppression [1], and their characteristics are so different that they appear to have different origins. Here, we propose that the two features may be different aspects of the same phenomenon, and we show that the slow oscillation could arise from partial synchronization of bursting periods. To model the synchronization of burst periods, a modified version of Ching’s model of burst suppression [2] is used. Twenty pyramidal neurons and 20 fast-spiking neurons are divided into 10 areas, each composed of 2 pyramidal and 2 fast-spiking neurons, so that each area exhibits burst suppression independently. All pyramidal neurons are then connected all-to-all, and the connection strength modulates the degree of synchronization of the burst periods. The action potential trace of each pyramidal neuron is binarized to 1 when the membrane potential exceeds 0 and to 0 otherwise; the binarized traces are then averaged over neurons and convolved with a 50 ms square window to obtain the collective activity of the population. As shown in Fig. 13A, at a high ATP recovery rate (JATP > 1) there are no suppression periods, so the slow oscillation does not appear regardless of synchronization. At a low ATP recovery rate (JATP = 0.5), the slow oscillation appears with increasing amplitude and finally becomes burst suppression as the relative connection strength increases (Fig. 13B). When the ATP recovery rate is 0, the pyramidal neurons do not fire at all. These results suggest that the burst period synchronization model could explain several important features of the EEG during general anesthesia: the increasing amplitude of the slow oscillation as anesthesia deepens, the significantly higher activity during bursting periods, and the peak-max phase-amplitude coupling in deep anesthesia.
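A minimal sketch of the binarization and smoothing step described above is given below; the membrane potential traces are placeholders, and normalizing the 50 ms window to unit area is an implementation choice.

```python
import numpy as np

def collective_activity(v, dt=1.0, window_ms=50.0):
    """Binarize membrane potential traces (1 where v > 0 mV, else 0), average
    over neurons, and convolve with a square window to get collective activity.
    v: array of shape (n_neurons, n_timesteps); dt: time step in ms."""
    binary = (v > 0.0).astype(float)
    mean_activity = binary.mean(axis=0)
    n_taps = int(window_ms / dt)
    kernel = np.ones(n_taps) / n_taps
    return np.convolve(mean_activity, kernel, mode="same")

# Placeholder traces standing in for the 20 pyramidal neurons of the model:
rng = np.random.default_rng(0)
v = rng.normal(-65, 5, size=(20, 5000))
v[:, 1000:1500] += 80          # a shared "burst period" pushes spikes above 0 mV
sig = collective_activity(v)
print(sig.max(), sig[:500].mean())   # high during the burst, near zero elsewhere
```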

Fig. 13

A The convolved signal for different ATP recovery rates (JATP) and relative connection strengths (C). B Standard deviation of the convolved signals

Correspondence: Young-Ah Rho - yarho75@gmail.com

BMC Neuroscience 2016, 17(Suppl 1):P11

Synchronization of neural oscillations is a prominent feature of neural activity and is thought to play an important role in neural coding. Theoretical and experimental studies have described several mechanisms for synchronization based on coupling strength and correlated noise input. In the olfactory system, recurrent and lateral inhibition mediated by dendrodendritic mitral cell–granule cell synapses are critical for synchronization, and intrinsic biophysical heterogeneity reduces the ability to synchronize. In our previous study, a simple phase model was used to examine how physiological heterogeneity in biophysical properties and firing rates across neurons affects correlation-induced synchronization (stochastic synchrony). It showed that heterogeneity in firing rates and in the shapes of the phase response curves (PRCs) reduced output synchrony. In this study, we extend the previous phase model to a conductance-based model to examine how the density of specific ion channels in mitral cells affects stochastic synchrony. A recent study revealed that mitral cells are highly heterogeneous in their expression of the sag current, a hyperpolarization-activated inward current (Angelo, 2011). The variability in the sag contributes to the diversity of mitral cells, and we therefore asked how this variability influences synchronization. Mitral cell oscillations and bursting are also regulated by an inactivating potassium current (IA). Based on these ion channels, we examined the effect of changing the current densities (gA, gH) on the diversity of PRCs and on synchrony. To identify oscillatory patterns of bursting and repetitive spiking across gA and gH, two-parameter bifurcation analysis was performed in the presence and absence of noise. Increasing gH alone reduces the region of bursting but does not eliminate it completely, and PRCs changed much more with respect to gA than to gH. Focusing on varying gA, we next examined the role of gA density and firing rate in stochastic synchrony by introducing fluctuating correlated input resembling shared presynaptic drive. We found that heterogeneity in the A-type current mainly influenced stochastic synchrony, as predicted by the PRC analysis, and that diversity in firing rate alone did not account for it. In addition, a population heterogeneous with respect to gA, given a sufficient gA density, showed better stochastic synchrony than a homogeneous population at the same firing rate.

P12 Circular statistics of noise in spike trains with a periodic component

Petr Marsalek1,2

Correspondence: Petr Marsalek - petr.marsalek@lf1.cuni.cz

BMC Neuroscience 2016, 17(Suppl 1):P12

Introduction We estimate parameters of the inter-spike interval distributions in binaural neurons of the mammalian sound localization circuit, the neurons of the lateral and medial superior olive [1]. We present equivalent descriptions of spike time probabilities using both standard and circular statistics. We show that the difference between the sine function and the beta density in the circular domain is negligible.

Results Estimation of the spike train probability density function parameters is presented in relation to harmonic and complex sound input. The resulting densities are expressed analytically with the use of harmonic and Bessel functions. Parameter fits are verified by numerical simulations of spike trains (Fig. 14).

Fig. 14

Comparison of circular probability density functions of the sine and beta densities. A The beta density with parameters a = b = 3.3818 closely matches the sine function used as a probability density function (PDF). The beta density with parameters a = b = 3 (solid line) is matched by the sine function y = 1.05 − 1.1 cos(2πx/1.1). B The cumulative distribution function (CDF) is shown for these densities, together with the difference between the two CDFs multiplied by 100 to visualize the comparison of the two distributions. C For testing different vector strengths we use uniform distributions with pre-set vector strengths (ρ = 0.8, 0.5 and 0.08)

Conclusions We use analytical techniques where possible. We calculate the one-to-one correspondence between the vector strength and the parameters of the circular distributions used to describe the data. We show here an introductory figure of our paper with the two representative circular densities. We also use experimental data [2, 3] and simulated data to compare with these theoretical distributions.
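The short sketch below illustrates the quantities involved: it compares a sine-based circular density with a beta density (parameters taken from the figure caption) and computes the vector strength of phases drawn from the beta density. The comparison details are illustrative, not the paper's analytical derivation.

```python
import numpy as np
from scipy.stats import beta

def vector_strength(phases):
    """Vector strength of spike phases in [0, 1): length of the mean resultant vector."""
    return np.abs(np.mean(np.exp(2j * np.pi * phases)))

x = np.linspace(0, 1, 1000)
sine_pdf = 1.0 - np.cos(2 * np.pi * x)          # sine-based circular density (integrates to 1)
beta_pdf = beta.pdf(x, 3.3818, 3.3818)          # beta density from the figure caption
print("max |sine - beta| =", np.abs(sine_pdf - beta_pdf).max())

rng = np.random.default_rng(0)
phases = rng.beta(3.3818, 3.3818, size=5000)    # spike phases drawn from the beta density
print("vector strength =", round(vector_strength(phases), 3))
```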

Acknowledgements: Supported by the PRVOUK program no. 205024 at the Charles University in Prague. I acknowledge contributions to the analytical computations by Ondrej Pokora and simulation in Matlab by Peter G. Toth.

P14 Representations of directions in EEG-BCI using Gaussian readouts

1Department of Bio and Brain Engineering and 2Program of Brain and Cognitive Engineering, College of Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, 34141; 3Korea Science Academy of KAIST, Busan, South Korea, 10547

Correspondence: Jaeseung Jeong - jsjeong@kaist.ac.kr

BMC Neuroscience 2016, 17(Suppl 1):P14

EEG (electroencephalography) is one of the most practical neuroimaging technologies and one of the best options for BCI (brain-computer interface), because EEG systems are portable, wireless, and wearable in most situations. A key objective of BCI is the physical control of machines, such as moving a cursor on a screen or steering a robot [1, 2]. In previous studies, motor imagery has been used to represent the direction of intended movement [1, 2]: for example, left-hand imagery maps to leftward movement, right-hand imagery to rightward movement, and imagery of both hands to forward movement. In this study, however, we considered only the brain signals recorded while a subject thinks of a movement direction, not motor imagery. We designed recurrent neural networks consisting of 300–10,000 artificial linear neurons using the echo state network paradigm [3]. We recorded EEG signals using an Emotiv EPOC+ headset with 16 electrodes (AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4, and two references). The raw data from all channels were normalized and then used as inputs to the recurrent neural networks. To represent directions, we built Gaussian readouts, each with a preferred direction and a fitted Gaussian tuning function (Fig. 15). A readout’s firing rate was high when the subject thought of its preferred direction and decreased slightly for non-preferred directions. To implement these readouts, all neurons in the recurrent network were linearly connected to all readouts, and the weights of these connections were trained with linear learning rules. We recorded EEG signals from five healthy subjects for each direction, and the readouts showed well-fitted Gaussian direction preferences. This study considered only two dimensions, but many BCI applications involve three-dimensional space; the Gaussian-readout approach should therefore be extended to a three-dimensional version.
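A minimal sketch of this kind of setup (not the study's code) is given below: a random echo state reservoir driven by normalized multichannel input, with readout weights trained by ridge regression toward Gaussian direction-tuning targets. Reservoir size, spectral radius, and tuning width are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_readouts = 16, 300, 8          # channels, reservoir units, direction readouts
W_in = rng.normal(0, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(u):
    """u: (T, n_in) normalized EEG; returns reservoir states (T, n_res)."""
    x = np.zeros(n_res)
    states = []
    for ut in u:
        x = np.tanh(W @ x + W_in @ ut)
        states.append(x.copy())
    return np.array(states)

def gaussian_targets(direction, preferred, width=0.5):
    """Target firing of each readout: Gaussian tuning around its preferred direction."""
    d = np.angle(np.exp(1j * (direction - preferred)))     # wrapped angular difference
    return np.exp(-d**2 / (2 * width**2))

preferred = np.linspace(0, 2 * np.pi, n_readouts, endpoint=False)
# Train readout weights by ridge regression on (reservoir state, target) pairs:
T = 2000
u = rng.normal(0, 1, (T, n_in))                            # placeholder normalized EEG
dirs = rng.choice(preferred, T)                            # direction thought of per step
X = run_reservoir(u)
Y = np.array([gaussian_targets(d, preferred) for d in dirs])
W_out = np.linalg.solve(X.T @ X + 1e-2 * np.eye(n_res), X.T @ Y)
print("trained readout weight matrix:", W_out.shape)       # (n_res, n_readouts)
```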

Correspondence: Yaroslav I. Molkov - ymolkov@gsu.edu

BMC Neuroscience 2016, 17(Suppl 1):P15

The basal ganglia (BG) comprise a number of interconnected nuclei that are collectively involved in a wide range of motor and cognitive behaviors. The commonly accepted theory is that the BG play a pivotal role in action selection and reinforcement learning, facilitated by the activity of dopaminergic neurons of the substantia nigra pars compacta (SNc). These dopaminergic neurons encode prediction errors when reward outcomes exceed or fall below anticipated values. The BG gate appropriate behaviors from the multiple motor-cortical command candidates arriving at the striatum (the BG input nucleus) and suppress competing, inappropriate behaviors. The selected motor action is realized when the internal segment of the globus pallidus (GPi; the BG output nucleus) disinhibits thalamic neurons corresponding to the gated behavior. The BG network performs motor command selection through facilitation of the appropriate behavior via the “direct” striatonigral (GO) pathway and inhibition of competing behaviors via the “indirect” striatopallidal (NOGO) pathway.

Several modeling studies have shown the plausibility of the above concept in simplified cases, e.g. for binary action selection in response to a binary cue. However, in these previous models the possible actions/behaviors were represented in an abstract way and did not have a detailed implementation as specific neuronal patterns actuating the musculoskeletal apparatus. To address these details, the motor system in the present study includes a 2D biomechanical arm model in the horizontal plane to simulate realistic reaching movements. The arm consists of two segments (upper arm and forearm) and has two joints (shoulder and elbow) controlled by four monoarticular muscles (a flexor and an extensor at each joint) and two bi-articular muscles (a shoulder-and-elbow flexor and a shoulder-and-elbow extensor). The neural component of the model includes the BG, the thalamus, the motor cortex, and spinal circuits. The low-level spinal circuitry contains six motoneurons (each controlling one muscle) and receives proprioceptive feedback from the muscles. Cortical neurons provide inputs to the spinal network. Their activity is calculated by solving an inverse problem (inverting the internal model) based on the initial position of the arm and the reaching distance and direction.

In the model, reaching movements in different directions were used as the set of possible behaviors. We simulated movements in response to a sensory cue defining the target arm position. The cortex generated signals corresponding to the cue and all possible motor commands and delivered these signals to the BG. The resulting neuronal patterns in the motor cortex were calculated as a convolution of the thalamic activity with all possible motor commands. The function of the BG was to establish the association between the cue and the appropriate action(s) by adjusting the weights of plastic corticostriatal projections through reinforcement learning. The BG model contained an exploratory mechanism, operating through the subthalamic nucleus (STN), that allowed the model to constantly seek better cue-action associations delivering larger rewards. Reinforcement learning relied on the SNc dopaminergic signal, which measured trial-to-trial changes in the reward value defined by performance errors.
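A toy sketch of dopamine-gated corticostriatal plasticity of the general kind described here is shown below; the update rule, the decay term standing in for synaptic scaling, and all dimensions are our illustrative assumptions, not the model's actual equations.

```python
import numpy as np

def update_corticostriatal(W, cortical_input, striatal_activity, dopamine,
                           lr=0.1, decay=0.02):
    """Toy dopamine-gated plasticity: weights change with the product of pre- and
    postsynaptic activity, gated by the dopamine (reward-change) signal, and
    slowly decay in its absence (a stand-in for synaptic scaling)."""
    W = W + lr * dopamine * np.outer(striatal_activity, cortical_input)
    return (1.0 - decay) * W

# One trial: dopamine encodes the change in reward relative to the previous trial.
rng = np.random.default_rng(0)
W = rng.random((5, 10)) * 0.1              # 5 striatal units, 10 cortical inputs
cortical_input = rng.random(10)
striatal_activity = W @ cortical_input
reward_prev, reward_now = 0.4, 0.7
dopamine = reward_now - reward_prev
W = update_corticostriatal(W, cortical_input, striatal_activity, dopamine)
print(W.shape)
```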

Using this model, we simulated several learning tasks under conditions of different unexpected perturbations. When a perturbation was introduced, the model was capable of quickly switching away from pre-learned associations and learning novel cue-action associations. The analysis of the model reveals several features that may have general importance for brain control of movements: (1) potentiation of the cue-NOGO projections is crucial for the quick destruction of preexisting cue-action associations; (2) synaptic scaling (the decay of the corticostriatal synaptic weights in the absence of dopamine-mediated potentiation/depression) has a relatively short time-scale (10–20 trials); (3) quick learning is associated with relatively poor accuracy of the resulting movement. We suggest that the BG may be involved in a quick search for behavioral alternatives when conditions change, but not in the learning of skilled movements that require good precision.

P17 Axon guidance: modeling axonal growth in T-junction assay

Csaba Forro1, Harald Dermutz1,László Demkó1, János Vörös1

1LBB, ETH Zürich, Zürich, 8051, Switzerland

Correspondence: Csaba Forro - forro@biomed.ee.ethz.ch

BMC Neuroscience 2016, 17(Suppl 1):P17

Neuroscience currently investigates the brain at scales ranging from the whole organ, through brain slices, down to the single-cell level. Technological advances in the miniaturization of electrode arrays have enabled the investigation of neural networks comprising several neurons by recording electrical activity from every individual cell in the network. This level of complexity is key to studying the core principles at play in the machinery of the brain. Indeed, it is the first layer of complexity above the single cell that is still tractable for the human scientist without needing to resort to a ‘Big Data’ approach. In light of this, we strive to create topologically well-defined neural networks, akin to mathematical directed graphs, as model systems in order to study the basic mechanisms emerging in networks of increasing complexity and varying topology. This approach will also yield statistically sound and reproducible observations, something that is sought after in neuroscience [1].

The first step in realizing such a well-defined neural network is to reliably control the guidance of individual axons in order to connect the network of cells in a controlled way. For this purpose, we present a method consisting of obstacles forcing the axon to turn one way or the other. The setup is made of polydimethylsiloxane (PDMS), which is microstructured using state-of-the-art photolithography procedures. Two tunnels of 5 µm height are patterned into a 100 µm thick block of PDMS and connected in the shape of a T-junction (Fig. 16). Primary cortical neurons are inserted via entry holes at the base of the tunnels. The entry angle of the bottom tunnel (the “vertical part of the T”) into the junction is varied between 20° (steep entry) and 90° (vertical entry). We observe that the axons prefer to turn towards the smaller angle. We show how this observed angular selectivity in axon guidance can be explained by a simple model, and how this principle can be used to create topologically well-defined neural networks (Fig. 16B).
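A toy version of the area-based model is sketched below: the probability of turning each way at the junction is taken to be proportional to the area the growth cone can explore on that side, here crudely approximated by circular sectors whose opening angles are set by the entry angle. The geometry and parameters are illustrative assumptions, not the authors' model.

```python
import numpy as np

def turn_probabilities(entry_angle_deg, look_ahead=20.0):
    """Toy area-based rule: the probability of turning toward each side of the
    junction is proportional to the sector area the growth cone can explore there
    (sector opening angles set by the entry angle; purely illustrative geometry)."""
    theta = np.deg2rad(entry_angle_deg)
    area_small_angle_side = 0.5 * look_ahead**2 * (np.pi - theta)   # wider sector
    area_large_angle_side = 0.5 * look_ahead**2 * theta             # narrower sector
    total = area_small_angle_side + area_large_angle_side
    return area_small_angle_side / total, area_large_angle_side / total

for angle in (20, 45, 90):
    p_small, p_large = turn_probabilities(angle)
    print(f"entry {angle:2d} deg: P(turn toward smaller angle) = {p_small:.2f}")
# The preference for the smaller angle weakens as the entry approaches 90 degrees.
```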

Fig. 16

A The T-junction assay with an entry angle of 20°. The axon is expected to prefer a right-turn at this angle. B A simple model is constructed where the direction of growth of the axon is proportional to area (red) it can explore

Yuri Dabaghian1,2, Andrey Babichev1,2

Correspondence: Yuri Dabaghian - dabaghian@rice.edu

BMC Neuroscience 2016, 17(Suppl 1):P19

The reliability of our memories is nothing short of remarkable. Thousands of neurons die every day, synaptic connections appear and disappear, and the networks formed by these neurons constantly change due to various forms of synaptic plasticity. How can the brain develop a reliable representation of the world, and learn and retain memories, despite, or perhaps because of, such complex dynamics? Here we consider the specific case of spatial navigation in mammals, which is based on mental representations of their environments (cognitive maps) provided by the network of hippocampal place cells, neurons that become active only in a particular region of the environment, known as their respective place fields. Experiments suggest that the hippocampal map is fundamentally topological, i.e., more similar to a subway map than to a topographical city map, and hence amenable to analysis by topological methods [1]. By simulating the animal's exploratory movements through different environments, we studied how stable topological features of space become represented by assemblies of simulated neurons operating under a wide range of conditions, including variations in the place cells' firing rates, the sizes of the place fields, and the number of cells in the population [2, 3]. In this work, we use methods from Algebraic Topology to understand how the dynamic connections between hippocampal place cells influence the reliability of spatial learning. We find that although the hippocampal network is highly transient, the overall spatial map encoded by the place cells is stable.
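A minimal sketch of a persistent-homology readout for simulated place-field data is given below; it assumes the scikit-tda ripser package and uses a point cloud of place-field centers, which is a simplification of the cell-assembly complexes used in the cited work.

import numpy as np
from ripser import ripser   # scikit-tda package; an assumption, not the authors' toolchain

rng = np.random.default_rng(1)

# Simulated place-field centers in a square arena with a central hole (an "obstacle"),
# so the environment has one nontrivial 1D topological feature (a loop).
centers = rng.uniform(-1, 1, size=(300, 2))
centers = centers[np.linalg.norm(centers, axis=1) > 0.4]

# Persistent homology of the point cloud of field centers: a long-lived H1 bar
# indicates a loop of the environment recovered from the (noisy) cell population.
dgms = ripser(centers, maxdim=1)['dgms']
h1 = dgms[1]
lifetimes = h1[:, 1] - h1[:, 0]
print("number of H1 features:", len(h1))
print("longest H1 lifetime (the arena's central hole):", lifetimes.max())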

Acknowledgements: The work was supported by the NSF 1422438 grant and by the Houston Bioinformatics Endowment Fund.

P20 Theory of population coupling and applications to describe high order correlations in large populations of interacting neurons

Haiping Huang1

1RIKEN Brain Science Institute, Wako-shi, Saitama, Japan

Correspondence: Haiping Huang - physhuang@gmail.com

BMC Neuroscience 2016, 17(Suppl 1):P20

Correlations among neurons' spiking activities play a prominent role in deciphering the neural code. Various models have been proposed to understand the pairwise correlations in population activity. Modeling these correlations sheds light on the functional organization of the nervous system. In this study, we interpret correlations in terms of population coupling, a concept recently proposed to understand the multi-neuron firing patterns of the visual cortex of mouse and monkey [1]. We generalize population coupling to higher order (PC2), characterizing the relationship of pairwise firing with the population activity. We derive a practical dimensionality reduction method for extracting the low-dimensional representation parameters, and test our method on different types of neural data, including ganglion cells in the salamander retina onto which a repeated natural movie was projected [2], and layer 2/3 as well as layer 5 cortical cells in the medial prefrontal cortex (MPC) of behaving rats [3].
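A minimal sketch of first- and second-order population coupling computed from binned spike trains is given below; the exact normalization and inference procedure used in the study may differ.

import numpy as np

def population_coupling(spikes):
    """spikes: (n_neurons, n_bins) binary array.
    First-order coupling: correlation of each cell's activity with the summed
    activity of the rest of the population."""
    n = spikes.shape[0]
    pc1 = np.empty(n)
    for i in range(n):
        pop = spikes[np.arange(n) != i].sum(axis=0)
        pc1[i] = np.corrcoef(spikes[i], pop)[0, 1]
    return pc1

def population_coupling_2(spikes):
    """Second-order coupling (PC2): correlation of each pair's joint firing
    with the summed activity of the remaining population."""
    n = spikes.shape[0]
    pc2 = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mask = (np.arange(n) != i) & (np.arange(n) != j)
            pop = spikes[mask].sum(axis=0)
            pair = spikes[i] * spikes[j]
            pc2[i, j] = pc2[j, i] = np.corrcoef(pair, pop)[0, 1]
    return pc2

spikes = (np.random.default_rng(2).random((20, 5000)) < 0.1).astype(float)
print("PC1:", population_coupling(spikes).round(2))
print("mean |PC2|:", round(float(np.nanmean(np.abs(population_coupling_2(spikes)))), 3))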

For the retinal data, by considering the correlation between pairwise firing activity and the global population activity, i.e., the second-order population coupling, the three-cell correlations could be partially predicted (64.44 %), which suggests that PC2 acts as a key circuit variable for third-order correlations. The interaction matrix revealed here may be related to the previously reported overlapping modular structure of retinal neuron interactions [4]. In this structure, neurons interact locally with their adjacent neurons; in particular, this feature is scalable and applicable to larger networks.

About 94.79 % of three-cell correlations are explained by PC2 in the MPC circuit. The PC2 matrix shows a clear hub structure in the cortical circuit. Some neurons interact strongly with a large portion of the population, and such neurons may play a key role in shaping the collective spiking behavior during the working memory task. The hubs and non-local effects are consistent with findings reported in the original experimental paper [3].

Acknowledgements: We are grateful to Shigeyoshi Fujisawa and Michael J Berry for sharing the cortical and retinal data with us, respectively. We also thank Hideaki Shimazaki and Taro Toyoizumi for stimulating discussions. This work was supported by the program for Brain Mapping by Integrated Neurotechnologies for Disease Studies (Brain/MINDS) from the Japan Agency for Medical Research and Development (AMED).

P21 Design of biologically-realistic simulations for motor control

Sergio Verduzco-Flores1

Correspondence: Sergio Verduzco-Flores - sergio.verduzco@oist.jp

BMC Neuroscience 2016, 17(Suppl 1):P21

Several computational models of motor control, although apparently feasible, fail when simulated in 3-dimensional space with redundant manipulators [1, 2]. Moreover, it has become apparent that the details of musculoskeletal simulations, such as the muscle model used, can fundamentally affect the conclusions of a computational study [3].

There would be great benefit in being able to test theories of motor control within a simulation framework that brings realism to the musculoskeletal model and to the networks that control movement. In particular, it would be desirable to have: (1) a musculoskeletal model considered to be research-grade within the biomechanics community, (2) afferent information provided by standard models of the spindle afferent and the Golgi tendon organ, (3) muscle stimulation provided by a spiking neural network that follows the basic known properties of the spinal cord, and (4) a cerebellar network to support adaptive learning.

Creating this type of model is only now becoming practical, not only due to faster computers, but also due to properly validated musculoskeletal models and simulation platforms from the biomechanics community, as well as mature software and simulation techniques from the computational neuroscience community. We show how these can be harnessed in order to create simulations that are grounded both in physics and in neural implementation. This pairing of computational neuroscience and biomechanics is sure to bring further insights into the workings of the central nervous system.

P22 Towards understanding the functional impact of the behavioural variability of neurons

Filipa Dos Santos1, Peter Andras1

Correspondence: Filipa Dos Santos - f.d.s.brandao@keele.ac.uk

BMC Neuroscience 2016, 17(Suppl 1):P22

The same neuron may play different functional roles in the neural circuits to which it belongs. For example, neurons in the Tritonia pedal ganglia may participate in variable phases of the swim motor rhythms [1]. While such neuronal functional variability is likely to play a major role in the delivery of the functionality of neural systems, it is difficult to study in most nervous systems. We work on the pyloric rhythm network of the crustacean stomatogastric ganglion (STG) [2]. Typically, network models of the STG treat neurons of the same functional type as a single model neuron (e.g. the PD neurons), assuming the same conductance parameters for these neurons and implying their synchronous firing [3, 4]. However, simultaneous recordings of PD neurons show differences between the spike timings of these neurons. This may indicate functional variability of these neurons. Here we modelled the two PD neurons of the STG separately in a multi-neuron model of the pyloric network. Our neuron models comply with known correlations between conductance parameters of ionic currents. Our results reproduce the experimental finding of increasing spike time distance between spikes originating from the two model PD neurons during their synchronised burst phase. The PD neuron with the larger calcium conductance generates its spikes before the other PD neuron. Larger potassium conductance values in the follower neuron imply longer delays between spikes; see Fig. 17.

Fig. 17

The time distances between the first and second spikes of the simulated PD neurons as a function of the gK and gCaT conductances of the neuron with variable conductances. A first spikes. B Second spikes. The PD neuron with fixed conductances had gK = 1.5768 μS and gCaT = 0.0225 μS

Neuromodulators change the conductance parameters of neurons and maintain the ratios of these parameters [5]. Our results show that such changes may shift the individual contributions of the two PD neurons to the PD phase of the pyloric rhythm, altering their functionality within this rhythm. Our work paves the way towards an accessible experimental and computational framework for the analysis of the mechanisms and impact of functional variability of neurons within the neural circuits to which they belong.

Christoph Metzner1, Achim Schweikard2, Bartosz Zurowski3

1Science and Technology Research Institute, University of Hertfordshire, Hatfield, United Kingdom; 2Institute for Robotics and Cognitive Systems, University of Luebeck, Luebeck, Germany; 3Department of Psychiatry, University of Luebeck, Schleswig–Holstein, Luebeck, Germany

Correspondence: Christoph Metzner - c.metzner@herts.ac.uk

BMC Neuroscience 2016, 17(Suppl 1):P23

In recent years, a significant number of biomarkers and endophenotypic signatures of psychiatric illnesses have been identified; however, only a very limited number of computational models in support thereof have been described so far [1]. Furthermore, the few existing computational models typically investigate only one possible mechanism in isolation, disregarding the potential multifactoriality of the network behaviour [2]. Here we describe a computational instantiation of an endophenotypic finding for schizophrenia, an impairment of gamma entrainment in auditory click paradigms [3].

We used a model of primary auditory cortex from Beeman [4] and simulated a click entrainment paradigm with stimulation at 40 Hz, to investigate gamma entrainment deficits, and at 30 Hz as a control condition. We explored the multifactoriality by performing an extensive parameter search (approx. 4000 simulations). We focused on synaptic and connectivity parameters of the fast-spiking inhibitory interneurons in the model (i.e. the number and strength of, and the GABAergic decay times at, I-to-E and I-to-I connections, varied independently). We performed a time–frequency analysis of simulated EEG signals and extracted the power in the 40 Hz and the 30 Hz bands, respectively. Using the power in the 40 Hz band for 40 Hz stimulation, we identified regions in the parameter space showing strong reductions in gamma entrainment. For these we calculated cycle-averaged EEG signals and spike time histograms of both network populations, in order to explore the dynamics underlying the reduction in gamma power.
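A minimal sketch of the entrainment readout is shown below, using a toy "EEG" trace in place of the simulated auditory-cortex output; the Welch-based band-power estimate and all values are illustrative assumptions.

import numpy as np
from scipy.signal import welch

fs = 1000.0                                  # Hz, sampling rate of the simulated EEG
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(3)

def band_power(sig, f_lo, f_hi):
    """Total Welch power in a frequency band (here the 38-42 Hz band around the click rate)."""
    f, pxx = welch(sig, fs=fs, nperseg=512)
    band = (f >= f_lo) & (f <= f_hi)
    return float(pxx[band].sum())

# Toy "EEG": a 40 Hz entrained component plus noise; the impaired case has weaker entrainment.
eeg_ctrl = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)
eeg_impaired = 0.3 * np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)

for label, sig in (("control", eeg_ctrl), ("reduced entrainment", eeg_impaired)):
    print(label, "40 Hz band power:", round(band_power(sig, 38, 42), 3))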

We find three regions in the parameter space which show strong reductions in gamma power. These three regions, however, have very different parameter settings and show very different oscillatory dynamics. The first, which produces the strongest reduction, is characterised by a strong prolongation of decay times at I-to-E synapses and strong and numerous I-to-E connections. Cycle-averaged spike histograms show a broadening of distributions, which indicates that the overall synchrony is reduced, leading to the strong reduction in gamma power. However, this parameter setting also produced a strong reduction of power in the 30 Hz control condition, which is not seen experimentally. The second region is characterized by prolonged I-to-I decay times together with numerous and strong I-to-I connectivity. Here, a second peak appears in the cycle-averaged spike histogram of the excitatory population, which leads to a loss of synchrony and thus a reduction in gamma power. The third parameter region is also characterized by prolonged I-to-I decay times. Moreover, it is associated with a reduction in I-to-I connection numbers and strengths together with strong I-to-E connections. Here, we found that in every second cycle the spike histogram of the inhibitory neurons showed two peaks, one at the beginning and one in the middle of the cycle. This second peak then inhibited the excitatory neurons' response to the next stimulation. Hence, the EEG signal showed beat-skipping, i.e. every second gamma peak was suppressed, resulting in a decrease in gamma power.

Performing an extensive parameter search in an in silico instantiation of an endophenotypic finding for schizophrenia, we have identified distinct regions of the parameter space that give rise to network-level behaviour analogous to that found electrophysiologically in schizophrenic patients [3]. However, the oscillatory dynamics underlying this behaviour differ substantially across regions. These regions might correspond to different subtypes of schizophrenic patients and, hence, because of their differences in underlying dynamics, might require different targets for alleviating the deficits.

Correspondence: James P. Roach - roachjp@umich.edu

BMC Neuroscience 2016, 17(Suppl 1):P24

In the brain, representations of the external world are encoded by patterns of neural activity. It is critical that representations be stable, but still easy to move between. This phenomenon has been modeled at the network level as auto-associative memory. In auto-associative network models, such as the Hopfield network, representations, or memories, are stored within synaptic weights and form stable fixed points, or attractors [1]. Spike frequency adaptation (SFA) provides a biologically plausible mechanism for switching between stable fixed points in the Hopfield network. In the present work we show that for low levels of SFA, networks stabilize in the representation corresponding to the nearest memory in activity space, regardless of its strength. In networks with higher levels of SFA, only the pattern corresponding to the strongest memory, a global minimum in activity space, is stabilized. The effects of SFA are similar to fast, or thermodynamic, noise, but SFA also allows for deterministic destabilization of memories, leading to periodic activation of memories through time. We argue that control of the SFA level is a universal mechanism for network-wide attractor selectivity. SFA is tightly regulated by the neurotransmitter acetylcholine (ACh) and can be changed on behaviorally relevant timescales. To support this claim we demonstrate that SFA controls the selectivity of spatial attractors in a biophysical model of cholinergic modulation in cortical networks [2, 3]. This model produces localized bumps of firing. A region with enhanced recurrent excitation acts as an attractor for the bump location, and selectivity for these regions quickly diminishes as the SFA level increases [3]. When multiple spatial attractors of varying strengths are stored in a network, moderate increases in SFA level lead to the weak attractors being destabilized and activity localizing within the strongest attractor. This effect is qualitatively similar to the effects of SFA in the Hopfield network. These results indicate that ACh controls memory recall and perception within the cortex by regulating SFA, and they explain the important role cholinergic modulation plays in cognitive functions such as attention and memory consolidation [4].
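The following toy sketch (not the biophysical model of cholinergic modulation used in the study) illustrates the Hopfield-plus-adaptation idea: memories of unequal strength are stored, an SFA-like variable opposes the currently retrieved pattern, and as the adaptation gain grows, progressively only the strongest memory remains stable.

import numpy as np

rng = np.random.default_rng(4)
N = 500
strength = np.array([1.0, 0.7, 0.4])                 # memories of unequal strength
patterns = rng.choice([-1.0, 1.0], size=(3, N))
W = (patterns.T * strength) @ patterns / N           # strength-weighted Hebbian weights
np.fill_diagonal(W, 0.0)

def stable_memories(sfa_gain, steps=50):
    """Start the network in each stored memory with adaptation equilibrated and
    report whether that memory stays retrieved (overlap > 0.9) throughout relaxation."""
    retrieved = []
    for mu in range(3):
        s = patterns[mu].copy()
        a = s.copy()                                  # adaptation variable tracks recent activity
        overlaps = []
        for _ in range(steps):
            s = np.sign(W @ s - sfa_gain * a + 1e-12) # adaptation opposes the current pattern
            a += 0.2 * (s - a)
            overlaps.append(patterns[mu] @ s / N)
        retrieved.append(bool(min(overlaps) > 0.9))
    return retrieved

for g in (0.0, 0.5, 0.8):
    print(f"SFA gain {g}: memories [strong, medium, weak] stable? {stable_memories(g)}")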

Acknowledgements: JPR was supported by an NSF Graduate Research Fellowship Program under Grant No. DGE 1256260 and a UM Rackham Merit Fellowship. MRZ and LMS were supported by NSF PoLS 1058034.

Correspondence: Michal Zochowski - michalz@umich.edu

BMC Neuroscience 2016, 17(Suppl 1):P25

Dynamic neural representations underlie cognitive processing and are an outcome of complex interactions of network structural properties and cellular dynamics. We have developed a new framework to study the dynamics of network representations during rapid memory formation in the hippocampus in response to contextual fear conditioning (CFC) [1]. Experimentally, this memory paradigm is achieved by exposing mice to foot shocks while in a novel environment and later testing for behavioral responses when they are reintroduced to that environment. We apply the average minimum distance (AMD) functional connectivity algorithm to spiking data recorded before, during, and after CFC using implanted stereotrodes. Comparing changes in functional connectivity using cosine similarity, we find that stable functional representations correlate well with animal performance in learning. Using extensive computer simulations, we show that the most robust changes compared to baseline occur when the system resides near criticality. We attribute these results to the emergence of long-range correlations during the initial process of memory formation. Furthermore, we have developed a generic model using a generalized Hopfield framework to link the formation of a novel memory representation to changes in functional stability. The network initially stores a single representation, exemplifying biologically pre-existing (old) memories, and is then presented with a new representation by freezing a randomly chosen fraction of nodes from a novel pattern. We show that imposing fractional input of the new representation may partially stabilize this representation near the phase transition (critical) point. We further show that invoking synaptic plasticity rules may fully stabilize this new representation only when the dynamics of the network reside near criticality. Taken together, these results show, for the first time, that only when the network is at criticality can it stabilize novel memory representations, the dynamical regime which also yields an increase in network stability. Furthermore, our results match well the experimental data observed from CFC experiments.
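A minimal sketch of the cosine-similarity comparison between functional connectivity matrices is shown below; the AMD estimation itself is not reproduced, and the matrices are synthetic.

import numpy as np

def cosine_similarity(fc_a, fc_b):
    """Cosine similarity between two functional connectivity matrices,
    computed over the upper triangle (excluding the diagonal)."""
    iu = np.triu_indices_from(fc_a, k=1)
    a, b = fc_a[iu], fc_b[iu]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(5)
fc_pre = rng.random((30, 30)); fc_pre = (fc_pre + fc_pre.T) / 2      # "baseline" connectivity
fc_post = fc_pre + 0.1 * rng.standard_normal((30, 30))               # perturbed connectivity
fc_post = (fc_post + fc_post.T) / 2
print("pre/post similarity:", round(cosine_similarity(fc_pre, fc_post), 3))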

Correspondence: Changsong Zhou - cszhou@hkbu.edu.hk

BMC Neuroscience 2016, 17(Suppl 1):P26

Self-organized critical states (SOCs) and stochastic oscillations (SOs) are simultaneously observed in neural systems [1], which appears to be theoretically contradictory since SOCs are characterized by scale-free avalanche sizes whereas oscillations indicate typical scales. Here, we show that SOs can emerge in SOC systems of small size due to temporal correlation between large avalanches at the finite-size cutoff, resulting from the accumulation-release process in SOCs. In contrast, the critical branching process without accumulation-release dynamics cannot exhibit oscillations. The reconciliation of SOCs and SOs is demonstrated both in the sandpile model and, robustly, in biologically plausible neuronal networks. The oscillations can be suppressed if external inputs eliminate the prominent slow accumulation process, providing a potential explanation of the widely studied Berger effect or event-related desynchronization in neural responses. The features of neural oscillations and their suppression are confirmed during task processing in monkey eye-movement experiments. Our results suggest that finite-size, columnar neural circuits may play an important role in generating neural oscillations around critical states, potentially enabling the functional advantages of both SOCs and oscillations for sensitive responses to transient stimuli. These results have been published in [2].

P27 NeuroField: a C++ library for fast simulation of 2D neural field models

1School of Physics, University of Sydney, Sydney, New South Wales, 2006, Australia; 2Center for Integrative Brain Function, University of Sydney, Sydney, New South Wales, 2006, Australia

Correspondence: Paula Sanz-Leon - paula.sanz-leon@sydney.edu.au

BMC Neuroscience 2016, 17(Suppl 1):P27

Neural field theory [1] has addressed numerous questions regarding brain dynamics and its interactions across many scales, becoming a highly flexible and unified framework for the study and prediction of experimental observables of the electrical activity of the brain. These include EEG spectra [2, 3], evoked response potentials, age-related changes to the physiology of the brain [4], epileptic seizures [5, 6], and synaptic plasticity phenomena [7]. However, numerical simulations of neural field models are not widely available, despite being extremely useful in cases where analytic solutions are less tractable. This work introduces the features of NeuroField, a research-ready library for simulating a wide range of neural-field-based systems involving multiple structures (e.g., cortex, cortex and thalamic nuclei, and basal ganglia). The link between a given neural field model, its mathematical representation (i.e., a system of delay partial differential equations with spatially periodic boundary conditions) and its computational implementation is described. The resulting computational model can represent systems ranging from spatially extended to neural-mass-like, and it has been extensively validated against analytical solutions and against experiment [1–10]. To illustrate its flexibility, a range of simulations modeling a variety of arousal-, sleep- and epilepsy-state phenomena is presented [8, 9]. NeuroField has been written using object-oriented programming in C++ and is bundled together with MATLAB routines for quantitative offline analysis, such as spectral and dynamic spectral analysis.

Yoonsuck Choe1, Huei-Fang Yang2

Correspondence: Yoonsuck Choe - choe@tamu.edu

BMC Neuroscience 2016, 17(Suppl 1):P28

How can we decode neural activation patterns (Fig. 18A)? This is a key question in neuroscience. We as scientists have the luxury of controlling the stimulus, based on which we can find the meaning of the spikes (Fig. 18C, right). However, as shown in Fig. 18A (and C, left), the problem seems intractable from the point of view of the brain itself, since neurons deeply embedded in the brain do not have direct access to the stimulus. In [1] and related work, we showed that the decoding problem seems intractable only because we left out the motor system from the picture. Figure 18D shows how motor action can help processes deeply embedded in the brain understand the meaning of the spikes, by generating motor behavior and observing the resulting change in the neural spikes. Here, a key principle is to generate motion that keeps the neural spike pattern invariant over time (Fig. 18E), which allows the following to coincide: (1) the property of the motion (diagonal movement) and (2) the encoded property of the input (45° orientation). Using reinforcement learning, we showed that the invariance criterion leads to near-optimal state-action mappings for synthetic and natural image inputs (Fig. 18F, G), where the encoded property of the input is mapped to a congruent motor action. Furthermore, we showed that the receptive fields can be learned simultaneously with the state-action mapping (Fig. 18H). The main lesson we learned is that the encoding/decoding framework for the neural code can lead to a dead end unless the problem is posed from the perspective of the brain itself, and that the motor system can play an important role in the shaping of the sensory/perceptual primitives (also see [2]).

P29 Neural computation in a dynamical system with multiple time scales

Yuanyuan Mi1,†, Xiaohan Lin1,†, Si Wu1

Correspondence: Si Wu - wusi@bnu.edu.cn

† Y.M. and X.L. contributed equally to this work

BMC Neuroscience 2016, 17(Suppl 1):P29

The brain performs computation by updating its internal states in response to external inputs. Neurons, synapses, and circuits are the fundamental units for implementing brain functions. At the single-neuron level, a neuron integrates synaptic inputs and generates spikes if its membrane potential crosses the threshold. At the synapse level, neurons interact with each other to enhance or depress their responses. At the network level, the topology of the neuronal connection pattern shapes the overall population activity. These fundamental computational units at different levels encompass rich short-term dynamics, for example spike-frequency adaptation (SFA) in single neurons [1], and short-term facilitation (STF) and depression (STD) at neuronal synapses [2]. These dynamical features typically span a broad range of time scales and exhibit large diversity across brain regions. Although they play a vital part in the emergence of various brain functions, it remains unclear what computational benefit the brain gains from having such variability in short-term dynamics.

In this study, we propose that one benefit of having multiple dynamical features with varied time scales is that the brain can fully exploit their advantages to implement computational tasks that are otherwise contradictory. To demonstrate this idea, we consider STF, SFA and STD with increasing time constants in the dynamics of a continuous attractor neural network (CANN). A potential brain region with these parameter values is the sensory cortex, where the neuronal synapses are known to be STD-dominated. We show that the network is able to implement three seemingly contradictory computations: persistent activity, adaptation, and anticipative tracking (see Fig. 19). Simply stated, the role of STF is to hold persistent activity in the absence of external drive, the role of SFA is to support anticipative tracking of a moving input, and the role of STD is to eventually suppress neural activity for a static or transient input. Notably, the time constants of SFA and STD can be swapped with each other, since SFA and STD have similar effects on the network dynamics. Nevertheless, we need to include both of them, since a single negative-feedback modulation is unable to achieve both anticipative tracking and plateau decay concurrently. The implementation of each individual computational task based on a single dynamical feature has been studied previously. Here, our contribution is to reveal that these tasks can be realized concurrently in a single neural circuit by combining dynamical features with coordinated time scales. We hope that this study will shed light on our understanding of how the brain orchestrates its rich dynamics at various levels to realize abundant cognitive functions.

The neocortex is composed of 6 different layers. In the primary visual cortex (V1), the functional architecture of basic stimulus selectivity is experimentally found to be similar across these layers [1]. The organization in functional columns justifies the use of cortical models describing only two-dimensional layers and disregarding functional organization in the third dimension.

Here we show theoretically that already small deviations from an exact columnar organization can lead to non-trivial three-dimensional functional structures (see Fig. 20). Previously, two-dimensional orientation domains were modeled by Gaussian random fields, the maximum entropy ensemble, allowing for an exact calculation of pinwheel densities [2]. Pinwheels are points surrounded by neurons preferring all possible orientations and these points generalize to pinwheel strings in three dimensions. We extend the previous two-dimensional model characterized by its typical scale of orientation domains to a three-dimensional model by keeping the typical scale in each layer and introducing a columnar correlation length. We dissect in detail the three-dimensional functional architecture for flat geometries and for curved gyri-like geometries with different columnar correlation lengths. The model is analyzed analytically complemented by numerical simulations to obtain solutions for its intrinsic statistical parameters. We find that (i) pinwheel strings are generally curved, (ii) for large curvatures closed loops and reconnecting pinwheel strings appear and (iii) for small columnar correlation lengths a novel transition to a rodent-like interspersed organization emerges.

This theory extends the work of [2, 3] by adding a columnar dimension and supplements the work of [4] by a rigorous statistical treatment of the three-dimensional functional architecture of V1. Furthermore, the theory sheds light on the required precision of experimental techniques for probing the fine structure of the columnar organization in V1.

Yoriko Yamamura1, Jeffery R. Wickens1

Correspondence: Yoriko Yamamura - yoriko@oist.jp

Curiosity in humans appears to follow an inverted U-shaped function of unpredictability: stimuli that are neither too predictable nor too unpredictable evoke the greatest interest [1]. Rewarding moderate sensory unpredictability is an effective strategy for reinforcing explorations that improve our predictive models of the world [1, 2]. However, the computations and neural circuits underlying this unpredictability-dependence of curiosity remain largely unknown.

A rodent model of curiosity would be useful for elucidating its underlying neural circuitry, because more specific manipulation techniques are available than in humans. It has been shown that mice prefer unpredictable sounds to predictable ones when the sounds are paired with light [3]. However, frequency of stimulus presentation was a potential confound in this study. Furthermore, a more systematic sampling of stimulus unpredictability is necessary to determine whether a rodent analogue of the U-shaped curve indeed exists.

We have devised an operant conditioning paradigm building on [3], using sensory stimuli as “reward” to quantify the rewardingness of various levels of sensory predictability for rats. Rats (Long Evans, male) are placed in a soundproofed chamber with two nosepoke holes. A combination of sound and light stimuli is presented whenever the rat pokes the active hole; no stimulus is associated with the inactive hole (counterbalanced across subjects).

We hypothesize that reward is also a U-shaped function of stimulus unpredictability in rats, and that this is due to a Bayesian precision weighting placing more importance on deviations from reliable predictions. This departs from previous learning-based accounts [2]. There are five experimental conditions, systematically varied in the unpredictability of the sound stimuli (as quantified by entropy H), and a control condition in which a nosepoke in neither hole has any consequence (Fig. 21). Specifically, the sound stimuli are random sequences of two possible 125-ms sound snippets of equal value to the rat, with their frequencies of occurrence varied across conditions to vary H. Each sequence contains eight such snippets. Across all conditions, the light stimulus simply remains on while the sound is being played; it is added to enhance the rats' responding to auditory stimuli [3]. We predict that the rats' active nosepoke responses will be maximally increased at intermediate H (Fig. 21).
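The entropy measure used to grade unpredictability can be computed directly; the sketch below assumes a Bernoulli choice between the two snippets and uses illustrative occurrence probabilities (the actual condition values are not specified here).

import numpy as np

def snippet_entropy(p):
    """Shannon entropy (bits per snippet) of a choice between two equally valued
    sound snippets, one of which occurs with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Illustrative occurrence probabilities from fully predictable to maximally unpredictable.
for p in (1.0, 0.875, 0.75, 0.625, 0.5):
    h = snippet_entropy(p)
    print(f"P(snippet A) = {p:5.3f} -> H = {h:.3f} bits per snippet "
          f"({8 * h:.2f} bits per 8-snippet sequence)")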

Fig. 21

Schematic of the sound stimuli used in all conditions, and the predicted reward for each

The proposed assay quantifies the rewardingness of sensory unpredictability in rats. By systematically varying the entropy of the sound sequence, we can probe the computations behind the putative unpredictability-driven reward. The assay can furthermore be used to study the effect of pharmacological or genetic manipulations on unpredictability-driven reward, in order to validate mechanistic implementations of such computations.

Correspondence: Christina M. Weaver - christina.weaver@fandm.edu

BMC Neuroscience 2016, 17(Suppl 1):P32

Recently we developed a three-stage optimization method for fitting conductance-based models to data [1]. The method makes novel use of Latin hypercube sampling (LHS), a statistical space-filling design, to determine appropriate weights automatically for the various error functions that quantify the difference between empirical target and model output. The method uses differential evolution to fit parameters active in the subthreshold and suprathreshold regimes (below and above action potential threshold). We have applied the method to spatially extended models of layer 3 pyramidal neurons from the prefrontal cortex of adult rhesus monkeys, in which in vitro action potential firing rates are significantly higher in aged versus young animals [2]. Here we validate our optimization method by testing its ability to recover parameters used to generate synthetic target data. Results from the validation fit the voltage traces of the synthetic target data almost exactly (Fig. 22A–C), whether fitting a model with 4 ion channels (10 parameters) or 8 ion channels (23 parameters). The optimized parameter values are either identical to, or near, the original target values (Fig. 22D–F), except for a few parameters that were not well constrained by the simulated protocols. Further, our LHS-based scheme for weighting error functions is significantly more efficient at recovering target parameter values than weighting all error functions equally or choosing weights manually. We are now using the method to fit models to data from several young, middle-aged, and aged monkeys. Adding new conductances to the model, and allowing altered channel kinetics in the axon initial segment versus the soma, improves the quality of the model fits to data. We use published results from empirical studies of layer 3 neocortical pyramidal neurons to determine whether the optimized parameter sets are biologically plausible.
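A minimal sketch of the two sampling/optimization ingredients named above is shown below, applied to a toy two-parameter model rather than the conductance-based neuron models; in the published method the LHS stage is used to set error-function weights, which is not reproduced here.

import numpy as np
from scipy.stats import qmc
from scipy.optimize import differential_evolution

# Toy two-parameter "model": an exponentially decaying trace; the target is generated
# with known parameters so the fit can be validated by parameter recovery.
t = np.linspace(0, 50, 200)
true_params = np.array([1.2, 10.0])                   # amplitude, time constant
target = true_params[0] * np.exp(-t / true_params[1])

def error(params):
    a, tau = params
    return float(np.mean((a * np.exp(-t / tau) - target) ** 2))

bounds = [(0.1, 5.0), (1.0, 50.0)]

# Stage 1: Latin hypercube sampling of the parameter space, here used only to
# characterize the error landscape before the evolutionary search.
sampler = qmc.LatinHypercube(d=2, seed=0)
samples = qmc.scale(sampler.random(n=64), *zip(*bounds))
landscape = np.array([error(p) for p in samples])
print("LHS error range:", landscape.min(), "-", landscape.max())

# Stage 2: differential evolution on the (possibly weighted) error.
result = differential_evolution(error, bounds, seed=0, tol=1e-10)
print("recovered parameters:", result.x.round(3), "true:", true_params)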

Fig. 22

A–C Membrane potential of the synthetic target (black), and of randomly chosen members of the final population (colors, overlaid almost exactly), from three validation studies. Optimized 10 and 23 parameters in A–C respectively. D–F Parameter values used to generate synthetic data (black lines), and mean ± standard deviation of values recovered in the searches (colored circles), normalized to the range used in the optimization

Hu He1, Xu Yang2, Hailin Ma1, Zhiheng Xu1, Yuzhe Wang1

Correspondence: Xu Yang - yangxu@tsinghua.edu.cn

BMC Neuroscience 2016, 17(Suppl 1):P33

In this work, an algorithm to build self-growing and self-organizing neuron networks according to external signals is presented, in an attempt to build neuron networks with high intelligence. This algorithm takes a bionic approach to building complex neuron networks. We begin with very simple external signals to provoke neurons.

In order to propagate the signals, neurons seek to connect to each other, thus building neuron networks. The generated networks are verified and optimized, and treated as seeds to build more complex networks. We then repeat this process, using more complex external signals to build more complex neuron networks. A parallel processing method is presented to enhance the computational efficiency of the algorithm and to help build large-scale neuron networks in reasonable time. The results show that neuron networks built by our algorithm can self-grow and self-organize as the complexity of the input external signals increases. Moreover, with the screening mechanism, a neuron network that can identify different input external signals is built successfully (Fig. 23).

Fig. 23

Neuron network generated by our algorithm

Acknowledgements: This work is supported by the Core Electronic Devices, High-End General Purpose Processor, and Fundamental System Software of China under Grant No. 2012ZX01034-001-002, the National Natural Science Foundation of China under Grant No. 61502032, Tsinghua National Laboratory for Information Science and Technology (TNList), and Samsung Tsinghua Joint Laboratory.

Kwangyeol Baek1,2, Laurel S. Morris1, Prantik Kundu3, Valerie Voon1

1Department of Psychiatry, University of Cambridge, Cambridge, CB2 0QQ, United Kingdom; 2Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, South Korea; 3Departments of Radiology and Psychiatry, Icahn School of Medicine at Mount Sinai, New York City, 10029, USA

Correspondence: Kwangyeol Baek - kb567@cam.ac.uk

BMC Neuroscience 2016, 17(Suppl 1):P34

The efficient organization and communication of brain networks underlies cognitive processing, and disruption of resting-state brain networks has been implicated in various neuropsychiatric conditions, including addiction disorders. However, few studies have focused on whole-brain networks in the maladaptive consumption of natural rewards in obesity and binge-eating disorder (BED). Here we use a novel multi-echo resting-state functional MRI (rsfMRI) technique along with a data-driven graph theory approach to assess global and regional network characteristics in obesity and BED.

We collected multi-echo rsfMRI scans from 40 obese subjects (including 20 BED patients) and 40 healthy controls, and used multi-echo independent component analysis (ME-ICA) to remove non-BOLD noise. We estimated the normalized correlation between the mean rsfMRI signals of 90 brain regions of the Automated Anatomical Labeling atlas, and computed global and regional network metrics on the binarized connectivity matrix at density thresholds of 5–25 %. In addition, we confirmed the observed alterations in network metrics using the Harvard-Oxford atlas parcellated into 470 even-sized regions.
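A minimal sketch of the graph-metric pipeline (binarize a correlation matrix at a target edge density, then compute efficiency and modularity) is given below, using random time series in place of the rsfMRI data and NetworkX functions as stand-ins for the authors' toolchain.

import numpy as np
import networkx as nx
from networkx.algorithms import community

def binarize_at_density(corr, density):
    """Keep the strongest correlations so that the given fraction of all possible
    edges is retained, then return the resulting binary graph."""
    n = corr.shape[0]
    iu = np.triu_indices(n, k=1)
    weights = corr[iu]
    k = int(density * weights.size)
    thresh = np.sort(weights)[-k]
    adj = np.zeros_like(corr)
    adj[iu] = weights >= thresh
    return nx.from_numpy_array(adj + adj.T)

rng = np.random.default_rng(6)
ts = rng.standard_normal((90, 300))                   # 90 regions x 300 time points (synthetic)
corr = np.corrcoef(ts)

G = binarize_at_density(corr, density=0.15)
communities = community.greedy_modularity_communities(G)
print("global efficiency:", round(nx.global_efficiency(G), 3))
print("local efficiency:", round(nx.local_efficiency(G), 3))
print("modularity:", round(community.modularity(G, communities), 3))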

Obese subjects exhibited significantly reduced global and local efficiency as well as decreased modularity in the whole-brain network compared to healthy controls (Fig. 24). Both BED patients and obese subjects without BED exhibited the same alterations of network metrics compared with healthy controls, but the two obese groups did not differ from each other. In regional network metrics, the bilateral putamen, thalamus and right pallidum exhibited profoundly decreased nodal degree and efficiency in obese subjects, and the left superior frontal gyrus showed decreased nodal betweenness in obese subjects (all p < 0.05, Bonferroni correction). Network-based statistics revealed a cortico-striatal/cortico-thalamic network with significantly decreased functional connectivity, consisting of the bilateral putamen, pallidum, thalamus, primary motor cortex, primary somatosensory cortex, supplementary motor area, paracentral lobule, superior parietal lobule, superior temporal cortex and left amygdala. Interestingly, when examining the same network properties using only single-echo rsfMRI data analysis without ME-ICA, we found no significant differences between groups.

Therefore, using data-driven graph theory analysis of multi-echo rsfMRI data, we highlight more subtle impairments in cortico-striatal/cortico-thalamic networks in obesity that have previously been associated with substance addictions. We emphasize global impairments in network efficiency in obesity, with local network organization disrupted towards more random topology. Mathematically capturing brain network alterations in obesity provides novel insights into potential biomarkers and therapeutic targets.

P35 Dynamics of cooperative excitatory and inhibitory plasticity

Everton J. Agnes1, Tim P. Vogels1

1Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, OX1 3SR, UK

Correspondence: Everton J. Agnes - everton.agnes@cncb.ox.ac.uk

BMC Neuroscience 2016, 17(Suppl 1):P35

Neurons receive balanced excitatory and inhibitory inputs, a phenomenon thought to be essential for a variety of computations [1–3]. Inhibitory synaptic plasticity is an obvious candidate for imposing this balanced input regime [2, 4], leaving excitatory synapses available to learn patterns and memories. Recent experimental work seems to agree with this notion of collaborative excitatory and inhibitory plasticity [4], but recent models do not take direct interactions into consideration. Instead, learning rules are usually tuned to interact indirectly but constructively via the firing rates they elicit [3, 5]. Without proper parameter tuning, this can be problematic because excitatory and inhibitory synaptic plasticity models may have different homeostatic set points, making synaptic weights fluctuate wildly (Fig. 25A, B; green lines). Here we present a hybrid model of inhibitory synaptic plasticity that combines the simplicity of spike-based models with the addition of an excitatory/inhibitory input dependence. It captures recent experimental findings showing that changes at inhibitory synapses are strongly correlated with the balance between excitation and inhibition, and that inhibitory synapses do not change when excitatory input is blocked [4]. Essentially, our model is a symmetric spike-timing-dependent plasticity (STDP) rule in which the learning rate is controlled by excitatory and inhibitory activities: a spike-timing- and current-dependent plasticity (STCDP) model. Balance is maintained, but the learning rule does not impose fixed-point attractor dynamics on post-synaptic neurons, because there is no change in inhibitory synapses once the total input is balanced. Inhibitory synapses change depending on excitatory synapses, which means that plasticity depends on at least three synaptic participants (trisynaptic) rather than only two (bisynaptic). We show that when combined with an excitatory synaptic plasticity model, both excitatory and inhibitory weights converge to stable values, as the firing rate reaches the fixed point imposed by the excitatory learning rule (Fig. 25B; yellow lines). More importantly, the learning rule allows efficient and stable learning of new weights when the balance is disrupted, opening the door for effective and stable learning of arbitrary synaptic patterns.
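A minimal sketch of a plasticity update in the spirit of the STCDP rule is given below: a symmetric spike-timing-dependent change whose magnitude is gated by the momentary excitatory/inhibitory imbalance, so that no change occurs when inputs are balanced or when excitation is absent. The functional form and parameters are assumptions, not the authors' equations.

import numpy as np

def stcdp_update(w_inh, pre_trace, post_trace, g_exc, g_inh,
                 eta=1e-3, pre_spike=False, post_spike=False):
    """Symmetric spike-timing-dependent rule in which the effective learning rate
    scales with the excitatory/inhibitory current imbalance: no change when the
    inputs are balanced and no change when excitatory input is blocked."""
    imbalance = g_exc - g_inh                  # detailed balance -> imbalance ~ 0
    gate = 0.0 if g_exc <= 0 else imbalance    # no plasticity without excitatory input
    dw = 0.0
    if pre_spike:
        dw += eta * gate * post_trace          # symmetric potentiation/depression on pre spikes
    if post_spike:
        dw += eta * gate * pre_trace           # symmetric term on post spikes
    return max(0.0, w_inh + dw)                # inhibitory weights stay non-negative

# Example call inside a simulation loop (the traces decay with the STDP time constant):
w = stcdp_update(w_inh=0.5, pre_trace=0.8, post_trace=0.3,
                 g_exc=1.2, g_inh=0.9, pre_spike=True)
print("updated inhibitory weight:", round(w, 4))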

Fig. 25

A Schematics representing the neuronal network. A group of 2000 excitatory neurons and 500 inhibitory neurons are recurrently connected with sparse connectivity and the excitatory neurons receive random input from an external pool of neurons. B Excitatory neurons’ mean firing-rate (top), mean excitatory weight onto excitatory neurons (middle) and mean inhibitory weight onto excitatory neurons (connections marked as plastic in A). Simulation of the neuronal network with a spike-based inhibitory learning rule is represented by green lines (STDP) while simulation with our novel spike-timing- and current-dependent learning rule is shown in yellow (STCDP). The dashed lines represent the fixed points imposed by the excitatory (high) and inhibitory (low) learning rules. The low fixed point only exists for the inhibitory STDP model (simulation represented by the green lines)

Acknowledgements: This work was partially funded by the Brazilian agency CNPq (Grant Agreement Number 235144/2014-2) and the Sir Henry Dale Fellowship (Grant Agreement WT100000).

William F. Podlaski1, Tim P Vogels1

1Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK

Correspondence: William F. Podlaski - william.podlaski@cncb.ox.ac.uk

BMC Neuroscience 2016, 17(Suppl 1):P36

Neural oscillations, the periodic synchronisation of neuronal spiking, are a common feature of brain activity, with several hypothesised functions relating to information flow, attention and brain state [1]. Previous experimental work has shown that oscillatory activity correlates with moments of heightened attention, and that communication between different brain areas is often marked by an increase in oscillatory coherence between the regions [2]. Theoretical and modelling work has helped to explore the mechanisms behind neuronal oscillations, and some of their effects on neural coding and signal propagation [3]. Recently, theoretical studies have explored how resonance might affect signal processing [4, 5] and how information can be propagated along different pathways according to oscillatory phase and frequency [6].

We expand on this work here by studying how resonance at the single-neuron level might be used for frequency-dependent gating of information flow in neuronal networks. We show that, in feed-forward spiking network simulations, background oscillations can synchronise or desynchronise the spikes of a propagated signal, changing its content and emphasis from a rate code to a synfire code or vice versa. Such a mechanism can modulate information flow without rewiring the signal pathways themselves, allowing specific downstream readout targets to be selected. Building on this idea, we can create entire pathways that can be selectively (in-)activated by different background oscillation frequencies without changing the connectivity of the network. We hypothesise that neuronal resonance, combined with resonance in synapses and network motifs, can allow for precise oscillatory gating of information in cortex. Building on previous studies of resonance and oscillatory signal propagation [4–6], we propose a plausible mechanism for how fast and precise frequency-dependent gating might be achieved in the brain.

Acknowledgements: Research was supported by a Sir Henry Dale Royal Society and Wellcome Trust Research Fellowship (WT100000).

Martin Giese1, Pradeep Kuravi2, Rufin Vogels2

Correspondence: Martin Giese - martin.giese@uni-tuebingen.de

BMC Neuroscience 2016, 17(Suppl 1):P37

Under repeated stimulation, neurons in higher-level visual cortex show adaptation effects. Such effects likely influence repetition suppression paradigms in fMRI studies and the formation of high-level after-effects, e.g. for faces [1]. A variety of theoretical explanations has been discussed, which are difficult to distinguish without detailed electrophysiological data [2]. Meanwhile, detailed physiological experiments on the adaptation of shape-selective neurons in inferotemporal cortex (area IT) have provided constraints that help to narrow down possible neural processes. We propose a neurodynamical model that reproduces a number of these experimental observations with biophysically plausible neural circuits. Our model uses the mean-field limit and consists of a neural field of shape-selective dynamic linear-threshold neurons that is augmented by several adaptation processes: (i) spike-rate adaptation; (ii) an input fatigue adaptation process, modeling adaptation in earlier hierarchy levels and of afferent synapses; (iii) a firing-rate fatigue adaptation process that models adaptation dependent on the output firing rates of the neurons. The model, with a common parameter set, is compared to results from several studies of adaptation in area IT. The model reproduces the following experimentally observed effects: (i) the shape of the typical PSTHs of IT neurons; (ii) the temporal decay of responses when the same neurons are stimulated with many repetitions of the same stimulus [3] (Fig. 26A); (iii) the dependence of adaptation on effective and ineffective adaptor stimuli, which stimulate the neuron strongly or only moderately [4] (Fig. 26B); (iv) the dependence of the strength of the adaptation effect on the duration of the adaptor (Fig. 26C). A mean-field model with several additional adaptive processes can account for the observed experimental effects, and all of the introduced processes were necessary to account for the results. In particular, the observed dependence on the effectiveness of the adaptor cannot be reproduced without an appropriate mixture of an input fatigue and a firing-rate fatigue mechanism. This suggests that adaptation in IT neurons is significantly influenced by several biophysical processes with different spatial and temporal scales.

Fig. 26

Simulation results. A Decay of neural activity for multiple repetitions of the same stimulus. B Experiment adapting with effective and ineffective stimuli. C Dependence of the PSTH on adaptor duration and unadapted response (black)

Correspondence: Alexander Seeholzer - alex.seeholzer@epfl.ch

† These authors contributed equally to this work.

BMC Neuroscience 2016, 17(Suppl 1):P38

Ion channels are fundamental constituents determining the function of single neurons and neuronal circuits. To understand their complex interactions, the field of computational modeling has proven essential: since its emergence, thousands of ion channel models have been created and published as part of detailed neuronal simulations [1]. Faced with this large variety of models, it is difficult to determine how particular models relate to each other, to the interpretability of simulations and, importantly, to experimental data.

Here, we present a framework within which we analyzed a pilot set of 2378 voltage- or calcium-dependent published ion channel models for the NEURON simulator [1]. We extracted annotated metadata from all associated publications, helping identify their use in simulations (e.g. the animal type, neuron type or area of compartmental models) and the provenance of ion channel models as they were derived from other published work. This categorical and relational metadata is combined with quantitative evaluations of all channel models: individual channels are characterized by their responses to voltage clamp protocols. With subsequent cluster analysis, we extract topologies of ion channel similarity and genealogy, identifying redundancy and groups of common channel kinetics.

The result of this large-scale assay of published work is freely accessible through interactive visualizations (see Fig. 27A) on the Ion Channel Genealogy (ICG) web resource [2], providing a tool for model discovery and comparison. Bridging the gap between model and experiment, our resource allows classifying new channel models and experimental current traces within the topology of all models currently in the database (see Fig. 27B, C). The ICG framework thus allows for quantitative comparison of ion channel kinetics, experimental and model alike, with the aim of facilitating field-wide standardization of experimentally constrained modeling.
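As a rough illustration of how two responses to the same voltage-clamp protocol might be compared, the sketch below uses a normalized trace distance; the actual ICG protocols and similarity metric may differ.

import numpy as np

def trace_distance(i_a, i_b):
    """Normalized RMS distance between two current traces recorded under the same
    voltage-clamp protocol; smaller values indicate more similar kinetics."""
    i_a, i_b = np.asarray(i_a, float), np.asarray(i_b, float)
    scale = max(np.abs(i_a).max(), np.abs(i_b).max(), 1e-12)
    return float(np.sqrt(np.mean(((i_a - i_b) / scale) ** 2)))

t = np.linspace(0, 100, 1000)                 # ms
model_a = 1.0 * (1 - np.exp(-t / 5.0))        # toy activation responses to a voltage step
model_b = 1.0 * (1 - np.exp(-t / 7.0))
print("distance A vs B:", round(trace_distance(model_a, model_b), 4))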

Fig. 27

A Visualizations available on the web-resource [2] for model browsing. B Schematic of upload and evaluation. Both experimental current traces and mod files can be uploaded to our servers, where they are scored and compared to all models currently in the database. C Exemplary result of automated comparison: Current traces (recorded from “Ramp” and “Activation” voltage clamp protocols) of the uploaded model (red) together with mean (1st, 2nd, 3rd, 4th) and individual (gray) traces of the four most similar clusters of channel models in the database

Acknowledgements: Research was supported by a Sir Henry Dale Royal Society & Wellcome Trust Research Fellowship (WT100000). A.S. was supported by the Swiss National Science Foundation (200020_147200). R.R. was supported by the EPFL Blue Brain Project Fund and the ETH Board funding to the Blue Brain Project.

Correspondence: Pablo Varona - pablo.varona@uam.es

BMC Neuroscience 2016, 17(Suppl 1):P39

Neuronal subthreshold oscillations underlie key mechanisms of information discrimination in single cells while dynamic synapses provide channel-specific input modulation. Previous studies have shown that intrinsic neuronal properties, in particular subthreshold oscillations, constitute a biophysical mechanism for the emergence of non-trivial single-cell input/output preferences (e.g., preference towards decelerating vs. accelerating input trains of the same average rate) [1, 2]. It has also been shown that short-term synaptic dynamics, in the form of short-term depression and/or short-term facilitation, can provide a channel-specific mechanism for the enhancement of the post-synaptic effects of temporally specific input sequences [3, 4]. While intrinsic oscillations and synaptic dynamics are typically studied independently, it is reasonable to hypothesize that their interplay can lead to more selective and complex temporal input processing.

Here, we extend and refine our previous computational study on the interaction between subthreshold oscillations and synaptic depression [5]. In particular, we investigated whether, and under which conditions, the combination of intrinsic subthreshold oscillations and short-term synaptic dynamics can act synergistically to enable the emergence of robust and channel-specific selectivity in neuronal input–output transformations. We calculated analytically the voltage trajectories and spike output of generalized integrate-and-fire (GIF) model neurons in response to temporally distinct trains of input EPSPs. In particular, we considered triplets of input EPSPs in a range that covers intrinsic and synaptic time scales, and analyzed the model output as intrinsic and synaptic parameters were varied.

Our results show that intrinsic and synaptic dynamics interact in a complex manner for the emergence of specific input–output transformations. In particular, precise non-trivial preferences emerge from synergistic intrinsic and synaptic preferences, while broader selectivity is observed for mismatched intrinsic and synaptic dynamics. We discuss the conditions for robustness of the observed input/output relationships.

We conclude that the interaction of intrinsic and synaptic properties can enable the biophysical implementation of complex and channel-specific mechanisms for the emergence of selective neuronal responses. We further interpret our results in the light of experimental evidence describing distinct short-term synaptic dynamics in different afferents converging onto the same neuron, as in the case of parallel and climbing fiber inputs to cerebellar Purkinje cells, and advance specific hypotheses that link heterogeneous synaptic dynamics of distinct pathways onto the same post-synaptic target to their distinct computational function. We also discuss the impact of single-channel/single-neuron temporal input discrimination in the context of information processing based on heterogeneous elements.

Acknowledgements: We acknowledge support from MINECO FIS2013-43201-P, DPI2015-65833-P, TIN-2012-30883 and ONRG Grant N62909-14-1-N279.

Correspondence: Bart Gips - bart.gips@donders.ru.nl

† Authors have made equal contribution

BMC Neuroscience 2016, 17(Suppl 1):P40

Neural activity in awake primate early visual cortex exhibits transients with intervals of 250-300 ms. Experimental work by us and others has shown that these transients are related to microsaccadic eye movements [1, 2]. These short transients are followed by periods of steady activity that last until the next microsaccade (Fig. 28A).

Fig. 28

A Time–frequency representation of local field potential (LFP) locked to a microsaccade (MS) recorded in primate V1. B Time–frequency representation of simulated LFP. C Schematic representation of the model network illustrating input (injection current), recurrent connection pattern and output (spike trains). D The input to the neurons is best reflected in the simulated spike trains (output) during phase I, quantified by mutual information (MI). E Recurrent connection pattern is best reflected in the output during phase II

We found that computational models of excitatory-inhibitory spiking networks organized into a structure of columns and hypercolumns are able to represent relevant stimulus information when subjected to 3–4 Hz saccade-like transients. The simulated networks expressed evoked responses with power in the alpha–beta band (~8–25 Hz) as well as gamma-rhythmic activity (~25–80 Hz), similar to in vivo local field recordings in monkey V1 (Fig. 28A, B).

We show that in phase I, the model produces large-scale spatial synchrony and pronounced alpha–beta power. In phase II the model exhibits narrow-band gamma oscillations with spatially local synchrony. The activity in the model network (rate and timing coding) in phase I mainly reflects feedforward input (Fig. 28C, D), whereas, the network activity in phase II was dominated by recurrent connections (Fig. 28C, E).

The model network activity closely matches that found in experiments. The simulation results suggest that the transient phase (phase I) allows for resetting of the network and rapid feedforward processing of novel information, whereas detailed processing and contextualization by recurrent activity take place in the period of steady gamma activity (phase II). We therefore arrived at hypotheses on the functional interpretation of phases I and II that could be tested experimentally. First, because network activity is reset by a microsaccade, phase I is the optimal time window to switch information flow among competing networks through a top-down signal; this indicates that signals related to visual attention are most likely to occur just after a saccade. Second, the increased efficacy of recurrent connections during phase II indicates that contextualization operations such as figure-ground segregation [3] and contour completion occur in the steady phase, ~100 ms after the onset of a (micro)saccade.

Correspondence: Abdorreza Goodarzinick - a.goodarzinick@iasbs.ac.ir

BMC Neuroscience 2016, 17(Suppl 1):P41

In recent years, several experimental observations have confirmed the emergence of self-organized criticality (SOC) in the brain at different scales [1]. At large scale, functional brain networks obtained from fMRI data have shown that node-degree distributions and probability of finding a link versus distance are indicative of scale-free and small-world networks regardless of the tasks in which the subjects were involved [2]. At small scale, the study of neuronal avalanches in networks of living neurons revealed power-law behavior in both spatial and temporal scales [3]. It is also shown that functional networks of the brain are strikingly similar to those derived from the 2D Ising model at critical temperature [4] and the 2D abelian sandpile model [5].

We were motivated to study the robustness of functional networks of the 2D Ising model at the critical point against elimination of structural sites by two questions: whether the scaling properties of brain networks associated with healthy conditions are altered under various pathologies, and how structural defects of a system at criticality affect its functional connectivity. The results showed that the statistics of the functional network indicative of criticality (evident in healthy brain controls), such as power-law behavior and small-worldness, remained robust against random elimination of structural sites up to the percolation limit (see Fig. 29). The resulting functional network maintained its key properties to a degree orders of magnitude higher than the same system poised in a super-critical or sub-critical state. These results show that self-organized critical behavior, besides having unique advantages such as facilitation of changes in functional patterns, optimization of information transfer and maximization of correlation length, also exhibits striking robustness against structural deficits. Taking into account the brain's long-range anatomical connections and compensatory mechanisms such as neuroplasticity, if the results of this study generalize to the brain, they may help to explain the delay in clinical diagnosis of several neurodegenerative diseases in which deficits in functional connectivity among brain regions contribute to cognitive dysfunction.
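
The following sketch (illustrative only, not the simulation code of the study) shows the basic ingredients of such an analysis: a small 2D Ising model simulated with Metropolis dynamics at the critical temperature, a functional network obtained by thresholding pairwise correlations of the spin time series, and a crude robustness check in which a fraction of lattice sites is removed before the network is re-estimated. Lattice size, correlation threshold and sweep counts are assumptions chosen for speed.

```python
import numpy as np

L = 16
T_C = 2.0 / np.log(1.0 + np.sqrt(2.0))    # critical temperature of the 2D Ising model
rng = np.random.default_rng(1)

def metropolis_sweep(spins, alive, beta):
    """One Metropolis sweep; removed ('dead') sites neither flip nor interact."""
    for _ in range(spins.size):
        i, j = rng.integers(0, L, size=2)
        if not alive[i, j]:
            continue
        nb = (spins[(i + 1) % L, j] * alive[(i + 1) % L, j] +
              spins[(i - 1) % L, j] * alive[(i - 1) % L, j] +
              spins[i, (j + 1) % L] * alive[i, (j + 1) % L] +
              spins[i, (j - 1) % L] * alive[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

def functional_degree(defect_fraction, n_sweeps=2000, threshold=0.5):
    """Mean degree of the correlation-thresholded functional network."""
    spins = rng.choice([-1, 1], size=(L, L))
    alive = rng.random((L, L)) >= defect_fraction     # structural defects
    series = []
    for sweep in range(n_sweeps):
        metropolis_sweep(spins, alive, 1.0 / T_C)
        if sweep % 2 == 0:
            series.append(spins[alive].copy())
    corr = np.corrcoef(np.array(series).T)
    adjacency = (np.abs(corr) > threshold) & ~np.eye(alive.sum(), dtype=bool)
    return adjacency.sum(axis=1).mean()

for f in (0.0, 0.1, 0.3):
    print(f"defect fraction {f:.1f}: mean functional degree {functional_degree(f):.2f}")
```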

Fig. 29

Relevant parameters of the functional network of the 2D Ising model at the critical point versus the fraction of defects among the structural cells. A Power-law exponent of the degree distribution, B small-worldness measure, C average degree

Correspondence: Aref Pariz - a.pariz@iasbs.ac.ir

BMC Neuroscience 2016, 17(Suppl 1):P42

Signal transmission is of interest from both fundamental and clinical perspectives and has been well studied in nonlinear science and complex networks [1, 2]. In particular, in nervous systems, cognitive processing involves signal propagation through multiple brain regions and the activation of large numbers of specific neurons [3–6]. During information propagation through brain regions, each region, known as a generator, is activated locally as information arrives from neighboring generators. Although the problem is well studied in the context of complex networks, our focus here is on the effect of the intrinsic dynamical properties of the reciprocally coupled generators on the propagation of the signal.

In this study we explored the propagation of information in chains of single neurons and of networks. As the signal propagates through the chain of networks, the firing rate of each network fluctuates in the same way as that of the host network (the network that receives the signal). Here the response is quantified as the amplitude of the fast Fourier transform of each network's firing rate. If the host network has a sufficiently higher intrinsic firing rate than the others, the signal is transferred with high amplitude; otherwise, the other networks are not affected. When the signal does propagate, all networks show a peak in the frequency domain at exactly the input signal frequency (Fig. 30A), but with different amplitudes, which reflect the efficacy of the transmitted information. The same result can be obtained with a chain of single LIF neurons (Fig. 30B). As the phase response curve of the chain and its response to the signal show, if the host neuron has a higher firing rate (the leader neuron), the propagation of information is enhanced. This higher firing rate has a limit, however, beyond which the whole chain acts asynchronously and the information that was to be propagated is lost.
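
The sketch below (not the authors' code) illustrates the single-neuron version of this setup: a feedforward chain of leaky integrate-and-fire neurons in which the host neuron receives a weak cosine signal, and the response of each neuron is quantified by the FFT amplitude of its firing rate at the signal frequency. All parameter values are illustrative assumptions.

```python
import numpy as np

dt, T = 1e-4, 20.0                          # time step and duration (s)
t = np.arange(0.0, T, dt)
n_chain = 5
tau_m, v_th, v_reset = 0.02, 1.0, 0.0       # membrane time constant (s), threshold, reset
f_sig, a_sig = 5.0, 0.1                     # signal frequency (Hz) and amplitude
bias = np.array([1.6, 1.2, 1.2, 1.2, 1.2])  # host (first) neuron has a higher intrinsic rate
w, sigma = 0.1, 0.2                         # synaptic kick and noise strength

rng = np.random.default_rng(2)
v = np.zeros(n_chain)
spikes = np.zeros((n_chain, t.size))
signal = a_sig * np.cos(2 * np.pi * f_sig * t)

for k in range(t.size):
    drive = bias.copy()
    drive[0] += signal[k]                           # signal enters at the host neuron
    v += dt * (drive - v) / tau_m + sigma * np.sqrt(dt) * rng.standard_normal(n_chain)
    if k > 0:
        v[1:] += w * spikes[:-1, k - 1]             # feedforward coupling, one-step delay
    fired = v >= v_th
    spikes[fired, k] = 1.0
    v[fired] = v_reset

# response = FFT amplitude of each neuron's firing rate at the signal frequency
freqs = np.fft.rfftfreq(t.size, dt)
idx = np.argmin(np.abs(freqs - f_sig))
for i in range(n_chain):
    rate = spikes[i] / dt
    amp = 2.0 * np.abs(np.fft.rfft(rate - rate.mean()))[idx] / t.size
    print(f"neuron {i}: rate modulation at {f_sig:.0f} Hz = {amp:.2f} Hz")
```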

Fig. 30

Inhomogeneity of the input current to the host network increases the response of the network. A, B Response of a chain of networks and of a chain of single neurons, respectively, for different degrees of inhomogeneity at the host network or host neuron

Correspondence: Julia M. Warburton - julia.warburton@bristol.ac.uk

BMC Neuroscience 2016, 17(Suppl 1):P43

Alzheimer’s disease (AD) is the main form of dementia and is characterised clinically by cognitive decline and impairments to memory function. One of the key histopathological features of AD thought to cause this neurodegeneration is the abnormal aggregation of the protein amyloid-β (Aβ) [1]. Transgenic mouse models that overexpress Aβ are used to investigate the potential functional consequences of this amyloidopathy in AD. In this study we use in vitro electrophysiology data recorded from PDAPP transgenic mice (a mouse model of amyloidopathy) and their wild-type littermates to parameterise a hippocampal network model [2]. The aim of the study is to investigate how amyloidopathy alters gamma frequency oscillations within the hippocampus, which is one of the regions first affected in AD.

We use a synaptically connected network of excitatory pyramidal neurons and inhibitory interneurons to simulate the gamma frequency activity [3]. Each cell is described by a single-compartment Hodgkin–Huxley type equation, with the properties of the voltage-gated channels fitted to the intrinsic properties measured experimentally, which included stimulated firing-frequency data and the associated action potentials from CA1 pyramidal neurons and three types of CA1 interneurons. Network activity is driven either deterministically, via a direct stimulus such as a step pulse or a theta wave, or via a stochastic input. We perform power spectral density analysis to analyse the oscillatory activity.
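
As a small illustration of the last step (a sketch, not the authors' code), the snippet below applies Welch's method to a toy surrogate signal and reports the power and peak frequency in the 30–100 Hz gamma band; the surrogate signal, sampling rate and window length are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                                   # sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(3)
# toy surrogate "LFP": a 45 Hz gamma component buried in white noise
lfp = 0.5 * np.sin(2 * np.pi * 45.0 * t) + rng.standard_normal(t.size)

freqs, psd = welch(lfp, fs=fs, nperseg=1024)  # Welch power spectral density
gamma = (freqs >= 30.0) & (freqs <= 100.0)
gamma_power = np.trapz(psd[gamma], freqs[gamma])
peak_freq = freqs[gamma][np.argmax(psd[gamma])]
print(f"gamma-band power: {gamma_power:.3f}, peak frequency: {peak_freq:.1f} Hz")
```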

Our model focuses on gamma frequency oscillations, which lie in the 30–100 Hz range, because of their associations with attention, sensory processing and, of most potential relevance to AD, learning and memory. It has been shown that within the hippocampus gamma oscillations enable cross-talk between distributed cell assemblies, with low-frequency gamma associated with coupling between CA1 and CA3 and fast-frequency gamma associated with coupling between CA1 and the medial entorhinal cortex [4]. EEG measurements from AD mouse models have identified network hypersynchrony alongside decreased gamma activity, with the role of interneurons in this process highlighted [5]. By incorporating the pyramidal neuron and interneuron data in our model we aim to learn which parameters are most significant in these effects and to further our understanding of the effects of amyloidopathy on oscillatory activity.

Acknowledgements: This work was supported by funding from the EPSRC.

References

1.

Hardy J, Selkoe DJ. The amyloid hypothesis of Alzheimer’s disease: progress and problems on the road to therapeutics. Science. 2002;297:353–6.

P44 Long-tailed distributions of inhibitory and excitatory weights in a balanced network with eSTDP and iSTDP

Florence I. Kleberg1, Jochen Triesch1

Correspondence: Florence I. Kleberg - kleberg@fias.uni-frankfurt.de

BMC Neuroscience 2016, 17(Suppl 1):P44

The strengths of excitatory synapses in cortex and hippocampus have been shown to follow a rightward-skewed or long-tailed distribution [1, 2]. Such distributions can be achieved in recurrent balanced networks [3, 4], after synaptic modification by spike-timing dependent plasticity (STDP) [5] and synaptic scaling [6]. Recently, long-tailed distributions have also been observed for inhibitory synapses in cultured cortical neurons [7], confirming early findings in hippocampal slices [8]. However, the conditions and plasticity mechanisms necessary for achieving long-tailed distributions of inhibitory synapses are unknown. Furthermore, different forms of inhibitory STDP have been reported, but their effect on the distribution of inhibitory synaptic efficacies is largely unknown [9–11].

Here we investigate how plasticity of the inhibitory synapses in a self-organised recurrent neural network (SORN [12]) with leaky integrate-and-fire neurons can lead to long-tailed distributions of synaptic weights. We examine different inhibitory STDP (iSTDP) rules and characterize the conditions under which right-skewed shapes of inhibitory synaptic weight distributions are obtained while a balance between excitation and inhibition is maintained. While the ratio of long-term potentiation to long-term depression in iSTDP affects the shape of the distribution, a variety of window shapes for iSTDP can each achieve long-tailed distributions of inhibitory weights. We find that a precise balance of excitation and inhibition can be achieved with a strongly right-skewed distribution of inhibitory weights. Our results suggest that long-tailed distributions of inhibitory weights could be a ubiquitous feature of neural circuits that employ different plasticity mechanisms.
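
The sketch below is a generic illustration only, not the SORN model of the study: many inhibitory synapses are updated by a simple pair-based iSTDP rule driven by Poisson pre- and postsynaptic spike trains, and the skewness of the resulting weight distribution is then inspected. The rates, the particular learning rule and its parameters are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n_syn, T, dt = 500, 200.0, 1e-3        # synapses, simulated time (s), time step (s)
rate_pre, rate_post = 10.0, 8.0        # Poisson rates (Hz)
eta, tau, w_max = 0.005, 0.02, 5.0     # learning rate, trace time constant, weight bound

w = np.full(n_syn, 1.0)
x_pre = np.zeros(n_syn)                # presynaptic eligibility trace
x_post = np.zeros(n_syn)               # postsynaptic eligibility trace

for _ in range(int(T / dt)):
    pre = rng.random(n_syn) < rate_pre * dt
    post = rng.random(n_syn) < rate_post * dt
    x_pre += -dt * x_pre / tau + pre
    x_post += -dt * x_post / tau + post
    # potentiation for near-coincident pre/post spikes, small depression for lone pre spikes
    w += eta * (x_post * pre + x_pre * post) - 0.2 * eta * pre
    np.clip(w, 0.0, w_max, out=w)

dist = w / w.mean()
skewness = np.mean((dist - 1.0) ** 3) / np.std(dist) ** 3
print(f"mean weight {w.mean():.2f}, skewness of the weight distribution {skewness:.2f}")
```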

P45 Simulation of EMG recording from hand muscle due to TMS of motor cortex

1Computational and Theoretical Neuroscience Laboratory, School of Information Technology and Mathematical Sciences, University of South Australia, Australia; 2Frankfurt Institute for Advanced Studies, Goethe-Universität, Germany; 3Robinson Research Institute, School of Medicine, University of Adelaide, Australia; 4School of Mathematical Sciences, University of Nottingham, UK

Correspondence: Bahar Moezzi - bahar.moezzi@unisa.edu.au

BMC Neuroscience 2016, 17(Suppl 1):P45

Single pulse transcranial magnetic stimulation (TMS) is a technique which (at moderate intensities) activates corticomotor neuronal output cells trans-synaptically and evokes a complex descending volley in the corticospinal tract. Rusu et al. developed a computational model of TMS-induced I-waves that reproduced observed epidural recordings in conscious humans [1]. In humans, epidural responses can be recorded in anaesthetized subjects during surgery or in conscious subjects with electrodes implanted for the treatment of chronic pain. Such opportunities are rare, and the recordings are invasive. The effects of TMS can be studied non-invasively using surface electromyography (EMG) recordings from the hand first dorsal interosseous (FDI) muscle.

We simulated the surface EMG signal due to TMS of motor cortex in the hand FDI muscle. Our model comprises a population of cortical layer 2/3 cells, which drive layer 5 cortico-motoneuronal cells with excitatory and inhibitory synaptic inputs as in [1]. The layer 5 cells in turn project to a pool of motoneurons, which are modeled as an inhomogeneous population of integrate-and-fire neurons to simulate motor unit recruitment and rate coding. The input to motoneurons from cortical layer 5 consists of TMS-induced spikes and baseline firing. We modeled baseline firing with a Poisson drive to layer 2/3 cells. Hermite-Rodriguez functions were used to simulate motor unit action potential shape. The EMG signal was obtained from the summation of motor unit action potentials of active motor units. Parameters were tuned to simulate recordings from the FDI muscle.
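
A minimal sketch of this construction (not the authors' fitted model) is given below: motor unit spike trains are convolved with Hermite-Rodriguez-shaped waveforms and summed to form a surrogate EMG. The waveform definition, number of units, firing rates and amplitudes are illustrative assumptions.

```python
import numpy as np

fs = 4000.0                                     # sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)               # 1 s of simulated EMG
rng = np.random.default_rng(5)

def hermite_rodriguez1(tau, scale=0.004):
    """First-order Hermite-Rodriguez-like waveform (biphasic MUAP shape)."""
    x = tau / scale
    return x * np.exp(-x ** 2)

muap_t = np.arange(-0.02, 0.02, 1.0 / fs)       # 40 ms MUAP support
n_units = 30
emg = np.zeros_like(t)
for unit in range(n_units):
    rate = rng.uniform(8.0, 20.0)               # firing rate of this motor unit (Hz)
    amp = rng.lognormal(mean=0.0, sigma=0.5)    # MUAP amplitude varies across units
    spike_train = (rng.random(t.size) < rate / fs).astype(float)
    muap = amp * hermite_rodriguez1(muap_t, scale=rng.uniform(0.002, 0.006))
    emg += np.convolve(spike_train, muap, mode="same")

print(f"simulated EMG: {t.size} samples, RMS amplitude {np.sqrt(np.mean(emg ** 2)):.3f}")
```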

Our simulated EMG signals match experimental surface EMG recordings due to TMS of motor cortex in the hand FDI muscle in shape, size and time scale both at rest and during voluntary contraction (see Fig. 31). The simulated EMG traces exhibit cortical silent periods (CSP) that lie within the biological range.

Correspondence: Martin Zapotocky - zapotocky@biomed.cas.cz

BMC Neuroscience 2016, 17(Suppl 1):P46

Axons growing in vivo or in culture may adhere to each other and form a connected network, which subsequently guides the paths of newly arriving axons. We investigated the development of such a network formed by growing axons in primary cell culture.

Olfactory epithelium explants from mouse embryos (day 13–14) were cultured on laminin substrate for 2 days and then recorded using DIC or phase contrast videomicroscopy for up to 24 h. The growing axons established a dense network within which large fascicles of axons were progressively formed. Within the recorded time period, the network remained stable, with limited further growth of the axons but with ongoing rearrangement of the network structure. Based on segmentation of the recorded images, we determined the principal network characteristics (including the total length, the total number of vertices, and the network anisotropy) and their evolution in time.
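
As an illustration of how such characteristics can be extracted from a segmented network (a sketch under stated assumptions, not the analysis code used in the study), the snippet below computes the total length, a vertex count, and an orientation anisotropy (nematic order parameter) for a synthetic set of straight segments.

```python
import numpy as np

# toy example network: each segment is (x1, y1, x2, y2) in image coordinates
rng = np.random.default_rng(6)
angles = rng.normal(0.3, 0.4, size=200)                 # mildly oriented network
starts = rng.uniform(0, 500, size=(200, 2))
lengths = rng.uniform(5, 40, size=200)
segments = np.hstack([starts,
                      starts + lengths[:, None] * np.c_[np.cos(angles), np.sin(angles)]])

def network_characteristics(segments, vertex_tol=2.0):
    p1, p2 = segments[:, :2], segments[:, 2:]
    seg_len = np.linalg.norm(p2 - p1, axis=1)
    total_length = seg_len.sum()
    # count distinct endpoints as vertices, merging points closer than vertex_tol pixels
    merged = np.round(np.vstack([p1, p2]) / vertex_tol).astype(int)
    n_vertices = len({tuple(p) for p in merged})
    # nematic order parameter: 0 for isotropic, 1 for perfectly aligned segments
    theta = np.arctan2(p2[:, 1] - p1[:, 1], p2[:, 0] - p1[:, 0])
    weights = seg_len / total_length
    anisotropy = np.hypot(np.sum(weights * np.cos(2 * theta)),
                          np.sum(weights * np.sin(2 * theta)))
    return total_length, n_vertices, anisotropy

L_tot, n_v, aniso = network_characteristics(segments)
print(f"total length {L_tot:.0f} px, vertices {n_v}, anisotropy {aniso:.2f}")
```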

This quantitative characterization permitted an analysis of the mechanisms of the observed network coarsening. We relate the network dynamics to the elementary processes of zippering, during which two axons or axon fascicles progressively adhere to each other [1]. We compare the structural features of the network (such as the distribution of vertex angles) with those reported in an electron microscopy investigation of a plexus of sensory neurites in Xenopus embryo [2]. We show that both our ex vivo study and the in vivo study of Ref. [2] support a similar underlying mechanism of the formation of the axon network.

Correspondence: Sakyasingha Dasgupta - sdasgup@jp.ibm.com

BMC Neuroscience 2016, 17(Suppl 1):P47

The source of cortical variability and its influence on signal processing remain an open question. We address the latter by studying two types of randomly connected networks of quadratic integrate-and-fire neurons with balanced excitation-inhibition that produce irregular spontaneous activity patterns (Fig. 32A): (a) a deterministic network with strong synaptic interactions that actively generates variability by chaotic dynamics (internal noise) and (b) a stochastic network that has weak synaptic interactions but receives noisy input (external noise), e.g. from stochastic vesicle release. These networks of spiking neurons are analytically tractable in the limit of large network size and slow synaptic time constant. Despite the difference in their sources of variability, the spontaneous (baseline) activity patterns of the two models are indistinguishable unless a majority of neurons are simultaneously recorded. We characterize the network behavior with dynamic mean-field analysis and reveal a single-parameter family that allows interpolation between the two networks while sharing nearly identical spontaneous activity (Fig. 32B). Despite the close similarity in spontaneous activity, the two networks exhibit remarkably different sensitivity to external stimuli. Input to the former network reverberates internally and can be successfully read out over long times. In contrast, input to the latter network rapidly decays and can be read out only for a short time. This is also observed in the significant changes in the spiking probability of evoked responses across this family (Fig. 32C). The difference between the two networks is further enhanced if input synapses undergo activity-dependent plasticity, producing a significant difference in the ability to decode external input from neural activity. We show that this difference naturally leads to distinct performance of the two networks in integrating spatio-temporally distinct signals from multiple sources. Unlike its stochastic counterpart, the deterministic chaotic network activity can serve as a reservoir to perform near-optimal Bayesian integration and Monte Carlo sampling from the posterior distribution. We describe implications of the differences between deterministic and stochastic neural computation for population coding and neural plasticity.
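
The sketch below is a heavily simplified rate-network analogue of this comparison (not the QIF model of the study): a strongly coupled deterministic network versus a weakly coupled network driven by external noise, with single-unit autocorrelation functions computed for both. Network size, gain and noise level are illustrative assumptions.

```python
import numpy as np

N, tau, dt, T = 200, 0.02, 1e-3, 20.0          # units, time constant (s), step (s), duration (s)
steps = int(T / dt)
rng = np.random.default_rng(7)
J = rng.standard_normal((N, N)) / np.sqrt(N)   # random coupling matrix

def simulate(g, noise_std):
    """tau dx/dt = -x + g J tanh(x) + noise; returns the activity of unit 0."""
    x = 0.1 * rng.standard_normal(N)
    trace = np.empty(steps)
    for k in range(steps):
        x += dt * (-x + g * (J @ np.tanh(x))) / tau
        x += noise_std * np.sqrt(dt) * rng.standard_normal(N)
        trace[k] = x[0]
    return trace[steps // 2:]                   # discard the transient

def autocorr(sig, max_lag):
    sig = sig - sig.mean()
    return np.array([np.mean(sig[:sig.size - l] * sig[l:])
                     for l in range(max_lag)]) / sig.var()

chaotic = simulate(g=1.8, noise_std=0.0)        # internally generated (chaotic) variability
stochastic = simulate(g=0.8, noise_std=0.5)     # externally driven (stochastic) variability
lag_50ms = 50                                    # 50 ms at dt = 1 ms
print("autocorrelation at 50 ms lag:",
      round(autocorr(chaotic, 200)[lag_50ms], 3),
      round(autocorr(stochastic, 200)[lag_50ms], 3))
```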

Fig. 32

A Schematic illustrations of the two balanced QIF network models considered in the present study. The left network consists of strongly coupled neurons without noise, while the right network consists of weakly coupled neurons with noisy input. B Nearly identical rate autocorrelation functions in the two networks. The red line (C0) represents the value of the autocorrelation at time 0 and the cyan line (C∞) is the value of the autocorrelation function in the limit of large t. C Change in spiking probability for different network connectivity strengths (g̃), after stimulation by a brief input at time t = 0

P48 Modeling the effect of riluzole on bursting in respiratory neural networks

Correspondence: Daniel T. Robb - robb@roanoke.edu

BMC Neuroscience 2016, 17(Suppl 1):P48

To accommodate constantly changing environmental and metabolic demands, breathing should be able to vary flexibly within a range of frequencies. The respiratory neural network in the pre-Botzinger complex of the ventrolateral medulla controls and flexibly maintains the breathing rhythm, coordinating network-wide bursting to signal the inspiratory phase of the breath. The frequency of this rhythmic activity is controlled by a number of neuromodulators, the majority of which are excitatory. Therefore, the central pattern generator for rhythmic respiratory activity should possess two seemingly contradictory properties: it has to be able to change frequency in response to excitatory input, but it also has to preserve stable rhythmic activity under a wide range of conditions.

A persistent sodium current (INaP) has been identified as one of the key currents for generation of inspiratory activity [1]. It has been shown that some of the neurons in the pre-BotC possess an intrinsic bursting mechanism, which relies on inactivation of this current. Higher expression of INaP correlates with higher burst frequency of a single pacemaker neuron [2]. However, the INaP pacemaker mechanism can only function within a very narrow range of external excitation: an INaP-dependent pacemaker tends to switch to tonic firing after a small increase in depolarizing current [3].

In this combined experimental and computational study, we tested the effect of the persistent sodium blocker riluzole (RIL) at several different levels of continuous depolarization, induced by application of K+. Whereas increased potassium increases the bursting frequency of the control network, in the presence of RIL the increased potassium does not alter the bursting frequency (Fig. 33). These findings indicate that INaP is responsible for flexible modulation of the respiratory rhythm, but that there is another mechanism which can sustain rhythmic activity in its absence. We developed a computational model which incorporates a calcium-sensitive non-specific cation current (ICAN) in addition to INaP. Our simulations indicate that ICAN and INaP together can maintain the rhythm in respiratory neurons in the presence of RIL, and are capable of providing stable oscillations in the presence of tonic excitation by K+.
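
As an illustration of the burst-frequency measurement underlying Fig. 33 (a sketch, not the authors' analysis code), the snippet below detects bursts in a toy integrated population trace by threshold crossing and reports the burst frequency; the surrogate trace, threshold and minimum inter-burst interval are assumptions.

```python
import numpy as np

fs = 100.0                                         # samples per second
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(8)
burst_rate = 0.3                                   # bursts per second (toy value)
envelope = (np.sin(2 * np.pi * burst_rate * t) > 0.8).astype(float)  # burst envelope
trace = envelope + 0.1 * rng.standard_normal(t.size)  # surrogate integrated activity

def burst_frequency(trace, fs, threshold=0.5, min_interval=1.0):
    """Count upward threshold crossings separated by at least min_interval seconds."""
    above = trace > threshold
    onsets = np.where(above[1:] & ~above[:-1])[0] / fs
    if onsets.size < 2:
        return 0.0
    kept = [onsets[0]]
    for onset in onsets[1:]:
        if onset - kept[-1] >= min_interval:       # merge crossings within the same burst
            kept.append(onset)
    return (len(kept) - 1) / (kept[-1] - kept[0])

print(f"estimated burst frequency: {burst_frequency(trace, fs):.2f} Hz")
```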

Fig. 33

Summary of experiment on the effect of riluzole on the dependence of burst frequency on potassium concentration. Without riluzole (left), the frequency increases steadily with increasing potassium concentration. With riluzole present (right), the frequency remains essentially constant with increasing potassium concentration

P49 Mapping relaxation training using effective connectivity analysis

Rongxiang Tang1, Yi-Yuan Tang2

Correspondence: Yi-Yuan Tang - yiyuan.tang@ttu.edu

BMC Neuroscience 2016, 17(Suppl 1):P49

Relaxation training (RT) is a behavioral therapy that has been applied in stress management, muscle relaxation and for other health benefits. However, compared to short-term meditation training, previous studies did not show significant brain changes following the same amount of RT [1, 2]. One possible reason is that the routinely used correlation-based functional connectivity methods are not sensitive enough to reveal training-related changes in effective connectivity (directed information flow) among the distributed brain regions involved. Here, we applied a novel spectral dynamic causal modeling (spDCM) approach to resting-state fMRI to characterize changes in effective connectivity.

Twenty-three healthy college students were recruited through campus advertisements and received 4 weeks of RT (10 h in total), as previously reported in our randomized studies [1, 2]. All neuroimaging data were collected using a Siemens Allegra 3-Tesla scanner and processed using the Data Processing Assistant for Resting-State fMRI, which is based on SPM and the Resting-State fMRI Data Analysis Toolkit [3]. For each participant, the subsequent standard procedures included slice timing, motion correction, regression of WM/CSF signals, and spatial normalization [3]. Based on previous literature, we specified four regions of interest within the default mode network (DMN): medial prefrontal cortex (mPFC), posterior cingulate cortex (PCC), and bilateral inferior parietal lobule (left IPL and right IPL), using the same coordinates as in previous spDCM studies [4]. A standard DCM analysis involves specifying a set of plausible models, estimating their parameters, and comparing them with Bayesian model selection. In both pre- and post-RT conditions, the procedure selected the fully connected model as the best model with a posterior probability of almost 1. The fully connected model had 24 parameters describing the extrinsic connections between nodes, the intrinsic (self-)connections within nodes, and neuronal parameters describing the neuronal fluctuations within each node. We used Bayesian Parametric Averaging to quantify the differences between pre- and post-RT, and a classical multivariate test (canonical variate analysis) to test for the significance of these differences [4]. Our results showed no significant differences in causal relationships among the above nodes following RT (all P > 0.05).

Conclusions Four weeks of RT did not induce significant changes in effective connectivity among DMN nodes. The effect of long-term RT on brain connectivity warrants further investigation.

Acknowledgements: This work was supported by the Office of Naval Research.

Correspondence: Yi-Yuan Tang - yiyuan.tang@ttu.edu

BMC Neuroscience 2016, 17(Suppl 1):P50

Implicit learning (IL) occurs without goal-directed intent or conscious awareness but has important influences on our everyday functioning and overall health, such as environmental adaptation and the development of habits and aversions. Most IL studies have used event-related potentials (ERPs) to study the brain response by taking the grand average of all event-related brain signals. How neural oscillations (EEG frequency bands) are involved in IL remains unknown. Moreover, ERP analysis requires brain signals that are not only time-locked but also phase-locked to the event, so information carried by signals that are not phase-locked is missed and not represented in the averaged potentials. To address this issue, we applied time–frequency analysis and a cluster-based permutation test in this study.

Fifteen healthy participants were recruited to perform three sessions of the triplets learning task (TLT), an IL task commonly used in the field [1]. Three successive cues were presented, and participants were asked to observe the first two cues and respond only to the third cue (the target) by pressing the corresponding keys. EEG signals were recorded during the task. Cluster-based permutation tests on the alpha and theta bands were used to control the family-wise error rate and, at the same time, to localize differences between triplet types in specific time ranges together with their spatial distribution.
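
As a simplified stand-in for part of this analysis (not the authors' pipeline), the sketch below estimates single-trial theta-band power with a band-pass filter and Hilbert envelope and compares two trial groups with a plain permutation test on mean power; a full cluster-based permutation test would additionally form clusters over time, frequency and channels. All signal parameters are toy assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(9)

def theta_power(trials, fs, band=(4.0, 8.0)):
    """Mean theta-band power per trial via band-pass filtering and the Hilbert envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.mean(np.abs(hilbert(filtered, axis=-1)) ** 2, axis=-1)

# toy data: condition A carries slightly more theta than condition B
theta = np.sin(2 * np.pi * 6.0 * t)
cond_a = 0.8 * theta + rng.standard_normal((40, t.size))
cond_b = 0.5 * theta + rng.standard_normal((40, t.size))

pow_a, pow_b = theta_power(cond_a, fs), theta_power(cond_b, fs)
observed = pow_a.mean() - pow_b.mean()
pooled = np.concatenate([pow_a, pow_b])
null = [np.mean(p[:40]) - np.mean(p[40:])
        for p in (rng.permutation(pooled) for _ in range(2000))]
p_value = np.mean(np.abs(null) >= abs(observed))
print(f"theta power difference {observed:.3f}, permutation p = {p_value:.3f}")
```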

Based on the behavioral results, overall learning occurs in session 1, while triplet-specific learning takes place in session 2. We find significant differences in both the alpha (8–13 Hz) and theta (4–8 Hz) frequency bands. For the alpha band, power modulation differs significantly between the high- and low-frequency triplet groups in session 2 in the frontal cortex. For the theta band, power differs significantly between session 1 and session 3 in the frontal cortex. In the high-frequency triplet group this difference starts as early as target onset and lasts until the end of the trial, whereas in the low-frequency triplet group the power difference occurs later, from around 1000 ms until the end of the next trial.

Conclusions The behavioral results showed that the brain learned the regularity of the sequence implicitly. Alpha power modulation indicated that the brain allocated attentional resources differently between the two triplet types. Theta power modulation reflected differences in memory processing and retrieval between the two triplet types. Our results indicate that, although participants did not discover the regularity of the triplet types by the end of the study, the brain nevertheless responded differently to the two triplet types.

Acknowledgements: This work was supported by the Office of Naval Research.

P51 The role of cerebellar short-term synaptic plasticity in the pathology and medication of downbeat nystagmus

Julia Goncharenko1, Neil Davey1, Maria Schilstra1, Volker Steuber1

Correspondence: Julia Goncharenko - i.goncharenko@herts.ac.uk

BMC Neuroscience 2016, 17(Suppl 1):P51

Downbeat nystagmus (DBN) is a common eye fixation disorder that is linked to cerebellar pathology. DBN patients are treated with 4-aminopyridine (4-AP), a K channel blocker, but the underlying mechanism is unclear. DBN is associated with an increased activity of floccular target neurons (FTNs) in the vestibular nuclei. It was previously believed that the reason for the increased activity of FTNs in DBN is a pathological decrease in the spike rate of their inhibitory Purkinje cell inputs, and that the effect of 4-AP in treating DBN could be mediated by an increased Purkinje cell activity, which would restore the inhibition of FTNs and bring their activity back to normal [1]. This assumption, however, has been questioned by in vitro recordings of Purkinje cells from tottering (tg/tg) mice, a mouse model of DBN. It was shown that therapeutic concentrations of 4-AP did not increase the spike rate of the Purkinje cells, but that they restored the regularity of their spiking, which is impaired in tg/tg mice [2].

Prompted by these experiments, Glasauer and colleagues performed computer simulations to investigate the effect of the regularity of Purkinje cell spiking on the activity of FTNs [3]. Using a conductance based FTN model, they found that changes in the regularity of the Purkinje cell input only affected the FTN spike rate when the input was synchronized. In this case, increasing the regularity of the Purkinje cell spiking resulted in larger gaps in the inhibitory input to the FTN and an increased FTN spike rate. These results predict that the increased irregularity in the Purkinje cell activity in DBN should lead to a decreased activity of the FTNs, rather than the increased activity that is found in experiments, and they are therefore unable to explain the therapeutic effect of 4-AP.

However, the model by Glasauer and colleagues does not take short-term depression (STD) at the Purkinje cell to FTN synapses into account. We hypothesized that this absence of STD could explain the apparent contradiction between the experimental [2] and computational [3] results. To study the role of STD in the pathology and 4-AP treatment of DBN, we used a morphologically realistic conductance based model of a cerebellar nucleus (CN) neuron [4, 5] as an FTN model to simulate the effect of irregular versus regular Purkinje cell input. The coefficients of variation of the irregular and regular Purkinje cell spike trains during DBN and after 4-AP treatment, respectively, were taken from recordings from wild-type and tg/tg mice [6], which served as a model system for DBN. We presented the FTN model with synchronized and unsynchronized input and found that, for both conditions, irregular (DBN) input trains resulted in higher FTN spike rates than regular (4-AP) ones. In the presence of unsynchronized Purkinje cell input, the acceleration of the FTN spike output during simulated DBN and the deceleration during simulated 4-AP treatment depended on STD at the Purkinje cell synapses. Our results provide a potential explanation for the pathology and 4-AP treatment of pathological nystagmus.
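
A minimal sketch of this comparison (not the morphologically realistic CN model used in the study) is given below: an integrate-and-fire "FTN" receives inhibitory input from gamma-renewal Purkinje-like spike trains with a prescribed coefficient of variation, transmitted through a Tsodyks-Markram-type depressing synapse, and its output rate is compared for irregular versus regular input. All parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)

def gamma_spike_train(rate, cv, duration):
    """Renewal spike train with the given rate and CV (gamma-distributed ISIs)."""
    shape = 1.0 / cv ** 2
    isis = rng.gamma(shape, scale=1.0 / (rate * shape), size=int(2 * rate * duration))
    times = np.cumsum(isis)
    return times[times < duration]

def ftn_rate(cv, n_pc=40, pc_rate=60.0, duration=5.0, dt=1e-4,
             tau_m=0.02, v_th=1.0, drive=1.8, w_inh=0.25, U=0.3, tau_rec=0.2):
    """Output rate of an LIF 'FTN' driven by depressing inhibitory Purkinje input."""
    steps = int(duration / dt)
    inh = np.zeros(steps)
    for _ in range(n_pc):
        spikes = gamma_spike_train(pc_rate, cv, duration)
        x, last = 1.0, None                      # available synaptic resources
        for s in spikes:
            if last is not None:
                x = 1.0 - (1.0 - x) * np.exp(-(s - last) / tau_rec)   # recovery
            inh[int(s / dt)] += w_inh * U * x    # depressed IPSP amplitude
            x *= 1.0 - U                         # depletion after release
            last = s
    v, n_spikes = 0.0, 0
    for k in range(steps):
        v += dt * (drive - v) / tau_m - inh[k]
        if v >= v_th:
            v, n_spikes = 0.0, n_spikes + 1
    return n_spikes / duration

print("irregular Purkinje input (DBN-like, CV = 1.0):", ftn_rate(cv=1.0), "Hz")
print("regular Purkinje input (4-AP-like, CV = 0.2):", ftn_rate(cv=0.2), "Hz")
```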

P52 Nonlinear response of noisy neurons

Sergej O. Voronenko1,2, Benjamin Lindner1,2

Correspondence: Sergej O. Voronenko - sergej@physik.hu-berlin.de

BMC Neuroscience 2016, 17(Suppl 1):P52

In many neuronal systems that exhibit high trial-to-trial variability, the time-dependent firing rate is thought to be the main information channel for time-dependent signals. However, for nerve cells with low intrinsic noise and highly oscillatory activity, synchronization, mode locking and frequency locking seem to be of major importance. Here, we present an extension of the linear response theory [1, 2] for the leaky integrate-and-fire neuron model to second order and demonstrate how the time-dependent firing rate can exhibit features that are reminiscent of mode locking and frequency locking. Although our theory allows us to predict the response to general weak time-dependent signals, the second-order effects are best demonstrated using cosine signals as in Fig. 34A. We consider a leaky integrate-and-fire model for which the subthreshold voltage, Fig. 34B, is subject to the signal and to Gaussian white noise. Whenever the voltage hits the threshold, it is reset to zero and a spike time is recorded in the raster plot, Fig. 34C. The firing rate can be obtained numerically by averaging over the spike trains or via a perturbation approach similar to the weakly nonlinear analysis in [3]. We find that the firing rate can exhibit pronounced nonlinear behavior, as can be seen from the excitation of a harmonic oscillation in Fig. 34D. Further effects that are not shown in Fig. 34 but are revealed by our analysis are a signal-dependent change of the mean firing rate and a pronounced nonlinear response to the sum of two cosine signals.
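
A minimal numerical sketch of this setup (not the analytical theory itself) is given below: a leaky integrate-and-fire neuron driven by white noise and a weak cosine signal, with the time-dependent firing rate estimated as a trial-averaged PSTH and its modulation read out at the signal frequency and at its second harmonic. Time units, parameter values and the readout are illustrative assumptions.

```python
import numpy as np

# time is measured in units of the membrane time constant (tau_m = 1)
dt, T, n_trials = 5e-3, 100.0, 3000
t = np.arange(0.0, T, dt)
mu, D = 1.1, 0.05                       # base current (suprathreshold) and noise intensity
eps, f_sig = 0.2, 0.2                   # signal amplitude and frequency (in units of 1/tau_m)
signal = eps * np.cos(2 * np.pi * f_sig * t)
rng = np.random.default_rng(11)

v = np.zeros(n_trials)
psth = np.zeros(t.size)
for k in range(t.size):
    v += dt * (mu + signal[k] - v) + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_trials)
    fired = v >= 1.0                    # threshold crossing: spike and reset
    psth[k] = fired.sum()
    v[fired] = 0.0
rate = psth / (n_trials * dt)           # trial-averaged time-dependent firing rate

# rate modulation at the signal frequency and at its second harmonic
spectrum = 2.0 * np.abs(np.fft.rfft(rate - rate.mean())) / t.size
freqs = np.fft.rfftfreq(t.size, dt)
for f in (f_sig, 2.0 * f_sig):
    amp = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"rate modulation at f = {f:.1f}: {amp:.3f}")
```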

Fig. 34

Nonlinear modulation of the firing rate by a cosine signal. A Signal, B subthreshold voltage, C rasterplot, D The time-dependent firing rate (red, noisy trace) is significantly different from the linear theory (dashed line) but is accurately described by the second-order response (solid line)

Summary and conclusions Here we demonstrate that the time-dependent firing rate (equivalent to the instantaneous population rate for neurons driven by a common stimulus) can exhibit pronounced nonlinearities even for weak signal amplitudes. The linear theory not only gives quantitatively wrong predictions but also fails to capture the timing of the modulation peaks. Hence, our theory not only has implications for the sinusoidal stimulation that is commonly used to study dynamic properties of nerve cells, but also demonstrates the relevance of the nonlinear response for the encoding of complex time-dependent signals.

Acknowledgements: This work was supported by the BMBF (FKZ: 01GQ1001A) and the DFG (research training group GRK1589/2).