Multimodal features of structural and functional magnetic resonance imaging (MRI) of the human brain can assist in the diagnosis of schizophrenia. We performed a classification study on age-, sex-, and handedness-matched subjects. The dataset we used is publicly available from the Center for Biomedical Research Excellence (COBRE) and consists of two groups: patients with schizophrenia and healthy controls. We performed an independent component analysis and calculated global averaged functional connectivity-based features from the resting-state functional MRI data for all the cortical and subcortical anatomical parcellation...
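The abstract does not give implementation details, but the notion of a "global averaged" functional connectivity feature per parcel can be sketched on synthetic time courses; the array shapes, toy data, and averaging scheme below are assumptions, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
ts = rng.standard_normal((150, 8))     # 150 time points for 8 parcels (toy data)

corr = np.corrcoef(ts.T)               # parcel-by-parcel Pearson correlations
np.fill_diagonal(corr, 0.0)            # ignore self-connections
# one "global averaged" feature per parcel: its mean correlation to all others
global_fc = corr.sum(axis=1) / (corr.shape[0] - 1)
```

Each parcel thus contributes a single scalar feature, which is what makes such features convenient inputs to a classifier.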

The retina encodes visual scenes by trains of action potentials that are sent to the brain via the optic nerve. In this paper, we describe new, freely accessible end-user software designed to help better understand this coding. It is called PRANAS (https://pranas.inria.fr), standing for Platform for Retinal ANalysis And Simulation. PRANAS targets neuroscientists and modelers by providing a unique set of retina-related tools. PRANAS integrates a retina simulator that enables large-scale simulations while maintaining strong biological plausibility, as well as a toolbox for the analysis of spike train population statistics...
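PRANAS itself is a GUI platform, but the elementary first step behind the population statistics it targets, binning spike trains and correlating the counts, can be sketched as follows; the bin width and spike times are illustrative assumptions, not PRANAS internals:

```python
import numpy as np

def bin_spikes(spike_trains, t_max, dt):
    """Bin spike times into counts, the usual first step before computing
    population statistics such as pairwise correlations."""
    edges = np.arange(0.0, t_max + dt, dt)
    return np.array([np.histogram(st, bins=edges)[0] for st in spike_trains])

trains = [np.array([0.012, 0.251, 0.403, 0.712]),
          np.array([0.018, 0.254, 0.407, 0.718]),  # nearly synchronous with the first
          np.array([0.152, 0.553, 0.901])]
counts = bin_spikes(trains, t_max=1.0, dt=0.05)
corr = np.corrcoef(counts)                         # pairwise count correlations
```

With a 50 ms bin the first two trains land in identical bins, so their count correlation is 1, while the third train is uncorrelated with them.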

Informatics increases the yield of neuroscience research by improving how data are managed and shared. Data sharing and accessibility enable joint efforts between different research groups, as well as replication studies, pivotal for progress in the field. Research data archiving solutions are evolving rapidly to address these necessities; however, distributed data integration is still difficult because of the need for explicit agreements between disparate data models. To address these problems, ontologies are widely used in biomedical research to obtain common vocabularies and logical descriptions, but their application may suffer from scalability issues, domain bias, and loss of low-level data access...

Functional brain networks (FBNs) have become an increasingly important way to model the statistical dependence among neural time courses of the brain, and they provide effective imaging biomarkers for the diagnosis of some neurological or psychological disorders. Currently, Pearson's correlation (PC) is the simplest and most widely used method for constructing FBNs. Despite its advantages in statistical meaning and computational performance, PC tends to result in an FBN with dense connections. Therefore, in practice, the PC-based FBN needs to be sparsified by removing weak (potentially noisy) connections...
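The PC-based construction and the subsequent sparsification can be sketched in a few lines; the proportional-thresholding rule and the toy time courses below are assumptions (the paper may sparsify differently):

```python
import numpy as np

def build_fbn(time_courses, keep_ratio=0.2):
    """Construct a Pearson-correlation FBN, then sparsify it by keeping
    only the strongest `keep_ratio` fraction of connections.
    `time_courses` has shape (n_timepoints, n_regions)."""
    fbn = np.corrcoef(time_courses.T)          # dense PC matrix
    np.fill_diagonal(fbn, 0.0)                 # ignore self-connections
    strengths = np.abs(fbn[np.triu_indices_from(fbn, k=1)])
    threshold = np.quantile(strengths, 1.0 - keep_ratio)
    return np.where(np.abs(fbn) >= threshold, fbn, 0.0)

rng = np.random.default_rng(0)
ts = rng.standard_normal((120, 10))            # toy time courses
net = build_fbn(ts, keep_ratio=0.2)
```

Proportional thresholding (keeping a fixed edge density) rather than an absolute cut-off makes networks comparable across subjects, which is one common motivation for this step.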

Data visualization is one of the most important tools for exploring the brain. In this work, we introduce a novel browser-based solution for medical imaging data visualization and interaction with diffusion-weighted magnetic resonance imaging (dMRI) and tractography data: Fiberweb. It relies on WebGL, a recent technology that has yet to be fully explored for medical imaging purposes. There are currently very few software tools that allow medical imaging data visualization in the browser, and none of them supports efficient data interaction and processing, such as streamline selection and real-time deterministic and probabilistic tractography (RTT)...

One of the outstanding problems in the sorting of neuronal spike trains is the resolution of overlapping spikes. Resolving these spikes can significantly improve a range of analyses, such as those of response variability, correlation, and latency. In this paper, we describe a partially automated method that is capable of resolving overlapping spikes. After constructing template waveforms for well-isolated and distinct single units, we generated pair-wise combinations of those templates at all possible time shifts from each other...
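The pairwise template-combination step can be sketched directly: for two unit templates, form their sum at every relative time shift. The templates and shift range below are toy values, not the paper's data:

```python
import numpy as np

def shifted_sums(t1, t2, max_shift):
    """Every pairwise sum of two spike templates, with the second template
    shifted by -max_shift..max_shift samples (zero-padded at the edges)."""
    n = len(t1)
    combos = {}
    for shift in range(-max_shift, max_shift + 1):
        shifted = np.zeros(n)
        if shift >= 0:
            shifted[shift:] = t2[:n - shift]
        else:
            shifted[:n + shift] = t2[-shift:]
        combos[shift] = t1 + shifted
    return combos

t1 = np.array([0., 1., 3., 1., 0.])    # toy template for unit 1
t2 = np.array([0., 2., 5., 2., 0.])    # toy template for unit 2
combos = shifted_sums(t1, t2, max_shift=2)
```

Each summed waveform can then be matched against candidate overlap events, which is the core of overlap resolution.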

The use of automatic electrical stimulation in response to early seizure detection has been introduced as a new treatment for intractable epilepsy. For the effective application of this method as a successful treatment, improving the accuracy of early seizure detection is crucial. In this paper, we proposed the application of a frequency-based algorithm derived from principal component analysis (PCA), and demonstrated improved efficacy for early seizure detection in a pilocarpine-induced epilepsy rat model...
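The paper's exact algorithm is not reproduced here; the following is only a toy sketch of the general idea of combining per-window spectral features with PCA to flag anomalous (candidate ictal) windows. The window length, z-score rule, threshold, and simulated burst are all assumptions:

```python
import numpy as np

def pca_frequency_detector(eeg, win, n_components=3, z_thresh=2.5):
    """Toy detector: PCA on per-window power spectra; windows whose
    principal-component scores deviate strongly are flagged."""
    n_win = len(eeg) // win
    spectra = np.array([np.abs(np.fft.rfft(eeg[i * win:(i + 1) * win])) ** 2
                        for i in range(n_win)])
    centered = spectra - spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[:n_components].T        # projections onto top PCs
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
    return np.where(np.abs(z).max(axis=1) > z_thresh)[0]

fs, win = 250, 250                                 # 1-second windows
rng = np.random.default_rng(1)
t = np.arange(20 * fs) / fs
eeg = rng.standard_normal(len(t))
eeg[10 * fs:12 * fs] += 8 * np.sin(2 * np.pi * 8 * t[10 * fs:12 * fs])  # simulated burst
flagged = pca_frequency_detector(eeg, win)
```

On this synthetic trace the two windows containing the high-power 8 Hz burst stand out along the first principal component and are flagged.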

Computation of the headmodel and sourcemodel from the subject's MRI scan is an essential step for source localization of magnetoencephalography (MEG) or EEG sensor signals. In the absence of a real MRI scan, a pseudo MRI (and its associated headmodel and sourcemodel) is often approximated from an available standard MRI template or a pool of MRI scans, taking the subject's digitized head surface into account. In the present study, we approximated two types of pseudo MRI (and associated headmodel and sourcemodel) using an available pool of MRI scans, with a focus on MEG source imaging...
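The study's actual approximation procedure is more involved, but the core matching idea, picking from a pool the scan whose head surface best fits the subject's digitized head points, can be sketched with a mean nearest-point distance; the spherical toy surfaces stand in for real head shapes:

```python
import numpy as np

rng = np.random.default_rng(7)

def sphere_points(radius, n):
    """Random points on a sphere, standing in for a head surface."""
    v = rng.standard_normal((n, 3))
    return radius * v / np.linalg.norm(v, axis=1, keepdims=True)

def best_template(head_points, template_surfaces):
    """Index of the template surface closest to the digitized head points,
    by mean nearest-point distance."""
    def mean_nn_dist(pts, surf):
        d = np.linalg.norm(pts[:, None, :] - surf[None, :, :], axis=2)
        return d.min(axis=1).mean()
    return int(np.argmin([mean_nn_dist(head_points, s) for s in template_surfaces]))

head = sphere_points(1.0, 80)                                   # digitized head shape
templates = [sphere_points(1.0, 200), sphere_points(1.3, 200)]  # pool of MRI surfaces
idx = best_template(head, templates)
```

In practice a rigid alignment (e.g., ICP) would precede this comparison; the sketch assumes the surfaces are already co-registered.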

Human functional magnetic resonance imaging (fMRI) studies examining the putative firing of grid cells (i.e., the grid code) suggest that this cellular mechanism supports not only spatial navigation, but also more abstract cognitive processes. Despite increased interest in this research, there remain relatively few human grid code studies, perhaps due to the complex analysis methods, which are not included in standard fMRI analysis packages. To overcome this, we have developed the Matlab-based open-source Grid Code Analysis Toolbox (GridCAT), which performs all analyses, from the estimation and fitting of the grid code in the general linear model (GLM), to the generation of grid code metrics and plots...
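The core of a grid code GLM analysis, a quadrature pair of sine/cosine regressors with 60° periodicity, can be illustrated outside the toolbox. The simulated signal, "true" orientation, and noise level below are assumptions, not GridCAT internals:

```python
import numpy as np

rng = np.random.default_rng(6)
theta = rng.uniform(0, 2 * np.pi, 200)          # movement direction per event
phi = np.deg2rad(15)                            # assumed "true" grid orientation
signal = 0.8 * np.cos(6 * (theta - phi)) + 0.3 * rng.standard_normal(200)

# GLM with a quadrature pair of 60-degree-periodic regressors plus intercept
X = np.column_stack([np.cos(6 * theta), np.sin(6 * theta), np.ones_like(theta)])
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
amplitude = np.hypot(beta[0], beta[1])          # hexadirectional modulation strength
orientation = np.arctan2(beta[1], beta[0]) / 6  # recovered grid orientation (rad)
```

Because cos(6(θ − φ)) expands into cos(6θ)cos(6φ) + sin(6θ)sin(6φ), the two beta weights jointly encode both the modulation amplitude and the grid orientation, which is why the quadrature form is used.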

Recent discoveries that astrocytes exert proactive regulatory effects on neural information processing, and that they are deeply involved in normal brain development and disease pathology, have stimulated broad interest in understanding astrocyte functional roles in brain circuits. Measuring astrocyte functional status is now technically feasible, thanks to recent advances in modern microscopy and ultrasensitive, cell-type-specific, genetically encoded Ca(2+) indicators for chronic imaging. However, there is a large gap between the capability of generating large datasets via calcium imaging and the availability of sophisticated analytical tools for decoding astrocyte function...

The aim of this work was to design a personalized BCI model to detect pedaling intention through EEG signals. The approach sought to select the best among many possible BCI models for each subject. The choice was between different processing windows, feature extraction algorithms, and electrode configurations. Moreover, data were analyzed offline and pseudo-online (in a way suitable for real-time applications), with a preference for the latter case. A process for selecting the best BCI model was described in detail...
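The per-subject selection loop can be caricatured as a grid search: score each candidate configuration on labeled trials and keep the best one. The synthetic trials, candidate windows, and threshold classifier below are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(8)

def score(config, X, y):
    """Accuracy of a simple threshold classifier on the mean of the chosen
    time window of each trial (a stand-in for a real feature pipeline)."""
    start, length = config
    feats = X[:, start:start + length].mean(axis=1)
    return ((feats > feats.mean()).astype(int) == y).mean()

# synthetic trials: "pedaling intention" raises the signal in samples 20-40
X = rng.standard_normal((60, 50))
y = np.array([0, 1] * 30)
X[y == 1, 20:40] += 0.6

candidates = [(0, 10), (10, 20), (20, 20), (30, 20)]   # (start, length) windows
best = max(candidates, key=lambda c: score(c, X, y))
```

A real selection loop would use cross-validation rather than training-set accuracy, but the structure, enumerate configurations, score, take the argmax, is the same.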

Certain differences between the brain networks of healthy and epileptic subjects have been reported even during interictal activity, in which no epileptic seizures occur. Here, magnetoencephalography (MEG) data recorded in the resting state are used to discriminate between healthy subjects and patients with either idiopathic generalized epilepsy or frontal focal epilepsy. Signal features extracted from interictal periods without any epileptiform activity are used to train a machine learning algorithm to draw a diagnosis...
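The abstract leaves the classifier unspecified; a minimal stand-in, a nearest-centroid classifier on synthetic "interictal features", shows the shape of such a feature-based diagnostic pipeline (the feature dimensions and class separation are assumptions):

```python
import numpy as np

def fit_centroids(X, y):
    """Nearest-centroid classifier: one mean feature vector per class."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, X):
    labels = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[l], axis=1) for l in labels])
    return np.array([labels[i] for i in dists.argmin(axis=0)])

rng = np.random.default_rng(2)
# synthetic "interictal features": controls centred near 0, patients near 1
X = np.vstack([rng.normal(0.0, 0.3, (30, 5)), rng.normal(1.0, 0.3, (30, 5))])
y = np.array([0] * 30 + [1] * 30)
centroids = fit_centroids(X, y)
acc = (predict(centroids, X) == y).mean()
```

Training-set accuracy is reported here only for brevity; a proper evaluation would hold out subjects, as diagnosis studies must.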

The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data...
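SamuROI's internals are not shown here, but the elementary ROI-analysis step it supports, averaging fluorescence over a user-defined ROI and normalizing it, can be sketched; the toy movie, mask, and crude whole-recording mean baseline are assumptions:

```python
import numpy as np

def roi_traces(movie, masks):
    """Mean fluorescence per ROI and frame, normalized to a crude
    whole-recording baseline (dF/F)."""
    traces = np.array([movie[:, m].mean(axis=1) for m in masks])
    f0 = traces.mean(axis=1, keepdims=True)     # crude baseline per ROI
    return (traces - f0) / f0

movie = np.ones((100, 8, 8))                    # frames x height x width
movie[50:, 2:4, 2:4] = 2.0                      # the ROI brightens mid-recording
mask = np.zeros((8, 8), dtype=bool)
mask[2:4, 2:4] = True
dff = roi_traces(movie, [mask])
```

Real pipelines estimate the baseline more carefully (e.g., a running percentile), but the ROI-mask-to-trace reduction is the same at any spatial scale.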

NEST is a simulator for spiking neuronal networks that commits to a general-purpose approach: it allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to the code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times...

Recently proposed tractography and connectomics approaches often require a very large number of streamlines, on the order of millions. Generating, storing, and interacting with these datasets is currently quite difficult, since they require a lot of memory and processing time. Compression is a common approach to reduce data size. Recently, such an approach was proposed that consists of removing collinear points from the streamlines. Removing points from streamlines results in files that cannot be robustly post-processed or interacted with using existing tools, which are for the most part point-based...
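The collinear-point removal the paragraph refers to can be sketched in a few lines. This simplified version tests each interior point against its immediate neighbours only; a real linearization pass would track error against the last kept point:

```python
import numpy as np

def compress_streamline(points, tol=1e-6):
    """Drop interior points that are collinear with their neighbours
    (within `tol`), keeping both endpoints."""
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        v1, v2 = cur - prev, nxt - cur
        # near-zero cross product means the three points are collinear
        if np.linalg.norm(np.cross(v1, v2)) > tol:
            kept.append(cur)
    kept.append(points[-1])
    return np.array(kept)

line = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.],
                 [3., 1., 0.], [4., 2., 0.]])
compressed = compress_streamline(line)
```

The two straight segments each lose their redundant interior point, which is exactly why downstream point-based tools break: the surviving points are no longer equidistant.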

Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set...
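The zero-shot idea, learn a mapping from neural signals to semantic attributes, then classify an unseen class by its nearest attribute vector, can be sketched end-to-end. The attribute vectors, class names, and linear simulation below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical binary semantic attributes for four stimulus classes
attrs = {"cat": [1, 0, 1], "dog": [1, 1, 0], "car": [0, 1, 1], "cup": [0, 0, 1]}
train_classes = ["cat", "dog", "car"]          # "cup" is held out: zero-shot target

W_true = rng.standard_normal((3, 8))           # attributes -> simulated neural features

def simulate(cls):
    return np.array(attrs[cls]) @ W_true + 0.05 * rng.standard_normal(8)

X = np.array([simulate(c) for c in train_classes for _ in range(20)])
A = np.array([attrs[c] for c in train_classes for _ in range(20)], dtype=float)
M, *_ = np.linalg.lstsq(X, A, rcond=None)      # learned neural -> attribute map

test_signal = simulate("cup")                  # a class never seen in training
pred_attrs = test_signal @ M
pred = min(attrs, key=lambda n: np.linalg.norm(pred_attrs - np.array(attrs[n])))
```

Because the mapping is learned in attribute space rather than over class labels, the held-out class is recoverable as long as its attribute vector lies in the span of the training attributes.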

Faced with a new concept to learn, our brain does not work in isolation: it uses all previously learned knowledge. In addition, the brain is able to set aside the knowledge that does not benefit us and to use what is actually useful. In machine learning, we do not usually benefit from the knowledge of other learned tasks. However, there is a methodology called Multitask Learning (MTL), which is based on the idea that learning a task along with other related tasks produces a transfer of information between them, which can be advantageous for learning the first one...
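A minimal numeric illustration of the MTL idea, related tasks sharing structure so that joint learning helps, using ridge regression; the task-generation scheme and the pooling-as-MTL simplification are assumptions (real MTL methods couple tasks more subtly than plain pooling):

```python
import numpy as np

rng = np.random.default_rng(4)
w_shared = rng.standard_normal(6)                      # structure common to all tasks
tasks = []
for _ in range(3):                                     # three related tasks
    w_task = w_shared + 0.1 * rng.standard_normal(6)   # small task-specific deviation
    X = rng.standard_normal((15, 6))                   # only a few samples per task
    y = X @ w_task + 0.1 * rng.standard_normal(15)
    tasks.append((X, y))

def ridge(X, y, lam=1.0):
    """Closed-form ridge regression."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# single-task baselines: each task fitted only on its own 15 samples
single = [ridge(X, y) for X, y in tasks]
# simplest "multitask" variant: pool all related tasks' data into one fit
X_all = np.vstack([X for X, _ in tasks])
y_all = np.concatenate([y for _, y in tasks])
shared = ridge(X_all, y_all)

err_single = np.mean([np.linalg.norm(w - w_shared) for w in single])
err_shared = np.linalg.norm(shared - w_shared)
```

Comparing `err_shared` against `err_single` shows how pooling related tasks typically recovers the shared structure more accurately than any single small-sample fit.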

Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem...

To date, automated or semi-automated software and algorithms for the segmentation of neurons from three-dimensional imaging datasets have had limited success. The gold standard for neural segmentation is considered to be manual isolation performed by an expert. To facilitate the manual isolation of complex objects from image stacks, such as neurons in their native arrangement within the brain, a new Manual Segmentation Tool (ManSegTool) has been developed. ManSegTool allows the user to load an image stack, scroll through the images, and manually draw the structures of interest stack-by-stack...