2016

Google Summer of Code (GSoC) is a global program that offers students stipends to write code for open source projects.

We plan to apply as a mentoring organization again in 2016, with projects proposed and mentored by our Nodes and international community. Below is our initial project ideas list. The list will be updated continuously until a few days before the mentoring org application deadline on February 19, so please check back here for updates.

Read this first!

If you see a project you like and want to know more about, please contact us at gsoc@incf.org so we can give you access to our discussion group for INCF mentors and students. We will be using Trellis, the communication and collaboration platform for the scientific community that AAAS is developing - please see the box at the top right of this page for more info. Please ask your questions in the group first (after reading through the existing questions).

If you have general questions about INCF and our participation in GSoC, please contact us on gsoc@incf.org.

Other resources

2016 Google Summer of Code webpages (see especially the timeline and the F.A.Q.)

Proposals and ideas for potential INCF projects within Google Summer of Code:

1. The Virtual Brain (TVB)

The Virtual Brain (TVB) is one of the few open source neuroinformatics platforms for simulating whole-brain dynamics. Models are not limited to the human brain: researchers can also work with macaque or rodent connectomes. Models based on biologically realistic macroscopic connectivity will hopefully help us understand the global dynamics observed in the healthy and diseased brain. Whether you are interested in beautiful visualizations or differential equations, you can join us and help us improve!

Several open issues addressed by the following proposals involve:

* improving performance
* enhancing data IO and visualization

1.1 Visualize a large Connectivity

Description: Data visualization plays a crucial role in TVB's neuroinformatics platform, and a connectivity is a core datatype, modeling a full brain. An interaction paradigm needs to be proposed, and a connectivity visualizer implemented, in the browser client of TVB. We need to easily display and interact with up to 1000 regions in a connectivity (a 1000x1000 adjacency matrix). Both rendering performance and per-element interaction are important.
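As an illustration of the rendering-performance side, here is a minimal Python sketch (function names hypothetical, NumPy assumed) of one common tactic: serving a block-averaged overview of the full 1000x1000 matrix while keeping per-element values available for zoomed-in interaction.

```python
import numpy as np

def downsample_connectivity(weights, block=10):
    """Average an NxN weight matrix over block x block tiles to get a
    coarse overview that is cheap to render; individual entries stay
    available for on-demand lookup when the user zooms in."""
    n = weights.shape[0]
    k = n // block
    tiles = weights[:k * block, :k * block].reshape(k, block, k, block)
    return tiles.mean(axis=(1, 3))

# 1000 regions -> 1000x1000 adjacency matrix, shown as a 100x100 overview
w = np.random.rand(1000, 1000)
overview = downsample_connectivity(w, block=10)
```

The browser client would then request individual rows or cells only when the user interacts with a specific region pair.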

Skills required: HTML/JS/CSS & Python; Experience in web development, JQuery, SVG, WebGL, as well as server side frameworks such as CherryPy, is helpful.

1.2 Visualize a multi-dimensional space of data

Description: One important feature of TVB is the ability to launch a Parameter Space Exploration (PSE) group of simulations, which results in many datatypes arranged in a multi-dimensional space. We should offer users the possibility to explore this space easily through visual interaction. An interaction paradigm needs to be proposed and discussed first, and then implemented optimally.

Skills required: HTML/JS/CSS & Python; Experience in web development, JQuery, SVG, WebGL, as well as server side frameworks such as CherryPy, is helpful.

1.3 Optimize and parallelize the simulation core

Description: TVB's neural network simulator is currently being rewritten in C from the original Python. Part of the motivation is to take advantage of parallelization opportunities such as OpenMP, a lightweight API for shared-memory parallelization, or CUDA, an API for general-purpose computing on graphics processing units. This project involves profiling the existing code, proposing potential reorganizations of the data structures and rewrites of the algorithms in a parallel fashion, and implementing them via either OpenMP or CUDA/OpenCL.
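To make the parallelization target concrete, here is a toy Python sketch (names and the rate model itself are illustrative, not TVB's actual code) of the kind of delayed-coupling inner loop the C rewrite would profile and parallelize: every region's update is independent of the others, so the loop maps naturally onto OpenMP threads or CUDA thread blocks.

```python
import numpy as np

def delayed_coupling_step(weights, delays, history, t, dt=0.1, tau=1.0):
    """One Euler step of a toy rate model with delayed coupling:
    region i receives sum_j weights[i, j] * x_j(t - delays[i, j]).
    The O(N^2) loop over region pairs is the natural target for
    OpenMP threads or CUDA thread blocks in the C rewrite."""
    n = weights.shape[0]
    state = history[t]
    cols = np.arange(n)
    coupling = np.empty(n)
    for i in range(n):  # iterations are independent -> trivially parallel
        coupling[i] = np.dot(weights[i], history[t - delays[i], cols])
    return state + dt * (-state / tau + coupling)

rng = np.random.default_rng(0)
n, t = 8, 20
weights = rng.random((n, n)) * 0.01
delays = rng.integers(1, t, size=(n, n))      # delays in time steps
history = np.full((t + 1, n), 0.5)            # constant past activity
new_state = delayed_coupling_step(weights, delays, history, t)
```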

1.4 Translate TVB's neural mass models to NeuroML/LEMS

Description: TVB provides many options in terms of neural mass models; however, comparing these models to simulations from other simulators such as NEST or PyNN remains challenging, because those simulators do not implement neural mass models like the ones in TVB. A standard model description language, NeuroML/LEMS, has however been developed. This project proposes to translate TVB's neural mass models into the NeuroML or LEMS format, test their behavior against the current Python implementation, and publish them as an open source resource.

Further information

2. Implementation of an HDF5 export to EEGBase

Description: Our laboratory produces data/metadata from electroencephalography (EEG) and event-related potential (ERP) experiments. The data and the metadata describing them are stored in the EEGBase web portal (http://eegdatabase.kiv.zcu.cz). Because of the heterogeneous nature of the data/metadata, we have been implementing a system of templates in which a user can define different metadata for different experiments. These templates, based on a defined terminology, are implemented in the odML format. When performing an experiment, the user can fill in metadata through forms generated from a selected template. The stored data are then ready for sharing with others.

There are many initiatives within the community that promote the concept of open data published in open, standardized formats. A strong requirement is to provide data in unified formats such as odML or HDF5. We therefore want to provide our data in HDF5 as well.

Aims: The task for the candidate is first to learn the structure of the data/metadata stored in EEGBase, and then to study the latest progress in the development of a standardized HDF5 format for electrophysiology data. Based on this knowledge, he/she will implement a library for converting EEGBase data/metadata to HDF5. Finally, this library will be integrated into EEGBase.
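As a sketch of the conversion step (the section layout below is hypothetical; real EEGBase templates define the nesting), nested odML-style metadata can be flattened into path/value pairs that map directly onto HDF5 groups and attributes:

```python
def flatten_metadata(section, prefix=""):
    """Flatten nested odML-style metadata (dicts of dicts/values) into
    (path, value) pairs that map directly onto HDF5 groups and
    attributes. The section layout here is hypothetical -- real
    EEGBase templates would define the nesting."""
    items = {}
    for key, value in section.items():
        path = f"{prefix}/{key}"
        if isinstance(value, dict):
            items.update(flatten_metadata(value, path))
        else:
            items[path] = value
    return items

experiment = {
    "subject": {"age": 25, "handedness": "right"},
    "recording": {"sampling_rate_hz": 1000, "channels": 32},
}
flat = flatten_metadata(experiment)
# with h5py, each path could become a group attribute, e.g.:
# for path, value in flat.items():
#     group, attr = path.rsplit("/", 1)
#     f.require_group(group).attrs[attr] = value
```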

3. Open source, cross simulator, large scale cortical models

Description: An increasing number of studies use large scale network models incorporating realistic connectivity to understand information processing in cortical structures. High performance computational resources are becoming more widely available to computational neuroscientists for this type of modelling, and general purpose, well tested simulation environments such as NEURON and NEST are widely used. New, well annotated experimental data and detailed compartmental models are becoming available from the large scale brain initiatives. However, the majority of neuronal models published over the past several years are only available in simulator-specific formats, covering only a subset of the features associated with the original studies.

This work will involve converting a number of published large scale network models into open, simulator independent formats such as NeuroML and PyNN and testing them across multiple simulator implementations. They will be made freely available to the community through the Open Source Brain repository for reuse, modification and extension.

Aims: 1) Select a number of large scale cortical network models for the conversion & testing process. 2) Convert network structure and cell/synaptic properties to NeuroML and/or PyNN. Where appropriate, use the simulator independent specification in LEMS to specify cell/synapse dynamics and to allow mapping to simulators. Implementing extensions to PyNN, NeuroML or other tools may be required. 3) Make models available on the Open Source Brain repository, along with documentation and references.

Mentor: Padraig Gleeson (University College London, UK)

Keywords: Python, XML, networks, modelling, simulation

4. PISAK and OpenBCI

4.1 Qt5 in PISAK.org, an open system for alternative communication

General intro: Thousands of people are living a hell on Earth called locked-in state. Disease or accidents have left them unable to speak or type, turn on a radio or ask for help, unless accompanied by a trained caregiver. Salvation in terms of self-agency comes from assistive technologies. The communication system provided by Intel to Stephen Hawking proves that empathy is not the sole reason why society should help these people by providing appropriate technologies.

Tool intro/description: PISAK (http://braintech.pl/about-pisak) is an alternative communication system created during a 3-year, 200 k EUR project subsidized by the Polish National Centre for Research and Development, which included testing on groups of disabled users (the project completes in March 2016). Written in Python and customizable via JSON/CSS configs, it works on GNU/Linux and provides email, blogs and multimedia for those who can control switch, sip-and-puff, head movement or eye-tracking interfaces (BCI planned in the near future), in a FOSS and highly customizable system.

Project description & aims: The proposed project consists of replacing the currently unmaintained Clutter graphical engine (and residually used GTK) with the actively developed Qt5 framework. This change will allow for faster development and integration with recent assistive technology libraries and modules; facilitate integration with BCI, internationalization, and porting to other operating systems; and will ensure that PISAK is prepared for long-term development using cutting-edge open source tools.

4.2 A modern messaging core for OpenBCI

General intro: Brain-computer interfaces (BCIs) are a part of the future that is already here, but not evenly distributed. Available BCI research frameworks encompass decades of great research, yet outside academia users are fooled by mediocre EEG systems marketed as "BCI", and for patients in locked-in state there is just one commercial system, with limited functionality.

Tool intro/description: OpenBCI (http://openbci.pl) is one of the first FOSS frameworks for BCI, started in 2009 at the University of Warsaw with the motto "from lab to bedside". Written in Python and bundled with the best open source viewer for EEG (http://braintech.pl/svarog), after 8 years of research the system is on the verge of integration with PISAK.org, a complete framework for assistive technologies. The system is also compatible with the open hardware EEG amplifiers from the openbci.com project, recently funded via Kickstarter.

Project description & aims: The proposed project consists of replacing the core engine that passes all the messages, currently the Azouk Multiplexer (my former student's project, used and maintained only by us), with a lighter, popular and continuously maintained engine such as 0mq (zeromq.org), and creating low-level documentation. That would be a great step towards making OpenBCI truly multi-platform: it is written in Python, except for the drivers and the abovementioned Azouk Multiplexer.
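To illustrate the routing pattern such a messaging core provides, here is a pure-Python stand-in (topic names hypothetical; real code would use pyzmq PUB/SUB sockets across processes instead of an in-process broker):

```python
from collections import defaultdict

class Broker:
    """Pure-Python stand-in for the topic-based message routing that
    a 0mq PUB/SUB layer would provide between OpenBCI components.
    Topic and message names here are hypothetical."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers[topic]:
            cb(message)

broker = Broker()
received = []
broker.subscribe("eeg.samples", received.append)   # e.g. the signal viewer
broker.subscribe("eeg.samples", received.append)   # e.g. a BCI classifier
broker.publish("eeg.samples", {"channel": 1, "value": 3.2})
```

With 0mq, the broker disappears: each producer binds a PUB socket, each consumer connects a SUB socket with a topic filter, and the library handles delivery across processes and machines.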

5 Next-generation neuroscience data and model exploration tools

5.1 Fovea: a user-extensible graphical diagnostic tool

Fovea (https://github.com/robclewley/fovea) is a user-extensible graphical diagnostic tool for working with complex data and models. It is a library of Python classes built over components of PyDSTool (http://pydstool.sf.net) and Matplotlib. Fovea's present capabilities and applications to neural data and models were developed significantly in GSoC 2015 (see http://robclewley.github.io/spike_detection_with_fovea/ and prior blog posts), but there remain several directions to develop and explore to maximize the utility and accessibility of this package for less specialized users.

A primary capability of Fovea is to assist in the creation of organizing "layers" of graphical data in 2D or 3D sub-plots that are associated with specific calculations. These layers can be dynamically updated as parameters are changed or as time advances during a time series, and they can be grouped and hierarchically organized to ease navigation of complex data. Examples of layer usage are to display:

● Feature representations, such as characteristic threshold crossings of time series, clusters or principal component projections, or domains of an approximation's error bound satisfaction
● Augmenting meta-data, such as vectors showing velocity or acceleration at a specified position or time point during a simulation
● Other diagnostic quantities that can visually guide parameter fitting of a model or algorithm.

In addition, GUI buttons and dialog boxes, and command-line interfacing can provide additional interactivity with the graphical views.
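A minimal sketch of the layer idea (class and field names are hypothetical, not Fovea's actual API): a named piece of graphical data bound to a calculation that is re-run when parameters change.

```python
class Layer:
    """Minimal sketch of a Fovea-style layer: named graphical data
    bound to a calculation, re-run when parameters change. The real
    Fovea classes wrap Matplotlib artists; names here are hypothetical."""
    def __init__(self, name, calc):
        self.name = name
        self.calc = calc      # function of the current parameters
        self.data = None
        self.visible = True

    def update(self, params):
        self.data = self.calc(params)
        return self.data

# a layer showing threshold crossings of a time series
series = [0.1, 0.9, 0.2, 1.5, 0.3]
crossings = Layer("crossings",
                  lambda p: [i for i, v in enumerate(series)
                             if v > p["threshold"]])
crossings.update({"threshold": 0.5})
crossings.update({"threshold": 1.0})
```

Grouping such layers hierarchically, and re-running only those whose inputs changed, is what makes the navigation of complex data tractable.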

Aims: Depending on student interest and skills, there are four possible directions this project could usefully take: (1) Redevelop the existing prototype "application" into an actual Django or Flask web-based app running over a local server (maybe using Bokeh); (2) Extend the development of the existing core tools to work better for dynamical systems applications (especially biophysical models of neural excitability); (3) Continue to streamline the workflow for constructing new applications by improving the core design and adding convenient utilities; (4) Integrate existing functionality with the literate modeling prototype discussed in Project 5.2 (maybe in collaboration with a student working directly on that project, if both projects run).

As part of any direction of this project, there may also be an opportunity to create innovative new forms of plug-and-play UI components that will assist in visualization and diagnostics of neural (or similar) data.

5.2 “Literate modeling” capabilities for data analysis and model development

A basic workflow and loosely structured package for “literate modeling” has recently been explored (http://robclewley.github.io/ipython-notebooks-for-literate-modeling/). This prototype reuses several PyDSTool classes and data structures, but is intended to work mostly standalone within other computational or analytical frameworks. It is a part-graphical, part-command-line tool to explore and test model behavior and parameter variations against quantitative objective data or qualitative features while working inside a core computational framework (e.g. simulator), and could work well as an integration with the Fovea package mentioned in Project 5.1.

“Literate modeling” is a natural extension to “literate programming” and reproducible research practices that is intended to create a rich audit trail of the model development process itself (in addition to recording the provenance of code versions and parameter choices for simulation runs once a model is developed). Literate modeling aims to add structured, interpretive metadata to version-controlled models or analysis workflows through specially structured “active documents” (similar to the Jupyter notebook). Examples of such metadata include validatory regression tests, exploratory hypotheses (as sets of expected features and data-driven constraints), and data-driven contexts for parameter search or optimization. There is presently no other software tool that aims to provide this advanced support for hypothesis generation and testing in a computational setting!
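A minimal sketch of what one audit-trail entry might contain (field names are hypothetical): the code version and parameters are stored together with an interpretive hypothesis and the outcome of a validatory regression test.

```python
import json

def make_audit_record(model_id, params, hypothesis, test_fn, data):
    """Sketch of a literate-modeling audit entry: alongside the model
    version and parameters, store an interpretive hypothesis and the
    outcome of a validatory regression test. Field names hypothetical."""
    return {
        "model_id": model_id,
        "params": params,
        "hypothesis": hypothesis,
        "test_passed": bool(test_fn(data)),
    }

record = make_audit_record(
    model_id="HH-v2",
    params={"g_Na": 120.0, "g_K": 36.0},
    hypothesis="peak membrane voltage exceeds 0 mV for this input",
    test_fn=lambda trace: max(trace) > 0.0,
    data=[-65.0, -20.0, 30.0, -70.0],
)
entry = json.dumps(record)  # ready for a version-controlled audit trail
```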

Aims: Refine, improve, or redesign the existing prototype classes, database structure, and user workflow for the core functions of literate modeling. Add other core functionality to support other workflows. There is an option to design a browser-based interface to this system using Django or similar technology, or to collaborate on integration with Fovea (see Project 5.1). Document the tools, including creating a tutorial for the website. Test against predefined examples that will include manipulations of the Hodgkin-Huxley neural model and other simple dynamical systems.

6. Surface-based cortical parcellations

The Scalable Brain Atlas (SBA) is a widely used platform for sharing public brain atlases and related content on the web. It started at a time when WebGL was too much in its infancy to be of practical use, and it therefore uses quasi-3D, slice-based views of the brain. Times have changed, however, and WebGL has become widely available.

SBA now contains an experimental 3D viewer based on the X3D interface to WebGL; see for example this macaque template. What is immediately apparent is that the region borders on this surface are not smooth but jagged, and this is what we propose to fix in this GSoC project. It is up to the candidate to propose a strategy to do this; we have two lines of thought.

1) Take the surface mesh and the label volume, and return a surface mesh with slightly adjusted vertex positions, such that region borders are smooth. Polygons can be fitted to region contours on flattened surfaces.

2) Model each cortical region as an individual shape. This is the approach taken at http://www.civm.duhs.duke.edu/rhesusatlas/view_3d.html. One downside of this is that the smoothing of individual shapes creates rounded corners that affect the overall shape of the brain. Perhaps some combination of 1 and 2 is possible to get the best of both worlds.
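A minimal sketch of strategy 1 (the mesh here is a toy 1-D chain; function names are illustrative): Laplacian smoothing applied only to vertices flagged as lying on region borders, leaving interior vertices untouched.

```python
import numpy as np

def smooth_border(vertices, neighbors, is_border, alpha=0.5, iterations=10):
    """Sketch of strategy 1: Laplacian smoothing applied only to
    vertices on region borders, moving each toward the mean of its
    mesh neighbors while interior vertices stay fixed."""
    v = vertices.copy()
    for _ in range(iterations):
        new = v.copy()
        for i, nbrs in enumerate(neighbors):
            if is_border[i]:
                new[i] = (1 - alpha) * v[i] + alpha * v[nbrs].mean(axis=0)
        v = new
    return v

# toy chain: a jagged border vertex between two fixed interior neighbors
verts = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]])
nbrs = [[1], [0, 2], [1]]
border = [False, True, False]
smoothed = smooth_border(verts, nbrs, border)
```

A real implementation would additionally constrain the moved vertices to stay on (or near) the original cortical surface so the overall brain shape is preserved.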

You are welcome to submit your ideas and propose yourself as a GSOC candidate. Include a detailed list of your past programming experience.

Depending on the number of candidates, we may organize a pre-selection round on March 14. To be included in this round, contact us in the preceding week with your draft application.

7. The NIX project

7.1 Python-neuroshare, bringing it into the future together with NIX

Description: Python-neuroshare [1] is a high-level interface to the Neuroshare API, a standardized interface for accessing electrophysiology data stored in various different file formats. One of the "unsung heroes of scientific software", it is in the top 3 of software used in neuroscience [2,3].

Currently python-neuroshare provides only a limited custom format for storing the data. The NIX project [4], on the other hand, aims to develop standardized methods and models for storing electrophysiology and other neuroscience data together with their metadata in one common, open file format.

This GSoC project would be twofold: 1) refresh and modernize the python-neuroshare code base, e.g. adding Python 3 support; 2) modify the current neuroshare-to-HDF5 converter to use the NIX format instead (via nixpy [5]). Having such a converter would mean that any proprietary file that can be read with Neuroshare could be converted to an open, non-proprietary format.

7.2 NIX Dataframes support

Description: The NIX project aims to develop standardized methods and models for storing electrophysiology and other neuroscience data together with their metadata in one common, open file format. It does so by having a central DataArray object that can hold n-dimensional homogeneous data, linked with units and other metadata. Although a DataArray can hold any type of data (int, float, etc.), it can only hold one type at a time. In recent years, working with heterogeneous data in the form of tables (DataFrames) has become more and more popular (cf. pandas [2] for Python). The aim of this GSoC project would be to develop a proof of concept for such DataFrames in NIX and its Python bindings (so pandas DataFrames can be read from and written to NIX files).
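A minimal sketch of one possible layout (class and field names hypothetical): each column is a homogeneous, separately typed array that could live in its own DataArray-like store, with all columns sharing one row index.

```python
class DataFrameSketch:
    """Proof-of-concept layout for heterogeneous tables in NIX:
    each column is a homogeneous, separately typed array (so it can
    live in its own DataArray-like store), sharing one row index.
    Class and field names are hypothetical."""
    def __init__(self, columns):
        lengths = {len(v) for v in columns.values()}
        assert len(lengths) == 1, "columns must share one row count"
        self.columns = columns

    def row(self, i):
        return {name: col[i] for name, col in self.columns.items()}

df = DataFrameSketch({
    "trial": [1, 2, 3],               # int column
    "latency_ms": [12.5, 9.8, 11.1],  # float column
    "condition": ["a", "b", "a"],     # string column
})
```

Round-tripping a pandas DataFrame would then amount to writing each column to its own typed store and reassembling the columns (plus the index) on read.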

7.3 NIX filesystem backend support

Description: The NIX project aims to develop standardized methods and models for storing electrophysiology and other neuroscience data together with their metadata in one common, open file format. Currently NIX uses HDF5 files [2] to store the data on disk; however, the NIX C++ library was designed from the beginning with the idea that different use cases might require different storage solutions. NIX therefore has the concept of backends that are responsible for writing and reading the binary data. The basic groundwork has been done for a 'filesystem' backend that would store the data not in an HDF5 file but directly on the filesystem, using appropriate binary file formats, such as NumPy's npy format [3], for numeric data, and the YAML format for metadata. The aim of this GSoC project would be to complete the implementation of the 'filesystem' backend to the same level as the HDF5 backend, and to implement copy and sync methods to be able to copy between different files of possibly different backends.
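A minimal sketch of the proposed on-disk layout (paths hypothetical; JSON stands in for YAML to keep the sketch dependency-free): numeric data in npy files, metadata in a sidecar text file.

```python
import json
import tempfile
from pathlib import Path

import numpy as np

def write_data_array(root, name, data, metadata):
    """Sketch of the proposed filesystem backend: numeric data goes
    to a .npy file, metadata to a sidecar text file (JSON here where
    the project proposes YAML). Directory layout is hypothetical."""
    folder = Path(root) / name
    folder.mkdir(parents=True, exist_ok=True)
    np.save(folder / "data.npy", data)
    (folder / "metadata.json").write_text(json.dumps(metadata))

def read_data_array(root, name):
    folder = Path(root) / name
    data = np.load(folder / "data.npy")
    metadata = json.loads((folder / "metadata.json").read_text())
    return data, metadata

root = tempfile.mkdtemp()
write_data_array(root, "lfp_block1", np.arange(5.0), {"unit": "mV"})
data, meta = read_data_array(root, "lfp_block1")
```

The copy/sync methods the project asks for would then walk such directory trees and mirror them into (or out of) the HDF5 backend.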

8 Modular Machine Learning and Classification Toolbox for ImageJ

Executive summary: ImageJ is an open source, Java-based image processing program extensively used in the life sciences. The project aims at developing an ImageJ plugin which provides state-of-the-art image classification and segmentation based on a modularized filtering approach. The starting point of development is the existing Trainable Weka Segmentation plugin.

Context: ImageJ is a public domain Java image processing program extensively used in the life sciences. The program was designed with an open architecture that provides extensibility via Java plugins. User-written plugins make it possible to solve almost any image processing or analysis problem, or to integrate the program with 3rd-party software.

Weka (Waikato Environment for Knowledge Analysis) is a collection of machine learning algorithms for data mining tasks. Weka contains tools for data pre-processing, classification, regression, clustering, association rules, and visualization. It is also well suited for developing new machine learning schemes. The algorithms can either be applied directly to a dataset or called from your own Java code.

The Trainable Weka Segmentation (TWS) is an ImageJ/Fiji plugin and library that combines a collection of machine learning algorithms with a set of selected image features to produce pixel-based segmentations. TWS was developed with the main goal of providing a general purpose workbench that allows biologists to access state-of-the-art techniques in machine learning to improve their image segmentation results. It is part of the standard Fiji (ImageJ) distribution.

Motivation: The current disadvantage of the TWS plugin is that the filters are fixed and the input parameters are hard-coded in the GUI. This limits the extensibility and therefore the practical utility of the platform. The filter set is assembled ad hoc and some of the implementations are suboptimal. The aim of the project is to redesign the existing code base and provide an extensible end user platform.

Project description: The project will start by examining and refactoring the existing TWS plugin with the purpose of making it manifestly modular and able to incorporate pluggable functionality. The immediate objectives of the development are to:

The candidate is expected to propose a specification and detail the scope of the planned work.

Guidance and support: Mentors will provide guidance in machine learning and ImageJ integration for the candidate for developing the plugin.

Minimal set of deliverables

1) Requirement specification - prepared by the candidate after understanding the functionality
2) System design - detailed plan for development of the plugin and test cases
3) Implementation and testing - details of implementation and testing of the plugin

9 GPU enhanced Neuronal Networks (GeNN)

GeNN is an open source framework for GPU accelerated simulations of spiking neuronal networks based on code generation methods. Users define neuronal networks in a simple C++ API. GeNN translates this model description into optimised CUDA and C/C++ code that can then be used in user-side application code to simulate the described network. Depending on the GPU hardware and the model details, GeNN can achieve speedups between none and 500X. Below are a number of proposals that suggest improvements and extensions to GeNN.
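A toy illustration of the code-generation idea (names hypothetical, far simpler than GeNN's actual templates): a model described as data, an update equation string plus parameters, is translated into C source for the per-neuron update loop.

```python
def generate_update_kernel(neuron_name, equation, parameters):
    """Toy version of GeNN-style code generation: a model described
    as data (an update equation string plus parameters) is translated
    into C source for the per-neuron update loop. Names hypothetical."""
    params = "".join(f"    const float {k} = {v}f;\n"
                     for k, v in parameters.items())
    return (
        f"void update_{neuron_name}(float *V, int n) {{\n"
        f"{params}"
        f"    for (int i = 0; i < n; i++) {{  // one GPU thread per neuron\n"
        f"        V[i] = {equation};\n"
        f"    }}\n"
        f"}}\n"
    )

src = generate_update_kernel(
    "lif", "V[i] + DT * (-V[i] / tau)", {"DT": 0.1, "tau": 20.0})
```

In GeNN the generated code is additionally specialized for the target GPU (block sizes, memory layout), which is where the hardware-dependent speedups come from.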

9.1 A PyNN interface to GeNN

PyNN is a Python based framework for describing neuronal network models. It is widely used in the computational neuroscience and neuromorphic computing communities. The proposal is to develop a PyNN interface for GeNN so that users of PyNN will be able to benefit from accelerated GPU simulations with GeNN. Important aspects of this work will be a flexible design that allows for future changes in both PyNN and GeNN, good coverage of the entire PyNN model range and optimised data management between Python and the C/C++ based GeNN.

9.2 Adding OpenCL support to GeNN

Currently, GeNN generates C/C++ and CUDA code. This limits its use to NVIDIA hardware. The proposal is to extend GeNN to have the option of generating OpenCL code instead of CUDA code. This would make a much larger variety of GPU and other multi-core hardware available for use with GeNN. It will be interesting to explore in this project how optimisations translate between the different backends.

9.3 Developing brian2genn

Brian2genn is an interface that makes it possible to formulate neuronal network models in Brian 2 and run them on the GPU accelerated GeNN framework. Brian2genn is in an alpha stage and covers a considerable portion of Brian 2 features. However, there are Brian 2 features that are currently not supported, and there are some known inefficiencies in the brian2genn implementation. The proposed project would add support for additional Brian 2 features, address some of the known inefficiencies, and could also look more generally at refactoring and completing brian2genn for public release.

Skills required: Python, C/C++; knowledge of Brian or Brian 2, GeNN, and CUDA would be helpful.

10 Co-simulation in Geppetto: Integration of Geppetto and MUSIC

Description: Geppetto is a web-based, multi-algorithm, multi-scale simulation platform engineered to support the simulation of complex biological systems and their surrounding environment. It is developed together with scientists, is entirely open source, and was born out of the open science project OpenWorm.

Currently, it is possible to run different simulators in Geppetto. However, it is not yet possible to communicate spikes between simulators on-line, which would enable co-simulations in which recurrently coupled network modules run in different simulators.

MUSIC is an API and software library which enables efficient on-line communication of data between simulation tools connected in arbitrary topologies. Integrating MUSIC technology into Geppetto would enable user-friendly composition of larger simulations consisting of component networks simulated in different simulators. A typical example would be a multi-scale simulation combining multi-compartment neurons simulated in NEURON with integrate-and-fire neurons simulated in NEST.

Aims: The task for the candidate is to integrate MUSIC into Geppetto. A first step would be to develop and propose extensions to the Geppetto model abstraction with elements supporting cross-model connectivity (representing MUSIC port connections at a lower level). The conversion services (based on org.neuroml.export) which translate models into a simulator specification language, e.g., NeuroML->NEURON, would then be extended to support these elements. Finally, Geppetto's internal glue code would be extended to support MUSIC co-simulations.

11 The OpenWorm project

11.1 Advanced Neuron Dynamics in WormSim

Description: The OpenWorm project is building a simulation of C. elegans in an open science fashion. Last year, OpenWorm released WormSim, which puts a simple version of the worm simulation online, making it available within a web browser without any need to compile code, courtesy of Geppetto. Under the hood, Geppetto reuses many open source libraries: JavaScript libraries on the browser client side and Java-based libraries on the server side.

Geppetto's functionality has been built with a strong focus on its API, both server side and client side (in JavaScript), to ensure reproducibility and scripting capabilities. The console-based interactions are ideal for developers and testers, and allow scientists to easily access all the existing functionality.

Aims: The current visualization of the C. elegans nervous system in WormSim represents its 302 neurons as spheres connected by lines in a "ball and stick" model laid out in the shape of the worm. It currently only shows connectivity between the neurons, without showing the simulated dynamics of the neurons. Since Geppetto 0.2.4, however, there is the potential to add several things to improve the neuronal view experience and make it easier to understand what is going on.

11.2 Completing the master script for the simulation loop

Aims: Currently, the master Python script consists of stubs and partially working calls out to several software packages. Furthermore, the latest version of a unified Docker container for OpenWorm is still unable to execute the full loop. By the end of this project, the master Python script will be completely implemented, enabling anyone who installs the Docker container to execute the essential code bases for a run of the simulation. This will greatly increase the transparency of the OpenWorm code bases, encourage prospective contributors to help improve the simulation loop, and help them find a specific code base to contribute to.

11.3 Model completion dashboard

Description: The OpenWorm project is building a simulation of C. elegans in an open science fashion. The model completion dashboard is a web-based visualization of the digital versions of biological entities that are currently captured within OpenWorm's database API, PyOpenWorm. This interface is designed to display the results of the unifying modeling activity, and should be coordinated with ChannelWorm, the crowdsourcing platform for C. elegans ion channels. The interface allows a user to drill down into our model and view the states of completion of modeled components at each level. At the highest level, matrices display, using a color indicator, the level of completion of each cell in the model. Rolling over the data displayed at each level gives information about the references for that particular piece of data.

12 Importing and exporting simulator-independent model-descriptions with the Brian simulator

Brian is a widely used simulator for spiking neural networks, written in Python. The aim of this project is to automate the translation of models defined in Brian to and from simulator-independent formats. This will have several important applications: (1) it will make it possible to validate simulations by comparing the behaviour of the same model across different simulators; (2) it will facilitate the construction of shared resources (e.g. neuron models on http://www.opensourcebrain.org) that can be used by the widest possible community of researchers; (3) it will make it possible to document Brian simulations, e.g. for use in reports or publications.

This project has been made possible by a relatively recent convergence in the features offered by various simulators for flexible definitions of neural models. Brian can simulate arbitrary neural models that the user describes with mathematical equations and statements. Brian 2 (the version of Brian currently in beta) has built-in facilities for code generation [1], and can therefore run simulations in various programming languages even though it is itself written in Python. Until recently, simulator-independent languages (such as PyNN [2] and NeuroML [3]) worked by selecting from a fixed set of standard models, allowing only the parameters and connectivity to be varied. In recent years, however, several initiatives have created simulator-independent model languages that also describe neural models based on mathematical equations, e.g. NineML [4] and LEMS [5] (used with NeuroML 2), so it has now become possible to translate arbitrary models between these formats.

This work will build on existing work done as part of GSoC 2015, as well as in the NeuroML 2 project [6].

Aims:
1. Write an export module (based on Brian's code generation facilities) that exports a LEMS/NeuroML2 description from a Brian 2 script
2. Validate the exporter on published models across simulators
3. Write a module to export a human-readable model description from a Brian 2 simulation script

Optional aims (import of models into Brian 2):
- identify missing features in the existing LEMS -> Brian 2 converter created by the NeuroML2 project, and extend/improve it
- create a module that makes it possible to import specific parts of a LEMS model (e.g. the equations for a specific ion channel) into Brian 2
- enhance the support for multicompartmental models in Brian 2 so that it can load a NeuroML2 description of a cell morphology
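To make the export step concrete, here is a toy sketch (element names only loosely follow LEMS; a real exporter must match the schema exactly) that renders Brian-style differential equations as a LEMS-like ComponentType:

```python
import xml.etree.ElementTree as ET

def equations_to_lems(name, derivatives):
    """Sketch of the export step: Brian-style differential equations
    (variable -> right-hand-side string) rendered as a LEMS-like
    ComponentType with TimeDerivative elements. Element names loosely
    follow LEMS; a real exporter must match the schema exactly."""
    comp = ET.Element("ComponentType", name=name)
    dyn = ET.SubElement(comp, "Dynamics")
    for var, rhs in derivatives.items():
        ET.SubElement(dyn, "StateVariable", name=var)
        ET.SubElement(dyn, "TimeDerivative", variable=var, value=rhs)
    return ET.tostring(comp, encoding="unicode")

# dv/dt = (I - v) / tau  (a Brian 2 equation string, units omitted)
xml = equations_to_lems("leaky", {"v": "(I - v) / tau"})
```

The real work in the project lies in walking Brian's parsed equation objects (rather than raw strings) and handling units, thresholds, resets and parameters, which is where Brian's code generation machinery helps.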

13 Image Search for Brain Maps

About the project: Many modern studies of the human brain use Magnetic Resonance Imaging (MRI) to examine which parts of the brain are responsible for different behaviours (such as moving one's finger) and experiences (such as fear). Scientists have traditionally reported such findings in the print form of an academic journal, which generally lacks machine-readable information. This is why we created NeuroVault.org - a website for sharing and visualising three-dimensional maps of the human brain that makes it very easy for researchers to upload and annotate their data. In return, we provide visualisation and decoding services that can help researchers interpret their data. Most importantly, all of the data deposited in NeuroVault.org is publicly available to everyone and can be used to further our understanding of the human brain.

The project is built on the Django platform and has a small (~10) but responsive developer community. Our infrastructure is based on Docker which makes it very easy for a new developer to bring up a production equivalent software stack and start contributing. The code is distributed under the MIT license and is available at https://github.com/NeuroVault/NeuroVault

About your task: A few months ago we rolled out a new feature - the ability to use a selected brain map to search for other maps exhibiting similar patterns. The prototype (http://www.neurovault.org/images/15481/find_similar) has proven to be a useful tool for researchers, since it allows them to find studies from other researchers showing similar results. However, the current implementation is based on pairwise comparisons of brain maps, which scale quadratically with the size of the database. We expect that as the database grows, image-based search will slow down considerably. Therefore, we need to optimize and refactor this feature.
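One standard way out of the quadratic cost, sketched below with hypothetical names, is to precompute a short feature vector per map (here by random projection) and scan or index the compressed vectors instead of correlating full images pairwise:

```python
import numpy as np

def compress(brain_map, basis):
    """Project a (flattened) brain map onto a small random basis;
    similar maps keep similar compressed vectors, so candidate
    matches can be found in this space instead of by full pairwise
    correlation over the whole database."""
    return brain_map @ basis

rng = np.random.default_rng(1)
voxels, k = 500, 64                      # toy sizes; real maps are far larger
basis = rng.standard_normal((voxels, k)) / np.sqrt(k)
maps = rng.standard_normal((100, voxels))
index = compress(maps, basis)            # computed once, at upload time
query = compress(maps[7], basis)
scores = index @ query                   # cheap scan in 64 dimensions
best = int(np.argmax(scores))            # likely recovers map 7
```

In production the compressed vectors would live in the database and could be fed into an approximate-nearest-neighbour index, so each query touches far less data than a full pairwise comparison.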

Skills: This task will require some experience with Python, the Django framework and databases. Familiarity with Content-Based Image Search will be a plus.

About the mentors: Chris Gorgolewski (https://github.com/chrisfilo, chris.gorgolewski@gmail.com) is a neuroinformatics expert with many years of experience in developing projects for both academia and industry. Apart from NeuroVault and other data sharing initiatives, he has been involved in the neuroimaging data processing package Nipype. Dr. Gorgolewski has previously supervised both master's students and interns.

Russ Poldrack is a professor of psychology at Stanford University. He has been doing functional neuroimaging (fMRI) for over two decades, and alongside multiple academic publications he has also written a handbook on fMRI data analysis. In the past he has successfully supervised multiple master's and PhD students.

14 BRIAN/PyNN like network simulation in MOOSE

Description: The aim of this project is to make it possible to define neurons in MOOSE using a format similar to that used by BRIAN (briansimulator.org). This is exciting because MOOSE provides powerful capabilities for single-neuron and subcellular modeling, whereas BRIAN is designed for rapid network modeling; the project will thus greatly strengthen multiscale modeling capabilities. The BRIAN approach involves solving a set of ordinary differential equations for each neuron while the neurons communicate with each other using spikes. These spikes result in discontinuous changes in the variables of the differential equations. In this project, the BRIAN model definition system will be incorporated into the MOOSE environment.
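A minimal sketch of the BRIAN-style scheme described above (parameters are illustrative only): neurons follow an ODE between spikes, and a threshold crossing triggers the discontinuous reset and synaptic jumps.

```python
import numpy as np

def run_lif(n_steps=200, n=3, dt=0.1, tau=10.0, i_ext=1.5,
            v_th=1.0, v_reset=0.0, w=0.05):
    """Each neuron follows dv/dt = (i_ext - v) / tau between spikes;
    crossing v_th triggers a spike, an instantaneous synaptic jump of
    w per spike in the other neurons' v, and a reset to v_reset --
    the discontinuous changes described above."""
    v = np.zeros(n)
    spikes = []
    for step in range(n_steps):
        v += dt * (i_ext - v) / tau          # continuous ODE part
        fired = v >= v_th
        if fired.any():
            spikes.append((step, np.flatnonzero(fired)))
            v += w * fired.sum()             # discontinuous synaptic jumps
            v[fired] = v_reset               # reset overrides for spikers
    return v, spikes

v, spikes = run_lif()
```

The project would let such equation-based definitions drive MOOSE's solvers instead of a hand-written loop like this one.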


1. Email gsoc@incf.org to request an invite to Trellis

2. Accept the invite to Trellis, and enter some basic info in your profile

3. Find the discussion thread for the project you're interested in. Maybe the answer is already there - if not, go ahead and ask your question!

What is Trellis?

Trellis is a communication and collaboration platform for the scientific community – built and operated by AAAS. Trellis is currently in a private beta period, but INCF has early access and can invite users. For more information on Trellis, please see this blog post.