Authors: Guoliang Xu; Ming Li; Chong Chen
First page: 014002
Abstract: We have previously reported an L2-gradient flow (L2GF) method for cryo-electron tomography and single-particle reconstruction, which performs reasonably well. The aim of this paper is to further improve both the computational efficiency and the accuracy of the L2GF method. In a finite-dimensional space spanned by radial basis functions, a minimization problem combining a fourth-order geometric flow with an energy-decreasing constraint is solved by a bi-gradient method. The bi-gradient method involves a free parameter $\beta \in [0,1]$. As β increases from 0 to 1, the structures of the reconstructed function are captured from coarse to fine. The experimental results show that the proposed method yields more desirable results.
Citation: Computational Science & Discovery
PubDate: 2015-03-26T00:00:00Z
DOI: 10.1088/1749-4699/8/1/014002
Issue No: Vol. 8, No. 1 (2015)
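The abstract does not define the bi-gradient update itself, so the following is only a generic stand-in illustrating the "energy-decreasing constraint" idea: gradient descent on a small quadratic energy with an Armijo-style backtracking check that forces the energy to decrease at every step. The energy, matrices, and step schedule are invented for illustration and are not the paper's method.

```python
import numpy as np

# Illustrative stand-in (NOT the paper's bi-gradient method): minimize a
# quadratic energy while explicitly enforcing an energy decrease per step
# via backtracking line search.

def energy(c, A, b):
    return 0.5 * c @ A @ c - b @ c     # toy quadratic model energy

def descend(A, b, c0, steps=200, lr=1.0):
    c = c0.copy()
    for _ in range(steps):
        g = A @ c - b                  # gradient of the energy
        t = lr
        # Halve the step until an Armijo-style sufficient decrease holds,
        # which guarantees the energy-decreasing constraint.
        while (energy(c - t * g, A, b) > energy(c, A, b) - 0.5 * t * (g @ g)
               and t > 1e-12):
            t *= 0.5
        c = c - t * g
    return c

A = np.array([[2.0, 0.0], [0.0, 10.0]])
b = np.array([2.0, 10.0])
c = descend(A, b, np.zeros(2))         # exact minimizer is (1, 1)
print(c)
```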

Authors: Lion Krischer; Tobias Megies; Robert Barsch; Moritz Beyreuther; Thomas Lecocq; Corentin Caudron; Joachim Wassermann
First page: 014003
Abstract: The Python libraries NumPy and SciPy are extremely powerful tools for numerical processing and analysis, well suited to a large variety of applications. To utilize these abilities and provide a bridge for seismology into the larger scientific Python ecosystem, we developed ObsPy (http://obspy.org), a Python library for seismology intended to facilitate the development of seismological software packages and workflows. Scientists in many domains who wish to convert their existing tools and applications to take advantage of a platform like the one Python provides are confronted with several hurdles, such as special file formats, unknown terminology, and the lack of a suitable replacement for a non-trivial piece of software. We present an approach to implementing a domain-specific time series library on top of the scientific NumPy stack. In so doing, we show a realization of an abstract internal representation of time series data permitting I/O support for a diverse co...
Citation: Computational Science & Discovery
PubDate: 2015-05-18T00:00:00Z
DOI: 10.1088/1749-4699/8/1/014003
Issue No: Vol. 8, No. 1 (2015)
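The core idea of a domain-specific time-series container on top of NumPy can be sketched as follows. This is an invented, minimal analogue built directly on an ndarray, not ObsPy's actual Trace/Stream API; the class name, fields, and methods are illustrative only.

```python
import numpy as np

class Trace:
    """Minimal invented analogue of a domain-specific time-series
    container (illustrative only; not ObsPy's actual API)."""
    def __init__(self, data, sampling_rate=1.0, station="XYZ"):
        self.data = np.asarray(data, dtype=float)   # samples
        self.sampling_rate = float(sampling_rate)   # Hz
        self.station = station                      # domain metadata

    @property
    def npts(self):
        return self.data.size

    def times(self):
        # Sample times in seconds relative to the first sample.
        return np.arange(self.npts) / self.sampling_rate

    def slice(self, t0, t1):
        # Return a new Trace restricted to the window [t0, t1] seconds.
        i0 = int(round(t0 * self.sampling_rate))
        i1 = int(round(t1 * self.sampling_rate)) + 1
        return Trace(self.data[i0:i1], self.sampling_rate, self.station)

tr = Trace(np.sin(np.linspace(0, 10, 1001)), sampling_rate=100.0)
sub = tr.slice(2.0, 4.0)
print(sub.npts)  # 201
```

The point of such a wrapper is that file-format readers only need to produce this one internal representation, after which all downstream processing is format-agnostic.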

Authors: X Hou; B R Hodges; S Negusse; C Barker
First page: 014004
Abstract: The Hydrodynamic and oil spill modeling system for Python (HyosPy) is presented as an example of a multi-model wrapper that ties together existing models, web access to forecast data, and visualization techniques as part of an adaptable operational forecast system. The system is designed to automatically run a continual sequence of hindcast/forecast hydrodynamic models so that multiple predictions of the time- and space-varying velocity fields are already available when a spill is reported. Once the user provides the estimated spill parameters, the system runs multiple oil spill prediction models using the output from the hydrodynamic models. As new wind and tide data become available, they are downloaded from the web, used as forcing conditions for a new instance of the hydrodynamic model, and then applied to a new instance of the oil spill model. The predicted spill trajectories from multiple oil spill models are visualized through Python methods invoking Google Map™ a...
Citation: Computational Science & Discovery
PubDate: 2015-06-12T00:00:00Z
DOI: 10.1088/1749-4699/8/1/014004
Issue No: Vol. 8, No. 1 (2015)
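The forcing → hydrodynamics → spill-model chain described above can be sketched as a tiny pipeline. All function names and the "models" themselves are hypothetical stand-ins, not HyosPy's API; real runs would download forecasts and launch external solvers.

```python
# Illustrative pipeline skeleton (names and models are invented, not
# HyosPy's actual API): each new forcing dataset triggers a fresh
# hydrodynamic run, whose velocity field then drives a spill model.

def fetch_forcing(step):
    # Stand-in for downloading wind/tide forecast data from the web.
    return {"wind": 5.0 + step, "tide": 0.3 * step}

def run_hydro(forcing):
    # Stand-in for a hydrodynamic model: returns a (u, v) velocity.
    return (0.1 * forcing["wind"], forcing["tide"])

def run_spill(velocity, x0=(0.0, 0.0), dt=1.0, nsteps=3):
    # Advect a spill particle with the (constant, toy) velocity field.
    x, y = x0
    u, v = velocity
    for _ in range(nsteps):
        x, y = x + u * dt, y + v * dt
    return (x, y)

# Continual hindcast/forecast sequence: velocity fields are ready before
# a spill is reported, so the spill models can start immediately.
velocities = [run_hydro(fetch_forcing(s)) for s in range(3)]
trajectories = [run_spill(vel) for vel in velocities]
print(trajectories[0])  # (1.5, 0.0)
```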

Authors: Kapil Arya; Gene Cooperman
First page: 014005
Abstract: DMTCP (Distributed MultiThreaded CheckPointing) is a mature checkpoint–restart package. It operates in user space without kernel privilege and adapts to application-specific requirements through plugins. While DMTCP has been able to checkpoint Python and IPython ‘from the outside’ for many years, a Python module has recently been created to support DMTCP. IPython support is included through a new DMTCP plugin. A checkpoint can be requested interactively within a Python session or under the control of a specific Python program. Further, the Python program can execute specific Python code prior to checkpoint, upon resuming (within the original process), and upon restarting (from a checkpoint image). Applications of DMTCP are demonstrated for: (i) Python-based graphics using virtual network client, (ii) a fast/slow technique to use multiple hosts or cores to check one (Cython Behnel S et al 2011 Comput. Sci. Eng. 13
Citation: Computational Science & Discovery
PubDate: 2015-07-17T00:00:00Z
DOI: 10.1088/1749-4699/8/1/014005
Issue No: Vol. 8, No. 1 (2015)
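The pre-checkpoint / on-resume / on-restart hook pattern described in the abstract can be illustrated with a toy, in-process checkpointer. This is conceptual only: DMTCP checkpoints entire processes at the system level, and its Python module exposes a different interface; the class and hook names here are invented.

```python
import io
import pickle

# Toy illustration of the checkpoint/resume/restart hook pattern
# (conceptual only; DMTCP works at the whole-process level and its
# Python module's API differs from these invented names).

class Checkpointer:
    def __init__(self):
        self.pre_hooks = []    # run just before checkpointing
        self.post_hooks = []   # run upon resuming in the original process

    def checkpoint(self, state):
        for hook in self.pre_hooks:
            hook(state)
        buf = io.BytesIO()
        pickle.dump(state, buf)          # the "checkpoint image"
        for hook in self.post_hooks:
            hook(state)
        return buf.getvalue()

    def restart(self, image):
        # Run upon restarting from a checkpoint image.
        return pickle.loads(image)

ckpt = Checkpointer()
# e.g. flush buffers before the image is taken:
ckpt.pre_hooks.append(lambda s: s.__setitem__("flushed", True))
state = {"iteration": 42, "flushed": False}
image = ckpt.checkpoint(state)
restored = ckpt.restart(image)
print(restored["iteration"])  # 42
```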

Authors: Ursula Iturrarán-Viveros; Miguel Molero-Armenta
First page: 014006
Abstract: Graphics processing units (GPUs) have become increasingly powerful in recent years. Programs exploiting the advantages of this architecture can achieve large performance gains, and this is the aim of new initiatives in high performance computing. The objective of this work is to develop an efficient tool to model 2D elastic wave propagation on parallel computing devices. To this end, we implement the elastodynamic finite integration technique, using the industry open standard Open Computing Language (OpenCL) for cross-platform, parallel programming of modern processors, and an open-source toolkit called PyOpenCL. The code written with PyOpenCL can run on a wide variety of platforms; it can be used on AMD or NVIDIA GPUs as well as classical multicore CPUs, adapting to the underlying architecture. Our main contribution is its implementation with local and global memory and the performance analysis using five different computing devices (including Kepler, one of the fastest and ...
Citation: Computational Science & Discovery
PubDate: 2015-07-27T00:00:00Z
DOI: 10.1088/1749-4699/8/1/014006
Issue No: Vol. 8, No. 1 (2015)
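The kind of explicit stencil update that such GPU kernels parallelize can be sketched in plain NumPy. This is not the paper's elastodynamic finite integration technique and contains no OpenCL: it is a scalar 2D wave equation with a leapfrog time step, with grid size, wave speed, and time step chosen purely for illustration.

```python
import numpy as np

# NumPy sketch of an explicit stencil update of the kind OpenCL/PyOpenCL
# kernels parallelize across GPU threads. Scalar 2D wave equation only;
# all parameters are illustrative.

n, c, dx, dt = 64, 1.0, 1.0, 0.5          # CFL: c*dt/dx <= 1/sqrt(2)
u_prev = np.zeros((n, n))
u = np.zeros((n, n))
u[n // 2, n // 2] = 1.0                   # point source at the center

def step(u, u_prev):
    # Five-point Laplacian on the interior (boundary held at zero).
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] +
                       u[1:-1, 2:] + u[1:-1, :-2]
                       - 4 * u[1:-1, 1:-1]) / dx**2
    # Leapfrog in time: u_next = 2u - u_prev + (c*dt)^2 * laplacian(u)
    return 2 * u - u_prev + (c * dt) ** 2 * lap

for _ in range(10):
    u, u_prev = step(u, u_prev), u
print(u.shape)
```

In the GPU version, each thread would compute one grid point of this update, with frequently reused neighbors staged in local memory, which is exactly the local- versus global-memory trade-off the paper analyzes.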

Authors: James Bergstra; Nicolas Pinto; David D Cox
First page: 014007
Abstract: Machine learning benchmark data sets come in all shapes and sizes, whereas classification algorithms assume sanitized input, such as (x, y) pairs with vector-valued input x and integer class label y. Researchers and practitioners know all too well how tedious it can be to get from the URL of a new data set to a NumPy ndarray suitable for, e.g., pandas or sklearn. The SkData library handles that work for a growing number of benchmark data sets (small and large) so that one-off in-house scripts for downloading and parsing data sets can be replaced with library code that is reliable, community-tested, and documented. The SkData library also introduces an open-ended formalization of training and testing protocols that facilitates direct comparison with published research. This paper describes the usage and architecture of the SkData library.
Citation: Computational Science & Discovery
PubDate: 2015-07-28T00:00:00Z
DOI: 10.1088/1749-4699/8/1/014007
Issue No: Vol. 8, No. 1 (2015)
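The download-parse-cache pattern such a library replaces can be sketched as follows. Function names are invented, not SkData's API, and the "download" is simulated by writing a small local file so the sketch is self-contained.

```python
import os
import tempfile
import numpy as np

# Sketch of the download-parse-cache pattern that dataset libraries
# standardize (invented names; not SkData's actual API).

CACHE = os.path.join(tempfile.gettempdir(), "skdata_sketch")

def fetch(name):
    """Return the raw file for a data set, 'downloading' on a cache miss.
    The download is simulated here by writing a small CSV locally."""
    os.makedirs(CACHE, exist_ok=True)
    path = os.path.join(CACHE, name + ".csv")
    if not os.path.exists(path):
        with open(path, "w") as f:
            f.write("1.0,2.0,0\n3.0,4.0,1\n")   # stand-in for a real fetch
    return path

def as_xy(name):
    """Parse the cached file into sanitized (X, y) arrays: vector-valued
    inputs X and integer class labels y, as classifiers expect."""
    raw = np.loadtxt(fetch(name), delimiter=",")
    return raw[:, :-1], raw[:, -1].astype(int)

X, y = as_xy("toy")
print(X.shape, y.tolist())  # (2, 2) [0, 1]
```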

Authors: James Bergstra; Brent Komer; Chris Eliasmith; Dan Yamins; David D Cox
First page: 014008
Abstract: Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and in parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a ...
Citation: Computational Science & Discovery
PubDate: 2015-07-28T00:00:00Z
DOI: 10.1088/1749-4699/8/1/014008
Issue No: Vol. 8, No. 1 (2015)
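The define-a-space-then-minimize workflow can be illustrated with a self-contained sketch. Hyperopt's real interface centers on `fmin`, search spaces built from `hp` expressions, and suggestion algorithms such as `tpe.suggest`; the code below instead uses plain random search with invented names, so it shows only the workflow, not Hyperopt's adaptive (model-based) strategy.

```python
import random

# Minimal illustration of hyperparameter search over a declared space.
# Plain random search stands in for sequential model-based optimization;
# all names here are invented, not Hyperopt's API.

random.seed(0)
space = {"lr": (1e-4, 1e-1), "depth": (2, 8)}    # hyperparameter ranges

def objective(params):
    # Stand-in for a slow model-training run returning a validation loss.
    return (params["lr"] - 0.01) ** 2 + (params["depth"] - 5) ** 2 * 1e-4

def minimize(objective, space, max_evals=50):
    trials = []
    for _ in range(max_evals):
        params = {"lr": random.uniform(*space["lr"]),
                  "depth": random.randint(*space["depth"])}
        trials.append((objective(params), params))
    # Keep the full trial history; return the best (loss, params) pair.
    return min(trials, key=lambda t: t[0])

best_loss, best_params = minimize(objective, space)
print(best_loss, best_params)
```

A model-based optimizer differs from this sketch only in how the next `params` is proposed: it fits a surrogate to past trials and samples where the surrogate predicts improvement, which is what makes it efficient per function evaluation.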

Authors: Geoffrey M Poore
First page: 014010
Abstract: PythonTeX is a LaTeX package that allows Python code in LaTeX documents to be executed and provides access to its output. This makes possible reproducible documents that combine results with the code required to generate them. Calculations and figures may be placed next to the code that created them. Since code is adjacent to its output in the document, editing may be more efficient. Since code output may be accessed programmatically in the document, copy-and-paste errors are avoided and the output is always guaranteed to be in sync with the code that generated it. This paper provides an introduction to PythonTeX and an overview of major features, including performance optimizations, debugging tools, and dependency tracking. Several complete examples are presented. Finally, advanced features are summarized. Though PythonTeX was designed for Python, it may be extended to support additional languages; support for the Ruby and Julia languages is already included. PythonTeX contains a utility f...
Citation: Computational Science & Discovery
PubDate: 2015-07-30T00:00:00Z
DOI: 10.1088/1749-4699/8/1/014010
Issue No: Vol. 8, No. 1 (2015)
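A minimal example of code-and-output adjacency with PythonTeX looks like the following; the `pycode` environment and `\py` command are part of the package's documented interface, and compilation interleaves the `pythontex` script between two LaTeX runs.

```latex
% Minimal PythonTeX document. Build with:
%   pdflatex doc.tex && pythontex doc.tex && pdflatex doc.tex
\documentclass{article}
\usepackage{pythontex}
\begin{document}
\begin{pycode}
result = sum(i**2 for i in range(10))
\end{pycode}
% \py{} typesets the value computed above, so the text and the
% computation can never fall out of sync.
The sum of the first ten squares is \py{result}.
\end{document}
```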

Authors: Florent Duchaine; Stéphan Jauré; Damien Poitou; Eric Quémerais; Gabriel Staffelbach; Thierry Morel; Laurent Gicquel
First page: 015003
Abstract: In many communities, such as climate science or industrial design, solving complex coupled problems with high fidelity through the external coupling of legacy solvers puts considerable pressure on the tool used for the coupling. The precision of such predictions depends not only on simulation resolution and the use of very large meshes, but also on high performance computing to reduce turnaround times. In this context, the current work studies the scalability of code coupling on high performance computing architectures for a conjugate heat transfer problem. The flow solver is a Large Eddy Simulation code that has already been ported to massively parallel architectures. The conduction solver is based on the same data structure and thus shares the flow solver's scalability properties. Accurately coupling solvers on massively parallel architectures while maintaining their scalability is challenging. It requires exchanging and treating information based on two different computational grids...
Citation: Computational Science & Discovery
PubDate: 2015-07-27T00:00:00Z
DOI: 10.1088/1749-4699/8/1/015003
Issue No: Vol. 8, No. 1 (2015)
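The core coupling operation alluded to above is transferring a field between two solvers that discretize the shared interface with different grids. A real coupler does this conservatively on distributed parallel meshes; as a heavily simplified stand-in, here is a 1D interface with linear interpolation, using grids and a field invented for illustration.

```python
import numpy as np

# Toy sketch of inter-grid data exchange in conjugate heat transfer:
# the flow solver computes a wall heat flux on its own interface grid,
# which must be mapped onto the conduction solver's coarser grid.
# (1D + linear interpolation stand in for a real parallel coupler.)

x_flow = np.linspace(0.0, 1.0, 101)     # flow-solver interface grid
x_solid = np.linspace(0.0, 1.0, 41)     # conduction-solver interface grid

heat_flux = np.sin(np.pi * x_flow)      # field produced by the flow solver

# Interpolate the flux onto the solid grid; the conduction solver would
# send back wall temperatures the same way in the other direction.
flux_on_solid = np.interp(x_solid, x_flow, heat_flux)

print(flux_on_solid.shape)  # (41,)
```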