Recent years have seen a rapid
increase in the complexity of electrophysiology experiments. This
complexity arises firstly from the interest in simultaneously
analyzing the activity recorded from large numbers of channels in
order to investigate the role of concerted neural activity in brain
function. These efforts have led to advances in data analysis methods
[1] that exploit the parallel properties of such data sets [2]. A
second source of complexity lies in the sophistication of stimulus
protocols. To take the visual system as an example, typical visual
stimulation has progressed from simple moving bars or drifting
gratings to natural movies, Gabor noise, and apparent motion stimuli. In
the somatosensory system, new technology now allows the entire rodent
whisker array to be stimulated in essentially arbitrary patterns.
However, an often neglected aspect of these technological advances is
that both massively parallel data streams and highly complex stimuli
place new demands on data handling during all stages of a
project [3]: from the initial recording, through the analysis
process, to the final publication.

Three factors contribute to these new demands: First, the sheer
quantity of data complicates the organization of data sources, and
the resulting automation of analysis steps renders the validation
of interim and final results difficult. Second, modern analysis
methods often require intricate, multi-layered implementations,
leading to sophisticated analysis toolchains [4]. Third, a growing
number of projects need to be carried out in teams, within a single
laboratory or in collaborative efforts, requiring transparent
workflows that guarantee smooth interaction. Taken together, the
increase in complexity calls for a reevaluation of the traditional
ad hoc approaches to such projects. Can we derive general
guiding principles that can be adopted in the design of efficient
workflows? How could these improve our confidence in handling the
data by providing better cross-validation of findings, reliably
managing provenance data, and enabling tighter collaborative
research, while at the same time leaving the scientist with the
flexibility required for creative research?

Although several projects are devoted to finding solutions for
specific aspects of workflow design (e.g., [5-7]), on a more
general level there is a lack of thorough discussion of which goals
are expected of a workflow, and which of these can realistically be
addressed. Here, we summarize feedback received from experimenters
and theoreticians that pinpoints the fundamental problems typically
encountered in the analysis of high-dimensional electrophysiological
data. Using examples from our own experience, we further illustrate
obstacles that prevent us from harmonizing workflows under common
guidelines. For selected issues we draw parallels to other
communities that are faced with similar problems (e.g., neuronal
network modeling [8,9]; neuroimaging [10]). Lastly, we propose how
existing concepts and software [9,11] could assist in practically
implementing workflows that are tailored to the needs of a specific
project, yet guarantee high standards by adhering to general
guidelines of accepted best practice.

Acknowledgements: This project was supported by the European Union
(FP7-ICT-2009-6, BrainScaleS).