P-RECS 2018

First International Workshop on Practical Reproducible Evaluation of
Computer Systems.

June 11, 2018. In conjunction with HPDC’18.
In cooperation with (pending).

This workshop will focus heavily on practical, actionable aspects of
reproducibility in broad areas of computational science and data
exploration, with special emphasis on issues in which community
collaboration can be essential for adopting novel methodologies,
techniques and frameworks aimed at addressing some of the challenges
we face today. The workshop will bring together researchers and
experts to share experiences and advance the state of the art in the
reproducible evaluation of computer systems, featuring contributed
papers and invited talks.

Topics

We invite submissions on topics including, but not limited to:

Experiment dependency management.

Software citation and persistence.

Data versioning and preservation.

Provenance of data-intensive experiments.

Tools and techniques for incorporating provenance into publications.

Automated experiment execution and validation.

Experiment portability for code, performance, and related metrics.

Experiment discoverability for re-use.

Cost-benefit analysis frameworks for reproducibility.

Usability of reproducibility frameworks and their adaptability to already-established domain-specific tools.

Blinding and selecting artifacts for review while maintaining history.

Reproducibility-aware computational infrastructure.

Submission

Submit (single-blind) via EasyChair. We seek two categories of submissions:

Position papers. This category is for papers that propose solutions (or scope the work that needs to be done) to address some of the issues outlined above. We hope that a research agenda emerges from these papers and that we can build a community that meets yearly to report on progress in addressing these problems.

Experience papers. This category consists of papers reporting on the authors' experience automating one or more experimentation pipelines. The committee will look for answers to questions such as: What worked? What aspects of experiment automation and validation are hard in your domain? What can be done to improve the tooling for your domain? As part of the submission, authors need to provide a URL to the automation service they use (e.g., TravisCI, GitLabCI, CircleCI, Jenkins) so reviewers can verify that there are one or more automated pipelines associated with the submission.
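To illustrate what such an automated pipeline might look like, here is a minimal sketch of a Travis CI configuration that runs an experiment and validates its output on every push. This is a hypothetical example, not a requirement of the workshop; the script names `run_experiment.sh` and `validate_results.py` are placeholders standing in for whatever execution and validation steps a submission actually uses.

```yaml
# Hypothetical .travis.yml — a minimal automated experiment pipeline.
# run_experiment.sh and validate_results.py are illustrative placeholders.
language: python
python:
  - "3.6"

install:
  # Install the experiment's declared dependencies.
  - pip install -r requirements.txt

script:
  # Execute the experiment end to end.
  - ./run_experiment.sh
  # Check the produced results against expected metrics,
  # failing the build if they fall outside tolerance.
  - python validate_results.py
```

A pipeline along these lines gives reviewers a public, re-runnable record: the CI service's build history shows that the experiment executes and validates without manual intervention.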

Format

Authors are invited to submit manuscripts in English not exceeding 5
pages of content. The 5-page limit includes figures, tables and
appendices, but does not include references, for which there is no
page limit. Submissions must use the ACM Master Template (please use the sigconf format with default options).

Proceedings

The proceedings will be archived in both the ACM Digital Library and
IEEE Xplore through SIGHPC.