We would like to thank the community for validating various results and sharing unexpected behavior with us
via our public CK repository!

Abstract

Empirical auto-tuning and machine learning techniques have shown high
potential to improve the execution time, power consumption, code size,
reliability and other important metrics of various applications for
more than two decades. However, they are still far from widespread
production use due to the lack of native support for auto-tuning in an
ever-changing and complex software and hardware stack, large and
multi-dimensional optimization spaces, excessively long exploration
times, and the lack of unified mechanisms for preserving and sharing
optimization knowledge and research material.

We present a possible collaborative approach to solving the above
problems using the Collective Mind knowledge management system.
In contrast with the previous cTuning framework, this modular
infrastructure makes it possible to preserve and share over the Internet
whole auto-tuning setups with all related artifacts and their
software and hardware dependencies, not just performance data.
It also allows researchers to gradually structure, systematize and
describe all available research material, including tools, benchmarks,
data sets, search strategies and machine learning models. Researchers
can take advantage of shared components and data with extensible
meta-descriptions to quickly and collaboratively validate and
improve existing auto-tuning and benchmarking techniques
or prototype new ones. The community can now gradually learn and
improve the complex behavior of all existing computer systems while
exposing behavior anomalies or model mispredictions to an
interdisciplinary community in a reproducible way for further
analysis. We present several practical, collaborative and
model-driven auto-tuning scenarios. We also release
all material at c-mind.org/repo
to set an example of
collaborative and reproducible research, as well as of our new
publication model in computer engineering where experimental
results are continuously shared and validated by the community.

The average time (green line) is not meaningful here, and a normality test also fails, so most researchers would simply discard this result.
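The failure mode above can be sketched as follows. This is a minimal illustration with hypothetical, synthetic timings (not data from the paper): when measurements come from two hidden states, a normality test such as `scipy.stats.normaltest` rejects the Gaussian assumption, and the mean lands in a region where almost no measurement actually occurs.

```python
# Sketch with synthetic data: why averaging multi-state timings misleads.
import numpy as np
from scipy.stats import normaltest

np.random.seed(0)
# Hypothetical execution times (seconds) from two hidden states,
# e.g. runs at two different CPU frequencies.
times = np.concatenate([np.random.normal(1.0, 0.02, 200),
                        np.random.normal(1.5, 0.02, 200)])

stat, p = normaltest(times)   # D'Agostino-Pearson normality test
print(p < 0.05)               # True: normality is rejected
print(times.mean())           # ~1.25 s, a value almost never observed
```

Reporting only the mean here hides both real behaviors, which is why CK keeps the full distribution instead.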

In CK, we analyze variation using Gaussian kernel density estimation (KDE) from SciPy, which reveals two expected values (modes) in the following graphs. This suggests that the experiments have several distinct states and that some feature capable of separating these states is missing. In our case, that feature is the CPU frequency, which was then added to the program pipeline. See the paper for more details.
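A minimal sketch of this analysis, again on hypothetical synthetic timings (the exact CK pipeline code is not reproduced here): fit a Gaussian KDE with `scipy.stats.gaussian_kde` and count the local maxima of the estimated density; more than one mode hints that a separating feature such as frequency is missing.

```python
# Sketch with synthetic data: finding multiple behavior states via KDE.
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelextrema

np.random.seed(0)
# Hypothetical execution times (seconds) mixing two hidden states.
times = np.concatenate([np.random.normal(1.0, 0.02, 200),
                        np.random.normal(1.5, 0.02, 200)])

kde = gaussian_kde(times)                    # default (Scott) bandwidth
xs = np.linspace(times.min(), times.max(), 500)
density = kde(xs)

# Local maxima of the density are the expected values (modes).
peaks = xs[argrelextrema(density, np.greater)[0]]
print(len(peaks))                            # two modes for this sample
```

Once a mode count above one is detected, the next step is to look for a feature (here, CPU frequency) whose value splits the samples into unimodal groups.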

Figures: density graph and histogram graph.

Note: CK allows the community to validate these results and share unexpected behavior in the public repository at cknowledge.org/repo.

Note: we are gradually converting all the code and data related to this paper
from the deprecated Collective Mind format to the new Collective Knowledge framework.