Baseline results were generated using release v2.7.10, with hash 15c95b7d81dc from 2015-05-23 16:02:14+00:00

label | benchmark | relative std_dev* | change since last run | change since baseline | current rev run with PGO
------|---------------|-------------------|-----------------------|-----------------------|-------------------------
🙂 | django_v2 | 0.14% | -0.16% | 3.95% | 9.48%
🙂 | pybench | 0.17% | -0.05% | 7.61% | 4.20%
😐 | regex_v8 | 0.55% | 0.05% | -0.10% | 10.60%
🙂 | nbody | 0.13% | -0.07% | 11.86% | 4.16%
😐 | json_dump_v2 | 0.29% | -0.17% | 0.33% | 11.93%
😐 | normal_startup | 1.70% | 0.81% | -1.93% | 1.91%
😐 | ssbench | 0.15% | -0.28% | -0.09% | 2.95%

* Relative Standard Deviation (Standard Deviation/Average)
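The footnote defines relative standard deviation as the standard deviation divided by the average. As a quick illustration (using made-up sample timings, not the lab's data), it can be computed with Python's standard `statistics` module:

```python
import statistics

def relative_std_dev(samples):
    """Relative standard deviation: sample standard deviation / mean.

    Returned as a fraction; multiply by 100 for the percentages
    shown in the table above.
    """
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical timings (seconds) for repeated runs of one benchmark:
runs = [10.02, 9.98, 10.01, 9.99, 10.00]
print(f"{relative_std_dev(runs) * 100:.2f}%")  # → 0.16%
```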

Note: Benchmark results for ssbench are measured in requests/second, while all others are measured in seconds.
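Because ssbench reports throughput (higher is better) while the other benchmarks report elapsed time (lower is better), the sign of a raw percent change means opposite things across rows. One way to normalize this, sketched here as a hypothetical helper rather than the lab's actual reporting code, is to flip the sign for time-based workloads so that a positive number always means an improvement:

```python
def improvement_pct(previous, current, higher_is_better=False):
    """Signed improvement in percent; positive always means 'got better'.

    For time-based benchmarks (lower is better) an improvement is a
    decrease; for throughput benchmarks such as ssbench
    (requests/second, higher is better) an improvement is an increase.
    """
    change = (current - previous) / previous * 100
    return change if higher_is_better else -change

# Hypothetical numbers:
print(improvement_pct(10.0, 9.5))                            # time fell 5% → ≈ 5.0
print(improvement_pct(200.0, 210.0, higher_is_better=True))  # throughput rose 5% → ≈ 5.0
```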

Subject Label Legend:
Labels (shown as emoji in the table: 🙂 = GOOD, 😐 = NEUTRAL, 🙁 = BAD) are assigned based on how workload performance evolved relative to the previous measurement iteration.
NEUTRAL: performance did not change by more than 1% for any workload
GOOD: performance improved by more than 1% for at least one workload and there is no regression greater than 1%
BAD: performance dropped by more than 1% for at least one workload and there is no improvement greater than 1%
UGLY: performance improved by more than 1% for at least one workload and also dropped by more than 1% for at least one workload
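The four labels amount to a small classification rule over per-workload percent changes. A sketch of that rule follows; the `subject_label` helper is an illustrative reconstruction of the legend, not the lab's actual code:

```python
def subject_label(changes, threshold=1.0):
    """Classify a run from per-workload improvements in percent.

    `changes` maps workload name to improvement in percent (positive =
    faster). The 1% threshold mirrors the legend above.
    """
    improved = any(c > threshold for c in changes.values())
    regressed = any(c < -threshold for c in changes.values())
    if improved and regressed:
        return "UGLY"
    if improved:
        return "GOOD"
    if regressed:
        return "BAD"
    return "NEUTRAL"

# Hypothetical per-workload改 changes, in percent:
print(subject_label({"django_v2": 3.95, "pybench": 0.2}))  # → GOOD
print(subject_label({"nbody": -1.5, "regex_v8": 2.0}))     # → UGLY
```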

Our lab does a nightly source pull and build of the Python project and measures performance changes against the previous stable version and the previous nightly measurement. This is provided as a service to the community so that quality issues with current hardware can be identified quickly.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration.

Baseline results were generated using release v2.7.10, with hash 15c95b7d81dc from 2015-05-23 16:02:14+00:00

label | benchmark | relative std_dev* | change since last run | change since baseline | current rev run with PGO
------|---------------|-------------------|-----------------------|-----------------------|-------------------------
🙂 | django_v2 | 0.17% | -0.14% | 4.10% | 4.74%
🙂 | pybench | 0.18% | 0.02% | 7.65% | 2.98%
😐 | regex_v8 | 0.55% | 0.17% | -0.16% | 9.77%
🙂 | nbody | 0.12% | -0.08% | 11.92% | 1.81%
😐 | json_dump_v2 | 0.30% | 0.26% | 0.50% | 10.45%
🙁 | normal_startup | 2.08% | 0.47% | -2.75% | 2.44%
😐 | ssbench | 0.20% | 0.08% | 0.19% | 2.34%


Baseline results were generated using release v2.7.10, with hash 15c95b7d81dc from 2015-05-23 16:02:14+00:00

label | benchmark | relative std_dev* | change since last run | change since baseline | current rev run with PGO
------|---------------|-------------------|-----------------------|-----------------------|-------------------------
🙂 | django_v2 | 0.11% | 0.65% | 4.24% | 7.38%
🙂 | pybench | 0.17% | -0.11% | 7.64% | 3.72%
😐 | regex_v8 | 0.59% | 0.03% | -0.33% | 9.72%
🙂 | nbody | 0.08% | -0.02% | 11.99% | 2.41%
😐 | json_dump_v2 | 0.26% | -0.03% | 0.24% | 10.52%
🙁 | normal_startup | 1.96% | -0.99% | -3.24% | 2.85%
😐 | ssbench | 0.15% | 0.24% | 0.11% | 3.34%


Baseline results were generated using release v2.7.10, with hash 15c95b7d81dc from 2015-05-23 16:02:14+00:00

label | benchmark | relative std_dev* | change since last run | change since baseline | current rev run with PGO
------|---------------|-------------------|-----------------------|-----------------------|-------------------------
🙂 | django_v2 | 0.14% | -1.21% | 3.62% | 9.83%
🙂 | pybench | 0.15% | 0.01% | 7.74% | 2.39%
😐 | regex_v8 | 0.59% | -0.17% | -0.36% | 10.25%
🙂 | nbody | 0.07% | 0.01% | 12.01% | 4.51%
😐 | json_dump_v2 | 0.30% | -0.18% | 0.27% | 12.36%
🙁 | normal_startup | 1.95% | 0.40% | -2.23% | 2.09%
😐 | ssbench | 0.17% | -0.25% | -0.13% | 3.03%
