Thursday, March 25, 2010

We're pleased to announce two things we were constantly asked for: nightly builds and an Ubuntu PPA for the 1.2 release, made by Bartosz Skowron. There are no nightly-build Ubuntu packages (yet).

Nightly builds are what they are - pure PyPy executables with the JIT compiled in (Linux only for now). They require either a PyPy checkout or a release download. The main differences from release builds are that they display more debugging information by default and that, of course, they contain recent bugfixes and improvements :-)

Cheers
fijal

Saturday, March 13, 2010

Now that the release is done, I wanted to list and thank some people who were essential in getting it out the door, particularly because some of their work is usually not very visible.

Armin Rigo and Maciej Fijałkowski tirelessly worked on most aspects of
the release, be it fixing the last known bugs and performance problems,
packaging or general wizardry.

Amaury Forgeot d'Arc made sure that PyPy 1.2 properly supports Windows as a platform and compiled the Windows binaries.

Miquel Torres designed and implemented our new speed overview page,
http://speed.pypy.org, which is a great tool for spotting performance
regressions and showcasing our improvements to the general public.

tav designed the new user-oriented web page, http://pypy.org, which is a lot
nicer for people who only want to use PyPy as a Python implementation (and not
be confused by how PyPy is actually made).

Holger Krekel fixed our main development server codespeak.net, even while
being on vacation and not really having online connectivity. Without that, we
couldn't actually have released anything.

Bartosz Skowron worked a lot on making Ubuntu packages for PyPy, which is
really cool. Even though he didn't quite finish in time for the release, we will
hopefully get them soon.

Thanks to all you guys!

Friday, March 12, 2010

We are pleased to announce PyPy's 1.2 release.
This version 1.2 is a major milestone and it is the first release to ship
a Just-in-Time compiler that is known to be faster than CPython
(and Unladen Swallow) on some real-world applications (or the best benchmarks
we could get for them). The main theme for the 1.2 release is speed.

The JIT is stable and we don't observe crashes. Nevertheless, we recommend
treating it as beta software and as a way to try out the JIT to see how it
works for you.

Highlights:

The JIT compiler.

Various interpreter optimizations that improve performance as well as help
save memory. Read our various blog posts about these achievements.

Introducing speed.pypy.org, made by Miquel Torres: a new service that monitors
our performance nightly.

There will be Ubuntu packages on PyPy's PPA, made by Bartosz Skowron;
however, various troubles have prevented us from having them ready as of now.

Known JIT problems (or why you should consider this beta software) are:

The only supported platform is 32-bit x86 for now; we're looking for help with
other platforms.

It is still memory-hungry. There is no limit on the amount of RAM that
the assembler can consume; it is thus possible (although unlikely) that
the assembler ends up using unreasonable amounts of memory.

If you want to try PyPy, go to the download page on our excellent new site
and find the binary for your platform. If the binary does not work (e.g. on
Linux, because of different versions of external .so dependencies), or if
your platform is not supported, you can try building from the source.

Wednesday, March 3, 2010

Some time ago, we introduced our nightly performance graphs. This was a quick
hack to allow us to see performance regressions. Thanks to Miquel Torres,
we can now introduce http://speed.pypy.org, which is a Django-powered web
app sporting a more polished visualisation of our nightly performance runs.

While this website is not finished yet, it's already far better than our previous
approach :-)

If you are interested in having something similar for other benchmark runs, contact Miquel (tobami at gmail).

Quoting Miquel: "I would also like to note, that if other performance-oriented
opensource projects are interested, I would be willing to see if we can set-up
such a Speed Center for them. There are already people interested in
contributing to make it into a framework to be plugged into buildbots, software
forges and the like. Stay tuned!"

Monday, March 1, 2010

I recently did some benchmarking of Twisted on top of PyPy. For the very
impatient: PyPy is up to 285% faster than CPython. For more patient people,
there is a full explanation of what I did and how I performed the measurements,
so you can judge for yourself.

The benchmarks live in twisted-benchmarks and were mostly written
by Jean Paul Calderone. Even though he called them an "initial exploratory
investigation into a potential direction for future development resulting
in performance oriented metrics guiding the process of optimization and
avoidance of complexity regressions", they're still much, much better than
the average benchmarks found out there.

The methodology was to run each benchmark for quite some time (about 1 minute),
measuring the number of requests every 5 s. I then looked at the data dump and
subtracted the time it took for JIT-capable interpreters to warm up (up to
15 s), averaging everything after that. Averages of requests per second are in
the table below (higher is better):

benchname     CPython   Unladen Swallow       PyPy
names         10930     11940 (9% faster)     15429 (40% faster)
pb            1705      2280 (34% faster)     3029 (78% faster)
iterations    75569     94554 (25% faster)    291066 (285% faster)
accept        2176      2166 (same speed)     2290 (5% faster)
web           879       854 (3% slower)       1040 (18% faster)
tcp           105M      119M (7% faster)      60M (46% slower)

To reproduce, run each benchmark with:

benchname.py -n 12 -d 5

WARNING: running TCP-based benchmarks that open a new connection for each
request (web & accept) can exhaust some kernel structures; limit n or wait
until the next run if you see drops in requests per second.
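The warmup-discarding averaging described above can be sketched in a few lines of Python (a minimal illustration with hypothetical sample values, not the actual measurement script):

```python
WINDOW = 5          # seconds per measurement window
WARMUP_WINDOWS = 3  # discard the first 15 s so JIT-capable interpreters warm up

def steady_state_rps(requests_per_window):
    """Average requests/second after dropping the warmup windows."""
    steady = requests_per_window[WARMUP_WINDOWS:]
    return sum(steady) / float(len(steady) * WINDOW)

# Hypothetical dump: requests completed in each 5 s window over ~1 minute.
samples = [30000, 48000, 52000, 54000, 54500, 55000,
           54800, 55200, 54900, 55100, 55000, 54700]
print(steady_state_rps(samples))
```

The ramp-up in the first few windows is exactly what skipping the warmup is meant to exclude.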

The first obvious thing is that the various benchmarks are more or less
amenable to speedups by JIT compilation, with accept and tcp getting the
smallest speedups, if any. This is understandable, since the JIT is mostly
about reducing interpretation and frame overhead, which is probably not large
when it comes to accepting connections. However, if you actually loop around
doing something, the JIT can give you a lot of speedup.
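As a toy illustration of the difference (not one of the twisted-benchmarks), a function dominated by interpretation and frame overhead is exactly the kind of code a tracing JIT speeds up, while code that mostly waits on the kernel is not:

```python
import socket
import time

def tight_loop(n):
    # Almost pure interpreter work: bytecode dispatch, frame handling,
    # integer arithmetic -- the JIT compiles this down to machine code.
    total = 0
    for i in range(n):
        total += i * i
    return total

def accept_like_work():
    # Dominated by system calls; the JIT has little overhead to remove here.
    s = socket.socket()
    s.close()

start = time.time()
tight_loop(10**6)
print("tight loop took %.3fs" % (time.time() - start))
```

Running the tight loop on PyPy versus CPython shows the JIT effect directly; timing accept_like_work shows almost none.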

The other obvious thing is that PyPy is the fastest Python interpreter here
almost across the board (Jython and IronPython won't run Twisted), except for
raw TCP throughput. However, speedups can vary, and I expect this to improve
after the release, as there are areas where PyPy can still be improved.
Regarding raw TCP throughput: this can be a problem for some applications, and
we're looking forward to improving this particular bit.

The main reason to use Twisted for this comparison is the strong support from
the Twisted team, and JP Calderone in particular, especially when it comes to
providing benchmarks. If an open source project wants to be looked at by the
PyPy team, please provide a reasonable set of benchmarks and infrastructure.

If, however, you're a closed source project fighting performance problems in
Python, we provide contracting to investigate how PyPy (and not only PyPy) can
speed up your project.
