On the other hand, this is the last release that Dropbox is sponsoring. We wanted to take some time to talk about what that means, both about the space of Python performance, and about the Pyston project specifically.

What’s happened

It’s hard to break down the change in cost-benefit analysis, but here are some factors that went into our decision:

We spent much more time than we expected on compatibility

We similarly had to spend more time than planned on memory usage, which proved to be a bigger concern than we expected

Dropbox has increasingly been writing its performance-sensitive code in other languages, such as Go

Our personal take is that the increase on the “cost” side could potentially be considered typical, whereas the decrease on the “benefit” side was probably a larger driver. It’s hard to say, though, since if we had managed to build things twice as fast the calculus would have been different.

Where we are

We are quite proud that, over the last three years, we’ve been able to achieve meaningful speedups while maintaining a higher level of compatibility than other solutions: we are the first alternative Python runtime to be able to run Dropbox faster.

As for numbers: on the just-released v0.6.1, we are 95% faster than CPython on standard Python benchmarks. On web-workload benchmarks that we created, we are 48% faster. On Dropbox’s server workload, we are 10% faster.

We think it’s worth mentioning that the 10% speedup on Dropbox code is just a small fraction of what we think is possible with our approach. We’ve spent most of our time working on compatibility and memory usage and have not had time to optimize this particular workload.

Where we go from here

Marius and I are no longer spending our time working on Pyston and are transitioning to other projects. The project itself remains open source and available, and we are investigating ways that the project can continue, either in whole or in part. We are also looking into upstreaming parts of our code back to CPython, since our code is now based on theirs.

We’re proud of what we’ve done, and we look forward to writing about the technical details in more depth in the near future. We also take some small consolation in having helped map out which Python performance-versus-compatibility tradeoffs may be valuable.

In the end, we are happy that we attempted this, are excited about the many potential ways that our work on Pyston could still be useful, and are happy to refocus ourselves on domains with more immediate needs.

The Python JIT compiler world can’t seem to converge: many options, few survivors, and little market adoption. I’m a strong PyPy proponent, though my final opinion is that Python is a bad language (overcomplicated) for this kind of work.

I’m curious: if JIT JavaScript implementations had become popular 5–10 years earlier, would we now see a horrific dominance of JS in the scientific and web worlds?

I am looking forward to WebAssembly and a world where there is a Python implementation that runs on Node.js (since V8 can now load WebAssembly modules) or similar runtimes (e.g., node-chakracore). I know Emscripten uses LLVM and is being ported to a WebAssembly backend (to reduce parse times and network traffic). If we could merge the Pyston LLVM frontend with the Emscripten WebAssembly backend under development, and if WebAssembly performance improves on V8 and the other engines, this could become an (even more) interesting avenue for Python.

Imagine not only a Python implementation that targets WebAssembly, but an actual WebAssembly implementation of Python itself. That would be the makings of a portable Python plug-in, allowing Python to run as a scripting language in every web browser that supports WebAssembly (and so far all the major players are buying into this tech this time around). WebAssembly may have been developed to solve the secure cross-platform plug-in problem (succeeding NPAPI, asm.js, and NaCl), but it has the potential to become “the” cross-platform language virtual machine (beating the .NET CLI, the JVM, etc.). Time will tell.