Frame optimizations: once a function has been called, it retains its allocated frame for use in future calls, avoiding allocation and initialization overhead. Frame size has also been slightly reduced. (RichardJones)

This gave a 10% PyStone improvement on RichardJones' test machine, compared to Python 2.4 (from 20242 to 22935).

Applied Py_LOCAL and PY_LOCAL_AGGRESSIVE compiler tuning from SRE to ceval.c (currently for Windows only). This results in a 3-10% PyStone speedup on our Intel boxes (FredrikLundh).

Sped up string and Unicode operations (AndrewDalke, FredrikLundh). Most notably, repeat is much faster, and most search operations (find, index, count, in) are a LOT faster (25x for the related stringbench tests). Parts of the code have also been moved into a "stringlib" directory, which contains code shared by both string types, and many new tests have been added to the test suite. Here are the current "stringbench" results:
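The stringbench table itself is not reproduced in this excerpt; as a minimal illustration, here are the operations it exercises, run against a large haystack (the data and sizes are invented for demonstration):

```python
# Illustrative only (not from stringbench): the search operations that
# were optimized - find, index, count, and "in" - on a large string.
haystack = "spam " * 10000 + "needle" + " eggs" * 10000

assert haystack.find("needle") == 50000    # "spam " * 10000 is 50000 chars
assert haystack.index("needle") == 50000   # index() agrees with find()
assert haystack.count("spam") == 10000     # one hit per "spam " repeat
assert "needle" in haystack                # membership uses the same search
```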

Patch 1335972 was a combined bugfix and speedup for string->int conversion. These are the speedups measured on my Windows box for decimal strings of various lengths; note that the difference between 9 and 10 digits is the difference between short and long Python ints on a 32-bit box. (TimPeters) The patch doesn't actually do anything to speed conversion to long directly; the speedup in those cases is due solely to detecting "unsigned long" overflow more quickly:
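The timing table is not reproduced here, but the 9-vs-10-digit boundary it refers to is easy to show: a 9-digit decimal string fits a 32-bit signed int, while a 10-digit one overflows into a long on a 32-bit build. The assertions below hold on any build; the short/long distinction is a 32-bit detail.

```python
# The boundary discussed above: 9 decimal digits fit a 32-bit signed
# int (max 2147483647 has 10 digits, but 999999999 < 2**31), while
# 10 digits can overflow it, forcing a Python long on a 32-bit box.
assert int("999999999") == 10**9 - 1     # 9 digits: short int on 32-bit
assert int("9999999999") == 10**10 - 1   # 10 digits: long on 32-bit
```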

The struct module has been rewritten to pre-compile struct descriptors (similar to the re module). This gives a 20% speedup, on average, for the test suite (BobIppolito). Taking advantage of the new ability to "compile" a struct pattern (similar to compiling regexps) can be much faster still.
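The "compiled" form is the struct.Struct class, which parses the format string once and reuses it, just as a compiled regexp does (shown here in current Python syntax):

```python
import struct

# A pre-compiled struct descriptor: the format string "<ii" (two
# little-endian 32-bit ints) is parsed once, at construction time.
point = struct.Struct("<ii")

packed = point.pack(3, 7)
assert packed == b"\x03\x00\x00\x00\x07\x00\x00\x00"
assert point.unpack(packed) == (3, 7)
assert point.size == 8   # two 4-byte ints
```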

A pack_to() method has been added to the struct module to support packing directly into a writable buffer. Also, recv_buf() and recvfrom_buf() methods were added to the socket module to read directly into a writable buffer (MartinBlais). Right now, the only way to create a writable buffer from Python is via the array module, but I'm adding a new class that supports this (see below).
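In released Pythons this method ended up spelled pack_into() (and the socket methods recv_into()/recvfrom_into()); the sketch below uses those names and the array module as the writable buffer, as the text describes:

```python
import array
import struct

# Pack directly into a writable buffer at a chosen offset, instead of
# building intermediate strings. The array module supplies the buffer.
buf = array.array("b", [0] * 8)     # 8 writable bytes
s = struct.Struct("<hh")            # two little-endian 16-bit ints (4 bytes)

s.pack_into(buf, 0, 1, 2)           # write (1, 2) at offset 0
s.pack_into(buf, 4, 3, 4)           # write (3, 4) at offset 4

assert s.unpack_from(buf, 0) == (1, 2)
assert s.unpack_from(buf, 4) == (3, 4)
```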

Worked on using profile guided optimizations in Visual Studio 8 (KristjanJonsson, RichardMTew). This appears to give on the order of 15% speed improvement in the pybench test suite. A new PCBuild8 directory will be added with automated mechanisms for doing this.

Patch 1442927 aimed at speeding up long(str, base) operations. It required major fiddling for portability, end-case correctness, and avoiding significant slowdowns on shorter input strings. It's now up to 6x faster, although it takes a lot of digits to reach that, and it's still slower for 1-digit inputs. (TimPeters)

Speedups at various lengths for decimal inputs, comparing 2.4.3 with current trunk:
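The speedup table is not reproduced in this excerpt; for illustration, this is the kind of many-digit conversion the patch targets (the digit count is arbitrary):

```python
# long(str, base) in 2.x, int(str, base) today: converting a long run
# of decimal digits, the case where the patch's speedup kicks in.
s = "9" * 100                 # 100-digit decimal string
n = int(s, 10)

assert n == 10**100 - 1       # all-nines value
assert len(str(n)) == 100     # round-trips at the same length
```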

Made all built-in exceptions real C new-style types. In 2.5 alpha they were just objects masquerading as classes, which slowed them down by 20% compared to 2.4. After this change, exception handling is around 30% faster than in 2.4. (RichardJones, GeorgBrandl)
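One observable consequence of the change: built-in exceptions are genuine types, so they introspect and subclass like any other class (shown in current Python syntax; the ParseError name is invented for illustration):

```python
# Built-in exceptions are real (new-style) types after this change.
assert isinstance(ValueError, type)
assert issubclass(ValueError, Exception)

# They subclass like ordinary classes.
class ParseError(ValueError):
    pass

try:
    raise ParseError("bad input")
except ValueError as e:               # caught via the base class
    assert isinstance(e, ParseError)
```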

Added support for min(), max(), sum(), and pow() to Psyco. This produces significant speedups when used in combination with virtualized objects. (JohnBenediktsson)

NB. These operations now all take 0.016 seconds because they are constant-folded! You need examples with non-constant arguments in the benchmarks. Also, you need to compile Psyco in debugging mode for testing - some of these examples generate fatal assertion errors in the Psyco-NFS branch. (ArminRigo)

GeorgBrandl and JackDiederich worked on a rewrite of the Decimal module in C. This will kick-start a Google SummerOfCode project, which will therefore achieve more than originally planned.

Sped up regexp searching by 1%-2% on Linux just by using PyObject_(MALLOC|FREE) for small allocations instead of the system malloc. (JackDiederich)

[unfinished] _sre.c matching can be sped up by 3-5% by using free lists for the small, frequently allocated objects. It uses very few at the same time - even caching a single SRE_REPEAT object gives a speedup of almost 1%. I [JackDiederich] will try to finish the patch before 2.5beta.

[unfinished] Implemented a new "hot buffer" class, which consists of a moving string that sits on top of a fixed, preallocated memory buffer. The buffer has a window that defines the visible portion (you can move this window during parsing). Given a hot buffer, you should be able to read data into it from the network or from a file without creating temporary strings, and to extract bytes and other basic types from it similarly (MartinBlais). We still need to implement direct I/O from/to a file (we have network only now) and to implement in C the common use patterns described in the tests, e.g. parsing netstrings and parsing line-delimited input. See the README.txt file in the module for more things to do. (I will complete this later.)
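A minimal sketch of the idea described above, in pure Python for illustration only - the class and method names (HotBuffer, read_from, set_window, visible) are invented here, not the module's actual API:

```python
import io

class HotBuffer:
    """Hypothetical sketch: a fixed preallocated buffer with a movable
    window marking the currently visible bytes (no temporary strings)."""

    def __init__(self, size):
        self._buf = bytearray(size)   # fixed allocation, reused across reads
        self._start = 0               # window start
        self._end = 0                 # window end (one past the last byte)

    def read_from(self, fileobj):
        """Fill the buffer directly from a file-like object."""
        n = fileobj.readinto(self._buf)
        self._start, self._end = 0, n
        return n

    def set_window(self, start, end):
        """Move the visible window, e.g. while parsing."""
        self._start, self._end = start, end

    def visible(self):
        return bytes(self._buf[self._start:self._end])

# Usage: parse a netstring ("<len>:<payload>,"), one of the use cases
# mentioned above, without building intermediate strings per read.
src = io.BytesIO(b"5:hello,")
hb = HotBuffer(64)
hb.read_from(src)
hb.set_window(2, 7)                   # skip the "5:" length prefix
assert hb.visible() == b"hello"
```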