Sunday, October 16, 2011

Last weekend, I attended GitHub's PyCodeConf in Miami, Florida, and had the opportunity to give a talk on array-oriented computing and Python. I would like to thank Tom and Chris (GitHub founders) for allowing me to come speak. I enjoyed my time there, but I have to admit I felt old and a bit out of place. There were a lot of young people there who understood far more about making web pages and working at cool web start-ups than about solving partial differential equations with arrays. Fortunately, Dave Beazley and Raymond Hettinger were there so I didn't feel completely ancient. In addition, Wes McKinney and Peter Wang helped me represent the NumPy community.

At the conference I was reminded of PyPy's recent success at delivering speed-ups some said weren't possible from a dynamic language like Python --- speed-ups which make it possible to achieve C-like speeds using Python constructs. This is very nice, as it illustrates again that high-level languages can be compiled to low-level speeds. I also became keenly aware of the enthusiasm that has cropped up around porting NumPy to PyPy. I am happy for this enthusiasm, as it illustrates the popularity of NumPy, which pleases me. On the other hand, in every discussion I have heard or read about this effort, I have not been convinced that those excited about the porting effort actually understand the complexity of what they are trying to do, nor the danger that it could split the small community of developers who regularly contribute to NumPy and SciPy and create confusion for the user base of Python in Science.

I'm hopeful that I can provide some perspective. Before I do, however, I want to congratulate the PyPy team and emphasize that I have the utmost respect for the PyPy developers and what they have achieved. I am also a true believer in the ability of high-level languages to achieve faster-than-C speeds. In fact, I'm not satisfied with a Python JIT: I want NumPy constructs such as vectorization, fancy indexing, and reduction to be JIT compiled. I also think there are use-cases of NumPy all by itself that make a NumPy-in-PyPy port somewhat interesting. I would also welcome whatever lessons about improving NumPy might come out of trying to write a version of NumPy in RPython.

However, to avoid detracting from the overall success of Python in Science, Statistics, and Data Analysis, I think it is important that three things are completely clear to people interested in the NumPy-on-PyPy idea:

1. NumPy is just the beginning (SciPy, matplotlib, scikits, and hundreds of other packages, plus legacy C/C++ and Fortran code, are all very important).

2. NumPy should be a lot faster than it currently is.

3. NumPy has an ambitious roadmap and will be moving forward rather quickly over the coming years.

NumPy is just the beginning

Most of the people who use NumPy use it as an entry-point to the entire ecosystem of Scientific Packages available for Python. This ecosystem is huge. There are at least 1 million unique visitors to the http://www.scipy.org site every year and that is just an entry point to the very large and diverse community of technical computing users who rely on Python.

Most of the scientists and engineers who have come to Python over the past years have done so because it is so easy to integrate their legacy C/C++ and Fortran code into Python. National laboratories, large oil companies, large banks and many other Fortune 50 companies all must integrate their code into Python in order for Python to be part of their story. NumPy is part of the answer that helps them seamlessly view large amounts of data as arrays in Python or as arrays in another compiled language without the non-starter of copying the memory back and forth.

Once the port of NumPy to PyPy is finished, are you going to port SciPy? Are you going to port matplotlib? Are you going to port scikits.learn, or scikits.statsmodels? What about Sage? Most of these rely not just on the Python C-API but also on the NumPy C-API, and you would need a story for both before serious technical users of Python get excited about a NumPy port to PyPy.

To me it is much easier to think about taking the ideas of PyPy and pulling them into the Scientific Python ecosystem than going the other way around. That's not to say that there isn't some value in re-writing NumPy in PyPy; it just shouldn't be over-sold, and those who fund it should understand what they aren't getting in the transaction.

C-speed is the wrong target

Several examples, including my own previous blog post, have shown that vectorized Fortran 90 can be 4-10 times faster than NumPy. Thus, we know there is room for improvement even on current single-core machines. This doesn't even take into account the optimizations that should be possible for multiple cores, GPUs, and even FPGAs, all of which are in use today but are not being utilized to the degree they should be. NumPy needs to adapt to make use of this kind of hardware, and it will adapt in time.

NumPy will be evolving rapidly over the coming years

The pace of NumPy development has leveled off in recent years, but this year has produced a firestorm of new ideas that will come to fruition over the next one to two years, and NumPy will be evolving fairly rapidly during that time. I am committed to making this happen and will be working very hard in 2012 on the code-base itself to realize some of the ideas that have emerged. Some of this work will require re-factoring and re-writing as well. I would honestly rather collaborate with PyPy than compete, but my constraints are that I care very much about backward compatibility and about the entire SciPy ecosystem. I sacrificed a year of my life in 1999 (delaying my PhD graduation by at least 6-12 months) bringing SciPy to life. I sacrificed my tenure-track position in academia bringing NumPy to life in 2005. The constraints of keeping my family fed, clothed, and housed seem to keep me on this 6-7 year sabbatical-like cycle for SciPy/NumPy, but it looks like next year I will finally be in a position to spend substantial time and take the next steps with NumPy to help it progress to the next stage.

Some of the ideas that will be implemented include:

integration of non-contiguous memory chunks into the NumPy array structure (generalization of strides)

improvements to the data-type infrastructure to make it easier to add new data-types

improvements to the calculation infrastructure (iterators and fast general looping constructs)

fancy-indexing as views

integration of Pandas group-by features

missing data bit-patterns

distributed arrays

Conversations with many people this year have generated more ideas than I have room to discuss, and I am excited to start seeing them come to fruition to make NumPy and Python the best solution for data analysis. Beginning next year, I will be pushing hard for their introduction into the NumPy/SciPy ecosystem --- with a careful eye on backward compatibility, which has long been one of NumPy's strengths.

A way forward

I would love to see more scientific code written at a high level without sacrificing run-time performance. High-level intent allows for the creation of faster machine code than lower-level translations of that intent often do. I know this is possible, and I intend to do everything I can professionally to see it happen (but from within the context of the entire SciPy ecosystem). As this work emerges, I will encourage PyPy developers to join us, bringing the hard-won knowledge and tools they have created.

Even if PyPy continues as a separate ecosystem, there are points of collaboration that will benefit both groups. One of these is to continue the effort Microsoft initially funded to separate the C parts of NumPy from the CPython interface to NumPy. This work is now in a separate branch that has diverged from the main NumPy branch and needs to be re-visited. If people interested in NumPy on PyPy spent time improving this refactoring into what is basically a NumPy C-library, then PyPy could call this independent library using its methods for making native calls, just as CPython can call it using its extension approach. Then IronPython, Jython (and for that matter Ruby, JavaScript, etc.) could all call the C-library and leverage the code. It will take some effort, and it's not trivial. Perhaps there is even a way for PyPy to generate C-libraries from Python source code --- now that would be an interesting way to collaborate.

The second way forward is for PyPy to interact better with the Cython community. Support in PyPy for Cython extension modules would be a first step. There is wide agreement among NumPy developers that more of NumPy should be written at a high level (probably using Cython). Cython is already used to implement many, many extension modules for the Sage project. William Stein's valiant efforts in that community have made Cython the de-facto standard for how most scientists and engineers are writing extension modules for Python these days. This is a good thing for efforts like PyPy because it adds a layer of indirection that allows PyPy to provide a Cython backend and avoid the Python C-API.

I was quite astonished that Cython never came up in the panel discussion at the last PyCon when representatives from CPython, PyPy, IronPython, and Jython all talked about the Python VMs. To me that oversight was very troubling. I was left doubting the PyPy community after Cython was not mentioned at all --- even when discussion turned to how to manage extensions to the language. It shows that pure Python developers on all fronts have lost sight of what the scientific Python community is doing. This is dangerous. I encourage Python developers to come to a SciPy conference and take a peek at what is going on. I hope to be able to contribute more to the discussion as well.

If you are a Python developer and want to extend an olive branch, then put a matrix infix operator into the language. It's way past time :-)

Monday, July 4, 2011

After getting a few great comments on my recent post --- especially regarding using PyPy and Fortran 90 to speed up Python --- I decided my simple comparison needed an update.

The big news is that my tests for this problem actually showed PyPy quite favorably (even a bit faster than the CPython NumPy solution). This is very interesting indeed! I knew PyPy was improving, but this shows it has really come a long way.

Also, I updated the Python-only comparison to not use NumPy arrays at all. It is well-known that NumPy arrays are not very efficient containers for doing element-by-element calculations in Python syntax. There is both more overhead for getting and setting elements than there is for simple lists, and the NumPy scalars that are returned when specific elements of NumPy arrays are selected can be a bit slow when doing scalar math computations on the Python side.

Finally, I included a Fortran 90 example based on the code and comments provided by SymPy author Ondrej Certik. Fortran 77 was part of the original comparison that Prabhu Ramachandran put together several years ago. Fortran 90 includes some nice constructs for vectorization that make its update code very similar to the NumPy update solution. Apparently, gfortran can optimize this kind of code very well. In fact, the Fortran 90 solution was the very best of all of the approaches I took (about 4x faster than the NumPy solution and 2x faster than the other compiled approaches).

At Prabhu's suggestion, I made the code available in a new GitHub repository under the SciPy project so that others could contribute and provide additional comparisons.

The new results are summarized in the following table, updated to a 150x150 grid, again with 8000 iterations.

Method               Time (sec)   Relative Speed
Pure Python          202          36.3
NumExpr              8.04         1.45
NumPy                5.56         1
PyPy                 4.71         0.85
Weave                2.42         0.44
Cython               2.21         0.40
Looped Fortran       2.19         0.39
Vectorized Fortran   1.42         0.26

The code for both the Pure Python and the PyPy solution is laplace2.py. This code uses a list-of-lists for the storage of the values. The same code produces the Pure Python solution and the PyPy solution. The only difference is that one is run with the standard CPython and the other with the PyPy binary. Here is sys.version from the PyPy binary used to obtain these results:

For the other solutions, the code that was executed is located at laplace.py. The Fortran 90 module compiled and made available to Python with f2py is located at _laplace.f90. The single Cython solution is located at _laplace.pyx.

It may be of interest to some to see what the actual calculated potential field looks like. Here is an image of the 150x150 grid after 8000 iterations:

Here is a plot showing three lines from the image (at columns 30, 80, 130 respectively):

It would be interesting to add more results (from IronPython, Jython, pure C++, etc.). Feel free to check out the code from GitHub and experiment. Alternatively, add additional problems to the speed project on SciPy and make more comparisons. It is clear that you can squeeze that last ounce of speed out of Python by linking to machine code. It also seems clear that there is enough information in a vectorized NumPy expression to produce fast machine code automatically --- even faster than is possible with an explicit loop. The PyPy project shows that generally-available JIT technology for Python is here, and the scientific computing community should grapple with how we will make use of it (and improve upon it). My prediction is that we can look forward to more of that in the coming months and years.

Monday, June 20, 2011

The high-level nature of Python makes it very easy to program, read, and reason about code. Many programmers report being more productive in Python. For example, Robert Kern once told me that "Python gets out of my way" when I asked him why he likes Python. Others express it as "Python fits your brain." My experience resonates with both of these comments.

It is not rare, however, to need to do many calculations over a lot of data. No matter how fast computers get, there will always be cases where you still need the code to be as fast as you can get it. In those cases, I first reach for NumPy which provides high-level expressions of fast low-level calculations over large arrays. With NumPy's rich slicing and broadcasting capabilities, as well as its full suite of vectorized calculation routines, I can quite often do the number crunching I am trying to do with very little effort.

Even with NumPy's fast vectorized calculations, however, there are still times when either the vectorization is too complex, or it uses too much memory. It is also sometimes just easier to express the calculation with a simple loop. For those parts of the application, there are two general approaches that work really well to get you back to compiled speeds: weave or Cython.

Weave is a sub-package of SciPy and allows you to inline arbitrary C or C++ code into an extension module that is dynamically loaded into Python and executed in-line with the rest of your Python code. The code is compiled and linked at run-time the very first time the code is executed. The compiled code is then cached on-disk and made available for immediate later use if it is called again.

Cython is an extension-module generator for Python that allows you to write Python-looking code (Python syntax with type declarations) that is then pre-compiled to an extension module for later dynamic linking into the Python run-time. Cython translates Python-looking code into "not-for-human-eyes" C code that compiles to reasonably fast machine code. Cython has been gaining a lot of momentum in recent years as people who have never learned C can use it to get C speeds exactly where they want them, starting from working Python code. Even though I feel quite comfortable in C, my appreciation for Cython has been growing over the past few years, and I am now an avid supporter of the Cython community and like to help it whenever I can.

Recently I re-did the same example that Prabhu Ramachandran first created several years ago, which is reported here. This example solves Laplace's equation over a 2-d rectangular grid using a simple iterative method. The code finds a two-dimensional function, u, where ∇²u = 0, given some fixed boundary conditions.

This code takes a very long time to run in order to converge to the correct solution. For a 100x100 grid, visually-indistinguishable convergence occurs after about 8000 iterations. The pure Python solution took an estimated 560 seconds (9 minutes) to finish (using IPython's %timeit magic command).
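For reference, the inner loop of the pure-Python solution looks something like this (a sketch; the function name, the argument layout, and the in-place sweep are assumptions, since the original listing is not reproduced here):

```python
# A sketch of the pure-Python update step (names are assumptions).
# u is a list of lists holding the current values of the potential;
# dx2 and dy2 are the squared grid spacings. The sweep updates u in
# place, Gauss-Seidel style, averaging each cell's four neighbors.
def py_update(u, dx2, dy2):
    nx, ny = len(u), len(u[0])
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            u[i][j] = ((u[i+1][j] + u[i-1][j]) * dy2 +
                       (u[i][j+1] + u[i][j-1]) * dx2) / (2.0 * (dx2 + dy2))
```

Repeating this double loop thousands of times in the interpreter is what makes the pure-Python version so slow.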

NumPy Solution

Using NumPy, we can speed this code up significantly by using slicing and vectorized (automatic looping) calculations that replace the explicit loops in the Python-only solution. The NumPy update code is:
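A sketch of what that vectorized update looks like (the name num_update follows the text; dx2 and dy2 are the assumed squared grid spacings):

```python
import numpy as np

def num_update(u, dx2, dy2):
    # Average of the four neighbors, computed with slicing instead of
    # explicit loops; the right-hand side is evaluated into temporaries
    # before assignment, so this is a Jacobi-style update.
    u[1:-1, 1:-1] = ((u[2:, 1:-1] + u[:-2, 1:-1]) * dy2 +
                     (u[1:-1, 2:] + u[1:-1, :-2]) * dx2) / (2.0 * (dx2 + dy2))
```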

Using num_update as the calculation function reduced the time for 8000 iterations on a 100x100 grid to only 2.24 seconds (a 250x speed-up). Such speed-ups are not uncommon when using NumPy to replace Python loops where the inner loop is doing simple math on basic data-types.

Quite often it is sufficient to stop there and move on to another part of the code-base. Even though you might be able to speed up this section of code more, it may not be the critical path anymore in your over-all problem. Programmer effort should be spent where more benefit will be obtained. Occasionally, however, it is essential to speed-up even this kind of code.

Even though NumPy implements the calculations at compiled speeds, it is possible to get even faster code. This is mostly because NumPy needs to create temporary arrays to hold intermediate simple calculations in expressions like the average of adjacent cells shown above. If you were to implement such a calculation in C/C++ or Fortran, you would likely create a single loop with no intermediate temporary memory allocations and perform a more complex computation at each iteration of the loop.

In order to get an optimized version of the update function, we need a machine-code implementation that Python can call. Of course, we could do this manually by writing the inner loop in a compilable language and using Python's extension facilities. More simply, we can use Cython or Weave, which do most of the heavy lifting for us.

Cython solution

Cython is an extension-module writing language that looks a lot like Python except for optional type declarations for variables. These type declarations allow the Cython compiler to replace generic, highly dynamic Python code with specific and very fast compiled code that is then able to be loaded into the Python run-time dynamically. Here is the Cython code for the update function:
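A sketch of such an update function (the name cy_update follows the later setup.py discussion; the ndarray buffer declarations were the idiom of the day for fast element access):

```cython
# A hedged reconstruction of the Cython update function.
cimport numpy as cnp

def cy_update(cnp.ndarray[double, ndim=2] u, double dx2, double dy2):
    cdef unsigned int i, j
    # With the buffer declaration above, u[i, j] compiles to a direct
    # memory access instead of a generic Python __getitem__ call.
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            u[i, j] = ((u[i+1, j] + u[i-1, j]) * dy2 +
                       (u[i, j+1] + u[i, j-1]) * dx2) / (2.0 * (dx2 + dy2))
```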

This code looks very similar to the original Python-only implementation except for the additional type-declarations. Notice that even NumPy arrays can be declared with Cython and Cython will correctly translate Python element selection into fast memory-access macros in the generated C code. When this function was used for each iteration in the inner calculation loop, the 8000 iterations on a 100x100 grid took only 1.28 seconds.

For completeness, the following shows the contents of the setup.py file that was also created in order to produce a compiled-module where the cy_update function lived.
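A minimal setup.py of the sort described might look like this (a sketch; the module and file names follow the text, and the rest is standard Cython/distutils boilerplate of the era):

```python
# Build configuration for the _laplace extension module (a sketch).
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy

setup(
    cmdclass={'build_ext': build_ext},
    ext_modules=[Extension("_laplace", ["_laplace.pyx"],
                           include_dirs=[numpy.get_include()])],
)
```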

The extension module was then built using the command: python setup.py build_ext --inplace

Weave solution

An older, but still useful, approach to speeding up code is to use weave to embed a C or C++ implementation of the algorithm directly into the Python program. Weave is a module that surrounds the bit of C or C++ code that you write with a template to create, on the fly, an extension module that is compiled and then dynamically loaded into the Python run-time. Weave has a caching mechanism so that different strings or different types of inputs lead to a new extension module being created, compiled, and loaded. The first time code using weave runs, the compilation has to take place. Subsequent runs of the same code will load the cached extension module and run the machine code.

The inline function takes a string of C or C++ code plus a list of variable names that will be pushed from the Python namespace into the compiled code. The inline function takes this code and the list of variables and either loads and executes a function in a previously-created extension module (if the string and types of the variables have been previously created) or else creates a new extension module before compiling, loading, and executing the code.
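A sketch of how that looks for this problem (hedged: scipy.weave was a Python-2-era package and has since been removed from SciPy; the variable names follow the surrounding text):

```python
# A sketch of the weave version; U2 and Nu are macros that weave
# generates for element access and array shape (see below).
from scipy import weave

def weave_update(u, dx2, dy2):
    code = """
    for (int i = 1; i < Nu[0]-1; i++) {
        for (int j = 1; j < Nu[1]-1; j++) {
            U2(i,j) = ((U2(i+1,j) + U2(i-1,j))*dy2 +
                       (U2(i,j+1) + U2(i,j-1))*dx2) / (2.0*(dx2 + dy2));
        }
    }
    """
    # Push u, dx2, and dy2 from the Python namespace into the C code.
    weave.inline(code, ['u', 'dx2', 'dy2'])
```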

Notice that weave defines special macros so that U2 allows referencing the elements of the 2-d array u using simple expressions. Weave also defines the special C array of integers Nu to contain the shape of the u array. Similar macros would have been defined to access the elements of u had it been a 1-, 3-, or 4-dimensional array (U1, U3, and U4). Although not used in this snippet of code, the C array Su containing the strides in each dimension and the integer Du giving the number of dimensions of the array are also defined.

Using the weave_update function, 8000 iterations on a 100x100 grid took only 1.02 seconds. This was the fastest implementation of all of the methods used. Knowing a little C and having a compiler on hand can certainly speed up critical sections of code in a big way.

Faster Cython solution (Update)

After I originally published this post, I received some great feedback in the Comments section that encouraged me to add some parameters to the Cython solution in order to get an even faster solution. I was also reminded about pyximport and given example code to make it work more easily. Basically by adding some compiler directives to Cython to avoid some checks at each iteration of the loop, Cython generated even faster C-code. To the top of my previous Cython code, I added a few lines:

#cython: boundscheck=False
#cython: wraparound=False

I then saved this new file as _laplace.pyx, and added the following lines to the top of the Python file that was running the examples:
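Those lines were presumably of this form (a sketch; pyximport ships with Cython and compiles the .pyx file on first import, here with the NumPy headers on the include path):

```python
# Compile _laplace.pyx on first import (a sketch; names follow the text).
import numpy
import pyximport
pyximport.install(setup_args={'include_dirs': numpy.get_include()})
from _laplace import cy_update2
```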

This provided an update function cy_update2 that resulted in the very fastest implementation (943 ms) for 8000 iterations of a 100x100 grid.

Summary

The following table summarizes the results, which were all obtained on a 2.66 GHz Intel Core i7 MacBook Pro with 8 GB of 1067 MHz DDR3 memory. The relative speed column shows the speed relative to the NumPy implementation.

Method          Time (sec)   Relative Speed
Pure Python     560          250
NumPy           2.24         1
Cython          1.28         0.57
Weave           1.02         0.45
Faster Cython   0.94         0.42

Clearly, when it comes to doing a lot of heavy number crunching, pure Python is not really an option. However, perhaps somewhat surprisingly, NumPy can get you most of the way to compiled speeds through vectorization. In situations where you still need the last ounce of speed in a critical section, or where vectorizing the solution either requires a PhD in NumPy-ology or results in too much memory overhead, you can reach for Cython or Weave. If you already know C/C++, then weave is a simple and speedy solution. If, however, you are not already familiar with C, then you may find Cython to be exactly what you are looking for to get the speed you need out of Python.

Saturday, June 18, 2011

Today I was trying to make progress on a few different NumPy enhancement proposals and ended up frustrated, knowing that come Monday morning I will not have any time to follow up on them. Managing a growing consulting company takes a lot of time (Enthought is over 30 people now and growing to near 50 by the end of the year). There are countless meetings devoted to new hires, program development, project reviews, customer relations, budgeting, and sales. I also take a direct role in delivering on training and select consulting projects. Someday I may get a chance to write something of use about things I've learned along the way, but that is for another day (and likely another blog). This post is to get a few ideas I've been sitting on written down, in the hopes that somebody might read them and get excited about contributing. At the very least, anybody who reads this post will know (at least some of) my current opinion about a few technical proposals.

About a month ago, I had the privilege of organizing a "data-array" summit in which several people in the NumPy and SciPy community came together at the Enthought offices to discuss some ideas related to how to improve data analysis with the NumPy and SciPy stack. We spent three days thinking and brainstorming, which led to many fruitful discussions. I expect that some of the ideas generated will result in important and interesting changes to NumPy and SciPy over the coming months and years. More about the summit can be found by listening to the relevant inSCIght podcast.

It's actually a very exciting time to get involved in the SciPy community as Python takes its place as one of the approaches people will be using to analyze all the data that we are generating. In that spirit, I wanted to express a few of what I consider to be important enhancements that are needed to Python and NumPy.

I will start with Python and leave NumPy to another post. There are really three big missing features that would benefit those of us who use Python for technical computing. Unfortunately, I don't think there is enough representation of the Python-for-Science crowd in the current batch of Python developers. This is not due to any exclusion by the Python developers, who have always been very accommodating. It is simply due to the scarcity of people who understand the SciPy perspective and use-cases and are also willing to engage with developers in the Python world. Those (like Mark Dickinson) who cross the chasm are a real gem.

If anyone has an interest in shepherding a PEP in any of the following directions, you will have my '+1' support (and any community-organizing that I can muster to help you). Honestly, if these things were put into Python 3, there would be a serious motivation to move to Python 3 for the scientific community (which is otherwise going to lag in the great migration).

Python Enhancements I Want

Adding additional operators

We need additional operators to easily represent at least matrix multiplication, matrix power, and matrix solve. I could possibly back off on the last two if we at least had matrix multiplication. This should have been done a long time ago. If I had been able to spare the time, I would have pushed to hold off porting NumPy to Python 3 until we got matrix multiplication operators. Yes, I know that blackmail usually backfires, and thankfully Pauli Virtanen and Charles Harris acted before I even had a chance to suggest such a thing :-). But, seriously, we need this support in the language.

The reasons are quite simple:

Syntax matters: writing d = numpy.solve(numpy.dot(numpy.dot(a,b),c), x) is a whole lot uglier than something like d = (a*b*c) \ x. If the former is fine, then we should all just go back to writing LISP. The point of having nice syntax is to minimize the line noise and the mental overhead of mapping an idea to working code. For Python to be used with mental efficiency in technical computing, you need to write expressions involving higher-order operations like this all the time.

Right now, the recommended way to do this is to convert a, b, c, and x to "matrices", perform the computation in a nice expression and then convert back to arrays. This is clunky at best.
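To make the contrast concrete, here is the nested-call idiom next to the matrix-class workaround (a small sketch with made-up arrays):

```python
import numpy as np

a = np.array([[2.0, 0.0], [0.0, 2.0]])
b = np.array([[1.0, 1.0], [0.0, 1.0]])
c = np.array([[3.0, 0.0], [0.0, 3.0]])
x = np.array([6.0, 12.0])

# Function-call style: the intent of (a*b*c) \ x is buried in the nesting.
d = np.linalg.solve(np.dot(np.dot(a, b), c), x)

# Matrix-class style: a nicer expression, but it requires converting the
# arrays in (and any results back out) around every such computation.
m = np.matrix(a) * np.matrix(b) * np.matrix(c)
```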

I've been back and forth on this for 13 years and can definitively say that we would be much better off in Python if we had a matrix multiplication operator. Please, please, can we get one! The relevant PEPs where this has been discussed are PEP 211 and PEP 225. I think I would like more than just one operator added (à la PEP 225), but the subject would have to be re-visited by a brave soul.

Overloadable Boolean Operations

PEP 335 was a fantastic idea. I really wish we had the ability to overload and, or, and not. Among other things, this would allow the very nice syntax mask = 2<a<10 to generate an array of True and False values when a is an array. Currently, to generate this same mask you have to write (2<a)&(a<10). The PEP has other important use-cases as well. It would be excellent if this PEP were re-visited, championed, and put into Python 3.
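A small sketch of the difference, runnable today with NumPy:

```python
import numpy as np

a = np.array([1, 3, 5, 9, 11])

# What PEP 335 would allow:   mask = 2 < a < 10
# What you must write today (parentheses required, since & binds more
# tightly than the comparison operators):
mask = (2 < a) & (a < 10)
```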

Allowing slice object literals outside of []

Python's syntax allows construction of a slice object inside brackets, so that one can write a[1:3], which is equivalent to a.__getitem__(slice(1,3)). Many times over the years, I have wanted to be able to specify a slice object using the syntax start:stop:step outside of getitem. Even extending Python's parser to accept a slice literal as the input to a function would be welcome. The biggest wart this would remove is the (ab)use of getitem to return new ranges and grids in NumPy (go use mgrid and r_ in NumPy to see what I mean). I would prefer that these were functions, but for that I would need mgrid(1:5, 1:5) to work.
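The current bracket-only idiom looks like this (a small sketch):

```python
import numpy as np

# Today, slice syntax is only legal inside brackets, so NumPy exposes
# mgrid and r_ as objects indexed with slices rather than as functions.
grid = np.mgrid[1:5, 1:5]   # a pair of 4x4 coordinate arrays
row = np.r_[1:5]            # the 1-d range array([1, 2, 3, 4])

# The wish: allow slice literals outside brackets, e.g. mgrid(1:5, 1:5).
```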

There was a PEP for range literals (PEP 204) once upon a time. There were some interesting aspects about that proposal, but frankly I don't want the slice syntax to produce ranges. I would just be content for it always to produce slice objects --- just allow it outside of brackets.

While I started by lamenting my lack of time to implement NumPy enhancements, I will leave the discussion of NumPy enhancements I'm dreaming about to another post. I would be thrilled if somebody took up the charge to push any of these Python enhancements in Python 3. If Python 3 ends up with any of them, it would be a huge motivation to me to migrate to Python 3 entirely.

Friday, February 11, 2011

My pathway to probability theory was a little tortured. Like most people, I sat through my first college-level "Statistics" class fairly befuddled. I was good at math and understood calculus pretty well. As a result, I did well in the course, but I didn't feel that I really understood what was going on. Later I took a course that used as its text this book by Papoulis. Now that text is a great reference for me, but at the time I didn't really understand the point of most of the more theoretical ideas. It wasn't until after I had studied measure theory, and understood more of the implications of George Cantor's set-theoretic work, that I began to see the significance of a Borel algebra and why some of the complexity was necessary from a foundational perspective.

I still believe, however, that diving into the details of measure theory is overkill for introducing probability theory. I've been convinced by E.T. Jaynes that probability theory is an essential component of any education, and as such it should be presented in multiple ways at multiple times, and certainly not thrown at you as "just an application of measure theory" the way it sometimes is in math courses. I think this is improving, but there is still work to do.

What typically still happens is that people get their "taste" of probability theory (or worse, their taste of "statistics") and then move on not ever really assimilating the lessons in their life. The trouble is everyone must deal with uncertainty. Our brains are hard-wired to deal with it --- often in ways that can be counter-productive. At its core, probability theory is just a consistent and logical way to deal with uncertainty using real numbers. In fact, it can be argued that it is the only way to deal with uncertainty.

I've done a bit of experimentation over the years and dealt with a lot of data (MRI, ultrasound, electrical impedance data). In probability theory, I found a framework for understanding what the data really tells me which led me to spend several years studying inverse problems. There are a lot of problems that can be framed as inverse problems. Basically, inverse problem theory can be applied to any problem where you have data and you want to understand what the data tells you. To apply probability theory to solve an inverse problem you have to have some model that determines how what you want to know leads to the data you've got. Then, you basically invert the model. Bayes' theorem provides a beautiful framework for this inversion.
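In symbols, for model parameters m and data d, the inversion described above is just Bayes' theorem:

```latex
p(m \mid d) \;=\; \frac{p(d \mid m)\, p(m)}{p(d)} \;\propto\; p(d \mid m)\, p(m)
```

Here p(d | m) encodes the forward model (how the thing you want to know leads to the data you've got) and p(m) is the prior; the posterior p(m | d) is the random variable that the solution of the inverse problem produces.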

The result of this Bayesian approach to inverse problems, though, is not just a number. It is explicitly a probability density function (or probability mass function). In other words, the result of a proper solution to an inverse problem is a random variable, or probability distribution. Seeing the result of any inverse problem as a random variable changes the way you think about drawing conclusions from data.

Think about the standard problem of fitting a line to data. You plug-and-chug using a calculator or a spreadsheet (or a function call in NumPy), and you get two numbers: the slope and the intercept. If you properly understand inverse problems as requiring the production of a random variable, then you will not be satisfied with just these numbers. You will want to know: how certain am I about these numbers? How much should I trust them? What if I am going to make a decision on the basis of these numbers? (Speaking of making decisions, someday I would like to write about how probability theory is also under-utilized in standard business financial projections and business decisions.)
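The plug-and-chug version in NumPy really does hand back just two numbers (the data here are made up for illustration):

```python
import numpy as np

# Made-up noisy data drawn from a known line: y = 2x + 1.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

# np.polyfit returns exactly two numbers for a degree-1 fit --
# and says nothing about how much to trust them.
slope, intercept = np.polyfit(x, y, 1)
```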

Some statisticians, when faced with this regression problem, will report the "goodness" of fit and feel satisfied, but as one who sees the power and logical simplicity of Bayesian inverse theory, I'm not satisfied by such an answer. What I want is the joint probability distribution for the slope and intercept given the data. A lot of common regression techniques do not provide this. I'm not going to go into the historical reasons why; you can use Google to explore some of the details if you are interested. A lot of it comes down to the myth of objectivity and the desire to eliminate the need for a prior, a need which Bayesian inverse theory exposes.
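Here is a small grid-based sketch of what I mean, under assumptions I am supplying for illustration (Gaussian noise of known standard deviation and a flat prior; all the numbers are invented). The answer is a joint density over (slope, intercept), from which you can read off marginals and intervals rather than a single pair of numbers.

```python
import numpy as np

# Made-up data from a known line, with known noise level sigma.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)
sigma = 1.0

# A grid of candidate (slope, intercept) pairs.
slopes = np.linspace(1.0, 3.0, 201)
intercepts = np.linspace(-1.0, 3.0, 201)
S, B = np.meshgrid(slopes, intercepts, indexing="ij")

# Log-likelihood of the data for every candidate pair.
resid = y - (S[..., None] * x + B[..., None])
loglike = -0.5 * np.sum(resid**2, axis=-1) / sigma**2

# Flat prior: the posterior is proportional to the likelihood.
post = np.exp(loglike - loglike.max())
post /= post.sum()

# Marginal uncertainty in the slope alone, from the joint density.
slope_marginal = post.sum(axis=1)
slope_mean = np.sum(slopes * slope_marginal)
```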

As a once very active contributor to SciPy (now an occasional contributor who is still very interested in its progress), a few years ago I put a little utility into the scipy.stats package for estimating the mean, standard deviation, and variance from data, one that expresses my worldview a little bit. I recently updated this utility and created a function called mvsdist. This function finally returns random variable objects (as any good inverse problem solution should!) for the mean, variance, and standard deviation derived from a vector of data. The assumptions are: 1) the data were all sampled from a random variable with the same mean and variance, 2) the standard deviation and variance are "scale" parameters, and 3) the priors are non-informative (improper).

The details of the derivation are recorded in this paper. Any critiques of this paper are welcome, as I never took the time to try to get a formal review for it (I'm not sure where I would have submitted it for one --- and I'm pretty sure there is a paper out there that already expresses all of this, anyway).

It is pretty simple to get started playing with mvsdist (assuming you have SciPy 0.9 installed). This function is meant to be called any time you have a bunch of data and you want to "compute the mean" or "find the standard deviation." You collect the data into a list or NumPy array of numbers and pass this into the mvsdist function:
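For example, with some made-up measurements:

```python
import numpy as np
from scipy import stats

# Made-up measurements; any 1-d sequence of samples works here.
data = np.array([9.7, 10.1, 10.3, 10.5, 9.8, 10.2, 9.9, 10.4])

# mvsdist returns three frozen distribution objects, not three numbers.
mean, var, std = stats.mvsdist(data)
```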

This returns three distribution objects, which I have intentionally named mean, var, and std because they represent the estimates of the mean, variance, and standard deviation of the data. Because they are estimates, they are not just numbers; they are (frozen) probability distribution objects. These objects have methods that let you evaluate the probability density function: .pdf(x); compute the cumulative distribution function: .cdf(x); generate random samples drawn from the distribution: .rvs(size=N); determine an interval that contains some percentage of the random draws from this distribution: .interval(alpha); and calculate simple statistics: .stats(), .mean(), .std(), .var().
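Continuing with made-up data (the particular numbers are illustrative only), you can ask the returned objects questions directly:

```python
import numpy as np
from scipy import stats

data = np.array([9.7, 10.1, 10.3, 10.5, 9.8, 10.2, 9.9, 10.4])
mean, var, std = stats.mvsdist(data)

# Not just a point estimate: interrogate the distribution itself.
lo, hi = mean.interval(0.95)   # 95% interval for the mean estimate
best = mean.mean()             # a point estimate, if you still want one
draws = std.rvs(size=1000)     # plausible standard deviations, sampled
density = std.pdf(0.3)         # density of the std estimate at 0.3
```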

Notice that once we have the probability distribution, we can report many things about the estimate: not only the estimate itself, but also the answer to any question we might have regarding its uncertainty. Often we may want to visualize the probability density function, as is shown below for the standard deviation estimate and the mean estimate.
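Density plots like these take only a few lines of matplotlib. This is a sketch with made-up data; the plotting ranges are just reasonable guesses for that data:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # render off-screen, no display needed
import matplotlib.pyplot as plt
from scipy import stats

data = np.array([9.7, 10.1, 10.3, 10.5, 9.8, 10.2, 9.9, 10.4])
mean, var, std = stats.mvsdist(data)

# Evaluate each estimate's density over a sensible range and plot it.
fig, axes = plt.subplots(1, 2, figsize=(8, 3))
xm = np.linspace(9.5, 10.7, 200)
axes[0].plot(xm, mean.pdf(xm))
axes[0].set_title("Mean estimate")
xs = np.linspace(0.05, 1.0, 200)
axes[1].plot(xs, std.pdf(xs))
axes[1].set_title("Std. dev. estimate")
fig.savefig("mvsdist_estimates.png")
```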

It is not always easy to solve an inverse problem by providing the full probability distribution object (especially in multiple dimensions). But, when it's possible, it really does provide a more thorough understanding of the problem. I'm very interested in SciPy growing more of these kinds of estimator approaches where possible.