Un pitone a San Luca

Thursday, June 26, 2014

This blog has been on standby for more time than is healthy, and things have changed a lot since my last post. With this in mind, combined with the discomfort that Blogger imposes on writing a technical blog, I decided to transfer all its material to a new location.

Tuesday, April 16, 2013

After a long absence due to...well, to a lot of things, here is a short synthesis of the last few months.

First of all, I'm finally moving to python 3.3! As I pointed out in my previous post, Moving to python 3k for numerical computation, the situation is still far from perfect, but in the last months it got better: still no good response from mayavi and tables (and that's a real shame), but both scikits.learn and biopython (even if only installing from sources) got the golden status of py3k readiness. Given that I rarely use tables or mayavi, almost 100% of my workflow is ready for the transition.

The reason for the transition is quite simple: I like new shiny toys :) Aside from that, in the last year I've been bitten frequently by the unicode management in python 2.7, and several times I've strongly wished to move to a better-behaved language like python 3. The python team did a great job, and the result is a language with a cleaner and more logical structure. Why should I stick to the less-than-optimal python 2? I don't particularly love giving myself trouble for free, and I still have very few strings attached, so my transition can be carefree. The only thing that kept me back was the support status of my everyday packages.

To keep track of the evolving support for your preferred packages, two good points of reference are the Python 3 Wall of Superpowers (formerly known as the Wall of Shame) and the official page tracking python 3 support for the top packages, http://py3ksupport.appspot.com/. And do me a favor: start writing python 3 compatible scripts with a proper use of the __future__ statement and the six library!
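
As a minimal sketch (the six import is commented out, since it is a third-party package), a 2/3-compatible script could start like this:

```python
# Opt in to python 3 behavior while still running on python 2.
from __future__ import print_function, division, unicode_literals

# import six  # third party: six.moves, six.string_types, ...

print(1 / 2)    # true division on both versions: 0.5
print('caffè')  # a unicode literal on both versions
```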

Aside from that, I spent a lot of time working with the guys of the statsmodels project. I got a pull request accepted that implements the mosaic plot, and two more are waiting for a response: one is a poor man's implementation of the facet plot and one is targeted at microarray and pathway analysis. I have to admit that this has been a TREMENDOUS experience. I learned a lot of things from them, first of all the huge gap that exists between writing code that does something and code that lets others do the same thing. Code readability, good docstrings, package organization and a lot of new, fun things to do. It has also been a good excuse to get more confident with the git workflow, which I had always wanted to learn but always postponed.

I also tried to post a package on PyPI, the central python package repository. The package is named keggrest, and it's a basic implementation of the REST API of the KEGG biological database. It's a shame, as it has close to no documentation and no support for python 3. Talk about throwing the first stone :). In the next few days I will give it some love to make it the kind of package that I myself would expect to use.
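
For the curious, the heart of such a package is tiny: the KEGG REST API is plain HTTP, so every operation reduces to composing a URL and fetching it. This is an illustrative reconstruction, not the actual keggrest code (the function names here are made up):

```python
# Sketch of a minimal KEGG REST client (illustrative names).
try:
    from urllib.request import urlopen   # python 3
except ImportError:
    from urllib2 import urlopen          # python 2

BASE = 'http://rest.kegg.jp'

def kegg_url(operation, *args):
    """Compose a KEGG REST url, e.g. kegg_url('get', 'hsa:10458')."""
    return '/'.join([BASE, operation] + list(args))

def kegg_get(operation, *args):
    """Perform the request and return the raw text response."""
    return urlopen(kegg_url(operation, *args)).read().decode('utf-8')
```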

Wednesday, January 23, 2013

As you can see I've just changed the whole aspect of the blog.
This is a response to a couple of design needs. The lesser one is that all the plots I create have a white background, and against a dark background they are a pain to the eye. The greater one is that Blogger sucks for posting code and images. I spend half of my time checking the font and color of the code, and every image requires saving it to disk. I can't even show any more complicated code, as the results would have to be reformatted by hand. And, to be honest, I've always dreamed of simply blogging my ipython notebook scripts.

So, thanks to Brian E. Granger, who wrote a simple method to post an ipython notebook as a frame in the post, I can have my cake and eat it too!
The method is as simple as adding an iframe tag (with the correct dimensions put in by hand, but that's a minor flaw) in the HTML code of the post, and voilà!

Leveraging the magic of the nbviewer server (which is an amazing service, by the way) I can now write a wonderfully formatted notebook, with code and formulas and plots and everything, and just hand it to you. What I'm going to do is set up a git repository, create my notebooks in there and link them with nbviewer in here. By the way, the notebook linked here is the wonderful XKCD plot style created by Jake Vanderplas, a great blogger and python developer.

I will try to explain how I'm going to set up and manage the repository.
Having set up a GitHub account (which is really easy), I create a new repository on it called blogger_notebook. The next step is to create the local directory that will host my material.

mkdir blogger_code
cd blogger_code/

GitHub gives some useful information on how to create a new repository. First of all, we initialize it by writing

git init

this tells git that this directory is a repository and that it should keep the version-control history of the data. Now I tell it the location of the online repository (using the URL of the repository created on GitHub):

git remote add origin https://github.com/EnricoGiampieri/blogger_notebook.git

Ok, we are close to the goal. I copy in the notebook that will be my next post, basemap.ipynb. Now I need to tell git to track it:

git add basemap.ipynb

Now, every time I make a modification to this file that I want to remember, I can save it with the commit command. I also need to add a description of the modification made:

git commit basemap.ipynb -m "creation of an example of basemap usage"

Lastly, to keep the online repository up to date, I push the changes to GitHub. This will ask for my username and password and upload all the modifications to the online repository:

git push -u origin master

You can see the results here:

https://github.com/EnricoGiampieri/blogger_notebook

Now, the last step is to create a nbviewer link to the notebook. You should take the link to the raw file (you can obtain it by opening the file and looking for the RAW button) and paste it into the nbviewer main page. It will give you a nice link to the notebook with all its content.

Obviously this is barely scratching the surface of the (super)powers of git, but there are tons of manuals online that explain it better than I ever could. This was just a step-by-step guide to how I set up this "delayed blogging" method.

Sunday, January 13, 2013

One of the python modules toward which I have the most conflicted feelings is without any doubt sympy.

Sympy is a great piece of software that can deal with a huge amount of problems in a quite elegant way, and I would really like to use it more in my work. The main drawback was a very poor support for statistics, and doing all those integrals by hand felt a little odd.

It was with a lot of happiness that I read about the development of a new module for statistics in sympy, called sympy.stats, which promised to address all (or at least most) of the needs one can have when working out statistical problems.

The foundation for this module was put into place by Matthew Rocklin in the summer of 2012. He did a good job, and the module has indeed been extended to support a great number of probability distributions, both continuous and finite. There is as yet no support for infinite discrete spaces like the natural numbers, which means a few very important distributions like the Poisson or the Negative Binomial are still left out, but the overall feeling is very good.

The library is based on the idea of random variables, each defining a probability measure over a certain domain. For example, a normal variable is defined over the whole real axis and implements the gaussian probability density.

A selected set of operations can be performed on these random variables, notably obtaining the density, the probability of an event or the expectation value.

But let the code speak.
Let's import sympy and sympy.stats, and create a Normal variable with a fixed variance and a mean represented by a sympy real variable. Remember that any time we specify a new sympy symbol we have to declare a name for it; in this case our normal distribution will simply be called X.
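
The notebook cell itself is no longer embedded in this page; a sketch of what it contained, based on the description above, would be:

```python
import sympy
from sympy import stats

# a Normal random variable with symbolic mean mu and unit variance;
# the first argument is the name declared for the sympy symbol
mu = sympy.Symbol('mu')
X = stats.Normal('X', mu, 1)
```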

We can now ask for the expected mean and variance of our random variable:

print sympy.simplify(stats.E(X))
print sympy.simplify(stats.variance(X))
which return, as expected, mu and 1.
We can also create new random expressions based on the original one.
We know for example that a chi-squared variable is the sum of the squares of N normal variables, so we can obtain the mean and variance of a chi-squared distribution with 2 degrees of freedom simply by summing the squares of two normal distributions:
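
The corresponding notebook cell is missing from the post; a sketch of the construction, under the same assumptions as before, is:

```python
import sympy
from sympy import stats

# two standard normal variables; the sum of their squares is
# chi-squared distributed with 2 degrees of freedom
X1 = stats.Normal('X1', 0, 1)
X2 = stats.Normal('X2', 0, 1)
Chi = X1**2 + X2**2

print(sympy.simplify(stats.E(Chi)))         # 2
print(sympy.simplify(stats.variance(Chi)))  # 4
```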

that are exactly the values we were expecting (see Chi Squared distribution)
We can sample our expression with sample or sample_iter, and we can look at the resulting distribution:

samples = list(stats.sample_iter(Chi, numsamples=10000))

We can plot the histogram with pylab as simply as:

pylab.hist(samples, bins=100)
pylab.show()

We can also evaluate the conditional probability of events, but on continuous distributions this leads to some heavy integrals, so I will demonstrate it using the much simpler Die class, which represents the roll of a fair n-sided die.
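
The dice themselves are created in a missing notebook cell; a sketch consistent with the examples below (three fair six-sided dice, with Z their sum) is:

```python
import sympy
from sympy import stats
from sympy.stats import Die

# three independent fair six-sided dice; Z is their sum
X, Y, W = Die('X', 6), Die('Y', 6), Die('W', 6)
Z = X + Y + W
```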

We can ask what the probability is that a realization of X is greater than 4:

stats.P(X>4)
# 1/3

or that it equals a certain value, say 3 (the ugly syntax cannot be avoided due to how the equality test is evaluated):

stats.P(sympy.Eq(X,3))
# 1/6

We can also ask what the probability is that the sum of the three dice, Z, is greater than 10 given that the first die rolled a 4:

stats.P(Z>10, sympy.Eq(X,4))

So, summing up, the stats module of sympy is really promising and I hope a lot of work will be done on it to make it even better. Once I understand the sympy development process and the module class hierarchy, I will surely try to make a contribution.
Given these praises, for my needs it still lacks several fundamental features:

support for non-limited discrete spaces

better support for mixtures of distributions (right now I only get errors complaining about the invertibility of the CDF)

better fall-back to numerical evaluation, as a lot of distributions are described by integrals and special functions and, even if the integration routine of sympy is pretty solid, not everything can be solved analytically

Monday, December 10, 2012

In the last few years of working with python, I've always suffered from being kept back on the 2.x version of python by the needs of the scientific libraries. The good news is that in the last year most of them made the great step and shipped a 3.x-ready version (or, to be honest, a 3.x-convertible 2.x version). So right now I'm having fun trying to install everything on my laptop, an Ubuntu 12.10.

The first step is to install python 3.2 and the appropriate version of the pip packaging system:

sudo apt-get install python3 python3-pip

Then we can just go through the normal installation process using pip-3.2:

sudo pip-3.2 install -U numpy
sudo pip-3.2 install -U scipy
sudo pip-3.2 install -U matplotlib
sudo pip-3.2 install -U sympy
sudo pip-3.2 install -U pandas
sudo pip-3.2 install -U ipython
sudo pip-3.2 install -U nose
sudo pip-3.2 install -U networkx
sudo pip-3.2 install -U statsmodels
sudo pip-3.2 install -U cython

Sadly mayavi, scikit-learn, numexpr, biopython and tables are still working on the transition, so they're not yet available. This leaves the numerical side of python quite crippled, but I hope they will soon catch up with the others and allow us to use py3k like the rest of the world out there.

Sunday, December 2, 2012

A few days ago I had the occasion to play around with the descriptor syntax of python. The usual question of "What is a descriptor?" is always answered with a huge wall of text, but descriptors are in reality quite a simple concept: they are a generalization of properties.

For those not familiar with the concept, properties are a trick to call functions with the same syntax as an attribute. If prop is a property, you can write this assignment as if it were a normal attribute:

A.prop = value

But the property will allow you to perform checks and the like on the value before the real assignment.
The basic syntax starts from a normal get/set pair (never use getters and setters unless you plan to wrap them in a property!), but then you add a new element to the class that puts these two functions together under the name x:
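
The code for this step is no longer in the post; a minimal sketch of the pattern (get_x and set_x are illustrative names) is:

```python
class A(object):
    def __init__(self):
        self._x = 0

    def get_x(self):
        return self._x

    def set_x(self, value):
        # any check can go here before the real assignment
        if value < 0:
            raise ValueError("x must be non-negative")
        self._x = value

    # put the two functions together under the single name x
    x = property(get_x, set_x)
```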

You can now use it as a normal class attribute, but when you assign a value to it, it will react with the setter function.

a = A()
a.x = 5  # set the value of x to 5
print a.x

This opens a new world of possible interactions between the class and the user, with a very simple syntax. The only limit is that any extra information has to be stored in the class itself, while sometimes it can be useful to keep it separated. It can also become very verbose, which is something that is frowned upon when programming in python (Python is not Java, remember).

If we have to create several attributes that behave in a similar way, repeating the same code for each property can be quite a hassle. That's where descriptors start to become precious (yes, they can do a lot more, but I don't have great requirements).

A descriptor is a class which implements the method __get__ and, optionally, the methods __set__ and __delete__. These are the methods that will be called when you use an attribute created with the descriptor.

Let's see a basic implementation of a constant attribute, i.e. an attribute that is fixed in the class and cannot be modified. To do this we need to implement the __get__ method to return the value, and the __set__ method to raise an error if one tries to modify it. To avoid possible modification, the actual value is stored inside the descriptor itself (via the self reference). To interact with the object that possesses the descriptor we can use the instance reference.
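
The descriptor code itself is in a missing notebook cell; a sketch consistent with the usage shown below is:

```python
class ConstantAttr(object):
    """Descriptor for a read-only class attribute."""
    def __init__(self, value, name):
        # the value lives inside the descriptor itself
        self.value = value
        self.name = name

    def __get__(self, instance, owner):
        return self.value

    def __set__(self, instance, value):
        raise AttributeError(
            "the attribute {} cannot be written".format(self.name))
```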

We can now create a class that uses this descriptor. We pass the name of the attribute to the __init__, otherwise the descriptor would have no information about the name under which the class registered it.

class A(object):
    c = ConstantAttr(10, 'c')

Using an instance of the class we can see that the value is printed correctly as 10, but if we try to modify it, we obtain an exception.

a = A()
print a.c  # 10
a.c = 5  # raises AttributeError: the attribute c cannot be written

Now we can create as many constant attributes as we need with almost no code duplication at all! That's a good start.

The reason I started playing around with descriptors was a little more complicated. I needed a set of attributes to run a validity test on the inserted value, raising an error if the test failed. You can perform this with properties, but you can't use a raise statement in a lambda, forcing you to write a lot of different setters, polluting the class source code and __dict__ with a lot of functions. To remove the pollution from the dict you can always delete the functions you used to create the property.

This could work, but you still have 7 or more lines to define something that is no more than a lambda with an error message attached.

So, here come the descriptors. To keep the pollution to a minimum, I store all the protected values in an internal dictionary called props.
What this code does is take a test function that checks whether a given value is acceptable, then set the value if it passes or raise the given error if it doesn't.
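
The original notebook code is missing from the post; a sketch of the idea (ValidatedAttr and Rectangle are illustrative names) could look like this:

```python
class ValidatedAttr(object):
    """Descriptor that validates values before storing them in instance.props."""
    def __init__(self, name, test, error):
        self.name, self.test, self.error = name, test, error

    def __get__(self, instance, owner):
        return instance.props[self.name]

    def __set__(self, instance, value):
        if not self.test(value):
            raise ValueError(self.error)
        instance.props[self.name] = value


class Rectangle(object):
    width = ValidatedAttr('width', lambda v: v > 0, "width must be positive")
    height = ValidatedAttr('height', lambda v: v > 0, "height must be positive")

    def __init__(self, width, height):
        self.props = {}   # internal storage used by the descriptors
        self.width = width
        self.height = height

    @property
    def area(self):
        # derived, read-only attribute: no setter is defined
        return self.width * self.height
```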

Annnnnd... that's it. With this descriptor code we imposed the condition that both the width and the height should be greater than zero, and obtained an attribute area which returns the value without giving the possibility of setting it, in only 7 lines of code. Talk about synthesis!

To end with something more difficult, let's try to describe a Triangle, whose validity condition also uses the values of the other sides. This is not a 100% safe version and it's not performance-tuned, but I guess it is simple enough to be used:
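
The original code is not in the post; a possible sketch, where each side is checked against the triangle inequality using the current values of the other two sides, is:

```python
class SideAttr(object):
    """Descriptor for a triangle side, validated against the other sides."""
    def __init__(self, name, others):
        self.name, self.others = name, others

    def __get__(self, instance, owner):
        return instance.props[self.name]

    def __set__(self, instance, value):
        if value <= 0:
            raise ValueError("sides must be positive")
        a, b = (instance.props.get(o) for o in self.others)
        # only check the inequality once the other two sides are set
        if a is not None and b is not None and value >= a + b:
            raise ValueError("the triangle inequality is violated")
        instance.props[self.name] = value


class Triangle(object):
    a = SideAttr('a', ('b', 'c'))
    b = SideAttr('b', ('a', 'c'))
    c = SideAttr('c', ('a', 'b'))

    def __init__(self, a, b, c):
        self.props = {}
        self.a, self.b, self.c = a, b, c
```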

EDIT: I forgot two interesting details for the implementation of the descriptors. The first one addresses the issue of accessing the descriptor from the class rather than from an instance. I would expect to obtain a reference to the descriptor instance, but I got the default value instead. What I should have done was check whether the instance was None (meaning access from the class) and return the descriptor itself:
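
A sketch of the fix, applied to the constant-attribute descriptor from before:

```python
class ConstantAttr(object):
    def __init__(self, value, name):
        self.value, self.name = value, name

    def __get__(self, instance, owner):
        if instance is None:
            # accessed from the class: return the descriptor itself
            return self
        return self.value

    def __set__(self, instance, value):
        raise AttributeError(
            "the attribute {} cannot be written".format(self.name))
```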

The second bit is about documentation. If I write the documentation on the descriptor class, I lose the opportunity to have a documentation string for each instance, which is one of the cool features of the property object. This can be done in a simple way... using a property ;)
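
A sketch of this trick: defining __doc__ as a property on the descriptor class gives every descriptor instance its own documentation (the instance-is-None check described above is reproduced here so the example is self-contained):

```python
class ConstantAttr(object):
    def __init__(self, value, name):
        self.value, self.name = value, name

    def __get__(self, instance, owner):
        if instance is None:
            return self  # access from the class returns the descriptor
        return self.value

    def __set__(self, instance, value):
        raise AttributeError(
            "the attribute {} cannot be written".format(self.name))

    @property
    def __doc__(self):
        # per-instance documentation, generated on the fly
        return "constant attribute {!r} with value {!r}".format(
            self.name, self.value)
```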

Friday, November 23, 2012

Matplotlib, as I said before, is quite an amazing graphics library, and can do some powerful heavy lifting in data visualization, as long as you spend some time understanding how it works. Usually it's quite intuitive, but one area where it is capable of giving huge headaches is the creation of custom colormaps.

This page (http://matplotlib.org/examples/api/colorbar_only.html) of the matplotlib manual gives some directions, but it's not really helpful. What we usually want is to create a new, smooth colormap with our colors of choice.
To do that the only solution is the matplotlib.colors.LinearSegmentedColormap class... which is quite a pain to use. Actually there is a very useful function that avoids this pain, but I will tell you the secret after we see the basic behavior.

The main idea of the LinearSegmentedColormap is that for each channel (red, green and blue) we divide the colormap into intervals and tell the colormap which two values to interpolate between in each interval. This is the code to create the simplest colormap, a grayscale:
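
The original code is not embedded in the page anymore; a sketch of the grayscale colormap described here would be:

```python
from matplotlib.colors import LinearSegmentedColormap

# for each channel, a list of (position, value before, value after);
# every channel ramps linearly from 0 at position 0.0 to 1 at 1.0
cdict = {'red':   [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)],
         'green': [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)],
         'blue':  [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]}
grayscale = LinearSegmentedColormap('grayscale', cdict, 256)
```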

The first argument is the name of the colormap, the last is the number of points of the interpolation, and the middle section is the painful one.
Each channel of the colormap is described by a sequence of triplets of numbers: the first one is the position in the colormap, going monotonically from 0 to 1. The second and third numbers represent the value of the channel just before and just after the selected position.
This basic example has two points for each channel, 0 and 1, and says that at those positions the channel is absent (0) or fully present (1).

To understand better, we can use a colormap whose red channel goes from 0 to 0.25 in the first half, then just after the midpoint jumps to 0.75 and goes up to 1 at the end of the colormap.
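
A sketch of this two-segment red channel (an assumed reconstruction; green and blue are kept at zero for clarity):

```python
from matplotlib.colors import LinearSegmentedColormap

cdict = {'red':   [(0.0, 0.0, 0.0),    # red starts at 0
                   (0.5, 0.25, 0.75),  # 0.25 just before the midpoint,
                                       # jumps to 0.75 just after it
                   (1.0, 1.0, 1.0)],   # and ends at 1
         'green': [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
         'blue':  [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]}
twostep = LinearSegmentedColormap('twostep', cdict, 256)
```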

Ok, this is really powerful, but it is clearly overkill in most cases! The matplotlib developers realized this, but for some reason didn't create a whole new class in the module, deciding instead to add a method to LinearSegmentedColormap, called from_list.
This is the magic cure that we need: to make a simple colormap that goes from red to black to blue, we just need this:

from matplotlib.colors import LinearSegmentedColormap as lscm

mycm = lscm.from_list('mycm', ['r', 'k', 'b'])

Of course you can mix named colors with tuples of RGB values, to your heart's content!

mycm = lscm.from_list('mycm',['pink','k',(0.5,0.5,0.95)])

Ok, now we have our wonderful colormap... but if there are nan values in our data, everything goes bad, and those values are rendered in white, out of our control. Don't worry: all we need is to set the color to use for the nan values (actually, for the masked ones) with the method set_bad. In this case we set it to green:
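
A sketch of the fix, reusing the from_list colormap from above; the masked entry comes out green:

```python
import numpy as np
from matplotlib.colors import LinearSegmentedColormap

mycm = LinearSegmentedColormap.from_list('mycm', ['r', 'k', 'b'])
mycm.set_bad('g')  # color used for masked (e.g. nan) values

# nan values are masked out and rendered with the "bad" color
values = np.ma.masked_invalid([0.0, np.nan, 1.0])
colors = mycm(values)
```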


About Me

I'm a physics post-doc at the University of Bologna, Italy. My research focuses on stochastic dynamics for biology and bioinformatics. Python is my everyday workhorse and the language that made me fall in love with programming.