The incomplete Cholesky decomposition of a dense symmetric positive definite matrix A is a simple way of approximating A by a matrix of low rank (you choose the rank). It has been used frequently in machine learning (Fine & Scheinberg; Bach & Jordan). Here is an efficient implementation.
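For readers unfamiliar with the method, here is a minimal NumPy sketch of a pivoted incomplete Cholesky decomposition (the function name and signature are my own, not necessarily those of this repository's implementation). It builds A ≈ G Gᵀ one column at a time, always pivoting on the largest remaining diagonal element:

```python
import numpy as np

def pivoted_cholesky(A, rank, tol=1e-12):
    """Sketch of an incomplete (low-rank, pivoted) Cholesky of an SPD matrix A.

    Returns G of shape (n, k) with k <= rank such that A is approximately G @ G.T,
    plus the list of pivot indices chosen. Stops early if the residual diagonal
    falls below `tol`.
    """
    n = A.shape[0]
    d = np.diag(A).astype(float).copy()  # residual diagonal of A - G @ G.T
    G = np.zeros((n, rank))
    piv = []
    for j in range(rank):
        i = int(np.argmax(d))            # pivot: largest residual diagonal entry
        if d[i] <= tol:                  # remaining error is negligible
            return G[:, :j], piv
        piv.append(i)
        # New column: residual of column i of A, scaled by sqrt of the pivot.
        G[:, j] = (A[:, i] - G @ G[i, :]) / np.sqrt(d[i])
        d -= G[:, j] ** 2                # update residual diagonal
    return G, piv
```

After k steps the trace of A − G Gᵀ equals the sum of the remaining `d` entries, so the stopping tolerance directly bounds the approximation error on the diagonal.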

The kernels supported at the moment are the RBF (Gaussian) and squared-exponential kernels; I may add more as I need them.
Please consider contributing extensions for any new kernels you write yourself.
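As a rough sketch of what such a kernel function might look like (the function name and the `gamma` bandwidth parameter are my assumptions; the repository's actual interface may differ), here is a vectorized RBF / squared-exponential kernel:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Sketch of an RBF (squared-exponential) kernel matrix.

    K[i, j] = exp(-gamma * ||x_i - y_j||^2), computed without explicit loops
    via the expansion ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y.
    """
    sq = (np.sum(X ** 2, axis=1)[:, None]
          + np.sum(Y ** 2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    # Clip tiny negative values caused by floating-point cancellation.
    return np.exp(-gamma * np.maximum(sq, 0.0))
```

Calling `rbf_kernel(X, X)` yields the dense SPD kernel matrix that the incomplete Cholesky decomposition then approximates, without ever needing to factor it exactly.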