In particular, our feature-sign search algorithm (an L1-regularized least squares solver) is very fast and can be applied to many other machine learning problems; on benchmark data, feature-sign search outperforms existing algorithms such as LARS, basis pursuit, and grafting.
For more details, see our NIPS'06 paper.
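To make the problem concrete, the sketch below minimizes the L1-regularized least squares objective ||y - Xw||^2 + gamma*||w||_1 with plain cyclic coordinate descent. This is not the feature-sign search algorithm itself, just a simple baseline illustrating the same objective; the data and gamma below are toy values chosen for illustration.

```python
# Minimal coordinate-descent sketch for the L1-regularized least-squares
# problem  min_w ||y - X w||^2 + gamma * ||w||_1  that feature-sign
# search solves. NOT feature-sign search itself; toy data only.

def soft_threshold(z, t):
    """Shrink z toward zero by t (the prox operator of t * |w|)."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, gamma, n_iter=200):
    """Cyclic coordinate descent on ||y - Xw||^2 + gamma * ||w||_1."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(n_iter):
        for j in range(d):
            # Residual with coordinate j's contribution removed.
            r = [y[i] - sum(X[i][k] * w[k] for k in range(d) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            norm2 = sum(X[i][j] ** 2 for i in range(n))
            # Exact minimizer over the single coordinate w_j.
            w[j] = soft_threshold(rho, gamma / 2.0) / norm2
    return w

# Toy problem: y depends only on the first column, so the L1 penalty
# should drive the second coefficient to zero.
X = [[1.0, 0.1], [2.0, 0.2], [3.0, -0.1], [4.0, 0.0]]
y = [2.0, 4.0, 6.0, 8.0]
w = lasso_cd(X, y, gamma=0.5)
```

Feature-sign search takes a different route to the same minimizer: it maintains an active set of coefficients with guessed signs and solves a reduced unconstrained quadratic problem over that set at each step, which is where its speed advantage comes from.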

We also apply this efficient sparse coding algorithm to a new machine learning framework called "self-taught learning", in which we are given a small amount of labeled data for a supervised learning task, together with a large amount of additional unlabeled data that neither shares the labels of the supervised problem nor arises from the same distribution. For more details, see our ICML'07 paper.
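The pipeline above can be sketched in two stages: learn a feature representation from the plentiful unlabeled data, then reuse it for the small labeled task. For brevity, this sketch substitutes a single principal direction found by power iteration for a learned sparse-coding basis; all data and names below are hypothetical toys, not the paper's method or experiments.

```python
# Hedged sketch of the self-taught learning pipeline: features are
# learned from unlabeled data drawn from a different distribution,
# then applied to a small labeled task. Uses a power-iteration
# principal direction as a stand-in for a sparse-coding basis.

def power_iteration_direction(data, n_iter=100):
    """Leading eigenvector of the (uncentered) covariance of `data`."""
    d = len(data[0])
    v = [1.0] * d
    for _ in range(n_iter):
        # u = (X^T X) v, computed as sum_i x_i * (x_i . v)
        u = [0.0] * d
        for x in data:
            dot = sum(x[k] * v[k] for k in range(d))
            for k in range(d):
                u[k] += x[k] * dot
        norm = sum(c * c for c in u) ** 0.5
        v = [c / norm for c in u]
    return v

def encode(x, basis):
    """One-dimensional feature: projection onto the learned direction."""
    return sum(a * b for a, b in zip(x, basis))

# Plentiful unlabeled data (no labels; a wider distribution than the task).
unlabeled = [[3.0, 1.0], [6.0, 2.1], [-3.0, -0.9], [9.0, 3.2], [-6.0, -2.0]]
basis = power_iteration_direction(unlabeled)

# A small labeled set, represented in the learned feature space.
labeled = [([2.0, 0.7], +1), ([-2.0, -0.6], -1)]
features = [(encode(x, basis), y) for x, y in labeled]
```

In the actual framework, the basis would be a sparse-coding dictionary learned from the unlabeled data, and each labeled example would be represented by its sparse code with respect to that dictionary before training the supervised classifier.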