Motivated by the observation that a given signal $\boldsymbol{x}$ admits
sparse representations in multiple dictionaries $\boldsymbol{\Psi}_d$, but with
varying levels of sparsity across dictionaries, we propose two new algorithms
for the reconstruction of (approximately) sparse signals from noisy linear
measurements. Our first algorithm, Co-L1, extends the well-known lasso
algorithm from the $\ell_1$ regularizer $\|\boldsymbol{\Psi}\boldsymbol{x}\|_1$ to composite
regularizers of the form $\sum_d \lambda_d \|\boldsymbol{\Psi}_d
\boldsymbol{x}\|_1$ while self-adjusting the regularization weights
$\lambda_d$. Our second algorithm, Co-IRW-L1, extends the well-known
iteratively reweighted $\ell_1$ (IRW-L1) algorithm to the same family of composite
regularizers. We provide four interpretations of both algorithms: i)
majorization-minimization (MM) applied to a non-convex log-sum-type penalty,
ii) MM applied to an approximate $\ell_0$-type penalty, iii) MM applied to
fully Bayesian inference under a particular hierarchical prior, and iv)
variational expectation-maximization (VEM) under a particular prior with
deterministic unknown parameters. A detailed numerical study suggests that our
proposed algorithms yield significantly improved recovery SNR relative to
their non-composite $\ell_1$ and IRW-L1 counterparts.
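To make the self-adjusting weights concrete, the following is a minimal Co-L1-style
sketch (ours, not the authors' reference implementation), written in Python with
CVXPY for the inner weighted solve. It instantiates interpretation i): each pass
minimizes the majorizer $\tfrac{1}{2}\|\boldsymbol{y}-\boldsymbol{A}\boldsymbol{x}\|_2^2
+ \nu \sum_d \lambda_d \|\boldsymbol{\Psi}_d \boldsymbol{x}\|_1$ and then updates
$\lambda_d = N_d / (\varepsilon + \|\boldsymbol{\Psi}_d \boldsymbol{x}\|_1)$, the
MM rule for a log-sum penalty $\sum_d N_d \log(\varepsilon +
\|\boldsymbol{\Psi}_d \boldsymbol{x}\|_1)$ with $N_d$ the row count of
$\boldsymbol{\Psi}_d$. The trade-off parameter $\nu$, the smoothing constant
$\varepsilon$, and all names below are illustrative assumptions rather than
details fixed by this abstract.

\begin{verbatim}
import numpy as np
import cvxpy as cp

def co_l1(y, A, Psis, nu=1.0, eps=1e-6, iters=10):
    """Alternate a weighted composite-L1 solve with an MM weight update."""
    lams = np.ones(len(Psis))          # initial weights lambda_d
    for _ in range(iters):
        x = cp.Variable(A.shape[1])
        penalty = sum(l * cp.norm1(P @ x) for l, P in zip(lams, Psis))
        cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - A @ x)
                               + nu * penalty)).solve()
        x_hat = x.value
        # Dictionaries in which x_hat is sparser receive larger weights,
        # so the sparsest representations dominate the composite penalty.
        lams = np.array([P.shape[0] / (eps + np.linalg.norm(P @ x_hat, 1))
                         for P in Psis])
    return x_hat, lams
\end{verbatim}

A Co-IRW-L1-style variant would, in the same spirit, apply IRW-L1's
per-coefficient reweighting within each product $\boldsymbol{\Psi}_d
\boldsymbol{x}$ rather than a single weight per dictionary.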