If the model is correctly specified and the Gauss–Markov assumptions hold (errors with zero mean, constant variance, and no correlation), the OLS estimator $\hat\beta_{OLS}=(X'X)^{-1}X'y$ is the minimum-variance (or best) linear unbiased estimator (MVLUE or BLUE) of $\beta$.

When it comes to minimizing the mean squared error (MSE), ridge regression and lasso can provide estimators $\hat\beta_{ridge}(\lambda_{ridge})$ and $\hat\beta_{lasso}(\lambda_{lasso})$, respectively, with smaller MSEs than that of OLS for some intervals of the penalty intensities $\lambda_{ridge}$ and $\lambda_{lasso}$; for ridge, the existence of such a $\lambda_{ridge}>0$ is the classical Hoerl–Kennard result.
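To fix notation, by MSE I mean the following standard decomposition (written here for a generic estimator $\hat\beta$, with the usual closed form for ridge):

$$\hat\beta_{ridge}(\lambda) = (X'X + \lambda I)^{-1} X' y,$$
$$\mathrm{MSE}(\hat\beta) = \mathbb{E}\,\lVert\hat\beta - \beta\rVert^2 = \operatorname{tr}\operatorname{Var}(\hat\beta) + \lVert\operatorname{Bias}(\hat\beta)\rVert^2.$$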

As far as I understand, these results concern the MSE of the entire parameter vector $\beta$, i.e. $\mathbb{E}\,\lVert\hat\beta-\beta\rVert^2=\sum_{k=1}^{K}\mathbb{E}(\hat\beta_k-\beta_k)^2$, rather than the MSE of each individual parameter $\beta_k$ for $k=1,\dotsc,K$.

Question: Can something concrete and useful be said about the individual parameter estimates $\hat\beta_{k,OLS}$ versus $\hat\beta_{k,ridge}$ and $\hat\beta_{k,lasso}$ in terms of MSE?
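To make the question concrete, here is a small simulation sketch of what I mean by per-coordinate MSE. It compares OLS and ridge coordinate by coordinate under an assumed toy setup (fixed design, Gaussian noise, my own arbitrary choices of $\beta$, $n$, and $\lambda$); lasso is omitted here since it has no closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, lam, reps = 50, 5, 5.0, 2000
beta = np.array([2.0, 1.0, 0.5, 0.0, 0.0])  # arbitrary true coefficients

X = rng.standard_normal((n, K))  # fixed design across replications
XtX = X.T @ X

mse_ols = np.zeros(K)
mse_ridge = np.zeros(K)
for _ in range(reps):
    y = X @ beta + rng.standard_normal(n)
    # OLS: (X'X)^{-1} X'y ; ridge: (X'X + lam*I)^{-1} X'y
    b_ols = np.linalg.solve(XtX, X.T @ y)
    b_ridge = np.linalg.solve(XtX + lam * np.eye(K), X.T @ y)
    mse_ols += (b_ols - beta) ** 2
    mse_ridge += (b_ridge - beta) ** 2

# Monte Carlo estimate of E[(beta_hat_k - beta_k)^2] for each k
mse_ols /= reps
mse_ridge /= reps
print("per-coordinate MSE, OLS:  ", mse_ols)
print("per-coordinate MSE, ridge:", mse_ridge)
```

In runs like this, ridge tends to help for the coordinates whose true values are near zero and can hurt for the large ones, which is exactly the kind of per-coordinate statement I am asking about: can it be made precise?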