The default setting for deconv uses the Poisson family and a natural cubic spline basis of degree 5 as \(Q\). The regularization parameter c0 is set to 1 for this example. The ignoreZero parameter indicates that this dataset contains zero counts, i.e., these zeros have not been truncated. (In the Shakespeare example below, the counts are of words seen in the canon, so there is a natural truncation at zero.)
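As a hedged sketch, a call with these settings might look as follows. The simulated sample and the support grid `tau` here are placeholders, not from the text, and the call assumes deconv's Poisson interface in which `X` is the vector of observed counts:

```r
library(deconvolveR)

# Assumed setup: a support grid for theta and a simulated Poisson sample.
# Neither the seed nor the grid endpoints come from the original text.
set.seed(238923)
tau <- seq(1, 32)
X <- rpois(n = 1000, lambda = rchisq(n = 1000, df = 10))

# Defaults written out explicitly: Poisson family, spline basis with
# pDegree = 5, regularization c0 = 1. ignoreZero = FALSE keeps the
# observed zero counts in the likelihood (no truncation at zero).
result <- deconv(tau = tau, X = X, family = "Poisson",
                 pDegree = 5, c0 = 1, ignoreZero = FALSE)
```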

The nlm routine used for optimization emits some warnings, but they are mostly inconsequential.

Since deconv works on one sample at a time, the result above is a list of lists from which various statistics can be extracted. Below, we construct a table of estimates for various values of \(\Theta\).
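One way to assemble such a table is sketched below. It assumes `results` is a list of deconv fits, one per sample, each carrying a `stats` matrix with columns named `theta` and `g`; those column names are taken from the package's conventions, not from the text:

```r
# Hypothetical: collect the g estimate from each fit, one column per sample.
g <- sapply(results, function(r) r$stats[, "g"])

# Summarize across samples at each support point theta.
gTable <- data.frame(theta = results[[1]]$stats[, "theta"],
                     gMean = rowMeans(g),
                     gSD   = apply(g, 1, sd))
```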

The quantity \(R(\alpha)\) in the paper (Efron, Biometrika 2015) can be extracted from the stats list; in this case, for a regularization parameter of c0 = 2, we can print its value:

print(result$S)

## [1] 0.005534954

The stats list contains other estimated quantities as well.

As noted in the paper citing this package, about 44 percent of the total mass of \(\hat{g}\) lies below \(\Theta = 1\), which is an underestimate. This can be corrected for by defining \[
\tilde{g}_j = c_1\hat{g}_j / (1 - e^{-\theta_j}),
\] where \(c_1\) is the constant that normalizes \(\tilde{g}\).
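A minimal sketch of this correction in R; the function name and its inputs are illustrative, not part of the package:

```r
# Hypothetical helper: given support points theta and the estimate
# ghat (the "g" column of stats), divide out the probability of a
# nonzero count and renormalize; the renormalization supplies c1.
thinCorrect <- function(theta, ghat) {
    tg <- ghat / (1 - exp(-theta))
    tg / sum(tg)
}
```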

When there is truncation at zero, as is the case here, the deconvolveR package now returns an additional column stats[, "tg"], which contains this correction for thinning. (The default invocation of deconv assumes zero truncation for the Poisson family; passing ignoreZero = FALSE, as in the earlier example, includes the zero counts.)