Step 6. Analysis

Counts per se, rather than a summary (RPKM, FPKM, …), are
relevant for analysis.

For a given gene, larger counts imply more information; RPKM and
similar summaries treat all estimates as equally informative.

Comparison is across samples at each region of interest; all
samples have the same region of interest, so modulo library size
there is no need to correct for, e.g., gene length or mappability.

Normalization

Libraries differ in size (total counted reads per sample) for
uninteresting reasons; we need to account for differences in
library size in statistical analysis.

Total number of counted reads per sample is not a good estimate of
library size. It is unnecessarily influenced by regions with large
counts, and can introduce bias and correlation across
genes. Instead, use a robust measure of library size that takes
account of skew in the distribution of counts (simplest: a trimmed
geometric mean; more advanced and more appropriate methods are
encountered in the lab).
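
A minimal Python sketch of the trimmed geometric mean idea; the
function name, trim fraction, and toy data are illustrative only,
and established tools use more refined estimators (e.g., edgeR's
TMM or DESeq's median-of-ratios):

    import numpy as np

    def trimmed_geomean_size(counts, trim=0.2):
        """Robust relative library size per sample: the trimmed
        geometric mean of each sample's nonzero counts. Only relative
        sizes matter, so the result is rescaled to center at 1.
        counts: regions x samples matrix of raw read counts."""
        sizes = np.empty(counts.shape[1])
        for j in range(counts.shape[1]):
            logc = np.log(counts[counts[:, j] > 0, j])   # log nonzero counts
            lo, hi = np.quantile(logc, [trim, 1 - trim])
            sizes[j] = np.exp(logc[(logc >= lo) & (logc <= hi)].mean())
        return sizes / np.exp(np.mean(np.log(sizes)))    # geometric mean 1

    # toy data: sample 2 sequenced at twice the depth of sample 1
    rng = np.random.default_rng(0)
    mu = rng.gamma(2.0, 50.0, size=1000)                 # region-level means
    counts = np.column_stack([rng.poisson(mu), rng.poisson(2 * mu)])
    print(trimmed_geomean_size(counts))                  # ratio close to 2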

Library size differs between samples, and should be included as a
statistical offset in the analysis of differential expression,
rather than 'dividing by' the library size early in an analysis.
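
A sketch of 'offset, not divide', assuming a negative binomial GLM
fit with the statsmodels Python package; the simulated library
sizes, design, and dispersion are illustrative:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    lib_size = np.array([1.0e6, 2.0e6, 1.5e6, 0.8e6, 2.2e6, 1.2e6])
    group = np.array([0, 0, 0, 1, 1, 1])            # two-group design
    mu = 50 * (lib_size / 1e6) * np.where(group, 2.0, 1.0)  # 2-fold change
    y = rng.negative_binomial(n=10, p=10 / (10 + mu))       # NB counts

    X = sm.add_constant(group.astype(float))
    # log(library size) enters as an offset (a term with coefficient
    # fixed at 1) rather than dividing the counts before modeling
    fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.1),
                 offset=np.log(lib_size)).fit()
    print(fit.params[1])    # estimated log fold change, near log(2) ~ 0.69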

Appropriate error model

Count data is not distributed normally or as a Poisson process,
but rather follows a negative binomial distribution.

This arises as a combination of Poisson ('shot') noise, i.e.,
within-sample technical and sampling variation in read counts,
with variation between biological samples.
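
A small simulation illustrates the mixture: Poisson counts whose
underlying rate varies between samples (here Gamma-distributed)
show the negative binomial variance mu + phi * mu^2, not the
Poisson variance mu (parameter values are arbitrary):

    import numpy as np

    rng = np.random.default_rng(2)
    mu, phi = 100.0, 0.2              # mean count; squared biological CV

    # biological variation: the underlying rate differs between samples
    rate = rng.gamma(shape=1 / phi, scale=mu * phi, size=200_000)
    counts = rng.poisson(rate)        # shot noise on top of that variation

    print(counts.mean())              # ~ mu = 100
    print(counts.var())               # ~ mu + phi * mu**2 = 2100, not 100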

A negative binomial model requires estimation of an additional
parameter ('dispersion'), which is estimated poorly in small
samples.

The basic strategy is to moderate per-gene estimates with more robust
local estimates derived from genes with similar expression values (a
little more on borrowing information is provided below).
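
A caricature of such moderation, assuming a fixed weight between the
per-gene estimate and a local trend (edgeR and DESeq2 derive the
weighting in a principled, empirical Bayes fashion):

    import numpy as np

    def moderate_dispersion(mean_expr, disp, weight=0.7, window=100):
        """Shrink per-gene dispersions toward a local trend: the median
        dispersion among the `window` genes nearest in mean expression.
        `weight` sets how far to trust the trend over the noisy
        per-gene estimate; both values here are arbitrary."""
        order = np.argsort(mean_expr)
        trend = np.empty_like(disp)
        for rank, g in enumerate(order):
            lo = max(0, rank - window // 2)
            nbrs = order[lo:lo + window]          # similarly expressed genes
            trend[g] = np.median(disp[nbrs])      # robust local estimate
        return weight * trend + (1 - weight) * disp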

Pre-filtering

Naively, a statistical test (e.g., a t-test) could be applied to each
row of a counts table. However, we have relatively few samples
(10's) and very many comparisons (10,000's), so a naive approach is
likely to be very underpowered, resulting in a very high false
discovery rate.

A simple approach is to perform fewer tests by removing regions that
could not possibly result in statistical significance, regardless of
the hypothesis under consideration.

Example: a region with 0 counts in all samples could not possibly be
significant regardless of hypothesis, so exclude it from further
analysis.

Basic approaches: a 'K over A'-style filter requires a minimum of A
(normalized) read counts in at least K samples; a variance filter,
e.g., based on the IQR (inter-quartile range), provides a robust
estimate of variability and can be used to rank regions and discard
the least-varying.
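
Minimal sketches of both filters in Python; the thresholds A and K
and the fraction of regions kept are arbitrary illustrative choices:

    import numpy as np

    def k_over_a(counts, size_factors, A=10, K=3):
        """Keep regions with at least A normalized counts in >= K samples."""
        norm = counts / size_factors              # library-size normalization
        return (norm >= A).sum(axis=1) >= K       # boolean keep-mask

    def iqr_filter(counts, keep_fraction=0.5):
        """Keep the most variable regions, ranked by the inter-quartile
        range of log counts (a robust measure of spread)."""
        logc = np.log1p(counts)
        iqr = np.subtract(*np.percentile(logc, [75, 25], axis=1))
        return iqr >= np.quantile(iqr, 1 - keep_fraction)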

One way of developing intuition is to recognize a t-test (for
example) as, in essence, a ratio of variances: the numerator is
treatment-specific, while the denominator is a measure of overall
variability.
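
For concreteness, the two-sample t-statistic can be written

    t = \frac{\bar{x}_1 - \bar{x}_2}{s_p \sqrt{1/n_1 + 1/n_2}},
    \qquad
    s_p^2 = \frac{(n_1 - 1) s_1^2 + (n_2 - 1) s_2^2}{n_1 + n_2 - 2},

so that t^2 compares treatment-related (between-group) variation in
the numerator against the pooled within-group variance s_p^2 in the
denominator.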

Variances are measured with uncertainty; over- or under-estimating
the denominator variance has an asymmetric effect on a t-statistic
or similar ratio, with an underestimate inflating the statistic
more dramatically than an overestimate deflates it. Hence the
elevated false discovery rate.
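
Numerically, the statistic scales as the inverse square root of the
estimated denominator variance, so underestimating the variance by
half inflates t by about 41%, while overestimating it by a factor of
two deflates t by only about 29%:

    import numpy as np

    t_true = 2.0                     # statistic at the true variance
    for scale in (0.5, 1.0, 2.0):    # under-, correctly, over-estimated
        t = t_true / np.sqrt(scale)  # t scales as 1 / sqrt(variance)
        print(f"variance x{scale}: t = {t:.2f} ({t / t_true - 1:+.0%})")
    # variance x0.5: t = 2.83 (+41%)  underestimate inflates strongly
    # variance x1.0: t = 2.00 (+0%)
    # variance x2.0: t = 1.41 (-29%)  overestimate deflates more mildly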

Under the typical null hypothesis used in microarray or RNA-seq
experiments, each gene may respond differently to the treatment
(numerator variance), but the overall variability of a gene is
the same, at least for genes with similar average expression.

The strategy is to estimate the denominator variance as the
within-group variance for the gene, moderated by the average
within-group variance across all genes.
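
A caricature of the moderated statistic, assuming a fixed prior
weight on the pooled variance (limma instead estimates the weight
from the data by empirical Bayes):

    import numpy as np

    def moderated_t(x1, x2, pooled_var, prior_weight=0.5):
        """Two-sample t-statistic with the per-gene denominator variance
        shrunk toward a variance pooled across genes (`pooled_var`).
        The fixed `prior_weight` is illustrative only.
        x1, x2: genes x samples arrays for the two groups."""
        n1, n2 = x1.shape[1], x2.shape[1]
        s2 = ((n1 - 1) * x1.var(axis=1, ddof=1) +
              (n2 - 1) * x2.var(axis=1, ddof=1)) / (n1 + n2 - 2)
        s2_mod = prior_weight * pooled_var + (1 - prior_weight) * s2
        se = np.sqrt(s2_mod * (1 / n1 + 1 / n2))
        return (x1.mean(axis=1) - x2.mean(axis=1)) / se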

This strategy exploits the fact that the same experimental design
has been applied to all genes assayed, and is effective at
moderating the false discovery rate.