
This tutorial will guide you through a typical PyMC application. Familiarity
with Python is assumed, so if you are new to Python, books such as [Lutz2007]
or [Langtangen2009] are the place to start. Plenty of online documentation
can also be found on the Python documentation page.

Consider the following dataset, which is a time series of recorded coal mining
disasters in the UK from 1851 to 1962 [Jarrett1979].

Recorded coal mining disasters in the UK.

Occurrences of disasters in the time series are thought to be derived from a
Poisson process with a large rate parameter in the early part of the time
series, and from one with a smaller rate in the later part. We are interested
in locating the change point in the series, which is perhaps related to changes
in mining safety regulations.
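Formally, the model referenced below as (1) can be written as follows (a
reconstruction based on the parameter definitions that follow):

\[\begin{split}\begin{aligned}
(D_t \mid s, e, l) &\sim \text{Poisson}(r_t), \quad r_t = \begin{cases} e & \text{if } t < s \\ l & \text{if } t \ge s \end{cases}, \quad t \in [t_l, t_h] \\
s &\sim \text{DiscreteUniform}(t_l, t_h) \\
e &\sim \text{Exponential}(r_e) \\
l &\sim \text{Exponential}(r_l)
\end{aligned}\end{split} \tag{1}\]

The parameters are defined as follows:

\(D_t\): The number of disasters in year \(t\).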

\(r_t\): The rate parameter of the Poisson distribution of disasters in year \(t\).

\(s\): The year in which the rate parameter changes (the switchpoint).

\(e\): The rate parameter before the switchpoint \(s\).

\(l\): The rate parameter after the switchpoint \(s\).

\(t_l\), \(t_h\): The lower and upper boundaries of year \(t\).

\(r_e\), \(r_l\): The rate parameters of the priors of the early
and late rates, respectively.

Because we have defined \(D\) by its dependence on \(s\), \(e\) and
\(l\), the latter three are known as the “parents” of \(D\) and
\(D\) is called their “child”. Similarly, the parents of \(s\) are
\(t_l\) and \(t_h\), and \(s\) is the child of \(t_l\) and
\(t_h\).

At the model-specification stage (before the data are observed), \(D\),
\(s\), \(e\), \(r\) and \(l\) are all random variables.
Bayesian “random” variables have not necessarily arisen from a physical random
process. The Bayesian interpretation of probability is epistemic, meaning
random variable \(x\)’s probability distribution \(p(x)\) represents
our knowledge and uncertainty about \(x\)’s value [Jaynes2003]. Candidate
values of \(x\) for which \(p(x)\) is high are relatively more
probable, given what we know. Random variables are represented in PyMC by the
classes Stochastic and Deterministic.

The only Deterministic in the model is \(r\). If we knew the values of
\(r\)’s parents (\(s\), \(l\) and \(e\)), we could compute the
value of \(r\) exactly. A Deterministic like \(r\) is defined by a
mathematical function that returns its value given values for its parents.
Deterministic variables are sometimes called the systemic part of the
model. The nomenclature is a bit confusing, because these objects usually
represent random variables; since the parents of \(r\) are random,
\(r\) is random also. A more descriptive (though more awkward) name for
this class would be DeterminedByValuesOfParents.

On the other hand, even if the values of the parents of variables
switchpoint, disasters (before observing the data), early_mean or
late_mean were known, we would still be uncertain of their values. These
variables are characterized by probability distributions that express how
plausible their candidate values are, given values for their parents. The
Stochastic class represents these variables. A more descriptive name for
these objects might be RandomEvenGivenValuesOfParents.

We can represent model (1) in a file called
disaster_model.py (the actual file can be found in pymc/examples/) as
follows. First, we import the PyMC and NumPy namespaces:
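A sketch of the import block (the names imported are the ones this model
needs; the example file also defines disasters_array, the NumPy array of
annual disaster counts, which is omitted here for brevity):

from pymc import DiscreteUniform, Exponential, deterministic, Poisson
import numpy as np

# disasters_array, the series of annual disaster counts, is defined in
# pymc/examples/disaster_model.py and is assumed available below.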

Notice that from pymc we have only imported a select few objects that are
needed for this particular model, whereas the entire numpy namespace has
been imported, and conveniently given a shorter name. Objects from NumPy are
subsequently accessed by prefixing np. to the name. Either approach is
acceptable.

DiscreteUniform is a subclass of Stochastic that represents
uniformly-distributed discrete variables. Use of this distribution suggests
that we have no preference a priori regarding the location of the
switchpoint; all values are equally likely. Now we create the
exponentially-distributed variables early_mean and late_mean for the
early and late Poisson rates, respectively:
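A sketch of these declarations, including the switchpoint variable discussed
above (the bounds and beta values mirror the full model listing later in this
section, which names the switchpoint variable switch; here we use
switchpoint to match the surrounding text):

switchpoint = DiscreteUniform('switchpoint', lower=0, upper=110)

early_mean = Exponential('early_mean', beta=1)
late_mean = Exponential('late_mean', beta=1)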

Next, we define the variable rate, which selects the early rate
early_mean for times before switchpoint and the late rate late_mean
for times after switchpoint. We create rate using the deterministic
decorator, which converts the ordinary Python function rate into a
Deterministic object:
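A sketch, mirroring the rate function in the full model listing later in
this section (disasters_array is the data array from the example file):

@deterministic(plot=False)
def rate(s=switchpoint, e=early_mean, l=late_mean):
    """Concatenate the Poisson means."""
    out = np.empty(len(disasters_array))
    out[:s] = e  # early rate prior to the switchpoint
    out[s:] = l  # late rate following the switchpoint
    return out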

The last step is to define the number of disasters disasters. This is a
stochastic variable but unlike switchpoint, early_mean and
late_mean we have observed its value. To express this, we set the argument
observed to True (it is set to False by default). This tells PyMC
that this object’s value should not be changed:
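A sketch (again, disasters_array is the data array from the example file):

disasters = Poisson('disasters', mu=rate, value=disasters_array, observed=True)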

3.2.1. Why are data and unknown variables represented by the same object?

Since it is represented by a Stochastic object, disasters is defined by its
dependence on its parent rate even though its value is fixed. This isn’t
just a quirk of PyMC’s syntax; Bayesian hierarchical notation itself makes no
distinction between random variables and data. The reason is simple: to use
Bayes’ theorem to compute the posterior \(p(e,s,l \mid D)\) of model
(1), we require the likelihood \(p(D \mid e,s,l)\). Even
though disasters’s value is known and fixed, we need to formally assign it a
probability distribution as if it were a random variable. Remember, the
likelihood and the probability function are essentially the same, except that
the former is regarded as a function of the parameters and the latter as a
function of the data.

This point can be counterintuitive at first, as many people’s instinct is to
regard data as fixed a priori and unknown variables as dependent on the data.
One way to understand this is to think of statistical models like
(1) as predictive models for data, or as models of the
processes that gave rise to data. Before observing the value of disasters, we
could have sampled from its prior predictive distribution \(p(D)\) (i.e.
the marginal distribution of the data) as follows:

1. Sample early_mean, switchpoint and late_mean from their priors.

2. Sample disasters conditional on these values.
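Each Stochastic has a random() method that draws a new value conditional on
the current values of its parents, so a rough sketch of these two steps
(assuming disasters had been created with observed=False, so that its value
could be re-drawn) is:

# Step 1: draw the unknowns from their priors.
switchpoint.random()
early_mean.random()
late_mean.random()

# Step 2: draw a synthetic data set given those values
# (this requires disasters to be unobserved).
disasters.random()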

Even after we observe the value of disasters, we need to use this process
model to make inferences about early_mean, switchpoint and
late_mean, because it’s the only information we have about how the variables
are related.

We have above created a PyMC probability model, which is simply a linked
collection of variables. To see the nature of the links, import or run
disaster_model.py and examine switchpoint’s parents attribute from
the Python prompt:
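For example (a sketch; the exact output may differ across versions):

>>> from pymc.examples import disaster_model
>>> disaster_model.switchpoint.parents
{'lower': 0, 'upper': 110}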

We are using rate as a distributional parameter of disasters (i.e. rate
is disasters’s parent). disasters internally labels rate as
mu, meaning rate plays the role of the rate parameter in disasters’s
Poisson distribution. Now examine rate’s children attribute:
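You should see something like the following (the memory address will differ):

>>> disaster_model.rate.children
set([<pymc.distributions.Poisson 'disasters' at 0x...>])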

Because disasters considers rate its parent, rate considers
disasters its child. Unlike parents, children is a set (an unordered
collection of objects); variables do not associate their children with any
particular distributional role. Try examining the parents and children
attributes of the other parameters in the model.

The following directed acyclic graph is a visualization of the parent-child
relationships in the model. Unobserved stochastic variables switchpoint,
early_mean and late_mean are open ellipses, observed stochastic
variable disasters is a filled ellipse and deterministic variable rate is
a triangle. Arrows point from parent to child and display the label that the
child assigns to the parent. See section Graphing models for more details.

Directed acyclic graph of the relationships in the coal mining disaster model example.

As the examples above have shown, pymc objects need to have a name assigned,
such as switchpoint, early_mean or late_mean. These names are used
for storage and post-processing:

as keys in on-disk databases,

as node labels in model graphs,

as axis labels in plots of traces,

as table labels in summary statistics.

A model instantiated with variables having identical names raises an error to
avoid name conflicts in the database storing the traces. In general however,
pymc uses references to the objects themselves, not their names, to identify
variables.
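Every PyMC variable has a value attribute that holds its current value; a
sketch of examining a few of them at the prompt (the draws shown here are
illustrative):

>>> disaster_model.switchpoint.value
array(44)
>>> disaster_model.early_mean.value
array(0.33464706)
>>> disaster_model.late_mean.value
array(2.64919368)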

Of course, since these are Stochastic elements, your values will be
different from these. If you check rate’s value, you’ll see an array whose
first switchpoint elements are early_mean (here 0.33464706), and whose
remaining elements are late_mean (here 2.64919368):
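For instance (abridged output):

>>> disaster_model.rate.value
array([ 0.33464706,  0.33464706,  0.33464706, ...,  2.64919368,
        2.64919368,  2.64919368])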

To compute its value, rate calls the function we used to create it, passing
in the values of its parents.

Stochastic objects can evaluate their probability mass or density functions
at their current values given the values of their parents. The logarithm of a
stochastic object’s probability mass or density can be accessed via the
logp attribute. For vector-valued variables like disasters, the
logp attribute returns the sum of the logarithms of the joint probability
or density of all elements of the value. Try examining switchpoint’s and
disasters’s log-probabilities and early_mean’s and late_mean’s
log-densities:
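A sketch (switchpoint’s log-probability is \(-\log 111\) under its
discrete-uniform prior; the other values follow from the illustrative draws
shown above, and yours will differ):

>>> disaster_model.switchpoint.logp
-4.709530201312334
>>> disaster_model.early_mean.logp
-0.33464706
>>> disaster_model.late_mean.logp
-2.64919368
>>> disaster_model.disasters.logp   # a single float: the sum over all elements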

Stochastic objects need to call an internal function to compute their
logp attributes, as rate needed to call an internal function to compute
its value. Just as we created rate by decorating a function that computes
its value, it’s possible to create custom Stochastic objects by decorating
functions that compute their log-probabilities or densities (see chapter
Building models). Users are thus not limited to the set of
statistical distributions provided by PyMC.
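For instance, a hand-rolled replacement for the switchpoint prior could be
sketched as follows (the decorated function returns the log-probability of
value given its parents):

from pymc import stochastic

@stochastic(dtype=int)
def switchpoint(value=50, t_l=0, t_h=110):
    """Discrete-uniform log-probability on [t_l, t_h]."""
    if value < t_l or value > t_h:
        return -np.inf
    return -np.log(t_h - t_l + 1)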

The default arguments switchpoint, early_mean and late_mean passed to
rate are Stochastic objects, not numbers. If that is so, why aren’t errors
raised when we attempt to slice the array out up to a Stochastic object?

Whenever a variable is used as a parent for a child variable, PyMC replaces it
with its value attribute when the child’s value or log-probability is
computed. When rate’s value is recomputed, switchpoint.value is passed to
the function as argument s. To see the values of the parents of
rate all together, look at rate.parents.value.
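This returns something like the following (illustrative values, matching the
draws shown earlier):

>>> disaster_model.rate.parents.value
{'s': 44, 'e': 0.33464706, 'l': 2.64919368}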

PyMC provides several objects that fit probability models (linked collections
of variables) like ours. The primary such object, MCMC, fits models with a
Markov chain Monte Carlo algorithm [Gamerman1997]. To create an MCMC
object to handle our model, import disaster_model.py and use it as an
argument for MCMC:
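A sketch (the iter, burn and thin values are illustrative choices,
discussed next):

>>> from pymc.examples import disaster_model
>>> from pymc import MCMC
>>> M = MCMC(disaster_model)
>>> M.sample(iter=10000, burn=1000, thin=10)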

Fitting a model means characterizing its posterior distribution somehow. In
this case, we are trying to represent the posterior \(p(s,e,l|D)\) by a set
of joint samples from it. To produce these samples, the MCMC sampler randomly
updates the values of switchpoint, early_mean and late_mean
according to the Metropolis-Hastings algorithm [Gelman2004] over a specified
number of iterations (iter).

As the number of samples grows sufficiently large, the MCMC distributions of
switchpoint, early_mean and late_mean converge to their joint
stationary distribution. In other words, their values can be considered as
random draws from the posterior \(p(s,e,l|D)\). PyMC assumes that the
burn parameter specifies a sufficiently large number of iterations for
the algorithm to converge, so it is up to the user to verify that this is the
case (see chapter Model checking and diagnostics). Consecutive values sampled from
switchpoint, early_mean and late_mean are always serially
dependent, because they form a Markov chain. MCMC often results in strong
autocorrelation among samples, which can make posterior inference imprecise.
To circumvent this, it is useful to thin the sample by retaining only every
\(k\)th sample, where \(k\) is an integer. This thinning interval is
passed to the sampler via the thin argument.

If you are not sure ahead of time what values to choose for the burn and
thin parameters, you may want to retain all the MCMC samples, that is to
set burn=0 and thin=1, and then discard the burn-in period and thin
the samples after examining the traces (the series of samples). See
[Gelman2004] for general guidance.

The output of the MCMC algorithm is a trace, the sequence of retained samples
for each variable in the model. These traces can be accessed using the
trace(name, chain=-1) method. For example:

>>> M.trace('switchpoint')[:]
array([41, 40, 40, ..., 43, 44, 44])

The trace slice [start:stop:step] works just like the NumPy array slice. By
default, the returned trace array contains the samples from the last call to
sample, that is, chain=-1, but the trace from previous sampling runs
can be retrieved by specifying the corresponding chain index. To return the
trace from all chains, simply use chain=None.
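For example, to retrieve the switchpoint samples from every sampling run at
once:

>>> M.trace('switchpoint', chain=None)[:]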

Histogram of the marginal posterior probability of parameter late_mean.

PyMC has its own plotting functionality, via the optional matplotlib module
as noted in the installation notes. The Matplot module includes a plot
function that takes the model (or a single parameter) as an argument:

>>> from pymc.Matplot import plot
>>> plot(M)

For each variable in the model, plot generates a composite figure, such as
this one for the switchpoint in the disasters model:

Temporal series, autocorrelation plot and histogram of the samples drawn for
switchpoint.

The upper left-hand pane of this figure shows the temporal series of the
samples from switchpoint, while below is an autocorrelation plot of the
samples. The right-hand pane shows a histogram of the trace. The trace is
useful for evaluating and diagnosing the algorithm’s performance (see
[Gelman1996]), while the histogram is useful for visualizing the posterior.

As with most textbook examples, the models we have examined so far assume that
the associated data are complete. That is, there are no missing values
corresponding to any observations in the dataset. However, many real-world
datasets have missing observations, usually due to some logistical problem
during the data collection process. The easiest way of dealing with
observations that contain missing values is simply to exclude them from the
analysis. However, this results in loss of information if an excluded
observation contains valid values for other quantities, and can bias results.
An alternative is to impute the missing values, based on information in the
rest of the model.

For example, consider a survey dataset for some wildlife species:

Count   Site   Observer   Temperature
-----   ----   --------   -----------
15      1      1          15
10      1      2          NA
6       1      1          11

Each row contains the number of individuals seen during the survey, along with
three covariates: the site on which the survey was conducted, the observer that
collected the data, and the temperature during the survey. If we are interested
in modelling, say, population size as a function of the count and the
associated covariates, it is difficult to accommodate the second observation
because the temperature is missing (perhaps the thermometer was broken that
day). Ignoring this observation will allow us to fit the model, but it wastes
information that is contained in the other covariates.

In a Bayesian modelling framework, missing data are accommodated simply by
treating them as unknown model parameters. Values for the missing data
\(\tilde{y}\) are estimated naturally, using the posterior predictive
distribution:

\[p(\tilde{y}|y) = \int p(\tilde{y}|\theta) f(\theta|y) d\theta\]

This describes additional data \(\tilde{y}\), which may either be
considered unobserved data or potential future observations. We can use the
posterior predictive distribution to model the likely values of missing data.

Consider the coal mining disasters data introduced previously. Assume that two
years of data are missing from the time series; we indicate this in the data
array by the use of an arbitrary placeholder value, None:
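For example (an abridged, illustrative stand-in for the full 1851–1962
series; the positions of the None values here are arbitrary):

disasters_array = np.array([4, 5, 4, 0, 1, 4, 3, 4, 0, 6,
                            3, 3, None, 0, 2, 6, 3, 3, None, 4])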

To estimate these values in PyMC, we generate a masked array. These are
specialised NumPy arrays that contain a matching True or False value for each
element to indicate if that value should be excluded from any computation.
Masked arrays can be generated using NumPy’s ma.masked_equal function:
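For example (assuming the array sketched above):

>>> masked_values = np.ma.masked_equal(disasters_array, value=None)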

This masked array, in turn, can then be passed to one of PyMC’s data stochastic
variables, which recognizes the masked array and replaces the missing values
with Stochastic variables of the desired type. For the coal mining disasters
problem, recall that disaster events were modeled as Poisson variates:
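A sketch, with rate as defined earlier and masked_values as built above:

disasters = Poisson('disasters', mu=rate, value=masked_values, observed=True)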

Here rate is an array of means for each year of data, allocated according
to the location of the switchpoint. Each element in disasters is a Poisson
Stochastic, irrespective of whether the observation was missing or not. The
difference is that actual observations are data Stochastics
(observed=True), while the missing values are non-data Stochastics. The
latter are considered unknown, rather than fixed, and are therefore estimated by
the MCMC algorithm, just as unknown model parameters are.

The entire model looks very similar to the original model:

# Switchpoint
switch = DiscreteUniform('switch', lower=0, upper=110)

# Early mean
early_mean = Exponential('early_mean', beta=1)

# Late mean
late_mean = Exponential('late_mean', beta=1)

@deterministic(plot=False)
def rate(s=switch, e=early_mean, l=late_mean):
    """Allocate appropriate mean to time series"""
    out = np.empty(len(disasters_array))
    # Early mean prior to switchpoint
    out[:s] = e
    # Late mean following switchpoint
    out[s:] = l
    return out

# The inefficient way, using the Impute function:
# D = Impute('D', Poisson, disasters_array, mu=r)
#
# The efficient way, using masked arrays:
# Generate masked array. Where the mask is true,
# the value is taken as missing.
masked_values = masked_array(disasters_array, mask=disasters_array == -999)

# Pass masked array to data stochastic, and it does the right thing
disasters = Poisson('disasters', mu=rate, value=masked_values, observed=True)

Here, we have used the masked_array function, rather than masked_equal,
and the value -999 as a placeholder for missing data. The result is the same.

Trace, autocorrelation plot and posterior distribution of the missing data
points in the example.

MCMC objects handle individual variables via step methods, which determine
how parameters are updated at each step of the MCMC algorithm. By default, step
methods are automatically assigned to variables by PyMC. To see which step
methods \(M\) is using, look at its step_method_dict attribute with
respect to each parameter:
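A sketch of what this looks like (the object addresses will differ; by
default PyMC assigns DiscreteMetropolis to the integer-valued switchpoint
and Metropolis to the continuous rates):

>>> M.step_method_dict[disaster_model.switchpoint]
[<pymc.StepMethods.DiscreteMetropolis object at 0x...>]
>>> M.step_method_dict[disaster_model.early_mean]
[<pymc.StepMethods.Metropolis object at 0x...>]
>>> M.step_method_dict[disaster_model.late_mean]
[<pymc.StepMethods.Metropolis object at 0x...>]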

The value of step_method_dict corresponding to a particular variable is a
list of the step methods \(M\) is using to handle that variable.

You can force \(M\) to use a particular step method by calling
M.use_step_method before telling it to sample. The following call will
cause \(M\) to handle late_mean with a standard Metropolis step
method, but with proposal standard deviation equal to \(2\):
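A sketch of that call (Metropolis is imported from the top-level pymc
namespace):

>>> from pymc import Metropolis
>>> M.use_step_method(Metropolis, disaster_model.late_mean, proposal_sd=2.)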

The unobserved variables switchpoint, early_mean, late_mean and rate
will all accrue samples, but disasters will not, because its value has been
observed and is not updated. Hence disasters has no trace and calling
M.trace('disasters')[:] will raise an error.