Absorption and disposition

Objectives

Define common models of absorption
using closed form solutions
and differential equations.

Find out how to compare the fit of models to the same
data.

Introduction

Pharmacokinetic models have three main components:

Input: Drug inputs are commonly described as:

Bolus

Zero order (Tk0)

First Order (Tabs or Ka)

A lag time (Tlag) may be used with any of these input models.

Distribution: Most
commonly defined in terms of the number of compartments. Distribution
is usually a first order process. The key parameters of distribution are the volumes of distribution
(V1, V2, Vss) for each compartment and inter-compartmental clearances (Clic1,..)
between the central compartment and peripheral compartments of a mammillary model.

Elimination: Drug elimination is characterised by models for clearance:

First-order (CL)

Mixed-order (Vmax, Km)

Absorption: definition and physiological determinants

Absorption is the name given to the process that determines how much of
an administered drug enters the body. Absorption may be local or
systemic in terms of the physiological process and the resulting
effects. Absorption kinetics generally describe the process of the drug
entering the central compartment (circulation). Absorption is
determined by multiple different processes, and is commonly described
by two factors: extent (F), which is time independent, and rate, which
is time dependent.

Disposition

The
term disposition comes from the verb 'to dispose', meaning to get rid of. In
pharmacology, disposition refers to what happens to the drug after it
enters the body, including distribution and elimination processes.
This encompasses clearance and accumulation (i.e. excluding
absorption).

Bioavailability

The term bioavailability (F) refers to the extent of absorption.
Bioavailability may be described by two processes: Fraction absorbed
(f) and First pass extraction ratio (ER). Fraction absorbed is
determined by blood flow, the physicochemical properties governing
diffusion, intraluminal metabolism, and transport processes including
secretion. ER is determined by organ blood flow and metabolism. The
extent of absorption is estimated as the area under the concentration
time curve (AUC), and this is useful for comparing the bioavailability
of generic drugs. AUC values can also be used to determine initial
values of parameters for first order processes of elimination
(Clearance = CL = dose/AUC), and absorption (Bioavailability = F = AUC
test/AUC reference). AUC estimations can involve error due to
extrapolation beyond the last data point, and so their accuracy is
often limited by inadequate data (1, p. 67).
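
As a rough illustration of these relationships, AUC can be estimated with the linear trapezoidal rule from observed concentration-time data and then used to derive CL (or CL/F for an oral dose) and F. The sketch below uses hypothetical concentrations, doses and a hypothetical reference AUC, not values from the text.

```python
# Sketch only: hypothetical concentration-time data after an oral dose.
import numpy as np

time = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])   # h
conc = np.array([0.0, 1.8, 2.4, 2.0, 1.2, 0.4, 0.15])   # mg/L

# Linear trapezoidal AUC up to the last observed point (extrapolation adds error)
auc_oral = np.sum(np.diff(time) * (conc[:-1] + conc[1:]) / 2)

dose = 100.0                      # mg (hypothetical)
cl_over_f = dose / auc_oral       # for oral data only CL/F is identifiable

auc_iv = 25.0                     # mg.h/L, hypothetical IV reference at the same dose
f = auc_oral / auc_iv             # bioavailability from AUC test / AUC reference

print(f"AUC(0-tlast) = {auc_oral:.1f} mg.h/L, CL/F = {cl_over_f:.1f} L/h, F = {f:.2f}")
```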

To convert an intravenous
dose to an equivalent oral dose it is necessary to divide by
bioavailability (F). For example, the bioavailability of oral morphine
varies from 15-30%. This equates to an IV:Oral dose ratio of 1:3 for
chronic opiate users, and 1:6 for the opiate naive.
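
As a rough worked example (the 10 mg dose is purely illustrative): equivalent oral dose = IV dose / F. For 10 mg of IV morphine, F = 1/3 gives an oral equivalent of about 30 mg (a 1:3 ratio), while F = 1/6 gives about 60 mg (a 1:6 ratio).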

Absorption rate depends on the route of administration, the type of
input, physiological processes (blood flow, gastric emptying),
physicochemical factors and drug formulation. There are 3 types of input
processes describing rate of absorption: bolus, zero-order, or first
order. Bolus input implies instantaneous absorption. Zero order input
means a constant rate of input, eg intravenous infusion. Repeated
intermittent oral dosing also achieves the same average steady state
concentrations as a constant rate infusion, and accumulates at the same
rate, but with higher peak concentrations and swings between peak and
trough concentrations. First order input absorption occurs at a rate
proportional to the amount of drug, and the rate changes with time,
typically decreasing over time (eg. intramuscular, intestinal
absorption).

Absorption rate is by definition time dependent, and the term Tk0 is
used to describe the duration of a zero-order input. Tmax (time to peak concentration) and Cmax
(peak concentration) are used to describe rate of absorption to allow
comparison between generic medicines. Half life of absorption predicts
the time to peak concentration, and will be considered further in the
next section.

The site of absorption of an oral
preparation is an important consideration. Few drugs are absorbed in
the stomach itself (with alcohol being an exception). Most orally
administered drugs are absorbed from the duodenum, so that gastric emptying
acts as a physiological control on the rate of absorption and may
effectively result in zero order infusion kinetics. Slow release
formulations represent a pharmacological control mechanism of rate of
absorption that may also result in zero order input. Delays in drug
absorption may be due to a variety of processes, including gastric
emptying and formulation, and can be coded in a model by including a
lag time (Tlag).

Different
considerations apply to non-intestinal sites of drug administration and
absorption. Drugs can be administered by almost any route imaginable,
even intrapleural and intraperitoneal. Systemic absorption is usually
the goal of rectal, buccal, subcutaneous and intramuscular
administration. Local absorption is usually the main objective of
intraarticular, spinal, epidural, and topical (conjunctival, cutaneous,
mucosal) administration, although some degree of systemic absorption may occur.
Systemic absorption can occur from specific transcutaneous formulations
(eg. nicotine and fentanyl patches). Rectal absorption processes can be
particularly complex to predict depending on how much enters the portal
circulation and the impact of first pass metabolism.

Half life

Half life is the time taken for a first order process to be 50%
complete. This process could be absorption, elimination or
accumulation. The natural logarithm of two (ln (2)) is used to
calculate half life, and may be approximated to 0.7. In a first order
process, an exponential function predicts time course. After one
half-life the process is 50% complete, after 2 half-lives 75%, 3
half-lives 87.5%, 4 half-lives 93.75%, 5 half-lives 96.875% and so on.
A first order process can be considered relatively complete after 4
half-lives.
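
Expressed as a formula, the fraction complete after n half-lives is 1 - (1/2)^n; for example, after 4 half-lives this is 1 - 0.0625 = 0.9375, i.e. 93.75%.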

Absorption half life is determined
by Ln2/Ka, where Ka is the proportionality constant that relates drug
amount and rate of absorption. Absorption half lives are typically
short eg. 30 minutes. Time to peak concentration (Tmax) is approximately 3 x
the absorption half life (where peak effect is the effect at peak
concentration).
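
As a worked example with an illustrative value: if Ka = 1.4 per hour, the absorption half life is ln2/Ka = 0.693/1.4, or about 0.5 h, and the approximation above predicts a Tmax of roughly 1.5 h.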

Elimination or accumulation half-lives have the same time value, but
describe different processes. In a simple one compartment model, half
life is determined by clearance and volume of distribution, T1/2=
ln2*Vd/CL. In this way half life may be thought of as a proportionality
constant describing volume of distribution (Vd) and clearance (CL).
Thus half life may not change despite changes in Vd and CL if they
change proportionally. It is often preferable to estimate clearance and
volume as primary parameters, and then derive half life as a secondary
parameter.
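
For example, with illustrative values of Vd = 70 L and CL = 7 L/h, T1/2 = ln2 * 70/7, or approximately 6.9 h; if both Vd and CL doubled, the half life would be unchanged, as noted above.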

Accumulation during constant infusion
or intermittent oral dosing is time dependent and determined by half
life. After one half life, a drug will accumulate to 50% of steady
state. Steady state will be reached after approximately 4 half-lives.
Accumulation rate will be the same whether a drug is given by repeated
bolus doses or constant infusion as long as the same total amount of
drug is administered over a given time period. The rate of accumulation
is the same, since rate = amount of drug/time, but there will be more
peaks and troughs for bolus doses.

Accumulation for repeated doses is determined by dosing interval (dosint) and half life.
This can be described by an equation for accumulation factor:
AF = 1/(1 - exp(-(CL/Vd) * Dosint))

If the dosing interval is equal to half life then the accumulation
factor is 2, which means that the concentration at steady state is
twice the concentration after the first dose. So accumulation factor is
the ratio of concentration at steady state to concentration after first
dose. This only applies at the same time after the dose, eg
concentration measured 10 minutes after 1st dose, and 10 minutes after
dose occurring at steady state (4 half lives).
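
A short sketch of the accumulation factor calculation (with hypothetical CL and Vd values) shows that dosing at an interval equal to one half life gives AF = 2:

```python
# Sketch only: hypothetical parameters illustrating AF = 1/(1 - exp(-(CL/Vd) * Dosint)).
import math

cl = 7.0            # L/h (hypothetical clearance)
vd = 70.0           # L (hypothetical volume of distribution)
k = cl / vd         # first order elimination rate constant (1/h)
half_life = math.log(2) / k

for dosint in (0.5 * half_life, half_life, 2 * half_life):
    af = 1.0 / (1.0 - math.exp(-k * dosint))
    print(f"dosing interval = {dosint:4.1f} h -> accumulation factor = {af:.2f}")

# Dosing every half life gives AF = 2: the steady state concentration is twice the
# concentration after the first dose (measured at the same time after dosing).
```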

Elimination half life refers to the time for plasma concentration to
reduce by 50% after the input stops. Elimination is usually a first
order process for most drugs, but may be mixed (eg. alcohol). Context
sensitive half time (CSHT) is a concept that relates duration of
infusion and elimination processes once the infusion stops. It is
influenced by the physicochemical properties of the drug and
distribution as well as clearance. It is often discussed with respect
to prolonged infusions of lipid soluble drugs such as fentanyl in
anaesthesia and intensive care practice, but will not be considered
further here.

Rate of input and output and differential equations

Models may be defined in different mathematical forms. Often the
simplest analytical solution involves the use of closed form equations.
Explicit equations express the dependent variable as a function of the
independent variable, constants and parameters of the model (1, p. 35).
Implicit equations include the dependent variable (eg. as initial drug
concentration), for example Michaelis-Menten type processes (1, p.
38).

Differential equations
are often required for iteration of more complex models. Differential
equations describe the rate of change of a variable with time. This is
commonly expressed as d/dt (X), which is the differential of X with
respect to time, where X is a dependent variable. Either amount or
concentration may be used, as long as this is consistent for the
equation or series of equations. If amount is used, concentration needs
to be specified as amount/volume. The initial or starting amount or
concentration of each variable needs to be specified for each
compartment considered.

The rate of change of a
variable is equal to the inputs minus the outputs. At steady state the rate
of input is equal to the rate of output. Rate of input is described by the
administration method and absorption kinetics: bolus, zero order, first
order, or a combination. Rate of output is described by elimination
models: first order or mixed order. Mixed-order processes incorporate
both a first-order component (at low concentrations) and a maximum rate (at high concentrations).
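
As an example of this input-output form, a minimal sketch (with hypothetical dose and parameter values) of the differential equations for a one compartment model with first order absorption and first order elimination might look like this:

```python
# Sketch only: hypothetical dose and parameters, solved numerically with scipy.
import numpy as np
from scipy.integrate import solve_ivp

dose, f = 100.0, 1.0            # mg oral dose and bioavailability (hypothetical)
ka, cl, v = 1.4, 7.0, 70.0      # 1/h, L/h, L (hypothetical parameters)

def rates(t, y):
    a_gut, a_central = y
    d_gut = -ka * a_gut                              # output from the absorption site
    d_central = ka * a_gut - (cl / v) * a_central    # input minus output for the central compartment
    return [d_gut, d_central]

# Initial amounts: the whole absorbable dose at the absorption site, nothing in the body
sol = solve_ivp(rates, (0.0, 24.0), [f * dose, 0.0], t_eval=np.linspace(0, 24, 97))
conc = sol.y[1] / v                                  # concentration = amount / volume
print(f"Cmax is about {conc.max():.2f} mg/L")
```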

As
a form of mathematical description differential equations can be
helpful for conceptualizing physiological processes and more complex
models e.g. effect compartments, combinations of different types of
input, and multiple compartment models. Inter-compartmental clearance
is generally a first order process. The structural model and number of
compartments are determined not only by the properties of the drug, but also,
from a modelling perspective, by the particular data.
Different structural models (single or multiple compartments) may be
required to give the best fit for the particular data set. More
complicated models are not necessarily more valid.

Differential equations are transformed by different methods to produce
integrated equations. Pharmacometric software enables progressive
calculation of integrated values from differential equations. This can
be time consuming for more complex models. Various different methods of
solving differential equations in a stepwise fashion exist, for example
Runge-Kutta. Modifications of this method allow improved accuracy (eg.
determination of step size) (1, p. 53).
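
For illustration, a fixed step classical Runge-Kutta (RK4) update for a single differential equation dy/dt = f(t, y) can be sketched as below; an adaptive method would additionally adjust the step size h, and the rate constant used here is hypothetical.

```python
import math

def rk4_step(f, t, y, h):
    # One classical fourth order Runge-Kutta step from t to t + h
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

k = 0.1                                  # hypothetical first order rate constant (1/h)
elim = lambda t, y: -k * y               # dy/dt for simple first order elimination
y, t, h = 100.0, 0.0, 0.5                # starting amount, time, step size
while t < 24.0:
    y = rk4_step(elim, t, y, h)
    t += h
print(f"numerical: {y:.2f}, analytic: {100.0 * math.exp(-k * t):.2f}")
```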

The use
of nested conditional expressions (if then else endif blocks) within
the model may allow the differential equations to be solved more
quickly. Examples in this assignment include expressions defining lag
time and duration of input.
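
A minimal sketch (with hypothetical dose, Tlag and Tk0 values) of how such conditional expressions can define an input rate with a lag time followed by a zero order input:

```python
def input_rate(t, dose=100.0, tlag=0.5, tk0=2.0):
    # Hypothetical values: no input before the lag time, a constant (zero order)
    # rate for the duration Tk0, and no input afterwards.
    if t < tlag:
        return 0.0
    elif t <= tlag + tk0:
        return dose / tk0
    else:
        return 0.0

for t in (0.25, 1.0, 3.0):
    print(f"t = {t} h, input rate = {input_rate(t)} mg/h")
```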

Evaluating a model: methods of comparing goodness of fit

How and why do we attempt to choose 'the best model' if all models are
wrong? The 'rightness' of a model is a relative not absolute concept.
What is the main objective of the model? For example parameter
estimation, covariate evaluation, or assessment of between subject
variability? The clinical relevance and applicability of the model is
important in evaluating its usefulness, its 'fit' for purpose.
Pharmacometrics by definition entails measurement and quantification,
but to some extent subjective evaluation is also important. Before
embarking on model building it is helpful to have clear objectives and
a plan for evaluation.

A useful model describes
a particular data set well, and best predicts and explains the
relationship of the variables in the system. The simplest methods of
fitting mathematical models to data involve drawing a line of best fit
through observed data points (e.g. 'eyeball method'). Pharmacometric
modelling involves adjusting parameter estimates to best fit the model
to the data (1, p. 11). During modelling, parameters are determined by
nonlinear regression. Parameters may be adjusted, added or subtracted.
Constants remain fixed (doses, infusion times). The stability of the
model can be assessed by rerunning it with different initial parameter
estimates. More complex models are not necessarily better, and may
involve more error. Evaluating a model is necessary to avoid
overparameterisation.

Most simply, model
evaluation involves comparison between observed and predicted data (eg.
concentration values), commonly represented in graphical form. This is
termed 'Goodness of Fit' (GOF). The characteristics of a model can be
varied to predict values that are more likely to reflect true
population values. Of course the true population values can never be
entirely known, but comparison with observed sample values provides a
reference. Simulation and bootstrapping are used to evaluate the
robustness and sensitivity of the model.

The
data set being modelled will determine the structure of the model,
covariate selection, and parameter estimates. For example, different
models for the same drug may vary in the number of compartments, the
covariates, and the parameter estimates. This is because different data
sets may vary in terms of the characteristics of the population
sampled, the time course observed by sampling, and errors involved. So
to compare and discriminate between models the same data set must be
used.

Evaluation of a model occurs throughout
the modelling process to enable selection of the model, and inclusion
of covariates. This involves the assessment of the accuracy, precision
and variability of the observed and predicted data and estimates. As
well as checking the fit to the data, evaluation includes assessment of
parameter estimates, the influence of covariates and the estimation of
error and variability. In evaluating parameter estimates we consider
estimates of their variability and error, comparison with literature
values and consideration of empirical models. Biological plausibility
is also an important consideration in evaluating covariate selection.

Evaluation may be the most difficult part of the modelling process.
There is no one optimal test of the validity of a model and a variety
of methods are generally used. In evaluating the fit of a model to the
data, we need measures to describe both how well it fits, and how and
where it does not fit. These two concepts of 'fit' may require
different methods of evaluation: 'lack of fit' as well as goodness of
fit. So as well as overall goodness of fit, model evaluation ideally
involves specific diagnosis of what the problems are with the model,
and where they exist. Different parts of the model need to be evaluated,
for example the fixed and random effects with NONMEM.

The output of pharmacometric software programs provides a large array
of information describing the fit of the data. Two main methods are
used to evaluate a model, and determine how well it fits the data set:
using graphical representations and statistical assessment.

Graphs for evaluating models

Scatter plots are commonly used to
evaluate goodness of fit and to detect differences between models. GOF
plots are probably more important tools of model evaluation than
statistical tests such as the objective function. A model with a lower
objective function is not necessarily better if there is no
difference in goodness of fit or parameter estimates. Graphical
assessment of models is useful to diagnose 'lack of fit' and
specifically where the problem is. Outliers can be identified from
visual inspection, and there are different approaches that may be used
to deal with these deviations from expected values.

Some standard diagnostic plots

PRED, DV vs Time: Graphs of observed and predicted values for the
dependent variable (eg concentration) versus time are a simple and
powerful tool. Consistent amplitude and shape of the data should be a
feature if the model is a good fit.

PRED vs DV: Graphs can be constructed of predicted vs. observed
dependent variable (eg. concentration). If the fit is good, data should
be clustered uniformly and closely around the line of unity. The time
course is lost in this graph.

Visual predictive check. Posterior or visual predictive checks (VPC)
compare predicted with observed data (comparing 95% prediction
interval). They are more complex to perform and involve simulation from
final parameter estimates, then comparison of the distribution of
observations over time within the simulated distribution (95% CI). VPCs
are a relatively new method of model evaluation but are arguably one of
the most robust forms of diagnostic assessment (2).

Statistical methods of model evaluation

A number of statistical
parameters are generated by each nonlinear regression program.
Statistical tests commonly used to evaluate and discriminate between
models include objective function values, standard error and
coefficient of variation. In general terms, the model with the smallest
numerical value of the test parameter (minimum value) represents 'the
best model' for the data, i.e. the best fit. The statistical
significance levels are set in advance. For example, a reduction in
objective function of around 4 points (3.84 for one additional parameter) is
commonly used (approximately equivalent to p < 0.05).
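
The 3.84 criterion comes from the chi-squared distribution with one degree of freedom, which can be checked directly (a quick sketch using scipy):

```python
from scipy.stats import chi2
# Critical drop in the objective function (-2 log likelihood) for one extra
# parameter at p < 0.05
print(chi2.ppf(0.95, df=1))   # approximately 3.84
```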

Wings for NONMEM output provides a Coefficient of Variation (CV%) for
each parameter, estimating its variability. A high CV (> 10-20%) or
wide confidence intervals may result from under- or
overparameterisation, or from problems with the selection, error and number
of data points (1, p. 96). If the estimates of standard error are very
small then they are of limited value due to the sensitivity of this
method to changes in the means.

Methods of model evaluation

Correlation coefficients provide limited information about goodness of
fit. Coefficients range from zero to one, with one indicating a perfect
correlation or fit. Good fits usually have high R values, but the
inverse is not always true (3, p. 322). Correlation is less useful for
nonlinear methods, where values > 0.9 may be calculated despite a
visually poor fit (1, p. 108). Another disadvantage of correlation is
that it does not provide any information about the absolute values.

Plots of residuals and weighted residuals can be helpful for choosing
error models and for evaluating models, but are probably of limited
value when presented in isolation without the other GOF plots discussed
above.

A residual is the difference between an observed and calculated value for the dependent variable (3, p. 345).

Residual = Yobs - Ycalc
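
As a simplified illustration (software such as NONMEM uses more elaborate weighting schemes; the values below are hypothetical), residuals and weighted residuals can be computed as:

```python
import numpy as np

y_obs = np.array([2.1, 1.6, 0.9, 0.45])      # observed concentrations (hypothetical)
y_pred = np.array([2.0, 1.5, 1.0, 0.50])     # model predicted concentrations (hypothetical)

res = y_obs - y_pred                          # residuals

# Simple weighting: divide by the expected SD of each observation; with a
# proportional error model the SD scales with the prediction.
sigma = 0.1                                   # hypothetical proportional error (10%)
wres = res / (sigma * y_pred)

print(res)
print(wres)
```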

RES vs PRED: This plot should include a zero line and a trend line. The
shape of the distribution of residuals around the zero line
suggests the error model: a uniform band suggests an additive error model,
fanning suggests a proportional error model. The shape may also suggest
whether a weighting scheme is needed to deal with overestimation of large
concentrations and underestimation of small concentrations.

WRES vs PRED: The distribution of weighted residuals should be uniform
around the residual = 0 line, and within 3 SD.

RES vs Time:
May reveal regions not well explained by model, for example where data such as sampling time is missing.

WRES vs TIME:
The plot of weighted residual vs. time should be randomly dispersed around the residual zero line.

WRES vs ID:
This plot can be useful to assess outliers and their impact on the model.

Subjective evaluation may be as important as objective assessment of a
model. Hence the 'rightness' of a model depends on the perspective of
the modeller, the population, and the purpose of the model. Other
important considerations are the biological plausibility and the
simplicity of the model (parsimony principle).

Study design to ensure sampling optimizes the number and timing of data
point collection may improve model fit and accuracy of parameter
estimation.

Where the results of goodness of fit
plots and statistical tests differ, more emphasis should be placed on
the graph of overlaying values of predicted and observed concentrations
versus time.

Subjective assessment remains
important, especially when faced with conflicting information about
goodness of fit. However visual perceptual errors and bias can lead to
errors in the interpretation of Goodness of fit plots.

Modelling involves the creation of a model that fits the particular data set.

Change
the initial parameter estimates under Fixed effects to reasonable
starting values based on the Excel parameters used for simulation.
Holding the mouse over each box will tell you which parameter the
value is for.

Set the 'Stand. dev. of the random effects' to 0 by clicking on each 1 in the covariate model box.

Because these are data from a single individual, there is no between
subject variability (random effects). The SD of random effects is
therefore 0.

Set the residual error model to 'exponential'.

Set the 'Residual error parameters' to the value of reserr in the Excel file kak0.xls.

Click
'Check initial fixed effects'. A plot of predictions based on the model
and initial parameter estimates will display along with the observed
values. Close the 'Check initial fixed effects' window.

You can visualise the effect of changing your parameter estimates by
adjusting the values in the bottom left of the window. When you have
chosen initial estimates that form a prediction that is similar to the
observations, click 'Set as initial values' to apply these values and
close the window.

Set
the calculation options by ticking the 'Estimate the population
parameters', 'Estimate the Fisher Information Matrix' and 'Estimate the
log-likelihood' boxes (which are next to the 'Run' icon at the top of
the Monolix window).

IMPORTANT: Save
the project as KA1_ka1_project.mlxtran in "Absorption and
Disposition\Monolix\KA1" folder. You wiill need to make a new folder
for each of the 3 data sets (i.e. KA1, KA1L, K01L).

Estimate the parameters by clicking on Run. This will
take a while depending on the complexity of the model. During the estimation process
you can watch the parameter estimates change and settle down towards
their final values.

When
the estimation finishes, click on the 'List' button below 'Graphics' at the
bottom of the Monolix window. Choose just 'Individual Fits' as Outputs
and then click on OK. Then click on 'Display the Graphics'
at the top of the Monolix window. You can select the graphic plots by
clicking on the tab at the bottom of the 'Figures' window.

Look at the Individual fit and save a pdf copy in your Absorption and Disposition\Monolix\KA1\KA1_ka1 project folder.

View the parameter estimates by clicking
'Last Results'. A text file containing these results is saved as 'pop_parameters.txt' in the project folder.

IMPORTANT: Save the project again as KA1_ka1.mlxtran in your "Absorption and Disposition\Monolix\KA1\KA1_ka1" folder.

For the remaining models, make up each project name as the combination of the model (e.g. KA1)
and the data source (e.g. ka1L). After you have used all three models with the ka1 data, create a folder for the ka1L data and then for the k01L data.

At the end of the assignment, you should have 9 Monolix projects in the Monolix folder.

Take
care to save each Monolix workspace with a name that will clearly
identify, to you, the model and data type. These instructions suggest
doing so by using separate folders for each simulated data set,
containing three models each.

Select the KA1 icon in the Workflow (Object Browser
window). Click
on the Execute KA1 icon at the top of the Phoenix window. This will
start the parameter estimation process.

Click on Output Data Theta icon to see the parameter
estimates.

Create
a new Excel workbook in the Absorption and Disposition\Phoenix
folder called "Absorption and Disposition.xlsx".

Select
the Theta table values (including columns and rows) then copy and paste
the parameter estimates to
the Excel workbook.

Create a new Word document in the Absorption and Disposition\Phoenix folder called "Absorption and Disposition.docx".

Copy the Excel parameter table and paste it into the Word
document.

Look at the Ind DV, IPRED vs IVAR plot then right click on
it
and copy a bitmap to the clipboard. Paste the clipboard contents into the Word
document.

Save the project with the name Absorption and Disposition.phxproj
in the Absorption and Disposition\Phoenix folder.

Once
you have set up the first workflow model object you may find it quicker
to copy the model object, select Workflow and then paste it to the
Workflow. In that case you will only have to make a few changes to each
of the new model objects.

For the KA1L model, click on the Structure tab then on the 'tlag' option for the KA1L. Set an initial estimate for tlag.

The
K01L model requires a Graphical form of the model. Click on Edit as
Graphical on the Structure tab and accept the change and not using
closed form equations. Click on the Setup Model tab to see the
graphical structure.

If you see an Aa object then right click on the Aa object and delete it (it will be there if you have copied a ka1 model).

Right click on the graphical model window and insert a parameter then click on the Parameters tab and enter tk0 in the name box.

In the Setup model graphical window click on the C object then go to the Structure tab and click on 1 for the 'Dosepts' option.

Click on Tlag and enter tlag in the name box.

Click twice on the '0 ord' button to get '0 ord duration' then enter tk0 in the name box.

Click on the Mapping window and map AMT to A1.

Use the Initial Estimates tab to set suitable initial estimates. Make sure the tvTlag box on the left is checked.

Repeat the steps above for each of the 9 data and model combinations.

Click on the Workflow object in the left hand pane and then click on the 'Execute workflow' button in the top menu.

If everything has been set up correctly all 9 models will be run and results presented in the Workflow results.