Applied Bioinformatics Computing: Data Mining

Data mining techniques are an automated means of reducing the complexity of data in large bioinformatics databases and of discovering meaningful and useful patterns and relationships in data. In the second article in his series on applied bioinformatics, author and technology expert Bryan Bergeron offers an overview of the methods, technologies, and challenges associated with data mining in bioinformatics.

Where is the knowledge we have lost in information? Where is the wisdom we have lost in knowledge? --T.S. Eliot, "The Rock"

Bioinformatics, the study of how information is represented and transmitted
in biological systems, is a data-intensive field of research and development. It
encompasses networking, databases, visualization techniques, search engine
design, statistical techniques, modeling and simulation, AI and related pattern
matching, and (the subject of this article) data mining. In bioinformatics, data
mining is concerned with discovering how simple base pairs can be combined in
different ways, many of which are unknown, to provide the form and function of
the larger building blocks of life. Mastering this biological data, including
discovering many of the underlying rules, relationships, and meanings, requires
human intelligence and intuition, leveraged by computer-based tools.

Because of automated gene sequencing machines and new worldwide activity in
the field, both experimental (wet lab) and computer-generated data are
increasing at an exponential rate. Consider the growth in the holdings of
GenBank and
Swiss-Prot, major online
nucleotide sequence and protein sequence databases, respectively. As illustrated
in Figure 1, about 90% of the entries in both databases have been made since
1998, when Celera Genomics entered the human genome sequencing race against the
nearly decade-old government-sponsored activity.
PubMed, the major
online biomedical bibliographic database, has experienced similar growth.

The increase in the holdings of GenBank, Swiss-Prot, and PubMed mirrors the
growth of the hundreds of public and private online databases that reflect the
work of thousands of researchers in laboratories around the world who are
engaged in mass-producing biological data. There are more data to deal with
today because modern researchers are using computer-enabled, data-centric,
high-throughput processes, such as automated sequencing machines and
microarrays. These researchers are looking for data about protein structure
that allow, for example, the design of molecules to match key regions of a
target protein. In this way, designer drugs can be synthesized to catalyze or
block reactions involving that protein. Similarly, an increasing
proportion of the data is derived from mining and manipulating data from other
databases, as opposed to direct experimental methods. For example, there are
dozens of labs around the world focused on predicting protein structure from
sequence data, as opposed to the traditional time-consuming method of direct
observation.

Getting at the hard-won sequence and structure data in molecular biology
databases and the functional data in the online biomedical literature is
complicated by the size and complexity of the databases. Exhaustively searching
for the raw data and performing transformations and manipulations on the data
through manual operations is often impractical. Even when computing resources
are available, the time and computational power required to locate and
manipulate the data are limiting factors. As a result, executing exhaustive,
non-directed searches for potential correlations isn't possible. Without an
organizing theme, the billions of data points from genomic or proteomic studies
are of little value. Regardless of whether this categorization is at the base
pair, chromosome, or gene level, an organizing theme is critical because it
simplifies and reduces the complexity of what could otherwise be a flood of
incomprehensible data. For example, the PubMed, Swiss-Prot, and GenBank
databases represent generally recognizable organizational themes that facilitate
use of their contents. At a higher level, our understanding of health and
disease is facilitated by the organization of clinical research data by organ
system, pathogen, genetic aberration, or site of trauma.

Ideally, the creator and users of the database share an understanding of the
underlying organizational theme. These themes and the tools used to support them
determine how easily databases created for one purpose can be used for other
purposes. For example, in a relational database of gene sequences, the data may
be arranged in tables, and the user may need to construct SQL (structured query
language) statements to search for and retrieve data. However, if the
relational database is organized around inherited diseases, it may not readily
support an efficient search by protein sequence.
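
To make the point concrete, here is a minimal sketch using Python's built-in sqlite3 module against an invented two-table schema; the genes and diseases tables and their columns are hypothetical, not an actual database layout. The first query retrieves genes directly by sequence content; the second shows how a disease-centered organization forces a join to get back to sequence data.

import sqlite3

# Hypothetical schema for illustration: a table of genes keyed by gene_id and
# a table linking inherited diseases to genes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE genes (gene_id TEXT PRIMARY KEY, symbol TEXT, sequence TEXT);
    CREATE TABLE diseases (disease_id TEXT, name TEXT, gene_id TEXT);
""")
conn.execute("INSERT INTO genes VALUES ('g1', 'CFTR', 'ATGGAATTCAGG')")
conn.execute("INSERT INTO diseases VALUES ('d1', 'cystic fibrosis', 'g1')")

# Direct retrieval: find genes whose sequence contains a motif of interest.
motif = "GAATTC"
hits = conn.execute(
    "SELECT gene_id, symbol FROM genes WHERE sequence LIKE ?",
    (f"%{motif}%",),
).fetchall()

# A database organized around diseases forces a join to reach the sequences.
view = conn.execute("""
    SELECT d.name, g.symbol, g.sequence
    FROM diseases AS d JOIN genes AS g ON g.gene_id = d.gene_id
    WHERE d.name = ?
""", ("cystic fibrosis",)).fetchall()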

The challenge for researchers searching the exponentially increasing
quantities of molecular biology data for suspected and unknown relationships
can be formidable. Even simple queries may involve creating relatively complicated,
computationally intensive joins in order to create views that support a given
hypothesis of how data are related. In addition, even if the technology is
available that allows a researcher to specify any hypothetical query, the
potential for discovering new relationships in data is a function of the
insights and biases imposed by the researcher. While these limitations may be
problematic in relatively small databases, they may be intolerable in databases
with billions of interrelated data elements.

To avoid the computational constraints imposed by these large molecular
biology databases, researchers frequently turn to biological heuristics to avoid
exhaustive searches or processes with a low likelihood of success. For example,
in hunting for new genes, a good place to start from a statistical perspective
is near sequences that tend to be found at the boundaries between introns and
exons. However, even
with heuristics, user-directed discovery is inherently limited by the time
required to manually search for new data.
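
As a rough illustration of such a heuristic, the following sketch narrows a gene hunt to candidate splice sites using the familiar GT-AG intron rule; the sequence and the intron length limits are invented for illustration.

import re

# Heuristic pre-filter: introns in most eukaryotic genes begin with GT and end
# with AG, so those dinucleotides are a cheap first pass. The sequence below
# is invented.
sequence = "ATGGCCGTAAGTCTTACAGGGTACCGTGAGTAAAGCCCAGGATTAA"

donor_sites = [m.start() for m in re.finditer("GT", sequence)]
acceptor_sites = [m.start() for m in re.finditer("AG", sequence)]

# Keep only donor/acceptor pairs that bracket a plausible intron length,
# pruning the search space before any expensive gene-model scoring is run.
MIN_INTRON, MAX_INTRON = 20, 10_000
candidates = [
    (d, a) for d in donor_sites for a in acceptor_sites
    if MIN_INTRON <= a + 2 - d <= MAX_INTRON
]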

The aim of this article is to introduce data mining techniques as an
automated means of reducing the complexity of data in large bioinformatics
databases and of discovering meaningful, useful patterns and relationships in
data. The following sections provide an overview of the methods, technologies,
and challenges associated with data mining.

Knowledge Discovery

Data mining is one stage in an overall knowledge discovery process. As
illustrated in Figure 2, this process involves selection and sampling of the
appropriate data from the database(s); preprocessing and cleaning of the data to
remove redundancies, errors, and conflicts; transforming and reducing data to a
format more suitable for the data mining; data mining; evaluation of the mined
data; and visualization of the evaluation results.

Figure 2 Data mining in the larger context of the knowledge discovery
process.
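
In outline, the process can be pictured as a simple pipeline. The Python skeleton below names the stages from Figure 2; the placeholder bodies stand in for database- and domain-specific implementations.

# Skeleton of the knowledge discovery stages; each body is a placeholder.
def select_and_sample(database):
    """Pull a computationally tenable sample of records."""
    ...

def preprocess_and_clean(records):
    """Remove redundancies, errors, and conflicts."""
    ...

def transform_and_reduce(records):
    """Re-represent the data in a form suited to mining."""
    ...

def mine(records):
    """Extract candidate patterns and relationships."""
    ...

def evaluate(patterns):
    """Interpret the mined patterns; keep the interesting ones."""
    ...

def visualize(results):
    """Render the evaluation results for human inspection."""
    ...

def knowledge_discovery(database):
    sample = select_and_sample(database)
    clean = preprocess_and_clean(sample)
    reduced = transform_and_reduce(clean)
    patterns = mine(reduced)
    results = evaluate(patterns)
    visualize(results)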

In most cases, several iterations of the knowledge discovery process are
required, each involving the design of new data queries to test new hypotheses.
In addition, although the process may seem straightforward, data mining and the
overall knowledge discovery process involve much more than the simple
statistical analysis of data. For example, difficult-to-describe metrics, such
as novelty, interestingness, and understandability, are often used to define
data mining parameters for data discovery. Similarly, each phase of the
knowledge discovery process has associated challenges, as outlined here.

Selection and Sampling

Because of practical computational limitations and a priori knowledge, data
mining isn't simply about searching for every possible relationship in a
database. In a large database or data warehouse, there may be hundreds or
thousands of valueless relationships. Because there may be millions of records
involved and thousands of variables, initial data mining is typically restricted
to computationally tenable samples of the holdings in an entire data warehouse.
The evaluation of the relationships that are revealed in these samples can be
used to determine which relationships in the data should be mined further using
the complete data warehouse. With large complex databases, even with sampling,
the computational resource requirements associated with non-directed data mining
may be excessive. In this situation, researchers generally rely on their
knowledge of biology to identify potentially valuable relationships, and they
limit sampling based on these heuristics.
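
The sketch below illustrates the idea: draw a random, tractable sample from a large record space, then let a placeholder biological heuristic narrow it before mining. The record counts and the predicate are invented for illustration.

import random

TOTAL_RECORDS = 5_000_000  # size of the hypothetical data warehouse
SAMPLE_SIZE = 10_000       # computationally tenable sample

random.seed(42)  # make the sampling run reproducible
sample_ids = random.sample(range(TOTAL_RECORDS), SAMPLE_SIZE)

# Stand-in for a priori biological knowledge: restrict the sample to records
# that pass some domain heuristic before any mining is attempted.
def passes_heuristic(record_id):
    return record_id % 2 == 0  # placeholder predicate for illustration

candidates = [rid for rid in sample_ids if passes_heuristic(rid)]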

Preprocessing and Cleaning

The bulk of work associated with knowledge discovery is in preparing the data
for the actual analysis associated with data mining. The major preparatory
activities include the following:

Data Characterization: creating a high-level description of the
nature and the content of the data to be mined.

Consistency Analysis: determining the statistical variability in the
data, independent of the domain.

Domain Analysis: validating the data values in the larger context of
the biology.

Data Enrichment: drawing from multiple data sources to minimize the
limitations of a single data source.

Frequency and Distribution Analysis: weighing values as a function of
their frequency of occurrence.

Normalization: transforming data values from one representation to
another, as in the sketch following this list.
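
As a small example of two of these activities, the sketch below runs a simple consistency screen and a z-score normalization over an invented list of expression values.

from statistics import mean, stdev

raw_values = [2.1, 2.4, 1.9, 2.2, 9.8, 2.0, 2.3]  # invented measurements

mu, sigma = mean(raw_values), stdev(raw_values)

# Consistency analysis: a simple two-standard-deviation screen for suspect
# entries, independent of what the values mean biologically.
suspect = [v for v in raw_values if abs(v - mu) > 2 * sigma]

# Normalization: transform each value into standard-deviation units so that
# measurements from different sources are comparable.
normalized = [(v - mu) / sigma for v in raw_values]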

Transformation and Reduction

In the transformation and reduction phase of the knowledge discovery process,
data sets are reduced to the minimum size possible through sampling or summary
statistics. For example, tables of data may be replaced by descriptive
statistics such as mean and standard deviation.
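
A minimal sketch of this kind of reduction, using invented per-gene expression values: each row of a table is collapsed to its mean and standard deviation, and downstream mining operates on the far smaller summary.

from statistics import mean, stdev

expression_table = {
    "geneA": [4.1, 3.9, 4.3, 4.0],
    "geneB": [7.2, 6.8, 7.5, 7.1],
}

# Replace each table row with descriptive statistics.
summary = {
    gene: (mean(values), stdev(values))
    for gene, values in expression_table.items()
}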

Data Mining Methods

The process of data mining is concerned with extracting patterns from the
data by using techniques such as classification, regression, link analysis,
segmentation, or deviation detection. Classification involves mapping data into
one of several predefined or newly discovered classes. Regression methods
involve assigning data a continuous numerical variable based on statistical
methods. One goal in using regression methods is to extrapolate trends from a
few samples of the data. Link analysis involves evaluating apparent connections
or links between data in the database. Deviation detection identifies data
values that are outside of the norm, as defined by existing models or by
evaluating the ordering of observations. Segmentation identifies classes or
groups of data that behave similarly, according to an established metric. These
methods of data mining are typically used in combination with each other, either
in parallel or as part of a sequential operation.
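
As one concrete possibility for the segmentation method, the sketch below implements a bare-bones k-means clustering over invented two-condition expression profiles; k-means is just one of many techniques that could fill this role.

import random

# Invented 2-D profiles (e.g., expression under two conditions).
points = [(1.0, 1.1), (0.9, 1.0), (5.0, 5.2), (5.1, 4.9), (0.8, 1.2), (4.8, 5.0)]

def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans(points, k, iterations=10, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iterations):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centers[i] = (
                    sum(p[0] for p in cluster) / len(cluster),
                    sum(p[1] for p in cluster) / len(cluster),
                )
    return centers, clusters

centers, clusters = kmeans(points, k=2)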

Evaluation

In the evaluation phase of knowledge discovery, the patterns identified by
the data mining analysis are interpreted. Typical evaluation ranges from simple
statistical analysis and complex numerical analysis of sequences and structures
to determining the clinical relevance of the findings.
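
At the simple statistical end of that range, an evaluation might ask whether a mined pattern occurs more often than chance would predict. The sketch below computes a z-score for an invented motif count against an invented background rate.

import math

observed = 42          # motif occurrences reported by the mining step
positions = 10_000     # positions scanned
p_background = 0.0025  # expected per-position rate under a null model

expected = positions * p_background
sd = math.sqrt(positions * p_background * (1 - p_background))
z = (observed - expected) / sd  # a large z suggests the pattern isn't chance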

Visualization

Visualization of evaluation results can range from simple pie charts to 3-D
virtual reality displays that can be manipulated by haptic (force feedback)
controllers.
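
At the simple end of that spectrum, a sketch like the following is often sufficient; it uses the matplotlib plotting library with invented category counts.

import matplotlib.pyplot as plt

# Pie chart of how mined sequence records fall into invented categories.
labels = ["coding", "regulatory", "repeat", "unclassified"]
counts = [412, 187, 356, 95]

plt.pie(counts, labels=labels, autopct="%1.0f%%")
plt.title("Distribution of mined sequence records (illustrative data)")
plt.show()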