Transcriptomics is the study of an organism’s transcriptome, the sum of all of its RNA transcripts. The information content of an organism is recorded in the DNA of its genome and expressed through transcription. Here, mRNA serves as a transient intermediary molecule in the information network, whilst non-coding RNAs perform additional diverse functions. A transcriptome captures a snapshot in time of the total transcripts present in a cell.

The first attempts to study the whole transcriptome began in the early 1990s and technological advances since the late 1990s have made transcriptomics a widespread discipline. Transcriptomics has been defined by repeated technological innovations that transform the field. There are two key contemporary techniques in the field: microarrays, which quantify a set of predetermined sequences, and RNA-Seq, which uses high-throughput sequencing to capture all sequences.

Measuring the expression of an organism’s genes in different tissues, conditions, or time points gives information on how genes are regulated and reveals details of an organism’s biology. It can also help to infer the functions of previously unannotated genes. Transcriptomic analysis has enabled the study of how gene expression changes in different organisms and has been instrumental in the understanding of human disease. An analysis of gene expression in its entirety allows detection of broad coordinated trends which cannot be discerned by more targeted assays.

Transcriptomics has been characterised by the development of new techniques which have redefined what is possible every decade or so and rendered previous technologies obsolete. The first attempt at capturing a partial human transcriptome was published in 1991 and reported 609 mRNA sequences from the human brain.[2] In 2008, two human transcriptomes, composed of millions of transcript-derived sequences covering 16,000 genes, were published[3][4] and, by 2015, transcriptomes had been published for hundreds of individuals.[5][6] Transcriptomes for different disease states, tissues or even single cells are now routinely generated.[6][7][8] This explosion in transcriptomics has been driven by the rapid development of new technologies with improved sensitivity and economy.[9][10][11][12]

Early attempts

The word “transcriptome” was first used in the 1990s.[19][20] In 1995, one of the earliest sequencing-based transcriptomic methods was developed: Serial Analysis of Gene Expression (SAGE), which worked by Sanger sequencing of concatenated random transcript fragments.[21] Transcripts were quantified by matching the fragments to known genes. A variant of SAGE using high-throughput sequencing techniques, called digital gene expression analysis, was also briefly used.[22][9] However, these methods were largely overtaken by high-throughput sequencing of entire transcripts, which provided additional information on transcript structure, e.g. splice variants.[9]

The dominant contemporary techniques, microarrays and RNA-Seq, were developed in the mid-1990s and 2000s.[9][33] Microarrays that measure the abundances of a defined set of transcripts via their hybridisation to an array of complementary probes were first published in 1995.[34][35] Microarray technology allowed the assay of thousands of transcripts simultaneously, at a greatly reduced cost per gene and with considerable labour savings.[36] Both spotted oligonucleotide arrays and Affymetrix high-density arrays were the method of choice for transcriptional profiling until the late 2000s.[12][33] Over this period, a range of microarrays were produced to cover known genes in model or economically important organisms. Advances in design and manufacture of arrays improved the specificity of probes and allowed more genes to be tested on a single array. Advances in fluorescence detection increased the sensitivity and measurement accuracy for low abundance transcripts.[35][37]

RNA-Seq refers to the sequencing of transcript cDNAs, where abundance is derived from the number of counts from each transcript. The technique has therefore been heavily influenced by the development of high-throughput sequencing technologies.[9][11] Massively Parallel Signature Sequencing (MPSS) was an early example based on generating 16–20 bp sequences via a complex series of hybridisations,[38] and was used in 2004 to validate the expression of 10⁴ genes in Arabidopsis thaliana.[39] The earliest RNA-Seq work was published in 2006 with 10⁵ transcripts sequenced using the 454 technology.[40] This was sufficient coverage to quantify relative transcript abundance. RNA-Seq began to increase in popularity after 2008, when new Solexa/Illumina technologies allowed 10⁹ transcript sequences to be recorded.[41][4][42][10] This yield was sufficient for accurate quantitation of an entire human transcriptome. As of 2016, 10⁷ transcripts can be sequenced for under USD $1000 (Table 2).

Data gathering

Transcriptomics data may be generated by several different techniques, which are broadly based either on the sequencing of individual RNA transcripts (expressed sequence tags, or RNA-Seq) or on hybridisation to an ordered array of nucleotide probes (microarrays).

Isolation of RNA

RNA must first be isolated from the experimental organism before transcripts can be recorded. Although biological systems are incredibly diverse, RNA extraction techniques are broadly similar and involve: mechanical disruption of cells or tissues, denaturation of RNase with chaotropic salts,[43] disruption of macromolecules and nucleotide complexes, separation of RNA from undesired biomolecules including DNA, and concentration of the RNA via precipitation from solution or elution from a solid matrix.[43][44] Isolated RNA may additionally be treated with DNase to digest any traces of DNA,[45] or refined to enrich for messenger RNA.[46] RNA must be isolated with minimal degradation to avoid affecting the results; for example, mRNA enrichment from fragmented RNA will result in the depletion of 5’ mRNA ends and uneven signal across the length of a transcript. Snap-freezing of tissue prior to RNA isolation is typical, and care is taken to reduce exposure to RNase enzymes once isolation is complete.[44]

Serial and Cap Analysis of Gene Expression (SAGE/CAGE)

Figure 2. Summary of SAGE. Within the organism, genes are transcribed and spliced (in eukaryotes) to produce mature mRNA transcripts (red). The mRNA is extracted from the organism and reverse transcriptase is used to copy the mRNA into stable ds-cDNA (blue). In SAGE, the ds-cDNA is digested by restriction enzymes (at location ‘X’ and ‘X’+11) to produce 11-nucleotide ‘tag’ fragments. These tags are concatenated and sequenced using long-read Sanger sequencing (different shades of blue indicate tags from different genes). The sequences are deconvoluted to find the occurrence number of each tag. Each tag can then be used to report on transcription of the gene it came from, provided that gene is known.

Serial Analysis of Gene Expression (SAGE) was a development of EST methodology to increase the throughput of the tags generated and allow some quantitation of transcript abundance.[21] cDNA is generated from the RNA, but is then digested into 11 bp ‘tag’ fragments using restriction enzymes that cut at a specific sequence, and 11 base pairs along from that sequence. These cDNA tags are then concatenated head-to-tail into long strands (>500 bp) and sequenced using low-throughput but long-read-length methods such as Sanger sequencing. The sequences are then deconvoluted into their original 11 bp tags.[21] If a reference genome is available, these tags can sometimes be aligned to identify their corresponding genes. If a reference genome is unavailable, the tags can simply be used directly as diagnostic markers if found to be differentially expressed in a disease state.
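The deconvolution and counting step can be illustrated with a short sketch. This is a simplification: real SAGE tags are excised at anchoring-enzyme sites, whereas here clean, fixed-length concatenation is assumed, and the reads and tags are invented for illustration.

```python
from collections import Counter

TAG_LEN = 11  # SAGE tags are 11 bp long

def count_tags(concatemer_reads, tag_len=TAG_LEN):
    """Split concatenated Sanger reads into fixed-length tags and count each."""
    counts = Counter()
    for read in concatemer_reads:
        # Walk along the concatemer in tag-sized steps
        for i in range(0, len(read) - tag_len + 1, tag_len):
            counts[read[i:i + tag_len]] += 1
    return counts

# Two toy concatemer reads, each built from two hypothetical 11 bp tags
reads = ["AAAAAAAAAAACCCCCCCCCCC", "AAAAAAAAAAAGGGGGGGGGGG"]
tag_counts = count_tags(reads)
print(tag_counts["AAAAAAAAAAA"])  # 2: this tag occurs once in each read
```

Matching the counted tags back to genes (or using them directly as markers) is then a lookup against a reference, as described above.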

SAGE and CAGE methods produce information on more genes than was possible when sequencing single ESTs, but sample preparation and data analysis are typically more labour-intensive.

Microarrays

Figure 3. Summary of DNA microarrays. Within the organism, genes are transcribed and spliced (in eukaryotes) to produce mature mRNA transcripts (red). The mRNA is extracted from the organism and reverse transcriptase is used to copy the mRNA into stable ds-cDNA (blue). In microarrays, the ds-cDNA is fragmented and fluorescently labelled (orange). The labelled fragments bind to an ordered array of complementary oligonucleotides and measurement of fluorescent intensity across the array indicates the abundance of a predetermined set of sequences. These sequences are typically specifically chosen to report on genes of interest within the organism’s genome.

Principles and advances

Microarrays consist of short nucleotide oligomers, known as “probes”, which are arrayed on a solid substrate (e.g. glass).[49] Transcript abundance is determined by hybridisation of fluorescently labelled transcripts to these probes.[50] The fluorescence intensity at each probe location on the array indicates the transcript abundance for that probe sequence.[50]

Microarrays require some prior knowledge of the organism of interest, for example, in the form of an annotated genome sequence, or a library of ESTs that can be used to generate the probes for the array.

Methods

The manufacture of microarrays relies on micro- and nanofabrication techniques. Microarrays for transcriptomics typically fall into one of two broad categories: low-density spotted arrays or high-density short-probe arrays.[36] Transcript presence may be recorded with single- or dual-channel detection of fluorescent tags.

Spotted low-density arrays typically feature picolitre drops of a range of purified cDNAs arrayed on the surface of a glass slide.[51] The probes are longer than those of high-density arrays and typically lack the transcript resolution of high-density arrays. Spotted arrays label the test and control samples with distinct fluorophores (Cy3 and Cy5), and the ratio of fluorescence at each spot provides a quantitative measure of changes in abundance.[52] High-density arrays use single-channel detection, and each sample is hybridised and detected individually.[53] High-density arrays were popularised by the Affymetrix GeneChip array, where each transcript is quantified by several short 25-mer probes that together assay one gene.[54]
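The two-colour ratio measurement can be sketched in a few lines. The intensities and background values below are illustrative, and real pipelines additionally correct for dye bias (e.g. by loess normalisation), which is omitted here.

```python
import math

def spot_log_ratio(cy5, cy3, cy5_bg=0.0, cy3_bg=0.0):
    """Log2 ratio of test (Cy5) to control (Cy3) intensity for one spot,
    after background subtraction. Dye-bias normalisation is omitted."""
    return math.log2((cy5 - cy5_bg) / (cy3 - cy3_bg))

# A spot twice as bright in the test channel gives a log2 ratio of +1
print(spot_log_ratio(2100.0, 1100.0, cy5_bg=100.0, cy3_bg=100.0))  # 1.0
```

Log ratios are used rather than raw ratios so that up- and down-regulation of the same magnitude are symmetric around zero.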

NimbleGen arrays are high-density arrays produced by a maskless photochemistry method, which permits flexible manufacture of arrays in small or large numbers. These arrays have hundreds of thousands of 45- to 85-mer probes and are hybridised with a one-colour labelled sample for expression analysis.[55] Some designs incorporate up to 12 independent arrays per slide.

RNA-Seq

Figure 4. Summary of RNA-Seq. Within the organism, genes are transcribed and spliced (in eukaryotes) to produce mature mRNA transcripts (red). The mRNA is extracted from the organism and reverse transcriptase is used to copy the mRNA into stable ds-cDNA (blue). In RNA-Seq, the ds-cDNA is fragmented and sequenced using high-throughput, short-read sequencing methods. These sequences can then be aligned to a reference genome sequence to reconstruct which genome regions were being transcribed. These data can be used to annotate where expressed genes are, their relative expression levels, and any alternative splice variants.

Principles and advances

RNA-Seq refers to the combination of a high-throughput sequencing methodology with computational methods to capture and quantify the transcripts present in an RNA extract.[10] The nucleotide sequences generated are typically around 100 bp in length, but can range from 30 bp to over 10,000 bp depending on the sequencing method used. RNA-Seq leverages deep sampling of the transcriptome with many short fragments to allow computational reconstruction of the original RNA transcripts by aligning reads to a reference genome or to each other (de novo assembly).[9] The typical dynamic range of five orders of magnitude for RNA-Seq is a key advantage over microarray transcriptomes. In addition, input RNA amounts are much lower for RNA-Seq (nanogram quantities) than for microarrays (microgram quantities), which allows finer examination of cellular structures, down to the single-cell level when combined with linear amplification of cDNA.[25]
Theoretically, there is no upper limit of quantification in RNA-Seq, and background signal is very low for unambiguously mapped reads of 100 bp or more.[10]

RNA-Seq may be used to identify genes within a genome or to identify which genes are active at a particular point in time, and read counts can be used to accurately model the relative expression level of each gene. RNA-Seq methodology has constantly improved, primarily through the development of DNA sequencing technologies to increase throughput, accuracy, and read length.[56] Since the first descriptions in 2006 and 2008,[57][40] RNA-Seq has been rapidly adopted, and it overtook microarrays as the dominant transcriptomics technique in 2015.[58]
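Modelling relative expression from read counts requires normalising for both transcript length (longer transcripts attract more reads) and sequencing depth. Transcripts-per-million (TPM), sketched below with invented counts, is one common scheme; it is shown here as an illustration rather than as the method used in any particular study.

```python
def tpm(counts, lengths_kb):
    """Transcripts per million: normalise read counts by transcript length,
    then scale so the values for each sample sum to one million."""
    rates = [c / l for c, l in zip(counts, lengths_kb)]
    scale = 1e6 / sum(rates)
    return [r * scale for r in rates]

# Hypothetical three-transcript sample: genes 1 and 2 have equal counts,
# but gene 2 is twice as long, so its TPM is half that of gene 1.
values = tpm(counts=[100, 100, 200], lengths_kb=[1.0, 2.0, 1.0])
print([round(v) for v in values])  # [285714, 142857, 571429]
```

Because TPM values sum to a fixed total per sample, they are comparable within a sample; cross-sample comparison still requires care over library composition.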

Methods

RNA-Seq was established in concert with the rapid development of a range of high-throughput DNA sequencing technologies.[59] However, before the extracted RNA transcripts are sequenced, several key processing steps are performed. Methods differ in their use of transcript enrichment, fragmentation, and amplification, and in whether they preserve strand information.

Transcript enrichment refers to separating the informative transcript molecules from more abundant but uninformative molecules, such as structural ribosomal RNAs (rRNA). The sensitivity of an RNA-Seq experiment can therefore be increased by depleting known abundant RNAs and enriching the classes of RNA that are of interest. Enrichment of mRNA molecules is achieved by capturing them via their poly-A tails. Alternatively, ribo-depletion can be used to specifically remove rRNAs by hybridisation to probes tailored to the rRNA sequences present in the taxonomic group (e.g. mammalian rRNA, plant rRNA). However, ribo-depletion can also introduce some bias via non-specific depletion of off-target transcripts.[60] Small RNAs, such as microRNAs, can be purified from total RNA based on their size by gel electrophoresis and extraction prior to library preparation.

Since mRNAs are longer than the read-lengths of typical high-throughput sequencing methods, transcripts are typically fragmented prior to sequencing. Fragmentation method is generally dictated by the chosen sequencing platform and may be performed by hydrolysis, nebulisation, sonication, or enzymatic treatment of cDNA.

During preparation for sequencing, cDNA copies of transcripts may be amplified by PCR to enrich for fragments that contain the expected 5’ and 3’ adapter sequences.[61] Amplification is also used to allow sequencing of very low input amounts of RNA, down to as little as 50 pg of total RNA in extreme applications.[62]

Spike-in controls can be used to provide quality control assessment of library preparation and sequencing, in terms of GC-content, fragment length, as well as bias due to fragment position within a transcript.[63]

Currently, RNA-Seq relies on copying of RNA molecules into DNA molecules prior to sequencing, hence the subsequent platforms for generation of RNA-Seq data are the same as for genomic data (Table 2). Consequently, the development of DNA sequencing technologies has been a defining feature of RNA-Seq.[64][65][66]

Strand-specific RNA-Seq methods preserve the strand information of a sequenced transcript.[70] Without strand information, reads can be aligned to a gene locus but do not indicate in which direction the gene is transcribed. Stranded RNA-Seq is therefore useful for deciphering the transcription of genes that overlap in different directions and for making more robust gene predictions in non-model organisms.

Data analysis

Transcriptomics methods are typically data heavy and require significant computation to produce meaningful results for both microarray and RNA-Seq experiments. Microarray data are recorded as high-resolution images, requiring feature detection and spectral analysis. Microarray raw image files (.dat) are each about 750 MB in size, while the processed intensity files (.CEL) are around 60 MB. Multiple short probes matching a single transcript can reveal details about the intron-exon structure, requiring statistical models to determine the authenticity of the resulting signal. RNA-Seq studies produce billions of short DNA sequences, which must be aligned to reference genomes composed of millions to billions of base pairs. De novo assembly of reads within a dataset requires the construction of highly complex sequence graphs. RNA-Seq operations are highly repetitious and benefit from parallelised computation with large amounts of random-access memory (RAM). A human transcriptome could be accurately captured using RNA-Seq with 30 million 100 bp sequences per sample.[74][75] This example would require approximately 1.8 gigabytes of disk space per sample when stored in compressed FASTQ format. Processed count data for each gene would be much smaller, equivalent to processed microarray data in CEL format. Sequence data may be stored in public repositories such as the Sequence Read Archive (SRA).[76] RNA-Seq datasets can be uploaded via the Gene Expression Omnibus.
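The 1.8 gigabyte figure can be reproduced with back-of-envelope arithmetic. The header size and compression ratio below are illustrative assumptions, not measured values.

```python
reads = 30_000_000   # 30 million reads per sample
read_len = 100       # read length in bp

# FASTQ stores four lines per read: header, sequence, '+', quality string.
# Assume a ~40-byte header and one newline per line (illustrative figures).
bytes_per_read = (40 + 1) + (read_len + 1) + (1 + 1) + (read_len + 1)
uncompressed_gb = reads * bytes_per_read / 1e9

compression_ratio = 4.0  # gzip commonly achieves roughly 3-5x on FASTQ
compressed_gb = uncompressed_gb / compression_ratio
print(f"{uncompressed_gb:.2f} GB raw, {compressed_gb:.2f} GB compressed")
```

With these assumptions the sample comes to roughly 7 GB of raw FASTQ text, compressing to on the order of 1.8 GB, consistent with the estimate above.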

Image processing

Microarray image processing must correctly identify the regular grid of features within an image and independently quantify the fluorescence intensity for each feature. Image artefacts must be additionally identified and removed from the overall analysis. The overall process can be broken down into a few steps: alignment of the processing grid with the image grid, spot finding, separation of target and background signal for each spot using segmentation, quantitation of fluorescence intensity, and finally quality control to report any image artefacts for manual inspection.[77]

Conversion of RNA-Seq image data into sequence data is typically handled automatically by instrument software, but it includes a similar set of processes to microarray image processing. The Illumina sequencing-by-synthesis method results in a random array of clusters distributed over the surface of a flow cell. The flow cell is imaged up to four times during each sequencing cycle, with tens to hundreds of cycles in total. Flow cell clusters are analogous to microarray spots and must be correctly identified during the early stages of the sequencing process; however, each cluster generates only one read, and many RNA-Seq reads are required to quantify the abundance of a single mRNA. In Roche’s pyrosequencing method, a camera records the location of clusters and the intensity of emitted light to determine the identity and number of consecutive nucleotides sequenced per cycle.

RNA-Seq Data Analysis

Quality Control

Sequence Alignment

Alignment of RNA-Seq reads to a reference genome has become relatively straightforward due to efficiency improvements in alignment software. The key challenges for this process include: sufficient speed to permit billions of short sequences to be aligned in a meaningful timeframe, flexibility to recognise and deal with intron splicing of eukaryotic mRNA, and correct assignment of reads that map to multiple locations. Efficiency issues have been largely addressed by indexing reference genome sequences using techniques such as spaced-seed indexing[78] or the Burrows-Wheeler transform.[79]
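The Burrows-Wheeler transform can be demonstrated on a toy sequence. This naive rotation-sort construction is quadratic in time and memory and purely illustrative; production aligners build the index with suffix-array methods and search it via an FM-index rather than enumerating rotations.

```python
def bwt(text):
    """Burrows-Wheeler transform via sorted rotations ('$' marks the end)."""
    s = text + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(transformed):
    """Invert the transform by iteratively re-sorting prepended columns."""
    table = [""] * len(transformed)
    for _ in range(len(transformed)):
        table = sorted(transformed[i] + table[i] for i in range(len(transformed)))
    return next(row for row in table if row.endswith("$"))[:-1]

print(bwt("ACAACG"))           # GC$AAAC
print(inverse_bwt("GC$AAAC"))  # ACAACG (the transform is lossless)
```

The transform groups identical characters together, which is what makes the compressed index both small and efficiently searchable.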

Alignment of primary transcript mRNA sequences derived from eukaryotes to a reference genome requires specialised handling of intron sequences, which are absent from mature mRNA. Splice-aware short-read aligners perform an additional round of alignment specifically designed to identify splice junctions, informed by canonical splice donor and acceptor sequences; aligners that are not splice-aware generally fail to identify intron splice junctions. Identification of intron splice junctions also allows more reads to be aligned to a reference genome, potentially improving the accuracy of expression estimates. Since gene regulation may occur at the mRNA isoform level, splice-aware alignments permit detection of isoform abundance changes that would otherwise be lost in a bulked analysis.[80]
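A check for the canonical GT..AG donor/acceptor motif, the kind of signal splice-aware aligners use to score candidate junctions, can be sketched as follows; the genome string and intron coordinates here are invented.

```python
def is_canonical_intron(genome, start, end):
    """True if the candidate intron genome[start:end] begins with the
    canonical GT donor and ends with the canonical AG acceptor."""
    return genome[start:start + 2] == "GT" and genome[end - 2:end] == "AG"

# Hypothetical locus: exon (0-6), intron (6-18), exon (18-24)
genome = "ATGGCC" + "GTAAGTTTTCAG" + "GCGTAA"
print(is_canonical_intron(genome, 6, 18))  # True
print(is_canonical_intron(genome, 5, 18))  # False: no GT donor at position 5
```

Real aligners combine such motif checks with read evidence spanning the junction, since non-canonical junctions also occur at low frequency.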

Sequence coverage

The sensitivity and accuracy of an RNA-Seq experiment depend on the number of reads obtained from each sample. Insufficient coverage of the transcriptome results in failure to detect low-abundance transcripts and greater uncertainty compared to a higher-coverage transcriptome. Experimental design is further complicated by sequencing technologies with a limited range of output, variable efficiencies of sequence creation, and variable sequence quality. Added to those considerations is that every species has a different number of genes and therefore requires a tailored sequence yield for an effective transcriptome. Early studies determined suitable thresholds empirically, but as the technology matured, suitable coverage came to be predicted computationally by transcriptome saturation. Somewhat counter-intuitively, the most effective way to improve detection of differential expression in low-expression genes is to add more biological replicates rather than more reads.[81] The Encyclopedia of DNA Elements (ENCODE) Project catalogues the functional elements of the human genome through thousands of collaborative experiments, including RNA-Seq transcriptomics.[82][83][84] The ENCODE standards currently advise 70-fold exome coverage for standard RNA-Seq and up to 500-fold exome coverage to detect rare transcripts and isoforms.
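Translating a fold-coverage target into a read yield is simple arithmetic. The exome size used below is an illustrative assumption for the worked example, not a figure from the ENCODE standards.

```python
def reads_required(fold_coverage, target_bp, read_len_bp):
    """Rearranged from: mean coverage = reads * read_length / target_size."""
    return fold_coverage * target_bp / read_len_bp

# Illustrative: 70x coverage of an assumed ~100 Mb exome with 100 bp reads
n = reads_required(fold_coverage=70, target_bp=100e6, read_len_bp=100)
print(f"{n / 1e6:.0f} million reads")  # 70 million reads
```

The same arithmetic explains why species with larger or smaller transcribed fractions need tailored sequence yields.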

De novo assembly of transcripts

De novo assembly refers to the construction of full-length transcript sequences from individual reads without the use of a reference genome. Numerous assemblers are available for de novo creation of a transcriptome, each generally with a different approach or focus (Table 3).[85] A de novo transcriptome is well suited to gene discovery applications because it does not require an existing reference genome; novel transcripts are assembled as easily as known examples. Once assembled de novo, the assembly can be used as a reference for sequence alignment methods and quantitative gene expression analysis. Challenges particular to de novo assembly include: larger computational requirements compared to a reference-based transcriptome, additional validation of gene variants or fragments, and additional annotation of assembled transcripts. The first metrics used to describe transcriptome assemblies, such as N50, have been shown to be misleading,[86] and improved evaluation methods are now available.[87][88]
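As an illustration of such a metric, N50 is computed as follows; the contig lengths are invented.

```python
def n50(contig_lengths):
    """N50: the length L such that contigs of length >= L together
    contain at least half of all assembled bases."""
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length

# A few long contigs dominate the statistic, which is one reason N50
# misleads for transcriptomes containing genuinely short transcripts.
print(n50([800, 500, 300, 200, 100, 100]))  # 500
```

Because real transcriptomes legitimately contain many short transcripts, a "better" (larger) N50 does not imply a more correct transcriptome assembly, motivating the improved evaluation methods cited above.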

Quantification of read alignments

Differential expression

RNA-Seq alignments capture quantitative gene expression information in the form of coverage. Several software packages have been developed that normalise and model count-based gene expression data to accurately identify differential gene expression. The most popular differential gene expression tools are run from a command-line interface, either in a Unix-based environment or within the R/Bioconductor[97] statistical environment. Four examples are described in Table 4. Most take a table of genes and gene counts as their input, but some, such as Cuffdiff, will accept read alignments in .bam format. Results are output as gene lists with associated pair-wise tests for differential expression between treatments and probability estimates for those differences.
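The normalisation step these packages perform can be sketched minimally. The counts below are invented, and real tools (e.g. edgeR, DESeq2) additionally model count dispersion with a negative binomial distribution and compute formal significance tests, both of which this sketch omits.

```python
import math

def cpm(counts):
    """Counts-per-million: correct for library size within one sample."""
    scale = 1e6 / sum(counts)
    return [c * scale for c in counts]

def log2_fold_change(treated_cpm, control_cpm, pseudo=0.5):
    # A pseudocount avoids log(0) for genes unobserved in one condition
    return math.log2((treated_cpm + pseudo) / (control_cpm + pseudo))

# Hypothetical count table: three genes, control library half the size
control = cpm([100, 200, 700])    # library size 1,000
treated = cpm([400, 100, 1500])   # library size 2,000
changes = [log2_fold_change(t, c) for t, c in zip(treated, control)]
print([round(x, 2) for x in changes])  # [1.0, -2.0, 0.1]
```

Without the library-size correction, the larger treated library would make every gene appear up-regulated; after correction, only the genuine shifts remain.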

Validation

Validation of transcriptomic analyses requires an independent technique that is well-established, statistically assessable, and highly controlled: quantitative PCR (QPCR).[102] Gene expression is measured against defined standards for both the gene of interest and control genes. The measurement by QPCR is similar to that obtained by RNA-Seq, in that a value can be calculated for the concentration of a target region in a given sample. QPCR is, however, restricted to amplicons smaller than 300 bp, usually toward the 3’ end of the coding region, avoiding the 3’UTR.[103] If validation of transcript isoforms is required, an inspection of RNA-Seq read alignments should indicate where QPCR primers might best be placed for maximum discrimination. The measurement of multiple control genes along with the genes of interest produces a stable reference within a biological context.[104] QPCR validation of RNA-Seq data has generally found a high degree of correlation between the two techniques.[57][105][106]
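Relative quantitation from QPCR cycle-threshold (Ct) values is commonly computed with the 2^-ΔΔCt method, sketched below. The Ct values are hypothetical, and the method assumes roughly 100% amplification efficiency for both target and reference genes.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method: normalise the target
    gene's Ct to a reference gene within each sample, then compare
    treated against control."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# The target crosses threshold 2 cycles earlier (relative to the
# reference gene) in the treated sample: roughly 4-fold up-regulation
print(fold_change_ddct(22.0, 18.0, 24.0, 18.0))  # 4.0
```

Each earlier cycle corresponds to a doubling of starting template, which is why the fold change is a power of two in the ideal-efficiency case.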

Host–pathogen interactions

Transcriptomic analysis has predominantly focused on either the host or the pathogen. Dual RNA-seq has recently been applied to simultaneously profile RNA expression in both the pathogen and host throughout the infection process. This technique enables the study of the dynamic response and interspecies gene regulatory networks in both interaction partners from initial contact through to invasion and the final persistence of the pathogen or clearance by the host immune system.[114][115]

Responses to environment

Transcriptomics allows identification of genes and pathways that respond to and counteract biotic and abiotic environmental stresses. The non-targeted nature of transcriptomics allows the identification of novel transcriptional networks in complex systems. For example, comparative analysis of a range of chickpea lines at different developmental stages identified distinct transcriptional profiles associated with drought and salinity stresses, including identification of the role of transcript isoforms of AP2-EREBP.[116] Investigation of gene expression during biofilm formation by the fungal pathogen Candida albicans revealed a co-regulated set of genes critical for biofilm establishment and maintenance.[117]

Assembly of RNA-Seq reads is not dependent on a reference genome[93] and so is ideal for gene expression studies of non-model organisms with non-existent or poorly developed genomic resources. For example, a database of SNPs used in Douglas fir breeding programs was created by de novo transcriptome analysis in the absence of a sequenced genome.[122] Similarly, genes that function in the development of cardiac, muscle, and nervous tissue in lobsters were identified by comparing the transcriptomes of the various tissue types without use of a genome sequence.[123] RNA-Seq can also be used to identify previously unknown protein-coding regions in existing sequenced genomes.

Gene expression databases

Transcriptomics studies generate large amounts of data that have potential applications far beyond the original aims of an experiment. As such, raw or processed data may be deposited in public databases to ensure their utility for the broader scientific community (the Gene Expression Omnibus, for example, contained millions of experiments in 2016). The summary of the main databases in Table 5 indicates some of the available transcriptome data resources.

Gene Expression Omnibus (GEO): the first transcriptomics database to accept data from any source. It introduced the MIAME and MINSEQE community standards, which define necessary experiment metadata to ensure effective interpretation and repeatability.[133][134]

ArrayExpress: imports datasets from GEO and accepts direct submissions. Processed data and experiment metadata are stored at ArrayExpress, while the raw sequence reads are held at the ENA. Complies with the MIAME and MINSEQE standards.[133][134]

Genevestigator: contains manual curations of public transcriptome datasets, focusing on medical and plant biology data. Individual experiments are normalised across the full database to allow comparison of gene expression across diverse experiments. Full functionality requires a licence purchase, with free access to limited functionality.

Conclusions

Transcriptomics has revolutionised our understanding of how genomes are expressed. Over the last three decades, new technologies have redefined what is possible to investigate, and integration with other -omics technologies is giving an increasingly integrated view of the complexities of cellular life.
The plummeting cost of transcriptomics studies has made them feasible for small laboratories, while large-scale transcriptomics consortia are able to undertake experiments comparing the transcriptomes of thousands of organisms, tissues, or environmental conditions. This trend is likely to continue as sequencing technologies improve.