HYPERSPECTRUM NEWSLETTER
Vol 3, No. 1, February 1997

The above AVIRIS data cube was taken from the NASA/JPL FTP site

Special Issue: Spectral Imaging: Technology & Applications

ABSTRACT

This issue of Hyperspectrum reviews the activities at OKSI related to imaging spectroscopy, presenting current and future applications of the technology. We discuss the development of several systems including hardware, signal processing, data classification algorithms and benchmarking techniques to determine algorithm performance. Signal processing for each application is tailored by incorporating the phenomenology appropriate to the process into the algorithms. Pixel signatures are classified using techniques such as principal component analyses, generalized eigenvalue analysis and novel very fast neural network methods. The major hyperspectral imaging systems developed at OKSI include the Intelligent Missile Seeker (IMS) demonstration project for real-time target/decoy discrimination, and the Thermal InfraRed Imaging Spectrometer (TIRIS) for detection and tracking of toxic plumes and gases. In addition, systems for applications in medical photodiagnosis, manufacturing technology, and for crop monitoring are also under development.

1 THE TECHNOLOGY

Spectral imaging combines the following photonic technologies: (i) conventional imaging, (ii) spectroscopy, and (iii) radiometry, to produce images in which a spectral signature is associated with each spatial resolution element (pixel). The position of spectral imaging relative to related technologies is shown in Fig. 1. Data produced by a spectral imager constitute a 3-dimensional cube with two spatial dimensions and a third, spectral dimension, as depicted in Fig. 2. Instrument-recorded values of the data cube can be converted, via proper calibration, to radiometric quantities that are related to the scene phenomenology (e.g., radiance, reflectance, emissivity, etc.). The power of the technique is that the phenomenology provides a link to spatial and spectral analytical models, spectral libraries, etc., to support various applications as discussed below.

To illustrate the strength of this technique, the combined spectral/spatial analysis allows the detection of optically unresolved objects (subpixel-size objects) in an image. Applications of this technique range from Earth remote sensing to early cancer detection. It is obvious why the combination of imaging and spectroscopy is very attractive. In general, one can improve image understanding by combining the best of two worlds: spatial and spectral analyses. The combined analysis may be considered a form of data fusion. Conventional color imagery touches upon the idea, except that color imagery is based on very broad bands and the colors are often achieved at the expense of spatial resolution (i.e., by use of RGB color-striped CCDs).

Imaging spectrometers typically use a 2-D matrix array (e.g., a CCD) and produce a 3-D data cube (2 spatial dimensions and a third spectral axis). These data cubes are built in a progressive manner either by (i) sequentially recording one full spatial image after another, each at a different wavelength, or (ii) sequentially recording one narrow image swath (1 pixel wide, multiple pixels long) after another with the corresponding spectral signature for each pixel in the swath. Some common techniques used in airborne or spaceborne applications are depicted in Fig. 3.
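
To make the cube-building concrete, here is a minimal sketch (Python with NumPy; the 20-pixel-by-64-band swath format echoes the TIRIS geometry described later, but all values here are synthetic) of assembling a pushbroom data cube from successive 1-pixel-wide swaths:

```python
import numpy as np

def assemble_pushbroom_cube(swaths):
    """Stack successive 1-pixel-wide swaths (each cross-track x bands)
    into a cube with 2 spatial axes and 1 spectral axis."""
    # Stacking along-track yields shape (along, cross, bands).
    return np.stack(swaths, axis=0)

# Simulate 100 swaths from a 20-pixel-wide, 64-band pushbroom sensor.
rng = np.random.default_rng(0)
swaths = [rng.random((20, 64)) for _ in range(100)]
cube = assemble_pushbroom_cube(swaths)
print(cube.shape)  # (100, 20, 64): along-track, cross-track, spectral
```

Frame-sequential instruments (option (i)) fill the same cube along the spectral axis instead, one full spatial image per wavelength.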

Figure 3. Some Remote Sensing Implementations of Multi- and Hyper-spectral Imagery [Multi-spectral systems a (using a few single detector/filter combinations) and b (using a few linear array/filter combinations) are based on whiskbroom and pushbroom scanning techniques, respectively; hyper-spectral systems c (using a few linear arrays in a dispersive system) and d (using an area array in a dispersive system) are based on similar scanning techniques].

Before discussing specific examples it is worth noting that multispectral techniques utilizing a small number (less than 10) of spectral bands have been available since the deployment of the first Landsat in the early 1970s. A question often asked is whether hyperspectral systems that utilize several tens or hundreds of bands are indeed better than multi-spectral systems. The answer is that often they are not. If so, why all the interest in hyperspectral techniques? The reason is that multispectral instruments and the bands they use are often tailored to a specific application. These bands may therefore be less than optimal, or even completely unsuitable, for other applications. On the other hand, hyperspectral systems have the advantage of providing data at high spectral resolution over a large number of bands, and may hence be used for a variety of applications. However, once the optimal bands have been determined for a specific application, a multi-spectral system can be deployed, as it provides an overall better solution: multispectral systems are less expensive, produce smaller datasets, and have a greater signal-to-noise ratio (S/N). The examples that follow should be considered in the light of this idea.

2 APPLICATIONS

During the past few years OKSI has designed and developed several imaging spectrometer systems for customized applications, including algorithms for data processing. Several examples are discussed below.

The objective is to detect the presence of target organic compounds in the atmosphere utilizing the midwave and longwave infrared (MWIR/LWIR) portions of the electromagnetic spectrum, which constitute the "fingerprint" spectral region of most organic compounds. The spectral range of molecular signatures is shown in Fig. 4, which also depicts the atmospheric "windows". The two sensors discussed in sections 2.1 and 2.2 cover these MWIR and LWIR windows.

Figure 4. Assignment of Molecular Spectra in the MWIR/LWIR.

The sensor under development is the Thermal InfraRed Imaging Spectrometer (TIRIS), a 7.5 to 14.0 µm imaging spectrometer designed to demonstrate operations using uncooled optics for Dual Use applications (including military target detection and civilian applications). The environmental application is particularly attractive since with an imaging spectrometer, a toxic gas plume that is monitored can be simultaneously characterized spatially and spectrally, and the measurements can be registered to a map location via geographical information systems (GIS).

TIRIS-I is based on an Aerojet PATHS 64x20 Si:As focal plane array (FPA), indium-bump multiplexed to a readout IC. The required FPA operating temperature of 10K is achieved by using a closed cycle helium cryo-cooler, selected for logistical reasons since TIRIS will be flown from various locations where liquid helium is not readily available. A prototype TIRIS-I sensor is shown in Fig. 5.

Figure 5. The TIRIS-I Sensor (OKSI/JPL Development Sponsored by DARPA/SSDC).

The TIRIS field of view is divided into 20 spatial pixels, while spectral signature data are collected in 64 bands at about 0.1 µm resolution. When airborne, full spatial/spectral image data cubes are generated in the pushbroom mode (Fig. 3d). Spectral dispersion is obtained by a combination of two elements. First, a custom linearly variable filter (LVF) was designed and manufactured to match the FPA dimensions and pixel pitch; the LVF is placed about 0.2 mm from the FPA and is cooled to the FPA temperature. Second, a custom diffraction grating is installed in the optical train with dispersion matched to the LVF spectral spread. The grating/LVF combination produces a high optical efficiency. The grating, as well as all other fore-optics, functions at room temperature; the near-field thermal emission from these components is suppressed by the LVF.

Custom electronics were built to operate the FPA and interface it to an image acquisition system. All the digital logic required to generate the necessary clocks and biases is loaded from a PROM into an FPGA upon system power-up. Data are read in 20 parallel channels, and every three channels are multiplexed to a single A/D converter; seven ADCs digitize the data at 12 bits. Each data channel is then sequentially given access to a bus connected to a digital frame grabber installed in a desktop PC. A Gray code is used to sequentially increment the rows from 1 to 64. Each row is read twice, once after incrementing the Gray code and a second time after a reset signal is applied to the row. This scheme allows software-implemented correlated double sampling or subtractive double sampling (CDS/SDS) to reduce the noise generated during the FPA readout.
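
As an illustration of the two readout ideas above, the sketch below (Python; the offsets and signal levels are invented) shows why a Gray-code row address changes only one bit per step, and how software CDS cancels a per-pixel reset offset by subtracting the post-reset read from the signal read:

```python
import numpy as np

def gray_code(n):
    """Binary-reflected Gray code of integer n: successive addresses
    differ in a single bit, reducing switching noise on address lines."""
    return n ^ (n >> 1)

# Row-address sequence for a 64-row FPA: each step toggles exactly one bit.
addresses = [gray_code(r) for r in range(64)]
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(addresses, addresses[1:]))

def cds(read_signal, read_reset):
    """Software correlated double sampling: subtracting the post-reset
    read from the signal read cancels the pixel's reset offset."""
    return read_signal - read_reset

offset = np.full(20, 37.0)          # invented per-channel reset offset
photo = np.full(20, 5.0)            # true photo-signal across 20 channels
print(cds(photo + offset, offset))  # recovers the photo-signal
```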

The TIRIS prototype is currently undergoing spectral and radiometric characterization. Based on lessons learned, a second-generation airborne TIRIS-II is being developed using a Hughes HYWAYS 64x20 Si:As FPA (Fig. 6). Among the key features of TIRIS-II, in addition to rugged construction, is a 10K filter wheel with 8 positions containing a set of filters (narrow bandpass, ND) and other devices (blanks, pinholes) that aid in the spectro-radiometric calibration. Another feature is the inclusion of two micrometer feedthroughs that facilitate fine alignment of the LVF over the FPA; this fine alignment is needed for proper positioning after the thermal contraction of various components during cool-down to 10K. Finally, the dewar includes 10K and 80K radiation shields. Electrically, all leads that carry signals to the FPA use micro-coaxial cables with the sheath grounded, which significantly reduces noise and cross-talk between digital clocks and analog signals.

Figure 6. TIRIS-II Sensor.

Two models are used in the algorithmic approach to TIRIS data analysis. The first is a phenomenology model (forward solution) that simulates and predicts the sensor response for a better understanding of its operational parameters and limitations. It is a radiative transfer model that accounts for the terrain infrared characteristics, sky radiation, and the toxic plume parameters, as depicted in Fig. 7. A plume may be observed in absorption or emission depending on its temperature contrast with the background. Two spectral libraries, one of organic compounds [3] and one of background materials [4], are used to model and predict the sensor response under a prescribed scenario; other libraries can be incorporated into the code. The model, developed in IDL (and hence able to run on any platform that supports IDL), has a convenient GUI allowing the user to experiment with various parameters (Fig. 8).

Figure 7. Radiative Exchange Model for TIRIS Use in Detection of Gas Plumes; α, ε, and ρ are the spectral absorptivity, emissivity, and reflectivity, and T is temperature.

With this model, the user can predict the sensor output under various operational conditions, such as a specific mixture of plume components (or end-members selected from the library shown at the top-left of Fig. 8) and environmental parameters such as ambient temperature, relative humidity, plume temperature, etc., as depicted on the right side of the figure. The top-right corner of the screen allows selecting a specific sensor model that includes the spectroradiometric calibration of the sensor. Finally, corrections for atmospheric transmission are calculated externally using the MODTRAN [5] code. Once all these parameters have been specified, the model calculates and plots the sensor's anticipated output in various user-selected formats on the bottom half of the screen.

Figure 8. User Interface and Phenomenology Model for the Forward Solution. The model can be used as a sensor design tool to establish performance requirements. (Sponsored by USAF Armstrong Labs).

The second model (inverse solution) uses the sensor output to determine the composition of the toxic gas plume that creates the observed signature. The inverse solution has two steps. First, the organic compounds library is scanned and reduced to a set of M potential end-members that might contribute to the signature, based on detection probability and false alarm rate considerations. In the second step, an MxM matrix is constructed in which the M compounds selected in the first step are incorporated into M linear equations in M spectral bands (M is typically smaller than the number of spectral data bands). The system provides an exact solution to the problem rather than the traditional least-squares approximation. The key to this approach is the proper selection of the M bands.
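
A minimal numerical sketch of the second step (Python/NumPy; the library spectra, band selection, and abundances are all invented for illustration) shows how M compounds observed in M well-chosen bands yield an exact linear solution rather than a least-squares fit:

```python
import numpy as np

# Hypothetical 3-compound library sampled at 64 bands (values invented).
rng = np.random.default_rng(1)
n_bands, M = 64, 3
library = rng.random((n_bands, M))      # columns: end-member spectra

# True mixture and the resulting observed signature.
abundances = np.array([0.5, 0.3, 0.2])
observed = library @ abundances

# Pick M bands and solve the MxM system exactly (no residual).
# The band choice is the key step; these indices are illustrative only.
bands = [5, 30, 55]
A = library[bands, :]                   # MxM matrix
x = np.linalg.solve(A, observed[bands])
print(np.round(x, 6))                   # recovers [0.5, 0.3, 0.2]
```

With noisy data the quality of the answer hinges on how well-conditioned the MxM matrix is, which is why band selection matters so much.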

This process essentially constitutes a linear subpixel unmixing model and has applications to a wide variety of situations, including medical diagnosis, crop health monitoring, mineral prospecting, and search and rescue operations (some are discussed in the sequel).

The purpose of this sensor was to explore the application of spectral imaging in a real-time setting: an air-to-air or surface-to-air missile seeker [6,7]. As infrared countermeasures and decoys grow more effective, air-to-air or SAM seekers must acquire more intelligence to discriminate between their target and the decoys. Over the years, in an attempt to foil the evolution of countermeasures, missile seekers evolved from a single "heat seeking" detector to ratio (two-band) seekers, three-band seekers, and finally to imaging seekers. The original heat-seeker devices utilized radiometric target signatures in a preselected spectral band to detect and track a target, while the later two- or three-band seekers utilized more sophisticated spectroradiometric analysis to this end. Modern imaging seekers utilize spatial or geometric target characteristics. The question, of course, is whether the combination of imaging and spectroscopy can provide more robust target identification and discrimination.

Since targets and decoys can be hot objects, the IMS imaging spectrometer concept-evaluation sensor was built to collect radiation in the VNIR and MWIR. A schematic of the optical layout of the sensor is shown in Fig. 9. The sensor has a common 6" Cassegrain telescope with a dichroic beam splitter. The VNIR signals are reflected into a 256x256 CCD-based (manufactured by Dalsa) imaging spectrometer operating from 500 to 1,000 nm, while the IR radiation is passed into a 160x120 InSb array (Cincinnati Electronics) operating between 2.5 and 5.0 µm. Both spectrometers have an adjustable-width entrance slit that determines the spectral resolution of the sensor (which is not necessarily determined by the number of pixels in the FPA along the dispersion direction).

Figure 9: Optical Layout of VNIR and MWIR Imaging Spectrometers of the IMS Sensor, and a Photograph of the Assembly (Sponsored by USAF Wright Labs).

To demonstrate the format of data recording, we show in Fig. 10 a "frame" of data captured during the calibration process. The data represent spectral and spatial information along the x and y image coordinates, respectively (the black region in the figure). The field of view is a narrow swath that includes a point-source Hg(Xe) lamp. The y-position determines the lamp elevation above or below the horizontal plane, while the x-position indicates the spectral signature of that source. To create a complete 3-D data cube, the field of view has to be scanned. The surface plot in Fig. 10 represents the relative intensity of the source at each wavelength and the spatial extent of the source.

An MWIR signature of a passenger jet on takeoff, looking directly into the back of the plane from about 1 km, is shown in Fig. 11. The signature closely matches the theoretical spectral distribution of a 1,100K source. The absorption of radiation at 2.7 and 4.3 µm, due to H2O and CO2 in the intervening atmosphere, can also be seen. Because of the presence of these two gases in the engine plume, where the temperature is much higher than that of the surrounding atmosphere, the so-called "red and blue spikes" (due to line broadening in the hot plume gases) are also seen in the graph on the right.

Figure 11. Signatures of an Airliner's Exhaust Plume Obtained via the MWIR Part of IMS.

The greatest challenge for the IMS application is the signal processing [8]. Since the missile guidance and control system must be updated at about 50 Hz, the entire process of data acquisition and analysis must be repeatedly performed in less than 20 msec. One of the techniques developed for this purpose was termed the "pseudo-signature." Basically, instead of analyzing large data sets corresponding to the entire measured signature, only a subset, or pseudo-spectrum, is analyzed. This pseudo-spectrum is a signature in a small number of bands (e.g., between 4 and 16, depending on the complexity of the situation) created (i) by removing bands in which atmospheric transmission is poor (attempts to correct the data in those bands are impractical), and then (ii) by binning the remaining bands into wider groups (to enhance signal-to-noise ratio). The latter step does not require the same number of binned bands per group, nor do the binned bands have to be contiguous. Such a reduced-size dataset can now be analyzed in real time.
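
The band-removal-plus-binning idea can be sketched as follows (Python; the band indices, "opaque" atmospheric window, and groupings are arbitrary illustrations, not the IMS values):

```python
import numpy as np

def pseudo_signature(spectrum, good_bands, groups):
    """Reduce a full spectrum to a few binned bands: (i) keep only bands
    with adequate atmospheric transmission, then (ii) average each group.
    Groups may differ in size and need not be contiguous."""
    usable = spectrum[good_bands]
    return np.array([usable[list(g)].mean() for g in groups])

spectrum = np.linspace(1.0, 64.0, 64)                    # a 64-band signature
good = [i for i in range(64) if i not in range(20, 28)]  # drop an opaque window
groups = [range(0, 10), range(10, 20), range(20, 40), range(40, 56)]
ps = pseudo_signature(spectrum, good, groups)
print(ps.shape)  # (4,): a 4-band pseudo-signature, cheap to analyze per frame
```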

The pseudo signatures are transformed into a domain in which maximum separation exists between the signatures of target and decoy. The linear transformation is based upon the simultaneous diagonalization of the covariance matrices of the signatures corresponding to the targets and decoys. Target/decoy discrimination is accomplished in the transformed domain using for instance matched filters (see section 3).

The IMS project demonstrated the use of a multispectral seeker. It showed that with a hyperspectral system the bands can be dynamically configured for a specific situation to optimize performance (for instance, day and night band selections may differ, as when sun glare off the background or the target is absent).

An added side benefit of a multi- or hyper-spectral seeker is target ranging. The spectral signature in certain bands is strongly modified by the presence of CO2 in the atmosphere (carbon dioxide is selected because its atmospheric concentration is very nearly constant). The observed signature depends on the range L (through attenuation by the path-integrated amount of intervening CO2), the wavelength λ, the extinction coefficient κ(λ), and the target temperature T. The unknowns in Eq. (1) are the target range, the temperature, and a factor F that is the product of the emissivity, the target area, and a view factor (assuming that the emissivity is not a strong function of wavelength, this parameter can also be considered constant). In this equation, BB is the Planck blackbody function. Using the measured values S(λk) in k bands, a constrained least-squares solution can be obtained for the target range L.

S(λk) = F · BB(λk, T) · exp(−κ(λk) · L),  k = 1, …, K    (1)
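
A sketch of how such a model can be inverted for range (Python with SciPy; the band set, extinction coefficients, and target parameters are invented, and the radiation constants are for spectral exitance in W/m²/µm with wavelengths in µm):

```python
import numpy as np
from scipy.optimize import least_squares

C1, C2 = 3.7418e8, 1.4388e4   # radiation constants (W·µm^4/m^2 and µm·K)

def planck(lam_um, T):
    """Planck blackbody function BB(lambda, T)."""
    return C1 / lam_um**5 / (np.exp(C2 / (lam_um * T)) - 1.0)

def model(p, lam, kappa):
    F, T, L = p   # emissivity-area-view factor, temperature [K], range [km]
    return F * planck(lam, T) * np.exp(-kappa * L)

# Synthetic "measurement" in 8 bands near the 4.3 µm CO2 band (invented values).
lam = np.linspace(4.1, 4.6, 8)
kappa = np.array([0.05, 0.2, 0.9, 1.5, 1.2, 0.6, 0.15, 0.04])  # per-km extinction
true = (2.0e-4, 1100.0, 3.0)
meas = model(true, lam, kappa)

# Constrained least squares: the bounds keep F, T, and L physical.
fit = least_squares(lambda p: model(p, lam, kappa) - meas,
                    x0=(1.0e-4, 900.0, 1.0),
                    bounds=([0.0, 300.0, 0.0], [1.0, 3000.0, 100.0]),
                    x_scale=(1.0e-4, 1.0e3, 1.0))
print(np.round(fit.x, 3))  # fitted (F, T, L)
```

With noiseless data and band-to-band variation in κ, the fit should recover the range; real data would require MODTRAN-style atmospheric corrections first.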

2.3 OTHER APPLICATIONS

2.3.1 Crop Health Monitoring

Early detection of infestation in crops can lead to eradication using local treatment. Such early detection by ground observers is impractical in very large fields; by the time an infestation is detected, large areas will already have been affected. Late-stage detection not only causes large financial damage, but also requires aerial-spraying-based eradication and the use of chemicals that are harmful to the environment. Under a USDA program, OKSI has developed a conceptual design of an airborne multispectral sensor comprising 4 digital video cameras, each equipped with a wide band-pass filter. The spectral ranges are selected based on extensive ground measurements of healthy and infested crops. The use of only a few bands is possible, as discussed earlier, when a system is designed for a specific target application. If more than a few bands were required, a more efficient approach would be some sort of scanning or staring imaging spectrometer, as depicted in Fig. 3. The system will be flown on a small plane.

In the present system, four cameras acquire an image and the data are simultaneously transmitted to a computer via a frame grabber. Using the PCI bus (a 32-bit bus running at 33 MHz, with the specification also allowing a 64-bit bus at 66 MHz) and a DMA bus-mastering mode, it is now possible to transmit data to host memory at rates as high as 132 MBytes/sec in burst mode. For four 12-bit cameras, 512x512 pixels each, running at 10 MHz, the required transfer bandwidth is about 60 MBytes/sec, well within the PCI range. PCI frame grabbers that perform this function are less expensive than their predecessors, which required large amounts of on-board data storage and provided built-in, expensive DSP functions. OKSI, in conjunction with DIPIX (a frame grabber manufacturer, Ontario, Canada) and Dalsa (a CCD camera manufacturer, Ontario, Canada), has tested a system that allows connecting two digital cameras to a single frame grabber. The two cameras are then pixel-synchronized, not only in readout speed but also in pixel registration on each camera.
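
The bandwidth figure above is easy to verify (Python; the calculation assumes the 12-bit pixels are packed, not padded to 16 bits):

```python
# Back-of-the-envelope check of the PCI bandwidth claim: four 512x512,
# 12-bit cameras, each clocked at 10 MHz (pixels per second per camera).
cameras, pixel_rate_hz, bits_per_pixel = 4, 10e6, 12
mbytes_per_sec = cameras * pixel_rate_hz * bits_per_pixel / 8 / 1e6
print(mbytes_per_sec)  # 60.0 -- well under PCI's 132 MBytes/sec burst rate
```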

An effective early detection technique requires in this case accurate subpixel unmixing analysis. A linear unmixing method was described earlier in connection with plumes containing gas mixtures. Such a linear technique is the first step in any analysis. Because of the specific conditions for radiative exchange, it is believed that non-linear effects may play a role in the measured signatures. Non-linear effects are caused by multiple scattering of photons off objects that have different spectral characteristics, as depicted in Fig. 12. Nonlinear analysis techniques are under development.

Figure 12. Non-linear Mixing Model Based on Two Photon Reflections (A and B are Singly Reflected Photons, C Photons have Characteristics of A and B).

2.3.2 Advanced Manufacturing Technology

Another system currently under construction at OKSI is a stereoscopic hyperspectral camera system for use in the advanced manufacturing technology group of a large motor company. As in the previous application, this system is based on two 12-bit cameras that are pixel-synchronized and read simultaneously into a computer via a frame grabber. The spectral bands are selected using liquid crystal tunable filters (LCTF) (manufactured by CRI, Cambridge, MA) placed in front of each of the cameras (Fig. 13). The LCTFs are controlled via computer and can be stepped through a series of wavelengths or switched at random to any desired wavelength.

Figure 13. LCTF-Based Hyperspectral Imaging Stereovision System.

There are many manufacturing processes that can take advantage of multi- or hyper-spectral data. These include processes such as inspection of color and paint quality, detection of rust, or the inspection of defects in thin film coatings. Another potential application is the on-line, real-time inspection of weld quality. In this case the time evolution of spectroradiometric data can be translated to temperature maps and heat transfer processes in the welded parts. Heat flow maps can be correlated with good or cold welds. Real time detection of such defects can be used to correct the process before too many defective parts leave the production line.

2.3.3 Medical Applications

One of the more fascinating applications of imaging spectroscopy is real-time systems used during cancer surgery to delineate remaining traces of malignant cells. Such an application may work in an active mode in which the biological tissue is illuminated at wavelengths that cause the tissue to fluoresce; typically this involves excitation in the UV or blue-green range of the spectrum, with fluorescence measured at longer wavelengths. The fluorescence signatures of benign and malignant cells have been shown to differ sufficiently to allow discrimination [9]. The problem then becomes one of detecting small signals embedded in poor S/N and poor signal-to-background-clutter conditions. In addition to tissue autofluorescence, other techniques for spectral photodiagnosis are also being investigated, including the use of photosensitizers [10] and elastic photon scattering [11]. Most of these methods, often referred to as optical biopsy, are based on non-imaging measurements, but they could be extended, using the right equipment, to multi- or hyperspectral imaging.

Another interesting study utilizing hyperspectral techniques, conducted by UCLA/JPL, is related to functional mapping of the brain. The idea is to identify areas of the brain that perform specific functions without doing histopathology on every slice (Fig. 14).

2.3.4 Small Target Detection & Search and Rescue Operations

Often there is no need to identify a specific target; the need is only to detect the presence of a target of a-priori unknown characteristics. Search and rescue operations are an excellent example: a lost hiker in the mountains or a boat at sea must be rapidly detected. Rather than conducting an extensive analysis of hyperspectral data cubes, a time-consuming process, a cursory technique based on "anomaly detection" has been devised. In anomaly detection, an image is segmented into small subimages that are relatively homogeneous. The problem then becomes one of detecting pixels in the subimage region that are sufficiently "different" from their "homogeneous" neighbors. Since targets of interest may often be subpixel in size, the conventional approach would require performing complete subpixel unmixing, which is computationally expensive and requires extensive spectral libraries of all possible materials in the image. The anomaly detection method removes the need for such libraries of all potential end-members; it is designed to alert the operator to the presence of suspect pixels for further investigation.
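
One common way to score "different from the homogeneous neighbors" is a Mahalanobis-distance (RX-style) detector; the article does not specify OKSI's exact method, so the following Python sketch is only an illustration of the general idea, with all data synthetic:

```python
import numpy as np

def anomaly_scores(subimage):
    """Mahalanobis-distance anomaly score per pixel of a roughly
    homogeneous subimage: no end-member library is needed, only the
    local background mean and covariance."""
    pixels = subimage.reshape(-1, subimage.shape[-1])        # (npix, bands)
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(pixels.shape[1])
    inv = np.linalg.inv(cov)
    d = pixels - mu
    return np.einsum("ij,jk,ik->i", d, inv, d).reshape(subimage.shape[:2])

rng = np.random.default_rng(2)
sub = rng.normal(1.0, 0.05, size=(16, 16, 8))   # homogeneous 8-band background
sub[8, 8] += 0.5                                 # a small spectral intruder
scores = anomaly_scores(sub)
print(np.unravel_index(scores.argmax(), scores.shape))  # (8, 8)
```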

3 CLASSIFICATION TECHNIQUES

[Major contributions to this work were made by Dr. Jacob Barhen of ORNL] Supervised classification of multi- and hyper-spectral imagery is a robust technique that can be applied in almost all the fields discussed above. Customarily, objects' signatures are represented in feature space, which in our case is an N-dimensional space where N is the number of spectral bands (Fig. 15). Most classification algorithms operate in the feature space and are primarily based on statistical techniques, including clustering algorithms. In contrast, neural network (NN) based classifiers may have the advantages that (i) they are distribution-free (i.e., they make no assumption regarding the statistical distribution function of the data), (ii) they are non-linear, and (iii) they do not require a phenomenology-based model to describe the data distribution but can learn by example (as such, they can easily identify classes with disjoint distributions). In spite of these potential advantages, NNs are not frequently used in hyperspectral image classification. The reasons are (a) the inordinately long training times associated with backpropagation-type algorithms on large data sets, and (b) the dependence of the final results on the starting values assigned to the NN weights. The latter problem is related to the fact that NN training algorithms often get trapped in local minima rather than finding the global minimum of the error function. OKSI has addressed some of these issues with excellent results.

Figure 15. Clustering Pixels in N-D Feature Domain

3.1 CLASS SEPARABILITY ANALYSIS

A technique based on the Generalized Eigenvalue Problem (GEP [13,14]) was developed for transforming hyperspectral data to a domain in which maximum separability exists between signature classes, reducing the dimensionality of data sets by better than an order of magnitude and improving classification performance. The GEP technique, as opposed to conventional principal components analysis (PCA), accomplishes two goals: (i) increasing the separation between classes, and (ii) reducing within-class scattering. These concepts are illustrated in 2-D in Fig. 16.
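
A sketch of the underlying computation (Python with SciPy; the scatter-matrix definitions follow the standard Fisher/GEP formulation, which may differ in detail from the implementation described here, and the three classes are synthetic):

```python
import numpy as np
from scipy.linalg import eigh

def gep_axes(classes):
    """Solve Sb v = lambda Sw v, with Sb the between-class and Sw the
    within-class scatter matrix. Large-lambda eigenvectors increase class
    separation while shrinking within-class spread; for C classes at most
    C-1 eigenvalues are non-zero, which is the source of the
    order-of-magnitude dimensionality reduction."""
    mu = np.array([c.mean(axis=0) for c in classes])
    grand = np.concatenate(classes).mean(axis=0)
    Sb = sum(len(c) * np.outer(m - grand, m - grand) for c, m in zip(classes, mu))
    Sw = sum(np.cov(c, rowvar=False) * (len(c) - 1) for c in classes)
    w, V = eigh(Sb, Sw + 1e-9 * np.eye(Sw.shape[0]))  # ascending eigenvalues
    return w[::-1], V[:, ::-1]                         # descending order

rng = np.random.default_rng(3)
bands = 32
means = [rng.normal(0.0, 1.0, bands) for _ in range(3)]
classes = [m + rng.normal(0.0, 0.1, (200, bands)) for m in means]
w, V = gep_axes(classes)
print(int((w > 1.0).sum()))  # 2: three classes span only two separating axes
```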

In general, PCA is not guaranteed to produce optimal separability among all classes in the image. The largest eigenvector in PCA defines an axis along which the entire image has maximum variance, but there is no guarantee that the variance is maximized along that eigenvector for each individual class. Hence, instead of performing PCA on an entire image, doing so with training data for specific classes may produce better results.

Figure 16. Signatures in the Spectral (Physical) Domain (A); in the Feature Domain (B); Transformed for Maximum Separability (C). (Work Sponsored by U.S. DOC/NIST and by NASA GSFC)

One technique to quantify class separability is based on the Spectral Angle Mapper (SAM), which measures the "angle" between the two vectors that point to the centers of the two clusters (Fig. 17). Another is based on the Fisher linear discriminant, a measure of the "distance" between classes normalized by the spread within the classes (also illustrated in Fig. 17).
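
Both separability measures are simple to compute; the sketch below (Python; all data synthetic) implements SAM as a vector angle and a 1-D Fisher ratio:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral Angle Mapper: angle (radians) between two signature
    vectors; insensitive to an overall illumination scaling."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def fisher_ratio(x1, x2):
    """1-D Fisher discriminant: squared between-class distance
    normalized by the within-class spread."""
    return (x1.mean() - x2.mean()) ** 2 / (x1.var() + x2.var())

sig = np.array([1.0, 2.0, 3.0])
print(spectral_angle(sig, 5.0 * sig))   # ~0.0: same direction, just scaled
rng = np.random.default_rng(4)
a, b = rng.normal(0, 1, 500), rng.normal(4, 1, 500)
print(round(fisher_ratio(a, b), 1))     # roughly 8: well-separated classes
```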

3.1.1 Hyperspectral Data Compression

The dimensionality of the problem is determined by the number of non-zero eigenvalues of the transformation matrix, which in turn depends on the number of primary classes in the image (as opposed to the original data, whose dimensionality equals the number of spectral bands of the sensor). This technique can also serve as a basis for lossless (or lossy) data compression for use in hyperspectral data transmission across networks or telemetry from satellites.

3.1.2 Hyperspectral Image Filtering

This aspect will be discussed in an upcoming issue of Hyperspectrum.

3.2 NEURAL NETWORK BASED SUPERVISED CLASSIFICATION

A subnetwork technique was devised for performing hyperspectral image classification (in the transformed domain). Analogous to matched filters, each subnetwork is trained to identify one class and reject all other existing classes -- a process that significantly speeds up and improves classification performance, and is fully amenable to parallel processing with no additional programming overhead!

A novel world-class NN training paradigm based on alternating direction singular value decomposition (AD-SVD) [15] was developed and demonstrated, reducing the training time for large data sets with a large number of input nodes to a fraction of a second. The network topology is illustrated in Fig. 18. The AD-SVD allows determining the synaptic weights of the NN, typically in a single pass; a comparable backpropagation network required 10 to 15 minutes of training. An example of the application of the NN classification to a hyperspectral image of a fresco is shown in Fig. 19.
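
The AD-SVD algorithm itself is not spelled out in this article; as a stand-in, the sketch below (Python/NumPy, synthetic data) only illustrates the general flavor of single-pass, SVD-based weight determination for one-vs-rest "subnetwork" classifiers via a pseudoinverse solve:

```python
import numpy as np

# NOT the AD-SVD algorithm of [15]: an illustrative single-pass solve in
# which each linear "subnetwork" (one column of W) is trained to accept
# one class and reject the others, with weights found by one SVD-based
# pseudoinverse instead of iterative backpropagation.
rng = np.random.default_rng(5)
n_classes, n_bands, n_per = 4, 16, 100
means = rng.normal(0.0, 1.0, (n_classes, n_bands))
X = np.concatenate([rng.normal(0.0, 0.1, (n_per, n_bands)) + m for m in means])
labels = np.repeat(np.arange(n_classes), n_per)

Xb = np.hstack([X, np.ones((len(X), 1))])      # append a bias column
T = np.where(labels[:, None] == np.arange(n_classes), 1.0, -1.0)
W = np.linalg.pinv(Xb) @ T                     # single SVD-based pass

pred = (Xb @ W).argmax(axis=1)                 # each column acts as one subnetwork
print((pred == labels).mean())                 # 1.0 on this easy synthetic set
```

The one-vs-rest structure also shows why the subnetwork scheme parallelizes trivially: each column of W is computed and evaluated independently.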

A recurrent NN with an enhanced learning algorithm has been adapted, based on prior work at JPL, to address hyperspectral analysis. This fully connected network, configured as a system of subnetworks, was shown to perform very well on the benchmarking datasets, opening the way to using hyperspectral data in dynamic, time-dependent applications such as monitoring seasonal change, deforestation, soil erosion, etc.

3.3 ALGORITHM BENCHMARKING

A special hyperspectral classification benchmarking technique was developed, based on "synthetic signatures" and various "noise" and "clutter" models. The synthetic data were constructed to create classes that are either unique or very similar to other classes in order to assess the performance of the classification algorithms. The use of synthetic signatures provided the utmost certainty in validating results (which is practically impossible with remotely sensed imagery due to the lack of sufficient and reliable ground truth). An example of a synthetic data cube is shown in Fig. 20, for which the mean signature of each class in the spectral domain and in the transform domain are plotted in Fig. 21. The latter figure illustrates how only a few non-zero eigenvalues are required to characterize the data. In most cases, classification in the transform domain produced much faster and more accurate results.
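
The benchmarking idea can be illustrated as follows (Python; the class construction, noise level, and nearest-mean scorer are invented stand-ins for the actual synthetic-signature and clutter models):

```python
import numpy as np

# Synthetic-signature benchmark: classes with exactly known ground truth,
# two of them deliberately near-identical, so classifier accuracy can be
# scored per class -- impossible with real scenes lacking ground truth.
rng = np.random.default_rng(6)
bands, n = 32, 300
base = rng.normal(0.0, 1.0, bands)
class_means = [base, base + 0.02, rng.normal(0.0, 1.0, bands)]  # twins + distinct
cube = np.stack([m + rng.normal(0.0, 0.05, (n, bands)) for m in class_means])
truth = np.repeat(np.arange(3), n)

# Score a simple nearest-mean classifier against the known truth.
pixels = cube.reshape(-1, bands)
d = np.linalg.norm(pixels[:, None, :] - np.array(class_means)[None], axis=2)
pred = d.argmin(axis=1)
conf = np.array([[np.mean(pred[truth == i] == j) for j in range(3)] for i in range(3)])
print(np.round(conf.diagonal(), 2))  # near-twins get confused; distinct class ~1.0
```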

Work was also conducted on subpixel spectral unmixing, in which the dominant component in an unknown mixture is identified using the classification algorithms.

4 SUMMARY & OUTLOOK

As hyperspectral imaging techniques evolve and are introduced into new fields, their primary contribution will be in exploring and developing new applications, primarily via the selection of optimal spectral band parameters (band positions and widths). In most applications, practical operational considerations (sensor cost, data volume, data processing costs, etc.) will favor the use of multi-spectral systems. Hyperspectral sensors will find use in general-purpose instruments such as spaceborne systems that provide data to a broad range of end-users (agriculture, mineralogy, etc.). It will be the responsibility of the value-added data provider, or of the end-users, to extract the useful data and reject the voluminous data that not only contribute nothing to the application but may cause enormous confusion.