Bladder cancer is the fifth leading cause of cancer deaths in the United States. Virtual cystoscopy (VC) can serve as a screening means
for early detection of the cancer using non-invasive imaging and computer graphics technologies. Previous research
has mainly focused on spiral CT (computed tomography), which invasively introduces air into the bladder lumen via a small
catheter to provide contrast against the bladder wall. However, the tissue contrast around the bladder wall is still limited in CT-based
VC. In addition, the CT-based technique carries additional radiation exposure. We have investigated a procedure to achieve the
screening task by MRI (magnetic resonance imaging). It exploits two unique features of MRI: (1) urine has distinct T1
and T2 relaxation times compared to its surrounding tissues, and (2) MRI has the potential to obtain good tissue
contrast around the bladder wall. The procedure is fully non-invasive and easy to implement. In this paper, we propose
an MRI-based VC system for computer-aided detection (CAD) of bladder tumors. The proposed VC system
integrates partial volume-based segmentation incorporating texture information with fast marching-based CAD
employing geometrical features for the detection of bladder tumors. The accuracy and efficiency of the integrated VC
system are evaluated by testing its diagnoses against a database of patients.

With the development of computer-aided polyp detection toward virtual colonoscopy screening, the trade-off between
detection sensitivity and specificity has gained increasing attention. An optimal detection, with the fewest false
positives and the highest true-positive rate, is desirable and involves interdisciplinary knowledge such as feature extraction,
feature selection, and machine learning. Toward that goal, various geometrical and textural features, associated
with each suspicious polyp candidate, have been individually extracted and stacked together as a feature vector.
However, directly inputting these high-dimensional feature vectors into a learning machine, e.g., a neural network, for
polyp detection may introduce redundant information due to feature correlation and induce the curse of dimensionality.
In this paper, we explored an indispensable building block of computer-aided polyp detection, i.e., principal component
analysis (PCA)-weighted feature selection for a neural network classifier of true and false positives. The major concepts
proposed in this paper include (1) the use of PCA to reduce feature correlation, (2) a scheme of adaptively
weighting each principal component (PC) by its associated eigenvalue, and (3) the selection of feature combinations via
a genetic algorithm. As such, the eigenvalue is also taken as part of the characterizing feature, and the necessary
number of features can be exposed to mitigate the curse of dimensionality. Trained and tested with a radial basis neural
network, the proposed computer-aided polyp detection achieved 95% sensitivity at a cost of 2.99 false
positives per polyp on average.
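A minimal sketch of the PCA step and the eigenvalue-weighting idea described above, using a synthetic feature matrix (the candidate count, feature count, and the variance cutoff standing in for the genetic-algorithm search are all illustrative assumptions, not the paper's actual configuration):

```python
import numpy as np

# Hypothetical feature matrix: 200 polyp candidates x 20 geometric/texture features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=200)  # inject strong feature correlation

# Step 1: PCA decorrelates the features.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]            # sort PCs by decreasing eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Step 2: project, then weight each principal component by its eigenvalue,
# so the eigenvalue itself becomes part of the characterizing feature.
scores = Xc @ eigvecs
weighted = scores * eigvals                  # adaptive eigenvalue weighting

# Step 3 (stand-in): the paper searches feature combinations with a genetic
# algorithm; here we simply keep the components explaining 95% of the variance.
explained = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(explained, 0.95) + 1)
reduced = weighted[:, :k]
```

The reduced matrix would then be fed to the radial basis network in place of the raw, correlated feature vectors.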

As a non-invasive bladder tumor screening approach, magnetic resonance imaging (MRI)-based virtual cystoscopy
(VCys) has received increasing attention for its better soft-tissue contrast compared to computed tomography (CT)-based
VCys. In this paper, preliminary work on segmenting the inner boundary of the bladder wall from both T1- and T2-weighted
MR bladder images is presented. Via an iterative maximum a posteriori expectation-maximization (MAP-EM)
approach, the tissue mixture fractions inside each voxel were estimated. Considering the partial volume effect
(PVE) from which MR images suffer, the advantages of such a mixture-based segmentation approach are (1) a statistics-based
tissue mixture model that shapes each tissue type as a normally distributed random variable, (2) a closed-form mathematical
MAP-EM iterative solution, and (3) the capability and efficiency of the estimated tissue mixture fractions in reflecting the PVE.
Given the extracted inner bladder wall, further manipulation could be performed, for each individual voxel located on the
inner bladder wall, to identify the outer bladder wall prior to measuring the wall thickness. Beyond
geometrical analysis, accounting for the PVE when studying early-stage abnormalities of the mucosa within the scope of
VCys is believed to provide additional textural information for distinguishing surface deformations due to bladder
tumors from neighboring artifacts.
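The mixture-fraction idea can be illustrated with a much-simplified one-dimensional sketch. The alternation below is only an EM-flavored stand-in for the full closed-form MAP-EM solution, and all intensities, tissue names, and initial guesses are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 1-D voxel intensities: pure "urine" voxels, pure "wall" voxels,
# and partial-volume voxels mixing the two; all values are toy assumptions.
f_true = np.concatenate([np.ones(300), np.zeros(300), rng.uniform(0, 1, 100)])
mu_urine, mu_wall, sigma = 200.0, 80.0, 10.0
v = f_true * mu_urine + (1 - f_true) * mu_wall + rng.normal(0, sigma, f_true.size)

# EM-flavored alternation: estimate per-voxel mixture fractions given the
# tissue means, then re-estimate the means from the nearly-pure voxels.
mu1, mu2 = 150.0, 100.0                    # rough initial guesses (assumed)
for _ in range(50):
    f = np.clip((v - mu2) / (mu1 - mu2), 0.0, 1.0)   # least-squares fraction
    mu1 = v[f > 0.9].mean()                # voxels dominated by tissue 1
    mu2 = v[f < 0.1].mean()                # voxels dominated by tissue 2
f = np.clip((v - mu2) / (mu1 - mu2), 0.0, 1.0)

err = np.abs(f - f_true).mean()            # mean fraction-estimation error
```

In the actual framework each tissue is a normally distributed random variable and the fractions fall out of the MAP-EM iteration; the point of the sketch is only that partial-volume voxels get continuous fractions rather than hard labels.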

Computed tomography (CT) has been well established as a diagnostic tool through hardware optimization and
sophisticated data calibration. For screening purposes, the associated X-ray exposure risk must be minimized. An
effective way to minimize the risk is to deliver fewer X-rays to the subject, i.e., to lower the mAs parameter in data
acquisition; however, this increases the data noise. This work aims to study the noise properties of the calibrated or
preprocessed sinogram data in Radon space as the mAs level decreases. An anthropomorphic torso phantom was
scanned repeatedly by a commercial CT imager at five different mAs levels, from 100 down to 17 (the lowest value
provided by the scanner). The preprocessed sinogram datasets were exported from the CT scanner to a laboratory
computer for noise analysis. The repeated measurements at each mAs level were used to test the normality of the
repeatedly measured samples for each data channel using the Shapiro-Wilk statistical test. We further studied the
probability distribution of the repeated measurements. Most importantly, we validated a theoretical relationship between the
sample mean and variance at each channel. It is our intention that the statistical test, and particularly the relationship
between the first and second statistical moments, will improve low-dose CT image reconstruction for screening
applications.
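The moment analysis above can be sketched on simulated data. The exponential mean-variance form used below is the one reported in related low-dose CT noise studies and is an assumption of the simulation, as are the channel count, repeat count, and parameters a and b:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-in for repeated sinogram measurements: for each detector
# channel, draw `repeats` samples whose variance grows exponentially with the
# channel's mean value (assumed model: sigma^2 = a * exp(mu / b)).
n_channels, repeats = 500, 100
mu = rng.uniform(0.5, 4.0, n_channels)      # per-channel mean sinogram value
a, b = 1e-4, 1.0                            # assumed scale and decay parameters
sigma2 = a * np.exp(mu / b)
data = mu[:, None] + rng.normal(0.0, np.sqrt(sigma2)[:, None], (n_channels, repeats))

# Estimate the first and second moments per channel from the repeats.
m = data.mean(axis=1)
v = data.var(axis=1, ddof=1)

# Fit log(variance) = log(a) + mean / b by linear least squares.
A = np.column_stack([np.ones(n_channels), m])
coef, *_ = np.linalg.lstsq(A, np.log(v), rcond=None)
a_hat, b_hat = np.exp(coef[0]), 1.0 / coef[1]
```

With real phantom scans, m and v would come from the repeated acquisitions at each mAs level instead of the simulator.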

Computed tomography-based virtual colonoscopy, or CT colonography (CTC), currently utilizes an oral contrast solution to
differentiate colonic fluid and possibly residual stool from the colon wall. The enhanced image density of the tagged
colonic materials causes a significant partial volume (PV) effect in both the colon wall and the lumen space (air or
CO2). The PV effect in the colon wall can "bury" small polyps by raising their image densities to a
noticeable level, resulting in false negatives; it can also create false positives when the PV effect extends into the lumen space.
Modeling the PV effect for mixture-based image segmentation has been a research topic for many years. This paper
presents a practical implementation of our newly developed statistical image segmentation framework, which utilizes
the EM (expectation-maximization) algorithm to estimate (1) the tissue fractions in each image voxel and (2) the statistical
model parameters of the image under the principle of maximum a posteriori probability (MAP). This partial-volume
expectation-maximization (PV-EM) mixture-based MAP image segmentation pipeline was tested on 52 CTC datasets
downloaded from the website of the VC Screening Resource Center, with each dataset consisting of a supine
and a prone scan, resulting in 104 CT volume images. The lumens cleansed by the automated PV-EM image
segmentation algorithm were visualized in comparison with our previous work, with gains achieved mainly in the
following three aspects: (1) the tissue fraction information of voxels with PV effect has been well preserved, (2)
the problem of incomplete cleansing of tagged materials in our previous work has been mitigated, and (3) the
interference caused by the small bowel was significantly reduced.
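A crude one-dimensional sketch of the cleansing idea, preserving tissue fractions at transitions rather than hard-thresholding. The intensity values, region sizes, and the linear fraction estimate are all toy assumptions standing in for the MAP-EM estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical intensity profile across the colon: air lumen, a thin colon
# wall, then bright contrast-tagged fluid (HU-like values, illustrative only).
air, wall, tagged = -1000.0, 0.0, 800.0
profile = np.concatenate([
    np.full(40, air), np.linspace(air, wall, 5), np.full(10, wall),
    np.linspace(wall, tagged, 5), np.full(40, tagged)
]) + rng.normal(0, 15, 100)

# Model-based cleansing: voxels classified as tagged material are replaced by
# air, while transition voxels keep only their estimated soft-tissue fraction
# (a crude linear stand-in for the mixture fractions from PV-EM).
frac_tagged = np.clip((profile - wall) / (tagged - wall), 0.0, 1.0)
cleansed = profile * (1 - frac_tagged) + air * frac_tagged
```

Hard-thresholding the same profile would carve a step into the wall-to-fluid transition; keeping the fractional contribution is what avoids "burying" or deleting thin structures.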

Virtual colonoscopy has been developed as a non-invasive, safe, and low-cost method to evaluate colon polyps.
Implementation and efficiency of virtual colonoscopy require rigorous cleansing of the colon prior to the examination.
Electronic colon cleansing is a new technology that virtually cleans contrast-tagged stool residues from the
acquired computed tomography (CT) images. From our previous studies on electronic colon cleansing, we found that
residual stool and fluid are often problematic for optimal viewing of the colon. In this paper, we focus on developing a
model-based approach to correct both the non-uniformity and partial volume effects appearing in regions of bone and tagged
stool residues. A statistical maximum a posteriori (MAP) method was developed to identify and virtually
clean the tagged stool residues. In computing the solution, the well-known expectation-maximization (EM) algorithm is
employed. Experimental results of electronic colon cleansing are promising.

This paper presents a band selection technique for spectral feature characterization in hyperspectral data, referred to as
band selection for hyperspectral signature feature characterization (BSHSFC). Since a hyperspectral signature is
characterized by its spectral profile, the number of bands to be selected is determined entirely by the spectral features that
uniquely characterize the signature. As a result, two hyperspectral signatures may require different sets of bands for
spectral characterization. The proposed BSHSFC is a variable-size variable-band selection (VSVBS) in which the number
of selected bands varies with the hyperspectral signature to be processed. In order for BSHSFC to select an appropriate set
of bands for a hyperspectral signature, a new band prioritization criterion, referred to as the orthogonal subspace
projection-based band prioritization criterion (OSP-BPC), is derived for this purpose. It assigns a different priority score to each
spectral band of a hyperspectral signature such that various features can be captured by the BSHSFC from the original
set of bands. Accordingly, the BSHSFC can be interpreted as a spectral feature extraction technique for signature
characterization. Finally, experiments using two sets of data are conducted to demonstrate that BSHSFC-based band selection
can improve the performance of hyperspectral signature characterization.
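The orthogonal subspace projector underlying an OSP-style criterion can be written in a few lines. The signatures below are synthetic Gaussians, and using the residual energy directly as a priority score is a simplification of the paper's OSP-BPC:

```python
import numpy as np

# Hypothetical signatures sampled at 60 wavelengths: a signature of interest s
# and two undesired background signatures stacked as columns of U.
L = 60
wl = np.linspace(0.0, 1.0, L)
s = np.exp(-((wl - 0.3) ** 2) / 0.01)
U = np.column_stack([np.exp(-((wl - 0.6) ** 2) / 0.02), wl])

# Orthogonal subspace projector: P = I - U (U^T U)^{-1} U^T annihilates
# everything in the span of U; the projected residual keeps only the part of
# s that U cannot explain, whose energy can serve as a priority score.
P = np.eye(L) - U @ np.linalg.solve(U.T @ U, U.T)
residual = P @ s
score = residual @ residual
```

A larger score means the signature carries more spectral information that the undesired subspace cannot account for, which is the intuition the band prioritization builds on.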

Hyperspectral images are collected by hundreds of contiguous spectral channels, and thus the data volume to be processed is huge. With such high spectral resolution, the spectral correlation among bands is expected to be very high. Band selection (BS) is one of the common practices used to reduce data volume while retaining the desired information for data processing. This paper investigates issues of band selection and develops various exploitation-based band prioritization criteria (BPC), which rank the hyperspectral bands according to priorities measured by various applications such as detection, classification, and endmember extraction. Three categories of BPC can be derived from different design rationales: (1) second-order statistics, (2) higher-order statistics, and (3) band correlation/dependence minimization or band correlation/dependence constraint. Unlike commonly used band selection techniques, which do not specifically use the concept of band prioritization (BP) to select desired bands, this paper explores the idea of BP for band selection, where an appropriate set of bands can be selected according to the priority scores produced by the spectral bands. As a result, the BPC provide a general guideline for band selection to meet various applications in hyperspectral data exploitation.
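The simplest member of the second-order-statistics category, a variance-based priority score, can be sketched on toy data (the cube dimensions and the identity of the informative bands are assumptions of the example):

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic hyperspectral cube flattened to pixels x bands: a few bands carry
# strong scene structure, the rest are near-constant (assumed toy data).
n_pix, n_bands = 900, 50
cube = rng.normal(0.0, 0.01, (n_pix, n_bands))
informative = [5, 17, 33]
for b in informative:
    cube[:, b] += rng.normal(0.0, 1.0, n_pix)   # high-variance scene content

# Second-order-statistics BPC: score each band by its sample variance,
# then select the top-p bands by priority score.
scores = cube.var(axis=0)
p = 3
selected = np.argsort(scores)[::-1][:p]
```

Higher-order-statistics criteria would replace the variance score with, e.g., skewness- or kurtosis-based measures, while the ranking-and-select step stays the same.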

The Kalman filter has been widely used in statistical signal processing for parameter estimation. Recently, a Kalman filter-based approach to spectral unmixing, referred to as Kalman filter-based linear unmixing (KFLU), was also developed for mixed-pixel classification. However, its applicability to estimation and discrimination for hyperspectral signature characterization has not been explored, where a hyperspectral signature is defined as a vector over a range of contiguous optical wavelengths of interest. This paper presents a new application of Kalman filtering in hyperspectral signature similarity and discrimination. In particular, it develops a Kalman filter-based signature estimator from which two Kalman filter-based discriminators can be derived for signature similarity and discrimination. The developed Kalman filter-based discriminators utilize a state equation to characterize one hyperspectral signature and a measurement equation to describe another hyperspectral signature, while the developed Kalman filter-based estimator makes use of the state and measurement equations to describe the true signature and the observable signature, respectively. The least squares error resulting from the Kalman filter-estimated hyperspectral signature is then used as the criterion for hyperspectral signature similarity and discrimination. Experimental results demonstrate that such Kalman filter-based discriminators are more effective than commonly used spectral similarity measures such as the spectral angle mapper (SAM) or Euclidean distance.
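A minimal scalar sketch of the discriminator idea: one signature drives the state equation, the other enters through the measurement equation, and the accumulated squared innovation serves as the least-squares discrimination measure. The Gaussian-shaped signatures and the noise covariances q and r are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def kf_lse(ref, obs, q=1e-3, r=1e-2):
    """Run a scalar Kalman filter along wavelength: the state equation tracks
    the reference signature, the measurement equation feeds the other one.
    Returns the accumulated squared innovation as a discrimination measure."""
    x, p = ref[0], 1.0
    lse = 0.0
    for l in range(1, ref.size):
        # predict: follow the reference's wavelength-to-wavelength step
        x = x + (ref[l] - ref[l - 1])
        p = p + q
        # update with the observed signature at this wavelength
        k = p / (p + r)
        innov = obs[l] - x
        x = x + k * innov
        p = (1 - k) * p
        lse += innov ** 2
    return lse

wl = np.linspace(0.4, 2.5, 100)                 # wavelengths in micrometers
sig_a = np.exp(-((wl - 1.0) ** 2) / 0.1)        # hypothetical signature A
sig_b = np.exp(-((wl - 1.7) ** 2) / 0.1)        # hypothetical signature B
noisy_a = sig_a + 0.01 * np.random.default_rng(5).normal(size=wl.size)

same = kf_lse(sig_a, noisy_a)                   # small LSE: similar signatures
diff = kf_lse(sig_a, sig_b)                     # large LSE: distinct signatures
```

A small accumulated error indicates the two signatures agree wavelength by wavelength; a large one flags a distinct signature, which is the basis for using the LSE in place of SAM or Euclidean distance.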

Linearly constrained adaptive beamforming has been used to design hyperspectral target detection algorithms such
as constrained energy minimization (CEM) and linearly constrained minimum variance (LCMV). It linearly constrains a
desired target signature while minimizing interfering effects caused by other, unknown signatures. This paper
investigates this idea and uses it to develop a new approach to band selection, referred to as linearly constrained
band selection (LCBS), for hyperspectral imagery. It interprets one band image as a desired target signature while
considering the other band images as unknown signatures. With this interpretation, the proposed LCBS linearly constrains a
band image while minimizing the band correlation or dependence caused by the other band images. As a result, two
different methods, referred to as Band Correlation Minimization (BCM) and Band Correlation Constraint (BCC), can be
developed for band selection. Such LCBS allows one to select desired bands for data analysis. In order to determine the
number of bands to select, p, a recently developed concept called virtual dimensionality (VD) is used to
estimate p. Once p is determined, a set of p desired bands can be selected by LCBS. Finally, experiments are
conducted to substantiate the proposed LCBS.
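The CEM closed form that this construction builds on can be sketched directly; the toy scene, the target signature d, and all sizes below are assumptions of the example rather than the paper's data:

```python
import numpy as np

rng = np.random.default_rng(6)
# Toy scene: 1000 pixel vectors over 20 bands; a desired target signature d
# is embedded in a handful of pixels (all values are illustrative).
n_pix, L = 1000, 20
d = np.abs(np.sin(np.linspace(0.0, np.pi, L)))   # hypothetical target signature
X = rng.normal(0.0, 1.0, (n_pix, L))
X[:10] += 3.0 * d                                # embed the target in 10 pixels

# CEM closed form: w = R^{-1} d / (d^T R^{-1} d), where R is the sample
# autocorrelation matrix of the pixel vectors; w passes d with unit gain
# while minimizing the average output energy from everything else.
R = X.T @ X / n_pix
Rinv_d = np.linalg.solve(R, d)
w = Rinv_d / (d @ Rinv_d)
output = X @ w                                   # detector output per pixel
```

In LCBS the same machinery is reinterpreted: a band image plays the role of d, the remaining band images play the role of the unknown background, and the constrained filter suppresses inter-band correlation instead of clutter.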

Under the U.S. Army-sponsored Joint Service Agent Water Monitor (JSAWM) program, developing hand-held assays (HHAs) using tickets for chemical/biological agent detection has been of major interest. One of the keys to success is developing detection algorithms that can not only effectively detect the presence of various agents but also quantify the detected agents. This paper presents a recent development of detection software that can perform three-dimensional (3D) receiver operating characteristic (ROC) analysis based on quantified agent concentration. ROC curves have been widely used in the communications, signal processing, and medical communities to evaluate the effectiveness of a detection technique. The approach generally formulates a signal detection problem as a binary composite hypothesis testing problem, with the null hypothesis and the alternative hypothesis representing the case of no signal and the case of signal presence, respectively. The ROC curve is then plotted as the detection probability (power), PD, versus the false-alarm probability, PF. Unfortunately, such a two-dimensional (2D) (PD,PF)-based ROC curve does not factor in the concentration detected in an agent signal, which is a crucial parameter in chemical/biological agent detection. The proposed 3D ROC analysis is developed from this need. It includes an additional parameter, referred to as the threshold t, which is used to threshold the detected agent signal concentration. Consequently, a different value of t results in a different 2D ROC curve. In order to take the thresholding factor t into account, a 3D ROC curve is derived and plotted based on three parameters, (PD,PF,t). From the 3D ROC curve, three 2D ROC curves can also be derived. One is the conventional 2D (PD,PF)-ROC curve. Another is a 2D (PD,t)-ROC curve, which describes the relationship between PD and the threshold value t. A third is a 2D (PF,t)-ROC curve, which shows the effect of the threshold value t on PF.
The utility of the proposed 3D ROC analysis is demonstrated by the detection software developed at UMBC for the tickets used in HHAs for water monitoring.
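The construction of the (PD, PF, t) triplets can be sketched on simulated detector outputs; the Gaussian concentration distributions and their parameters are assumptions of the example:

```python
import numpy as np

rng = np.random.default_rng(7)
# Simulated detector outputs ("concentrations"): background-only draws under
# the null hypothesis, shifted draws when an agent is present (toy values).
bg = rng.normal(0.0, 1.0, 5000)     # H0: no agent
sig = rng.normal(2.0, 1.0, 5000)    # H1: agent present

# Sweep the concentration threshold t: each value of t gives one (PD, PF)
# pair, so the triplets (PD, PF, t) trace out the 3D ROC curve, and the three
# 2D projections (PD, PF), (PD, t), and (PF, t) follow directly.
t = np.linspace(-3.0, 6.0, 200)
PD = np.array([(sig > ti).mean() for ti in t])
PF = np.array([(bg > ti).mean() for ti in t])

# Area under the conventional 2D (PD, PF) ROC curve via the trapezoid rule
# (PF decreases as t increases, hence the sign flip).
auc = np.sum(-np.diff(PF) * (PD[:-1] + PD[1:]) / 2)
```

Plotting (PF, PD, t) as a curve in three dimensions gives the 3D ROC; dropping one coordinate at a time recovers the three 2D curves named in the text.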

The Kalman filter has been widely used in statistical signal processing for parameter estimation. Although a Kalman filter approach, referred to as Kalman filter-based linear unmixing (KFLU), has recently been developed for spectral unmixing, its applicability to spectral characterization within a single pixel vector has not been explored. This paper presents a new application of Kalman filtering in spectral estimation and quantification. It develops a Kalman filter-based spectral signature estimator (KFSSE), which differs from the KFLU in that the former runs a Kalman filter wavelength by wavelength across a spectral signature, whereas the latter implements a Kalman filter pixel vector by pixel vector in an image cube. The idea of the KFSSE is to use the state equation to characterize the true spectral signature, while the measurement equation describes the spectral signature to be processed. Additionally, since a Kalman filter can accurately estimate the spectral abundance fraction of a signature, our proposed KFSSE can further be used for spectral quantification of subpixel targets and mixed pixel vectors, in what we call the Kalman filter-based spectral quantifier (KFSQ). Such spectral quantification is particularly important for chemical/biological defense, which requires quantification of detected agents for damage-control assessment. Several different types of hyperspectral data are used in experiments to demonstrate the ability of the KFSSE to estimate spectral signatures and the utility of the KFSQ in spectral quantification.

There is an immediate need for the ability to detect, identify, and quantify chemical and biological agents in water supplies during water point selection, production, storage, and distribution to consumers. Through the U.S. Army-sponsored Joint Service Agent Water Monitor (JSAWM) program, based on hand-held assays that exist in a ticket format, we are developing new algorithms for automatic processing of tickets. In previous work, detection of the control dots in the tickets was carried out by traditional image segmentation approaches such as Otsu's method and other entropy-based thresholding techniques. In experiments, it was found that these approaches were sensitive to illumination effects in the camera reader. As a result, more robust, object-oriented approaches to detecting the control dots are required. Mathematical morphology is a powerful technique for image analysis that focuses on the size and shape of the objects in the scene. In this work, we describe a novel application of morphological operations to the identification of control dots in hand-held assay ticket imagery. The images were pre-processed by a light compensation algorithm prior to morphological analysis. The performance of the proposed approach is evaluated using receiver operating characteristic (ROC) analysis.
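The size-and-shape filtering that morphology provides can be sketched on a synthetic ticket image; the image dimensions, dot radii, noise level, and threshold are all illustrative assumptions, and the light compensation step is omitted:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(8)
# Synthetic ticket image: dark background with two bright circular "control
# dots" plus salt noise (sizes and positions are toy assumptions).
img = rng.normal(0.1, 0.05, (100, 100))
yy, xx = np.mgrid[:100, :100]
for cy, cx in [(30, 30), (70, 60)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 <= 5 ** 2] = 0.9
img[rng.random((100, 100)) < 0.01] = 0.9        # isolated bright noise pixels

# Threshold, then morphological opening with a disk-shaped structuring
# element: specks smaller than the disk are removed, round dots survive.
binary = img > 0.5
r = 3
disk = (np.add.outer(np.arange(-r, r + 1) ** 2,
                     np.arange(-r, r + 1) ** 2) <= r ** 2)
opened = ndimage.binary_opening(binary, structure=disk)
labels, n_dots = ndimage.label(opened)
```

The opening is what gives the illumination robustness the text asks for: it keys on object size and shape rather than on a globally tuned intensity threshold alone.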
