Professor of Radiology (General Radiology) and, by courtesy, of Medicine (Medical Informatics) and of Electrical Engineering

Radiology - General Radiology

Bio

My primary interests are in developing diagnostic and therapy-planning applications and strategies for the acquisition, visualization, and quantitation of multi-dimensional medical imaging data. Examples are: creation of three-dimensional images of blood vessels using CT, visualization of complex flow within blood vessels using MR, computer-aided detection and characterization of lesions (e.g., colonic polyps, pulmonary nodules) from cross-sectional image data, visualization and automated assessment of 4D ultrasound data, and fusion of images acquired using different modalities (e.g., CT and MR). I have also been involved in developing and evaluating techniques for exploring cross-sectional imaging data from an internal perspective, i.e., virtual endoscopy (including colonoscopy, angioscopy, and bronchoscopy), and in the quantitation of structure parameters, e.g., volumes, lengths, medial axes, and curvatures. I am also interested in creating workable solutions to the problem of "data explosion," i.e., how to look at the thousands of images generated per examination using modern CT and MR scanners. My most recent focus includes making image features computer-accessible, to facilitate content-based retrieval of similar lesions, and prediction of molecular phenotype, response to therapy, and prognosis from imaging features. I am co-director of the Radiology 3D and Quantitative Imaging Lab, providing clinical service to the Stanford and local community, and co-Director of IBIIS (Integrative Biomedical Imaging Informatics at Stanford), whose mission is to advance the clinical and basic sciences in radiology, while improving our understanding of biology and the manifestations of disease, by pioneering methods in the information sciences that integrate imaging, clinical and molecular data.



Abstract

We explore noninvasive biomarkers of microvascular invasion (mVI) in patients with hepatocellular carcinoma (HCC) using quantitative and semantic image features extracted from contrast-enhanced, triphasic computed tomography (CT). Under institutional review board approval, we selected 28 treatment-naive HCC patients who underwent surgical resection. Four radiologists independently selected and delineated tumor margins on three axial CT images and extracted computational features capturing tumor shape, image intensities, and texture. We also computed two types of "delta features," defined as the absolute difference and the ratio computed from all pairs of imaging phases for each feature. A total of 717 arterial, portal-venous, delayed single-phase, and delta-phase features were robust against interreader variability ([Formula: see text]). An enhanced cross-validation analysis showed that combining robust single-phase and delta features in the arterial and venous phases identified mVI (AUC [Formula: see text]). Compared to a previously reported semantic feature signature (AUC 0.47 to 0.58), these features in our cohort showed only slight to moderate agreement (Cohen's kappa range: 0.03 to 0.59). Though preliminary, these results suggest that quantitative image features in the arterial and venous phases may serve as surrogate biomarkers for mVI in HCC. Further study in a larger cohort is warranted.
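The "delta feature" construction described above can be sketched as follows: for each feature, take the absolute difference and the ratio between every pair of contrast phases. The phase names, feature names, and values here are invented for illustration.

```python
# Sketch of delta features: absolute difference and ratio of each feature
# across all pairs of imaging phases.
from itertools import combinations

def delta_features(per_phase):
    """per_phase: dict mapping phase name -> dict of feature name -> value."""
    deltas = {}
    for a, b in combinations(sorted(per_phase), 2):
        for feat in per_phase[a]:
            va, vb = per_phase[a][feat], per_phase[b][feat]
            deltas[f"absdiff_{a}_{b}_{feat}"] = abs(va - vb)
            if vb != 0:
                deltas[f"ratio_{a}_{b}_{feat}"] = va / vb
    return deltas

# Hypothetical mean attenuation of a lesion in two phases.
phases = {
    "arterial": {"mean_hu": 120.0},
    "venous": {"mean_hu": 90.0},
}
d = delta_features(phases)
print(d["absdiff_arterial_venous_mean_hu"])  # 30.0
```

With three phases and hundreds of base features, this pairing is what grows the feature set to the 717 robust features reported above.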

Abstract

Quantitative imaging approaches compute features within images' regions of interest. Segmentation is rarely completely automatic, requiring time-consuming editing by experts. We propose a new paradigm, called "digital biopsy," that allows for the collection of intensity- and texture-based features from these regions at least 1 order of magnitude faster than the current manual or semiautomated methods. A radiologist reviewed automated segmentations of lung nodules from 100 preoperative volume computed tomography scans of patients with non-small cell lung cancer, and manually adjusted the nodule boundaries in each section, to be used as a reference standard, requiring up to 45 minutes per nodule. We also asked a different expert to generate a digital biopsy for each patient using a paintbrush tool to paint a contiguous region of each tumor over multiple cross-sections, a procedure that required an average of <3 minutes per nodule. We simulated additional digital biopsies using morphological procedures. Finally, we compared the features extracted from these digital biopsies with our reference standard using intraclass correlation coefficient (ICC) to characterize robustness. Comparing the reference standard segmentations to our digital biopsies, we found that 84/94 features had an ICC >0.7; comparing erosions and dilations, using a sphere of 1.5-mm radius, of our digital biopsies to the reference standard segmentations resulted in 41/94 and 53/94 features, respectively, with ICCs >0.7. We conclude that many intensity- and texture-based features remain consistent between the reference standard and our method while substantially reducing the amount of operator time required.
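The robustness comparison above is based on the intraclass correlation coefficient. A minimal two-way ICC (ICC(2,1), absolute agreement) can be sketched as below; the exact ICC variant used in the study is an assumption, and the ratings are invented.

```python
# Two-way random-effects ICC(2,1) via the standard ANOVA decomposition.
import numpy as np

def icc_2_1(ratings):
    """ratings: (n subjects x k raters) array of measurements."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)  # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)  # between raters
    sse = ((x - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two measurements in perfect agreement give an ICC of 1.
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

A feature whose reference-standard and digital-biopsy values yield an ICC above 0.7 would count as robust under the criterion used above.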

Abstract

This study investigated the relationship between epidermal growth factor receptor (EGFR) and Kirsten rat sarcoma viral oncogene homolog (KRAS) mutations in non-small-cell lung cancer (NSCLC) and quantitative FDG-PET/CT parameters, including tumor heterogeneity. A total of 131 patients with NSCLC underwent staging FDG-PET/CT followed by tumor resection and histopathological analysis that included testing for the EGFR and KRAS gene mutations. Patient and lesion characteristics, including smoking habits and FDG uptake parameters, were correlated with each gene mutation. Never-smoker status (p < 0.001) or a low pack-year smoking history (p = 0.002) and female gender (p = 0.047) were predictive factors for the presence of EGFR mutations. Being a current or former smoker was a predictive factor for KRAS mutations (p = 0.018). The maximum standardized uptake value (SUVmax) of FDG uptake in lung lesions was a predictive factor for EGFR mutations (p = 0.029), while metabolic tumor volume and total lesion glycolysis were not predictive. Among several tumor heterogeneity metrics included in our analysis, the inverse coefficient of variation (1/COV) was a predictive factor (p < 0.02) for EGFR mutation status, independent of metabolic tumor diameter. Multivariate analysis showed that being a never-smoker was the most significant factor (p < 0.001) for EGFR mutations in lung cancer overall. The tumor heterogeneity metric 1/COV and SUVmax were both predictive of EGFR mutations in NSCLC in univariate analysis. Overall, smoking status was the most significant factor for the presence of EGFR and KRAS mutations in lung cancer.
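The inverse coefficient of variation used above as a heterogeneity metric is simply the mean divided by the standard deviation of the uptake values; a sketch for a vector of voxel SUVs (values invented, sample standard deviation assumed):

```python
# 1/COV = mean / standard deviation of the voxel SUVs within a lesion.
import numpy as np

def inverse_cov(suv):
    suv = np.asarray(suv, dtype=float)
    return suv.mean() / suv.std(ddof=1)

suv_voxels = [2.0, 3.5, 4.1, 2.8, 3.0]
print(round(inverse_cov(suv_voxels), 3))
```

A homogeneous lesion (low SD relative to its mean) yields a high 1/COV, so the metric decreases as heterogeneity increases.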

Abstract

To explore the characteristics that impact lung nodule detection by peripheral vision when searching for lung nodules on chest CT scans. This study was approved by the local IRB and is HIPAA compliant. A simulated primary (1°) target mass (2 × 2 × 5 cm) was embedded into 5 cm thick subvolumes (SV) extracted from three unenhanced lung MDCT scans (64 row, 1.25 mm thickness, 0.7 mm increment). One of 30 solid, secondary nodules with either a 3-4 mm or a 5-8 mm diameter was embedded into 192 of 207 SVs. The secondary nodule was placed at a random depth within each SV, at a transverse distance of 2.5, 5, 7.5, or 10 mm, and along one of eight rays cast every 45° from the center of the 1° mass. Video recordings of transverse paging in the cranio-caudal direction were created for each SV (frame rate three sections/sec). Six radiologists observed each cine-loop once while gaze-tracking hardware assured that gaze was centered on the 1° mass. Each radiologist assigned a confidence rating (0-5) to the detection of a secondary nodule and indicated its location. Detection sensitivity was analyzed relative to secondary nodule size, transverse distance, radial orientation, and lung complexity. Lung complexity was characterized by the number of particles (connected pixels) and the sum of the area of all particles above a -500 HU threshold within regions of interest around the 1° mass and secondary nodule. Using a proportional odds logistic regression model and eliminating redundant predictors, models fit individually to each reader resulted in the following decreasing order of association based on greatest reduction in Akaike Information Criterion: secondary nodule diameter (6/6 readers, P < 0.001), distance from central mass (6/6 readers, P < 0.001), lung complexity particle count (5/6 readers, P = 0.05), and lung complexity particle area (3/6 readers, P = 0.03). Substantial inter-reader differences in sensitivity to decreasing nodule diameter, distance, and complexity characteristics were observed. Of the investigated parameters, secondary nodule size, distance from the gaze center, and lung complexity (particle number and area) significantly impact nodule detection with peripheral vision.
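Predictors above are ranked by reduction in the Akaike Information Criterion, AIC = 2k - 2 ln(L): a predictor matters more the larger the AIC drop it produces when added. A toy comparison with invented log-likelihoods:

```python
# AIC for a fitted model: 2 * (number of parameters) - 2 * log-likelihood.
def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

base = aic(log_likelihood=-120.0, n_params=3)
with_diameter = aic(log_likelihood=-100.0, n_params=4)
print(base - with_diameter)  # positive: adding the predictor reduces AIC
```

The 2k term penalizes the extra parameter, so the AIC only drops when the likelihood gain outweighs the added model complexity.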

Abstract

We propose a novel method, the adaptive local window, for improving the level set segmentation technique. The window is estimated separately for each contour point, over iterations of the segmentation process, and for each individual object. Our method considers the object scale, the spatial texture, and the changes of the energy functional over iterations. Global and local statistics are considered by calculating several gray level co-occurrence matrices. We demonstrate the capabilities of the method in the domain of medical imaging for segmenting 233 images with liver lesions. To illustrate the strength of our method, those lesions were imaged with either computed tomography or magnetic resonance imaging. Moreover, we analyzed images using three different energy models. We compared our method to a global level set segmentation, to a local framework that uses predefined fixed-size square windows, and to a local region-scalable fitting model. The results indicate that our proposed method outperforms the other methods in terms of agreement with the manual marking and dependence on contour initialization or the energy model used. In the case of complex lesions, such as low contrast lesions, heterogeneous lesions, or lesions with a noisy background, our method shows significantly better segmentation, with an improvement of 0.25 ± 0.13 in Dice similarity coefficient compared with state of the art fixed-size local windows (Wilcoxon, p < 0.001).
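The Dice similarity coefficient used above to score agreement with the manual marking is twice the overlap divided by the total size of the two masks; a sketch for binary masks:

```python
# Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice(a, b):
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

m1 = np.zeros((4, 4), bool); m1[1:3, 1:3] = True  # 4 pixels
m2 = np.zeros((4, 4), bool); m2[1:3, 1:4] = True  # 6 pixels, 4 overlapping
print(dice(m1, m2))  # 2*4 / (4+6) = 0.8
```

An improvement of 0.25 in Dice, as reported above for complex lesions, is large on this 0-to-1 scale.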

Abstract

Molecular analysis of the mutation status for EGFR and KRAS is now routine in the management of non-small cell lung cancer. Radiogenomics, the linking of medical images with the genomic properties of human tumors, provides exciting opportunities for non-invasive diagnostics and prognostics. We investigated whether EGFR and KRAS mutation status can be predicted using imaging data. To accomplish this, we studied 186 cases of NSCLC with preoperative thin-slice CT scans. A thoracic radiologist annotated 89 semantic image features of each patient's tumor. Next, we built a decision tree to predict the presence of EGFR and KRAS mutations. We found a statistically significant model for predicting EGFR but not KRAS mutations. The test set area under the ROC curve for predicting EGFR mutation status was 0.89. The final decision tree used four variables: emphysema, airway abnormality, the percentage of ground glass component, and the type of tumor margin. The presence of either of the first two features predicts a wild type status for EGFR, while the presence of any ground glass component indicates EGFR mutations. These results show the potential of quantitative imaging to predict molecular properties in a non-invasive manner, as CT imaging is more readily available than biopsies.
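The first splits of the decision tree described above can be written as simple rules. This is a sketch of the reported logic, not the fitted tree itself; the abstract does not detail the tumor-margin split, so that branch is left undetermined here.

```python
# Rule sketch mirroring the reported EGFR decision tree splits.
def predict_egfr(emphysema, airway_abnormality, ground_glass_pct, margin):
    # Presence of emphysema or an airway abnormality predicts wild type.
    if emphysema or airway_abnormality:
        return "wild type"
    # Any ground glass component indicates an EGFR mutation.
    if ground_glass_pct > 0:
        return "mutant"
    # Remaining cases fall through to the tumor-margin split, which the
    # abstract does not specify; returned as undetermined in this sketch.
    return "undetermined"

print(predict_egfr(False, False, 25.0, "smooth"))  # mutant
```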

Abstract

Characterization of carotid plaque composition, more specifically the amount of lipid core, fibrous tissue, and calcified tissue, is an important task for the identification of plaques that are prone to rupture, and thus for early risk estimation of cardiovascular and cerebrovascular events. Due to its low costs and wide availability, carotid ultrasound has the potential to become the modality of choice for plaque characterization in clinical practice. However, its significant image noise, coupled with the small size of the plaques and their complex appearance, makes it difficult for automated techniques to discriminate between the different plaque constituents. In this paper, we propose to address this challenging problem by exploiting the unique capabilities of the emerging deep learning framework. More specifically, and unlike existing works which require a priori definition of specific imaging features or thresholding values, we propose to build a convolutional neural network (CNN) that will automatically extract from the images the information that is optimal for the identification of the different plaque constituents. We used approximately 90 000 patches extracted from a database of images and corresponding expert plaque characterizations to train and to validate the proposed CNN. The results of cross-validation experiments show a correlation of about 0.90 with the clinical assessment for the estimation of lipid core, fibrous cap, and calcified tissue areas, indicating the potential of deep learning for the challenging task of automatic characterization of plaque composition in carotid ultrasound.
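Training data for the CNN above came from roughly 90 000 image patches. The patch-sampling step can be sketched as fixed-size windows cut around given pixel locations; the window size, image, and coordinates here are invented, and the actual pipeline (network architecture, labels) is not shown.

```python
# Extract square patches centered on given pixel locations.
import numpy as np

def extract_patches(image, centers, half=2):
    """Return a (len(centers), 2*half+1, 2*half+1) array of patches;
    centers must be far enough from the border for a full window."""
    patches = []
    for r, c in centers:
        patches.append(image[r - half:r + half + 1, c - half:c + half + 1])
    return np.stack(patches)

img = np.arange(100, dtype=float).reshape(10, 10)
p = extract_patches(img, [(5, 5), (4, 7)], half=2)
print(p.shape)  # (2, 5, 5)
```

Each patch would then be labeled with the expert's plaque-constituent annotation at its center pixel before training.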

Abstract

Radiomics aims to provide quantitative descriptors of normal and abnormal tissues for classification and prediction tasks in radiology and oncology. Quantitative Imaging Network members are developing radiomic "feature" sets to characterize tumors, in general, the size, shape, texture, intensity, margin, and other aspects of the imaging features of nodules and lesions. Efforts are ongoing to develop an ontology to describe radiomic features for lung nodules, with the main classes consisting of size, local and global shape descriptors, margin, intensity, and texture-based features, which are based on wavelets, Laplacian of Gaussians, Laws' features, gray-level co-occurrence matrices, and run-length features. The purpose of this study is to investigate the sensitivity of quantitative descriptors of pulmonary nodules to segmentations and to illustrate comparisons across different feature types and features computed by different implementations of feature extraction algorithms. We calculated the concordance correlation coefficients of the features as a measure of their stability with the underlying segmentation; 68% of the 830 features in this study had a concordance correlation coefficient of at least 0.75. Pairwise correlation coefficients between pairs of features were used to uncover associations between features, particularly as measured by different participants. A graphical model approach was used to enumerate the number of uncorrelated feature groups at given thresholds of correlation. At thresholds of 0.75 and 0.95, there were 75 and 246 subgroups, respectively, providing a measure of the features' redundancy.
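The stability measure above is Lin's concordance correlation coefficient, which, unlike Pearson correlation, also penalizes systematic shifts between the two measurements. A sketch comparing one feature computed on two segmentations (data invented):

```python
# Lin's concordance correlation coefficient between two measurement vectors.
import numpy as np

def ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

print(ccc([1, 2, 3, 4], [1, 2, 3, 4]))  # identical measurements -> 1.0
```

The (mx - my)^2 term in the denominator is what distinguishes CCC from plain correlation: a constant offset between the two segmentations' feature values lowers the score.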

Abstract

To develop an intratumor partitioning framework for identifying high-risk subregions from (18)F-fluorodeoxyglucose positron emission tomography (FDG-PET) and computed tomography (CT) imaging and to test whether tumor burden associated with the high-risk subregions is prognostic of outcomes in lung cancer. In this institutional review board-approved retrospective study, we analyzed the pretreatment FDG-PET and CT scans of 44 lung cancer patients treated with radiation therapy. A novel intratumor partitioning method was developed, based on a 2-stage clustering process: first, at the patient level, each tumor was over-segmented into many superpixels by k-means clustering of integrated PET and CT images; next, tumor subregions were identified by merging previously defined superpixels via population-level hierarchical clustering. The volume associated with each of the subregions was evaluated using Kaplan-Meier analysis regarding its prognostic capability in predicting overall survival (OS) and out-of-field progression (OFP). Three spatially distinct subregions were identified within each tumor that were highly robust to uncertainty in PET/CT co-registration. Among these, the volume of the most metabolically active and metabolically heterogeneous solid component of the tumor was predictive of OS and OFP on the entire cohort, with a concordance index (CI) of 0.66-0.67. When restricting the analysis to patients with stage III disease (n=32), the same subregion achieved an even higher CI of 0.75 (hazard ratio 3.93, log-rank P=.002) for predicting OS, and a CI of 0.76 (hazard ratio 4.84, log-rank P=.002) for predicting OFP. In comparison, conventional imaging markers, including tumor volume, maximum standardized uptake value, and metabolic tumor volume using a threshold of 50% of the maximum standardized uptake value, were not predictive of OS or OFP, with CI mostly below 0.60 (log-rank P>.05). We propose a robust intratumor partitioning method to identify clinically relevant, high-risk subregions in lung cancer. We envision that this approach will be applicable to identifying useful imaging biomarkers in many cancer types.
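The first stage of the partitioning above is k-means clustering of PET/CT intensities. A minimal Lloyd's iteration on scalar intensities illustrates the assignment/update loop (toy values, k=2; the study clustered multichannel PET+CT data, not scalars):

```python
# Minimal 1D k-means (Lloyd's algorithm): alternate nearest-center
# assignment and center update until convergence.
import numpy as np

def kmeans_1d(values, centers, iters=10):
    v = np.asarray(values, float)
    c = np.asarray(centers, float)
    for _ in range(iters):
        labels = np.argmin(np.abs(v[:, None] - c[None, :]), axis=1)
        for j in range(len(c)):
            if np.any(labels == j):
                c[j] = v[labels == j].mean()
    return labels, c

vals = [0.1, 0.2, 0.0, 9.8, 10.1]
labels, centers = kmeans_1d(vals, centers=[0.0, 5.0])
print(labels)  # the two intensity groups separate cleanly
```

In the full method, the resulting superpixels are then merged across the population by hierarchical clustering to define the three subregions.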

Abstract

Tumor volume estimation, as well as accurate and reproducible border segmentation in medical images, is important in the diagnosis, staging, and assessment of response to cancer therapy. The goal of this study was to demonstrate the feasibility of a multi-institutional effort to assess the repeatability and reproducibility of nodule borders and the volume estimate bias of computerized segmentation algorithms in CT images of lung cancer, and to provide results from such a study. The dataset used for this evaluation consisted of 52 tumors in 41 CT volumes (40 patient datasets and 1 dataset containing scans of 12 phantom nodules of known volume) from five collections available in The Cancer Imaging Archive. Three academic institutions developing lung nodule segmentation algorithms submitted results for three repeat runs for each of the nodules. We compared the performance of lung nodule segmentation algorithms by assessing several measurements of spatial overlap and volume measurement. Nodule sizes varied from 29 µl to 66 ml and demonstrated a diversity of shapes. Agreement in spatial overlap of segmentations was significantly higher for multiple runs of the same algorithm than between segmentations generated by different algorithms (p < 0.05) and was significantly higher on the phantom dataset compared to the other datasets (p < 0.05). Algorithms differed significantly in the bias of the measured volumes of the phantom nodules (p < 0.05), underscoring the need for assessing performance on clinical data in addition to phantoms. Algorithms that most accurately estimated nodule volumes were not the most repeatable, emphasizing the need to evaluate both their accuracy and precision. There were considerable differences between algorithms, especially in a subset of heterogeneous nodules, underscoring the recommendation that the same software be used at all time points in longitudinal studies.
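Volume estimate bias against phantom nodules of known volume, as assessed above, is simply the signed percent error of the measured volume; the numbers below are invented for illustration.

```python
# Signed percent error of a measured volume against a known true volume.
def percent_bias(measured_ml, true_ml):
    return 100.0 * (measured_ml - true_ml) / true_ml

# A hypothetical algorithm measuring a 1.00 ml phantom nodule as 1.10 ml.
print(percent_bias(measured_ml=1.10, true_ml=1.00))
```

Averaging this quantity over repeat runs separates accuracy (mean bias) from precision (spread of the repeat measurements), the two properties the study found to diverge across algorithms.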

Abstract

The purpose of this study is to investigate the utility of obtaining "core samples" of regions in CT volume scans for extraction of radiomic features. We asked four readers to outline tumors in three representative slices from each phase of multiphasic liver CT images taken from 29 patients (1128 segmentations) with hepatocellular carcinoma. Core samples were obtained by automatically tracing the maximal circle inscribed in the outlines. Image features describing the intensity, texture, shape, and margin were used to describe the segmented lesion. We calculated the intraclass correlation between the features extracted from the readers' segmentations and their core samples to characterize robustness to segmentation between readers, and between human-based segmentation and core sampling. We conclude that despite the high interreader variability in manually delineating the tumor (average overlap of 43% across all readers), certain features such as intensity and texture features are robust to segmentation. More importantly, this same subset of features can be obtained from the core samples, providing as much information as detailed segmentation while being simpler and faster to obtain.
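The "core sample" above is the maximal circle inscribed in the reader's outline. A brute-force version for a small binary mask: for each inside pixel, find the distance to the nearest outside pixel, and take the pixel with the largest such distance as the circle center. (The study's implementation is not specified; this O(n²) sketch is for illustration on toy masks only.)

```python
# Brute-force maximal inscribed circle in a binary mask.
import numpy as np

def max_inscribed_circle(mask):
    mask = np.asarray(mask, bool)
    inside = np.argwhere(mask)
    outside = np.argwhere(~mask)
    best_center, best_r = None, -1.0
    for p in inside:
        # Radius at p = distance to the nearest background pixel.
        r = np.sqrt(((outside - p) ** 2).sum(axis=1)).min()
        if r > best_r:
            best_r, best_center = r, tuple(p)
    return best_center, best_r

mask = np.zeros((7, 7), bool)
mask[1:6, 1:6] = True  # 5x5 square region
print(max_inscribed_circle(mask))
```

Intensity and texture features are then computed from the pixels inside this circle rather than from the full delineation.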

Abstract

Glioblastoma (GBM) is the most common and highly lethal primary malignant brain tumor in adults. There is a dire need for easily accessible, noninvasive biomarkers that can delineate underlying molecular activities and predict response to therapy. To this end, we sought to identify subtypes of GBM, differentiated solely by quantitative magnetic resonance (MR) imaging features, that could be used for better management of GBM patients. Quantitative image features capturing the shape, texture, and edge sharpness of each lesion were extracted from MR images of 121 single-institution patients with de novo, solitary, unilateral GBM. Three distinct phenotypic "clusters" emerged in the development cohort using consensus clustering with 10,000 iterations on these image features. These three clusters (pre-multifocal, spherical, and rim-enhancing, names reflecting their image features) were validated in an independent cohort consisting of 144 multi-institution patients with similar tumor characteristics from The Cancer Genome Atlas (TCGA). Each cluster mapped to a unique set of molecular signaling pathways using pathway activity estimates derived from the analysis of TCGA tumor copy number and gene expression data with the PARADIGM (Pathway Recognition Algorithm Using Data Integration on Genomic Models) algorithm. Distinct pathways, such as c-Kit and FOXA, were enriched in each cluster, indicating differential molecular activities as determined by the image features. Each cluster also demonstrated differential probabilities of survival, indicating prognostic importance. Our imaging method offers a noninvasive approach to stratify GBM patients and also provides unique sets of molecular signatures to inform targeted therapy and personalized treatment of GBM.

Abstract

We aim to develop a better understanding of perception of similarity in focal computed tomography (CT) liver images to determine the feasibility of techniques for developing reference sets for training and validating content-based image retrieval systems. In an observer study, four radiologists and six nonradiologists assessed overall similarity and similarity in 5 image features in 136 pairs of focal CT liver lesions. We computed intra- and inter-reader agreements in these similarity ratings and viewed the distributions of the ratings. The readers' ratings of overall similarity and similarity in each feature primarily appeared to be bimodally distributed. Median Kappa scores for intra-reader agreement ranged from 0.57 to 0.86 in the five features and from 0.72 to 0.82 for overall similarity. Median Kappa scores for inter-reader agreement ranged from 0.24 to 0.58 in the five features and were 0.39 for overall similarity. There was no significant difference in agreement for radiologists and nonradiologists. Our results show that developing perceptual similarity reference standards is a complex task. Moderate to high inter-reader variability precludes ease of dividing up the workload of rating perceptual similarity among many readers, while low intra-reader variability may make it possible to acquire large volumes of data by asking readers to view image pairs over many sessions.
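Cohen's kappa, the agreement statistic reported above, corrects observed agreement for the agreement expected by chance given each reader's rating frequencies. A sketch for two readers' categorical ratings (ratings invented):

```python
# Cohen's kappa between two readers' categorical ratings.
def cohens_kappa(r1, r2):
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n       # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n)     # chance agreement
             for c in cats)
    return (po - pe) / (1 - pe)

# Two readers mostly agreeing on a binary similar/dissimilar rating.
a = ["sim", "sim", "dis", "dis", "sim", "dis"]
b = ["sim", "sim", "dis", "sim", "sim", "dis"]
print(round(cohens_kappa(a, b), 3))
```

On the common interpretation scale, the inter-reader medians above (0.24 to 0.58) fall in the fair-to-moderate range, while the intra-reader medians (0.57 to 0.86) are moderate to near-perfect.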

Abstract

To determine the effectiveness of radiologists' search, recognition, and acceptance of lung nodules on computed tomographic (CT) images by using eye tracking. This study was performed with a protocol approved by the institutional review board. All study subjects provided informed consent, and all private health information was protected in accordance with HIPAA. A remote eye tracker was used to record time-varying gaze paths while 13 radiologists interpreted 40 lung CT images with an average of 3.9 synthetic nodules (5-mm diameter) embedded randomly in the lung parenchyma. The radiologists' gaze volumes (GVs) were defined as the portion of the lung parenchyma within 50 pixels (approximately 3 cm) of all gaze points. The fraction of the total lung volume encompassed within the GVs, the fraction of lung nodules encompassed within each GV (search effectiveness), the fraction of lung nodules within the GV detected by the reader (recognition-acceptance effectiveness), and the overall sensitivity of lung nodule detection were measured. Detected nodules were within 50 pixels of the nearest gaze point for 990 of 992 correct detections. On average, radiologists searched 26.7% of the lung parenchyma in 3 minutes and 16 seconds and encompassed between 86 and 143 of 157 nodules within their GVs. Once a nodule was encompassed within a GV, the average sensitivity of nodule recognition and acceptance ranged from 47 of 100 nodules to 103 of 124 nodules (sensitivity, 0.47-0.82). Overall sensitivity ranged from 47 to 114 of 157 nodules (sensitivity, 0.30-0.73) and showed moderate correlation (r = 0.62, P = .02) with the fraction of lung volume searched. Relationships between reader search, recognition and acceptance, and overall lung nodule detection rate can be studied with eye tracking. Radiologists appear to actively search less than half of the lung parenchyma, with substantial interreader variation in volume searched, fraction of nodules included within the search volume, sensitivity for nodules within the search volume, and overall detection rate.
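The gaze-volume membership test above (a nodule counts as encompassed when it lies within 50 pixels of some gaze point) can be sketched as a nearest-gaze-point distance check. Coordinates are invented and the sketch is 2D for brevity; the study used a 3D parenchymal volume.

```python
# Is each nodule within `radius` pixels of the nearest gaze point?
import numpy as np

def covered(nodules, gaze_points, radius=50.0):
    nod = np.asarray(nodules, float)
    gaze = np.asarray(gaze_points, float)
    # Pairwise nodule-to-gaze distances, then the minimum per nodule.
    d = np.sqrt(((nod[:, None, :] - gaze[None, :, :]) ** 2).sum(axis=2))
    return d.min(axis=1) <= radius

gaze = [(100, 100), (300, 250)]
nodules = [(120, 130), (600, 400)]
print(covered(nodules, gaze))  # first nodule covered, second not
```

Splitting detection failures by this test is what separates search misses (never encompassed) from recognition or acceptance misses (encompassed but not reported).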

Abstract

Computer-assisted image retrieval applications can assist radiologists by identifying similar images in archives as a means of providing decision support. In the classical case, images are described using low-level features extracted from their contents, and an appropriate distance is used to find the best matches in the feature space. However, using low-level image features to fully capture the visual appearance of diseases is challenging, and the semantic gap between these features and the high-level visual concepts in radiology may impair the system performance. To deal with this issue, the use of semantic terms to provide high-level descriptions of radiological image contents has recently been advocated. Nevertheless, most of the existing semantic image retrieval strategies are limited by two factors: they require manual annotation of the images using semantic terms and they ignore the intrinsic visual and semantic relationships between these annotations during the comparison of the images. Based on these considerations, we propose an image retrieval framework based on semantic features that relies on two main strategies: (1) automatic "soft" prediction of ontological terms that describe the image contents from multi-scale Riesz wavelets and (2) retrieval of similar images by evaluating the similarity between their annotations using a new term dissimilarity measure, which takes into account both image-based and ontological term relations. The combination of these strategies provides a means of accurately retrieving similar images in databases based on image annotations and can be considered as a potential solution to the semantic gap problem. We validated this approach in the context of the retrieval of liver lesions from computed tomographic (CT) images annotated with semantic terms of the RadLex ontology. 
The relevance of the retrieval results was assessed using two protocols: evaluation relative to a dissimilarity reference standard defined for pairs of images on a 25-image dataset, and evaluation relative to the diagnoses of the retrieved images on a 72-image dataset. A normalized discounted cumulative gain (NDCG) score of more than 0.92 was obtained with the first protocol, while AUC scores of more than 0.77 were obtained with the second protocol. This automated approach could provide real-time decision support to radiologists by showing them similar images with associated diagnoses and, where available, responses to therapies.
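The NDCG score used in the first evaluation protocol above discounts the relevance of each retrieved image by its rank and normalizes by the ideal ordering, so a perfect ranking scores 1. A sketch with invented relevance grades:

```python
# Normalized discounted cumulative gain for a ranked list of relevances.
import math

def ndcg(relevances):
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(relevances))
    ideal = sum(r / math.log2(i + 2)
                for i, r in enumerate(sorted(relevances, reverse=True)))
    return dcg / ideal

print(ndcg([3, 2, 1]))  # already ideally ordered -> 1.0
print(round(ndcg([1, 2, 3]), 3))  # reversed ordering scores lower
```

The logarithmic discount is what makes NDCG reward placing the most relevant images at the top of the retrieved list rather than anywhere in it.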

Abstract

The National Cancer Institute (NCI) Cancer Imaging Program organized two related workshops on June 26-27, 2013, entitled "Correlating Imaging Phenotypes with Genomics Signatures Research" and "Scalable Computational Resources as Required for Imaging-Genomics Decision Support Systems." The first workshop focused on clinical and scientific requirements: exploring our knowledge of the phenotypic characteristics of cancer biological properties to determine whether the field is sufficiently advanced to correlate them with imaging phenotypes that underpin genomics and clinical outcomes, and exploring new scientific methods to extract phenotypic features from medical images and relate them to genomics analyses. The second workshop focused on the informatics and computational requirements to extract phenotypic features from medical images, relate them to genomics analyses, and improve the accessibility and speed of dissemination of existing NIH resources. These workshops linked the clinical and scientific requirements of currently known phenotypic and genotypic cancer biology characteristics with imaging phenotypes that underpin genomics and clinical outcomes. The group generated a set of recommendations to NCI leadership and the research community that encourage and support development of the emerging radiogenomics research field to address short- and longer-term goals in cancer research.

Abstract

We describe a framework to model visual semantics of liver lesions in CT images in order to predict the visual semantic terms (VST) reported by radiologists in describing these lesions. Computational models of VST are learned from image data using linear combinations of high-order steerable Riesz wavelets and support vector machines (SVM). In a first step, these models are used to predict the presence of each semantic term that describes liver lesions. In a second step, the distances between all VST models are calculated to establish a nonhierarchical computationally-derived ontology of VST containing inter-term synonymy and complementarity. A preliminary evaluation of the proposed framework was carried out using 74 liver lesions annotated with a set of 18 VSTs from the RadLex ontology. A leave-one-patient-out cross-validation resulted in an average area under the ROC curve of 0.853 for predicting the presence of each VST. The proposed framework is expected to foster human-computer synergies for the interpretation of radiological images while using rotation-covariant computational models of VSTs to 1) quantify their local likelihood and 2) explicitly link them with pixel-based image content in the context of a given imaging domain.

Abstract

Through a marriage of spiral computed tomography (CT) and graphical volumetric image processing, CT angiography was born 20 years ago. Fueled by a series of technical innovations in CT and image processing, over the next 5-15 years CT angiography supplanted conventional angiography, the undisputed diagnostic reference standard for vascular disease for the prior 70 years, as the preferred modality for the diagnosis and characterization of most cardiovascular abnormalities. This review recounts the evolution of CT angiography from its development and early challenges to a maturing modality that has provided unique insights into cardiovascular disease characterization and management. Selected clinical challenges, which include acute aortic syndromes, peripheral vascular disease, aortic stent-graft and transcatheter aortic valve assessment, and coronary artery disease, are presented as contrasting examples of how CT angiography is changing our approach to cardiovascular disease diagnosis and management. Finally, the recently introduced capabilities for multispectral imaging, tissue perfusion imaging, and radiation dose reduction through iterative reconstruction are explored with consideration toward the continued refinement and advancement of CT angiography.

Abstract

Computer-assisted image retrieval applications could assist radiologist interpretations by identifying similar images in large archives as a means of providing decision support. However, the semantic gap between low-level image features and their high-level semantics can impair system performance. Indeed, it can be challenging to characterize images comprehensively using low-level imaging features that fully capture the visual appearance of diseases, and the use of semantic terms has recently been advocated to provide semantic descriptions of the visual contents of images. However, most existing image retrieval strategies do not consider the intrinsic properties of these terms when comparing images, beyond treating them as simple binary (presence/absence) features. We propose a new framework that includes semantic features in images and enables retrieval of similar images in large databases based on their semantic relations. It is based on two main steps: (1) annotation of the images with semantic terms extracted from an ontology, and (2) evaluation of the similarity of image pairs by computing the similarity between the terms using the Hierarchical Semantic-Based Distance (HSBD) coupled to an ontological measure. The combination of these two steps captures the semantic correlations among the terms used to characterize the images, offering a potential solution to the semantic gap problem. We validate this approach in the context of the retrieval and classification of 2D regions of interest (ROIs) extracted from computed tomographic (CT) images of the liver. Under this framework, retrieval accuracy of more than 0.96 was obtained on a 30-image dataset using the Normalized Discounted Cumulative Gain (NDCG) index, a standard measure of the effectiveness of information retrieval algorithms when a separate reference standard is available.
Classification accuracy of more than 95% was obtained on a 77-image dataset. For comparison, the Earth Mover's Distance (EMD), an alternative distance metric that considers all the existing relations among the terms, led to a retrieval accuracy of 0.95 and a classification accuracy of 93% at a higher computational cost. The results provided by the presented framework are competitive with the state of the art and emphasize the usefulness of the proposed methodology for radiology image retrieval and classification.
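The NDCG index used for evaluation can be computed directly from graded relevance judgments. A minimal sketch (the relevance values are hypothetical, not the study's data):

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: graded relevance discounted by log2 of rank.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(retrieved_rels, all_rels):
    # Normalize by the DCG of the ideal (descending-relevance) ordering.
    ideal = sorted(all_rels, reverse=True)[:len(retrieved_rels)]
    return dcg(retrieved_rels) / dcg(ideal)

# Reference relevances of 5 database images to a query, and one retrieval order
# (the relevance of each returned image, in the order the system returned them).
reference = [3, 3, 2, 1, 0]
retrieved = [3, 2, 3, 0, 1]
print(round(ndcg(retrieved, reference), 3))
```

A perfect ranking yields NDCG = 1; mild transpositions, as in the example, give values slightly below 1.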

Abstract

Motivation: A gold standard for perceptual similarity in medical images is vital to content-based image retrieval, but inter-reader variability complicates its development. Our objective was to develop a statistical model that predicts the number of readers (N) necessary to achieve acceptable levels of variability. Materials and Methods: We collected 3 radiologists' ratings of the perceptual similarity of 171 pairs of CT images of focal liver lesions, rated on a 9-point scale. We modeled the readers' scores as bimodal distributions in additive Gaussian noise and estimated the distribution parameters from the scores using an expectation-maximization algorithm. We (a) sampled 171 similarity scores to simulate a ground truth and (b) simulated readers by adding noise with standard deviation σ between 0 and 5 for each reader. We computed the mean values of 2-50 readers' scores and calculated the agreement (AGT) between these means and the simulated ground truth, and the inter-reader agreement (IRA), using Cohen's kappa (κ). Results: IRA for the empirical data ranged from κ = 0.41 to 0.66. For σ between 1.5 and 2.5, IRA between three simulated readers was comparable to agreement in the empirical data. For these values of σ, AGT ranged from κ = 0.81 to 0.91. As expected, AGT increased with N, ranging from κ = 0.83 to 0.92 for N = 2 to 50, respectively, with σ = 2. Conclusion: Our simulations demonstrated that for moderate to good IRA, excellent AGT could nonetheless be obtained. This model may be used to predict the N required to accurately evaluate similarity in datasets of arbitrary size.
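The simulation described above (a bimodal ground truth, additive Gaussian reader noise, agreement scored by Cohen's kappa) can be sketched in a few lines. All parameter values below are illustrative, and an unweighted kappa is used for simplicity:

```python
import random, statistics

def cohens_kappa(a, b, categories):
    # Observed agreement versus agreement expected from marginal frequencies.
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (po - pe) / (1 - pe)

random.seed(0)
# Simulated bimodal "ground truth" similarity scores on a 9-point scale.
truth = [random.choice([2, 3]) if random.random() < 0.5 else random.choice([7, 8])
         for _ in range(171)]

def simulate_reader(truth, sigma):
    # A reader = ground truth plus additive Gaussian noise, clipped to 1..9.
    return [min(9, max(1, round(t + random.gauss(0, sigma)))) for t in truth]

# Agreement between the mean of N simulated readers and the ground truth.
N, sigma = 10, 2.0
readers = [simulate_reader(truth, sigma) for _ in range(N)]
panel_mean = [round(statistics.mean(col)) for col in zip(*readers)]
agt = cohens_kappa(truth, panel_mean, range(1, 10))
print(agt)
```

Sweeping N and σ in this loop reproduces the kind of AGT-versus-N curves the abstract describes.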

Abstract

Although 2[18F]fluoro-2-deoxy-d-glucose (FDG) uptake during positron emission tomography (PET) predicts post-surgical outcome in patients with non-small cell lung cancer (NSCLC), the biologic basis for this observation is not fully understood. Here, we analyzed 25 tumors from patients with NSCLC to identify tumor PET-FDG uptake features associated with gene expression signatures and survival. Fourteen quantitative PET imaging features describing FDG uptake were correlated with gene expression for single genes and coexpressed gene clusters (metagenes). For each FDG uptake feature, an associated metagene signature was derived, and a prognostic model was identified in an external cohort and then tested in a validation cohort of patients with NSCLC. Four of eight single genes associated with FDG uptake (LY6E, RNF149, MCM6, and FAP) were also associated with survival. The most prognostic metagene signature was associated with a multivariate FDG uptake feature [maximum standardized uptake value (SUVmax), SUVvariance, and SUVPCA2] that was highly associated with survival in both the external [HR, 5.87; confidence interval (CI), 2.49-13.8] and validation (HR, 6.12; CI, 1.08-34.8) cohorts. Cell-cycle, proliferation, death, and self-recognition pathways were altered in this radiogenomic profile. Together, our findings suggest that leveraging tumor genomics with an expanded collection of PET-FDG imaging features may enhance our understanding of FDG uptake as an imaging biomarker beyond its association with glycolysis.

Abstract

To identify prognostic imaging biomarkers in non-small cell lung cancer (NSCLC) by means of a radiogenomics strategy that integrates gene expression and medical images in patients for whom survival outcomes are not available, by leveraging survival data in public gene expression data sets. A radiogenomics strategy for associating image features with clusters of coexpressed genes (metagenes) was defined. First, a radiogenomics correlation map is created for pairwise associations between image features and metagenes. Next, predictive models of metagenes are built in terms of image features by using sparse linear regression. Similarly, predictive models of image features are built in terms of metagenes. Finally, the prognostic significance of the predicted image features is evaluated in a public gene expression data set with survival outcomes. This radiogenomics strategy was applied to a cohort of 26 patients with NSCLC for whom gene expression and 180 image features from computed tomography (CT) and positron emission tomography (PET)/CT were available. There were 243 statistically significant pairwise correlations between image features and metagenes of NSCLC. Metagenes were predicted in terms of image features with an accuracy of 59%-83%. One hundred fourteen of 180 CT image features and the PET standardized uptake value were predicted in terms of metagenes with an accuracy of 65%-86%. When the predicted image features were mapped to a public gene expression data set with survival outcomes, tumor size, edge shape, and sharpness ranked highest for prognostic significance. This radiogenomics strategy for identifying imaging biomarkers may enable a more rapid evaluation of novel imaging modalities, thereby accelerating their translation to personalized medicine.
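The first step above, a radiogenomics correlation map of pairwise associations between image features and metagenes, amounts to filling a feature-by-metagene matrix of correlation coefficients. A minimal sketch with hypothetical values (the study's association analysis, with significance testing, is more involved):

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient of two equal-length numeric sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical values: one row per patient, per image feature / metagene score.
image_features = {"tumor_size": [1.2, 3.4, 2.2, 4.1, 0.9],
                  "edge_sharpness": [0.8, 0.3, 0.5, 0.2, 0.9]}
metagenes = {"metagene_1": [10.0, 18.5, 14.0, 21.0, 9.5]}

# The correlation map: every (image feature, metagene) pair.
corr_map = {(f, m): pearson(fv, mv)
            for f, fv in image_features.items()
            for m, mv in metagenes.items()}
for (f, m), r in sorted(corr_map.items()):
    print(f, m, round(r, 2))
```

In practice each entry would be screened for statistical significance (with multiple-comparison correction) before being counted among the reported 243 significant pairs.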

Abstract

We have developed a method to quantify the shape of liver lesions in CT images and evaluated its performance for retrieval of images with similarly-shaped lesions. We employed a machine learning method to combine several shape descriptors and defined similarity measures for a pair of shapes as a weighted combination of distances calculated based on each feature. We created a dataset of 144 simulated shapes, established several reference standards for similarity, and computed the optimal weights so that the retrieval result agrees best with the reference standard. We then evaluated our method on a clinical database consisting of 79 portal-venous-phase CT liver images, where we derived a reference standard of similarity from radiologists' visual evaluation. Normalized Discounted Cumulative Gain (NDCG) was calculated to compare the retrieval ordering with the expected ordering based on the reference standard. For the simulated lesions, the mean NDCG values ranged from 91% to 100%, indicating that our methods for combining features were very accurate in representing true similarity. For the clinical images, the mean NDCG values were still around 90%, suggesting a strong correlation between the computed similarity and the independent similarity reference derived from the radiologists.

Abstract

To determine the accuracy and reproducibility of a remote eye-tracking system for studies of observer gaze while displaying volumetric chest computed tomography (CT) images. Four participants performed calibrations using three different gray-scale backgrounds (black, gray, and white). Each participant then observed a three-dimensional 10-point test pattern embedded in five Digital Imaging and Communications in Medicine (DICOM) datasets (test backgrounds): a full 190-section chest CT scan, 190 copies of a single chest CT section, and three 190-section datasets of homogeneous intensity (black, gray, and white). Significant variances between participants, calibration backgrounds, and test backgrounds were observed. The least mean systematic error (deviation of recorded gaze position from target) was obtained when the calibration background and test background were black (27 pixels). Systematic error increased when displaying a test background that deviated from the calibration background intensity. Hence, the largest mean systematic error occurred when calibrating to a black background and displaying a white background (67 pixels). For complex chest CT volumes the white calibration background performed best (38 pixels). An angular analysis of the systematic error was performed and demonstrated that the systematic error primarily affects the vertical position of the estimated gaze position. Our findings indicate a potential source of systematic error during gaze recording in a dynamic environment and highlight the importance of configuring the calibration procedure according to the brightness of the display. We recommend that investigators develop routines for postcalibration accuracy measurement and report the effective accuracy for the display environment in which the data are collected.
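Systematic error as used above, i.e. the deviation of recorded gaze position from the target, and its decomposition into horizontal and vertical components, can be computed as follows (all coordinates are hypothetical):

```python
import math

def systematic_error(gaze, targets):
    """Mean deviation of recorded gaze from target positions (in pixels),
    plus the mean horizontal and vertical offset components."""
    dx = [g[0] - t[0] for g, t in zip(gaze, targets)]
    dy = [g[1] - t[1] for g, t in zip(gaze, targets)]
    mean_dist = sum(math.hypot(a, b) for a, b in zip(dx, dy)) / len(dx)
    return mean_dist, sum(dx) / len(dx), sum(dy) / len(dy)

# Hypothetical recordings: gaze lands roughly 30 px below each target,
# mimicking the vertically dominated error the study reports.
targets = [(100, 100), (500, 100), (300, 400), (100, 700), (500, 700)]
gaze = [(102, 131), (497, 128), (305, 432), (99, 729), (503, 733)]
dist, ex, ey = systematic_error(gaze, targets)
print(round(dist, 1), round(ex, 1), round(ey, 1))
```

Comparing the magnitudes of the mean horizontal and vertical offsets is the simple form of the angular analysis mentioned above.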

Abstract

We aim to predict radiological observations using computationally-derived imaging features extracted from computed tomography (CT) images. We created a dataset of 79 CT images containing liver lesions identified and annotated by a radiologist using a controlled vocabulary of 76 semantic terms. Computationally-derived features were extracted describing intensity, texture, shape, and edge sharpness. Traditional logistic regression was compared to L1-regularized logistic regression (LASSO) for predicting the radiological observations from the computational features. The approach was evaluated by leave-one-out cross-validation. Informative radiological observations such as lesion enhancement, hypervascular attenuation, and homogeneous retention were predicted well by computational features. By exploiting relationships between computational and semantic features, this approach could lead to more accurate and efficient radiology reporting.
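L1-regularized logistic regression can be fit with proximal gradient descent (ISTA), in which each gradient step is followed by soft-thresholding of the coefficients; this is one standard solver for LASSO-type models, though not necessarily the one used in the study. A self-contained sketch on hypothetical data:

```python
import math

def soft_threshold(w, t):
    # Proximal operator of the L1 penalty: shrink coefficients toward zero.
    return math.copysign(max(abs(w) - t, 0.0), w)

def lasso_logistic(X, y, lam=0.05, lr=0.5, iters=2000):
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        # Gradient of the mean logistic loss.
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j in range(d):
                grad[j] += (p - yi) * xi[j] / n
        # Gradient step followed by soft-thresholding (ISTA).
        w = [soft_threshold(wj - lr * gj, lr * lam) for wj, gj in zip(w, grad)]
    return w

# Hypothetical data: the label depends only on the first feature;
# the second feature is structured noise the L1 penalty should suppress.
X = [[x / 10.0, ((i * 7) % 5 - 2) / 2.0] for i, x in enumerate(range(-10, 10))]
y = [1 if xi[0] > 0 else 0 for xi in X]
w = lasso_logistic(X, y)
acc = sum((sum(wj * xj for wj, xj in zip(w, xi)) > 0) == (yi == 1)
          for xi, yi in zip(X, y)) / len(X)
print([round(v, 2) for v in w], acc)
```

The L1 penalty drives the weight on the uninformative feature toward zero, which is the feature-selection behavior that motivates using LASSO over plain logistic regression with many computational features.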

Abstract

Aortoiliac and lower extremity arterial atherosclerotic plaque burden is a risk factor for the development of visceral and peripheral ischemic and aneurysmal vascular disease. While prior research allows automated quantification of calcified plaque in these body regions using CT angiograms, no automated method exists to quantify soft plaque. We developed an automatic algorithm that defines the outer wall contour and wall thickness of vessels to quantify non-calcified plaque in CT angiograms of the chest, abdomen, pelvis, and lower extremities. The algorithm encodes the search space as a constrained graph and calculates the outer wall contour by deriving a minimum cost path through the graph, following the visible outer wall contour while minimizing path tortuosity. Our algorithm was statistically equivalent to a reference standard made by two reviewers. Absolute error was 1.9 ± 2.3% compared to the inter-observer variability of 3.9 ± 3.6%. Wall thickness in vessels with atherosclerosis was 3.4 ± 1.6 mm compared to 1.2 ± 0.4 mm in normal vessels. The algorithm shows promise as a tool for quantification of non-calcified plaque in CT angiography. When combined with previous research, our method has the potential to quantify both non-calcified and calcified plaque in all clinically significant systemic arteries, from the thoracic aorta to the arteries of the calf, over a wide range of diameters. This algorithm has the potential to enable risk stratification of patients and facilitate investigations into the relationships between asymptomatic atherosclerosis and a variety of behavioral, physiologic, pathologic, and genotypic conditions.
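The minimum cost path computation at the heart of the algorithm can be illustrated with Dijkstra's algorithm on a small graph. The graph below is hypothetical; in the actual application, nodes would be candidate outer-wall contour points and edge weights would combine image evidence with a tortuosity (bending) penalty:

```python
import heapq

def min_cost_path(nodes, edges, source, target):
    """Dijkstra shortest path on a weighted directed graph."""
    dist = {n: float("inf") for n in nodes}
    prev = {}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in edges.get(u, []):
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    # Reconstruct the path by walking predecessors back from the target.
    path, n = [], target
    while n != source:
        path.append(n)
        n = prev[n]
    path.append(source)
    return list(reversed(path)), dist[target]

# Tiny hypothetical graph: the route through "a" has the lower total cost.
nodes = ["s", "a", "b", "t"]
edges = {"s": [("a", 1.0), ("b", 2.5)], "a": [("t", 1.0)], "b": [("t", 0.1)]}
path, cost = min_cost_path(nodes, edges, "s", "t")
print(path, cost)  # ['s', 'a', 't'] 2.0
```

Constraining which nodes connect to which (the "constrained graph" above) is what keeps the recovered contour close to the visible wall while the edge weights discourage tortuous detours.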

Abstract

It is challenging to reproducibly measure and compare cancer lesions across numerous follow-up studies; the process is time-consuming and error-prone. In this paper, we show a method to automatically and reproducibly identify and segment abnormal lymph nodes in serial computed tomography (CT) exams. Our method leverages initial identification of enlarged (abnormal) lymph nodes in the baseline scan. We then identify an approximate region for the node in the follow-up scans using nonrigid image registration. The baseline scan is also used to locate regions of normal, non-nodal tissue surrounding the lymph node and to map them onto the follow-up scans, in order to reduce the search space for locating the lymph node on the follow-up scans. Adaptive region-growing and clustering algorithms are then used to obtain the final contours for segmentation. We applied our method to 24 distinct enlarged lymph nodes at multiple time points from 14 patients. The scan at the earlier time point was used as the baseline for evaluating the follow-up scan, resulting in 70 total test cases (e.g., a series of scans obtained at 4 time points results in 3 test cases). For each of the 70 cases, a reference standard was obtained by manual segmentation by a radiologist. Performance was assessed by comparing response evaluation criteria in solid tumors (RECIST) assessments made with our method against those made with the reference standard segmentations, and by calculating the node overlap ratio and Hausdorff distance between the computer- and radiologist-generated contours.
Compared to the reference standard, our method made the correct RECIST assessment in all 70 cases. The average overlap ratio was 80.7 ± 9.7% (s.d.), and the average Hausdorff distance was 3.2 ± 1.8 mm (s.d.). The concordance correlation between automated and manual segmentations was 0.978 (95% confidence interval 0.962, 0.984). The 100% agreement in our sample between our method and the reference standard with regard to RECIST classification suggests that the true disagreement rate is no more than 6%. Our automated lymph node segmentation method achieves excellent overall segmentation performance and provides equivalent RECIST assessment. It has the potential to streamline and improve cancer lesion measurement and tracking and to improve assessment of cancer treatment response.
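The two contour-comparison metrics used above, overlap ratio and Hausdorff distance, can be sketched for small 2D pixel sets (a Jaccard-style overlap is shown; the study's exact overlap definition may differ):

```python
import math

def overlap_ratio(a, b):
    # Jaccard-style overlap of two pixel sets: |A ∩ B| / |A ∪ B|.
    return len(a & b) / len(a | b)

def hausdorff(a, b):
    # Max over both directions of the farthest nearest-neighbor distance.
    def directed(p, q):
        return max(min(math.dist(u, v) for v in q) for u in p)
    return max(directed(a, b), directed(b, a))

# Hypothetical segmentations: a 3x3 square versus the same square
# shifted right by one pixel (as if the contours disagreed slightly).
auto = {(x, y) for x in range(3) for y in range(3)}
manual = {(x, y) for x in range(1, 4) for y in range(3)}
print(round(overlap_ratio(auto, manual), 2), hausdorff(auto, manual))  # 0.5 1.0
```

On real data the same computation runs over the voxel sets (and surface points) of the automated and radiologist segmentations.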

Abstract

Radiology images are generally disconnected from the metadata describing their contents, such as imaging observations ("semantic" metadata), which are usually described in text reports that are not directly linked to the images. We developed a system, the Biomedical Image Metadata Manager (BIMM), to (1) address the problem of managing biomedical image metadata and (2) facilitate the retrieval of similar images using semantic feature metadata. Our approach allows radiologists, researchers, and students to take advantage of the vast and growing repositories of medical image data by explicitly linking images to their associated metadata in a relational database that is globally accessible through a Web application. BIMM receives input in the form of standards-based metadata files using a Web service and parses and stores the metadata in a relational database, allowing efficient data query and maintenance capabilities. Upon querying BIMM for images, 2D regions of interest (ROIs) stored as metadata are automatically rendered onto preview images included in search results. The system's "match observations" function retrieves images with similar ROIs based on specific semantic features describing imaging observation characteristics (IOCs). We demonstrate that the system, using IOCs alone, can accurately retrieve images with diagnoses matching the query images, and we evaluate its performance on a set of annotated liver lesion images. BIMM has several potential applications, e.g., computer-aided detection and diagnosis, content-based image retrieval, automating medical analysis protocols, and gathering population statistics such as disease prevalence. The system provides a framework for decision support systems, potentially improving their diagnostic accuracy and selection of appropriate therapies.

Abstract

Diagnostic radiology requires accurate interpretation of complex signals in medical images. Content-based image retrieval (CBIR) techniques could be valuable to radiologists in assessing medical images by identifying similar images in large archives that could assist with decision support. Many advances have occurred in CBIR, and a variety of systems have appeared in nonmedical domains; however, permeation of these methods into radiology has been limited. Our goal in this review is to survey CBIR methods and systems from the perspective of application to radiology and to identify approaches developed in nonmedical applications that could be translated to radiology. Radiology images pose specific challenges compared with images in the consumer domain; they contain varied, rich, and often subtle features that need to be recognized in assessing image similarity. Radiology images also provide rich opportunities for CBIR: rich metadata about image semantics are provided by radiologists, and this information is not yet being used to its fullest advantage in CBIR systems. By integrating pixel-based and metadata-based image feature analysis, substantial advances of CBIR in medicine could ensue, with CBIR systems becoming an important tool in radiology practice.

Abstract

To develop a system to facilitate the retrieval of radiologic images that contain similar-appearing lesions and to perform a preliminary evaluation of this system with a database of computed tomographic (CT) images of the liver and an external standard of image similarity. Institutional review board approval was obtained for retrospective analysis of deidentified patient images. Thereafter, 30 portal venous phase CT images of the liver exhibiting one of three types of liver lesions (13 cysts, seven hemangiomas, 10 metastases) were selected. A radiologist used a controlled lexicon and a tool developed for complete and standardized description of lesions to identify and annotate each lesion with semantic features. In addition, this software automatically computed image features on the basis of image texture and boundary sharpness. Semantic and computer-generated features were weighted and combined into a feature vector representing each image. An independent reference standard was created for pairwise image similarity. This was used in a leave-one-out cross-validation to train weights that optimized the rankings of images in the database in terms of similarity to query images. Performance was evaluated by using precision-recall curves and normalized discounted cumulative gain (NDCG), a common measure of the usefulness of information retrieval. When used individually, groups of semantic, texture, and boundary features resulted in various levels of performance in retrieving relevant lesions. However, combining all features produced the best overall results. Mean precision was greater than 90% at all values of recall, and mean, best-case, and worst-case retrieval accuracy by NDCG was greater than 95%, 100%, and greater than 78%, respectively. Preliminary assessment of this approach shows excellent retrieval results for three types of liver lesions visible on portal venous CT images, warranting continued development and validation in a larger and more comprehensive database.

Abstract

The authors examine potential bias when using a reference reader panel as a "gold standard" for estimating the operating characteristics of CAD algorithms for detecting lesions. As an alternative, the authors propose latent class analysis (LCA), which does not require an external gold standard to evaluate diagnostic accuracy. A binomial model for multiple reader detections using different diagnostic protocols was constructed, assuming conditional independence of readings given true lesion status. Operating characteristics of all protocols were estimated by maximum likelihood LCA. Reader panel and LCA based estimates were compared using data simulated from the binomial model for a range of operating characteristics. LCA was applied to 36 thin-section thoracic computed tomography data sets from the Lung Image Database Consortium (LIDC): free-search markings of four radiologists were compared to markings from four different CAD-assisted radiologists. For real data, bootstrap-based resampling methods, which accommodate dependence in reader detections, are proposed to test hypotheses of differences between detection protocols. In simulation studies, reader panel based sensitivity estimates had an average relative bias (ARB) of -23% to -27%, significantly higher (p-value < 0.0001) than LCA (ARB, -2% to -6%). Specificity was well estimated by both the reader panel (ARB, -0.6% to -0.5%) and LCA (ARB, 1.4% to 0.5%). Among the 1145 lesion candidates LIDC considered, the LCA-estimated sensitivity of reference readers (55%) was significantly lower (p-value 0.006) than that of CAD-assisted readers (68%). Average false positives per patient for reference readers (0.95) was not significantly lower (p-value 0.28) than for CAD-assisted readers (1.27). Whereas a gold standard based on a consensus of readers may substantially bias sensitivity estimates, LCA may be a significantly more accurate and consistent means of evaluating diagnostic accuracy.
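A two-class latent class analysis of binary reader detections can be fit by expectation maximization under the conditional-independence assumption stated above. The sketch below simulates readers with known operating characteristics and recovers them without any gold standard; all parameter values are illustrative:

```python
import random

def lca_em(data, iters=300):
    """Two-class latent class analysis by EM for binary detections.
    data[i][r] is reader r's call (0/1) on candidate i. Assumes readers
    are conditionally independent given the latent true lesion status."""
    n, R = len(data), len(data[0])
    pi, se, sp = 0.5, [0.7] * R, [0.7] * R  # prevalence, sensitivities, specificities
    for _ in range(iters):
        # E-step: posterior probability each candidate is a true lesion.
        post = []
        for x in data:
            l1, l0 = pi, 1 - pi
            for r in range(R):
                l1 *= se[r] if x[r] else 1 - se[r]
                l0 *= 1 - sp[r] if x[r] else sp[r]
            post.append(l1 / (l1 + l0))
        # M-step: update prevalence and per-reader operating characteristics.
        pi = sum(post) / n
        for r in range(R):
            se[r] = sum(p * x[r] for p, x in zip(post, data)) / sum(post)
            sp[r] = sum((1 - p) * (1 - x[r]) for p, x in zip(post, data)) / (n - sum(post))
    return pi, se, sp

# Simulated candidates with known (hypothetical) truth: prevalence 0.4,
# four readers, each with sensitivity 0.9 and specificity 0.85.
random.seed(1)
true_se, true_sp, prev = 0.9, 0.85, 0.4
data = []
for _ in range(1000):
    lesion = random.random() < prev
    data.append([int(random.random() < (true_se if lesion else 1 - true_sp))
                 for _ in range(4)])
pi, se, sp = lca_em(data)
print(round(pi, 2), [round(v, 2) for v in se], [round(v, 2) for v in sp])
```

Recovering the generating parameters from the detections alone is exactly what lets LCA sidestep a biased reader-panel gold standard.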

Abstract

The diagnostic performance of radiologists using incremental CAD assistance for lung nodule detection on CT, and their temporal variation in performance during CAD evaluation, was assessed. CAD was applied to 20 chest multidetector-row computed tomography (MDCT) scans containing 190 non-calcified nodules ≥3 mm. After free search, three radiologists independently evaluated a maximum of 50 CAD detections per patient. Multiple free-response ROC curves were generated for free search and successive CAD evaluation, by incrementally adding CAD detections one at a time to the radiologists' performance. The sensitivity for free search was 53% (range, 44%-59%) at 1.15 false positives (FP)/patient and increased with CAD to 69% (range, 59%-82%) at 1.45 FP/patient. CAD evaluation initially resulted in a sharp rise in sensitivity of 14% with a minimal increase in FP over a period of 100 s, followed by flattening of the sensitivity increase to only 2%. This transition resulted from a greater prevalence of true positive (TP) versus FP detections early in CAD evaluation and not from a temporal change in readers' performance. The time spent on TP (9.5 s ± 4.5 s) and false negative (FN) (8.4 s ± 6.7 s) detections was similar; FP decisions took two to three times longer (14.4 s ± 8.7 s) than true negative (TN) decisions (4.7 s ± 1.3 s). When CAD output is ordered by CAD score, an initial period of rapid performance improvement slows significantly over time because of non-uniformity in the distribution of TP CAD output, not because of changing reader performance over time.
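Building a free-response operating characteristic by incrementally accepting score-ordered CAD detections, as described above, reduces to cumulative counting of TPs and FPs on top of the free-search baseline. A sketch with hypothetical detection labels (the baseline counts are chosen to approximate the reported free-search operating point of 53% sensitivity at 1.15 FP/patient):

```python
def froc_points(cad_hits, n_lesions, n_patients, baseline_tp, baseline_fp):
    """Operating points as CAD detections (ordered by CAD score) are
    accepted one at a time on top of the radiologist's free search.
    cad_hits[k] is True if the k-th CAD detection is a true lesion."""
    tp, fp, points = baseline_tp, baseline_fp, []
    for hit in cad_hits:
        if hit:
            tp += 1
        else:
            fp += 1
        points.append((fp / n_patients, tp / n_lesions))  # (FP/patient, sensitivity)
    return points

# Hypothetical reading: true positives cluster early in the score-ordered
# CAD output, producing the sharp-then-flat sensitivity rise described above.
cad_hits = [True, True, True, False, True, False, False, False]
pts = froc_points(cad_hits, n_lesions=190, n_patients=20,
                  baseline_tp=100, baseline_fp=23)
for fp_rate, sens in pts:
    print(round(fp_rate, 2), round(sens, 3))
```

Plotting sensitivity against FP/patient over these points traces the incremental-CAD curve for one reader.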

Abstract

To identify challenges and opportunities in imaging informatics that can lead to the use of images for discovery and that can potentially improve the diagnostic accuracy of imaging professionals. Recent articles on imaging informatics and related articles from PubMed were reviewed and analyzed, and new developments and challenges for imaging informatics research are identified and discussed. While much literature continues to be devoted to the traditional imaging informatics topics of image processing, visualization, and computerized detection, three new trends are emerging: (1) development of ontologies to describe radiology reports and images, (2) structured reporting and image annotation methods to make image semantics explicit and machine-accessible, and (3) applications that use semantic image information for decision support to improve radiologist interpretation performance. The informatics methods being developed have similarities and synergies with recent work in the biomedical informatics community that leverages large high-throughput data sets, and future research in imaging informatics will build on these advances to enable discovery by mining large image databases. Imaging informatics is beginning to develop and apply knowledge representation and analysis methods to image datasets. This type of work, already commonplace in biomedical research with large-scale molecular and clinical datasets, will lead to new ways for computers to work with image data. These advances hold promise for integrating imaging with the rest of the patient record as well as with molecular data, for new data-driven discoveries in imaging analogous to those in bioinformatics, and for improved quality of radiology practice.

Abstract

The authors develop a method to visualize the abdominal aorta and its branches, obtained by CT or MR angiography, in a single 2D stylistic image without overlap among branches. The abdominal aortic vasculature is modeled as an articulated object whose underlying topology is a rooted tree. The inputs to the algorithm are the 3D centerlines of the abdominal aorta, its branches, and their associated diameter information. The visualization problem is formulated as an optimization problem that finds a spatial configuration of the bounding boxes of the centerlines most similar to the projection of the input into a given viewing direction (e.g., anteroposterior), while not introducing intersections among the boxes. The optimization algorithm minimizes a score function regarding the overlap of the bounding boxes and the deviation from the input. The output of the algorithm is used to produce a stylistic visualization, made of the 2D centerlines modulated by the associated diameter information, on a plane. The authors performed a preliminary evaluation by asking three radiologists to label 366 arterial branches from the 30 visualizations of five cases produced by the method. Each of the five patients was presented in six different variant images, selected from ten variants with the three lowest and three highest scores. For each label, they assigned confidence and distortion ratings (low/medium/high). They studied the association between the quantitative metrics measured from the visualization and the subjective ratings by the radiologists. All resulting visualizations were free from branch overlaps. Labeling accuracies of the three readers were 93.4%, 94.5%, and 95.4%, respectively. For the total of 1098 samples, the distortion ratings were low: 77.39%, medium: 10.48%, and high: 12.12%. The confidence ratings were low: 5.56%, medium: 16.50%, and high: 77.94%.
The association study shows that the proposed quantitative metrics can predict a reader's subjective ratings and suggests that the visualization with the lowest score should be selected for readers. The method for eliminating misleading false intersections in 2D projections of the abdominal aortic tree conserves the overall shape and does not diminish accurate identifiability of the branches.

Abstract

Existing density- and gradient-based automated centerline-extraction algorithms fail in severely diseased or occluded arterial segments for the generation of curved planar reformations (CPRs). We aimed to quantitatively and qualitatively assess the precision of a knowledge-based centerline-extraction algorithm in patients with occluded femoropopliteal artery (FPA). Computed tomography angiograms of 38 FPA occlusions (mean length 120 mm) were retrospectively identified. Reference centerlines were determined as the mean of eight manual expert readings. Each occlusion was also interpolated using a new knowledge-based algorithm (partial vector space projection [PVSP]), which uses shape information extracted from a separate database of 30 nondiseased FPAs. Precision of PVSP was quantified as the maximum departure error (MDE) from the standard of reference and the proportion of the interpolated centerlines remaining within an assumed vessel radius of 3 mm. Multiple regression was used to determine the factors predicting the precision of the algorithm. CPR quality was independently assigned by two readers. The mean MDE (in mm) for occlusion lengths of <50 mm, 50-100 mm, 100-200 mm, and >200 mm was 0.95, 1.19, 1.40, and 2.25 for manual readings and 1.68, 2.90, 9.43, and 19.95 for PVSP, respectively. MDEs of the algorithm were completely contained within the assumed 3-mm vessel radius in 20 of 38 occlusions. CPR quality was rated diagnostic by both readers in 23 of 38 occlusions. Shape-based centerline extraction of FPA occlusions in lower-extremity CTA is feasible and independent of local density and gradient information. PVSP centerline extraction allows interpolation of occlusions up to 100 mm within the variability of manually derived centerlines.
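The maximum departure error (MDE) is the largest distance from any point of the interpolated centerline to the reference centerline. A 2D sketch with hypothetical coordinates (the actual centerlines are 3D curves):

```python
import math

def point_to_segment(p, a, b):
    # Distance from point p to the line segment from a to b.
    ax, ay = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * ax + (p[1] - a[1]) * ay) / (ax * ax + ay * ay)
    t = max(0.0, min(1.0, t))  # clamp the projection onto the segment
    return math.dist(p, (a[0] + t * ax, a[1] + t * ay))

def max_departure_error(candidate, reference):
    """Maximum distance from any candidate centerline point to the
    reference centerline, treated as a polyline of points."""
    return max(min(point_to_segment(p, a, b)
                   for a, b in zip(reference, reference[1:]))
               for p in candidate)

# Hypothetical centerlines (mm): the interpolation sags 2 mm mid-occlusion.
reference = [(0, 0), (10, 0), (20, 0), (30, 0)]
candidate = [(0, 0), (10, 1.5), (15, 2.0), (20, 1.0), (30, 0)]
mde = max_departure_error(candidate, reference)
print(mde)  # 2.0
```

Comparing this value against the assumed 3 mm vessel radius gives the containment criterion used above.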

Abstract

The purpose of this work was to measure the accuracy of dual-energy computed tomography for identifying iodine and calcium and to determine the effects of calcium suppression in phantoms and lower-extremity computed tomographic (CT) angiographic data sets.

Using a three-material basis decomposition method for 80- and 140-kVp data, the accuracy of correctly identified contrast medium and calcium voxels and the mean attenuation before and after calcium suppression were computed. Experiments were first performed on a phantom of homogeneous contrast medium and hydroxyapatite samples with mean attenuation of 57.2, 126, and 274 Hounsfield units (HU) and 50.0, 122, and 265 HU, respectively. Experiments were repeated in corresponding attenuation groups of voxels from manually segmented bones and contrast medium-enhanced arteries in a lower-extremity CT angiographic data set with mean attenuation of 293 and 434 HU, respectively. Calcium suppression in atherosclerotic plaques of a cadaveric specimen was also studied, using micro-computed tomography as a reference, and in a lower-extremity CT angiographic data set with substantial below-knee calcified plaques.

Higher concentrations showed increased accuracy of iodine and hydroxyapatite identification of 87.4%, 99.7%, and 99.9% and 88.0%, 95.0%, and 99.9%, respectively. Calcium suppression was also more accurate with higher concentrations of iodine and hydroxyapatite, with mean attenuation after suppression of 47.1, 122, and 263 HU and 7.14, 11.6, and 12.6 HU, respectively. Similar patterns were seen in the corresponding attenuation groups of the contrast medium-enhanced arteries and bone in the clinical data set, which had overall accuracy of 81.3% and 78.9%, respectively, and mean attenuation after calcium suppression of 254 and 73.7 HU, respectively. The suppression of calcified atherosclerotic plaque was accurate compared with the micro-CT reference; however, the suppression in the clinical data set showed probable inappropriate suppression of the small vessels.

Dual-energy computed tomography can detect and differentiate between contrast medium and calcified tissues, but its accuracy is dependent on the CT density of tissues and limited when CT attenuation is low.
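The decomposition step can be sketched in simplified form. The snippet below solves a two-material (iodine/calcium) version of the problem rather than the three-material method used in the study, and the basis attenuation values in `A` are invented for illustration; real coefficients come from scanner calibration.

```python
import numpy as np

# Hypothetical basis attenuations (HU per unit concentration) at the two
# tube voltages; illustrative numbers only, not calibration data.
A = np.array([[30.0, 20.0],    # 80 kVp:  [iodine, calcium]
              [15.0, 16.0]])   # 140 kVp: [iodine, calcium]

def decompose(hu_80, hu_140):
    """Solve the 2x2 linear system for iodine/calcium concentrations
    from the attenuations measured at the two tube voltages."""
    return np.linalg.solve(A, np.array([hu_80, hu_140], dtype=float))

def suppress_calcium(hu_80, hu_140):
    """Return the 80 kVp attenuation with the calcium contribution removed."""
    iodine, calcium = decompose(hu_80, hu_140)
    return hu_80 - calcium * A[0, 1]
```

A voxel mixing two units of iodine and three of calcium under this model measures 120 HU at 80 kVp and 78 HU at 140 kVp; the decomposition recovers the concentrations, and suppression leaves only the iodine contribution (60 HU).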

Abstract

Segmentation of the lungs in chest computed tomography (CT) is often performed as a preprocessing step in lung imaging, a task that is complicated especially in the presence of disease. This paper presents a lung segmentation algorithm called adaptive border marching (ABM). Its novelty lies in the fact that it smoothes the lung border in a geometric way and can be used to reliably include juxtapleural nodules while minimizing oversegmentation of adjacent regions such as the abdomen and mediastinum. Our experiments using 20 datasets demonstrate that this computational geometry algorithm can re-include all juxtapleural nodules and achieve an average oversegmentation ratio of 0.43% and an average undersegmentation ratio of 1.63% relative to an expert-determined reference standard. The segmentation time of a typical case is under 1 min on a typical PC. Compared with other available methods, ABM is more robust, more efficient, and more straightforward to implement, and once the chest CT images are input, no further user interaction is needed. The clinical impact of this method lies in potentially avoiding false-negative CAD findings due to juxtapleural nodules and improving the accuracy of volumetry and doubling-time estimates.
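The chord-bridging idea behind border marching can be sketched in 2D. This is a simplified greedy version under assumed conventions (an open polyline, a single chord-length threshold); the published ABM criterion is adaptive and also examines the deviation of the bridged points, which is omitted here, so this sketch bridges indentations and bumps alike.

```python
import numpy as np

def adaptive_border_march(contour, max_chord):
    """Greedy sketch of border marching on a 2D boundary polyline: from
    each kept point, jump to the farthest subsequent point whose
    straight-line chord is no longer than max_chord, bridging small
    indentations (e.g. around juxtapleural nodules) while leaving
    larger bays intact."""
    contour = np.asarray(contour, dtype=float)
    n = len(contour)
    kept = [0]
    i = 0
    while i < n - 1:
        j = i + 1
        # extend the chord to the farthest point that stays short enough
        for k in range(i + 2, n):
            if np.linalg.norm(contour[k] - contour[i]) <= max_chord:
                j = k
            else:
                break
        kept.append(j)
        i = j
    return contour[kept]
```

On a boundary with a one-point notch, a chord threshold slightly above the notch span removes the indented point while keeping the rest of the border.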

Abstract

Accurate arterial centerline extraction is essential for comprehensive visualization in CT angiography. Time-consuming manual tracking is needed when automated methods fail to track centerlines through severely diseased and occluded vessels. A previously described algorithm, Partial Vector Space Projection (PVSP), which uses vessel shape information from a database to bridge occlusions of the femoropopliteal artery, has limited accuracy in long (>100 mm) occlusions. In this article we introduce a new algorithm, Intermediate Point Detection (IPD), which uses calcifications in the occluded artery to provide additional information about the location of the centerline and thereby improve PVSP performance. It identifies calcified plaque in image space to find the most useful point within the occlusion for refining the PVSP estimate. Candidates for calcified plaque are automatically identified on axial CT slices in a restricted region around the estimate obtained from PVSP. A modified Canny edge detector identifies the edge of the calcified plaque, and a convex polygon fit is used to find the edge of the calcification bordering the wall of the vessel. The Hough transform for circles estimates the center of the vessel on the slice, which serves as a candidate intermediate point. Each candidate is characterized by two scores based on radius and relative position within the occluded segment, and a polynomial function is constructed to define a net score representing the potential benefit of using this candidate for improving the centerline.

We tested our approach in 44 femoropopliteal artery occlusions of lengths up to 398 mm in 30 patients with peripheral arterial occlusive disease. Centerlines were tracked manually by four experts, twice each, with their mean serving as the reference standard. All occlusions were first interpolated with PVSP using a database of femoropopliteal arterial shapes obtained from a total of 60 subjects. Occlusions longer than 80 mm (N = 20) were then processed with the IPD algorithm, provided calcifications were found (N = 14). We used the maximum point-wise distance of an interpolated curve from the reference standard as our error metric. The IPD algorithm significantly reduced the average error of the initial PVSP from 2.76 to 1.86 mm (p < 0.01). The error was less than the clinically desirable 3 mm (smallest radius of the femoropopliteal artery) in 13 of 14 occlusions. The IPD algorithm achieved results within the range of the human readers in 11 of 14 cases. We conclude that the additional use of sparse but specific image space information, such as calcified atherosclerotic plaque, can substantially improve the performance of a previously described knowledge-based method to restore the centerlines of femoropopliteal arterial occlusions.
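The Hough-transform step that estimates the vessel center from a calcification edge can be sketched with a minimal voting scheme. This is an illustration under simplifying assumptions (a single known radius, integer grid cells), not the detector used in the paper.

```python
import numpy as np

def hough_circle_center(edge_points, radius, grid_shape):
    """Minimal Hough vote for the center of a circle of known radius:
    every edge point votes once for each grid cell at distance `radius`
    from it; the cell with the most votes is the center estimate."""
    acc = np.zeros(grid_shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    for x, y in edge_points:
        cx = np.rint(x - radius * np.cos(thetas)).astype(int)
        cy = np.rint(y - radius * np.sin(thetas)).astype(int)
        ok = (cx >= 0) & (cx < grid_shape[0]) & (cy >= 0) & (cy < grid_shape[1])
        # one vote per cell per edge point, even if several thetas round there
        cells = np.unique(np.stack([cx[ok], cy[ok]], axis=1), axis=0)
        acc[cells[:, 0], cells[:, 1]] += 1
    return np.unravel_index(np.argmax(acc), grid_shape)
```

Points sampled from a circle of radius 5 around (10, 10) vote most heavily at the true center, even when only an arc of the circle (e.g. the calcified portion of the wall) is available.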

Abstract

We developed an automated algorithm for bone removal in computed tomographic angiographic images that identifies and deletes connections between bone and vessels. Our automated algorithm is significantly faster than manual methods (2.45 minutes vs 73 minutes) and only generates about 2 small artifactual deletions per patient, mostly in the region of the ankle. Image quality was equivalent to manual methods. It shows promise as a tool for fast and accurate postprocessing of computed tomographic angiograms.

Abstract

Computer-aided detection (CAD) algorithms identify locations in computed tomographic (CT) images of the colon that are most likely to contain polyps. Existing CAD methods treat the CT data as a voxelized volume image and estimate a curvature-based feature at the mucosal surface voxels. However, curvature is a smooth notion, while our data are discrete and noisy; as a second-order differential quantity, curvature amplifies noise. In this paper, we present the smoothed shape operators (SSO) method, which uses a geometry processing approach. We extract a triangle mesh representation of the colon surface, and estimate curvature on this surface using the shape operator. We then smooth the shape operators on the surface iteratively. Throughout, we use techniques explicitly designed for discrete geometry; all our computation occurs on the surface, rather than in the voxel grid. We evaluate our algorithm on patient data and provide free-response receiver operating characteristic performance analysis over all size ranges of polyps, with confidence intervals for our performance estimates. We compare our performance with the surface normal overlap (SNO) method on the same data. A preliminary evaluation of our method on 35 patients yielded the following results (polyp diameter range; sensitivity; false positives/case): (≥10 mm; 100%; 17.5), (5-10 mm; 89.7%; 21.23), (<5 mm; 59.1%; 23.9), and (overall; 80.3%; 23.9). The evaluation of the SNO method yielded: (≥10 mm; 75%; 17.5), (5-10 mm; 43.1%; 21.23), (<5 mm; 15.9%; 23.9), and (overall; 38.5%; 23.9).
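The iterative on-surface smoothing step can be sketched with a scalar stand-in for the full shape operator. This is a minimal sketch, assuming per-vertex values and a precomputed mesh adjacency list; the published method smooths the operator tensors themselves with discrete-geometry weights.

```python
import numpy as np

def smooth_vertex_values(values, neighbors, iterations=10, lam=0.5):
    """Iteratively relax per-vertex quantities (here a scalar curvature
    as a stand-in for the shape operator) toward the mean of each
    vertex's mesh neighbors, damping voxelization noise."""
    v = np.asarray(values, dtype=float).copy()
    for _ in range(iterations):
        # mean over each vertex's neighbor set, then blend with the old value
        nbr_mean = np.array([v[list(n)].mean() for n in neighbors])
        v = (1 - lam) * v + lam * nbr_mean
    return v
```

One iteration on a three-vertex chain with a noisy spike in the middle already pulls all values toward a common level.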

Abstract

Magnetic resonance diffusion-weighted imaging coupled with fiber tractography (DFT) is the only non-invasive method for measuring white matter pathways in the living human brain. DFT is often used to discover new pathways. But there are also many applications, particularly in visual neuroscience, in which we are confident that two brain regions are connected, and we wish to find the most likely pathway forming the connection. In several cases, current DFT algorithms fail to find these candidate pathways. To overcome this limitation, we have developed a probabilistic DFT algorithm (ConTrack) that identifies the most likely pathways between two regions. We introduce the algorithm in three parts: a sampler to generate a large set of potential pathways, a scoring algorithm that measures the likelihood of a pathway, and an inferential step to identify the most likely pathways connecting two regions. In a series of experiments using human data, we show that ConTrack estimates known pathways at positions that are consistent with those found using a high quality deterministic algorithm. Further we show that separating sampling and scoring enables ConTrack to identify valid pathways, known to exist, that are missed by other deterministic and probabilistic DFT algorithms.
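The separation of sampling and scoring that distinguishes ConTrack can be sketched abstractly. Everything below is illustrative: jittered straight lines stand in for the pathway sampler, and a voxel-likelihood grid stands in for the diffusion-data scoring model; none of it is the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pathways(start, end, n_paths=200, n_steps=20, jitter=0.5):
    """Sample candidate pathways as jittered straight lines between the
    two regions; endpoints stay fixed in both regions."""
    t = np.linspace(0.0, 1.0, n_steps)[:, None]
    base = (1 - t) * np.asarray(start, float) + t * np.asarray(end, float)
    noise = rng.normal(0.0, jitter, size=(n_paths, n_steps, len(start)))
    noise[:, 0] = noise[:, -1] = 0.0
    return base[None] + noise

def score_pathway(path, likelihood):
    """Score = summed voxel likelihood along the path (a stand-in for
    the diffusion-data likelihood model)."""
    idx = np.rint(path).astype(int)
    return likelihood[idx[:, 0], idx[:, 1]].sum()

def top_pathways(paths, likelihood, k=5):
    """Inferential step: keep the k highest-scoring sampled pathways."""
    scores = np.array([score_pathway(p, likelihood) for p in paths])
    order = np.argsort(scores)[::-1][:k]
    return paths[order], scores[order]
```

With a high-likelihood corridor in the grid, the retained pathways are the sampled candidates that best follow it.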

Abstract

Measuring the properties of the white matter pathways from retina to cortex in the living human brain will have many uses for understanding visual performance and guiding clinical treatment. For example, identifying the Meyer's loop portion of the optic radiation (OR) has clinical significance because of the large number of temporal lobe resections. We use diffusion tensor imaging and fiber tractography (DTI-FT) to identify the most likely pathway between the lateral geniculate nucleus (LGN) and the calcarine sulcus in sixteen hemispheres of eight healthy volunteers. Quantitative population comparisons show that DTI-FT estimates match published postmortem dissections with a spatial precision of about 1 mm. The OR can be divided into three bundles that are segmented based on the direction of the fibers as they leave the LGN: Meyer's loop, central, and direct. The longitudinal and radial diffusivities of the three bundles do not differ within the measurement noise; there is a small difference in the radial diffusivity between the right and left hemispheres. We find that the anterior tip of Meyer's loop is 28 +/- 3 mm posterior to the temporal pole, and the population range is 1 cm. Hence, it is important to identify the location of this bundle in individual subjects or patients.

Abstract

A challenging problem in image segmentation is preventing boundary leakage through poorly resolved edges, where not enough local information is available. In this article, we propose a new directional distance aided (DDA) image segmentation method, formulated under the level set framework, to prevent such leakage. At each evolution step, the zero level set is extracted and smoothed. For each point on the zero level set, a new directional distance (DD) term, defined as the vector starting from the point and pointing to its counterpart on the smoothed version of the zero level set, is calculated to measure its "degree of protrusion." The evolution speed of points that are considered to be protruding out is penalized. Other terms, e.g., curvature and gradient terms and user-specified constraints, are used along with the DD term to influence the level set evolution. Our smoothing technique augments traditional Gaussian smoothing with a new antishrinkage operation. The novelty of our method is twofold: the DD term does not depend on intensity or gradient boundaries to regulate the regional shape and therefore helps prevent leakage, and the method incorporates vertex-based curve/surface smoothing into curve evolution under the level set framework. Experimental results show that the new DDA method achieves promising results and reasonable stability in segmenting simulated objects as well as abdominal aortic aneurysms in computed tomography (CT) angiograms, in both 2D and 3D, by preventing leakage into adjacent structures while preserving local shape details.
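The DD term can be sketched on a closed 2D curve. This is a minimal illustration, assuming simple neighbor-averaged smoothing in place of the Gaussian-plus-antishrinkage scheme described above.

```python
import numpy as np

def directional_distance(curve, smooth_iters=5, lam=0.5):
    """For each point on a closed 2D curve, the vector from the point to
    its counterpart on a neighbor-averaged (smoothed) copy of the curve;
    its magnitude measures the point's 'degree of protrusion'."""
    c = np.asarray(curve, dtype=float)
    s = c.copy()
    for _ in range(smooth_iters):
        # blend each point with the midpoint of its two cyclic neighbors
        s = (1 - lam) * s + lam * 0.5 * (np.roll(s, 1, axis=0) +
                                         np.roll(s, -1, axis=0))
    return s - c
```

On a near-circular curve with a single protruding point, the DD vector is largest at that point and points inward, so penalizing its evolution speed suppresses the leak.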

Abstract

Computer aided detection (CAD) in computed tomography colonography (CTC) aims at detecting colonic polyps, the precursors of colon cancer. In this work, we propose a colon wall evolution algorithm, polyp enhancing level sets (PELS), based on the level-set formulation, that regularizes and enhances polyps as a preprocessing step for CTC CAD algorithms. The underlying idea is to evolve the polyps toward spherical protrusions on the colon wall while keeping other structures, such as haustral folds, relatively unchanged, thereby potentially improving the performance of CTC CAD algorithms, especially for smaller polyps. To evaluate our method, we conducted a pilot study using an arbitrarily chosen CTC CAD method, the surface normal overlap (SNO) CAD algorithm, on a nine-patient CTC data set with 47 polyps of sizes ranging from 2.0 to 17.0 mm in diameter. PELS increased the maximum sensitivity by 8.1% (from 21/37 to 24/37) for small polyps of sizes ranging from 5.0 to 9.0 mm in diameter, accompanied by a statistically significant separation between small polyps and false positives. PELS did not change CTC CAD performance significantly for larger polyps.

Abstract

We developed a classifier that permits transparent rendering of both tagging material and air to facilitate interpretation of tagged computed tomographic (CT) colonography. With this technique, a reader can simultaneously appreciate polyps on endoluminal views both covered with tagging material and against air, along with unmodified 2-dimensional CT images. Evaluated with 49 polyps from 26 patients (data from public National Library of Medicine, Health Insurance Portability and Accountability Act compliant), 3 readers were able to determine the presence/absence of polyps in tagged locations with equivalent accuracy compared with polyps in air. This method offers an alternative way to visualize tagged CT colonography.

Abstract

Curved planar reformation allows comprehensive visualization of arterial flow channels, providing information about calcified and noncalcified plaques and degrees of stenoses. Existing semiautomated centerline-extraction algorithms for curved planar reformation generation fail in severely diseased and occluded arteries. We explored whether contralateral shape information could be used to reconstruct centerlines through femoropopliteal occlusions. We obtained CT angiography data sets of 29 subjects (16m/13f, 19-86yo) without peripheral arterial occlusive disease and five consecutive subjects (1m/4f, 54-85yo) with unilateral femoropopliteal arterial occlusions. A gradient-based method was used to extract the femoropopliteal centerlines in nondiseased segments. Centerlines of the five occluded segments were manually determined by four experts, two times each. We interpolated missing centerlines in 2475 simulated occlusions of various occlusion lengths in nondiseased subjects. We used different curve registration methods (reflection, similarity, affine, and global polynomial) to align the nonoccluded segments, matched the end points of the occluded segments to the corresponding patent end points, and recorded maximum Euclidean distances to the known centerlines. We also compared our algorithm to an existing knowledge-based PCA interpolation algorithm using the nondiseased subjects. In the five subjects with real femoropopliteal occlusions, we measured the maximum Euclidean distance and the percentage of the interpolation that remained within a typical 3 mm radius vessel. In the nondiseased subjects, we found that the rigid registration methods were not significantly (p<0.750) different among themselves but were more accurate than the nonrigid methods (p<0.001). 
In simulations using nondiseased subjects, our method produced centerlines that stayed within 3 mm of a semiautomatically tracked centerline in occlusions up to 100 mm in length; however, the PCA method was significantly more accurate for all occlusion lengths. In the actual clinical cases, we found the following [occlusion length (mm):error (mm)]: 16.5:0.775, 42.0:1.54, 79.9:1.82, 145:3.23, and 292:6.13, which were almost always more accurate than the PCA algorithm. We conclude that the use of contralateral shape information, when available, is a promising method for the interpolation of centerlines through arterial occlusions.

Abstract

We present a novel algorithm, Partial Vector Space Projection (PVSP), for estimation of missing data given a database of similar datasets, and demonstrate its use in restoring the centerlines through simulated occlusions of femoropopliteal arteries, derived from CT angiography data. The algorithm performs Principal Component Analysis (PCA) on a database of centerlines to obtain a set of orthonormal basis functions defined in a scaled and oriented frame of reference, and assumes that any curve not in the database can be represented as a linear combination of these basis functions. Using a database of centerlines derived from 30 normal femoropopliteal arteries, we evaluated the algorithm, and compared it to a correlation-based linear Minimum Mean Squared Error (MMSE) method, by deleting portions of a centerline for several occlusion lengths (OL: 10 mm, 25 mm, 50 mm, 75 mm, 100 mm, 125 mm, 150 mm, 175 mm and 200 mm). For each simulated occlusion, we projected the partially known dataset on the set of basis functions derived from the remaining 29 curves to restore the missing segment. We calculated the maximum point-wise distance (Maximum Departure or MD) between the actual and estimated centerline as the error metric. Mean (standard deviation) of MD increased from 0.18 (0.14) to 4.35 (2.23) as OL increased. The results were fairly accurate even for large occlusion lengths and are clinically useful. The results were consistently better than those using the MMSE method. Multivariate regression analysis found that OL and the root-mean-square error in the 2 cm proximal and distal to the occlusion accounted for most of the error.
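The projection step of PVSP can be sketched compactly. This is a minimal sketch of the PCA-projection idea under simplifying assumptions: curves are treated as 1D sample vectors, and the normalization into a scaled and oriented frame of reference described above is omitted; `pvsp_restore` is a hypothetical helper name.

```python
import numpy as np

def pvsp_restore(database, partial, known_mask, n_components=5):
    """Fit the PCA basis of a curve database to the known samples of a
    partially observed curve (least squares over the non-occluded
    samples), then read off the reconstruction over the missing span."""
    X = np.asarray(database, dtype=float)            # (n_curves, n_samples)
    mean = X.mean(axis=0)
    # principal directions via SVD of the centered database
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:n_components]                        # (k, n_samples)
    # least-squares coefficients using only the known samples
    rhs = (np.asarray(partial, dtype=float) - mean)[known_mask]
    coeffs, *_ = np.linalg.lstsq(basis[:, known_mask].T, rhs, rcond=None)
    return mean + coeffs @ basis
```

If the database curves and the query curve are drawn from the same low-dimensional shape model, fitting the basis on the visible samples reconstructs the occluded span as well.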

Abstract

The tracking of lung nodules across computed tomography (CT) scans acquired at different times for the same patient is helpful for the determination of malignancy. We are developing a nodule registration system to facilitate this process. We propose to use a semi-rigid method that considers principal structures surrounding the nodule and allows relative movements among the structures. The proposed similarity metric, which evaluates both the image correlation and the degree of elastic deformation amongst the structures, is maximized by a two-layered optimization method, employing a simulated annealing framework. We tested our method by simulating five cases that represent physiological deformation as well as different nodule shape/size changes with time. Each case is made up of a source and target scan, where the source scan consists of a nodule-free patient CT volume into which we inserted ten simulated lung nodules, and the target scan is the result of applying a known, physiologically based nonrigid transformation to the nodule-free source scan, into which we inserted modified versions of the corresponding nodules at the same, known locations. Five different modification strategies were used, one for each of the five cases: (1) nodules maintain size and shape, (2) nodules disappear, (3) nodules shrink uniformly by a factor of 2, (4) nodules grow uniformly by a factor of 2, and (5) nodules grow nonuniformly. We also matched 97 real nodules in pairs of scans (acquired at different times) from 12 patients and compared our registration to a radiologist's visual determination. In the simulation experiments, the mean absolute registration errors were 1.0+/-0.8 mm (s.d.), 1.1+/-0.7 mm (s.d.), 1.0+/-0.7 mm (s.d.), 1.0+/-0.6 mm (s.d.), and 1.1+/-0.9 mm (s.d.) for the five cases, respectively. For the 97 nodule pairs in 12 patient scans, the mean absolute registration error was 1.4+/-0.8 mm (s.d.).
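The simulated-annealing machinery behind the optimizer can be sketched generically. This is a plain single-layer annealer maximizing an arbitrary similarity function, not the two-layered method described above; the score function and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulated_annealing(score, x0, step=0.5, t0=1.0, cooling=0.95, iters=500):
    """Generic simulated-annealing maximizer of a similarity metric."""
    x = np.asarray(x0, dtype=float)
    best_x, best_s = x.copy(), score(x)
    s, t = best_s, t0
    for _ in range(iters):
        cand = x + rng.normal(0.0, step, size=x.shape)
        cs = score(cand)
        # accept uphill moves always; downhill moves with a
        # temperature-dependent probability that shrinks as t cools
        if cs > s or rng.random() < np.exp((cs - s) / t):
            x, s = cand, cs
            if s > best_s:
                best_x, best_s = x.copy(), s
        t *= cooling
    return best_x, best_s
```

The early high-temperature phase lets the search escape local maxima of the similarity metric; the late near-greedy phase refines the transform parameters.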

Abstract

For decades, conventional 2D roadmapping has been the method of choice for image-based guidewire navigation during endovascular procedures. Only recently have 3D roadmapping techniques become available that are based on the acquisition and reconstruction of a 3D image of the vascular tree. In this paper, we present a new image-based navigation technique called RoRo (Rotational Roadmapping) that eliminates the guesswork inherent in the conventional 2D method, but does not require a 3D image. Our preliminary clinical results show that there are situations in which RoRo is preferred over the existing two methods, demonstrating its potential for filling a clinical niche and complementing the spectrum of available navigation tools.

Abstract

X-ray images are often used to guide minimally invasive procedures in interventional radiology. The use of a preoperatively obtained 3D volume can enhance the visualization needed for guiding catheters and other surgical devices. However, for intraoperative usefulness, the 3D dataset needs to be registered to the 2D x-ray images of the patient. We investigated the effect of targeting subvolumes of interest in the 3D datasets and registering the projections with C-arm x-ray images. We developed an intensity-based 2D/3D rigid-body registration using a Monte Carlo-based hybrid algorithm as the optimizer, using a single view for registration. Pattern intensity (PI) and mutual information (MI) were the two metrics tested. We used normalization of the rays to address problems due to the truncation in 3D necessary for targeting. We tested the algorithm on a C-arm x-ray image of a pig's head and a 3D dataset reconstructed from multiple views of the C-arm. PI and MI were comparable in performance. For two subvolumes, starting with a set of initial poses from +/-15 mm in x, +/-3 mm (random) in y and z, and +/-4 deg in the three angles, the robustness was 94% for PI and 91% for MI, with accuracy of 2.4 mm (PI) and 2.6 mm (MI), using the hybrid algorithm. The hybrid optimizer increased the robustness from 59% with a standard Powell's direction set method to 94%. Another set of 50 random initial conditions, from +/-20 mm in x, y, and z and +/-10 deg in the three angles, yielded robustness of 84% (hybrid) versus 38% (Powell) using PI as the metric, with accuracies of 2.1 mm (hybrid) versus 2.0 mm (Powell).
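The MI similarity metric used above can be sketched from a joint intensity histogram. This is the standard histogram-based estimate, assuming equally sized intensity bins; the registration pipeline around it is not shown.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two images, estimated from their joint
    intensity histogram: MI = sum p(a,b) * log(p(a,b) / (p(a) p(b)))."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal over img_a bins
    py = p.sum(axis=0, keepdims=True)   # marginal over img_b bins
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
```

MI peaks when the two images are in register: an image shares far more information with itself than with a scrambled copy, which is what makes the metric usable as a registration objective.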

Abstract

The objective of this pilot project was to devise a new image acquisition and processing technique to produce PET/CT images rendered in 3-dimensional (3D) volume that can then be reviewed in several 3D formats, such as virtual bronchoscopy and colonoscopy "fly-throughs" and external "fly-arounds."

We tested the new imaging and processing protocol on 24 patients with various malignancies to determine whether it could dependably acquire and reformat standard tomographic 2-dimensional PET/CT images into 3D renderings. This new technique added helpful information to the diagnostic interpretation for 2 of the 24 patients. Further, in the 6 patients undergoing mediastinoscopy, bronchoscopy, or endoscopy, 3D imaging helped in preprocedural planning.

In this initial study, we demonstrated both the feasibility of rendering PET/CT images into 3D volumes and the potential clinical utility of this technique for diagnostic lesion characterization and preprocedural planning.

Abstract

To retrospectively determine if three-dimensional (3D) viewing improves radiologists' accuracy in classifying true-positive (TP) and false-positive (FP) polyp candidates identified with computer-aided detection (CAD), and to determine candidate polyp features that are associated with classification accuracy, with known polyps serving as the reference standard.

Institutional review board approval and informed consent were obtained; this study was HIPAA compliant. Forty-seven computed tomographic (CT) colonography data sets were obtained in 26 men and 10 women (age range, 42-76 years). Four radiologists classified 705 polyp candidates (53 TP candidates, 652 FP candidates) identified with CAD; initially, only two-dimensional images were used, but these were later supplemented with 3D rendering. Another radiologist unblinded to colonoscopy findings characterized the features of each candidate, assessed colon distention and preparation, and defined the true nature of FP candidates. Receiver operating characteristic curves were used to compare readers' performance, and repeated-measures analysis of variance was used to test features that affect interpretation.

Use of 3D viewing improved classification accuracy for three readers and increased the area under the receiver operating characteristic curve to 0.96-0.97 (P

Abstract

We present a system for segmenting the human aortic aneurysm in CT angiograms (CTA), which, in turn, allows measurements of volume and morphological aspects useful for treatment planning. The system estimates a rough "initial surface," and then refines it using a level set segmentation scheme augmented with two external analyzers: the global region analyzer, which incorporates a priori knowledge of the intensity, volume, and shape of the aorta and other structures, and the local feature analyzer, which uses voxel location, intensity, and texture features to train and drive a support vector machine classifier. Each analyzer outputs a value that corresponds to the likelihood that a given voxel is part of the aneurysm, which is used during level set iteration to control the evolution of the surface. We tested our system using a database of 20 CTA scans of patients with aortic aneurysms. Relative to human tracing, the results (mean +/- s.d.) were: volume overlap 95.3% +/- 1.4% (worst case 92.9%); volume error 3.5% +/- 2.5% (worst case 7.0%); mean distance error 0.6 +/- 0.2 mm (worst case 1.0 mm); and maximum distance error 5.2 +/- 2.3 mm (worst case 9.6 mm). When implemented on a 2.8 GHz Pentium IV personal computer, the mean time required for segmentation was 7.4 +/- 3.6 min. Further experiments suggest that our method is insensitive to parameter changes within 10% of their experimentally determined values. This preliminary study demonstrates the feasibility of an accurate, precise, and robust system for segmentation of the abdominal aneurysm from CTA data, and may be of benefit to patients with aortic aneurysms.
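The evaluation metrics above are easy to make concrete. A minimal sketch, assuming "volume overlap" follows the common Dice-coefficient definition (the paper's exact definition may differ) and that segmentations are boolean voxel masks:

```python
import numpy as np

def volume_overlap(seg, ref):
    """Dice-style volume overlap between a segmentation and a reference
    labeling: 2*|A∩B| / (|A|+|B|)."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

def volume_error(seg, ref):
    """Relative volume error in percent of the reference volume."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    return 100.0 * abs(int(seg.sum()) - int(ref.sum())) / ref.sum()
```

Two 4-voxel masks that share 3 voxels overlap with a Dice value of 2*3/8 = 0.75 while having zero volume error, which is why both metrics are reported.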

Abstract

We developed a novel visualization method for providing an uncluttered view of the abdominal aorta and its branches. The method abstracts the complex geometry of vessels using a convex primitive, and uses a sweep line algorithm to find a suboptimal placement of the primitive. The method was evaluated using 10 CT angiography datasets and resulted in a clear visualization with all cluttering intersections removed. The method can be used to convey clinical findings, including lumen patency and lesion locations, in a single two-dimensional image.

Abstract

To compare devices for the task of navigating through large computed tomographic (CT) data sets at a picture archiving and communication system workstation.

The institutional review board approved this study, and all subjects provided informed consent. Five radiologists were asked to find 25 different vascular targets in three CT angiography data sets (average number of sections, 1025) by using several devices (trackball, tablet, jog-shuttle wheel, and mouse). For each trial, the total time to acquire the targets (T1) was recorded. A secondary study, in which 13 nonradiologists performed seven trials with an artificial target inserted at a random location in the same image data, was also performed. For each trial, the following items were recorded: time until first target sighting (t2), time to manipulate the device after seeing the target, sections traversed during t2 (d1), time from first sight to target acquisition (t4), sections traversed during t4 (d2), and total trial time. Statistical analysis involved repeated-measures analysis of variance (ANOVA) and pairwise comparisons.

Repeated-measures ANOVA revealed that the device used had a significant (P < .05) effect on T1. Pairwise comparisons revealed that the trackball was significantly slower than the tablet (P < .05) and marginally slower than the jog-shuttle wheel (P < .10). Further repeated-measures ANOVA for each secondary outcome measure revealed significant differences between devices for all outcome measures (P < .005). Pairwise comparisons revealed the trackball to be significantly slower than the other devices in all measures (P < .05). The trackball was significantly (P < .05) more accurate than the other devices for d1 and d2.

The trackball may not be the optimal device for navigation of large CT angiography data sets; the use of other existing devices may improve the efficiency of interpretation of these sets.

Abstract

To compare the performance of radiologists and of a computer-aided detection (CAD) algorithm for pulmonary nodule detection on thin-section thoracic computed tomographic (CT) scans.

The study was approved by the institutional review board; the requirement of informed consent was waived. Twenty outpatients (age range, 15-91 years; mean, 64 years) were examined with chest CT (multi-detector row scanner, four detector rows, 1.25-mm section thickness, and 0.6-mm interval) for pulmonary nodules. Three radiologists independently analyzed CT scans, recorded the locus of each nodule candidate, and assigned each a confidence score. A CAD algorithm with parameters chosen by using cross validation was applied to the 20 scans. The reference standard was established by two experienced thoracic radiologists in consensus, with blind review of all nodule candidates and free search for additional nodules at a dedicated workstation for three-dimensional image analysis. True-positive (TP) and false-positive (FP) results and confidence levels were used to generate free-response receiver operating characteristic (ROC) plots. Double-reading performance was determined on the basis of TP detections by either reader.

The 20 scans showed 195 noncalcified nodules with a diameter of 3 mm or more (reference reading). Area under the alternative free-response ROC curve was 0.54, 0.48, 0.55, and 0.36 for CAD and readers 1-3, respectively. Differences between reader 3 and CAD and between readers 2 and 3 were significant (P < .05); those between CAD and readers 1 and 2 were not. Mean sensitivity for individual readings was 50% (range, 41%-60%); double reading increased this to 63% (range, 56%-67%). With CAD used at a threshold allowing only three FP detections per CT scan, mean sensitivity increased to 76% (range, 73%-78%). CAD complemented individual readers by detecting additional nodules more effectively than did a second reader; CAD-reader weighted kappa values were significantly lower than reader-reader weighted kappa values (Wilcoxon rank sum test, P < .05).

With CAD used at a level allowing only three FP detections per CT scan, sensitivity was substantially higher than with conventional double reading.

Abstract

Computed tomography colonography (CTC) is a minimally invasive method that allows the evaluation of the colon wall from CT sections of the abdomen/pelvis. The primary goal of CTC is to detect colonic polyps, precursors to colorectal cancer. Because imperfect cleansing and distension can cause portions of the colon wall to be collapsed, covered with water, and/or covered with retained stool, patients are scanned in both prone and supine positions. We believe that both reading efficiency and computer aided detection (CAD) of CTC images can be improved by accurate registration of data from the supine and prone positions. We developed a two-stage approach that first registers the colonic central paths using a heuristic and automated algorithm and then matches polyps or polyp candidates (CAD hits) by a statistical approach. We evaluated the registration algorithm on 24 patient cases. After path registration, the mean misalignment distance between prone and supine identical anatomic landmarks was reduced from 47.08 to 12.66 mm, a 73% improvement. The polyp registration algorithm was specifically evaluated using eight patient cases for which radiologists identified polyps separately for both supine and prone data sets, and then manually registered corresponding pairs. The algorithm correctly matched 78% of these pairs without user input. The algorithm was also applied to the 30 highest-scoring CAD hits in the prone and supine scans and showed a success rate of 50% in automatically registering corresponding polyp pairs. Finally, we computed the average number of CAD hits that need to be manually compared in order to find the correct matches among the top 30 CAD hits. With polyp registration, the average number of comparisons was 1.78 per polyp, as opposed to 4.28 comparisons without polyp registration.
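The idea of registering the prone and supine colonic paths can be illustrated with a simple stand-in: matching positions by normalized arclength along the two centerlines. This is not the heuristic algorithm described above, just the simplest version of the underlying correspondence problem; the helper names are hypothetical.

```python
import numpy as np

def normalized_arclength(path):
    """Cumulative arclength of a polyline, normalized to [0, 1]."""
    path = np.asarray(path, dtype=float)
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    return s / s[-1]

def match_position(i_supine, supine_path, prone_path):
    """Map the i-th point on the supine centerline to the prone
    centerline index at the same normalized arclength."""
    s_sup = normalized_arclength(supine_path)
    s_pro = normalized_arclength(prone_path)
    return int(np.argmin(np.abs(s_pro - s_sup[i_supine])))
```

For two samplings of the same path, the halfway point on one maps to the halfway point on the other, even when the point counts differ.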

Abstract

The objective of this work was to develop and validate algorithms for detection and classification of hypodense hepatic lesions, specifically cysts, hemangiomas, and metastases from CT scans in the portal venous phase of enhancement. Fifty-six CT sections from 51 patients were used as representative of common hypodense liver lesions, including 22 simple cysts, 11 hemangiomas, 22 metastases, and 1 image containing both a cyst and a hemangioma. The detection algorithm uses intensity-based histogram methods to find central lesions, followed by liver contour refinement to identify peripheral lesions. The classification algorithm operates on the focal lesions identified during detection, and includes shape-based segmentation, edge pixel weighting, and lesion texture filtering. Support vector machines are then used to perform a pair-wise lesion classification. For the detection algorithm, 80% lesion sensitivity was achieved at approximately 0.3 false positives (FP) per slice for central lesions, and 0.5 FP per slice for peripheral lesions, giving a total of 0.8 FP per section. For 90% sensitivity, the total number of FP rises to about 2.2 per section. The pair-wise classification yielded good discrimination between cysts and metastases (at 95% sensitivity for detection of metastases, only about 5% of cysts are incorrectly classified as metastases), perfect discrimination between hemangiomas and cysts, and was least accurate in discriminating between hemangiomas and metastases (at 90% sensitivity for detection of hemangiomas, about 28% of metastases were incorrectly classified as hemangiomas). Initial implementations of our algorithms are promising for automating liver lesion detection and classification.

Abstract

We developed a novel computer-aided detection (CAD) algorithm called the surface normal overlap method that we applied to colonic polyp detection and lung nodule detection in helical computed tomography (CT) images. We demonstrate some of the theoretical aspects of this algorithm using a statistical shape model. The algorithm was then optimized on simulated CT data and evaluated using a per-lesion cross-validation on 8 CT colonography datasets and on 8 chest CT datasets. It is able to achieve 100% sensitivity for colonic polyps 10 mm and larger at 7.0 false positives (FPs)/dataset and 90% sensitivity for solid lung nodules 6 mm and larger at 5.6 FP/dataset.
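A toy illustration of the surface normal overlap idea: surface normals on a polyp-like spherical cap converge, so accumulating votes along each inward normal produces a hot spot at the cap's center. This is a simplified sketch, not the published implementation:

```python
import numpy as np

def normal_overlap(points, normals, grid_shape, depth=15):
    """Cast a short ray along each inward surface normal and accumulate
    votes in a voxel grid; convex protrusions make normals overlap."""
    votes = np.zeros(grid_shape)
    for p, n in zip(points, normals):
        for t in range(1, depth):
            q = np.round(p + t * n).astype(int)
            if all(0 <= q[i] < grid_shape[i] for i in range(3)):
                votes[tuple(q)] += 1
    return votes

# Points on a hemispherical cap of radius 10 centered at (20, 20, 20),
# with normals pointing toward the center of curvature
center = np.array([20.0, 20.0, 20.0])
pts, nrm = [], []
for ph in np.linspace(0, np.pi / 2, 12):
    for th in np.linspace(0, 2 * np.pi, 24, endpoint=False):
        d = np.array([np.sin(ph) * np.cos(th),
                      np.sin(ph) * np.sin(th),
                      np.cos(ph)])
        pts.append(center + 10 * d)
        nrm.append(-d)  # inward normal

votes = normal_overlap(np.array(pts), np.array(nrm), (41, 41, 41))
peak = np.unravel_index(np.argmax(votes), votes.shape)
```

Flat or gently curved wall contributes few overlapping votes, which is why the vote peak serves as a polyp/nodule candidate detector.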

Abstract

To determine the feasibility of a computer-aided detection (CAD) algorithm as the "first reader" in computed tomography colonography (CTC). In phase 1 of a 2-part blind trial, we measured the performance of 3 radiologists reading 41 CTC studies without CAD. In phase 2, readers interpreted the same cases using a CAD list of 30 potential polyps. Unassisted readers detected, on average, 63% of polyps ≥10 mm in diameter. Using CAD, the sensitivity was 74% (not statistically different). Per-patient analysis showed a trend toward increased sensitivity for polyps ≥10 mm in diameter, from 73% to 90% with CAD (not significant), without decreasing specificity. Computer-aided detection significantly decreased interobserver variability (P = 0.017). Average time to detection of the first polyp decreased significantly with CAD, whereas total case reading time was unchanged. Computer-aided detection as a first reader in CTC was associated with per-polyp and per-patient detection sensitivity similar to that of unassisted reading. Computer-aided detection decreased interobserver variability and reduced the time required to detect the first polyp.

Abstract

Multislice helical CT offers several retrospective choices of longitudinal (z) resolution at a given detector collimation setting. We sought to determine the effect of z resolution on the performance of a computer-aided colonic polyp detector, since a human reader and a computer-aided polyp detector may have optimal performances at different z resolutions. We ran a computer-aided polyp detection algorithm on phantom data sets as well as data obtained from a single patient. All data were reconstructed at various slice thicknesses ranging from 1.25 to 10 mm. We studied the performance of the detector at various ranges of polyp sizes using free-response receiver-operating characteristic analyses. We also studied contrast-to-noise ratios (CNR) as a function of slice thickness and polyp size. For the phantom data, reducing the slice thickness from 5 to 1.25 mm improves sensitivity from 84.5% to 98.3% (all polyps), from 61.4% to 95.5% (polyps in the range [0, 5) mm) and from 97.7% to 100% (polyps in the range [5, 10) mm) at a false positive rate of 20 per data set. For polyps larger than 10 mm, there is no significant improvement in detection sensitivity when slice thickness is reduced. CNRs showed expected behavior with slice thickness and polyp size, but in all cases remained high (> 4). The results for the patient data followed similar patterns to that of the phantom case. Thus we conclude that for this detector, the optimal slice thickness is dependent upon the size of the smallest polyps to be detected. For detection of polyps 10 mm and larger, reconstruction of 5 mm sections may be sufficient. Further study is required to generalize these results to a broader population of patients scanned on different scanners.
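The contrast-to-noise behavior can be illustrated with one common definition of CNR (mean intensity difference over background noise); the paper's exact measurement protocol may differ:

```python
import numpy as np

def cnr(roi_lesion, roi_background):
    """Contrast-to-noise ratio: absolute mean intensity difference
    divided by the background standard deviation."""
    contrast = roi_lesion.mean() - roi_background.mean()
    return abs(contrast) / roi_background.std()

# Invented intensities: polyp pixels offset ~100 units above the lumen,
# both with noise of standard deviation ~10
rng = np.random.default_rng(1)
polyp = rng.normal(100.0, 10.0, 500)
lumen = rng.normal(0.0, 10.0, 500)

# Averaging k thin slices into one thick slice cuts noise roughly by
# sqrt(k) for a lesion spanning all k slices, but small polyps that
# span only part of the thick slice also lose contrast, which is why
# optimal slice thickness depends on the smallest polyp of interest.
value = cnr(polyp, lumen)
```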

Abstract

Two-dimensional intensity-based methods for the segmentation of blood vessels from computed-tomography-angiography data often result in spurious segments that originate from other objects whose intensity distributions overlap with those of the vessels. When segmented images include spurious segments, additional methods are required to select segments that belong to the target vessels. We describe a method that allows experts to select vessel segments from sequences of segmented images with little effort. Our method uses ellipse-overlap criteria to differentiate between segments that belong to different objects and are separated in plane but are connected in the through-plane direction. To validate our method, we used it to extract vessel regions from volumes that were segmented via analysis of isolabel-contour maps, and showed that the difference between the results of our method and manually-edited results was within inter-expert variability. Although the total editing duration for our method, which included user-interaction and computer processing, exceeded that of manual editing, the extent of user interaction required for our method was about a fifth of that required for manual editing.

Abstract

The authors developed and evaluated a method to produce curved-slab maximum intensity projections (MIPs) through blood vessels that semiautomatically excludes soft tissue and bone. Results obtained with the algorithm were compared with those obtained with rectangular-slab MIPs by using computed tomographic (CT) data from four patients with abdominal aortic aneurysms. Curved-slab MIPs exhibited increased mean vessel-to-perivascular tissue contrast of 55.1 HU (36%), allowed a 10% increase in contrast-to-noise ratio, and decreased apparent vessel narrowing by 0.12-1.09 mm, without increasing processing time. Curved-slab MIPs may also include multiple vessels in a single image, thereby improving interpretation efficiency by reducing the number of MIPs required in these patients from eight to three.

Abstract

Colorectal cancer can easily be prevented provided that the precursors to tumors, small colonic polyps, are detected and removed. Currently, the only definitive examination of the colon is fiber-optic colonoscopy (FOC), which is invasive and expensive. Computed tomographic colonography (CTC) is potentially a less costly and less invasive alternative to FOC. It would be desirable to have computer-aided detection (CAD) algorithms to examine the large amount of data CTC provides. Most current CAD algorithms have high false positive rates at the required sensitivity levels. We developed and evaluated a postprocessing algorithm to decrease the false positive rate of such a CAD method without sacrificing sensitivity. Our method attempts to model the way a radiologist recognizes a polyp while scrolling a cross-sectional plane through three-dimensional computed tomography data by classification of the changes in the location of the edges in the two-dimensional plane. We performed a tenfold cross-validation study to assess its performance using sensitivity/specificity analysis on data from 48 patients. The mean specificity over all experiments increased from 0.19 (0.35) to 0.47 (0.56) for a sensitivity of 1.00 (0.95).

Abstract

An approach for acquiring dimensionally accurate three-dimensional (3-D) ultrasound data from multiple 2-D image planes is presented. This is based on the use of a modified linear-phased array comprising a central imaging array that acquires multiple, essentially parallel, 2-D slices as the transducer is translated over the tissue of interest. Small, perpendicularly oriented, tracking arrays are integrally mounted on each end of the imaging transducer. As the transducer is translated in an elevational direction with respect to the central imaging array, the images obtained by the tracking arrays remain largely coplanar. The motion between successive tracking images is determined using a minimum sum of absolute difference (MSAD) image matching technique with subpixel matching resolution. An initial phantom scanning-based test of a prototype 8 MHz array indicates that linear dimensional accuracy of 4.6% (2 sigma) is achievable. This result compares favorably with those obtained using an assumed average velocity [31.5% (2 sigma) accuracy] and using an approach based on measuring image-to-image decorrelation [8.4% (2 sigma) accuracy]. The prototype array and imaging system were also tested in a clinical environment, and early results suggest that the approach has the potential to enable a low cost, rapid, screening method for detecting carotid artery stenosis. The average time for performing a screening test for carotid stenosis was reduced from an average of 45 minutes using 2-D duplex Doppler to 12 minutes using the new 3-D scanning approach.
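The MSAD matching step can be sketched in a few lines (integer shifts only; the paper's method adds subpixel matching resolution):

```python
import numpy as np

def msad_shift(ref, moving, max_shift=5):
    """Find the integer (row, col) shift minimizing the mean sum of
    absolute differences between the reference and the moving image."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            sad = np.abs(ref - shifted).mean()
            if sad < best:
                best, best_shift = sad, (dy, dx)
    return best_shift

# Synthetic tracking-image pair with a known displacement
rng = np.random.default_rng(2)
ref = rng.random((64, 64))
moving = np.roll(np.roll(ref, -3, axis=0), 2, axis=1)

shift = msad_shift(ref, moving)  # shift that maps `moving` back onto `ref`
```

Summing such frame-to-frame displacements from the perpendicular tracking arrays is what lets the system measure how far the imaging transducer has moved between 2-D slices.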

Abstract

The authors developed and evaluated a method to automatically create interactive vascular curved planar reformations with computed tomographic (CT) angiographic data. The method decreased user interaction time by 86%, from 15 to 2 minutes. Expert reviewers were asked to indicate their confidence in differentiating automatically created images from clinical-quality manually produced images. The area under the receiver operating characteristic curve was 0.45 (95% CI: 0.39, 0.51), and a test of equivalency indicated that reviewers could not distinguish between images. They also graded image quality as equivalent to that with manual methods and found fewer artifacts on automatically created images. Automatic methods rapidly produce curved planar reformations of equivalent quality with reduced time and effort.

Abstract

Three bowel distention-measuring algorithms for use at computed tomographic (CT) colonography were developed, validated in phantoms, and applied to a human CT colonographic data set. The three algorithms are the cross-sectional area method, the moving spheres method, and the segmental volume method. Each algorithm effectively quantified distention, but accuracy varied between methods. Clinical feasibility was demonstrated. Depending on the desired spatial resolution and accuracy, each algorithm can quantitatively depict colonic diameter in CT colonography.

Abstract

Automatic analysis was performed of four-dimensional ultrasonographic (US) data in the carotid artery. The data, which were acquired in 31 subjects (eight healthy volunteers and 23 patients) by using a US scanner fitted with a special probe, were successfully processed. Acquisition time averaged 12 minutes. Data for all healthy volunteers (n = 8) and patients with complete occlusions (n = 3) were correctly classified. Data for two of the 12 patients with mild to severe (but not occlusive) disease were misclassified by one category.

Abstract

Adenomatous polyps in the colon are believed to be the precursor to colorectal carcinoma, the second leading cause of cancer deaths in United States. In this paper, we propose a new method for computer-aided detection of polyps in computed tomography (CT) colonography (virtual colonoscopy), a technique in which polyps are imaged along the wall of the air-inflated, cleansed colon with X-ray CT. Initial work with computer aided detection has shown high sensitivity, but at a cost of too many false positives. We present a statistical approach that uses support vector machines to distinguish the differentiating characteristics of polyps and healthy tissue, and uses this information for the classification of the new cases. One of the main contributions of the paper is the new three-dimensional pattern processing approach, called random orthogonal shape sections method, which combines the information from many random images to generate reliable signatures of shape. The input to the proposed system is a collection of volume data from candidate polyps obtained by a high-sensitivity, low-specificity system that we developed previously. The results of our ten-fold cross-validation experiments show that, on the average, the system increases the specificity from 0.19 (0.35) to 0.69 (0.74) at a sensitivity level of 1.0 (0.95).

Abstract

A common challenge for automated segmentation techniques is differentiation between images of close objects that have similar intensities, whose boundaries are often blurred due to partial-volume effects. We propose a novel approach to segmentation of two-dimensional images, which addresses this challenge. Our method, which we call intrinsic shape for segmentation (ISeg), analyzes isolabel-contour maps to identify coherent regions that correspond to major objects. ISeg generates an isolabel-contour map for an image by multilevel thresholding with a fine partition of the intensity range. ISeg detects object boundaries by comparing the shape of neighboring isolabel contours from the map. ISeg requires little effort from users; it does not require construction of shape models of target objects. In a formal validation with computed-tomography angiography data, we showed that ISeg was more robust than conventional thresholding, and that ISeg's results were comparable to results of manual tracing.
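The first ISeg step, building an isolabel-contour map by multilevel thresholding with a fine partition of the intensity range, can be sketched as follows (illustrative function name, synthetic image):

```python
import numpy as np

def isolabel_map(image, n_levels=16):
    """Partition the intensity range into many fine levels; each pixel
    gets the index of its level, yielding nested isolabel contours."""
    edges = np.linspace(image.min(), image.max(), n_levels + 1)[1:-1]
    return np.digitize(image, edges)

# A blurred bright disk on a dark background: the isolabel map forms
# concentric rings whose shapes stay coherent inside the object
yy, xx = np.mgrid[0:64, 0:64]
r = np.hypot(yy - 32, xx - 32)
image = np.exp(-(r / 12.0) ** 2)

labels = isolabel_map(image, n_levels=8)
```

Comparing the shapes of neighboring label contours (the paper's second step, not shown) then distinguishes contours that belong to one coherent object from those that leak into an adjacent structure.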

Abstract

To compare the effects of acquisition parameters on the magnitude and appearance of artifacts between single and multiple detector-row helical computed tomography (CT). A cylindric (12.7 x 305.0-mm) acrylic rod inclined 45 degrees relative to the z axis was scanned at the isocenter and 100 mm from the isocenter with single detector-row (single-channel) helical CT (beam width, 1-10 mm; pitch, 1.0, 2.0, or 3.0) and multiple detector-row (four-channel) helical CT (detector width, 1.25, 2.5, 3.75, and 5 mm; pitch, 0.75 or 1.5). The SD of radius measurements along the rod (SD(r)) was used to quantify artifacts in all 72 data sets and to analyze their frequency patterns. Volume-rendered images of the data sets were ranked by six independent and blinded readers; findings were correlated with acquisition parameters and SD(r) measurements. SD(r) was smaller in four- than in single-channel helical CT for any given table increment (TI). In single-channel helical CT, SD(r) increased linearly with beam width and geometrically with pitch. In four-channel helical CT, SD(r) measurements were directly proportional to the TI, regardless of the detector width and pitch combination used. Off-center object position on average increased SD(r) by a factor of 1.6 for single-channel helical CT and by a factor of 2.0 for four-channel helical CT. Subjective rankings of image quality correlated excellently with SD(r) (Spearman r = 0.94, P

Abstract

An abdominal computed tomographic scan was modified by inserting 10 simulated colonic polyps with use of methods that closely mimic the attenuation, noise, and polyp-colon wall interface of naturally occurring polyps. A shape-based polyp detector successfully located six of the 10 polyps. When settings that enhanced the edge profile of polyps were chosen, eight of 10 polyps were detected. There were no false-positive detections. Shape analysis is technically feasible and is a promising approach to automated polyp detection.

Abstract

To compare the costs of performing helical computed tomographic (CT) angiography with three-dimensional rendering versus intraarterial digital subtraction angiography (DSA) for preoperative imaging of abdominal aortic aneurysms (AAAs). A single observer determined the variable direct costs of performing nine intraarterial DSA and 10 CT angiographic examinations in age- and general health-matched patients with AAA by using time and motion analyses. All personnel directly involved in the cases were tracked, and the involvement times were recorded to the nearest minute. All material items used during the procedures were recorded. The cost of labor was determined from personnel reimbursement data, and the cost of materials, from vendor pricing. The variable direct costs of laboratory tests and of using the ambulatory treatment unit for postprocedural monitoring, as well as all fixed direct costs, were assessed from hospital accounting records. The total costs were determined for each procedure and compared by using the Student t test and calculating the CIs. The mean total direct cost of intraarterial DSA (+/- SD) was $1,052 +/- 71, and that of CT angiography was $300 +/- 30; the difference was significant (P < 4.1 x 10^-11). With 95% confidence, intraarterial DSA cost 3.2-3.7 times more than CT angiography for the assessment of AAA. Assuming equal diagnostic utility and procedure-related morbidity, institutions may realize substantial cost savings whenever CT angiography can replace intraarterial DSA for imaging AAAs.

Abstract

The purpose of this study was to demonstrate the limitations to the effectiveness of CT colonography, colloquially called virtual colonoscopy (VC), for detecting polyps in the colon and to describe a new technique, map projection CT colonography using Mercator projection and stereographic projection, that overcomes these limitations. In one experiment, data sets from nine patients undergoing CT colonography were analyzed to determine the percentage of the mucosal surface visible in various visualization modes as a function of field of view (FOV). In another experiment, 40 digitally synthesized polyps of various sizes (10, 7, 5, and 3.5 mm) were randomly inserted into four copies of one patient data set. Both Mercator and stereographic projections were used to visualize the surface of the colon of each data set. The sensitivity and positive predictive value (PPV) were calculated and compared with the results of an earlier study of visualization modes using the same CT colonography data. The percentage of mucosal surface visualized by VC increases with greater FOV but only approaches that of map projection VC (98.8%) at a distorting, very high FOV. For both readers and polyp sizes ≥7 mm, sensitivity for Mercator projection (87.5%) and stereographic projection (82.5%) was significantly greater (p < 0.05) than for viewing axial slices (62.5%), and Mercator projection was significantly more sensitive than VC (67.5%). Mercator and stereographic projection had PPVs of 75.4% and 78.9%, respectively. The sensitivity of conventional CT colonography is limited by the percentage of the mucosal surface seen. Map projection CT colonography overcomes this problem and provides a more sensitive method, with a high PPV, for detecting polyps than other methods currently being investigated.
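The core map-projection idea can be sketched with the standard Mercator equations; the paper's viewing geometry from inside the colon is more involved:

```python
import numpy as np

def mercator(lon, lat, max_lat=np.radians(85)):
    """Map longitude/latitude on a viewing sphere to Mercator x, y.
    Latitudes are clipped because the projection diverges at the poles."""
    lat = np.clip(lat, -max_lat, max_lat)
    x = lon
    y = np.log(np.tan(np.pi / 4 + lat / 2))
    return x, y

# The equator maps to y = 0, and spacing stretches toward the poles;
# one flat image can thus show (nearly) the whole surrounding surface,
# unlike a perspective view limited by its field of view
x0, y0 = mercator(0.0, 0.0)
x1, y1 = mercator(np.pi / 2, np.radians(60))
```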

Abstract

This paper presents a new reconstruction algorithm for both single- and dual-energy computed tomography (CT) imaging. By incorporating the polychromatic characteristics of the X-ray beam into the reconstruction process, the algorithm is capable of eliminating beam hardening artifacts. The single energy version of the algorithm assumes that each voxel in the scan field can be expressed as a mixture of two known substances, for example, a mixture of trabecular bone and marrow, or a mixture of fat and flesh. These assumptions are easily satisfied in a quantitative computed tomography (QCT) setting. We have compared our algorithm to three commonly used single-energy correction techniques. Experimental results show that our algorithm is much more robust and accurate. We have also shown that QCT measurements obtained using our algorithm are five times more accurate than those from current QCT systems (using calibration). The dual-energy mode does not require any prior knowledge of the object in the scan field, and can be used to estimate the attenuation coefficient function of unknown materials. We have tested the dual-energy setup to obtain an accurate estimate for the attenuation coefficient function of K2HPO4 solution.
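The beam-hardening effect the algorithm models can be demonstrated with a two-energy toy beam: a polychromatic beam's measured attenuation grows sub-linearly with path length, unlike the monochromatic case. The spectrum and attenuation values below are invented for illustration:

```python
import numpy as np

def polychromatic_attenuation(spectrum, mu, length):
    """Measured attenuation -ln(I/I0) for a polychromatic beam passing
    through `length` cm of material with energy-dependent mu (1/cm)."""
    I0 = spectrum.sum()
    I = (spectrum * np.exp(-mu * length)).sum()
    return -np.log(I / I0)

# Two-bin toy spectrum; mu falls with energy, as it does for tissue/bone
spectrum = np.array([0.5, 0.5])
mu = np.array([0.40, 0.20])

# Monochromatic attenuation would be exactly linear in path length.
# The polychromatic beam "hardens" (low energies are absorbed first),
# so attenuation grows sub-linearly: the origin of cupping artifacts.
a1 = polychromatic_attenuation(spectrum, mu, 1.0)
a10 = polychromatic_attenuation(spectrum, mu, 10.0)
```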

Abstract

Spiral computed tomography (CT) has revolutionized conventional CT as a truly three-dimensional imaging modality. A number of studies aimed at evaluating the longitudinal resolution in spiral CT have been presented, but the spatially varying nature of the longitudinal resolution in spiral CT has been largely left undiscussed. In this paper, we investigate the longitudinal resolution in spiral CT as affected by the spatially varying longitudinal aliasing. We propose the treatment of aliasing as a signal dependent, additive noise, and define a new image quality parameter, the contrast-to-aliased-noise ratio (CNaR), that relates to possible image degradation or loss of resolution caused by aliasing. We performed CT simulations and actual phantom scans using a resolution phantom consisting of sequences of spherical beads of different diameters, extending along the longitudinal axis. Our results show that the off-isocenter longitudinal resolution differs significantly from the longitudinal resolution at the isocenter and that the CNaR decreases with distance from the isocenter, and is a function of pitch and the helical interpolation algorithm used. The longitudinal resolution was observed to worsen with decreasing CNaR. We conclude that the longitudinal resolution in spiral CT is spatially varying, and can be characterized by the CNaR measured at the transaxial location of interest.

Abstract

Since its clinical introduction in 1991, volumetric computed tomography scanning using spiral or helical scanners has resulted in a revolution for diagnostic imaging. In addition to new applications for computed tomography, such as computed tomographic angiography and the assessment of patients with renal colic, many routine applications such as the detection of lung and liver lesions have substantially improved. Helical computed tomographic technology has improved over the past eight years with faster gantry rotation, more powerful X-ray tubes, and improved interpolation algorithms, but the greatest advance has been the recent introduction of multi detector-row computed tomography scanners. These scanners provide similar scan quality at speeds 3-6 times greater than single detector-row computed tomography scanners. This has a profound impact on the performance of computed tomography angiography, resulting in greater anatomic coverage, lower iodinated contrast doses, and higher spatial resolution scans than single detector-row systems.

Abstract

To determine the sensitivity of radiologist observers for detecting colonic polyps by using three different data review (display) modes for computed tomographic (CT) colonography, or "virtual colonoscopy." CT colonographic data in a patient with a normal colon were used as base data for insertion of digitally synthesized polyps. Forty such polyps (3.5, 5, 7, and 10 mm in diameter) were randomly inserted in four copies of the base data. Axial CT studies, volume-rendered virtual endoscopic movies, and studies from a three-dimensional mode termed "panoramic endoscopy" were reviewed blindly and independently by two radiologists. Detection improved with increasing polyp size. Trends in sensitivity were dependent on whether all inserted lesions or only visible lesions were considered, because modes differed in how completely the colonic surface was depicted. For both reviewers and all polyps 7 mm or larger, panoramic endoscopy resulted in significantly greater sensitivity (90%) than did virtual endoscopy (68%, P = .014). For visible lesions only, the sensitivities were 85%, 81%, and 60% for one reader and 65%, 62%, and 28% for the other for virtual endoscopy, panoramic endoscopy, and axial CT, respectively. Three-dimensional displays were more sensitive than two-dimensional displays (P < .05). The sensitivity of panoramic endoscopy is higher than that of virtual endoscopy because the former displays more of the colonic surface. Higher sensitivities for three-dimensional displays may justify the additional computation and review time.

Abstract

To develop and validate a method for the insertion of digitally synthesized polyps into computed tomographic (CT) images of the human colon for use as ground truth for evaluation of virtual colonoscopy. Spiral CT simulator software was used to generate 10 synthetic polyps in various configurations. Additional software was developed to insert these polyps into volume CT scans. Ten polyps in eight patients were selected for comparison. Three radiologists evaluated whether two-dimensional (2D) CT images and three-dimensional (3D) volume-rendered CT images showed synthetic or real polyps. Edge-response profiles and noise of simulated polyps matched those of native polyps. Frequency distributions of reviewers' responses were not significantly different for synthetic versus real polyps in either 3D or 2D images. Responses were clustered around the response of "unsure" whether lesions were real or synthetic. Receiver operating characteristic curves had areas of 0.54 (95% CI = 0.39, 0.68) for 3D and 0.39 (95% CI = 0.25, 0.53) for 2D images, which were not significantly different from random guessing (P = .70 and .28 for 3D and 2D images, respectively). Synthetic polyps were indistinguishable from real polyps. This method can be used to generate ground truth experimental data for comparison of CT colonographic display and detection methods.

Abstract

We describe a technique for three-dimensional cine MR imaging. By using short repetition times (TR) and interleaved slice encoding, volumetric cine data can be acquired throughout the cardiac cycle with a temporal resolution of approximately 80 msec. A T1-shortening agent is used to produce contrast between blood and myocardium. A comparison between the acquisition times of this and several other two-dimensional techniques is presented.

Abstract

We present a technique for obtaining three-dimensional external and virtual endoscopy views of organs using perspective volume-rendered gray-scale and Doppler sonographic data, and we explore potential clinical applications in the carotid artery, the female pelvis, and the bladder. Using the proposed methods, radiologists will find it possible to create virtual endoscopy and external perspective views using sonographic data. The technique works well for revealing the interior of fluid-filled structures and cavities. However, expected improvements in computer performance and integration with existing sonographic equipment will be necessary for the technique to become practical in the clinical environment.

Abstract

This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.

Abstract

Although analyses of in-plane aliasing have been done for conventional computed tomography (CT) images, longitudinal aliasing in spiral CT has not been properly investigated. We propose a mathematical model of the three-dimensional (3-D) sampling scheme in spiral CT and analyze its effects on longitudinal aliasing. We investigated longitudinal aliasing as a function of the helical-interpolation algorithm, pitch, and reconstruction interval using CT simulations and actual phantom scans. Our model predicts, and we verified, that for a radially uniform object at the isocenter, the spiral sampling scheme results in spatially varying cancellation of the aliased spectral islands which, in turn, results in spatially varying longitudinal aliasing. The aliasing is minimal at the scanner isocenter, but worsens with distance from it and rapidly becomes significant. Our results agree with published results observed at the isocenter of the scanner and further provide new insight into the aliasing conditions at off-isocenter locations with respect to the pitch, interpolation algorithm, and reconstruction interval. We conclude that longitudinal aliasing at off-isocenter locations can be significant, and that its magnitude and effects cannot be predicted by measurements made only at the scanner isocenter.

Abstract

Virtual colonoscopy is a new method of colon examination in which computer-aided 3D visualization of spiral CT simulates fiberoptic colonoscopy. We used a colon phantom containing various-sized spheres to determine the influence of CT acquisition parameters on lesion detectability and sizing. Spherical plastic beads with diameters of 2.5, 4, 6, 8, and 10 mm were randomly attached to the inner wall of segments of plastic tubing. Groups of three sealed tubes were scanned at 3/1, 3/2, and 5/1 collimation (mm)/pitch settings in orientations perpendicular and parallel to the scanner gantry. For each acquisition, image sets were reconstructed at intervals from 0.5 to 5.0 mm. Two blinded reviewers assessed transverse cross-sections of the phantoms for bead detection, using source CT images for acquisitions obtained with the tubes oriented perpendicular to the gantry and orthogonal reformatted images for scans oriented parallel to the gantry. Detection of beads ≥4 mm was 100% for both tube orientations and for all collimation/pitch settings and reconstruction intervals. For the 2.5-mm beads, detection decreased to 78-94% for 5-mm collimation/pitch 2 scans when the phantom sections were oriented parallel to the gantry (p = 0.01). Apparent elongation of beads in the slice direction occurred as the collimation and pitch increased. The majority of the elongation (approximately 75%) was attributable to increasing the collimation from 3 to 5 mm, with the remainder due to doubling the pitch from 1 to 2. CT scanning at 5-mm collimation and up to pitch 2 is adequate for detection of high-contrast lesions as small as 4 mm in this model. However, lesion size and geometry are less accurately depicted than at narrower collimation and lower pitch settings.

Abstract

In this paper, a novel technique for rapid and automatic computation of flight paths for guiding virtual endoscopic exploration of three-dimensional medical images is described. While manually planning flight paths is a tedious and time-consuming task, our algorithm is automated and fast. Our method for positioning the virtual camera is based on the medial axis transform but is much more computationally efficient. By iteratively correcting a path toward the medial axis, the necessity of evaluating simple point criteria during morphological thinning is eliminated. The virtual camera is also oriented in a stable viewing direction, avoiding sudden twists and turns. We tested our algorithm on volumetric data sets of eight colons, one aorta and one bronchial tree. The algorithm computed the flight paths in several minutes per volume on an inexpensive workstation with minimal computation time added for multiple paths through branching structures (10%-13% per extra path). The results of our algorithm are smooth, centralized paths that aid in the task of navigation in virtual endoscopic exploration of three-dimensional medical images.
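The path-correction idea, iteratively pushing points toward the medial axis, can be sketched in 2-D with a distance-to-wall field (the published algorithm works on 3-D volumes and is considerably more refined):

```python
import numpy as np

def centralize(path, dist_field, iters=50, step=0.5):
    """Iteratively push each path point uphill on the distance-to-wall
    field, drawing the path toward the medial axis of the lumen."""
    path = path.astype(float).copy()
    gy, gx = np.gradient(dist_field)
    for _ in range(iters):
        for i, (y, x) in enumerate(path):
            yi, xi = int(np.round(y)), int(np.round(x))
            path[i, 0] += step * gy[yi, xi]
            path[i, 1] += step * gx[yi, xi]
    return path

# 2-D toy lumen: a horizontal band 4 <= y <= 16; distance to the nearest
# wall peaks on the midline y = 10 (the medial axis)
yy, xx = np.mgrid[0:21, 0:60]
dist_field = np.minimum(yy - 4, 16 - yy).astype(float)
dist_field[dist_field < 0] = 0.0

# Start with an off-center path hugging one wall
path0 = np.column_stack([np.full(20, 6.0), np.linspace(5, 54, 20)])
path = centralize(path0, dist_field)
```

After correction every point sits near the midline, which is what keeps the virtual camera from grazing the colon wall during flythrough.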

Abstract

This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required.
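The least-squares core of such registration can be sketched with the standard Kabsch/Procrustes rigid fit; the paper's algorithm additionally fits straight rods and applies a weighting scheme, which is omitted here:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    (Kabsch/Procrustes); a building block for frame registration."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

# Recover a known in-plane rotation plus translation from sample points
rng = np.random.default_rng(3)
src = rng.random((12, 3)) * 100
theta = np.radians(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 2.0])
dst = src @ R_true.T + t_true

R, t = rigid_fit(src, dst)
err = np.linalg.norm(src @ R.T + t - dst, axis=1).max()
```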

Abstract

The primary objective of this study is to perform a blinded evaluation of a group of retrospective image registration techniques using as a gold standard a prospective, marker-based registration method. To ensure blindedness, all retrospective registrations were performed by participants who had no knowledge of the gold standard results until after their results had been submitted. A secondary goal of the project is to evaluate the importance of correcting geometrical distortion in MR images by comparing the retrospective registration error in the rectified images, i.e., those that have had the distortion correction applied, with that of the same images before rectification. Image volumes of three modalities (CT, MR, and PET) were obtained from patients undergoing neurosurgery at Vanderbilt University Medical Center on whom bone-implanted fiducial markers were mounted. These volumes had all traces of the markers removed and were provided via the Internet to project collaborators outside Vanderbilt, who then performed retrospective registrations on the volumes, calculating transformations from CT to MR and/or from PET to MR. These investigators communicated their transformations, again via the Internet, to Vanderbilt, where the accuracy of each registration was evaluated. In this evaluation, the accuracy is measured at multiple volumes of interest (VOIs), i.e., areas in the brain that would commonly be areas of neurological interest. A VOI is defined in the MR image and its centroid c is determined. Then, the prospective registration is used to obtain the corresponding point c' in CT or PET. To this point, the retrospective registration is then applied, producing c'' in MR. Statistics are gathered on the target registration error (TRE), which is the distance between the original point c and its corresponding point c''. This article presents statistics on the TRE calculated for each registration technique in this study and provides a brief description of each technique and an estimate of both the preparation and execution time needed to perform the registration. Our results indicate that retrospective techniques have the potential to produce satisfactory results much of the time, but that visual inspection is necessary to guard against large errors.
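
The TRE computation described above can be sketched as follows; the two transforms here are hypothetical stand-ins (pure translations) for the prospective gold-standard and retrospective registrations, not values from the study:

```python
import numpy as np

def apply_rigid(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    return (T @ np.append(p, 1.0))[:3]

# Hypothetical transforms: gold-standard MR -> CT and retrospective CT -> MR.
gold_mr_to_ct = np.eye(4)
gold_mr_to_ct[:3, 3] = [2.0, -1.0, 0.5]
retro_ct_to_mr = np.eye(4)
retro_ct_to_mr[:3, 3] = [-2.0, 1.2, -0.5]

c = np.array([10.0, 20.0, 30.0])       # VOI centroid c in MR
c1 = apply_rigid(gold_mr_to_ct, c)     # corresponding point c' in CT
c2 = apply_rigid(retro_ct_to_mr, c1)   # c'' mapped back into MR
tre = np.linalg.norm(c - c2)           # target registration error (mm)
```

A perfect retrospective registration would invert the gold standard exactly, giving a TRE of zero; here the deliberate 0.2 mm mismatch between the two translations shows up directly as the TRE.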

Abstract

To compare accuracy of three-dimensional (3D) spiral computed tomography (CT) performed without administration of contrast material with that of radiography and linear nephrotomography in detection and measurement of renal calculi. Fifty renal calculi within an abdominal phantom were imaged with 3D spiral CT, radiography, and linear nephrotomography. Spiral CT data were analyzed with workstation-based 3D imaging software, with a thresholding procedure based on the maximally attenuating voxel within each calculus during measurement. Measurement accuracy and detection rates were compared according to modality. Conventional and magnification-corrected measurements from radiography and linear nephrotomography were included. Spiral CT depicted calculi and allowed determination of the collective two-dimensional and 3D linear measurements statistically significantly more accurately than the other techniques; the mean linear measurement errors along individual axes did not exceed 3.6%. With 3D spiral CT, calculus volumes were determined with a mean error of -4.8%. 3D spiral CT enabled highly accurate determination of the volumes and all three linear dimensions of renal calculi. In addition, 3D spiral CT depicted calculi more sensitively than traditional techniques and provided new information and improved accuracy in the evaluation of nephrolithiasis.

Abstract

Our goal was to use three-dimensional information obtained from helical computed tomographic (CT) data to explore and evaluate the nasal cavity, nasopharynx, and paranasal sinuses by virtual endoscopy (VE). This was done by utilizing a new image reconstruction method known as perspective volume rendering (PVR). Thin-section helical CT of the nasal cavity, nasopharynx, and paranasal sinuses was performed on a conventional CT scanner. The data were transferred to a workstation to create views similar to those seen with endoscopy. Additional views not normally accessible by conventional endoscopy were generated. Key perspectives were selected, and a video "flight" was choreographed and synthesized through the nasal cavity and sinuses based on the CT data. VE allows evaluation of the nasal cavity, nasopharynx, and paranasal sinuses with appreciation of the relationships of these spatially complex structures. In addition, this technique allows structural visualization from angles, perspectives, and locations not conventionally accessible. Although biopsies, cultures, and lavages routinely done with endoscopy cannot be performed with VE, this technique holds promise for improving the diagnostic evaluation of the nasal cavity, the nasopharynx, and the paranasal sinuses. The unconventional visual perspectives and very low morbidity may complement many applications of simple diagnostic endoscopy.

Abstract

The purpose of this study was to evaluate the accuracy of MR angiography (MRA) using a Gd-DTPA-polyethylene glycol polymer (Gd-DTPA-PEG) with a 3D fast gradient echo (3D fgre) technique in diagnosing pulmonary embolism in a canine model. Pulmonary emboli were created in six mongrel dogs (20-30 kg) by injecting tantalum oxide-doped autologous blood clots into the femoral veins via cutdowns. MRI was performed with a 1.5 T GE Signa imager using a 3D fgre sequence (11.9/2.3/15 degrees) following intravenous injection of 0.06 mmol Gd/kg of Gd-DTPA-PEG. The dogs were euthanized, and spiral CT of the lungs was then obtained on the deceased dogs. The MR images were reviewed independently, and receiver operating characteristic (ROC) curves were used for statistical analysis with the spiral CT results as the gold standard. The pulmonary emboli were well visualized on spiral CT. Of 108 pulmonary segments in the six dogs, 24 contained emboli >2 mm and 27 contained emboli ≤2 mm. With unblinded review, MRI detected 79% of emboli >2 mm but only 48% of emboli ≤2 mm. The blinded review results were significantly worse. Gd-DTPA-PEG-enhanced 3D fgre MRI is potentially able to demonstrate pulmonary embolism with a fairly high degree of accuracy, but specialized training for interpretation will be required.

Abstract

To use perspective volume rendering (PVR) of computed tomographic (CT) and magnetic resonance (MR) imaging data sets to simulate endoscopic views of human organ systems. Perspective views of helical CT and MR images were reconstructed from the data, and tissues were classified by assigning color and opacity based on their CT attenuation or MR signal intensity. "Flight paths" were constructed through anatomic regions by defining key views along a spline path. Twelve movies of the thoracic aorta (n=3), tracheobronchial tree (n=4), colon (n=3), paranasal sinuses (n=1), and shoulder joint (n=1) were generated to display images along the flight path. All abnormal results were confirmed at surgery. PVR fly-through enabled evaluation of the full range of tissue densities, signal intensities, and their three-dimensional spatial relationships. PVR is a novel way to present volumetric data and may enable noninvasive diagnostic endoscopy and provide an alternate method to analyze volumetric imaging data for primary diagnosis.

Abstract

The purpose of this study was to determine the value of reformatted noncontrast helical CT in patients with suspected renal colic. We hoped to determine whether this technique might create images acceptable to both radiologists and clinicians and replace our current protocol of sonography and abdominal plain film. Thirty-four consecutive patients with signs and symptoms of renal colic were imaged with both noncontrast helical CT and a combination of plain film of the abdomen and renal sonography. Reformatting of the helical CT data was performed on a workstation to create a variety of reformatted displays. The correlative studies were interpreted by separate blinded observers. Clinical data, including the presence of hematuria and the documentation of stone passage or removal, were recorded. Findings on 18 CT examinations were interpreted as positive for the presence of ureteral calculi; 16 of these cases were determined to be true positives on the basis of later-documented passage of a calculus. Thirteen of the 16 proven-positive cases were interpreted as positive for renal calculi using the combination of abdominal plain film and renal sonography. The most useful CT reformatting technique was curved planar reformatting of the ureters to determine whether a ureteral calculus was present. In this study, noncontrast helical CT was a rapid and accurate method for determining the presence of ureteral calculi causing renal colic. The reformatted views produced images similar in appearance to excretory urograms, aiding greatly in communication with clinicians. Limitations of the technique include the time and equipment necessary for reformatting and the suboptimal quality of reformatted images when little retroperitoneal fat is present.

Abstract

This paper presents a new reference data set and associated quantification methodology to assess the accuracy of registration of computerized tomography (CT) and magnetic-resonance (MR) images. Also described is a new semiautomatic surface-based system for registering and visualizing CT and MR images. The registration error of the system was determined using a reference data set that was obtained from a cadaver in which rigid fiducial tubes were inserted prior to imaging. Registration error was measured as the distance between an analytic expression for each fiducial tube in one image set and transformed samples of the corresponding tube obtained from the other. Registration was accomplished by first identifying surfaces of similar anatomic structures in each image set. A transformation that best registered these structures was determined using a nonlinear optimization procedure. Even though the root-mean-square (rms) distance at the registered surfaces was similar to that reported by other groups, it was found that rms distances for the tubes were significantly larger than the final rms distances between the registered surfaces. It was also found that minimizing rms distance at the surface did not minimize rms distance for the tubes.

Abstract

We present a method to correct the geometric distortion caused by field inhomogeneity in MR images of patients wearing MR-compatible stereotaxic frames. Our previously published distortion correction method derives patient-dependent error maps by computing the phase difference of 3D images acquired at different TEs. The time difference (delta TE = 4.9 ms at 1.5 T) is chosen such that the water and fat signals are in phase. However, delta TE is long enough to permit phase wraps in the difference images for frequency offsets greater than 205 Hz. Phase unwrapping techniques resolve these only for connected structures; therefore, the phase difference for the fiducial rods may be off by multiples of 2 pi relative to the head. We remove this uncertainty by using an additional single 2D phase-difference image with delta TE = 1 ms (during which time no phase wraps are typically expected) to determine the correct multiple of 2 pi for each rod. We tested our method in a cadaver and in a patient using CT as the gold standard. Targets in the frame coordinates were chosen from CT and compared with their locations in MR. Localization errors in MR compared with CT were as large as 3.7 mm before correction and were reduced to less than 1.11 mm after correction.

Abstract

The authors have developed a technique based on a solution of the Poisson equation to unwrap the phase in magnetic resonance (MR) phase images. The method is based on the assumption that the magnitude of the inter-pixel phase change is less than pi per pixel. Therefore, the authors obtain an estimate of the phase gradient by "wrapping" the gradient of the original phase image. The problem is then to obtain the absolute phase given the estimate of the phase gradient. The least-squares (LS) solution to this problem is shown to be a solution of the Poisson equation, allowing the use of fast Poisson solvers. The absolute phase is then obtained by mapping the LS phase to the nearest multiple of 2 pi from the measured phase. The proposed technique is evaluated using MR phase images and proves to be robust in the presence of noise. An application of the proposed method to the 3-point Dixon technique for water and fat separation is demonstrated.
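
A one-dimensional analogue conveys the idea (a sketch only, with illustrative data; the paper operates on 2D images with a fast Poisson solver, whereas in 1D the least-squares step reduces to simple integration):

```python
import numpy as np

def wrap(phi):
    """Wrap phase values into (-pi, pi]."""
    return np.mod(phi + np.pi, 2 * np.pi) - np.pi

def unwrap_1d(measured):
    # Estimate the true gradient by wrapping the finite differences
    # (valid when the true phase changes by less than pi per sample).
    grad = wrap(np.diff(measured))
    # In 1D the least-squares "Poisson" solution is just integration.
    ls_phase = np.concatenate(([0.0], np.cumsum(grad))) + measured[0]
    # Snap the LS phase to the nearest multiple of 2*pi from the
    # measured phase, so the result is congruent with the data.
    k = np.round((ls_phase - measured) / (2 * np.pi))
    return measured + 2 * np.pi * k

true_phase = np.linspace(0.0, 12.0, 200)   # smooth ramp spanning ~2 cycles
measured = wrap(true_phase)                # wrapped, as acquired
recovered = unwrap_1d(measured)            # absolute phase restored
```

The final congruence step mirrors the paper's key point: the unwrapped result is forced to differ from the measured phase only by integer multiples of 2*pi.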

Abstract

To evaluate the use of spiral computed tomographic (CT) angiography in the analysis of the arteries of the circle of Willis and compare these results with magnetic resonance (MR) angiography and conventional angiography. The results in 17 patients who underwent examination were prospectively studied in a blinded fashion. The presence or absence of the arteries of the circle of Willis was determined by using maximum intensity projection reconstructions from CT angiography and MR angiography. These results were compared with results from conventional angiography. Similar sensitivities were determined for CT angiography (88.5%) and MR angiography (85.5%); however, MR angiography was found to differ significantly (P = .005) from conventional angiography. No significant differences (P > .05) were found between the two modalities and conventional angiography in the detection of the anterior, middle, or posterior cerebral arteries or the anterior communicating artery. Spiral CT angiography is highly sensitive in the detection of arterial anatomy in the circle of Willis and is a reliable alternative to MR angiography.

Abstract

We previously described a technique for correcting patient-specific magnetic field inhomogeneity spatial distortion in magnetic resonance images (MRI), which was not applicable to patients fitted with MRI-compatible stereotactic fiducial frames. Here we describe an improvement to the technique that permits application for these patients. Measurements with a cadaver head show that this method achieves MRI stereotactic localization accuracy of 1 mm.

Abstract

This paper presents a versatile system for registering and visualizing computed tomography and magnetic resonance images. The system utilizes a semi-automatic, surface-based registration strategy which has proven useful for registering a number of different anatomical structures. A triangular mesh approximates surfaces in one image set while a set of surface points is used as a surface approximation in the other set. A non-linear optimization procedure determines the transformation that minimizes the total sum-squared perpendicular distance between triangles of the mesh and surface points. This system has been used without modification to successfully register images of the brain, spine and calcaneus.

Abstract

The different sources of spatial distortion in magnetic resonance images are reviewed from the point of view of stereotactic target localization. The extents of the two most complex sources of spatial distortion, gradient field nonlinearities and magnetic field inhomogeneities, are discussed both qualitatively and quantitatively. Several ways by which the spatial distortion resulting from these sources can be minimized are discussed. The clinical relevance of the spatial distortion along with some strategies to minimize the localization errors in magnetic resonance-guided stereotaxy are presented.

Abstract

A method of computing the velocity field and pressure distribution from a sequence of ultrafast CT (UFCT) cardiac images is demonstrated. UFCT multi-slice cine imaging gives a series of tomographic slices covering the volume of the heart at a rate of 17 frames per second. The complete volume data set can be modeled using equations of continuum theory, and through regularization, velocity vectors of both blood and tissue can be determined at each voxel in the volume. The authors present a technique to determine the pressure distribution throughout the volume of the left ventricle using the computed velocity field. A numerical algorithm is developed by discretizing the pressure Poisson equation (PPE), which is based on the Navier-Stokes equation. The algorithm is evaluated using a mathematical phantom of known velocity and pressure (Couette flow). It is shown that the algorithm based on the PPE can reconstruct the pressure distribution using only the velocity data. Furthermore, the PPE is shown to be robust in the presence of noise. The velocity field and pressure distribution derived from a UFCT study of a patient are also presented.

Abstract

We present a method to quantify the MR field inhomogeneity geometric distortion to subpixel accuracy without using objects of known dimensions and without using an external standard such as CT. Our method may be used to quantify the geometric accuracy of MR images of anatomical structures of unknown geometry and also to test any geometry correction scheme. We have quantified the distortion in a tissue phantom and found the largest error to be approximately 2.8 pixels (1.8 mm) for B0 = 1.5 T, G = 3.13 mT/m, and FOV = 160 x 160 x 70.7 mm^3. We also found that our previously published correction technique reduced the largest error to 0.3 pixels (mu = 0.02 and sigma = 0.07 pixels).

Abstract

To evaluate the accuracy of computed tomographic (CT) angiography in the detection of renal artery stenosis (RAS). CT angiography was performed in 31 patients undergoing conventional renal arteriography. CT angiographic data were reconstructed with shaded surface display (SSD) and maximum-intensity projection (MIP). Stenosis was graded with a four-point scale (grades 0-3). The presence of mural calcification, poststenotic dilatation, and nephrographic abnormalities was also noted. CT angiography depicted all main (n = 62) and accessory (n = 11) renal arteries that were seen at conventional arteriography. MIP CT angiography was 92% sensitive and 83% specific for the detection of grade 2-3 stenoses (≥ 70% stenosis). SSD CT angiography was 59% sensitive and 82% specific for the detection of grade 2-3 stenoses. The accuracy of stenosis grading was 80% with MIP and 55% with SSD CT angiography. Poststenotic dilatation and the presence of an abnormal nephrogram were 85% and 98% specific, respectively. CT angiography shows promise in the diagnosis of RAS. The accuracy of CT angiography varies with the three-dimensional rendering technique employed.

Abstract

We sought to apply a new technique of computed tomographic angiography (CTA) to the preoperative and postoperative assessment of the abdominal aorta and its branches. After a peripheral intravenous contrast injection, the patient is continuously advanced through a spiral CT scanner while maintaining a 30-second breath-hold. Thirty-five patients with abdominal aortic, renal, and visceral arterial disease have undergone CTA. Diagnostic three-dimensional images were obtained in patients with aortic aneurysms (n = 9), aortic dissections (n = 4), and mesenteric artery stenoses (n = 4). The technique has also been used to assess vessels after operative reconstruction or endovascular intervention in 18 patients. These preliminary studies have correlated well with conventional arteriographic findings. In aneurysmal disease, both the lumen and mural thrombus and associated renal artery stenoses are visualized. The true and false channels of aortic dissections and the perfusion source of the visceral vessels are clearly shown; patency of visceral and renal reconstructions or stent placements is confirmed. CTA is relatively noninvasive and can be completed in less time than conventional angiography, with less radiation exposure. This initial experience suggests that CTA may be a valuable alternative to conventional arteriography in the evaluation of the aorta and its branches.

Abstract

The authors present sliding thin-slab maximum intensity projection (STS-MIP) as a technique for improved visualization of blood vessels and airways from rapidly acquired thin-section CT data. The STS-MIP reconstructions can be computed rapidly and without operator intervention directly from the transaxial sections. The resulting images retain the high contrast resolution of thin-section (1-3 mm) CT while providing vascular or airway visibility within a sequence of overlapping thin-slabs (3-10 mm). Examples are presented of pulmonary vessels and airways derived from spiral CT and of pulmonary vessels and coronary arteries derived from electron-beam CT.
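
The slab computation can be sketched in a few lines (illustrative names and sizes; not the authors' code): each output image is a maximum intensity projection through a thin, sliding window of sections.

```python
import numpy as np

def sts_mip(volume, slab=5, step=1):
    """Sliding thin-slab maximum intensity projection along axis 0.

    volume : (nz, ny, nx) stack of thin sections
    slab   : number of sections combined into each slab
    step   : slab-to-slab increment (slabs overlap when step < slab)
    """
    nz = volume.shape[0]
    # Each output image is the maximum intensity through one thin slab.
    return np.stack([volume[s:s + slab].max(axis=0)
                     for s in range(0, nz - slab + 1, step)])

rng = np.random.default_rng(0)
vol = rng.random((20, 64, 64))            # synthetic thin-section stack
slabs = sts_mip(vol, slab=5, step=2)      # overlapping 5-section slab images
```

Because the maximum is taken over only a few sections, the slabs keep the contrast resolution of the thin sections while still connecting vessels or airways across adjacent slices.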

Abstract

The authors have developed a method to reduce noise in three-dimensional (3D) phase-contrast magnetic resonance (MR) velocity measurements by exploiting the property that blood is incompressible and, therefore, the velocity field describing its flow must be divergence-free. The divergence-free condition is incorporated by a projection operation in Hilbert space. The velocity field obtained with 3D phase-contrast MR imaging is projected onto the space of divergence-free velocity fields. The reduction of noise is achieved because the projection operation eliminates the noise component that is not divergence-free. Signal-to-noise ratio (S/N) gains on the order of 15%-25% were observed. The immediate effect of this noise reduction manifests itself in higher-quality phase-contrast MR angiograms. Alternatively, the S/N gain can be traded for a reduction in imaging time and/or improved spatial resolution.

Abstract

Spiral CT allows continuous data to be acquired rapidly, and if a correctly timed IV bolus of contrast material is given, spiral CT angiography can be performed. This study was designed to evaluate spiral CT angiography with maximum-intensity-projection reconstructions for assessing the degree of carotid artery stenosis. Spiral CT angiography (of 28 carotid bifurcations in 14 patients) was compared in a blinded fashion with conventional angiography (of 28 bifurcations) and with two-dimensional time-of-flight MR angiography (of 12 bifurcations) to assess degree of stenosis. A nonblinded comparison of the contour of the lumen at the site of stenosis was then made between conventional angiography, spiral CT angiography, and MR angiography. The degree of stenosis was measured in each internal carotid artery and categorized as mild (< 30%), moderate (30-69%), or severe (70-99%) stenosis or as occlusion. Maximum-intensity-projection images were used for the evaluations; however, if calcification was circumferential and the lumen of the carotid artery could not be analyzed in the area of the calcification, the axial source images were used. The results of CT angiography and conventional angiography agreed overall in 25 (89%) of 28 cases (r = .921, p = .05, Spearman rank correlation). The presence of severe stenosis or occlusion was correctly identified in seven of seven cases. In the moderate and mild stenosis categories, 18 (86%) of 21 were correctly identified (r = .802, p = .122). Three internal carotid arteries (11%) had circumferential calcification that necessitated evaluation of the axial source images, and the measurements obtained from the axial images agreed well with angiographic findings. MR angiography correlated well with the various categories of stenosis. However, when we compared MR angiography directly with CT angiography and conventional angiography, we found that the degree of stenosis was overestimated when MR angiography was used. Our results show that spiral CT angiography shows normal and abnormal carotid anatomy well when compared with conventional angiography. The short examination time and clear depiction of arterial caliber in areas of stenosis are significant advantages of spiral CT angiography compared with MR angiography.

Abstract

Spiral computed tomography (CT) is a new technology that couples continuous tube rotation with continuous table feed. This allows compilation of a data set that has continuous anatomic information without the establishment of arbitrary boundaries at section interfaces as in conventional CT. The unique method of data collection of the spiral scanner has been combined with a dynamic intravenous contrast material bolus to image abdominal vasculature, specifically, the aorta, renal arteries, and splanchnic circulation. Through various techniques of image processing, including surface renderings and maximum-intensity projections, it is possible to obtain excellent anatomic detail of the aorta and its major branches. The authors applied this technique in 15 patients and reliably saw third-order aortic branches as well as third-order splenic-portal venous anatomic detail with remarkable clarity. Pathologic conditions detected include stenotic renal arteries, abdominal aortic dissection, abdominal aortic aneurysm, and celiac bypass graft occlusion.

Abstract

The authors describe a technique for obtaining angiographic images by means of spiral computed tomography (CT), preprocessing of reconstructed three-dimensional sections to suppress bone, and maximum intensity projection. The technique has some limitations, but preliminary results in 48 patients have shown excellent anatomic correlation with conventional angiography in studies of the abdomen, the circle of Willis in the brain, and the extracranial carotid arteries. With continued development and evaluation, CT angiography may prove useful as a screening tool or replacement for conventional angiography in some patients.

Abstract

This article describes a new algorithm for reprojection of volumetric data, called Fast Fourier Projection (FFP), which is one to two orders of magnitude faster than conventional methods such as ray casting. The theoretical basis of the new method is developed in a unified mathematical framework encompassing slice imaging and conventional volumetric reprojection methods. Software implementation is discussed in detail. The article closes with an account of experience with a prototype FFP implementation, and applications of the technique in medical visualization.

Abstract

Three-dimensional (3D) velocity maps acquired with 3D phase-contrast magnetic resonance (MR) imaging contain information regarding complex motions that occur during imaging. A technique called simulated streamlines, which facilitates the display and comprehension of these velocity data, is presented. Single or multiple seed points may be identified within blood vessels of interest and tracked through the velocity field. The resulting trajectories are combined with a 3D MR angiogram and displayed with 3D volume visualization software. Mathematical analysis highlights potential applications and pitfalls of the technique, which was implemented both in phantoms and in vivo with excellent results. For example, single streamlines reveal helical flow patterns in aneurysms, and multiple streamlines seeded in the common carotid artery reveal branch filling-time relationships and slow filling of the carotid bulb. The technique is helpful in understanding these complex flow patterns.

Abstract

We have developed a technique called fast Fourier projection which rapidly produces projections through images and is particularly useful for generating MR angiograms. Based on the projection-slice theorem of Fourier transform theory, this method extracts planes from three-dimensional spatial frequency space and computes projections at arbitrary viewing angles by two-dimensional inverse Fourier transformation. Typical computation times are on the order of 1 s per projection. This performance makes possible interactive selection of optimal projection directions for visualizing the desired vasculature in single or stereo-pair angiographic images and drastically reduces the time required to generate sequences of projections for display in movie loops compared to the conventional ray-casting approach. The method is easily implemented on off-line workstations or directly on MRI computer systems.
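
The projection-slice theorem underlying the method can be demonstrated in 2D (a toy illustration with synthetic data; the paper works with 3D data and arbitrary viewing angles): summing an image along one axis equals the inverse 1D FFT of the central line of its 2D FFT.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32))

# Direct projection: sum the image along one axis (a ray-cast view).
proj_direct = img.sum(axis=0)

# Fourier route: extract the central line of k-space (the slice through
# the origin perpendicular to the projection direction) and inverse
# transform it; by the projection-slice theorem this is the same projection.
proj_fourier = np.fft.ifft(np.fft.fft2(img)[0, :]).real
```

In 3D the same idea extracts a plane from k-space and applies a 2D inverse FFT, which is why each projection costs only on the order of a 2D transform rather than a full ray-cast through the volume.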

Abstract

The authors review the history and physical principles behind vascular magnetic resonance imaging (MRI) techniques, developed to measure blood flow noninvasively and to display images of the vasculature. All these techniques have been used to create magnetic resonance angiograms, in which the vasculature is shown in a projection format similar to x-ray angiography. Signal loss limits the effectiveness of "white-blood" magnetic resonance angiography techniques, since slow flow and complex flow often cause a drop in signal and consequently a loss of accuracy in depicting vessel anatomy. "Black-blood" magnetic resonance angiography is described as a method that avoids these problems of signal loss. Selective black-blood magnetic resonance angiography is introduced as a technique for improving the visualization of the vasculature when other signal-void structures are present in the volume of interest.

Abstract

The capability of computed tomography (CT) scanning to measure cardiac output was explored using ten anesthetized dogs, and the results were compared with those obtained by thermodilution. Dynamic CT scans were performed at the level of the aortic root while small peripheral intravenous boluses of contrast medium were injected. Time/density curves were generated using a gamma variate fitting program. These were used to estimate cardiac output by applying indicator dilution principles. CT results correlated favorably (r = 0.86) with those of thermodilution. This feasibility study indicates the utility of CT for obtaining physiologic measurements of cardiac function and should encourage further studies to develop the potential of CT for cardiovascular diagnostic purposes.
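
The indicator-dilution calculation can be sketched as follows; the gamma-variate parameters and dose are illustrative placeholders, not values from the study:

```python
import numpy as np

def gamma_variate(t, t0, K, alpha, beta):
    """Gamma-variate model commonly fitted to indicator-dilution curves."""
    c = np.zeros_like(t)
    m = t > t0
    c[m] = K * (t[m] - t0) ** alpha * np.exp(-(t[m] - t0) / beta)
    return c

# Stewart-Hamilton principle: flow = injected dose / area under the
# first-pass time-density curve. All numbers below are illustrative.
t = np.linspace(0.0, 60.0, 601)                    # time (s)
conc = gamma_variate(t, t0=5.0, K=0.02, alpha=3.0, beta=1.5)
dose = 10.0                                        # mg of indicator
area = np.sum(0.5 * (conc[1:] + conc[:-1]) * np.diff(t))  # trapezoid rule
flow = dose / area                                 # mL/s (cardiac output)
```

Fitting the gamma-variate model to the measured time/density curve, as in the study, excludes recirculation of the indicator before the area is computed.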

Abstract

Data from rapid-sequence CT scans of the same cross section, obtained following bolus injection of contrast material, were analyzed by functional imaging. The information contained in a large number of images can be compressed into one or two gray-scale images which can be evaluated both qualitatively and quantitatively. The computational techniques are described and applied to the generation of images depicting bolus transit time, arrival time, peak time, and effective width.
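
Two of the parametric maps described, peak time and arrival time, can be sketched per pixel (illustrative synthetic data and threshold; not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
times = np.arange(0, 20.0, 1.0)            # scan times after injection (s)
series = rng.random((20, 8, 8))            # (time, y, x) enhancement stack

# Peak-time map: time of maximum enhancement at each pixel.
peak_time = times[np.argmax(series, axis=0)]

# Arrival-time map: first time each pixel's enhancement crosses a
# threshold; pixels that never enhance are marked NaN.
threshold = 0.9
above = series > threshold
arrival_time = np.where(above.any(axis=0),
                        times[np.argmax(above, axis=0)], np.nan)
```

Each map compresses the whole dynamic series into a single gray-scale image, which is the compression the abstract describes.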

Abstract

In this paper a method is described for obtaining and characterizing fetal blood velocity waveforms. The signals were recorded with a range-gated Doppler instrument and characterized after spectral analysis. Preliminary observations indicate differences in the waveforms obtained during normal pregnancies compared with some complicated pregnancies.