
Cardiac computed tomography (CT) has undergone a remarkable evolution over the past 25 years. In the past decade, we have seen further progress as first electron beam computed tomography (EBCT) and, later, multi-detector computed tomography (MDCT) were developed, validated, and applied to examine coronary atheromatous plaque.

Quantification of coronary artery calcification (CAC) by CT is now established as a valid “estimator” of total coronary plaque burden, providing medium- and long-term cardiovascular prognostic information independent of and significantly incremental to conventional risk factors (1–3). However, although CAC is of confirmed clinical value in risk stratification, 80% (or more) of coronary plaque is noncalcified (4), and CAC provides only minimal insight into segmental stenosis severity (5), largely because of focal vascular remodeling (6).

Contrast-enhanced computed tomography coronary angiography (CTCA) was developed in Germany (7) and the U.S. (8) in the mid-to-late 1990s as a noninvasive method to estimate coronary luminal stenoses. It became apparent that the administration of intravenous contrast additionally facilitated visualization of at least some noncalcified plaques adjacent to and separate from calcified mural plaque. These interesting observations led researchers to muse that CTCA might be a good candidate to pursue noninvasive identification and quantification of focal plaque morphology, that is, it might be possible to move from estimations of global plaque volume and/or the absence/presence of obstructive stenoses toward true segmental quantitative CTCA (Q-CTCA).

There have been two previous investigations evaluating the potential of CTCA to define focal coronary plaque. Achenbach et al. (9) compared intravascular ultrasound (IVUS) obtained during cardiac catheterization with CTCA using 16-slice MDCT. They found that CT had a sensitivity of approximately 82% to detect coronary segments with any atherosclerotic plaque but that it had only a 53% sensitivity to define segments with solely noncalcified atherosclerotic plaque.

Leber et al. (10) also performed a comparison between IVUS and contrast-enhanced 16-slice MDCT. They reported a sensitivity for “soft” and/or fibrous plaque of approximately 78% while maintaining a sensitivity for “calcified” plaque (as had been shown in previous EBCT studies) of approximately 95%. Thus, in approximately 80% of cases, CTCA was found to provide additional information about noncalcified plaque, but total “atherosclerotic plaque burden” remained significantly underdefined.

The current study

In this issue of the Journal, Leber et al. (11) extend their work using the latest 64-slice MDCT accompanied by further improvement in their analytic methods. Again, IVUS was the reference standard. In their most recent study, they found correct detection of plaque in 83% of all atheromatous types and in 97% of “mixed”-type plaques but no change in the accuracy of defining calcified plaque (still 95%). Perhaps most impressively, the presence of any atheromatous plaque was correctly ruled out by 64-slice MDCT in 192 of 204 (94%) of the coronary segments defined as normal by IVUS. Additionally, in the proximal coronary arteries, the correlation of mean combined vessel plaque volume by 64-slice MDCT with direct mean combined vessel plaque volume by IVUS was significant (r = 0.69).

The edge

These latest results are very encouraging. Basically, the data imply that we have the potential to use CTCA, in a totally noninvasive manner, to define/measure the “edges” that separate hard plaque from soft plaque and, furthermore, distinguish these features from normal vessel wall. However, what do we need to do to move Q-CTCA even further into the realm of clinical practicality? The answer to this question lies in understanding the physics of, and then optimizing the approach to, cardiac “edge” detection with digital methods such as CT.

There are three important “edge” issues regarding the use of CT to quantify hard “calcified” plaque, soft “noncalcified” plaque, coronary mural surfaces, and subsequent stenosis severity: spatial resolution, low-contrast resolution, and temporal resolution. All three figure prominently (but unfortunately not totally independently) in the process.

Spatial resolution

What might explain the considerable improvement between the initial Leber study, published only a year ago, and the current study? One difference is that the newest 64-slice MDCT scanners have even better spatial (and to some extent temporal) resolution than their 16-slice and EBCT predecessors. Acquiring CTCA in thinner slices now allows image reconstruction into “almost cubic” (isotropic) voxels, significantly facilitating three-dimensional calibrations and measurements through an almost infinite number of imaging planes.

The concept of quantifying the cardiac edge using CT goes back to the initial cardiac CT studies performed by University of Iowa researchers more than 20 years ago. The research objective was quantifying left ventricular muscle mass by EBCT. We devised a method to examine the “full-width half maximum” (FWHM) density between ventricular boundaries as a continuum, appreciating that the true physical edge, when rendered in image space, lies halfway between the CT densities of the true myocardium and the adjacent tissues. Once this boundary was determined by direct calculation, ventricular muscle mass estimated by CT was shown to be consistent and accurate to within 5 g of autopsy values (12,13).
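Once an FWHM boundary has fixed which voxels are myocardium, the mass computation itself is simple arithmetic. A minimal sketch, with illustrative values (the tissue density and voxel dimensions below are assumptions, not those of the original Iowa EBCT studies):

```python
# Estimate ventricular muscle mass from a segmented CT volume.
# Assumptions (illustrative only):
#   - myocardial voxels have already been labeled via an FWHM boundary
#   - myocardial tissue density ~1.05 g/cm^3

MYOCARDIAL_DENSITY_G_PER_CM3 = 1.05

def lv_mass_grams(n_myocardial_voxels, voxel_dims_mm):
    """Mass = number of myocardial voxels x voxel volume x tissue density."""
    dx, dy, dz = voxel_dims_mm
    voxel_volume_cm3 = (dx * dy * dz) / 1000.0  # mm^3 -> cm^3
    return n_myocardial_voxels * voxel_volume_cm3 * MYOCARDIAL_DENSITY_G_PER_CM3

# Example: ~1.4 million voxels of 0.5 x 0.5 x 0.5 mm
print(lv_mass_grams(1_400_000, (0.5, 0.5, 0.5)))
```

The accuracy of such an estimate is bounded by how reliably the FWHM boundary assigns the edge voxels, which is precisely why voxel size matters so much for the far smaller coronary artery.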

Defining the “edge” for ventricular muscle, a large structure, compared with finding the edge of the much smaller coronary artery is a fundamental issue of “resolution.” Heretofore much of the difficulty in resolving a small structure such as the coronary artery with CT had been due to unequal dimensions of the “pixel” (picture element) in the x-y plane related to the dimension defining slice depth in the z-plane; these characteristics define the three-dimensional “voxel” (volume element). Although the x-y spatial resolution of all cardiac CT has remained in the sub-millimeter range for more than a decade, slice width or thickness was, until quite recently, significantly larger.

In a two-dimensional CT image, the border between two physically adjacent objects is composed of an array of pixels that render the image as a gradual array of finite gray-scale densities separating the two objects; from a distance, these objects are distinct, but up close, the borders are actually a blur. This transition zone between two adjacent objects in the CT image, and the transition “blur” between them, is partly a function of the size of the pixel in two dimensions and, more importantly, the size of the voxel in three dimensions. The true edge is distinct in physical space, but in image space there is a transition of finite dimension. The concept of FWHM defines this boundary as the CT density halfway between the true reconstructed densities (gray scale or Hounsfield units) of each of the separate objects sharing a common “edge.” The fidelity of the edge (i.e., the accuracy of a measurement across the edge) is thus a matter of confidence directly related to the pixel/voxel size (and coronary motion blurring and low-density resolution). In the current study, Leber et al. (11) used a FWHM concept to define the edges of various plaque elements but determined that the fidelity was somewhat in error; this is due, at least partly, to the spatial resolution of the CT data. The current x-y-z voxel for the scanner used by Leber et al. (11) is roughly 0.4 mm × 0.4 mm × 0.6 mm and is still not quite “cubic” but, more importantly, is still not nearly sufficient for confident resolution of the structures in question, where the minimal voxel dimensions (on each side) should be closer to that possible in the catheterization laboratory, 0.1 mm or less.
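The FWHM rule can be stated concretely for a one-dimensional density profile sampled across an edge: the boundary sits where the profile crosses the value halfway between the two plateau densities, and linear interpolation between samples yields a subvoxel position. A minimal sketch, where the sample spacing and Hounsfield values are hypothetical:

```python
def fwhm_edge_position(profile, spacing_mm):
    """Locate the edge between two tissues along a 1-D CT density profile.

    The edge is placed where the profile crosses the half-maximum value,
    i.e., the density halfway between the plateau values at either end of
    the profile; linear interpolation gives a subvoxel estimate.
    Returns the edge position in mm from the first sample, or None.
    """
    lo, hi = profile[0], profile[-1]
    half = (lo + hi) / 2.0
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        # detect a crossing of the half-maximum between samples i and i+1
        if (a - half) * (b - half) <= 0 and a != b:
            frac = (half - a) / (b - a)  # linear interpolation
            return (i + frac) * spacing_mm
    return None

# Hypothetical profile in Hounsfield units: contrast-filled lumen (~300 HU)
# blurring into noncalcified plaque (~60 HU) over 0.4-mm samples
profile = [300, 300, 280, 200, 80, 62, 60, 60]
print(fwhm_edge_position(profile, 0.4))
```

The subvoxel interpolation is exactly where confidence erodes: the broader the blur relative to the structure being measured, the less the interpolated crossing can be trusted, which is the argument for pushing voxel dimensions toward 0.1 mm.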

Low-density resolution

Basically, low-density resolution concerns the fidelity of separating two objects of limited differential with respect to CT density. Although some of this limitation may be overcome by improved spatial resolution, in the world of CT the differences are highly dependent on the detector technology used. All of these detector materials (actually ceramics) have a short but finite “memory” after radiation exposure, and this defines the potential for afterglow. This is rather like somebody setting off a camera flash in your eyes; for a short time, the resolution of objects put in your line of sight is obscured. Improvements in detector technology will continue to minimize this situation, but the detectors of the older 16-slice MDCT and the newer 64-slice MDCT are basically, for the most part, the same.

Temporal resolution and what’s next?

Finally, we begin where it all began. Electron beam computed tomography has been and continues to be the “fastest of all CT” scanners, with scan speeds designed from inception to image the beating heart (50-ms and 100-ms complete acquisition). It actually has significant low-contrast resolution because of its unique detector architecture, but currently it lacks the necessary spatial resolution for complete plaque definition. Multidetector computed tomography has been adapted from general radiological applications that require high spatial resolution of static objects in which temporal resolution is not an issue. Gantry rotation times of MDCT have improved considerably and are now approaching 300 ms; however, this is still not ideal for all patients, and the current optimal clinical situation is to have heart rates of 60 to 65 beats/min or slower to minimize coronary motion during acquisition. Although each CT vendor has methods to maximize temporal resolution using innovations such as partial scan reconstruction, these may compound other issues, producing additional artifacts after processing. Optimal temporal resolution for cardiac CT imaging largely independent of normal heart rates remains a major factor in making Q-CTCA a reality applicable across large numbers of individuals.
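The arithmetic behind the heart-rate constraint is worth making explicit. With partial (half-scan) reconstruction, effective temporal resolution is roughly half the gantry rotation time, and that acquisition window must fit inside the quiescent diastolic period, which shrinks as heart rate rises. A back-of-the-envelope sketch; the 20% diastolic-window fraction and 330-ms rotation time are crude illustrative assumptions, not vendor specifications:

```python
def half_scan_temporal_resolution_ms(rotation_time_ms):
    """Partial-scan reconstruction needs ~180 degrees of projection data,
    so effective temporal resolution is roughly half the rotation time."""
    return rotation_time_ms / 2.0

def diastolic_window_ms(heart_rate_bpm, diastole_fraction=0.2):
    """Crude approximation: treat ~20% of the R-R interval as the
    quiescent diastolic window (illustrative assumption only)."""
    rr_interval_ms = 60_000.0 / heart_rate_bpm
    return rr_interval_ms * diastole_fraction

ROTATION_MS = 330  # assumed gantry rotation time (illustrative)
hs = half_scan_temporal_resolution_ms(ROTATION_MS)
for hr in (60, 75, 90):
    window = diastolic_window_ms(hr)
    print(f"{hr} beats/min: diastolic window ~{window:.0f} ms, "
          f"{hs:.0f} ms half-scan fits: {hs <= window}")
```

Under these assumptions the half-scan window fits comfortably at 60 beats/min but not at faster rates, which is consistent with the clinical preference for rates of 60 to 65 beats/min or slower.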

The goal is to make the pixel/voxel dimensions of cardiac CT smaller, improve its low-contrast resolution, and, at the same time, render negligible the additional confounding factor of motion artifacts, such that the confidence in and reproducibility of density measures for all coronary structures are maximized. How will this happen? It is not clear at present. One partial method might be to increase the power of the scanner, but this then increases the radiation exposure (already at alarming values in some MDCT studies). Another is to improve the detector technology. Interestingly, the current four major CT vendors all use different materials for their detectors, each with somewhat different efficiencies and low-contrast resolution. The flat panel detector, now a feature in some catheterization laboratories, has been discussed as part of the solution. It could significantly improve the spatial resolution of CT, but current flat panel technology has issues of suboptimal low-contrast resolution due to afterglow artifacts. Personally, I feel that the best approach would be to better develop the electron beam concept into a multisource (beam) product and then marry it with improved multidetector ceramic technology, although this is not a trivial engineering challenge. However, incredible advances in MDCT technology have demonstrated that capabilities only conceived of several years ago are now a clinical reality.

Why the fuss?

In the early days of cardiac catheterization, the focus was on the presence of “significant stenoses.” Later research showed the potential for myocardial infarction was often greater in complex nonobstructive plaques (14), fostering concepts of the vulnerable plaque/vulnerable patient that figure prominently in current atherosclerosis research. Given the ever-expanding clinical applications for CTCA to define not only coronary obstructive and nonobstructive disease on a segment-by-segment basis but also chamber and congenital anatomy and function, the future is very exciting. Research has established CAC as a valuable adjunct to defining cardiovascular prognosis in the medium to long term, but we have lacked a reliable means to directly image the coronary arteries noninvasively for prognosis in the short term. That will require a fundamental digital dissection of the plaque process itself, a glimpse of which has been demonstrated in this issue of the Journal. This is a major innovation for the clinical definition of focal plaque characteristics and opens up the potential for Q-CTCA as a research tool for lipid- and plaque-altering medications.

Footnotes

⁎ Editorials published in the Journal of the American College of Cardiology reflect the views of the authors and do not necessarily represent the views of JACC or the American College of Cardiology.
