Author Information

For correspondence contact the author at the above address, or email at hcull@rerf.or.jp.

Supplemental Digital Content is available in the HTML and PDF versions of this article on the journal’s Web site (www.health-physics.com).

(Manuscript accepted 16 August 2013)

Abstract

The Radiation Effects Research Foundation (RERF) studies various cohorts of Japanese atomic bomb survivors, the largest being the Life Span Study (LSS), which includes 93,741 persons who were in Hiroshima or Nagasaki at the times of the bombings; there are also cohorts of persons who were exposed in utero and survivors’ children. This presentation attempts to summarize the total impact of the radiation from the bombs on the survivors from both an individual perspective (both age-specific and integrated lifetime risk, along with a measure of life expectancy that describes how the risk affects the individual given age at exposure) and a group perspective (estimated numbers of excess occurrences in the cohort), including both early and late effects. As survivors’ doses ranged well into the acutely lethal range at closer distances, some of them experienced acute signs and symptoms of radiation exposure in addition to being at risk of late effects. Although cancer has always been a primary concern among late effects, estimated numbers of excess cancers and hematopoietic malignancies in the LSS are a small fraction of the total due to the highly skewed dose distribution, with most survivors receiving small doses. For example, in the latest report on cancer incidence, 853 of 17,448 incident solid cancers were estimated to be attributable to radiation from the bombs. RERF research indicates that risk of radiation-associated cancer varies among sites and that some benign tumors such as uterine myoma are also associated with radiation. Noncancer late effects appear to be in excess in proportion to radiation dose but with an excess relative risk about one-third that of solid cancer and a correspondingly small overall fraction of cases attributable to radiation. Specific risks were found for some subcategories, particularly circulatory disease, including stroke and precedent conditions such as hypertension.
Radiation-related cataract in the atomic bomb survivors is well known, with evidence in recent years of risk at lower dose levels than previously appreciated. In addition to somatic effects, survivors experienced psychosocial effects such as uncertainty, social stigma, or rejection, and other social pressures. Developmental deficits associated with in utero exposure, notably cognitive impairment, have also been described. Interaction of radiation with other risk factors has been demonstrated in relation to both cancer and noncancer diseases. Current research interests include whether radiation increases risk of diabetes or conditions of the eye apart from cataract, and there continues to be keen interest as to whether there are heritable effects in survivors’ children, despite negative findings to date.

Introduction of Impact on the Japanese Atomic-Bomb Survivors (Video 1:52, http://links.lww.com/HP/A29)

INTRODUCTION

THE DATA collected and analyzed by the Radiation Effects Research Foundation (RERF) and its predecessor, the Atomic Bomb Casualty Commission (ABCC), comprise the most complete collection of information in the world on the health effects experienced by the survivors of the atomic bombs in Hiroshima and Nagasaki, Japan, and by their children. Although other, very early organizations, such as the Joint Commission for the Investigation of the Effects of the Atomic Bomb in Japan, were able to more directly collect data on matters such as acute health effects (Oughterson and Warren 1956), the overall scope and completeness of the ABCC/RERF data are unparalleled. The findings of ABCC and RERF have been summarized many times, with a recent review by Douple et al. (2011). For understandable scientific reasons, such reviews have tended to focus on late effects and on risk coefficients or other quantities that have been abstracted and generalized for wider purposes such as setting standards for radiation protection. The emphasis of the present work, in contrast, is to summarize the entire impact of the radiation from the bombs on the survivors themselves.

At ABCC and RERF, the health experience of the survivors and their children has been monitored through several sampling frames originally created by the ABCC (Table 1). The first is a Master Sample of residents of the two cities that was used to form a large cohort of 120,321 survivors of the bombings and unexposed controls [the Life Span Study (LSS)] for mortality and cancer incidence follow-up. A subset of this cohort numbering ∼20,000 individuals, the Adult Health Study (AHS), has been followed by biennial medical examinations, which have major advantages such as providing biosamples and data on occurrence of nonfatal, noncancer health conditions. A much smaller cohort of persons exposed in utero and unexposed controls, numbering ∼3,300 individuals, has been followed for mortality, and a subset of ∼1,600 of this cohort has received clinical examinations. Finally, a cohort of ∼77,000 children of exposed and unexposed parents, called the “F1 cohort,” originated from an early genetic study of birth outcomes in pregnancies that were registered in the two cities in the years 1948–1953 and has been followed in various studies. A related cohort of ∼53,400 children of survivors was created for mortality follow-up by assembling registered births from several studies and sources spanning 1946–1958 (Neel and Schull 1991); a subset of ∼12,000 has been enrolled in a clinical study of adult multifactorial disease in recent years.

Although the LSS is not an exhaustive sample of the survivors of the atomic bombs, it includes a large fraction of the high-dose survivors and is thought to be representative of survivors overall. The LSS was selected to include all persons in the 1950 national census who had been within 2.5 km of the hypocenter in either city, were still residing in one of the cities in 1950, and met other study criteria, along with lesser sampling fractions of more distal or unexposed individuals. As a result, it was estimated that about half of all survivors who had been within 2.5 km of the hypocenters and were still alive 5 y after the bombings were included in the LSS (Douple et al. 2011) (i.e., that there were ∼2 × 40,883 total survivors in Hiroshima at distances <2.5 km, and ∼2 × 12,963 in Nagasaki). The distribution of colon dose versus distance in the LSS is shown in Table 2. Since the average DS02-weighted colon dose calculated for survivors in the LSS at 2.5 km (constant neutron weight of 10) is ∼5 mGy in Hiroshima and 7 mGy in Nagasaki, it is clear that virtually all radiation-associated cases of adverse health effects in the LSS are among those at distances <2.5 km. Furthermore, the fraction of survivors with unknown doses does not exceed ∼20% at any distance >1 km where there are substantial numbers of survivors, particularly if survivors in underground air-raid shelters in Nagasaki, who presumably had minimal doses, are not included as potential radiation-associated excess cases.† Even with the loss of some other cases of disease due to exclusion criteria for major RERF studies, it seems reasonable to suggest that multiplying the estimated number of radiation-associated excess cases of such studies by a factor of ∼2.5 typically provides a rough estimate of the total number of radiation-related excess cases among all survivors who were in the cities at the times of the bombings. 
The same basic considerations regarding distance range and sampling fraction apply to the in utero cohort. In regard to the F1 cohort, the sampling of pregnancies was thought to be fairly exhaustive with respect to those occurring in the two cities but is subject to losses due to out-migration and to exclusion based on other study criteria; furthermore, as with the LSS, pregnancies before 1948 were not included.

The evaluation of the radiation impact on the survivors themselves requires all of the same detailed data that are necessary for conventional risk estimation at RERF: not only good data on the adverse health outcomes that occur in the population (case ascertainment), but also data on:

* the radiation doses received by each individual;

* the individual’s values of other variables that may determine the probability that a given effect would have occurred in that person if they had no radiation exposure (baseline risk); and

* the individual’s values of variables for other risk factors that may interact with radiation in causing the adverse health outcome or covariates that, while not risk factors themselves, may modify the added risk of the outcome for a given radiation dose (effect modifiers).

The data of ABCC and RERF have had unique advantages among exposed populations for a number of reasons, including the size of the cohort and the lack of potentially biasing selection factors such as medical or occupational status, the great attention to the dosimetry and its validation by numerous physical experiments, and the fact that the extremely steep dose-distance gradient means that survivors with widely different doses were only hundreds of meters apart at the times of the bombings and were therefore not likely to have differed substantially in regard to potentially confounding variables that might be spatially correlated. While there are reasons to expect differences in socioeconomic status in different parts of the cities, these have been investigated primarily in relation to differences among groups of minimal-dose survivors and their suitability as comparison groups for the exposed survivors (Cologne and Preston 2001). While the purpose of RERF’s risk estimations is usually to generalize the experience of the atomic bomb survivors to other exposed or potentially exposed populations, the estimates are most directly relevant for evaluating the collective and individual risks of the survivors themselves. This is most clearly true for late effects, which are generally treated as stochastic in their occurrence because individual cases of the outcome in question cannot be distinguished by some characteristic(s) as having been caused by the individual’s exposure to radiation; hence the need for probabilistic risk estimation. But probabilistic risk estimation has also played a role in ABCC and RERF evaluations of dose-response relationships for early (acute) effects (i.e., the clinical signs and symptoms of radiation injury), despite the conventional idea that, as “tissue reactions,” they follow a paradigm in which severity increases with dose, as explained further below.

DATA

The ABCC/RERF data have been described extensively elsewhere, and only some key points relevant to evaluating the impact on the survivors will be mentioned here. Regarding ascertainment of health outcomes, death certificates are available from local registries throughout Japan to determine date and cause of death for all members of the cohorts, and cancer incidence data are available from high-quality tumor registries maintained by RERF in the two cities. Although there is some loss of data on cancer incidence due to undocumented out-migration from the tumor registry catchment areas, the migration rates are estimated from other sources, and estimates are adjusted accordingly (Preston et al. 2007). Dose information is extensive due to the enormous efforts that were made by the ABCC to collect exposure data by interviewing survivors or proxies, including the compilation of detailed shielding histories for proximal survivors, and the extensive efforts over the years to construct a series of increasingly sophisticated dosimetry systems. The last two systems, DS86 and DS02, were created by very large efforts involving international scientific working groups outside of RERF (Roesch 1987; Young and Kerr 2005; Cullings et al. 2006). Doses are considered “unknown” for only 7,070 of 93,741 survivors who were in the cities at the times of the bombings. These unknown doses include both survivors with insufficient shielding data and survivors in some classifications of shielding that were not technically feasible to include in DS86 and DS02, such as concrete buildings, underground air-raid shelters, and odd types of shielding such as streetcars, boats, etc.

For late effects, stochastic models typically assume that there is a substantial baseline risk of the outcome, which is much larger than the risk added by radiation except at the largest doses, in contrast to stochastic modeling of early effects in which RERF’s models have typically assumed that there is essentially zero risk of the outcome at zero dose if appropriately radiation-specific outcomes are chosen. For late effects, the most important covariates for baseline risk in RERF’s models are city, gender, and age at exposure, e, or year of birth (birth cohort), b (i.e., b = 1945−e), which are documented accurately for all but a handful of members of the cohorts by the original data-collection efforts for assembling the cohorts; and attained age, a, which requires the additional datum provided by case ascertainment of health outcomes as described above. For early effects, the first three of these are also the most important covariates (in addition to radiation dose and possibly time since exposure, tse = a−e, in lieu of either e or a) in modeling the probability of the survivors experiencing the outcome; and for late effects, all four of these covariates have been the main candidates for effect modifiers. Other covariates, such as education and lifestyle factors (e.g., diet) along with data on other exposures (e.g., smoking and alcohol), are available from original source documents or mail surveys of the LSS, or from clinical surveys of the AHS, and have been used in some studies of late effects (Furukawa et al. 2010; Grant et al. 2012).
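The arithmetic relationships among these covariates can be made concrete with a short sketch; the survivor values here are hypothetical, while the relationships themselves (b = 1945−e, tse = a−e) are those stated above.

```python
def birth_cohort(e):
    """Birth year b for a survivor exposed at age e in 1945: b = 1945 - e."""
    return 1945 - e

def time_since_exposure(a, e):
    """Time since exposure tse = a - e, for attained age a and age at exposure e."""
    return a - e

# Hypothetical survivor: exposed at age 10, followed to attained age 58.
e, a = 10, 58
print(birth_cohort(e))            # 1935
print(time_since_exposure(a, e))  # 48
```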

EARLY EFFECTS

Because there was so much injury due to trauma from blast forces and to burns of exposed skin from the thermal radiation of the fireball at distances where the radiation exposure was high enough to contribute to early mortality, it would be virtually impossible to define the contribution of the radiation doses received to the number of fatalities. Even in a study in which reliable and exhaustive numbers of casualties and associated dose estimates were available for the occupants of two concrete school buildings in Nagasaki, including those who did not survive, the estimation of the lethal radiation dose was done only for those without other serious injuries, and an evaluation of the contribution of the radiation to fatalities from combined injuries was not attempted (Levin et al. 1992). The estimation of early mortality naturally suffers from major issues with regard to the quality of the available data on the area-specific populations of the cities at the instant of the bombings. Some investigators have raised the possibility that early mortality may have biased the later studies of late effects discussed below by preferentially removing a sensitive subset of the population (Stewart and Kneale 2000), and this topic has been addressed by various investigators (Little and Charles 1990; Neriishi et al. 1991; Little 2002; Pierce et al. 2007). However, any putative selection would not affect the experience of those who did survive; it is only an issue in applying the risk estimates from the atomic-bomb survivors to other populations. The early effects quantified here will be stated only in terms of morbidity for the survivors and not early mortality among the entire exposed population.

The Joint Commission for the Investigation of the Effects of the Atomic Bomb in Japan and other teams of investigators performed early medical examinations and other investigations of the early effects on survivors (Oughterson and Warren 1956). From Table 4.4 of that reference, it may be calculated that ∼30,000 (42%) of 72,000 injured survivors in Hiroshima and 13,000 (51%) of 25,000 injured survivors in Nagasaki were estimated to have sustained “radiation injury,” often in combination with other injuries. These numbers and fractions are fairly high in light of what is now known about the doses received but not implausible. For example, if one chose 500 mGy of colon dose, calculated with a neutron weight of 10, as a possible threshold dose for radiation injury, ∼8,400 members of the LSS had doses exceeding this, and one might surmise that two to three times as many total survivors would have had such doses.

Very early investigations, such as those of the Joint Commission, did not have dose estimates available. The later data collected by ABCC, although obtained via interviews, are detailed, and doses are available for most of those survivors. For example, the Master Sample Questionnaire had spaces for mild, moderate, and severe grades of 14 symptoms in the section “Radiation Symptoms History” (Ishida and Beebe 1959). However, these data have limited capacity for relating severity of response to dose, since they were subjective self-evaluations with only three categories of severity, and relating severity to dose has not been a focus of ABCC/RERF studies. Rather, the emphasis has been to relate probability of occurrence to dose, restricting the outcome(s) to the most severe and radiation-specific conditions, and the work was mostly done for purposes other than establishing a dose-response relationship. For example, Jablon et al. (1965) used data on epilation, purpura, and oropharyngeal lesions to compare the fractions reporting each versus distance and dose calculated from the dosimetry system T57D in the two cities and to attempt an evaluation of the relative hazard of neutrons versus gamma rays. Gilbert and Ohara (1984) used severe epilation and purpuric bleeding for a number of analyses comparing the dose response for various methods of shielding calculation, different shielding categories, Nagasaki versus Hiroshima, and octants of horizontal direction from the hypocenters (circular symmetry). They made some comparative dose-response plots, but they based their quantitative results primarily on comparisons using odds ratios from tables, which did not depend on the shape of the dose response, and they did not explicitly evaluate the dose response by fitting parametric models to statistically estimate quantities such as the dose for 50% response.
Stram and Mizuno (1989) did attempt to evaluate the dose response for severe epilation, but they limited their analysis to the range 0.75–2.5 Gy, as the observed proportions in both the lowest and highest parts of the dose range did not approach the expected limits of zero and one. Other investigators have attempted to model these data (Little 2002; Stewart and Kneale 2000), and the cited work of Little provides fitted dose-response functions that consider dose uncertainty, although those functions also use terms for a nonzero baseline prevalence and a bending-over of the dose-response curve at higher doses; furthermore, the fitted model parameters would need to be obtained from the author, and the models would need to be applied to dose estimates that have been adjusted using Little’s assumptions for 30% dose uncertainty. In general, there has been considerable difficulty in establishing a straightforward quantitative dose response for even the most radiation-specific outcomes, such as severe epilation, due to the nature of the exposure situation, potential artifacts such as causes other than radiation, variation in individual reporting of the symptoms due to subjectivity and failures of memory, and dosimetric uncertainty. Correspondingly, although fractions of the cohort reporting some of the noted endpoints are available in the publications just cited, the ABCC/RERF data have not afforded a comprehensive, systematic assessment of acute effects on the survivors as a function of the doses received that is as well understood and widely used as the models discussed below for cancer.

LATE EFFECTS

Solid cancer

Solid cancers as a group remain the predominant late health effect among the survivors in terms of numbers of radiation-associated excess cases and corresponding excess absolute rates (EARs). Accordingly, the radiation dose response of “all solid cancer” has been characterized more thoroughly than other outcomes and serves to illustrate a number of important concepts, such as those related to dependence of the radiation-related excess on covariates (effect modification), and summary measures of radiation risk, such as integrated lifetime risk or loss of life expectancy (LLE). The statistical power obtained by grouping all solid cancers together comes at the expense of relying on a common model for cancers of anatomical sites that vary greatly in their age- and gender-specific rates of baseline occurrence, not to mention rates that differ among ethnic or national groups, which would complicate the potential transport of observed risks to other populations for wider use in radiation protection. Some authors have used less exhaustive groups of solid cancer sites for specific purposes, such as evaluating effect modification, by rationales such as excluding gender-specific or hormonally influenced cancers (Pierce 2002), but these were not used in the studies for which results are noted here.

The development of Poisson regression, using person-year tables of follow-up time stratified on radiation dose and other risk factors along with generalized excess risk models (Thomas 1981), allowed natural and intuitive models of risk as a function of gender, age at exposure to the radiation from the atomic bombs, and attained age subsequent to exposure. For radiobiological reasons, emphasis has been on models in which the excess risk is a linear or linear-quadratic function of radiation dose. However, models with an added exponential term for “bending over” of the response at the highest doses may give better estimates of risk at those doses (Little et al. 2008), particularly for the survivors themselves, although the number of survivors at such doses is small (Table 2). Here we cite results for a simple linear model with no threshold, except where otherwise noted, which greatly facilitates matters of interpretation.
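The three functional forms just mentioned can be sketched as follows; the coefficient values here are illustrative placeholders, not fitted RERF parameters.

```python
import math

def err_linear(d, beta):
    """Linear no-threshold excess relative risk at dose d (Gy)."""
    return beta * d

def err_linear_quadratic(d, beta, gamma):
    """Linear-quadratic form; gamma > 0 gives upward curvature."""
    return beta * d + gamma * d ** 2

def err_linear_exponential(d, beta, alpha):
    """Linear term damped by an exponential, 'bending over' at high doses."""
    return beta * d * math.exp(-alpha * d)

# Illustrative comparison: the forms nearly agree at low dose but
# diverge increasingly above ~1 Gy.
for d in (0.1, 1.0, 3.0):
    print(d,
          err_linear(d, 0.5),
          err_linear_quadratic(d, 0.4, 0.1),
          err_linear_exponential(d, 0.5, 0.2))
```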

In ABCC and RERF studies, much attention has been paid to the way in which risk varies with age at exposure and attained age, which are interrelated. There is strong evidence that these variations are considerable and that they need to be properly accounted for: first, in order to fit an acceptable stochastic model of the effect of the radiation on risk among the survivors; second, for evaluating individual risks for the survivors given their covariates; and finally, to make the results of RERF studies useful more generally in radiation protection. These patterns can be modeled in a straightforward way with Poisson (or Cox) regression, although the values for later periods of life in persons exposed at young ages necessarily require extrapolation of fitted parametric functions of age at exposure and attained age. The need for such extrapolation has continually decreased, with the most recent publications having follow-up to 2003, so that persons exposed just after birth would have attained an age of 58 y (Ozasa et al. 2012). However, 42% of LSS members were still alive as of 2003, and, as will be discussed below, projections suggest that a very large fraction of the expected excess cancer and noncancer cases remained to be observed after that time.

There is reason to believe from radiation biology that the effect of radiation on cancer rates should be larger at younger ages of exposure, but the effect of age at exposure on excess rates is inevitably confounded with the effect of birth cohort on baseline rates, with age at exposure being synonymous with birth cohort in the RERF cohorts because everyone was exposed at the same time. This is particularly true when the radiation risk is modeled as excess relative risk (ERR); conversely, the effect of age at exposure is typically more apparent in EAR models. The effect of using an EAR versus an ERR model in the investigation of age at exposure effects depends on whether the factors responsible for birth cohort trends act additively or multiplicatively with radiation. Excellent discussions of these relationships are given by Pierce and others (Pierce 2002; Pierce and Vaeth 2003). This is illustrated aptly in Fig. 1, taken from a recent major RERF paper on incidence of solid cancer (Preston et al. 2007), in which ERR and EAR models for risk as a function of attained age are shown with different curves for different ages at exposure. The effect of age at exposure on relative risk may be more apparent in some studies than in others due to various factors: In the latest published RERF studies, it is more apparent for solid cancer mortality than for incidence, as shown by the greater separation of the curves in Fig. 2, which is taken from the recent LSS report (Ozasa et al. 2012).

There has also been a longstanding concern about the effect of attained age or time since exposure, and the question of whether the risk from a single dose of radiation disappears after a number of years or persists at a constant level throughout the remainder of one’s life, or something in between. Of course, the interpretation of these alternatives depends on whether one refers to EAR or ERR [since baseline cancer rates rise very rapidly in later life, whether ERR increases or decreases depends on whether EARs are increasing with age more rapidly than baseline rates or rising less rapidly (or even declining)]. Most recent analyses at RERF suggest that EAR increases throughout life for all solid cancer and for most site-specific cancers, but not as rapidly as baseline rates, so that ERR decreases with age at long times after exposure, as shown in Fig. 1. This is broadly consistent with some mechanistic considerations in carcinogenesis (Pierce and Vaeth 2003), such as Armitage-Doll type models (Little et al. 1992) and generalizations of Moolgavkar-Venzon-Knudson type models (Heidenreich et al. 2002).

A common way to give major risk results in RERF studies in a single number is to give the ERR at 1 Gy for risk at a particular attained age after exposure at some other age as a gender-averaged value, often for age at exposure = 30 y and attained age = 70 y. Using this measure, a recent result for incidence of all solid cancer is 0.47 Gy−1 (Preston et al. 2007), and a comparable result for all solid cancer mortality is 0.42 Gy−1 (Ozasa et al. 2012), with dose specified to the colon with a constant neutron weight of 10. As an example for mortality, the ERR for an individual of any given gender, age at exposure, and attained age can be approximated by examining the plot in Fig. 2 and by knowing that the risk for females was about twice that for males (Ozasa et al. 2012). Approximating formulas for effect modification are also given by Ozasa et al. (2012) (e.g., a decrease of 29% per decade of age at exposure and a decrease proportional to attained age to the power −0.86).
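The summary value and approximating modifiers quoted above can be combined into a rough back-of-envelope calculator. The multiplicative form below is an assumption consistent with the usual structure of LSS ERR models; it only approximates the published fits and ignores their uncertainties and gender difference.

```python
def approx_err_mortality(dose_gy, age_at_exposure, attained_age):
    """Very rough gender-averaged ERR for all solid cancer mortality,
    anchored at 0.42 per Gy for exposure at age 30 and attained age 70
    (Ozasa et al. 2012), with an assumed 29% decrease per decade of age
    at exposure and attained-age dependence ~ (attained age)**-0.86."""
    anchor = 0.42                                            # ERR per Gy at reference ages
    age_exp_factor = (1 - 0.29) ** ((age_at_exposure - 30) / 10)
    attained_factor = (attained_age / 70.0) ** -0.86
    return anchor * dose_gy * age_exp_factor * attained_factor

print(approx_err_mortality(1.0, 30, 70))  # 0.42 at the anchor point
print(approx_err_mortality(1.0, 10, 70))  # larger for exposure at a younger age
```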

For any given individual, the most informative measure of impact is some form of lifetime detriment that expresses the increased risk due to radiation exposure over an individual’s remaining lifetime after the exposure. Two broad classifications are often used: One integrates the risk of experiencing a radiation exposure-associated disease over the individual’s lifetime, and the other involves the extent to which the individual’s life is likely to be shortened due to a disease caused by the radiation exposure (i.e., LLE). A better summary of an individual’s risk is communicated by giving one of each type of estimate rather than one or the other (Thomas et al. 1992). The first classification includes the excess lifetime risk (i.e., the difference in lifetime risk of a disease between an exposed and unexposed individual); the risk of radiation-induced death (REID) (i.e., the lifetime risk of dying of a disease attributable to the radiation exposure); and the lifetime attributable risk, an approximation of the REID that has certain desirable properties (Kellerer et al. 2001). The second classification includes the LLE for an exposed individual and the LLE among exposure-induced deaths (Thomas et al. 1992). The quantities given herein are primarily the REID and the LLE. The reader is referred to the references just cited for more detailed definitions.
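A toy discrete life-table calculation, using invented hazards rather than RERF data, illustrates how the two classes of measure differ mechanically: REID accumulates the probability of dying of a radiation-induced cause, while LLE is the difference in expected remaining years between the unexposed and exposed survival curves.

```python
def reid_and_lle(baseline_hazard, excess_hazard):
    """baseline_hazard, excess_hazard: lists of annual mortality hazards
    over the remaining lifetime. Returns (REID, LLE) by simple one-year
    discretization; a sketch, not an actuarial-grade calculation."""
    s_unexp = s_exp = 1.0                # survival probabilities to start of year
    reid = years_unexp = years_exp = 0.0
    for h0, h1 in zip(baseline_hazard, excess_hazard):
        years_unexp += s_unexp           # expected years lived this year
        years_exp += s_exp
        reid += s_exp * h1               # die this year of the radiation-induced cause
        s_unexp *= 1.0 - h0
        s_exp *= 1.0 - (h0 + h1)
    return reid, years_unexp - years_exp

# Invented hazards: baseline rising with age, small constant radiation excess.
baseline = [0.002 * 1.08 ** t for t in range(70)]
excess = [0.0004] * 70
reid, lle = reid_and_lle(baseline, excess)
print(round(reid, 4), round(lle, 3))
```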

Lifetime detriment has been considered in various papers by RERF investigators, including a paper on effects on longevity (Cologne and Preston 2000), which used Cox regression with parametric models, and more recently a paper on predicting future risk (Furukawa et al. 2009), which used Bayesian age-period-cohort models. Furukawa et al. (2009) give a REID of 30% for solid cancer for those exposed to >1 Gy (mean ∼1.6 Gy) at the youngest ages (i.e., <5 y); the REID decreases with age at exposure. They give an estimated LLE of ∼15 y for women and 10 y for men, for those exposed to >1 Gy at <5 y of age, for radiation-associated cancer and noncancer mortality combined. These estimates of LLE should not be confused with the LLE per exposure-induced death, which by definition is a larger quantity; the values noted here are rather large because they are specified for exposure at a very young age and are for combined cancer and noncancer mortality. The considerable reduction in LLE with age at exposure is shown in Fig. 3, which applies to combined cancer and noncancer mortality.

For the LSS as a group, 527 of 10,929 deaths due to solid cancer were estimated to be excess deaths due to radiation in the most recent LSS report on cancer mortality (Ozasa et al. 2012). The rather small size of the corresponding proportion is related to the highly skewed distribution of doses in the LSS, with the vast majority having low doses, as illustrated in Fig. 4. Fig. 5 shows the estimated and predicted number of excess cancer deaths in the cohort over time; the actual peak is expected to be reached between about 2010 and 2015, due to the large proportion of young ages at exposure in the LSS and the relatively large risk for exposure at the youngest ages. When cancers of different anatomical sites are considered separately, different sites (i.e., different tissues) appear to have different radiation risks and in some cases different patterns of effect modification with respect to the age variables. Copious detail is given in recent references (Preston et al. 2007; Ozasa et al. 2012).

Leukemia and other hematopoietic malignancies

Leukemia was the first late effect to be noticed among the atomic-bomb survivors and has always been strongly associated with exposure to ionizing radiation. A very recent paper on incidence of leukemia and other hematopoietic malignancies (Hodgkin and non-Hodgkin lymphomas, multiple myeloma) provides results for various subtypes and combinations of subtypes of leukemia (Hsu et al. 2013). The primary result is for a combination of the three types of leukemia generally considered to be increased by radiation exposure: acute myeloid leukemia (AML), acute lymphoblastic leukemia (ALL), and chronic myeloid leukemia (CML). The best-fitting dose response was a linear-quadratic function of dose with substantial upward curvature and ERR of 1.74 at 1 Gy, considerably larger than the ERR of all solid cancer. However, because the baseline incidence of leukemia is much smaller than that of all solid cancer, the absolute excess rates are much smaller. There were 312 cases of AML/ALL/CML in the 51 y follow-up of the LSS from 1950 to 2001 (Hsu et al. 2013) versus 17,448 first primary solid cancers in the 40 y follow-up from 1958–1998 (Preston et al. 2007). However, ∼94 of those 312 cases were considered to be associated with the radiation exposure. The relative risk of leukemia was larger for younger age at exposure, as with all solid cancer but more so—the risk was very high for times shortly after exposure at young ages, and the decline with age or time after exposure was much more marked than for solid cancer. The EARs, not just the ERR, declined with age or time since exposure, although the excess risk had not disappeared by the end of follow-up; even for the last 12 y of follow-up, 45–55 y after exposure, the radiation-associated risk at 1 Gy was estimated to be twice as large as the baseline risk. 
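The linear-quadratic shape reported for AML/ALL/CML can be sketched as follows. The split between the linear and quadratic coefficients below is invented for illustration, constrained only so that ERR(1 Gy) = 1.74 with the substantial upward curvature described above.

```python
def err_leukemia(d, beta=0.74, gamma=1.00):
    """Hypothetical linear-quadratic ERR with ERR(1 Gy) = beta + gamma = 1.74;
    the beta/gamma split is an assumption, not a fitted value."""
    return beta * d + gamma * d ** 2

# Upward curvature: at low doses the ERR falls below a straight-line
# extrapolation from the 1-Gy value, while at higher doses the quadratic
# term dominates.
print(err_leukemia(1.0))   # 1.74
print(err_leukemia(0.1))   # about 0.084, vs. 0.174 from linear extrapolation
```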
Measures of lifetime detriment have not been estimated in recent RERF papers on leukemia mortality in the LSS, but the BEIR VII report (NA/NRC 2006) gives estimates of lifetime attributable risk of mortality on the order of 0.03–0.05% for exposure to 0.1 Gy, depending on age at exposure, based primarily on analysis of the LSS mortality data. Furthermore, it is clear from the patterns of age-time modification noted above that LLE is dramatically larger for those exposed at younger ages. Regarding hematopoietic malignancies other than leukemia, there was a weak indication of a radiation risk of non-Hodgkin lymphoma among men, but not women, and no other remarkable risks among these malignancies (Hsu et al. 2013).

Noncancer disease

Mortality due to noncancer diseases as a group was first noted to be in excess at high doses in the LSS after ∼30 y of follow-up (Kato et al. 1982) and has been increasingly firmly established as a radiation risk in successive LSS reports (Shimizu et al. 1992), although with a substantially lower ERR than solid cancer. The effects of age at exposure and attained age on risk were examined in LSS Report 13 (Preston et al. 2003). In the period 1966–2003, chosen to minimize selection bias arising from the fact that members of the cohort had to survive early effects and concomitant injury and disease, the ERR for all noncancer diseases combined was estimated to be 0.13 at 1 Gy, using a linear-quadratic model with slight upward curvature (Ozasa et al. 2012). Correspondingly, only ∼353 of 35,685 noncancer deaths were considered to be associated with the radiation exposure. Estimated and projected numbers of radiation-associated excess noncancer deaths in the LSS as a function of calendar time are shown in Fig. 5. In regard to lifetime detriment, Furukawa et al. (2009) give an estimated REID of ∼10% for women and 5% for men exposed to >1 Gy, for most ages at exposure. As noted above under “Solid Cancer,” for radiation-associated cancer and noncancer mortality combined, they estimate an LLE of ∼15 y for women and 10 y for men exposed to >1 Gy at <5 y of age (Furukawa et al. 2009).

In addition to all noncancer disease combined, Ozasa et al. (2012) reported significant risk estimates for several subcategories: diseases of the blood, circulatory system, respiratory system, and, for the period 1966–2003, the digestive system. Nonmalignant diseases of the blood and blood-forming organs had a particularly high estimated ERR of 1.79 Gy−1, although this may be an overestimate due to misclassification of some malignant diseases as nonmalignant on death certificates, a concern that might also apply to respiratory and digestive diseases. The ERR estimates for circulatory, respiratory, and digestive diseases were between 0.11 and 0.23 Gy−1 for the period 1966–2003. Diseases of the circulatory system have been a subject of increasing interest at RERF in recent years. A recent study of mortality in the LSS found significant risk for both stroke and heart disease, at least at doses >0.5 Gy, with ERR estimates of 0.09 and 0.14 Gy−1, respectively (Shimizu et al. 2010). Furthermore, a study in the AHS clinical cohort suggested that the radiation risk of stroke applied to hemorrhagic but not ischemic stroke (Takahashi et al. 2012), and another paper from the AHS showed an effect of radiation on the increase of blood pressure with age (Sasaki et al. 2002).

Cataract has long been a recognized effect of ionizing radiation exposure, and recent work by RERF investigators has suggested that vision-impairing cataracts occur at substantially lower doses than previously appreciated. A recent paper suggested a threshold of 0.5 Gy for cataracts requiring surgery, with a linear dose response having ERR of 0.32 Gy−1 at 70 y of age for exposure at 20 y of age, with higher ERRs at younger ages at exposure (Neriishi et al. 2012), although the threshold estimate must be viewed cautiously, especially in light of the lack of evidence of curvature. The estimated EAR was 33 cases per 10,000 persons per year per gray (0.33% y−1 Gy−1); corresponding estimates of excess cases in the full LSS or lifetime detriment are not immediately available.
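The two unit forms of the cataract EAR quoted above are equivalent, and the figure can be used for rough expected excess counts; a minimal sketch, in which the group size, follow-up length, and dose are invented purely for illustration:

```python
# EAR for cataracts requiring surgery (Neriishi et al. 2012):
# 33 excess cases per 10,000 persons per year per gray.
ear_cases_per_10k_py_gy = 33.0

# The same quantity expressed as a percentage per person-year per gray
# (0.33% y^-1 Gy^-1, as quoted in the text).
ear_percent_py_gy = ear_cases_per_10k_py_gy / 100.0

# HYPOTHETICAL illustration: expected excess cataract cases in a group of
# 5,000 people followed for 10 y after a uniform 0.5-Gy exposure, assuming
# the EAR applies uniformly over that period.
persons, years, dose_gy = 5_000, 10, 0.5
expected_excess = ear_cases_per_10k_py_gy / 10_000 * persons * years * dose_gy
```

Such a crude projection ignores the age dependence of the risk noted in the text, so it serves only to connect the units.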

RERF results also suggest a risk of hyperparathyroidism, with an estimated ERR of ∼3 Gy−1 and some indication of increased risk at lower age at exposure (Fujiwara et al. 1992). The estimated prevalence rates were ∼2% in males and 5% in females in the groups with highest dose and lowest age at exposure, although the confidence bounds on all of the noted values are wide due to the small numbers of cases. There has also been evidence of radiation risk of benign as well as malignant thyroid nodules (Imaizumi et al. 2006) and another form of benign neoplasm: uterine myoma (fibroma) (Kawamura et al. 1997). Many of these findings have been confirmed by analysis of longitudinal data from the AHS (Yamada et al. 2004).

In addition to clinically manifest disease, the effect of radiation exposure is seen in a number of biomarkers among the survivors, the implications of which remain to be fully elucidated. Perhaps the best known is the cytogenetic evidence of chromosomal aberrations, which is clearly related to radiation dose (Kodama et al. 2001). A number of immunological changes have also been observed and have been hypothesized to be related to disease via chronic inflammation (Kusunoki et al. 2002, 2010; Kusunoki and Hayashi 2008).

Psychosocial effects

In a literal sense, evaluating the psychosocial effects of the radiation from the bombs per se is problematic for various reasons, including that the radiation was accompanied by so many other stresses and that survivors generally did not know or understand the extent of the doses they had received personally. Hence studies have been predicated not on radiation dose but on the survivors’ experience of the general situation. A study by RERF investigators concluded that persons who had been in the cities at the times of the bombings exhibited more anxiety and somatization symptoms than those who had not been in the cities (Yamada and Izumi 2002). Bromet (1998) has reviewed research on the psychosocial effects of the atomic bombs and other radiation disasters.

EXPOSURE IN UTERO

RERF analyses have suggested some special concerns about exposure in utero, based on studies of the related cohort. Perhaps most noted have been neurological effects, particularly the indication of a developmental effect on cognitive ability, which has been related to various measures such as school performance, intelligence tests, and diagnoses of severe developmental disability, and was specific to particular periods of gestation: 8 to 15 wk, and to a lesser extent, 16 to 25 wk after ovulation. The prevalence of severe developmental disability was estimated to be ∼40% for exposure to 1 Gy in the 8–15 wk period, with an indication of a threshold of at least 0.3 Gy, but the results were quite uncertain, being based on only 30 cases (Otake and Schull 1998; Douple et al. 2011). In regard to induction of cancer and leukemia, RERF studies have not confirmed some of the more extreme indications from elsewhere of a very high radiosensitivity for exposure in utero in contrast to postnatal exposure, although it should be noted that the ABCC/RERF follow-up of in utero survivors did not begin until 5 or 6 y of age for mortality and 11 or 12 y of age for incidence, and the risk estimates for prenatal exposure from RERF studies are not necessarily inconsistent with those of other major studies (Wakeford and Little 2003). RERF studies seem to indicate that the impact on survivors exposed in utero is not markedly greater than that for those exposed in early childhood, as discussed in the sections above, in regard to either mortality due to leukemia or solid cancer (Delongchamp et al. 1997) or incidence of solid cancer (Preston et al. 2008).

The dose distribution among in utero survivors with DS02 dose estimates is very similar to that of the women in the LSS, as in utero doses are basically proportional to mothers’ doses, and there does not appear to have been any relation between pregnancy and shielded kerma (i.e., dose in air at the survivor’s shielded location). Because the cohort is so small and its members are younger than the youngest members of the LSS, the total number of cancers to date is very small, with only 94 cancers being eligible for the study in the most recent analysis of cancer incidence (Preston et al. 2008), and the estimated number of excess cases is tiny. As the group ages, given the points noted above regarding dose distribution and risk estimates, one might expect the excess numbers of cases in this group, in proportion to the size of the cohort, to be similar to those among persons exposed in childhood in the LSS, although there would be some minor differences because the dose distribution of in utero survivors is somewhat different from that of those exposed as children (Delongchamp et al. 1997). In regard to noncancer disease, a study involving clinical follow-up of survivors from the in utero clinical cohort failed to find clear evidence of radiation risk of hypertension, hypercholesterolemia, or cardiovascular disease, but the small size of the group (n = 506) and its young age at the time of the study (follow-up through 53 or 54 y of age) were very limiting (Tatsukawa et al. 2008). Much interest has been generated by findings that in utero exposed survivors did not have as many chromosomal aberrations as their mothers (Ohtaki et al. 2004), which runs counter to conventional ideas about the high radiosensitivity of the fetus and may be related to the lack of any particularly high apparent radiosensitivity noted above in regard to leukemia (Delongchamp et al. 1997).
However, this phenomenon, observed in lymphocytes (the tissue used in all typical chromosomal aberration assays), might be tissue specific, and ongoing research, including work in animal models, is investigating this question among others (Nakano et al. 2012).

GENETIC EFFECTS ON CHILDREN OF THE SURVIVORS

The possibility of genetic effects was a very early concern in the work of ABCC and the subject of a major early study involving a population-based cohort of some 77,000 registered pregnancies, designed to determine whether parental exposure to radiation was associated with untoward birth outcomes (Neel and Schull 1991). No significant effects were found. In the ensuing decades, ABCC and RERF have performed numerous studies, still ongoing and evolving in methodology with improvements in science and technology, aimed at detecting any inherited effect of parental radiation exposure, but with no statistically significant results to date (Douple et al. 2011). Neither cancer incidence (Izumi et al. 2003a) nor mortality (Izumi et al. 2003b) among the children has shown an effect of parental exposure. Currently, a cohort of survivors’ children is receiving clinical follow-up to evaluate the incidence of multifactorial diseases of adulthood, with negative results to date (Tatsukawa et al. 2013). The distribution of parental doses in the F1 cohorts is very highly skewed: among 11,951 members of the F1 Clinical Study cohort just mentioned, only 226 have mothers’ gonadal doses >1 Gy, and only 125 have fathers’ gonadal doses >1 Gy. Techniques for detecting mutations inherited from irradiation of parents’ gonads are currently being refined, using high-density arrays of probes for comparative genomic hybridization in planned studies of sets of related parents and children, which may be extended to DNA sequencing in the future. That radiation is a known mutagen would suggest that there must be some effect, however subtle or low in probability, but such an impact on the survivors’ children remains to be identified, let alone quantified.

CURRENT RESEARCH

RERF continues to conduct a wide variety of research aimed at elucidating the effects of the atomic bomb survivors’ radiation exposure. Some topical research interests include whether radiation increases the risk of diabetes or of conditions of the eye other than cataract, questions about the mechanisms of radiation-associated circulatory disease in the survivors, and other lines of research related to carcinogenesis, mutagenesis, and immunological effects. Interactions with other risk factors are a focus of research interest, as are the shapes of dose responses for major late effects in the low-dose range. Critical data are accumulating and being evaluated on the health experience of survivors who were exposed in utero or in childhood as they progress into old age.

CONCLUSION

A rather large fraction of survivors, perhaps one-fourth or more of proximal (<2 km) survivors, may have experienced early effects in the form of signs and symptoms of acute radiation injury. The fractions of survivors experiencing late effects appear smaller, with the number of estimated radiation-associated deaths to date being on the order of 1,000, or ∼3% of proximal survivors in the LSS, although projections suggest that number could double in the next couple of decades, and dying at a younger age due to radiation exposure has obviously grave implications. A full picture of the impact of cancer on the survivors must include the elevated risk of incidence as well as mortality. Furthermore, there are other late effects with a clear impact on health and quality of life, such as cataract. For survivors who were in utero at the times of the bombings, exposure to doses of a fraction of a gray or more during critical periods of gestation may have led to severe developmental consequences in a rather large proportion of those so exposed. In addition, survivors were subject to a psychosocial impact as well, although this was related to the general experience of being in the cities at the times of the bombings rather than to radiation dose per se.

Acknowledgments

The author is especially grateful to Don Pierce for valuable discussions and to Kyoji Furukawa for his kind provision of several unpublished figures from his work on risk projection. The Radiation Effects Research Foundation (RERF), Hiroshima and Nagasaki, Japan, is a private, nonprofit foundation funded by the Japanese Ministry of Health, Labour and Welfare (MHLW) and the U.S. Department of Energy (DOE), the latter in part through U.S. DOE Award DE-HS0000031 to the National Academy of Sciences. This publication was supported by RERF Research Protocols 18–59 and 1–75. The views of the author do not necessarily reflect those of the two governments.

