By the spring of 1945, he had seen further. He saw that the bombs would be used on cities. And he realized that to murder vast numbers of innocent civilians in the hope that it might make repetition of the crime less likely was immoral. The deaths would be certain. The future benefit was only hypothetical.

I think you give the wartime decision-makers too much credit in suggesting that their statements about the bomb were trying to preserve ambiguity. I think that any ambiguity reflects only their muddled thinking. As they marveled at the power of the bomb, the vast ocean of its implications lay undiscovered all around them.

As for the common perception that Hiroshima and Nagasaki created a moral barrier to future use of the bomb, I think that there is more truth to the opposite view. All the war-planning of the nuclear arms race was based on the assumption that using nuclear weapons on cities was legitimate. In other words, the nuclear arms race was predicated on the precedents of Hiroshima and Nagasaki. To this day, the USA hasn’t renounced or condemned such use.

Another fine post, Alex. I am a fan of John Dower’s work. I recommend Embracing Defeat: Japan in the Wake of World War II. The argument that doesn’t show up very much in these debates, but that I find persuasive, is this: Truman, Stimson and the rest were intent on doing their utmost to end the war as quickly as possible. They, more than anyone, were acutely aware of the casualty counts, and of what was in store the longer the war in the Pacific dragged on. Any argument that would have delayed a surrender would have been — and was — a very tough sell. MK

Hi Michael: I hope you are well! While there is no doubt that they wanted a swift end to the war, there is the question of how much they thought the atomic bomb would help with this, how best to achieve it, and how to weigh that need against all of their other desires (e.g. re: the demands for unconditional surrender, an area where Stimson and Truman disagreed deeply). It is a complex issue, and they were trying to come up with the right solution for both the near and far time horizons.

The Target Committee ruled out nuking already-bombed cities because they didn’t think it would provide much “demonstration.” And I would note that Tokyo was still probably the most populous city in Japan in 1945 (3 million or so), even after all of that bombing and death.

Differentiation is a process of finding a function that outputs the rate of change of one variable with respect to another variable.

Informally, we may suppose that we're tracking the position of a car on a two-lane road with no passing lanes. Assuming the car never pulls off the road, we can abstractly study the car's position by assigning it a variable, x. Since the car's position changes as the time changes, we say that x is dependent on time, or x = f(t). This tells us where the car is at each specific time. Differentiation gives us a function dx/dt which represents the car's speed, that is, the rate of change of its position with respect to time.
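The rate of change dx/dt can be estimated numerically with a finite difference. Below is a minimal sketch: the position function (5·t², a car accelerating from rest) is a hypothetical example chosen for illustration, not taken from the text.

```python
def derivative(f, t, h=1e-6):
    """Approximate df/dt at time t using a central difference."""
    return (f(t + h) - f(t - h)) / (2 * h)

def position(t):
    # Hypothetical car position in metres after t seconds.
    return 5 * t ** 2

# The exact derivative of 5*t**2 is 10*t, so the speed at t = 3 is 30 m/s.
speed = derivative(position, 3.0)
print(round(speed, 3))  # → 30.0
```

The central difference (f(t+h) − f(t−h)) / 2h converges faster than a one-sided difference as h shrinks, which is why it is the usual default for quick numerical derivatives.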

Equivalently, differentiation gives us the slope at any point of the graph of a non-linear function. For a linear function of the form f(x) = ax + b, a is the slope. For non-linear functions, such as f(x) = 3x², the slope can depend on x; differentiation gives us a function which represents this slope.
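For the text's example f(x) = 3x², the derivative is f′(x) = 6x, so the slope grows with x. A short sketch checking this numerically with the same central-difference idea (the helper name `slope` is ours, for illustration):

```python
def slope(f, x, h=1e-6):
    """Central-difference approximation of the slope f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def f(x):
    # The non-linear example from the text; its exact derivative is 6*x.
    return 3 * x ** 2

for x in (0.0, 1.0, 2.0):
    print(x, round(slope(f, x), 3))  # slopes 0.0, 6.0, 12.0
```

Unlike the linear case, where the slope is the single constant a everywhere, here each input x yields a different slope — which is exactly why differentiation must return a function rather than a number.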

Kurzweil [170], considering the possible effects of many future technologies, notes that AGI may be a catastrophic risk. He generally supports regulation and partial relinquishment of dangerous technologies, as well as research into their defensive applications. However, he believes that with AGI this may be insufficient and that, at the present time, it may be infeasible to develop strategies that would guarantee safe AGI. He argues that machine intelligences will be tightly integrated into our society and that, for the time being, the best chance of avoiding AGI risk is to foster positive values in our society. This will increase the likelihood that any AGIs that are created will reflect such positive values.

One possible way of achieving such a goal is moral enhancement [91], the use of technology to instill people with better motives. Persson [215] argues that, as technology improves, we become more capable of damaging humanity, and that we need to carry out moral enhancement in order to lessen our destructive impulses.

Proposals to incorporate AGIs into society suffer from the issue that some AGIs may never adopt benevolent and cooperative values, no matter what the environment. Neither does the intelligence of the AGIs necessarily affect their values. Sufficiently intelligent AGIs could certainly come to eventually understand human values, but humans can also come to understand others' values while continuing to disagree with them.

Thus, in order for these kinds of proposals to work, they need to incorporate strong enforcement mechanisms to keep non-safe AGIs in line and to prevent them from acquiring significant power. This requires an ability to create value-conforming AGIs in the first place, to implement the enforcement. Even a soft takeoff would eventually lead to AGIs wielding great power, so the enforcement could not be left to just humans or narrow AIs. In practice, this means that integration proposals must be combined with some proposal for internal constraints which is capable of reliably creating value-conforming AGIs. Integration proposals also require there to be a soft takeoff in order to work, as having a small group of AGIs which rapidly acquired enough power to take control of the world would prevent any gradual integration schemes from working.

Therefore, because any effective integration strategy would require creating safe AGIs, and the right safe AGI design could lead to a positive outcome even if there were a hard takeoff, we believe that it is currently better to focus on proposals which are aimed at furthering the creation of safe AGIs.

Integrating AGIs into society may require explicit regulation. Calls for regulation are often agnostic about long-term outcomes but nonetheless recommend caution as a reasonable approach. For example, Hibbard calls for international regulation to ensure that AGIs will value the long-term well-being of humans, but does not go into much detail. Daley [79] calls for a government panel for AGI issues. Hughes [157] argues that AGI should be regulated using the same mechanisms as previous technologies, creating state agencies responsible for the task and fostering global cooperation in the regulation effort.²⁴