Introduction

Emergent high-technologies pose a dilemma for policymakers in modern society. In their incubatory forms, these technologies are without defined and accepted standards – technical or social – of design and operation. Without such standards, high-technologies cannot be fully “understood”; and if they cannot be fully understood, can they be fully controlled? There is a clash between emerging technologies’ “novelty” – their being not fully known or assessed – and the modern regulatory state that has emerged to govern their development and consumption: a social mechanism that seeks to preserve a general uniformity of behavior and cultural norms (order) and the reliability of that behavior (control).[i]

The tension between novelty and regulation is especially acute for high-risk technologies, which present a specter of social harm. The consumption of high-risk technologies – who may use them; how they may use them; when they may use them; and what, if any, protections they receive when using them – therefore becomes a value statement: a judgment of individual and collective tolerance for potential costs and an evaluation of individual and collective benefit. Permitting or prohibiting use of a high-risk technology, and in what shape and form, is a social dialogue that extends beyond the technology itself, reflecting broader cultural values, contexts, and concerns. As such, a decision on the nature and scope of regulation is ultimately a decision on social power – which actor has authority over decisions to mitigate or not mitigate risk, and which actor has autonomy in decisions to assume risk.

In short, the debate over the use and regulation of high-risk high-technologies is about legitimacy – i.e., the social recognition, acknowledgement, and acceptance of power in its various contours and dimensions. When not all is known or can be accounted for in the character of a technology, who is (and who is not) the proper arbiter of its risk? What legal and institutional frameworks for managing that technology confer legitimacy on its use, justifying risks to the involved and uninvolved public against potential costs? These arrangements are, like perceptions of risk, socially constructed – beholden to ideological predispositions about the interplay of safety and society.

Commercial human spaceflight offers a salient case study on legitimacy as a frame for discussing risk and safety. In recent years, vigorous debate has occurred over the safety standards involved in the private flight of humans to, from, and in outer space – particularly, whether the government should regulate passenger safety and spacecraft design. Spaceflight is an overtly risky activity.[ii] Despite a history of 50 years of human activity in space, the space environment’s effects on human physiology are still not well understood. More importantly, the technological hurdles of accessing and using the space environment necessitate vastly complex systems of hardware and people, often with unclear or unanticipated single points of failure. The historical odds of casualty in human spaceflight are worse than 1 in 70.[iii]
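For rough perspective on that figure – an illustrative calculation, not the cited source’s derivation – the Space Shuttle alone flew 135 missions and suffered two losses of crew, implying

$$ P(\text{loss of crew}) \approx \frac{2}{135} \approx 1.5\% \;>\; \frac{1}{70} \approx 1.4\%. $$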

This paper explores the debate on commercial human spaceflight safety through the lens of social construction – finding that the debate is grounded not in spaceflight technology itself, or the activity of spaceflight, but in differently held constructs of safety, society, and risk. The disparate perspectives expressed in the debate arrive at disparate prognoses for what risk identification and management regime (or lack thereof) is appropriate and acceptable for this risky high-technology’s use – that is, which confers legitimacy on the enterprise. This paper does not set out to arrive at an answer to the debate, but rather to explore its foundational dimensions beyond the sound-bites, floor statements, and policy proposals that have to date shaped it.

The prospect of an industry for commercial human spaceflight began to materialize in the early 2000s, with the first private launch of humans into outer space successfully carried out in 2004.[iv] Responding to the nascent field, the United States Congress passed a law – the Commercial Space Launch Amendments Act of 2004 (CSLAA) – which established a foundational legal and regulatory regime to govern private human spaceflight. As signed into law, the CSLAA established a regime premised on passenger informed consent: vehicle operators are required to inform spaceflight participants about the risks of flight – including detailing the safety record of their vehicle and stating that it is not government-certified as safe – and spaceflight participants assume and consent to the risks of participating.[v]

Under the CSLAA, the FAA’s safety regime is limited to protecting the safety of the uninvolved public; regulations may not be promulgated to specify design criteria or practices of passenger-carrying spacecraft so as to address or mitigate risk. While the FAA can create training and medical standards for passengers and crew, it cannot restrict or prohibit “design features or operating practices” unless these have been found to “have resulted in a serious or fatal injury… to crew or spaceflight participants during a licensed or permitted commercial human space flight.”[vi] This has colloquially come to be known as the industry “learning period.”

Contentious debate during the CSLAA’s consideration was demonstrative of starkly different regulatory philosophies regarding high-risk activities – and of the values held about the proper nature and scope of a government’s role in risk mitigation for nascent high-technologies.[vii] At the time, the FAA’s Associate Administrator for Commercial Space expressed the view that passengers “should be able to board their vehicles with the same freedom as the stunt pilots who pioneered commercial aviation.”[viii] This perspective was mirrored by the legislation’s proponents; one, Rep. Boehlert, described the rationale behind the learning period and informed consent by characterizing the industry as being at,

“the stage when it is the preserve of visionaries and daredevils and adventurers… these are people who do not expect and should not expect to be protected by the government. Such protection would only stifle innovation…[ix] [The bill strikes] the right balance, protecting the public without stifling the industry… and sets the industry on a path toward greater regulation as it develops.”[x]

Others,
though, viewed passenger safety and vehicle risk in starkly different ways.
Rep. Oberstar and Rep. DeFazio, both leading members of the House of
Representatives’ committee overseeing transportation, analogized commercial
human spaceflight to traditional aviation. Rep. Oberstar, in opposition to the
bill, circulated a criticism stating that the legislation’s safety standard,

“amounts to the codification of what has come to be known in aviation safety parlance as the ‘tombstone mentality’: don’t regulate until there are fatalities. For many years, many of my colleagues and I have criticized the Federal Aviation Administration for waiting until after a disaster to take safety actions, and have urged a more proactive safety oversight…[xi] I do not think that safety regulation is ever silly.”[xii]

The safety regulation learning period established by the CSLAA was set to “sunset” (expire) in 2012. However, the commercial spaceflight industry did not materialize at the pace expected in 2004. Indeed, no spaceflights of private passengers – paid customers or company crew – occurred in the years following the 2004 flights. Amid industry concern – and Congressional receptivity to the argument – that not enough data had been collected to properly inform the FAA on the character of potential regulation, the learning period was granted two extensions, setting it to expire in 2015.[xiii]
By that time, many in industry were again petitioning the Congress to extend
the learning period, arguing that it was still premature to issue regulations
given the lack of flight experience. Jeff Greason, Chairman of XCOR – a
commercial suborbital human spaceflight company – noted at the time that “we
don’t want to start regulating based on the shape of the industry today in a
fashion that prevents it from evolving.”[xiv]

However, the FAA stated its opposition to any extension of the learning period, with its Associate Administrator for Commercial Space saying that “we appear to be just kicking the can down the road.”[xv],[xvi] He proposed that the development of industry-consensus standards could give the government a reference point for later regulations – an approach similar to the regulatory regime in sport aircraft “that would prevent an overreaction and hastily crafted, inappropriate regulations in response to some high-profile accident.”[xvii] Others, such as Mike Griffin, a former NASA Administrator, noted that it was “inconceivable that we’re going to have a lesser regulatory structure for commercial human spaceflight than we have for my Beech Bonanza airplane.”[xviii]

Nonetheless, in late 2015, Congress again took up an extension of the learning period – reinvigorating the debate over passenger safety. The Commercial Space Launch Competitiveness Act of 2015 (CSLCA), as introduced, proposed an extension of the learning period by 5 years; a later amendment to the legislation extended it out to 2023. The legislation also extended “cross-waiver” provisions to spaceflight participants through 2025, requiring them to waive their right to legal action against the United States government for damages in the event of an accident, except under circumstances of gross negligence.[xix],[xx]

Much like the debate over the CSLAA, there was strong disagreement over the CSLCA’s extension of limitations on safety regulation of human spaceflight. Rep. Grayson, during the legislation’s consideration, argued that “[a]ny limitation of liability, any indemnification, is wrong… [w]e invite an accident, we invite a tragedy, if we limit liability.”[xxi] The provisions, he said, were “corporate welfare” that created a “moral hazard.”[xxii]
The CEO of the American Association for Justice issued a statement saying “this
bill is terrifying because it says certain corporations can’t be held
accountable if they cause any kind of harm to others.”[xxiii]

The CSLCA directed the FAA to facilitate the development, through industry groups, of voluntary industry consensus safety standards – a process that began in 2016.[xxiv]
The process of industry-developed standards was lauded, by some, as
demonstrative of industry commitment to safety – “establishing good, effective
safety, engineering, and management standards in a voluntary industry
association is the hallmark of any reputable and mature industry.”[xxv]
In the words of one executive of a commercial spaceflight company, developing
standards for safety “really is on our shoulders, and in terms of us having a
safe place in the market, we take that seriously, we want to put our own
families on board, we take that very seriously. So we are holding ourselves to
internal standards.”[xxvi]
Yet others see the CSLAA’s mandate against government-issued safety regulations
as anathema to safety, regardless of industry’s production of (or stated
commitment to produce) safety standards – “industries that lobby for immunity
from accountability might as well hang up a sign saying they don’t trust
themselves to be safe.”[xxvii]

Parsing the Debate

While the distinct and singular policy issue of government regulation of human spaceflight safety seems “settled” for now – with the CSLCA extending the learning period through 2023, still 5 years out at the time of this writing – the debate surely is not. Indeed, the evident philosophical disagreements on risk, safety, and regulation are no more reconciled now than they were at the time of the CSLAA’s or CSLCA’s consideration. What can be made of the debate’s key points?

First is the disagreement over whether commercial human spaceflight more closely resembles a “thrill-seeking” industry or a “common carriage” industry – a distinction that connotes legal and liability statuses and establishes implicit rights and duties between the provider and the passenger.[xxviii] This difference, as debated, was framed in terms of practical analogies: an “adventurer” knowingly and willingly “signing up” for a potentially dangerous joyride, similar to a skydiver, versus a paying passenger purchasing a ride on a mode of technological transportation, similar to a traveler using commercial aviation.

Underlying these distinctions is a crucial value-judgment on the assessment and acceptance of risk. It is an element of our cultural perception of acceptable risk that in “everyday” activities, such as a vacationer boarding a major airline, one cannot be properly informed of all the risks borne and prepared to waive away rights to safety – nor should one be. Flying is, indeed, a risky activity – as is driving, or smoking, or crossing the street – but it is conferred with legitimacy through the auspice of a safety regime consisting of government oversight and regulation. Conversely, it is generally socially accepted that an individual willingly participating in a “novel” or “thrill-seeking” activity – a (sometimes infrequent) activity perceived to have unique risks and/or unique costs which distinctly attract or deter participation based on one’s opportunity-cost assessment of its value – may do so when personally aware of what is at stake. These activities are legitimized by an individual’s autonomy to make value-judgments and decisions based on their rational self-interest and cognition.

In other words – and at the crux of the issue – there is a philosophical difference on a key question: who has the legitimate authority and autonomy to make distinctions and decisions about risk assessment and management in the high-risk, high-technology activity that is commercial human spaceflight: the passenger, or the government? Similarly, who has legitimate authority in ensuring operational safety, communicating risk to passengers, and mediating conflicts between the two: the industry, or the regulator?

This
raises the second key question of the debate – who can be entrusted to frame,
bound, and design the processes and characteristics of safety in commercial
human spaceflight? At present, safety “regulation” is being attempted through
industry-led voluntary safety consensus standards. But, in our social
conception and construct of safety – viewed and understood particularly through
the lens of industrial competition, market economics, and corporate
self-interest – can the operator reasonably be expected to govern itself; is it
a legitimate safety “regulator”? Conversely, does (and can) government oversight
and regulation ensure safety and mitigate risk for a developmentally immature
industry with uncertain technologies and unknown risk propositions? Given this
uncertainty, does government regulation merely offer an “illusion” of
legitimacy?

These are, again, value-laden, ideologically-driven, culturally contextual determinations. They defy a single “correct,” objective answer. They do, however, warrant a deeper investigation of the concepts of risk, technological failure, and regulation.

On Risk

Fundamental to questions of safety and regulation – and the roles, authority, and corresponding legitimacy of certain social actors to adjudicate the interplay between them – is the concept of “risk.” Risk generally refers to the potential for an undesirable or unanticipated event, and/or a lack of knowledge of the unknown.[xxix] The calculus of risk pertains to things of value, often one’s person or property, and the possibility of losing them[xxx] – in insurance law, for example, risk refers to the chance of injury, damage, or loss of property.[xxxi] “Risk,” then, is broadly a social construct – a determination of opportunity cost subject to culture, context, perception, and communication.[xxxii]

Given that proponents of a limited safety regulatory regime for commercial human spaceflight analogize the industry to “thrill-seeking” ones, it is appropriate to consider risk in the context of “adventure.”[xxxiii] Risk is generally considered an important element of an “adventure activity,” in that it is what makes the activity worth a participant’s “time, resources, energy, and possibly even health and life;”[xxxiv] indeed, removing too much risk from adventure may subdue or negate the premise of the activity.[xxxv] The high value of an adventure experience may overcome a participant’s aversion to risk,[xxxvi] with each individual establishing and conducting their own opportunity-cost evaluation.[xxxvii] In short, risk-taking in “thrill-seeking” activities is very much a product of an individual’s desires measured against the threat of injury in satisfying those desires – a value-judgment framed by one’s personal context.

This, of course, is contingent on risk perception – a calculation of the value of participation in a high-risk activity. For activities deemed to be of very high value, an attempt to participate may be valued even at a low probability of success or a high cost of failure. Risk perception and calculus thereby “sort” participants into levels of risk and safety that are appropriate and acceptable to their own values.[xxxviii] However, risk perception is not consistent; individuals tend to perceive risk to others as different from, or inconsistent with, risk to themselves. They may under-perceive risk, leading to “unrealistic optimism,” and therefore assume more risk than they realize or anticipate in their opportunity-cost evaluation.[xxxix]
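The opportunity-cost evaluation described above can be sketched, purely as an illustration, in expected-value terms:

$$ \text{participate if} \quad p \cdot V > (1 - p) \cdot C, $$

where $p$ is the individual’s perceived probability of a successful experience, $V$ the value they place on it, and $C$ the cost of failure. “Unrealistic optimism” amounts to overestimating $p$ (or underestimating $C$), inflating the left-hand side and leading the individual to accept more risk than their evaluation admits.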

Risk perception in “thrill-seeking” activities, though skewed by individuals, can be managed by operators.[xl] This can be for the “positive” or the “negative” – with operators potentially “talking risks up” or downplaying their significance. This has consequences in the context of a spaceflight safety informed consent regime; in a litigation-oriented society such as the United States, informed consent must be written and documented with due regard to law, lest it create the challenge of litigation for negligent nondisclosure.[xli] In the view of some, a “prudent operator” of a spaceflight system would disclose both events with a high likelihood of occurring and those with a low likelihood of occurring but severe consequences.[xlii] There is a risk, however, of litigation-wary operators miring spaceflight participants in vast amounts of technical data or information when detailing the risks they are assuming – with descriptions of individual risks becoming lost as “noise” to the layperson.[xliii] A reasonable question posited is whether, “much like the fine print on a lengthy contract,” this could effectively nullify the “informed” nature of consent to participate in the risky activity.[xliv]

Moreover, the social construction of risk suggests that it is not simply a matter of individual autonomy or perception. Rather, it exists in the context of society’s interest in maintaining collective values. It can conflict with a society’s right not to be harmed as a consequence of an activity – especially one whose risk cannot be eliminated.[xlv]

This warrants the social evaluation of “acceptable” risk. The “acceptable” level of risk is a threshold below which risk will be tolerated; an “optimal” level is that at which the incremental cost of risk reduction equals the marginal reduction achieved in societal cost.[xlvi] Yet, as terms such as “acceptable” and “optimal” inherently suggest, these are value-laden and subjective determinations. Risks are calculated by meaningful probabilities; the perceived gravity of harm is a factor.[xlvii] Some activities may pose a high risk of harm, but with a low probability that such harm will occur; others may pose a low risk of harm and an equally low probability of occurrence. An acceptable risk can then be considered one whose perceived likelihood of a harmful event occurring is low, whose perceived consequence of a harmful event is slight, or whose perceived benefits are large enough that society is willing to be subjected to the risk that the event could occur.[xlviii] Accordingly, this suggests a legal – and corresponding moral – social permission of voluntary and personal risk-taking through an individual’s capacity to be informed, make a decision, and consent to potential risk.[xlix]
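These definitions admit a simple formalization – offered as an illustrative sketch rather than one drawn from the cited sources – with risk expressed as expected harm and the “optimal” level as a marginal condition:

$$ R = p \cdot h, \qquad MC_{\text{reduction}}(r^*) = MB_{\text{societal}}(r^*), $$

where $p$ is the perceived probability of a harmful event, $h$ its perceived gravity, and $r^*$ the risk level at which the incremental cost of further reduction equals the marginal societal cost thereby avoided.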

Safety – and its regulated mandate or lack thereof – can therefore be considered contextual and flexible. Something is “safe” if it is socially deemed so; if its risks are socially decided to be acceptable.[l] The commonly-posed question for governments is – “how safe is safe enough?”[li] Of course, particularly risky activities can be or become accepted by virtue of the amount of value placed on them.[lii] This poses a second, equally important question – “how much value is value enough?”

“Value enough” is, of course, another subjective determination. Examined in the context of private human spaceflight, multiple value-laden inputs to that determination – informed by ideological and philosophical positions – may be posited. Is there more value in “fostering” the economic vitality or innovative capacity of the nascent industry by limiting regulatory burden and proscription, or in ensuring a maximal amount of safety for those who are involved in flights and their operation – along with those who are not? Is there more value in trusting the autonomy of the cognizant individual to willingly make risky decisions, or in protecting the individual from potential misrepresentations or miscalculations of the burdens of risk to which they may subscribe? Is there more value in affording these risky private activities the opportunity to be carried out – even if they may risk public harm – or in restricting or outright banning them so as to ensure that the uninvolved public can enjoy its right to remain uninvolved in the actions of a non-public actor? These are deeply fundamental questions about causality, responsibility, and the role of the individual, the business, the economy, and the state in society.[liii]

Whether the consumer of human spaceflight – the future paying passenger (or crew member) – will expect or demand more safety is currently unknown. Establishing a risk proposition for commercial human spaceflight in the present, and a corresponding level of safety acceptance, thereby requires a value-based, ideologically constructed legitimization of particular actors (in this case, the industry and the individual) to carry out the presumably needed, but as yet unclear, motions of risk assessment and management. Of course, it is equally unclear whether these actors, despite their legitimization, will be able to successfully carry out the safety promises and prerogatives they hold to the level socially deemed necessary.

The debate on risk is also demonstrative of ethically-framed conceptions of risk management, with advocates for strong oversight and regulation generally subscribing to the framework of the “precautionary principle,” and those who advocate for an informed consent regime and voluntary standards subscribing to a utilitarian construct of safety.[liv] These disparate ethical perspectives on the legitimacy of risk managers have importance in the context of risk uncertainty and regulation.

On Technological Accidents, Uncertainty,
& Regulation

Risk acceptance – “safe enough” – is, as noted, flexible. Results or perceptions of risk in the present do not necessarily correspond to levels of risk in the past, nor are they indicative of future trends or circumstances.[lv] This is, in part, due to the inherent uncertainty involved in the risk ramifications of variable designs and operations of emerging high-technologies such as commercial human spacecraft. Uncertainty poses a particular challenge for commercial human spaceflight, as the evidence for risk of harm remains inconclusive while data is collected even as some measure of regulation ensuring public safety is promulgated.[lvi] Nonetheless, imputing defined risk to objects increases a sense of control and social order;[lvii] and an ability to control that risk thereby influences the degree of a risk’s social and political acceptance. In the context of how that risk is evaluated and valued – and its corresponding base level of “acceptability” – the imputation of risk on an object justifies the legitimization of particular actors to control it.

“Things” are “generally deemed risky or safe in and of themselves.”[lviii] This is evident in the discourse on safety standards for commercial human spaceflight – spacecraft are “risky,” as space is “risky.” These vehicles rely on “dangerous” methods of power and propulsion, and “effective” mechanisms to control and ensure passenger safety are undefined. In short, these machines and their inherent risk are depicted as quantifiable – in present, past, and future – by the people who govern them.[lix] The precautionary principle – as part of the broader discourse on the control of technology – favors a model of regulation that equates risk reduction through regulatory compliance with “safety.”[lx] Under this presumed paradigm, formal rules – safety standards, approved designs – ensure value-free and consistent assessment of risk and risk mitigation.

However, it has been widely noted that managing technological risk is highly context-dependent;[lxi] pure technical risk analyses at any point in time are unlikely to provide much benefit to policymakers, regulators, or society at large. The evolution of an emerging technology or system is challenging – if not impossible – to accurately predict.[lxii] This problem is compounded by systems of incredible technical and human complexity, as is the case with human-rated spacecraft operating in unfamiliar profiles and environments. Such systems can chaotically fail in unexpected ways, sometimes set off by a minor “glitch.”[lxiii]

These “disasters waiting to happen” are inadvertently built into complex systems, often the result of confusing or unanticipated interactions.[lxiv] “Normal accidents,” as they are known, are bound to occur – because complex disaster patterns can generally only be discerned in hindsight.[lxv] Accordingly, systems may be considered neither completely reliable nor safe until they have been operated through their full profile of potentiality – until considerable uncertainty has been dispelled about their operation and technical characteristics.

Even then, a machine is more than a collection of moving parts – it is a “congealed embodiment of an entire history of social assumptions, conventions, interests, and cultural practices.”[lxvi] Blame does not rest on a malfunctioning object – people are principally responsible for accidents; “normal accidents” are not the fault of a machine – they are the fault of people creating, and operating in and around, complex systems without foresight of potential fault-trees or externalities. As such, responsibility for safety – and for blame – is political and cultural, not inherently technical. Risk and accidents are the manifestation of processes and cultures that keep actors unaware of a system’s or decision’s full complexity, and unaware of all the ways in which a complex system can fail.[lxvii]

The critical interrelationship of people and machines in risk and safety effectively suggests that regulation is not a process of governing technologies, but rather of governing people. Technological practices are incapable of being governed by “rules,” because compliance is ultimately an issue of human judgment – interpretation – by the regulator and the regulated.[lxviii] In essence, risk assessment and mitigation in the regulatory context can be considered subjective and bounded by its practitioners’ perspectives, knowledge, and constraints – “as much art as science.”[lxix]

Equally important is the relationship between the regulator and the regulated. Limited in technical expertise and resources, regulators often cannot be closely involved in many of the tests of technological assessment. Rather, they certify and oversee the representatives – usually, technical experts and insiders from industry with unique knowledge of novel systems and technologies – who can be. This “second-order” regulation manifests from a need to make complex judgments in an environment where rules, as noted, are “interpretively flexible.”[lxx] In essence, the organizations producing high-risk technologies often play active roles in their own regulation, even under a government-mandated and overseen regulatory regime. The regulator becomes a perceived “virtuous witness,” who can attest to and presumably ensure the virtue and validity of these expert secondaries.[lxxi]

This
simple reality – be it distinct “regulatory capture” or mere necessity given
the techno-social complexities of high-technology systems – belies a key public
perception, manifest in political and social discourse, of the “regulator” as
an “independent expert… and disinterested arbiter of objective facts.”[lxxii]
Of course, the regulator may still be seen as playing up the part – regulation being “performative as well as functional” – for it is “better to speak grandly of a rigorous method enforced by disciplinary peers, canceling the biases of the knower and leading ineluctably to valid conclusions.”[lxxiii]
Nonetheless, if rules and numbers convey legitimacy in that they constrain
action or design – limiting discretion when credibility is “suspect” (as,
looking at the discourse, the credibility of the industry to regulate itself
is) – then the suggestion that rules may be non-constrictive, subject to the
regulated expert’s interpretation and subjective flexibility, indicates that
the core issue of credibility is not, and perhaps cannot be, resolved.[lxxiv]

What is all of this to suggest? Principally, that a belief that public regulatory institutions and their processes have inherent, meaningful efficacy is based on subjective perceptions, value-based assumptions, and ideological conceptions of the role of public bodies in overseeing and governing private activities. The notion that a public regulator can and should act as a credible “referee,” verifying and validating the practices of self-interested private actors for the benefit of the public good, is demonstrated in the debate over commercial human spaceflight safety standards and held as a close normative expectation – even if it is not borne out by reality.

Nor is this affirmative belief in public regulatory institutions premised on clearly objective metrics in the context of commercial human spaceflight. The critical literature on safety in highly complex, high-risk, high-technology systems and technologies indicates key points: that, in the absence of operational experience, it is near-impossible to effectively predict, and thereby regulate, the evolution of emerging high-technologies such as spacecraft; that, for highly complex technologies, the inherent interplay of human-machine systems makes an unanticipated or unpredicted accident or incident probable, if not inevitable; and that regulatory institutions often rely on “second-order” regulation, effectively subsuming the regulated industry’s expert and niche understandings, perceptions, and perspectives in the process of establishing, enforcing, and verifying regulations.

This
is to say that the legitimization of a public institution to regulate emerging
technologies is “illusory.” A public regulator issuing safety regulations is
not, inherently and naturally, more effective or “safer” than an industry group
producing voluntary consensus standards. Equally so, an industry group
producing voluntary consensus standards is not, inherently and naturally, less
effective or “riskier” than a public regulator. All is contingent not on the “public or private nature” of the group or institution, but on its processes, its politics, and its culture – on whether it can or cannot navigate systemic complexity in order to identify, assess, manage, and mitigate all possibilities of system failure and all types of risk.

Accordingly, this legitimization is foundationally premised on socially and ideologically constructed expectations and concerns about the role of the private actor, vis-à-vis the public actor, in the trade-space of the public good. Can private actors be trusted to uphold public safety and strive for protection of the public good? Can public institutions properly arbitrate risk and safety impartially, objectively, without influence or capture by externalities? Ultimately, this debate over the legitimization of public regulators is – as with risk – a debate over politics and worldview, of trust or suspicion of individual autonomy versus the collective right, the private actor versus the public sector, which in turn shapes understandings of concepts such as safety, the public good, precaution, and utility.

Concluding Observations

As noted in the introduction, this essay has not attempted to arrive at policy prescriptions or answers regarding the complicated debate over commercial human spaceflight safety standards. Those are better suited for media such as op-ed pages and floor statements. Rather, it has explored how socially-constructed perceptions and ideologies enable and force the legitimization of certain actors to oversee value-laden concepts such as risk, safety, and regulation.

This analysis is important in the context of the politically-charged debate over commercial spaceflight safety, and more broadly in debates over the proper evaluation and management of risk and safety for emerging, risky high-technologies. Though politically convenient, digestible, and narratively resonant, the debate is not simply and merely about “self-interested capitalists against accountable public safety;” “the right of thrill-seekers and adventurers against the heavy hand of a nanny-state;” “innovation and business against stifling regulation;” “money against people.” Rather, it is about fundamental worldviews and conceptions of the individual, the economy, the state, and the public. These worldviews are correct or incorrect depending on perspective; as social constructs contingent on culture and context, they are relative. Most importantly, they are starkly demonstrative of how disparate and contradictory prognoses for the use and limits of technology may be among different, though equally valid, perspectives.

Works
Cited

[ii] Molly Macauley (2005), “Flying
in the Face of Uncertainty: Human Risk in Space Activities,” in 6 Chicago Journal of International Law 1.

[iii] Michael Elliott Leybovich, “A
Technoregulatory Analysis of Government Regulation and Oversight in the United
States for the Protection of Passenger Safety in Commercial Human Spaceflight,”
Massachusetts Institute of Technology,
February 2009. Pg. 76.

[iv] Rebecca Anderson and Michael
Peacock, “Ansari X-Prize: A Brief History and Background,” NASA, February 2010.

[lxx] T. Pinch & W. Bijker (1984), “The Social Construction of Facts and Artifacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other,” in 14 Social Studies of Science.

[lxxi] J. Downer (2010), “Trust and Technology: The Social Foundations of Aviation Regulation,” in 61 The British Journal of Sociology 1. Pg. 95.