Posts

[Editor’s Note: Mad Scientist Laboratory is pleased to present the first of two guest blog posts by Dr. Nir Buras. In today’s post, he makes the compelling case for the establishment of man-machine rules. Given the vast technological leaps we’ve made during the past two centuries (with associated societal disruptions), and the potential game changing technological innovations predicted through the middle of this century, we would do well to consider Dr. Buras’ recommended list of nine rules — developed for applicability to all technologies, from humankind’s first Paleolithic hand axe to the future’s much predicted, but long-awaited strong Artificial Intelligence (AI).]

Two hundred years of massive collateral impacts from technology have brought to the forefront of society’s consciousness the idea that some sort of rules for man-machine interaction is necessary, similar to the rules in place for gun safety, nuclear power, and biological agents. But where the physical effects of those technologies are plain to see, the power of computing is veiled in virtuality and anthropomorphization. It appears harmless, even familiar, and it often wears a virtuous appearance.

Avid mathematician Ada Augusta Lovelace is often called the first computer programmer

Computing originated in the punched cards of Jacquard looms early in the 19th century. Today it carries the promise of a cloud of electrons from which we weave our Emperor’s New Clothes. As far back as 1842, the brilliant mathematician Ada Augusta, Countess of Lovelace (1815-1852), foresaw the potential of computers. A protégée and associate of Charles Babbage (1791-1871), conceptual originator of the programmable digital computer, she grasped the “almost incalculable” ultimate potential of his Analytical Engine. She also recognized that, as in all extensions of human power or knowledge, “collateral influences” occur.1

AI presents us with such “collateral influences.”2 The question is not whether machine systems can mimic human abilities and nature, but when. Will the world become dependent on ungoverned algorithms?3 Should there be limits to mankind’s connection to machines? As concerns mount, well-meaning politicians, government officials, and some in the field are trying to forge ethical guidelines to address the collateral challenges of data use, robotics, and AI.4

A Hippocratic Oath of AI?

This cover of Asimov’s I, Robot illustrates the story “Runaround”, the first to list all Three Laws of Robotics.

Asimov’s Three Laws of Robotics are merely a literary device animating his storylines.5 In the real world, Apple, Amazon, Facebook, Google, DeepMind, IBM, and Microsoft founded the Partnership on AI (www.partnershiponai.org)6 to ensure “… the safety and trustworthiness of AI technologies, the fairness and transparency of systems.” Data scientists from tech companies, governments, and nonprofits have gathered to draft a voluntary digital charter for their profession.7 Oren Etzioni, CEO of the Allen Institute for AI and a professor in the University of Washington’s Computer Science Department, has proposed a Hippocratic Oath for AI.

But such codes are composed of hard-to-enforce terms and vague goals, such as using AI “responsibly and ethically, with the aim of reducing bias and discrimination.” They pay lip service to privacy and human priority over machines. They appear to sugarcoat a culture which passes the buck to the lowliest Soldier.8

We know that good intentions are inadequate when enforcing confidentiality. Well-meant but unenforceable ideas don’t meet business standards. It is unlikely that techies and their bosses, caught up in the magic of coding, will shepherd society through the challenges of the petabyte AI world.9 Vague principles, underwriting a non-binding code, cannot counter the cynical drive for profit.10

Indeed, in an area that lacks authorities or legislation to enforce rules, the Association for Computing Machinery (ACM) is itself backpedaling from its own Code of Ethics and Professional Conduct. The document only weakly defines notions of “public good” and “prioritizing the least advantaged.”11 Microsoft’s President Brad Smith admits that his company wouldn’t expect customers of its services to meet even these standards.

In the wake of the Cambridge Analytica scandal, it is clear that coders are not morally superior to other people and that voluntary, unenforceable Codes and Oaths are inadequate.12 Programming and algorithms clearly reflect ethical, philosophical, and moral positions.13 It is false to assume that the so-called “openness” trait of programmers reflects a broad mindfulness. There is nothing heroic about “disruption for disruption’s sake” or hiding behind “black box computing.”14 The future cannot be left up to an adolescent-centric culture in an economic system that rests on greed.15 The society that adopts “Electronic personhood” deserves it.

Machines are Machines, People are People

After 200 years of the technology tail wagging the humanity dog, it is apparent now that we are replaying history – and don’t know it. Most human cultures have been intensively engaged with technology since before the Iron Age 3,000 years ago. We have been keenly aware of technology’s collateral effects mostly since the Industrial Revolution, but have not yet created general rules for how we want machines to impact individuals and society. The blurring of reality and virtuality that AI brings to the table might prompt us to do so.

Distinctions between the real and the virtual must be maintained if the behavior of even the most sophisticated computation machines and robots is to be captured by legal systems. Nothing in the virtual world should be considered real, any more than we believe the hallucinations of a drunk or drugged person to be real.

The simplest way to maintain the distinction is to remember that the real IS, the virtual ISN’T, and that virtual mimesis is produced by machines. Lovelace reminded us that machines are just machines. Giving machines personhood might, in a dark and distant future, lead to the collapse of humanity; Harari’s Homo Deus warns us that AI, robotics, and automation are quickly driving the economic value of humans to zero.16

From the start of civilization, tools and machines have been used to reduce human drudge labor and increase production efficiency. But while tools and machines obviate physical aspects of human work in the context of the production of goods or processing information, they in no way affect the truth of humans as sentient and emotional living beings, nor the value of transactions among them.

Microsoft’s Tay AI Chatter Bot

The man-machine line is further blurred by our anthropomorphizing of machinery, computing, and programming. We speak of machines in terms of human traits and make programming analogous to human behavior. But there is nothing amusing about GIGO experiments like MIT’s psychotic bot Norman or Microsoft’s fascist Tay.17 Technologists who fall into the trap of believing that AI systems can make decisions are like children playing with dolls, marveling that “their dolly is speaking.”

Machines don’t make decisions. Humans do. Humans may accept suggestions made by machines, and when they do, they are responsible for the resulting decisions. People are and must be held accountable, especially those hiding behind machines. The Holocaust taught us that one can never say, “I was just following orders.”

Nothing less than enforceable operational rules is required for any technical activity, including programming. It is especially important for tech companies, since evidence suggests that they take ethical questions to heart only under direct threats to their balance sheets.18

When virtuality offers experiences that humans perceive as real, the outcomes are the responsibility of the creators and distributors, no less than tobacco companies selling cigarettes, or pharmaceutical companies and cartels selling addictive drugs. Individuals do not have the right to risk the well-being of others to satisfy their need to comply with clichés such as “innovation” and “disruption.”

Nuclear, chemical, biological, gun, aviation, machine, and automobile safety rules do not rely on human nature. They are based on technical rules and procedures. They are enforceable and moral responsibility is typically carried by the hierarchies of their organizations.19

As we master artificial intelligence, human intelligence must take charge.20 The highest values known to mankind remain human life and the qualities and quantities necessary for the best individual life experience.21 For the transactions and transformations in which technology assists, we need simple operational rules regulating the actions and manners of individuals. Moving the focus to human interactions empowers both individuals and society.

Man-Machine Rules

Man-Machine rules should address any tool or machine ever made or to be made. They would be equally applicable to any technology of any period, from the first flaked stone, to the ultimate predictive “emotion machines.” They would be adjudicated by common law.22

1. All material transformations and human transactions are to be conducted by humans.

2. Humans may directly employ hand/desktop/workstation devices in the above.

3. At all times, an individual human is responsible for the activity of any machine or program.

4. Responsibility for errors, omissions, negligence, mischief, or criminal-like activity is shared by every person in the organizational hierarchical chain, from the lowliest coder or operator, to the CEO of the organization, and its last shareholder.

5. Any person can shut off any machine at any time.

6. All computing is visible to anyone [No Black Box].

7. Personal Data are things. They belong to the individual who owns them, and any use of them by a third-party requires permission and compensation.

8. Technology must age before common use, until an Appropriate Technology is selected.

9. Disputes must be adjudicated according to Common Law.

Machines are here to help and advise humans, not replace them, and humans may exhibit a spectrum of responses to them. Some may ignore a robot’s advice and put others at risk. Some may follow recommendations to the point of becoming a zombie. But either way, Man-Machine Rules are based on and meant to support free, individual human choices.
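Several of the rules above (3, 5, and 6 in particular) translate naturally into software design patterns. The following minimal Python sketch is purely illustrative — every name in it is invented, and it is one possible reading of the rules, not a prescribed implementation. It wraps machine actions so that a named human is responsible for every operation, the activity log is openly readable, and anyone can stop the machine at any time:

```python
import threading
from datetime import datetime, timezone

class AccountableMachine:
    """Hypothetical wrapper illustrating Rules 3, 5, and 6:
    a named human is responsible for every action (Rule 3),
    any person can shut the machine off at any time (Rule 5),
    and the full activity log is visible (Rule 6, no black box)."""

    def __init__(self):
        self._stopped = threading.Event()
        self.log = []  # openly readable activity record

    def stop(self):
        """Rule 5: any person can shut off the machine at any time."""
        self._stopped.set()

    def act(self, action, responsible_human):
        """Rule 3: every action must name a responsible human."""
        if not responsible_human:
            raise ValueError("No action without a responsible human (Rule 3)")
        if self._stopped.is_set():
            return None  # machine has been shut off
        result = action()
        self.log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "responsible": responsible_human,
            "action": getattr(action, "__name__", repr(action)),
            "result": result,
        })
        return result

machine = AccountableMachine()
print(machine.act(lambda: 2 + 2, responsible_human="J. Smith"))  # 4
machine.stop()
print(machine.act(lambda: 2 + 2, responsible_human="J. Smith"))  # None
```

The point of the sketch is that accountability (Rule 4) becomes auditable: the log records who was responsible for each machine action, all the way up the organizational chain.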

Man-Machine Rules can help organize dialog around questions such as how to secure personal data. Do we need hardcopy and analog formats? How ethical are chips embedded in people and in their belongings? What degrees of, and controls on, personal freedom and personal risk are conceivable? Will consumer rights and government organizations audit algorithms?23 Would equipment sabbaticals be enacted for societal and economic balance?

The idea that we can fix the tech world through a voluntary ethical code emerging from within it paradoxically expects the people who created the problems to fix them.24 The question is not whether the focus should shift to human interactions, which leaves more humans in touch with their destiny. The question is: at what cost? If not now, when? If not by us, by whom?

Nir Buras is a PhD architect and planner with over 30 years of in-depth experience in strategic planning, architecture, and transportation design, as well as teaching and lecturing. His planning, design and construction experience includes East Side Access at Grand Central Terminal, New York; International Terminal D, Dallas-Fort-Worth; the Washington DC Dulles Metro line; work on the US Capitol and the Senate and House Office Buildings in Washington. Projects he has worked on have been published in the New York Times, the Washington Post, local newspapers, and trade magazines. Buras, whose original degree was Architect and Town Planner, learned his first lesson in urbanism while planning military bases in the Negev Desert in Israel. Engaged in numerous projects since then, Buras has watched first-hand how urban planning impacted architecture. After the last decade of applying in practice the classical method that Buras learned in post-doctoral studies, his book, The Art of Classic Planning (Harvard University Press, 2019), presents the urban design and planning method of Classic Planning as a path forward for homeostatic, durable urbanism.

1 Lovelace, Ada Augusta, Countess, Sketch of The Analytical Engine Invented by Charles Babbage by L. F. Menabrea of Turin, Officer of the Military Engineers, With notes upon the Memoir by the Translator, Bibliothèque Universelle de Genève, October, 1842, No. 82.

5 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Asimov, Isaac, Runaround, in I, Robot, The Isaac Asimov Collection ed., Doubleday, New York City, p. 40.

17 That Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study in the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms, is not an excuse. The AI Twitter bot Tay had to be deleted after it started making sexual references and declarations such as “Hitler did nothing wrong.”

19 See the example of Dr. Kerstin Dautenhahn, Research Professor of Artificial Intelligence in the School of Computer Science at the University of Hertfordshire, who claims no responsibility in determining the application of the work she creates. She might as well be feeding children shards of glass saying, “It is their choice to eat it or not.” In Middleton, 2017. The principle is that the risk of an unfavorable outcome lies with an individual as well as the entire chain of command, direction, and or ownership of their organization, including shareholders of public companies and citizens of states. Everybody has responsibility the moment they engage in anything that could affect others. Regulatory “sandboxes” for AI developer experiments – equivalent to pathogen or nuclear labs – should have the same types of controls and restrictions. Dellot, 2017.

21 Sentience and sensibilities of other beings is recognized here, but not addressed.

22 The proposed rules may be appended to the International Covenant on Economic, Social and Cultural Rights (ICESCR, 1976), part of the International Bill of Human Rights, which include the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR). International Covenant on Economic, Social and Cultural Rights, www.refworld.org.; EISIL International Covenant on Economic, Social and Cultural Rights, www.eisil.org; UN Treaty Collection: International Covenant on Economic, Social and Cultural Rights, UN. 3 January 1976; Fact Sheet No.2 (Rev.1), The International Bill of Human Rights, UN OHCHR. June 1996.

[Editor’s Note: Dr. Giordano’s and CAPT Bremseth’s post is especially relevant, given the publication earlier this month of TRADOC Pamphlet 525-3-1, U.S. Army in Multi-Domain Operations 2028, and its solution to the “problem of layered standoff,” namely “the rapid and continuous integration of all domains of warfare to deter and prevail as we compete short of armed conflict; penetrate and dis-integrate enemy anti-access and area denial systems; exploit the resulting freedom of maneuver to defeat enemy systems, formations and objectives and to achieve our own strategic objectives; and consolidate gains to force a return to competition on terms more favorable to the U.S., our allies and partners.”]

“Victorious warriors seek to win first then go to war, while defeated warriors go to war first then seek to win.” — Sun Tzu

Non-kinetic Engagements

Political and military actions directed at adversely impacting or defeating an opponent often entail clandestine operations, which span a spectrum ranging from overt warfare to subtle “engagements.” Routinely, the United States, along with its allies (and adversaries), has employed clandestine tactics and operations across the kinetic and non-kinetic domains of warfare. Arguably, clandestine kinetic operations are employed more readily because these activities typically occur after the initiation of conflict (i.e., “Right of Bang”), and their effects can be observed (to various degrees) and/or measured. Because clandestine non-kinetic activities are less visible and more insidious, they may be particularly (or more) effective, as they often go unrecognized and occur “Left of Bang.” Other nations, especially adversaries, understand the relative economy of force that non-kinetic engagements enable and are increasingly focused upon developing and articulating advanced methods for such operations.

Much has been written about the fog of war. Non-kinetic engagements can create unique uncertainties prior to, and/or outside of, traditional warfare precisely because, unlike blatant acts of war, they have qualitatively and quantitatively “fuzzy boundaries.” The “intentionally induced ambiguity” of non-kinetic engagements can establish plus-sum advantages for the executor(s) and zero-sum dilemmas for the target(s). For example, a limited-scale non-kinetic action that exerts demonstrably significant effects but does not meet defined criteria for an act of war places the targeted recipient(s) at a disadvantage: first, the criteria for response (and proportionality) are vague, so any response could be seen as questionable; and second, if the targeted recipient(s) respond with bellicose action(s), there is considerable likelihood that they will be viewed as (or provoked to be) the aggressor(s), and therefore susceptible to some form of retribution that may be regarded as sanctionable.
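The target's zero-sum dilemma can be made concrete with a toy game-theoretic sketch. The payoff numbers below are invented purely for illustration (the source gives none); the structure is what matters: every available response leaves the target with a negative payoff, while the initiator gains either way.

```python
# Hypothetical payoffs (initiator, target) for a sub-threshold
# non-kinetic action. All numbers are illustrative only.
payoffs = {
    # target's choice: (initiator_payoff, target_payoff)
    "absorb_quietly":  (2, -2),  # action succeeds unanswered
    "respond_in_kind": (1, -1),  # escalation risk, ambiguous attribution
    "respond_kinetic": (3, -4),  # target cast as aggressor; initiator gains cover
}

# The target's least-bad option still carries a loss.
best_for_target = max(payoffs, key=lambda choice: payoffs[choice][1])
print(best_for_target)                          # respond_in_kind
print(all(t < 0 for _, t in payoffs.values()))  # True: every option loses
```

However the numbers are chosen, as long as a kinetic response invites sanctionable retribution and quiet absorption rewards the initiator, the target is choosing among losses — which is the "intentionally induced ambiguity" the text describes.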

Nominally, non-kinetic engagements often utilize non-military means to expand the effect-space beyond the conventional battlefield. The Department of Defense and Joint Staff do not have a well agreed-upon lexicon with which to define and express the full spectrum of current and potential activities that constitute non-kinetic engagements. It is unfamiliar – and can be politically uncomfortable – to use non-military terms and means to describe non-kinetic engagements. As previously noted, it can be politically difficult – if not precarious – to militarily define and respond to non-kinetic activities.

Non-kinetic engagements are best employed to incur disruptive effects in and across various dimensions of effect (e.g., biological, psychological, social) that can lead to intermediate to long-term destructive manifestations (in a number of possible domains, ranging from the economic to the geo-political). The latent disruptive and destructive effects should be framed and regarded as “Grand Strategy” approaches that evoke outcomes in a “long engagement/long war” context rather than merely in more short-term tactical situations.1

Thus, non-kinetic operations must be seen and regarded as “tools of mass disruption,” incurring “rippling results” that can evoke both direct and indirect de-stabilizing effects. These effects can occur and spread: 1) from the cellular (e.g., affecting physiological function of a targeted individual) to the socio-political scales (e.g., to manifest effects in response to threats, burdens and harms incurred by individual and/or groups); and 2) from the personal (e.g., affecting a specific individual or particular group of individuals) to the public dimensions in effect and outcome (e.g., by incurring broad scale reactions and responses to key non-kinetic events).2

Given the increasing global stature, capabilities, and postures of Asian nations, it becomes increasingly important to pay attention to aspects of classical Eastern thought (e.g., Sun Tzu) relevant to bellicose engagement. Of equal importance is the recognition of various nations’ dedicated enterprises in developing methods of non-kinetic operations (e.g., China; Russia), and the understanding that such endeavors may not comport with the ethical systems, principles, and restrictions adhered to by the United States and its allies.3,4 These differing ethical standards and practices, if and when coupled to states’ highly centralized abilities to coordinate and synchronize the activity of the so-called “triple helix” of government, academia, and the commercial sector, can create synergistic, force-multiplying effects to mobilize resources and services that can be non-kinetically engaged.5 Thus, these states can target and exploit the seams and vulnerabilities of other nations that do not have similarly aligned, multi-domain coordinating capabilities.

Emerging Technologies – as Threats

Increasingly, emerging technologies are being leveraged as threats for such non-kinetic engagements. While the threat of radiological, nuclear, and (high yield) explosive technologies have been and remain generally well surveilled and controlled to date, new and convergent innovations in the chemical, biological, cyber sciences, and engineering are yielding tools and methods that currently are not completely, or effectively addressed. An overview of these emerging technologies is provided in Table 1 below.

Table 1

Of key interest are the present viability and potential value of the brain sciences engaged in these ways.6,7,8 The brain sciences entail and obtain new technologies that can be applied to affect chemical and biological systems in kinetic ways (e.g., chemical and biological ‘warfare’ conducted so as to sidestep definition – and governance – by existing treaties and conventions such as the Biological Toxins and Weapons Convention (BTWC) and the Chemical Weapons Convention (CWC)), and/or in non-kinetic ways (which fall outside of, and therefore are not explicitly constrained by, the scope and auspices of the BTWC or CWC).9,10

As recent incidents (e.g., “Havana Syndrome”; the use of novichok; the infiltration of foreign-produced synthetic opioids into US markets) have demonstrated, the brain sciences and technologies have utility to affect “minds and hearts” in kinetic and non-kinetic ways that elicit biological, psychological, socio-economic, and political effects, which can be clandestine, covert, or attributional, and which evoke multi-dimensional ripple effects in particular contexts (as previously discussed). Moreover, apropos of current events, the use of gene editing technologies and techniques to modify existing microorganisms11, and/or to selectively alter human susceptibility to disease12, reveals the ongoing and iterative multi-national interest in, and the considered weaponizable use(s) of, emerging biotechnologies as instruments to incur “precision pathologies” and the “immaculate destruction” of selected targets.

Toward Address, Mitigation, and Prevention

Without philosophical understanding of and technical insight into the ways that non-kinetic engagements entail and affect civilian, political, and military domains, the coordinated assessment and response to any such engagement(s) becomes procedurally complicated and politically difficult. Therefore, we advocate and propose increasingly dedicated efforts to enable sustained, successful surveillance, assessment, mitigation, and prevention of the development and use of Emerging Technologies as Threats (ETT) to national security. We posit that implementing these goals will require coordinated focal activities to: 1) increase awareness of emerging technologies that can be utilized as non-kinetic threats; 2) quantify the likelihood and extent of threat(s) posed; 3) counter identified threats; and 4) prevent or delay adversarial development of future threats.

Further, we opine that a coordinated enterprise of this magnitude will necessitate a Whole of Nations approach so as to mobilize the organizations, resources, and personnel required to meet other nations’ synergistic triple helix capabilities to develop and non-kinetically engage ETT.

Utilizing this approach will necessitate establishment of:

1. An office (or network of offices) to coordinate academic and governmental research centers to study and to evaluate current and near-future non-kinetic threats.

2. Methods to qualitatively and quantitatively identify threats and the potential timeline and extent of their development.

3. A variety of means for protecting the United States and allied interests from these emerging threats.

4. Computational approaches to create and to support analytic assessments of threats across a wide range of emerging technologies that are leverageable and afford purchase in non-kinetic engagements.
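In the simplest case, the second of these focal activities — qualitatively and quantitatively identifying threats and their extent — could be operationalized as a risk-matrix calculation. The sketch below is purely hypothetical: the threat categories, likelihoods, and impact weights are all invented for illustration, and real assessments would involve far richer models.

```python
# Hypothetical emerging-technology threat entries:
# (likelihood, 0-1) and (impact, 1-5). Values invented for illustration.
threats = {
    "gene-editing misuse":        (0.3, 5),
    "algorithmic disinformation": (0.8, 3),
    "novel synthetic opioids":    (0.5, 4),
}

def risk_score(likelihood, impact):
    """Simple expected-severity score: likelihood times impact."""
    return likelihood * impact

# Rank notional threats by descending risk score.
ranked = sorted(threats.items(),
                key=lambda kv: risk_score(*kv[1]),
                reverse=True)
for name, (p, i) in ranked:
    print(f"{name}: {risk_score(p, i):.1f}")
```

Even this crude scoring illustrates the point of step 2: a high-impact but low-likelihood threat can rank below a moderate-impact threat that is far more likely, which is exactly the kind of prioritization a coordinated surveillance office would need to defend.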

In light of other nations’ activities in this domain, we view the non-kinetic deployment of emerging technologies as a clear, present, and viable future threat. Therefore, as we have stated in the past,13,14,15 and unapologetically reiterate here, it is not a question of if such methods will be utilized, but rather of when, to what extent, and by which group(s), and, most importantly, whether the United States and its allies will be prepared for these threats when they are rendered.

If you enjoyed reading this post, please also see Dr. Giordano’s presentations addressing:

Mad Scientist James Giordano, PhD, is Professor of Neurology and Biochemistry, Chief of the Neuroethics Studies Program, and Co-Director of the O’Neill-Pellegrino Program in Brain Science and Global Law and Policy at Georgetown University Medical Center. He also currently serves as Senior Biosciences and Biotechnology Advisor for CSCI, Springfield, VA, and has served as Senior Science Advisory Fellow of the Strategic Multilayer Assessment Group of the Joint Staff of the Pentagon.

R. Bremseth, CAPT, USN SEAL (Ret.), is Senior Special Operations Forces Advisor for CSCI, Springfield, VA. A 29+ years veteran of the US Navy, he commanded SEAL Team EIGHT, Naval Special Warfare GROUP THREE, and completed numerous overseas assignments. He also served as Deputy Director, Operations Integration Group, for the Department of the Navy.

This blog is adapted with permission from a whitepaper by the authors submitted to the Strategic Multilayer Assessment Group/Joint Staff Pentagon, and from a manuscript currently in review at HDIAC Journal. The opinions expressed in this piece are those of the authors, and do not necessarily reflect those of the United States Department of Defense, and/or the organizations with which the authors are involved.

5 Etzkowitz H, Leydesdorff L. The dynamics of innovation: From national systems and “Mode 2” to a Triple Helix of university-industry-government relations. Research Policy, 29: 109-123 (2000).

6 Forsythe C, Giordano J. On the need for neurotechnology in the national intelligence and defense agenda: Scope and trajectory. Synesis: A Journal of Science, Technology, Ethics and Policy 2(1): T5-8 (2011).

[Editor’s Note: As addressed in last week’s post, entitled The Human Targeting Solution: An AI Story, the incorporation of Artificial Intelligence (AI) as a warfighting capability has the potential to revolutionize combat, accelerating the future fight to machine speeds. That said, the advanced algorithms underpinning these AI combat multipliers remain dependent on the accuracy and currency of their data feeds. In the aforementioned post, the protagonist’s challenge in overriding the AI-prescribed optimal (yet flawed) targeting solution illustrates the inherent tension between human critical thinking and the benefits of AI.

The future character of war will be influenced by emerging technologies such as AI, robotics, computing, and synthetic biology. Cutting-edge technologies will become increasingly cheaper and readily available, introducing a wider range of actors on the battlefield. Moreover, nation-state actors are no longer the drivers of cutting-edge technology — militaries are leveraging the private sector, which is leading research and development in emergent technologies. Proliferation of these cheap, accessible technologies will allow both peer competitors and non-state actors to pose serious threats in the future operational environment. Due to the abundance of new players on the battlefield combined with emerging technologies, future conflicts will be won by those who both possess “critical thinking” skills and can integrate technology seamlessly to inform decision-making in war instead of relying on technology to win war. Achieving success in the future eras of accelerated human progress and contested equality will require the U.S. Army to develop Soldiers who are adept at seamlessly employing technology on the battlefield while continuously exercising critical thinking skills.

The Foundation for Critical Thinking defines critical thinking as “the art of analyzing and evaluating thinking with a view to improve it.”1 Furthermore, they assert that a well cultivated critical thinker can do the following: raise vital questions and problems and formulate them clearly and precisely; gather and assess relevant information, using abstract ideas to interpret it effectively; come to well-reasoned conclusions and solutions, testing them against relevant criteria and standards; think open-mindedly within alternative systems of thought, recognizing and assessing, as needed, their assumptions, implications, and practical consequences; and communicate effectively with others in figuring out solutions to complex problems.2

Many experts in education and psychology argue that critical thinking skills are declining. In 2017, Dr. Stephen Camarata wrote about the emerging crisis in critical thinking and college students’ struggles to tackle real-world problem solving. He emphasized the essential need for critical thinking and asserted that “a young adult whose brain has been ‘wired’ to be innovative, think critically, and problem solve is at a tremendous competitive advantage in today’s increasingly complex and competitive world.”3 Although most government agencies, policy makers, and businesses deem critical thinking important, STEM fields continue to be prioritized. However, if creative thinking skills are not fused with STEM, then the ranks of those equipped with well-rounded critical thinking abilities will continue to decline. In 2017, Mark Cuban opined during an interview with Bloomberg TV that the nature of work is changing and that the future skill most in demand will be “creative thinking.” Specifically, he stated, “I personally think there’s going to be a greater demand in 10 years for liberal arts majors than there were for programming majors and maybe even engineering.”4 Additionally, Forbes magazine published an article in 2018 declaring that “creativity is the skill of the future.”5

Employing future technologies effectively will be key to winning war, but it is only one aspect. During the Vietnam War, the U.S. relied heavily on technology but was defeated by an enemy who leveraged simple guerrilla tactics combined with minimal military technology. Emerging technologies will be vital to informing decision-making, but they will not negate battlefield friction. Carl von Clausewitz observed that although everything in war is simple, the simplest things become difficult, and these difficulties accumulate to create friction.6 Historically, a lack of information caused friction and uncertainty. Today, however, complexity is a driver of friction, and it will heavily influence future warfare. Complex, high-tech weapon systems will dominate the future battlefield and create added friction. Interdependent systems linking communications and warfighting functions will introduce still more friction, which will require highly skilled thinkers to navigate.

The newly published U.S. Army in Multi-Domain Operations 2028 concept “describes how Army forces fight across all domains, the electromagnetic spectrum (EMS), and the information environment and at echelon”7 to “enable the Joint Force to compete with China and Russia below armed conflict, penetrate and dis-integrate their anti-access and area denial systems and ultimately defeat them in armed conflict and consolidate gains, and then return to competition.”8 Even with technological advances and intelligence improvements, elements of friction will be present in future wars. Both great armies and asymmetric threats have vulnerabilities: small points of friction can morph into larger issues capable of crippling a fighting force. Therefore, success in future war depends on military commanders who understand these elements and how to overcome friction. Future technologies must be fused with critical thinking to mitigate friction and achieve strategic success. The U.S. Army must simultaneously emphasize integrating critical thinking into doctrine and exercises when training Soldiers on new technologies.

Soldiers should be creative, innovative thinkers; the Army must foster critical thinking as an essential skill. The Insight Assessment emphasizes that “weakness in critical thinking skill results in loss of opportunities, of financial resources, of relationships, and even loss of life. There is probably no other attribute more worthy of measure than critical thinking skills.”9 Gaining and maintaining competitive advantage over adversaries in a complex, fluid future operational environment requires Soldiers to be both skilled in technology and expert in critical thinking.

MAJ Cynthia Dehne is in the U.S. Army Reserve, assigned to the TRADOC G-2, and has operational experience in Afghanistan, Iraq, Kuwait, and Qatar. She is a graduate of the U.S. Army Command and General Staff College and holds master’s degrees in International Relations and in Diplomacy and International Commerce.

The image of “space war” is ubiquitous in popular Cold War and contemporary renderings: fast attack fighters equipped with laser cannons, swooping in to engage the enemy fleet in an outer space dogfight, culminating with the cataclysmic explosion of the enemy’s dreadnought. The use of directed energy in this scenario, while making for good entertainment, is a far cry from the practical applications of directed energy in space out to 2050. Taking a step back from the thrilling future possibilities of space combat, it is important to note that it is not a question of when lasers will be put into space — they already have been. What is uncertain is the speed at which lasers and other forms of directed energy will be weaponized, and when these capabilities will be used to extend conflict into the physical domain of low-earth orbit and outer space.

The ICESat-2 instrument measures the difference between the polar oceans and sea ice / NASA

Since 2003, NASA has used a laser mounted on a satellite to measure ice sheets and conduct other environmental studies and mapping. This mission involved the constant emission of a green laser, split into six beams, reflecting off polar ice and returning photons to the satellite.1 NASA is presently exploring the use of lasers for communications, a technology with abundant military applications. One such program, undertaken jointly by NASA and private industry, is the use of optical, or laser, communications between space assets and ground stations on Earth. These optical transmissions have the benefit of allowing the communication of a much greater volume of data in a more secure fashion, when compared with radio communications.2 Lasers represent an increase in communications security because their narrow beams are very difficult to intercept. In contrast, radio signals are broadcast widely, and as such are more easily intercepted.3 The use of lasers allows for much more effective communications between space assets and Earth and would facilitate exploration deeper into space.

NASA tested the International Space Station’s laser communications system, linking the ISS to an observatory on Earth and allowing for the real-time transmission of high-resolution video / NASA

Artist’s impression of a laser removing orbital debris, based on NASA pictures

A study of lasers in orbit acting on other objects also in orbit, by researchers from the Information and Navigation College, Air Force Engineering University, Xi’an, China, received international attention at the beginning of 2018.4 This study demonstrated the possibility of using lasers to help remove “space junk.” This debris presents a major challenge for every space actor because particles as small as flecks of paint can cause damage to orbiting assets, given their high velocities.5 The Chinese proposal involved a space-based laser strong enough to vaporize a portion of the object’s mass, altering its flight path enough to cause it to de-orbit, resulting in its re-entry and burning up in the atmosphere. This “space broom” may be the solution the international community is looking for regarding space debris; however, it has raised some eyebrows in the scientific and defense communities. There is concern that the type and strength of this laser could present a dual-use potential for military application, such as satellite sabotage or the destruction of space assets, in the event of conflict on Earth escalating to the level of physically attacking a competitor’s assets in space.6
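The de-orbit mechanism described above lends itself to a rough back-of-envelope estimate. The sketch below uses the standard laser-ablation relation Δv = Cm · E / m, where Cm is the momentum-coupling coefficient (impulse imparted per joule delivered to the target); every numerical value here is an illustrative assumption, not a figure from the cited Chinese study.

```python
# Back-of-envelope estimate of laser-ablation de-orbiting.
# delta_v = Cm * E / m: vaporizing a little surface mass imparts an
# impulse proportional to the laser energy absorbed.
# All numbers below are illustrative assumptions, not published values.

Cm = 5e-5           # N*s per J -- typical order of magnitude for laser ablation
debris_mass = 0.1   # kg -- a small debris fragment
pulse_energy = 1e3  # J delivered to the target per laser pulse
dv_needed = 150.0   # m/s -- rough perigee-lowering change for de-orbit from LEO

dv_per_pulse = Cm * pulse_energy / debris_mass  # m/s gained per pulse
pulses = dv_needed / dv_per_pulse               # pulses required to de-orbit

print(f"{dv_per_pulse:.2f} m/s per pulse, ~{pulses:.0f} pulses to de-orbit")
# -> 0.50 m/s per pulse, ~300 pulses to de-orbit
```

Under these assumed numbers, a few hundred pulses nudge a small fragment into re-entry; the same arithmetic scaled up is what raises the dual-use concern, since a laser able to move debris can also move, or damage, a satellite.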

A “new” directed energy technology, an alternative to the optical laser in the military context, is actually an older idea: neutral particle beams. Originally researched during and after the Cold War, this technology is beginning to catch up with imagination. Michael Griffin, Under Secretary of Defense for Research and Engineering, explained that, “directed energy is more than just big lasers… that’s important. High-powered microwave approaches can affect an electronics kill. The same with the neutral particle beam systems we explored briefly in the 1990s.” What makes this kind of technology militarily attractive is that it is non-attributable: it leaves no residue, so it is impossible to determine the exact source or cause of any damage. Both optical lasers and neutral particle beams travel in straight lines, can penetrate atmospheres, and strike targets at or near the speed of light, making these technologies untraceable, invisible weapons.7

So, what will space war in 2050 really look like? Cyber war is already being waged in space, with cyber operations from around the world transmitted via satellite on a near-constant basis — this will most certainly continue. But the ramifications are only felt on Earth.

One potential space-to-earth military application for orbiting directed energy assets that is already under consideration today, and that may be operational by 2050, is the use of lasers to intercept ballistic missiles during their boost phase.8 Space war using directed energy in 2050 may also involve satellite-to-satellite communications and targeting, potentially giving states the ability to disable, damage, or destroy space assets with other satellite-mounted directed energy systems.

As more advanced technologies proliferate, new debates will open up about the acceptable usage of space. But while these discussions are ongoing, near-peer adversaries will act. Space is the new frontier for military conflict and abiding by the spirit of the Outer Space Treaty of 1967 may begin to seem limiting to technological and defense progress necessary for maintaining national security. Near-peer adversaries will continue to seek the next military advantage, and they have already begun to look for it amongst the stars.

Marie Murphy is a junior at The College of William and Mary in Virginia, studying International Relations and Arabic. She is a regular contributor to the Mad Scientist Laboratory, interned at Headquarters, U.S. Army Training and Doctrine Command (TRADOC) with the Mad Scientist Initiative during the Summer of 2018, and is currently a Research Fellow for William and Mary’s Project on International Peace and Security.

[Editor’s Note: Mad Scientist Laboratory is pleased to present the following post by guest blogger CW3 Jesse R. Crifasi, envisioning a combat scenario in the not too distant future, teeing up the twin challenges facing the U.S Army in incorporating Artificial Intelligence (AI) across the force — “human-in-the-loop” versus “human-out-of-the-loop” and trust. In it, CW3 Crifasi describes the inherent tension between human critical thinking and the benefits of Augmented Intelligence facilitating warfare at machine speed. Enjoy!]

“CAITT, let’s re-run the targeting solution for tomorrow’s engagement… again,” asked Chief Warrant Officer Five Robert Menendez, in a not altogether annoyed tone of voice. Considering this was the fifth time he had asked, the tone of control Bob was exercising was nothing short of heroic to those who knew him well. Fortunately, CAITT, short for Commander’s Artificially Intelligent Targeting Tool, did not seem to notice. Bob quietly thanked the nameless software engineer who had not programmed it to recognize the sarcasm and vitriol that he felt when he made the request.

“Chief, do you really think she is going to come up with anything different this time? You know that old saying about the definition of insanity, right?” asked DeMarcus Austin. Bob shot the 28-year-old Captain a glare, clearly indicating that he knew exactly what the young man was implying. It was 0400 hours, and the entire Brigade Combat Team (BCT) was preparing to defend along its forward boundary. The exhausting three-day rapid deployment from their forward staging bases in Germany had everyone on edge already. In short, nothing had gone as expected or as planned for in the Operations Plan (OPLAN).

The UBRA’s (Unified Belorussian Russian Alliance’s) 323rd Tank Division was a mere 68 kilometers from the BCT’s Forward Line of Troops, or FLOT. They would be in the BCT’s primary engagement area in six hours. Despite the efforts of 1EU DIV and the EU’s Expeditionary Air Force, nothing was slowing UBRA’s advance towards the critical seaport city of Gdansk, Poland.

All the assumptions about air supremacy and cyber domination went out the window after the first UBRA tactical Electromagnetic Pulse (EMP) weapon detonated over Vilnius, Lithuania, 48 hours prior. A brilliant strategic move, the EMP fried every unshielded computer networked system the Allied Forces possessed. The Coalition AI Partner Network, so heavily relied on to execute the OPLAN, was inaccessible, as was every weapon system that linked to it. Right about now, Bob wished that CAITT was one of those systems.

Luckily for him and his boss, Colonel Steph “Duke” Ducalis, CAITT was designed with an internal Faraday shield preventing it and most of the U.S. Army’s other AI systems from suffering the same catastrophic damage. Unfortunately, the EU Armed Forces did not heed the same warnings and indicators. They were essentially crippled as they fervently worked to repair the damage. With the majority of U.S. military might committed to the Pacific Theatre, Colonel Ducalis’ BCT, a holdover from the old NATO alliance, was the lone American combat unit forward deployed in Western Europe. Alone and unafraid, as they say.

“Sir…” said CAITT, snapping Bob out of his fatigue-induced musings, “all data still indicates that engaging the 323rd’s logistical assembly areas in Elblag with our M56 Long-Range High-Velocity Missiles will compel their defeat. I estimate their advance will cease approximately 18 hours after the direct fire battle commences. Given all of the variables, this is the optimal targeting solution.” Bob really hated how CAITT dispassionately stated her “optimal targeting solution” in that sultry female tone. Clearly, the same software engineer who had ensured CAITT was durable also had a soft spot for British accents.

“CAITT, that makes no sense!” Bob stated exasperatedly. “The 323rd has approximately 250 T-90 MBTs — even if they expend all their fuel and munitions in those 18 hours, they will still overrun our defensive positions in less than six. We only have a single armored battalion with 35 FMC LAV3s. Even if they achieve 3-1 K-kill ratios, we will not be able to hold our position. If they dislodge the LAVs, the dismounted infantrymen won’t stand a chance. We need to target the C2 nodes of their lead tank regiment now with the M56s. If we can neutralize their centralized command and control and delay their rate of march, it may give the EUAF enough time to get us those CAS and AI sorties they promised. That’s the right play: space for time.”

“I am sorry, Mr. Menendez. I have no connection to the coalition network and cannot get a status update for the next Air Tasking Order. There is no confirmation that our Air Support Requests were received. I am issuing the target nominations to 2-142 HIMARS; they are moving towards their Position Areas Artillery now, airspace coordination is proceeding, and Colonel Ducalis is receiving his Commander’s Intervention Brief. Pending his override, there is nothing you can do.” CAITT’s response almost sounded condescending to Bob; but then again, he remembered a time when human staff officers made recommendations to the boss, not smart-ass video game consoles.

“Chief, shouldn’t we just go with CAITT’s solution? I mean, she has all the raw data from the S2’s threat template and the weaponeering guidance that you built. CAITT is the joint program of record that we have to use, isn’t it?” asked Captain Austin. Bob did not blame the young man for saying that. After all, this is what the Army wanted: staff officers who were more technicians and data managers than tacticians. The young man was simply not trained to question the AI’s conclusions.

“No sir, we should not, and by the way, I really hate how you call it a she,” answered Bob as he pondered his dilemma. Dammit! I’m the freaking Targeting Officer; I own this process, not this stupid thing… he thought for about five seconds before his instincts reasserted control of his senses.

Quickly jumping out of his chair, Bob left Captain Austin to oversee the data refinement and went outside to seek out the Commander’s Joint Lightweight Tactical Vehicle (JLTV). It took him a moment to locate it under the winter camouflage shielding. Polish winters were just as brutal as advertised.

I must be getting old, Bob mused to himself, the cold air biting into his face. After twenty-five years of service, despite countless combat deployments in the Middle East, he was starting to get complacent. It was easy to think like young Captain Austin. He never should have trusted CAITT in the first place. It was so easy to let it make the decisions for you that many just stopped thinking altogether. The CIB would be Bob’s last chance to convince the boss that CAITT’s solution was wrong and he was right.

Bob entered the camo shield behind the JLTV, constructing his argument to the boss in his mind. Colonel Ducalis had no time to entertain lengthy debate, this Bob knew. The fight was moving just too fast. Information is the currency of decision-making, and he would get at best about twenty seconds to make his case before something else grabbed the boss’s attention. CAITT would already be running the targeting solution straight to the boss via his Commander’s Oculatory Device, jokingly called “COD,” referencing the old bawdy medieval term. Colonel Ducalis, already wearing the COD when Bob came in, was oblivious to everything else around him. Designed to construct a virtual and interactive battlefield environment, the COD worked almost too well. Even as Bob came in, CAITT was constructing the virtual battlefield, displaying missile aimpoints, HIMARS firing positions, airspace coordination measures, and detailed damage predictions for the target areas.

Bob could not understand how one person could absorb all that visual information in one sitting, but Colonel Ducalis was an exceptional commander. Standing nearby was the boss’s ever-present guardian, Major Lawrence Atlee, the BCT XO, acting as always like a consigliere to his boss. Atlee’s annoyance was evident from the scowl Bob received as he entered unannounced and, more egregiously, unrequested.

“Chief, what do you need?” asked Atlee, in his typically hurried tone, indicating that the boss should not be disturbed for all but the most serious reasons.

“Sir, it’s imperative I talk to the boss right now,” Bob demanded, somewhat out of breath — again, old age catching up. Without providing a reason to the XO, Bob moved directly to Colonel Ducalis and gently touched his arm. One did not shake a Brigade Commander, especially a former West Point Rugby player the size of Duke. The XO was not pleased.

“Bob, what’s up? I was just reviewing CAITT’s targeting solution,” said Duke as he lifted the COD off his face and saw his very distraught looking Targeting Officer. That’s hopeful, thought Bob, most Commanders would not even have bothered, simply letting the AI execute its solution.

Bob took a moment to compose himself, and as he was about to pitch his case, Atlee stepped in: “Sir, I’m very sorry. Chief here was just trying to let you know that he was ready to proceed.” Then, turning to Bob, he said in a manner that would not be confused as optional, “He was just leaving.”

Bob seized his chance as Duke looked right at him. They had served together for a long time. Bob remembered when Duke had asked him to come down from the 1EU Division Staff to fill his targeting officer billet. Undoubtedly, Duke trusted him and genuinely wanted to know what his concern was when he removed the COD in the first place. Bob owed it to him to give it to him straight.

“Sir, that is not correct,” Bob said, speaking hurriedly. “We have a serious problem. CAITT’s targeting solution is completely wrong. The variables and assumptions were all predicated on the EUAF having air and cyber superiority. Those plans went out the window the second that EMP detonated. With all those aircraft down for CPU hardware replacement and software re-installs, those data points are now irrelevant. CAITT doesn’t know how long that will take because it is delinked from the Coalition’s AI Partner Network. I managed to get a low-frequency transmission established with Colonel Collins in Warsaw, and he thinks they can get us some sorties in the next six hours. CAITT’s solution is ignoring the time-versus-space dynamic and going with a simple comparison-of-forces mathematical model. I’m betting it thinks that our casualties will be within acceptable limits after the 323rd expends all of its pre-staged consumable fuel and ammo. It thinks that we can hold our position if we cut off their re-supply. It may be right, but our losses will render us combat ineffective and unable to hold while 1EU DIV reconsolidates behind us.

“We need to implement this High Payoff Target List and Attack Guidance immediately, disrupting and attriting their lead maneuver formations. Sir, we need to play for time and space,” Bob explained, hoping the sports analogy resonated, while simultaneously accessing his Fires Forearm Display, or FFaD, and transmitting the data to Duke’s COD with a wave of his hand.

“Sir, I am not sure we should be deviating from the AI solution,” Atlee started to interject. “To be candid, and no offense to Mr. Menendez, the Army is eliminating their billets anyway since CAITT was fielded last year, same as they did for all the BCT S3s and FSOs. Their type of thinking is just not needed anymore, now that we have CAITT to do it for us.” Bob was amazed at how dispassionately Major Atlee stated this.

Bob, realizing where this was going, took a knee next to Duke. He was clearly as tired as everyone else. Bob leaned in to speak while Duke started to review the new battlespace geometries and combat projections in his COD. “Duke,” Bob said in a low tone of voice so Major Atlee could not easily overhear him, “We’ve been friends a long time, I’ve never given you a bad recommendation. Please, override CAITT. LTC Givens can reposition his HIMARS battalion, but he has to start doing it now. This is our only chance; once those missiles are gone, we won’t get them back.”

He then stood up and patiently waited. Bob understood that he had pushed things as far as he could. Duke was a good man, a fine commander, and would make the right decision, Bob was certain of it.

Taking off his COD and rubbing his eyes, Duke leaned back and sighed heavily, the weight of command taking its full effect.

“CAITT,” stated Colonel Ducalis. “I am initiating Falcon 06’s override prerogative. Issue Chief Menendez’s targeting solution to LTC Givens immediately. Larry, get a hold of 1EU DIV and tell them we can hold our positions for 24 hours. After that, we may have to withdraw, but we will live to fight another day. Right now, trading time for space may not be the optimal strategy, but it is the human one. Let’s Go!”


CW3 Jesse R. Crifasi is an active duty Field Artillery Warrant Officer. He has over 24 years in service and is currently serving as the Field Artillery Intelligence Officer (FAIO) for the 82nd Airborne Division.

The views expressed in this article are those of the author and do not reflect the official policy or position of the Department of the Army, DoD, or the U.S. Government.

[Editor’s Note: The U.S. Army Training and Doctrine Command (TRADOC) mission is to recruit, train, and educate the Army, driving constant improvement and change to ensure the Total Army can deter, fight, and win on any battlefield now and into the future. Today’s post addresses how TRADOC will need to transform to ensure that it continues to accomplish this mission with the next generation of Soldiers.]

“The Army of 2028 will be ready to deploy, fight, and win decisively against any adversary, anytime and anywhere, in a joint, multi-domain, high-intensity conflict, while simultaneously deterring others and maintaining its ability to conduct irregular warfare. The Army will do this through the employment of modern manned and unmanned ground combat vehicles, aircraft, sustainment systems, and weapons, coupled with robust combined arms formations and tactics based on a modern warfighting doctrine and centered on exceptional Leaders and Soldiers of unmatched lethality.” GEN Mark A. Milley, Chief of Staff of the Army, and Dr. Mark T. Esper, Secretary of the Army, June 7, 2018.

In order to achieve this vision, the Army of 2028 needs a TRADOC 2028 that will recruit, organize, and train future Soldiers and Leaders to deploy, fight, and win decisively on any future battlefield. This TRADOC 2028 must account for: 1) the generational differences in learning styles; 2) emerging learning support technologies; and 3) how the Army will need to train and learn to maintain cognitive overmatch on the future battlefield. The Future Operational Environment, characterized by the speeding up of warfare and learning, will challenge the artificial boundaries between institutional and organizational learning and training (e.g., Brigade mobile training teams [MTTs] as a Standard Operating Procedure [SOP]).

Soldiers will be “New Humans” – beyond digital natives, they will embrace embedded and integrated sensors, Artificial Intelligence (AI), mixed reality, and ubiquitous communications. “Old Humans” adapted their learning style to accommodate new technologies (e.g., Classroom XXI). New Humans’ learning style will be a result of these technologies, as they will have been born into a world where they code, hack, rely on intelligent tutors and expert avatars (think the nextgen of Alexa / Siri), and learn increasingly via immersive Augmented / Virtual Reality (AR/VR), gaming, simulations, and YouTube-like tutorials, rather than the desiccated lectures and interminable PowerPoint presentations of yore. TRADOC must ensure that our cadre of instructors know how to use (and more importantly, embrace and effectively incorporate) these new learning technologies into their programs of instruction, until their ranks are filled with “New Humans.”

Delivering training for new, as-yet-undefined MOSs and skillsets. The Army will have to compete with Industry to recruit the requisite talent for Army 2028. These recruits may enter service with fundamental technical skills and knowledge (e.g., drone creator/maintainer, 3-D printing specialist, digital and cyber fortification construction engineer) that may flatten the initial learning curve and free more time for training “Green” tradecraft. Cyber recruiting will remain critical, as TRADOC will face an increasingly difficult recruiting environment while the Army competes for new skillsets, from training deep learning tools to robotic repair. Initiatives to appeal to gamers (e.g., the Army’s eSports team) will have to be reflected in new approaches to all TRADOC Lines of Effort. AI may assist in identifying potential recruits with the requisite aptitudes.

“TRADOC in your ruck.” Personal AI assistants will bring Commanders and their staffs all of the collected expertise of today’s institutional force. Conducting machine-speed collection, collation, and analysis of battlefield information will free up warfighters and commanders to do what they do best — fight and make decisions, respectively. AI’s ability to quickly sift through and analyze the plethora of input received from across the battlefield, fused with lessons-learned data from thousands of previous engagements, will lessen the commander’s dependence on having had direct personal combat experience with conditions similar to the current fight when making command decisions.

Learning in the future will be personalized and individualized, with targeted learning at the point of need. Training must be customizable and temporally optimized, delivered in a style that matches the individual learner rather than a one-size-fits-all approach. These learning environments will need to bring gaming and micro-simulations to individual learners for them to experiment. Similar tools could improve tactical war-gaming and support Commanders’ decision-making. This will disrupt the traditional career maps that have defined success for the current generation of Army Leaders. In the future, courses will be much less defined by the rank/grade of the Soldiers attending them.

Geolocation of Training will lose importance. We must stop building and start connecting. Emerging technologies – many accounted for in the Synthetic Training Environment (STE) – will connect experts and Soldiers, creating a seamless training continuum from the training base to home station to the foxhole. Investment should focus on technologies connecting and delivering expertise to the Soldier rather than on brick-and-mortar infrastructure. This vision of TRADOC 2028 will require “Big Data” to effectively deliver personalized, immersive training to our Soldiers and Leaders at the point of need, and comes with associated privacy issues that will have to be addressed.

In conclusion, TRADOC 2028 sets the conditions to win warfare at machine speed. This speeding up of warfare and learning will challenge the artificial boundaries between institutional and organizational learning and training.

If you enjoyed this post, please also see:

– Mr. Elliott Masie’s presentation on Dynamic Readiness from the Learning in 2050 Conference, co-hosted with Georgetown University’s Center for Security Studies in Washington, DC, on 8-9 August 2018.

[Editor’s Note: Mad Scientist Laboratory is pleased to review Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans, and Avi Goldfarb, Harvard Business Review Press, 17 April 2018. While economics is not a perfect analog to warfare, this book will enhance our readers’ understanding of narrow Artificial Intelligence (AI) and its tremendous potential to change the character of future warfare by disrupting human-centered battlefield rhythms and facilitating combat at machine speed.]

This insightful book by economists Ajay Agrawal, Joshua Gans, and Avi Goldfarb penetrates the hype often associated with AI by describing its base functions and roles and providing the economic framework for its future applications. Of particular interest is their perspective of AI entities as prediction machines. In simplifying and demystifying our understanding of AI and Machine Learning (ML) as prediction tools, akin to computers being nothing more than extremely powerful mathematics machines, the authors effectively describe the economic impacts that these prediction machines will have in the future.

The book addresses the three categories of data underpinning AI / ML:

Training: This is the Big Data that trains the underlying AI algorithms in the first place. Generally, the bigger and more robust the data set, the more effective the AI’s predictive capability will be. Activities such as driving (with millions of iterations every day) and online commerce (with similarly large numbers of transactions) in defined environments lend themselves to efficient AI applications.

Input: This is the data that the AI will be taking in, either from purposeful, active injects or passively from the environment around it. Again, defined environments are far easier to cope with in this regard.

Feedback: This data comes either from manual inputs by users and developers or from the AI observing the effects of its previous applications. While often overlooked, this data is critical to iteratively enhancing and refining the AI’s performance, as well as to identifying biases and skewed decision-making. AI is not a static, one-off product; much like software, it must be continually updated, either through injects or learning.
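The three data categories above can be illustrated with a toy prediction machine. This is a minimal sketch for intuition only; the `SimplePredictor` class and its numbers are invented for the example and do not come from the book.

```python
# Toy "prediction machine" showing the three data categories:
# training data fits the model, input data drives a prediction,
# and feedback data iteratively refines the model after use.

class SimplePredictor:
    """Predicts y from x with a single learned ratio (a running mean of y/x)."""

    def __init__(self):
        self.weight = 0.0  # the one learned parameter
        self.n = 0         # number of examples seen so far

    def train(self, examples):
        """Training data: a batch of (x, y) pairs fits the model up front."""
        for x, y in examples:
            self._update(y / x)

    def predict(self, x):
        """Input data: a new observation the model must act on."""
        return self.weight * x

    def feedback(self, x, observed_y):
        """Feedback data: an observed outcome refines the model."""
        self._update(observed_y / x)

    def _update(self, ratio):
        self.n += 1
        self.weight += (ratio - self.weight) / self.n


model = SimplePredictor()
model.train([(1, 2), (2, 4), (3, 6)])  # training set follows y = 2x
before = model.predict(10)             # prediction from training data alone
model.feedback(4, 10)                  # outcome was higher than the model expected
after = model.predict(10)              # the feedback shifted the model upward
```

The same loop — a large training batch, live inputs, and continual feedback — is why the defined, high-volume environments the authors cite, such as driving and online commerce, are natural fits for narrow AI.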

“… their expertise is confined to a single domain, as opposed to hypothetical future “general” AI systems that could apply expertise more broadly. Machines – at least for now – lack the general-purpose reasoning that humans use to flexibly perform a range of tasks: making coffee one minute, then taking a phone call from work, then putting on a toddler’s shoes and putting her in the car for school.” – from Artificial Intelligence What Every Policymaker Needs to Know, Center for New American Security, 19 June 2018

These narrow AI applications could have significant implications for U.S. Armed Forces personnel, force structure, operations, and processes. While economics is not a direct analogy to warfare, there are a number of aspects that can be distilled into the following ramifications:

Internet of Battle Things (IOBT) / Source: Alexander Kott, ARL

1. The battlefield is dynamic, with innumerable variables and great potential for ground truth to be mischaracterized by limited, purposely subverted, or “dirty” input data. Additionally, the relatively short duration of battles and battlefield activities means that AI would not receive the consistent, plentiful, and defined data it would receive in civilian transportation and economic applications.

2. The U.S. military will not be able to just “throw AI on it” and achieve effective results. The effective application of AI will require a disciplined and comprehensive review of all warfighting functions to determine where AI can best augment and enhance our current Soldier-centric capabilities (i.e., identify those workflows and processes – Intelligence and Targeting Cycles – that can be enhanced with the application of AI). Leaders will also have to assess where AI can replace Soldiers in workflows and organizational architecture, and whether AI necessitates the discarding or major restructuring of either. Note that Goldman Sachs is in the process of conducting this type of self-evaluation right now.

3. Due to its incredible “thirst” for Big Data, AI/ML will necessitate tradeoffs between security and privacy (the former likely being more important to the military) and quantity and quality of data.

4. In the near to mid-term future, AI/ML will not replace Leaders, Soldiers, and Analysts, but will allow them to focus on the big issues (i.e., “the fight”) by freeing them from the resource-intensive (i.e., time and manpower) mundane and rote tasks of data crunching, possibly facilitating the reallocation of manpower to growing need areas in data management, machine training, and AI translation.

This book is a must-read for those interested in a down-to-earth assessment of the state of narrow AI and its potential applications to both economics and warfare.

If you enjoyed this review, please also read the following Mad Scientist Laboratory blog posts:

… and watch the following presentations from the Mad Scientist Robotics, AI, and Autonomy – Visioning Multi-Domain Battle in 2030-2050 Conference, 7-8 March 2017, co-sponsored by Georgia Tech Research Institute:

[Editor’s Note: Mad Scientist Laboratory is pleased to present our October edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

This innovative Table of Disruptive Technologies, derived from Chemistry’s familiar Periodic Table, lists 100 technological innovations organized into a two-dimensional table, with the x-axis representing Time (Sooner to Later) and the y-axis representing the Potential for Socio-Economic Disruption (Low to High). These technologies are organized into three time horizons, with Current (Horizon 1 – Green) happening now, Near Future (Horizon 2 – Yellow) occurring in 10-20 years, and Distant Future (Horizon 3 – Fuchsia) occurring 20+ years out. The outermost band of Ghost Technologies (Grey) represents fringe science and technologies that, while highly improbable, still remain within the realm of the possible and thus are “worth watching.” In addition to the time horizons, each of these technologies has been assigned a number corresponding to an example listed to the right of the Table; and a two-letter code corresponding to five broad themes: DE – Data Ecosystems, SP – Smart Planet, EA – Extreme Automation, HA – Human Augmentation, and MI – Human Machine Interactions. Regular readers of the Mad Scientist Laboratory will find many of these Potential Game Changers familiar, albeit assigned to far more conservative time horizons (e.g., our community of action believes Swarm Robotics [Sr, number 38], Quantum Safe Cryptography [Qs, number 77], and Battlefield Robots [Br, number 84] will all be upon us well before 2038). That said, we find this Table to be a useful tool in exploring future possibilities and will add it to our “basic load” of disruptive technology references, joining the annual Gartner Hype Cycle of Emerging Technologies.

Tim Berners-Lee, who created the World Wide Web in 1989, has said recently that he thinks his original vision is being distorted due to concerns about privacy, access, and fake news. Berners-Lee envisioned the web as a place that is free, open, and constructive, and for most of his invention’s life, he believed that to be true. However, he now feels that the web has undergone a change for the worse. He believes the World Wide Web should be a protected basic human right. In order to accomplish this, he has created the “Contract for the Web,” which contains his principles to protect web access and privacy. Berners-Lee’s “World Wide Web Foundation estimates that 1.5 billion… people live in a country with no comprehensive law on personal data protection. The contract requires governments to treat privacy as a fundamental human right, an idea increasingly backed by big tech leaders like Apple CEO Tim Cook and Microsoft CEO Satya Nadella.” This idea for a free and open web stands in contrast to recent news about China and Russia potentially branching off from the main internet and forming their own filtered and censored Alternative Internet, or Alternet, with tightly controlled access. Berners-Lee’s contract aims at unifying all users under one over-arching rule of law, but without China and Russia, we will likely have a splintered and non-uniform Web that sees only an increase in fake news, manipulation, privacy concerns, and lack of access.

The Future Operational Environment’s “Era of Contested Equality” (i.e., 2035 through 2050) will be marked by significant breakthroughs in technology and convergences, resulting in revolutionary changes. Under President Xi Jinping‘s leadership, China is becoming a major engine of global innovation, second only to the United States. China’s national strategy of “innovation-driven development” places innovation at the forefront of economic and military development.

Early innovation successes in artificial intelligence, sensors, robotics, and biometrics are being fielded to better control the Chinese population. Many of these capabilities will be inserted into Chinese command and control functions and intelligence, security, and reconnaissance networks, redefining the timeless competition of finders vs. hiders. These breakthroughs represent homegrown Chinese innovation and are taking place now.

A recent example is the employment of ‘gait recognition’ software capable of identifying people by how they walk. Watrix, a Chinese technology startup, is selling the software to police services in Beijing and Shanghai as part of a further push to develop an artificial intelligence- and data-driven surveillance network. Watrix reports the capability can identify people up to 165 feet away without a view of their faces, filling the sensor gap where the high-resolution imagery required by facial recognition software is unavailable.

Tricking the brain can be fairly low tech, according to Dr. Alexis Mauger, senior lecturer at the University of Kent’s School of Sport and Exercise Sciences. Research has shown that students who participated in a Virtual Reality-based exercise were able to withstand pain a full minute longer on average than their control group counterparts. Dr. Mauger hypothesized that this may be due to a lack of visual cues normally associated with strenuous exercise. In this research, participants were asked to hold a dumbbell out in front of them for as long as they could. The VR group didn’t see their forearms shake with exhaustion or their hands flush with color as blood rushed to their aching biceps; that is, they didn’t see the stimuli that could be perceived as signals of pain and exertion. These results could have a significant and direct impact on Army training. While experiencing pain and learning through negative outcomes is essential in certain training scenarios, VR could be used to train Soldiers past where they would normally be physically able to train. This could not only save the Army time and money but also boost the effectiveness of exercises, as every bit of performance normally left at the margins could now be captured.

Presently, there are two predominant techniques for machine learning: having machines analyze large sets of data, from which they extrapolate patterns and apply them to analogous scenarios; and placing a machine in a dynamic environment in which it is rewarded for positive outcomes and penalized for negative ones, facilitating learning through trial and error.
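The second, trial-and-error technique is the essence of reinforcement learning. A minimal sketch of the idea in Python (the five-state toy world, reward values, and learning parameters below are illustrative assumptions, not anything from the work under review):

```python
import random

random.seed(0)

# Toy world: the agent starts at position 0 and must reach position 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left or right

# Q-table: the agent's running estimate of the long-term value
# of taking each action in each state.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # Occasionally explore at random; otherwise exploit the best known action.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        # Positive outcome (reaching the goal) is rewarded;
        # every other step is mildly penalized.
        r = 1.0 if s2 == GOAL else -0.1
        # Nudge the estimate toward reward plus discounted best future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the learned policy steps right from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

Each update nudges the table entry for the chosen state-action pair toward the observed reward plus the discounted value of the best next move; after enough episodes, the table alone encodes a sensible policy, with no explicit programming of the route.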

In programmed curiosity, the machine is innately motivated to “explore for exploration’s sake.” The example used to illustrate the concept details an OpenAI machine learning project that learns to win a video game in which the reward comes not only from staying alive but also from exploring all areas of the level. This method has yielded better results than the data-heavy and time-consuming traditional methods. Applying this methodology for machine learning in military training scenarios would reduce the human labor required to identify and program every possible outcome, because the computer finds new ones on its own, reducing the time between development and implementation of a program. This approach is also more “humanistic,” as it allows the computer leeway to explore its virtual surroundings and discover new avenues as people do. By training AI in this way, the military can more realistically model various scenarios for training and strategic purposes.
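One common way to implement this kind of curiosity is to let the agent’s own prediction error serve as its reward: situations the agent cannot yet predict are “interesting,” while situations it has mastered pay nothing. A minimal sketch under that assumption (the toy dynamics and the linear world model below are illustrative inventions for this post, not OpenAI’s actual method):

```python
# The environment's true dynamics -- unknown to the agent.
def true_next(x):
    return 2 * x + 1

w, b = 0.0, 0.0   # the agent's learned linear "world model"
lr = 0.01          # learning rate for the model
rewards = []       # the intrinsic "curiosity" reward at each step

for step in range(2000):
    x = step % 10                      # cycle through familiar situations
    pred = w * x + b                   # what the agent expects to happen
    actual = true_next(x)              # what actually happens
    surprise = (actual - pred) ** 2    # prediction error IS the reward
    rewards.append(surprise)
    # Learn from the surprise: one gradient step on the squared error.
    err = pred - actual
    w -= lr * err * x
    b -= lr * err

# Early on the world is surprising; once learned, the reward dries up.
print(rewards[0], rewards[-1])
```

As the world model improves, familiar situations stop being rewarding, so a curiosity-driven agent naturally drifts toward whatever it cannot yet predict, which is exactly the “explore for exploration’s sake” behavior described above.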

A European Union plan to tax internet firms like Google and Facebook on their turnover is on the verge of collapsing. As the plan must be agreed to by all 28 EU countries (a tall order given that it is opposed by a number of them), the EU is announcing national initiatives instead. The proposal calls for EU states to charge a 3 percent levy on the digital revenues of large firms. The plan aims at changing tax rules that have let some of the world’s biggest companies pay unusually low rates of corporate tax on their earnings. These firms, mostly from the U.S., are accused of averting tax by routing their profits to the bloc’s low-tax states.

This is not just about taxation. This is about the issue of citizenship itself. What does it mean for virtual nations – cyber communities which have gained power, influence, or capital comparable to that of a nation-state – to fall outside the traditional rule of law? The legal framework of virtual citizenship turns upside down and globalizes the logic of the special economic zone: a geographical space of exception, where the usual rules of state and finance do not apply. How will these entities be taxed or declare revenue?

Currently, in the online world, geography and physical infrastructure remain crucial to control and management. What happens when the web is democratized and virtualized, and control and management change hands? Google and Facebook still build data centers in Scandinavia and the Pacific Northwest, close to cheap hydroelectric power and natural cooling. When looked at in terms of who the citizen is, population movement, and stateless populations, what will the “new normal” be?

In this article, subtitled “Are we designing inequality into our genes?”, Ms. Hercher echoes what proclaimed Mad Scientist Hank Greely briefed at the Bio Convergence and Soldier 2050 Conference last March – advances in human genetics will be applied initially to have healthier babies via genetic sequencing and the testing of embryos. Embryo editing will enable us to tailor or modify embryos for designed traits, initially to treat diseases, but this will also provide us with the tools to enhance humans genetically. Ms. Hercher warns us that “If the use of pre-implantation testing grows and we don’t address the disparities in who can access these treatments, we risk creating a society where some groups, because of culture or geography or poverty, bear a greater burden of genetic disease.” A valid concern, to be sure — but who will ensure fair access to these treatments? A new Government agency? And if so, how long after ceding this authority to the Government would we see politically-expedient changes enacted, justified as being for the betterment of society but potentially perverting the original intent? The possibilities need not be as horrific as Aldous Huxley’s Brave New World, populated with castes of Deltas and Epsilon-minus semi-morons. It is not inconceivable that enhanced combat performance via genetic manipulation could follow, resulting in a permanent caste of warfighters, distinct genetically from their fellow citizens, with the associated societal implications.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

[Editor’s Note: Mad Scientist Laboratory is pleased to publish today’s post by returning guest blogger Mr. Ian Sullivan, addressing the paramount disruptor — people and ideas. While emergent technologies facilitate the possibility of change, the catalyst, or change agent, remains the human with the revolutionary idea or concept that employs these new tools in an innovative way to bring about change in the character of future warfare.]

There is a passage in Erich Maria Remarque’s All Quiet on the Western Front that has always colored my views on the future. In it, Albert Kropp, the thoughtful school pal of main character Paul Bäumer, is having a discussion with his friend about a war that has metastasized from a youthful, joyous adventure into a numbingly horrific slog.

War weary German Soldiers / Source: Imperial War Museum

“But what I would like to know,” says Albert, “is whether there would not have been a war if the Kaiser had said No.”

“I’m sure there would,” I [Paul] interject, “he was against it from the first.”

“Well if not him alone, then perhaps if twenty or thirty people in the world had said No.”

“That’s probable,” I agree, “but they damned well said Yes.”

Ruminating on the First World War, a conflict that most leaders of the day thought would be over in a few weeks, but one that futurists should have realized would become something else altogether, Albert and Paul hit upon a salient point. European armies met for a battle they could not imagine. Generals versed in Napoleon suddenly faced a true industrial-age war. Sure, there were signs hinting at what could come. The post-Gettysburg phase of the U.S. Civil War, for example, offered a glimpse of this type of fight, as did the Franco-Prussian War and even the Second Anglo-Boer War.

A German Gotha heavy bomber biplane in flight

In a total war, where the battlefield was dominated by rapid-firing artillery, machine guns, and chemical warfare; where whole societies were pitted against each other to meet the industrial requirements of the war, to sustain and reconstitute fighting forces; and where civilian populations were directly targeted by naval blockades, aerial bombardment, and other deprivations, it is easy to see the critical role that technology played. However, Albert and Paul remind us that no matter how much technology advances, or how it shapes the world, the most significant, relevant, and system-altering changes come not from technology, but from people and the ideas and beliefs that shape their behaviors and enable decisions.

The world of today, looking forward, is at least reminiscent of the pre-Great War period. Technology is advancing rapidly; indeed, it is advancing so fast that changes in the way we live, create, think, and prosper are occurring at a dizzying pace. New and converged technologies have led us to question what shape society will take, and dramatic changes which were once only the province of science fiction seemingly become science fact at a swift clip. Information technology, artificial intelligence, quantum computing, robotics, additive manufacturing, and other technological advances have increased or soon will further expand the speed of human interaction. These technologies already have changed society, and will continue to do so as they mature, spawn convergences, and lead to the creation of a new series of technological wonders. TRADOC’s “Operational Environment and the Changing Character of Future Warfare” asserts the future is governed by two drivers: the rapid societal change spurred by these technological advances, and the changes these advances will have on the character of warfare. But this assessment may be incomplete, or perhaps too deterministic, because at the end of the day there is a third driver, and it deals with people and ideas.

The latter part of the Nineteenth and early part of the Twentieth Centuries also were dominated by advances in technology. We saw industrialization on a massive scale, the development of internal combustion engines, aviation, telephony, the widespread use of railroads, and other remarkable changes. Albert and Paul must have thought that the pace of human interaction was increasing exponentially. Yet, while these technologies clearly had societal impacts, they were not transformational on their own. Indeed, they served only to reinforce the power structures that stemmed from the end of the Enlightenment and the reaction to the French Revolution and Napoleon. For as much as society changed, for as much as sub-groups were empowered, and for as much as “super-empowered individuals” like Andrew Carnegie, John D. Rockefeller, Henry Ford, or the Krupp family garnered influence and even some power, it was a handful of individuals – Albert postulated 20 or 30 – who made the decision to go to war in 1914.

The Kaiser (second from left) chatting with his staff while on field maneuvers, prior to World War I

And in spite of the technological advances of the era, it was the thought processes and ideals generated in that period – a time when Marxism, nationalism, imperialism, social Darwinism, and existentialism, among other schools of thought, were developed and refined – that influenced the 20 or 30 individuals who held in their hands the fate of the world in August 1914, and the multitudes of others who would see the war that transpired through to the end.

So again, why is this important to the futurist? Because we see technological marvels and focus on their impact, noting that they will drive change that will compel society to follow. Technology is exciting, and its prospects are wondrous. It can and will drive change. But it does not drive change alone; ideas and people still play a role.

The assassination of Archduke Franz Ferdinand of Austria, heir presumptive to the Austro-Hungarian throne, and his wife Sophie, Duchess of Hohenberg, on 28 June 1914 in Sarajevo by Gavrilo Princip

The spark that caused the First World War, Gavrilo Princip’s assassination of Archduke Franz Ferdinand, was lit by one man but driven by Princip’s exposure to nationalism. The Russian Revolution, triggered as a popular reaction against the war and the ruling Romanov dynasty, certainly was guided by ideology across many spectrums. Idealism also went hand-in-hand with the American perception of World War I, as the nation geared up in a spasm of Wilsonian idealism to “make the world safe for democracy” and to fight “a war to end all wars.” In the end, it was the convergence of ideas, human decision-making, and technology that drove change, in this case the onset of World War I.

A casual glance at a newspaper, or, more likely, a scan of the news notifications on your smartphone, shows us a world that is in large part driven by thought, ideals, and beliefs. In spite of technology, the speed of human interaction, and global connectivity, we see a retrenchment of globalization and an assertion of nationalism and regionalism around the globe. Whether it be China’s expansionist “One-Belt, One-Road Initiative,” Russia’s adventurism in the Near Abroad and Syria, Brexit in the UK, or a renewed focus on “America-First” from Washington, a renewed sense of nationalism is evident worldwide. Additionally, autocratic regimes are experiencing something of a resurgence: Kim Jong Un in North Korea, Vladimir Putin in Russia, and even a Saudi Royal Family that is now under suspicion of murdering a journalist. We’ve also seen China putting up walls on Internet accessibility and a focus by state actors on crafting narratives aimed at influencing subsections of populations and fostering dissent within rival nations. Individuals, like Princip before them, can also play a role.

The Arab Spring, for example, was sparked by one man in Tunisia with a grievance, but soon went viral on social media and led to significant change in the Middle East. In such cases, technology may serve not as a driver, but instead as an enabler of the human driver.

As a futurist, I am concerned when so much of our effort focuses on one aspect of change, in this case, technology. I have attended many events focusing on the future, read a number of authors who focus on the radical changes AI or quantum computing will have on society, and seen many very similar interpretations of the way the future will unfold. Indeed, views of the future are coalescing around technological innovation compelling broader societal changes. It is clear that technology is a driver that needs to be studied.

But it is equally important to understand what drives thought and belief, and how these can be shaped and influenced, for both good and nefarious purposes. My intention in starting with Remarque was not to force a dystopian or deterministic view of the future. Nor am I falling back on George Santayana’s observation about a failure to learn history. History is important, as it shows us how events unfolded, and allows us to understand how problems developed; however, I do not believe we are doomed to repeat August 1914. But I do believe that we need to spend as much time looking at the intellectual, emotional, and even popular Zeitgeist to understand how people view the world and make decisions in light of all of the changes that technology is bringing around us. We need to learn not only what is happening, but must ask ourselves the hard “why?” and “so what?” questions, lest we be unable to understand and warn our leaders during some crisis in August 2028.

“By 2038, there won’t just be one internet — there will be many, split along national lines” — An Xiao Mina, 2038 podcast, Episode 2, New York Magazine Intelligencer, 25 October 2018.

[Editor’s Note: While the prediction above is drawn from a podcast that posits an emerging tech cold war between China and the U.S., the quest for digital sovereignty and national cryptocurrencies is an emerging global trend that portends the fracturing of the contemporary internet into national intranets. This trend erodes the prevailing Post-Cold War direction towards globalization. In today’s post, Mad Scientist Laboratory welcomes back guest blogger Dr. Mica Hall, who addresses Russia’s move to adopt a national cryptocurrency, the cryptoruble, as a means of asserting its digital sovereignty and ensuring national security. The advent of the cryptoruble will have geopolitical ramifications far beyond Mother Russia’s borders, potentially ushering in an era of economic hegemony over those states that embrace this supranational cryptocurrency. (Note: Some of the embedded links in this post are best accessed using non-DoD networks.)]

At the nexus of monetary policy, geopolitics, and information control is Russia’s quest to expand its digital sovereignty. At the October 2017 meeting of the Security Council, “the FSB [Federal Security Service] asked the government to develop an independent ‘Internet’ infrastructure for BRICS nations [Brazil, Russia, India, China, South Africa], which would continue to work in the event the global Internet malfunctions.” 1 Security Council members argued the Internet’s threat to national security is due to:

“… the increased capabilities of Western nations to conduct offensive operations in the informational space as well as the increased readiness to exercise these capabilities.”2

This echoes the sentiment of Dmitry Peskov, Putin’s Press Secretary, who stated in 2014,

“We all know who the chief administrator of the global Internet is. And due to its volatility, we have to think about how to ensure our national security.”3

At that time, the Ministry of Communications (MinCom) had just tested a Russian back-up to the Internet to support a national “Intranet,” lest Russia be left vulnerable if the global Domain Name Servers (DNS) are attacked. MinCom conducted “a major exercise in which it simulated ‘switching off’ global Internet services,” and in 2017, the Security Council decided to create just such a backup system “which would not be subject to control by international organizations” for use by the BRICS countries.4

While an Internet alternative (or Alternet) may be sold to the Russian public as a way to combat the West’s purported advantage in the information war, curb excessive dependency on global DNS, and protect the country from the foreign puppet masters of the Internet that “pose a serious threat to Russia’s security,”5 numerous experts doubt Russia’s actual ability to realize the plan, given its track record.

Take the Eurasian Economic Union (EAEU), for example, an international organization comprised of Russia, Kazakhstan, Kyrgyzstan, Armenia, and Belarus. Russia should be able to influence the EAEU even more than the BRICS countries, given its leading role in establishing the group. The EAEU was stood up in January 2016, and by December, “MinCom and other government agencies were given the order to develop and confirm a program for the ‘Digital Economy,’ including plans to develop [it in] the EAEU.”6 As Slavin observes, commercial ventures have already naturally evolved to embrace the actual digital economy: “The digital revolution has already occurred, business long ago switched to electronic interactions,”7 while the state has yet to realize its Digital Economy platform.

Changing the way the government does business has proven more difficult than changing the actual economy. According to Slavin, “The fact that Russia still has not developed a system of digital signatures, that there’s no electronic interaction between government and business or between countries of the EAEU, and that agencies’ information systems are not integrated – all of that is a problem for the withered electronic government that just cannot seem to ripen.”8 The bridge between the state and the actual digital economy is still waiting for “legislation to support it and to recognize the full equality of electronic and paper forms.”9 Consequently, while the idea to create a supranational currency to be used in the EAEU has been floated many times, the countries within the organization have not been able to agree on what that currency would be.

The cryptoruble could be used to affect geopolitical relationships. In addition to wielding untraceable resources, Russia could also leverage this technology to join forces with some countries against others. According to the plan President Putin laid out upon announcing the launch of a cryptoruble, Russia would form a “single payment space” for the member states of the EAEU, based on “the use of new financial technologies, including the technology of distributed registries.”10 Notably, three months after the plan to establish a cryptoruble was announced, Russia’s Central Bank stated the value of working on establishing a supranational currency to be used either across the BRICS countries or across the EAEU, or both, instead of establishing a cryptoruble per se.11

This could significantly affect the balance of power not only in the region, but also in the world. Any country participating in such an economic agreement, however, would subject itself to being overrun by a new hegemony: that of the supranational currency.

As long as the state continues to cloak its digital sovereignty efforts in the mantle of national security – via the cryptoruble or the Yarovaya laws, which increase Internet surveillance – it can continue to constrict the flow of information without compunction. As Peskov stated, “It’s not about disconnecting Russia from the World Wide Web,” but about “protecting it from external influence.”12 After Presidents Putin and Trump met at the G20 Summit in July 2017, Communications Minister Nikiforov said the two countries would establish a working group “for the control and security of cyberspace,” which the U.S. Secretary of State said would “develop a framework for cybersecurity and a non-interference agreement.”13 Prime Minister Medvedev, however, said digitizing the economy is both “a matter of Russia’s global competitiveness and national security,”14 thus indicating Russia is focused not solely inward, but on a strategic competitive stance. Nikiforov makes the shortcut even clearer, stating, “In developing the economy, we need digital sovereignty,”15 indicating a need to fully control how the country interacts with the rest of the world in the digital age.

The Kremlin’s main proponent for digital sovereignty, Igor Ashmanov, claims, “Digital sovereignty is the right of the government to independently determine what is happening in their digital sphere. And make its own decisions.” He adds, “Only the Americans have complete digital sovereignty. China is growing its sovereignty. We are too.”16 According to Lebedev, “Various incarnations of digital sovereignty are integral to the public discourse in most countries,” and in recent years, “The idea of reining in global information flows and at least partially subjugating them to the control of certain traditional or not-so-traditional jurisdictions (the European Union, the nation-state, municipal administrations) has become more attractive.”17 In the Russian narrative, which portrays every nation as striving to gain the upper hand on the information battlefield, Ashmanov’s fear that, “The introduction of every new technology is another phase in the digital colonization of our country,”18 does not sound too far-fetched.

The conspiracy theorists to the right of the administration suggest the “global world order” represented by the International Monetary Fund intends to leave Russia out of its new replacement reference currency, saying “Big Brother is coming to blockchain.”19 Meanwhile, wikireality.ru reports the Russian government could limit web access in the name of national security, because the Internet “is a CIA project and the U.S. is using information wars to destroy governments,” using its “cybertroops.”20 As the site notes, the fight against terrorism has been invoked as a basis for establishing a black list of websites available within Russia. Just as U.S. citizens have expressed concerns over the level of surveillance made legal by the Patriot Act, so Russian netizens have expressed concerns over the Yarovaya laws and moves the state has made to facilitate information sovereignty.

According to the Financial Times, “This interest in cryptocurrencies shows Russia’s desire to take over an idea originally created without any government influence. It was like that with the Internet, which the Kremlin has recently learned to tame.”21 Meanwhile, a healthy contingent of Russian-language netizens continues to express a lack of faith in the national security argument, preferring to embrace a more classical skepticism, as reflected in comments in response to a 2017 post by msmash called, “From the Never-Say-Never-But-Never Department,” — “In Putin’s Russia, currency encrypts you!”22 To these netizens, the state looks set to continue to ratchet down on Internet traffic: “It’s really descriptive of just how totalitarian the country has become that they’re hard at work out-Chinaing China itself when it comes to control of the Internet,” but “China is actually enforcing those kind of laws against its people. In Russia, on the other hand, the severity of the laws is greatly mitigated by the fact that nobody gives a **** about the law.”23 In addition to suggesting personal security is a fair price to be paid for national security via surveillance and Internet laws, the state appears poised to argue all information about persons in the country, including about their finances, should also be “transparent” to fight terrorism and crime in general.