Wednesday, October 19, 2016

Was Donald J. Trump's political rise in 2015-2016 a "black swan" event? "Yes" is the answer asserted by Jack Shafer in this Politico article. "No" is the answer from other writers, including David Atkins in this article on the Washington Monthly Political Animal Blog.

Orange Swan

My answer is "Yes", but not in the same way that other events are Black Swans. Orange Swans like the Trump phenomenon fit this aphorism:

"It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so." -- attributed to Mark Twain

In other words, the signature characteristic of Orange Swans is delusion.

Rethinking "Black Swans"

It doesn't make sense to label any set of events as "Black Swans". It's not the events themselves that are surprising; rather, it is the processes involved (the generating mechanisms, our evidence about them, and our methods of reasoning) that make them unexpected and surprising.

Tuesday, June 21, 2016

[Submitted in writing at this meeting. An informal 5 min. version was presented during the public comment period. This statement is my own and does not represent the views or interests of my employer.]

Summary

Cyber security desperately needs institutional innovation, especially involving incentives and metrics. Nearly every report since 2003 has included recommendations to do more R&D on incentives and metrics, but progress has been slow and inadequate.

Why?

Because we have the wrong model for research and development (R&D) on institutions.

My primary recommendation is that the Commission’s report should promote new R&D models for institutional innovation. We can learn from examples in other fields, including sustainability, public health, financial services, and energy.

What are Institutions and Institutional Innovation?

Institutions are norms, rules, and social structures that enable society to function. Examples include marriage, consumer credit reporting and scoring, and emissions credit markets.

Cyber security[1] has institutions today, but many are inadequate, dysfunctional, or missing. Examples:

overlapping “checklists + audits”;

professional certifications;

post-breach protection for consumers (e.g. credit monitoring);

lists of “best practices” that have never been tested or validated as “best” and therefore are no better than folklore.

There is plenty of talk about “standards”, “information sharing”, “public-private partnerships”, and “trusted third parties”, but these remain mostly talking points and not realities.

Institutional innovation is a set of processes that either change existing institutions in fundamental ways or create new institutions. Sometimes this happens with concerted effort by “institutional entrepreneurs”, and other times it happens through indirect and emergent mechanisms, including chance and “happy accidents”.

Institutional innovation takes a long time – typically ten to fifty years.

Institutional innovation works differently from technological innovation, which we do well. In contrast, we have a poor understanding of institutional innovation, especially of how to accelerate it or direct it toward specific goals.

Finally, institutions and institutional innovation should not be confused with “policy”. Changes to government policy may be an element of institutional innovation, but they do not encompass the main elements – people, processes, technology, organizations, and culture.

The Need: New Models of Innovation

Through my studies, I have come to believe that institutional innovation is much more complicated [2] than technological innovation. It is almost never a linear process from theory to practice with clearly defined stages.

There is no single best model for institutional innovation. There needs to be creativity in “who leads”, “who follows”, and “when”. The normal roles of government, academics, industry, and civil society organizations may be reversed or otherwise radically redrawn.

Techniques are different, too. It can be orchestrated as a “messy” design process [3]. Fruitful institutional innovation in cyber security might involve some of these:

What all of these have in common is that they produce something that can be tested and can support learning. They are more than talking and consensus meetings.

There are several academic fields that can contribute to defining and analyzing new innovation models, including Institutional Sociology, Institutional Economics, Sociology of Innovation, Design Thinking, and the Science of Science Policy.

Role Models

To identify and test alternative innovation models, we can learn from institutional innovation successes and failures in other fields, including:

Common resource management (sustainability)

Epidemiology data collection and analysis (public health)

Crash and disaster investigation and reporting (safety)

Micro-lending and peer-to-peer lending (financial services)

Emissions credit markets and carbon offsets (energy)

Open software development (technology)

Disaster recovery and response[6] (homeland security)

In fact, there would be great benefit in a joint R&D initiative for institutional innovation that could apply to these other fields as well as cyber security. Furthermore, there would be benefit in making this an international effort, not one limited to the United States.

Microsoft's artificial intelligence (AI) program, Tay, reappeared on Twitter on Wednesday after being deactivated last week for posting offensive messages. However, the program once again went wrong, and Tay's account was set to private after it began repeating the same message over and over to other Twitter users. According to Microsoft, the account was reactivated by accident during testing. "Tay remains offline while we make adjustments," a spokesperson for the company told CNBC via email. "As part of testing, she was inadvertently activated on Twitter for a brief period of time." (emphasis added)

I'm puzzled by this explanation, but I'll go back through the evidence to see which explanation is best supported.

[Update 6:35am It now looks like the "account hack" was really a bungled test session by someone at Microsoft Research -- effectively a "self-hack".

Important: This episode was not "Tay being Tay".]

The @Tayandyou Twitter chatbot has been silent since last Thursday when Microsoft shut it down. Shortly after midnight today, Pacific time, the @Tayandyou Twitter account woke up and started blasting tweets at very high volume. All of these tweets included other Twitter handles in them, maybe from previous tweets, maybe from followers.

But it became immediately apparent that something was different and wrong. These tweets didn't look anything like the ones before, in style, structure, or sentience. From the tweet conversations and from the sequence of events, I believe that the @Tayandyou account was hacked today (March 30), and was active for 15 minutes, sending over 4,200 tweets.

[Update 4:30am: The online media has started posting articles, but they all treat this as more "Tay runs amok". Only The Verge has updated their story. If you read an article that doesn't at least consider that Tay's Twitter account was hacked, could you please add a comment with a link to this post? Thanks.]

Tuesday, March 29, 2016

One of the most surprising things I've discovered in the course of investigating and reporting on Microsoft's Tay chatbot is how the rest of the media (traditional and online) have covered it, and how the digital media works in general.

None of the articles in major media included any investigation or research. None. Let that sink in.

Sunday, March 27, 2016

While nearly all the press about Microsoft's Twitter chatbot Tay (@Tayandyou) is about artificial intelligence (AI) and how AI can be poisoned by trolling users, there is a more disturbing possibility:

There is no AI (worthy of the name) in Tay. (probably)

I say "probably" because the evidence is strong but not conclusive and the Microsoft Research team has not publicly revealed their architecture or methods. But I'm willing to bet on it.

Evidence comes from three places. First is from observing a small, non-random sample of Tay tweet and direct-message sessions (posted by various users). Second is circumstantial, from the composition of the team behind Tay. The third is from a person who claims to have worked at Microsoft Research on Tay until June 2015. He/she made two comments to my first post, but unfortunately deleted the second comment, which had lots of details.