Diveplane — Keeping the humanity in AI
https://www.diveplane.com

Rationalizations for Unsupervised Learning Techniques
https://www.diveplane.com/rationalizations-for-unsupervised-learning-techniques/
Fri, 30 Nov 2018 16:28:37 +0000

In machine learning, there is ongoing debate about what constitutes an explanation or an explainable model, and what is necessary for a given purpose. This post defines the concepts of explanation and interpretation to clarify the difference between the two; discusses how, although interpretation is preferable, explanation is the only option for many machine learning techniques; and then details a labeling technique that enables explanation for unsupervised machine learning methods such as clustering and anomaly detection.

Academic work on explaining models commonly splits the field into interpretability and explainability. An interpretation is a set of understandable reasons why a model made a particular decision, which means that the model itself must be sufficiently understandable to interpret. Examples of interpretable models include decision trees and some limited applications of linear regression. An interpretation may, for example, point to the particular piece of training data that caused the model to make the decision. An explanation, by contrast, is a set of understandable reasons that rationalize or justify a model's decision, and those reasons may or may not have anything to do with why the model actually made it. That is, an explanation may point to a particular piece of training data, even if that piece of training data had no influence on the decision. Examples of explanation tools include LIME, Google’s What-If Tool, and IBM’s AIF360.

Using an interpretable model is essential if one wants to understand why a decision was actually made. The difference between an interpretable model and an explainable model can be illuminated via a simple example. Suppose a person buys a car. An explanation, or rationalization, for the decision might be that the person's old car was becoming unreliable, the new car was in their price range, and the new car had all the features and properties they desired. However, the real reasons the person bought the car, if we were able to interpret the internals of the person's thought process, were that the person was feeling jealous of a friend's car and the salesperson used effective emotional leverage.

Unfortunately, some deployed machine learning models use techniques that are opaque black boxes, hard or impossible to interpret, and thus rationalizations (explanations) are the only available choice for understanding the data. Further, the existing tools for explaining opaque models are designed for supervised learning. How can we apply them to unsupervised learning?

Obtaining rationalizations for unsupervised learning techniques turns out to be straightforward: once the unsupervised technique has been run, the values it returns can be appended to the original data as a new feature or target. First, consider an anomaly detection method that assigns a score to each training case (which may be a feature vector of some sort) indicating how anomalous the point is. This anomaly score is treated as the label for each case, as if it were the output of a supervised learning system, and the labeled data is run through the explanation tool. The same technique can be applied to hard or soft clustering. Hard clustering is easy, since each case belongs to exactly one cluster: the cluster ID is used as the new label and, as before, the data is run through an explanation tool. In soft clustering, each case may belong to any number of clusters, potentially in a fractional manner. Here each cluster can be one-hot encoded, with one label per cluster as if it were the output of a multilabel classification system, and these labels can then be used in conjunction with the other features in the explanation system.
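The three cases above can be sketched in a few lines. This is a minimal illustration, assuming scikit-learn (which the post does not name) for the unsupervised methods, and with synthetic data and a shallow decision tree standing in for the explanation tool; in practice the generated labels would be fed to a tool such as LIME or the What-If Tool in exactly the same way.

```python
# Sketch: turn unsupervised outputs into labels a supervised
# explanation tool can consume. Data and models are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeRegressor, DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # 200 cases, 4 features

# 1. Anomaly detection: the per-case anomaly score becomes a
#    regression target for the explanation step.
scores = IsolationForest(random_state=0).fit(X).score_samples(X)
surrogate = DecisionTreeRegressor(max_depth=3).fit(X, scores)

# 2. Hard clustering: each case gets exactly one cluster ID, which
#    becomes a classification label.
cluster_ids = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
classifier = DecisionTreeClassifier(max_depth=3).fit(X, cluster_ids)

# 3. Soft clustering: fractional memberships are encoded with one
#    binary label column per cluster, as in multilabel classification.
memberships = GaussianMixture(n_components=3, random_state=0).fit(X).predict_proba(X)
labels = (memberships > 0.2).astype(int)  # 0.2 is an arbitrary membership cutoff
```

The surrogate models here are only placeholders: the point is that `scores`, `cluster_ids`, and `labels` are ordinary supervised-learning targets, so any off-the-shelf explanation tool can be applied to them unchanged.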

Diveplane named in Top Ten Startups to Watch by NC Tech
https://www.diveplane.com/diveplane-named-in-top-ten-startups-to-watch-by-nc-tech/
Mon, 26 Nov 2018 16:41:45 +0000

NC Tech has named Diveplane Corporation one of the top ten startups to watch in the 2018 NC Tech Awards.
Diveplane named among Top 10 Startups to Watch by NC TECH
https://www.diveplane.com/diveplane-named-among-top-10-startups-to-watch-by-nc-tech/
Mon, 01 Oct 2018 19:26:29 +0000

Diveplane is honored to be named among the Top 10 Startups to Watch by the North Carolina Tech Association, alongside companies such as Dais X, Fluree, and myBeeHyve.

Commentary: Why Fortnite Avoided the Google Play Store
https://www.diveplane.com/why-fortnite-avoided-the-google-play-store/
Tue, 21 Aug 2018 19:01:57 +0000

Earlier this month, Epic announced it would release Fortnite on Android, but with a twist. Epic has chosen to do what only Amazon and a few others have done: distribute its product directly to consumers and bypass the Google Play storefront altogether.
Epic Games’ Former Pres Wants to Save the World from AI
https://www.diveplane.com/epic-games-former-pres-wants-to-save-the-world-from-ai/
Tue, 07 Aug 2018 18:51:53 +0000

Mike Capps tells us about his new company, Diveplane, and expresses his pride in Fortnite despite no longer being at Epic.
AI Offers Huge Potential, But it Won’t Happen Overnight
https://www.diveplane.com/ai-offers-huge-potential-but-it-wont-happen-overnight/
Mon, 30 Jul 2018 14:26:41 +0000

AI will change how people interact with all sorts of devices, [GM’s VP of Strategy Mike] Abelson said, and voice interfaces “will feel a lot more like Star Trek really quickly.” [Diveplane CEO Michael] Capps said he’s more afraid of the Twilight Zone. “A black box scares the hell out of me,” he said, and to that end he is working on “understandable AI.”

While the company is new, the technology behind the Raleigh artificial-intelligence start-up Diveplane Corp. has been under development for about seven years, CEO Mike Capps says. Diveplane is essentially a spinoff of Hazardous Software, a firm founded in 2007 that has since done “a mix of classified and unclassified work,” says Capps, the former head of Epic Games in Cary.

Mike Capps, the former president of Epic Games, retired six years ago to focus on his family. But yesterday he announced that he has cofounded a new business, Diveplane, that aims to use artificial intelligence for the benefit of humans by making the AI understandable, accurate, safer, and more easily double checked by humans. “We want to keep the humanity in AI,” said Capps in an interview with VentureBeat.