InnovationAfrica » Technology
Shaping the Future Today

Democratizing biotech research

The convergence of software and hardware and the growing ubiquity of the Internet of Things are affecting industry across the board, and biotech labs are no exception. For this Radar Podcast episode, I chatted with DJ Kleinbaum, co-founder of Emerald Therapeutics, about lab automation, the launch of Emerald Cloud Laboratory, and the problem of reproducibility.

Kleinbaum and his co-founder Brian Frezza started Emerald Therapeutics to research cures for persistent viral infections. They didn’t set out to spin up a second company, but their efforts to automate their own lab processes proved so fruitful, they decided to launch a virtual lab-as-a-service business, Emerald Cloud Laboratory. Kleinbaum explained:

“When Brian and I started the company right out of graduate school, we had this platform anti-viral technology, which the company is still working on, but because we were two freshly minted nobody Ph.D.s, we were not going to be able to raise the traditional $20 or $30 million that platform plays raise in the biotech space.

“We knew that we had to be much more efficient with the money we were able to raise. Brian and I both have backgrounds in computer science. So, from the beginning, we were trying to automate every experiment that our scientists ran, such that every experiment was just push a button, walk away. It was all done with process automation and robotics. That way, our scientists would be able to be much more efficient than your average bench chemist or biologist at a biotech company.

“After building that system internally for three years, we looked at it and realized that every aspect of a life sciences laboratory had been encapsulated in both hardware and software, and that that was too valuable a tool to just keep internally at Emerald for our own research efforts. Around this time last year, we decided that we wanted to offer that as a service, that other scientists, companies, and researchers could use to run their experiments as well.”

Bitcoin is a digital money ecosystem

Bitcoin is a collection of concepts and technologies that form the basis of a digital money ecosystem. Units of currency called bitcoins are used to store and transmit value among participants in the bitcoin network. Bitcoin users communicate with each other using the bitcoin protocol, primarily via the Internet, although other transport networks can also be used. The bitcoin protocol stack, available as open source software, can be run on a wide range of computing devices, including laptops and smartphones, making the technology easily accessible.

Users can transfer bitcoin over the network to do just about anything that can be done with conventional currencies, such as buy and sell goods, send money to people or organizations, or extend credit. Bitcoin technology includes features that are based on encryption and digital signatures to ensure the security of the bitcoin network. Bitcoins can be purchased, sold, and exchanged for other currencies at specialized currency exchanges. Bitcoin, in a sense, is the perfect form of money for the Internet because it is fast, secure, and borderless.
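
To make the signature mechanism concrete, here is a minimal sketch in Python using the third-party ecdsa package and secp256k1, the curve bitcoin uses. The double SHA-256 step echoes bitcoin’s design, but the transaction string and field layout are illustrative only, not the actual bitcoin transaction format.

    import hashlib
    from ecdsa import SigningKey, SECP256k1, BadSignatureError

    # Generate a keypair on secp256k1, the curve bitcoin uses.
    private_key = SigningKey.generate(curve=SECP256k1)
    public_key = private_key.get_verifying_key()

    # An illustrative "transaction" (not the real bitcoin wire format).
    transaction = b"pay 0.5 BTC from alice_address to bob_address"

    # Bitcoin hashes transaction data with double SHA-256 before signing.
    digest = hashlib.sha256(hashlib.sha256(transaction).digest()).digest()

    # The sender signs with the private key; any participant can verify
    # with the matching public key, without learning the private key.
    signature = private_key.sign(digest)
    try:
        public_key.verify(signature, digest)
        print("signature valid: transfer accepted")
    except BadSignatureError:
        print("signature invalid: transfer rejected")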

Tags: Payment systems, Bitcoin
Solid 2015: submit your proposal

Last May, we engaged in something of an experiment when Joi Ito and I presented Solid, our conference about the intersection between software and the physical world. We drew the program as widely as possible and invited demos from a broad group of large and small companies, academic researchers, and artists. The crowd that came — more than 1,400 people — was similarly broad: a new interdisciplinary community that’s equally comfortable in the real and virtual worlds started to, well, solidify.

I’m delighted to announce that Solid is returning. The next Solid will take place on June 23-25, 2015, at Fort Mason in San Francisco. It’ll be bigger, with more space and a program spread across three days instead of two, but we’re taking care to maintain and nourish the spirit of the original event. That begins with our call for proposals, which opens today. Some of our best presentations in May came from community members we hadn’t yet met who blew away our program committee with intriguing proposals. We’re committed to discovering new luminaries and giving them a chance to speak to the community. If you’re working on interesting things, I hope you’ll submit a proposal.

We’re expecting a full house at this year’s event, so we’ve opened up ticket reservations today as well — you can reserve your ticket here, and we’ll hold your spot for seven days once registration opens early next year.

It’d be an understatement to say that the hardware movement and the Internet of Things (IoT) are hot right now. According to Google, search interest in the IoT has more than doubled in the last 12 months. The race by software companies to reach into the physical world, and the parallel race by manufacturers to develop their software and intelligence offerings, is bringing about all sorts of exciting collisions.

A screen shot of the Google Trends results looking at the interest in “Internet of Things” and “IoT” over time.

I’d like to hear from you about what’s going on in hardware right now: how to design great products, how to build them in socially responsible ways, how to program them so that they’re efficient and delightful. Solid will be rich with these kinds of stories, told by engineers, artists, scholars, and executives from giant enterprises and nascent start-ups.

That said, my greatest pleasure in programming the 2014 edition of Solid was in featuring presentations that framed our conversation in terms of art, craft, societal impact, theoretical depth, and long-term context. Thoughtful, fresh takes on the hardware movement and the Internet of Things are welcome.

Firms’ resource deployment and project leadership in open source software development

International Journal of Innovation and Technology Management, Ahead of Print.

When using the open source software (OSS) development model, firms face the challenge of balancing the tension between integrating knowledge from external individuals and the desire for control. In our investigation, we draw upon a data set consisting of 109 projects with 912 individual programmers and 110 involved firms, and we show how these projects are governed in terms of project leadership. Our four hypotheses show that, despite the wish for external knowledge from volunteer programmers, firms rely on their own resources or those of other firms to control a project; that projects with low firm participation are mainly led by volunteer committers; and that projects with high firm participation are mainly led by paid leaders. This research extends the existing literature by providing empirical evidence in that area and helps to deepen our understanding of firm participation in OSS projects as a form of open innovation activity.

Tags: open source software, Intellectual property law
Great user experience + clear value proposition = value innovation

Editor’s note: this is an excerpt from our forthcoming book UX Strategy; it is part of a free curated collection of chapters from the O’Reilly Design library — download a free copy of the Experience Design ebook here.

The word “value” seems to be used everywhere. It’s found in almost all traditional and contemporary business books since the 1970s. In Management: Tasks, Responsibilities, Practices, Peter Drucker talks about how customer values shift over time. He gives an example of how a teenage girl will buy a shoe for its fashion, but when she becomes a working mother, she will probably buy a shoe for its comfort and price. In 1984, Michael Lanning first coined the term “value proposition” to explain how a firm proposes to deliver a valuable customer experience. That same year, Michael Porter defined the term “value chain” as the chain of activities that a firm in a specific industry performs in order to deliver a valuable product.

All these perspectives on value are important, but let’s fast-forward to 2004 when Robert S. Kaplan discussed how intangible assets like computer software were the ultimate source of “value creation.” He said, “Strategy is based on a differentiated customer value proposition. Satisfying customers is the source of sustainable value creation.”

There are a lot of things in that quote that align with what we just learned [earlier in the chapter] about business strategy — differentiation and satisfied customers. But there’s one thing that we didn’t discuss — the fact that we are designing digital products: software, apps, and other things that users find on the Internet and use every day. Often, the users of these digital products don’t have to pay for the privilege of using them. If a business model is supposed to help a company achieve sustainability, how can you do that when the online marketplace is overrun with free products? Well, we learned how many companies, like Waze, found a sustainable business model: sharing their crowdsourced data made them lucrative to other companies like Google. But in order to get the data, they had to provide value to their customer base for mass adoption, and that value was based entirely on innovation.

“Innovative” means doing something that is new, original, and important enough to shake up a market. As W. Chan Kim and Renée Mauborgne describe in Blue Ocean Strategy, value innovation is “the simultaneous pursuit of differentiation and low cost, creating a leap in value for both buyers and the company.” This is accomplished by looking for ways that a company can eliminate, reduce, raise, and create the factors that determine the cost and quality of a product.

When we transpose this theory to the world of digital products, the value proposition manifests itself as a unique feature set. Features are product characteristics that deliver benefits to the user. In most cases, fewer features equals more value. Value can be created by consolidating features from relevant existing solutions (e.g., Meetup and Evite) and solving a problem for users in a more intuitive way (e.g., EventBrite). Value can be created by transcending the value propositions of existing platforms (e.g., Google Maps + crowdsourcing = Waze). And sometimes value is created by consolidating formerly disparate user experiences into one elegant, simple solution that offers a one-stop shop for a user task, such as shooting a video on your phone and sharing it (e.g., Vine and Instagram). We will deconstruct these complex techniques in Chapter 7: Storyboarding Value Innovation for Digital Products.

But for now, let’s discuss the most important reason that we want to be unique and disruptive with both our products and our business models. There are bigger opportunities in unknown market spaces. We like to call these unknown market spaces “blue oceans.” This term comes from the book Blue Ocean Strategy that I mentioned earlier. The authors discuss their studies of 150 strategic moves spanning more than 100 years and 30 industries. They explain how the companies behind the Ford Model T, Cirque du Soleil, and the iPod chose unconventional strategies rather than fighting head-to-head with direct competitors in an existing industry. The sea of other competitors with similar products is known as a “red ocean.” Red oceans are full of sharks that compete for the same customer by offering lower prices and eventually turning a product into a commodity.

In the corporate world, the impulse to compete by destroying your rivals is rooted in military strategy. In war, the fight typically plays out over a specific terrain. The battle gets bloody when one side wants what the other side has — whether it be oil, land, shelf space, or eyeballs. In a blue ocean, the opportunity is not constrained by traditional boundaries. It’s about breaking a few rules that aren’t quite rules yet or even inventing your own game that creates an uncontested new marketplace and space for users to roam.

A perfect example of a company with a digital product that did this is Airbnb. Airbnb is a “community marketplace” for people to list, discover, and book sublets of practically anything from a tree house in Los Angeles to a castle in France. What’s amazing about this is that their value proposition has completely disrupted the travel industry. It’s affecting the profit margins of hotels so much that Airbnb was banned in New York City. Its value proposition is so compelling that once customers try it, it’s hard to go back to the old way of booking a place to stay or subletting a property.

For instance, I just came back from a weekend in San Francisco with my family. Instead of booking a hotel that would have cost us upwards of $1,200 (two rooms for two nights at a 3.5-star hotel), we used Airbnb and spent half of that. But for us, it wasn’t just about saving money; it was about being in a gorgeous and spacious two-bedroom home closer to the locals and their foodie restaurants. The 3% commission fee we paid to Airbnb was negligible. Interestingly, the corporate lawyer who owned this SF home was off in Paris with her family. She was also staying at an Airbnb, which could have been paid for using some of the revenue ($550+) from her transaction with us. Everybody won! Except, of course, the hotels that lost our business.

Airbnb achieves this value innovation by coupling a killer user experience design with a tantalizing value proposition. A value proposition is the reason why customers accept one solution over another. Sometimes the solution solves a problem we didn’t even know we had. Sometimes it creates an undeniable desire. A value proposition consists of a bundle of products and/or services (“features”) that cater to the requirements of a specific customer segment. Airbnb offers a value proposition to both sides of its two-sided market: the people who list their homes and those who book places to stay.

Airbnb chose not to focus on beating the existing competition (other subletting sites and hotels) at their game. Instead they made the competition irrelevant by creating a leap in value for all of their potential users. They did this by creating a marketplace that improves upon the weaknesses of all of their competition. Airbnb is more trustworthy than Craigslist. It has much more inventory than HomeAway and VRBO because listings are free. They provide value along the way — from the online experience (booking/subletting) to the real-world experience (showing up on vacation/getting paid for your sublet).

To create a blue ocean product, you need to change the way that people think about doing something. Value innovation is about changing the rules of the game.

Airbnb did this by enabling a free-market sub-economy in which quality and trust were given a high value that spanned the entire customer journey from the online experience to the real-world experience. And they catered to both of their customer groups (subletters and renters) with distinct feature sets that turned what was once a potentially creepy endeavor (short-term subletting) into something with incredible potential for everybody involved.

There are many other products causing widespread disruption to the status quo. Uber, which matches drivers with people who need rides, is threatening the taxi and limousine industries. Kickstarter is changing the way businesses are financed. Twitter is disrupting how we get news. And we can never forget how Craigslist broke the business models of local newspapers by providing a superior system for personal listings.

Writing my post about AI and summoning the demon led me to re-read a number of articles on Cathy O’Neil’s excellent mathbabe blog. I highlighted a point Cathy has made consistently: if you’re not careful, modelling has a nasty way of enshrining prejudice with a veneer of “science” and “math.”

Cathy has consistently made another point that’s a corollary of her argument about enshrining prejudice. At O’Reilly, we talk a lot about open data. But it’s not just the data that has to be open: it’s also the models. (There are too many must-read articles on Cathy’s blog to link to; you’ll have to find the rest on your own.)

You can have all the crime data you want, all the real estate data you want, all the student performance data you want, all the medical data you want, but if you don’t know what models are being used to generate results, you don’t have much. You’re going to be showing black people homes in predominantly black neighborhoods not because you want to keep white neighborhoods pure, but because that’s where the model says they’re most likely to buy. You’re going to be stopping and searching more minority drivers without cause not because you’re prejudiced, but because the model says they’re more likely to be arrested for crimes. And if you stop more minority drivers, you almost certainly will arrest more minority drivers, so the model becomes self-fulfilling.
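
As a toy illustration of that feedback loop (all numbers invented), suppose two groups have an identical underlying offense rate, but the model allocates next year’s stops in proportion to this year’s arrests:

    # Two groups with an IDENTICAL underlying offense rate.
    offense_rate = 0.05
    total_stops = 1000

    # Year zero inherits a historical bias: more stops of group A.
    stops = {"group_a": 600, "group_b": 400}

    for year in range(1, 6):
        # Arrests simply track stops, because the offense rates are equal.
        arrests = {g: stops[g] * offense_rate for g in stops}
        # The model allocates next year's stops by arrest share.
        total = sum(arrests.values())
        stops = {g: round(total_stops * arrests[g] / total) for g in stops}
        print(year, stops)  # the 600/400 split persists every year

The arrest data appears to confirm the model year after year, even though the two groups behave identically; the inherited bias never washes out.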

Intentions mean nothing when they’re hidden behind a model that makes decisions for you. A recent study of police profiling in my state, Connecticut, showed not only that blacks were more likely to be stopped than whites, but also that when they were stopped and searched, whites were significantly more likely to have something illegal in their cars. How would we build a model from this data, and what would it show? How would we know what the model is doing, if it’s never examined? Would the column with surprising data be dropped because it leads to unexpected and politically unacceptable results? Would it be weighted less than a column on, say, past arrests? If the model isn’t open, how would you ever know? As we become more dependent on modeling, more and more of our world becomes inscrutable. Without the models, you will never understand the way financial markets are manipulated. Without the models, you will never understand how school teachers are evaluated. You may never know why the real estate agent showed you certain houses, or why you’re paying so much for insurance. Is that OK? It all seems nice and scientific.

Open data enables the democratization of data. It’s important to be able to do your own analysis of public data sets. But if you really want to understand the effect data is having on law enforcement, on insurance, or on education, or on the economy, you need the models. Cathy has documented being stonewalled on requests for the models, which are almost always viewed as proprietary. That’s a problem, particularly when the modellers (not the poets) become the “unacknowledged legislators of the world” (Shelley, A Defense of Poetry).

A few days ago, Elon Musk likened artificial intelligence (AI) to “summoning the demon.” As I’m sure you know, there are many stories in which someone summons a demon. As Musk said, they rarely turn out well.

There’s no question that Musk is an astute student of technology. But his reaction is misplaced. There are certainly reasons for concern, but they’re not Musk’s.

The problem with AI right now is that its achievements are greatly over-hyped. That’s not to say those achievements aren’t real, but they don’t mean what people think they mean. Researchers in deep learning are happy if they can recognize human faces with 80% accuracy. (I’m skeptical about claims that deep learning systems can reach 97.5% accuracy; I suspect that the problem has been constrained in some way that makes it much easier. For example, asking “is there a face in this picture?” or “where is the face in this picture?” is much different from asking “what is in this picture?”) That’s a hard problem, a really hard problem. But humans recognize faces with nearly 100% accuracy. For a deep learning system, that’s an almost inconceivable goal. And 100% accuracy is orders of magnitude harder than 80% accuracy, or even 97.5%.

What kinds of applications can you build from technologies that are only accurate 80% of the time, or even 97.5% of the time? Quite a few. You might build an application that created dynamic travel guides from online photos. Or you might build an application that measures how long diners stay in a restaurant, how long it takes them to be served, whether they’re smiling, and other statistics. You might build an application that tries to identify who appears in your photos, as Facebook has. In all of these cases, an occasional error (or even a frequent error) isn’t a big deal. But you wouldn’t build, say, a face-recognition-based car alarm that was wrong 20% of the time — or even 2% of the time.

Similarly, much has been made of Google’s self-driving cars. That’s a huge technological achievement. But Google has always made it very clear that their cars rely on the accuracy of their highly detailed street view. As Peter Norvig has said, it’s a hard problem to pick a traffic light out of a scene and determine if it is red, yellow, or green. It is trivially easy to recognize the color of a traffic light that you already know is there. But keeping Google’s street view up to date isn’t simple. While the roads themselves change infrequently, towns frequently add stop signs and traffic lights. Dealing with these changes to the map is extremely difficult, and it’s only one of many challenges that remain to be solved. Humans know how to interpret traffic cones, how to think about cars or other humans behaving erratically, and what to do when the lane markings are covered by snow; cars don’t yet. That ability to think like a human when something unexpected happens is what makes a self-driving car a “moonshot” project. Humans certainly don’t perform perfectly when the unexpected happens, but we’re surprisingly good at it.

So, AI systems can do, with difficulty and partial accuracy, some of what humans do all the time without even thinking about it. I’d guess that we’re 20 to 50 years away from anything that’s more than a crude approximation to human intelligence. It’s not just that we need bigger and faster computers, which will be here sooner than we think. We don’t understand how human intelligence works at a fundamental level. (Though I wouldn’t assume that understanding the brain is a prerequisite for artificial intelligence.) That’s not a problem or a criticism, it’s just a statement of how difficult the problems are. And let’s not misunderstand the importance of what we’ve accomplished: this level of intelligence is already extremely useful. Computers don’t get tired, don’t get distracted, and don’t panic. (Well, not often.) They’re great for assisting or augmenting human intelligence, precisely because as an assistant, 100% accuracy isn’t required. We’ve had cars with computer-assisted parking for more than a decade, and they’ve gotten quite good. Larry Page has talked about wanting Google search to be like the Star Trek computer, which can understand context and anticipate what the human wants. The humans remain firmly in control, though, whether we’re talking to the Star Trek computer or Google Now.

I’m not without concerns about the application of AI. First, I’m concerned about what happens when humans start relying on AI systems that really aren’t all that intelligent. AI researchers, in my experience, are fully aware of the limitations of their systems. But their customers aren’t. I’ve written about what happens when HR departments trust computer systems to screen resumes: you get some crude pattern matching that ends up rejecting many good candidates. Cathy O’Neil has written on several occasions about machine learning’s potential for dressing up prejudice as “science.”

The problem isn’t machine learning itself, but users who uncritically expect a machine to provide an oracular “answer,” and faulty models that are hidden from public view. In a not-yet published paper, DJ Patil and Hilary Mason suggest that you search Google for GPS and cliff; you might be surprised at the number of people who drive their cars off cliffs because the GPS told them to. I’m not surprised; a friend of mine owns a company that makes propellers for high-performance boats, and he’s told me similar stories about replacing the propellers for clients who run their boats into islands.

David Ferrucci and the other IBMers who built Watson understand that Watson’s potential in medical diagnosis isn’t to have the last word, or to replace a human doctor. It’s to be part of the conversation, offering diagnostic possibilities that the doctor hasn’t considered, and the reasons one might accept (or reject) those diagnoses. That’s a healthy and potentially important step forward in medical treatment, but do the doctors using an automated service to help make diagnoses understand that? Does our profit-crazed health system understand that? When will your health insurance policy say “you can only consult a doctor after the AI has failed”? Or “Doctors are a thing of the past, and if the AI is wrong 10% of the time, that’s acceptable; after all, your doctor wasn’t right all the time, anyway”? The problem isn’t the tool; it’s the application of the tool. More specifically, the problem is forgetting that an assistive technology is assistive, and assuming that it can be a complete stand-in for a human.

Second, I’m concerned about what happens if consumer-facing researchers get discouraged and leave the field. Although that’s not likely now, it wouldn’t be the first time that AI was abandoned after a wave of hype. If Google, Facebook, and IBM give up on their “moonshot” AI projects, what will be left? I have a thesis (which may eventually become a Radar post) that a technology’s future has a lot to do with its origins. Nuclear reactors were developed to build bombs, and as a consequence, promising technologies like Thorium reactors were abandoned. If you can’t make a bomb from it, what good is it?

If I’m right, what are the implications for AI? I’m thrilled that Google and Facebook are experimenting with deep learning, that Google is building autonomous vehicles, and that IBM is experimenting with Watson. I’m thrilled because I have no doubt that similar work is going on in other labs, in other places, that we know nothing about. I don’t want the future of AI to be shortchanged because researchers hidden in government labs choose not to investigate ideas that don’t have military potential. And we do need a discussion about the role of AI in our lives: what are its limits, what applications are OK, what are unnecessarily intrusive, and what are just creepy. That conversation will never happen when the research takes place behind locked doors.

At the end of a long, glowing report about the state of AI, Kevin Kelly makes the point that with every advance in AI, every time computers achieve something new (playing chess, playing Jeopardy, inventing new recipes, maybe next year playing Go), we redefine the meaning of our own human intelligence. That sounds funny; I’m certainly suspicious when the rules of the game are changed every time it appears to be “won,” but who really wants to define human intelligence in terms of chess-playing ability? That definition leaves out most of what’s important in humanness.

Perhaps we need to understand that our own intelligence is in competition with our artificial, not-quite intelligences. And perhaps we will, as Kelly suggests, realize that maybe we don’t really want “artificial intelligence.” After all, human intelligence includes the ability to be wrong, or to be evasive, as Turing himself recognized. We want “artificial smartness”: something to assist and extend our cognitive capabilities, rather than replace them.

That brings us back to “summoning the demon,” and the one story that’s an exception to the rule. In Goethe’s Faust, Faust is admitted to heaven: not because he was a good person, but because he never ceased striving, never became complacent, never stopped trying to figure out what it means to be human. At the start, Faust mocks Mephistopheles, saying “What can a poor devil give me? When has your kind ever understood a Human Spirit in its highest striving?” (lines 1176-7, my translation). When he makes the deal, it isn’t the typical “give me everything I want, and you can take my soul”; it’s “When I lie on my bed satisfied, let it be over…when I say to the Moment ‘Stay! You are so beautiful,’ you can haul me off in chains” (1691-1702). At the end of this massive play, Faust is almost satisfied; he’s building an earthly paradise for those who strive for freedom every day, and dies saying “In anticipation of that blessedness, I now enjoy that highest Moment” (11585-6), even quoting the terms of his deal.

So, who’s won the bet? The demon or the summoner? Mephistopheles certainly thinks he has, but the angels differ, and take Faust’s soul to heaven, saying “Whoever spends himself striving, him we can save” (11936-7). Faust may be enjoying the moment, but it’s still in anticipation of a paradise that he hasn’t built. Mephistopheles fails at luring Faust into complacency; rather, he is the driving force behind his striving, a comic figure who never understands that by trying to drag Faust to hell, he was pushing him toward humanity. If AI, even in its underdeveloped state, can serve this function for us, calling up that demon will be well worth it.

When a team first starts to consider using Hadoop for data storage and processing, one of the first questions that comes up is: which file format should we use?

This is a reasonable question. HDFS, Hadoop’s data storage, is different from relational databases in that it does not impose any data format or schema. You can write any type of file to HDFS, and it’s up to you to process it later.

The usual first choice of file formats is either comma-delimited text files, since these are easy to dump from many databases, or JSON, often used for event data or data arriving from a REST API.

There are many benefits to this approach — text files are readable by humans and therefore easy to debug and troubleshoot. In addition, it is very easy to generate them from existing data sources and all applications in the Hadoop ecosystem will be able to process them.

But there are also significant drawbacks to this approach, and often these drawbacks only become apparent over time, when it can be challenging to modify the file formats across the entire system.

Part of the problem is performance — text formats have to be parsed every time they are processed. Data is typically written once but processed many times; text formats add a significant overhead to every data query or analysis.

But the worst problem by far is the fact that with CSV and JSON data, the data has a schema, but the schema isn’t stored with the data. For example, CSV files have columns, and those columns have meaning. They represent IDs, names, phone numbers, etc. Each of these columns also has a data type: they can represent integers, strings, or dates. There are also some constraints involved — you can dictate that some of those columns contain unique values or that others will never contain nulls. All this information exists in the head of the people managing the data, but it doesn’t exist in the data itself.

The people who work with the data don’t just know about the schema; they need to use this knowledge when processing and analyzing the data. So the schema we never admitted to having is now coded in Python and Pig, Java and R, and every other application or script written to access the data.

And eventually, the schema changes. Someone refactors the code generating the JSON and moves fields around, perhaps renaming a few fields. A DBA adds new columns to a MySQL table, and this is reflected in the CSVs dumped from the table. Now all those applications and scripts must be modified to handle both file formats. And since schema changes happen frequently, and often without warning, this results in both ugly and unmaintainable code, and in grumpy developers who are tired of having to modify their scripts again and again.
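
A short sketch of the failure mode (file and column names invented): the schema lives only in the code, as positional indexes and implicit types, so any upstream change breaks every script like this one, or worse, makes it silently read the wrong field.

    import csv

    # The "schema" exists only here, as positions and implicit types.
    with open("customers.csv") as f:
        for row in csv.reader(f):
            customer_id = int(row[0])  # assumes column 0 is an integer ID
            name = row[1]              # assumes column 1 is the name
            phone = row[2]             # wrong (or crashes) the day the DBA
            print(customer_id, name, phone)  # adds a column before "phone"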

There is a better way of doing things.

Apache Avro is a data serialization project that provides schemas with rich data structures, compressible file formats, and simple integration with many programming languages. The integration even supports code generation — using the schema to automatically generate classes that can read and write Avro data.

Since the schema is stored in the file, programs don’t need to know about the schema in order to process the data. Humans who encounter the file can also easily extract the schema and better understand the data they have.
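
Here is a minimal sketch using the avro Python package; the record and field names are invented, and in some versions of the library the parser is spelled avro.schema.Parse rather than parse.

    import json
    import avro.schema
    from avro.datafile import DataFileReader, DataFileWriter
    from avro.io import DatumReader, DatumWriter

    # An explicit, typed schema that will travel with the data itself.
    schema = avro.schema.parse(json.dumps({
        "type": "record",
        "name": "Customer",
        "fields": [
            {"name": "id", "type": "int"},
            {"name": "name", "type": "string"},
            {"name": "phone", "type": ["null", "string"], "default": None},
        ],
    }))

    # Write records; the schema is embedded in customers.avro itself.
    writer = DataFileWriter(open("customers.avro", "wb"), DatumWriter(), schema)
    writer.append({"id": 1, "name": "Alice", "phone": "555-0100"})
    writer.append({"id": 2, "name": "Bob", "phone": None})
    writer.close()

    # Read records; no out-of-band schema knowledge is needed.
    reader = DataFileReader(open("customers.avro", "rb"), DatumReader())
    for record in reader:
        print(record)
    reader.close()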

When the schema inevitably changes, Avro uses schema evolution rules to make it easy to interact with files written using both older and newer versions of the schema — default values get substituted for missing fields, unexpected fields are ignored until they are needed, and data processing can proceed uninterrupted through upgrades. When starting a data analysis project, most developers don’t think about how they’ll be able to handle gradual application upgrades through a large organization. The ability to independently upgrade the applications that are writing the data and the applications reading the data makes development and deployment significantly easier.
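
For instance, if version 2 of the hypothetical Customer schema above adds a field with a default, readers using the new schema can still consume files written before the field existed, and the default is filled in automatically:

    # Added in schema v2; for old files, readers substitute the default.
    {"name": "email", "type": ["null", "string"], "default": None}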

The problem of managing schemas across diverse teams in a large organization was mostly solved when a single relational database contained all the data and enforced the schema on all users. These days, data is not nearly as unified — it moves between many different data stores, structured, unstructured or semi-structured. Avro is a very versatile and convenient way of bringing order to chaos. Avro formatted data can be stored in files, in unstructured stores like HBase or Cassandra, and can be sent through messaging systems like Kafka. All the while, applications can use the same schemas to read the data, process it, and analyze it — regardless of where and how it is stored.

Decisions made early in a project can come back to bite you later. Hadoop offers a rich ecosystem of tools and solutions to choose from, making the decision process more challenging than it was back when data was always stored and processed in relational databases. File formats are no exception — there are probably 10 different file types that are supported across the Hadoop ecosystem. Some formats are easy for beginners to use; some offer performance optimizations for specific use cases. But for general-purpose data storage and processing, I always tell my customers: just use Avro.

How is UX for IoT different?

Designing for IoT comes with a bunch of challenges that will be new to designers accustomed to pure digital services. How tricky these challenges prove will depend on:

The maturity of the technology you’re working with

The context of use or expectations your users have of the system

The complexity of your service (e.g. how many devices the user has to interact with).

Below is a summary of the key differences between UX for IoT and UX for digital services. Some of these are a direct result of the technology of embedded devices and networking. But even if you are already familiar with embedded device and networking technology, you might not have considered the way it shapes the UX.

Functionality can be distributed across multiple devices with different capabilities

IoT devices come in a wide variety of form factors, with varying input and output capabilities. Some may have screens, such as heating controllers or washing machines. Some may have other ways of communicating with us (such as flashing LEDs or sounds).

Some may have no input or output capabilities at all and are unable to tell us directly what they are doing. Interactions might be handled by web or smartphone apps. Despite the differences in form factors, users need to feel as if they are using a coherent service rather than a bunch of disjointed UIs. It’s important to consider not just the usability of individual UIs but interusability: distributed user experience across multiple devices.

The locus of the user experience may be in the service

Although there’s a tendency to focus on the novel devices in IoT, much of the information processing or data storage often depends on the Internet service. This means that the service around a connected device is often just as critical to the user experience, if not more so, than the device itself. For example, the London Oyster travel card is often thought of as the focus of the payment service. But the Oyster service can be used without a card at all, via an NFC-enabled smartphone or bank card. The card is just an ‘avatar’ for the service (to borrow a phrase from the UX expert Mike Kuniavsky).

We don’t expect internet-like failures from the real world

It’s frustrating when a web page is slow to download or a Skype call fails. But we accept that these irritations are just part of using the Internet. By contrast, real-world objects respond to us immediately and reliably.

When we interact with a physical device over the Internet, that interaction is subject to the same latency and reliability issues as any other Internet communication. So, there’s the potential for delays in response and for our requests and commands to go missing altogether. This could make the real world start to feel very broken. Imagine if you turned your lights on and they took two minutes to respond, or failed to come on at all.

In theory, there could be other unexpected consequences of things adopting Internet-like behaviors. In the Warren Ellis story The Lich House, a woman is unable to shoot an intruder in her home: her gun cannot contact the Internet for the authentication that would allow her to fire it. This might seem far-fetched, but we already have objects that require authentication, such as Zipcars.

IoT is largely asynchronous

When we design for desktops, mobiles, and tablets, we tend to assume that they will have constant connectivity. Well-designed mobile apps handle network outages gracefully, but tend to treat them as exceptions to normal functioning. We assume that the flow of interactions will be reasonably smooth, even across devices. If we make a change on one device (such as deleting an email), it will quickly propagate across any other devices we use with the same service.

Many IoT devices run on batteries and need to conserve electricity. Maintaining network connections uses a lot of power, so they only connect intermittently. This means that parts of the system can be out of sync with each other, creating discontinuities in the user experience. For example, imagine your heating is set to 19 degrees Celsius. You use the heating app on your phone to turn it up to 21C, but it takes a couple of minutes for your battery-powered heating controller to go online to check for new instructions. During this time, the phone says 21C, and the controller says 19C.
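
One common design response, sketched below with invented names, is to model each property as two values, the state the user last requested and the state the device last reported, so the UI can be honest about being out of sync instead of pretending a single current value exists:

    import time

    class Thermostat:
        """Tracks desired vs. reported state across an intermittent link."""

        def __init__(self, reported_temp):
            self.reported = reported_temp  # last value confirmed by the device
            self.desired = reported_temp   # last value requested by the user
            self.requested_at = None

        def user_sets(self, temp):
            # The phone app updates instantly...
            self.desired = temp
            self.requested_at = time.time()

        def device_checks_in(self, temp):
            # ...but the battery-powered controller syncs minutes later.
            self.reported = temp

        def display(self):
            if self.desired != self.reported:
                return "Set to %sC (device last reported %sC, updating...)" % (
                    self.desired, self.reported)
            return "%sC" % self.reported

    t = Thermostat(19)
    t.user_sets(21)
    print(t.display())  # Set to 21C (device last reported 19C, updating...)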

Code can run in many more places

The configuration of devices and code that makes a system work is called the system model. In an ideal world, users should not have to care about this. We don’t need to understand how conventional Internet services, like Amazon, work in order to use them successfully. But as a consumer of an IoT service right now, you can’t always get away from some of this technical detail.

A typical IoT service is composed of:

one or more embedded devices

a cloud service

perhaps a gateway device

one or more control apps running on a different device, such as a mobile, tablet, or computer.

Compared to a conventional web service, there are more places where code can run. There are more parts of the system that can, at any point, be offline. Depending on what code is running on which device, some functionality may at any point be unavailable.

For example, imagine you have a connected lighting system in your home. It has controllable bulbs or fittings, perhaps a gateway that these connect to, an Internet service, and a smartphone app to control them all. You have an automated rule set up to turn on some of your lights at dusk if there’s no one home.

If your home Internet connection goes down, does that rule still work? If the rule runs in the Internet service or your smartphone, it won’t. If it runs in the gateway, it will. As a user, you want to know whether your security lights are running or not. You have to understand a little about the system model to understand which devices are responsible for which functionality, and how the system may fail.
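
A toy sketch (all names invented) makes the point: the rule’s logic is trivial, but whether it survives an Internet outage depends entirely on which box the function runs on.

    from datetime import datetime, time

    # Simplification: real systems derive dusk from location and date.
    DUSK = time(17, 30)

    def dusk_rule(home_occupied, lights, now=None):
        # Running in the cloud or on a phone, this silently stops working
        # when the home's connection drops; running on the local gateway,
        # it keeps working but loses access to cloud-only data sources.
        now = now or datetime.now()
        if now.time() >= DUSK and not home_occupied:
            for light in lights:
                light.turn_on()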

It would be nice if we could guarantee no devices would ever lose connectivity, but that’s not realistic. And IoT is not yet a mature set of technologies in the way that ecommerce is, so failures are likely to be more frequent. System designers have to ensure that important functions (such as home security alarms) continue to work as well as possible when parts go offline and make these choices explicable to users.

Devices are distributed in the real world

The shift from desktop to mobile computing means that we now use computers in a wide variety of situations. Hence, mobile design requires a far greater emphasis on understanding the user’s needs in a particular context of use. IoT pushes this even further: computing power and networking are embedded in more and more of the objects and environments around us. For example, a connected security system can track not just whether the home is occupied, but who is in it, and potentially video record them. Hence, the social and physical contexts in which connected devices and services can be used are even more complex and varied.

Remote control and automation are programming-like activities

In 1982, the HCI researcher Ben Shneiderman defined the concept of direct manipulation: user interfaces based on direct manipulation “depend on visual representation of the objects and actions of interest, physical actions or pointing instead of complex syntax, and rapid incremental reversible operations whose effect on the object of interest is immediately visible. This strategy can lead to user interfaces that are comprehensible, predictable and controllable.” Ever since, this has been the prevailing trend in consumer UX design. Direct manipulation is successful because interface actions are aligned with the user’s understanding of the task. They receive immediate feedback on the consequences of their actions, which can be undone.

IoT creates the potential for interactions that are displaced in time and space: configuring things to happen in the future, or remotely. For example, you might set up a home automation rule to turn on a video camera and raise the alarm when the house is unoccupied and a motion sensor is disturbed. Or you might unlock your porch door from your work computer to allow a courier to drop off a parcel.

Both of these break the principles of direct manipulation. To control things that happen in future, you must anticipate your future needs and abstract the desired behavior into a set of logical conditions and actions. As the HCI researcher Alan Blackwell points out, this is basically programming. It is a much harder cognitive task than a simple, direct interaction. That’s not necessarily a bad thing, but it may not be appropriate for all users or all situations. It impacts usability and accessibility.
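
Written out in the trigger-condition-action form that many automation platforms use (an illustrative sketch, not any particular product’s API), the home-security rule above makes the point plain: the user is composing logic over future, unseen events.

    # The security rule as data: a tiny program the user "wrote" by
    # anticipating a future situation and abstracting it into conditions.
    rule = {
        "trigger": "motion_sensor.disturbed",
        "conditions": ["house.unoccupied"],
        "actions": ["camera.start_recording", "alarm.sound"],
    }

    def handle_event(event, state, devices):
        # e.g. state = {"house.unoccupied": True}
        if event == rule["trigger"] and all(state[c] for c in rule["conditions"]):
            for action in rule["actions"]:
                device, method = action.split(".")
                getattr(devices[device], method)()  # devices["alarm"].sound()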

Unlocking the door remotely is an easier action to comprehend, but we are distanced from the consequences of our actions, and this poses other challenges. Can we be sure the door was locked again once the parcel had been left? A good system should send a confirmation, but if our smartphone (or the lock) lost connectivity, we might not receive this.

Complex services can have many users, multiple UIs, many devices, and many rules and applications

A simple IoT service might serve only one or two devices: e.g. a couple of connected lights. You could control these with a very simple app. But as you add more devices, there are more ways for them to coordinate with one another. If you add a security system with motion sensors and a camera, you may wish to turn on one of your lights when the alarm goes off. So, the light effectively belongs to two functions or services: security and lighting. Then add in a connected heating system that uses information from the security system to know when the house is empty, and assume there are several people in the house with slightly different access privileges to each system. For example, some can change the heating schedule, some can only adjust the current temperature, some have admin rights to the security system, and some can only set and unset the alarm. What started out as a straightforward system has become a complex web of interrelationships.

For a user, understanding how this system works will become more challenging as more devices and services are added. It will also become more time consuming to manage.

Many differing technical standards make interoperability hard

The Internet is an amazing feat of open, interoperating standards, but before embedded devices were connected, there was no need for appliance manufacturers to share common standards. As we begin to connect these devices together, this lack of common technology standards is causing headaches. Just getting devices talking to one another is a big enough challenge, as there are many different network standards. Getting them to coordinate in sensible ways is an order of magnitude more complicated.

The consumer experience right now is of a selection of mostly closed, manufacturer-specific ecosystems. Devices within the same manufacturer’s ecosystem, such as Withings, will work together. But this is the only given. In the case of Withings, this means that devices share data with a common Internet service, which the user accesses via a smartphone app. Apple’s Airplay is an example of a proprietary ecosystem in which devices talk directly to each other.

We’re starting to see manufacturers collaborating with other manufacturers, too. So, your Nest Protect smoke detector can tell your LIFX lightbulbs to flash red when smoke is detected. (This is done by connecting the two manufacturers’ Internet services rather than connecting the devices.)

There are also some emerging platforms that seek to aggregate devices from a number of manufacturers and enable them to interoperate. The connected home platform SmartThings supports a range of network types and devices from manufacturers such as Schlage and Kwikset (door locks), GE and Honeywell (lighting and power sockets), Sonos (home audio), Philips Hue, Belkin, and Withings. But the platform has been specifically configured to work with each of these. You cannot yet buy just any device and expect it to work well with a platform such as SmartThings.

For the near future, the onus will be largely on the consumer to research which devices work with their existing devices before purchasing them. Options may be limited. In addition, aggregating different types of devices across different types of networks tends to result in a lowest common denominator set of basic features. The service that promises to unify all your connected devices may not support some of their more advanced or unique functions: you might be able to turn all the lights on and off but only dim some of them, for example. It will be a while before consumers can trust that things will work together with minimal hassle.

IoT is all about data

Networked, embedded devices allow us to capture data from the world that we didn’t have before, and use it to deliver better services to users. For example, drivers looking for parking spaces cause an estimated 30% of traffic congestion in US cities. Smart parking applications such as Streetline’s Parker use sensors in parking spaces to track where spaces are open for drivers to find via a mobile app. Likewise, Opower uses data captured from smart energy meters to suggest ways in which customers could save energy and money.

Networked devices with onboard computation are also able to use data, and in some cases act on it autonomously. For example, a smart energy meter can easily detect when electricity use rises above the baseload. This is a good indicator that someone is in the house and up and about. This data could be used by a heating system to adjust the temperature or schedule timing.
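
A small sketch of that inference (the numbers and threshold are invented for illustration):

    def baseload(history_watts):
        # Always-on draw (fridge, standby devices): the minimum recent reading.
        return min(history_watts)

    def probably_occupied(history_watts, current_watts, margin=150):
        # Someone is likely up and about when the current draw clearly
        # exceeds the baseload; the 150W margin is an invented threshold.
        return current_watts > baseload(history_watts) + margin

    overnight = [210, 205, 215, 200, 220]      # watts while the house slept
    print(probably_occupied(overnight, 230))   # False: close to baseload
    print(probably_occupied(overnight, 1800))  # True: kettle, lights, TV on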

To quote another phrase from Mike Kuniavsky: “information is now a design material.”

Tags: Application programming interface, Internet of Things, Android (operating system), AllJoyn, 3D printing
If it weren’t for the people…

Editor’s note: At some point, we’ve all read the accounts in newspapers or on blogs that “human error” was responsible for a Twitter outage, or worse, a horrible accident. Automation is often hailed as the heroic answer, poised to eliminate the specter of human error. This guest post from Steven Shorrock, who will be delivering a keynote speech at Velocity in Barcelona, exposes human error as dangerous shorthand. The more nuanced way through involves systems thinking, marrying the complex fabric of humans and the machines we work with every day.

In Kurt Vonnegut’s dystopian novel ‘Player Piano’, automation has replaced most human labour. Anything that can be automated, is automated. Ordinary people have been robbed of their work, and with it purpose, meaning and satisfaction, leaving the managers, scientists and engineers to run the show. Dr Paul Proteus is a top manager-engineer at the head of the Ilium Works. But Proteus, aware of the unfairness of the situation for the people on the other side of the river, becomes disillusioned with society and has a moral awakening. In the penultimate chapter, Paul and his best friend Finnerty, a brilliant young engineer turned rogue-rebel, reminisce sardonically: “If only it weren’t for the people, the goddamned people,” said Finnerty, “always getting tangled up in the machinery. If it weren’t for them, earth would be an engineer’s paradise.”

While the quote may seem to caricature the technophile engineer, it does contain a certain truth about our collective mindsets when it comes to people and systems. Our view is often that the system is basically safe, so long as the human works as imagined. When things go wrong, we have a seemingly innate human tendency to blame the person at the sharp end. We don’t seem to think of that someone – pilot, controller, train driver or surgeon – as a human being who goes to work to ensure things go right in a messy, complex, demanding and uncertain environment.

Our mindset seems to inform our attitude to automation, but it is a mindset that – if it ever were valid – will be less valid in the future.

Human as Hazard and Human as Resource

The view of ‘human as hazard’ seems to be embedded in our traditional approach to safety management (see EUROCONTROL, 2013; Hollnagel, 2014), which Erik Hollnagel has characterized as Safety-I. It is not that this is necessarily a (conscious) mindset of those of us in safety management. Rather, it is how the human contribution is predominantly treated in our language and methods – as a source of failure (and, in fairness, as a source of recovery from failures, though this is much less prominent). Most of our safety vocabulary with regard to people is negative. In our narratives and methods, we talk of human error, violations, non-compliance and human hazard, among other terms. We routinely investigate things that go wrong, but almost never investigate things that go right.

This situation has emerged from a paradigm that defines safety in terms of avoiding things that go wrong. It is also partly a by-product of the translation of hard engineering methods to sociotechnical systems and situations. As the American humanistic psychologist Abraham Maslow famously remarked in his book The Psychology of Science, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.” If we only have words and tools to describe and analyze human failures, then human failures are all we will see. Yet this way of seeing is also a way of not seeing. What we do not see so clearly is when and how things go right.

It is not just the safety profession. It is, to an extent, management and all of society. At a societal level, we seem to accept a narrative that systems are basically safe as designed, but that people don’t use them as designed, and these blunders cause accidents. Hence the ubiquitous “Human error blamed for…” in newspaper headlines. From a human as hazard perspective, it seems logical to automate humans out wherever possible. Where this is not possible, hard constraints would seem to make sense, limiting the degrees of freedom as much as possible and suppressing opportunity to vary from work-as-designed.

An alternative view is that humans are a resource (or, for those who object to the term’s connotations, are resourceful). In this view, people are the only flexible part of the system and a source of system resilience. People give the system purpose and form interconnections to allow this purpose to be achieved. They have unique strengths, including creativity, a capacity to innovate, and an ability to adapt. As it is impossible to completely specify a sociotechnical system, it is humans – not automation – who must make the system work, anticipating, recognizing and responding to developments.

This view of the human in a safety management context seems to resonate with a more fundamental view of the human in management thinking more generally. Over 50 years ago, Douglas McGregor identified two mindsets regarding human motivation that shape management thinking: Theory X and Theory Y. Theory X dictates that employees are inherently lazy, selfish and dislike work. The logical response to this mindset is command-and-control management, requiring conformity and obedience to processes designed by management, and a desire to automate whatever can be automated, because this removes a source of trouble.

The Theory Y mindset is that people need and want to work; they are ambitious and actively seek out responsibility. Given the right conditions, there is joy in work, and so work and play are not two distinct things. Rather than needing to be ‘motivated’ by managers, people are motivated by the work itself and the meaning, satisfaction and joy they get out of it. Importantly, humans are creative problem solvers.

Toward a humanistic and systems perspective

Two things seem to be certain for the future. The first is obvious: we will see more automation. The second is less obvious, but equally certain: Whatever mindset motivates the decision to automate, it will be necessary to move toward a more humanistic view of people that incorporates Hollnagel’s Human as Resource and McGregor’s Theory Y. For this view to prevail, we will need to reform our ideas about work away from command-and-control and towards a more humanistic and systems perspective.

It is inevitable that work with automation will not always be as designed or imagined. While part of the design philosophy may have sought to suppress human performance variability, humans must remain variable in operation. As well as the rare high-risk scenarios, there will be disturbances and surprises, and even routine situations will require human flexibility, creativity and adaptation. This does not call for technophobia, but humanistic and systems thinking. People will be key to making the system as a whole work.

We, the people

Finnerty's exclamation raises an important question: who are the people? It seems that he was talking about people on the front line. But they are not the only people. We might think of four roles for people in the system: system actors (e.g. front-line employees, customers), system experts/designers (e.g. engineers, human factors and human resources specialists), system decision makers (e.g. managers and purchasers), and system influencers (e.g. the public, regulators) (Dul et al., 2012). When automation goes wrong, it tangles up people in all of these roles; the system actors (front-line staff and customers) just pay the highest price. Responsibility for automation in the context of the system must therefore be shared among all of us, because automation does not exist just within the boundary of a 'human-automation interaction' between the controller or pilot and the machinery. Automation exists within a wider system. So how can we make sense of this?

Making sense of human work with automation

Our experiences with automation present us with some puzzling situations, and we often struggle to make sense of these from our different perspectives. For example, we might wonder why someone ‘ignored’ an alarm that seemed quite clear to us, or why they did not respond in the way that (we think) we would have responded. We might also wonder why someone would have purchased a particular system, or made a particular design decision, or trained users in a certain way. To make sense of these sorts of situations, and to ensure that things go right, we need to consider the overall system and all of our interactions and influences with automation, not isolated individuals, parts, events or outcomes.

Involve the right people. The people who do the work are the specialists in their work and are critical for system improvement. When trying to make sense of situations and systems, who do we need to involve as co-investigators, co-designers, co-decision makers and co-learners?

Listen to people's stories and experiences. People do things that make sense to them given their goals, understanding of the situation and focus of attention at that time. How will we understand others' (multiple) experiences with automation from their local perspectives?

Reflect on your mindset, assumptions and language. People usually set out to do their best and achieve a good outcome. How can we move toward a mindset of openness, trust and fairness, understanding actions in context using non-judgmental and non-blaming language?

Consider the demand on the system and the pressure this imposes. Demands and pressures relating to efficiency and capacity have a fundamental effect on performance. How can we understand demand and pressure over time from the perspectives of the relevant field experts, and how this affects their expectations and the system’s ability to respond?

Investigate the adequacy of resources and the appropriateness of constraints. Success depends on adequate resources and appropriate constraints. How can we make sense of the effects of resources and constraints, on people and the system, including the ability to meet demand, the flow of work and system performance as a whole?

Look at the flows of work, not isolated snapshots. Work progresses in flows of inter-related and interacting activities. How can we map the flows of work from end to end through the system, and the interactions between the human, technical, information, social, political, economic and organizational elements?

Understand trade-offs. People have to apply trade-offs in order to resolve goal conflicts and to cope with the complexity of the system and the uncertainty of the environment. How can we best understand the trade-offs that all system stakeholders make when it comes to automation, as demands, pressures, resources and constraints change – during design, development, operation and maintenance?

Understand necessary adjustments and variability. Continual adjustments are necessary to cope with variability in demands and conditions, and performance of the same task or activity will vary. How can we gain an understanding of performance adjustments and variability in normal operations as well as in unusual situations, over the short and longer term?

Consider cascades and surprises. System behavior in complex systems is often emergent; it cannot be reduced to the behavior of components and is often not as expected. How can we get a picture of how our systems operate and interact in ways not expected or planned for during design and implementation, including surprises related to automation in use and how disturbances cascade through the system?

Understand everyday work. Success and failure come from the same source – ordinary work. How can we best observe and discuss how ordinary work is actually done?

Conclusion

If it weren’t for the people, it is true that there would be no-one to get tangled up in the machinery. But if it weren’t for the people, there would be no system at all: no purpose, no demand, no performance. We need to reflect, then, on our mindsets about us, the people, about the systems we work with and within, and about how we will ensure that things go right.

This paper documents the relationship between appropriation instruments and innovation activity in Tunisia. It focuses on the factors that determine the appropriation of innovation activities, such as the value of firms' sales, networking, science–industry linkages, competitive pressure and demand pull. To this end, we propose an econometric analysis of 586 Tunisian firms using simple and bivariate logit regressions. We find significant interaction effects between appropriability and R&D activity. The results confirm that patenting is primarily driven by firm-level factors, not by industry affiliation. Access to external knowledge and firm-specific characteristics are the factors most closely linked to innovation protection. Firms that use appropriation instruments have a higher probability of investing in R&D than others. Indeed, the capacity to integrate external knowledge and to perform R&D (networking, science–industry linkages, cooperation with other firms, belonging to a group) is related to the use of appropriation instruments. We find that appropriation instruments have a significant effect on product innovation; the effect on process innovation is not significant for Tunisian firms.

Looking back at the evolution of our Strata events, and the data space in general, we marvel at the impressive data applications and tools now being employed by companies in many industries. Data is having an impact on business models and profitability. It's hard to find a non-trivial application that doesn't use data in a significant manner. Companies that use data and analytics to drive decision-making continue to outperform their peers.

Up until recently, access to big data tools and techniques required significant expertise. But tools have improved and communities have formed to share best practices. We’re particularly excited about solutions that target new data sets and data types. In an era when the requisite data skill sets cut across traditional disciplines, companies have also started to emphasize the importance of processes, culture, and people.

As we look into the future, here are the main topics that guide our current thinking about the data landscape.

Note: This document represents our thinking as of October 2014. You can keep up with the latest analysis and developments in the data space through the O’Reilly Data newsletter.

Cognitive augmentation

The combination of big data, algorithms, and efficient user interfaces can be seen in consumer applications such as Waze or Google Now. Our interest in this topic stems from the many tools that democratize analytics and, in the process, empower domain experts and business analysts. In particular, novel visual interfaces are opening up new data sources and data types.

“Moving dots” (e.g., tracking data from athletes in motion) are being analyzed by companies that specialize in spatio-temporal pattern recognition. The startup Second Spectrum provides analytics to the coaches and front offices of many professional basketball teams. In the near future, its technology and recommendations will be available in real time to coaching staffs during in-game situations.

The convergence of cheap sensors, fast networks, and distributed computation

The Internet of Things (IoT) will require systems that can process and unlock massive amounts of event data. These systems will draw from analytic platforms developed for monitoring IT operations. Beyond data management, we’re following recent developments in streaming analytics and the analysis of large numbers of time series.

Data (science) pipelines

Analytic projects involve a series of steps that often require different tools. There are a growing number of companies and open source projects that integrate a variety of analytic tools into coherent user interfaces and packages. Many of these integrated tools enable replication, collaboration, and deployment. This remains an active area, as specialized tools rush to broaden their coverage of analytic pipelines.

Evolving, maturing marketplace of big data components

Many popular components in the big data ecosystem are open source. As such, many companies build their data infrastructure and products by assembling components like Spark, Kafka, Cassandra, and ElasticSearch, among others. Contrast that to a few years ago when many of these components weren’t ready (or didn’t exist) and companies built similar technologies from scratch. But companies are interested in applications and analytic platforms, not individual components. To that end, demand is high for data engineers and architects who are skilled in maintaining robust data flows, data storage, and assembling these components.

Data scientists, design, and social science

To be clear, data analysts have always drawn from social science (e.g., surveys, psychometrics) and design. We are, however, noticing that many more data scientists are expanding their collaborations with product designers and social scientists.

Thinking with Data: This book by Max Shron provides an overview of ideas and techniques from the social sciences.

Building a data culture

“Data-driven” organizations excel at using data to improve decision-making. It all starts with instrumentation. “If you can’t measure it, you can’t fix it,” says DJ Patil, VP of product at RelateIQ. In addition, developments in distributed computing over the past decade have given rise to a group of (mostly technology) companies that excel in building data products. In many instances, data products evolve in stages (starting with a “minimum viable product”) and are built by cross-functional teams that embrace alternative analysis techniques.

Related resources:

Building Data Science Teams: Data scientists are at the forefront of innovation in many data-driven organizations. This report offers practical advice for constructing teams that can drive that innovation.

Just Enough Math is a video series that introduces mathematical concepts using business cases.

Data Jujitsu: A primer on organizing teams and building data products.

Perils of big data

Every few months, there seems to be an article criticizing the hype surrounding big data. Dig deeper and you find that many of the criticisms point to poor analysis and highlight issues known to experienced data analysts. Our perspective is that issues such as privacy and the cultural impact of models are much more significant.

If you Google “next industrial revolution,” you’ll find plenty of candidates: 3D printers, nanomaterials, robots, and a handful of new economic frameworks of varying exoticism. (The more generalized ones tend to sound a little more plausible than the more specific ones.)

The phrase came up several times at a track I chaired during our Strata + Hadoop World conference on big data. The talks I assembled focused on the industrial Internet — the merging of big machines and big data — and generally concluded that in the next industrial revolution, software will take on the catalytic role previously played by the water wheel, steam engine, and assembly line.

The industrial Internet is part of the new hardware movement, and, like the new hardware movement, it’s more about software than it is about hardware. Hardware has gotten easier to design, manufacture, and distribute, and it’s gotten more powerful and better connected, backed up with a big-data infrastructure that’s been under construction for a decade or so.

All of that means it’s an excellent way to extend the reach of software into the physical world, so people who have spent their lives in software are turning toward hardware now, hoping to build little rafts that will carry their code out of the comfort of the server room and down the unexplored rivers of the physical world.

The problems of the industrial Internet are particularly interesting because they require an enormous amount of domain knowledge in addition to clever software thinking. Our first speaker at our Strata + Hadoop World Industrial Internet session, Daniel Koffler, described aluminum smelting pots that use 600,000 amps of current — enough to disable electronic equipment and magnetize cars nearby. Our second speaker, Ami Daniel, described the lengths that smugglers and savvy merchant captains go to in order to obscure the data streams that come from oceangoing ships, and the skepticism and precision that his team uses to outsmart them.

In my closing panel with executives from Accenture, GE, and Pivotal, we spent the most time talking about integration and skills — how to draw together a lot of experts to work on extraordinarily complicated systems. If you approach these kinds of problems unilaterally as a software generalist, you won’t get very far.

For a few more thoughts on the next industrial revolution, I encourage you to watch my colleague Jenn Webb interview Nate Oostendorp, a co-founder of Sight Machine (and another speaker in my industrial Internet program). Sight Machine uses computer vision and other software techniques to help factories and other physical environments improve their operations. (Full disclosure: O’Reilly’s sister firm, O’Reilly AlphaTech Ventures, is an investor in Sight Machine.)

Design is both disrupting and being disrupted. It's disrupting markets, organizations, and relationships, and forcing us to rethink how we live. The discipline of design is also experiencing tremendous growth and change, largely driven by economic and technology factors. No longer an afterthought, design is now an essential part of a product, and it may even be the most important part of a product's value.

The latest devices, appliances, and services are beautiful, but their true significance is how they improve our lives. Steve Jobs said it well: “Design is the fundamental soul of a man-made creation that ends up expressing itself in successive outer layers of the product or service.”

There are two areas where this notion of design creating value resonates: in the emerging space of the Internet of Things (IoT) and within organizations that treat design as a key corporate asset. In the coming months, I’ll be digging deeper into both of these topics. Below, I outline my initial thinking as I begin these explorations.

Internet of Things and design: Complexity is an opportunity

The IoT, with its massive data sets, remote (and in some cases autonomous) control, promiscuous connectivity, and ubiquitous sensors, requires designers to completely rethink interfaces. This represents a bigger change for designers than the iterations from print to web and web to mobile. Whether you're an interaction designer, UX designer, visual designer, or an industrial designer, if you're working in — or even near — the IoT space, your role and responsibilities are undergoing a transformation.

We're in the midst of what Andy Huntington calls the "Geocities of Things" phase. (For those who don't remember or weren't "Internet conscious" at the time, Geocities was a '90s web hosting service that let anyone build a rudimentary online presence. As you might imagine, the results were mixed.) Huntington notes that maturity requires mass experimentation, which is exactly what we're seeing in the IoT design space now. The tools and resources for designing "smart stuff" are being democratized. Designers are moving from proprietary, overly complex tools to free, easier-to-use ones, and non-designers are beginning to learn to use them, too. Funding for new ideas is within reach for many more people with the help of platforms like Kickstarter. Just as web design and development matured through experimentation, IoT design is evolving along the same path. While IoT still feels a bit messy, I expect it will become more relevant and user-centered as it matures.

I see plenty of experimentation in IoT design as engineers and designers come together to identify products that go beyond what customers think they need to what they want. Successful companies like Nest, Belkin, and Samsung, and lesser-known companies like Lively, have figured out key ingredients: identifying use cases, building cross-disciplinary teams, and leading with design to create products customers love.

IoT is also changing how designers are perceived and what is expected of them. Claire Rowland talks about the design stack for IoT: visual design, interaction design, interusability, industrial design, service design, conceptual models, productization, and platform design. Designers’ responsibilities are expanding at a dizzying pace.

The convergence of the physical and digital requires different groups coming together to solve real human problems. In addition to hardware and software engineers, industrial designers, interaction designers, visual designers, and user researchers all need to collaborate. To realize the promise of IoT, teams of designers, product managers, and engineers need to take on the hard problems — standards, big data, cloud computing, and privacy to name a few. Interaction designers now need to embrace topics like security and performance. Building the future requires industrial and interaction designers collaborating to make sense of the whirl of technology and human-centered needs.

In the digital world, poor design decisions cause annoyance or frustration, such as waiting for a web page to load or having to re-enter your password multiple times. In the physical world, the consequences of poor design are far more obvious and significant. What happens if your smoke detector goes off in error, or your lights are delayed in turning on when you arrive home? A recent example is Nest's recall of Protect due to a gestural design feature. In this case, the recall was a simple fix: disabling the "Wave" feature over Wi-Fi. When design and IoT intersect, the stakes are higher, but because these devices are connected, the fixes can sometimes be easier.

For many years, interaction designers have focused on a single user using a single device. IoT shifts designers to think in terms of context and connected worlds. User experience grows beyond a one-to-one relationship to become a web of touchpoints and conversations. Devices pull, digest, and process data to serve us in ways we may not recognize or understand. The best smart devices and apps help augment decision making by taking on the heavy burden of processing and summarizing massive stores of sensor data, combined with machine learning that improves results by analyzing collective behavior. The design of everyday things has never been more challenging and exciting. In his book Enchanted Objects and through his talks, David Rose describes this as "objects that anticipate."

Design drives profit and innovation

Organizations that value design and treat it as a corporate asset increase their odds of success. Conversely, organizations that minimize design’s impact and continue to treat it as an adjacent activity will fail.

Need proof?

The Danish Design Center (DDC) ran a study using the Design Ladder, a tool for measuring the economic impact of investing in design. The main finding from the study was that companies more heavily invested in design had gross revenues 22% higher than those investing less in design.

The Design Management Institute (DMI) conducted research to identify companies that are design leaders. The big takeaway: design-driven organizations outperformed the S&P 500 by 228% over the last 10 years. Among the top-performing firms: Apple, Coca-Cola, Ford, Herman Miller, IBM, Intuit, Newell Rubbermaid, Procter & Gamble, and Starbucks. What constitutes a design-driven company? The study used six criteria: "publicly traded in the U.S. for 10+ years; deployment of design as an integrated function across the entire enterprise; evidence that design investments and influence are increasing; clear reporting structure and operating model for design; experienced design executives at the helm directing design activities; and tangible senior leadership-level commitment for design." Jeneanne Rae, who worked with DMI, provides additional analysis of the study in this Harvard Business Review post.

If companies know that design drives innovation and increases profit, why aren’t all organizations embracing design as a core business asset? The simple answer is: it’s hard. Those who believe in design’s power to transform business results know that it’s difficult to make cultural changes to an organization, which is precisely the kind of shift required for design to become part of a company’s fabric. Just ask IBM, which is investing $100 million in UX this year and hiring 1,000 employees to support new design initiatives.

While many startups have embraced design, integrating design in an established enterprise can be far more complicated. In some companies, there are product managers who facilitate the collaboration. In other cases, consultants are brought in to lead this facilitation. Many companies are in the early stages of figuring out how to align business and design to create better products and services for their customers.

A growing number of companies make design a core cultural attribute and use design to develop products and services that customers need and love. Airbnb uses design to solve a distinctly human problem: how to make it comfortable for all parties to have a stranger stay at your home. Airbnb’s user experience makes the renter feel comfortable about their decision, and it compels the property owner to show the real condition of the property. The service’s design serves as an essential part of the product offering, arguably as important as the marketplace for casual rentals Airbnb now mostly owns.

Uber offers another example. Aaron Levie of Box.net wrote on Twitter: “Uber is a $3.5 billion lesson in building for how the world should work instead of optimizing for how the world does work.” The brilliance of Uber is that it uses the new capabilities provided by GPS-enabled smartphones — stored payments, seamless anonymous communication, and reputation systems — in the hands of both driver and passenger to completely rethink how taxi service ought to be delivered. Rather than simply recreating old processes on a new platform, Uber puts the user first, and applies design thinking to change key expectations. Its frictionless user experience is delighting customers and disrupting the transportation market.

Companies such as Airbnb and Uber treat design not as a feature but as intrinsic to the product or service they are trying to sell. They use design to create a natural, almost intangible coherence that makes users more comfortable and more willing to spend their time and money.

Design and business degree programs are also being challenged to change. Stanford’s Institute of Design has been leading the charge with a focus on design thinking. Several MBA programs are now incorporating design and creative problem solving into their curricula as well. As noted in this Wall Street Journal article, universities beyond Stanford’s Institute of Design are embracing a hybrid approach, meshing business acumen with design thinking. This hybrid model means tomorrow’s leading designers will have a blend of design, communication, and business skills.

Data-driven thinking is already an essential part of successful businesses, and now it's expanding its influence to guide design groups as well. A few years back, Google was viewed as pushing data-driven design a little too far with the notorious story of 41 shades of blue, but today's organizations are taking a more balanced approach to data and design. In this article, Rochelle King, global head of design for Spotify, explains how her team used data-informed decision making to redesign Spotify's site. The best designs are human-centered and use the data gathered during the discovery process to identify problems and possible solutions in the form of products and services. This human-centered approach enables organizations to ask the right questions and gain insights about behavior, emotion, engagement, and motivation. One of my favorite examples of using data to focus on user needs is the redesign of gov.uk. By putting the user's needs first — rather than the government's — Mike Bracken and his team folded thousands of disparate websites into one, improving people's lives while saving taxpayers money. Their user-centered design was crafted with data.

Finally, the successful organizations of tomorrow will embed design in their DNA, if venture capitalists have anything to say about it. In recent years, VCs have hired designers as partners. Most recently, John Maeda, former president of the Rhode Island School of Design, joined Kleiner Perkins Caufield & Byers, and Irene Au, former head of user experience at Google, joined Khosla Ventures. Google Ventures has several design partners on staff. Each of these VCs has brought on well-known designers for different responsibilities, but the message is clear: investors recognize design's impact on the bottom line.

Our exploration of experience design: What comes next

In the coming months, I’ll be exploring the future of design, the changing role of designers, and how design is shaping our lives in new and different ways. We have a lot planned, so we hope you’ll come along as we dig into these fascinating spaces.

I’m also interested in knowing what you see and think. If you’re a designer, how are you building your skill set? How is design viewed within your company? What’s your take on the Internet of Things? If you’re in a large corporation or a startup, how are you leveraging design? If you’re involved with education and training, how is your curriculum changing to address the growing demand for designers? Share your thoughts with me on Twitter at @marytreseler or email me at mary@oreilly.com.

Tags: design management, DESIGN, harvard business school, Danish design, investment
The Future of AngularJS

AngularJS, for me, was a revelation the first time I encountered it. I was coming from GWT (Google Web Toolkit), and seeing our large application shrink in lines of code by 90% was close to a spiritual experience. I was a convert from day one, because I knew how bad things were otherwise. Ever since, I have been associated with AngularJS in one way or another, and have seen how it makes things simple with data binding, templating, routing, unit testing, and so much more. But the more I used it, the more some things didn't make sense, from naming to concepts. I got the hang of it, but I never really came to like how complex directives needed to be, or how limiting the built-in routing was. While AngularJS made it trivial to write applications, it also made it trivial to write slow, hard-to-maintain applications if you didn't understand how it all worked together.
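To make the appeal concrete, here is a minimal sketch of the two-way data binding the paragraph refers to; the module name, controller name, and markup are illustrative, not taken from any particular app:

<!-- The input and the greeting stay in sync automatically via two-way binding. -->
<div ng-app="demoApp" ng-controller="GreetingController">
  <input type="text" ng-model="name">
  <h1>Hello, {{name}}!</h1>
</div>
<script>
  // Define the module and a controller that seeds the model bound by ng-model.
  angular.module('demoApp', [])
    .controller('GreetingController', ['$scope', function ($scope) {
      $scope.name = 'world'; // edits in the input update this, and vice versa
    }]);
</script>

Typing in the input updates the heading immediately, with no event-listener or DOM-manipulation code; that is the productivity win being described.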
With each version of AngularJS, and with each improvement to the documentation, things got steadily better:

Data binding performance improvements were made with each successive release.

The AngularJS documentation underwent a major overhaul to make it easier to consume and understand.

Routing became an optional module.

The community created many modules and plugins to improve things, from localization and translation (angular-translate) to better routing (ui-router) to whatever else you might need.

AngularJS has undergone significant changes under the covers from version 1.0 to 1.3, improving almost every single part of the framework, visibly or otherwise. It has gone from an experimental MVC framework to a stable, well-supported framework with significant adoption. I have done more workshops on AngularJS in the last year than in all the years before it combined.

But the core AngularJS team (which has also grown) has not been sitting around resting on its laurels. After 1.3, instead of looking at incremental improvements, they decided to tackle what the team has been calling AngularJS 2.0. Taking into account feedback from developers, as well as inspiration from other brilliant frameworks out there, AngularJS 2.0 aims to be as revolutionary a step forward from AngularJS 1.0 as AngularJS itself was when it was first released. And unlike last time, the community has been involved significantly, with all the design docs available for review and comment. Here's why you should be looking forward to AngularJS 2.0 (though, of course, any and all of it might change by the time it is released).

Forward Looking

With AngularJS 1.3, AngularJS dropped support for IE8. AngularJS 2.0 looks to continue this trend, focusing on faster, modern browsers (IE10/11, Chrome, Firefox, Opera & Safari) on the desktop, and Chrome on Android, iOS 6+, Windows Phone 8+ & Firefox mobile. This allows the AngularJS codebase to stay short and succinct (without needing hacks for older browsers), and also allows AngularJS to support the latest and greatest features without worrying about backward compatibility and polyfills. The expectation is that by the time AngularJS 2.0 rolls out, most of these browsers will be the standard defaults, and developers can focus on building apps specifically for them.

ECMAScript 6 + Dependency Injection, Redux

ECMAScript 6 is what JavaScript will look like in a few years: a truly object-oriented language with native class support, first-class modules with asynchronous loading in the spirit of AMD (Asynchronous Module Definition), and tons of improvements to the syntax that allow for more concise, declarative code. The entire AngularJS 2.0 codebase will be written in ES6. But you might think: hey, none of the current browsers support all the ES6 features — what does that mean for me as a developer?

Have no worries. Even though the entire AngularJS source code will be written in ES6, it will compile down to standard ES5 (what we call JavaScript today) using the Traceur compiler. The AngularJS team is also adding support for annotations and assertions to the Traceur compiler, so that the AngularJS applications you write can be even more declarative, by just adding annotations instead of any crazy syntax (the current dependency injection system, anyone?). So you might be able to write AngularJS code like this very soon (not necessarily the final syntax, though):

// @Inject tells the injector which dependencies to supply when constructing Pantry.
@Inject(CoffeeMaker, Microwave)
class Pantry {
  constructor(coffeeMaker, microwave) {
    this.coffeeMaker = coffeeMaker;
    this.microwave = microwave;
  }

  // Delegates to the injected coffee maker; finishedCb is invoked when brewing is done.
  makeCoffee(finishedCb) {
    this.coffeeMaker.turnOn();
    this.coffeeMaker.makeCoffee(finishedCb);
  }
}

And AngularJS 2.0 will be fully backwards compatible with ES5 (it has to be), so you can continue writing in an equivalent ES5 syntax without ever having to deal with ES6, if you so decide.
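For comparison, here is a rough sketch of what the equivalent ES5 registration might look like with the AngularJS 1.x dependency injection system; the module and service names are illustrative:

// ES5 equivalent: a constructor function plus $inject metadata in place of
// the ES6 class and @Inject annotation.
function Pantry(coffeeMaker, microwave) {
  this.coffeeMaker = coffeeMaker;
  this.microwave = microwave;
}
Pantry.$inject = ['coffeeMaker', 'microwave'];

Pantry.prototype.makeCoffee = function (finishedCb) {
  this.coffeeMaker.turnOn();
  this.coffeeMaker.makeCoffee(finishedCb);
};

// Register as a service so the injector constructs it with its dependencies
// (assumes 'coffeeMaker' and 'microwave' services are registered elsewhere).
angular.module('kitchen', []).service('pantry', Pantry);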

Faster, Buttery-smooth

Nowadays, everything needs to be faster, faster, faster: 60 fps, load times under 400 ms, and so on. With version 2.0, the focus is on speed. How fast can the UI be updated? How can data binding be sped up? One approach is to replace the dirty checking that AngularJS currently does with Object.observe, a proposal to add native support for model change listeners and data binding. AngularJS 2.0 will use this to significantly speed up the whole data-binding and update cycle.

But Object.observe is still only supported in Chrome Canary builds, and no other browser. It seems there is still quite some time before it ships as a default everywhere. Thankfully, the AngularJS folks have been hard at work on change detection, and have some insights on how to significantly improve dirty checking for objects and arrays without needing Object.observe support in the browser. The aim is to be able to handle several thousand bindings in under 1 ms. The design doc lays out how AngularJS 2.0 plans to handle this.
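As a rough illustration of why Object.observe is attractive, here is the proposed API used in isolation (runnable at the time only in browsers that shipped the feature; the object and property names are illustrative):

var model = { count: 0 };

// Object.observe delivers asynchronous change records, so a framework can
// react to mutations without repeatedly diffing ('dirty checking') the model.
Object.observe(model, function (changes) {
  changes.forEach(function (change) {
    console.log(change.type, change.name, 'was', change.oldValue);
  });
});

model.count = 1; // eventually logs: update count was 0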

Flexible Routing

Routing was a core AngularJS feature in version 1.0, and became an optional module (ngRoute) in version 1.2. Part of this was because of some excellent work done by the open source community to support a wide variety of requirements and needs, like child and sibling states. The UI-Router module stepped up and handled this beautifully, while providing a syntax similar to that of ngRoute.

With version 2.0, the aim is to bring some of these features (nested states, sibling views) into the core AngularJS router. At the same time, there were multiple other requirements that were not easily satisfied by routing in AngularJS:

State-based transitions: UI-Router supported this, but it was not part of the core AngularJS routing module. In AngularJS 2.0, sub-states and sibling states – where different parts of the view correspond to different states of the URL and application – will be specified declaratively and simply as part of the routing (see the sketch after this list).

Authentication and authorization: This was done using resolves in AngularJS, but AngularJS 2.0 plans to introduce a common, easy-to-understand idiom for authorization and authentication, to be able to state requirements like:

User needs to be logged in

Only admins can access a certain page

Only members of a certain group can navigate to the admin section

Preserving state: In the current version of AngularJS, if the user quickly switches back and forth between two routes in the UI, the controller and views associated with those routes are destroyed and recreated every time. This might not be optimal for a variety of use cases. Thus, AngularJS 2.0 is exploring ways to preserve state between such transitions (through an LRU cache or something similar), allowing state transitions to be faster and smoother from a user's perspective.
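To ground these ideas, here is a rough sketch of how nested states and a resolve-based authorization check are expressed today with the community UI-Router module; the state names and the currentUser service are illustrative, and the 2.0 router's final syntax may differ:

angular.module('app', ['ui.router'])
  .config(['$stateProvider', function ($stateProvider) {
    $stateProvider
      // Parent state; 'admin.users' below is a nested (child) state.
      .state('admin', {
        url: '/admin',
        templateUrl: 'admin.html',
        resolve: {
          // Authorization via resolve: rejecting the promise blocks the transition.
          auth: ['$q', 'currentUser', function ($q, currentUser) {
            return currentUser.isAdmin ? $q.when(true) : $q.reject('forbidden');
          }]
        }
      })
      .state('admin.users', {
        url: '/users',
        templateUrl: 'admin-users.html',
        controller: 'UserListController'
      });
  }]);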

Data Persistence

One last major piece of the puzzle that is AngularJS 2.0 is the persistence story. AngularJS started with pure XHR request support (through $http and $resource). Sockets were introduced through third-party services and integrations. Offline support was done through LocalStorage on a per-application basis. These have become common enough in applications nowadays that rethinking the core AngularJS communication and persistence story was necessary. To this extent, the following are planned:

Phase 1 of AngularJS 2.0 would come with support for HTTP communication (using ngHttp); Local Storage, Session Storage, and IndexedDB access (through ngStore); and the WebSocket API (through ngWebSocket). Each of these would be an optional module that could be included on a per-project basis.

Phase 2 would build on top of this to enable offline-first applications, which would be able to check connectivity status, cache data offline, and more.

Phase 3 would finally aim to build an ngData module that would allow developers to build Model classes representing their data, acting as an abstraction layer on top of the Phase 1 and Phase 2 modules. Thus, it would be able to handle offline access, querying the network, fetching pages, and so on.

The aim is to give developers the tools and the language to be able to declaratively create APIs and paradigms that reflect their data model and the way it is to be accessed, fetched and shown to the users. The ability to build offline-first or realtime multi-user presence applications should be possible with just the core AngularJS modules.
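Until those modules land, a hand-rolled version of this pattern in AngularJS 1.x gives a feel for what is being abstracted away. A minimal sketch, assuming a hypothetical /api/articles endpoint and using localStorage as a crude offline cache (ngHttp and ngStore above are planned names, not yet available):

angular.module('app', []).factory('articles', ['$http', function ($http) {
  var CACHE_KEY = 'articles';
  return {
    fetch: function () {
      return $http.get('/api/articles').then(function (response) {
        // Success: cache the latest copy for offline use, then return it.
        localStorage.setItem(CACHE_KEY, JSON.stringify(response.data));
        return response.data;
      }, function () {
        // Network failed: fall back to the last cached copy, if any.
        var cached = localStorage.getItem(CACHE_KEY);
        if (cached) { return JSON.parse(cached); }
        throw new Error('offline and no cached data');
      });
    }
  };
}]);

An ngData-style module would generate this sort of fetch-cache-fallback logic from a declarative model definition, instead of requiring it to be written by hand for every data type.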

Summing Up

This article barely scratches the surface of the major revamp that AngularJS 2.0 promises or is attempting to deliver. We are still a few months away from even the very first unstable release of AngularJS 2.0, but the future is exciting for all developers.

Editor’s note: Get up to speed and find out what it takes to build structured web apps with “AngularJS: Up and Running” by Shyam Seshadri and Brad Green.