A couple of days ago, I had a walking meeting with Frederic Guarino to discuss virtual and augmented reality, and how it might change the entertainment industry.

At one point, we started discussing interfaces — would people bring their own headsets to a public performance? Would retinal projection or heads-up displays win?

One of the things we discussed was projections and holograms. Lighting the physical world with projected content is the easiest way to create an interactive, augmented experience: there’s no gear to wear, for starters. But will it work?

This stuff has been on my mind a lot lately. I’m headed to Augmented World Expo this week, and had a chance to interview Ori Inbar, the founder of the event, in preparation.

Among other things, we discussed what Inbar calls his three rules for augmented reality design:

The content you see has to emerge from the real world and relate to it.

It should not distract you from the real world; it must add to it.

Don’t use it when you don’t need it. If a film is better on the TV, watch the TV.

To understand the potential of augmented reality more fully, we need to look at the notion of consensual realities.

We’re on the cusp of an era in which each of us perceives the world around us differently because of technology. One might argue that we’re already there — even with friends, half our thoughts are in our smartphones, in chat, in maps, on Facebook. But it’s going to get much more obvious when we start augmenting our senses.

Imagine that, during our walking meeting, Guarino and I had been wearing augmented reality devices that projected heads-up displays into our eyes. As we walked, we’d connect with at least four kinds of information:

Personal, private data (such as a reminder to call a loved one.)

Shared data (such as notes and hyperlinks about what we’d discussed on our walk.)

Public opt-in data (such as an advertisement from a liquor store as we walked past.)

Public, unavoidable data (such as a red warning when accidentally crossing the street into oncoming traffic.)

Each of these is a set of contextual information layered atop our perception. Even when sharing the same layer (such as a map) with someone else, there will be significant variances. Today, for example, Google scrapes your inbox for flight and hotel reservations, then displays this in your calendars and maps — so my version of a map layer might have annotations about a hotel on it.
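One way to make these layers concrete is to treat them as a per-viewer filter over a shared pool of annotations. Here’s a minimal sketch in Python; the layer names follow the list above, but the fields and filtering rules are illustrative assumptions, not any real AR platform’s API:

```python
from enum import Enum

class Layer(Enum):
    PERSONAL = 1          # private reminders
    SHARED = 2            # notes shared with specific people
    PUBLIC_OPT_IN = 3     # ads and channels you subscribe to
    PUBLIC_MANDATORY = 4  # safety warnings; cannot be filtered out

def visible_annotations(annotations, viewer, subscriptions):
    """Return the subset of annotations this viewer's display should render."""
    shown = []
    for a in annotations:
        layer = a["layer"]
        if layer is Layer.PUBLIC_MANDATORY:
            shown.append(a)                                    # always rendered
        elif layer is Layer.PUBLIC_OPT_IN and a["channel"] in subscriptions:
            shown.append(a)
        elif layer is Layer.SHARED and viewer in a["shared_with"]:
            shown.append(a)
        elif layer is Layer.PERSONAL and viewer == a["owner"]:
            shown.append(a)
    return shown

pool = [
    {"layer": Layer.PERSONAL, "owner": "alistair", "text": "Call home"},
    {"layer": Layer.SHARED, "shared_with": {"alistair", "frederic"}, "text": "Walk notes"},
    {"layer": Layer.PUBLIC_OPT_IN, "channel": "liquor-store", "text": "Sale today"},
    {"layer": Layer.PUBLIC_MANDATORY, "text": "Oncoming traffic!"},
]
print([a["text"] for a in visible_annotations(pool, "frederic", set())])
# ['Walk notes', 'Oncoming traffic!']
```

Two viewers standing in the same spot get different lists, which is precisely the consensual-reality problem this piece is about.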

Personal data will require tremendous context. The best personal agent won’t just remind me that I need batteries when I’m at the hardware store; it will also know when not to interrupt me because I’m concentrating. Some of this data will come from paid software — your colleague may not be able to afford the experience you’re having.

Shared data means sharing applications, and handling permissions. Collaboration tools will be a hotbed of innovation in AR software, but issues like version control and attaching content to physical locations aren’t well-resolved yet.

Public opt-in data will face governance and regulation. Alcohol ads shouldn’t be shown to children who pass by; the filters for opting out based on age, gender, religion, and so on suggest that this will follow an opt-in model rather than an opt-out one, but either way, AR spam will be a real problem.

Finally, there will be some data that’s unavoidable. Your mobile device has to support 911 calls regardless of phone plan; emergency warning systems like Amber Alerts can push messages to your smartphone’s screen whether you like it or not. Data that’s in the public interest is one thing, but parents and guardians may impose oversight software on their charges. Imagine what happens when a headset warns you against binge drinking.

Navigating a world where everyone else has a slightly different view of reality will be jarring, too: one person wearing lie detector software, another exhibiting perfect social recall.

Of course, being in a space is a form of consent, and for those environments, holograms and projections work well. But the data model of augmented reality is likely to be a series of layers, some of which we consent to share, temporarily, with others.

Maybe the distraction of our handsets is just training us for such a world.

Filing cabinets, GAAP, and the accountant’s dilemma

Learn more about Next:Money, O’Reilly’s conference focused on the fundamental transformation taking place in the finance industry.

There’s plenty of news about the fintech, or financial technology, sector these days. Hundreds of nimble startups are disaggregating the age-old financial systems on which every transaction has relied for decades. There’s little doubt that this will continue — after all, more than four billion humans have a mobile phone, and 1.3 billion know how to use a Facebook feed, but only a billion are what we’d consider “normally banked.” Something’s got to give, and software is eating traditional financial systems one bite at a time.

But the existing financial industry isn’t just under threat from outside. Many of the processes and institutions of finance have been around for centuries, tied to physical systems rather than digital ones. As a result, they can’t easily take advantage of digital innovations and remain competitive.

Let’s look at accounting

Accounting is a legislatively necessary process. It supports taxation, allows the evaluation of a company’s worth, lets lenders establish credit-worthiness, and so on. Formally, it is “the measurement, processing, and communication of financial information about economic entities. Accounting measures the results of an organization’s economic activities and conveys this information to a variety of users, including investors, creditors, management, and regulators.” Accounting is a broad field, which includes financial, management, and tax accounting as well as auditing.

Accounting itself can be traced back to the dawn of recorded history, and the practice of double-entry bookkeeping has been around since the 15th century. The organized profession of accountancy emerged in the 19th century, and grew dramatically after the stock market crash of 1929, when audited statements became a regulatory requirement.

For centuries, accountants have needed a way to track a company’s spending, revenues, and assets. They did this according to a set of rules that defined where things should live, ensuring consistency across years and throughout different companies. In the 20th century, those rules were formalized into standard frameworks like the Generally Accepted Accounting Principles (GAAP) or the International Financial Reporting Standards (IFRS).

Unfortunately, age-old practices like accounting are full of outdated systems that are hard to update for a digital era.

Filing cabinets are a bad mental model

Consider, for example, the practice of account classification, which tells an accountant where a particular asset should be recorded. It’s a bit like using a filing cabinet, and knowing which folder each document should be stored in. But filing cabinets are skeuomorphs — old analogies that we’ve dragged into new uses. Skeuomorphs are mental training wheels. They ease us into a learning curve, but they soon hold us back. Dragging the physical attributes of a filing cabinet into a digital world limits the innovation we can produce with the resulting digital filing cabinet — and as a result, accounting is held back by its reliance on these kinds of older models.

I’m drinking a cup of coffee on a plane as I write this. The airline might file it under a passenger (Alistair); an object (cup); or a beverage (coffee). It might even file it under the seat I’m in, or the flight number; it might add information to the cup over time.

Imagine you want to file my coffee cup somewhere in a physical filing system. Where does it live? Where do I keep a tally of cups served, or of liters of lackluster airline coffee poured? Someone tasked with counting cups during a beverage audit might not think to look under coffee.

One approach to solving this might be to make several copies of my cup of coffee, filing one under “alistair,” one under “cup,” and one under “coffee.” But then a note made on the “Alistair” cup wouldn’t propagate to the “cup” and “coffee” instances. The copies would quickly diverge, and the filing system would grow unwieldy.

To tackle this problem, accountants follow a system of account classification that tells them where to file things: costs, bill payments, physical goods, and so on. In other words, there can be only one “Alistair’s Cup of Coffee,” and it’s filed under “cups.”

Hashtags break GAAP

In a modern, digital world, there’s an object, which might be tagged as #coffee, #alistair, #Seat19D, #cup, and myriad other meaningful pieces of metadata. To get a tally of cups, you’d add up #cup; to find out how much coffee you’d served, you’d add up #coffee. Hashtags are a different kind of filing system, one that’s patently obvious to anyone who’s tagged a picture in Facebook or added a hashtag to a Tweet. But it’s not how accounting works.
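To make the contrast concrete, here’s a minimal sketch of a tag-based ledger in Python. It illustrates the hashtag model only; the class, entries, and tags are invented, not a real accounting system:

```python
from collections import defaultdict

class TaggedLedger:
    """A toy ledger: each entry is stored once and indexed under many tags."""

    def __init__(self):
        self.entries = []              # the single source of truth
        self.index = defaultdict(set)  # tag -> positions in self.entries

    def record(self, description, amount, *tags):
        self.entries.append({"description": description, "amount": amount})
        entry_id = len(self.entries) - 1
        for tag in tags:
            self.index[tag].add(entry_id)
        return entry_id

    def total(self, tag):
        """Tally every entry carrying a tag, however else it is filed."""
        return sum(self.entries[i]["amount"] for i in self.index[tag])

ledger = TaggedLedger()
ledger.record("in-flight coffee", 3.50, "#coffee", "#cup", "#alistair", "#Seat19D")
ledger.record("gate-lounge espresso", 4.25, "#coffee", "#cup")
print(ledger.total("#cup"))  # 7.75
```

Because the coffee is stored exactly once, a note added under #alistair is instantly visible under #cup and #coffee. The diverging-copies problem disappears, but so does the single “right place to file it” that classification frameworks assume.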

New thoughts to think

Traditionally, standard-setters address challenges by adding more detail to the audit report, or to the note disclosures of financial statements. For accounting, innovation has been synonymous with an increase in regulatory complexity or the volume of reporting. This, of course, tends to reduce management’s ability to comply with GAAP and to increase the cost of audit.

The inability to think new thoughts and take advantage of digital technology is as big a threat to large financial organizations as any brazen, well-funded fintech startup. Financial incumbents face tremendous changes as technology becomes ubiquitous:

Once every transaction is tracked through digital systems, auditing and taxation get dramatically simpler. As any manager knows, getting employees to complete expense reports is tough. But software has no choice but to record its actions. Every Uber ride tracks itself, including not only the cost of the trip, but also the time, start, and end points.

Financial reports can prepare themselves. Auditing is about examining financial records to ensure that the statements management has prepared are fair. GAAP reporting rules are complex, making it harder for mid-level management teams to properly interpret them, and managers are more concerned with running the company than formal reporting. So, accounting can often be like forensics. But when software collects transactions and understands how to report them, accountants will spend less time on forensics and more time doing QA and testing of software’s underlying algorithms.

As financial tools are democratized, there’s less variance and more standardization of the underlying data; and what isn’t consistent, machine learning can sort out relatively well, flagging exceptions. When every phone is a point-of-sale terminal and digital wallet, everything is stored consistently. A more standardized level of record keeping and financial reporting makes transactions easier to certify and verify, which was the idea behind the eXtensible Business Reporting Language (XBRL).

New currencies have their own ledgers. Tax accounting isn’t the same as financial accounting, but it’s another important area for innovation. If taxation authorities could simplify and harmonize tax code, it might reduce filing costs and tax audit disputes. But because tax code is an economic tool for governments, it’s a difficult problem to address. E-commerce and virtual companies present some of the thorniest tax questions for regulators. Once cryptocurrency — which includes its own independently verifiable record of transactions — becomes a viable alternative to traditional currencies, tax accounting will need to fundamentally change.

Even consumer protection will change: the JOBS Act and the relaxation of “accredited investor” requirements increase the need for automated auditing and risk assessment tools that can report financial health in real time. This is a big topic of discussion in the crowdfunding sector: early-stage companies can’t pay for robust audit procedures, VCs rely on metrics such as milestones achieved and talent pools, and regulators have few tools to assess financial information and protect investors.

An innovator’s dilemma

That accounting will change is obvious. But the world’s big accounting firms don’t feel like they’re under threat, in part because the regulatory environment hasn’t shifted yet. Banks still want audited statements; tax authorities still expect auditable filings. Can the industry disrupt itself before others do? The jury’s still out, but plenty of people are trying.

In The Innovator’s Dilemma, Clay Christensen observed that disruption happens not because the existing vendors couldn’t innovate — indeed, they were innovating faster than even their biggest customers needed them to — but because a cheaper, “good enough” alternative was initially adopted by a market that wasn’t attractive to existing vendors, and then grew because it appealed to different values than the original did.

For a more modern example of Christensen’s model, we need look no further than Amazon Web Services. Incumbent server vendors were building faster, more reliable servers for CIOs; meanwhile, Amazon launched an on-demand computing platform whose machines were slower, failed unexpectedly, and cost more to run over time than dedicated hardware. Early buyers — startups, CTOs, rogue teams — didn’t look like the kind of customer that server vendors wanted, and they were largely ignored. Over time, cloud architectures became more resilient and powerful than dedicated servers, and now Amazon has an almost unbeatable head start.

The accounting industry is in a similar situation. Hundreds of software tools aimed at consumers and small businesses are finding their way slowly into the hands of bigger organizations, altering how formal accounting and auditing (which is ripe for disruption) happen.

A slightly different way to think of accounting is as a set of processes in the present that make activity in the past discoverable in the future. Technology can significantly improve and automate the existing accounting industry, but it probably won’t, at least not in the near future, because its ancient metaphors create too much inertia.

Instead, new entrants will emerge, breaking existing rules, and eventually those rules will become commonplace, and then formalized, and then fact.

9.3 trillion reasons fintech could change the developing world

A relatively commonplace occurrence — credit card fraud — made me reconsider the long-term impact of financial technology outside the Western world. I’ll get to it, but first, we need to talk about developing economies.

I’m halfway through Hernando de Soto’s The Mystery of Capital on the advice of the WSJ’s Michael Casey. Its core argument is that capitalism succeeds in the Western world and fails everywhere else because in the West, property can be turned into capital (you can mortgage a house and use that money to do something). The book uses the analogy of a hydroelectric dam unlocking the hidden, potential energy of a lake.

But in much of the world, it is unclear who owns what, and as a result, the value of assets can’t be put to work in markets. In the West, we take concepts like title and lien and identity for granted; yet these systems are relatively new, and much of the world lacks them. As de Soto notes in the book, unofficial economic activity in Russia rose from 12% in 1989 to 37% in 1994.

There’s a lot of money unaccounted for

De Soto’s research suggests that there is $9.3 trillion in property owned by what we would consider the “unbanked” or “poor” worldwide. For context, at the time of writing, that’s nearly twice the circulating money supply of the United States. But because there aren’t standardized legal systems for tracking the ownership and transfer of that property, it can’t be turned into capital, and it can’t be put to work.

So, back to credit card fraud. On May 25 at 4 a.m., a taxi service in San Francisco sent me a receipt for $80. The receipt was mailed by Square, which handled the payment.

But I wasn’t on the West Coast.

Clearly, this was fraudulent activity. Square sent me the receipt automatically because they had my email address on file from past transactions, associated with my credit card number. The fraudster signed the receipt illegibly; I have a copy of it, thanks to email. I forwarded the mail to Visa and an autoresponder told me to check my bill for charges I might want to dispute. I waited.

The next day, my bank called me back to say my card had been cloned. “I know,” I said. “I’m the one who told you.” The representative couldn’t believe it; at first, he wanted to know how the fraudster knew my email. But there it was in the transaction history — along with an attempt to charge $4,000, which failed; followed by a gas-buying binge up and down the West Coast in the few short hours before I mailed Visa.

How does this turn property into capital?

So, how might my cloned card be related to the emergence of capital in the developing world?

Electronic transactions create records where none existed, automatically. What was once a lengthy process of detecting fraud is quickly becoming trivial now that we all have mobile devices. Banks like Chase send an SMS when a credit card is used. And more than four billion humans have mobile devices with them all of the time. That’s why fraudsters prefer gas stations, money transfer tools, and other places where a small payment is still done with an unverified signature.

Modern fintech is going to create formal, standard records about economies where none existed before — not as a separate activity, but as a side effect of how it works. A blockchain transfer creates its own ledger; an Uber ride generates a receipt; a purchase on Square triggers an email. Humans are lousy at keeping records, but software has no choice but to do so. What’s more, machine learning tools can look at signals within those records and form ideas about ownership, creditworthiness, and provenance.

As fintech becomes normal and starts to involve the 4.3 billion humans with a phone, it will also take much of the world’s “unrecognized property” and make it recognizable. Digital, ubiquitous financial technology might lift trillions of dollars—and billions of people—out of obscurity and into prosperity.

Now, consider what would happen to the economic landscape of the planet if money equivalent to two times the U.S. cash supply entered the economy, creating working capital. That’s why Next:Money isn’t just about rethinking transactions or markets — but capitalism itself.

Mind if I interrupt you?

We’ve been claiming information overload for decades, if not centuries. As a species, we’re pretty good at inventing new tools to deal with the problems of increasing information: language, libraries, broadcast, search, news feeds. A digital, always-on lifestyle certainly presents new challenges, but we’re quickly creating prosthetic filters to help us cope.

Now there’s a new generation of information management tools, in the form of wearables and watches. But notification centers and Apple Watches raise the question: what’s the best way to interrupt us properly? Already, tables of friends take periodic “phone breaks” to check in on their virtual worlds, something that might have been considered unthinkably gauche a few years ago.

Since the first phone let us ring a bell, uninvited, in a far-off house, we’ve been dealing with interruption. Smart interruption is useful: Stewart Brand said that the right information at the right time changes your life; it follows, then, that the perfect interface is one that’s invisible until it’s needed, the way Google inserts hotel dates on a map, or flight times in your calendar, or reminders when you have to leave for your next meeting.

But all of this technology is interfering with reflection, introspection, and contemplation. In Alone Together, Sherry Turkle observes that it’s far easier to engage with tools like Facebook than it is to connect with actual humans because interactive technology’s availability makes it a junk-food substitute for actual interaction. My friend Hugh McGuire recently waxed rather poetically on the risks of constant interruption, and how he’d forgotten how to read because of it.

At work, modern productivity tools like Slack might do away with email conventions and encourage better collaboration, but they do so at a cost: they demand immediate attention, interrupting the natural rhythm we all need to write, to read, and to immerse ourselves in our surroundings. It’s hard to marinate when you’re being interrupted.

One message center to rule them all

This is made worse by the sprawl of modern messaging platforms. People with a smartphone often have multiple IM and phone channels: Google Voice, text messages, Facebook, Twitter, LinkedIn, Skype, FaceTime, and so on. It’d be nice to have only a few well-managed communication channels.

Of course, conventions change quickly: children avoid their parents’ social networks, making it hard for us to settle on standards, leaving us with the lowest common denominator of SMS. And every new startup is clamoring for attention, engagement, and the right to notify users.

One wrong message can disrupt an entire day — as Jon Bruner put it, “Whenever I’m on the phone and someone texts my Google Voice number, my entire environment explodes. Whatever queue of work I’d built for myself at the beginning of the day starts to fall apart under the incoming traffic.”

Notification centers are partisan, too. When we select a device — Android, iPhone, and so on — we’re delegating to it opinions about what’s important, and those opinions will likely be tied to its platform. The Apple Watch will want to use Apple Maps; the Android version will prefer Google Voice. In this way, attention management becomes a form of vendor lock-in: as soon as you use non-Apple communication channels on an Apple platform, the attention management won’t work as well. You’ll be punished with a barrage of bad interruption for stepping outside the orchard walls.

Hacking your interruptions

Most user interfaces make a lot of assumptions about what the user wants to do. An elevator’s buttons need to work for everyone; they don’t take long to learn. UX designers create affordances, suggest workflows, and help users gradually learn how the system works. But the more intimate the system, the more that people will want to make it their own — and few things are as intimate as a personal agent.

Once you get this close to an end user, effectively becoming a part of who they are and how they process the world around them, everyone’s unique. We’ll develop hacks atop our prosthetic brains, customizing them, because they’re us.

Watches and wearables don’t just deliver information, of course. They also collect it. If the watchmakers are smart, they’ll create feedback loops that learn from users and incorporate the best attention hacks and smartest defaults into future versions of software.

Today, Jawbone’s Up band learns what basketball feels like by noticing a particular pattern of movement, then asking users if they were playing basketball, crowdsourcing a better understanding of activities. Tomorrow’s smart watches can take that further: it’s not hard to imagine a smart wearable that listens for the sound of typing and a leveling of pulse, decides you’re in a flow state, changes music to encourage concentration, and suppresses notifications or even reschedules meetings.
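As a sketch of what that wearable would actually compute (the sensor inputs, thresholds, and priority scale are hypothetical stand-ins, not any real device’s API):

```python
import statistics

def in_flow_state(typing_detected, recent_heart_rates, max_spread=5):
    """Guess at a flow state: steady typing plus a level pulse.

    typing_detected: bool from an (assumed) on-device audio classifier.
    recent_heart_rates: beats-per-minute samples, e.g. one per minute.
    max_spread: how flat the pulse must be to count as "level."
    """
    if not typing_detected or len(recent_heart_rates) < 5:
        return False
    return statistics.pstdev(recent_heart_rates) <= max_spread

def triage(notification, flow):
    # Only the most urgent interruptions get through while the wearer works.
    if flow and notification["priority"] < 9:
        return "defer"
    return "deliver"

flow = in_flow_state(True, [62, 63, 61, 62, 63])
print(triage({"priority": 3}, flow))  # defer
```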

Fixing interruption without technology

A world in which vendors actively work to throttle interruptions and win us free time may not be realistic. ‘Distracted’ may be an inherently self-inflicted state, like ‘busy.’ There’s no software or hardware that will make you less busy if your response to every invitation to spend your time is to say ‘yes.’ Similarly, there may be no software or hardware to make you less distracted if you see distractions as inherently positive: ‘Someone likes me! I learned something! My team is working!’

If more technology isn’t going to solve an interruption problem created by technology in the first place, then it’s up to us as people to do the management ourselves: timebox our activities and availability, manually disable and physically manage our digital interruptions, and embrace the interruptions when they do come. This ‘hack’ works, but it requires that you grant yourself agency: step out of the path of the firehose when you don’t want to drink from it.

Context is everything

Ultimately, this is all about context. The more a system knows about you, the better it works. If you’re concentrating intently or in the middle of a burst of productivity, then a good agent would require a far more urgent message to allow an interruption. Similarly, a talented musician doesn’t want their wrist buzzing when they’re in the middle of a piece, but a student might want guidance and feedback as they play.

We have plenty of unavoidable interruptions today that we appreciate. Peripheral vision and flinching mean you don’t get hit in the head with a ball; pulling your arm away from a hot pan means you don’t get serious burns. Most people equate feedback from wearables with Facebook notifications or turn-by-turn directions; but done right, interruption won’t feel like I’m getting an SMS—it’ll feel like a natural extension of my brain, like reflexes that help me survive and navigate the world, like a personal coach.

With watches and wearables as the gatekeepers of our information, there’s a tremendous amount of promise and peril sitting on our wrists. Whether we fine-tune agents to handle information, or give them tremendous context so they can adapt to our circumstances, or opt out of information entirely, one thing is clear: how we handle information says much about who we are.

Apple Watch and the skin as interface

Recently, to much fanfare, Apple launched a watch. Reviews were mixed. And the watch may thrive — after all, once upon a time, nobody knew they needed a tablet or an iPod. But at the same time, today’s tech consumer is markedly different from those at the dawn of the Web, and the watch faces a different market altogether.

“It was only on Day 4 that I began appreciating the ways in which the elegant $650 computer on my wrist was more than just another screen,” wrote Farhad Manjoo in his review of the device. “By notifying me of digital events as soon as they happened, and letting me act on them instantly, without having to fumble for my phone, the Watch became something like a natural extension of my body — a direct link, in a way that I’ve never felt before, from the digital world to my brain.”

On-body messaging and brain plasticity

Manjoo uses the term “on-body messaging” to describe the variety of specific vibrations the watch emits, and how quickly he came to accept them as second nature. The success of Apple’s watch, and of wearables in general, may be due to this brain plasticity.

For example, there’s a belt you can wear, ringed with pads, called the Sensebridge Northpaw. The north-facing pad on the belt vibrates, helping you to get your bearings. Quinn Norton wrote a post about the experience of trying one on, and users report never getting lost after a few days of wearing it. They also report disorientation when they remove it. Our brains are plastic, and they make surprisingly short work of turning the belt’s feedback into a new sense.

Adapting to new information is what brains do best. Radically transformative plasticity happens when a brain input is altered drastically — motor cortex remapping when fingers fuse together, compensating for lost limbs, and so on. But our brains are adapting all the time, constantly on the verge of chaos.

Synthesizing the world around us

Much of what your brain does is synthesis — creating entirely new, synesthesia-like responses in the brain that don’t exist in the real world. Our brains process what they can, and the world we perceive is a construct. Our senses aren’t great; our brain makes them so, and in doing so, makes a lot of stuff up.

“Consider that even your cell phone camera has better resolution than [your eyes]. So, how can it be that you have such a rich and detailed perception of the world, when in fact your visual system’s resolution is equivalent to a cheap digital camera?” ask neuroscientists Stephen L. Macknik and Susana Martinez-Conde, the authors of Sleights of Mind. “The short answer is that the richness of your visual experience is an illusion created by the filling-in processes of your brain.”

Overloading our senses

Some senses, like sight and touch, can be augmented: we can have several wearables, each with its own patch of skin; we can have heads-up displays projected onto our retina.

Other senses aren’t as adept at input overload. In the Music/Data report I’ve been writing, one of the “Turing problems” of what several folks have named — and what I’m going to start calling — Music Science is that we can’t quickly scan songs.

If you go to an art gallery, your eyes can saccade across many images to find the one you like; for music, it takes around five seconds to decide you hate something, and 25 seconds to decide you like it, as Google’s Douglas Eck explained to me. But of course, you can’t listen to two songs at once (okay, you can, because Girl Talk, but you get my meaning). Video has the same real-time bottleneck as audio. At least you can consume lectures at 1.5x speed without losing much fidelity of experience, but the same isn’t true of aesthetics like songs or art films.

The Sensebridge Northpaw and the Apple Watch are good examples of augmenting perception, co-opting bundles of nerves to send new kinds of information to our brains. Frankly, I’m way more excited about Magic Leap’s retinal projection and DARPA’s cortical modem because skin patches are a scarce, messy resource.

From one-way to two-way communication

Belts, watches, and heads-up displays are all one-way inputs from the world into the human, akin to broadcast back in the day. The next next thing is going to be two-way interfaces, just as the interactive Web supplanted broadcast.

Is it time for my implant?

There are 13 pairs of nerves (counting the recently discovered terminal nerve) going into my brain right now. If I don’t want to overload my optic nerve, or to clutter up patches of my skin, or make any other of the 13 nerves do double duty, is it time for an implant?

The notion of adding new, fundamental senses is fraught with peril and ethical questions:

From whom do I buy my implant, and where is it legal?

What will it do to my brain once it’s installed?

If I miss my medical payments, can I pay it off by watching ads?

Perhaps I should be wary of giving control of my nervous system to technology, and just be happy repurposing patches of my skin, upgrading my input bandwidth the way I once upgraded a modem. If so, then maybe that’s why the Apple Watch will catch on. But it’s just a baby step toward physical augmentation.

When we create not only new senses, but also new brain areas for motor control, we’ll become genuinely new beings. And that will redefine consciousness and completely alter the species.

Startups suggest big data is moving to the clouds

At Strata + Hadoop World in London last week, we hosted a showcase of some of the most innovative big data startups. Our judges narrowed the field to 10 finalists, from whom they — and attendees — picked three winners and an audience choice.

Underscoring many of these companies was the move from software to services. As industries mature, we see a move from custom consulting to software and, ultimately, to utilities — something Simon Wardley underscored in his Data Driven Business Day talk, and which was reinforced by the announcement of tools like Google’s Bigtable service offering.

Ultimately, big data gives clouds something to do. Distributed sensors need a widely available, connected repository into which to report; databases need to grow and shrink with demand; and predictive models can be tuned better when they learn from many data sets.

While on-demand data services might seem the obvious endgame of big data, the users may not agree. Our attendees voted Bigboards — which makes a portable cluster that sits on your desk — their top choice, and loved it as a quick, easy platform for experimentation, proving that while big data might live in clouds, humans still want to be able to kick the tires from time to time.

Year Zero: our life timelines begin

Editor’s note: this post originally appeared on the author’s blog, Solve for Interesting. This lightly edited version is reprinted here with permission.

In 10 years, every human connected to the Internet will have a timeline. It will contain everything we’ve done since we started recording, and it will be the primary tool with which we administer our lives. This will fundamentally change how we live, love, work, and play. And we’ll look back at the time before our feed started — before Year Zero — as a huge, unknowable black hole.

This timeline — beginning for newborns at Year Zero — will be so intrinsic to life that it will quickly be taken for granted. Those without a timeline will be at a huge disadvantage. Those with a good one will have the tricks of a modern mentalist: perfect recall, suggestions for how to curry favor, ease maintaining friendships and influencing strangers, unthinkably higher Dunbar numbers — now, every interaction has a history.

This isn’t just about lifelogging health data, like your Fitbit or Jawbone. It isn’t about financial data, like Mint. It isn’t just your social graph or photo feed. It isn’t about commuting data like Waze or Maps. It’s about all of these, together, along with the tools and user interfaces and agents to make sense of it.

Every decade or so, something from military or enterprise technology finds its way, bent and twisted, into the mass market. The client-server computer gave us the PC; wide-area networks gave us the consumer web; pagers and cell phones gave us mobile devices. In the next decade, Year Zero will be how big data reaches everyone.

Content as a gateway drug

The battle for our digital lifelog is already well underway. You probably buy into Facebook, Apple, Amazon, Google, Microsoft, or a handful of others for your calendar, your email, and your media. Media was a gateway drug to a walled garden: your content is locked in, and the barriers to entry are simply too huge to leave.

“The reality is that once inside the walled gardens of GAFA [Google, Apple, Facebook, Amazon], consumers will see the walls begin to rise. My music is in the cloud, but soon enough, courtesy of the gateway drug of the quantified self movement, my medical records will be in the cloud, my home security will be managed from the cloud, my banking and my energy needs too.

“If a consumer subscribes to one cloud service, then they are very likely to continue with all the extensions of that cloud service platform rather than mix and match. It becomes increasingly inconvenient to remain service agnostic. The dominant players nurture their ‘walled-gardens’ of creative content and other services tethered to their digital formats and devices.” – Jeremy Silver, Digital Medieval.

Why now?

If this sounds far-fetched, consider two things.

We’re ready for the tech. Ten short years ago, the iPhone didn’t exist. Yet a decade later, we don’t know how to function without a prosthetic brain in our pockets. Today, the LeapFrog LeapBand monitors the activity of three-year-olds, giving them rewards for activity and yelling at them if they stop. The Jibo home robot, which recognizes faces and acts as a family’s central tool for coordinating their lives, was so popular that it not only blew past its Indiegogo target, it shut down pre-sales. A BBC survey of futurists predicted that by 2019, many humans would permanently wear a device that records, stores, and indexes every conversation they have.

But — and this is a big but — it’s disparate. Nobody has the whole picture.

Where the data comes from

Some of this data we’ll collect ourselves, of course, through wearables and millions of smart devices. Some of it, like bank transactions, we’ll get easily through downloads and programming interfaces. And some of it — particularly that which is secretive, like watchlists, or that which is sold for profit, like credit history — will only be liberated through enlightened legislation.

We’re also at a turning point in human history because we’re digitizing everything. That means copies are free and analysis is effortless. Consider what happened when music acquired metadata: the industry was left in tatters, with power moving from publishers to those who can analyze consumption. Digital data changes everything it touches: Affectiva captures and quantifies emotions in real time from facial image processing; Sociometric Solutions does the same with tone of voice.

It’s here, and it’s a moral issue

This will become a flashpoint for digital rights. Others track you; at the very least, you should have access to that data. It’s your life, after all, and as regulation becomes increasingly data-driven we need a sort of data habeas corpus (more specifically, a confrontation clause): I have the right to see the data collected about me by others. In fact, we’ll have to update that right, too: I need the right to see the data my accusers present against me, using the tools of my accuser.

Ultimately, this is a profound social challenge, and one that I believe will become the moral issue of the next decade: nobody should know more about you than you do. Others might understand the data better — your banker understands your finances; your doctor understands your health. But you should be able to look at it because the tools that analyze and visualize it are improving rapidly and becoming agent-based.

Consider that IBM’s Watson went from diagnosing cancer at the level of a second-year medical student to being 40% better than doctors, in a few short years. As those kinds of tools become cheap — or even too cheap to bill for, the way Google is — computers will be as good as your banker and your doctor. But you’ll need your data.

Ann Wuyts explains that new European legislation around digital transparency could hasten this kind of thinking. Belgium’s Bart Tommelein wants to see an annual “privacy return” the same way income tax returns happen. But what format should this take? How do we send individuals only their own data when it is inextricably intertwined with that of everyone else’s? And what tools would enable the average citizen to explore it meaningfully?

What’s behind it?

This permanent life feed is the convergence of two big trends.

Enterprise resource planning (ERP): Most big companies use massive ERP platforms to coordinate their operations. ERP tools manage cash, employees, materials, and the way the entire company runs at scale. While ERP software is often seen as cumbersome and hidebound, it is also the nervous system that lets large organizations function consistently and predictably.

Today’s consumer already has access to a wide range of tools to manage their lives online and offline. Calendaring and messaging, once the domain of enterprise IT, are now so commonplace we take them for granted. Transaction-by-transaction billing is here, and it’s only getting more digital with PayPal, Apple Pay, and the foundations of cryptocurrency. From rentals by owner to dog-walking services to car sharing tools — all the pieces of the collaborative economy are made possible by a software foundation, and that software leaves a digital breadcrumb trail our life feeds can consume.

Armed with a life feed and the tools to analyze it properly, we’re ushering in an era of what Catherine Barr calls “Life Resource Planning”, or LRP, that lets people coordinate their lives both online and offline. It won’t look like ERP any more than the Facebook feed looks like the feed from a Bloomberg trading desk. But it will serve similar functions.

Big data: Despite being a horribly over-used term, big data is a fundamental element of this change. We think nothing of being able to consult the sum of human knowledge in seconds. This has already altered human behavior in significant ways. And computers are only now starting to be able to classify data themselves, forming smart categories and groupings that make it easier to retrieve information.

Big data isn’t really anything new; rather, it’s about the fact that analyzing huge amounts of varied information quickly has now become vanishingly cheap; so cheap, in fact, that we can’t even bill for it, so we subsidize it through advertising.

When we store information, we generally use a unique key — your social security number, for example, or your Costco member ID. But how can we consolidate information across all those sources in a coherent, consistent way?

Fairly easily, as it turns out. Interstellar aside, time is the primary key of the universe. And as long as we’re on our planet, we can pinpoint interactions by longitude, latitude, and altitude.

Knowing those four things is enough to create a basic index of all events in someone’s physical life; for online activity, we might need an IPv6 address or some kind of account, or visibility into the client or browser they’re using. But that’s probably sufficient for most things; machine learning and human intervention can successfully classify the rest of our lives.
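As a sketch of that index, with time plus coordinates as the composite key (the field names and sample events are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class EventKey:
    """Composite key: when and where something happened."""
    timestamp: float  # seconds since epoch
    latitude: float
    longitude: float
    altitude: float   # meters

@dataclass
class LifeFeed:
    events: dict = field(default_factory=dict)

    def log(self, key, payload):
        self.events[key] = payload

    def between(self, start, end):
        """All events in a time window, regardless of which source logged them."""
        return [v for k, v in self.events.items() if start <= k.timestamp <= end]

feed = LifeFeed()
feed.log(EventKey(1431350400, 45.5017, -73.5673, 36.0),
         {"source": "bank", "event": "coffee, $3.50"})
feed.log(EventKey(1431350460, 45.5017, -73.5673, 36.0),
         {"source": "wearable", "event": "heart rate 72"})
print(feed.between(1431350000, 1431351000))  # both events, merged by time
```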

But tying that to identity in reliable ways is challenging; it probably relies on intensely personal biometrics, from retina and heartbeat to even more invasive information. Biometric security is terrifying because the ways in which you hack it are terrifying, and because the more personal and invasive it is, the more likely it is to become a dangerously lucrative business.

Some consequences

It’s difficult to speculate from this side of Year Zero, but there are undoubtedly vast consequences for how humanity will change. For example:

It’s hard to hide. Consider the work of a secret agent today. No more Universal Exports; now it’s “Mr. Bond, I checked Facebook and you’re obviously a spy.” Alibis will change fundamentally, and we’ll see the emergence of forged lives. The need and demand for fake lives will never disappear.

Hijacking lives. With our lives connected to digital information, those lives could be hijacked along with the data.

Patterns are obvious. Correlation, alerting, and prediction will seem commonplace. Choosing what to share, with whom, will be a huge consideration. The history of those interactions will itself be both intensely private and hugely revealing — and potentially misleading, since so many interactions can and will be automated.

Thoughtfulness means less. How excited are you when someone remembers your birthday now that we all have Facebook? With attention a scarce resource, paying attention to someone is a sign that they mean something to you. When an agent pays attention for you, remembering something is a sign that someone launched a task, or that they can afford a better agent.

How do we reinvent ourselves? If everything we’ve ever liked, and everyone we’ve ever met, becomes part of a documented history, how can we rewrite our own stories? How can we escape a stalker or create a new life for ourselves?

Prediction and over-optimization. When we go beyond predictive to prescriptive analytics, we’ll change how people behave. And that behavioral change will alter the conditions: when everyone goes to the hidden gem of a restaurant, it’ll be packed. To quote Yogi Berra, “Nobody goes there any more; it’s too crowded.” Lutz Finger makes the point that predictive analytics is already maturing fast.

Will it fuel its own growth?

On the one hand, once this timeline gets some data, it will want more. Machine learning is a hungry engine, and we’ll want to feed it, rewarded by ever-better cognition. Given a bit of information, it will infer more; no need to measure UV exposure directly when geographic coordinates, the UV index, and ambient light provide a reasonable proxy.

What’s more, once a few people use it, it’ll infect everyone, one meeting, doctor’s visit, or speeding ticket at a time. Once citizens realize that others know more than they do, and that it’s easy to correct that imbalance, they’ll demand access.

On the other hand, maybe it will plateau. Younger generations already value privacy above cognition. The fear of being judged by peers they don’t even know, for example, is a huge motivator for not sharing. Think about how scary it was to worry about what people in high school would think of your weird shirt, and multiply that by the chance that the photo of you in your weird shirt could become a negatively life-altering meme.

Get ready for real personal agents

With Life Resource Planning a ubiquitous reality, we’ll finally be ready for personal agents. Already Google Now, Siri, and Cortana are shifting how we use information from responsive to anticipatory, interrupting us wisely.

An agent with true AI will become a sort of alter ego; something that grows and evolves with you. What rights will these agents have, and how can they be designed to protect us rather than reporting on us? That’s a far broader moral question than I’m considering here, but it’s one that must be addressed. Can a personal agent invoke the Fifth Amendment?

The next 10 years

I believe this life feed will shape the next decade of consumer technology. It’s easily a trillion dollars, just as home computers, or the Internet, or smartphones were. And it’s more than just a new market or new industry — it’s the start of a new species. Year Zero is the top of a slippery slope toward a singularity.

When the machines get intelligent, some of us may not even notice, because they’ll be us and we’ll be them. But since technology is never evenly distributed, for those left behind, or saddled with inferior technology, it won’t be a smooth transition.

Startup Showcase winners reflect the data industry’s maturity

At Strata + Hadoop World 2015 in San Jose last week, we ran an event for data-driven startups. This is the fourth year for the Startup Showcase, and it’s become a fixture of the conference. One of our early winners, MemSQL, has since raised $50 million in financing, and it’s a good way for companies to get visibility with investors, analysts, and attendees.

This year’s winners underscore several important trends in the big data space at the moment: the maturity of management tools; the deployment of machine learning in other verticals; an increased focus on privacy and permissions; and the convergence of enterprise languages like SQL with distributed, schema-less data stacks.

Third place went to Unravel, which improves the reliability, performance, and utilization of Hadoop applications and clusters. As data systems have become increasingly complex, the efficiency of those systems has been under siege; no one person knows the whole stack, and abstraction layers designed to simplify eventually become costly in terms of processing. This happened in networking and cloud computing, and now it’s happening in big data.

Second place went to Caspida, which finds hidden threats using behavior-based machine learning algorithms. Computer vulnerabilities are also increasingly complex — we’re far up the stack from TCP/IP, and exploits often combine a variety of social, logical, and brute force approaches. As a result, security monitoring relies heavily on heuristics, and tools that can learn what abnormality looks like are a first line of defense.

The audience choice was Blue Talon, which ensures fine-grained control around who has access to what data. Now that NoSQL approaches to data have moved beyond search engines and product recommendations into enterprise environments, access control is a “table stakes” feature. That means control over encryption, deletion, recovery, and eventually even things like billing and cost control.

And our judges’ first place winner was Snowflake, a SQL data warehouse built as an elastic cloud service that processes semi-structured and structured data in one system without transformation or fixed schemas. Once, computing resources were costly, so safeguarding those resources was an implicit design constraint. But cloud computing gives us a nearly limitless number of inexpensive machine instances, changing many of the underlying constraints that led to the design of traditional data warehouses.

Ultimately, this year’s showcase was a reflection of the maturity and enterprise readiness we’re seeing in the industry. Absent from the winners were real-time technologies, or companies tackling machine data and the Internet of Things — even though these were hot topics in the halls and sessions of the event.

The Internet of Things has four big data problems

The Internet of Things (IoT) has a data problem. Well, four data problems. Walking the halls of CES in Las Vegas last week, it’s abundantly clear that the IoT is hot. Everyone is claiming to be the world’s smartest something. But that sprawl of devices, lacking context, with fragmented user groups, is a huge challenge for the burgeoning industry.

What the IoT needs is data. Big data and the IoT are two sides of the same coin. The IoT collects data from myriad sensors; that data is classified, organized, and used to make automated decisions; and the IoT, in turn, acts on it. It’s precisely this ever-accelerating feedback loop that makes the coin as a whole so compelling.

Nowhere are the IoT’s data problems more obvious than with that darling of the connected tomorrow known as the wearable. Yet, few people seem to want to discuss these problems:

Problem one: Nobody will wear 50 devices

If there’s one lesson today’s IoT start-ups have learned from their failed science project predecessors, it’s that things need to be simple and turnkey. As a result, devices are designed to do one thing really well. A corollary of this is that there’s far too much specialization happening — a device specifically, narrowly designed to measure sleep, or eating speed, or knee health.

With this many competitors, the industry will crash. Wearables today are a digital quilt, a strange patchwork of point solutions trying to blanket a human life. To achieve simplicity, companies have over-focused on a single problem, or a single use case, deluding themselves that their beachhead is actually a sustainable market. The aisles of CES were littered with digital yoga mats, smart sun sensors, epilepsy detectors, and instrumented snowboard bindings.

Problem two: More inference, less sensing

Consider the aforementioned sun sensor. Do you really need a wristband that senses how much sunlight you’ve been exposed to? Or can your smartphone instead measure light levels periodically (which it does to determine screen brightness anyway), decide whether you’re outside, and check the UV index? The latter is inference, rather than sensing, and it’s probably good enough.
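As a toy sketch of inference over sensing, using only signals a phone already has (the threshold and the exposure formula are invented for illustration):

```python
def estimate_uv_exposure(ambient_lux, uv_index, minutes_outside_guess):
    """Infer a UV dose without a dedicated sun sensor.

    ambient_lux: from the light sensor the phone already uses for brightness.
    uv_index: fetched for the user's coordinates from a weather service.
    minutes_outside_guess: how long lux stayed at daylight levels.
    """
    OUTDOOR_LUX = 10_000  # rough daylight threshold (assumption)
    if ambient_lux < OUTDOOR_LUX:
        return 0.0        # probably indoors; negligible exposure
    return uv_index * minutes_outside_guess / 60.0  # crude index-hours proxy

print(estimate_uv_exposure(ambient_lux=25_000, uv_index=7, minutes_outside_guess=45))
```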

When the IoT sprawl finally triggers a mass extinction, only a few companies will survive. Many of the survivors will be the ones that can discover more information by inference, and that means teams that have a data science background.

Early versions of Jawbone’s wearable, for example, asked wearers to log their activity manually. More recent versions are smarter: the device notices a period of activity, guesses at what that activity was by comparing it to known patterns — were you playing basketball for a half hour? — and uses your response to either reinforce its guess, or to update its collective understanding of what basketball feels like.

Problem three: Datamandering

This sprawl of devices also means a sprawl of data. Unless you’re one of the big wearable players — Jawbone, Fitbit, Withings and a handful of others — you probably don’t have enough user data to make significant breakthrough discoveries about your users’ lives. This gives the big players a strong first-mover advantage.

When the wearables sector inevitably consolidates, all the data that failed companies collected will be lost. There’s little sharing of information across product lines, and export is seldom more than a comma-separated file.

Consider that one of the strongest reasons people don’t switch from Apple to Android is the familiarity of the user experience and the content in iTunes. Similarly, in the IoT world, interfaces and data discourage switching. Unfortunately, this means constant wars over data formats in a strange kind of digital gerrymandering — call it datamandering — as each vendor jockeys for position, trying to be the central hub of our health, parenting, home, or finances.

As Samsung CEO BK Yoon said in his CES keynote, “I’ve heard people say they want to create a single operating system for the Internet of Things, but these people only work with their own devices.”

Walking CES, you see hundreds of manufacturers from Shenzhen promoting the building blocks of the IoT. Technologies like fabric sensors — which only months ago were freshly released from secret university labs and lauded on tech blogs — can now be had at scale from China. Barriers to entry crumble fast. What remains for IoT companies are attention, adoption, and data.

When technical advances erode quickly, companies have little reason to cooperate on the data they collect. There’s no data lake in wearables, just myriad jealously guarded streams.

Problem four: Context is everything

If data doesn’t change your behavior, why bother collecting it? Perhaps the biggest data problem the IoT faces is correlating the data it collects with actions you can take. Consider V1bes, which calls itself a “mind app.” It measures stress levels and brain activity. Sociometric Solutions does the same thing by listening to the tone of my voice, and can predict my stress levels accurately.

That sounds useful: it’d be great to see how stressed I was at a particular time, or when my brain was most active. But unless I can see the person to whom I was talking, or hear the words I was thinking about, at that time, it’s hard to do anything about it. The data tells me I’m stressed; it doesn’t tell me who’s triggering my chronic depression or who makes my eyes light up.

There might be hope here. If I had a photo stream of every day, and with it a voice recorder, I might be able to see who I was with (and whom to avoid). Start-ups like Narrative Clip, which constantly logs my life by taking a photo every 30 seconds and using algorithms to decide which of those photos are interesting, might give me a clue about what triggered my stress. And portable recorders like Kapture can record conversations with time stamps; their transcripts, analyzed, could help me understand how I react to certain topics.

Ultimately, it’s clear that the Internet of Things is here to stay. We’re in the midst of an explosion of ideas, but many of them are stillborn, either too specific or too disconnected from the context of our lives to have true meaning. The Internet of Things and big data are two sides of the same coin, and building one without considering the other is a recipe for doom.

Decide Better

When we launched Strata a few years ago, our original focus was on how big data, ubiquitous computing, and new interfaces change the way we live, love, work, and play. We even mocked up a diagram back then to describe the issues we wanted the new conference to tackle.

Yet big data alone wasn’t going to change our lives. It’s just information, after all. Marry the data science that helps us optimize, learn, and improve the way we decide with a world of sensors to collect and of interfaces to control and display, however, and you’ve got a feedback loop of unprecedented proportions.

At its core, Strata is about one thing: Deciding better. Better as individuals, better as businesses, better as societies—and better as a species. We’re confronted with a daunting array of challenges, ranging from regional conflict, to energy, to pollution, to overpopulation, and many of these are by-products of the technologies we create. We think that a data-driven society can right many of these wrongs, and that innovation can overcome its own side-effects. I’m an optimist because, as Strata speaker James Burke observed, “the pessimists jump out the window.”

Now that data science, parallel computing, and realtime answers are “table-stakes” for technology discussions, Strata is moving forward. We’re adding several tracks to the program, partly because we’ve moved the event to a bigger venue, and partly because we’re addressing broader topics that apply to every facet of a business. We’re also revising other tracks to reflect how those topics have changed, with the goal of exploring new ideas among the proposals we receive.

The Design & interfaces track looks not only at user interfaces, but at how data can inform design, from the way experiments are conducted to the way we learn how people interact with and explore information. Interfaces might be on a screen, in a car, or around your wrist, and more often than not, interfaces are two-way—so that when you read a display, it reads you back.

The Ethics, law and society track has been part of Strata since its inception, but today an abundance of open data, insights into public surveillance, and heightened privacy concerns mean new, and often controversial, thinking on governance, ethics, and compliance. No less than the framers of the Internet are calling for a renegotiation of the pact we make with a life lived in public now that data collection is frictionless and ubiquitous and half our lives are lived online.

The new security track focuses not only on the tools needed to secure data and assure privacy, but on the ways data can help us win the race against adaptive adversaries. Data is a good tool for defense, but many adversaries will try to game the very algorithms with which we hope to find and defeat them, engaging in data warfare both online and off.

The Machine Data track dives into the data collected and generated by everything around us. It’s hard to store, analyze, and publish the torrent of information that today’s devices produce, and harder still to turn that torrent into understandable, meaningful insights.

Half a century ago, the average company on the Fortune 500 had a lifespan of 50 years; today, it’s there for 15. The Business and industry track recognizes that data razes incumbents even as it raises up new leaders. Companies that have harnessed information and technology—and make better decisions as a result—not only thrive; they get to rewrite the rulebook.

The big data industry is crossing a chasm, to quote Geoffrey Moore, another Strata speaker from earlier this year. It’s moving from niche applications of data science in vertical industries—finance, ad/tech, political campaigns—into a broader, more accessible field of decision science. That’s a significant leap. Very little is as fundamental as changing how we decide. We think the next few years will connect those who work with data far more closely to those who consume it.

We’re excited to make that leap, and we hope you’ll join us. We welcome ideas for presentations and tutorials; see the Call for Proposals for details.