The Channel Explosion: Off Screens and Out the Window

Perhaps the most fascinating fact of the Cambrian explosion is that life on Earth diversified from largely unicellular organisms, which only occasionally bunched into colonies, to the multicellular organisms that came to represent much of the present-day animal kingdom — all within a single discernible window in the fossil record, beginning roughly 541 million years ago. As we approach the end of this decade, we're experiencing a similar Cambrian explosion — not in life forms, but in form factors.

Today, as our users and customers grapple with an expanding buffet of choices when it comes to their digital experiences, the question quickly turns to how best to build for a lengthening list of experiences that approach the ideal of content everywhere. But before diving into the details, it's helpful to take stock of how far we've come and what exactly those ambitious digital experiences that we're targeting look like. After all, we can only build what we understand.

In this week's column, the second in the Experience Express series (see the introductory column), we'll tackle a few of these thorny questions first before jumping into the architecture and the code: How did we get here? Which digital experiences are worth paying attention to? Is there any way to systematize or better categorize these new digital experiences?

Websites are now just the starting point

As late as the close of the 1990s, websites consisted almost exclusively of HTML: text, images, and the occasional other media asset. Web content at the time was primarily large swaths of narrative text, with images interspersed throughout (only in later years would those images span the entire viewport on desktop and mobile). From the standpoint of user experience, most people engaged with the web solely through a keyboard and mouse attached to a desktop computer.

Until the end of the First Browser War, web standards were unevenly implemented across browser makers, especially as the Cascading Style Sheets (CSS) standard proposed by Håkon Wium Lie in 1994 finally began to land in browsers in the late 1990s. For a time, this slow adherence to established World Wide Web Consortium (W3C) standards stunted the development of best practices in web development, such as the abandonment of table-based layouts in favor of CSS-driven layouts. Meanwhile, competition for browser share between Netscape and Microsoft cast a shadow over the early reputation of JavaScript, which Brendan Eich wrote in ten days at Netscape in 1995 but which was implemented differently across browsers.

Drupal, whose versions 1.0 through 3.0 were released in 2001, was part of the initial push toward server-side dynamic web pages, where a server-side implementation — a back-end CMS — would process content entered by the user and retrieved from a database to generate markup through templates, rather than the previous approach of hosting flat HTML and other static assets. The advent of server-side dynamism foreshadowed similar application logic moving to the client side in the 2000s.

It's difficult to believe, after all of this, that websites today are merely the starting point. At present, there are countless other digital contexts where standardization akin to that of the early-2000s web is still in its infancy.

From websites to web applications

With the transition to Web 2.0 and Dynamic HTML (DHTML) approaches that introduced interactive elements into websites, the era of web applications began. Whereas JavaScript had earned early infamy for its inconsistent browser implementations, in the early 2000s it began to be used to enrich interactions by handling dynamic markup changes asynchronously on the client side via Ajax (Asynchronous JavaScript and XML).

In Ajax, developers began to use the browser-provided XMLHttpRequest API to retrieve data asynchronously, enabling background functionality that avoided full page refreshes. This was revolutionary in the transition from websites to web applications, and it cemented the migration away from flat-file HTML assets and server-generated markup toward web pages with dynamic components that obviated complete round trips to the server.
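The classic Ajax pattern can be sketched roughly as follows. Note that this is an illustrative sketch, not production code, and the endpoint `/api/articles` and the callback names are hypothetical examples:

```javascript
// A minimal sketch of the classic Ajax pattern using XMLHttpRequest.
// The endpoint '/api/articles' is a hypothetical example, not a real API.
function fetchArticles(onSuccess, onError) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/api/articles', true); // third argument: asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState !== 4) return;     // 4 = request complete
    if (xhr.status === 200) {
      onSuccess(JSON.parse(xhr.responseText)); // update part of the page in place
    } else {
      onError(xhr.status);
    }
  };
  xhr.send(); // returns immediately; no full page refresh required
}
```

Because the request runs in the background and the callback manipulates the page directly, only the affected fragment of the page changes rather than the whole document.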

This is where the distinction between "websites" and "web applications" begins to blur, a differentiation that remains challenging to codify today. For more about the transition to client-side JavaScript, the ensuing JavaScript renaissance, and universal (isomorphic) JavaScript, Dries Buytaert has written a blog post on JavaScript's history across the stack.

Responsive web design

Around the turn of the 2010s, responsive web design (RWD) emerged as a means of letting websites transition gracefully across desktop, tablet, and mobile states without serving a separate version of the page. By treating content as a liquid ("content is like water") that must adapt to the shape of the vessel in which it is delivered, responsive web design (a term coined by Ethan Marcotte in 2010) reconciled the desktop–mobile divide in web design; it is now commonplace across the web and a compelling example of user interface plasticity.

In responsive web applications, content can either take the shape of a typical website or, on a mobile device, adopt many of the characteristics of a native mobile application. From the user's standpoint, large swaths of narrative text span the viewport, as do images and other multimedia. Unlike desktop experiences, the mobile and tablet versions of many responsive websites also respond to mobile-specific interactions such as taps, pinch-and-zoom, and swipes.
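In practice, responsive layouts are expressed with CSS media queries, but the underlying idea — mapping the viewport width to a layout mode — can be sketched language-agnostically. The breakpoint values below are illustrative assumptions, not a standard:

```javascript
// Illustrative breakpoints; real projects choose their own values in CSS media queries.
const BREAKPOINTS = [
  { layout: 'mobile', maxWidth: 767 },       // single column, touch-oriented
  { layout: 'tablet', maxWidth: 1023 },      // wider layout, larger tap targets
  { layout: 'desktop', maxWidth: Infinity }, // full multi-column layout
];

// Return the layout whose range contains the given viewport width.
function layoutFor(viewportWidth) {
  return BREAKPOINTS.find((bp) => viewportWidth <= bp.maxWidth).layout;
}

console.log(layoutFor(375));  // → 'mobile'
console.log(layoutFor(1440)); // → 'desktop'
```

The same content flows into whichever layout matches the device, rather than a separate mobile site being served.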

Native desktop and mobile applications

While native desktop and mobile application frameworks have existed for a long time, they were usually closed ecosystems inextricably tied to platform-specific technologies. Developers wishing to write mobile applications usually needed to learn Objective-C for iOS and Java for Android and engage with two very different communities.

Beginning in the late 2000s, new frameworks emerged that enabled the creation of cross-device native mobile applications from nonnative code, such as Xamarin, which compiles C# applications into native code. The introduction of Titanium and Cordova (formerly PhoneGap), both web frameworks for native mobile applications, shone a new spotlight on web-to-native frameworks that let developers write familiar web code and then compile it to native code. By 2013, Titanium was estimated to power applications on up to 10% of all smartphones globally.

In the wake of the JavaScript renaissance, the ecosystems around JavaScript frameworks and libraries such as Angular and React have become deeply involved in the web-to-native phenomenon through pure JavaScript-to-native frameworks such as Electron, Ionic, and React Native — some of which also power an ongoing resurgence in native desktop applications built with web technologies. These frameworks treat platform agnosticism as a first-class concern, stressing that web applications built in these JavaScript frameworks should be indistinguishable from their generated native counterparts.

Zero user interfaces

Outside of web development, the evolution of other user interfaces continues unabated, lengthening the list of channels that marketing teams and enterprises must handle today beyond websites, native applications, and JavaScript applications. Many of today's user interfaces no longer have a manually manipulated component or even a visual component. These are known as zero user interfaces, because they have no screens at all.

While the most obvious suspect among zero user interfaces is the now-familiar Amazon Echo, other aurally or gesturally manipulated interfaces also fit this paradigm, including haptic and ambient interfaces that rely on surrounding stimuli rather than explicit input from the user into a screen. As we'll see in later entries in this column, zero user interfaces and the concomitant evolution in interaction design will necessitate a wholesale reinvention of usability testing and user research, especially for adaptive interfaces reacting to gestures or voice interfaces responding to queries phrased in myriad ways.

Conversational content

Over the last several years, conversational content — accessing and interacting with content via interlocution — has become a prime target of marketing teams and enterprises. Conversational interfaces run the gamut from traditional chatbots and messenger bots, like those on Slack and Facebook, to voice assistants that perform double duty as physical devices programmable with additional functionality. In the middle reside in-phone voice assistants like Siri and Cortana, which represent closed ecosystems and a limited array of custom functionality.

In conversational content, content is reachable only through a particular track of decisions and forks in the road that lead to the desired result. Text must be kept short: rather than pages, conversational interfaces traffic in self-contained utterances, with no images or multimedia beyond audio. Perhaps most distinctly, conversational interfaces can only be interacted with through dialogue, whether written or spoken.

Conversational interfaces also require a rethinking of how content is produced and delivered. For most organizations, simply rendering content accessible to voice assistants through search tools or a basic chatbot is inadequate for the growing needs of the customer, who increasingly seeks a personalized and intimate dialogue with such automated interlocutors. As a result, conversational content remains a fairly unexplored area where standardization is only beginning to occur through tools such as Dialogflow (formerly api.ai).
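As a toy illustration of the utterance-to-intent mapping that tools like Dialogflow formalize — the intent names, matching patterns, and replies below are all invented for the example:

```javascript
// Hypothetical intents: a conversational interface maps free-form utterances
// to intents and returns short, self-contained replies rather than pages.
const intents = [
  {
    name: 'store_hours',
    patterns: [/\bhours\b/i, /\bopen\b/i],
    reply: 'We are open from 9am to 6pm, Monday through Saturday.',
  },
  {
    name: 'stock_check',
    patterns: [/\bstock\b/i, /\bavailable\b/i],
    reply: 'That item is currently in stock at your nearest store.',
  },
];

// Return the reply for the first intent whose patterns match the utterance.
function respond(utterance) {
  const match = intents.find((i) => i.patterns.some((p) => p.test(utterance)));
  return match ? match.reply : "Sorry, I didn't understand that.";
}

console.log(respond('When are you open?')); // matches the store_hours intent
```

Real platforms replace the regular expressions with machine-learned intent classification, but the shape of the problem — many phrasings funneling into a small set of intents, each with a short utterance as output — is the same.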

Content in augmented and virtual reality

Similarly, at the same time that content is becoming conversational and accessible through dialogue with a machine, content is also becoming more contextual. Certain ascendant technologies, like machine vision for detecting items in view and imaging techniques that allow for wraparound panoramic 360º images in virtual reality, suggest that content will soon be as much a fixture of our physical world as it is our digital one.

One of the key outgrowths of the new focus on a user's location and environment is the incorporation of content that lies in context of a user's surroundings, whether those surroundings reflect the real world in augmented reality (AR) or a fictional one in virtual reality (VR). In 2016, Forrester Research wrote that "companies will continue to experiment with AR and VR, setting the foundation for larger implementations in 2018 and 2019." The results of a survey commissioned by Accenture in the wake of CES 2018 buttress this claim, with users increasingly cozying up to implementations of augmented and virtual reality that help them learn about their surroundings or become better at certain tasks while eschewing gaming-oriented gimmicks.

When content is superimposed in augmented and virtual reality, it is tied to the user's surrounding context. As such, any limited text or multimedia must be superimposed based on the user's current situation. Interactions take place via gestures recognized by the smartphone or AR/VR hardware, but the delivery of content is predicated on the user's motion and surroundings.

There are many examples of early adopters of augmented and virtual reality technologies, such as the Skin and Bones exhibit at the Smithsonian's National Museum of Natural History, which uses augmented reality to overlay physical exhibits of long-lost fauna with digital approximations of how they may have looked while still living and breathing. But we can take this a step further still: when we integrate augmented and virtual reality with elements of the Internet of Things (IoT) like beacons, contextual content can become content that is locational and situational.

Situational content

As geolocation technology advances, the ability to pinpoint a user's location precisely lends itself to more personalized content catered to where the user is at the present moment. While there are multiple ways to determine a user's location, the most common approaches are geolocation on smartphones and Bluetooth low-energy proximity beacons.

In recent years, proximity marketing using beacons and other Internet of Things (IoT) hardware has gained prominence in the enterprise. The closer a user is to a particular brick-and-mortar location, the more targeted a business's content delivery can become. If a user has indicated an interest in neckties, for instance, a brick-and-mortar store with beacons installed can provide the customer with information about remaining stock, ongoing promotions, or even textual content about an individual necktie retrieved from the website and displayed in augmented reality in front of the user's eyes.
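The necktie scenario above boils down to a simple rule: the nearer the user, the more specific the content. A sketch of that idea, with invented distance thresholds and content tiers:

```javascript
// Hypothetical content tiers keyed to a user's distance from a beacon.
// Thresholds are illustrative; real deployments tune them per venue.
function contentTierForDistance(meters) {
  if (meters < 2) return 'item-detail';    // e.g. details about one necktie
  if (meters < 20) return 'promotions';    // in-store offers and remaining stock
  return 'store-overview';                 // general information about the store
}

console.log(contentTierForDistance(1.5)); // → 'item-detail'
console.log(contentTierForDistance(50));  // → 'store-overview'
```

In a real deployment, the distance estimate would come from beacon signal strength, and the returned tier would select which content the CMS delivers to the overlay.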

But situational content is challenging due to the complexities of orchestration across multiple pieces of hardware and software, as well as the need to address the constantly shifting location and desires of any user consuming the experience. As a result, content delivered via beacons and augmented reality tends to consist of superimposed overlays with limited prose and embedded media. In short, the content augments the surrounding experience rather than vice versa — a far cry from how content-first websites are designed and built.

Other channels

It is impossible to account for every single possible channel where content may end up in a digital ecosystem, but three channels in particular are becoming more prominent in the Drupal community and wider web landscape: wearables, digital signage, and set-top boxes like Roku and Apple TV.

In all three cases, the amount of content that can be served faces certain limitations. In the case of digital signage, content must be legible from far away, so the quantity transmitted must be kept small. Screen real estate is likewise at a premium on smartwatches, intensifying the need to reduce text size and minimize the amount of content shown. Meanwhile, set-top boxes impose design limitations that force delivered content to adhere to a rigid set of restrictions and prefabricated templates.

Conclusion

Now that we're armed with a quick-and-dirty understanding of the digital landscape in light of our own Cambrian explosion of channels, it's time to dive into the code that makes these experiences work. In a few weeks, I'll kick off with a description of Drupal's core web services capabilities and how you can set up Drupal 8 as a web services provider. But first, the Experience Express calls at Chicago, where I gave a talk about decoupled Drupal across the stack and joined my Acquia colleagues Sarah Thrasher, Chris Urban, and Jeff Geerling for a wide-ranging conversation. Stay tuned for a MidCamp roundup with insights into how Drupal practitioners in the Midwest and beyond are considering their move to Drupal 8.

In the interest of time, because there are ample resources available that introduce the concepts surrounding decoupled Drupal, I'll be skipping over some of the more fundamental themes relating to historical developments in server-side and client-side web technologies, web services and RESTful APIs, and types of decoupled Drupal architectures (and their risks and rewards). If you'd like to brush up on those topics, please consider reviewing a few of these resources before next week's column.