In this video, Tom analyzes the newly available features and discusses topics such as the shift towards social messaging, the role of YouTube’s Uptime application, and a preview of immersive co-viewing with YouTube in virtual reality.

I had the privilege of speaking at and hosting Epsilon Symposium 2017. With hundreds of clients in attendance, I was tasked with discussing the role of artificial intelligence across Epsilon and Conversant, as well as teasing examples of emerging technology that my team and I are working on.

Here are the key highlights I discussed during Symposium 2017. Part of my role is evaluating and embracing the latest innovations and determining how they connect to our Epsilon and Conversant solutions.

For the last ten years, I have talked about how disruption is the new normal: how emerging technology can impact consumer behavior and what it means for marketers.

Today we are at an inflection point, where we are seeing the shift from mobile first to AI first. It’s less about disruption and more about acceleration through intelligent systems.

That’s where Epsilon and Conversant’s heritage of aligning data and technology and driving innovation is the key to leveraging whatever the future may bring and reaching consumers wherever they will be.

Within the agency business, we are using machine learning to categorize our data of culture along with our data of identity to fuel our creative approach.

From a product perspective, we are also achieving harmony (pun intended) through machine learning and AI, with a centralized intelligence hub for decisioning across channels.

Finally, Conversant is at the forefront of integrating AI through machine learning and image recognition to create world-class speed and scale: every 5 minutes, consumer actions across 160M individual profiles lead to over a billion model updates.
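Conversant’s actual pipeline is proprietary, but the general idea of continuous model updates driven by a stream of consumer actions can be sketched with a toy online-learning loop. Everything below (feature count, learning rate, simulated click signal) is illustrative, not a description of Conversant’s system:

```python
import math
import random

def sgd_update(weights, features, label, lr=0.01):
    """Apply one online gradient step for logistic regression."""
    z = sum(w * x for w, x in zip(weights, features))
    pred = 1.0 / (1.0 + math.exp(-z))   # predicted probability of action
    error = pred - label                 # gradient of log loss w.r.t. z
    return [w - lr * error * x for w, x in zip(weights, features)]

# Simulate a stream of consumer actions (features, acted?) and update
# the model after every single event, rather than in nightly batches.
random.seed(0)
weights = [0.0, 0.0, 0.0]
for _ in range(1000):
    features = [random.random() for _ in weights]
    label = 1 if random.random() < 0.3 else 0  # toy response signal
    weights = sgd_update(weights, features, label)

print(len(weights))  # the model stays fixed-size however many events arrive
```

The point of the sketch is the shape of the workflow: each incoming event triggers a cheap, constant-time update, which is what makes per-event updating feasible at large scale.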

Regardless of how the future state shifts and evolves, be it through bots becoming agents on our behalf, consumer-based journeys expanding to include system-based journeys, or a hyper-connected augmented reality future, all of those elements will be highly dependent on data and decisioning as the foundation.

Included in the analysis is the reveal of the Xbox One X and why it’s relevant for marketers, a brief look at some of the game reveals, and a discussion of the role of gaming in connecting with Gen Z, including eSports, affinity alignment, and more.

When I think of Apple, three things come to mind: the industrial design of its hardware, interoperability across products, and of course millions of apps. After WWDC 2017, I need to add artificial intelligence (AI) enabled experiences, device-level privacy, and a new focus on augmented reality.

AI was the key theme of WWDC (mentioned 20 times in 2.5 hours). Apple highlighted how both machine learning and deep learning are now integrated across multiple products, from Apple Watch and Siri to facial recognition in photos and even handwritten notes in iOS 11. AI-integrated experiences were one of the more important areas discussed during WWDC.

WWDC also saw a new hardware launch in the form of the HomePod, Apple’s entry into the smart speaker market. While Siri is integrated into the device, the role it can play for brand marketers remains to be determined, as the skills and actions we have begun to depend on in other product ecosystems were surprisingly absent.

Apple is also investing heavily in enabling augmented reality experiences through hardware and software. With the launch of ARKit, its strategy is to empower millions of developers to take Apple’s AR building blocks and create immersive experiences that are closely mapped to the real world via world tracking for both 2D and 3D elements.

Apple is building a foundation for the future on device-level privacy, artificial intelligence, augmented reality, and multimodal computing, evolving Siri beyond handsets into cars and the home with HomePod.

2017 has seen a rapid acceleration of technology trends. Of the 50+ trends observed from CES, MWC, SXSW, F8, Google I/O and more, here are the top 5 midyear trends that I am closely monitoring heading into 2018.

1) MOBILE FIRST TO AI FIRST

For the past few years, Facebook, Google, and other industry heavyweights have proclaimed themselves mobile-first organizations. Now, at the midpoint of 2017, we are seeing shifts from mobile first to AI first. Google recently announced its intent to redefine its core products around AI research, tools, and applied AI.

Through 2017, machine learning (ML) and artificial intelligence (AI) are rapidly transforming businesses, products, and services. A primary fuel for ML/AI is data. Understanding how to create actionable, data-centric AI experiences is critical to driving growth in 2017 and beyond.

2) MULTIMODAL INTERFACES

Conversational experiences have been a primary topic of discussion in 2017. From bots to voice-based experiences to computer vision and object recognition, expanding solutions beyond mobile and desktop has been a major trend through the first part of the year.

The shift towards AI first means that text and visuals tied to mobile and desktop are not enough to evolve the future of interaction. As 2017 continues to unfold, we will see more voice-plus-visual experiences come to market, where voice drives a companion visual experience to further enhance Alexa Skills and Google Actions.

3) CAMERA AS A PLATFORM

As marketers begin to shift their attention from Millennials to Gen Z, strategies in the first half of 2017 are shifting towards leveraging the camera as a platform.

From Snapchat’s ever-evolving lenses to Facebook’s newly announced Frame Studio and AR Studio, major industry players are taking a core native behavior, one that is all about empowering the consumer, and building new solutions that will integrate real-time data, location, and object recognition to create new forms of effects-based marketing.

4) RISE OF THE PROXY WEB

The first half of 2017 has shown major steps towards the rise of the proxy web, which is predicated on systems taking over core day-to-day human functions and becoming agents on our behalf. One of the big steps in 2017 was Google’s recent launch of Google Lens.

Google Lens combines the power of the Google Assistant with an overlay of computer vision, which will serve as the basis for contextual augmented reality that links to various services, from purchasing to content to predictive reservations based on traffic and other environmental factors. Voice has led the way in 2017; 2018 will be the year of computer-vision-powered experiences.

5) DEMOCRATIZATION OF IMMERSIVE COMPUTING (VR/AR)

One of the drawbacks to mass adoption of virtual reality has been how isolating the experience can be, with limited ability to share “what’s happening.” Both Google and Facebook realize that adoption is closely tied to accessibility and the ability to share experiences, and 2017 has seen a major shift towards driving the democratization of virtual reality.

The key to driving adoption at scale is to empower consumers, developers, and other third parties to create experiences, from user-generated 360-degree content to co-viewing, casting, capturing, and sharing VR content. It’s important for brand marketers to pay attention to how consumers interact with these experiences and the rate at which they are creating their own virtual content.

This week I had the opportunity to attend the Google I/O conference in Mountain View, California. It was an incredibly compelling event, as Google shifted its focus as a company from mobile first to AI first. This means that all of its products will be redefined and enhanced through various forms of AI.

This includes the Google Assistant, which was the star of the show. The deck goes into detail, but it’s incredibly important that we begin thinking about the role the Google Assistant plays across the home, smartphones, wearables, auto, and soon AR. With the launch on the iPhone announced at the conference, the Assistant has 200 million voice-enabled devices out of the gate.

Also key to consider is the Google Assistant equivalent of an Alexa Skill, called an Action. Actions can support transactions outside of Amazon and do not require installation. While only a very small number of Actions exist today, there is a huge and rapidly growing ecosystem of Google Assistant-enabled devices.
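To make the comparison concrete: both ecosystems are ultimately webhook-driven, where your service returns a JSON payload describing what the assistant should say. Below is a minimal sketch of the response body an Alexa custom skill returns; the Actions on Google webhook uses different field names, and the greeting text here is purely illustrative:

```python
import json

def alexa_response(speech_text, end_session=True):
    """Build the minimal JSON response body for an Alexa custom skill."""
    return {
        "version": "1.0",
        "response": {
            # PlainText speech is the simplest output type Alexa accepts
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            # True closes the session; False keeps the conversation open
            "shouldEndSession": end_session,
        },
    }

body = alexa_response("Welcome to the demo skill.")
print(json.dumps(body, indent=2))
```

Because the contract is just structured JSON over HTTPS, the same backend logic can often serve both platforms with a thin translation layer per assistant.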

Here is the full trend recap and analysis:

Section one covers trends tied to connection & cognition:

Vision of Ubiquitous Computing

Multi-Modal Computing

Google Assistant (Actions, Auto, Computer Vision, Wear)

Android O

Progressive Web Apps

Structured Data & Search

Section two covers all facets of immersive computing:

Immersive Computing

Daydream (Virtual Reality)

Social VR

WebVR

Visual Positioning Services

Tango (Augmented Reality)

WebAR

In addition to the attached recap, there is also a 4 minute “light recap” video:

For third-party commentary, I discussed the role of Google Lens and computer vision with AdExchanger here

Coming to you live from San Francisco and Google’s I/O conference, here is a recap of some of the key highlights: the shift from mobile first to AI first, the launch of Google Lens, computer vision as the next form of computing, and the digitally augmented future.


Tom Edwards, Ad Age Marketing Technology Trailblazer and Chief Digital Officer, Agency @ Epsilon, analyzes best practices and points of difference between Google Actions across the Google Assistant ecosystem and Amazon Alexa Voice Services.

In this video, Tom compares and contrasts Amazon Alexa Skills with Google Actions: he discusses feature differences, outlines best practices for deploying skills and actions, and covers key points to consider before submitting for final approval.

Tom also discusses driving skill and action discovery, as well as strategic thoughts on moving beyond tactical utility towards full ecosystem considerations.
