DK: You have been described as “a comedian, author, and activist,” and while I think that’s an accurate statement, it also seems incredibly generic and incomplete. How do you describe yourself and the work that you do?

Baratunde: In the beginning, I would just say: I’m a comedian, author, and activist. Sometimes I would add a little bit of spice and say I’m a futurist comedian, which then begs the question: What is a futuristic comedian? And the ridiculous answer is: The comedian who knows what jokes are going to work before he tells them! But in my case, it means that there is some sort of technology infused. I try to weave themes of technology, as well as using technology directly in how I tell stories and do the activism that I do. There is some blend, but the overlapping themes are a strong technological presence; a strong civic presence and sort of political/social membership presence; and there is a strong comedic presence. There’s not a single word for all of that. So when I’m feeling really cheeky, the way I describe myself is: Baratunde Thurston!

DK: One of the things I found really interesting is how you’ve been able to connect with people in different formats. In your writing, your TED Talks, and your comedy, you’re always telling a story. Are there key elements you see as important when you’re crafting a story, regardless of the format that you’re using?

Baratunde: It’s a pretty intuitive format, but I would say the most essential elements are a beginning, a middle, and an end. It’s amazing how simple an outline that is! Generally, my stories have a protagonist, which is often me. I’m thinking of one story that I had fun with. It was an adventure I experienced on a flight, and I live-tweeted it as it was happening. I didn’t know where the story was going to go, but I knew that flight was going to end. So I had a beginning, middle, and an end. But there was a twist along the way; my flight had to make an emergency landing. I didn’t know that going in. But along the journey I made friends; I bought whiskey for people; we got free Chick-fil-A; and Tom Selleck was a surprise character in the story. It ended up having the basics—a beginning, middle, and end—but also surprise, intrigue, humor, and community.

I think some of the best stories give you a protagonist and some kind of obstacle, and ideally but not always a surmounting of that obstacle to get to some higher plane or better place. Some stories actually have the opposite; they have a failure to overcome the obstacle and so you get a different sort of payoff, which is still deeply emotional though it may end in sorrow or regret rather than joy, elation, and a sense of accomplishment.

DK: Your Twitter story resonates with my next question. You’ve been doing this for a while in your career, and a lot has changed during that time—much of it driven by advancements in technology. You’ve been engaged in the digital world since the early days of the internet. Has the evolving digital space changed how you see storytelling?

Baratunde: Yes and no. It hasn’t changed the basic building blocks; there are core fundamentals that I don’t think change whether you’re writing on a piece of paper or building some interactive web experience or some augmented reality experience. But then, the yes part of that answer has to do with the ability for others to participate, and maybe even change how that story goes.

A book is not very interactive … primarily it’s coming out of my experience; my mind; my research. It’s physically locked in a format and then shipped out, and it cannot evolve other than an updated edition with some typos corrected.

Compare this to a story that I purposely designed to be interactive. I have a community of people that I connect with via text messaging. It’s like a text message service—not my personal number—and I can target my message based on what state they’re in. On Election Day in the United States, I was messaging people asking: Did you vote? Who were you rooting for? How was the experience? Anything you want to share?

I got back little stories from all these people, and then I wove them into a larger narrative that I published as a set of Instagram stories. So I started with a story of my own, which is: I voted; here’s what it was like. But I baked in a prompt. I collected their stories, remixed them, and republished them back to those same people so that they could see each other. That’s a different structure for storytelling because most of the content isn’t mine. I find that technology is allowing more interesting ways to involve others, yet I still want to maintain a theme or purpose to the story. It’s not just a random grab bag of what other people want to say.

DK: In recent years we’ve seen this emergence of big data, and the ability for stories to be told via the data that we’re collecting. I know you’ve written extensively in this area. What do you see as the advantages and the risks of our increasingly data-centric culture?

Baratunde: The advantages are amazing discounts on LED televisions, rapid delivery of burritos, and all sorts of opportunities to spend money on clothing that actually does fit my taste. In the past, I was seeing advertisements for products that had nothing to do with my budget level, my geography, or my personality. Personalizing experiences for commerce, for community, and for politics has the advantage of relevance, and often, because of increased relevance, an increased feeling at least of meaning and power. So when Netflix tells me what to watch because it knows what I’ve watched, and because it’s gathering data about me through cookies distributed all over the web, I get a more satisfying viewing experience. That’s cool; there are upsides to that. There’s efficiency as an upside; there’s relevance as an upside; and there’s discovery as an upside.

The downsides are a total loss of agency and a collapse of democracy. Discounts on TVs and a lack of coherent democracy … let’s weigh those against each other. I think the big questions to be asked about network-based and data-centric storytelling are: Who’s able to wield that power? Where is the value accruing in the chain? And there’s a pretty strong case to be made that the value I get out of slightly faster commercial experiences, more relevant viewing experiences, and more relevant content experiences is not enough to account for the literal loss of self. When a social network, let’s use Facebook as an example, or an e-commerce platform like Amazon, knows much more about me than I know about myself, that strikes me as highly risky. To what ends are they going to use that information, and am I in charge of my own behavior at that point? Am I really in charge of my spin, and am I in charge of the jobs I’m going to see, or am I only seeing the jobs that the algorithms want me to see? So, it’s a question of power and agency, and it gets philosophical because it asks: Who am I? And are the choices I’m making my own, or am I actually some form of a pawn in a larger algorithm designed to optimize an extraction of value FROM me, rather than the creation of value FOR me? It’s a big social question, political question, economic question, and, I think, ultimately an existential question. It’s not obvious to me that the economic value or the efficiency outweighs the risk to a coherent democracy where we share experiences, and to a sense of self, where we really are in charge of the choices we make.

DK: This idea of personalization is kind of a driving force of the tech world. Even in the community that I serve—the education and training space—there are efforts towards creating personalized experiences that provide the information that people need to know; when they need it; and in the context of what they’re doing. It sounds great and can be a transformative shift in how we look at learning, but it’s built upon this foundation of personal information. How does an organization, an individual, or society know where to place that yardstick?

Baratunde: So I think a big area worth exploring is shifting some of the access to the data and the insights around them back to the person; back to the user; back to the customer; back to the citizen; back to the human-centric view, rather than a corporation-centric or entity-centric view.

I don’t want access to all the raw data about me; I can’t do very much with it. But I do want some access to the tools that are used to interpret it, so I can learn more about myself. I want to understand the reason why YouTube is recommending this video, and maybe be shown why I am not getting some of those others, so I can make a choice. The risk is passively following a thread because a particular algorithm is set up to optimize a particular outcome that was not democratically decided, or even decided by me as the user. Expose me to what some of those choices are, and let me adapt. Give me a dashboard that lets me check in every quarter and says: Hey, last year you were watching a ton of cooking videos; are you really that interested in cooking? You told me that you really wanted to learn more about physics for whatever reason; let’s adjust the mix. Let’s adjust the recipe and give us a chance to intervene and reset, because I think what happens now is frightening. There have been some very frightening studies with YouTube as an example. You just start using autoplay on YouTube and you follow that algorithm, and it literally leads to Nazis, because people are hacking that algorithm. It escalates into clickbait and more extreme hacking of our own psychology.

I think there is a magic world coming, and I’ve described it previously as the anticipatory web. We’re entering into that. We’re not going to make active choices in parts of our lives. We won’t choose which route our vehicle takes to get us to work; we may not choose who we date; we may not choose what we eat because we’ll have a DNA scan; we’ll have an economic profitability scan; we’ll have a geo-location sensibility; and we’ll have temperature sensors and friend algorithms; and we’ll put all that into the mix and then just follow this kind of proverbial path, a GPS through life in our own universe. That to me is too much. These systems will serve us, but I think part of being human is making active choices. Part of being a student is exploring the things we don’t even know we want to know. To limit education and learning to just the things I say I want is going to poorly serve me. I think there’s room for general education that’s outside of this optimized, narrow personal path. I want to mix that in to balance out this algorithmic future that we are clearly headed toward.

DK: It’s a fascinating world that we’re entering and I really appreciate you spending some time with me today exploring the work that you do in that space.

Baratunde: Yeah. Most of the actions and work we do as people will have some technological system encroaching on what previously we kind of dominated. I’m seeing it even in writing, comedy, and performance. There’s this fascinating thing called Botnik Studios where they suck in large bodies of text. It’s like machine learning applied to comedy. So they suck in every script from Seinfeld and then have their team generate a Seinfeld script. So they have the humans work with the machine. You’ve seen this sort of thing with automated chess systems versus human chess systems, and the best player is neither fully machine nor fully human; it’s this hybrid.

I think the real future is going to have to do that kind of dance together where we will have technology; we will have personalization and data, but we won’t totally abdicate our agency to it because that’s a boring, garbage life. But we won’t totally rewind to this sort of fully agricultural analog society either, because we’ll feel like we’re missing out or we’re moving too slowly. The trick is: How do we work with the machines? How do we learn with the personalization engines and yet still maintain humanity in the process?

Most interesting to me is: If we needed to, would we be able to revert to the old world if there was a massive power outage or glitch in the matrix? Would we still know how to eat correctly? Still know how to transport ourselves? There might be this WALL-E-type future that we are actually in, where if we unplugged it all, we’d all starve; we’d make bad music; and we’d have garbage jobs. I don’t think that’s healthy, either. I think we all need to know how to grow stuff; we’ve got to know how to do some things manually. That will be the new gym membership. The gym of the future is just typing, because typing will be deemed an ancient skill, since the way we will connect with computers in the future will likely be through gestures and voice.

So it’s fascinating to think about, and I’m really looking forward to playing out more of this with you and the eLearning community.

David Kelly is the Executive Vice President and Executive Director for The eLearning Guild’s face-to-face events and conferences. Prior to joining The eLearning Guild, he was an internal learning and performance consultant and training director for over 10 years in both the financial services and non-profit sectors. David has served as a local board member and national adviser to ASTD Chapters. He is active in the learning community, and often speaks at industry conferences and events. In 2011, the eLearning Council voted him one of the 10 most influential eLearning bloggers for his blog, “Misadventures in Learning.” He is also known for his curation efforts, especially related to conferences and events for learning and performance professionals.

