Flow: A Critical Forum on Television and Media Culture — David Church / Indiana University (March 2, 2015; http://flowtv.org/?p=26619)

Among the voluminous critical commentary produced over the decades on the social experience of moviegoing, one of the most recurrent strains concerns the public comportment of one’s fellow patrons.1 For a medium whose culturally dominant forms have, after all, privileged the spectator’s immersion in self-contained narrative worlds, the ever-present possibility remains that less disciplined or overtly disruptive ticketholders will effectively become an unexpected (and unrewarding) part of the show. Still, this common criticism of moviegoers (always others, never oneself) may now have less to do with patrons’ actual behavior than with the relative exceptionality of going to a public movie theater in an era saturated with home viewing options. If, as Charles Acland’s research has shown, avid moviegoers (i.e., those attending more than one movie every two weeks) now account for only about 8 percent of the U.S. population but make up 75 percent of a film’s ticket buyers during its first two weeks in theaters,2 then instances of misbehavior might be all the more likely to stick in the memories of that far larger demographic that treats theatrical attendance as a special occasion.

Further examples of silent-era pre-show projector slides.

In cinema’s silent years, undisciplined comportment derived from over-sized hats, smoking, whistling, foot stomping, and so on—as the pre-show projector slides shown above imply. In our present moment, common complaints have coalesced around the intrusion of external technology—especially cell phone use—into a cinematic apparatus which, for all its vaunted digital transitions, still basically retains its century-old relationship between a projector and a single screen. Meanwhile, notable exceptions to these expectations have become framed as special events (and accordingly sold at premium ticket prices), even if they more closely resemble the quotidian pleasures of home viewing: from Disney’s experimentation with “second screen” events where patrons play games and access expanded content on an iPad app while watching the film, to Hecklevision screenings in which viewers use the MuVChat app to text snarky comments that appear onscreen during a “bad” movie (à la Mystery Science Theater 3000).

As might be expected, complaints about audience behavior primarily center around the corporately owned multiplex—that most populist of theatrical sites, where ostensibly broad tastes and broad viewerships come together under the same blandly interchangeable roofs. Anyone who has been to a multiplex in the last ten years has, of course, seen the pre-roll promos warning patrons to silence their cell phones and refrain from talking or texting, although certain chains enforce these rules of decorum with more verve than others. The Alamo Drafthouse Cinema chain has, for example, manifestly promoted its anti-texting policies as a means of differentiating itself from larger corporate chains like AMC and Regal, a practice complementing its selective programming of cult cinema and special screenings geared toward film aficionados who can exhibit “proper” modes of aesthetic appreciation.3 Take the 2011 voicemail from an angry Drafthouse ejectee, since repurposed in one of the chain’s anti-texting promos:

As the misogynistic tenor of so many YouTube comments (e.g., “dumb girl,” “stupid bitch”) on this video suggests, however, calls for better theater etiquette can easily shade into a thinly veiled demonization of mass (multiplex) culture and its audiences as “feminized” philistines.4 Likewise, when a retired police captain shot an unarmed man in a dispute over texting at a Florida multiplex in January 2014, many online commenters reacted to the news coverage as a cautionary tale or even a violent wish-fulfillment fantasy about the consequences of theater misconduct. And yet, between the twin poles of second-screen events aimed at entertaining antsy kids/millennials and theater franchises that make a virtue of aggressive “shushing,” there are a wide variety of disruptive behaviors existing beyond the multiplex.

Indeed, the persistently elitist assumption that multiplex patrons are the worst behaved moviegoers belies the fact that audiences at art houses and repertory theaters are as likely—or even more likely—to be disruptive (albeit in sometimes different ways). Surely readers who have spent their fair share of time in arthouse theaters will have their own personal batch of horror stories about eccentric, unsettling, or otherwise unconventional characters or behavior rivaling that of any multiplex show: inappropriate or bizarre outbursts, idiosyncratic seating rituals, odd comings-and-goings, smuggled-in meals, and other breaches of decorum. There are, of course, plenty of historical precedents for unconventional arthouse behavior—such as the origin stories for cult films like Casablanca (1942) and The Rocky Horror Picture Show (1975), in which rep theater viewers developed viewing rituals to perform along with the film out of (respectively) earnest appreciation or ironic deprecation—but such cult activities are presumed to enhance the overall moviegoing experience, not detract from it.5

Lucas Hilderbrand observes that bootleg recordings of movies captured at multiplex theaters often reveal the sheer range of ambient sights and sounds (e.g., glowing exit signs, quiet verbalizations, eating noises, people occasionally coming and going) that the average viewer is generally accustomed to tuning out.6 But what of less disciplined behavior in the arthouse experience? The most obvious explanation might be that such misbehavior is simply more noticeable because the films shown in art houses tend to be more challenging in form and content, ostensibly requiring more focused contemplation than the average Hollywood picture—but that argument would certainly not apply to all films programmed at either specialty theaters or multiplexes (especially around awards season).

Another consideration is the question of which types of people are more likely to attend art houses than multiplexes, particularly as expressions of cultural distinction. A recent study has found that people tend to ascribe higher value and more “authenticity” to an artwork if the artist him/herself is perceived as eccentric,7 but we might ponder whether the reverse is true: are more eccentric people more likely to gravitate toward “higher” art forms and the exhibition spaces associated therewith? In other words, do the same taste biases that would associate art houses with a “higher” or more “refined” class of cinema also encourage a “stranger” clientele—or perhaps at least a demographic more oblivious to its own potential for disruptiveness? If texting in the multiplex seems to scream self-absorbed entitlement, what of the arthouse aficionado’s distracting or eccentric comportment, cloaked beneath the aesthete’s self-satisfaction of simply being in an independent theater watching artsier fare?

For an extreme case, see Angela Christlieb and Stephen Kijak’s documentary Cinemania (2002), which follows a cadre of New Yorkers with an obsessive-compulsive drive to attend hundreds of movies per month at the city’s repertory houses. Most of them are unemployed, living off disability benefits, and it is not difficult to see their cinephilia as a potential expression of underlying personality disorders.

Extreme cases though they may be, such patrons nevertheless suggest that aesthetic preferences toward theaters showing less conventional cinematic fare may be inevitably accompanied by encounters with less conventional viewers than found at the multiplex. Tobin Siebers argues that “modern art’s love affair with misshapen and twisted bodies, stunning variety of human forms, intense representation of traumatic injury and psychological alienation, and underlying preoccupation with wounds and tormented flesh” reveals the centrality of disability and other forms of human variation to judgments of aesthetic worth.8 Yet, if mental or physical difference is so often aestheticized within modern art, “[t]he ability of the work of art to take possession of an audience…is almost always treated as serving a call to knowledge or greater self-possession, and those who are possessed by more powerful experiences are thought to be mentally defective.”9 As easy as it might be to dismiss art houses’ more disruptive attendees as “kooks” or “weirdoes,” we might do well to consider them an added price of admission to cinematic art forms that already fall outside the cultural or economic norm. After all, if a taste for experiential diversity likely draws us to such theaters in the first place, then what does reviling this human diversity lurking just outside the screen truly accomplish?

For a history of such responses, see Richard Butsch’s books The Making of American Audiences from Stage to Television, 1750-1990 (Cambridge, UK: Cambridge University Press, 2000); and The Citizen Audience: Crowds, Publics, and Individuals (New York: Routledge, 2008).

Each semester I teach courses on digital media, which means each semester I discuss issues of online privacy with college students. While their concerns and practices have evolved over the course of these conversations, one privacy issue keeps emerging as increasingly concerning to them: (future) employers’ use of the internet to find information about them. While the internet can be a great tool for networking and promoting one’s work, it can also lead to unwanted discoveries and misinterpretations. It seems almost every week I read yet another news story about someone being fired for something they said online. To a certain extent, some of these firings seem justified, as might be the case if an employee discusses something that explicitly violates company policies of disclosure or conduct. However, in far too many instances, the firings bring up questions of ethics, privacy, identity expression, and work-life boundaries that make me, and my students, increasingly uncomfortable.

Take, for example, the Ohio elementary school teacher who posted pictures of live animals in crates on his Facebook account. He is a vegan and was trying to raise awareness about the inhumane treatment of many farm animals. This is quite clearly a personal value that does not disgrace the school, nor present him as an unacceptable role model for students. Yet, he was fired for expressing his vegan values. The reason? His school was in a rural area of Ohio, one in which many of his students’ families earned their income from farming. He was told he “might offend the community and the economic interests of the community…if [he] wanted to be a strong vegan advocate, [he] might want to look into something other than teaching.” Never mind that he was offended by the unethical treatment of animals; his potential to offend others (and threaten their economic interests) was deemed more important than his right to express himself online. To me, the school infringed upon the teacher’s right to his own beliefs and the right to freely express those beliefs in a personal space. What this example highlights (and it is only one of many, many more like it) is that employers’ surveillance of employees’ online profiles is just as much about identity and speech as it is about privacy. Really, it’s about constructing particular subjectivities that are valued in the workforce, even at the cost of other subject positions and identities.

A teacher/girls’ basketball coach in Idaho posted this photo of her fiancé (the school’s football coach) and herself on summer vacation. She was fired (three months after posting the photo), he was only reprimanded.

Far too often, conversations about employers’ use of digital media to monitor employees get thrown into the camp of “well, just be careful about what you put online.” In other words, don’t be stupid about what you share with your employer and you’ll be fine. Similar to my earlier Flow column regarding sexting and Snapchat, I believe this rhetoric falsely presumes that we alone are responsible for our own privacy and that we have complete control over it. It reduces a complicated issue—identity expression and speech—to an issue of mere individual responsibility, and thus dismisses the questions of ethics and boundaries altogether. What the Ohio teacher example reveals is the extent to which we are being disciplined to think of our online identities—and therefore our subjectivities—first and foremost in terms of workers. Arguably the Ohio teacher did not do anything patently offensive, did not bring shame or harm upon the school or his students, and in all likelihood thought he was “being smart” about what he posted. Yet, in essence, what the school fired him for was expressing beliefs that were in contrast to those of his work environment. That’s it. And that’s scary.

Computers and internet-enabled technologies have been constructed as “boundary-crossing” technologies1 that permeate and blur the boundaries of personal and work life. Such technologies have led to a variety of legal issues: When can an employer view files on a company-owned laptop that the employee also takes home? Can an employer access text messages sent outside of work hours if the employee’s mobile service is paid for by the employer? Can employees have any expectation of privacy when accessing personal email at work? Such issues have been addressed in court to varying degrees. At the heart of these issues is the fact that telecommunication technologies allow our personal lives to be increasingly visible and accessible at work, and of course that means it is increasingly possible for work to invade our personal spaces and time as well (e.g., how many of us check work email from home and “off the clock” on a daily basis?).

Findings from a survey conducted by the social network monitoring company, Reppler. Note the frequency that online information is used to hurt the employer’s perception of the employee.

Thus, the use of social media becomes just the latest iteration of slippery legal and ethical questions we must consider, questions that require us to rethink boundaries between personal and work life. Certainly many employees create and maintain accounts on social media prior to being hired; they use them to connect with individuals outside of work and often think of them as personal and private spaces. Even if employees know their employers may have some access to these accounts, that doesn’t negate the fact that we still tend to think of our social media profiles as personal spaces for expression and connection. Research demonstrates that social networking sites are useful in helping us maintain latent and weak ties, and to acquire emotional and social capital.2 There are many uses, motivations, and benefits of participating in social media that have little to nothing to do with our roles as workers in the marketplace.

Likewise, the internet has been heralded as a tool for democracy that allows underrepresented or misrepresented populations to express their opinions and experiences.3 While we know that there are limitations and deep-seated systemic inequalities that cannot be easily eradicated via the internet, it does nonetheless provide spaces for marginalized populations to potentially network, build community, organize, and foster change.4 However, such opportunities are likely to be stifled when we are disciplined to think of ourselves first and foremost in terms of (potential) employees. Within a neoliberal context, we are being disciplined to use online spaces as sites to invest in our “human capital.”5 Thus, I think it is imperative we ask: what are we sacrificing, and what is the cost (to ourselves and to society), when we must not only police what we share and say in online spaces out of fear of being fired for personal activities, but must also think about how our personal values could potentially hurt our reputation in the workforce (even when they do not intersect with our job descriptions)?

Furthermore, the extent to which individuals must invest in their “good worker” identities (i.e., the ability to capitalize on their skills, qualifications, qualities, etc.) becomes even more problematic when we take into consideration the literacies and skillsets necessary to intentionally construct positive online identities (as interpreted by the marketplace). Research reveals that some individuals are better prepared and equipped to participate in such ways than others. Knowingly constructing an “acceptable” online identity involves not only technical access and competencies, but also social and network literacies that some individuals have not developed,6 nor do employees and employers necessarily share the same culturally contextualized understandings of these spaces and identities.

Findings from a survey conducted by the social network monitoring company, Reppler. Notice employers tend to prefer personal/social sites over LinkedIn (a space intended for professional networking and use).

To return to the concerns expressed by the college students in my classroom, here’s what I’m seeing: to a certain degree they are hyperaware of online privacy concerns; they know not to post pictures of red Solo cups, they know that even a cigarette might be mistaken for a joint, they know better than to be blatantly racist or homophobic. In other words, they’re trying to do all the “right” things online so they can get jobs. But what I’m hearing is that they are so afraid of something being taken out of context—an offhand joke between friends—or that their sexual or religious or political identities might be used against them, that some are opting out of online social networking almost altogether. There’s a reason they are using private Instagram accounts, ephemeral Snapchat apps, and anonymous Tumblrs—these sites are disconnected from any sort of public online profile or community. On the one hand, these are effective strategies that afford greater privacy and therefore more freedom of expression. But we also know that social networking sites such as Facebook and Twitter are good at maintaining weak ties with those who aren’t in our immediate social circles.7 Weak ties are valuable for exposure to diverse ideas, for getting jobs, and for expanding opportunities. When students opt out of the more diverse and open “networked publics” for the more insular and private forms of interpersonal communication, what is lost?

Lastly, I’ve been primarily discussing this in the context of middle-class jobs and middle-class employees. But how do these issues become even more important when we think about minimum wage jobs and nondominant populations? Low-income workers are often subjected to greater surveillance,8 for example, as evidenced by drug testing for minimum wage jobs (but not white-collar jobs). Additionally, many nondominant populations may not have the digital and social literacies required to protect their privacy and construct “good” online identities. We should be concerned that employee surveillance further exacerbates inequalities. Who is being monitored, in what ways, for what purposes? How much transparency is there? To what extent should employers at the very least inform employees that they are being searched and monitored? What about individuals who opt out of online networks altogether? Or those who have very common names that can lead to mistaken identities or guilt by algorithmic association? Or those who have adolescent mistakes in their past they would like to cover up and move on from? Or what about nondominant expressions of cultural capital that are ripe for misinterpretation by those who benefit from and maintain the status quo?9 And, of course, we can’t overlook the opportunities for blatant discrimination based on age, sex, ethnicity, and religion. While these are legally protected categories, we know that in practice it is all too easy for an employer to ascertain this information online. And we cannot forget that there are still 29 states in the U.S. in which it is legal to fire someone for being gay (an identity no one should have to hide, and yet one that social media often renders visible and thus open to discrimination).

Suggested policies for employee monitoring (Gartner Inc.).

These are all legitimate privacy concerns, and thankfully some are starting to be addressed in legal literature.10 But what I’m also increasingly concerned about are the unintended consequences that extend beyond explicit questions of privacy in and of itself. I worry that these modes of surveillance have the potential to chill speech, further silence marginalized identities and experiences, and hinder opportunities for individuals to invest in and acquire other kinds of capital (such as social, emotional, and political). The effects of employee monitoring serve to discipline individuals into modes of self-regulation that have potentially detrimental effects and consequences that far exceed blatant discriminatory hiring/firing practices. Thus, we need to think deeply and critically about how privacy laws and norms must evolve to take into consideration not only expectations of privacy, but also the detrimental consequences surveillance has on other areas of online and offline life and society. And lastly, we need to know who is most likely to be harmed by these practices. My concern is that not all populations are equally affected by increasing modes of online surveillance, thus as researchers we must continue to conduct in-depth, empirical, critical, and diverse research into such questions.

*Author’s note: This article is very much a work-in-progress as I am beginning a much larger empirical study into these questions. I would greatly appreciate any advice and feedback about the direction the research should take.

Ellison, N.B., Steinfield, C., and Lampe, C. (2007). The benefits of Facebook ‘friends’: social capital and college students’ use of online social network sites. Journal of Computer-Mediated Communication 12(4), pp. 1143-1168.

Liveness has long been popularly thought to be at the core of television’s essence as both an information and aesthetic medium.1 Today, though, what’s left of the collectively experienced, evanescent moments made possible by live broadcasting provides the only incentive for audiences to tune in at appointed times and slog through commercial breaks. Over a decade into television’s on-demand-driven era of convergence, we’ve grown alternately to embrace and tolerate liveness mostly for major sports match-ups and special events like awards ceremonies. This trend only seems likely to continue with every new record-breaking telecast rights deal between networks and sports leagues or record ad rate for an Oscar broadcast. At the same time, and with considerably less hullabaloo, the pleasures and possibilities of liveness have inexorably ebbed from just about every other form of scripted television entertainment. This has been particularly apparent in the last two seasons of Saturday Night Live, coming to a head earlier this month with the sketch comedy stalwart’s 40th anniversary special, “SNL 40.”

As it’s always been with SNL, the small, fleeting moments tend to be the show’s most significant ones. To the chagrin of many SNL devotees, Eddie Murphy’s appearance on “SNL 40” was stilted and brief, awkwardly cutting to commercial when it became clear Murphy would give little more than a perfunctory “thanks.” Noted small person and longtime Lorne Michaels confidant Paul Simon closed the show with “Still Crazy After All These Years,” a gentle performance no less touching than the first time he sang it on SNL’s second episode in 1975. Maybe the most meaningful small moment of the special, though, arrived at the end as several generations of SNL cast-members and celebrities crowded the studio 8H stage to wave goodnight. Michaels grudgingly received their well wishes as the credits rolled, but not before The Tonight Show host Jimmy Fallon quickly darted over and shared a private sentiment that elicited a rare, publicly visible smile and laugh from Michaels.

Jimmy Fallon snags a personal moment with Lorne Michaels during the closing credits

Fallon had opened the show performing a medley of SNL highlights alongside Justin Timberlake in exactly the kind of quick-hit mash-up that’s ready-made for Internet spreadability. Of course, excerptible digital videos have become The Tonight Show’s calling card under Fallon and Michaels, extending the show’s cultural reach beyond late night in much the same way The Lonely Island’s digital shorts did for SNL a decade ago. Eddie Murphy’s “SNL 40” appearance has similarly lived on in social media conversations not because of what he did or said onstage, but because of the stories popping up in its wake. According to former cast-member Norm Macdonald, Murphy declined the opportunity to lampoon Bill Cosby over the many sexual assault allegations against him in a “Jeopardy!” sketch, a decision for which Cosby then publicly thanked Murphy.

As Mike Myers and Dana Carvey unintentionally noted in a “Top Ten” list during their “Wayne’s World” sketch, liveness has become the least important aspect of Saturday Night Live, particularly as a site of aesthetic innovation. Splitsider’s Erik Voss made a similar observation in breaking down last season’s brilliant “Darrell’s House” sketches, in which a public access television host played by Zach Galifianakis frustratedly asks for several of his flubs to be fixed in post, with the version containing his requested cuts airing half an hour later in SNL’s live broadcast. According to Voss, the sketch demonstrates an experimental sensibility—one fundamentally based in liveness—rarely seen in SNL’s predictable parade of pre-“Weekend Update” parodies of celebrities and political goings-on.

A “Wayne’s World” sketch highlights the lagging importance of liveness for SNL

SNL’s move away from liveness is partly a matter of survival. The show is in year two of a very rough rebuilding process, and it has increasingly relied on recorded material—comprising a full third of the show now—and writers trained at the Internet comedy powerhouses of Funny or Die and CollegeHumor to ease the transition. The diminished role of liveness on SNL is also a matter of generational tastes. The baby boomers of SNL’s early cast and writers were weaned on the vaudevillian holdovers and boozy breaking of live, network era, variety-style comedy. The current cast, all of whom were born after SNL’s 1975 premiere, are more literate in the millennial sensibilities of mash-up and distracted viewing.

More than anything, though, SNL is participating in a broader debunking of liveness as television’s ontological essence. Certainly, live events continue to be incredibly important to television networks and advertisers. Their cultural import, however, is highly dependent on their afterlife among social media networks and news cycles, or at least upon the success or failure of the next hastily-produced awards show to generate clicks. Some live events seem to be cynically conceived of as a Twitter trending topic first and as a television entertainment program in name only. With an increasing investment in collapsing the distinction between the two, SNL might just as well have displayed a graphic for “Liveness” during the “In Memoriam” segment of “SNL 40.”

Image Credits:
All images are from the author’s personal collection.

NOTES

Of course, the notion of liveness as television’s ontological essence has been thoroughly critiqued by television scholars over the years. See, among others, Jane Feuer, “The Concept of Live Television: Ontology as Ideology,” in Regarding Television: Critical Approaches—An Anthology, ed. E. Ann Kaplan (Los Angeles: The American Film Institute, 1983) and Elana Levine, “Live! Defining Television Quality at the Turn of the 21st Century,” http://cmsw.mit.edu/mit3/papers/elana_levine.pdf

Historically, broadcasting, which distributed content over the publicly owned airwaves, dramatically extended the reach of popular culture, particularly through live airings that brought entire families in front of the radio or television set. Today, however, audience fragmentation, a pull (rather than push) method of consumption, and channel/platform proliferation have limited water-cooler chatter to only the most major of events, like the Super Bowl. Beyond major events, it is quite difficult to find common ground in terms of simultaneous viewing. Critics have had to account for this lack of simultaneity, tardy viewers surf the web haunted by the threat of spoilers, and television programs like Scandal work every social media angle available to maintain some sense of “must-see” (read: live) viewing.

At the Flow Conference in October 2014, panel moderator Horace Newcomb repeatedly asked a panel of industry experts about the implications of audience fragmentation. In particular, he wanted to know what is at stake if we lose the communal nature of television. Acknowledging his own investment in the model of television as a “cultural forum,” Newcomb was not able to entice the other panelists to engage in this debate, perhaps due to their realistic acknowledgment that the days of mass media may be over. Yet Newcomb’s question speaks to broader, and significant, changes in the cultural role of television, especially its ability to incite debate about the most urgent issues facing Americans today. Consider a program like House of Cards—the show has its critics, but it nevertheless attempts to expose an extremely ugly side of politics that could inspire discussions of real-life political scandals. But that show streams exclusively to Netflix subscribers, accessible to non-subscribers only through piracy or digital purchase. Similarly, Amazon’s groundbreaking portrayal of a transsexual father of three in Transparent, which became the first online program to win a Golden Globe, is available only to Amazon Prime members. A recent one-time offer by Amazon to watch Transparent for free, designed to drive increased subscriptions to their Prime service, only highlights the limited access to the program for non-subscribers. Is Newcomb right that the people who most need to see a sensitive portrayal of a man’s decision to live life as a woman probably never will see it?

Transparent

Intensifying this silo-ing of TV viewers is one of the hottest trends in the business of digital content licensing: exclusivity, or giving exhibition rights to one distributor (here: one website) only. One example of an exclusive licensing deal is CBS’s arrangement with Hulu that designates it as the only streaming source for episodes of the Sherlock Holmes series Elementary.1 A key feature of the deal, however, requires Hulu to wait until after the conclusion of the program’s third season on CBS to begin streaming older seasons, likely an effort to encourage viewers to keep up with the program live as it airs on CBS. Children’s content has also become a hot ticket in the licensing wars between subscription video sites like Netflix and Amazon. Netflix made a wide-ranging deal with Dreamworks to air 300 hours of familiar and new children’s programming, while Amazon established an exclusive streaming arrangement with Viacom for Nickelodeon programs. Among streaming sites, including smaller competitors to Netflix and Amazon like Hulu, AOL, Vimeo, and Yahoo Screen, there have been countless smaller exclusive arrangements for individual programs or groups of shows, including New Girl, South Park, Community, HBO original content, and a block of FX’s original series.

Exclusive licensing is a trend, but not necessarily a new one; HBO subscribers, for example, have long enjoyed privileged access to HBO content.2 Among subscription video businesses like HBO and streaming video on demand sites, exclusive licensing sustains their business model by providing special access to subscribers, and it can involve both new and old content, particularly as digital windows often serve as one more outlet to syndicate broadcast programs. As Reed Hastings of Netflix explained, “If the content is not exclusive and it’s on cable and on other services, it might be pleasant to watch on Netflix, but it’s not really reinforcing customers to stay with Netflix.”

With distribution platforms proliferating online, and dominant streaming sites like Netflix and Amazon expanding their production of original content, an exclusive licensing deal can serve as a distinguishing factor for any website (the licensee) trying to build a recognizable brand. For instance, Yahoo’s decision to resurrect Community (canceled by NBC) as an exclusive weekly web series will bring considerable attention to a second-tier site like Yahoo Screen. For the content owner, or licensor, exclusivity serves as the mechanism for extracting enhanced value from content, and it provides a program with a longer life span.

Community on Yahoo

Enhanced value matters in a marketplace that defies the usual agreements that determine exchange value. In traditional media distribution, like over-the-air television, the ratings system has withstood decades of methodological3 and technological4 shifts to serve as what Ien Ang has termed the “convenient fiction” of ratings accuracy. The fiction is convenient because it is only through a common faith in the accuracy of this data that television networks and advertisers can agree upon a fair market value for the airtime purchased by sponsors. While not without controversy, the data collection practices of companies like Nielsen have established some stability for television stakeholders needing a mutually agreed upon measurement system. Online, though, no such convenient fiction exists. Exclusive licensing deals protect content value by negotiating premium licensing fees and by limiting the circulation of content online.5

The stakes for subscription-based media companies are relatively clear—a popular series can help a site gain a competitive advantage in a crowded content landscape—but what are the implications of exclusive licensing for viewers? “Disruption” has long been a buzzword to describe contemporary media change, but its application often angles toward industry concerns about monetization, piracy, and threatening new media companies. Industry discourse tends to isolate viewer needs through the rhetoric of “TV everywhere” (or TVE), a term popularized by Time Warner’s Jeff Bewkes that describes consumer desire to watch television where, when, and how they want. TVE may purport to describe an on-demand world where consumers drive engagement, but its narrower business function protects the operations of cable companies.

The best way for cable companies to maintain a subscriber base in an increasingly online world is to build up their “on demand” and online streaming capabilities. Take, for example, industry leader Comcast’s intensive focus on branding its own TVE efforts as “Xfinity” and, now, X1. While technical challenges have slowed the wider adoption of TVE, it should not be misconstrued as an effort to break down the walls that control access to cable content—in fact, TVE reinforces those walls. There is an unsettling economic imperative to an on-demand culture that extends the logic of television as a paid—rather than a free—medium, with exclusivity as a particularly effective tactic to encourage financial outlay by consumers for access.

Broadcasting has of course never been truly free in that Americans had to own a receiver and “paid” for content through the labor of viewing (eyeballs converted into ratings data points translated into dollars), but today there are a variety of additional ways viewers pay for (access to) content. From HD televisions to cable subscriptions, over-the-top devices, digital video recorders, and a variety of subscription packages—viewers pay for “free” television repeatedly.6

Increasingly, however, even “free” content is not free online. Broadcast network CBS has created a subscription video-on-demand site called CBS All Access, providing viewers with a deeper library of content for $5.99 a month. So, what you could watch for “free” during initial airings on CBS, you can pay to watch delayed, on demand, and live through an internet-connected device. Similarly, you could watch NBC’s most recent airing of the Olympics for “free” live during its highly edited primetime broadcasts, but if you wanted to view a live stream online or otherwise catch up on demand, you had to be a cable subscriber. Sports, in general, have proven to be a powerful motivator for viewers to watch content live, partly because streaming sports platforms usually require a subscription fee. Streaming may therefore be convenient, but it adds to the costs of television viewing today.

The flip side of the debate is that subscription streaming television on Netflix and Amazon airs without commercials, which means the content may not be free, but at least Americans are not paying for it twice (through their subscription and their viewing of advertisements). Television has always positioned the viewer as ancillary, with the real business transactions occurring between the network and the sponsor. The relationship between the two—content distributor and advertisers—is longstanding but has also been adopted widely by new media companies like Facebook, Buzzfeed, and Twitter. Netflix and Amazon have so far chosen not to adopt the advertising model.7 Netflix does not report its viewing numbers publicly, and Amazon is perhaps even more cagey, unwilling to break down how many users subscribe to Prime, how many of those subscribers then stream content through Amazon Instant Video, and how many individuals stream particular programs. Because these streaming subscription sites do not participate in the advertising economy, they not only protect the secrecy of their (likely small) viewing data but also sell directly to consumers in a way that broadcasters and basic cable networks have never done. Direct to consumer may indeed be the brave new world of television, though most likely, it will cost a lot more up front.

WGN America made a similar deal to air Elementary on its cable channel, so while WGN enjoyed exclusivity among cable channels, Hulu had the same privilege among online streaming sites.

HBO eased its control over some of its original programs when it dabbled in syndication, exploring as early as 2000 the possibility of licensing Sex and the City to the wider masses of basic cable.

For example, Nielsen has endured longstanding questions about accurate accounting for minority viewers among its sample size.

Among the prominent technological changes to Nielsen’s survey methodology have been its adoption of People Meters and its current efforts to account for mobile viewing.

Within the economics of media, the practice of distribution windowing has created an artificial scarcity that has helped elevate the value of content that is otherwise a public good.

It is worth noting that the concept of “free” TV continues to have importance in public culture. Consider that the U.S. government demonstrated an impressive dedication to the notion of broadcasting as “free” when it underwrote the costs of analog households buying a digital converter box to prepare their cathode-ray television sets for the digital transition in 2009.

Amazon and Netflix may not deliver ratings data to a measurement company, but they are decidedly active data crunchers, tracking consumer behavior on their own websites.

“The worst thing the French ever gave us is the auteur theory,” he said flatly. “It’s a load of horseshit. You don’t make a movie by yourself, you certainly don’t make a TV show by yourself. You invest people in their work. You make people feel comfortable in their jobs; you keep people talking.”
- Vince Gilligan1

Despite the protestations of Breaking Bad creator and showrunner Vince Gilligan, the orthodoxy surrounding the “showrunner” as the primary author of TV remains rigidly embedded within the discipline, a discourse that sweepingly omits the contributions of many collaborators, actors in particular. This omission is partly due to the lack of scholarship that concretely explains what an actor does within a given show. Since an actor’s labor is ethereal in nature, credit tends to be attributed to other agents, such as writers and directors, or ignored completely.

Acting is not commonly discussed in relation to television. More often than not, television acting, and by proxy television actors, have been viewed as inferior to film, the medium more often associated with prestige. Although TV’s cultural value is on the upswing, TV acting remains an under-examined subject, especially in the latest era of great television. However, as Torben Grodal reminds us, scholars and critics are drawn to directors, while audiences relate to actors and their performances.2 So, while TV may now be more cinematic, good acting is still what ultimately fuels the engine.

I offer Bryan Cranston’s universally lauded, multiple Emmy Award-winning performance in Breaking Bad as an intervention into this problem. By characterizing Cranston’s portrayal of Walter White as a “long-form performance text” and concentrating on what is arguably the series’ most memorable moment, I propose an alternate method for considering television authorship and an actor’s contribution to a series. As Cranston’s five-season character arc from the nebbish White to methamphetamine kingpin “Heisenberg” is one of the most pronounced and vivid transformations in television history, I maintain that it can be used to read the moments of agency and labor that Cranston brought to his role.

Only by considering Cranston as a complementary author to showrunner Vince Gilligan can we understand how he shapes the series, and how actors function within television more generally. The question remains: how do we quantify Cranston’s performance as Walter White? Without direct access to his script pages, there is little room for analysis of the actor’s interpretation, not to mention the difficulty of the translation from script page to screen. Moreover, how is Cranston’s performance shaped by his fellow actors, including multiple Emmy-winning performers Aaron Paul and Anna Gunn?

Inspired by Cynthia Baron and Sharon Marie Carnicke’s analysis of “character interactions” within The Grifters (Stephen Frears, 1990), I will analyze Breaking Bad’s most famous monologue.3 Next, I will illustrate the scene’s context in order to account for the shifts within it. I will also position the scene within the rules of classical tragedy, in which the protagonist makes a fatal decision that ultimately leads to his downfall. Finally, I will analyze the scene as a performance within a performance, in which Walt’s bravado is actually a sign of his weakness rather than his strength.

Adding nuance to the monologue deemed the series’ best is not an easy task, particularly as the “I am the one who knocks” speech has taken on a life of its own as a robust paratext,4 as seen in Samuel L. Jackson’s reinterpretation of the monologue and its many, many online imitators.

Putting this monologue back into context is a necessary step if we are to understand how these lines work not only within the larger scene but also within the series as a whole. Recontextualizing the lines also means remembering that Walter is not operating from a position of strength at this moment of the series and that these lines are spoken in order to make his wife Skyler submit to his will.

The scene actually begins as Skyler listens to her husband’s message on the answering machine; she believes she detects fear in his voice. She then walks into the bedroom where Walter is nursing his hangover and eventually goads him into talking to her. Conflict here is based on a balance of power, and that power is exchanged over the course of the scene. Walter begins lying down, pretending to be hung over, then snaps into lucidity while Skyler stands over him; when they are level with one another (as this clip begins), they are on equal footing and at equal volume.

As famed acting guru Stella Adler once pronounced, all “acting is reacting.” One reason this scene is so memorable is the skillful work of Anna Gunn as Skyler White. While Cranston may score the goal in this particular power play, it is only because he gets a great assist from his scene partner. So despite the scene being more famous for Cranston’s monologue, it is Gunn who anchors the action and has the more difficult role of reacting to Walter’s bravado.

What the meme-ification of this line misses, then, is the opportunity to read the scene as it plays out organically. The first step in understanding how it operates is to remember that it opens and closes on Skyler (Anna Gunn), giving her much of the focus. Put into this context, one can easily argue that the scene is, in fact, hers rather than Walt’s. Watching the scene again, we can see the dynamics of power and conflict within it.

Indeed, as Cranston relates in the following clip, the scene marks Walt’s complete transformation into Heisenberg as he reveals his tragic flaw.

Dramatic irony also comes into play here. What the audience knows and Skyler does not is that Walter’s speech is itself an act, carefully calculated to silence his wife on the subject. So, despite Walter’s protestations to the contrary, what is clear in the scene is that he is actually in a much more precarious position than he indicates here, making this a violent performance within a performance.

Moreover, what the scene ultimately reveals is that Walt’s superpower is not his ability to cook crystal meth better than anyone else, but, in fact, it is the power of his performances and his ability to lie convincingly. In other words, he is an actor of the highest order, and much of his empire is built on theatre — the main source of his power.

Again, following Adler, Skyler’s reactions to Walter’s bravado are crucial to the way the scene plays out. Taken as a whole, we see a classic staging of the transfer of power, and a demonstration of the extreme violence under the surface; from here, Skyler is essentially cowed into submission by Walter’s display. What is unsaid is as important as what is said: the subtextual dimension (what is neither spoken nor heard within the scene, but is clearly going on in Skyler’s silence) speaks more effectively than Walter’s monologue does. Here, acting combines with editing to register the contrast between Walter’s madness and Skyler’s growing anxiety.

Conclusions

Within this short piece, I have demonstrated that television acting needs to be taken seriously within the context of television’s commercial and critical resurgence. Only by examining Walter White’s famous monologue in relation to the character it is spoken to, and within the series-long character arcs that Cranston, Gunn, and Paul perform, can we productively come to conclusions about the state of contemporary TV. What I offer here is merely a snapshot of the kinds of methods that could be employed if critics and scholars are to effectively understand some of the overlooked elements largely responsible for the current ascension of “quality” television.

Archer is a trap. It is a tantalizing, glimmering object bobbing in the murky, docu-serial infested waters of non-premium cable. Each 22-minute episode says so much, both with words and images, that the critic salivates instinctually. And, yet, Archer is a perfectly designed lure. The joke is on the biter.

To switch metaphors, Archer is the patient whose mother is a surgeon, who took a few anatomy classes in college, who spent the whole afternoon on WebMD. She may not know more about medicine than her doctor. She does, however, know exactly what to say to lead the expert down a path of her choosing.

Archer’s favorite version of this game is Freudian. Its opening scene screams out a litany of Oedipal themes. Sterling Archer, shirtless, muscled and paradigmatically handsome, dangles from a dungeon wall. An interrogator declares that Archer, codename Duchess, is “known from Berlin to Bangkok as the world’s most dangerous spy.” The scene’s S&M implications morph quickly into explications. The ostensible spycraft narrative fades away. The scene lights up, revealing the truth. It has all been a show with an audience of one. Sterling’s boss, an elegant woman in her fifties, has been watching the torture—a training session—from the start. She is also, of course, Archer’s mother. Duchess, in addition to being Archer’s nom de guerre, is revealed to be the name of his mother’s favorite pet. As the scene cuts out, the mother glances longingly at an Annie Leibovitz-style portrait in which she, naked, caresses Duchess—the dog—in bed.

The scene sends a clear instruction to those viewers who have taken the time to memorize Dr. Freud’s number: page him. It’s all right there. A son takes pleasure in pain, sexualizing himself in front of his mother. The mother displaces the sexual bond of the breast onto another object, claiming that Archer’s codename was “random,” i.e., not conscious. Archer, it seems, is luring viewers into a state of comfort through a blend of strange comedy and fantastic animation. It then scratches unconscious itches, giving expression to desires for mother-love that no one dares express in normal, waking life and yet that are capable of providing great pleasure. And perhaps it does.

But, perhaps it does not. More than being Freudian, the scene is about Freud: his place in contemporary humor and his role in the 1950s culture from which Archer draws its aesthetic. Throughout its run, the show has asked viewers to think about Freud when considering Archer’s character. In episode eleven of season three, the script makes this official, with Archer’s on-and-off lover Lana proclaiming: “If you want to know why Archer is Archer, you need to go back in time and have a threesome with Oedipus and Sigmund Freud.” To think overtly about the unconscious, however, is to deny its power. The unconscious is like Keyser Söze’s devil: its power derives from our ability to ignore its existence.

Archer’s mother and her dog, Duchess

And yet, I cannot help but take a bite at Archer’s psychoanalytic spinnerbait. There is, undeniably, something profoundly dreamlike in Archer’s construction. In addition to the oft-noted surreal qualities of animation, the show’s diegetic landscape is perhaps television’s most comprehensive presentation of the “kettle logic” that Freud invokes in The Interpretation of Dreams (1978).1 Using the example of a patient’s dream about returning a broken kettle, Freud notes the mysterious, powerful way in which the unconscious supports simultaneous, incompatible states of being. Questioned by his neighbor, the dreamer in Freud’s example offers three explanations: (1) he returned the kettle unbroken; (2) the kettle was already damaged at the time of his borrowing; (3) he had not, in fact, ever borrowed the kettle. Crucially, these are not alternative excuses, as they would be in waking life. In the world of dreams they can, somehow, be simultaneously true, allowing the unconscious psyche to play out its own contradictory desires and understandings.

Archer’s is a world full of kettles simultaneously borrowed and unborrowed, broken and unbroken. It is a universe in which there are futuristic cellphones, planes, guns, and cars, and yet the Soviet Union and the KGB are going strong. It is a time in which 9/11 has already happened, but Burt Reynolds is still a picture of virility. Archer has an English butler who served in World War I and yet takes supersonic jets to enjoy transgender prostitutes in, of course, Bangkok.

This kettle logic, I submit, is not a commentary on Freud like the mother-love scene that began the series. It is instead a (seemingly unconsciously) attractive way to construct a world in which to play out the complex desires of the unconscious. Yes, some of these are certainly sexual. However, as noted above, the sexual aspects are often tainted by the show’s very intentional invocation of Freudian themes and ideas.

Archer’s mother watching the interrogation

Instead, I turn to the work of Otto Rank, a student and colleague of Freud’s who would eventually diverge in important ways from his mentor. Rank, who is rarely mentioned in the realm of media theory, represents one of the earliest thinkers to overtly connect the moving screen image with the play of the unconscious. In his 1914 book The Double, Rank (1971) analyzes the 1913 film The Student of Prague, noting that cinematography “in numerous ways reminds us of the dreamwork” (p. 4).2 One cannot help but imagine how much Rank’s conviction in this confluence would strengthen were he to see a college student curled up in bed, in the dark, watching seasons of Archer flow from episode to episode as she drifts to sleep.

In his analysis of The Student of Prague, Rank focuses on the literary theme of the double, something he associates with modern man’s desire to maintain a belief in the immortality of the soul in an age that rejects spiritual thinking. However, the heart of Rank’s approach to psychoanalysis resides in what Rankian theorist Ernest Becker (2007) summarizes as “the denial of death.”3 In contrast to Freud, Rank held that people do not, consciously or unconsciously, harbor a desire for their own destruction. Invoking a theory of the psyche in which the conscious mind exercises a greater influence than in Freud’s approach, Rank argued that much, if not all, human activity can be understood through the struggle to reject the fact of one’s mortality. In particular, he argued, dreams can be understood through such a lens. There are two types of dreams pertaining to death, he writes in Psychology and the Soul (2002).4 The first type, in which the dreamer somehow improbably survives danger, denies death by suggesting the subject’s immortality. The second, in which the dreamer either dies or is about to die, also denies death, this time by forcing the subject to awaken and contrast her livingness with a death that has proven illusory. In either case, the dream serves the dreamer by bolstering her ability to keep death at bay in waking life.

Archer, already taking place in a world that recalls the dream state, consistently invokes themes that play on each of Rank’s dream-types. Nearly every episode features someone, most often Archer, on the absolute brink of death. A lethally poisonous snake bites him, he contracts aggressive breast cancer, he falls into a punji tiger pit. He confronts 99 ways to die, but submits to none of them. Whereas most action series slide into the unrealistic in the construction of gun battles, Archer takes the idea ad absurdum. Every Archer episode recalls Butch and Sundance’s showdown with the Bolivian army. Except in Archer, the army is defeated and the heroes live on.

Other plots hew closer to Rank’s second type of dream, going as far as to kill off characters and then, nearly immediately, return them to life. Barry, Archer’s archrival, dies. So does Archer’s fiancée Katya. There is a moment of shock and mourning but then, just as the dreamer awakes, the dead character returns to life, often as a cyborg, denying death in the process. Such scenes, I submit, are attractive not simply due to the comedic logic of their absurdity, but also due to their appeal to the viewer’s desire to demote death from inevitability to impossibility.

It is perhaps odd to dismiss Freudian interpretations of Archer while embracing those of Rank, a follower of Freud. However, in this case, Rank’s relative obscurity helps preserve the potentially unconscious nature of the program’s engagement with death themes. Neither writer nor viewer is likely to step back and view Archer’s comedic approach to mortality as a comment on Rankian theories of death, while they very well may take the time to make conscious the more obvious Freudian aspects of the show. That, or I’m engaging in my own little denial, believing that I’ve mastered the process of death denial and thus, in some way, controlled the uncontrollable. One never knows.

Rank, O. (2002). Psychology and the Soul: A Study of the Origin, Conceptual Evolution, and Nature of the Soul. Baltimore: Johns Hopkins University Press.

Derek Johnson / University of Wisconsin

The “televisualization” of the comic book film.

Looking back at the year 2014, Mark Harris of the sports and pop culture blog Grantland recently characterized Hollywood as haunted by superheroes, unable to break its cyclical dependence on formulaic sequels even as that franchising threatens to “poop all over everything.” Such overwrought, doomsday reflection on the “toxic” and “annihilating” creative atmosphere within the blockbuster-driven film industry is anything but novel. Over at Antenna, Brad Schauer has explored the ways in which critics lamenting the supposed end of narrative in Hollywood position themselves as the “last bastion” of good taste in opposition to the audiences of comic book films, and his research more broadly has revealed the long history by which science fiction and other franchise blockbusters have been dismissed by critics. So I’d add very little by merely taking Harris to task for keeping that story running. But where Harris does make an extremely valuable contribution to our understanding of contemporary Hollywood—in need of both further exploration and further critique of the kind Schauer might call for—is in his realization that the contemporary comic book blockbuster has given film an increasingly televisual quality.

Of greatest concern to Harris about the film industry of 2014 is the way that it replicated itself into 2015 and beyond, as made most tangibly clear by the carefully planned futures of the DC Comics and Marvel Comics film franchises. Each company made spectacular announcements throughout the year revealing the titles of dozens of comic book films to be produced by the end of the decade. As Harris writes, the film industry of 2014 is all about “creating a sense of anticipation in its target audience that is so heightened, so nurtured, and so constant that moviegoers are effectively distracted from how infrequently their expectations are actually satisfied. Movies are no longer about the thing; they’re about the next thing, the tease, the Easter egg, the post-credit sequence, the promise of a future at which the moment we’re in can only hint.” Despite his doom and gloom, Harris provides here an extremely useful perspective on narrative aesthetics in contemporary media franchising. Much as I have argued that media franchising applies the logic of episodic production long central to US television to a host of other entertainment industries, Harris conceptualizes this promise and anticipation of the future as a televisionification of blockbuster film. “TV knows how to keep people coming back, which is its job, every day and every week, and is a quality that, above all others, the people who finance movies would dearly love to poach,” Harris writes. While the specific episodic logics that have long been a part of comic book form can be seen to have their own transformational effects on television (as argued by Alisa Perren), Harris’ insight encourages us to look in parallel to television studies to understand what is happening in the industrial embrace of the comic book film.

The Marvel film slate through 2018 is announced.

While Harris’ invocation of television seems meant to evoke a monotonous, economically determined, illegitimate, and above all risk-averse form of cultural production to justify his claims about creative bankruptcy, television scholars might consider the case of comic book film franchising with somewhat more ambivalence. Yes, we have long known that episodic television is an especially risk-averse and particularly repetitive cultural form. Yet TV scholars like Jeff Sconce have considered what it might mean to be creative within that context. Thinking about the challenges of ongoing, episodic production and above all the need to generate episodic difference amid the reuse of series and generic formula, Sconce argues that the “true art in the algebra of televisual repetition is not the formula but the unique integers plugged into the equation.”1 In this way, television studies can prompt us to think about franchised creativity as something that comes as much in response to repetition as something annihilated by it. Creativity in that sense might be a little less celebrated and magical, and instead a more negotiated struggle through which formulas support both stasis and change at the same time.

Harris’ essay seems to focus only on stasis. He looks at the production slate for Marvel Studios and sees the extension of a 2014 formula (itself an extension of what’s proven successful in years past) to the next several years of blockbuster filmmaking through 2020. He sees the replication of that formula as a reason to be concerned for all “the movies that aren’t getting made.” And he’s right. The Marvel films are nothing if not formulaic, and the crowding of the blockbuster market by comic book films—to say nothing of what blockbuster emphasis in general means for quieter independent projects and untested ideas—is a concern about diversity of voice and perspective that cannot be waved away by a conversation about the art of repetition. But Harris’ invocation of television means we have to think about the unique integers demanded by repetition too.

The DC Comics film release line-up through 2020.

Of course Harris is willing to admit that with the huge number of comic book films being produced, the odds are that one or two “good” movies will “sprout up.” Instead of looking at such instances as anomalies in an otherwise homogeneous sea of carefully managed production, though, we might think of them as important parts of franchising logic—the variance and “unique integers” necessary to keep the formula fresh and, especially, to adapt that formula to new audiences and tastes. More than anything, Harris seems troubled by the “Stalinist” way studios have planned out the road to 2020, introducing one new comic book hero or property after another to be run through the same blockbuster franchise formula. For DC, Batman v Superman will lead to Suicide Squad and Wonder Woman; for Marvel, Avengers: Age of Ultron will lead to Ant-Man and Captain Marvel. Yet there’s something permitted here in the plugging of all these different integers into the same formula that earlier moments in the franchising of comic book films did not permit. The promise of a future represented by these extended production slates depends on a commitment to gradual, cumulative narrative change and the exploration of new characters to replace the old (no more rebooting in order to tell the exact same story again, à la Sony’s Spider-Man film franchise; though the breaking news that Sony will allow Marvel to reunite Spider-Man and the Avengers suggests one last reboot may be required there before Marvel commits to integrating the character in its long-term, future-thinking strategy). That promise of cumulative development may ultimately go undelivered, but it imagines Hollywood franchise filmmaking as something ideally balancing formulaic stasis with iterative dynamism.

Carol Danvers, also known as Captain Marvel, is set to make her big-screen debut in 2018.

While glacial, these dynamic shifts have political importance too. How might the stability of the formula allow a broader range of experimentation in imagining power and who gets to wield it in these popular fantasies? Even if formulaic, both Wonder Woman and Captain Marvel represent a shift in industrial logic as to whom the subjects and audiences of blockbuster franchising might include. Make no mistake—this is a shift based in market analysis and calculated risk assessment, but nevertheless one that should be recognized as something other than simply more of the same. Unfolding over time across a decade of industry strategy, franchising is a site where we can see glacial changes in corporate culture, logics, and lore. As Joss Whedon so eloquently quipped in describing Marvel’s post-Guardians of the Galaxy confidence in the extension of its franchise formula, “If a raccoon can carry a movie, then they believe maybe even a woman can.”

A publicity still for Wonder Woman, directed by Michelle MacLaren and slated for a 2017 release.

With this in mind, my point is not that we should celebrate Marvel for offering change in the most cynical, managed, and risk-averse way possible. Instead, it is to point out that the persistent presence of almost imperceptible change helps us put in new perspective the concerns that Harris and others have about the movies that aren’t getting made. Because franchise formulas do change, they can be applied to new markets and new audiences. Five years ago, moviegoers had to look well outside of Marvel’s offerings to find strong female heroes at the center of a film narrative; five years from now, strong female heroes will have become one of the many unique integers plugged into the Marvel formula, and that formula may have become the most profitable, risk-averse place for that kind of content. If successful, Captain Marvel and Wonder Woman may create a larger market for fantasy narratives focused on women (and hopefully made by women), but at the same time they may cement the overall Marvel film franchise as a one-size-fits-all formula that can be adjusted to suit all audiences (and producers). We might similarly think of the Ghostbusters franchise as one of many new potential containers for comedians like Kristen Wiig and Melissa McCarthy. Our critical concern for media franchising, therefore, should take a page from television studies (and in this case, feminist television studies) and be as attuned to formulaic mutability as to the potential for creative stasis.

The new sitcom Cristela premiered as part of ABC’s Friday night lineup on October 10th, part of a slew of new fall programs, including the Golden Globe-winning Jane the Virgin (CW) and the already-cancelled Red Band Society (Fox), that feature Latino and Latina actors in prominent roles. Among them, Cristela stands out as the one best positioned to become what we might call the “Great Latino Family Comedy,” the show that will finally succeed in shifting Latinidad (rough translation: Latino-ness) from the margins to the center of the TV family sitcom without compromising its Latin soul. Many Latinos have long been waiting for the Latino Cosby Show, the program that, for better or worse (and plenty of African-American critics have panned Cosby’s assimilationist leanings and “politics of respectability” and are now struggling to make sense of his recent fall from grace), changed the racial landscape of television by exposing audiences of all colors to a loving, functional black family. Although the show failed to define “family” beyond a heterosexual nuclear unit, as subsequent sitcoms have done, its vision of blackness was groundbreaking for its time.

Latino audiences still cannot claim a Cosby of our own, despite the increased visibility of Latino characters on network and cable television and the meteoric rise of Eva Longoria and then Sofia Vergara in recent years. ABC seems to hope that Cristela Alonzo, a Tejana comedienne, is the TV star to fill this niche.

The George Lopez Show came close to achieving what ABC has in mind for Cristela. Lopez aired on the same network and had a similarly winning lead actor, but it could never fully balance its star’s edgy standup persona with the banal suburban setting the show placed him in. Although Lopez positioned itself as something of a class comedy by frequently referencing Lopez’s blue-collar roots and his job as a manager in a factory, its setting and characters had a distinctly middle-class feel, not exactly relatable to the Latino audience the show’s producers presumably wanted to court and not unlike the class politics of Cosby. Lopez, in the eponymous role, seemed to have his wings clipped under the pressure to represent the Latino family in a blandly positive light. Margaret Cho’s 1994 series All-American Girl, another precursor to Cristela, suffered under similar constraints. Both Lopez and Cho had much to say about the minority experience in their standup routines, which appealed to white and non-white audiences alike, but both of their shows, though designed as star vehicles, diluted their capacities for social and political commentary.

The George Lopez Show

As a result of this stifled tone, what we got in Lopez was a generic Cosby without the undeniable charms of that show’s stellar cast or the top-notch writing of a Carsey-Werner production. What is more, the family dynamics that it depicted (stubborn husband, exasperated wife, wisecracking grandmother) were played for bigger laughs in Everybody Loves Raymond and The King of Queens. Although it ran for six seasons and 120 episodes, George Lopez never ranked above 50th in the Nielsen ratings. The sitcom does not enjoy a legacy among Latinos and Latinas as particularly groundbreaking or affirming, unlike Lopez’s classic standup routines. Years after its cancellation, scholars in Latino television and media studies continue to pay scant attention to the show (except in a few smart pieces that have appeared in Flow).

Ugly Betty

Ugly Betty, another ABC property, serves as a better template for the Great Latino Family Comedy. Sure, the real drama occurred at Mode, the high-fashion workplace where Betty Suarez (played with winsome sincerity by America Ferrera), the sartorially challenged daughter of Mexican immigrants, felt like a fish out of water, but the Suarez family provided a warm-blooded vision of Latino domestic life. Importantly, it did so while avoiding many of the stereotypes associated with Latinos throughout television history: Ignacio (Tony Plana), the paterfamilias, defied the longstanding trope of the macho head of household—unlike, for example, Cristela’s brother-in-law Felix (Carlos Ponce)—by feeding and tenderly nurturing his family over the course of four seasons; when Betty’s nephew Justin (Mark Indelicato) came out, the series showed that, despite popular media representations, Latino families are not necessarily more homophobic than white families; Betty herself had plenty to say about Latina feminism (even if she would never have called it that). In these ways and many others, the show quietly subverted a century of film and television stereotypes that have constructed the Latino family as inherently dysfunctional, capable of producing only thugs, maids, and drug dealers: figures we have seen played as clowns or threats. But Ugly Betty was a single-camera dramedy, not a multi-camera sitcom with a laugh track or studio audience like Cosby, Lopez, or, now, Cristela.

Cristela Alonzo’s standup is funny. As a performer, she is charming and down to earth, unafraid to laugh at herself or at the dominant culture’s expectations of her as a Chicana. Her career, boosted by several recent late-night appearances and now her own sitcom, looks like it is off to a strong start. I wish I could say the same for Cristela. The jokes and repartee feel too familiar (“If you were my wife, I’d put poison in your coffee,” threatens Felix. “If I were your wife, I’d drink it,” retorts Cristela, to super-sized laughs), the stereotypes too broad (a culturally incompetent and overly Catholic immigrant mother nags the titular character about finding a husband and a real job), and the setting too sanitized (the family’s huge suburban-looking house resembles the one in Lopez) for the show to stand out as something original in the noisy landscape of primetime TV. As with Ugly Betty, the pilot paints Cristela as the underdog as she pursues and wins a plum internship—this time at a Houston law firm—but the premise does not go far enough in presenting the lead character as something more complex than a garden-variety wisecracking dreamer. Many of the jokes in the pilot are drawn directly from Alonzo’s standup routine but do not feel as funny or vital now that they are situated in the canned world of the sitcom. I found it especially surprising that the first season so liberally borrows plot points and jokes from Ugly Betty, The George Lopez Show, and Margaret Cho’s standup routines. This derivative feel suggests that Cristela is not exactly striving for greatness, even if coverage in the New York Times and other media outlets saw the emergence of the show as a significant step forward in Latina/o representations.1 The show does tackle Mexican-American stereotypes in just about every episode, but it plays them for laughs rather than dissecting and upending them as, for example, Black-ish attempts to do on Wednesday nights on the same network.
Time—and Nielsen ratings—will tell whether the show finds an audience beyond its inaugural season. I, for one, have tuned out but will keep looking for the Great Latino Family Comedy.

This year’s Golden Globe Awards, which aired Jan. 11, included a notable achievement for Latina representation when Gina Rodriguez, the Puerto Rican star of the CW breakout dramedy Jane the Virgin (2014+), won Best Performance by an Actress in a Television Series-Musical or Comedy. Clearly surprised to have won, Rodriguez accepted the award with a poignant speech that underscored the importance of her starring role while Latina and Latino characters are still often left out of U.S. television story worlds. “This award is so much more than myself,” she told viewers. “It represents a culture that wants to see themselves as heroes.” Rodriguez expanded on her comments to the press backstage, noting that her win allowed Latina/o viewers “to see themselves invited to the same party” of American television.

While I find Rodriguez’s win and Jane the Virgin’s critical success hopeful signs, the snail’s-pace growth of Latina/o visibility in television is otherwise discouraging. A 2014 study by Frances Negrón-Muntaner and other researchers at Columbia University found there were no Latina/o lead roles in scripted television series in 2013.1 From this perspective, Jane the Virgin and Cristela, both launched in 2014, are signs of progress, but only from invisibility to—slight visibility. The number of Latina/os at the Golden Globes to hear Gina Rodriguez’s speech also is telling. Aside from the Jane the Virgin cast and the glamorous Jennifer Lopez and Salma Hayek, both of whom served as presenters, the only Latinos to be seen were Mexican director Alejandro González Iñárritu and his fellow Birdman (2014) writers, who won in the Best Screenplay category, and Louis C.K., nominated for his performance in Louie (2010+), who largely ignores his partial Mexican ancestry. Where are the Latino Jon Hamm and Wes Anderson, the Latina Mindy Kaling and Lena Dunham? Not invited yet.

While it was not the first time a Latina or Latino actor was recognized at the Golden Globes or Emmys for a recurring role in a TV series, the number in this esteemed club is woefully small. America Ferrera won both awards for Ugly Betty (2006-2010) in 2007, as did Edward James Olmos for Miami Vice (1984-1990) in 1986, while Jimmy Smits won an Emmy for L.A. Law (1986-1994) in 1990 and a Golden Globe for NYPD Blue (1993-2005) in 1996. These are the only Latina/o actors recognized for recurring roles in the 66 years since the first Emmy awards show in 1949. Keeping in mind that acting accolades are the result of not just performers’ abilities but also the creation of compelling characters and television narratives, what can we make of the relative absence of Latina/o actors in the television VIP club? In this essay, I ruminate on five of the primary reasons why Latina/os are still often left out.

1. First, in my assessment, Latina/o characters and storylines are still not taken seriously by many television creatives, executives, and advertisers. There is a dearth of Latina/o characters even in this New Golden Age of television, which has brought us such complex and unconventional series as Breaking Bad (2008-2013), Girls (2012+), and Transparent (2014+). This is not for lack of good intentions, however. As scholars such as Martha Menchaca have documented, the de facto segregation of cities in the Southwest in the last century has resulted in most Latina/o and non-Latina/o Americans living very separate lives.2 In Los Angeles, the hub of television production, upper-income white Angelenos typically know Latina/os as their nannies and gardeners, not as their neighbors, friends, or coworkers. Hollywood tradition also has trained American viewers (even Latina/o viewers) to expect narratives of white heroism, wit, and beauty, and of Latina/o comic relief and marginalization. This contributes to a tendency for many television professionals to consider Hispanic-driven narratives too culturally different to draw in American viewers, even as Latina/os constitute 17 percent of the potential viewing audience. Arguably, a series pitched today that includes a Latina/o hero or heroine won’t make the cut or will be green-lit with such a low budget that it will quickly fall flat. Vibrant, engrossing series such as Jane the Virgin, Hulu’s East Los High (2013+), and NuvoTV’s reality series Los Jets (2014) are still exceptions to usual standards of practice.

NuvoTV’s Los Jets

2. Another reason for the lack of award-worthy roles for Latina/os is a dearth of great writing for the few characters that do appear on network television. When characters and narratives are underwritten, stereotypical, or don’t give the actors who play them a chance to show their chops, why would we identify with and follow them? See reason #1 above: budgets so low that a series quickly falls flat. It’s also useful to consider here how Latina/o narratives in television are still typically relegated to the situation comedy genre, while the critically acclaimed series of the last decade have overwhelmingly been dramas and dramedies. Audiences, especially young adult viewers, clearly crave the greater complexity and nuanced characterizations afforded by dramatic conventions.

3. Moreover, recent Latina/o roles arguably are often uninspired because of their genesis in series remakes rather than in original narratives. A number of Latina/o-focused series of the last decade, such as Ugly Betty, Jane the Virgin, and Devious Maids (2013+), have been remakes of popular telenovelas. These series’ lead characters thus are not the personal, original creations of their writers. For example, as much as I enjoy Jane the Virgin and Gina Rodriguez’s performance, I find the novela-inspired narrative and characters too predictable for my taste at times. The networks’ overreliance on novela remakes likely is related to confusion about what else Latina/o viewers will watch. While studies repeatedly document that over three-fourths of American Latina/os consume both English and Spanish-language entertainment media,3 U.S. television producers and advertisers still often appear unsure of how to appeal to them with original programming.

4. These last two reasons are related as well to the lack of Latina/os at the table when it comes to writing and producing television. It makes a difference that Jill Soloway, creator of Transparent, grew up with a transgender parent who came out to the family as trans late in her life. Similarly, Carlos Portugal and Kathleen Bedoya, the creators of East Los High, have been instrumental to crafting Latina/o teen characters that feel authentic even while the series works within the confines of the prime-time soap genre.

The cast of season 1 of East Los High at a benefit for the Hulu series in July 2013.

However, few Latina/os worked as television writers prior to the 1990s, and their numbers have barely grown since. The Columbia University study found that Latina/os comprised only 2 percent of writers, 1.1 percent of producers, and 4.1 percent of directors of television series from 2010 to 2013. They comprised none of the series showrunners.4 This scarcity arguably leads to depictions that fail to capture the diversity and richness of Latina/o experiences. Complicating this issue, Latina/o writers who get opportunities to create series may need more experience to first develop their craft. Without being hired for other writing positions in television, they have few opportunities to gain this experience.

5. Finally, fear of stereotyping arguably has also had a chilling effect. In response to media advocacy efforts, network executives appear apprehensive about presenting characters or stories that are strongly culturally marked or set in working class neighborhoods, for fear of portraying Latina/os in a manner deemed non-aspirational and of turning off viewers or advertisers. As a former student of mine who interned in a major network’s development department once told me, pilot scripts that posit a Latina/o or African American working-class protagonist will always get an automatic “pass” for this reason. While Latina/o characters with exaggerated characteristics such as broken English, heavy accents, and colorful costumes are less often seen in TV story worlds,5 less assimilated, working class, and darker skinned Latina/os also are rarely present. The range of roles and story possibilities has sadly been diminished even while Latina/o televisual representation is seemingly more “positive” in recent years.

Gina Rodriguez’s Golden Globe win and the critical and popular success of Jane the Virgin are hopeful signs that Latina/os are in fact beginning to be taken seriously by networks and streaming television outlets. I think they point in particular to rising awareness of the grave need for networks to do so for their own survival. I believe it’s inevitable that as tomorrow’s television creators and stars, Latina/os will become welcome guests and hosts of the party. But the invites are nevertheless overdue.

A notable exception is Modern Family’s Gloria Pritchett, a character that has made Sofia Vergara the highest paid actress in television. However, Gloria’s popularity is heavily linked to the character’s complexity and the show’s excellent writing, which has been recognized with numerous Emmys, Golden Globes, Television Critic Awards, and Writers Guild of America Awards over the years.

In August 2014, the Federal Communications Commission (per the 21st Century Communication and Video Accessibility Act) adopted an order requiring wireless carriers and other technologies that enable the delivery of text messages to deliver 9-1-1 texts to emergency services in areas that support such services, and to send bounce-back messages in areas without support. It is now up to 911 call centers, officially known as Public Safety Answering Points (PSAPs), to implement technologies to receive and respond to text messages. The FCC created a system for PSAPs to register their readiness, and text-to-911 services are now rapidly expanding around the country (see where services are available).
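The order’s core delivery rule is essentially a two-way dispatch: deliver the text where the local 911 call center supports it, send an automatic bounce-back where it does not. A minimal sketch of that logic (all names here are invented for illustration and do not reflect any carrier’s actual implementation):

```python
# Hypothetical sketch of the routing rule in the FCC's 2014 order:
# if the destination call center (PSAP) has registered as text-ready,
# the carrier delivers the 911 text; otherwise it sends an automatic
# bounce-back telling the sender to place a voice call instead.
# Function and variable names are invented for illustration.

BOUNCE_BACK = ("Text-to-911 is not available in your area. "
               "Please make a voice call to 911.")

def route_911_text(message: str, service_area: str, text_ready_psaps: set) -> str:
    """Return the action a carrier takes on a 911 text from a given area."""
    if service_area in text_ready_psaps:
        return f"DELIVERED to PSAP for {service_area}: {message}"
    return f"BOUNCE-BACK to sender: {BOUNCE_BACK}"
```

The bounce-back half of the rule matters as much as delivery: without it, a sender in an unsupported area would have no way of knowing their plea for help went nowhere.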

My home state of Indiana was the second to widely implement text-to-911 (following Vermont). This was made possible by INdigital telecom, which manages Indiana 911 services, routing calls on behalf of the Indiana Wireless Enhanced 9-1-1 Advisory Board. In May 2014, INdigital launched texTTY for use in 911 call centers. Indiana PSAPs now have access to texTTY, “a platform that allows the PSAP to ‘add on’ non-voice (SMS)”1 to an existing call. This means that PSAP workers can receive texts and reply, as seen in the video below, using a simple interface; they may also choose to initiate texts following a silent phone call from a given number.

Platforms such as texTTY are important because while the FCC can make recommendations regarding telecommunications services, the 9-1-1 programs are run at the state level, with various personnel and technological arrangements. This means that it is actually up to states to ensure that the technological ability to text-to-911 is met by an infrastructure that is capable of responding to these texts. The FCC2 has compiled a list of best practices for PSAPs regarding implementing text-to-911, “without requiring significant up-front investments or upgrades, including the use of web browsers, gateway centers, conversion of text messages to TTY calls, and state or regional aggregation of text-to-911 processing.”

In the case of Indiana, according to the Indianapolis Star,3 text-to-911 “will let Hoosiers call for help with the same amount of effort it takes them to ‘LOL.’” It may be possible for me to alert emergency services with a quick “OMG,” but the rollout of this service has been accompanied by constant exhortations that members of the public avoid using it. The FCC advises that “texting to 911 should be thought of only as a last resort … people who are hard of hearing, deaf, or speech-impaired should still be encouraged to use TTY for calling when they can.”4

Image from satirical magazine The Onion

In Indiana, the State Treasurer’s office, which oversees 911 services, launched a series of radio public service announcements with the message “B4 U TEXT, VOICE IS BEST.” Two announcements were released in May 2014 as part of an initial phase of awareness-raising, explaining that text-to-911 was coming to “most areas,” and would be “of great help to the deaf and speech impaired, and in other select situations.” Then, in September, a third announcement blanketed radio stations in Bloomington, West Lafayette, and Muncie (homes of Indiana University, Purdue University, and Ball State University, respectively). This third radio spot belied any claim that text-to-911 would be, in the words of The Washington Post, “911 for the texting generation,” or that it would offer what CNET described as a “useful way to help the younger demographic that feels more comfortable texting than calling.”5 Instead, following the basic “voice is best” message of the May spots, it introduced a contest component, asking listeners to take the “911 voice is best pledge” by texting “VOICE” to a different number for a chance to win a pizza or MacBook. It concluded, “Remember, your voice matters. Use it in an emergency. 911. Before you text, voice is best.”

In short, text-to-911 in Indiana was introduced primarily as a benefit for those in situations in which speaking would be unsafe, and as an accommodation for people with hearing or speaking impairments. Indiana is estimated to have nearly a quarter million d/Deaf residents, and Barry Ritter, executive director of the state 911 Board explained to the Indiana Daily Student that “It was our focus that the sooner we could begin offering text-to-911 services, we would be providing equal access to the deaf and speaking-impaired.”6

The emphasis on direct access to emergency services for d/Deaf and speech-impaired individuals is laudable as a means of extending access to telecommunications media. Certainly, this is an improvement on the retrofitting of technologies and services that has historically characterized access to media by people with various disabilities.7 Wentz, Jaeger, and Lazar argue that such retrofitting is encouraged by disability laws, which offer numerous exemptions and rely upon individual complaints for enforcement, thus offering very little incentive for policymakers or media industries to consider disability needs prior to the development of new media forms or technologies.8

“Universal design” offers a more inclusive approach, which would take into consideration the needs of people with disabilities or other non-normative bodies or needs in the development phase of new technologies or structures.9 One of the benefits of universal design, dating back to its origins in architectural fields, is that it produces benefits for all users. For instance, curb cuts help wheelchair users and are also beneficial for people using strollers, dollies, or on roller skates.

Image of a bounce-back message, as published on Dispatch Magazine

Text-to-911 would seem, at first glance, to be a potential triumph of universal design in telecommunications. Enabling this service would extend access for some people with disabilities, make it possible to alert emergency services in situations in which speaking is dangerous, and could potentially encourage usage by young people and others who use text messaging as a dominant mode of interpersonal communication. Yet, the limitations suggested by both the FCC and the Indiana roll-out indicate that mass usage of text-to-911 is still quite a ways off, as PSAPs slowly prepare for this new service while relying on the legacies of landline voice calls.

The instructions offered by texTTY are illustrative in this regard:

“Select ‘create a new text message’. Put 911 in the to: field. Put your emergency and your location in the message body. Do not attach or send pictures or videos. Keep your message short and do not use abbreviations.”10

In the interests of simple implementation, images, videos, emoji, abbreviations, and other standard elements of contemporary text messaging culture are expressly forbidden. It is unclear what, exactly, would happen were a PSAP to receive a text message featuring an image of illegal activity. Yet, this might easily be the first impulse of a user – d/Deaf or hearing – sending a text from a tense situation in which typing a longer message could attract attention. Even as the spelling of Indiana’s “B4 U TEXT” public service announcements acknowledges that texting involves unique cultural literacies, the expectations of emergency call centers (and their technologies) rely upon standard written communication or equivalence with spoken messages.
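Read as a message-format contract, the quoted instructions can be made explicit in a short validator. This is only a sketch under stated assumptions: the abbreviation list and length cap below are my own illustrative stand-ins, not texTTY’s actual rules.

```python
# Hypothetical checker for the texTTY-style constraints quoted above:
# plain text only, short, no attachments, no abbreviations.
# The abbreviation set and single-SMS length cap are assumptions
# made for illustration, not texTTY's documented limits.

COMMON_ABBREVIATIONS = {"lol", "omg", "b4", "u", "plz"}
MAX_LENGTH = 160  # assumed single-SMS limit

def check_911_text(body: str, has_attachment: bool = False) -> list:
    """Return a list of problems with a draft text-to-911 message."""
    problems = []
    if has_attachment:
        problems.append("do not attach pictures or videos")
    if len(body) > MAX_LENGTH:
        problems.append("keep your message short")
    if any(word.lower() in COMMON_ABBREVIATIONS for word in body.split()):
        problems.append("do not use abbreviations")
    return problems
```

Even in this toy form, the contract illustrates the tension described above: a message like “OMG fire plz help,” idiomatic for many texters, fails the very constraints the service imposes.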

Finally, text-to-911 fails as an example of universal design in its limited and gradual roll-out. Mobile devices are increasingly central to daily life; the Pew Internet and American Life Project reports that roughly 90% of adults own a cell phone, and 81% of adults use their cell phones to send or receive text messages.11 These numbers are fairly consistent along racial demographic lines, and vary only slightly by urban or rural location. Yet, in Indiana, it is the urban and more racially diverse cities of Indianapolis and Gary that are lagging in implementation of text-to-911. Even in Bloomington, there was concern about the unknown volume of texts that could be received in a town with a large undergraduate population, resulting in a slight delay in implementation. Thus, in areas that might be expected to have high demand for these services, there is little availability, likely depressing awareness and use in ways that may persist long after text-to-911 services become fully operational, working against the inclusive benefits of such a technology.

The potential benefits of text-to-911 are enormous, for people with and without disabilities and in a variety of contexts. Yet, instead of functioning as an exemplar of universal design, text-to-911 looks to be hampered by its historical roots in landline technologies, its lack of cultural flexibility, and its attempts to discourage use of this service outside of specific cases. Though there are practical concerns about implementation, the narrowness of text-to-911 seems likely to limit its utility rather than to enable a real advance in telecommunications capabilities and relevance.

Javonte Anderson, “Indiana One of First States to Introduce Text-to-911,” Indiana Daily Student, May 18, 2014, http://www.idsnews.com:80/article/2014/05/indiana-one-of-first-states-to-introduce-text-to-911.