
Yesterday, I read a passage from the book Music and Life by the critic and poet W.J. Turner that has been on my mind ever since. He begins with a sentence from the historian Charles Sanford Terry, who says of Bach’s cantatas: “There are few phenomena in the record of art more extraordinary than this unflagging cataract of inspiration in which masterpiece followed masterpiece with the monotonous periodicity of a Sunday sermon.” Turner objects to this:

In my enthusiasm for Bach I swallowed this statement when I first met it, but if Dr. Terry will excuse the expression, it is arrant nonsense. Creative genius does not work in this way. Masterpieces are not produced with the monotonous periodicity of a Sunday sermon. In fact, if we stop to think we shall understand that this “monotonous periodicity” was exactly what was wrong with a great deal of Bach’s music. Bach, through a combination of natural ability and quite unparalleled concentration on his art, had arrived at the point of being able to sit down at any minute of any day and compose what had all the superficial appearance of being a masterpiece. It is possible that even Bach himself did not know which was a masterpiece and which was not, and it is abundantly clear to me that in all his large-sized works there are huge chunks of stuff to which inspiration is the last word that one could apply.

All too often, Turner implies, Bach leaned on his technical facility when inspiration failed or he simply felt indifferent to the material: “The music shows no sign of Bach’s imagination having been fired at all; the old Leipzig Cantor simply took up his pen and reeled off this chorus as any master craftsman might polish off a ticklish job in the course of a day’s work.”

I first encountered the Turner quotation in The New Listener’s Companion and Record Guide by B.H. Haggin, who cites his fellow critic approvingly and adds: “This seems to me an excellent description of the essential fact about Bach—that one hears always the operation of prodigious powers of invention and construction, but frequently an operation that is not as expressive as it is accomplished.” Haggin continues:

Listening to the six sonatas or partitas for unaccompanied violin, the six sonatas or suites for unaccompanied cello, one is aware of Bach’s success with the difficult problem he set himself, of contriving for the instrument a melody that would imply its underlying harmonic progressions between the occasional chords. But one is aware also that solving this problem was not equivalent to writing great or even enjoyable music…I hear only Bach’s craftsmanship going through the motions of creation and producing the external appearances of expressiveness. And I suspect that it is the name of Bach that awes listeners into accepting the appearance as reality, into hearing an expressive content which isn’t there, and into believing that if the content is difficult to hear, this is only because it is especially profound—because it is “the passionate, yet untroubled meditation of a great mind” that lies beyond “the composition’s formidable technical frontiers.”

Haggin confesses that he regards many pieces in The Goldberg Variations or The Well-Tempered Clavier as “examples of competent construction that are, for me, not interesting pieces of music.” And he sums up: “Bach’s way of exercising the spirit was to exercise his craftsmanship; and some of the results offer more to delight an interest in the skillful use of technique than to delight the spirit.”

As I read this, I was inevitably reminded of Christopher Orr’s recent article in The Atlantic, “The Remarkable Laziness of Woody Allen,” which I discussed here last week. Part of Orr’s case against Allen involves “his frenetic pace of one feature film a year,” which can only be described as monotonous periodicity. This isn’t laziness, of course—it’s the opposite—but Orr implies that the director’s obsession with productivity has led him to cut corners in the films themselves: “Ambition simply isn’t on the agenda.” Yet the funny thing is that this approach to making art, while extreme, is perfectly rational. Allen writes, directs, and releases three movies in the time it would take most directors to finish one, and when you look at his box office and awards history, you see that about one in three breaks through to become a financial success, an Oscar winner, or both. And Orr’s criticism of this process, like Turner’s, could only have been made by a professional critic. If you’re obliged to see every Woody Allen movie or have an opinion on every Bach cantata, it’s easy to feel annoyed by the lesser efforts, and you might even wish that the artist had only released the works in which his inspiration was at its height. For the rest of us, though, this really isn’t an issue. We get to skip Whatever Works or Irrational Man in favor of the occasional Match Point or Midnight in Paris, and most of us are happy if we can even recognize the cantata that has “Jesu, Joy of Man’s Desiring.” If you’re a fan, but not a completist, a skilled craftsman who produces a lot of technically proficient work in hopes that some of it will stick is following a reasonable strategy. As Malcolm Gladwell writes of Bach:

The difference between Bach and his forgotten peers isn’t necessarily that he had a better ratio of hits to misses. The difference is that the mediocre might have a dozen ideas, while Bach, in his lifetime, created more than a thousand full-fledged musical compositions. A genius is a genius, [Dean] Simonton maintains, because he can put together such a staggering number of insights, ideas, theories, random observations, and unexpected connections that he almost inevitably ends up with something great.

As Simonton puts it: “Quality is a probabilistic function of quantity.” But if there’s a risk involved, it’s that an artist will become so used to producing technically proficient material on a regular basis that he or she will fall short when the circumstances demand it. Which brings us back to Bach. Turner’s remarks appear in a chapter on the Mass in B minor, which was hardly a throwaway—it’s generally considered to be one of Bach’s major works. For Turner, however, the virtuosity expressed in the cantatas allowed Bach to take refuge in cleverness even when there was more at stake: “I say that the pretty trumpet work in the four-part chorus of the Gloria, for example, is a proof that Bach was being consciously clever and brightening up his stuff, and that he was not at that moment writing with the spontaneity of those really creative moments which are popularly called inspired.” And he writes of the Kyrie, which he calls “monotonous”:

It is still impressive, and no doubt to an academic musician, with the score in his hands and his soul long ago defunct, this charge of monotony would appear incredible, but then his interest is almost entirely if not absolutely technical. It is a source of everlasting amazement to him to contemplate Bach’s prodigious skill and fertility of invention. But what do I care for Bach’s prodigious skill? Even such virtuosity as Bach’s is valueless unless it expresses some ulterior beauty or, to put it more succinctly, unless it is as expressive as it is accomplished.

And I’m not sure that he’s even wrong. It might seem remarkable to make this accusation of Bach, who is our culture’s great example of technical skill in the service of spiritual expression, but if the charge is going to have any weight at all, it has to hold at the highest level. William Blake once wrote: “Mechanical excellence is the only vehicle of genius.” He was right. But it can also be a vehicle, by definition, for literally everything else. And sometimes the real genius lies in being able to tell the difference.

In his flawed but occasionally fascinating book Bambi vs. Godzilla, the playwright and director David Mamet spends a chapter discussing the concept of aesthetic distance, which is violated whenever viewers remember that they’re simply watching a movie. Mamet provides a memorable example:

An actor portrays a pianist. The actor sits down to play, and the camera moves, without a cut, to his hands, to assure us, the audience, that he is actually playing. The filmmakers, we see, have taken pains to show the viewers that no trickery has occurred, but in so doing, they have taught us only that the actor portraying the part can actually play the piano. This addresses a concern that we did not have. We never wondered if the actor could actually play the piano. We accepted the storyteller’s assurances that the character could play the piano, as we found such acceptance naturally essential to our understanding of the story.

Mamet imagines a hypothetical dialogue between the director and the audience: “I’m going to tell you a story about a pianist.” “Oh, good: I wonder what happens to her!” “But first, before I do, I will take pains to reassure you that the actor you see portraying the hero can actually play the piano.” And he concludes:

We didn’t care till the filmmaker brought it up, at which point we realized that, rather than being told a story, we were being shown a demonstration. We took off our “audience” hat and put on our “judge” hat. We judged the demonstration conclusive but, in so doing, got yanked right out of the drama. The aesthetic distance had been violated.

Let’s table this for now, and turn to a recent article in The Atlantic titled “The Remarkable Laziness of Woody Allen.” To prosecute the case laid out in the headline, the film critic Christopher Orr draws on Eric Lax’s new book Start to Finish: Woody Allen and the Art of Moviemaking, which describes the making of Irrational Man—a movie that nobody saw, which doesn’t make the book sound any less interesting. For Orr, however, it’s “an indictment framed as an encomium,” and he lists what he evidently sees as devastating charges:

Allen’s editor sometimes has to live with technical imperfections in the footage because he hasn’t shot enough takes for her to choose from…As for the shoot itself, Allen has confessed, “I don’t do any preparation. I don’t do any rehearsals. Most of the times I don’t even know what we’re going to shoot.” Indeed, Allen rarely has any conversations whatsoever with his actors before they show up on set…In addition to limiting the number of takes on any given shot, he strongly prefers “master shots”—those that capture an entire scene from one angle—over multiple shots that would subsequently need to be edited together.

For another filmmaker, all of these qualities might be seen as strengths, but that’s beside the point. Here’s the relevant passage:

The minimal commitment that appearing in an Allen film entails is a highly relevant consideration for a time-strapped actor. Lax himself notes the contrast with Mike Leigh—another director of small, art-house films—who rehearses his actors for weeks before shooting even starts. For Damien Chazelle’s La La Land, Stone and her co-star, Ryan Gosling, rehearsed for four months before the cameras rolled. Among other chores, they practiced singing, dancing, and, in Gosling’s case, piano. The fact that Stone’s Irrational Man character plays piano is less central to that movie’s plot, but Allen didn’t expect her even to fake it. He simply shot her recital with the piano blocking her hands.

So do we shoot the piano player’s hands or not? The boring answer, unfortunately, is that it depends—but perhaps we can dig a little deeper. It seems safe to say that it would be impossible to make The Pianist with Adrien Brody’s hands conveniently blocked from view for the whole movie. But I’m equally confident that it doesn’t matter the slightest bit in Irrational Man, which I haven’t seen, whether or not Emma Stone is really playing the piano. La La Land is a slightly trickier case. It would be hard to envision it without at least a few shots of Ryan Gosling playing the piano, and Damien Chazelle isn’t above indulging in exactly the camera move that Mamet decries, in which the camera tilts down to reassure us that it’s really Gosling playing. Yet the fact that we’re even talking about this gets down to a fundamental problem with the movie, which I mostly like and admire. Its characters are archetypes who draw much of their energy from the auras of the actors who play them, and in the case of Stone, who is luminous and moving as an aspiring actress suffering through an endless series of auditions, the film gets a lot of mileage from our knowledge that she’s been in the same situation. Gosling, to put it mildly, has never been an aspiring jazz pianist. This shouldn’t even matter, but every time we see him playing the piano, he briefly ceases to be a struggling artist and becomes a handsome movie star who has spent months learning to fake it. And I suspect that the movie would have been elevated immensely by casting a real musician. (This ties into another issue with La La Land, which is that it resorts to telling us that its characters deserve to be stars, rather than showing it to us in overwhelming terms through Gosling and Stone’s singing and dancing, which is merely passable.
It’s in sharp contrast to Martin Scorsese’s New York, New York, one of its clear spiritual predecessors, in which it’s impossible to watch Liza Minnelli without becoming convinced that she ought to be the biggest star in the world. And when you think of how quirky, repellent, and individual Minnelli and Robert De Niro are allowed to be in that film, La La Land starts to look a little schematic.)

And I don’t think I’m overstating it when I argue that the seemingly minor dilemma of whether to show the piano player’s hands shades into the larger problem of how much we expect our actors to really be what they pretend that they are. I don’t think any less of Bill Murray because he had to employ Terry Fryer as a “hand double” for his piano solo in Groundhog Day, and I don’t mind that the most famous movie piano player of them all—Dooley Wilson in Casablanca—was faking it. And there’s no question that you’re taken out of the movie a little when you see Richard Chamberlain playing Tchaikovsky’s Piano Concerto No. 1 in The Music Lovers, however impressive it might be. (I’m willing to forgive De Niro learning to mime the saxophone for New York, New York, if only because it’s hard to imagine how it would look otherwise. The piano is just about the only instrument for which the decision can plausibly be left to the director’s discretion. And in his article, revealingly, Orr fails to mention that none other than Woody Allen was insistent that Sean Penn learn the guitar for Sweet and Lowdown. As Allen himself might say, it depends.) On some level, we respond to an actor playing the piano much like the fans of Doctor Zhivago, whom Pauline Kael devastatingly called “the same sort of people who are delighted when a stage set has running water or a painted horse looks real enough to ride.” But it can serve the story as much as it can detract from it, and the hard part is knowing how and when. As one director notes:

Anybody can learn how to play the piano. For some people it will be very, very difficult—but they can learn it. There’s almost no one who can’t learn to play the piano. There’s a wide range in the middle, of people who can play the piano with various degrees of skill; a very, very narrow band at the top, of people who can play brilliantly and build upon a technical skill to create great art. The same thing is true of cinematography and sound mixing. Just technical skills. Directing is just a technical skill.

This is Mamet writing in On Directing Film, which is possibly the single best work on storytelling I know. You might not believe him when he says that directing is “just a technical skill,” but if you do, there’s a simple way to test if you have it. Do you show the piano player’s hands? If you know the right answer for every scene, you just might be a director.

By the early seventies, Isaac Asimov had achieved the cultural status, which he still retains, of being the first—and perhaps the only—science fiction writer whom most ordinary readers would be able to name. As a result, he ended up on the receiving end of a lot of phone calls from famous newcomers to the field. In 1973, for example, he was contacted by a representative for Woody Allen, who asked if he’d be willing to look over the screenplay of the movie Sleeper. Asimov gladly agreed, and when he met with Allen over lunch, he told him that the script was perfect as it was. Allen didn’t seem to believe him: “How much science fiction have you written?” Asimov responded: “Not much. Very little, actually. Perhaps thirty books of it altogether. The other hundred books aren’t science fiction.” Allen was duly impressed, turning to ask his friends: “Did you hear him throw that line away?” Asimov turned down the chance to serve as a technical director, recommending Ben Bova instead, and the movie did just fine without him, although he later expressed irritation that Allen had never sent him a letter of thanks. Another project with Paul McCartney, whom Asimov met the following year, didn’t go anywhere, either:

McCartney wanted to do a fantasy, and he wanted me to write a story out of the fantasy out of which a screenplay would be prepared. He had the basic idea for the fantasy, which involved two sets of musical groups: a real one, and a group of extraterrestrial imposters…He had only a snatch of dialogue describing the moment when a real group realized they were being victimized by imposters.

Asimov wrote up what he thought was an excellent treatment, but McCartney rejected it: “He went back to his one scrap of dialogue, out of which he apparently couldn’t move, and wanted me to work with that.”

Of all of Asimov’s brushes with Hollywood, however, the most intriguing involved a director to whom he later referred as “Steve Spielberg.” In his memoir In Joy Still Felt, Asimov writes:

On July 18, 1975, I visited Steve Spielberg, a movie director, at his room in the Sherry-Netherland. He had done Jaws, a phenomenally successful picture, and now he planned to do another, involving flying saucers. He wanted me to work with him on it, but I didn’t really want to. The visual media are not my bag, really.

In a footnote, Asimov adds: “He went on to do it without me and it became the phenomenally successful Close Encounters of the Third Kind. I have no regrets.” For an autobiography that devotes enormous amounts of wordage to even the most trivial incidents, it’s a remarkably terse and unrevealing anecdote, and it’s hard not to wonder if something else might have been involved—because when Asimov finally saw Close Encounters, which is celebrating its fortieth anniversary this week with a new theatrical release, he hated it. A year after it came out, he wrote in Isaac Asimov’s Science Fiction Magazine:

Science Digest asked me to see the movie Close Encounters of the Third Kind and write an article for them on the science it contained. I saw the picture and was appalled. I remained appalled even after a doctor’s examination had assured me that no internal organs had been shaken loose by its ridiculous sound waves. (If you can’t be good, be loud, some say, and Close Encounters was very loud.) To begin with there was no accurate science in it; not a trace; and I said so in the article I wrote and which Science Digest published. There was also no logic in it; not a trace; and I said that, too.

Asimov’s essay on Close Encounters, in fact, might be the most unremittingly hostile piece of writing I’ve seen by him on any subject, and I’ve read a lot of it. He seems to have regarded it as little more than a cynical commercial ploy: “It made its play for Ufolators and mystics and, in its chase for the buck, did not scruple to violate every canon of good sense and internal consistency.” In response to readers who praised the special effects, he shot back:

Seeing a rotten picture for the special effects is like eating a tough steak for the smothered onions, or reading a bad book for the dirty parts. Optical wizardry is something a movie can do that a book can’t, but it is no substitute for a story, for logic, for meaning. It is ornamentation, not substance. In fact, whenever a science fiction picture is praised overeffusively for its special effects, I know it’s a bad picture. Is that all they can find to talk about?

Asimov was aware that his negative reaction had hurt the feelings of some of his fans, but he was willing to accept it: “There comes a time when one has to put one’s self firmly on the side of Good.” And he seemed particularly incensed at the idea that audiences might dare to think that Close Encounters was science fiction, and that it implied that the genre was allowed to be “silly, and childish, and stupid,” with nothing more than “loud noise and flashing lights.” He wasn’t against all instances of cinematic science fiction—he had liked Planet of the Apes and Star Wars, faintly praising the latter as “entertainment for the masses [that] did not try to do anything more,” and he even served as a technical consultant on Star Trek: The Motion Picture. But he remained unrelenting toward Close Encounters to the last: “It is a marvelous demonstration of what happens when the workings of extraterrestrial intelligence are handled without a trace of skill.”

And the real explanation comes in an interview that Asimov gave to the Los Angeles Times in 1988, in which he recalled of his close encounter with Spielberg: “I didn’t know who he was at the time, or what a hit the film would be, but I certainly wasn’t interested in a film that glorified flying saucers. I still would have refused, only with more regret.” The italics are mine. Asimov, as I’ve noted before, despised flying saucers, and he would have dismissed any movie that took them seriously as inherently unworthy of consideration. (The editor John W. Campbell was unusually cautious on the subject, writing of the UFO phenomenon in Astounding in 1959: “Its nature and cause are totally indeterminable from the data and the technical understanding available to us at the time.” Yet Asimov felt that even this was going too far, writing that Campbell “seemed to take seriously such things as flying saucers [and] psionic talents.”) From his point of view, he may well have been right to worry about the “glorification” of flying saucers in Close Encounters—its impact on the culture was so great that it seems to have fixed the look of aliens as reported by alleged abductees. And as a man whose brand as a science popularizer and explainer depended on his reputation for rationality and objectivity, he couldn’t allow himself to be associated with such ideas in any way, which may be why he attacked the movie with uncharacteristic savagery. As I’ve written elsewhere, a decade earlier, Asimov had been horrified when his daughter Robyn told him one night that she had seen a flying saucer. 
When he rushed outside and saw “a perfect featureless metallic circle of something like aluminum” in the sky, he was taken aback, and as he ran into the house for his glasses, he said to himself: “Oh no, this can’t happen to me.” It turned out to be the Goodyear blimp, and Asimov recalled: “I was incredibly relieved!” But his daughter may have come even closer to the truth when she said years later to the New York Times: “He thought he saw his career going down the drain.”

What does Twin Peaks look like without Agent Cooper? It was a problem that David Lynch and his writing team were forced to solve for Fire Walk With Me, when Kyle MacLachlan declined to come back for much more than a token appearance, and now, in the show’s third season, Lynch and Mark Frost seem determined to tackle the question yet again, even though they’ve given their leading man more screen time than anyone could ever want. MacLachlan’s name is the first thing that we see in the closing credits, in large type, to the point where it’s starting to feel like a weekly punchline—it’s the only way that we’d ever know that the episode was over. He’s undoubtedly the star of the show. Yet even as we’re treated to an abundance of Dark Cooper and Dougie Jones, we’re still waiting to see the one character that I, and a lot of other fans, have been awaiting most impatiently. Dale Cooper, it’s fair to say, is one of the most peculiar protagonists in television history. As the archetypal outsider coming into an isolated town to investigate a murder, he seems at first like a natural surrogate for the audience, but, if anything, he’s quirkier and stranger than many of the locals he encounters. When we first meet Cooper, he comes across as an almost unplayable combination of personal fastidiousness, superhuman deductive skills, and childlike wonder. But if you’re anything like me, you wanted to be like him. I ordered my coffee black for years. And if he stood for the rest of us, it was as a representative of the notion, which crumbles in the face of logic but remains emotionally inescapable, that the town of Twin Peaks would somehow be a wonderful place to live, despite all evidence to the contrary.

In the third season, this version of Cooper, whom I’ve been waiting for a quarter of a century to see again, is nowhere in sight. And the buildup to his return, which I still trust will happen sooner or later, has been so teasingly long that it can hardly be anything but a conscious artistic choice. With every moment of recognition—the taste of coffee, the statue of the gunfighter in the plaza—we hope that the old Cooper will suddenly reappear, but the light in his eyes always fades. On some level, Lynch and Frost are clearly having fun with how long they can get away with this, but by removing the keystone of the original series, they’re also leaving us with some fascinating insights into what kind of show this has been from the very beginning. Let’s tick off its qualities one by one. Over the course of any given episode, it cuts between what seems like about a dozen loosely related plotlines. Most of the scenes last between two and four minutes, with about the same number of characters, and the components are too far removed from one another to provide anything in the way of narrative momentum. They aren’t built around any obligation to advance the plot, but around striking images or odd visual or verbal gags. The payoff, as in the case of Dr. Jacoby’s golden shovels, often doesn’t come for hours, and when it does, it amounts to the end of a shaggy dog story. (The closest thing we’ve had so far to a complete sequence is the sad case of Sam, Tracey, and the glass cube, which didn’t even make it past the premiere.) If there’s a pattern, it isn’t visible, but the result is still strangely absorbing, as long as you don’t approach it as a conventional drama but as something more like Twenty-Two Short Films About Twin Peaks.

You know what this sounds like to me? It sounds like a sketch comedy show. I’ve always seen Twin Peaks as a key element in a series of dramas that stretches from The X-Files through Mad Men, but you could make an equally strong case for it as part of a tradition that runs from SCTV to Portlandia, which went so far as to cast MacLachlan as its mayor. They’re set in a particular location with a consistent cast of characters, but they’re essentially sketch comedies, and when one scene is over, they simply cut to the next. In some ways, the use of a fixed setting is a partial solution to the problem of transitions, which shows from Monty Python onward have struggled to address, but it also creates a beguiling sense of encounters taking place beyond the edges of the frame. (Matt Groening has pointed to SCTV as an inspiration for The Simpsons, with its use of a small town in which the characters were always running into one another. Groening, let’s not forget, was born in Portland, just two hours away from Springfield, which raises the intriguing question of why such shows are so drawn to the atmosphere of the Pacific Northwest.) Without Cooper, the show’s affinities to sketch comedy are far more obvious—and this isn’t the first time this has happened. After Laura’s murderer was revealed in the second season, the show seemed to lose direction, and many of the subplots, like James’s interminable storyline with Evelyn, became proverbial for their pointlessness. But in retrospect, that arid middle stretch starts to look a lot like an unsuccessful sketch comedy series. And it’s worth remembering that Lynch and Frost originally hoped to keep the identity of the killer a secret forever, knowing that it was all that was holding together the rest.

In the absence of a connective thread, it takes a genius to make this kind of thing work, and the lack of a controlling hand is a big part of what made the second season so markedly unsuccessful. Fortunately, the third season has a genius readily available. The sketch format has always been David Lynch’s comfort zone, a fact that has been obscured by contingent factors in his long career. Lynch, who was trained as a painter and conceptual artist, thinks naturally in small narrative units, like the video installations that we glimpse for a second as we wander between rooms in a museum. Eraserhead is basically a bunch of sketches linked by its titular character, and he returned to that structure in Inland Empire, which, thanks to the cheapness of digital video, was the first movie in decades that he was able to make entirely on his own terms. In between, the inclination was present but constrained, sometimes for the better. In its original cut of three hours, Blue Velvet would have played much the same way, but in paring it down to its contractually mandated runtime, Lynch and editor Duwayne Dunham ended up focusing entirely on its backbone as a thriller. (It’s an exact parallel to Annie Hall, which began as a three-hour series of sketches called Anhedonia that assumed its current form after Woody Allen and Ralph Rosenblum threw out everything that wasn’t a romantic comedy.) Most interesting of all is Mulholland Drive, which was originally shot as a television pilot, with fragmented scenes that were clearly supposed to lead to storylines of their own. When Lynch recut it into a movie, they became aspects of Betty’s dream, which may have been closer to what he wanted in the first place. And in the third season of Twin Peaks, it is happening again.

Earlier this week, I devoured the long, excellent article by Josef Adalian and Maria Elena Fernandez of Vulture on the business of peak television. It’s full of useful insights and even better gossip—and it names plenty of names—but there’s one passage that really caught my eye, in a section about the huge salaries that movie stars are being paid to make the switch to the small screen:

A top agent defends the sums his clients are commanding, explaining that, in the overall scheme of things, the extra money isn’t all that significant. “Look at it this way,” he says. “If you’re Amazon and you’re going to launch a David E. Kelley show, that’s gonna cost $4 million an episode [to produce], right? That’s $40 million. You can have Bradley Whitford starring in it, [who is] gonna cost you $150,000 an episode. That’s $1.5 million of your $40 million. Or you could spend another $3.5 million [to get Costner] on what will end up being a $60 million investment by the time you market and promote it. You can either spend $60 [million] and have the Bradley Whitford show, or $63.5 [million] and have the Kevin Costner show. It makes a lot of sense when you look at it that way.”
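The agent's arithmetic is worth making explicit, since the whole pitch turns on the star's premium being a rounding error next to the total investment. Here is a quick sketch of his numbers, with the caveat that the roughly $20 million marketing figure is my inference from his stated $60 million total, not something he gives directly:

```python
# Back-of-the-envelope version of the agent's pitch: a ten-episode
# season at $4M per episode to produce (a figure that, per the quote,
# already includes Bradley Whitford's $150K-per-episode salary), plus
# marketing and promotion. The $20M marketing line is inferred from
# the agent's stated $60M total, not quoted directly.
EPISODES = 10
PRODUCTION = EPISODES * 4_000_000        # $40M, ordinary star's fee included
MARKETING = 20_000_000                   # implied by the $60M total

whitford_show = PRODUCTION + MARKETING                         # $60M
costner_show = whitford_show + EPISODES * (500_000 - 150_000)  # $63.5M
premium = costner_show - whitford_show

print(f"Whitford show: ${whitford_show:,}")
print(f"Costner show:  ${costner_show:,}")
print(f"Premium: ${premium:,} ({premium / whitford_show:.1%} of budget)")
```

Run the numbers and the $3.5 million premium comes out to less than six percent of the total outlay, which is exactly the comparison the agent wants his buyers to see.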

With all due apologies to Bradley Whitford, I found this thought experiment fascinating, and not just for the reasons that the agent presumably shared it. It implies, for one thing, that television—which is often said to be overtaking Hollywood in terms of quality—is becoming more like feature filmmaking in another respect: it’s the last refuge of the traditional star. We frequently hear that movie stardom is dead and that audiences are drawn more to franchises than to recognizable faces, so the fact that cable and streaming networks seem intensely interested in signing film stars, in a post-True Detective world, implies that their model is different. Some of it may be due to the fact, as William Goldman once said, that no studio executive ever got fired for hiring a movie star: as the new platforms fight to establish themselves, it makes sense that they’d fall back on the idea of star power, which is one of the few things that corporate storytelling has ever been able to quantify or understand. It may also be because the marketing strategy for television inherently differs from that for film: an online series is unusually dependent on media coverage to stand out from the pack, and signing a star always generates headlines. Or at least it once did. (The Vulture article notes that Woody Allen’s new series for Amazon “may end up marking peak Peak TV,” and it seems a lot like a deal that was made for the sake of the coverage it would produce.)

But the most plausible explanation lies in simple economics. As the article explains, Netflix and the other streaming companies operate according to a “cost-plus” model: “Rather than holding out the promise of syndication gold, the company instead pays its studio and showrunner talent a guaranteed up-front profit—typically twenty or thirty percent above what it takes to make a show. In exchange, it owns all or most of the rights to distribute the show, domestically and internationally.” This limits the initial risk to the studio, but also the potential upside: nobody involved in producing the show itself will see any money on the back end. In addition, it means that even the lead actors of the series are paid a flat dollar amount, which makes them a more attractive investment than they might be for a movie. Most of the major stars in Hollywood earn gross points, which means that they get a cut of the box office receipts before the film turns a profit—a “first dollar” deal that makes the mathematics of breaking even much more complicated. The thought experiment about Bradley Whitford and Kevin Costner only makes sense if you can get Costner at a fixed salary per episode. In other words, movie stars are being actively courted by television because its model is a throwback to an earlier era, when actors were held under contract by a studio without any profit participation, and before stars and their agents negotiated better deals that ended up undermining the economic basis of the star system entirely.

And it’s revealing that Costner, of all actors, appears in this example. His name came up mostly because multiple sources told Vulture that he was offered $500,000 per episode to star in a streaming series: “He passed,” the article says, “but industry insiders predict he’ll eventually say ‘yes’ to the right offer.” But he also resonates because he stands for a kind of movie stardom that was already on the wane when he first became famous. It has something to do with the quintessentially American roles that he liked to play—even JFK is starting to seem like the last great national epic—and an aura that somehow kept him in leading parts two decades after his career as a major star was essentially over. That’s weirdly impressive in itself, and it testifies to how intriguing a figure he remains, even if audiences aren’t likely to pay to see him in a movie. Whenever I think of Costner, I remember what the studio executive Mike Medavoy once claimed to have told him right at the beginning of his career:

“You know,” I said to him over lunch, “I have this sense that I’m sitting here with someone who is going to become a great big star. You’re going to want to direct your own movies, produce your own movies, and you’re going to end up leaving your wife and going through the whole Hollywood movie-star cycle.”

Costner did, in fact, end up leaving his first wife. And if he also leaves film for television, even temporarily, it may reveal that “the whole Hollywood movie-star cycle” has a surprising final act that few of us could have anticipated.

So I haven’t heard all of Kanye West’s new album yet—I’m waiting until I can actually download it for real—but I’m excited about what looks to be a major statement from the artist responsible for some of my favorite music of the last decade. Predictably, it was also the target of countless barbs in the weeks leading up to its release, mostly because of what have been portrayed as its constant title changes: it was originally announced as So Help Me God, changed to Swish, made a brief stopover at Waves, and finally settled on The Life of Pablo. And this was all spun as yet another token of West’s flakiness, even from media outlets that have otherwise been staunch advocates of his work. (A typical headline on The A.V. Club was “Today in god, we’re tired: Kanye West announces album title (again).” This was followed a few days later by the site’s rave review of the same album, a sequence that traces a familiar pattern of writers snarking at West’s foibles for months, only to fall all over themselves in the rush to declare the result a masterpiece. The only comparable figure who inspires the same disparity in his treatment during the buildup and the reception is Tom Cruise, who, like Kanye, is a born producer who happens to occupy the body of a star.) And there’s a constant temptation for those who cover this kind of thing for a living to draw conclusions from the one scrap of visible information they have, as if the changes in the title were symptoms of some deeper confusion.

Really, though, the shifting title is less a reflection of West’s weirdness, of which we have plenty of evidence elsewhere, than of his stubborn insistence on publicizing even those aspects of the creative process that most others would prefer to keep private. Title changes are a part of any artist’s life, and it’s rare for any work of art to go from conception to completion without a few such transformations along the way: Hemingway famously wrote up fifty potential titles for his Spanish Civil War novel, notably The Undiscovered Country, before finally deciding on For Whom the Bell Tolls. As long as we’re committed to the idea that everything needs a title, we’ll always struggle to find one that adequately represents the work—or at least catalyzes our thoughts about it—while keeping one eye on the market. Each of my novels was originally written and sold with a different title than the one that ended up on its cover, and I’m mostly happy with how it all turned out. (Although I’ll admit that I still think that The Scythian was a better title for the book that wound up being released as Eternal Empire.) And I’m currently going through the same thing again, in full knowledge that whatever title I choose for my next project will probably change before I’m done. I don’t take the task any less seriously, and if anything, I draw comfort from the knowledge that the result will reflect a lot of thought and consideration, and that a title change isn’t necessarily a sign that the process is going wrong. Usually, in fact, it’s the opposite.

The difference between a novel and an album by a massive pop star, of course, is that the latter is essentially being developed in plain sight, and any title change is bound to be reported as news. There’s also a tendency, inherited from movie coverage, to see it as evidence of a troubled production. When The Hobbit: There and Back Again was retitled The Battle of the Five Armies, it was framed, credibly enough, as a more accurate reflection of the movie itself, which spins about ten pages of Tolkien into an hour of battle, but it was also perceived as a defensive move in response to the relatively disappointing reception of The Desolation of Smaug. In many cases, nobody wins: All You Need Is Kill was retitled Edge of Tomorrow for its theatrical release and Live Die Repeat on video, a series of equivocations that only detracted from what turned out to be a superbly confident and focused movie—which is all the evidence we need that title trouble doesn’t have much correlation, if any, with the quality of the finished product. And occasionally, a studio will force a title change that the artist refuses to acknowledge: Paul Thomas Anderson consistently refers to his first movie as Sydney, rather than Hard Eight, and you can hear a touch of resignation in director Nicholas Meyer’s voice whenever he talks about Star Trek II: The Wrath of Khan. (In fact, Meyer’s initial pitch for the title was The Undiscovered Country, which, unlike Hemingway, he eventually got to use.)

But if the finished product is worthwhile, all is forgiven, or forgotten. If I can return for the second time in two days to editor Ralph Rosenblum’s memoir When the Shooting Stops, even as obvious a title as Annie Hall went through its share of incarnations:

[Co-writer Marshall] Brickman came up to the cutting room, and he and Woody [Allen] engaged in one of their title sessions, Marshall spewing forth proposals—Rollercoaster Named Desire, Me and My Goy, It Had to be Jew—with manic glee. This seemed to have little impact on Woody, though, for he remained committed to Anhedonia until the very end. “He first sprung it on me at an early title session,” remembers Brickman. “Arthur Krim, who was the head of United Artists then, walked over to the window and threatened to jump…”

Woody, meanwhile, was adjusting his own thinking, and during the last five screenings, he had me try out a different title each night in my rough-cut speech. The first night it was Anhedonia, and a hundred faces looked at me blankly. The second night it was Anxiety, which roused a few chuckles from devoted Allen fans. Then Anhedonia again. Then Annie and Alvy. And finally Annie Hall, which, thanks to a final burst of good sense, held. It’s hard now to suppose it could ever have been called anything else.

He’s right. And I suspect that we’ll feel the same way about The Life of Pablo before we know it—which won’t stop it from happening again.

Note: This post is the forty-fourth installment in my author’s commentary for Eternal Empire, covering Chapter 43. You can read the previous installments here.

“I am truly at my happiest not when I am writing an aria for an actor or making a grand political or social point,” Aaron Sorkin said a while back to Vanity Fair. “I am at my happiest when I’ve figured out a fun way for somebody to slip on a banana peel.” I know what he means. In fact, nothing makes me happier than when an otherwise sophisticated piece of entertainment cheerfully decides to go for the oldest, corniest, most obvious pratfall—which is a sign of an even greater sophistication. My favorite example is the most famous joke in Raiders of the Lost Ark, when Indy nonchalantly draws his gun and shoots the swordsman. It’s the one gag in the movie that most people remember best, and if you’re a real fan, you probably know that the scene was improvised on the set to solve an embarrassing problem: they’d originally scheduled a big fight scene, but Harrison Ford was too sick to shoot it, so he proposed the more elegant, and funnier, solution. But the most profound realization of all is that the moment works precisely because the film around it depends so much on craft and clockwork timing to achieve its most memorable effects. If every joke in the movie were pitched on that level, not only wouldn’t we remember that scene, but we probably wouldn’t be talking about Raiders at all, just as most of us don’t look back fondly on 1941. It’s the intelligence, wit, and technical proficiency of the rest of the movie that allows that one cornball moment to triumphantly emerge.

You often see the same pattern when you look at the movies in which similar moments occur. For instance, there’s a scene in Annie Hall—recently voted the funniest screenplay of all time—in which the audience needs to be told that Alvy and Annie are heading for Los Angeles. To incorporate that information, which had been lost when a previous scene was cut, Woody Allen quickly wrote and shot the bit in which he sneezes into a pile of cocaine. It included all the necessary exposition in the dialogue, but as editor Ralph Rosenblum writes in his memoir When the Shooting Stops:

Although this scene was written and shot just for this information, audiences were always much more focused on the cocaine, and when Woody sneezes into what we’ve just learned is a two-thousand-dollar cache, blowing white powder all over the living room—an old-fashioned, lowest-common-denominator, slip-on-the-banana-peel joke—the film gets its single largest laugh. (“A complete unplanned accident,” says Woody.) The laughter was so great at each of our test screenings that I kept having to add more and more feet of dead film to keep the laughter from pushing the next scene right off the screen…Even so, the transitional information was lost on many viewers: when they stop laughing and spot Alvy and Annie in a car with Rob, who’s discussing how life has changed for him since he emigrated to Beverly Hills, they are momentarily uncertain about how or why the couple got there.

And while the two moments are very different, it’s revealing that in both cases, an improvised moment of slapstick was introduced to crack an unanticipated narrative problem. It’s no surprise that when writers have to think their way out of a dilemma, they often turn to the hoariest, most proven building blocks of story, as if they’d briefly written a scene using the reptile brain—while keeping all the other levels of the brain alive and activated. This is why scenes like this are so delightful: they aren’t gratuitous, but represent an effective way of getting a finely tuned narrative to where it needs to be. And I’d also argue that this runs in both directions, particularly in genre fiction. Those big, obvious moments exist to enable the more refined touches, but also the other way around: a large part of any writer’s diligence and craft is devoted to arranging the smaller pieces so that those huge elements can take shape. As Shane Black pointed out years ago, a lot of movies seem to think that audiences want nothing but those high points, but in practice, it quickly grows exhausting. (Far too many comedies these days seem to consist of nothing but the equivalent of Alvy sneezing into the cocaine, over and over and over again.) And Sorkin’s fondness for the banana-peel gag arises, I suspect, from his realization that when such a moment works, it’s because the less visible aspects of the story around it are working as well.

My novels contain a few of these banana peels, although not as many as I’d like. (One that I still enjoy is the moment in City of Exiles when Wolfe trips over the oversized chess pieces during the chase scene at the London Chess Classic.) And while it’s not quite the same thing, there’s something similar at work in Chapter 43 of Eternal Empire, which features nothing less than a knock-down, drag-out fight between two women, one a runaway bride, the other still wearing her bridesmaid’s dress. If I’ve done my job properly, the scene should work both on its own terms and as an homage to something you’d see on a soapy network or basic cable show like Revenge. And I kind of love the result. I like it, in part, because I know exactly how much thought was required to line up the pieces of the plot to get to this particular payoff: it’s the kind of set piece that you spend ninety percent of the novel trying to reach, only to hope that it all works in the end. The resulting fight lasts for about a page—I’m not trying to write Kill Bill here—but I still think it’s one of the half dozen or so most satisfying moments in the entire trilogy, and it works mostly because it isn’t afraid to go for a big, borderline ridiculous gesture. (If Eternal Empire is my favorite of the three books, and on most days it is, it’s because it contains as many of those scenes as the previous two installments combined, which was possible only because of the groundwork that comes with two volumes’ worth of accumulated backstory.) And although there’s no banana peel, both Wolfe and Asthana are falling now, and they won’t land until the book is over…