Author: jhg2135

In a footnote from her essay “Against Interpretation,” Susan Sontag refers to film as a “subdivision of literature.” Now, I have never been one to uphold any kind of “hierarchy of the arts” (of what use would this be anyhow?), but I am interested in the relationship between different artistic mediums, and, in particular, as Sontag describes, that between film and literature. “Subdivision of literature” suggests literature as a kind of umbrella term encompassing film within its greater arena, as opposed to, as one might have intuitively supposed, two separate subsets within the greater arena that is “art.” Furthermore, the phrase disallows the opposite (“literature as subdivision of film”) to be true. What is it, then, that makes literature more “all-encompassing,” and what does it mean for a film to be “literary”?

An examination of “Godard’s Vivre Sa Vie,” Sontag’s essay on the French filmmaker’s fourth film, about a struggling-artist-turned-prostitute, will prove useful here. In the essay, Sontag points out two general tendencies of the artist: the tendency toward proof, characterized by an emphasis on considerations of form, and the tendency toward analysis, which is more akin to fruitless “rambling” within a work, as the artist chases after the “infinite angles of understanding.”

As you might have guessed, Sontag favors the former, insisting that “In great art, it is form—or, as I call it here, the desire to prove rather than the desire to analyze—that is ultimately sovereign. It is form that allows one to terminate.” Thus, it is characteristic of great art to contain “endings that exhibit grace and design, and only secondarily convince in terms of psychological motives or social forces.” Vivre Sa Vie is therefore “literary” in the sense that, as in all great literature (Sontag names Shakespeare’s early comedies and Dante’s Divine Comedy as paragons), at play is a predominant concern with proof—as opposed to analysis. The term “literary,” used to describe film, is thus a bit of a misnomer on Sontag’s part, as it might suggest the presence of qualities intrinsic to literature, whereas all she is referring to is that which defines good art, within any medium. For Sontag, this means the artist emphasizes the formal: that is, they include a conspicuous element of design (symmetry, repetition, inversion, doubling, etc.).

Sontag’s insistence on form strongly reminds me of my Art Hum instructor, Daniel Ralston, who would call us out whenever we responded to a painting with such platitudes as “I think the three birds represent the Holy Trinity” or “The expression of the left-most figure is one of intense melancholy”—statements of a kind that would no doubt have gone unchallenged (perhaps even praised) in some of my previous Core classes. For example, during my Literature Humanities course several years ago, a full hour was once spent on a Freudian analysis of Woolf’s To the Lighthouse (which, unfortunately for me, I consider to be one of the most beautiful novels of all time). Ralston would often respond to these comments by saying, “Yes, but what about formally—for example, what can you say about the composition?” And though frustratingly limiting and didactic at first, this methodology, I eventually came to realize, was far more compatible with my personal relationship with art, which had, for the most part, gone ignored in many of my humanities classes at Columbia.

This issue came up once during the discussion section of my Western class (FILM 2120, Topics in American Cinema: The West) last semester. The topic of discourse was the Edenic imagery permeating some boring film whose name I can’t recall. Someone had said, “I don’t see it. I don’t see him [the director] trying to do that,” to which the others collectively responded in a defensive chorus, “But it’s there,” leaving the poor girl outnumbered. In that moment, what none of us understood was that, at its core, the disagreement arose out of a difference in hermeneutical approach. On one hand, there was the school of thought that perpetuates myth by asserting that “this is there” and this isn’t, that “this ought to be but not that” (i.e., all the feminist readings of these films); on the other, there were those who believed that a work of art is the thing itself, not whatever meaning is forced out of it by some ulterior agenda.

This dry hermeneutical approach, prevalent among most schools and prone to mistreating the work of art, is the subject of Sontag’s famous essay “Against Interpretation.” As she writes: “…it is still assumed that a work of art is its content. Or, as it’s usually put today, that a work of art by definition says something. (‘What X is saying is…,’ ‘What X is trying to say is…,’ ‘What X said is…’ etc., etc.)” (4). “Content,” in this sense, is tantamount to “what I think it says,” which is always subjective—whereas it should be acknowledged that content is in fact objective (“This is not Edenic imagery, just a shot of a meadow where this story happens to take place”), and that anything more than that is a stretch, fabricating superfluous intellectual delusions that numb the senses and are best suited to the most cerebral of students, those who relish the thought of life in academia and seek to write theses along the lines of “A Queer Reading of the Works of Pedro Almodóvar” or “Marxism in Kafka”—horrible titles, but you get the idea. Sontag beautifully sums up the problem as follows:

“Like the fumes of the automobile and of heavy industry which befoul the urban atmosphere, the effusion of interpretations of art today poisons our sensibilities. In a culture whose already classical dilemma is the hypertrophy of the intellect at the expense of energy and sensual capability, interpretation is the revenge of the intellect upon art.”

And what would fix this? A de-emphasis on content and a recognition of art as a sensory experience. Or, as Sontag put it: “In place of hermeneutics we need an erotics of art.” It is by abiding by this mantra that I’ve discovered the audiovisual intensity of Faulkner in Aronofsky’s crescendos, the minimalist serenity and ennui of Hemingway in Antonioni, and the hypnotic allure of Tolstoy’s flawed (but painfully realistic) characters in Kieslowski. Literature is thus capable of being as “cinematic” as the cinema is of being “literary”—it’s just a matter of form, form, form.

My previous column was all about the cultural importance of Star Wars as the quintessential modern myth. I even mentioned the need for myth in these troubled times, insinuating my desire for Star Wars: The Last Jedi to acknowledge, or comment on, the current political climate in some capacity. And so, having now watched it, I ask: how good was it, and how does it hold up as a modern myth?

To begin, much of the progressivism from The Force Awakens is carried over here, and in some instances it is given much more room to breathe, as in Finn (John Boyega) and Rose’s (Kelly Marie Tran) excursion to Cantonica, a desert planet run by greedy, corporate, casino-obsessed profiteers who benefit from the galactic war between the First Order and the Resistance. As many reviews have been quick to point out, this arc is easy to dismiss as a digression, though this is attributable not to the arc’s narrative intentions but to its lackluster execution, which at times threatens to inspire blatant indifference on the audience’s part. From the moment Rose begins telling her sob-story backstory, which then leads into a preachy animal-rights midnight-exodus extravaganza, the narrative feels forced and progressive for the sake of being progressive—in short, it feels inauthentic.

I would narrow down my problems with this movie to one pivotal, overarching problem that effectively ruined all of the things that could have worked for the film: pacing. By this I mean not only the editing from one plot to another, but the consistent incorporation of “tonal distractions,” both of which, collectively, prevent any one point in the story from breathing and really coming into its own. One result of this is that, unlike The Force Awakens, the film no longer feels character-based—the word “feels” is crucial here, as the narrative was evidently attempting to darken and flesh out three of its main characters: Rey, Kylo Ren, and Luke Skywalker. This sophistication had the potential to be the holy grail of the film’s engagement, but, whenever this character-building is at play, it is superfluously embroidered by the aforementioned tonal distractions, whether it’s Luke tricking Rey into “using the Force” with a blade of grass, Kylo Ren being shirtless (but why?), or a Porg face-planting into a window during what should be a serious rescue scene on the planet Crait. It’s as if Robert Altman had been hired to write a Star Wars movie and immediately decided to Nashville the sh!+ out of it.

The thing is (and this gets to the heart of why I abhor Robert Altman films), the film medium is temporally constrained to a well-chosen economy of narrative if it has any hope of building and sustaining any degree of emotional investment. Shows like Game of Thrones and Orange Is the New Black have shown that the serial format is much more compatible with large ensemble casts, because the characters are given the room to be explored in an organic and engaging way. When these kaleidoscopic endeavors are condensed into a film, much of the emotional weight is lost in favor of what essentially amounts to “interesting ideas”: the philosophy underlying Luke’s cynicism, Rey’s development as a Jedi (we are given some “shocking” backstory, but how does this affect her character? She’s still on the good side at the end [I almost wanted her to go to the dark side, just to shake things up]), or Kylo Ren’s inner conflict (which, again, amounts to nothing—he is still the “bad guy” at the end of the film).

While The Last Jedi does not have a terribly high number of plots and characters, it does incessantly move from one thing we are meant to be taking seriously to another, a pattern that amounts to the same thing: the dilution of the audience’s emotional investment. Sure, much of the frantic pacing works for the fresh new theme of “let the past die, look to the future,” which may in fact be commenting on the generally pessimistic milieu of our times, and whose newness does manage to “keep the myth interesting, and hence relevant,” as I mentioned in my last column. However, The Last Jedi is revisionism done wrong, in the vein of Nolan’s The Dark Knight Rises, where a lot of interesting things are going down without succeeding in making us care. This is in sharp contrast to the much more cogent (and also revisionist) The Dark Knight, or The Empire Strikes Back. Recall how much time we spend following Luke’s training with Yoda in Episode V, or Rey the scavenger-for-parts at the beginning of The Force Awakens. These are some of my favorite moments in the franchise, and the reason they work is that we’re there for a while, to the point where the depicted world begins to feel organic, our own—thus paving the way for emotional investment.

If anything, The Last Jedi has compelled me to familiarize myself to a much greater extent with the Star Wars canon. Through my current efforts to understand just what in the world was happening in the film, I might eventually be able to tame my currently lashing and thrashing response to such a degree that the film may not appear as messy and improvised as it does now. Who knows, a year from now—maybe less—I may even like it.

The new Star Wars: The Last Jedi trailer has been out for months now, and fans—old and new alike—are still raving about it, once more submerging themselves in that paroxysm of fervent fan-boy anticipation, pre-packaged with every preview of an upcoming chapter, which instantaneously dominates the masses, spreading like wildfire the moment it hits YouTube. “What this trailer did,” said Jeremy Jahns, popular YouTube movie reviewer, “is what Star Wars trailers do, and that’s put Star Wars at the forefront—like yeah, this is happening.”

One person who’s probably less excited about the upcoming film is the creator of Star Wars himself, George Lucas, who gave up creative rights to the Star Wars universe after selling the franchise to Disney in 2012 for a whopping 4.05 billion USD. In a 2015 interview with Charlie Rose, when asked how he felt about Episode VII: The Force Awakens (the first installment of the reboot trilogy), Lucas said: “We call it space opera but it’s actually a soap opera. And it’s all about family problems—it’s not about spaceships…They decided they were gonna go do their own thing…They wanted to make a retro movie—I don’t like that. I like…Every movie I make I work very hard to make them different. I make them completely different: with different planets, different spaceships—y’know, to make it new.”

I disagree with Lucas’ judgment of Disney’s “nostalgia” approach. I maintain that, in order for the reboot to have the same initial awe-inspiring impression on the new generation as A New Hope (’77) had on the old, it had to retain as much of its mythic dimensions as possible—and to accomplish this, the nostalgia approach was clearly the most surefire way to go. Whatever backlash The Force Awakens (2015) might have received with regard to its “uninteresting” and “boring” resemblance to the original fails to recognize what makes Star Wars so compelling a cultural force: its function as myth, which, by its very nature, must remain as little changed as possible if it is to remain relevant.

Here it is important to distinguish between myth and narrative, for the latter is merely the particular (and always varying) mediation of the former (which is always the same). Put another way, a narrative, or an individual story, is simply a representation of a kind of “master story” that pre-exists in the audience’s mind long before they sit down to watch The Force Awakens for the first time—assuming, of course, the audience has lived long enough to have acquired a fairly confident intuition in regards to what constitutes this so-called “master story” that is myth.

“Myth” comes from the Greek word “mythos,” meaning “story.” It is from this definition that our understanding of myth must necessarily arise, for most theories of myth begin from the accepted idea of myth as a kind of “canon of story.” Here it is noteworthy that the medium of the story is not specified, for it would be erroneous to confine myth to a single art form (i.e., myth as the literary canon). Consider, for example, how ancient cave paintings are fraught with narrative imagery, from the dancing scenes of Serra da Capivara, Piauí, Brazil (28,000 to 6,000 BC) to the enigmatic beings and animals of Kakadu, Northern Territory, Australia (26,000 BC); after all, the story “I saw a kangaroo” is still a story, though, to us, not a particularly interesting one (insofar as it is not all that sophisticated).

What is interesting is that such geographically disparate populations, who would have had no physical means of contact with one another, should engage in the same activity (one not necessary for biological survival) with the same level of behavioral predictability as birds from separate continents—all of whom seem to instinctively grasp the concept of “nest-building” as pivotal for their offspring’s protection. What is it, then, that prompts what appears to be a primordially entrenched instinct of human nature? What is the point of saying, “I saw a kangaroo”?

The answer can be arrived at by emphasizing each of the sentence’s two components in turn and studying the resulting meanings. For if the emphasis is placed on “a kangaroo,” then one extracts an empirical value tantamount to the scientist’s collected data. Here, the sentence derives significance from its illumination of some perceived aspect (in this case, the “kangaroo”) of the world, that is, of reality. On the other hand, if one places the emphasis on “I saw,” a second meaning is discovered, this time signifying the presence of “I,” that is, the storyteller. This too can be perceived as empirical but, more notably, as humanistic, for the manifested will to engage in an activity that records the existence of oneself at a given time is a behavior unique to the human species.

What results from this innocuously curious act of paint-on-wall, then, is the radical evolutionary leap toward self-reflexivity, whereby an innate curiosity is cognitively mastered through creativity. Of course, this process had long been practiced by humans, but early on it was strictly in the material sense, and motivated by survival at that. With the emergence of art, however, the human’s cognitive faculties began to operate within a more fundamentally psychological dimension, one motivated not by survival but by the acquisition of knowledge, especially as this knowledge relates to the human being. In other words, cave painting illustrates a primordial desire to understand reality–that is, the universe–and humanity’s place in it.

The primary questions which myth asks, then, are: What is the nature of reality, and why am I a part of it?

The narrative patterns that emerge from humanity’s collective efforts to answer these questions constitute myth. These patterns can be found not only in paintings (depictions of animals, hunting scenes), but also, more complexly, in the literary tradition. Herein lies my earlier need to distinguish the “storytelling” canon from the “literary” one, since the literary, by its very nature, allows for a more immediate and elaborate representation of stories. We can count among these patterns creation stories, Campbell’s “monomyths,” earth/water mothers, etc. Most of us brought up with a classical education that included a relatively similar rubric of books are no longer surprised to find that the narrative elements of the Bible can be found in the Epic of Gilgamesh, in the Popol Vuh, in Homer, Shakespeare, Faulkner—you get the idea.

The last author mentioned beautifully described this intrinsic human need for myth in his Nobel Banquet speech, delivered in 1950 upon accepting the 1949 prize. Having discussed the paranoia bred by the Cold War, and the consequent nihilism of that milieu, he insisted that Man must remind Himself of “the old virtues and truths of the heart, the old universal truths lacking which any story is ephemeral and doomed—love and honor and pity and pride and compassion and sacrifice…[otherwise] His griefs grieve in no universal bones.”

All the “universal truths” Faulkner mentioned are major narrative forces of George Lucas’ epic saga: Anakin’s pride leading up to his metamorphosis into Darth Vader (Revenge of the Sith, 2005), only for him to express compassion and pity in his final moments (Return of the Jedi, 1983); the honor and love between friends that keeps the pack together through all manner of adversities (as in, say, Leia’s rescuing of Luke in The Empire Strikes Back, 1980); and, more recently, the sacrificial deaths of all of Rogue One’s (2016) major characters. Thus, The Last Jedi will be the latest installment of what can safely be called one of modernity’s greatest myths, for its treatment of these perennial themes has given it a universal appeal and, consequently, a formidable staying power worthy of mythic status.

In light of all this, the Reader (especially if they do not consider themselves a fan—on any level) may begin to appreciate the magnitude of cultural significance The Last Jedi is bound to have come this Christmas. Its arrival in cinemas this December will call upon (as the best mythic tales often do) a mass gathering of people who will expect to be awed and moved and shocked and, on top of all these things, reminded of these universal truths, thereby fostering, if only for a moment, a sense of solidarity among the masses which the cynical media eye would have us believe is practically nonexistent in modern times.

In a society as fast-paced and demanding as ours, it’s no wonder that, given the opportunity to unwind, the average person would opt for a film pre-packaged with all those qualities the viewer knows will suffice to fulfill their expectations without demanding much “mental exertion” on their part: archetypal characters, traditional narrative structures, impressive special effects, maybe a few laughs. A good story, a good time. One might have read a good novel instead and received much the same artistic treatment, but the movie has the added bonus of passive viewing—compared to the arduous demand of reading—within a radically condensed span of time (roughly two hours or so). Indeed, there is a reason Aristotle’s Poetics has become standardized reading for many an aspiring filmmaker: today, cinema has become the equivalent of the “condensed visual novel.”

This is a gross underuse of a medium that, as we shall see, can offer us so much more.

To begin with, any art is most compelling (that is, most likely to emotionally impact its receiver) when it prioritizes the potentialities unique to its particular form. In other words, these potentialities must come to the forefront of the artistic expression, serving as the principal driving mechanisms by which the artist aims to achieve their goal(s).

This is the presupposition that drives the cinematic theories of avant-garde filmmakers Jean Epstein (1897-1953) and Germaine Dulac (1882-1942), both of whom are invaluable resources in the search for an “essence of cinema.”

That both of these theorists are avant-garde is key, because, as Dulac teaches us, the avant-garde filmmaker is characterized by their “in tune-ness” with this so-called “essence of cinema” in their work–a cinematic approach that dawned after all previous major forms (realism, narrative, psychological realism) had been exhausted. Dulac stresses the importance of the avant-garde scene, for the continued evolution of the cinema form is dependent upon its ongoing survival.

It may seem as if Dulac is interested in cinema’s evolution in and of itself—that is, in the hackneyed postmodern case of “art for art’s sake”—but one mustn’t be fooled by the formal intellectualization of her language. Beneath all the technicalities, the reader senses an authentic desire to affect the viewer through a kind of crystallized beauty, which, in film, for Dulac can only be accomplished through the formation of a “visual poem made up of life instincts, playing with matter and the imponderable. A symphonic poem, where emotion bursts forth not in facts, not in actions, but in visual sonorities” (655). Such impassioned—almost sentimental—statements prove Dulac is completely on board with Epstein’s search for a cinema that “arouses an aesthetic emotion, a sense of infallible wonderment and pleasure” (257).

For both theorists, said search is characterized by the filmmaker’s quest to pierce through that elusive, truth-veiling something, which both of them term “the imponderable.” But what is the imponderable? The filmmaker is aspiring to unveil the truth about what?

This is a question that is not particular to the cinematic form and whose answer is virtually the same for all modes of artistic expression: truth about the nature of reality itself. This has been the role assigned to the artist since time immemorial, dating back to the tragedy plays of the Classical era. Even today, the cinema-goer is most contented when they can confidently say about a film that it “told it how it is” (with the bonus fantastical embellishments here and there, of course).

Following the premises of Dulac and Epstein, the question then becomes, “How is the filmmaker uniquely positioned to approach this task, and what are the artistic utilities at his or her disposal?” To the first question, both theorists would answer the same way: the filmmaker is uniquely positioned insofar as they deal with—by the very nature of the medium—visual movement. This answer consequently explains Dulac’s emphasis on rhythm as the vital technique in fulfilling the artist’s expression. After all, the visual movement exists within a “frozen” space-time continuum (a kind of filmmaker’s “canvas”), and it is only by deriving a contrived cadence from this canvas that the filmmaker achieves personal expression; in other words, the filmmaker concerns themselves with the manipulation of time in order to achieve their creative expression.

Although Epstein’s “Photogénie and the Imponderable” (1935) is far less specific than Dulac’s “The Avant-Garde Filmmaker” in answering the second question, his text nevertheless proves to be a rich resource for a better understanding of this “filmmaker’s canvas,” this “frozen space-time continuum,” especially as it pertains to the viewer’s emotional needs—needs which, by the way, the viewer may be unaware of possessing. We may arrive at these affective ramifications using “Photogénie” in a rather indirect manner.

Epstein points out man’s “physiological inability to master the notion of space-time and to escape this atemporal section of the world, which we call the present” (254). He describes this eternal “atemporal section,” this present, as “psychological time,” as it is borne out of our “egocentric [that is, automatic, subconscious] habit” (255) of accepting this flow as an absolute in our lives—and accept it we do. Despite Einstein’s illuminating truths, which characterize space-time as a malleable fabric permeating the entirety of the universe, capable of being stretched and producing myriad ebbs and flows, we on Earth experience only one of these flows and have learned to accept it as an inherent aspect of what is in fact (as Einstein shows us) a very limited perspective on physical reality.

Here I want to take what will feel like a digression, but I assure you, it’s not (please just bear with me for a second): I want to take a moment to consider the teachings of twentieth-century German philosopher Martin Heidegger.

According to Heidegger, people tend to stay out of touch with the sheer mystery of existence, the mystery he termed “das Sein,” meaning “Being.” One of the main culprits, he notes, is the rapidity of the modern world—always keeping us on the move, overwhelming us with work and information so that we are virtually in a state of perpetual distraction from the mystery of Being, unable to step back and see the strange in the familiar. This stepping back, Heidegger admits, has its downside: fear, or “angst,” may take hold of us as we realize the primordial chaos from which we come, and in which we in fact constantly remain. In this way, we come face to face with the meaninglessness of all things.

Epstein alludes to this “angst” in his own—and more colorful—way: “Not without some anxiety, man finds himself before that chaos which he has covered up, denied, forgotten, or thought was tamed. Cinematography apprises him of a monster” (255).

The thing is, once the initial shock has passed, what follows is a kind of out-of-body, existentialist sensation which is nonetheless therapeutic in its own way. Epstein uses the example of watching footage of oneself from long ago: though we acknowledge the ontological link, this link feels disconcertingly severed by the fact that that former self no longer lives in psychological time–that is, in the present. Consequently, this gives us the impression of a phantom-like projection of ourselves that is simultaneously there and not there. But herein lies the secret of cinema’s unique “medicinal” capabilities.

Both Epstein and Dulac wrote about the rhythmic grace emanating from time-manipulated footage. Dulac mentions the “formation of crystals,” “the bursting of a bubble,” and the “evolutions of microbes” (656), while Epstein points out how a plant “bends its stalk and turns its leaves toward the light,” as elegant as “the horse and rider in slow motion” (254-255). It is clear that, for both of these theorists (and I am completely on board with this), the key to freeing the viewer from the mentally draining chains of “psychological time,” which keep us from experiencing the wonder of “das Sein,” is to show them the fragility of their cage. Cinema accomplishes this by trapping space-time: absorbing it like a bubble and freezing it into crystal balls through which the viewer looks into the past and realizes the obvious anew—that our time here is short, and every instant is filled with boundless grace and beauty. By playing God, the filmmaker may thus bestow upon the viewer their moment of affective transcendence.

* * *

The single cinephile in my (admittedly small) social group, I have never been inclined to suggest to my friends such “lofty” films as Tarkovsky’s, Bergman’s, or Antonioni’s—all of whom play with time (or call attention to the strangeness of psychological time, especially through the use of long takes [think Steve McQueen’s heart-wrenching eighty-six-second shot of Solomon’s quasi-lynching in 12 Years a Slave; in light of this example, I will also note that the “balance” between narrative and avant-garde was not touched on in this essay—all in good time]) and have, for me, produced that aforementioned affective transcendent effect. After all, as Dulac mentions on more than one occasion, the avant-garde “does not appeal to the mere pleasure of the crowd” (653).

But perhaps the fault lies with those of us who understand cinema’s greatest power. Perhaps we ought to take a cue from Dulac, who wrote and lectured widely on film aesthetics, and be less apologetic about cinema’s “purer” dimensions. After all, academic institutions deem it worthwhile for students to learn the language of literature, visual arts, and music, in order for us not only to gain appreciation for the Arts, but to derive from them momentous personal value as well.

Why should cinema be any different? Is it because, as the Seventh Art, it is still relatively new?

Consider last year’s “top-grossing films” list. These films are not bad, nor are their narratives utterly irrelevant (a claim I, and both the theorists we have discussed, would dispute), but there’s just so much more to be gained by learning the cinematic language.

And so, I’ve changed my mind—watch Bergman, like, right now!

* * *

Below, a recommended list of more “purely cinematic” works, from which the budding cinephile may “branch” out of their own accord (in order of “difficulty,” 1 being “most challenging”):

During my Crisis, before watching Güeros, I watched Parks and Recreation…Like, all of it. All seven seasons. Before that, I watched the first five seasons of Futurama. Before that, Breaking Bad, Orange Is the New Black, Love, and every Best Picture winner since 1939 (minus a few bad eggs, not the least of which is 2005’s Crash—c’mon, give me some credit!).

For all my talk of the “primordial power of cinema” in my last column, I would be remiss—and indeed, quite hypocritical—in failing to acknowledge cinema’s second primary function, borne out of its alluring spectacle quality: cinema as a medium for entertainment.

Indeed, cinema has always possessed a two-fold functionality—as emotional therapy and as spectacle—which became apparent immediately following film’s inception in the mid-1890s. This dichotomy is most obvious in the work of the towering French pioneers of that early era: Auguste and Louis Lumière (the “Lumière brothers”) and Georges Méliès. Today, these figures are widely considered the “founding fathers” of cinema, though their bodies of work could not be more antithetical.

For Siegfried Kracauer, one of the most prominent figures in film theory, this opposition highlights what he famously termed the two “tendencies” of the cinema: the realistic and the formative.

The realistic tendency was first exemplified in the Lumière brothers’ archival films. Their most famous work, Workers Leaving the Lumière Factory in Lyon, shows exactly what its title suggests, within the span of a then-whopping forty-six seconds. The Lumière brothers were interested, above all, in capturing “everyday life after the manner of photographs.” In other words, the realistic tendency strives to capture (or replicate, through staging) the “nakedness” of life, in the style of, say, a documentary.

By contrast, the formative tendency aims to go beyond the replication of physical reality, an aim that requires an emphasis on cinema-specific techniques (special effects). Méliès employed these techniques more adventurously and innovatively than any other filmmaker of his time. The popularization of such universally known techniques as time-lapse photography, dissolves, and hand-painted frames can be attributed to Méliès. His legacy as the founder of cinema as a “fantastical art” endures today; he is most recognizably invoked through the iconic, anthropomorphic moon of A Trip to the Moon.

Although it is clear that most of cinema displays an overlap between these two tendencies, Kracauer’s teachings have nevertheless continued to serve as a useful starting point for many a timid freshman entering the daunting realm of film theory for the first time. All subsequent cases for a “purpose of cinema” tend to exist within Kracauer’s rough outline of these two core functions: cinema as verisimilitude, and cinema as spectacle.

Modern audiences would tend to agree that the best works of cinema strike a harmonious balance between the two. Films at either end of the spectrum do not hold the attention of mass audiences for very long. If anything, a quick look at any “highest-grossing films” list from the last few years shows the People’s obvious predilection for spectacle. Films like Captain America: Civil War, Finding Dory, and Rogue One: A Star Wars Story all feature fantastical worlds rooted in and motivated by traditional, realist plots: realist insofar as they thematically mirror the plights of our own modern world, whether at the level of the individual, the community, or, as in Civil War’s case, the nation. Such films succeed in being both awe-inspiring and emotionally satisfying. On the flip side, this same predilection for spectacle means studios will blatantly abandon substance by backing projects that rely on the spectacle element alone (Suicide Squad, Batman v. Superman, The Legend of Tarzan, etc.).

(Note: the ongoing success of this era’s Golden Age of Television indicates an audience leaning towards verisimilitude, seemingly contradicting my observations thus far. However, one must take into account the nature of the TV show, which is fundamentally distinct from that which can accurately be deemed “cinematic”; but all this for a later time.)

Although it’s safe to say that the average Columbian is more cultured than the average person, it’s probably also true that, outside of film majors and cinephiles, the average Columbian isn’t as well-versed in film as in, say, literature, art, or music. The reason is obvious: much of the Columbian’s expansive cultural lore can be attributed to our beloved Core Curriculum, which sadly does not include a “Film Humanities” course.

Attempting to coin a term like “Film Humanities” might seem preposterous and naïve at the outset, but such a negative reaction is unwarranted, as it is probably based on one (or both) of two fallacious assumptions:

1. Film is predominantly a “spectacle-based” art, unqualified for the kind of rich and complex analyses the other arts tend to incite.

2. Film is too young an art form and lacks the historical breadth necessary for making any substantial claims about the human condition that are worth investigating in a scholarly fashion.

To the first: we have already discussed film’s two-fold capacity for realism and spectacle, which implies that there exists a whole canon of films predominantly concerned with verisimilitude, with subject matter relevant to the human experience. The “spectacle-based” assumption reflects a biased account of cinematic history, whereby, at the turn of the millennium, the Digital Age all but ensured that film as a “fantastical art” would be the way of the future, rendering all previous cinematic periods obsolete in the public eye.

I would also add that to reject “spectacle” point-blank, as an element that abolishes any degree of humanities-based discourse, is also erroneous, for it fails to take into account the vast and rich spectrum of genre variations within the real and the fantastical (e.g. ontological realism, psychological realism, aesthetic realism; see Bazin’s “The Evolution of the Language of Cinema”), a spectrum evident in literature as well. Consider, for example, the tremendous difference, from a genre standpoint, between Homer’s Iliad and Woolf’s To the Lighthouse, both of which are required reading for Literature Humanities.

I will counter the second point in a later column, as it deserves its own thorough investigation.

For now, I encourage all Columbians, especially those for whom “cinema” is tantamount to “that which is relevant to the current cultural zeitgeist,” to explore the history of cinema voluntarily, with the same seriousness the Core accords the other, more “noble” arts.

To begin with, this will require a “survey of the greats,” for which I urge you to put your beloved Netflix/Hulu/Amazon Video show temporarily on hold and direct yourself to filmstruck.com, where you can subscribe for a two-week trial. This should be enough time to at least begin exploring the list I have curated for you below. (And if it’s not, you can use this website to see what other platforms offer these films.) All of the following works share a “crossover” appeal (a gateway to “artsy” films) that I hope will take hold in all you soon-to-be-cinephiles.