
The translated title of Byung-Chul Han’s small book The Burnout Society is unfortunate. For one, it immediately places the work in the discourse of burnout, which connotes all sorts of self-help, positive-thinking, bear-the-burden-alone nonsense that is ever so perpendicular to what the book is about. For another, it is wrong. The original title, Müdigkeitsgesellschaft, is more accurately translated as “the tiredness society” or “society of the tired”. The actual medical condition of being burned out is an aspect of this tiredness, but it is not the main focus of the book. Which, unfortunately, means that the point of the closing reflection on what it means to be tired together tends to get lost on English readers. It comes as a surprise, rather than as a fitting conclusion.

As these words are written, the importance of quarantining oneself against the Covid-19 virus is entering public consciousness. This is an interesting point in time, since many latent patterns of thought are being directed at and applied to the upcoming quarantine situation. The common sense interpretation of how to deal with the new situation is emerging, and thus we get an unusually clear picture of the common sense interpretation of how to deal with the old situation. For a brief moment in time, we can see the change in action, and contrast what’s new with what’s old.

One common reaction to the quarantine is to say “gosh, I’m gonna get so much done! This sure is going to be a very productive time!”. A quarantine is seen as a temporary reprieve from restraints that prevent the full forces of creative output from being unleashed into the world, and thus as a potential time of unparalleled getting shit done. Unread books, hobby projects, writing ideas, gardening feats, culinary experiments – whatever it is, now is the time for getting it done. The pent-up creative energy will flow with wild abandon, ushering in a new era of unprecedented personal productivity.

Byung-Chul Han contends that the last few decades have seen a shift from what he calls negative production to positive production. The former is a process of standardization and error elimination, whose goal at all times is to remove flaws in order to maximize efficiency. These flaws can either be technical, in the sense that the productive machine is not optimally configured, or social, in the sense that abnormal elements of society have to be removed or repressed so that the eternal productivity can continue without interruptions. Deviant forms of life, queer sexualities or non-conformist ideologies are examples of such abnormalities. The goal of negative production is to make each part of the production process standardized and interchangeable, including the human components. A worker is a worker, and workers do as they are told.

Positive production, in contrast, relies not on standardized units of production doing what they are told. Instead, these very same units have internalized the imperative to be productive to such an extent that they tell themselves what to do. The specifics vary from person to person, but the imperative to Produce remains a constant. At work, this expresses itself as an ever-increasing effort to attain maximum productivity, to be the utmost exemplar of whatever work is performed in all aspects. At home, it expresses itself as a nagging sense that one should do something productive. Academics have a name for this nagging sensation: “I should be writing”. The same goes for any other productive endeavor: I should be reading, painting, remixing, organizing, meditating. Whatever the activity, the same nagging sensation arises that it should indeed be done.

The upcoming Covid-19 quarantine, a period of time within which a person is specifically obligated to stay at home, is a good way to see the two mentalities of production in action. For those working under the paradigm of negative production, this would be a period to unwind – to be themselves, to sleep in, to not give a darn about the Man. Those laboring under the new paradigm, however, have to get themselves ready to (as paradoxical as it might seem) get to work. The imperative is still there, and it is even stronger for there not being anything else to do. The quarantine is a great opportunity and an even greater obligation.

Based on this, we should be able to predict that a substantial number of people will end up more tired at the end of the quarantine than at the beginning of it. The amount of actual work accomplished during the quarantine is beside the point; the tiredness is not a result of sustained effort, but of constantly feeling that the Work should be performed. Positive production allows no time for rest, only for more of itself, more production. Instead of the quarantine being two weeks of rest, relaxation and recovery, it will be two weeks of constant anxiety over not being sufficiently productive during our allotted time. At the end of it, tiredness and exhaustion will be the words of the day.

The point of this book is not to argue that we should go back to negative forms of production. This is a book of philosophy, after all, which means the point is to get us to ask ourselves if this really is how we want to spend our lives (in and out of quarantine). Like all good works of philosophy, it does not lead to a clear-cut, easy-to-implement answer, and leaves readers with more questions than they previously had. I say we acknowledge our tiredness, and then proceed to be as unproductive as we need to be.

Northrop Frye’s Anatomy of Criticism was published in 1957. This is not only a bibliographically necessary nugget of information for when you want to compile a list of works cited, but also an important touchstone when reading the book. A great many things have happened since 1957, and it is interesting to waltz through the realms of criticism as they were back then with the knowledge of how it all turned out later. Armed with the knowledge of the future, returning to this writ is akin to a tour of what might have been. The present back then was contingent in a way that our present is not.

To take an example: at length, Frye gestures to the emerging trend of replacing criticism proper with the act of producing ranked lists, and the inherent methodological problems of such an approach. For one, merely ranking things is not a critical act; it is merely the application of a more or less explicitly defined set of criteria to a limited set of objects. Going through the motions of such a procedure does not increase our understanding of the works in question, or even of why they were included in the ranking process to begin with. With the modern phenomenon of listicles firmly in mind, we can look back on these musings and nod in extended agreement. Not to mention the trend of modern mission statements to abandon grammatical structure in favor of disjointed yet prominently displayed keywords, where even the pretense of an overarching organizational principle has been abstracted out of the picture.

To take another example: while delineating the different roles of critics and authors, Frye makes a joking aside that Dante, who proclaimed that a certain poem was the best he had ever written, was in so doing an indifferent critic of Dante, and that others had gone on to write better critiques of said poem. Little did Frye know that a mere decade later, the whole death-of-the-author hubbub would flare up in earnest when Barthes kicked the hornet’s nest. And then kept it going for quite a spell.

A funny third example is how Frye points out that there were no standard introductory books on criticism. He then goes on to speculate about what the first page of such a book would say, pondering that perhaps it might modestly begin with the question “what is literature?”. Then, he ventures that the second page might expand on this question, in terms of verse and rhythm, and that subsequent chapters would then deal with the complexities of genre and other nebulous yet necessary literary terms. Terms which, although recognizable in action and principle, seem strangely resistant to being theoretically explicated.

It is funny in the sense that there are now several such books, which do not necessarily agree with each other on the finer points of what criticism actually is. It is also funny in the additional sense of a reader being able to go to their local university library, scour the shelves for every book that looks vaguely introductory to the enterprise of literary criticism, and empirically investigate how Frye’s prediction turned out. So I went and did just that. Here follows, in the order in which they stacked up next to me after I had done the aforementioned scouring, the results.

First out is Persson (2007), titled Varför läsa litteratur? (Why read literature?). An introductory title if there ever was one. The book begins by mentioning that merely asking this question is seen as blasphemous in certain contexts, almost taboo. Persson then continues to outline how, in practice, this very question has had a variety of responses throughout the ages, relating to the building of such things as character, nationhood and a (well-read) democratic citizenry. He then gestures towards the contemporary trend within organizations to demand a justification (a stronger word than an explanation) for everything that happens within them. Thus, being prepared with answers that have slightly more rhetorical and conceptual bite than “it’s a traditional value held throughout literally all of recorded human history (more often than not constituting said history)” is a modern virtue.

Next up is Barry (2009), with Beginning Theory. It opens with the observation that the “moment of theory” has passed, and that we now find ourselves in the “hour of theory” – the enthusiastic fervor with which theory was introduced has been replaced with the slightly less enthusiastic aftermath, in which we can look back upon what has gone before and calmly set to work organizing and cataloging it. Theory, literary theory included, has become a day-to-day business, and thus it needs standardized books like this one so that everyone in said business is, as far as such things are possible, on the same page.

Observant readers will note that “criticism” seems to have been replaced with “theory”. Just theory in general, with “literary” added on as a reminder that books are somehow involved. Culler (1997) picks up this theme on the first page of Literary Theory, where he differentiates between capital-T Theory in general and literary theory in particular, and then goes on to discuss how the two have been so thoroughly intertwined over the last decades that keeping them separate is a fool’s errand. Non-literary theory (defined broadly) has shaped how literature is written, which has then affected how criticism of said literature has taken form, which in turn has influenced literary theory; to fully understand it all, a modern reader has to know a little about every step of this series of events. Basically, a critic also needs to be a theorist, in order to understand the books they claim to critique.

Franzén (2015), in Grundbok i litteraturvetenskap (Introduction to literary studies), takes a slightly more analytic approach, and defines theory in the scientific sense of being a comprehensive set of ideas relating to something; the ‘something’, in this case, is literature in its many forms. Franzén notes that there has been a move from writing about literature in a normative sense – i.e. how it should be – to writing about it in a descriptive sense – how it actually does what it does. The book then proceeds to outline a number of themes in this straightforward manner.

Eagleton (1996) opens Literary Theory with the striking formulation “[i]f there is such a thing as literary theory, then it would seem obvious that there is something called literature which it is the theory of”. After this opening salvo, Eagleton takes a closer look at what the category of “literature” includes (e.g. the Iliad) and what it, more importantly, does not include (comic books), and how this selective applicability affects the theory which claims to be about those things included. What is literature indeed.

Peck and Coyle (2002) introduce Literary Terms and Criticism with the assertion that “literary criticism is primarily concerned with discussing individual works of literature”. The authors then immediately clarify that aspects slightly less particular to an individual work, such as its genre or its historical context, also play into the process of criticism. The tension between books always being singular, unique and one of a kind, yet also very possible to group together with other similar monads, is as yet one of the unresolved questions of theory, literary or otherwise.

Next up is Norton’s monumental tome the Norton Anthology of Theory and Criticism (2010), which features 2758 large pages of small print, covering just about every aspect of theory and/or criticism there is. It starts off by proclaiming that there are those who claim to be anti-theory, who hold the position that all this circumlocution is a mere distraction from the real work of getting it done. Slyly, the anthology then points out that this in itself is a theoretical position, whose assumptions can be critically examined and thus better understood. Not said, but heavily implied, is that the following thousands of pages might be of some use in this critical endeavor.

Finally – it was a big stack, dear reader – Bennett and Royle (2009) begin their An Introduction to Literature, Criticism and Theory by posing the rhetorical question: when will we have begun? From this provocation, the authors then set out to problematize the beginning of a text. Do early drafts count, or shall we limit ourselves to the finished publication? What about marginal notes, commentary, public reception or influential works of criticism? When, indeed, can we with confidence proclaim that we have read and understood enough to finally get on with doing either literature, criticism or theory?

It is tempting to say that Frye is still correct in his assertion that there is no standard introductory work on criticism. The prevalence of many introductory works, plural, only serves to underline this point, albeit probably not in the spirit in which Frye made it. But I reckon it would be more fruitful to say that there is indeed a standard of introductory works, and that what unites them is an unwillingness to once and for all proclaim what literature (and the criticism of it) actually is. Literature is at once both the baseline of human expression (in its many forms), and the gradual expansion of the possibilities of human expression. We all agree that there is such a thing as literature, and then immediately start to argue about the finer points beyond this first principle. Establishing a firm definition of what literature is invites future authors to blur the line with new and creative literary feats, and criticism must always – lagging behind as it is – try to keep up with whatever tools it can get its hands on, theoretical or otherwise. Which is indeed a hopeful thought to take into an uncertain future. It certainly makes the present ever so slightly more contingent.

The I Ching – the book of changes – is a strange thing. It is, all at once, a divinatory practice, a meditative technique, a highly significant cultural document and a vocabulary. All crammed into a very small package, most of which – for western readers – will consist of contextual information, clarifications and useful forewording. The actual text is a mixture of commentary, general life advice and technical documentation, all intertwined. Those looking for a straightforward read will be highly disappointed.

In technical terms, the I Ching is a six-bit binary system with 64 different states. As with computer binary, each bit can either be 0 or 1, yin or yang. Depending on which six bits are given by the divinatory process, the resulting sign can give very different interpretations of the situation you find yourself in. The sequence 000111 gives you the sign Stagnation, a very clear indication that the situation is hopeless and that nothing good can come out of persisting; the general advice is to leave as quickly as humanly possible. This can be contrasted with the at first glance seemingly opposite 111010, the sign for Calm Anticipation, which advises that a great or dangerous moment is imminent, yet that the time to act is not quite here; the general advice is to wait energetically. Two very different moods to find oneself in, yet very compactly conveyed through the use of merely six lines.
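For the programmatically inclined, the system is compact enough to sketch in a few lines of Python. A minimal sketch, assuming yang is written as 1, yin as 0, and the bottom line comes first; the two entries in the lookup table are just the examples above, not a full table of all 64 signs:

```python
# Each hexagram is six lines, read bottom to top.
# Encode yang as 1 and yin as 0, giving a six-bit string.

NUM_HEXAGRAMS = 2 ** 6  # 64 possible states in total

# Illustrative entries only -- the two signs discussed above.
SIGNS = {
    "000111": "Stagnation",         # earth below, heaven above
    "111010": "Calm Anticipation",  # heaven below, water above
}

def sign_name(lines):
    """Look up a hexagram given its six lines, bottom line first."""
    key = "".join("1" if line else "0" for line in lines)
    return SIGNS.get(key, "(not in this illustrative table)")

print(NUM_HEXAGRAMS)                  # 64
print(sign_name([0, 0, 0, 1, 1, 1]))  # Stagnation
print(sign_name([1, 1, 1, 0, 1, 0]))  # Calm Anticipation
```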

This efficiency is ever so slightly opaque to those who do not know the signs. It is also a remarkable achievement. It manages to place wildly disparate life experiences into the same framework, and thus allows for comparisons between different situations and the appropriate courses of action for each such situation. When under the sign of Stagnation, the only possible way forward is to just drop everything and get out, since nothing can be salvaged. When under the sign of Calm Anticipation, however, the opposite is true – the winning move is to keep your eyes firmly on what’s ahead and stick to the plan. The wisdom imparted by comparing these two signs is that these are two possible life situations to find oneself in, and that being able to tell which applies to the current moment is crucial to getting ahead.

As you might imagine, there are a great number of possible comparisons to make with 64 available signs. To make things even more interesting, each sign is subdivided into six subvariations depending on which line gets emphasized in the divinatory process. Take 111010 as an example. Emphasis on the first line indicates that the danger is far away still, and that the best way to prepare is to live in such a way that the appropriate virtues are firmly in place when it finally does arrive. This can be contrasted with the fourth line, which indicates that the danger is already clear and present, and that the proper move is not to make things worse in a blind panic, but to calmly hold fast. Both indicate that things will get better once the approaching danger can be overcome, but that overcoming this danger is a function of the actions taken in the calm moments of preparation.

Math enthusiasts will quickly figure out that the sum of these subvariations is 384 – 64 signs times six lines each – a respectable number of possible life situations. When I earlier called the I Ching a vocabulary, this is very much what I meant; being able to systematically distinguish between such a large number of possible situations (and the prudent courses of action for each) is a whole dedicated skill in itself. Being able to talk with confidence about the subtle differences between the different signs and their subvariations is yet another skill, one which may very easily be (as the character of Chidi in the TV series The Good Place so eloquently exemplifies) mistaken for wisdom. It is the allure of what is signified through 101001, Effortless Grace, which ever so slyly emphasizes the former over the latter.

The great number of variations points towards one of the inherent paradoxes of the I Ching system. On the one hand, the sheer volume indicates that just about everything ought to be covered in there somewhere. On the other hand, any student of creative writing will surely be able to think up more than six variations for each sign, once they have gotten the general gist of what it is about. Indeed, anyone with sufficient life experience will be able to recall that one time when the sign itself was applicable, but none of the variants really fit. The world is greater than the attempt to systematically categorize it.

This paradox is not a bug, however. It is a feature. Once someone has gotten so used to the signs and variations that they are able to identify the blind spots of the system, they have mastered a vocabulary of situations, remedies and moods so vast as to be able to conceptualize just about anything they stumble upon. If a peculiar situation does not fit into the system, then that too is useful information, and indicates that there is something there that warrants thinking more intently about.

Thinking intently is one of the things the I Ching encourages its practitioners to do. Going through the motions of a divinatory session takes anything from 30 to 90 minutes, during which it is advisable to keep out all distractions. Not only because it is easy to lose count whilst going through said motions, but also because the sheer act of sitting still with the problem firmly in mind is itself a kind of thinking. As Jung almost phrased it, the hands are busy whilst the mind is given space to consciously and unconsciously process the situation. Once the answer is given and a sign appears, the practitioner is more than ready to see how it applies to the present circumstance, in extensive detail.
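For those curious about said motions: one common procedure is the three-coin method, considerably quicker than the yarrow-stalk ritual implied by the session times above. Each of the six lines is cast by tossing three coins, heads counting 3 and tails 2; totals of 6 and 9 mark changing lines, which is where the emphasized subvariations come from. A minimal sketch of this, using the standard coin values rather than anything specified in the text itself:

```python
import random

def cast_line():
    """Toss three coins: heads count 3, tails count 2.
    Totals map to line types:
      6 -> old yin (changing),  7 -> young yang,
      8 -> young yin,           9 -> old yang (changing)."""
    total = sum(random.choice((2, 3)) for _ in range(3))
    is_yang = total in (7, 9)
    is_changing = total in (6, 9)
    return is_yang, is_changing

def cast_hexagram():
    """Cast six lines, bottom to top; the changing lines are the
    ones that get emphasized in the reading."""
    lines = [cast_line() for _ in range(6)]
    bits = "".join("1" if yang else "0" for yang, _ in lines)
    emphasized = [i + 1 for i, (_, ch) in enumerate(lines) if ch]
    return bits, emphasized

bits, emphasized = cast_hexagram()
print(f"hexagram {bits}, emphasized line(s): {emphasized or 'none'}")
```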

The I Ching is a peculiar text, a discursive anomaly. It is, I dare say, a small book of big moods.

Dark Souls is in many ways the prototypical video game. When you first boot it up, there is a grand cinematic explaining the scope and breadth of the narrative universe – there is a god of lightning, a lord of death, a fire witch, a dragon, an epic battle! It’s all very dramatic and cinematic, and then

New Game

The player character is in a dungeon for some reason, and an unknown NPC throws down a key so as to make a timely escape possible. What follows is a period of getting used to the controls, possibly dying once or twice (the big boulder is a contender for this outcome), and an indirect lesson that sometimes you are not ready to fight the big demons just yet. The broken sword you begin with might be thematically proper, but something more pointy is required for actual combat. Thus players are introduced to the concepts of switching to appropriate gear and running past enemies, as need be.

When looking at gameplay after this point, what is striking is that so much of it conforms to the image kids have of video games. The player character is a dude (or dudette) with a sword, who fights generic enemies (whose individuality can be safely ignored) and bosses (whose uniqueness makes their backstories as interesting as their fighting techniques). All this in a setting steeped in backstory, lore and hidden secrets, which can be uncovered by players enthusiastic and determined enough to give it a go.

In other words, it is very much like when we were young and played early NES games. The graphics were pixelated to perfection, and the physical cartridges the games came on barely fit enough data to convey any narrative outside the mechanics. Yet each and every pixelated enemy had a name, a backstory and a place in the universe. And, more importantly, an entry in the manual that came in the box – lovingly crafted to ensure the differentiation of one colored set of pixels from an identical albeit differently colored set of pixels. The Goombas and Bullet Bills had canonical names, and all the implied narrative infrastructure that comes from having a name.

In those archaic pre-internet days, this narrative infrastructure turned into local myths and legends. Part of it came from simply informing everyone involved about the facts – given time and enough double-checking of the manual, soon enough the Bowsers and the Lakitus were known entities. An even bigger part came from the telling and retelling of ideas of how the implied, never shown but carefully named, kingdoms or future settings had to be organized. The world of Super Mario had a princess and a whole series of monarchs being turned into various creatures, establishing that the mushroom kingdom was indeed a magical kingdom. The world of Mega Man implied a whole host of futuristic machines subverted to the twisted ways of Dr Wily, and so a setting could be imagined around that. And so on and so forth.

Given the lack of available textual information (the manual was only so large, and the cartridge could only contain so many bits), there was plenty of room for imagination and extrapolation. Indeed, even speculation. Many a friend group had informal theories of what may or may not have transpired – I dare not call them fan theories, lest the gamers grow restless – some of which are still remembered fondly to this day. These theories served as a springboard and expression for young imaginative minds, and as an informal social glue in a time when such things were rare indeed. If you ever get the chance, do probe someone about their childhood imaginings of these virtual worlds. There is more there than might meet the eye.

When I say that Dark Souls is a prototypical video game, I mean that it harkens back to this earlier era of mythological expansion and exegesis. An enemy is not just an enemy – they have names and backstories. The bosses are not just slightly tougher enemies – they have intricate relationships with each other and the world they find themselves in. The world is not just something put in place by virtue of the necessity of having to render something on the screen to make the gameplay look appealing – everything is significant, every detail conveys important information, every aspect contributes to the overall story. There is more backstory to be uncovered, and more importantly, more stories to be told. Dark Souls is very good at bringing out the forensic storytellers inside its players.

With the advent of the internet, the social space of this storytelling has shifted from the geographically available friend group to a more global setting. The Dark Souls portion of YouTube has viewerships in the millions, with cooperating and competing exegetes comparing notes. The drive to tell, retell and refine the stories found implied in the games – always implied, just at one remove – is still there, burning like a great bonfire. Or, more accurately, like many small bonfires scattered across the lands.

There are those who speak of Dark Souls only in terms of difficulty, as some great obstacle to be overcome by those worthy enough. While I do acknowledge that this, too, is part of the myth building that eventually leads to storytelling, and that there are parallels to the whole Nintendo Hard thing, I must say that such simplistic takes miss the point. If difficulty is your only point of reference for talking about the game, then I am sad to inform you that you have officially failed at Dark Souls.

Take heart, however, for there is always an opportunity to play again. The age of fire is still with us, for a brief time longer. A new game awaits, and new stories. Tell them well.

When anticipating the new Terminator movie, I had two sets of expectations as to how it would play out. Interestingly enough, these expectations map almost seamlessly onto two modern entries in the first-person shooter genre of computer games, DOOM (2016) and the Wolfenstein series. These represent two radically different takes on the same genre, thus serving well as templates for how a new movie in a roughly adjacent genre might play out.

The old DOOM, of 90s fame, was an unabashed feast of running, gunning and rock music. When you did not run, you gunned. When you did not gun, you sure did run. At higher skill levels, the player did both at once, in a non-stop action romp accompanied by the most rolling of rocks. To call it a cerebral experience would be an insult, given its heavy-duty focus on running, gunning and nothing else. See the thing, shoot the thing; nothing is too big to blow up, preferably by gun.

This trend continues in the 2016 incarnation, which manages something as seemingly contradictory as an intelligent take on the shooter genre. Rather than trying to smarten things up with an intricate storyline, sophisticated dialogue or morally ambiguous gameplay choices, the developers intentionally pushed all that aside in favor of even more running and gunning (and ripping and tearing). Cleverly, they chose to express this through the actions of the player character, who at times reacts to the increasingly over-the-top story beats in one of two ways: blowing it up or tearing it apart. The most iconic expression of this is a passage where the antagonist lays out how to carefully disassemble a complex device, so as to be able to put it back together later, and the player character responds by simply smashing it to pieces. DOOM is not a game about careful deliberation or consideration; let there be no uncertainty on this point.

Naturally, this is an expectation which fits neatly onto the prospect of a new Terminator movie. It would be all too possible to make a deliberate decision to go all in on the action aspects of the franchise. Big guns, big robots, even bigger explosions, even longer car chases. Drop all pretenses of plot and polish, in favor of a big badaboom spectacle rumbling and tumbling, going back to the series’ bad-to-the-bone origins, all the while knowing that this is exactly what’s what.

As a contrasting template, we have the Wolfenstein series. The originals played out much like the DOOM series, albeit with slightly less rock music and way more Nazis. See a bad guy, shoot a bad guy. See a suspiciously spaced section of wall, press suspiciously spaced section of wall; three times out of ten, it was a secret door leading to a hidden chamber. Whilst generally slower paced than the DOOM series, those having a rough recollection of computer games from the time would place them in the same overall category. Bang bang.

The modern installments of the franchise, however, are a surprisingly nuanced take on what it means to engage in a sustained campaign of resistance against an overwhelming force with every advantage on its side. The big plot points – with time travel and Nazi moon bases and mecha troopers and the rest of it – are as over the top as all get-out. The discussions amongst the groups of people the player character belongs to, and even more so between the player character and said groupings, tell a story of resigned hope, sustained subversion of the new order, and of a perpetual iron will to effect change despite everything. It also, between the lines, serves as a critique of the idea that one person, no matter how ridiculously overpowered in terms of computer game conventions, can truly change anything by rampaging through the societal institutions defining our time. The world is ever so slightly too large for a one man fix all solution, and in order to effect institutional change, numbers are needed.

This rejection of the great man theory of history, too, could serve as a template for expectations of a new Terminator movie. Whilst acknowledging its past as a big badaboom spectacle, it would be very possible for it to henceforth distance itself from the assumptions inherent in the genre and strike out in a new direction. Either in terms of simply adjusting the numbers on either side – no longer one robot vs the world, but a whole host of robots against a slightly smaller subset of the world – or by introducing subtle complexities into the concept of killer robots which turn everything on its head. The potential for making things cerebral is, as the case of Wolfenstein has shown, very possible to realize indeed.

The new Terminator movie tries to have it both ways, and partially succeeds. Those expecting a traditional escapade of explosions and excitement get their fill, with some room to spare. At one point, there is a fight in a rapidly descending airplane on fire, because of course there is. At another point, the movie goes out of its way to depict scenes from domestic life in a Hispanic neighborhood, with a quiet dignity and somber pace quite at odds with the aforementioned explosions. As it unfolded, I found myself thinking: this is a Terminator movie?

Make no mistake. It is a Terminator movie. Only, it’s thirty years later, and the movie makes great efforts to acknowledge this. The world has moved on. Skynet belongs to a bygone era, a timeline which has ceased to exist. No one remembers it other than a select few involved with stopping it; the new killer robots have no names, no known motivations, no known backstory. Their only defining feature is that they are legion, and that they are out to get us. Their sheer anonymity makes them that much scarier – Skynet might be the devil we know, but at least we know it. We do not know what fate awaits us in the coming apocalypse, and that makes the old apocalypse stories less interesting. The coming apocalypse will not be instigated by something with a name, and thus we need more nuanced stories to foretell its arrival.

In conveying this message, the new Terminator movie succeeds. As to what comes next, there is no fate but that which we imagine ourselves.

An ancient rhetorical truth is that it is easier to get an audience to agree with something if they have already agreed with other things. It does not have to be big things, or important things, or even significant things. The mere act of having once nodded in agreement is a gateway drug, as it were, to keep nodding. Part of it is momentum – if a, then b, then probably c too, given the trajectory. Another part is the sheer fact that the orator has been agreeable thus far, thus having had time to establish themselves as someone who knows what they are talking about. Even if the things agreed upon are the rhetorical equivalent of small talk, the dynamic is much more favorable than if the orator went all in for the main points right from the get go. The audience has become familiar with the voice talking, and rhetorically that goes a long way.

One needs to keep this point in mind whilst reading Randi’s Prize. Both in order to understand why the book is structured the way it is – it takes quite a long time to actually get around to talking about the titular prize – and in order to be aware that there comes a point where the book trades in its early merits for future favors. The painstaking historical account given in the early chapters does not, strictly speaking, inform the claims made later on. Yet, upon reading, an uncritical reader might find themselves nodding along out of sheer habit.

Before delving deeper, it might be prudent to specify just what Randi’s titular prize is. Its official title, the One Million Dollar Paranormal Challenge, gives us a hint both as to the amount of money involved, and to what manner of activities are involved in winning it. The challenge was, to phrase it in its simplest form, to prove under scientifically controlled experimental conditions, agreed upon in writing beforehand, that something paranormal is going on. The ‘something paranormal’ could, by virtue of the vastly varying nature of paranormal claims made historically and at present, be any of a long list of things, from dowsing to mediumship to remote viewing. The exact nature of the scientific tests of these proclaimed paranormal abilities would naturally vary depending on which ability were to be tested. The general gist of it was that if the science bore out, then the prize money would follow.

It has to be said that this is a rather rational bet on the part of James Randi. Things that are outside the scope of modern science are (by definition) not prone to be tested under scientifically controlled experimental conditions (unless they have already been thus tested, and become part of regular science). Claims of paranormal or supernatural activity are thus always-already outside the scope of scientific testing, which means that the likelihood of someone showing up with something that actually works is as close to zero as any human being would rationally factor in. There is a theoretical possibility that it might happen, but mathematically speaking, winning the lottery would be more likely.

Here, we bump into an important dividing line. On the one side, we find those who claim that supernatural things are not real, which means they cannot be demonstrated experimentally. On the other side, we find those who claim that supernatural things are real, and that the fact that they cannot be tested experimentally is a fault on the part of science. The author of this blog post leans toward the former (cf. the anomaly on astrology), whilst McLuhan (no relation) leans towards the latter.

The early chapters outline the history of paranormal activity in 19th- and early 20th-century British and American contexts. They provide an interesting and informative introduction to the cultural practices of séances, table-turning and the consultation of mediums. They also make a point of correcting various accounts made by sceptics (of which Randi is one) about these very same cultural practices. Again and again, it is shown that sceptics got the historical facts wrong, and could have made their cases better had they but bothered to do their homework, rather than just dismiss the whole thing as mere nonsense. The rhetorical trajectory of proving sceptics wrong again and again is firmly established during these chapters, with many examples and at great length.

Here, it should be noted that there is a historical record of paranormal activity throughout time, and that it is important to keep it straight. The book does a great job of taking sceptics to task and demand that they apply the same attention to detail when discussing paranormal activity as when they do actual science. Being wrong in the name of being right is not a good look, and should not be among the virtues cultivated by those proclaiming to love science.

However – and here our introductory remarks on the rhetorical efficacy of small agreements come into play – the book then leverages these inconsistencies on the part of sceptics into a positive argument for the existence of telepathy and remote viewing. If we agree that the sceptics were wrong on these historically documented facts, so the thrust of the argument goes, then we will probably also agree that they are wrong when they say that telepathy is impossible. After all, they do have a track record of being wrong – there were in fact whole chapters devoted to how wrong they were.

The philosophically appropriate objection to this line of reasoning is that someone being wrong about something does not mean the opposite is correct. Indeed, someone being wrong about one thing does not even mean they are automatically wrong about another thing. It does not follow from sceptics being wrong about séance culture that they are wrong about telepathy – moreover, even if they are, proving this would not constitute proof of telepathy as an actually existing thing. The momentum of rhetorical prowess holds no sway over these things.

The irony is that if this book had limited itself to correcting the historical record, it would be a nice addition to the collections of esoteric books found in unexpected places (you know the ones). However, since it posits itself as a polemic against sceptics in general (and the titular James Randi in particular), it most likely will not find its way to those whose historical understanding needs amending, nor will it (since the titular prize was terminated in 2015) be anything but a historical, rhetorical and discursive anomaly.

Do not be fooled by its glossy exterior. While Prefiguring Cyberculture might look like a mere coffee table book, there is more to it than that – though it does, admittedly, fill the function of coffee table book admirably. It is big, it prominently features the word “cyber” on the cover, and it even has pictures. In short, those in the market for such display items could do worse than to seek out a copy to strategically place in a prominent spot.

Those daring to pick up the ever so slightly oversized tome and open its pages may or may not be delighted to find that it was published in the early 00s. The dividing line between dismay and delight lies with one’s familiarity with literature pertaining to things cyber. Newcomers might harbor the intuition that this is an outdated scripture whose insights have been superseded by actually existing history, useful only as a way for historians to keep track of what happened when. Aficionados of the genre, however, know that the future is not what it used to be, and that 90s and (very) early 00s cyberoptimism was a radically different beast than what came before or after. In short, knowing its publication date informs a prospective reader of what manner of reading is to come.

This temporal aspect runs through the anthology at every turn. Indeed, its preface even acknowledges that readers in some distant and yet unknown cyberfuture might find its speculations quaint, fanciful or accurate in equal measure. In the same vein, the book’s project to investigate the roots of cyberculture – to prefigure it – means that the “now” is an ever negotiated position. The history of cyber is not only a future endeavor, but also something that harkens back to decades and centuries well before there were learned books on the subject. Historically speaking, merely looking at things prominently featuring the word “cyber” does not tell the whole story.

Those familiar with the genre will not be surprised that one of the first essays is on the topic of Cartesian dualism as it relates to fictional portrayals of artificial intelligence. In more ways than one, this is a prototypical choice of topic for an essay of this era – it takes something really, really old and applies it to something really, really new. The ensuing discussion – regarding how Cartesian dualism has been criticized just about every way it could possibly be criticized (and then some), yet somehow finds purchase in literary depictions of computer intelligences alive without corporeal form – is par for the course. Indeed, I suspect not a few readers will nod and think “yes, this is the content that I crave”. A subset of these readers might then happen upon a second thought: why don’t people write this way any more?

It is a question that radiates from every page, all the while the individual essays are busy discussing this or that historical aspect in detail. It is tempting to propose that one reason might be the arrival of the cyberfuture itself, which has served to make casual longform writing obsolete; we have online video essays, podcasts and extensive subtweeting to replace the old-style communicative form of structured written words. The nature of technological change means new technologies are used (else it would not be much of a change). Given the new capacities of our cyberreality, it would be somewhat archaic to keep doing it old style. To phrase it in contemporary parlance: blogs do not generate engagement or drive traffic.

Framed this way, the book finds itself in the ironic position of painstakingly outlining how the written word has predicted futures (plural) up until the point where the written word is firmly something of the past. Once we got here, the tools of our ancestors were replaced with something more modern. The future arrived; time to let go of the past.

On that note, another chapter features a lengthy account of medieval sponges capable of storing the spoken word (replayable upon the proper squeeze), which then transitions into a pondering of just how we think about the various memory devices we use every day. Memory is not just a number ascribed to hard drives, but also the very thing we use to navigate our way through the whole ordeal of being alive. If we do not remember something, it in some sense ceases to exist. If we outsource our memory processes to external machines, then what becomes of the subject, left to its own devices?

If history and memory are cyber, this raises the question of just what is not cyber. Careful analytical readers will possibly object that this all seems a case of overreach, of overapplication of an underdefined concept. This is, potentially, true. But it also pokes at a contemporary trend of things going post-digital. The 90s ended, cyberculture became the default mode of everyday life, and we are now able to grapple with such complex phenomena as Tinder dating rituals without having to discuss at length the various interface affordances of the platform. In very short order, we have gotten past the changes and barged into a future without hesitation or the nostalgic foresight of erecting milestones. One of the few visible legacies remaining is the seemingly mandatory introductory sentence “the improved capacity of communication technologies over the last decades has changed our ways of communicating”, with its countless variations on the theme. Indeed, bewildered cyberyouth often find themselves wondering how people did these things (for any given definition of ‘these things’) back in the old days.

The book, ultimately, tries to answer part of the question of where the big ideas of cyber came from. Inadvertently, it also raises the question of where the big questions of cyber went.