Monday, 26 December 2011

Last year around this time (albeit slightly before the arrival of the big, fat, red lobster man, unlike this year), I wrote about some of my own personal Christmas traditions. This year I thought I would do something slightly different; so please indulge me in some ramblings and musings on the subject of Santa Claus.

To open up this associative chain of thought, allow me to tell you the first half of a joke once told me by one of my colleagues of the linguistic persuasion: What do you call Santa's little helpers? (Hang in there, I promise to provide the answer before we part.)

So, Santa Claus – or, as the British call him, Father Christmas – which opens up nicely for a segue into associative digression #1.

This is in fact my first Christmas as a father, and although the little fellow is a wee bit too small to fully understand or appreciate the proceedings of the holiday, his presence has certainly altered my own perceptions and understanding. And I do not merely mean things like the annual viewing of The Nightmare Before Christmas taking much longer than usual, with an array of pauses for diverse things. No; while that was certainly a noticeable effect, it is merely a symptom of something much larger. From here on and at the very least for a while, Christmas will once more become a children's holiday, viewed through the eyes of a child, and filled with all imaginable magic and wonder. So, maybe the British are spot on, in acknowledging Santa's paternal role in the children's celebrations.

Of course, Santa has other names too; which opens up nicely for a segue into associative digression #2.

The name Santa Claus itself is obviously a version of the longer Saint Nicholas. With that in mind, it would be easy to assume that the "nickname" (pun half intended) Old Nick was one of Santa's, but alas, one would be ever so wrong in assuming that. Old Nick is, in fact, an old elliptical way of referring to the Devil, who is also a child of many names. One of these names, much publicised by Milton, is Satan. In Christmas times like these, however, it is hard not to recognise that the latter is merely a misspelling of Santa (or vice versa), which leaves a lot of unanswered questions to be pondered. And mayhap the reference of Old Nick is not quite so clear, nor the implied referent so erroneous, as originally suggested.

But let us wrap these ramblings up and return to my initial query (for all good things come in threes): what do you call Santa's little helpers?

Monday, 12 December 2011

In the past week, I attended a doctoral colleague's work-in-progress seminar, which touched on one of my interests: adaptation. Now, long-time readers know that I have written on adaptation in here before (and also on the related question of what constitutes a medium), but the seminar made me think about a few things yet again and it seemed appropriate to revisit the subject.

The seminar once more revealed the seemingly great bogeyman of adaptation studies and theories: the issue of fidelity. For some reason, this issue is deemed very problematic, and has people twisting themselves every which way to avoid it. I cannot help but ask: why? In truth, to my mind, the issue is much less problematic than most of these critics and theoreticians seem to think, and I will explain why.

Regardless of what medium one adapts from and to, there is a need to distinguish the process of adaptation from the resulting adaptation. The former is arguably an act of translation, a word which etymologically comes from the Latin translatus, i.e. literally "carried across" (see Merriam-Webster). However, as has been noted many times, the act of translation (whether between languages or media) is never a simple process of transfer, but one that always, without exception, involves change on some level. The etymology of the word "adaptation" ("from Latin adaptare, from ad- + aptare to fit, from aptus apt, fit") is arguably a good indication of this aspect, as the idea that necessary changes occur in the process can be seen as the result of the act of fitting something into a different language or medium, an act of metaphorical tailoring attached to the process of the transfer of content. The resulting adaptation, however, is never just a result of that process; it is also a narrative object in its own right.

So what do I mean by this distinction and why do I stress it? Well, the process of translating anything — whether it is translating a text in Swedish to English, or a narrative in literature to film — implies a given source material that needs to be carried across a void of difference, from one language or medium to another; ultimately being fitted into its new location. As such, fidelity can arguably be seen as the mark of a successful process: Was the material carried across adequately; and was it made to fit its new language or medium?

However, fidelity in itself will never tell us if the resulting narrative object is any good. Arguably, one could conceivably transfer and adapt properly (to use Brian McFarlane's distinctions) more or less everything from the source text into the new object (if you will), but without making that object a good one. Sticking to the classic example of adaptation from literature to film, this would basically mean turning a good book into a faithfully adapted but ultimately poor film.

Please note that I am not saying that faithful adaptation is impossible. I am merely pointing out the rather obvious, yet sometimes quite forgotten truth, that fidelity is not a mark of narrative quality. In fact, I think that it is this very forgetfulness that haunts adaptation studies and causes the rather unnatural twists and turns in the discourse in trying to shun notions of fidelity and a source text. It is not that the narrative object to come out of the process is not tied into a web of intertext all its own, but clearly the fact that we talk about it as an adaptation of another text means that one of its intertexts is singled out and heavily emphasised. To pretend otherwise seems to be missing something very fundamental about the process.

Similarly, to merely point out failure to be faithful is equally missing the point. The core question here is why rather than what. This is where formalist theories like McFarlane's are so useful. They allow us to delineate what can be transferred (rather simplistically) and what requires adaptation proper (i.e. medium-specific changes). Needless to say really, the process at the very least mostly involves changes that strictly speaking are not necessary (from the point of view of possibilities in the target medium) and those instances (like those of adaptation proper, I would argue) are what is truly interesting. They are what give us insights into dimensions beyond the media-specific, let us call them societal dimensions to the processes of translation and adaptation. These include ideological, economic, cultural, and even individual-related factors; because an adaptation is never only a question of transfer, or even of making a good narrative object for that matter. (The latter obviously not being unique to the process of adaptation, but a condition it shares with all artistic endeavours to various degrees.)

Art is always produced in an historical and cultural context; it involves individuals in existing ideological and economic systems. Granted, some media are less dependent upon the latter (e.g. literature in its rawest form) than others (e.g. film-making). It would be foolish to think these factors would not also affect the processes of translation and adaptation, just as it would be foolish to assume that the effect itself would be identical (in a heterogeneous fashion, I will grant you) to the production of art that does not involve these processes. From an academic point of view, however, these questions would seem not only highly relevant and interesting, but also something that ought to be situated at the very core of adaptation studies.

In short, I think adaptation studies, theory and theorists need to get over both their reluctance towards fidelity and source material, and their willingness to let these concepts blur all boundaries.

Monday, 28 November 2011

Apropos of last post's focus on writing, I thought it appropriate to follow up with a brief recommendation of a great writer's resource: Duotrope.

This resource (originally recommended to me by Swedish author Karin Tidbeck) offers not only an easy way of keeping track of your submissions, but is also an excellent tool in helping you find the appropriate market for a piece of fiction (in terms of content, length, payment, etc). Obviously, you need to look beyond Duotrope too, but I for one would rather look into the submission guidelines of magazines of definite interest rather than searching through the entire forest of (both real and faux) options manually each time for any given piece.

Monday, 14 November 2011

This month is National Novel Writing Month (NaNoWriMo), which means that a lot of people are spending their time committing words to paper, or screen as it were. While I am not one of them, I nevertheless thought it appropriate to discuss writing in this week's post.

As mentioned already in late June, I taught a creative writing class this summer, with a focus on more sustained types of writing (or lengthier writing, if you will). In the wake of this, I have pondered some issues concerning writing more than I normally do. One of the things that keeps popping up in my head is the question of the relation between the writer and language. This issue is deeply linked to questions of writing as art and/or craft.

Is writing something that can be taught? Can it even be learned? Well, obviously no human being was ever born with the ability to write, so the question might seem null and void even before we phrase it; yet we do struggle with two separate views on the writer: that of the artist born with the gift of genius and creativity, and that of the craftsman who has learned the mechanics of the writing process. Personally, I do not think this is an either-or binary. There are people who seem to have been born with a certain talent, to be sure, but they tend to benefit from learning the craft of their talent. Similarly, people without any talent for it never seem to make as fruitful use of the craft they do learn as those in the first category do. Needless to say, really, neither category is black and white either, so it is not a case of having talent or not, or having learned the craft or not, but rather one of degrees in both areas, and how they intersect in any given writer.

So, where am I going with this? Well, the more I have thought about it, the more I have come to see the process of writing through a metaphor of a mosaic or even Lego. A writer has to start with an idea, but that idea needs to be communicated to the world. This means we need to apply language, but not just any language. Fiction writing certainly requires use of regular language, but also a higher level of grammar; narrative grammar, if you will. The longer your writing becomes, the more important it is to understand the inherent structures of narrative and how to use them.

Does this mean that there is really only craft and no creativity? Of course not, and this is where the metaphor of the mosaic comes in handy. The craft is all about learning to recognise narrative structures and understand grammar in practice. The pieces of mosaic or Lego are language, and the narrative structures I talk about are what binds these elements together. The creative part lies in deciding on the motif you want the mosaic to show, and that could be just about anything a person could imagine (not even the sky is the limit here). However, even if you have the greatest motif imaginable in mind, chances are that your mosaic will fall apart, be jagged and jarring, or not even be remotely akin to what was in your mind. This is where technique, or craft, comes in. It is a tool for the writer to analyse what needs to be done in any given writing situation.

I am not suggesting that all writing problems have one singular solution, far from it, but you would not start building a house without a blueprint and expect a solid structure at the end of the process. Similarly, you would not expect a great house if you had no understanding of the materials used in constructing it. And yet, a lot of people seem to think that writers can generate organic wholes without any other effort than sitting down to type. Sure, there are people who work more freely than others who plan and plot meticulously, but I do not think it a great exaggeration that those in the former category often have more rewriting to do at the other end. If not more, then at the very least of a very different (and arguably more substantial) kind.

It is conceivable that some people internalise such structuring, possibly by having a talent for it, to a point where this process becomes less visible and therefore seemingly works itself out. Just as some people start with a better ear for language.

When all is said and done, there are many ways of reaching the same result here, as long as one understands what the process is supposed to be about; that it is about understanding the components and what will hold them together.

A case in point would be the creative use of language. Grammatical correctness quite naturally does not have the same function in fiction writing as in, say, academic writing. Fiction writing does not frown on sentence fragments per se; it does not disavow any writer who feels compelled to break every capitalisation rule known to man. That having been said, writers must really know their language, because the style does affect the reading of a text.

Sentence fragments are a good example. They can be used with great efficiency, because punctuation does not always mimic thought or speech effectively when grammatically correct. But if sentence fragments cause grammatical reference to be lost, a text suddenly starts breaking apart. In essence, this means that while writers do not need to adhere to grammatical correctness, they need to be aware of it. They need to understand the difference between creating a staccato effect or contemplative pauses, and losing coherence in the text. As I said to some of my students on occasion, while one might want to have a reader go back and reread a sentence for purely aesthetic reasons,* one never ever wants any reader to go back because the language is too unclear for something basic, like who did what to whom, to be understood. Simply put, the latter is just sloppy work.

Similarly, while there is plenty of grammatical "incorrectness" that does not cause such breakdowns, as a rule of thumb, it is still good to know and understand what one is working against (i.e. the existing, accepted grammar) and also to contemplate what not adhering to any given grammatical rule means. To illustrate, one can generate a strong character voice, a certain idiolect, by creating a slightly skewed grammar for that character, but one also needs to understand that that idiolect will have bearing on how readers interpret the character. And that is the important key here: how we express ourselves in writing (and not just fiction writing) does have effects. Breaking the right grammatical rule can have a great effect in any text, but unless one operates by blind luck, it usually helps to at least have a clue as to what one is doing.

At the end of the day, it is probably true that not everyone can be a great writer. But I would argue that it is equally true that everyone (even great writers) can become better.

* I would also argue, in particular in narrative terms, that sentences that are too aesthetically pleasing (i.e. that cause your reader to go back to reread them) can be very counter-productive. After all, narrative is all about generating sequences of events and actions; continuous breaks in the narrative flow are therefore not necessarily the best way of achieving a coherent whole.

Monday, 31 October 2011

I probably came in contact with some of his work long before I was aware of who Jim Starlin was (e.g. Batman: A Death in the Family), but when I became aware I became an instant fan.

It must have been in 1990 or 1991. I had just a year or so earlier switched to buying and reading US comics in the original rather than in Swedish translation, and I was still following the superhero scene (which I would more or less abandon for a very long time within a few years). This naturally meant that DC and Marvel were part of my monthly purchases (the latter for the most part, what with my being something of a Marvel man at the core) and Jim Starlin made his comeback at Marvel with his Infinity trilogy, involving characters he had created or made his mark upon in the 70s, like Adam Warlock, Pip the Troll, Gamora, the most dangerous woman in the universe, and, of course, Thanos of Titan (quite conceivably Starlin's crowning achievement). My love and appreciation for these characters and the cosmic story arcs spun around them had me not only follow their adventures as they were released at the time, but also had me tracking down those glorious stories from the 70s, and much more besides. All in all making me a Starlin fan for life.

Granted, this is not Starlin's sole contribution to comics in general, or even the cosmic superhero genre in particular, but on this particular occasion it seemed appropriate to showcase his importance to the cosmic side of the Marvel universe. He built on the foundation created by the likes of Stan Lee, Jack Kirby and Steve Ditko, to be sure, but he made an unquestionable mark by building a strong mythology upon that foundation.

Was it unique? Well, it would be silly not to acknowledge that Starlin borrowed heavily from various sources, including Kirby's Fourth World and Michael Moorcock's Elric of Melniboné. Nevertheless, Thanos is more than a mere Darkseid clone. His motivation throughout these stories, his love and adoration for Mistress Death, makes him a character in his own right. Similarly, Starlin's transformation of Adam Warlock into an idealistic anarchist bound to his vampire-like Soul Gem appears to have roots in Elric of Melniboné and his soul-sucking black rune blade Stormbringer, but Warlock too transcends the similarities, at least to the degree where it would be possible to think of him as another (cosmic) avatar of Moorcock's fictional archetype, the Eternal Champion.

In short, Starlin's mythology is believable, at least in part, because it is not new; because it is made of the recyclable stuff of myth. Yet also because it was done in a new way and did offer us more than that which Starlin drew upon.

In recent years, two writers have emerged over at Marvel who show an understanding of the cosmic superhero genre that, perhaps, equals Starlin's. They are Dan Abnett and Andy Lanning, and with successful runs on Nova and Guardians Of The Galaxy, culminating in the miniseries The Thanos Imperative, they have not only brought back characters associated with Starlin, but have used them in a manner that positions them as natural heirs to Starlin's cosmic narrative tradition.

I am sure I will be discussing both other Starlin material (e.g. his creator-owned series Dreadstar) and the work of Abnett and Lanning in the future, but for now, this will have to do.

Monday, 17 October 2011

"And so we reach a milestone: this is the one-hundredth post on Thus Spake the Mighty Wha-keem. That is, with this post I have written one-hundred posts since I started posting back in May of 2009, and I am obviously still at it."

If I had written those two lines today, all would have been well. Unfortunately they were written at the end of March this year in the erroneous faux "Post #100: Or, When Is a Q a 9?"; the content of which I am still rather pleased with, despite its flawed basic numerical premise. While I did not notice the error until May, when I dutifully reported it in my summary of my second year as a blogger, I will say in my defence that I had already managed to, quite unintentionally, and most certainly ironically, include the line "I am neither turning into a mathematician nor a numerologist," in the post itself. Obviously, I knew what I was talking about.

Nevertheless, here we are. Again. For the first time.

It would seem appropriate to talk about numerical things yet again, but racking my brain seems to yield no fruitful results. Titles fly past my mind's eye: Gabriel García Márquez' One Hundred Years of Solitude, which I am sad to say I have yet to read (although, am simultaneously happy that I have yet to read; go figure!); Brian Azzarello's 100 Bullets, a Vertigo series I have not read either (although I am thinking of picking up now as the whole series is starting to be collected in nice hardcover editions); Numb3rs, a TV series of which I have seen and enjoyed at least the first two seasons (although not to the degree that I really feel I want to write about it at any greater length here). So... what then?

Well, while I have not yet had a chance to read it, I did recently pick up a book that not only seems very interesting, but also fits the criteria to be mentioned here: Alex Bellos' Alex's Adventures in Numberland. This is a book about mathematics, (at the very least seemingly) written to gain the discipline more fans; or perhaps it is more of a love letter for us non-mathematicians to better understand the beauty of numbers. I guess I will know for sure when I get the time to read the book.

And while we are at it... I would also like to recommend a very good film that also seems appropriate (and which I incidentally have not seen in a day and an age myself): Darren Aronofsky's early and weird b&w gem Pi. This film is all about mathematics and numerology, and the greater mysteries of the universe hidden in the endless string of post-decimal-point numbers in the mathematical constant that the Greek letter π (i.e. pi) symbolises. Well worth watching, albeit certainly not for everyone's palate.

Thursday, 6 October 2011

So, this year's theme at the Göteborg Book Fair was German language literature (i.e. a focus on German, Austrian and Swiss literature in practice) and I managed to catch at least two seminars related to it.

The first one was the seminar "Bra och dåliga böcker" (Eng. trans: Good and Bad Books) in which German critic Kristina Maidt-Zinke discussed the role of literary criticism with Swedish critics Jens Christian Brundt and Ingrid Elam (the latter moderating the discussion), comparing cultural differences between Germany and Sweden. The seminar was interesting and pointed to the fact that German criticism is given greater space in the papers than its Swedish counterpart, but also indicated that this fact in and of itself need not indicate that this criticism is more widely read or important outside of the same circles as its Swedish equivalent. However, the space does allow for more in-depth reviews and a different approach to the subject of criticism in any given instance.

The second German-related seminar I attended was on Friday: "Gillar alla barn Pippi" (Eng. trans. Do All Children Like Pippi?), in which moderator Janina Orlov spoke with Rachel van Kooij (Holland, Austria), Cecilia Östlund (Sweden), Gabrielle Alioth (Switzerland), Nadia Budde (Germany), and Cornelia Funke (Germany) about Astrid Lindgren's Pippi Longstocking as a cultural icon and the importance of myths in children's literature. Funke interestingly noted the inherent problems of using myth in German literature, as a cultural fallout of the Nazis' appropriation of that kind of symbolically charged material. After hearing her talk, I was really sad that I had skipped her own seminar the previous day; something I had done simply because she had been paired up with a historically proven bad moderator/interviewer (whom I have quite frankly no desire ever to see in action again). However, I have heard that the seminar went really well, mostly because Funke refused to submit to this bad interviewer's premises, told her off and went ahead to present a brilliant seminar under her own control (yes, I really, really regret missing that, I confess).

The discussion at the seminar I did attend also covered questions about fashion and trends in children's literature, and who decides what is fashionable or trendy: readers, bookshops, publishers, or writers? While no real answer was provided, I think it's safe to say that all of these (to different degrees) act upon the stage of the literary market to set up the conditions for that. And that, I would argue, holds true for all publishing (and quite likely other cultural production like film, comics and music as well).

Outside of the German language theme, I visited a few more seminars this year (albeit fewer than usual, for various reasons). Friday was clearly my busiest day and included two more dips into the field of children's literature.

First I attended the mini-seminar "Att skriva och illustrera för barn" (Eng. trans. Writing and Illustrating for Children), in which the moderator, Swedish publisher Birgitta Westin, talked with children's book creators Emma Adbåge and Pija Lindenbaum. Both author-illustrators showed samples of both old and new work. I have been a fan of Lindenbaum's work for some time, but was not familiar with Adbåge's. It too is impressive, and I will probably check it out down the line, but what really struck me was how more or less directly autobiographical her work seemed, and the lack of distance she had to her working process. This is not necessarily a bad thing. Being a good literary scholar does not necessarily make a good writer, or vice versa, but it does become somewhat annoying when there is an attempt at a more theoretical and analytical discussion that sadly seems off-key as it were. Lindenbaum by contrast presented herself as sharp and more theoretically aware. For instance, when posed with the core question of the seminar, she defined the difference in writing for children very acutely as "writing without grown-ups' frame of reference," and also discussed children's as-of-yet unfixed view of the boundaries between fantasy and reality.

Secondly I attended the seminar "Den politiskt (in)korrekta barnboken" (Eng. trans. The Politically (In)correct Children's Book), in which Janina Orlov led a discussion with illustrator Anna Höglund (Sweden), writer Ulf Stark (Sweden), critic Ulla Rhedin (Sweden), children's book creator Timo Parvela (Finland), and poet and controversial maker of children's books Oskar K (Denmark). While the topic sounded more than promising on paper, this was one of the lowest points of this year's seminars. The time was very unevenly distributed between the participants, which is not necessarily in and of itself a problem, but in this case was disastrous.

The two most dominating voices in the room were Oskar K and Ulla Rhedin. The former spoke a very thick Danish that I was not alone in having a hard time following (e.g. Anna Höglund, when responding to something, politely pointed out that she was not quite certain what he had said, at all), and after a while it became impossible not to tune out during most of his monologues. Rhedin on the other hand, a scholar with a doctorate in children's literature, kept throwing around very abstract academic theory and terminology of the kind that quite frankly makes this particular literary scholar ashamed. As a consequence, most of her contribution had no real roots anywhere in the discussion or the subject of the discussion. By comparison, Höglund's contributions (sadly far too few and too short) were insightful, as were those of Timo Parvela (who impressed me the most of the people on the panel). Unfortunately, Parvela spoke in Finnish, with Orlov acting as interpreter, which meant that he did not get quite as much time, and that what he did get was all the more limited by having to be said twice.

The worst failing of the seminar, however, was how swiftly the stated topic was abandoned. Instead of looking at possible tendencies of censorship by publishers and the market for fear of controversial decisions (and these do exist, as a seminar from a few years ago had markedly informed me), the debate quickly pointed out that "political correctness" isn't a good or selling term, but rather a derogatory one, and therefore it would seem strange that anyone would want to create such children's literature. The problem with this assessment (while true to a point), to my mind at least, is that it fails to account for the more insidious nature of political correctness as it has come to develop. While it is true that no one, in any field, would really want to market themselves or their product as politically correct, this obviously does not mean that the politically incorrect is applauded or embraced. Rather we are in actuality faced with edited material. Jan Lööf has for example spoken about how publishers have asked him to redraw parts of illustrations for children's books, even ones previously published, because of content being deemed as possibly offensive. Here was an important subject to cover, one which was basically advertised in the seminar programme, and one which was very quickly swept under the rug by the panel. For shame, say I.

A more rewarding seminar was "Kolonialismens ansikte" (Eng. trans. The Face of Colonialism) in which Swedish writer Ola Larsmo spoke with Nobel Prize Laureate Mario Vargas Llosa about his latest novel The Dream of the Celt. The book is a fictional account of the life of Roger Casement, an Irishman who spent his life in service of the British Empire until his experiences in the Belgian Congo and South America led him to adopt a more radical idea about his native country: Ireland. Casement, who met Joseph Conrad in the Congo and was a great diplomat, was discredited towards the end of his life. Secret diaries depicting brutal homosexual orgies were confiscated and since homosexuality was a crime in Britain at the time, he was consequently prosecuted and found guilty. However, there is a controversy here, as historians disagree as to whether the diaries were genuinely written by Casement or fabricated to frame him. In the interview, Vargas Llosa offered a third interpretation: that Casement may well have written the diaries, without having committed the acts. As a note (and without having read the full accounts of the diaries), one might of course question whether the controversy should matter at all, and if the more telling point is that it in and of itself shows a rather nasty cultural (and legal) view of homosexuality.

My Friday ended, seminar-wise, with a mini-seminar about Cirkeln (Eng. trans. The Circle) by Mats Strandberg and Sara Bergmark Elfgren. Swedish writer Nene Ormes has spoken very favourably of this book, so I could not resist attending a seminar where both its authors talked about the book, and I certainly did not regret that decision. After a very interesting discussion on everything from how to write as a team to the underlying ideas behind their story about witches in a small, fictional Swedish community, I simply could not resist buying a copy of the book and getting it signed. It now resides in the ever-expanding to-be-read section, but is definitely something I look forward to reading.

Saturday started with another mini-seminar, which somehow seemed to be a bit beside the point. In "Att vara politiskt eller historiskt korrekt" (Eng. trans. To Be Politically or Historically Correct) historian and novelist Dick Harrison talked to novelist Maria Gustavsdotter about anachronisms in fiction, and more specifically about how they themselves avoided them and to what results. Granted, it is interesting to look at anachronisms, but despite its title, the seminar never really delved into issues relating to political correctness so much as, perhaps, sloppy historical research (or maybe even a certain attitude of not giving a damn with some writers). In short, the "issue" was really settled from the start and therefore the seminar was slightly inconsistent with the stated topic. And somewhat boring as a result. A case in point would be when Harrison spoke of the old film The Lion in Winter, declaring that what he remembered of it was that each scene was littered with anachronisms and that they had lovely dresses, only to later question why anyone would want to use a historical setting if they do not adhere to proper historical detail. The answer seemed to me at least to be that maybe, just maybe, the writer or film maker wants to use the lovely dresses. And maybe, just maybe, that element does have an intrinsic value, in terms purely of storytelling.

A much more interesting seminar was "Drömmar och verklighet" (Eng. trans. Dreams and Reality), in which moderator par excellence Peter Whitebrook interviewed American writer Lionel Shriver. The discussion focused on Shriver's latest novel So Much for That and the US healthcare system that it criticises, but also brought up her award-winning book We Need To Talk About Kevin. Shriver presented herself as a keen intellect with a somewhat harsh and cynical perspective on life. On the whole, I enjoyed the seminar a lot and am very interested in picking up either of the two mentioned books.

The final mini-seminar I attended on Saturday (and, in fact, at the fair as a whole) was "Årets deckare" (Eng. trans. This Year's Crime Fiction), in which the Swedish Academy of Crime Fiction's Johan Wopenka introduced Lillian Fredriksson and Karl G. Fredriksson, who presented the translated and the original Swedish crime fiction of the past year respectively. It was a quick 20 minutes, as several book titles flew by with very brief descriptions, but it was illuminating in terms of showing certain trends, and both Fredrikssons were a good deal of fun in their respective performances.

Sunday, as has been hinted, was left without any seminars attended. I had intended to catch a few, but the queue for the first one made me lose interest, and I spent the day on the floor instead.

All in all, time spent on the floor on all days yielded good results as well: including, among other things, a signed copy of Erik Magntorn and Lisa Sjöblom's beautiful little children's book, Hitta barnen! (Eng. trans. Find the Children!), which kicks Waldo's butt quite severely in artistic terms, some nicely signed volumes of the collected edition of Peter Madsen's Valhalla in Swedish, with great original artwork now adorning the first page in each, and a piece of original comic book art by Ola Skogäng, whom I also had sign (with some added drawings) my copies of the first three volumes of his brilliant comic Theos ockulta kuriositeter (Eng. trans. Theos Occult Curios) – Mumiens blod (Eng. trans. Blood of the Mummy), De förlorade sidornas bok (Eng. trans. The Book of Lost Pages), and I dödsskuggans dal (Eng. trans. In the Valley of the Shadow of Death).

And on that particular note, I think I will leave you with a view of page 56 of De förlorade sidornas bok.

Monday, 19 September 2011

Okay, so the past two weeks have been kind of crazy (on more levels than I care to remember), and two different planned posts have had to be pushed back simply because I have not had the time to do some much needed prep on either of them.

However, before I start sounding like the kid who tells the teacher that the dog ate his homework, let me offer you this as an in-between-posts kind of post. A good chunk of yesterday was spent browsing through the seminar schedule of the upcoming Göteborg Book Fair in order to decide, at least tentatively, what I should attend this year (the results should obviously be in my next blog post). Thus it does not seem entirely off to use this space on this occasion to promote some books; luckily, I have just read two fine specimens.

First off, I would like to recommend David Morrell's 1972 novel First Blood, upon which the film of the same name was based. The novel is a tight thriller with political undercurrents, and I recommend it warmly. For a more in-depth review, see the one I put up on Goodreads.

The second novel is no less political (perhaps even more so), but also no less emotional at its centre. The Reluctant Fundamentalist by Mohsin Hamid is a great piece of writing, and while it does not claim to be a thriller like the former novel, there is a compelling drive in the story, and a sort of mystery at the heart of it. For a more in-depth review, I once again refer you to a full review on Goodreads.

Hopefully the full reviews will whet your appetites for these books. They are well worth your time.

Monday, 5 September 2011

The post ties in with DC's latest stunt, a total company-wide reboot of their entire line and fictional universe (let's not even get started on that one), and more specifically focuses on their redesign of Superman. The man of the hour here is Jim Lee, who is responsible for rethinking and revamping an old faithful design (which has admittedly seen some variations over the years, while nevertheless keeping a basic design intact). Apparently, the first view of this new design can be seen in a single panel in the debut release of the rebooted Justice League #1, and the most radical changes appear to be the addition of a collar and the loss of the otherwise ever-present red-underwear-on-the-outside.

Now, as Hasan points out, the red-underwear-on-the-outside has always been a point open to derision, but I could not agree more with his assessment that "the red trunks (along with the yellow belt) [...] helped give a much-needed sense of visual balance," which the Lee version clearly lacks.

However, the loss of the red trunks is not limited to DC's reboot of the character. This week has also provided the first full frontal look at Henry Cavill in his Superman regalia. Cavill is the actor portraying the character in the upcoming Man of Steel, directed by Zack Snyder and produced by Christopher Nolan, and currently being filmed. (The photo is from on-set, so the colour scheme is not necessarily a definite match to what the film will show, I hasten to add.)

While the fact that this version (at least) has no collar produces a less fascistic and more traditional-looking hero, the lack of even a (wrongly) coloured belt seems to underline Hasan's point about visual balance. However, there is more to it than that. Even discounting Cavill's somewhat awkward pose in this picture (which honestly looks quite a lot like a man in need of a bathroom break), if we think of the traditional skintight superhero costume as a comic book shorthand for the perfected human form, then the need for modesty — and thus the whole "underwear on the outside" thing — starts making a lot more sense. Just look at Cavill's...*ahem*...area to understand what I'm talking about.

Monday, 22 August 2011

In the wake of the Utøya massacre and its insidious perpetrator, the ever ongoing debate on multiculturalism is a hot topic once again. And as always, when certain political factions or elements start debating this concept and its inherent evil, I do not know whether to laugh or cry. Because it is a simple fact that culture is never clean nor monolithic. Not even when it tries to be.

On one level, defining cultures requires a consensus and set parameters to define the specificity of one compared to another. However, most of the political factions interested in this are actually not so much interested in consensus as in being able to provide the vision that governs the definition. The reason for this is, of course, always to separate us from them and clearly establish the difference between ourselves and our Other(s). But anyone who tries to define a cultural enclave in this manner will always (without exception) stumble at the finish line. No matter how narrow the parameters are made, it is impossible to exclude all those one wanted to exclude, and the narrower the parameters are set, the greater the equally inevitable risk of excluding people one wanted to include. In short, whatever makes up any definition of any culture can never be absolute, or entirely fixed for that matter.

This does not necessarily mean that we should abandon cultural definitions altogether, but it should make us aware of the imprecision in their natures.

So, what does this mean? Simply put, culture is something that arises in social contexts, in inter-individual meetings, when the ego meets an Other. Furthermore, this central metaphor also expands to an intercultural usage (if you will pardon the confusion for a second). That is to say, when one imprecise cultural definition meets an Other imprecise cultural definition, new cultural references arise in that meeting. This is unavoidable, because culture is both resilient and innovative in its nature.

Do not get me wrong. The equation is obviously not that simple. If it was, colonialism and imperialism would never have been a problem. The difference here is that colonialism and imperialism are not so much about a meeting between cultures as about one culture violently attacking another. The main factor here has to do with power, and it can arguably be invoked in any situation where one cultural enclave uses force to impose its own cultural definition on others; either to dominate them or to eradicate them. And even in such instances, history has proven that the meeting is not unilateral anyway. There is an old saying that claims that you are what you eat, and apparently even colonisers and empires are affected by what they devour and digest.

But this is not the case here in Sweden (nor I would dare argue, in most European countries or in the US for that matter). Islamic culture (because as always since at the very least 9/11, this debate is about Western civilisation (and possibly Christianity) being overrun by Islam) is not in any position of power here. Nor, differently put, in any position of power greater than any other minority (and most certainly not greater than any majority). Swedish culture (whatever that is) is not overrun by excessive Islamic references or specific values. If anything, one could argue that Swedish culture or identity runs a greater risk of being overrun by Anglo-American values, but you rarely hear political groups like the Sweden Democrats complain about that type of cultural import as opposed to favouring Swedish culture.

And I would bet that they eat pizza as well. After all, pizza could arguably be seen as rather typical Swedish food. In fact, you cannot go anywhere in Sweden without finding a pizzeria. This is more or less true for any small town in the country, but this was obviously not always the case. Nor do we need to go very far back in history to find a time when it certainly was not (the mid-20th century saw the introduction of pizza into Swedish culture, and it was not an immediate success either). Similarly, the epitome of Swedish food – the Swedish meatball – is Turkish in origin, and was integrated into Swedish cuisine much like the pizza, only a couple of centuries earlier.

In short, the notion that the multicultural society is something new is a myth. Culture has always been a mongrel dog of many mixed breeds. And that is partly what keeps it alive.

I am not saying that there are no values to traditions. I am, however, suggesting that we have to understand that traditions themselves are never entirely fixed. We may talk about how a proper Swedish Christmas should be spent, for instance, but in all honesty, if we define proper as "the way they were celebrated 100 years ago" (a fairly short amount of time for judging these things), I honestly wonder how many of us truly do. Or perhaps even more strongly, how many of us even know what that would actually entail? And that is not even taking into account local variations. More often than not, our strongest sense of our traditions is our own memories of how things are, or were, specifically for us.

Much like language, culture is a living thing. What was will not always be what is; nor will what will be be guaranteed to last forever. Culture is an ocean of ideas, values and traditions, mixing and mingling as the waves and the tides move. And we are creatures adrift on those mighty waters, sometimes pretending that we are in control of their movements.

Monday, 8 August 2011

Yesterday, I read an article by Ann Heberlein in a Swedish newspaper (i.e. the link is in Swedish) on the inherent problems in punishing crimes on the level of the Utøya massacre. Heberlein, who has a PhD in theology and has written books on ethics, evil and forgiveness, enters the debate in response to Ronnie Sandahl and Marcus Birro's respective contributions, both of whom strongly advocate the death penalty as the only reasonable punishment. Heberlein, leaning on both Hannah Arendt and Nietzsche, points to the fallacy of such reasoning in a very sound manner. There is no punishment strong enough to actually be proportionate and the deed itself is too horrible to ever be forgiven, she argues, but she also points to the Nietzschean truism that we need to be careful so that our battling with monsters does not turn us into monsters ourselves when caught in this state of emotional impotence and turmoil.

Heberlein refers to Sandahl's reference to a survey by SIFO (the Swedish Institute for Opinion Surveys) in which 33% of the Swedish population believe that there are crimes that conceivably could warrant the death penalty.* Sandahl obviously uses these statistics to question why no politician is pursuing the issue of maybe re-instating it. Heberlein's answer is simple: because it probably is not a good idea in a civilised society. And she then moves on to confronting Birro's switch from anti-abortion (on the grounds that all life is sacrosanct) to pro-death-penalty (on the grounds that life is only sacrosanct if the individual has earned that status).**

However, while I appreciate Heberlein's argument, and find it important, I would nevertheless linger on those statistics. Because it strikes me that there is an inherent difference between believing that there are crimes that conceivably could warrant the death penalty and thinking that it is a good idea to institute laws of that nature.

Let me first off answer the question of that survey, the question with which I myself opened this post. Yes, I do think that there are heinous crimes that conceivably could warrant the death penalty. There are deeds where the individual's inalienable right to life can be considered spent, and basically rendered null and void. Do I think this means that the death penalty is a good idea? No, not at all. Because the idea of putting capital punishment into law is problematic on several levels. Not because it is always wrong to take life. Do not get me wrong, I am not suggesting that it is right to take lives, but let us face reality. There is basically no government on Earth that would have any compunction about sending soldiers onto a battlefield and ordering them to take lives. I am not suggesting that war is a good solution, but sometimes, it is undeniably the only solution. Personally, I am rather glad that Hitler's vision of an expanded Third Reich was thwarted and that concentration camps were shut down. And any time such things occur, there is an ethical need to oppose them.

So, why is the death penalty wrong then? Well, first and foremost because it would be hard, I believe, to institute a law where the required evidence was so definite that an erroneous conviction was entirely impossible. After all, if you kill someone, it is mighty hard to overturn a wrongful verdict. Granted that some crimes come with that level of specificity, but how would you put that into legalese?

Then there is the issue of the legal machinery itself. In countries where the death penalty is practised, like the US, it is worthwhile noticing that this penalty is not applied evenly. In other words, different legal representation (and by default pecuniary assets) might be the difference between life and death. This all goes back to a point Heberlein makes: who decides the criteria for who gets to live and who does not?

There is an inherent ethical dilemma involved in the taking of any life. In war or in violent police actions, such a dilemma is circumvented, or temporarily suspended, by the needs of the moment. Basically, it becomes a question of a practical utilitarian principle in which the good of the many (and innocent) outweighs the rights of an individual or individuals who are posing an immediate threat to the former. Once the person committing the violent deeds is in captivity, any such suspension or circumvention is itself rendered null and void. If the threat is disposed of, it would seem as if we no longer have any moral right to ignore ethics.

In older times, vengeance was the law of most lands. But as civilisation spread and our societal bonds grew, even that practice was influenced by other means of compensation. Weregild was a concept introduced as a means of ending blood feuds and stabilising regions, and I think it is safe to say that this helped us move forward as a species. Naturally, our impulse to strike back at those who hurt us or ours has not been weeded out of the species, but we deal with it by allowing the law to handle things for us.***

At the end of the day, we might also ask the question of what a proportionate punishment means. We cannot kill the mass murderer of Utøya more than once, yet his death would seem puny next to his deeds. Even if we allow ourselves to resort to that ultimate punishment of depriving such a criminal of life, the response fails to achieve proportion.

Furthermore, it is also questionable as a punishment since it is a brief moment to pay for so much inflicted misery. The perfect metaphysical punishment for the massacre on Utøya would, in my humble opinion, be to have the perpetrator spend the rest of (at the very least) his existence reliving the events on that small island during that hour and a half through his own victims, literally as his own victim. But metaphysical punishments elude our capabilities and so we have to deal with this in a human manner, and preferably one where we do not gaze too deeply into the Nietzschean abyss ourselves.

* It is probably worthwhile noting that Sweden does not have a death penalty, and as far as I know, neither does Norway.

** I will grant Birro the not at all unproblematic point that it would be possible to conceive of given rights that can be lost on account of breaches against the social contract as voiced in law. After all, we usually claim freedom to be a given, inalienable right, and yet we do not hesitate to imprison people for crimes, depriving them of that freedom. Granted that depriving someone of their life is somewhat more permanent, but in theory there is nevertheless an analogy here to be considered.

*** This is why the US system of allowing victims or relatives of victims to weigh in on the legal process in cases of release on probation has always struck me as strange. Logically, either the behaviour of the convict weighed against his or her crime should warrant the release or not. Personal opinions of people probably should not be a factor.

Monday, 25 July 2011

A bomb was set off at the government building in Oslo, and on a small island named Utøya, no more than 0.12 km² (or about 0.046 mi²) in size, what appears to be the same (potentially lone) culprit, an ethnic Norwegian Islamophobe and right-wing extremist, attacked a political youth conference held by the Norwegian social democratic party's youth division. He came to the island masquerading as a police officer and proceeded to open fire on the people there (mostly youngsters and kids) with a machine gun, killing at least 86* people in this massacre over the next hour and a half (with at least another 7 killed in the Oslo bombing).

What is unfathomable is the unreality of this situation. We are used to being confronted with scenarios like this in literature and film. In fact, under different circumstances, we might have thought the preceding paragraph a brief synopsis, found on the back cover of a book or a DVD case. But while we are no strangers to such scenarios in fiction, on the whole, most of us (Scandinavians at the very least) have probably been fairly lucky and have never ourselves had to really look deep beneath the surface of the societal contracts.

Somehow, our brains can hardly manage not to read this as fiction, precisely because of this.

As reality, it just seems implausible and horrible. Horrible because it shows us what we, as a species, are still and continuously capable of. And we have the gall to call such behaviour bestial, while labelling all our better sides human and even humane. But tell me which other animal on this planet of ours acts as cruelly as a human being when it pleases her to act in such a manner.

Unfathomable. Horrible.

At least 86* people died on Utøya. But how many were wounded for life by the hours spent in horror and devastation? How many souls died on Friday on that small island?

* The latest information from the Norwegian Police indicates that the death toll from Utøya is probably going to be lowered, as the final tally is being put together. So far, however, they have not wanted to indicate by how much.

Wednesday, 13 July 2011

There is an inherent problem in acting, or perhaps even more so writing, genius.

Don't get me wrong, it is not necessarily an easy task to successfully portray people of normal wits and intelligence. And it is certainly something of a challenge to portray stupid people; in particular in acting, where timing is of the essence to make it believable. The latter is actually one of the reasons why Christina Applegate's depiction of Kelly Bundy in Married with Children is an impressive feat. We utterly believe in Kelly's stupidity; to the extent where it would be easy to assume that Applegate shares this feature with her character. Nothing could be further from the truth, of course. If she had been stupid, we would not have viewed Kelly as a stupid character, but rather as a character played (unsuccessfully) by a stupid actor. We recognise the asinine without a doubt, but we also recognise when something is out of synch with its context; in this case, the story. Stupid in synch equals brilliant portrayal of stupid character; stupid out of synch equals a poor portrayal by a stupid actor.

But I digress.

This post is not about the lower register (which it is still easier to bow down to), but the higher one, which remains ever elusive. As in all cases, fiction does not require being real, it requires seeming real. And therein lies the rub.

Acting or writing genius requires, on the one hand, presenting the audience with an understandable entity; one which they can still comfortably understand as genius. On the other hand, it also requires that this understandable genius isn't transformed, as if by default, into a regular bloke. The balance is not easy, and there are many examples of failure. Mostly, however, the failure does not consist of too rigid a depiction of the genius as actual genius. Perhaps this is because your normal actors and writers aren't actually geniuses themselves. Not even most of the more intellectual ones. And even if they were, chances are that they would ironically dumb down a genius character to make her or him relatable.

The other side of the coin is writing or acting upwards, telling your audience how brilliant your character is, only to stumble on the finish line by having the character absolutely clueless about something which they really ought not be clueless about. In TV shows, this is often shown by the genius character knowing little or nothing about any and all popular culture (Dr Sheldon Cooper obviously being a great exception to that rule). You only need to think about characters like Leroy Jethro Gibbs (NCIS) or Dr Temperance Brennan (Bones) who usually come off as super smart people, up until the point when a pop cultural reference appears. Because, as we all know, super smart people live entirely apart from the world, and can still stay up to date on the human condition, without ever taking in a single tabloid placard or zapping by anything pop cultural on TV. And after all, nothing of the kind would ever appear in a proper news show or newspaper either, so... Well, I guess you catch my drift.

So the trick is balance; to establish a level of genius you can sell, and without selling it short. And at the end of the day, it doesn't require genius. Only the skills to seem like one.

Monday, 27 June 2011

I am currently teaching a creative writing class based on script doctor and story consultant John Truby's book The Anatomy of Story: 22 Steps to Becoming a Master Storyteller, and in one of our net meetings a discussion about this model of the writing process ensued. At the centre of the discussion was the fact that Truby continuously talks about the need to build an organic story, while presenting a model for doing so which can easily be viewed as very technical.

Now, first off, I would say in Truby's defence that there is a difference between an organic result (which comes across as cohesive and alive) and an organic method (which might be how one describes just going with the flow and making it up as you go along). The point here is that the reader or viewer wants an organic result, and more often than not, achieving that requires at least some sort of technique. Especially when committing to writing a lengthier work like a novel, film script or a play.

But even accepting that a lengthy piece of writing requires a solid structure, and that it, by definition, is easier to lay the foundation of a building first, this discussion nevertheless got me thinking about a deeper philosophical issue in how one thinks about the writing process.

On page 84, while discussing the need to start at the end (sound structuring advice, in my humble opinion), Truby writes:

As with any journey, before you can take your first step, you have to know the endpoint of where you're going. Otherwise, you walk in circles or wander aimlessly.

Now, Truby's metaphor reveals an obvious philosophical vision, but interestingly enough it is not the only one possible to draw out of it. After all, while Truby focuses on reaching a destination, there are those who claim that it is the journey itself that matters, not what destination is reached.

In terms of narrative, I think Truby makes a good point, because narratives (whether fictional or factual) tend to attempt to bring a certain sense of order to our understanding of the world, our lives and our selves. Often even when they deceptively seem to attempt tearing order down. In fact, even when narratives try to mimic reality, they always resort to verisimilitude, attempting to be like reality or truth rather than actually being that thing.* Mostly because if a narrative actually achieved being the thing itself, it would not necessarily make us believe it was.** So, for a story to be credible it needs to be structurally credible as a story. Whether or not it is credible in the sense of whether it could happen in the real world is actually less important. If for nothing else, because we normally apply the same rules when relating the real world as well... as if to make that too more credible. Or perhaps just to make sense of its inherent chaos. But I digress.

There are probably as many ways of writing as there are writers. This is not to say that Truby's model and approach are bad, but like I tell my students: while I am there to teach them Truby's model and examine that they have understood it (in order for them to get their credits), what they choose to do with the model after that point is entirely up to them. Planning ahead and working things through on a basic level might save the writer a few (heavier) rewrites down the line, since there will always be a clear definition of where things are heading, and a greater focus on how they can get there. From a creative point of view, it might indeed be more enjoyable to just tag along for the ride, but it also raises the question of what one wants to do with the end product. And where one wants to put in the most work.

At the end of the day, however, the metaphor of the journey hints at a very basic question: are authors to be regarded as creators, in charge of their creation (i.e. the story world and all its inhabitants) or as creative vessels, through which the story world and its characters gain entrance into our reality? I do not suggest this as an either-or proposition. Many writers speak about their writing in a manner which suggests several intermediate states, but the poles are there to be sure.

* Needless to say perhaps, a narrative can never be reality or the truth in this sense, since there is always an imposed distance. Think of René Magritte's famous painting of a pipe captioned Ceci n'est pas une pipe, for instance, as an illustration of this.

** Dialogue is a good example here: in all writing (though in particular in script- and playwriting), dialogue needs to sound genuine and authentic, like something somebody would say, but at the same time very little fictional dialogue reads like people really speak. Speakers tend to stop, start new lines of thought mid-sentence, correct their thought-patterns, etc., all of which would be really inefficient in fiction, where most lines have to count. In short, what is required is the illusion of actual speech rather than actual speech.

Monday, 13 June 2011

Our topic today is newness. Or rather a certain obsession with the new and its supposed hierarchical superiority to the old. Well, at least in terms of artistic consumerism; i.e. reading books or comics, watching films, or listening to music. I would never suggest that the new in and of itself has a higher hierarchical position in, for instance, old-school academia. Although, it is worthwhile noticing the common critical (academic or otherwise) favouritism of originality, of which I've written before.

While I do not necessarily see originality as a necessary quality marker of storytelling, I can certainly understand the endeavour to go where no author, artist, film maker or songwriter has gone before (even though the likelihood of actual success there seems meagre and more illusory than real). What I want to discuss here and now, however, is the notion held by more than a few people (and naturally fostered by the market place) that only the latest thing is good enough. This is not to say that whatever the latest thing is is original (or even claims to be), but the idea of defining newness in these fields as only the latest thing is foreign to me.

As a reader, viewer or listener, I utterly fail to see why I have to be obsessed with the latest thing.

Don't get me wrong. I'm not saying we cannot appreciate the latest thing to be published or produced for our reading, viewing or listening pleasures. But why be obsessed with it? When there are so many things in all temporal directions from us, yet to be discovered. Some of which we've surely not yet even heard.

Let me confess openly: I have not read every book or comic ever written or published, not seen every film ever made, nor heard every song ever recorded. In fact, I have not even read/seen/heard all the ones I know I would like to. This is quite simply because the treasure chest of such material is nigh infinite (at least in comparison to my own time here on Earth), and everything that I've not (yet) read, seen or heard is something new... to me.

Obviously this is not a condition unique to me. I would dare say that there is no one out there who has literally read, seen or heard everything in any of the mentioned categories. Thus, we need to rethink what newness is, I would argue. We cannot allow our treasure chests of the imagination to be dominated by a simple market place insistence on the latest hype; that is to say, newness only as that newfangled thing which like a flash in the pan is here today and gone tomorrow. It is true that not all books, comics, films or music remain in the public consciousness (in fact, it is probably more true to say that few do). But even a passing fancy is something which somebody may pick up long after that moment is gone, and enjoy or not, in very much the same manner people could whenever the hype was on.

In fact, perhaps some of these things will hold greater appeal when they are not over-marketed and get to stand on their own two legs. I'm not saying that the fame will be eternal, but there may nevertheless be an appreciation of finding something new, as in previously unread, unseen or unheard (perhaps even unheard of).

Monday, 30 May 2011

In 1996, an acquaintance lent me two CDs. One was IQ's The Wake (which made an IQ fan out of me) and the other was Don't Bring the Rain, the first full-length CD from the lesser known Australian progressive rock gem Aragon. And I took to the latter immediately.

The band was formed in Melbourne in 1986 by Tom Behrsing (keyboards), John Poloyannis (guitar) and Les Dougan (vocals), and added two more members – Rob Bacon (bass) and Tony Italia (drums) – after having spent months writing songs. Don't Bring the Rain was first released as a mini-LP in 1988, but as it met with some success in Europe, the band recorded extra tracks for a full-length CD release, which saw the light of day in 1990 (and reached my own hands some five to six years later). Bacon left the band even before the release of the CD, however, and Italia followed suit in 1991 (albeit for different reasons), leaving the band in its original trio format, in which it has remained since.

After having been introduced to the band, it was not long before I had bought Don't Bring the Rain myself as well as their 1995 concept album Mouse, and the preceding six-track mini-CD The Meeting (1992), which is actually Act 5 of the concept album served up as a kind of work-in-progress teaser (in fact, in 1999, Mouse was re-released by LaBraD'or Records as a double CD incorporating The Meeting in its proper place in the story). I also managed to track down the rarer 1993 release Rocking Horse and Other Stories, which collects material from demos and the like, including the 20-minute epic "Rocking Horse." While this material is recorded in lesser quality, it nevertheless provides a good glimpse into the earliest stages of the band, and "Rocking Horse" alone makes the CD worth getting.

By the time Mr. Angel was released in 1997 (as the band's first recording in their own studio and their first release on LaBraD'or Records), I was eagerly anticipating the album. At the time, it represented something of a break from the progressive rock found on Don't Bring the Rain and then developed in the concept album format on Mouse, and while I know that this slightly more pop-rock oriented music disappointed some of my friends at the time, I liked it (albeit in a different way than the earlier CDs).

That being said, when the band's latest album to date, The Angels Tear (2004), was released, I was not in the least bit saddened by the fact that the band was returning to their progressive roots. Rather the opposite.

So what is it that makes Aragon so fantastic, in my humble opinion? Well, one need only consider the great melodies and the fantastic lyrics, wonderfully interpreted vocally by Les Dougan. Dougan's vocals are quite particular, and I know people who find them hard to digest (just as some people have a hard time digesting the vocals of Rush's Geddy Lee), but the emotion expressed is raw, beautiful and gets me every time.

To let you sample their greatness on your own, allow me to present four highly recommended tracks I found on YouTube.

First out, "In Company of Wolves" from Don't Bring the Rain, the playful lyrics of which I absolutely fell in love with the first time I heard it:

Secondly, "The Changeling" from The Meeting (and consequently Mouse), which certainly gives a good sense of where and what the band was about during this period:

Thirdly, a step back into the past, Aragon's epic 20-minute song "Rocking Horse", which is a really well-constructed song with a good set of narrative lyrics:

And finally, I would like to leave you with a sample of the band's latest release. "Growing Up in Cuckoo Land" is the opening track on The Angels Tear:

And for those of you who find this stuff interesting, I can also inform you that the band is currently working on new material. While there is not yet any set release date, this is indeed great news for all Aragon fans out there – old and new!

(And lest I kill any people from the suspense, I had better end this post with a quick tie-in with the preceding second anniversary post. The blog's new sibling (of sorts) arrived not on the day of the anniversary itself, but half an hour into the following day (as if to ensure a celebratory day of his own). So, as predicted, I was indeed elsewhere as the post went up, but after many an hour's wait, my son deigned to grace us with his presence. And that, as they say, is that.)

Wednesday, 18 May 2011

Before you say it: I know, I am off-schedule. I am in fact five days early, as the next post was not really due until next Monday, May 23. But then again, who could miss an anniversary, or the opportunity to celebrate it? Not I, apparently.

It is now two years ago to the day since I first sat down and wrote my mission statement of sorts. And I still stick by it. Sure, there has been an obvious change of publishing pace since October 31 last year, but the central parameters of the mission statement still hold, and that shift itself is more of a logical extension of those parameters than a break from them.

One year ago, also to the day, I sat down and took stock of my first year as a blogger. Having tried my hand at it, I mused on whatever success (however such a thing is ever measured) I had had in my endeavours. At the time, I noted that I was still more or less on target with the 51st post in 53 weeks (yes, I admitted to having been off target by two posts) and now, 52 weeks later, I have added 38 posts, counting this one (i.e. the 89th). And still, it is not just a numbers game.

Heck, I have even written a faux Post #100, the content of which is not too shabby (if I may say so myself), even though its basic numerical premise mayhap reveals its author's lack of mathematical inclinations.*

All in all, I would say it is not too shabby an accomplishment.

As a final aside (which I cannot refrain from including), this year's anniversary is in some sense doubly significant for me. Two years ago, words came from my head through my fingers into this virtual space... and onwards into the real world via your noggins. Much like Pallas Athena sprang fully formed from Zeus' head in the days of antiquity; if one is partial to such mythical excesses of origin. This year, however, as this gets posted (thanks to pre-scheduled computer wizardry), chances are quite great that your humble scribe is elsewhere, either celebrating or yet nervously anticipating the arrival of another wonderful creation of his. If the date itself proves to be auspicious (in a manner of speaking), I surely will not mind the synchronicity. But I am glad that this humble page will then at least have two years' seniority over its sibling of a kind. Because let us face it: much as I love this space of words and phrases and sentences and thought, I fear that it will always play second fiddle from now on.

At any rate, the next post will be on Monday, May 30 (so as not to generate too vast a gap because of this five-days-early post), returning us to the current regular interval (i.e. every other week, Mondays at noon). See you then!

* In all honesty, I was quite baffled when I discovered the incongruity whilst preparing this post. The fact that this, as noted above, is post #89 did not quite match the preceding (and incorrect) claim that what is consequently post #85 could ever have been #100. On the bright side, this will allow us to celebrate the 100th post yet another time, no more than eleven posts from now. Let us see if I cannot come up with a matching numerical theme for it, eh?

Joakim Jahlmar is a participant in the Amazon Europe S.à.r.l. Associates Programme, an affiliate advertising programme designed to provide a means for sites to earn advertising fees by advertising and linking to amazon.co.uk/Javari.co.uk.