
So I got really into March Madness this year. I spent an entire day researching my bracket, and watched every single game that I could. I picked a favourite (Ohio State) and I picked a goat (Duke) and I cheered hard for the former and I jeered harder at the latter.

Of course, if you’ve been following the tournament, you know how well that worked out for me – Ohio State lost to Tennessee in their Sweet Sixteen game and tonight, Duke plays Butler for the championship (and I suspect Duke is going to win). Sucks to be me.

This reminds me, though, that sports can teach you a lot. Playing sports, like art, is one of those pursuits that often has to struggle to justify its existence to a society obsessed with pragmatism. Sports, like art, are often seen as frivolous ways to spend leisure time, things you only do when you’ve finished being useful to society and have a few hours to kill. But sports are extremely valuable and are worth supporting and pursuing, benefitting both individuals and groups/societies.

So with that in mind, I’d like to share with you two of the more useful lessons I’ve personally learned from sports and, since my NCAA bracket is so spectacularly busted, I’m going to make them lessons about failure.

If you think you’re going to fail, you’ll probably fail. So don’t.

This winter, I went snowboarding with a few friends of mine. Early into the trip, we found a bunch of glade runs (runs that haven’t been cleared of trees, rocks, etc.) and this is where we spent most of our time. While navigating one of these runs, I had a bit of an “aha” moment when I realized that I was playing things entirely too safe and that I’d be a much better snowboarder if I stopped worrying so much about not running into things. Since most people would think that “Don’t run into things” would be one of the first rules of snowboarding, this might seem a bit counter-intuitive – I’ll explain.

It’s a general principle of snowboarding that you either commit or you don’t bother. Whether you’re approaching a jump, dodging trees, or just doing some basic freeriding, you are less likely to fall and/or injure yourself if you approach your ride with confidence. If you try to ride tentatively, you instinctively start to shift your weight backwards (which puts you off-balance and out of position to make good turns quickly) and are more likely to fall or bail out (i.e., fall on purpose to “save” yourself). The impact of this is even greater when you’re riding glades – glades require you to make short, sharp, hard turns and if you bail (on purpose or otherwise), chances are you’re going to smack into something.

Sometimes, then, the best way to do a glade run is to simply huck yourself down the mountain and trust yourself to find a way to not die. You’d be amazed at what you can come up with in the heat of the moment if you have a bit of faith in your abilities, and – instead of giving up when you think you might fall – keep focussed and keep charging.

Now, I’m not suggesting that people ride above their ability or do anything reckless – and I do recommend a helmet. But if you give up on something just because you’re not 100% sure you’re going to succeed at it, then chances are that not only will you fall, you’ll hit a tree on the way down. If you trust yourself to find a way to win, then it’s a lot more likely that you will.

Failure doesn’t hurt as much as you think it will, and it’s usually worth the risk.

For the last few years of university I played kendo, a Japanese martial art that involves one-on-one combat with bamboo “swords” (originally used to train people in samurai sword fighting). As fun as kendo is, it’s pretty intimidating for the uninitiated, since it typically involves someone dressed in medieval-looking armour running at you, screaming, and trying to hit you with a big stick.

The thought of getting hit in the face with a stick – and believe me, you do get hit in kendo – can be pretty scary. But I believe that experiencing it can be valuable and that getting hit for the first time is one of the most important moments in a kendo career – I’ll explain.

Getting hit in kendo is never nearly as bad as you think it’s going to be. Don’t get me wrong, it doesn’t tickle – but before you experience it for the first time, it’s easy to build up the experience in your head to the point where you think that getting hit is going to be seriously traumatic.

Once you take – and shake off – that first shot to the head, you start to realize that, unpleasant though it is, getting hit is far from being the worst thing in the world. Not only that, it’s necessary to win – defensive strategies will get you ties, but they won’t score you too many points, which means that you have to be willing to go on the attack in order to win, which generally leaves you more exposed to attack from your opponent.

In other words, you need to risk getting hit in order to succeed, which is much easier when you realize that getting hit doesn’t hurt nearly as much as you think it will, and it certainly doesn’t hurt as much as losing because you were too afraid to take a risk.

The same, I think, is true of life. It’s no secret that fear of failure can be extremely paralyzing and it’s partly because the experience of failing is so built-up in people’s minds that they’re not willing to take risks. So they play it safe, and live boring, unsatisfying lives. But taking risks – and sometimes not having them work out – will ultimately take you much further than playing it safe, because a) risks are necessary for rewards and b) if you experience failure, it helps you understand that failure doesn’t hurt nearly as much as you thought it did, and it greatly reduces the paralyzing effect that the fear of it has. In kendo, as in snowboarding – and, more to the point, life in general – sometimes you have to commit to win, even if you can’t be sure of the outcome.

So there you have it – musings on failure in sports. I hope you found them useful – and if you have any, I’m interested to hear your anecdotes about things that sports taught you. And for the record, even if they win tonight – Duke sucks.

In a January 6th blog posting, actress, writer, and all-around nerd-genre-genius Felicia Day takes to task an article published in a recent issue of Vanity Fair magazine. The article, written by Vanessa Grigoriadis, is about women who have used Twitter successfully to raise their professional profile – to gain “Twilebrity” status, as the article puts it. Day herself, whose feed has more than 1.7 million subscribers, was one of those women profiled.

As Day notes, on one level the fact that this article was written at all is fairly exciting for people interested in social media, since it means that a major mainstream news outlet is taking an interest in a still fringe-y media source, providing a certain degree of validation for those whose claims that social media is the future have long been met with skepticism and ostentatious eye-rolling. Furthermore, the article focuses on women who are helping to lead the evolution of the media landscape, an area not always known for being equally accessible to both sexes. Good stuff all around.

Or at least it would be good stuff if not for the numerous examples that Day points out indicating that the writer either doesn’t know a thing about social media or doesn’t take it or its practitioners particularly seriously (or, more likely, both). Take these little gems for example:

“‘Sometimes,’ says Julia Roy, a 26-year-old New York social strategist turned twilebrity, scrunching her face, ‘when you’re Twittering all the time, you even start to think in 140 characters.'”

(I particularly like Day’s response to this quotation: “‘Scrunching her face?!’ Oh gosh, thinking is hard!”)

I’ll dispense with any further analysis, since Day’s blog does a much better job of that than I could, so you’re probably best off just to read her posting on the subject. Go ahead, read it now – I’ll wait.

But there is one thing I would like to point out, and that’s the likely motivation for the tone of this stupid article: fear.

The traditional publishing industry is in serious trouble. More and more consumers are getting their news, entertainment gossip, and whatever else was formerly available strictly on the newsstand from online media sources. Now that hand-held web-browsers are in such wide circulation, the rate at which people abandon their daily newspapers and monthly celebrity magazines in favour of their iPhones and BlackBerries is only going to increase.

Not only that, the rise of social media has made it possible for anyone to be a publisher (this is not news). And as more and more talented writers choose to publish online rather than through traditional channels, those online sources are rapidly gaining credibility (this is sort of news), taking away one of the few major advantages the traditional media had left and flooding the media pool with competition – competition that, unlike say Vanity Fair, has low overhead and has learned that you write for business development, not for business. These people are therefore much less worried about the revenue from their media outlet and can give their content away for free, which gives them a significant competitive advantage over the old-guard (this is big news).

The scariest part for peddlers of traditional media is that these low-cost outlets are proving just as desirable for readers as their high-cost competitors. Felicia Day’s Twitter feed (@feliciaday), for example, has more than 1.7 million subscribers; Vanity Fair‘s monthly subscribers only number about 1.1 million. Those are scary numbers for an organization with a vested interest in making sure that their (much more expensive) medium is the one attracting the most readers.

So it’s no wonder that they’d take such a snarky tone towards social media – if their capital can’t help them, what weapon do they have left to protect their market share, aside from barely-veiled attacks on their competition’s credibility? To be fair, it may just be that the writer isn’t into social media and – mistakenly – adopted a tone appropriate for a fluff piece about a faddish, passing trend, rather than a proper examination of a media game-changer.

Either way, you can’t ignore the fact that there is blood in the water – and from reading between the lines in this article, Vanity Fair knows it.

Anyone who’s read comic books for long enough can tell you that dying in the comics is about as serious as getting mono. It’s inconvenient and it’ll definitely keep you out of action for a while – maybe even months. But at some point you’ll be back on your feet, none the worse for wear.

The death of a major comics character has become something of a cliche, a cheap trick used to sell books and hopefully refresh the character without doing any actual character development. Death in comics is actually beginning to be a boring non-consequence, a plot device whose overuse highlights the inherent unsustainability of trying to maintain an ongoing story and character for, in some cases, 70+ years.

As in all things, though, there is an exception and it is, oddly enough, in one of the oldest characters in comic-dom – the recent death of the Dark Knight himself.

For those of you who don’t know, Batman (well, Bruce Wayne more specifically) recently disappeared from the DC Universe, though the cause of his death seems a little muddled – he’s either been killed in a helicopter explosion after an encounter with the Black Glove or killed by Darkseid’s Omega Sanction at the end of the Final Crisis or sent back in time by said Omega Sanction or turned into a small, furtive newt-like creature. Okay, I made that last one up, but regardless, it’s all very confusing.

The fuzzy details notwithstanding, one thing is clear – Gotham City is now without its Bruce Wayne and, in typical comic-book, publicity-stunt style, there have been a whole host of special issues, cross-overs and new title launches to commemorate the occasion.

What’s not typical about this particular comic book death, though, is that – despite the fact that nobody has any business believing that this is more than a temporary set-back for the Dark Knight – Batman’s death is proving to be fertile source material for the development of one of comics’ most interesting characters: Dick Grayson.

Grayson – the original Robin – is interesting because he’s one of the few characters in comics who has experienced genuine character growth, rather than simply being batted around (hi-oh!) by tangential side-plots meant to sell a few comics before returning neatly to the status quo. After starting his comic life as Robin, Batman’s trusty Boy Wonder side-kick, Grayson eventually left the Batcave and Gotham to pursue his own crime-fighting career as Nightwing, stalwart defender of the city of Bludhaven (Gotham’s neighbour across the river).

Contrary to comic-dom’s SOP, the change has stuck to this day, and many writers of both Batman and Nightwing stories have mined the development for rich source material, as Grayson tries to establish himself as a crime-fighter in his own right, occasionally languishing in the shadow of his mentor, and as Batman attempts to find and train a new protege (with, so far, only marginal success).

With Batman’s recent “Death”, Grayson has put his activities as Nightwing on hold and has taken up the cape and cowl, attempting to fill the shoes of the original caped crusader (and do so seamlessly enough that Gotham’s cadre of supervillainy doesn’t catch on that Batman was ever killed in the first place).

Grayson’s new role has allowed some of DC’s better bat-writers – specifically, Grant Morrison (Batman & Robin) and Judd Winick (Batman) – to do some genuine character exploration of both Grayson and Bruce Wayne, as Dick grieves the loss of his friend, struggles to fill Batman’s shoes and tries to manage the latest – and possibly most incorrigible – incarnation of Robin, Bruce Wayne’s Shadow-League-trained son, Damian.

Grayson is unable to master either Batman’s voice or his style – in one particularly astutely constructed scene, he complains to Alfred about having to wear a cape again, one of the first accoutrements he shed upon leaving the Batcave – and is forced to both examine and adapt the Batman identity in order to be comfortable in his own skin, effective as a crime fighter, and worthy to fill the role of his mentor.

Leave it to Grant Morrison (We3, All-Star Superman), who seems to be the man in charge of this whole Bat-evolution (having penned both Final Crisis and the Batman story arc that culminated in Wayne’s death, and who continues the story in Batman & Robin) to pull off this kind of soft-restart of the character in a way that breathes life into the franchise without making a much-overused gimmick too… well, gimmicky. He already did it once with the (New) X-Men in the early ’00s and it looks like he may be on track to do it again with the Dark Knight.

So while it lacks the fanfare and media attention of the death of the Man of Steel in the ’90s or the recent demise of Captain America, the death of Batman may be one instance of a comic writer actually having the skill to do comic death right. If I wasn’t a huge Grant Morrison fan before, this would have made me one – and I look forward to seeing how the rest of this story plays out.

Making predictions about where the world is headed – especially the world of media and technology – is a dice game at best. We’ve all heard (and laughed at) the ones that failed to come true (here are some good ones). However, the subject of the current direction of media technology just seems to keep coming up in my conversations with people these days, so let’s have a go at it. In 10 years, feel free to laugh at me when none of these come true.

DVDs are the last hard medium

Has anyone else noticed how little everyone seemed to care about the fact that Blu-ray beat out HD DVD to become the home movie industry’s official next-generation format? For something that was supposed to change the home entertainment market as much as the VHS/Beta battle or the invention of the DVD itself, nobody really seems all that fussed about the fact that all of our DVD players are supposed to become obsolete in the next few years.

Maybe it’s because everything is going online – with more and more material becoming available via download-on-demand, the rapid increase in worldwide broadband penetration, and the availability of hi-def features over cable, it’s just not worth it to upgrade your hard-media player anymore. Even video games are going online – Wii users are able to download and play games from older Nintendo systems, for example, and one next generation handheld – the new PSP – makes games available by download only. With the convenience and quality of online content, there is just no reason to waste space and material on hard media – and it may just be that DVDs are the last of their kind.

Copyright laws are going to change to reflect the new media environment

I was watching a UFC event on cable the other weekend and at one point, one of the announcers had to read out the obligatory statement threatening legal action against anyone who copied or rebroadcast the program without permission. The announcer had barely finished the last sentence of the disclaimer when his partner (Joe Rogan – what a guy) piped up with something along the lines of “I’ve said it before and I’ll say it again, you can’t stop the internet, baby.” It was hilarious and, more importantly, it points out the futility of copyright laws based on 18th century principles of content distribution.

With the rise of digital media, these principles (e.g., exclusive ownership of the content by the author, exclusive control by one party over the means and the channels of distribution) just don’t hold anymore, and laws created and enforced by people who still cling to them are bound to become irrelevant and fall apart, if for no other reason than that they become essentially unenforceable (ask the RIAA). The question, of course, is this: if nobody is following the laws, nobody seems all that upset when they’re broken, and attempts to enforce them are often met with indifference or scorn, do the laws still reflect and serve the needs of the population? And if not, shouldn’t they change?

Social networking will become the new dominant mode of communication
This is a no-brainer. We’ve all seen this. We all know that everyone – everyone – is already on some kind of social network somewhere. And at least a few people know that social networking now is as important to communication as email was in the 90s. It’s a game-changer, and before long, not having a profile on at least one social networking site will make you look about as ridiculous as you would right now if you didn’t have an email address (or, for that matter, a phone number).

One-way communication will become completely obsolete
Audiences will come to expect the ability to engage with content to the point that content producers will not be able to get away with one-way communication – your resource either enables two-way communication (which means the online version becomes the centre-piece) or your resource dies. Content will be about creating a community, not about just informing an audience. One-way communication will become obsolete.

Print media’s role within the media environment will radically shift

I will stop short of predicting the demise of printed media – but, given the above, plus the fact that hand-held browsers and readers take away print’s last major competitive edge (portability), it will stop being the cornerstone of… well, really anything. It will be a peripheral element of the media environment, not its centre like it is now (or was five years ago – you could argue this has already happened). It will still provide a certain type of experience that digital media can’t replicate (there are tactile elements to reading a newspaper, for example, that I imagine some people find comforting), but as generations grow up with digital media and, thus, don’t care about those types of experiences, even that advantage will fade away.

So here’s an interesting thought coming out of yesterday’s whole balloon-kid fiasco. And I’m going to keep this short because this story is getting really old really fast.

But I’m interested in the speculation over whether or not this whole thing was a hoax, fomented by the balloon-kid’s statement on Larry King that he didn’t come out of hiding when he heard his parents calling him because they “had said that we did this for a show”. At first I thought that this was just a case of a bad publicity stunt, perpetrated by attention whores of the worst ilk, being accidentally (and hilariously) debunked by a kid saying the darndest thing.

But now I’m starting to wonder about the kid’s grasp of reality. After all, this was not his first time on television, the family having made two appearances on ABC’s reality show Wife Swap. And it poses the question – does a six-year-old child have a firm enough grasp on the difference between TV and reality to know where one stops and the other starts? Especially when, during his formative years, he was at the centre of a spectacle that purposely blurs the lines between the two?

Maybe he just thinks that whenever your personal life receives attention from the outside, it’s for TV? (Which, now that I think about it, is not that far fetched.)

Or is it possible that, having grown up in an environment where the most important part of “reality” is creating a spectacle for the camera, he behaved in the way that would prolong the spectacle because he thinks that’s just what people do? Maybe his parents didn’t put him up to it at all – maybe he did it “for the show” all by himself.

I sometimes wonder what effects reality shows featuring young kids – Supernanny, Wife Swap, or Jon & Kate Plus 8, for example – have on those kids and their perception of how the world works. I wonder if the balloon-kid fiasco is going to shed some light on this question. I always figured that reality TV would spell the end of humanity – but this is a wrinkle even I didn’t anticipate.

There is a cultural skirmish taking place between Generation Y and the Baby Boomers. Boomers say Gen Yers are a bunch of lazy, self-entitled brats. Gen Yers say that Boomers… well, it’s funny, really – Gen Yers don’t really seem to give a shit about the Boomers and are content simply to disappear onto the interwebs when the issue arises.

Either way, there’s a clash of culture and values and one of the places it seems to escalate to actual conflict most often is in the work world.

Managing Gen Yers has proven to be something of a challenge for Baby Boomers and the topic often pops up on forums like Brazen Careerist or Punk Rock HR (sorry, couldn’t find the post I wanted for this link, but trust me, it comes up in the comments all the time) or in books such as Bruce Tulgan’s Not Everyone Gets a Trophy.

Here’s a promo for a show on PBS that deals with the topic and nicely illustrates some of the complaints that the two groups have about each other.

Ultimately, the Boomers’ position seems to come down to their perception of Gen Y as, well, like I said above, lazy and self-entitled kids with no work ethic who show no gratitude or loyalty to their employer and spend all day engaging in social networking rather than doing their jobs.

These impressions are absurd (if not offensive) for a number of reasons, not the least of which is that they’re hopelessly historically blind – weren’t the Baby Boomers the ones who said “Don’t trust anyone over 30!” and called themselves the “Me Generation”? And AS IF nobody ever slacked off at work before Facebook.

But history has another parallel that makes this generation gap far more interesting than a matter of one generation having trouble identifying with “the kids these days”, and I’m just going to come out and say it – Gen Y is fomenting the Marxist revolution.

Think about it this way – Camus sums up the gap between the bourgeois and the proletariat (as perceived by Marxism) as that between one group that uses their influence to maintain a status quo in which they retain the privileged position as owners of the means of production and another that resists said status quo, not because they hope to usurp the privileged position but because they want to return to a state where day-to-day life involves something more fulfilling than turning pieces of yourself into commodities for exchange.

Like the Marxist proletariat, Gen Y expects more from their jobs than a paycheque. They want a degree of fulfillment and all the things they do that annoy Boomers at work – bouncing between jobs to avoid getting bored, having high expectations when it comes to the type of work they’ll be doing, and just tuning out when their expectations aren’t met – are, for better or for worse, strategies for achieving that fulfillment.

Sure, Gen Yers don’t have it all together yet – crying because your boss asks you to do something tedious is silly and short-sighted – but I like to think that if this trend of demanding more from your career continues, we may be witnessing the revolution that Marx was talking about, except without all the messy violence.

Also, I’m not sure what Marx would have thought about iPhones. But whatever. Either way, it’s a bit of a revolution and it will be interesting to see if it catches on or simply falls flat when Generation Z – whatever that is – hits the workforce.

I’m a big fan of Penelope Trunk’s blog – it’s insightful, sometimes racy (which is always fun) and it talks to 20-somethings like they’re thoughtful, intelligent people rather than a generation of spoiled, irresponsible, self-entitled layabouts.

Last week, Penelope posted this article about why travel is a waste of time. And I had a really odd reaction to it.

I don’t disagree with it – in fact, I think there’s merit in everything she says, though I personally really enjoy travel and find that you can get a lot out of it.

No, my odd reaction came specifically from item #4 in Trunk’s list, which essentially boils down to the idea that it is far more productive and rewarding to build an every-day life that is so fulfilling, you don’t need to get away from it to find satisfaction, which makes travel kind of pointless.

In a lot of ways, this makes perfect sense. Why only enjoy your life for two weeks out of the year when you could enjoy it year-round with a bit of self-knowledge and a very small dose of enterprise? But here’s where things got interesting for me – my gut reaction to that idea was crippling anxiety.

Then I thought to myself, what the hell? Why on Earth would the thought of self-knowledge and self-fulfilment cause anxiety, of all things? Why would the thought of figuring out what makes me happy, and then doing it, make me want to run screaming from my computer?

And I don’t think it’s just me who has trouble with that idea. I sometimes wonder – more so after today – whether the 20-something identity crisis is more a product of anxiety than a lack of options. It’s not that we don’t know ourselves – it’s that we’re afraid to admit to ourselves what it is that we really want out of life, and that anxiety makes it very difficult to move forward. But none of us has any idea where that anxiety comes from, let alone how to deal with it.

Fortunately, I have a theory – and it’s just one, and very unscientific, so do with it as you will. But here it is nonetheless.

All our lives we’ve been told “if you can dream it, you can do it! :D”

This is a nice thought. But it inspires people to take a very goal-oriented approach to happiness. In other words, it’s a way of thinking in which your only interaction with your potential happiness is to imagine some kind of end result.

Now, having the goal in mind is crucial, even essential, to success. But being so focussed on the goal that you lose sight of the process makes the gap between where you are and where you’re trying to get absolutely enormous. Not just enormous – insurmountable. And since nobody has been saying anything about the process – just about “Dreams!” and “Shooting for the Stars!” and all that nonsense – everybody is completely focussed on the finish line with no idea how to even get to the starting gate.

So this is where the anxiety comes from – we’ve all grown up dreaming about all the things we’re going to do when we grow up and now we’re suddenly in our 20s and in a position where we have to actually do something to make them happen. But nobody really knows what that something is. And that makes the gap between where we are and where we want to be so intimidating that many of us simply don’t bother. Instead, we get an unfulfilling 9-to-5 job and try to fill the gap with toys (sometimes literally, since much of the market for escapism right now is based on nostalgia for when we were kids in the ’80s, e.g., the new Transformers and GI Joe movies). Or alcohol. And we work and hate our lives and take two weeks vacation every year to try to get some fulfilment when what we need most is an honest examination of what we want and a practical look at what it takes to get there. And a serious reduction in this “If you can dream it, you can do it! :D” BS.