Update 12/3/2011: Unfortunately, the information below can’t quite be trusted. I’ve taken a closer look at the results I pulled from the Seattle Marathon site at the time and the results posted today, and I can definitely say that the data I pulled was not official and that today’s results are also incomplete. I’ll need to update this post and all this research sometime when official, complete results are available. What’s wrong? In the data I initially pulled, the women’s winner, Trisha Steidl, has splits of 1:10:29 for the first half and 1:34:09 for the second half for a 3:03:38 (which is wrong and doesn’t add up to that final chip time) – today the results say 1:29:32 and 1:34:09 (which seems right). This suggests today’s results are closer; however, today the pacer chips are missing from the results. Anyway – I’ll work on this again when I can…

I’ve started collecting the data from the Seattle Marathon, 2007 to present and am doing some analysis on it, specifically from the perspective of the marathon pacers since I organized the pacers this year and we just finished the race. If I find the time to keep analyzing the data, this will probably be the first of many posts on the subject. If I don’t, this might be the first of one.

Assuming I’ll keep writing on this – here are the methods I’m using for the data source…

I pulled down all full and half marathon, male and female results from 2007-2011 and dumped the data into Excel. This represents over 45,000 finishers across both races over that period.

I did a little data cleansing – many records contained no data for the first split, so I turned these into 0s for processing in an Excel PivotTable.

I used a 5-minute rounding function in Excel to approximate the pacer that a given finisher would be behind (e.g. a full finisher crossing the finish line at 4:13:35 evaluates to a 4:15 pacer, a full participant crossing the midway point at 2:02:41 would be behind the 4:10 full pacer, and so on).
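For the curious, the same grouping can be sketched outside Excel. Here’s a minimal Python version of the ceiling-to-5-minutes logic (the function name and time format are just for illustration, not what the spreadsheet actually uses):

```python
def pacer_group(chip_time, halfway=False):
    """Round a finish time (or a doubled halfway split) up to the next
    5-minute pacer slot, e.g. a 4:13:35 finish falls behind the 4:15 pacer."""
    h, m, s = (int(part) for part in chip_time.split(":"))
    seconds = h * 3600 + m * 60 + s
    if halfway:
        seconds *= 2  # project the halfway split out to the full distance
    slot = -(-seconds // 300) * 300  # ceiling to a 300-second (5-minute) boundary
    return f"{slot // 3600}:{slot % 3600 // 60:02d}"
```

So pacer_group("4:13:35") gives "4:15" and pacer_group("2:02:41", halfway=True) gives "4:10", matching the examples above.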

Through 2010, the Seattle Marathon only offered full pacers for 3:30, 3:45, 4:00, and 4:45 (unhappy with this, in 2010 I lobbied for us to add 3:10 and paced that myself). In 2011 I organized the pacing and changed the pacer structure to offer more times (3:00, 3:10, 3:15, 3:20, 3:30, 3:40, 3:45, 4:00, 4:15, 4:30, and 4:45).

When trying to process this in the past, I had frequently tried to look at the finish result. It’s nearly impossible to draw conclusions from that because (if you hadn’t heard) a marathon is hard and there are all sorts of reasons people do or do not make their results. The ultimately important question is “did people make their goals?” but without a questionnaire that’s fairly impossible to tell. It *is* pretty easy to tell from the first-half split, though, where people were setting their goals, and looking at some of that data, I see a clear indication that the pacers and pace groups matter.

The following chart plots full finishers over these 5 years of races and highlights the pacers that were offered in those years. The data shown is based on the 5-minute group each runner was with at the first half split (not the finish), and the red rings highlight the 5-minute segments for which we had a pacer.

From 2007-2010 there was, in most years, pretty clear clustering of large groups around the pacer segments. Sometimes the spike falls a little outside the circled block, but I believe there is a pretty clear visual correlation (this includes 2010, when I had a group on track for 3:10 at the half).

From 2007-2010, the distribution of the field outside the pace groups is fairly smooth. I think this further suggests that when there isn’t a pacer to associate with, runners tend to distribute themselves more evenly.

In 2011, the distribution is much choppier, with more clusters of runners in the race – most clusters falling inside a pace group and most of the rarefied sections falling outside the pace groups.

This doesn’t help us understand whether people are achieving tougher goals, and there is no sophisticated analysis in here at all (maybe I’ll get to some of that in a later post), but I believe it definitely indicates that runners will choose to run with a pace group if one is offered in the race.

A couple weeks ago as I was leaving my house and walking down the steps I felt and heard the familiar, distinctive, and disgusting sound of a snail shell being crushed under my shoe. At the time I had no reason to doubt my intuition that: “This was the single most disgusting thing I will experience all month.”

Fast forward to going out on a cold winter night and finding a fresh, live slug “incorporating” some of the crushed remains. Fast forward a couple of minutes more, to when I forgot that the replacement slug was devouring its ancestor.

I use emacs every day, for as much of my computer work as I can, and have for about 9 years. It was not easy to learn, though – I used it casually for about 8 years before starting to use it seriously and all the time around 2002.

Learning was harder than I think it should have been – primarily because the main tutorial (invoked with C-h t) focuses on lesson after lesson of basic file and editing operations instead of teaching just a few very basic, core lessons about emacs itself. So, I attempt to present:

The only emacs tutorial you’ll ever need

Emacs does a lot, and new users definitely needn’t try to understand all of it. My learning curve ramped up dramatically once I discovered and mastered a very short list of basic functions that explain the major interactions with the software. Before that, I very often felt trapped by it, and it convinced me (many times) to turn away (to vim, TextPad, WinEdit, notepad, and other software). Now I can’t imagine trying to use anything else to get work done.

The short version: I believe that if you start by learning describe-function, describe-key (and where-is), apropos, modes (and describe-mode), and ctrl+g, you will ramp up on emacs much, much more quickly than if you do not.

Every key press in emacs executes a function. Whether you press the “a” key or some key sequence involving the control (“C-”) or Meta (“M-”, usually by pressing ALT or the Escape key) keys, you are running some function. This is probably different from most software you normally work with.

Every function has documentation. You can see this documentation by executing the function “describe-function” and typing the name of the function you want to get documentation on.

Many functions can be invoked by name. You do this by pressing “M-x” and entering the function name in the minibuffer. For example, if you type “M-x describe-function [ENTER]” emacs runs “describe-function” which asks you for a function name. Type “describe-function” and you will see the documentation on “describe-function”. I said “many” and not “all” functions can be invoked by name – in the function’s definition it must be declared to be interactive for this to work. Emacs has a lot of non-interactive functions (e.g. basic lisp functions like car) which cannot be executed interactively.

“describe-key” (and its close sibling “where-is”) can help you explore keymappings. I mentioned that when you press “a” it runs a function – to see what function that key sequence runs, type “M-x describe-key [ENTER] a”. This tells you pressing “a” executes “self-insert-command” and shows the documentation of self-insert-command (that it will “Insert the character you type.”). Similarly you could use “M-x describe-key [ENTER] M-x” to see that M-x is bound to execute-extended-command (which opens up the minibuffer and asks you for a function to run). Cool! So let’s say that you know there is a function called “goto-line” which lets you jump to a specific line in a file. You’re lazy, though, and don’t want to type that whole thing out whenever you want to use it. “M-x goto-line” – so much typing! Instead, you can type “M-x where-is [ENTER] goto-line [ENTER]” and emacs will tell you what key sequences goto-line is mapped to. In my setup, they are: M-g g, M-g M-g, <menu-bar> <edit> <goto> <go-to-line> – so I have three ways to get to it. Another invocation of “where-is” and I learn that “describe-key” is bound to “C-h k” – so the quick way to do the first operation in this section (“what function is run when I press ‘a’?”) is: “C-h k a”.

“apropos” can help you find (or remember) useful functions. Say you didn’t know that goto-line was the function to jump to a line in a file. If you type “M-x apropos [ENTER] goto” you’ll get a list of (interactive) functions that include “goto” in their name. Personally, I find this more useful to remind myself of a function I can’t quite remember than to find a function I don’t know at all, but it’s very useful. (short way: “C-h a goto”)

Your major mode sets up a number of default behaviors for your interaction with emacs. All interaction takes place in a single major mode, and you can see which one in the modeline – it might be “Lisp Interaction”, “Apropos”, “Shell”, or others. Depending on your mode, your keys will behave differently! This can be very confusing to new emacs users. For instance, when I press “C-h k <TAB>” (to inspect what the TAB key does) in Lisp Interaction mode it runs indent-for-tab-command (to indent a line for lisp programming), in Shell mode it runs comint-dynamic-complete (to try to tab-complete a function or file name), and in Apropos mode it runs forward-button (to navigate to the next linked entry in the apropos output). “describe-mode” will tell you what mode you are in (and what minor modes are enabled) and what many of the major keybindings are for that mode. (short way: “C-h m”)

Minor modes can be mixed in to add more customizations. Most of your keymap will be defined by the major mode you’re in, but there are some editing conveniences that can be put on top of this that may transcend any particular mode. A pretty good example is “folding” – a behavior that lets you collapse large sections of a document and see a larger structure.

Ctrl+g runs “keyboard-quit”. You may find yourself locked in the minibuffer or with emacs trying to get you to complete some command you don’t understand – ctrl+g can frequently get you out of this. (note: it’s not perfect – you might wind up in a recursive edit, but that’s another story)

These are the things I wish I’d known before I started any of the tutorials. The tutorials *are* good and the reference cards *are* handy, but I was frequently frustrated and confused about why the keyboard didn’t react in the ways I wanted (I didn’t understand modes in general, least of all the one I was in), I didn’t understand how keys worked anyway (didn’t know about describe-key), didn’t know how to increase my proficiency once I started getting a little more comfortable (didn’t know about where-is or apropos), and didn’t know how to learn more about many of the functions (didn’t know about describe-function or apropos). Those are commands I still use every day in emacs.

Music is my life. Well, then again, not really – there’s friends, family, pets, computers and running. But music is way up there. And lately I’ve got a few things I’m newly into. Here’s a short rundown – in no particular order. Every link is to a song that I think is worth listening to.

Male Bonding – I just posted the youtube clip of their incredible track – Bones – from their most recent album. I was on a training run about two weeks ago, listening to their new album for the first time, when it came on – one of those incredible experiences where a song just stuns you on first listen. Previously I’d seen their video for Year’s Not Long, which I guess would probably be called gay-positive in the sense that it winds up with all the guys in the video making out with each other. But Bones – *6 minutes* of pretty serious (if poppy) thrashing. There’s not a lot of complexity to these cats, and you’ll probably immediately know you love them or they’ll bore you to tears. I saw them play at Chop Suey as part of City Arts Fest and they were great, but it was a little strange to see a show so poorly attended (I’d say there were 50-100 people there and we basically all fit on the main floor).

Jay Reatard – died ahead of his time. He looks and acts like a reject from the carny, and the “pool-party-gone-wrong” theme of It Ain’t Gonna Save Me is an inspiring testament to someone I wish I’d gotten to see live.

Frank Turner – speaking of testaments – Eulogy is easily the most perfect <1 minute song I’ve ever heard (I was never a big D Boon fan). I saw him at Neumos and then, like in the linked clip, they led into “Try This at Home” which has some of the most perfect sing-along choruses I’ve heard in years. By the end of the show, he insisted on and succeeded in getting every member of the audience to sing along to Photosynthesis – and it was magic.

Carissa’s Wierd – It’s hard to know what to say about this band. Listen to Heather Rhodes and lines like “saw someone today who looked exactly like you – it’s funny how the years go by” or One Night Stand and “please don’t ask me what my thoughts are cause I don’t care about yours” and you’ll find tragic desperation that is just destined to be the soundtrack for sad memories and for the discount bins. Which is really unfortunate because they made incredible music and S still is.

Pajo – Keeping that thread going: David Pajo played guitar for Slint, and apparently he’s still making music but, as far as I can tell, pretty much flying beneath everyone’s radar. At least, I just found a used copy of “1968” at Sonic Boom in Ballard, and it had been getting marked down for the past 3 years. When I listen to his cover of Where Eagles Dare, or basically anything from 1968, I think “this must be what people got out of Elliott Smith.”

The Gglitch – this is hard to write about, because this is the band my excellent and incredibly talented cousin was in before he died of cancer. I just visited with his brother, who travelled a little this summer and has been pursuing an excellent effort to get their last album into some public libraries. Anyway, my cousin’s keyboards on the lead track from their last album (Angeldust, if you have Spotify) show their amazing range. I don’t even know what style to call it, but I know that I love good, passionate music and that, beyond missing my cousin – I believe this is it.

Jay-Z and Kanye – somewhere this post turned very melancholy, and I want to end on an uplifting note, which comes from the Frank Ocean cut off Watch the Throne – Made in America. I could listen to the layers they put down on this over and over – and have. And I can do all that and look past the Big Ghost Chronicles review, which trashes this track pretty hard, because even Big Ghost has to eventually concede that “its still a pretty tight project son”.

The 2011 Seattle Marathon is just a couple weeks away. I’m organizing the pacers this year…

Aside: being pacer organizer is good but there are some weird experiences. I got email from some guy in Toronto who wanted to pace and another email from someone interested in pacing but who wouldn’t be in Seattle for the next year or two, so how about pacing then? But I digress…

…and in past years in the start area there have not been any signs helping the starters line up by pace. There’s just one giant start area for both the half and the full, though the races start at very different times. I tried to lobby to get some signs set up on one side of the start chute for the half with minute/mile pace markers and signs on the other side for the full (there are >3x as many half finishers as full finishers, so the pace groups for those paces will definitely be very different). It seems like we’re not going to get that, so I ran some numbers.

I went to the Seattle Marathon’s results website and pulled down results from 2008-2010 for the half and full (men’s and women’s). I sorted results by chip time to figure out where people really *ought* to line up given how fast they finish the race, and here are the results I found:

The key points for where we should line up our signs (assuming we can approximate how far back “the back” is and that the crowd is uniformly distributed) are:

1:45 half / 3:30 full should be about 1/10 of the way back from the start.

2:07 half / 4:15 full should be about 1/2 way back from the start.

A 2:30 half / 5:00 full would be ~4/5 back from the start (however Team in Training are going to provide 5+ hour pacers this year)
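As a sketch of how those fractions fall out of the sorted results: once chip times are sorted fastest first, a runner’s spot in the chute is roughly their rank over the field size. A minimal Python version (the field below is synthetic and evenly spread, not the actual results):

```python
from bisect import bisect_left

def lineup_fraction(sorted_chip_seconds, target_seconds):
    """How far back from the start line (0.0 = front, 1.0 = back) a runner
    with the target chip time belongs, assuming times are sorted fastest
    first and the crowd is uniformly distributed."""
    return bisect_left(sorted_chip_seconds, target_seconds) / len(sorted_chip_seconds)

# a toy half-marathon field spread evenly from 1:10:00 to 3:10:00
field = list(range(4200, 11400, 8))
```

In this toy field a 2:10 half (7800 seconds) lands exactly halfway back; the real distributions aren’t uniform, which is why the measured fractions above are worth pulling from actual results.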

A friend recently posted a comment mentioning latency that made me want to dump some thoughts about what I feel like I know about the topic. There are better sources that cover it in greater depth (go to Steve Souders’ blog, or there are tons of good things James Hamilton has written on it), but I have a pretty broad working knowledge of it, so here goes. I’ll try to introduce what latency is, how it’s measured, and how it’s analyzed.

Web latency can roughly be defined as “how long it takes your pages to load.” It’s common to start thinking about this by wondering what the connection speed of the clients reaching your site is, but this often doesn’t really matter. Even if you knew it, you’d just know how fast your clients are; what matters is their perceived speed of your pages. It’s important to understand that there is both server-side latency (how long it takes your site to generate pages) and client-side latency (how long it takes customers to get your pages). Server-side latency is pretty easy to measure – you can generate markers in server-side code that measure exactly how quickly you generate pages – but users don’t care if you generate your pages quickly; they care if they get them quickly.
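As a sketch of what those server-side markers can look like – this is a toy in-process version (the `latency_log` sink is made up; a real system would ship the measurements to a metrics pipeline):

```python
import time
from collections import defaultdict
from functools import wraps

# toy in-process sink for recorded page-generation times (ms)
latency_log = defaultdict(list)

def timed(handler):
    """Wrap a page handler and record how long generating the page took."""
    @wraps(handler)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return handler(*args, **kwargs)
        finally:
            latency_log[handler.__name__].append(
                (time.monotonic() - start) * 1000.0)
    return wrapper

@timed
def landing_page():
    return "<html>hello</html>"
```

Every call to a wrapped handler leaves a server-side latency sample behind, which is exactly the easy half of the measurement problem.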

Latency is typically measured in milliseconds, for both server- and client-side. I mentioned it’s easy to add instrumentation to measure server-side latency, but measuring client-side latency is trickier. You generally can’t measure the “speed” of a client directly, nor would it really help. To do so, you’d need to conduct fairly extensive tests sending multiple files back and forth between the client and server, and at the end of the day you’d only know approximately how fast that client (or your average client) is – and, again, this doesn’t tell you the client’s perceived latency of your site. The typical way to measure client-side latency is to include some javascript in the pages you send to the client which calls back to server code tracking those markers. Once you have this, you can measure a few markers:

Time to first byte – how long it takes for your page to start reaching the client

Time to (some key metric) – you might want to measure when some skeleton of the page starts to render, when “the fold” is drawn (the chunk of a page which renders in the initial screen of the browser without any scrolling), or when some other key feature appears on the page, which may be above or below the fold

Time to page loaded – how long it takes for the entire page to reach the client
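Once those callbacks report their timestamps, the markers are just differences against navigation start. A sketch in Python, with entirely hypothetical field names standing in for whatever your beacons actually record (all times in epoch milliseconds):

```python
def latency_markers(beacon):
    """Derive the three markers (in ms) from raw client-side timestamps.
    The beacon keys here are illustrative, not any particular framework's."""
    start = beacon["navigation_start"]
    return {
        "time_to_first_byte": beacon["first_byte"] - start,
        "time_to_fold": beacon["fold_rendered"] - start,
        "time_to_page_loaded": beacon["load_complete"] - start,
    }
```

The point of the sketch is just that the client reports raw timestamps and the server-side collector does the arithmetic and aggregation.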

Each of these matters for different applications, and you need to decide which are most important for your site (though initially it’s probably best to just focus on time to page loaded). Also, injecting all those callbacks doesn’t come for free and will impact the latency of your pages, so it’s common to measure selectively while still understanding the overall health of the site at different times throughout the day. You could exhaustively measure all transactions (all pages, all clients) to get a quick, comprehensive measure of latency and find weak points in the experience of your site, but typically you can add this data collection more selectively and monitor it over time.

So those are some of the key ways to measure latency – once you’re collecting all this data, there are different ways to analyze it. The ways I know best are the ones we use at my current job, so I’ll focus on those. You can look at different percentiles, or at understats. Percentiles are similar to SAT scoring and are usually reviewed at intervals like “p10” (10th percentile – the 10% fastest clients), “p50” (50th percentile – the midpoint), “p90”, “p99”, and “p99.9” (the slowest 0.1%). It might be obvious, but p99 > p90 > p50 > p10, and all points in between. Even if you have a completely homogeneous user base, these will vary. You might find that they are all pretty fast or slow, but you’ll probably find that they vary over the course of the day and week, and that p90 is dramatically slower than p50. They will vary over the day as your services come under higher or lower load, as your hosts undergo maintenance, or as the user base shifts (if your site draws worldwide traffic, then in the middle of the night you’re seeing overseas traffic, which has some inherent latency you can’t easily reduce). When you have your instrumentation and data collection system in place, you can track this over the course of the day and week and identify the tolerances that are important to you. It’s not important to measure all of these immediately, but it is important to settle on a few key thresholds and focus on improvements to those over time.
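A minimal sketch of those percentile measures over a batch of latency samples, using the nearest-rank method (production systems usually compute these over streaming data rather than sorting everything):

```python
import math

def percentile(samples_ms, p):
    """Nearest-rank percentile: the smallest sample value such that at
    least p percent of all samples are at or below it."""
    ordered = sorted(samples_ms)
    rank = max(1, math.ceil(len(ordered) * p / 100))
    return ordered[rank - 1]
```

For 1000 samples running 1ms through 1000ms, p50 comes out to 500ms and p99 to 990ms, which matches the intuition of “the midpoint” and “the slowest 1% starts here.”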

The other main way I understand to look at latency is using “understats.” Understats can be plotted in two interesting ways and are sometimes preferred to the percentile statistics. Understats pivot the same data you’d see from percentiles along an axis which asks what percentage of customers received pages within a certain time – u1000 tells what percent of users received the page in 1000ms, u2000 tells the same for customers receiving the page in 2 seconds (2000ms), and so on. This can be plotted over the course of the day (u1000 might fluctuate from 40%-80%), or aggregated over some time period. In the second approach, latency for the requests is sorted from fastest to slowest and shown in a plot (usually an asymptotic arc) that reveals all understats for that time period (y-axis from 0-100% and x-axis ranging from fastest client (lowest latency) to slowest (highest latency)). The profile of the understats graph plotted this second way can reveal a lot about the latency profile of a site, so I like it for an at-a-glance view of latency, but for regular operations it’s more practical to set up rules like “I never want my p50 to exceed 1200ms” or “I never want my u2000 to exceed 30%” and run operations based on that.
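The understat view of the same kind of latency samples is just a threshold count. A minimal sketch (again assuming a batch of per-request latencies in milliseconds):

```python
def understat(samples_ms, threshold_ms):
    """Percent of requests served at or under the threshold; e.g. a
    threshold of 1000 gives the u1000 number."""
    if not samples_ms:
        return 0.0
    under = sum(1 for s in samples_ms if s <= threshold_ms)
    return 100.0 * under / len(samples_ms)
```

Sweeping the threshold from your fastest to slowest sample and plotting the result gives exactly the asymptotic arc described above.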

[note: this would probably be more illustrative if I added some pictures illustrating these - maybe I'll do a followup post with some of that later]

There are a couple other interesting aspects of latency measurements. You probably won’t have uniform latency across your site. It’s probably important for your landing page to load quickly – if it doesn’t, users will immediately leave the site without bothering to learn what it’s about. So this page might be almost completely static – most server-side latency is passed straight to the client, so building a complex and slow landing page could be death for a site. It might also be important to have this page distributed from edge cache servers or via some CDN. You probably can’t have servers everywhere in the region you’re serving, but you can outsource this to a hosting company (CDN) that has fast servers throughout the geographic region where your customer base is.

You’ll also have different numbers of resources required to render your pages, and consolidating and minifying these helps. Consolidate your javascript into a single file which can be fetched in a single request rather than across multiple fetches – and do the same with CSS. Also, for production, run both through a minifying system that strips whitespace, abstracts variable names to single characters throughout the JS (the client browser doesn’t need friendly variable names to render), and so on. Another technique is “spriting.” This gets its name from video game sprites: the technique involves creating a single, large image containing the multitude of visual components used throughout a page (buttons, icons, and other UI elements) and then using CSS to render the visible portion of a single sprite out of that one resource. Without spriting, you might have 50 small images required to render a page, each requiring an HTTP GET, vs. a single GET which is reused throughout that page – and throughout other pages on your site.

Further on the point that the landing page might (should) be fast – other pages will be more complicated and slower. For this reason, it’s important to have a sense of what classes of pages you serve and to measure and work on them independently. The p50 of your landing page might be 500ms while your slowest page might be 2500ms. If you only look at an aggregated “site p50” you might be able to guess at the weak points, but if you set up instrumentation by type of page, you can identify and improve weak points much more quickly.

Finally, there might be some classes of clients that impact different pages differently. Mobile browsers (phones and tablets) might experience latency one way (at p99) while IE/Firefox/Chrome will probably have a different experience, and these differences might point to simple optimizations to reduce overall latency. Client browser type can be tracked to help you focus on weak points there, too.

Where to from here? There are a lot of frameworks for measuring this and I won’t go into them (mostly because I’m not that familiar with any besides what we use in house, but google’s analytics packages are very good from what I understand). There are also some good tools (like YSlow) that can immediately point to low-hanging fruit to improve your site speed (do you minimize asset requests? is compression turned on for pages sent back? does the server support pipelining?). When you start looking at this space, you can usually find a ton of low-hanging fruit to greatly improve latency – you can probably make a 500-1000ms improvement with minimal effort. After that is when you start getting to the interesting and hard work. So the first month of working on latency should probably be identifying a toolset and making those initial improvements; then you can expand the metrics you look at and get to the more subtle ways to make your site faster.

Somehow this year I made it onto a team for Hood to Coast. I’ve wanted to do this race (or some similar race) for a couple years but always had conflicts. This year Dana, from ChuckIt (who turned out to be much cooler than I’d known or expected), posted to Facebook with an opening on the team she was participating with and I hopped on it. I’m going to try not to turn this into a White River sized post and get to some of the highlights and lowlights.

First, and no offense to any of the participants or finishers, I really felt like this barely qualified as a race. Sure, there is a chip, the course is probably measured, and people can be disqualified, but on the whole, I think this is a race about as much as Bay to Breakers is, and treating it another way is kinda silly. This isn’t necessarily a bad thing, but there are some terrific athletes who’ll find themselves “beaten” (in terms of team clock time and finish place) by total couch potatoes.

It *is* a lot of fun and a unique experience. A lot of teams have very creative themes in their vans, team names, costumes, and so on.

I’ve heard people say that even though it’s only about 18 miles, the need to run, sit in a van, sleep, and run again make it as hard as a marathon. They’re wrong. Anyone who says this hasn’t really tried to run a solid marathon. It’s not easy, but it’s just not even close to being as hard as a marathon.

The logistics are interesting and it’s fun to start to understand the legs. The race is run with 2 vans of 6 people each over 36 legs of varying difficulty on the 200-mile course. The course itself is almost all pavement (which is decent, though trails would sure be nice) with a couple sections of gravel road (which is *awful* – the vans kick up an insane amount of dust), usually running alongside lightly used county roads from Mt. Hood to the Oregon coast. There are a series of wave starts, with ~20 teams starting every 15 minutes from about 4AM to about 6PM on Friday, the slowest projected teams starting first and the elite teams at the end. Van 1 has each of its 6 runners run a leg of the course, then hands off to van 2; while van 2’s runners cover their legs, van 1 speeds 6 legs ahead to the next van transition. After 5 van transitions and 3 legs each for all the runners, everyone winds up in Seaside, Oregon for the finish.

I had leg 11 and in hindsight, I think that’s a pretty good leg. Legs 1 and 7 (the first legs in each van) have a tough time. Leg 1 is terrible because it’s given to some poor sacrificial sucker whose legs will get shredded. It’s nearly impossible to find an elevation map of the entire HTC, and I think there are mainly two reasons: A) it would show that it’s not a hilly course (despite what most of the people doing it seem to act like) and B) it would show that leg 1 is an absurd downhill that looks designed to destroy the legs of whoever runs it. So it’s out. Leg 7 (first leg of van 2) has the downsides of needing to run immediately after sleep and of negotiating the exchange from the other van (both of which are also downsides of leg 1). Legs 6 and 12 (the last legs of the vans) also have the “coordinate with the other van” strike against them, and you need to go directly from running to getting what sleep you can on the overnight transition. So the middle legs in each van seem preferable – not taking into account the difficulty of any of the legs.

The race organizers let way too many people into this thing. We spent a significant amount of time sitting in traffic as vans tried to get into an exchange (especially the last van exchange between legs 30-31). We were one of the later vans to get into these exchanges, and as we were entering, we saw a lot of vans sending runners on foot to get to the exchange so they wouldn’t miss the handoff. This is a mistake that I think should be fixed in future years (my friend Tien said he’s never doing HTC again because of this, and I wouldn’t blame him).

As for my personal race experience: it felt good – great, really – to get out and run kind of hard again. It also felt great to feel like I was really kicking ass, because there were so many middle-of-the-road or weekend runners on the course. I encouraged every person I passed and think they should be proud of their accomplishments, but with this kind of race structure I could see it being a little discouraging when a modestly fit runner gets tossed on the course with a ton of people who aren’t runners at all or have been “training” for a couple weeks for the event. Anyway, due to my screwed up knee I advised that my time projections for the course be set for a 44:00 10k runner, since I didn’t think I should run faster than that. I beat these times on all legs, coming in with an adjusted performance of a 42:00 10k runner. My knee did hurt, but it didn’t stop me – the worst leg was the 3rd, where my knee went from “noticeable” to “hurting” within the first half mile, but it was manageable the whole time. Over my legs, I passed a total of 39 other runners (“roadkill” in the parlance of many of the participants) and wasn’t passed by anyone, which was all nice but didn’t feel like an incredible achievement.

The team support is really nice. I was cynical about this and want to say it doesn’t matter but regardless of whether these people are your friends or even your team, it’s really great to see a handful of people cheering you along on the course. The positivity is borderline overwhelming and at times it’s hard not to think “I bet this is what it would be like to be in TnT…” but if you can suspend that for just about 24 hours, you’ll be happier.

Finally, and in the spirit of the event over the race, here are some things I’d like to remember for future such races:

Costumes – would be fun. Wigs? Makeup? Team theme?

Noisemakers for the van – for runners on the road. Silly string?

Possibly bring a hammock to tie up at exchanges?

A folding chair is great

Look for a race that isn’t all pavement

Try to ensure that the team has one of the giant 15-person vans – consider not doing it in a minivan (though the minivan probably had the biggest share of the vehicle types at HTC)

Every teammate should bring a single bottle with a giant beverage dispenser / nuun / concentrated Gatorade (not a thousand disposable plastic bottles).

The team should either be competitive and stacked with athletes (or otherwise people who actually know their fitness levels) or strictly focused on having fun and not worry about goals – probably the latter.

Ideally join a team that gets seeded in a place such that they will reach the finish well before the course cutoff (we didn’t get to the finish till ~7PM and the course closed at 9PM and it felt like things were already winding down).

I think that’s about it. This was fun and challenging in different ways from any other race I’ve done, but a lot of the mystique has definitely worn off, and I found myself much more interested in the Ragnar series than I thought I’d be. Still, I do hope to go back some day and try a different leg.

I feel a little like I’ve forgotten how to blog. I’m definitely out of practice, with WTPB having been basically down for probably a year and the glory days of MCWOT running on bloxsom now ancient history.

But despite never caring about having a popular blog that gains loads of readers (which is still the case), I need to realize that if I think to myself “how could I be bothered to read / proof this whole thing?” then a post is probably not worth posting at all.

That said, I think I’ll work on another White River post that cuts to the chase. Or maybe not – but I’m going to try not to do that again.

Yesterday was the 2011 White River 50 mile endurance run which I’m extremely pleased to say I completed. I could go on and on about this race (and probably will) but I’ll try to be kind of brief in this recap.

[edit: I failed]

Reaching the starting line

Getting to the race has been a long, long process. I was registered for the 2010 race and went to the course preview runs (2 and 3 weeks before the race, there are group runs of the two halves of the course), but then on the one off weekend before the race, I got bitten by a raccoon in my kitchen – a story I’ve told a thousand times – and it put me on the DNS list. This year I made it to the preview runs again and just kept saying “stay healthy – make it to the race…” I managed to do this, but yesterday morning on the way to the race I got a flat tire on Crystal Mountain Boulevard! I scrambled with Joe’s help to change the tire, he flagged another car down (a woman who could carry word to the start that we’d be late), and we got to Ranger Creek at about 6:32. This was just in time to hear Scott McCoubrey (the race director) calling out “40 seconds!”

Getting to the race and registration

I’m getting a little ahead of myself, though… this year I drove to Crystal with Joe Creighton (2-time finisher and still 8-hour hopeful) and Greg Crowther, Seattle Running Club president and 6th-fastest WR course finisher. We met at the Fleet Feet store, where I learned Phil Kochik and Brian Morrison planned to run WR just to get a qualifying time for the Western States lottery. It’s really amazing and inspiring to be in the presence of that kind of talent, and from such genuinely nice people, but this is probably the kind of thing you either understand (if you’re a runner) or don’t find that interesting (most people), and I probably can’t make it make sense.

So, we drove down to Crystal Mountain, arriving a little after 4PM, and mingled for a little bit before the dinner. We got our bibs and goodie bags, which were refreshingly sparse for an event like this. For the past two years at White River, they’ve allowed runners to pick their own bib numbers. I ran in 144, in honor of my grandfather’s record for points scored in a season when he was an athlete at the University of Iowa. We settled in to our room for a bit, got to the spaghetti dinner, I met Gary Robbins, and we listened to the opening music of the White River video DVD about 3,000 times as Scott told us about the course in the Snorting Elk Cellar.

Joe and I went back to our room for a long night of me crushing his head against turnbuckles in WWE SmackDown vs. RAW on the Xbox and a little bit of Cabela’s Big Game Hunter (aside: is a hunting video game actually any different from a first-person shooter where you just go around killing defenseless enemies?) and turned in for the night. We woke up bright and early on Saturday at 5AM for the 6:30 start. Joe had taken a shower the night before, and I learned that morning that our shower offered temperatures ranging from “frigid” to “how does it stay a liquid form of matter when it’s this cold?” – but I still managed to get ready. We watched a little Fast Times at Ridgemont High and soon it was time to hit the road. We only got about 4 miles down the road from Crystal, though, when I noticed the car running rough. I pulled over, and sure enough, my rear right tire had two holes and was flat. I panicked a little – we had been perfectly on track to get to the race, but now we were going to have to change a tire, and I learned Joe had no experience with this. So I pulled out the spare, jack, etc., and got the tire changed in probably 10 minutes. During this time Joe flagged down another car and got a woman to carry the message to the start, presumably, that we would be late but were on the way. Then we were back on our way. As we pulled in to Buck Creek, we saw Trish Steidl walking Forest away from the start and figured we had missed the countdown, but then we saw the mass of people on the road and knew we were going to make it. I hastily parked, tossed a bag in the Buck Creek drop bag area as I heard RD Scott McCoubrey counting down, Joe got up near the front, and I went back to give Katie, who’d made it down for the race, a hug – and moments later I was the last person to cross the starting line of the 2011 White River race.

To Ranger Creek

The White River course is simply some of the most beautiful terrain I’ve ever run. You can see a map I made of the course here, but it’s essentially 1/2 mile on a road alongside the Ranger Creek airstrip, followed by 37 miles of dual and single track trails, then a 6-mile downhill on the gravel Sun Top Road, followed by more single track trail back to the finish. The first stretch of the course winds along the mostly flat Dalles trail. Through this section I was just trying to stay relaxed and stay slow. The course splits I had projected (and the split projections from the website) called for reaching the first aid station in about 43 minutes, but I got there in 37 and still felt like I’d been crawling. I kept trying to remind myself of this as we made the turn onto the Palisades trail and started the first climb. I love this section of the White River course. The switchbacks in those initial climbs, the waterfalls along the route, and the section you reach shortly after – stairs followed by sharp switchbacks – are incredibly fun and beautiful. Whenever we reach the stairs, I always look up and am reminded of Donkey Kong: the ladders, the repeated cuts back and forth in the trail, and the people ahead on the course weaving along the path.

The viewpoints from the points that jut out from the Palisades trail are all breathtaking. I made the mistake of stopping to admire one yesterday and was pretty intimidated by what I’d signed up for. I could see the Ranger Creek airstrip miles away and thousands of feet below. Further in the distance, I recognized the mountain I’d climb in the second half of the race and identified Sun Top, the other peak of the second half, a couple miles further north. I was only about 6-8 miles into this, and I gained a new appreciation of how long and hard this day would be.

Except for the initial climb through Palisades, most of the rest of this section is pretty straightforward, beautiful trail – almost all perfectly runnable, and so that’s what I did. Having started at the very back of the pack and knowing my goals were pretty modest, I just tried to slowly reel people in and take my time at the aid stations, which I felt I did successfully throughout the day.

Ranger Creek to Corral Pass

Two weeks ago, stretches of this part of the course were impassable. We got a little way up the Noble Nob trail, but in the snow it was impossible to identify the trail at all – all you could do was slide around in the snow – so we cut the training run short of the trip all the way to Corral Pass and turned back. Before race day, Scott and Eric had gone out with shovels and cleared a lot of the trail, and I found the conditions totally reasonable. I really love running with some element of danger, though, and I didn’t mind slipping in the snow at all. On the way through some of the snow (we were on the “out” of the one out-and-back section of the course), I asked another runner for odds on how long until we would see Uli, and sure enough, he came by leading the field a few minutes later. This is probably a double-edged section of the White River course: just about every runner gets to see just about every other runner on the course, but the narrow single track can make some passes tricky. Still, it’s within the first 20 miles, and most of us were fresh enough to avoid any real danger.

So, Uli passed, Tim Olson wasn’t far behind, and I made my way to Corral Pass, congratulating returning runners and telling the people I passed “good work” along the way. At Corral Pass, I saw Katie again – politely staying behind the “crew” line – and got some information about the course splits (which I’d intended to bring with me but forgot in the scramble to make it to the start). At this point I was about 10 minutes ahead of pace for a 10-hour finish, so I took my time here, said hi to Greg and thanked him for volunteering, filled my bottle and Hydrapak bladder, and tried to eat as much as I could before I was back on my way.

Corral Pass to Buck Creek

Exiting Corral Pass I ran past tons and tons of runners on the out-and-back and eventually got back to the clearing, where I was no longer seeing regular or early-start runners. I knew I was way ahead of where I needed to be and had a *lot* of race to go, so I started to walk for a bit and got my iPod out of my backpack; I’d run with it for the next ~7 hours of the race. This whole time I was still feeling good and relaxed, and I knew the really hard part would be the climb up to Sun Top, so I was just taking things easy. I retraced the path through the snow, definitely passing some people there who were probably wondering “what the hell is that idiot doing?” as I flew past in my road shoes, and got back to the Ranger Creek boy scout hut. There, I met Josh Barringer, who was using WR as a warm-up for his first 100-miler at Cascade Crest in less than a month. Josh was battling some dehydration and working to get the wheels back on, which he did over the course of the next 30 miles.

The descent from Ranger Creek is where I started to notice a problem. I’ve been wearing a patella strap since about March, when I developed a knee problem that stopped me from starting Chuckanut and put me on the DL for a month. I wore it yesterday during White River as a preventative measure, but that pounding downhill (you lose about 3000′ over 7 miles) was taking its toll. I wasn’t positive how bad it was, but I was definitely getting worried.

One uplifting story from this descent – as I passed a female runner, I was told “you’re the first guy to pass me who hasn’t had terrible BO!” I thanked her for the kind words and then realized I was reeling in Minimalist Ted who had probably passed her a few seconds earlier. He was probably still within earshot.

I got in to Buck Creek at about 5:04, so I was still feeling pretty good about my goal. I was still ahead of pace and feeling good (though worried about my knee) and – most significantly – had not yet experienced the debilitating cramps I’ve had every other time in my life that I’ve run that long and far (in every other ultra or marathon, I’ve gotten terrible cramps between 60-80% of the way through the race). Katie was here again, incredibly eager to help and incredibly helpful. I was difficult to help because, despite the race going well, I was definitely tired and needed time to figure out what I needed. So, I ate, chatted, drank (a lot), filled my bottle and bladder again, ate some more, and after what was somehow nearly 15 minutes, got back on my way.

Buck Creek to Fawn Ridge

I thought the flat trail section exiting Buck Creek would be a good place to run, but I’d drunk too much Gatorade to execute on this. Plus, maybe because I’d stopped so long at Buck Creek, my knee was really starting to hurt. After winding through the rolling trails toward the Sun Top trailhead, I saw Katie again before the turnoff and was so happy to see her, but I was starting to feel demoralized about my knee. I told her (somewhat ridiculously, in hindsight) that I thought I might need the full course limit – my knee was in a bad way, but I was determined to finish. She told me I was a stud and wished me well as I headed to the second climb.

This is where the hard part came in. In past ultras, I’ve gotten to a point of exhaustion where my approach is to try to run as much as possible, but just get through the race. Often this has meant that if I can get 5 steps in running on a flat or downhill section, I’ll do it, but a lot of the uphills I just have to power walk. I realized yesterday that with my knee, that would not fly, because running the downhills was just impossible. I could try, but the pain was definitely getting worse with every mile, and a lot of the downhills were slower than the climbs. This put me in an interesting place, though, where I actually still had strength to run, and any time I passed anyone, it was on an uphill. A small group of us made our way through the exposed switchbacks on this section of the course. When I caught a group, I felt content to stick with them for a while, because I expected to lose time on the Sun Top Road and wanted to keep something in the tank for the Skookum trail (the final leg of the race). Eventually I made it to the Fawn Ridge aid station, where the excellent volunteers had a luau-themed party. This was really fun and uplifting (they had decorated the last ~1/10 mile before you pop out on the road with inflatable pool animals) and I was glad to get there. I had struggled to get there, though, and was still demoralized about my knee, but I was surprised that despite making what I thought was terrible, terrible time, I was still on or close to 10-hour pace. I took a long time at this aid station again, though, and talked briefly with Brian, who had decided to drop at that point. I was a little freaked out, too, because one of the volunteers came up to me and said “you’re Patrick, in 144?” – “Yes” – “We had news that you’d dropped – you’re still in the race?” – “Why yes I am!” I did not want to move up from last year’s DNS to a DNF…

Fawn Ridge to Sun Top – the death of a dream

This is definitely where things started to take a turn for the worse. There are a couple false summits between Fawn Ridge and Sun Top and my knee was killing me with every one of them. I was losing a ton of time on each downhill where I would basically walk and, for the first time in the race, get passed by a bunch of other runners. Once or twice I actually had to stop and step off the trail. Once I stopped to pee and realized that I was very dehydrated. So, I forged ahead as best I was able, but probably lost 15-20 minutes in this leg from my planned pace. Despite this, I was happy with the fact that I did keep going and was able to replan my strategy around trying to stay strong on the climbs and lose as little on the descents as possible.

One more note: more than most sections of the course, this part rolls a lot. I could manage decently on fairly flat sections, but this part is mostly gradual climbs or descents, which were either challenging (but semi-doable) or too painful to attempt to run. Eventually, though, I made it to the Sun Top road crossing and lumbered up the 1/2 mile to the top. As happy as I was to see Katie at every other point on the course, seeing Glenn Tachiyama was possibly the happiest sight on the course to that point, because I knew “Glenn is here at the top of Sun Top to take our pictures, and I have finished climbing the second mountain in White River.” It was awesome.

I stayed a while at the top of Sun Top getting sprayed, eating, drinking, and putting off what I knew was going to be a torturous descent to the last aid station at Skookum Flats. I also basically knew there was no way I would make 10 hours, but knew I would finish and also knew I would finish in the qualifying time to enter the lottery for Western States. I doubt I’ll ever run a 100 mile race and I’d still like to run a regular marathon that would qualify me for Boston some day, but in terms of running accomplishments, I’m proud of this, even with my eventual 10+ hour finish.

Sun Top to the finish

The descent from Sun Top was brutal. I started running and made it about a 10th of a mile before I had to walk. Then I walked backwards. Then I tried to walk forwards and then walked backwards some more. I tried a lot of things with varied success (shorter strides, only running on the shallower parts of the descent), but I basically walked for four miles. All the while, I was getting passed by people I’d seen on the course who I didn’t want to be passed by and who I’d last seen a long, long time ago. But there I was, so I just resigned myself to the realization “OK, I may as well rehydrate now, relax, and just do the best I can, and plan on running the flat section in Skookum…” I got through the first 5 miles of the descent (the real “hill” portion of the descent) at about a 13-minute-mile pace. On the course preview run I did this descent at something under 6:30 miles, and while I didn’t expect that yesterday, it was still disappointing to be so close to the finish and unable to run – I wasn’t missing my goals because I was giving up; I just had no choice.

Eventually I got to the flat section, though, and now I was ready to tear shit up. I ran well to the Skookum aid station. I stopped, met Josh again and a couple others I’d met on the preview runs, looked around for Katie but realized she was probably at the finish because Joe may have finished by then, and I was on my way. Once I got into Skookum I was totally invigorated. I couldn’t believe I was running this well 45 miles into a race – still no real cramping and basically managing well. The high didn’t last all the way through Skookum, and I walked a couple sections, but I did far less walking than many of the people I reeled in on the way through the trails.

Having done the course preview runs twice now, I realized at one set of turns that I was finally getting close to the end of the trail, where you turn onto the road for the last ~1/4 mile to the finish line. Once I got there, Glenn was trumped again for “best sight on the course” when I saw Katie in her green top standing up on the road back to Buck Creek. I knew where the guys in front of and behind me were at this point, so I took a short breather to tell her how happy I was to see her, ask how Joe and Uli finished, and enjoy the moment. Eventually the next guy behind me popped out on the road and I said “OK, I have to run now – he can’t catch me,” and I must have been tearing up the road to the finish at at least a 9-minute-mile pace. It felt good to run, good to be so close to the finish, and incredible to not feel my whole body breaking down after such an incredible event.

I crossed the finish line in 10:41 and change, someone (Crowther?) gave me a bottle and a Scott trucker hat, and the race was over.

Aftermath

I socialized for a bit, ate, got a daisy from Katie, met Neil and his wife, who were there hanging out with the McCoubreys, congratulated Uli (actually, I interrupted him as he said he was “teaching Forest to dumpster dive”), ate potato chips, iced my knee (thanks again, Katie!), talked with Josh and Paul about their races, cheered as more finishers came through, pet Jack, and filled myself to the point of explosion on chicken, salad, and really-not-very-much-food-but-all-I-could-handle. Katie offered to leave early and bring Io home from the dog boarding. Later, Joe, Greg, and I started the ride back to Seattle on my spare tire. We stopped in Enumclaw, where I checked the bolts on the spare at a McDonald’s and treated us to Rolo McFlurries (which, by the way, McDonald’s does not offer in a 32oz size), and then we were back on the road. I got home about 9:30, hobbled up the stairs, and Io, Jupiter, and I told each other how much we’d missed each other before I ate a few more snacks and hit the sack after the most physically demanding day of my life.