Geologic map of San Francisco’s Marina District, adapted from US Geological Survey Professional Paper 1551-F

Say what you will about San Francisco’s Marina District. It has its fans and its detractors, and plenty of stereotypes to go around no matter which side your opinion falls on. But today, the anniversary of the M6.9 1989 Loma Prieta earthquake, everyone’s heart should go out at least a little bit to the part of the city that sustained the most damage. While Loma Prieta luckily didn’t come close to destroying San Francisco in a way comparable to 1906, the Marina suffered more structural collapses – and more fire – than anywhere else in town that day.

Unfortunately, though, the reason the Marina came closest to getting wiped off the map that October afternoon ties directly into the way the Marina put itself onto the map to begin with: artificial fill. The land of the Marina is not natural land at all. It’s a combination of sand and brick and wreckage put in place to recapture some real estate from the Bay. And when it comes to earthquakes, that’s a very bad kind of land to be on.

1851 U.S. Coast Survey map of the cove that eventually became the Marina District (USGS).

The shoreline of San Francisco today has been altered substantially from its natural state, both to make a better docking area for ships and to increase square footage for development as the city became more established. This early filling was mostly concentrated along the Embarcadero, the Financial District, and SoMa, and it used all kinds of material, from sand and rock to sunken ships. Some filling went on in the Marina too, but that was effectively the outskirts of town at the time. By the end of the 1800s, some fill had been placed to round out the edges of the natural cove and cover over a slough, several piers extended out into the Bay, and a seawall had been built – but the seawall still enclosed a large area of marshy water.

The Marina in 1912, shortly before the landfill for the 1915 Exposition began. The entire watery area enclosed by the seawall is full of houses now (USGS).

The real landfill action in the Marina started in the wake of the 1906 earthquake, though not in the most obvious way. Most of the 1906 wreckage was dumped into Mission Bay, though some of it did make it into the northern edge of the Marina. Rather, the final filling of the Marina came with San Francisco’s way of showing everyone that it had overcome the cataclysm of 1906 and was a world-class city once again: the 1915 Panama-Pacific Exposition. Between April and September of 1912, the remaining water in Marina Cove was filled in with sand and sludge to make way for the temporary but grand expo buildings. Structures that were never meant to last were probably the best sort of thing to put on this not-really-land, but once they were gone, more permanent ones took their place, setting the stage for the major problems that came with Loma Prieta – and that will come with other large Bay Area earthquakes.

So, what’s so bad about this fill? First of all, it shakes harder than solid rock does. When a seismic wave travels through a material, it has a certain energy budget that must be balanced between shaking and moving forward. The more the wave shakes the material from side to side, the more slowly it travels. In solid rock, everything is consolidated, so there’s not much side-to-side shaking, and waves pass through quickly. With softer stuff – and especially with completely unconsolidated fill – each particle can be shaken from side to side quite a lot, so more energy goes into shaking and less into propagating, which means stronger shaking that lasts longer. And because the water table in the Marina is still so high, liquefaction – the process of seismic shaking stirring water and soil together until they basically become quicksand – compounds the problem. Add an earthquake to artificial fill and you have a recipe for destabilized foundations, separated pipes and gas lines, and any number of other problems.
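To put a rough number on that intuition: one back-of-the-envelope way to see the amplification is conservation of energy flux, which says wave amplitude scales with the inverse square root of the seismic impedance (density times shear-wave speed). This little sketch is purely illustrative – the velocity and density values are my assumptions for generic bedrock and loose saturated fill, not measured Marina properties, and it ignores resonance, damping, and nonlinear soil behavior:

```python
import math

def amplification(rho_rock, vs_rock, rho_soil, vs_soil):
    """Rough amplitude amplification from conservation of seismic
    energy flux across an impedance contrast. Ignores resonance,
    damping, and nonlinear soil effects."""
    return math.sqrt((rho_rock * vs_rock) / (rho_soil * vs_soil))

# Illustrative numbers only: generic bedrock vs. loose, saturated fill
rock = (2700.0, 3000.0)   # density (kg/m^3), shear-wave speed (m/s)
fill = (1800.0, 150.0)

print(round(amplification(*rock, *fill), 1))  # → 5.5
```

Even this crude estimate suggests severalfold stronger shaking in the fill than in nearby rock, before liquefaction makes anything worse.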

I find it simultaneously sobering, ironic, and spooky that fill largely placed for a celebration of recovery from one earthquake led to such concentrated destruction in the next. It should also absolutely be a cautionary tale: despite the successful recovery from Loma Prieta, the underlying conditions that produced the damage have not gone away, and they will produce the same effects in the next big one. The ground underneath the Marina is definitely working against it. It certainly wouldn’t be a bad thing if a fanatical fixation on earthquake preparedness worked its way into the stereotypical Marina culture.

There’s a lot of frustration and restarting and re-coding and waiting inherent in numerical modeling. Once everything works, running the models needed to answer a given question is just a matter of processing time – but until everything finally works, there’s a real need to vent those frustrations somewhere. My personal venting method, as many of you have probably seen me do, has been kvetching about obscure details of the code I use on Twitter. But I may have to get a little more creative, because Jennifer, one of the other grad students in our lab, came up with pretty much the best model-frustration vent I have ever seen.

Jennifer was having trouble setting up her fault models in a way that they actually ran. She got so fed up with them on Tuesday that she went home early and baked a cake. A cake structured and decorated like her study area.

The area in question is the San Gorgonio Pass in southern California. This is a nice simple place to run a freeway through, if you’re trying to get from L.A. to Palm Springs, but if you’re a fault trying to traverse the area? Good luck. A colloquial name for this area among the seismological community is the San Gorgonio Knot, because the easily traced, continuous part of the San Andreas Fault ends here, giving way to a complex mess of other strike-slip faults and thrust faults, which may or may not be connected, and may or may not be able to rupture together. The question of whether you can have an end-to-end rupture of the southern San Andreas is drastically complicated by the San Gorgonio Knot. There’s a ton to study here, both in determining what the fault structure actually is, and in figuring out how a rupture might traverse from fault to fault.

Jennifer is working on one of the many fault junctions within San Gorgonio Pass. This one involves a bent thrust fault that is joined (or possibly cut) at a nearly 90-degree angle by a right-lateral strike-slip fault. While she’s still working on making a viable mesh of this fault junction, she very successfully made a viable cake.

The step in the surface of the cake represents the thrust fault, its scarp, and the San Bernardino Mountains behind it. The little line of butterscotch chips marks the trace of the strike-slip fault.

But surface geometry is only part of the issue in understanding San Gorgonio, and Jennifer’s cake takes that into account. She cut the cake subparallel to the strike-slip fault to reveal a wonderful offset in the chocolate-cake/white-cake sedimentary layers right where the thrust fault cuts in. Also note the marble cake used to represent the metamorphosed roots of the mountains.

Another wonderful thing about this method of venting about fault-modeling frustration is that the rest of the lab got to partake. And it was delicious.

This gives me all kinds of ideas for cakes inspired by my own work. This will require me learning how to bake without setting off my smoke detector, but I think that may be a worthwhile pursuit. Have any of the rest of you made baked goods (or other food) inspired by your research? That might just be an Accretionary Wedge topic in the making, yes?

(For those who are wondering, yes, I have Jennifer’s permission to blog both about her cake and about her study area.)


Today, I flew on Alaska Airlines from ONT to PDX. I had a window seat on the right side of the plane. If you are geologically inclined, or a fan of natural landmarks of California and Oregon, you will want to do this too at some point.

Here’s why:

The San Andreas Fault. (I was even able to see the infamous Palmdale Roadcut through the zoom of my camera, but the actual photo turned out blurry.)

The Garlock Fault.

The Sierra Nevada. I’m about 75% sure that the serrated-looking mountain two thirds of the way to the top and two thirds of the way to the right is Mt. Whitney.

Yosemite. Mono Lake. We also flew past Long Valley Caldera, but it was far enough in the distance that the photos of it ended up too washed out to make it clear what it was. Pity.

Lake Tahoe. And then we got into the Cascades. Shasta was visible out the left side of the plane. I could see it past people’s heads, but I didn’t take a picture because that would’ve been very awkward. Not to mention difficult to frame. There were also a few smaller volcanic-looking features out the right side of the plane before we got into the big stuff.

Three Sisters. Mt. Jefferson. Clouds, you’re cramping my style. Mt. Hood, barely. And Mt. Adams, even more just barely. I imagine Mt. St. Helens would’ve also been visible if it hadn’t been cloudy. Or if it hadn’t blown half of itself into oblivion in 1980, but yeah.

And all of that in the span of two hours! You want to book this flight now, don’t you?


I really truly always mean to liveblog the conferences that I go to. Each time, I tell myself that I’m going to be on top of it, and that I’ll sum up the thoughts of the rampant livetweeting into a post each night. But then the discussions go on past the end of the sessions, and then there is dinner and the inevitable conference beer, and then more discussion, and before I know it, I realize I’d better get to bed if I want to stand a chance of making it to that really interesting-sounding 8:30 AM talk. And then I end up not blogging.

This is definitely what happened with the most recent conference I went to, the annual meeting of the Seismological Society of America, which was this past 17-19 April in San Diego. I was so occupied with discussions and brainstorming and really interesting science that I didn’t even get much of a chance to explore San Diego, let alone to sit at my computer and write. But given the content of the meeting, I view this as a good thing.

I can honestly say this was one of the best conferences I’ve ever been to, and for several reasons. One is just the size of the meeting (and SSA’s meetings in general): enough people for there to be a wide range of talks on varied topics, but still small enough to find everyone you might want to talk to, and with enough time for discussion. Another was that I was very excited to give my first invited talk at a major conference. Not only was my presence specifically requested, but I got to share some really exciting new results. I felt like my talk went very well, and for the rest of the meeting, people kept coming up to me to say they enjoyed the talk, to ask if I could use this modeling method to look at some other sort of problem, or to suggest things to read and look at. That was good!

The really outstanding thing about this meeting, though, was the programming and the conversation that followed. A lot of the sessions at this year’s SSA had direct relevance to the types of problems I’m working on in my own research. There was a session on determining inputs for models, and on what observational people and modeling people might want to see from each other. Since I am a modeler, and since much of my work is very sensitive to how detailed the inputs can be, this was an excellent way to start off the meeting. The following day, there was a debate on fault segmentation, how it’s defined, and how strict/permanent those boundaries between fault segments are, if they even exist. That was followed by a debate on probabilistic hazard calculation, in which defining fault segments is a major element. I look at fault geometry issues, and geometry is a possible definer of segmentation, and I hope that my work can feed into probabilistic hazard work, so it was really good to hear these debates and consider how they might inform my work. Lastly, the final day of the conference was almost completely full of all sorts of talks about the San Jacinto Fault, from observational to structural to paleoseismic. I’m currently working on the San Jacinto, so this session couldn’t have come at a better time. The discussions after this session gave me an idea for another paper, and opened the door for several possible collaborations. Extra win!

There were also two conference field trips, and I had a hard time deciding which to sign up for: one was to look at paleoseismic rupture evidence on the southern Elsinore Fault, and the other was to visit the giant shake table at UC San Diego. I ended up going on the Elsinore trip, which was excellent (though hot), and which deserves a post of its own.

I have gone to the SSA meeting three times now, and while it has always been very good, this year’s content and discussion put it up there among the best meetings I’ve ever gone to. And given all that conversation, all those talks, plus that field trip, I realize I should not feel so guilty about not liveblogging this time…

(But I do feel guilty about leaving this unfinished except for the last paragraph, then forgetting about it, and then not posting it until the middle of June. D’oh!)


Recently, I came across a link to the California Digital Newspaper Collection, which includes scans and full texts of many newspapers from around the state, dating from the 1840s to the present. It’s a fantastic resource, and yet somehow one I did not know about until a month or so ago, never mind that it’s operated partly by UC Riverside. Surely, this database is enough to make any California history buff squeal.

But out of this whole database, I’m sure you can all guess which city’s papers I looked up first, and which dates.

Yes, the famous Call-Chronicle-Examiner is in there, the April 19th, 1906 joint issue between San Francisco’s three major newspapers, printed in Oakland after newspaper row at Third, Market, Kearny, and Geary got all but obliterated, detailing how the rest of the City was sure to be completely wiped off the face of the map before the day was out. It’s not just the famous headline there – it’s the whole thing, its full dire and frantic text. I was definitely excited to be able to read the full issue associated with that famous headline.

The database also has the Call’s regular edition from the morning of April 18th itself, and I marvel that any copy of that paper still exists. I admittedly don’t know what the standard paper route distribution times were like in 1906, but given that the M7.8 mainshock occurred at 5:12 AM, I cannot imagine that it had been long since the copies had finished printing, let alone made it into the outer reaches of the city, where the flames did not reach. The burning of the Call Building is one of the most iconic images of the 1906 earthquake and fire. I am impressed that any piece of paper found its way out of the building and lasted long enough to make it onto a 21st-century scanner.

The Call Building withstood the earthquake, but was completely gutted by the fire.

The survival of this individual copy might be near-heroic for a piece of ephemera, but what the paper provides is a far more mundane look at the City that was nearly lost and forever changed in the predawn of its release date. And it was mundane – a slow news day, even. Enrico Caruso’s performance with the Metropolitan Opera, described in so many histories of the disaster as a prime example of pre-quake opulence, does not even make an appearance until page 5 (apparently Caruso was great, but the production “as a whole, however, is lacking to some degree in distinction”). No, the front-page news is of how an insurance magnate was arrested for driving on the wrong side of the street, how a rich merchant married a nurse who helped bring him back to health after long illness, how Standard Oil is worth a huge amount of money as a company (some things don’t change, earthquake or no earthquake), how the son of a local millionaire is in jail for forging his father’s signature on a check, how one toddler shot his playmate while another child went of his own volition to earn money to support his family, how the Supreme Court has invalidated divorces granted by states in which the former couple did not reside. Later pages have news of bad weather in Shasta, of how one Reverend Crapsey (what a name!) is on probation by the church for violating Episcopal teachings in his sermons, how the Czar can finally pay some of his debts, how plans for the County Fair are going, and, even, how some people were lucky to escape a warehouse fire in North Beach. News is news – things that happened in the past day, things of some note or merely local interest. I do doubt, however, that much of what was in this paper would have had much bearing come April 19th, even if April 18th had been a completely normal day.

Why yes, it was warmer that Wednesday! Just not in the way this forecast meant.

This newspaper shows a city bustling along in its day-to-day life, entertaining itself, worrying about its money, indulging in gasps over scandals, being tempted by flashy ads (towels are on sale – don’t forget yours!), cheering victory for the home team… This newspaper shows a city doing the kinds of things that cities do now, but the details within, the particular zeitgeist filling out those template topics – those were cut short the day this paper was printed. This is the last day of Old San Francisco, and the fact that it was just any other day drives home how completely sudden the earthquake was, how completely consuming the fire was. One last normal day, then forty-five seconds of movement leading to three days of inferno, leading to a long tail of recovery before the City could have slow news days full of template articles again.

Of course nobody had any idea what was coming, or that they needed to tidy up their business by the end of April 17th. Hardly anyone knew that earthquakes and faults had anything to do with each other before the 1906 earthquake. (Only a handful of geologists had a concept that such a thing as “The San Andreas Fault” even existed!) The fact that we know this connection now, that we can study and model faults as the earthquake source, means we certainly have a better understanding of earthquake hazard in San Francisco, the rest of California, and the rest of the world. But we still can’t predict the next Big One any better than we could in 1906. Haiti had no idea in 2010, and Japan’s foreshocks still weren’t enough for anyone to anticipate what was about to happen in 2011. The next one in California, in San Francisco, will be just as sudden and unexpected. We won’t know it’s coming until it’s here.

I am grateful that the topic of this month’s Accretionary Wedge was expanded from just countertops to include any sort of decorative rock, because I grew up in a sadly formica- and linoleum-laden household. Sure, there were nice hardwood floors (once we got rid of the horrible ’70s burnt orange shag carpet), but there was no decorative rock to be had. Goodness knows there are plenty of stone monuments in the greater Washington D.C. area, but I am not going to write about those either. Instead, I’ll be writing about an unassuming floor nearly halfway across the world from where I live now.

I briefly visited London in the summer of 2010, and one of the exceptions I made to my usual short-trip-to-a-city method of wandering around and not actually going into places was the Natural History Museum. I would go into how much I loved this museum, how much I enjoyed the geology exhibits, and into my small complaint that the Kobe earthquake simulator room didn’t shake hard enough that you’d notice it was moving if you weren’t standing completely still yourself, but the prompt is not about our favorite museum exhibits (though that could be a future topic, if it hasn’t been done already), so I digress.

The floor in question is on one of the staircase landings coming down from the geology exhibit. I don’t normally spend time staring at floors, but as someone who studies faults, I guess I have some sort of mental radar that picks up on this sort of thing:

A brochure corner doesn't make for the best scale ever, but that top edge is about an inch and a half long.

I don’t actually know anything about this rock. I assume it’s marble (though I’m the first to admit that rock identification is not my strong suit), but I have no idea where it’s from or how old it is. There are definitely at least two episodes of deformation in here, though: a ductile phase, during which the long tight folds occurred, and a later brittle phase, during which the faults formed and offset the folds.

But anyway, I was both excited to see some faulting in the floor this close to an actual exhibit about faults, and disappointed that the museum didn’t have any sort of sign or suggestion to actually look for this. Though I suppose it could be for the better to not have people staring at the ground while going up and down a staircase…


The El Mayor-Cucapah earthquake was a M7.2 centered in Baja California, Mexico, that occurred on 4 April 2010. It was the largest earthquake in Baja California since 1892, and the largest to shake southern California since the M7.3 1992 Landers earthquake. It was also the most significant geologic event that I personally have experienced.

The earthquake hit in the middle of the afternoon on Easter Sunday. Seeing as I have not celebrated Easter since I stopped believing in giant bunnies with eggs, and seeing as the first draft of my Master’s thesis was due to my committee by the end of April, I was working in the lab that afternoon, all by myself. I had just opened – I swear I’m not making this up – a pdf of an article on ground motion in the greater Los Angeles area when the lab started to shake.

The initial shaking didn’t feel like much. It was side to side, but not a very big motion. My initial thought was that these were surface waves from an upper-4s earthquake on one of the local faults; it reminded me of the shaking from events that size on the Elsinore and the San Jacinto. But then it got stronger, still side to side, and I went, “Huh.” And then the Rayleigh waves started. These were textbook Rayleigh waves: big rolling motions, but fortunately not very hard or fast. The amount things were moving, no matter how slowly, and the fact that this had been going on for a while, finally triggered my “Oh crap, this is a big earthquake” sense, and so I got my ShakeOut on and dropped, covered, and held on. I knew it had to be far away, and I hoped it was in the desert and not, say, under downtown Los Angeles.

I’m a bad judge of time, so I’m not sure exactly how long it lasted, especially since I didn’t notice the P waves. Judging by the amount of music that had elapsed on iTunes, I’d say a good 25 seconds. I stayed under the desk for a bit longer, then got up and immediately started flailing all over Twitter, alternating giddy excitement with as much proper science as I could find. (I tried to go back that far in my Twitter feed to show how ridiculous I was being when the preliminary magnitude was posted. I can only go as far back as December 2010, but I distinctly recall tweeting “six point nine” several times in a row before it got upgraded to 7.2.)

So, that was exciting! And distracting! I didn’t write any more of my thesis that afternoon, that’s for sure. But the most exciting part was yet to come. Namely, about two hours later, I got a text message that said something to the extent of, “Anyone want to go install GPS equipment and look for the fault in Mexico tomorrow?”

Um, yes? Hell yes.

We left Riverside at the crack of dawn the next day, with a Subaru Outback and a Ford Focus full of GPS equipment. We stopped in Calexico, which didn’t have all of its electricity back, but had no obvious damage where we were, then met up with a group from UC San Diego before we all crossed the border. They just waved us across. Immediately, it was clear that Mexicali had fared worse than Calexico. It was by no means total devastation, but most of the power was out, and there were a lot of broken windows and unhappy-looking building facades. But there was also proof that reinforcing your masonry helps: we saw a lot of squat cinder block buildings with no damage, and the ones that did lose some chunks had their reinforcing rebar left sticking out for all to see.

The building on the left was unreinforced, the one on the right had rebar reinforcing the cinder blocks.

As we drove toward the epicenter, the damage became more severe. We did see a few completely collapsed structures, as well as some nasty cracks (that were not offset) in the road. There was also a lot of evidence for liquefaction, despite this being the desert; the area south of Mexicali is irrigated and agricultural. The worst example we saw was a house whose front yard had basically turned into a pond.

Flooding from liquefaction: a hazard even in the desert.

There was no evidence of offset, or even cracking, at the coordinates we had for the epicenter. I’m not sure if this is due to the dip of the fault, or if the coordinates we had that early in the game weren’t quite correct. At any rate, we stopped there – a rocky foothill at the base of a larger mountain – and decided to set up three GPS stations in the vicinity, corresponding with past USGS placemarks, in order to track the postseismic deformation in the area. It is rather difficult, I can tell you now, to set up your tripod in a completely level manner when there are aftershocks rolling past you fairly frequently. We could feel even the small ones, and the idea that these are propagating waves was really driven home: they felt like someone was rippling the ground below us like a carpet, but just one ripple and then it was gone, as opposed to consistent shaking. Even with no structures around to shake and make noise, there was still a deep audible rumble with each aftershock.

We ate lunch near our GPS stations, small aftershocks rippling past the whole time. I lost track of exactly how many I felt. After lunch, we decided to split up – some people went to go scope out other good sites for GPS, and some went in search of the fault. I was in the latter group. We were all expecting to find surface rupture, since the crust in Baja California is so thin (on account of being so close to a spreading center), and we had been surprised that we didn’t see any such thing at the epicenter. We knew there would be hordes of mappers out there eventually, finding each fault strand and conjugate fracture, but we wanted to see it for ourselves, because how often do you really get to see fresh surface rupture?

We set off looking for the Laguna Salada Fault. At the time, it was thought to be the prime candidate fault for the 2010 earthquake, since it was the one that ruptured in a similarly sized event in 1892, which caused damage in the same area. But the Laguna Salada Fault didn’t want to show us anything. We found plenty of liquefaction features, and lots of spreading cracks in the road, but no long linear offsets. It was perplexing, and given how early we’d all woken up, also quite frustrating. But I, for one, was not about to leave without finding surface rupture, no matter how exhausted we all were, so I insisted we try going further west on Highway 2.

And that’s where we found this:

Oh. Em. Gee.

Seventeen strands of right-lateral strike-slip rupture slicing the freeway, and slowing cars enough that their drivers could really gawk at what the fault had wrought. Construction crews were already at work patching the road, so we wasted no time. We parked on the shoulder, hopped out with field notebook, measuring tape, and handheld GPS unit, and measured, photographed, described, and took waypoints for all seventeen strands. We measured a total of ~1.2 meters of right-lateral offset across the road.

As it turns out, the reason we hadn’t found the rupture where we thought it would be is that the rupture wasn’t actually on the Laguna Salada Fault, at least not that far north. This was a very complex earthquake, and it ruptured several faults that hadn’t previously been identified or named. As more data were collected, researchers determined that the strike-slip faults involved were actually subsidiary branches connecting to a larger normal fault – not actually surprising, given that Baja is in the transition between a transform boundary to the north and a spreading center to the south. After I read that paper, I realized that I had photos showing the normal fault contact as well as photos showing the strike-slip ruptures:

Roadcut along Mexico Highway 2. Left: obvious contact between different rock types, with no evidence of rupture. Right: multiple strands of rupture in the hanging wall material.

We were pretty excited to find all of this surface expression. We were also very punchy and exhausted by this point. As a way to cap off the day, we decided to take inspiration from that famous photo of a woman standing next to the 1906 rupture on the San Andreas Fault, but with our rather less proper 2010 twist:

Though who knows, maybe the 1906 shot was considered a wholly undignified pose at the time.

Crossing the border back into the United States took forever, since the northbound crossing structure in Mexicali had sustained damage in the earthquake, so everyone went to the next crossing over. Even the frustration of that (and of getting back to Riverside at 3 AM) was not enough to negate the excitement of this major seismic event, and of being part of the immediate field response to it.

Obligatory offset fence.

All photographs in this post (except for the 1906 one, obviously) are my own. If you’d like to use them for anything, please ask me first.

There is a poster that displays the results of my latest models, all happily rolled up in its poster tube, sitting in the back seat of my car and waiting to be driven to San Francisco in the morning.

Getting it into that position, however, has probably been the biggest struggle of my conference-poster-making experience so far. Seriously, this whole process has been a veritable checklist of things that can go wrong with numerical modeling, ranging from code idiosyncrasies to outright computer fail to human error.

Allow me to elaborate:

The new, fast code does not yet support complex fault geometry. It doesn’t even support planar faults that aren’t oriented at some multiple of 90 degrees.

So we decided to use the old code, which is a mid-’90s hack of a code that goes all the way back to the ’60s. The old code requires an ornery old mesher, which was the subject of my entire last blog post. Meshing took almost a week, but it looked very pretty.

Except for the part where, because the mesher is old and the code is old, sometimes they both get pathological and don’t work with each other. Why this happens is unclear. My advisor is the one who turned the ’60s code into an earthquake dynamics program, and even he is not sure why this happened. It’s happened in his models before. He’s had to abandon some meshes. And it happened in my models. My big beautiful complex mesh, useless. (Maybe because I blogged about it? That would be my luck.)

So we went back to the new code, in the interest of time. This meant modeling a complex stepover region with bends in the individual strands as…two planar faults. Sigh.

I ran a suite of models with different nucleation points. Because everyone else was also running models, they went reeeeeeeallly reeeeeeally sloooowly. It became evident that, with the current goal of the poster, there was no way I’d be able to run all of the models I’d need to represent nucleation location – stress state – fault basal depth parameter space.

So we decided to focus on the different stress states. I ran a bunch of models with fixed nucleation points and different stress drops and fault strengths. I used exactly the same stresses as in the old code. I set up these five models to run overnight.

The computer turned itself off overnight. The models did not run.

So I ran them again. They finished. I post-processed them. And none of the model ruptures propagated past the forced nucleation point. As it turns out, the new code forces nucleation in a different way than the old code, and requires higher stress amplitudes to actually work.

I multiplied all the stresses by three. I checked, by hand, to make sure there would be no issues with the critical patch size required to slip before sustained rupture could occur. I ran the models again.
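That hand check is essentially a back-of-the-envelope calculation. Here’s a minimal sketch of the idea in Python, using a generic slip-weakening estimate of the form L_c ~ μ(τ_s − τ_d)D_c/(τ_0 − τ_d)² (the geometry-dependent prefactor is omitted, and all of the numbers below are illustrative, not the values from my models):

```python
# Hedged sketch: a generic slip-weakening estimate of the critical
# nucleation patch half-length. All numbers here are illustrative,
# not the parameters from my actual models.

def critical_half_length(mu, tau_s, tau_d, tau_0, d_c):
    """Rough critical patch half-length (m) for sustained rupture."""
    strength_excess = tau_s - tau_d   # peak minus sliding strength (Pa)
    stress_drop = tau_0 - tau_d       # initial minus sliding stress (Pa)
    return mu * strength_excess * d_c / stress_drop**2

mu = 30e9        # shear modulus, Pa
tau_s = 70e6     # static (yield) strength, Pa
tau_d = 50e6     # dynamic (sliding) strength, Pa
tau_0 = 60e6     # initial shear stress, Pa
d_c = 0.4        # slip-weakening distance, m

L_c = critical_half_length(mu, tau_s, tau_d, tau_0, d_c)
# The forced-nucleation zone needs to span several times L_c,
# or the rupture dies right at the nucleation patch.
print(f"critical half-length ~ {L_c / 1000:.1f} km")
```

The point of the check is the comparison at the end: if the nucleation zone (and the mesh elements inside it) can’t resolve a few multiples of L_c, the model never gets a sustained rupture.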

The models worked. None of the ruptures jumped, even the ones with extreme stress drops. Then I realized that the coefficients of friction in the macro I’d used to calculate the regional stress field were not the same as in the model input file. They were left over from an older project. Because of this, I hadn’t run the models I’d meant to run, so I needed to run four more.

I ran them. They worked. I put them on the poster and printed it…in the middle of this afternoon. What deadline?!

The poster printer broke down on the very next job after my poster. I guess that’s a little luck?

And so here we are. The data is on the poster. The poster is in the car. In the end, this is where things need to be. And in the morning, I will go to where I need to be, and I am very much looking forward to this conference, despite every ounce of frustration that led up to it.


Anyone who follows me on Twitter has surely seen me kvetching a whole lot about fault meshing lately. I figure, given my whines and curses, that I owe you all an explanation of what exactly it is that I’ve been doing.

I’m a fault/rupture modeler. I try to answer questions about fault physics, hypothetical earthquakes, and ground motion by way of computer models. In order to have a real tectonic earthquake, you need a real fault to engage in slip. The same holds true for a model – the duration of the model itself is the earthquake, but prior to that, you need some model faults. That’s what the mesh is – it sets up the chunk of earth in which you’re interested, and puts some faults in it.

The reason it’s a mesh and not a continuum has nothing to do with the actual geology and everything to do with the computational method. There are types of models that use a continuum, but the one I use, called finite-element modeling, requires a grid of some sort. It would be quite difficult to figure out the stresses and slip rates and ground motions through an undivided area over every timestep of a model. In this method, the model software looks at each individual element and does the physics there – Newton’s laws, equations of motion and of elasticity – then sums the contributions at all of the nodes at each timestep to get what’s going on on the fault and in the area around it throughout the course of the model earthquake.
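The element-then-assemble loop described above can be cartooned in a few lines of Python – a toy 1-D elastic bar with lumped nodal masses and an explicit timestep, nothing like the real research codes, but the same basic shape: do the physics element by element, then sum at the shared nodes.

```python
import numpy as np

# Toy 1-D cartoon of finite-element dynamics: physics is computed per
# element, then assembled at the nodes each timestep. (Illustrative only;
# uniform lumped masses are used even at the boundary nodes.)

n_elem = 50
L = 1000.0                  # bar length, m
dx = L / n_elem
rho, E = 2700.0, 30e9       # density (kg/m^3) and Young's modulus (Pa)
c = np.sqrt(E / rho)        # elastic wave speed
dt = 0.5 * dx / c           # CFL-stable timestep

u = np.zeros(n_elem + 1)    # nodal displacements
v = np.zeros(n_elem + 1)    # nodal velocities
v[0] = 1.0                  # kick one end to launch a wave
mass = rho * dx             # lumped nodal mass

for step in range(200):
    f = np.zeros(n_elem + 1)             # nodal forces, rebuilt each step
    for e in range(n_elem):              # loop over elements...
        strain = (u[e + 1] - u[e]) / dx  # ...do the physics locally
        force = E * strain
        f[e] += force                    # ...and assemble at the two
        f[e + 1] -= force                #    nodes the element shares
    v += dt * f / mass                   # explicit update of the motion
    u += dt * v
```

The two lines that add `force` into `f` are the “summing together all of the nodes” step: each interior node collects contributions from the two elements on either side of it.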

So what’s so tricky about making one of these meshes? For a fault with simple geometry, it’s not a big deal. A nice planar fault is easy to put into a mesh of elements of any shape, but faults in nature are almost never planar. Putting a bend in a fault can be easy or hard depending on the meshing software – sometimes it’s built in, sometimes it requires nudging nodes on the fault manually – but either way, you have to be aware of the angle of the bend and whether it distorts the elements too much for their output to be reliable. If there’s a break in the fault trace and the mesher isn’t pre-coded for such things, you have to worry about conforming the non-fault part of the mesh around the break. And in some meshers, you can’t just set up a solid block and stick faults in it – you have to build the individual blocks of material around the fault yourself, making sure all the nodes and elements line up, which only gets trickier with more blocks and more geometrical complexities.

The fault I’ve been meshing is full of geometrical complexities (and yet still far simplified from the USGS map of its surface expression!). It consists of two main segments, each with three bends, separated by between 1 and 5 kilometers, and overlapping by about 25 kilometers. There’s a small planar fault between the two big ones. This system is far too complicated for the built-in mesh software in the modeling code I normally use. I’ve had to do the trigonometry on more bends and fit more mesh blocks into gaps than is really good for personal sanity. There exist commercial meshing programs that could do this incredibly easily, but configuring their output to work in the fault modeling software is a whole different and frustrating can of worms. Someone who interns in our lab is working on doing that; for meshing this particular complex fault, we went with the older software. It may be clumsy, but it works.

At the time of writing this, the mesh of that system is completed at last, and I’m inflicting some hypothetical earthquakes on it as we speak. This means, now that you’ve reached the end of this post, I probably won’t be whining about this any more for a while! If you’re interested in the results of the models, drop by my poster at the AGU fall meeting in San Francisco on Thursday the 7th of December!

I used to write a lot of things in a lot of places around the internet, and I slowly fell out of the habit of writing in any of them. It wasn’t something I ever intended to let go; it just happened. The more I felt obligated to post, the more I procrastinated, and the less it happened. The LiveJournal, sure, was probably due to be retired anyway, since it dates all the way back to high school. The first geoblog, however, I really regret letting go, since I started reading fewer of other people’s blog posts then as well.

I do think that trying to write about different things in different places was part of what caused me to fizzle out in all of them, so I’m going to do it differently this time. This is the blog. Period. There’s also the Twitter, but this is the blog. I still have some site formatting to do, some links lists to add and update, but I wanted to get this much out here for now.

So, what is this, then? Seeing as I am a geoscientist, a great deal of this will be geoblog. But seeing as I also draw and play music and travel, it will be about those things, too. An earthquakes-San Francisco-comics-of-personified-things-music-from-around-the-world-and-maybe-even-Riverside blog. And I intend to keep it up this time.