Ink Spots is a blog dedicated to the discussion of security issues across the spectrum of conflict and around the world. Our contributors are security professionals with interests and expertise ranging from counterinsurgency, stability operations, and post-conflict environments to national security strategy, security cooperation, and materiel acquisition. We hope this site will be a forum for discussion on both the issues of the day and broader, long-term developments in the security sphere.

Monday, August 31, 2009

The Washington Post yesterday reported some vague details of the metrics to be included in GEN McChrystal's Afghanistan assessment, which should hit the White House some time in the very near future. Here's Karen DeYoung:

The White House has assembled a list of about 50 measurements to gauge progress in Afghanistan and Pakistan as it tries to calm rising public and congressional anxiety about its war strategy.

Administration officials are conducting what one called a "test run" of the metrics, comparing current numbers in a range of categories -- including newly trained Afghan army recruits, Pakistani counterinsurgency missions and on-time delivery of promised U.S. resources -- with baselines set earlier in the year. The results will be used to fine-tune the list before it is presented to Congress by Sept. 24.

Lawmakers set that deadline in the spring as a condition for approving additional war funding, holding President Obama to his promise of "clear benchmarks" and no "blank check."

As anyone who's been paying attention will know by now, the military is big on "metrics." I've yet to see a really detailed explanation of the way this works inside DoD, though, probably because it would be too boring. But here's a quick primer.

When doing an assessment of any activity, the military looks at two things: Measures of Performance (MOPs) and Measures of Effectiveness (MOEs). The former category usually gives you raw numbers: how many personnel trained, how many pieces of gear supplied, how many patrols run, how many enemy KIA. If you think about it, the term is pretty self-explanatory: you're measuring performance in whatever activity you're engaged in, but measuring that performance in a quantitative way. MOEs are concerned with how much the thing you're doing has helped to accomplish your objectives. In the case of Afghanistan, that would be something like whether or not the ANA units you've trained meet a certain capability level, or whether the small arms you've supplied have increased marksmanship scores, or whether the increased ANP presence in a certain village has positively impacted voter turnout.

Military and development aid to Pakistan provides perhaps an even simpler way to demonstrate the distinction: a MOP would be "military grant aid provided via Foreign Military Finance, Pakistan Counterinsurgency Capability Funds, Coalition Support Funds, etc., in dollars," and the associated MOE would be something along the lines of "Pakistani counterinsurgency missions conducted in the NWFP."

The whole concept is sort of confusing, because the distinction between a MOP and a MOE is only meaningful if we understand what it is we're trying to do. Just tallying up numbers, obviously, is meaningless. So if the mission is "kill the enemy," and we want to understand the utility of firing weapons at the enemy to accomplish this mission, then we can use rounds fired by coalition forces as a MOP, and number of enemy dead as an MOE. But if we're looking at the bigger picture (and generally, that's what we want to do), then something like a body count is always going to be a MOP. The MOE, then, would be related to a next-order effect, like "number of districts in which coalition forces can operate without taking fire or being engaged by IEDs."
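To make the bookkeeping concrete, here's a loose sketch of how a staff might track MOPs and MOEs against the baselines DeYoung describes. All names and numbers are invented for illustration; nothing here reflects the actual list of 50 measurements.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    kind: str        # "MOP" (raw output of an activity) or "MOE" (effect on objectives)
    baseline: float  # value recorded earlier in the year
    current: float   # value from the latest "test run"

    def delta(self) -> float:
        """Change relative to the baseline set earlier in the year."""
        return self.current - self.baseline

# Hypothetical values, purely for illustration.
metrics = [
    Metric("ANA recruits trained", "MOP", baseline=2000, current=3500),
    Metric("Districts patrollable without contact", "MOE", baseline=10, current=12),
]

for m in metrics:
    print(f"{m.kind} | {m.name}: {m.delta():+.0f} vs. baseline")
```

The point of the `kind` tag is the one made above: the same report can list both categories side by side, but the MOP only tells you what you did, while the MOE tells you whether doing it mattered.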

All of which is to say that without understanding the context -- what the mission is, and which "metrics" are MOPs versus MOEs -- a list of a whole bunch of things like this is kind of meaningless. I don't suppose I should say "meaningless," because it does illustrate areas of focus and emphasis to some extent. The three examples listed in the linked article -- "newly trained Afghan army recruits, Pakistani counterinsurgency missions and on-time delivery of promised U.S. resources" -- show a focus on building partner capacity, on training and equipping our allies and the host nation government, rather than on U.S. combat operations.

Understanding HN capabilities is a good thing, as is a focus on improving them. I'm going to express the same reservation here that I have in the past, though: without a sufficient level of security and governmental reform, efforts to empower the Afghan National Security Forces look more than a little Sisyphean. And there's precedent: in the pre-Surge days in Iraq, the days of the Together Forward operations, everything coming out of MNF-I and CENTCOM indicated a focus on training up the Iraqis so we could get out. The war was hugely unpopular, and as the saying went about Iraqi security forces, "these guys are our exit strategy." The Surge was not just about a troop increase or a change in tactical and operational approach to emphasize counterinsurgency, but the institutional realization that trying to get the Iraqis ready to do the job themselves, at that point in the war, was a near-complete waste of time and effort. So the options were to accept massive risk of an unacceptable reverse once U.S. forces departed, or commit materially to trying to change those circumstances. There's a reason Ricks' book was called "The Gamble": in early 2007, the idea of throwing more U.S. troops into the fight seemed like a Hail Mary, and perhaps it only could've been considered by a President who was so wildly unpopular.

And so it's starting to look like we're in a similar spot in Afghanistan, that 2010 there may look a lot like 2007 did in Iraq: everyone seems to understand by now that we're going to need more U.S. troops in country before the security situation approaches anything near acceptability, never mind an environment where shifting the burden to Afghan forces is a plausibly achievable "exit strategy." So now we're back to the politics. Recent polls show that somewhere on the order of 50 percent of Americans now oppose the war. Casualties have gone up this year, and no one wants to see that trend line continue in the direction it's going. The President has an ambitious domestic agenda, and spending a ton of money in South Asia isn't going to help with that.

So what's the answer? Well, Kilcullen was probably right: "We'll fight for two years and then a successful transition, or we'll fight for two years and we'll lose and go home." Notable for its absence is any mention of "winning."

Does a "successful transition" look possible in the next two years? I'm not optimistic. But if we look at the metrics that GEN McChrystal's team has identified, then it certainly seems like "transition" has become the priority. And while that's probably best for America, I'm not sure it bodes well for Afghanistan.

2 comments:

Not sure if this is a doctrinal thing or just a technique, but I was also taught that we had to define a "how to measure it" for the MOE beforehand. That helps to keep people honest when tracking the MOE. Since an MOE is usually qualitative, it is open to being manipulated by staff officers who really want the plan to work and are willing to rationalize anything into a positive MOE. Establishing the "how to measure it" beforehand helps to rein in that behavior. It can be modified later, but at least when the staff deviates from it, the commander immediately recognizes it and asks, "wait a minute, why are we using this new way of measuring?" He can more easily step back and consider if this makes sense or if his staff is trying to over-rationalize.


Disclaimer

The opinions expressed by our contributors are solely their own. These views should not be taken to represent the official or unofficial position of their employers, nor of any government or other institution.