NGO Security

Thursday, February 25, 2010

GPS and cameras in Afghanistan

Wired's Danger Room has a short article on work-for-pay programs in Afghanistan. And I quote:

"So how do you track cash-for-work in a place like Helmand, where fighting still rages? John Stephens, who manages programs in Afghanistan for the U.S. charity Mercy Corps, came up with one solution: Use cameras with GPS to verify aid projects in insecure places where expatriate staff can’t oversee projects in person."

"The idea is simple: If an area is too dicey to send in expats, Mercy Corps sends in Afghan staff with GPS cameras — either a Nikon point-and-shoot, or a Garmin handheld GPS with built-in camera — to verify that the projects are actually being undertaken in the right places, so they can pay wages. The data is then uploaded to a Google Earth–style program, so Mercy Corps — which implements USAID projects — can track projects and their participants."

Let me get this straight. National staff are being sent into potentially hot conflict zones with a digital camera and GPS and told to take pictures? I hope there's a little more to this story in terms of risk management. I suspect packing a Western camera and GPS receiver around Helmand Province might just be viewed as spying by the Taliban, with some potentially nasty consequences for the Afghan employee. What do you think? On the surface this sounds like using national staff as, excuse the pun, Canon-fodder.

Any readers from Mercy Corps that are hip to the details care to comment?

Sunday, February 14, 2010

Using the Likert Scale to Assess Risk

Humanitarian security practitioners often use impact/probability charts like the following one to determine levels of risk. You take a potential incident and mark the chart where the impact and probability intersect. It's a handy tool for allowing you to assess and prioritize a number of different threats. (The process of doing this with every possible negative incident you can think of is called bulletproofing.)

I've always thought this type of impact/probability chart is a bit simplistic and doesn't really give you enough granularity to make the best, informed decisions. Instead, I use a variation based on the Likert Scale. In the 1930s, psychologist Rensis Likert developed a way to measure attitudes using either a 5- or 7-point scale - 7 points gives you a higher degree of accuracy.

You don't need to be a math or stats guru to use a Likert Scale; it's actually quite simple to implement and understand (an especially good feature when explaining the rationale for security decisions to management). For risk assessment, here's how it works.

1 - Very insignificant if it happens
2 - Insignificant if it happens
3 - Somewhat insignificant if it happens
4 - Neither significant nor insignificant if it happens
5 - Somewhat significant if it happens
6 - Significant if it happens
7 - Very significant if it happens

Take the rating values for a possible incident and multiply them together. For example, let's say the potential of someone stealing office supplies at a large NGO's HQ is probable (6) but insignificant (2). That gives the incident a value of 12.

Compare that to the potential of a staff member being abducted in a certain conflict zone. Let's say it's somewhat probable (5) and very significant (7) if it happens. This incident tallies up as a 35.

The higher the number, the more time and effort you should devote toward preventative and contingency measures.
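The arithmetic above is simple enough to sketch in a few lines of Python; the two incidents are the examples from this post, and the function name is just mine:

```python
def risk_score(probability, impact):
    """Multiply a 1-7 probability rating by a 1-7 impact rating."""
    assert 1 <= probability <= 7, "probability must be on the 1-7 scale"
    assert 1 <= impact <= 7, "impact must be on the 1-7 scale"
    return probability * impact

# Office-supply theft: probable (6) but insignificant (2)
print(risk_score(6, 2))  # 12

# Staff abduction: somewhat probable (5) and very significant (7)
print(risk_score(5, 7))  # 35
```

The asserts just catch ratings that wander off the 7-point scale before they skew a comparison.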

You can make this particular Likert Scale even easier to use by multiplying the total by two. When we multiply probability by impact we start out with a possible range of values from 1 to 49. If we multiply by two, the range then goes from 2 to 98, which is close to the familiar 1 to 100 scale. From a cognitive standpoint it's easier for someone to relate to a score of 24 for the pencil thief incident and 70 for the more dire abduction scenario.

You can get good quantitative results quite quickly by plugging the numbers into a spreadsheet and then sorting after you're finished.
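If you'd rather script it than use a spreadsheet, the same score-and-sort step looks like this in Python. The incident list is hypothetical apart from the two examples above, and the doubling is the 2-to-98 adjustment just described:

```python
# Hypothetical (probability, impact) ratings, each on the 1-7 scale.
incidents = {
    "Staff abduction": (5, 7),
    "Office-supply theft": (6, 2),
    "Vehicle accident": (4, 5),
}

# Score each incident and double it to land on the near-1-to-100 scale.
scored = [(name, p * i * 2) for name, (p, i) in incidents.items()]

# Sort highest-risk first, as you would in a spreadsheet.
scored.sort(key=lambda pair: pair[1], reverse=True)

for name, score in scored:
    print(f"{score:3d}  {name}")
# prints the abduction (70) first, then the accident (40), then the theft (24)
```

The sorted output is your priority list: effort on preventative and contingency measures flows from the top down.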

It's worth mentioning that two heads are (usually) better than one, and it's useful to have several people who are knowledgeable about the operating environment work up incident ratings. You can either go for a consensus view or simply take the average of the different responses and use that as your rating.
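The averaging approach can be sketched like so - the three assessors' ratings here are made up for illustration:

```python
# Hypothetical ratings from three assessors for one incident:
# each tuple is (probability, impact) on the 1-7 scale.
ratings = [(5, 7), (6, 7), (4, 6)]

avg_probability = sum(p for p, _ in ratings) / len(ratings)
avg_impact = sum(i for _, i in ratings) / len(ratings)

# Average each dimension first, then multiply, so one outlier
# rater doesn't dominate the final score.
score = avg_probability * avg_impact
print(round(score, 1))  # 33.3
```

Averaging before multiplying keeps the score on the same 1-to-49 range as a single assessor's rating, so scores stay comparable across incidents however many people weighed in.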