Recently, I attended the Humanitarian Technology: Science, Systems and Global Impact 2014 conference in Cambridge, MA, where I presented our work in the semantic knowledge management and Linked Data space. The conference was well attended by representatives from US government agencies, NGOs, International Organizations, and commercial entities, all with the goal of contributing to the emerging ecosystem of humanitarian technology providers and implementers. It was great to see so many examples of technologies that have been successful in the commercial world being applied to a domain that impacts so many people around the world.

I have long been an evangelist for the power of semantic knowledge management and a web-of-data approach. Our previous work in the US Civil-Military Operations domain has given us a view into the complex nature of information and knowledge management in dynamic environments such as humanitarian events, whether sudden-onset (the Haiti earthquake of 2010) or long-lead (food crises in the Horn of Africa). The variable geographic scale, duration, and availability of international, national, and local responders lead to situations that are not very conducive to obtaining optimally sampled, timely, and structured data. Instead, the best available information is used to make very important decisions with critical consequences. Getting knowledge to those in the field and their support systems is key, and humanitarian actors are doing an admirable job carrying out their work in the face of such conditions. That said, this conference highlighted that there is much room for improvement in the way that data, information, and knowledge are obtained and shared, and in how well that scales.

In my talk, I highlighted our work towards developing and fielding semLayer, a geospatially-enabled Semantic MediaWiki instance, with added capability for field data collection using a mobile phone app. I demonstrated our enhancements to this platform, including the integration of a PostGIS backend with dynamic geospatial property calculation, a Query Builder GUI for constructing geospatial-semantic queries, and our Open Data Kit collection app that has been hardened to US Department of Defense Information Assurance specifications. Beyond this, I discussed where we are going with the next iterations of semLayer, including support for reasoners based on RDF triples and publishing URIs compliant with a Linked Data approach. I believe this kind of flexible, scalable, and traceable system for managing knowledge would contribute significantly to the humanitarian information ecosystem.

There were several innovative efforts addressing pain points experienced in this domain. In particular, I was impressed to see so many talks focused on deploying low-cost unmanned aerial vehicles (UAVs), or drones, for impact evaluations, especially after a sudden-onset disaster. Patrick Meier of QCRI and Emanuele Lubrano of Drone Adventures gave great presentations on an emerging grassroots movement for building a network of local responders who can deploy UAVs in the immediate aftermath of an event - in many cases, 72 hours before any help from abroad arrives. There is much to be explored in this space, from efficiently routing a network of UAVs to provide the most coverage, to deploying autonomous teams of UAVs that can adapt to new information as it is discovered and dynamically replan based on reasoning about information-collection priorities. MIT CSAIL's Julie Shah gave a wonderful talk about adaptive robots that learn from the actions of the humans with whom they are working, and dynamically replan their tasks based on this new information. This has tremendous potential for aiding UAV collection plans based on evaluations of observed human patterns, such as IDP camp evolution and spontaneous congregations of people.

Another very notable talk was given by David Megginson, who is leading the UN OCHA effort towards standardization of data elements through the Humanitarian Data Exchange (HDX). The Humanitarian Exchange Language (HXL) augmentation effort is now funded through the Humanitarian Innovation Fund, which presents an exciting possibility for the measure of standardization whose absence has limited information sharing at scale in this (and other) communities. David's discussion was honest, in that he didn't claim that such standards will solve all problems - in fact, imposing rigid collection requirements based on all the possible information elements one might want in these situations can introduce as many problems as it attempts to solve. Instead, I took away two main points: 1) data provenance is sorely missing and must propagate if collection efforts are to be trustworthy, and 2) humanitarian operators use spreadsheets, so any technology we develop needs to work with these "low-tech" capabilities (e.g. Excel). It was a very practical discussion, and one I feel is necessary to get to the eventual web of data that I and others envision.

Overall, this conference brought together some great minds and passionate people who want to contribute their time and efforts to helping those in need. It was a pleasure to see what has been done to date, and I very much look forward to being part of this community moving forward.

I attended SemTechBiz 2013 in San Francisco last week to give a lightning talk on semLayer - our mobile-enabled, spatially aware semantic knowledge management platform built on open source components. semLayer integrates and extends the Semantic MediaWiki, PostGIS spatial database, and Open Data Kit (ODK) platforms to create a semantic knowledge management application that enables users to collect, organize, tag, search, browse, visualize, and share structured knowledge. semLayer is also available with the ROGUE-JCTD (Rapid Open Geospatial User-Driven Enterprise - Joint Capability Technology Demonstration) OpenGeo Suite for military applications.

In semLayer, form definitions provide a rich set of semantic annotations for mobile data collection. Forms are defined in the semantic store and pushed automatically to mobile devices. When mobile form submissions are pushed back, the app automatically syncs with the semantic store, and the submitted data is annotated with the knowledge base ontology. PostGIS spatial objects and functions are represented as categories and properties in the semantic store, thus providing the capability of including spatial constructs in semantic searches.
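To make the idea concrete, here is a minimal stdlib-Python sketch of a geospatial-semantic query over form-submitted annotations. All site names, coordinates, property labels, and the distance threshold are invented for illustration; semLayer itself builds on Semantic MediaWiki and PostGIS rather than this toy store.

```python
import math

# Toy knowledge store: (subject, property, value) triples, mirroring the
# annotations a mobile form submission might generate. Names are hypothetical.
triples = [
    ("Site:WellA", "Category", "WaterSource"),
    ("Site:WellA", "Has coordinates", (10.10, 118.90)),
    ("Site:ClinicB", "Category", "MedicalSite"),
    ("Site:ClinicB", "Has coordinates", (10.12, 118.95)),
]

def haversine_km(p1, p2):
    """Great-circle distance in km - a stand-in for a PostGIS distance call."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def sites_within(center, radius_km):
    """A geospatial-semantic query: subjects whose coordinates fall in range."""
    return [s for s, p, v in triples
            if p == "Has coordinates" and haversine_km(v, center) <= radius_km]

print(sites_within((10.10, 118.90), 3))  # -> ['Site:WellA']
```

In the real system the distance computation happens in PostGIS and the query is expressed semantically, but the shape of the question is the same.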

There were a number of interesting keynote presentations. Walmart Labs CTO Abhishek Gattani discussed the integration of semantic technology into Walmart's online store search engine, Polaris. Walmart is making a push for semantics integration, having acquired a number of startups: the Kosmix categorization engine, Grabble mobile POS receipt technology, the OneRiot mobile social ad network, the Social Calendar app, and others. Walmart's semantic platform is built around the semantic categories of people, topics, events, places, and products. Mapping a consumer's keyword search onto the right category means millions of additional sales for Walmart, which uses contextual knowledge from Wikipedia, Twitter, and the like to accomplish this task.

Semantics of keyword search is essential for eCommerce. For instance, if a user is shopping for a green lantern, does your search engine display green-colored lanterns, or DVDs, comic books, and action figures featuring the superhero Green Lantern? If the user is searching for a Kindle that your store doesn't carry, does your search engine infer that Kindle is an instance of the product category eReader, and display competing products (e.g. the Nook) from other vendors that your store carries?
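A toy sketch of the category-inference fallback described above; the product names, categories, and stock flags are invented:

```python
# Toy catalog: each product is an instance of a category.
catalog = {
    "Kindle": {"category": "eReader", "in_stock": False},
    "Nook": {"category": "eReader", "in_stock": True},
    "Green Lantern DVD": {"category": "Movies", "in_stock": True},
}

def search(query):
    """If the exact product is unavailable, fall back to in-stock products
    in the same category - the inference step described above."""
    hit = catalog.get(query)
    if hit and hit["in_stock"]:
        return [query]
    if hit:  # known product, not carried: infer its category and substitute
        return [name for name, p in catalog.items()
                if p["category"] == hit["category"] and p["in_stock"]]
    return []

print(search("Kindle"))  # -> ['Nook']
```

A production engine would of course rank candidates and handle ambiguity, but the category hop is the essential semantic step.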

There were several prototype technology presentations as well. Eric Freese of codeMantra gave a proof-of-concept demo of semantically enabled eReading - CloudShelf. The prototype is browser-based: knowledge is encoded in the text as RDFa, which scripting extracts to build an internal knowledge base. Readers can view this knowledge base using a navigation panel that contains known and inferred semantic facts about the entities in the document. In addition, users can ask questions of the knowledge base in natural language and get answers via an NLP pattern matching process that transforms the query into SPARQL. Very cool.
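The extraction step can be illustrated with a small stdlib-Python sketch. The RDFa snippet and vocabulary below are invented, and CloudShelf's actual pipeline is surely richer:

```python
from html.parser import HTMLParser

# A toy RDFa-annotated fragment; properties and values are hypothetical.
html_doc = ('<p><span property="dc:creator" content="Jane Doe"></span>'
            '<span property="dc:title" content="A Sample Book"></span></p>')

class RDFaExtractor(HTMLParser):
    """Collects (property, content) pairs - roughly what client-side
    scripting in an eReading prototype might harvest into a knowledge base."""
    def __init__(self):
        super().__init__()
        self.facts = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "property" in a and "content" in a:
            self.facts[a["property"]] = a["content"]

parser = RDFaExtractor()
parser.feed(html_doc)
print(parser.facts)  # -> {'dc:creator': 'Jane Doe', 'dc:title': 'A Sample Book'}
```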

With regard to our social agent honeynet research, we presented initial findings from an effort to develop an agent-based dynamic honeynet that simulates user interactions with social networks for the purposes of developing attack models. You can check out our demo here. Our solution allows security professionals to create networks simulating user activity for companies and government entities through the provision of a set of parameters. Our research pointed to the importance of instantiating a social dimension in our virtual agents, providing each agent with the ability to interact with a variety of social networks. For this purpose, we developed influence models to learn patterns from actual users' activity on social networks to improve the effectiveness of the social agents.

One of the questions from the audience was why use agents to collect attack data when regular users get attacked enough as it is in the course of interacting with social networks. Our response was that a deception network enables us to feed false information to the adversary as needed, track adversarial movements to learn attack patterns and attributes, and use the information collected during the attempted infiltration to build more robust defenses and develop more targeted offensive operations. Additionally, deception networks force our adversaries to expend resources attacking our fake network.

Another line of questioning asked whether we were wasting the time of people who decided to follow our fake agents, since about 50% of our agents' followers were real and 50% were found to be malicious. This generated a lively debate, in which someone else in the audience responded that identifying these people might be useful for preventative defense. Maybe these are people who are more vulnerable and would be more likely to click on spam, and perhaps Twitter or others might want to know this. A further question had to do with how we know that the users following our agents are malicious. This is fairly straightforward, because those users attempted to pass us links that are associated with known bad actors. As a future effort, we plan to automatically parse the tweets and check whether the embedded links are already in a blacklist, which would trigger alerts. We also maintain what we believe to be the world's largest intelligence database on botnets to cross-reference our malicious entities. You can check out that project here.

There were several ideas that came out of the collaboration at this conference related to our agents. One idea was to use our agents to collect and harvest social media artifacts for the purpose of understanding Arab Spring-like events. Additionally, our agents could potentially interact with users to explore the shaping of opinion, collaborating with users beyond just posting information to Twitter and following other users. We will definitely be exploring these avenues in the near future, so keep your eyes peeled for developments in this space.

One of the most interesting presentations I attended was from Laurin Buchanan of Secure Decisions, who was involved in the CAMUS project (Mapping Cyber Assets to Missions and Users). This project was very relevant to our Commander's Learning Agent (CLEARN) and Cyber Incident Mission Impact Assessment (CIMIA) work - an existing capability, developed as part of an AFRL SBIR Phase II Enhancement, that automatically learns the commander's mission while bringing contextual knowledge and assigning priorities to the resources supporting that mission in Air Operations planning and execution support. CLEARN/CIMIA monitors the workflow of operations personnel using the Joint Operation Planning and Execution System (JOPES), the Air Mobility Command (AMC) Global Decision Support System (GDSS), the Consolidated Air Mobility Planning System (CAMPS), and the Global Air Transportation Execution System (GATES) to learn the resources necessary for each mission, and recommends workarounds when one or more of the resources become unavailable.

Our semantic wiki work also generated interest during the poster session. One presentation that was interesting and tangentially related was SPAN (Smart Phone Ad Hoc Networks) by MITRE, which utilizes mobile ad hoc network technology to provide a resilient backup framework for communication when all other infrastructure is unavailable. I thought it was pretty neat that this was also an open source project. This research was interesting given our work in using mobile devices for data collection in austere environments during operations and exercises in the PACOM AOR in our MARCIMS (Marine Corps Civil Information Management System) project. Pretty cool to see all of the developments in this area.

I attended SemTechBiz 2012 in San Francisco last week. This annual conference on semantic technology, now in its eighth year, does a nice job of balancing the interests of the research and commercial communities. This year the conference tilted towards commercial vendor interests (after all, the vendors do sponsor the event), although the product pitches were confined to a clearly identified solutions track. Here are my semantic annotations about this semantic technology conference.

Given our focus on open source platforms, I enjoyed the session on wikis and semantics. In this session, Joel Natividad of Ontodia gave an overview of NYFacets - a crowd-knowing solution built with Semantic MediaWiki. Ontodia's site won NYC BigApps - a contest started by Bloomberg as part of his grand plan to make NYC the capital of the digital world. NYFacets has a semantic data dictionary with 3.5M facts. Ontodia's vision is to socialize conversations about data, and eventually build NYCpedia. I wondered why public libraries don't take this idea and run with it: Bostonpedia by the Boston Public Library, Concordpedia by the Concord Public Library, and so on.

Stephen Larson gave an overview of NeuroLex - a periodic table of elements for neuroscience, built with SMW under the NIF program. They built a master table of neurons and exposed it as a SPARQL endpoint, with rows consisting of 270 neuron classes and columns consisting of 30 properties. NeuroLex demonstrates the value of a shared ontology for neuroscience by representing knowledge in a machine-understandable form.

In the session - Wikipedia's Next Big Thing - Denny Vrandecic of Wikimedia Deutschland gave an overview of the Wikidata project, which addresses the manual maintenance deficiencies of Wikipedia by bringing a number of Semantic MediaWiki features into its fold. For instance, all infoboxes in Wikipedia will become semantic forms stored in a central repository, eliminating the need to maintain the same content duplicated across many pages of Wikipedia. Semantic search capability will also come to Wikipedia, to the applause of the folks who maintain Wikipedia's list of lists and list of lists of lists, by replacing these manually maintained huge lists with a single semantic query. One of the novelties of Wikidata is that it will be a secondary database, with referenced sources for every fact. For instance, if one source says the population is 4.5M while another says 4,449,000, each source will be listed in the database, thus enabling belief-based inference.
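A toy sketch of what per-fact sourcing might look like as a data structure; the city name, figures, and source labels are invented, and this is not Wikidata's actual data model:

```python
# Each claim keeps its references, so a consumer can weigh conflicting
# values rather than receive a single unexplained number.
claims = {
    ("Metropolis", "population"): [
        {"value": 4_500_000, "source": "2011 census"},
        {"value": 4_449_000, "source": "statistical yearbook"},
    ],
}

def values_with_sources(subject, prop):
    """Return every asserted value with its source, leaving belief
    assessment to the consumer."""
    return [(c["value"], c["source"]) for c in claims[(subject, prop)]]

print(values_with_sources("Metropolis", "population"))
```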

It was nice to see several evangelists of linked data from the government sector at the conference. Dennis Wisnosky and Jonathan Underly of the U.S. Department of Defense gave a nice overview of EIW (Enterprise Information Web). It was refreshing to hear that DoD is looking at linked data as a cost reduction driver. Given the Cloud First mandate of the Defense Authorization Act of 2012, the importance of semantic technology in the government will accelerate. In another session, Steve Harris of Garlik, now part of Experian, gave an overview of Garlik DataPatrol - a semantic store of fraudulent activities for finance. I could not help wondering if someone from the Department of Homeland Security was in attendance to hear the details of this application. Steve found no need for complex ontologies, reasoning, or NLP in this large-scale application, which records about 400M instances of personal information (e.g. a Social Security Number mentioned in an IRC channel) every day.

Matthew Perry and Xavier Lopez of Oracle gave an overview of the OGC GeoSPARQL standard, which aims to support representing and querying geospatial data on the Semantic Web. GeoSPARQL defines a vocabulary (union, intersection, buffer, polygon, line, point) for representing geospatial data in RDF, and it defines an extension to the SPARQL query language for processing geospatial data using distance, buffer, convex hull, intersection, union, envelope, and boundary functions.
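As a rough illustration of the kind of filter GeoSPARQL's distance function enables, here is a stdlib-Python sketch over WKT point literals. The feature names and coordinates are invented, and the planar distance is a simplified stand-in for a proper geodesic computation on a real endpoint:

```python
import math
import re

# Hypothetical features stored as WKT point literals, the serialization
# GeoSPARQL uses for geometry values in RDF.
features = {
    "ex:Hospital": "POINT(-71.06 42.36)",
    "ex:Airport": "POINT(-71.01 42.37)",
    "ex:Depot": "POINT(-70.50 41.90)",
}

def parse_wkt_point(wkt):
    x, y = re.match(r"POINT\(([-\d.]+) ([-\d.]+)\)", wkt).groups()
    return float(x), float(y)

def within_distance(wkt_a, wkt_b, limit):
    """Planar stand-in for a GeoSPARQL distance filter."""
    (x1, y1), (x2, y2) = parse_wkt_point(wkt_a), parse_wkt_point(wkt_b)
    return math.hypot(x2 - x1, y2 - y1) <= limit

center = "POINT(-71.06 42.36)"
print([f for f, wkt in features.items() if within_distance(wkt, center, 0.1)])
```

On a GeoSPARQL endpoint the same question would be a SPARQL query with a distance filter over `geo:asWKT` literals; the sketch only shows the shape of the computation.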

Linked data being essentially the plumbing of semantic infrastructure, it is hard to give engaging presentations on this topic. Two presentations bucked this trend. The presentation by Mateja Verlic from the Slovenian startup Zemanta rocked. Zemanta developed LODGrefine - a DBpedia extension for Google Refine - under the LOD2 program. Google Refine supports large transformations of open data sources, and LODGrefine exposes Refine results as a SPARQL endpoint. Mateja managed to give two impressive live demos in ten minutes. The other rock star presentation was by Bart van Leeuwen - a professional firefighter - on Real-time Emergency Response Using Semantic Web Technology. Everyone in attendance got the gist of how FIREbrary - a linked data library for fire response - can help firefighters in the real world, with a presentation sprinkled with live videos of fire emergency responses. It was instructive to see how semantic technology can make a difference in managing extreme events such as a chemical fire, as there are by definition no plans for these types of events.

Bringing good user interface design practices to linked-data-enabled applications was another theme of the conference. Christian Doegl of Uma gave a demo of Semantic Skin - a whole-wall interactive visualization driven by semantics. Siemens used it to build an identity map of their company. It uses the Intel Audience Impression Metrics Suite to detect the age, gender, etc. of the person walking in front of the wall, personalizing the content driven by semantics. Pretty cool stuff.

Recently, the OECD published its life satisfaction index across its member countries. How's Life?: Measuring Well-being gives an overview of the OECD Better Life Index methodology, which is focused on households and individuals rather than aggregate economic conditions, and on well-being outcomes as opposed to well-being drivers. In particular, life satisfaction measures how people rate their general satisfaction with life on a scale from 0 to 10. The surveys show that Hungary, Portugal, Russia, Turkey, and Greece have a relatively low level of overall life satisfaction, while Denmark, Norway, the Netherlands, and Switzerland have a high level.

In addition to life satisfaction, the OECD survey also captures various indicators under the topics of housing, income, jobs, community, education, environment, civic engagement, health, safety, and work-life balance. For instance, the housing topic includes the rooms per person (average number of rooms per person in a dwelling), dwellings with basic facilities (percentage of people with indoor flushing toilets), and housing expenditure (housing expenditure as a percentage of disposable household income) indicators. In contrast, the jobs topic includes the employment rate (percentage of people currently employed in a paid job), long-term unemployment rate (percentage of unemployed people who have been actively looking for a job for over a year), personal earnings (average annual earnings for a full-time employee), and job security (share of employment with a tenure of less than 6 months) indicators.
The chart on the left shows the indicator values for Hungary. In general, Hungarians are less satisfied with their lives than the OECD average. 65% of Hungarians have more positive experiences (feelings of rest, pride in accomplishment, enjoyment, etc) than negative ones (pain, worry, sadness, boredom, etc) on a daily basis in contrast to the OECD average of 72%.

As the chart shows Hungarians feel safe, are happy with the water and air quality, find their work-life balance acceptable, have a strong sense of community, and are happy with the general quality of the education. For instance, 89% of Hungarians believe that they know someone they could rely on in time of need. Yet in life satisfaction, Hungary scores the lowest.

Referring back to the chart, the scores for housing, jobs, civic engagement, and health for Hungary are lower than the OECD average while income is considerably lower than the OECD average. In Hungary, the average person earns about $13K a year, less than the OECD average of $22K a year. It seems like the low life satisfaction score for Hungary is connected to low living standards stemming from sub-par income levels coupled with a lack of jobs. For instance, around 55% of Hungarians aged 15 to 64 have a paid job, well below the OECD employment average of 66%.

In contrast, sense of community is the lowest score for Turkey. For instance, 69% of Turks believe that they know someone they could rely on in time of need, lower than the OECD average of 91%. Is there a correlation between life satisfaction and indicators for living conditions and quality of life? If yes, does the correlation hold across the OECD countries?

The OECD Web site has a mixer tool that lets a user select the relative rankings of the indicators and analyze the ranked list of countries based on these preferences. The customized index enables the comparison of well-being across countries based on one's personal weighting of the 11 topics the OECD has identified as essential, in the areas of material living conditions and quality of life.

While the OECD mixer is a nice tool for engaging readers, as modelers we see the life satisfaction sentiment indicator as an output and the rest of the indicators (housing, income, jobs, etc.) as inputs. In other words, we believe that the input indicators drive how people feel about their life experiences. To test this hypothesis, we performed a correlation analysis between the life satisfaction index and the rest of the indicators in order to understand which factors contribute the most, or the least, to the life satisfaction sentiment. The correlation analysis is shown on the right.
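As a sketch of the method, here is a Pearson correlation computed with the standard library over a handful of made-up country values; the numbers are illustrative, not OECD data:

```python
# Hypothetical values for five countries (invented for illustration).
satisfaction = [4.7, 5.1, 6.8, 7.5, 7.8]   # life satisfaction, 0-10
income =       [13, 15, 25, 30, 34]        # avg annual earnings, $K
homicides =    [2.1, 1.0, 1.5, 0.6, 2.3]   # homicide rate per 100K

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson(satisfaction, income), 2))     # strong positive
print(round(pearson(satisfaction, homicides), 2))  # near zero
```

The actual analysis runs the same computation for each indicator against the life satisfaction series across all OECD countries.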

In terms of topics, life satisfaction has a high correlation with income, jobs, housing, and health, a low correlation with education, and no correlation with safety. In terms of individual indicators, rooms per person has the highest correlation with life satisfaction, while the job security, housing expenditure, employees working long hours, educational attainment, years in education, student skills, consultation on rule-making, air pollution, and homicide and assault rate indicators have very low correlations with life satisfaction.

The indicators under each topic show some interesting results. For the jobs topic, the employment rate, personal earnings, and long-term unemployment rate indicators are correlated with life satisfaction, whereas job security is not. Similarly, for the environment topic, water quality has a high correlation with life satisfaction while air pollution does not.

It would make for an interesting comparison if there were a similar survey for non-OECD countries. Perhaps the OECD country values are dominated by the population's desire to accumulate as many material possessions as possible. Relatively poorer countries' values may not follow this correlation.

The following is the second installment of our MARCIM Semantic Wiki Newsletter, sent May 1, 2012 to those involved in the MARCIM technology demonstration. If you are interested in being added to the mailing list for these newsletters, please email Lmooney@milcord.com.

Semantic Wiki News - May 1, 2012

Hello,

Our participation in the Balikatan 2012 exercises in Palawan, Philippines reinforced many lessons learned during Cobra Gold 2012, and surfaced fresh insights that have inspired our team's evolution of the Semantic Wiki. We look forward to keeping the team updated on the exciting progress being made through this MARCIM Semantic Wiki Newsletter. We kept the distribution of the newsletter to individuals directly involved in the project; please let us know if there are others we should include in the mailing list!

The following features have been implemented in the Semantic Wiki since our participation in Balikatan 2012:

In response to user feedback, we have taken our semantic Event Calendar (detailed in our last newsletter installment) a step further by allowing users to populate this calendar themselves via their mobile devices. Using the "Event" form within the mobile app, users may now enter the time, date, and details about a particular event. This data is automatically ingested into the Wiki, and placed upon a monthly calendar.

As you can see above, Balikatan users added information to these calendars about events such as barangay meetings, CMO operations meetings, VETCAP and MDVCAP outreach, and site dedication ceremonies. For users who choose to populate the calendar with events that are relevant only to their teams, we have created "Team Calendars" (such as the BK12 North Calendar, which lists all activities being conducted by CA Team North). For operations personnel who desire an aggregate view of events, we've created calendars that contain all events (such as the BK12 Joint Medical Task Force Calendar, which lists all MEDCAP and VETCAP related activities, irrespective of which teams are involved in the activity). The dynamic nature of this calendar serves to increase the quality of collaboration among the operational planning team and units in the field.

In the Philippines we observed that users found it difficult to search for site-specific information. This led us to recognize a distinct need to address ontological distinctions in the Wiki; that is, the need to draw sharper distinctions between Site pages, the schools that sites are associated with, ENCAP and MEDCAP activity that occurs at these sites, etc. As you can see from the screenshot for Buena Vista Elementary School, below, we have implemented tabbed site pages which address this issue. By having tabs for relevant data about the site (i.e. ENCAP Progress, ENCAP Description, School Information, and Village/Subdistrict Information), all the information that relates to a single site exists within the same page, so that accessing site-specific information is made increasingly intuitive for users.

Since Cobra Gold 2012, we've introduced an enhanced tagging scheme for all photographs ingested from the mobile device. Included in this enhanced tagging scheme are coordinates, which allow us to geolocate photographs on a map, as can be seen on the Balikatan 2012 photographs page. This allows users to zoom into an area of interest within the map and view images that have been submitted.

During Balikatan exercises we identified a way to place content (such as charts, tables, pictures, or text) on an internal timer within the Wiki so that the content doesn't appear until a defined date. This keeps pages from being cluttered with information that either is not relevant, or doesn't exist, until a particular point in time. For example, the tables on the MDVCAP site pages aggregate and analyze the demographic data collected from MDVCAP patient registration (i.e. see the Cabayugan National High School Patient Registration Data). These tables, which don't have any information until the registration process begins, are now placed on timers so that they appear when registration commences.
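The timer mechanism can be sketched in a few lines of Python; the page structure, block names, and dates below are hypothetical, not the Wiki's actual implementation:

```python
from datetime import datetime

# A page is a list of content blocks, each with a release time; a block is
# rendered only once its "show_after" moment has passed.
page = {"blocks": [
    {"text": "Site overview", "show_after": datetime(2012, 4, 1)},
    {"text": "Patient registration table", "show_after": datetime(2012, 4, 20)},
]}

def render(page, now=None):
    """Return only the content whose release time has arrived."""
    now = now or datetime.now()
    return "\n".join(b["text"] for b in page["blocks"]
                     if b["show_after"] <= now)

print(render(page, now=datetime(2012, 4, 10)))  # only the overview so far
```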

We are excited about this solution as it increases the practicality and sustainability of the Wiki, and allows us to feed users semantic reports and other content when we know they'll need them. These timers can be customized down to the very second that the user needs the designated content to appear.

Before discussing the innovation in our enhanced dynamic graphs, we'll first delve into some Semantic Query 101. The semantic reports (i.e. tables, charts, calendars, etc.) that you see within the Wiki go beyond the simple analysis that can be completed in Excel; they're unique because the reports are created anew every time you visit the page in which they are embedded. The reports can be automated in this way because every page within the Wiki (i.e. every assessment, every school site, every village) is tagged using a "subject, property, object" semantic annotation format. For example, Bangkok (Subject) has a Population (Property) of 8,300,000 (Object). Because of the way the data is structured, we are able to explore relationships between and among Wiki pages. This allows us to ask the database questions and receive answers (such as: what is the population of Bangkok?).
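In stdlib-Python terms, the annotation format and a one-triple question look like this; the Bangkok fact comes from the text above, while the other triples are invented:

```python
# The "subject, property, object" annotations as plain triples.
triples = [
    ("Bangkok", "Population", 8_300_000),
    ("Bangkok", "Country", "Thailand"),
    ("Palawan", "Country", "Philippines"),
]

def ask(subject, prop):
    """Answer 'what is the <prop> of <subject>?' - the one-triple query
    that underlies the Wiki's auto-refreshing reports."""
    return [o for s, p, o in triples if s == subject and p == prop]

print(ask("Bangkok", "Population"))  # -> [8300000]
```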

In constructing more complex reports, we need to conduct searches for properties that are semantic queries in and of themselves. In such reports, the information we need is not tagged within the pages, but by nesting a semantic query as a property value, we can infer knowledge from the other semantic relationships that exist. We used this logic to create the ENCAP progress graph (below) which you can view on the BK12 Engineering Civic Action (ENCAP) Activity page. Behind this graph is a semantic query that is asking the Wiki to deliver the most recent Percent Completion rate entered within the SITREPs for all ENCAP sites. This is a query within a query, as we are delving into the multidimensional semantic relationships that exist, rather than the tags within the page, to deliver this information.
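A sketch of the nested-query logic in plain Python; the site names, dates, and percentages are invented:

```python
# SITREP records: each carries a date and a completion figure per site.
sitreps = [
    {"site": "School A", "date": "2012-04-10", "pct_complete": 40},
    {"site": "School A", "date": "2012-04-18", "pct_complete": 65},
    {"site": "School B", "date": "2012-04-15", "pct_complete": 90},
]

def latest_progress():
    """Outer query: for each site, report the completion figure carried by
    its most recent SITREP (the inner, nested lookup)."""
    result = {}
    for site in sorted({r["site"] for r in sitreps}):
        newest = max((r for r in sitreps if r["site"] == site),
                     key=lambda r: r["date"])  # ISO dates sort lexically
        result[site] = newest["pct_complete"]
    return result

print(latest_progress())  # -> {'School A': 65, 'School B': 90}
```

In the Wiki this nesting is expressed declaratively as a query embedded in a property value, but the two-level lookup is the same.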

This is a galvanizing development as it demonstrates that our visualizations and reports can be enhanced to drill down into multiple dimensions of the data, querying for relationships nested among other relationships, to derive insight and produce refined visualizations that provide value in operations.

To track usage of the Wiki over time, we have created a MARCIM Semantic Wiki Statistics page. This page dynamically tracks aggregate statistics (i.e. number of views, edits, and assessments), as well as statistics by operation (i.e. how many new user accounts were created for Cobra Gold vs. Balikatan? How many photographs were ingested? How many assessments were ingested, and how many of these were medical assessments, in either exercise?). The page also contains a dynamic bar chart that tracks user account creation over time, and dynamic pie graphs that detail the number of assessments completed by operation.

The following is our first MARCIM Semantic Wiki Newsletter, sent March 9, 2012 to those involved in the MARCIM technology demonstration. If you are interested in being added to the mailing list for these newsletters, please email Lmooney@milcord.com.

Semantic Wiki News - March 9, 2012

Hello,

Annotated content for 946 Thai and Philippine NGOs, dynamic calendars for Balikatan, and automated BMI calculations: these are a few of the changes that have been made to the MARCIM Semantic Wiki this week! The Milcord team has been working to address user requirements observed during Cobra Gold 2012 and implement innovative solutions, so that the second deployment of our MARCIM Semantic Wiki in the Philippines will be met with increased success. In order to keep the MARCIM team apprised of solutions as they're employed, we hope to begin communicating new updates through this bimonthly newsletter. We kept the distribution of the newsletter to individuals directly involved in the project; please let us know if there are others we should include in the mailing list!

The following updates have been implemented within the Wiki in the past week:

In an attempt to address reporting requirements identified by users in Thailand, we have created a dynamic calendar labeled with important events for Balikatan 2012. The monthly calendar view is one of many export formats enabled by the Semantic Search capability. The calendar can be accessed here.

As you can see, the calendar posts we've created include dates for deployment and redeployment, opening and closing ceremonies, as well as Medical/Veterinary Outreach, among other events. To add an event to the calendar, a user may click "Add page using form" in the left sidebar, type the name of the event he/she desires to post within the text box that appears, and within the dropdown menu choose the category "Event." The event the user creates will automatically populate to the calendar.

It is our hope that this calendar will support staff reporting functions in Balikatan, and serve to increase the frequency and quality of collaboration among the operational planning team.

What Links Here Template

As part of an ongoing effort to enable automatic associations between annotated non-page entities in the Wiki, we have created a template that allows users to generate a bulleted list of pages that link to any given tag. Let's take an example that was presented to us by a user in Thailand. Many teams working at the MDVCAP sites consistently mentioned "diabetes" as an issue within their SITREPs. Any tags for diabetes appeared as red hyperlinks; however, even if a user created a page for "Diabetes," the page itself would not generate an easily viewable list of the pages that mention diabetes. We've addressed this by allowing users to embed a template within the free-text area of the page in question. By typing {{What Links Here}} within the text of a Wiki page, a list of pages with tags to diabetes will be generated. For further exploration, navigate to the Diabetes page.

After a single user inputs {{What Links Here}} within the free text area of the page, every user thereafter will be able to view a list of pages that link to the tag in question.
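Conceptually, the template inverts the wiki's link graph: from each page's outgoing links, it recovers the incoming links for a given tag. A minimal sketch of that lookup, with made-up page names:

```python
# Given each page's outgoing links, invert the graph to list the pages that
# link to a given tag (the logic behind a {{What Links Here}} style template).
def what_links_here(pages, target):
    """Return the pages whose outgoing links mention `target`."""
    return sorted(name for name, links in pages.items() if target in links)

pages = {
    "Site 1 SITREP": ["Diabetes", "Hypertension"],
    "Site 2 SITREP": ["Diabetes"],
    "Balikatan 2012": ["Site 1 SITREP"],
}

print(what_links_here(pages, "Diabetes"))  # ['Site 1 SITREP', 'Site 2 SITREP']
```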

BMI Calculations

We have codified a process for dynamically calculating Body Mass Index (BMI) statistics for every patient who passes through Medical Registration at a MEDCAP site, in response to feedback from the Environmental Health Officer for III MEF. Aggregate BMI statistics now feed automatically into the dynamic tables located within MEDCAP Site pages. To view the new dynamic tables, follow the Cobra Gold Site 1 link.

New Sociocultural Content

In preparation for Balikatan we completed data ingest of all major geographic divisions for the Philippines (to include regions, cities, and municipalities). We have also imported 422 Philippine NGOs, and 524 Thai NGOs to satisfy user requirements. Sociocultural content may be accessed from both the Philippines and Thailand country pages.

In addition, all site information for Balikatan currently resides on the Semantic Wiki. You may view this content within the Balikatan 2012 operation page. Below is a screenshot of the Site page for the Tagbarungis Elementary School, an ENCAP Site in Palawan.

Main Page Restructuring

We have integrated feedback from users and MARCIM team members to restructure our Wiki Main Page. We now have content divided by Operation/Exercise, and by Area of Operation - the latter of which is particularly designated for ongoing operations not associated with a specific exercise. Let us know what you think of our new Main Page.

We hope you found that our first Semantic Wiki Newsletter contained useful, relevant information. We value your feedback on how we can improve these updates.

From January-February 2012 Milcord participated in Cobra Gold military exercises in Thailand, demonstrating our MARCIM (US Marine Corps Civil Information Management) Semantic Wiki. This is the second year we have participated in the exercises; last year, Laura Cassani represented Milcord by presenting our sociocultural knowledge base. You can read Laura's post here for background on the exercises, and details about our participation in Cobra Gold 2011. Since then, we’ve developed another knowledge base built upon a Semantic Wiki platform, tailored to support the Civil Information Management needs identified in Thailand.

The MARCIM Semantic Wiki supports real-time data collection, visualization, and analysis by automatically ingesting assessments and surveys conducted by Civil Military Operations (CMO) teams and submitted via mobile devices, then semantically tagging the field-collected data and generating relationships with it. During Cobra Gold 2012, the MARCIM Semantic Wiki was placed in the hands of the exercise's planning and operations team. This team, stationed at the CJCMOTF (Combined Joint Civil Military Operations Task Force) in Korat, Thailand, is responsible for overseeing all CMO activity within the country. I spent three weeks observing, interacting with, and supporting the users, and, based on their feedback, we customized the Wiki so that it could best assist and advance the efforts of CMO personnel. It was incredible to see how the Wiki evolved throughout the exercises: from something Milcord had built at a conceptual level into a living, breathing tool that took shape around user feedback, as we worked continuously to tailor the Wiki to confer the utmost benefit to the troops. On a daily basis within the CJCMOTF, the staff used the Wiki to submit their daily reports, analyze demographic information within the area of operations, monitor team activity, and visualize responses to surveys and assessments.

During my time in Thailand, I gained an appreciation for the nature of the data collected during CMO missions; information is collected about the local infrastructure, medical needs of the population, progress being made at engineering sites, as well as sentiments of the Thai people toward the troops. Instead of placing this data onto inaccessible hard drives where it is unlikely to be utilized, the Wiki structures the data and places it into an analyzable form for users, thus presenting the value of the aggregated data to the troops. In addition to helping the troops understand the impact they’re making on the ground, the aggregation and analysis of this data also prevents duplication of effort by CMO teams by alerting them to what has already been achieved within the area of operation, and what activities and projects should be prioritized in the future.

Although our work within the CJCMOTF kept me busy, I was still able to sneak in some sightseeing. I visited the Weekend Market in Bangkok (the largest market in Thailand), toured the Royal Palace and Wat Pho, and visited Khmer ruins within Korat. The entire trip was a culinary escapade, and I quickly developed an appetite for som tam (spicy papaya salad with shrimp) and chai yen (Thai iced tea).

Since our participation in Cobra Gold 2012, we have been invited to participate in a number of other exercises, including the Balikatan 2012 exercise in the Philippines, the Pacific Partnership 2012 exercise in Southeast Asia and Oceania, and the Black Sea Rotational Force 2012 operation in Eastern Europe. We look forward to posting further updates on the evolution of the MARCIM Semantic Wiki as we progressively gain insights from these operations and exercises!

In our companion Domestic Political Violence Model blog, we yesterday published the list of countries predicted to have increases in political violence for 2011 to 2015. The map below shows the countries with expected increases in political violence, grouped into Very High Risk, High Risk, and Medium Risk. Our forecast is based on four different models: in the Very High Risk category, all four models predicted an increase; in the High Risk category, three models predicted an increase; and in the Medium Risk category, half of the models predict an increase in violence. The countries in each category are sorted by the size of the mean residual, so the states with the most pent-up demand for violence are listed at the top. The residuals imply that these are states where we expect to observe increases in violence, though not necessarily high levels of violence. So the United Kingdom and Israel are not expected to have the same level of violence, but they are expected to see increases in political violence of the same magnitude.

The United Kingdom, Israel, Sri Lanka, Iran, Colombia, Zimbabwe, South Africa, Haiti, Egypt, the Philippines, Guinea-Bissau, Venezuela, Chile, Syria, Chad, Belarus, Guinea, Kyrgyzstan, and Greece make up the very-high-risk list. Israel, Sri Lanka, Iran, Colombia, South Africa, Egypt, Chile, Syria, Chad, Belarus, and Kyrgyzstan are returning countries from our 2010-2014 forecast. Of our 2010-2014 forecast countries, Syria, Egypt, and Libya saw the most violent protests during the Arab Spring of 2011. The United Kingdom, Zimbabwe, Haiti, the Philippines, Guinea-Bissau, Venezuela, Guinea, and Greece are the new additions to our very-high-risk list. The United Kingdom tops the list, as the pent-up demand for increased violence was certainly evident in the London riots over the summer of 2011. Greece saw a substantial increase in political violence due to the measures introduced by the Greek government to address the debt crisis.

It is worth noting that our 2011-2015 forecast model is based on an events dataset that captures both the frequency and the intensity of political violence from 1990 to 2010. Similarly, our 2010-2014 forecast model is based on an events dataset that captures both the frequency and the intensity of political violence from 1990 to 2009. We publish our forecast based on our acquisition date of the event dataset. As the event dataset is available on a real-time basis, albeit at a higher cost, we can publish our forecasts in real time if needed.

Our model applies a regression over a large number of drivers-of-conflict variables spanning numerous open-source social science datasets, using a novel Negative Residuals technique. Negative residuals result when the model predicts higher levels of violence than were actually experienced, indicating nation-states that are predisposed to increasing levels of violence based on the presence of environmental conditions and drivers of conflict with a demonstrated correlation with measured political violence. In our model, the magnitude of future political violence directed towards the state is heightened by coercion, often thought of as violations of physical integrity rights, and by coordination, or the tools by which groups can associate and organize against the state. Conversely, the magnitude of political violence is lessened by capacity, defined as the ability of the state to project itself throughout its territory.
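The Negative Residuals idea can be sketched in a few lines: fit observed violence levels against covariates, then rank states by how far below the model's prediction their observed violence falls. The data, covariates, and country labels below are entirely made up for illustration.

```python
# Sketch of the Negative Residuals technique: states whose observed violence
# is far below what their conflict drivers predict (actual - predicted < 0)
# are flagged as having "pent-up demand" for violence.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))                      # drivers-of-conflict covariates
true_beta = np.array([1.0, -0.5, 0.3])
y = X @ true_beta + rng.normal(scale=0.5, size=6)  # observed violence levels

# Ordinary least squares fit, then residuals (actual minus predicted).
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ coef

countries = ["A", "B", "C", "D", "E", "F"]
# Sort ascending: the most negative residual (most pent-up demand) comes first.
pent_up = sorted(zip(countries, residuals), key=lambda cr: cr[1])
print([c for c, r in pent_up if r < 0])
```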

For the event dataset, we use the Integrated Data for Event Analysis (IDEA) framework. The IDEA event dataset is based on the Reuters Global News Service, and organizes each event in a "who" did "what" to "whom" form. This framework allows researchers to isolate the events of interest for a particular project; using it, we capture and isolate domestic anti-government violence. For the dependent variable, our model uses the Goldstein score, which captures the overall level and intensity of domestic anti-government violence within a state in a given year.

Surveys and interviews form the central methodology for analyzing and discovering attitudes and opinions in social science research. With the advent of the Web, online surveys have become an efficient way for researchers to collect and analyze large amounts of data. The popularity of online survey tools like SurveyMonkey, Zoomerang, and SurveyGizmo is a testament to the productivity enabled by surveys. However, surveys represent a rigid, top-down methodology, forcing the survey designer to account for all possible answers up front, which is an impossible feat. In contrast, interviews allow unanticipated information to bubble up from the respondents. For instance, the primary data for Integrity Watch Afghanistan's (IWA) Afghan Perceptions and Experiences with Corruption: A National Survey 2010 involved interviewing 6,500 randomly selected respondents in 32 provinces on over 100 questions covering the sectors where people experienced corruption; the levels of bribes people paid to obtain services; what access people had to essential services; who people trusted to combat corruption; and experiences with corruption in the judiciary, police, and land management. However, the interview methodology is expensive and time-consuming, as it requires implementation by research companies with expertise in effective research design, and precise management of data collection over several months.

Is there an alternative to surveys and interviews in social science research? Prof. Salganik's team at Princeton came up with a hybrid approach, "wiki surveys", that combines the structure of a survey with the open-endedness of an interview. To date, various organizations have created more than 1,000 wiki surveys on the project Web site, All Our Ideas, generating 45,000 ideas and 2 million votes. Wiki surveys range from the New York City Mayor's Office's engagement with citizens in shaping the city's long-term sustainability plan to the Catholic Relief Services surveying their 4,000 employees to find out what makes an ideal relief worker. The figure below shows how the third question in the Tactical Conflict Assessment Planning Framework (TCAPF) would be implemented as a wiki survey:

Inspired by extending the kittenwar concept to ideas, the user interface guides the respondent to choose between two random alternatives, while encouraging the respondents to add their ideas into the mix of alternative responses. The additional ideas are added into the survey’s marketplace and voted up or down by the other survey-takers. Prof. Salganik says that “One of the patterns we see consistently is that ideas that are uploaded by users sometimes score better than the best ideas that started it off. Because no matter how hard you try, there are just ideas out there that you don’t know.”
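A rough stand-in for this pairwise voting mechanic can be simulated in a few lines: present two random ideas, record a winner, and score each idea by its win rate. This is a simplification of All Our Ideas' actual scoring, and the ideas and votes below are simulated, not real survey data.

```python
# Simulate a wiki survey: each "respondent" sees two random ideas and one
# wins. Score each idea by its win rate across the pairings it appeared in.
import random
from collections import Counter

ideas = ["clean water", "jobs", "security", "roads"]
wins, appearances = Counter(), Counter()

random.seed(42)
for _ in range(200):                    # simulated respondents
    a, b = random.sample(ideas, 2)      # two random alternatives
    winner = random.choice([a, b])      # stand-in for a real vote
    wins[winner] += 1
    appearances[a] += 1
    appearances[b] += 1

scores = {i: wins[i] / appearances[i] for i in ideas}
for idea, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{idea}: {s:.2f}")
```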

All Our Ideas has some basic visualization features for making sense of wiki survey responses. Here is the visualization for responses to "What do you think the Digital Public Library of America (DPLA) should be like?":

It is worth noting that the top-scoring 15 ideas, starting with DPLA interoperability with the Government Printing Office (GPO), the Defense Technical Information Center (DTIC), and the National Archives and Records Administration (NARA), are all uploaded ideas that were not in the original set of alternatives. A powerful argument for crowdsourcing!

Admittedly, we still need boots on the ground to collect TCAPF data in Afghanistan, given the demographics of the people we want to reach. On the other hand, wiki surveys hold great potential for reaching the younger generation fueling the Arab Spring and the like.

In our Building Intent project, we developed a geoprofiling algorithm that predicts the location of facilities that support adversary operations in the urban environment. Geoprofiling is a technique that is widely used in serial crime investigations. In our project, we researched and developed a building intent inference system based on terrorist preferences, building characteristics, and social network behavior. Our approach learns the utility function that the adversaries are using, and classifies and predicts the potential utility of a facility to the adversaries based on the derived metadata of each facility using influence networks.
For terrorist preferences, we studied Military Studies in the Jihad Against the Tyrants: The Al-Qaeda Training Manual in order to identify the building-use tactics the adversary trains its recruits in, and found a significant number of building-use-related tactics and procedures embodied in these manuals. In collaboration with the Terrorism Research Center in Fulbright College, University of Arkansas, we then studied the international terrorism cases in the American Terrorism Study, and found empirical evidence showing the practice of terrorism-manual tactics in the observed data. Based on these findings, we developed a baseline set of indicators for modeling building intent, and researched the likely causal connections among these variables. We then built extractors to derive a set of metadata for these indicators, used machine learning algorithms to find the causal connections between incidents or events, building attributes, and model parameters, and built classifiers based on terrorist process preferences, building characteristics, and guilt-by-association data.

As shown in the figure below, our geoprofiling algorithm does a nice job of predicting the Japanese Red Army terrorist Yu Kikumura's residence in New York based on the American Terrorism Study. Here the blue markers signify police stations and white arrows signify the egress points. As shown in the figure, Yu Kikumura's residence at 327 East 34th Street, NY is in the red hotspot area predicted by our algorithm. Avoiding police stations and ease of egress were two of the primary factors in Kikumura's choice of housing. Not only is his apartment equidistant from the nearest police stations – all of which are over one kilometer away – but its back-alley access road to the Queens–Midtown Tunnel provides a quick getaway by car. In addition, examination of the residence floor plan reveals that the apartment building had numerous staircases (one of which was private to the unit) leading to the basement level with a rear exit.

The Al Qaeda Training Manual gives several instructions for renting a residence, as shown in the table below. For instance, it is preferable to rent apartments on the first floor for ease of egress, to avoid apartments near police stations and government buildings as well as apartments in isolated or deserted locations, to rent in newly developed areas, and the like. In particular, the Al Qaeda Training Manual calls for the use of the following tactics in renting an apartment:

So how does the location of Bin Laden's secret hideout in Abbottabad follow the advice of the Al Qaeda Training Manual? Not that closely. Bin Laden clearly did not follow the tactics for selecting a ground-floor location by living on the third floor, for avoiding police stations and government buildings by selecting a location near the Pakistan Military Academy, for finding an apartment in newly developed areas where people do not know each other by choosing a neighborhood with retired Army generals, and for preparing ways of vacating the premises in case of a surprise attack by not building exit stairs. The only tactic from the list above that Bin Laden did use is avoiding an isolated location. One wonders if Bin Laden made a concerted effort to avoid his own tactical advice in order to thwart geoprofiling techniques. Perhaps another consideration that will need to be taken into account in future geoprofiling is assistance from outside forces, given the possible connection to a support network that included elements of the Pakistani military or intelligence services in the Abbottabad area.

This past month Milcord participated in the Cobra Gold military exercises in Thailand, demonstrating our Office of the Secretary of Defense Human Social Cultural Behavior (HSCB) Modeling Program project, a Socio-Cultural Knowledgebase using a Semantic Wiki. Cobra Gold is an annual joint training exercise held in Thailand and sponsored by the U.S. Pacific Command and the Royal Supreme Thai Command. One of the world's largest multinational exercises, it draws participants from 24 nations, including the armed forces of Thailand, Republic of Singapore, Japan, Republic of Indonesia, Republic of Korea and the United States. Nearly 13,000 military personnel, approximately 7,300 of them American troops, participated in Cobra Gold 2011. The event improves participating nations' ability to conduct relevant and dynamic training while strengthening relationships between the militaries and local communities.

Participating in the exercises was a fantastic experience. We traveled across the country speaking with Soldiers and Marines at various bases, gathering valuable feedback on how our tool can support socio-cultural data management for complex operations, with the ultimate objective of transitioning our ONR-supported R&D into operational use in the field.

One of the highlights of the trip came in meeting with a group that had recently deployed to Afghanistan: we used the Socio-Cultural Knowledgebase to look up the exact area of their deployment and view information about the tribal dynamics, provincial and district contextual knowledge, and data on political figures and powerbrokers relevant to their area. For the Afghanistan and Pakistan area, the Semantic Wiki covers more than 3,000 tribes and ethnic groups, documenting their traditional alliances, disputes, human terrain map, and other information pertinent to operations. The wiki also has articles for almost 700 individuals of significance for the region.

Our use of a semantic wiki platform enables the representation of human terrain knowledge as facts and relationships. Representing this knowledge in a semantic wiki has the additional advantage of supporting faceted browsing and answer-engine queries. For instance, the semantic wiki can answer a question like "What are the tribes in Kandahar Province and their traditional disputes?" with a table that is dynamically regenerated every time a new fact fitting the question is added. Getting firsthand feedback from the very people you want your research to support is a rewarding experience. We hope to be able to return next year and participate in the field exercises, showing how our tool can directly support socio-cultural knowledge management for civil affairs and humanitarian operations. The picture above is from the opening ceremony of the exercise in Chiang Mai, as I present our Socio-Cultural Knowledgebase using a Semantic Wiki to the dignitaries in attendance, while the picture below is from our traveling road show.

Additionally, while it was quite the busy schedule for the two and half weeks I was there, we were still able to find time for sightseeing, taking in historic temples, a Muay Thai boxing match, and even a visit to a fish spa. And of course, sampling the incredible array of Thai street food was amazing; I still dream of the delicious steamed pork buns I had in Bangkok and Chiang Mai.

In collaboration with our academic partners Prof. Cingranelli at the Political Science Department, SUNY Binghamton University and Profs. Sam Bell and Amanda Murdie at the Department of Political Science, Kansas State University, we developed a Domestic Political Violence Model that forecasts political violence levels five years into the future. The model enables policymakers, particularly in the COCOMs, to proactively plan for instances of increased domestic political violence, with implications for resource allocation and intelligence asset assignment. Our model uses the IDEA dataset for political event coding, plus numerous indicators from the CIRI Human Rights Dataset, Polity IV Dataset, World Bank, OECD, Correlates of War project, and Fearon and Laitin datasets. Here is our model's forecast for 2010 - 2014 as a ranked list:

Iran

Sri Lanka

Russia

Georgia

Israel

Turkey

Burundi

Chad

Honduras

Czech Republic

China

Italy

Colombia

Ukraine

Indonesia

Malaysia

Jordan

Mexico

Kenya

South Africa

Ireland

Peru

Chile

Armenia

Tunisia

Democratic Republic of the Congo

Belarus

Argentina

Albania

Ecuador

Sudan

Austria

Nigeria

Syria

Kyrgyz Republic

Egypt

Belgium

Our model applies a regression over a large number of drivers-of-conflict variables spanning numerous open-source social science datasets, using a novel Negative Residuals technique. Negative residuals result when the model predicts higher levels of violence than were actually experienced, indicating nation-states that are predisposed to increasing levels of violence based on the presence of environmental conditions and drivers of conflict with a demonstrated correlation with measured political violence. The residuals imply that these are states where we expect to observe increases in violence, though not necessarily high levels of violence. So Iran and Sri Lanka are not expected to have the same level of violence, but they are expected to see increases in violence of the same magnitude.

There are some unexpected countries on our list, like the Czech Republic and Italy. Time will tell how accurate our model's predictions are, although recent political violence in Ecuador is an early indicator of the model's effective performance. The model uses nuanced measures of repression and captures variables that can be manipulated by policymakers. Our project page has further details on the model.

Mobile devices such as the iPod Touch and iPhone have spurred the “every soldier a sensor” vision into reality. Inspired by the rapid-transition success of TIGR, we built an Android App - RouteRisk - for risk-based route planning to investigate the design issues involved to support server infrastructure, Web services and soldier-sourced tactical data input requirements.
https://www.youtube.com/watch?v=Xz9U1wc7UYM

Current path planning systems such as the US Army's Battlespace Terrain Reasoning and Awareness – Battle Command (BTRA-BC) involve time-intensive terrain analysis computations, and require an expert user with GIS experience and knowledge of terrain analysis. These systems do not provide an easy-to-use, web-accessible interface for the boots on the ground. As a planning and re-planning system, RouteRisk calculates risk and recommends routes based on soldier-sourced data provided through tactical intelligence and route planning systems like TIGR (Tactical Ground Reporting), DCGS-A (Distributed Common Ground System – Army), and BFT (Blue Force Tracker). And when new intelligence is discovered – like a previously unreported poppy field spotted by a soldier on patrol or an S2 – the intelligence gets pushed out to all units, because the servers and smartphones are connected through the cloud.

RouteRisk leverages our Risk Based Route Planning web service solution developed in earlier projects. Risk-based Route Planning is a Google Maps web service application allowing the user to plan safe routes in Baghdad, Iraq by avoiding known hotspots and predicted hotspots learned from patterns of past incidents. The web service application generates a risk surface from the incident reports using a Bayesian spatial similarity approach. Our Bayesian model learns the causal relationship between attack characteristics (such as attack type, the intended target, emplacement method, explosive device characteristics, etc.) and spatial attributes (distance to proximal features such as overpasses, government facilities, police checkpoints, etc.). For a given region, we use spatial attributes (distance to nearest overpass, major religion, within 300m of district border, neighborhood) as evidence in the model and we perform inference on the data.
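A toy sketch of how such a model might turn past incidents and spatial attributes into a risk estimate for a grid cell, using a naive-Bayes-style assumption of conditional independence between attributes. The attribute names, incident records, and smoothing choice below are all illustrative, not the actual model.

```python
# Hedged sketch of a Bayesian risk surface: estimate P(attack | attributes)
# for a cell from past incidents, assuming attributes are conditionally
# independent given the outcome (naive Bayes), with add-one smoothing.

# (spatial attributes, attack observed?) -- illustrative records only.
incidents = [
    ({"near_overpass": True,  "near_checkpoint": False}, True),
    ({"near_overpass": True,  "near_checkpoint": True},  True),
    ({"near_overpass": False, "near_checkpoint": True},  False),
    ({"near_overpass": False, "near_checkpoint": False}, False),
    ({"near_overpass": True,  "near_checkpoint": False}, True),
]

def risk(cell):
    """Posterior probability of attack for a cell's attribute values."""
    pos = [attrs for attrs, attacked in incidents if attacked]
    neg = [attrs for attrs, attacked in incidents if not attacked]
    odds = len(pos) / len(neg)                       # prior odds of attack
    for attr, val in cell.items():
        p = (sum(a[attr] == val for a in pos) + 1) / (len(pos) + 2)
        q = (sum(a[attr] == val for a in neg) + 1) / (len(neg) + 2)
        odds *= p / q                                # likelihood ratio per attribute
    return odds / (1 + odds)

print(round(risk({"near_overpass": True, "near_checkpoint": False}), 2))  # 0.85
```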

By selecting the "Route" tab on the main navigation, the user can easily create a new route plan. The map is launched and the user is instructed to tap points on the map to define waypoints for the route (starting, intermediate, and ending locations). Optionally, the user can also bookmark locations or search for locations by placename (e.g. "Camp Helmand" or "Paktika District") or grid reference. By pressing and holding a waypoint, the user can drag it or choose among several actions, such as "move waypoint" or "define time window". Once a pair of waypoints is defined, or a new one is added, a route plan is automatically computed and shown using the current routing preferences and selected factors. The user can change the routing preferences by clicking a button that animates the corner of the map to curl up and reveal the preferences, selecting options such as "fastest route", "shortest distance", or "safest route".
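Routing preferences like these can be modeled as different edge weights in a single shortest-path search. A minimal Dijkstra sketch over a toy road graph, where each edge carries both a travel time and a risk score; node names and values are made up:

```python
import heapq

# edge: (neighbor, travel_minutes, risk_score) -- illustrative values.
graph = {
    "start": [("a", 5, 0.9), ("b", 8, 0.1)],
    "a": [("end", 5, 0.9)],
    "b": [("end", 9, 0.1)],
    "end": [],
}

def plan(preference):
    """Dijkstra's algorithm with the edge weight chosen by the preference."""
    weight = {
        "fastest": lambda t, r: t,   # minimize total travel time
        "safest":  lambda t, r: r,   # minimize total risk
    }[preference]
    dist = {"start": 0.0}
    heap = [(0.0, "start", ["start"])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == "end":
            return path
        if d > dist.get(node, float("inf")):
            continue
        for nbr, t, r in graph[node]:
            nd = d + weight(t, r)
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr, path + [nbr]))

print(plan("fastest"))  # ['start', 'a', 'end']
print(plan("safest"))   # ['start', 'b', 'end']
```

The same search yields different routes purely by swapping the edge-weight function, which is how one UI slider can flip a plan from the quick route to the safe one.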

We are currently researching the software architecture design alternatives for adding voice control capabilities to our RouteRisk app.

I will not be able to cover some of the really interesting presentations in this public forum due to the sensitivity of the topics, but here are a couple of tidbits for general consumption. "Emerging Threats in 2010" by Dave Marcus, Director of Security Research and Communications, McAfee Labs was one of my favorite presentations of the conference.

Next week we will be presenting a paper at the International Conference on Cross-Cultural Decision Making in Miami, Florida. I am looking forward to participating in a highly informative and interesting session, bridging modeling and simulation disciplines with socio-cultural data for military operations. In our paper, entitled "Geospatial Campaign Management for Complex Operations", we report initial findings from a research effort to understand the complexity of modern-day insurgencies and the effects of counterinsurgency measures, integrating data-driven models, such as Bayesian belief networks, and goal-driven models, including multi-criteria decision analysis (MCDA), into a geospatial modeling environment in support of decision making for campaign management. Our Decision Modeler tool instantiates MCDA, a discipline for solving complex problems that involve a set of alternatives evaluated on the basis of various metrics. MCDA breaks a problem down into a goal or set of goals, the objectives that need to be met to achieve each goal, the factors that affect those objectives, and the metrics used to evaluate each factor. Since the selection of metrics for specified objectives, and of data for computing those metrics, are the biggest hurdles in using MCDA in practice, both the metrics and the associated data are part of our tool's library for user reuse. Below is an image of the MCDA structure. Click on any of the images in the post to see more detail.
Our decision modeling tool also incorporates a weighting system that enables analysts to apply their preferences to the metrics that are most critical for the mission. Linking these decision models in a shared space within the tool creates a repository of knowledge about progress along lines of effort in an operation, providing a source of knowledge transfer for units rotating into and out of the theater. The alternatives considered in the decision model are different courses of action that can be evaluated against metrics to determine the optimal action for accomplishing the commander's goals. Of course, when working in a complex human system such as the one found in counterinsurgency and stability operation environments, our tool is not meant to be a 'black box' model that simply tells the user what to do; rather, the decision analysis provides insight, through both qualitative and data-driven models, into which courses of action will set the conditions for a more successful outcome based on the commander's intent.
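At its core, this kind of MCDA with analyst weights reduces to a weighted sum over normalized metrics. A minimal sketch, where the metric names, values, and slider weights are all illustrative rather than the tool's actual library:

```python
# MCDA weighted-sum sketch: score each course of action on normalized
# metrics (higher is better); slider weights set each metric's importance.
def mcda_score(metrics, weights):
    """Weighted average of metric values, using the analyst's weights."""
    total = sum(weights.values())
    return sum(weights[m] * v for m, v in metrics.items()) / total

courses = {
    "Arghandab": {"need": 0.8, "existing_gap": 0.6, "affordability": 0.7},
    "Anar Dara": {"need": 0.5, "existing_gap": 0.9, "affordability": 0.4},
}
weights = {"need": 3, "existing_gap": 1, "affordability": 2}  # slider settings

best = max(courses, key=lambda c: mcda_score(courses[c], weights))
print(best)  # Arghandab
```

Changing the weights (moving a slider) can flip which course of action scores best, which is exactly the what-if experimentation the tool supports.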

In evaluating our tool with users, we determined that one of the most important features involves the visualization of the tradeoffs for various courses of action in the decision model. To address this, we compute the uncertainty of data based on its distribution and propagate its effect analytically into the decision space, presenting it visually to the commander. A greater dispersion represents more uncertainty, while a clustered set of data points indicates more certainty regarding the cost and effectiveness metrics for a particular course of action. In this way, we are able to represent the high levels of uncertainty inherent in socio-cultural information without negatively impacting the ability of our tool to calculate a decision model. By incorporating a visual representation of uncertainty in the model, scenarios can then be played out to determine optimization for various courses of action based on data inputs and user preferences, translating model outputs into a form that can more readily be used by military users.

To demonstrate how the visualization of uncertainty works in the tool, in the image below we have analyzed two potential courses of action relating to the essential services line of effort, with the objective of supporting healthcare initiatives in an area of operations. In this case, we are deciding where to focus our efforts, comparing two districts, Arghandab and Anar Dara in Southern Afghanistan. Here we are only examining a few potential metrics: the cost of building healthcare centers proposed by local development councils; the number of basic healthcare centers already in the district; and the number of people who identified a lack of healthcare as the major problem facing their village, a question that is collected in the Tactical Conflict and Assessment Planning Framework (TCAPF) data. Our MCDA tool would compute and display the effectiveness versus cost data points from the metrics corresponding to the two proposed courses of action. We want to determine which district would optimize our goal of restoring essential services with the objective of supporting healthcare initiatives by leveraging the data inputs. To convey the uncertainty, we have represented the distribution as the ellipsoid around each data point. This allows a military planner to visually analyze and evaluate the potential courses of action based on cost versus effectiveness metrics, while accounting for the uncertainty of the data. In addition, the weighting system, shown as the sliders on the right-hand side of the image, allows a military planner to experiment with how a change in metric weights will affect the proposed courses of action.

One of the key benefits of our approach is that it allows for real-time knowledge generation. When the model is updated with new data, the Decision Modeler re-evaluates the outlined courses of action against the new information, allowing the user to view trends over time in the effectiveness and cost metrics for particular courses of action. In the example below, perhaps the cost estimates went up for the proposed course of action in Anar Dara given a deterioration in the security situation that affected the ability to hire contractors to execute the project. In Arghandab, the metric could have changed according to our collection of TCAPF data, showing that more people responded that healthcare is the major problem facing their village, which increases the effectiveness against our objective if we build a healthcare center there. Given the increased need, the villagers have offered to provide labor at decreased cost and will contribute a certain percentage of funds to the project, which is reflected in the decreased costs associated with the Arghandab data points. In this way the tool provides course of action forecasting based on an analysis of data, for the purpose of proactively planning operations that optimize the commander’s objectives.
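As a rough sketch of this re-evaluation loop, a simple weighted-sum scorer shows how updated inputs can flip the preferred course of action. The metric values below are invented, pre-normalized stand-ins mirroring the Arghandab/Anar Dara example, not real district data.

```python
def score(metrics, weights, cost_keys=("cost",)):
    """Weighted-sum score for one course of action.

    metrics: dict of metric name -> value, normalized to [0, 1].
    Cost-type metrics are inverted so higher scores are always better.
    """
    total = 0.0
    for name, value in metrics.items():
        v = 1.0 - value if name in cost_keys else value
        total += weights[name] * v
    return total

def rank(options, weights):
    """Return course-of-action names ordered best-first."""
    return sorted(options, key=lambda n: score(options[n], weights), reverse=True)

# Illustrative (invented) normalized metrics for the two districts.
weights = {"cost": 0.4, "need": 0.4, "coverage": 0.2}
options = {
    "Arghandab": {"cost": 0.6, "need": 0.5, "coverage": 0.7},
    "Anar Dara": {"cost": 0.4, "need": 0.6, "coverage": 0.5},
}
print(rank(options, weights))   # -> ['Anar Dara', 'Arghandab']

# New reporting arrives: Anar Dara costs rise; in Arghandab, costs fall
# and reported need grows.  The model simply re-evaluates.
options["Anar Dara"]["cost"] = 0.8
options["Arghandab"].update(cost=0.4, need=0.8)
print(rank(options, weights))   # -> ['Arghandab', 'Anar Dara']
```

The sliders in the tool correspond to the `weights` dict here: moving a slider and re-ranking is the same computation with different weights.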

We will be presenting a more detailed analysis of our research results at the conference, so keep an eye out for links to our papers and presentation.

Under the sponsorship of the OSD Human Social Culture Behavior (HSCB) program, we are developing a semantic wiki for Complex Operations. The envisioned operational impact of our effort is to foster collaboration and knowledge sharing for a whole-of-government approach, and to improve COIN/SSTR operations analysis and execution by focusing on the population as the center of gravity. The development of such a wiki presents several challenges, including the broad domain of knowledge that complex operations require, the large number of doctrine publications to wikify and semantify, and several key references being out of print. With these challenges, we saw an opportunity to develop an open-source culturepedia for the Afghan and Pakistan human terrain, as such knowledge is not aggregated and not readily available.

The Complex Operations wiki currently contains more than 1,000 articles on the various tribal dynamics and locational knowledge for the Afghanistan and Pakistan region, outlining tribal meta-knowledge such as the sub-groups, primary locations, traditional alliances, and traditional disputes of various groups to support situational awareness about the human terrain. Here is the wiki page for the covered Afghanistan Organizational Groups. We have created over 150 concept maps (an example shown below) to capture knowledge about the 1,000 ethnic groups, tribes, sub-tribes, and clans within the Afghanistan and Pakistan region, making this human terrain knowledge readily accessible to the complex operations practitioner.

Our use of a semantic wiki platform enables the representation of human terrain knowledge as facts and relationships. For instance, the wiki page for the Achakzai tribal group lists the known facts and relationships about this ethnic group in both a human-consumable form using semantic forms:

, and in a machine-consumable form as semantic RDF relationships:

By inspecting the semantic form, the reader can deduce that Achakzai is a sub-tribe of Zirak, which is a sub-tribe of the Durrani super-tribe, is primarily located in the Chora and Khas Uruzgan districts, and traditionally has disputes with the Nurzai, Panjpai, and Kakar tribes. Representing this knowledge in a semantic wiki has the additional advantage of enabling faceted browsing and answer-engine queries. For instance, the semantic wiki can answer a question like "What are the tribes in Kandahar Province and their traditional disputes?" as a table that is automatically updated every time a new tribe in this province is added to the wiki:
There are also several groups in Afghanistan that do not organize around tribal kinship ties, including the Uzbeks, Tajiks, and Hazaras. In addition to tribal affiliation, social organizations such as solidarity groups (a group of people that acts as a single unit and organizes on the basis of some shared identity) and patronage networks (led by a local warlord or khan) play an important role in understanding the human terrain. The Afghan and Pakistan human terrain and situational awareness knowledge base can be extended to include other populations of interest to the community, such as those in Yemen or Somalia.
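To give a flavor of the machine-consumable side, here is a minimal pure-Python sketch of querying the kind of fact triples described above, including the transitive sub-tribe chain a reader deduces by eye. The predicate names are illustrative stand-ins, not the wiki’s actual property names.

```python
# Triples mirroring the relationships stated on the Achakzai wiki page.
# Predicate names here are invented for illustration.
triples = {
    ("Achakzai", "subTribeOf", "Zirak"),
    ("Zirak", "subTribeOf", "Durrani"),
    ("Achakzai", "hasTraditionalDisputeWith", "Nurzai"),
    ("Achakzai", "hasTraditionalDisputeWith", "Panjpai"),
    ("Achakzai", "hasTraditionalDisputeWith", "Kakar"),
    ("Achakzai", "primarilyLocatedIn", "Chora District"),
    ("Achakzai", "primarilyLocatedIn", "Khas Uruzgan District"),
}

def objects(subject, predicate):
    """All objects for a given subject/predicate pair."""
    return {o for s, p, o in triples if s == subject and p == predicate}

def ancestors(tribe):
    """Transitive closure of subTribeOf -- the full sub-tribe chain."""
    found, frontier = set(), {tribe}
    while frontier:
        parents = set().union(*(objects(t, "subTribeOf") for t in frontier))
        frontier = parents - found
        found |= parents
    return found
```

A real answer engine runs the same kind of traversal over the wiki’s RDF store; the point of the sketch is that once facts are triples, queries like "which super-tribe does Achakzai belong to?" become mechanical.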

Some time back, in February 2010, I started a working paper titled "Shuffling_Methodology_for_Sanitizing_TCAPF_Microdata" (click to download as PDF), which outlined the methodology I used for data sanitization of TCAPF data. The sanitization approach I discuss is applicable to cases where it's desired to share unclassified data while preserving the privacy (and operational security) inherent in the data.

Essentially, although the data shared with us by USAID was unclassified, it carried distribution restrictions due to the sensitive nature of the data, which was collected by the 24th MEU and other units in Afghanistan. We felt compelled to publish the results from a Bayesian analysis we performed on the data and thought it best to sanitize the data first and then publish the results from the cleansed data. In doing so, we had to maintain the analytical value of the data by preserving the distributional properties of the dataset, so that the results obtained would remain valid. We had to balance this need for preserving analytical value against the privacy need to withhold or obfuscate data fields deemed too sensitive to disclose.
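A greatly simplified sketch of the column-shuffling idea follows. Each shuffled field keeps exactly its original marginal distribution, but linkage across fields back to any one record is broken. The paper discusses how to balance this against preserving the cross-field correlations that give the data its analytical value, which this toy version does not attempt.

```python
import random

def shuffle_sanitize(records, sensitive_fields, seed=None):
    """Independently permute each sensitive column across records.

    Every shuffled column retains its original multiset of values (so
    marginal distributions are preserved exactly), while the pairing of
    sensitive values with the rest of each record is destroyed.
    """
    rng = random.Random(seed)
    out = [dict(r) for r in records]        # leave the originals untouched
    for field in sensitive_fields:
        column = [r[field] for r in out]
        rng.shuffle(column)
        for r, value in zip(out, column):
            r[field] = value
    return out

# Toy example: break the village <-> reported-problem linkage.
records = [
    {"village": "A", "problem": "healthcare"},
    {"village": "B", "problem": "security"},
    {"village": "C", "problem": "water"},
]
sanitized = shuffle_sanitize(records, ["village"], seed=7)
```

The field names and records above are invented for illustration; the actual TCAPF schema and the full methodology are described in the paper.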

The discussion in the paper, where I walk through what could go wrong, should at least get you thinking. I welcome your feedback and ideas in the comments below.

The Naval Postgraduate School in Monterey is one of our government's educational jewels. Nestled in the beautiful landscape of the Monterey Peninsula, this institution brings together a diverse group of educators, researchers, and student practitioners to promote a vigorous debate of the issues facing our national defense, and the advancement of solutions addressing those issues. Last week I had the pleasure of giving a couple of talks and participating in a panel discussion at NPS. Here is a quick rundown.

On the first day, I was an invited speaker for a panel discussion on Socio-Cultural Modeling & Analysis. The panel explored the problem of modeling and analysis to provide insights to decision makers on complex socio-cultural issues, from the perspectives of both social scientists and computational modelers. The discussion addressed these questions:

How does the inherent variability within humans impact the ability to draw insights from modeling and analysis?

What strategies can be used to address the challenges of modeling and analysis in the human domain?

My presentation sparked some interesting questions, such as how we can convince a Commander to help with data collection when the Commander sees no immediate return on the invested overhead. I suggested that DoD could replicate what consulting companies do: put a resource on project executions whose only task is recording knowledge. The panel discussion generated a lively debate between the social and computational scientists. One of the computational scientists on the panel said that everyone wants to solve the "easy to model" problems instead of the "hard to model" ones, which are what the decision maker is actually interested in. For instance, coloring a map of failing states using the Political Instability Task Force (PITF) or our Predictive Societal Indicators of Radicalism (PSIR) models provides hardly any new insight to the General in charge.

Another criticism was the publishing delay in social science data sets (e.g., CIRI, MAR, Uppsala). For instance, human rights data set publishers wait for the State Department and Amnesty International to publish their annual reports on the previous year each spring, then take a couple more months to code the reported incidents and publish. Such a delay does not exactly match DoD operations, which focus on the present. I advocated the need for publishing real-time social science indicators that can be adjusted later, much like the government's GDP numbers, which are revised six months later.

Social scientists on the panel stressed the importance of representing qualitative, in addition to quantitative, knowledge in these models. For instance, socio-cultural responses to color can be significant: red represents celebration in Chinese culture, purity in Indian culture, and danger in Western cultures. This kind of knowledge is certainly relevant in SSTR operations. Dr. Guttieri cautioned against the public perception of manipulation using socio-cultural models, citing Project Camelot.

It was nice to see the articulation of the healthy tension between the social and computational scientists in the audience. In closing, I advocated packaging of social science for tactical operations where warfighters are serving as or advising governors, town managers, mayors - jobs that they were not trained for.

On the second day, I gave a brown bag seminar at the NPS Cebrowski Institute on our Semantic Wiki for Complex Operations project. This project aims to address the gaps in current solutions supporting COIN/SSTR operations.

Semantic wikis enable community-powered structured knowledge production using semantic forms, faceted browsing of structured content, powering of answer engines, and linking of different data sets. There was significant interest in using our semantic wiki for teaching, as such an approach can increase the amount of learned knowledge NPS students take to the field of practice and provide an effective reach-back capability from the field.

I visited TRAC-Monterey, which has a number of interesting projects. In particular, I found the Cultural Geography project interesting as an agent application. This project started as Urban Cultural Geography for Stability Operations. The Cultural Geography model employs issue-based segmentation of the social network of leaders and followers, using communication theory and weapons-of-influence concepts to predict the future based on population identity groups. The mind of each agent is a belief network that develops actions based on the beliefs, values, and interests of the associated identity group. COIN IPB and Center of Gravity (COG) analysis are the target results.

I also paid a visit to the Defense Resources Management Institute (DRMI) at NPS. Here I found the Multi-Criteria Decision Making (MCDM) course of particular interest, as it relates to the SSTR Campaign Planner tool we are developing in our PSIR project. DRMI teaches the MCDM course in 2-day, 2-week, 4-week, and quarter-long formats to a wide audience from DoD, DHS, and emergency response teams. MCDM is widely used as a decision aid for ranking decision alternatives. The DRMI course emphasizes visualization of the decision space instead of ranking alternatives by scores. Such an approach enables the user to detect conflicting criteria, cluster alternatives, eliminate undesirable alternatives, and select the optimal alternative.
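The "eliminate undesirable alternatives" step can be illustrated with a small Pareto-dominance sketch, assuming all criteria have been normalized so that higher is better. Any alternative that another alternative beats or matches on every criterion can be dropped from the decision space before the harder tradeoff analysis begins. The alternative names and scores below are invented for illustration.

```python
def dominates(a, b):
    """True if alternative a scores at least as well as b on every
    criterion (higher is better) and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(alternatives):
    """Keep only the non-dominated alternatives from a name -> scores map."""
    return {
        name: scores
        for name, scores in alternatives.items()
        if not any(
            dominates(other, scores)
            for other_name, other in alternatives.items()
            if other_name != name
        )
    }

# "C" loses to "B" on both criteria, so only "A" and "B" survive.
front = pareto_front({"A": (0.7, 0.6), "B": (0.4, 0.8), "C": (0.3, 0.5)})
```

Visualizing just the surviving front, rather than a single scalar ranking, is what lets the analyst see the genuine conflicts between criteria.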