In my seven happy years at Microsoft before leaving a couple of months ago, I was never happier than when I was involved in a cool “secret project.”

Last year my team and I contributed for many months to a revolutionary secret project – Holographic Computing – which is being revealed today at Microsoft headquarters. I’ve been blogging for years about a variety of research efforts which culminated in today’s announcements: HoloLens, HoloStudio for 3D holographic building, and a series of apps (e.g. HoloSkype, HoloMinecraft) for this new platform on Windows 10.

For my readers in government, or those who care about the government they pay for: PAY CLOSE ATTENTION.

It’s real. I’ve worn it, used it, designed 3D models with it, explored the real surface of Mars, played and laughed and marveled with it. This isn’t Einstein’s “spooky action at a distance.” Everything in this video works today:

These new inventions represent a major step-change in the technology industry. That’s not hyperbole. The approach offers the best benefit of any technology: empowering people simply, by taming complexity – and by extension it offers a way to deliver new and unexpected capabilities to meet government requirements.

Holographic computing, in all the forms it will take, is comparable to the Personal Computing revolution of the 1980s (which democratized computing), the Web revolution of the ’90s (which universalized computing), and the Mobility revolution of the past eight years, which is still uprooting the world from its foundations.

One important point I care deeply about: Government missed each of those three revolutions. By and large, government agencies at all levels were late or slow (or glacial) to recognize and adopt those revolutionary capabilities. That miss was understandable in the developing world and yet indefensible in the United States, particularly at the federal level.

I worked at the Pentagon in the summer of 1985, having left my own state-of-the-art PC at home at Stanford University, but my assigned “analytical tool” was a typewriter. In the early 2000s, I worked at an intelligence agency trying to fight a war against global terror networks when most analysts weren’t allowed to use the World Wide Web at work. Even today, government agencies are lagging well behind in deploying modern smartphones and tablets for their yearning-to-be-mobile workforce.

This laggard behavior must change. Government can’t afford (for the sake of the citizens it serves) to fall behind again, and understanding how to adapt to the holographic revolution is a great place to start, for local, national, and transnational agencies.

Now some background…

Programmatic Context for HoloLens

An enduring aspect of working on new technologies is stealthiness. It isn’t always the right approach – sometimes open collaboration beyond company borders has superior value to a quiet insular team. I learned the distinction well when I was in government (Intellipedia and A-Space were among our results) and in the startup culture in Silicon Valley before that.

But stealth has an electric appeal, the spark of conspiracy. At Microsoft and some other companies, the terminology is “Tented” – you have no clue about the work if you’re outside the tent, enforced as rigorously as in a SCIF.

Last March my Microsoft Institute team was quietly invited to add our efforts to a startling tented project in Redmond – one which was already gaining steam based on its revolutionary promise and technical wizardry, but which would require extraordinary stealth in development, for a variety of reasons. I won’t share anything proprietary, of course, but will say that our secrecy was Apple-esque, to use a Valley term of high praise.

That project is being announced to the world today, as HoloLens. I couldn’t be prouder of my (erstwhile) colleagues at Microsoft who are launching a revolutionary platform. The praise is already rolling in. WIRED’s story is “Our Exclusive Hands-On With Microsoft’s Unbelievable New Holographic Goggles,” while TechCrunch quickly assesses: “Augmented reality has had some false starts on mobile, but in this context, it seems more viable, and thus more credible than it ever has before.”

Next, let’s look at some background on the technical area which HoloLens now bestrides like a colossus, towering over also-rans like Oculus Rift and Google Glass. Then below I’ll sketch some initial observations on the relevance for government uses and the world at large.

Technology Context: Ambient Computing

I’ve been writing about virtual reality and augmented reality (the VR/AR split) for a decade, first inside government and over the past seven years on this blog. The term I prefer for the overall approach is “Ambient Computing” – combining advanced projection, immersion, machine vision, and environmental sensing.

Ambient computing devices are embedded all around in the environment, are addressable or usable via traditional human senses like sight, hearing, and touch/gestures, and can understand people’s intent, and even operate on their behalf.

We’re further along in this area than I thought we’d be five years ago, and I suspect we’ll be similarly surprised five years from now. In particular, there is great interest (both in and out of the government circles I travel in) in the “device-less” or environmental potential of new AR technologies. Not everyone will have a fancy smartphone on them at all times, or want to stare at a wall-monitor while also wearing glasses or holding a cellphone in front of them in order to access other planes of information. The really exciting premise of these new approaches is the fully immersive aspect of “spatial AR,” and the promise of controlling a live 3D environment of realtime data.

That vision begins to become “virtually real” with today’s HoloLens announcement.

Competitive Context

I’ll leave it to analysts, and to the holiday market later this year and next, to judge where the competing technologies lie on the “hype-curve” of reality and utility. But I can list the efforts I’m paying closest attention to, and why:

Samsung’s Gear VR and Project Beyond: The Gear VR headset hasn’t lit expectations very brightly among analysts or the tech media, but it does now have alongside it the recently announced “Project Beyond,” a 360-degree panopticon camera module which is planned to capture a gigapixel of surrounding 3D footage every second, and stream that footage back to someone wearing a Gear VR headset, “essentially transporting them into that world.” Unlike HoloLens, it’s not a full computer.

Google Glass: The granddaddy of widely available AR experiments. Withdrawn from the public last week, but not before inspiring a raft of venture-funded lookalikes which are now also-rans. Google undoubtedly learned a great deal by dipping its giant toe into the virtual realm so enthusiastically with its Explorers program, but most of my friends who participated developed a “ho-hum” attitude about the device, which now gathers dust on shelves across the world.

Magic Leap: Google’s withdrawal of Glass can be seen in the context of the revelation that the search/advertising giant has instead plowed a large amount of cash into this start-up, followed by several A-list Silicon Valley VC funds. Magic Leap has now raised an astonishing $542 million in Series B funding – yes, that’s half a billion, with no product or launch date in sight, but a long list of developer openings on its website. (But don’t worry, the company just hired a novelist as its Chief Futurist.)

Oculus VR and the Rift (or its follow-ons): Oculus Rift has to be considered the leading rival to Microsoft’s HoloLens, so much so that Facebook acquired the startup for an eye-opening $2 billion, ten months ago. Mark Zuckerberg at the time indicated patience and a long view in his strategy, but industry watchers don’t expect a device release until late 2015 or 2016. And Rift, as described to date, isn’t a full computing experience, merely a virtual-reality immersion. There’s also no see-through aspect to its headset (unlike the visible real-world context of HoloLens), which has led to widely reported nausea problems among Rift prototype users.

These all feel a bit laggard now, particularly because the companies involved (with the exception of Google) don’t have the experience of Microsoft in launching global computing platforms on which communities of developers can make magic. Most importantly, none of these efforts are audacious enough to incorporate a full computing device (CPU, GPU, wirelessly connected) into a comfortably wearable device.

Bottom Line for Government…

Ambient computational resources are driving a new revolution, which the private sector is exploiting rapidly. That industrial and consumer revolution runs in useful parallel with a virtuous cycle of ubiquitous sensing (the Internet of Things) producing zettabytes of Big Data, manipulated and mined by pioneering Machine Learning techniques for so-called “soft AI” (see IBM’s Irving Wladawsky-Berger in last week’s Wall Street Journal, “Soft Artificial Intelligence is Suddenly Everywhere”).

We humans, we of soft tissue, need all the help we can get to preside over those new and accelerating forces of technological change. The real magic is when our tools give us such powerful command in a simple and fun way. That is the promise of Holographic Computing, and HoloLens.

There are inevitably challenges. There’ll be devious uses of Holographic Computing, of course. Already we see the deceptive capabilities of regular screen-based “virtual reality,” and one can only imagine the perils of these techniques in the wrong hands, in full 3D immersion; check out these examples from the Emmy-winning special-video-effects (VFX) team behind HBO’s “Boardwalk Empire”:

We can’t allow government to waddle slowly behind, as real people live their lives increasingly affected by immersive technologies used for good or ill.

Governments exist to answer the needs of their citizens; government agencies and personnel should be using up-to-date tools capable of keeping up with what individual citizens are using, if only to avoid embarrassment and dinosauric irrelevance. Two early implications stand out:

- A government workforce and workplace transformed by the new kind of collaboration already evident in early applications like HoloSkype;

- Awareness of, and contemporaneous familiarity with, the technological changes affecting society, through consumer and entertainment channels.

I’ll end with the newly-released overall video on Microsoft’s Holographic Computing; note the NASA/Jet Propulsion Lab scenes studying the surface of Mars first-hand. Note the 3D modeling from HoloStudio and its infinite shelf of parts. Note the HoloSkype example of real-time step-by-step advice on technical repair, from someone remote yet as near as by your side.

According to the calendar, summer ended yesterday, and September has closed that door and opened others.

One door which opened for me is that I have just been elected as the new 2014-2015 Deputy Chairman of the AFCEA Intelligence Committee, serving under incoming Chair Jake Jacoby, a retired USN Vice Admiral whose day job is EVP of defense giant CACI International – though I like to think of him as my old boss from a decade ago, when he was Director of the Defense Intelligence Agency. I’ve written before about the AFCEA Intelligence Committee, which I described as “a prestigious collection of some of the smartest minds in that field… driving innovation, not only in intelligence but in the broader national security realm.” We proved that last week by joining INSA in hosting the well-reported Intelligence and National Security Summit in Washington DC, which made quite a bit of news with speakers like DNI Jim Clapper, the Directors of CIA, NSA, DIA, NGA, and FBI, and a myriad of other experts from inside and outside government – including privacy advocates, journalists, and government critics as panelists. I’m looking forward to more exciting activities and research over the next year with AFCEA.

At the same time, a door is closing. Due to Microsoft’s corporate restructuring, on Thursday September 18, 2014, the company made several tough decisions (see “Microsoft to close Microsoft Research lab in Silicon Valley” among other news stories). And that day marked the final day at the company for the merry band of brothers in the esteemed Microsoft Institute for Advanced Technology in Governments, which I have led since 2010. It was a pleasure to lead these extraordinary individuals and a privilege to work daily alongside the world’s most talented experts in their fields, guys like Dave Aucsmith, Bob Hayes, Bruce Harris, and Aris Pappas, who are each brilliant leaders and sterling friends.

I’m still on the payroll at Microsoft, and may or may not stay with the company, but I can’t say enough good things about what we all accomplished since I joined the Institute nearly seven years ago to work alongside geniuses like George Spix. (By the way, that’s the longest I have ever spent in any one place in my entire fun-packed career.) When I joined the group as its first Chief Technology Officer (CTO), straight from DIA, I found it filled with like-minded innovators, eager to enable difficult government missions with cutting-edge research and technical solutions. Much of what we did remains, necessarily, shrouded in corporate proprietary information and the nature of the sensitive counsel we provided senior government executives. But along the way we also wrote innovative white papers, conducted seminars, and traveled the world working with Microsoft’s field teams and solutions architects to devise unbelievable capabilities for local and national governments trying to serve and protect their citizens. Most of all, we had a blast working together.

Hundreds of senior government leaders from around the globe have visited the Microsoft Institute; some badges from our Executive Briefing Center

In the parlance of our day, I’m “updating my LinkedIn profile.” But I consider even that fun, because of the serendipitous breadth I see there, for a kid who has gone from writing dusty political science papers on civil-military relations, serving as a Cold War Pentagon Kremlinologist for Andy Marshall, doing policy and speeches for the mayors of San Francisco and San Jose, and helping launch an artificial-intelligence data-mining startup (successful!) in Silicon Valley – to then helping the IC answer the attacks of 9/11 and fight the Global War on Terror.

My time with Microsoft has been another incredible ride in a long, fun roadtrip … and I’m eager to turn the wheel around the next bend and floor it.


Amid the continuing controversies sparked by Edward Snowden’s whistleblowing (or defection, depending on your view) revelations, and their burgeoning effects on American technology companies and the tech industry worldwide, the afflicted U.S. intelligence community has quietly released a job advertisement for a premier position: the DNI’s National Intelligence Officer for Technology.

You can view the job posting at the USAJOBS site (I first noticed it on ODNI’s anodyne Twitter feed @ODNI_NIC), and naturally I encourage any interested and qualified individuals to apply. Keep reading after this “editorial-comment-via-photo”:

How you’ll often feel if you take this job…

Whether you find the NSA revelations to be infuriating or unsurprising (or even heartening), most will acknowledge that it is in the nation’s interest to have a smart, au courant technologist advising the IC’s leadership on trends and directions in the world of evolving technical capabilities.

In the interest of wider exposure, I excerpt below some of the notable elements in the job posting and description… and I add a particular observation at the bottom.

Job Title: National Intelligence Officer for Technology – 28259

Agency: Office of the Director of National Intelligence

Job Announcement Number: 28259

Salary Range: $118,932.00 to $170,000.00

Major Duties and Responsibilities:

Oversees and integrates all aspects of the IC’s collection and analytic efforts, as well as the mid- and long-term strategic analysis on technology.

Serves as the single focal point within the ODNI for all activities related to technology and serves as the DNI’s personal representative on this issue.

Maintains senior-level contacts within the intelligence, policymaking, and defense communities to ensure that the full range of informational needs related to emerging technologies are met on a daily basis, while setting strategic guidance to enhance the quality of IC collection and analysis over the long term.

Direct and oversee national intelligence related to technology areas of responsibility; set collection, analysis, and intelligence operations priorities on behalf of the ODNI, in consonance with the National Intelligence Priorities Framework and direction from the National Security Staff.

In concert with the National Intelligence Managers/NIOs for Science and Technology and Economic Issues, determine the state of collection, analysis, or intelligence operations resource gaps; develop and publish a UIS which identifies and formulates strategies to mitigate gaps; advise the Integration Management Council and Integration Management Board of the gaps, mitigation strategies, progress against the strategies, and assessment of the effectiveness of both the strategies and the closing of the intelligence gaps.

Direct and oversee Community-wide mid- and long-term strategic analysis on technology. Serve as subject matter expert and support the DNI’s role as the principal intelligence adviser to the President.

Oversee IC-wide production and coordination of NIEs and other community papers (National Intelligence Council (NIC) Assessments, NIC Memorandums, and Sense of the Community Memorandums) concerning technology.

Liaise and collaborate with senior policymakers in order to articulate substantive intelligence priorities to guide national-level intelligence collection and analysis. Regularly author personal assessments of critical emerging technologies for the President, DNI, and other senior policymakers.

Develop and sustain a professional network with outside experts and IC analysts, analytic managers, and collection managers to ensure timely and appropriate intelligence support to policy customers.

Brief senior IC members, policymakers, military decisionmakers, and other major stakeholders.

Review and preside over the research and production plans on technology by the Community’s analytic components; identify redundancies and gaps, direct strategies to address gaps, and advise the DNI on gaps and shortfalls in analytic capabilities across the IC.

Determine the state of collection on technology, identify gaps, and support integrated Community-wide strategies to mitigate any gaps.

Administer National Intelligence Officer-Technology resource allocations, budget processes and activities, to include the establishment of controls to ensure equities remain within budget.

Lead, manage, and direct a professional level staff, evaluate performance, collaborate on goal setting, and provide feedback and guidance regarding personal and professional development opportunities.

Establish and manage liaison relationships with academia, the business community, and other non-government subject matter experts to ensure the IC has a comprehensive understanding of technology and its intersection with global military, security, economic, financial, and/or energy issues.

…

Technical Qualifications:

Recognized expertise in major technology trends and knowledge of analytic and collection issues sufficient to lead the IC.

Superior ability to work with and fairly represent the IC when analytic views differ among agencies.

Superior communication skills, including ability to exert influence with senior leadership and communicate effectively with people at all staff levels, both internal and external to the organization, to give oral presentations and to otherwise represent the NIC in interagency meetings.

Expert leadership and managerial capabilities, including the ability to effectively direct taskings, assess and manage performance, and support personal and professional development of all levels of personnel.

Superior critical thinking skills and the ability to prepare finished intelligence assessments and other written products, with an emphasis on clear organization and concise, logical presentation.

Executive Core Qualifications (ECQs):

Leading People: This core qualification involves the ability to lead people toward meeting the organization’s vision, mission, and goals. Inherent to this ECQ is the ability to provide an inclusive workplace that fosters the development of others, facilitates cooperation and teamwork, and supports constructive resolution of conflicts. Competencies: Conflict Management, Leveraging Diversity, Developing Others, and Team Building.

Leading Change: This core qualification involves the ability to bring about strategic change, both within and outside the organization, to meet organizational goals. Inherent to this ECQ is the ability to establish an organizational vision and to implement it in a continuously changing environment. Competencies: Creativity and Innovation, External Awareness, Flexibility, Resilience, Strategic Thinking, and Vision.

…

HOW YOU WILL BE EVALUATED:

You will be evaluated based upon the responses you provide to each of the required Technical Qualifications (TQs) and Executive Core Qualifications (ECQs). When describing your TQs and ECQs, please be sure to give examples and explain how often you used these skills, the complexity of the knowledge you possessed, the level of the people you interacted with, the sensitivity of the issues you handled, etc. Your responses should describe the experience, education, and accomplishments which have provided you with the skills and knowledge required for this position. Current IC senior officers are not required to submit ECQs, but must address the TQs.

Only one note on the entire description, and it’s about that last line: “Current IC senior officers are not required to submit Executive Core Qualifications, but must address the Technical Qualifications.” This is perhaps the most important element in the entire description; it is assumed that “current IC senior officers” know how to lead bureaucratically, how to manage a staff – but in my experience it cannot be assumed that they are necessarily current on actual trends and advances in the larger world of technology. In fact, some might say the presumption would be against that currency. Yet they must be, for a variety of reasons never more salient than in today’s chaotically-evolving world.

Good luck to applicants.

[note: my title is of course a nod to the impressive education-reform documentary “Waiting for Superman“]

I’m always afraid of engaging in a “battle of wits” only half-armed. So I usually choose my debate opponents judiciously.

Unfortunately, I recently had a contest thrust upon me with a superior foe: my friend Mark Lowenthal, Ph.D. from Harvard, an intelligence community graybeard (literally!) and former Assistant Director of Central Intelligence (ADCI) for Analysis and Production, Vice Chairman of the National Intelligence Council – and as if that weren’t enough, a past national Jeopardy! “Tournament of Champions” winner.

As we both sit on the AFCEA Intelligence Committee and have also collaborated on a few small projects, Mark and I have had occasion to explore one another’s biases and beliefs about the role of technology in the business of intelligence. We’ve had several voluble but collegial debates about that topic, in long-winded email threads and over grubby lunches. Now, the debate has spilled onto the pages of SIGNAL Magazine, which serves as something of a house journal for the defense and intelligence extended communities.

SIGNAL Editor Bob Ackerman suggested a “Point/Counterpoint” short debate on the topic: “Is Big Data the Way Ahead for Intelligence?” Our pieces are side-by-side in the new October issue, and are available here on the magazine’s site.

Mark did an excellent job of marshalling the skeptic’s view on Big Data, under the not-so-equivocal title, “Another Overhyped Fad.” Below you will find an early draft of my own piece, an edited version of which is published under the title “A Longtime Tool of the Community”:

Visit the National Cryptologic Museum in Ft. Meade, Maryland, and you’ll see three large-machine displays, labeled HARVEST and TRACTOR, TELLMAN and RISSMAN, and the mighty Cray XMP-24. They’re credited with helping win the Cold War, from the 1950s through the end of the 1980s. In fact, they are pioneering big-data computers.

Here’s a secret: the Intelligence Community has necessarily been a pioneer in “big data” since inception – both our modern IC and the science of big data were conceived during the decade after the Second World War. The IC and big-data science have always intertwined because of their shared goal: producing and refining information describing the world around us, for important and utilitarian purposes.

What do modern intelligence agencies run on? They are internal combustion engines burning pipelines of data, and the more fuel they burn the better their mileage. Analysts and decisionmakers are the drivers of these vast engines, but to keep them from hoofing it, we need big data.

Let’s stipulate that today’s big-data mantra is overhyped. Too many technology vendors are busily rebranding storage or analytics as “big data systems” under the gun from their marketing departments. That caricature is, rightly, derided by both IT cognoscenti and non-techie analysts.

I personally get the disdain for machines, as I had the archetypal humanities background and was once a leather-elbow-patched tweed-jacketed Kremlinologist, reading newspapers and HUMINT for my data. I stared into space a lot, pondering the Chernenko-Gorbachev transition. Yet as Silicon Valley’s information revolution transformed modern business, media, and social behavior across the globe, I learned to keep up – and so has the IC.

Twitter may be new, but the IC is no Johnny-come-lately in big data on foreign targets. US Government funding of computing research in the 1940s and ‘50s stretched from World War II’s radar/countermeasures battles to the elemental ELINT and SIGINT research at Stanford and MIT, leading to the U-2 and OXCART (ELINT/IMINT platforms) and the Sunnyvale roots of NRO.

In all this effort to analyze massive observational traces and electronic signatures, big data was the goal and the bounty.

War planning and peacetime collection were built on collection of ever-more-massive amounts of foreign data from technical platforms – telling the US what the Soviets could and couldn’t do, and therefore where we should and shouldn’t fly, or aim, or collect. And all along, the development of analog and then digital computers to answer those questions, from Vannevar Bush through George Bush, was fortified by massive government investment in big-data technology for military and intelligence applications.

Those three Ft. Meade museum displays demonstrate how NSA and the IC pioneered those “modern” big-data tasks. Storage is represented by TELLMAN/RISSMAN, running from the 1960s throughout the Cold War using innovation from Intel. Search and retrieval were the hallmark of HARVEST/TRACTOR, built by IBM and StorageTek in the late 1950s. Repetitive what-if analytic runs boomed in 1983, when Cray delivered a supercomputer to a customer site for the first time ever.

The benefit of IC early adoption of big data wasn’t only to cryptology – although decrypting enemy secrets would be impossible without it. More broadly, computational big-data horsepower was in use constantly during the Cold War and after, producing intelligence that guided US defense policy and treaty negotiations or verification. Individual analysts formulated requirements for tasked big-data collection with the same intent as when they tasked HUMINT collection: to fill gaps in our knowledge of hidden or emerging patterns of adversary activities.

That’s the sense-making pattern that leads from data to information, to intelligence and knowledge. Humans are good at it, one by one. Murray Feshbach, a little-known Census Bureau demographic researcher, made astonishing contributions to the IC’s understanding of the crumbling Soviet economy and its sociopolitical implications by studying reams of infant-mortality statistics, and noticing patterns of missing data. Humans can provide that insight, brilliantly, but at the speed of hand-eye coordination.

Machines make a passable rote attempt, but at blistering speed, and they don’t balk at repetitive, mind-numbing data volume. Amid the data, patterns emerge. Today’s Feshbachs want an Excel spreadsheet or Hadoop table at hand, so they’re not limited to the data they can reasonably carry in their mind’s eye.
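To make the contrast concrete, here’s a minimal sketch of the machine-speed version of Feshbach’s method, in Python with invented numbers: scan a table of regional statistics and flag where the reporting suddenly goes silent.

```python
from collections import defaultdict

# Hypothetical records: (region, year, infant deaths per 1,000 births);
# None marks a figure that was never published.
reports = [
    ("Region A", 1979, 27.1), ("Region A", 1980, 28.4), ("Region A", 1981, 29.9),
    ("Region B", 1979, 24.0), ("Region B", 1980, None), ("Region B", 1981, None),
    ("Region C", 1979, 31.2), ("Region C", 1980, 33.0), ("Region C", 1981, None),
]

by_region = defaultdict(list)
for region, year, rate in reports:
    by_region[region].append((year, rate))

for region, series in sorted(by_region.items()):
    missing = [year for year, rate in series if rate is None]
    if missing:
        # The gap itself is the signal: which regions stopped reporting, and when?
        print(f"{region}: no figures published for {missing}")
```

A human can spot this pattern one page at a time; the machine does it across millions of rows without blinking.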

To cite a recent joint research paper from Microsoft Research and MIT, “Big Data is notable not because of its size, but because of its relationality to other data. Due to efforts to mine and aggregate data, Big Data is fundamentally networked. Its value comes from the patterns that can be derived by making connections between pieces of data, about an individual, about individuals in relation to others, about groups of people, or simply about the structure of information itself.” That reads like a subset of core requirements for IC analysis, whether social or military, tactical or strategic.
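Here’s a toy sketch of that “relationality” point, with invented names and links: no single record below is interesting on its own, but a simple breadth-first search over the connections surfaces the chain an analyst actually cares about.

```python
from collections import deque

# Invented contact records: each entry maps a record to the records it links to.
contacts = {
    "alpha": {"bravo", "charlie"},
    "bravo": {"alpha", "delta"},
    "charlie": {"alpha"},
    "delta": {"bravo", "echo"},
    "echo": {"delta"},
}

def link_path(graph, start, goal):
    """Breadth-first search: shortest chain of connections between two records."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no connection found

print(link_path(contacts, "alpha", "echo"))  # ['alpha', 'bravo', 'delta', 'echo']
```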

The synergy of human and machine for knowledge work is much like modern agricultural advances – why would a farmer today want to trudge behind an ox-pulled plow? There’s no zero-sum choice to be made between technology and analysts, and the relationship between CIOs and managers of analysts needs to be nurtured, not cleaved apart.

What’s the return for big-data spending? Outside the IC, I challenge humanities researchers to go a day without a search engine. The IC’s record is just as clear. ISR, targeting, and warning are better because of big data; data-enabled machine translation of foreign sources opens the world; correlation of anomalies amid large-scale financial data pinpoints otherwise unseen hands behind global events. Why, in retrospect, the Iraq WMD conclusion was a result of remarkably-small-data manipulation.

Humans will never lose their edge in analyses requiring creativity, smart hunches, and understanding of unique individuals or groups. If that’s all we need to understand the 21st century, then put down your smartphone. But as long as humans learn by observation, and by counting or categorizing those observations, I say crank the machines for all their robotic worth.

Make sure to read both sides, and feel free to argue your own perspective in a comment on the SIGNAL site.

I recall, one year ago this week, sitting at home on the edge of my seat, intently watching on my wallscreen the live countdown to Felix Baumgartner‘s stunning Red Bull Stratos mission to “transcend human limits” by calmly stepping off an ultra-high-altitude balloon capsule. On the way down he would go supersonic and set numerous records, most significantly the highest-altitude human jump (128,100 feet).

To mark the anniversary, the Stratos folks have just released a well-done information-visualization of his feat, featuring for the first time Felix’s own actual view of the jump – a nicely arranged combination of synchronized views as he hurtled to earth, captured by three cameras mounted on his space-suit, including his helmet cam. You’ll also see gauges noting his Altitude, Airspeed, G-Force, and “Biomed” (heart rate, breath rate).

A couple of datapoints which stood out for me: After his ledge salute and headfirst dive, Felix goes from zero to 100 mph in 4.4 seconds, hitting Mach 1 (or 689 mph) in just 33.2 seconds. It’s also fascinating to watch his heart rate, which (exemplifying his astronaut coolness under pressure) actually decreases from 181 bpm at jump to around 163 bpm as he quickly adjusts; it then rises and falls as he encounters and then controls a severe spin.
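As a quick back-of-the-envelope check (my own arithmetic, using only the figures above), those numbers fit near-free-fall in the thin stratosphere:

```python
# Sanity-check the quoted figures: in near-vacuum a falling body should
# accelerate at roughly g = 9.8 m/s^2, since there is almost no drag.
MPH_TO_MS = 0.44704  # miles per hour -> metres per second

accel = 100 * MPH_TO_MS / 4.4  # 0 to 100 mph in 4.4 s
# ~10.2 m/s^2; the slight excess over g just reflects rounding in the quoted figures.
print(f"average acceleration: {accel:.1f} m/s^2 (g is 9.8)")

# Time to reach Mach 1 (quoted as 689 mph at that altitude) under pure free fall:
t_mach1 = 689 * MPH_TO_MS / 9.8
print(f"ideal free-fall time to Mach 1: {t_mach1:.1f} s (observed: 33.2 s)")
```

The near-match to g, and the slightly longer observed time to Mach 1, both fit a jump that begins in near-vacuum and gradually meets thickening air.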

His chute deploys about halfway into this nine-minute video, but watching to the end is worth it as he masterfully glides to earth, landing in a suave trot on his feet. Enjoy this look back at a universal Superman.

I like writing about cool applications of technology that are so pregnant with the promise of the future that they have to be seen to be believed, and here’s another one that’s almost ready for prime time.

The Washington Post today launched an exciting new technology prototype invoking powerful new technologies for journalism and democratic accountability in politics and government. As you can see from the screenshot (left), it runs an automated fact-checking algorithm against the streaming video of politicians or other talking heads and displays in real time a “True” or “False” label as they’re speaking.

Called “Truth Teller,” the system uses technologies from Microsoft Research and Windows Azure cloud-computing services (I have included some of the technical details below).

But first, a digression on motivation. Back in the late 1970s I was living in Europe and was very taken with punk rock. Among my favorite bands were the UK’s anarcho-punk collective Crass, and in 1980 I bought their compilation LP “Bullshit Detector,” whose title certainly appealed to me because of my equally avid interest in politics :)

Today, my driving interests are in the use of novel or increasingly powerful technologies for the public good, by government agencies or in the effort to improve the performance of government functions. Because of my Jeffersonian tendencies (I did after all take a degree in Government at Mr. Jefferson’s University of Virginia), I am even more interested in improving government accountability and popular control over the political process itself, and I’ve written or spoken often about the “Government 2.0” movement.

In an interview with GovFresh several years ago, I was asked: “What’s the killer app that will make Gov 2.0 the norm instead of the exception?”

My answer then looked to systems that might “maintain the representative aspect (the elected official, exercising his or her judgment) while incorporating real-time, structured, unfiltered but managed visualizations of popular opinion and advice… I’m also a big proponent of semantic computing – called Web 3.0 by some – and that should lead the worlds of crowdsourcing, prediction markets, and open government data movements to unfold in dramatic, previously unexpected ways. We’re working on cool stuff like that.”

The Truth Teller prototype is an attempt to construct a rudimentary automated “Political Bullshit Detector,” and it addresses each of those factors I mentioned in GovFresh – recognizing the importance of political leadership and its public communication, incorporating iterative aspects of public opinion and crowd wisdom, all while imbuing automated systems with semantic sense-making technology to operate at the speed of today’s real world.

Real-time politics? Real-time truth detection. Or at least that’s the goal; this is just a budding prototype, built in three months.

Cory Haik, the Post’s Executive Producer for Digital News, says it “aims to fact-check speeches in as close to real time as possible,” whether in live speeches, TV ads, or interviews. Here’s how it works:

The Truth Teller prototype was built and runs with a combination of several technologies — some new, some very familiar. We’ve combined video and audio extraction with a speech-to-text technology to search a database of facts and fact checks. We are effectively taking in video, converting the audio to text (the rough transcript below the video), matching that text to our database, and then displaying, in real time, what’s true and what’s false.

We are transcribing videos using Microsoft Audio Video Indexing Service (MAVIS) technology. MAVIS is a Windows Azure application which uses state-of-the-art Deep Neural Net (DNN) based speech recognition technology to convert audio signals into words. Using this service, we are extracting audio from videos and saving the information in our Lucene search index as a transcript. We are then looking for the facts in the transcription. Finding distinct phrases to match is difficult. That’s why we are focusing on patterns instead.

We are using approximate string matching, or a fuzzy string searching algorithm. We are implementing a modified version of Rabin-Karp using the Levenshtein distance algorithm as our first implementation. This will be extended to recognize paraphrasing and negative connotations in the future.

What you see in the prototype is actual live fact checking — each time the video is played the fact checking starts anew.
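To give a flavor of how that matching step might work, here’s a simplified sketch. The Post describes a modified Rabin-Karp using Levenshtein distance; my stand-in below just slides a window across the transcript and scores each window against a small invented fact database with plain edit distance, so it illustrates the idea rather than the Post’s actual code.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # substitute
        prev = cur
    return prev[-1]

# claim text -> verdict assigned by human fact-checkers (hypothetical entries)
fact_db = {
    "the deficit has doubled in the last four years": "FALSE",
    "unemployment is at a five year low": "TRUE",
}

def check_transcript(words, max_rel_dist=0.25):
    """Find the transcript window closest to each claim; report it if close enough."""
    for claim, verdict in fact_db.items():
        n = len(claim.split())
        if len(words) < n:
            continue
        windows = [" ".join(words[i:i + n]).lower()
                   for i in range(len(words) - n + 1)]
        best = min(windows, key=lambda w: levenshtein(w, claim))
        if levenshtein(best, claim) <= max_rel_dist * len(claim):
            yield verdict, best

transcript = ("folks the deficit has doubled over the last four years "
              "and yet unemployment is at a five-year low").split()

for verdict, match in check_transcript(transcript):
    print(f"{verdict}: ...{match}...")
```

A production system would use the Rabin-Karp rolling-hash trick to prune candidate windows cheaply, and would need the paraphrase and negation handling the Post team mentions; this sketch only shows why fuzzy matching catches claims that exact phrase search would miss.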

What other uses could be made of semantic “truth detection” or fact-checking, in other aspects of the relationship between the government and the governed?

Could the justice system use something like Truth Teller, or will human judges and juries always have a preeminent role in determining the veracity of testimony? Will police officers and detectives be able to use cloud-based mobile services like Truth Teller in real time during criminal investigations as they’re evaluating witness accounts? Should the Intelligence Community be running intercepts of foreign terrorist suspects’ communications through a massive look-up system like Truth Teller?

Perhaps, and time will tell how valuable – or error-prone – these systems can be. But in the next couple of years we will be developing (and be able to assess the adoption of) increasingly powerful semantic systems against big-data collections, using faster and faster cloud-based computing architectures.

In the meantime, watch for further refinements and innovation from The Washington Post’s prototyping efforts; after all, we just had a big national U.S. election, but congressional elections in 2014 and the presidential race in 2016 are just around the corner. Like my fellow citizens, I will be grateful for any help in keeping candidates accountable to something resembling “the truth.”

The year draws to a close… and while the banality and divisiveness of politics and government have been on full display around the world during the past twelve months, the year has been rewarding for me personally whenever I could retreat into the world of research. Fortunately there’s a great deal of it going on among my colleagues.

2012 has been a great year for Microsoft Research, and I thought I’d link you to a quick set of year-in-review summaries of some of the exciting work that’s been performed and the advances made:

The work ranges from our Silicon Valley lab work in “erasure code” to social-media research at the New England lab in Cambridge, MA; from “transcending the architecture of quantum computers” at our Station Q in Santa Barbara, to work on cloud data systems and analytics by the eXtreme Computing Group (XCG) in Redmond itself.

Across global boundaries we have seen “work towards a formal proof of the Feit-Thompson Theorem” at Microsoft Research Cambridge (UK), and improvements for Bing search in Arab countries made at our Advanced Technology Labs in Cairo, Egypt.

All in all, an impressive array of research advances, benefiting from an increasing amount of collaboration with academic and other researchers as well. The record is one more fitting tribute to our just-departing Chief Research and Strategy Officer Craig Mundie, who is turning over his reins, including MSR oversight, to Eric Rudder (see his bio here), while Craig focuses for the next two years on special work reporting to CEO Steve Ballmer. Eric’s a great guy and a savvy technologist, and has been a supporter of our Microsoft Institute’s work as well … I did say he’s savvy :)

There’s a lot of hard work already going on in projects that should pay off in 2013, and the New Year promises to be a great one for technologists and scientists everywhere – with the possible exception of any remaining Mayan-apocalypse/ancient-alien-astronaut-theorists. But even to them, and perhaps most poignantly to them, I say Happy New Year!