As part of the Journalism 360 Challenge, Google News Lab, Knight Foundation and the Online News Association are awarding $285,000 (£221,700) to 11 projects that aim to accelerate the use of immersive storytelling in news.

Announced today (11 July), the winners of the Journalism 360 Challenge, which launched in March, include projects that will explore the formats, ethics and production of virtual reality (VR), augmented reality (AR), and 360-degree video.

The recipients of the funding include initiatives that aim to make these technologies more widely available to the public, apps or platforms that recreate news events the audience would otherwise be unable to access, and tools that combine data and information visualisation with immersive storytelling.

The Washington Post has been awarded $30,000 (£23,300) to develop 'Facing bias', a smartphone tool that will use augmented reality to analyse people's facial expressions when they read stories or view images that either affirm or contradict their beliefs.

Emily Yount, interaction designer at The Post and the project's lead, told Journalism.co.uk the idea came from brainstorming ways to help readers understand and be aware of "how their own thoughts and beliefs affect how they perceive news".

"We'd heard about this API called the Microsoft Emotion API which can use your device's camera to read your facial expressions and tell you what you may be feeling based on your expression.

"There's a lot of research into micro-expressions, the little things we do with our faces that tell a bigger picture of what we're feeling, even if we're not trying to tell people or even if we are trying to hide it, and there is also a lot of research about bias in news.

"So we're going to be reaching out to researchers across a couple of different disciplines and pull everything together into this experience."

The tool, expected to be built and available within the next six to 12 months, will likely work across platforms, so it can be integrated with any experience that requires camera access on different devices.

A person will either read an article or be presented with a series of statements or images, and the Emotion API will analyse their facial expressions in real time to give them an idea of what their perspective on an issue might be, based on their reactions. For privacy reasons, users will be told their facial expressions will be analysed, but the information will not be stored or reused.
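To give a sense of how such a tool might interpret the API's output: the Microsoft Emotion API returned, for each detected face, a bounding box and a dictionary of emotion scores. The sketch below shows one plausible way to pick out the dominant emotion per face from a response of that shape. The sample data and the `dominant_emotions` helper are illustrative assumptions, not part of The Post's actual project.

```python
import json

# Sample response in the shape the Microsoft Emotion API returned:
# one entry per detected face, each with a bounding box and a
# dictionary of emotion scores (assumed values, for illustration).
SAMPLE_RESPONSE = json.dumps([
    {
        "faceRectangle": {"left": 68, "top": 97, "width": 64, "height": 97},
        "scores": {
            "anger": 0.01, "contempt": 0.02, "disgust": 0.01,
            "fear": 0.01, "happiness": 0.03, "neutral": 0.12,
            "sadness": 0.05, "surprise": 0.75,
        },
    }
])

def dominant_emotions(response_json: str) -> list:
    """Return the highest-scoring emotion label for each detected face."""
    faces = json.loads(response_json)
    return [max(face["scores"], key=face["scores"].get) for face in faces]

print(dominant_emotions(SAMPLE_RESPONSE))  # ['surprise']
```

In a live tool, the response would come from frames captured by the device's camera rather than a hard-coded sample, and the per-face labels would feed whatever feedback the experience shows the reader.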