A meeting place where researchers, experts and thinkers can present results from their latest work in the conference thematic areas.

Special academic sessions (e.g. demonstrations, workshops and multidisciplinary sessions) held in parallel with the Mindtrek conference.

A real chance for media enthusiasts to think outside the box.

Keynote speakers will be announced later.

We are pleased to invite you to the 21st International Academic Mindtrek conference, 20th to 21st September 2017. Academic Mindtrek is a meeting place where researchers, experts and thinkers present results from their latest work regarding the development of novel technology, media and digital culture for the society of tomorrow.

Academic Mindtrek is part of the renowned Mindtrek business conference. Mindtrek brings together people not only from various fields and domains but also from different sectors: from companies, startups, academia and various governmental institutions. This makes Mindtrek the perfect opportunity for advancing research results towards practical utilization by the industry, as well as getting out-of-the-box research ideas based on the interaction with practitioners.

Mindtrek events are accessible to Academic Mindtrek attendees, and vice versa.

The academic conference features the following major themes:

Human-computer interaction (HCI)

Interaction design and user experience

Developer experience

Games and gamification

Virtual, augmented and mixed reality

Collaboration, literacies and multimedia technologies in education

Crowdsourcing and citizen participation

Open data and data science

New forms of journalism and media

Theatre, performance and media

Enhancing work in socio-technological environments

We are especially enthusiastic about applied research and papers related to practical work.

Academic Mindtrek is organized in cooperation with ACM SIGMM and ACM SIGCHI. The conference proceedings, which include full papers, posters, workshop proposals and demonstration proposals, will be published in the ACM Digital Library. All papers should follow the style guidelines of the conference (more information under submission guidelines). In the Finnish classification of publication forums, Academic Mindtrek proceedings are classified as Jufo 1.

[The VR add-on described in this story from Seeker should enhance presence – note the last short paragraph, in which the MindMaze creator and CEO says “We’re moving away from VR as a technological experience to being a real human experience…” The original story includes other images and a video. See coverage of Google’s related tech in an ISPR Presence News post from a few months ago. –Matthew]

If Facebook CEO Mark Zuckerberg is reading his crystal ball correctly, then the next big thing will be social virtual reality. In the very near future, you’ll put on a virtual reality headset and meet up with friends for virtual hangouts, live concerts, and interactive games.

But as anyone who survived the early Second Life scene can attest, virtual avatars can be pretty socially inept. After all, there’s only so much you can say with a permasmile frozen on your face.

This week, a neurotechnology company based in Switzerland called MindMaze unveiled a product that can synchronize a variety of human facial expressions on virtual avatars. Called MASK, the technology reads your brain signals to predict a smile or a wink milliseconds before you even move. The result is a faster-than-real-time reflection of your changing facial expressions that has the potential to add new emotional depth to social and gaming interactions in VR and bring the technology’s use further into the mainstream. Read more on MindMaze’s neural VR interface reads your mind to reflect your facial expression…

Interactive technology, such as virtual agents, can improve social skills training curricula. For example, police officers can train for interviewing suspects or detecting deception with a virtual agent. Other application areas include (but are not limited to) social workers (training for dealing with broken homes), psychiatrists (training for interviewing people with various difficulties, vulnerabilities, or personalities), and training of general social skills such as job interviews or social stress management.

We invite all researchers who investigate the design, implementation, and evaluation of technology to submit their work to this special issue on Virtual Agents for Social Skills Training (VASST). By this technology we mean virtual agents for social skills training and any supporting technology. The aim of this special issue is to give an overview of recent developments in interactive virtual agent applications with the goal of improving social skills. Research on VASST reaches across multiple research domains: intelligent virtual agents, (serious) game mechanics, human factors, (social) signal processing, user-specific feedback mechanisms, automated education, and artificial intelligence.

SCOPE

We welcome (literature) studies describing the state-of-the-art for sensing user behaviour, reasoning about this behaviour, and generation of virtual agent behaviour in training scenarios. Topics related to VASST include, but are not limited to:

Recognition and interpretation of (non)verbal social user behaviours;

Training and fusion of user’s signs detected in different modalities;

User/student profiling, such as level or training style preference;

Anonymous processing of user data;

Dialogue and turn-taking management;

Social-emotional and cognitive models;

Automatic improvement of knowledge representations;

Coordination of signs to be displayed by the virtual agents in several modalities.

[The evolution of presence-evoking technology will increasingly make it harder to distinguish the ‘real’ from the artificial, with both positive and negative consequences. This story is from TechCrunch, where it includes a video of the (real) Lyre bird in action. –Matthew]

A Montreal-based AI startup called Lyrebird has taken the wraps off a voice imitation algorithm that the team says can not only mimic the speech of a real person but shift its emotional cadence — and do all this with just a tiny snippet of real world audio.

The public demo, released online yesterday, consists of a series of audio samples of (fake) speech generated using their algorithm from one-minute voice samples of the speakers. They’ve used voice samples from Presidents Trump, Obama and Hillary Clinton to demo the tech in action — and for maximum FAKE NEWS impact, obviously.

And here’s a totally fabricated discussion between fake Trump, fake Obama and fake Clinton. Truly we live in the strangest times…

Lyrebird says its intention is to offer an API in the future so that third parties can make use of the audio mimicry technology for their own ends. So if you think fake news online is bad now, wait until there’s a tech that lets anyone generate a ‘recording’ of a person apparently incriminating themselves, trivially easily. Read more on Lyrebird is a voice mimic for the fake news era…

The 23rd ACM Symposium on Virtual Reality Software and Technology (VRST 2017) is an international forum for the exchange of experience and knowledge among researchers, developers, and industry concerned with virtual and augmented reality (VR/AR) software and technology. VRST provides an opportunity for VR/AR researchers to interact, share new results, show live demonstrations of their work, and discuss emerging directions for the field. The event is sponsored by ACM SIGCHI and ACM SIGGRAPH.

We invite original, high-quality research papers in all areas of virtual reality, augmented reality, mixed reality, as well as 3D interaction. Research papers should describe results that contribute to advancements in the following areas:

[Can presence provide the motivation and distraction for long-term physical fitness? This story from Bloomberg examines some of the issues (and includes two more images). –Matthew]

Virtual Reality Hits the Gym

Icaros lets exercisers feel like they’re flying or diving

Skeptics say gimmicks won’t trick brain into making body work

by Yuji Nakamura
April 26, 2017

Johannes Scholl is betting virtual reality can keep people excited about working out.

Scholl’s startup, Munich-based Icaros GmbH, has developed a VR exercise machine that delivers a core workout by making it seem like users are flying and deep-ocean diving. About 200 gyms and entertainment centers from London to Tokyo have installed the machines, which cost about $10,000 including shipping and other costs. A cheaper home version for about $2,000 is under development and could be unveiled around the start of next year.

“There’s no comparable thing you can do at a gym,” says Scholl, who co-founded Icaros in 2015 with fellow industrial designer Michael Schmidt.

The fitness industry has been trying for decades to make exercise less boring — from TVs embedded in treadmills to apps nudging users to stay on schedule — but technology has yet to find a cure for the monotony of working out. Scholl is part of a nascent community that believes the addictive pull of video games combined with the immersive power of VR will do the trick. Read more on VR and presence at the gym…

[Image: Applied Biomechanics Laboratory at UNC (Courtesy of the UNC/NC State Department of Biomedical Engineering)]

Can virtual reality help us prevent falls in the elderly and others?

For the elderly and people with neurodegenerative conditions, balance is not taken for granted. UNC and NC State biomedical engineers are using a new virtual reality system that might one day be used to reveal balance impairments currently undetectable during conventional testing or normal walking.

CHAPEL HILL, NC – Every year, falls lead to hospitalization or death for hundreds of thousands of elderly Americans. Standard clinical techniques generally cannot diagnose balance impairments before they lead to falls. But researchers from the University of North Carolina at Chapel Hill and North Carolina State University have found evidence that virtual reality (VR) could be a big help – not only for detecting balance impairments early, but perhaps also for reversing those impairments and preventing falls.

In a study published in Nature Scientific Reports, a research team led by Jason R. Franz, PhD, assistant professor in the Joint UNC/NC State department of biomedical engineering, used a novel VR system to create the visual illusion of a loss of balance as study participants walked on a treadmill. By perturbing their sense of balance in this way and recording their movements, Franz’s team was able to determine how the participants’ muscles responded. In principle, a similar setup could be used in clinical settings to diagnose balance impairments, or even to train people to improve their balance while walking. Read more on Using VR to help prevent falls in the elderly and others…

Automotive user interfaces, and especially automated vehicle technology, pose plenty of challenges to researchers, vehicle manufacturers, and third-party suppliers in supporting the diverse facets of user needs. For example, challenges emerge from the variety of user groups, ranging from inexperienced, thrill-seeking young novice drivers to elderly drivers with their natural limitations. To allow assessing the quality of automotive user interfaces and automated driving technology already during development and within virtual test processes, the proposed workshop is dedicated to the quest of finding objective and quantifiable quality criteria for describing future driving experiences. The workshop is intended for HCI, AutomotiveUI, and “Human Factors” researchers and practitioners, as well as for designers and developers.

The main aim of this workshop is to discuss methods and models for the quantification of quality criteria for automotive user interfaces in the transition from manual to automated driving (human factors perspective).

[This story from Haaretz discusses design decisions and the power and ethics of presence experiences across media in the most serious of contexts. The original version includes more images and a 51-second video. –Matthew]

How Virtual Reality Is Reinventing Holocaust Remembrance

In ‘The Last Goodbye’ at the Tribeca Virtual Arcade this month, the viewer wears a virtual-reality headset as a survivor recounts his ordeal at Majdanek. It’s an experience more authentic than ‘Shoah,’ its producer says

NEW YORK − When asked a question, Pinchas Gutter doesn’t simply provide an answer − the 85-year-old Holocaust survivor tells a story.

In an interview Saturday during a lunch in his honor at the Tribeca Film Festival, Gutter recalled how he barely survived five concentration camps and a death march from Germany to Czechoslovakia. In the early ‘50s, the Jewish orphan who lost his family at Majdanek decided to volunteer for the Israeli army. He later moved to Jerusalem and found himself working in construction. The project he was helping build was the Yad Vashem Holocaust memorial museum.

While Yad Vashem − with its vast archive, outdoor sculptures and memorial sites such as the Children’s Memorial and Hall of Remembrance − set the standard for remembrance centers around the world, two new initiatives featuring Gutter can teach us something about the future of Holocaust education and preservation.

The first is “New Dimensions in Testimony,” which premiered last year at the international documentary festival in Sheffield, England. It featured a 3-D responsive hologram of Gutter − letting audiences ask questions and receive answers based on his prerecorded memories. The second initiative, which can be seen at the Tribeca Virtual Arcade until April 29, is “The Last Goodbye” − the first-ever immersive recreation of a concentration camp, shot at Majdanek last summer.

Gutter, who carefully leads the viewer of “The Last Goodbye” through Majdanek while recounting his tale of survival and loss, is a remarkable storyteller. In a moment of self-reflection he states with a smile, “I guess that’s why they chose me as their guinea pig.” “They” refers to the team behind this virtual-reality work, which was directed by Gabo Arora and Ari Palitz and produced by Stephen Smith in association with the University of Southern California and the USC Shoah Foundation.

While Gutter and 11 other survivors in “New Dimensions” were transformed into responsive holograms, his participation in “The Last Goodbye” takes memorialization and technology one step further. Upon entering a white exhibition space at Tribeca’s Spring Studios on Varick Street, you’re asked to take off your shoes and put on a VR headset covering your eyes and ears. You then meet Gutter in an unlikely place: a hotel bathroom in which the octogenarian shaves in front of a small mirror. You’re barefoot while Gutter is wearing a white bathrobe. Using a voice-over, Gutter confesses that he’s extremely anxious about going back to Majdanek for what he describes as “my very last visit to the camp.” Read more on How VR and presence are reinventing Holocaust remembrance…