augmented.org – augmented reality news & blog
http://www.augmented.org/blog
AWE Conference – Augmented Reality coming home
http://www.augmented.org/blog/2017/09/awe-conference-augmented-reality-coming-home/
Tue, 26 Sep 2017

The Augmented World Expo AR conference is coming home to Europe, to Munich in October 2017. That’s so great! Of course, I’m biased, since I’ve been living and working there for many years, working with lots of companies on AR, including metaio. Munich sure has been, and still is, a center of AR activity in Europe – and the world! – and though metaio was swallowed by Apple, other big players like Microsoft and Google are still on-site in Munich in a big way, and the Bavarian mixture of laptop and lederhosen will surely attract more people in the future – and to the upcoming AWE conference! augmented.org will be there and report live from the event. So, why not take a sneak peek today at what we can expect!?

Short Facts

The two-day conference, with many talks on augmented reality and a large exhibition area, will be held on October 19–20, 2017. It all takes place in the MOC Exhibition Center in the north of Munich (easily reached by public transport). This is the European branch of AWE, with around 1,500 attendees and roughly 100 speakers, according to the organizers. I don’t care much about how huge the event is; more importantly, many big players and well-known hardware and software companies present their work, and high-class experts share their superhero wisdom with the crowd. If you have one business trip left to choose in 2017, I’d make it this one. Let’s take a look:

The first day focuses more on the technology, current use cases and how to implement AR in your processes. Some details (keynote speaker?) are yet to be revealed, but besides some hopefully stunning presentations (e.g. by META), we can see several interesting blocks. One is about the underlying tech, like 3D scanning, time-of-flight cameras and the resulting SDKs like Tango, or software solutions using computer vision to understand the world better (“3D Map Data: Digital Scaffolding for the 21st Century”). Tech companies like NVIDIA show off their latest stuff, and I especially liked the title on “Symbiotic AI” – letting computers learn how they should live in a human world, and not vice versa! Good thought! More tech? Avegant, for example, will dive into light field issues and concepts.

Bosch and Audi, for example, will show how to integrate AR into your enterprise in a production environment, and so will VisionLib, talking about taking the pain out of going from a prototype in a controlled environment out into the street. Other companies will showcase their stories and let newbies learn about where AR is being used in industry today – for example, using AR in logistics scenarios for pickers, or enabling better in-house navigation in factories and warehouses. HOLOGATE will talk about location-based VR/AR entertainment today.

Currently used technology will be shown on day 1, too. The usual suspects will present their latest SDK updates and let us know what they have in store, like Wikitude talking about how to go beyond ARKit and ARCore (which are, honestly, quite restrictive). Vuforia will present their new model targets feature to better include real-world physical objects. Web tech like A-Frame will also get its moment.

Day 2

While day 1 focused more on technology and the status quo, the second day is more content- and crowd-driven. By crowd I mean companies and people who want to present and pitch their ideas and connect. Special slots for Startup Pitches and an Investor Networking Event are planned.

More talks dive into design, content creation and tools, user interface design in an MR space, and what we need to think about. “Immersive Entertainment” will tell us about the impact on music, film, TV – the old media. How will AR and VR change art and the creation of art? Storytellers should gather to listen to “Imagine”, on the future of their métier, and to the talk “Exploring the Frontiers of How Storytellers Are Teaming up to Transform Lives with VR”.

To me, it sounds like a good mixture of content, tech and future vision. Ori will wrap it all up and also announce the Auggie Awards winners on day 2 – for the best and most innovative AR concepts. Do you like the agenda? What’s your favorite? I’m happy to hear about it via @augmentedorg with #AWE2017.

So make sure to join! If you have read this far, you deserve a reward: if you send me a mail to AWE-at-augmented.org, you have a chance of winning a 45% discount on the ticket! (Lottery on Oct 5th!) I’m also happy to meet up with you during the conference – let me know your thoughts. If you are in Munich longer, be sure to check out our regulars’ table #ARMUC and enjoy the (hopefully) sunny fall in Munich. My tip: check out the surfers’ wave on the Eisbach with a bottle of the best local beer: an Augustiner.

Google’s ARCore attacks Apple’s ARKit
http://www.augmented.org/blog/2017/08/googles-arcore-attacks-apples-arkit/
Tue, 29 Aug 2017

Google watching WWDC as Apple announced ARKit: “Ooh, they launched their AR stuff without waiting for a depth sensor? Didn’t see that coming. Oh well, then let’s just //-comment out the Tango calls and quickly make it another SDK!” This is not how it happened at Google. But it feels like it. Out of the blue, Google launches their new SDK called “ARCore” today. It’s dedicated to mobile phone augmented reality using standard sensors – a direct reply to Apple’s freshly presented ARKit. Wow, that came as a bit of a surprise!

…but then again, it did not come as that much of a surprise. Google developers have often discussed the tracking concept of Tango – it turns out it mostly relies on the RGB/IMU input anyway. Makes sense to ship it for as many phones as possible now, right?

Funny – Google is kind of taking the backward approach. While Apple started without IR depth sensors and might include them later, Google did long trials with Tango and now skips it… Will it kill Tango technology? Definitely not! It seems more like a must-have step that was needed to close the gap for all the developers co-developing for Android and iOS. Since competitors couldn’t wait for “full AR”, it feels like Google joined the circle so as not to lose touch with users.

So, is this good or bad for users? Definitely good! Well, short-term. We love AR and want to see more of it now. But it could also lead to poor experiences and a lot of crappy, less attractive demos that could eat away at the reputation of the term AR (again). Will smartphone producers now rush to include more expensive Tango hardware? It feels like this could heavily delay the roll-out of more Tango gadgets. Or maybe it has always been a long-term plan to fill the gap between four dots on their roadmap A-B-C-D. A to B: dumb smartphone to ARCore phone; B to C: ARCore phone to Tango phone; C to D: Tango to AR glasses (not Google Glass).

Let’s not speculate today, but instead enjoy two videos diving into the smartphone fun coming from Google and showing some of the more experimental fun they have already had:

But why another SDK?

Couldn’t it just be a subset of Tango? Or Tango a subset of ARCore in the future, as it could work out with a long-term roadmap connecting the four dots? Well, we will see. Let’s take a brief look at the SDK while installing all the stuff needed – currently working only on a Pixel or Galaxy S8 phone (for now; many more devices to come). The SDK highlights three capabilities:

Environmental understanding – to allow virtual objects to be placed “in a way that physically connects with the real world.”

Motion Tracking – to allow “users to walk around and interact with virtual content that is rendered in the 3D world.”

Light Estimation – to create “realistic looking objects by having its own light change dynamically according to the environment lighting.”

Google’s approach also ships a light estimation function with a float pixelIntensity. In the Unity documentation, though, they describe the “EnvironmentalLight” as a “component that automatically adjust lighting settings for the scene to be inline with those estimated by ARCore” – maybe they will do something more to my shaders automatically?
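To get a feel for what a single pixelIntensity value buys you, here is a minimal sketch of the concept (function name and the neutral baseline of 0.5 are my own assumptions, not the actual ARCore API): scale a virtual object’s albedo by the estimated scene brightness so it blends with the real lighting.

```python
def apply_light_estimate(albedo, pixel_intensity, baseline=0.5):
    """Scale a virtual object's RGB albedo by the estimated scene
    brightness, relative to a neutral baseline intensity."""
    factor = pixel_intensity / baseline
    # Clamp each channel to the valid [0, 1] range.
    return tuple(min(1.0, c * factor) for c in albedo)

# A dim room (estimated intensity 0.25) halves the object's brightness:
print(apply_light_estimate((0.8, 0.6, 0.4), 0.25))  # (0.4, 0.3, 0.2)
```

A real renderer would feed this factor into its shader uniforms per frame, but the idea is the same: one scalar nudging virtual content toward the real room’s exposure.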

Their tracked plane object “TrackedPlane” lists position, rotation and boundaries. It seems plane detection is limited to horizontal 2D surfaces for now (like ARKit). They also ship the anchor concept, to solidly place virtual objects in a learned environment.
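The plane-plus-anchor workflow boils down to a ray/plane hit test: cast a ray from the camera, intersect it with a detected horizontal plane, and if the hit lands inside the plane’s boundary, that point becomes an anchor. A simplified geometric sketch (names and the square-extent boundary are my own simplification, not the SDK’s):

```python
def hit_test_horizontal_plane(ray_origin, ray_dir, plane_y, half_extent):
    """Intersect a camera ray with a horizontal plane (y = plane_y) and
    check the hit against the plane's square extent around its center.
    Returns the 3D hit point (a candidate anchor position), or None."""
    ox, oy, oz = ray_origin
    dx, dy, dz = ray_dir
    if abs(dy) < 1e-9:          # Ray parallel to the plane: no hit.
        return None
    t = (plane_y - oy) / dy
    if t <= 0:                  # Plane is behind the camera.
        return None
    hx, hz = ox + t * dx, oz + t * dz
    if abs(hx) > half_extent or abs(hz) > half_extent:
        return None             # Outside the detected plane's boundary.
    return (hx, plane_y, hz)

# Camera at eye height, looking down and forward, hits the floor plane:
print(hit_test_horizontal_plane((0, 1.6, 0), (0, -1, 1), 0.0, 2.0))  # (0.0, 0.0, 1.6)
```

The real SDKs do this per touch event against every tracked plane and then wrap the winning hit in an anchor that gets corrected as tracking improves.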

Talking about the environment… you get access to the point cloud data sets as well, and you can check each point from the cloud. Pretty neat! So, will we see an update to this environmental package including the Tango stuff?
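Since each feature point comes with a per-point confidence, a typical first step is to throw away the shaky ones before reasoning about the scene. A toy sketch of that idea (the (x, y, z, confidence) tuple layout and threshold are my assumptions for illustration):

```python
def summarize_point_cloud(points, min_confidence=0.5):
    """Keep only confidently tracked feature points and return their
    count plus centroid. Each point is an (x, y, z, confidence) tuple,
    a simplified stand-in for the per-point data the SDK exposes."""
    good = [p for p in points if p[3] >= min_confidence]
    if not good:
        return 0, None
    n = len(good)
    centroid = tuple(sum(p[i] for p in good) / n for i in range(3))
    return n, centroid

cloud = [(0.0, 0.0, 1.0, 0.9), (2.0, 0.0, 1.0, 0.8), (5.0, 5.0, 5.0, 0.1)]
print(summarize_point_cloud(cloud))  # (2, (1.0, 0.0, 1.0))
```

From there you could cluster points into surfaces or feed them into your own scene understanding – exactly the kind of thing a future Tango-flavored update might do for you.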

In any case, it is good to see Google close the gap and support Unity and Unreal on day 1. Let’s see what else comes in the next weeks to update the many AR SDKs of the world! So, back to development…

There won’t be one augmented world!
http://www.augmented.org/blog/2017/08/wont-one-augmented-world/
Tue, 22 Aug 2017

We love to dream of a cyberspace in virtual reality, and of an augmented-reality equivalent to enhance our lives. But will it ever become a reality like we dreamt it up? What might stand in its way? Let me dive into some daydreaming on it today. It’s not a summer-slump thing while we wait for the upcoming conferences (like ISMAR or AWE EU) in October, but rather a really important matter I want to raise awareness for today!

The tech-nerd dream we dream

When we think of an augmented reality future where everybody wears slim wireless glasses, we always think of one world: one augmented parallel space we all share. People with different glasses can see the same holograms and share their content and experiences with their friends, family and coworkers – or even strangers – as easily as a messenger click. Virtual objects should be persistent and co-exist for all. We want to share information and enjoy joint mixed reality moments like we do today with pen & paper, smartphones or computer screens: multiple people can see the content at the same time. We want to collaborate on this content as if it were a real piece of clay or a whiteboard. We want to have private assets we can grant access to, but easily share and look at augmented stuff together. Only by reaching this goal can we throw away all our screens, as Meron Gribetz from Meta plans. At the same time, we want fully natural gesture and voice interaction with the computer (whatever “natural” means when interacting with a computer – human-like, while performing inhuman new tasks?). We want force-feedback air-tap keyboards and more. Wow, a long tech list to go, and we haven’t even dived into the hardware, software or tracking issues yet. But the most important dream I share with many others is this: to have a democratic and open augmented space for everyone. The augmented parallel world needs to be an open-source and royalty-free zone, without real-estate agents selling us pieces of thin air that we as a society should own – or rather not own, but just share together.

The current situation

This leads us to the current situation of our digital world – yes, the whole digital world, for starters. I’m a child of the 80s and 90s and got started on the internet with my noisy modem; I saw the democratic citizen internet grow through the 90s. Everybody had a flying-toaster or dancing-Jesus website back then (not pretty, but your own). Now, in 2017, the number of websites we consume has gone down dramatically. Big players have taken over and accumulate services, the private website has disappeared, and we throw all our private data into the throats of big-brother dragons. We’ve become their voluntary content-creator slaves. Yeah, yeah – I don’t want to cry wolf or restart some well-known negative vibes and second thoughts on IT. One might even say we can be quite happy that there are fewer dancing-Jesus pages right now! My point is: during the early internet days, people used the new medium very democratically, and thanks to a simple HTML standard and some lighthouse logo browser, we could slowly gain access to it from any device or operating system.

Now (due to a number of reasons) the number of digital platforms is rather slim, owned by some big-cheese companies. Luckily, there are also a number of start-ups popping up and trying their luck during the gold-digging phase of AR. Bubbles burst, some prevail, some show up again – Phoenix, yeah, yeah – but the focus seems to be different. Nowadays, when I want to enter some page or app, the first thing I see is a paywall or login wall. One gated community after another; everybody creates their own garden of happiness. Sometimes, because users are lazy, Google or Facebook will get some of the user juice – full nakedness and tracking of all users signing up with their existing FB/G accounts (but this big-brother topic does not belong here today). The consumer seems to be less sceptical and rather takes the quick personal win without additional effort or investment.

What happened? Do people not care anymore about open source or democratic accessibility of data? Will this continue as-is in a parallel augmented space? Are people less nerdy or tech-savvy? Are we too lazy, or is it just too early? Is there even a better way to create this dreamy AR space as a shared vision?

What big players don’t want to share

Obviously, it’s hard to create light field displays in your garage with a couple of remote friends and a simple GitHub repo. The big companies have invested millions and need that money to create the great hardware we all want to use. Comparing it to mobile phones, it’s only later that open-source hardware manufacturing kicks in and some tech pieces become a commodity. Big players producing the first AR glasses have big investments and huge risks. It’s totally fair that they want to maximize their profit and break even soon enough to survive.

But today’s digital money is measured in users’ personal data. For AR, it will additionally be world knowledge. User-generated content reaches a new level with 3D world scanning à la Tango, or earlier stuff like Microsoft’s Photosynth. You need a huge user base for this. We could happily accept this for the greater good – if we had a choice, or even knew what happened with our data and had some control. Creating a 3D world representation and stable AR tracking is a huge task – first on a local scale, then city scale, … world scale. Hardware production is obviously not easier.

With the hardware, the companies have their lead on the market, but world data should be a shared asset that we all own. If we all only feed the big players and their gated communities, they will again own the monopoly on this digital space. Will we silently accept it when they track every move we make and feed us ads all the time? This could lead to an ad-cluttered field of view like in Keiichi’s Domestic Robocop, or to AR ad graffiti problems: “Behold the AR internet – it’s full of ads!”

Why should I care?

It’s not only about avoiding ads in your glasses or about user behaviour tracking. It’s about democratic access to and control of the parallel digital space – and about persistence. You don’t want to see the epitaph and last words of your grandparents disappear only because the AR grave service just went bankrupt.

Would you rather use a standard JPG file format to save your holiday memories, or go for TGA or some other weird proprietary file format? Having closed systems and different companies working on these issues in parallel is not bad at all – but only until we reach a certain level of social impact and acceptance. As a citizen, you would become dependent on and reliant upon a small company. As a developer, you are annoyed by the hundreds of SDKs to support, or by missing access. Ultimately, the evolution of applications is hindered due to limited access to resources (like geo-spatial data for AR tracking).

What do we need?

The ultimate goal must be to provide an open-source AR infrastructure that profit-oriented companies can build upon – just like the internet today, or at least until everyone started deploying their own closed apps. I’m happy that big companies take all the risk to create great AR glasses – but afterwards, we need an open world with democratic content and a descriptive system we can build upon. Their USP might be the glasses, the tech. But once the infrastructure has been established, we should move on to a citizen-driven AR space.

Thinking about the tech for describing AR space… the basis for tracking and positioning has been tackled in the past already. Georgia Tech took an approach to it around 2011, called KHARMA – an extension to Google Earth KML files to allow additional location-based info. There is the Augmented Reality Markup Language (ARML), and there is stuff like OpenStreetMap to describe the world. We need further open standards like these, not only to describe the world but also to interconnect entities. Standards like the metric system, ASCII, HTML, car engines running on the same gas, or SMS helped to connect us and make society work as a whole – independently of the manufacturer. Some small companies (like Escher Reality) intend to be the middleware for such an AR space. I love the spirit, but we need an open standard for all of it – at least for the shared, multi-user tasks we dreamt about in the beginning.

Those activities do exist – take the Khronos Group with OpenXR, or WebVR, for instance. But to me, it seems that too little investment is flowing into this, especially from the big players. The big M’s – Microsoft, Meta, Magic Leap – are missing from the OpenXR page (at least they are not listed). OpenXR seems more focused on VR lately, but let’s not wait longer for the AR initiative! So, what do we do?

Let’s get cracking and hacking!

There are ways to deal with the symptoms (like blocking ads in AR), but obviously it would be better to build up an augmented, shared, democratic space now. Yes, it’s still early and people are just learning – but let’s not wait. As I see it, we should start hacking the systems now (don’t look at the image on the right!), learn from it, improve on it and share it. Then let’s build a digital democratic world base like Wikipedia. It must be non-profit and contain 3D assets and world tracking data (like OpenStreetMap or OpenGeoSpatial). The user can decide which layers of reality to subscribe to and set automatic filters to switch visibility levels easily. A standard link structure should lead to external (private, non-public, commercial) applications through a common interface. Further, we need these open file format standards to use on any device (like the small step by Rob Manson showing AR in a web browser).
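The layer-subscription idea above can be sketched in a few lines (all names and the level scheme are hypothetical, just to make the concept concrete): users subscribe to layers of the shared space, and a filter decides which augmentations they actually see.

```python
def visible_augmentations(augmentations, subscriptions, max_level=2):
    """Filter a shared AR space down to what one user sees: only layers
    the user subscribed to, and only items at or below the user's chosen
    visibility level. Each augmentation is a dict with 'layer' and
    'level' keys (a deliberately simplified model)."""
    return [a for a in augmentations
            if a["layer"] in subscriptions and a["level"] <= max_level]

space = [
    {"id": "cafe-menu",  "layer": "commerce", "level": 1},
    {"id": "graffiti",   "layer": "art",      "level": 3},
    {"id": "navigation", "layer": "civic",    "level": 1},
]
# Subscribing to civic and art with a visibility cap of 2 hides both
# the unsubscribed commerce item and the high-level graffiti:
print([a["id"] for a in visible_augmentations(space, {"civic", "art"})])  # ['navigation']
```

The point of the sketch: visibility stays a user-side decision applied to an open, shared data set, not something a single platform decides for you.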

Ok, that headline was a bit cheesy. But honestly – how do we get there? Do we need to? There won’t be one single augmented world – which is good for some business or special use cases, of course. There will be many spaces. But the one parallel augmented space we dream about should be as open as the next wall to scratch into – open to everyone, treated with respect! What would you say if Vodafone phones could only call other Vodafone phones? That’s exactly what has been happening in the digital space for years. I believe it is a shame and a huge step backwards for humanity. There can always be special-interest spaces, but some standard should exist – like being able to call or write your friends, watch a 3D video or interactive content, discuss 3D data and meet up remotely in some form of telepresence.

How do we get there? I’m not so sure. People need free time to invest in this non-profit idea. Probably they must first run into the walls of limitations in the different gated AR communities, get fed up with them, the ads or the restrictions there, and see the advantages of the alternative. It’s the future talk of an enthusiast. Society must feel the need internally.

Let’s remember what we want. Where do we – as users and content creators – profit most? What might speed society up and not block it (again)? Let’s keep this in mind. Closed source is not always bad, but socially relevant parts should be accessible to all (like electricity, water, the internet). It’s still the early days of AR – and that’s exactly the moment we have to think about these things, to avoid petty walled-garden fights and to create a long-lasting digital space. So, let’s support democracy with open standards today, to have an open AR/VR/MR world for the generations to come!

World War Z – let the blind see with depth!
http://www.augmented.org/blog/2017/07/world-war-z-let-blind-see-depth/
Thu, 20 Jul 2017

Today, on a lazy summer day, I’d like to think a little bit about two things in AR. First, the technological part of tracking and sensors, with vendor fights coming up (couldn’t resist the z-depth-buffer joke), and second, the form factor of today’s and tomorrow’s AR devices. Will we really wear glasses soon? What might be the showstopper? Two cents on it today.

Still buffering… depth sensing to come… some day.

When thinking about consumer AR devices, all we have are RGB cameras in phones. There is no other consumer option for private at-home use. Hence, we are still stuck with old-school vision-based tracking approaches and algorithms like extended SLAM to understand the world at least a little bit for our augmentation purposes. metaio’s Apple’s ARKit does a pretty damn good surface estimation and lets us place objects within the screen’s frame. But it remains a visual overlay with no further knowledge of the surroundings. Also, interaction is limited to touch-screen only (if we don’t consider “change of perspective by moving around” an interaction). Accurate occlusion, exact positioning and distance measurement will only come with more sensor information. Devices like the HoloLens use infrared echo sounding or infrared grid recordings to better estimate depth. Google’s Tango still suffers from non-availability.

No question, depth sensing will allow a lot of new applications, including indoor navigation, 3D object scanning and better object recognition. So why does it take so long? Shrinking the tech seems to be a real issue. At least we went from the first Tango devkit, via the Lenovo phablet, down to a real (but still big) smartphone-sized device with the ZenFone AR. But will others follow? Rumors say that Samsung could be integrating more camera sensors (Tri-Cam style) into their new Note 8 and run Tango, too. Will Apple integrate depth sensors (into front and back cameras), as rumored, in their release in fall 2017? I do think so. ARKit was there to get everybody hooked and started. The next ARKit SDK update could include depth access. It should! It has to! Apple can still take the lead here: though Google had the technological lead (please hurry up, awesome Johnny Lee!), Apple might leap forward and take over. Having exact depth info will enable far more AR applications. The war for dominance of THE AR platform is on. But still, one problem remains…

The unsolved interaction issue

… how do we interact with the virtual content? ARKit is fun for drag-drop-placing virtual furniture, but you always interact by holding up your phone like a stupid tourist in the Louvre. Real life does not work like that. You want to touch and interact with the objects around you (though maybe you shouldn’t do so with the Mona Lisa). ARKit today only allows smudging your screen while moving virtual objects around on it. In this regard, even the first ARToolKit from 1999, or any other marker-based AR solution, was already better. Computing power was not good enough for another solution back then, but the advantage of a printed marker was that you could touch and grab some kind of reference object. Placing an IKEA catalogue on the floor might be an example where we can say: happy those days are over. But for direct manipulation of virtual objects, concepts like the Merge VR Cube are far more intuitive and direct. This even beats hand-gesture interaction in mid-air; a Leap Motion for AR would not solve the problems of haptics and reliability in our world interactions. With better object recognition in the future (possibly through depth cameras and better algorithms), we can enable any real-world object to be our haptic link to virtual objects. AR needs to get smarter and really see the world and its objects. Right now, it’s all still too blind.

Solving the digital tourist syndrome?

If we already had slim glasses with accurate depth sensors, we could possibly use gestures and any physical tool to interact with augmented digital objects. But there is still some way to go. In the meantime, better-equipped phones can close the gap, and new devices like the passive-tech AR glasses from Lenovo and Disney, or Mira Reality, could help us get our hands free again.

I’d always say that this is the way AR was meant to be, and to be worn: only with very slim glasses can it be true AR. But it will take longer to reach a meaningful quality for every-day usage. The Prism glasses and others depend on the smartphone that clicks into them. Will we get a better front cam in the new iPhone that works with Prism (in which case I guess I’ll have to ditch my Tango), and what will we see from Lenovo and others next? These plastic cardboards for AR could go big in marketing very soon, but once you’ve used one, it will probably lie around getting dusty – to be found by the next generation, laughing about our baby steps in AR. But hey, you gotta go through it…

… or do we have to?

In business situations, like with logistics pickers or other advertised scenarios from the Google Glass relaunch, AR smartglasses already work. Does that mean we will all get assimilated? Will it spread like any other specialized high-tech solution (often military-first) – like Teflon during the Manhattan Project, or AR-style displays for airborne training (with Tom Furness) – before it goes mainstream? Or will we reach the end of the line before mainstream?

AR glasses are distracting in a social context. Today, we are annoyed when someone won’t take out their EarPods or take off their sunglasses during a close conversation. When we take a picture of a group of friends as a memory, we are disturbed when someone leaves their shades on (at least I am). When we are having a meaningful conversation, the chime of a phone can kill the moment. When people meet up for a beer night, they already build towers of smartphones – the one who can’t resist and grabs theirs first pays the next round. Unless you are Steve Mann or the Borg version of Picard, everybody will be freaked out if you have some tech in your face to distract you. Humans are very sensitive when it comes to faces and “things that shouldn’t be there”. So, are AR glasses doomed?

If you use your phone with Tango, ARKit or whatever, and do hand-held AR like the tourist, you win twice: it is possible today, and your friends will know when you are distracted. With glasses (unless a blinking red light is enough), they won’t know… Is this a generational issue or a showstopper? Time will tell, of course. The Snapchat generation will adapt, but I guess it will take way longer to go mainstream than everybody is screaming today. Maybe 10 years, 15, 20? If the advantages and the changes of society demand it, it will happen. But until then, I’m also happy to enjoy my supposedly already-dead smartphone, which can rest aside unused, not blocking my view of the sea and my friends.

What do you think? Happy to exchange thoughts on Twitter, Facebook or directly. Drop me a line or share your summer thoughts with all! Hmm, feels like it’s time for a beer now. ¡Prost!

Interview with Audi – AR, VR and a Hackathon at Digility
http://www.augmented.org/blog/2017/06/interview-with-audi-ar-vr-and-a-hackathon/
Tue, 27 Jun 2017

Today, I’m happy to publish the interview I had with Audi. I was interested in learning more about their current AR/VR activities and their plans for the Digility conference next week. Since they are giving a talk there and organizing the aforementioned hackathon, I wanted to give my readers some insights ahead of time. Find out about their mixed reality plans today.

Jan Pflüger, Audi

I had the chance to talk to Jan Pflüger (pictured aside) and Jens Angerer (pictured in the featured image). Jan is coordinator of the “Center of Competence” at Audi, steering the activities and roll-outs of AR/VR solutions. He has long been involved in this field and was a lecturer at the Universities of Applied Sciences Pforzheim and Northwestern Switzerland before he moved to industry. I have known him since his days at RTT AG / Dassault Systèmes / 3DExcite, before he switched to Audi. Jens Angerer is Project Lead for Human Machine Interfaces at the Audi Production Lab, which he co-founded in 2012. He specialized in VR user interfaces long ago. Currently, Jens also teaches VR/AR to UX design and computer science students as an adjunct professor at the University of Applied Sciences Ingolstadt. The interview turned out a little long, but I didn’t want to cut it down. So, let’s jump right in!

Let’s talk about AR/VR

augmented.org: Hi Jens, thanks for taking the time. This year’s Digility conference is right around the corner, and you will be talking about “The true meaning of Mixed Reality”. Can you explain what you mean by that?

Jens: Sure! Essentially, Augmented and Virtual Reality are just different forms of Mixed Reality, which I conveniently abbreviate as xR. But if you ask someone else, they might use different terms and abbreviations for the same things. How can that be? I think we are missing a common understanding as well as common terminology and standards for these technologies. Even more, the xR community – especially in Europe – is missing a joint effort to advance this field together. There won’t be a better time to collaborate and create meaningful xR experiences – both in B2B and B2C. At Digility, I will talk about my approach to this and show some examples of how Audi is using xR today. And thanks to Jan, Audi is also shaping the future of xR by sponsoring the Digility Hackathon.

Hi Jan, great to chat again! I’m looking forward to meeting up in person again soon. Typically, we meet at conferences where Audi is present with an R&D booth or a center-stage demo. How come you are organizing and supporting a hackathon at this year’s Digility conference?

Jan: It is always exciting to see what happens during the limited time of a hackathon and what comes out of it. We take this as an opportunity to support and establish the European xR scene. Digility provides an excellent framework for this, and we look forward to getting in touch with creative minds to advance topics such as virtual collaboration, universal interaction and intuitive UI/UX design.

Hackathon tasks are often revealed only shortly before the event. Can you already give an idea of what the developers can expect? And what do you, as Audi, expect as a take-away or result?

Jan: I do not want to spoil anything just yet, but the participants can expect some exciting use cases that are relevant not only from an automotive perspective. Since we have a wide range of application areas, there will surely be something for everyone. Working in interdisciplinary teams together with the mentors of the hackathon will be a unique experience for every participant. Not to mention the very cool prizes for the winners! Of course, we are also interested in talent and hope for some new approaches, which we will pursue together in the future.

Cool. Before we look at the future, let’s talk a little bit about the beginning. VW/Audi has long been involved in AR/VR activities – I’m thinking of research projects like ARVIDA, ARVIKA, etc. Can you give us an idea of when and why research started in this field?

Jan: Personally, I had my first contact with the big research projects in my role as Research Associate at the Institute for Interface Design in Switzerland. The foundation for research was laid well before I started working for Audi. As far as I know, the company has been carrying out research in this field for almost 20 years. We saw the potential of Augmented and Virtual Reality at an early stage and worked hard to bring those research projects into application at brands like Audi and VW. But the results and insights of this work have spread far beyond that. Think of metaio, for example, a close partner of ours for years, whose technology – now at Apple – still drives the industry today.

Example of AR at Audi for consumers: eKurzinfo from 2014; more is going on behind closed doors…

So, how do you support this field today?

Jan: Virtual techniques have been established as an important part of the digital transformation. This was obvious to us a long time ago, and one of the outcomes was founding the Center of Competence AR&VR. It serves as a central network node for xR and advises our internal customers on the use of AR/VR technologies. At the same time, we are collecting requirements and discussing them with our external partners to continuously improve the solutions in this field. This is where Jens and I collaborate closely.

Jens: Absolutely. Jan and I are both driven to bring new technologies into application. I often describe my work at the Audi Production Lab as “Science Fiction”. I am inspired by the fiction part of xR and use science to make it come to life. The prototypes that we build in our lab then come into application at Audi’s production sites and logistics worldwide.

Can you give us an example of where AR is being used in a productive or pilot environment at Audi today? What technology are you using? Where is the biggest practical advantage in an industrial environment?

Jens: There is a broad range of use cases we explore today. In logistics, for example, we train people with VR. In production, we use wearables like Google Glass to guide our employees through very complex engine assemblies. Our studies have shown that this reduces assembly time and improves human-machine interaction as well as flexibility. AR, however, is still very challenging to implement, but training is definitely a promising area.

Jan: AR has great potential. For the launch of HoloLens we developed a service and aftersales scenario together with Microsoft, which we are now successively implementing in our productive processes. The biggest advantage, of course, is that AR lets you place information in a real context. Our teams and specialists are spread all over the world – working collaboratively on projects is a key factor in enhancing the product process and solving issues when they appear. Technology helps us connect the right people with the corresponding information – additional preparation work and specialization on each side can thus be reduced. In general, I see advantages for AR across all areas – some can be realized in a timely manner; for others, processes must be adjusted appropriately.

Talking about adjustments… 3D rendering gets better and more performant, tracking quality rises and practical AR glasses are (hopefully) close to release. Still, hardware and software are sometimes a bit complicated to use or error-prone. What do you see as the remaining main challenges?

Jan: In my eyes the most important question is: what is the main point? I do not want to wear AR hardware for the sake of wearing strange goggles. I have seen a lot of demos and “use cases”, but to be honest – most of them are manually built demonstrators without a connection to the “real” environment. The challenge will be to deliver the right information in the right context. If that can be achieved in an automated manner, the future of AR will be ready.

OK, tech is one thing. The other is the users and the integration into their workflow. People are used to their IT landscape and day-job routine. Have you already managed to integrate these technologies into productive workflows, from a technical standpoint but also from a user perspective? That is to say: is it mature? What’s missing?

Jens: As a user interface guy, I still see huge challenges before we have ubiquitous AR. Just as an example: I avoid wearing my prescription glasses whenever I can. The weight of this relatively light frame is unacceptable to me, even though it gives me eyesight. That’s a high barrier for AR glasses to clear.

Jan: We are working on solutions to ensure integration into our future IT landscape and process chains. We also help reduce the barriers to using the technology by demonstrating its advantages at various events.
As a company, you have to take responsibility for the upcoming changes. This leads to many questions. How does the solution affect the daily work of the employees? What qualifications will be required in the future? Also, new technologies usually lack robust studies on health aspects and work safety. There must be appropriate answers on topics such as hygiene and so on… With this (incomplete) list, I would like to point out that a truly productive integration into the corporate context involves much more than implementing technological solutions. For us, therefore, hardware or technological issues are not the main focus. The solution lies rather in the integration into processes – or rather: how must the processes be designed to take advantage of the technology! We have developed suitable fields of application to take advantage of AR, but I still see a way ahead of us until it is naturally part of the workplace and our daily environment.

Thanks for the insights. Maybe let’s take a step back and look at the current AR/VR landscape in general. You have been doing AR/VR for a long time now, and we see the advantages in specific industrial scenarios. Now, in 2016 and 2017, AR has gained bigger popularity outside of research labs. Everybody talks about it, and recently Apple released ARKit. How do you view the current hype around AR? Is it justified? A bubble, or for real?

Jan: For me it is for real – I always believed in the power of AR! ;-) For the first time, we now have easy access to the technology. Platforms enable artists and developers to create stunning content and push the technology to its limits. AR can be experienced by everyone. The hype around Pokémon was a precursor. It’s a question of expectation – in my opinion, it is unrealistic to wait for the one killer app that carries AR into all households at once. This will happen successively, along with platforms that provide the relevant content. At some point, AR technology will be as naturally integrated into our everyday life as the smartphone is now.

Jens: I agree. Smartphone-based AR will be the main platform for the years to come. Everything after that will need to make the smartphone obsolete.

Comparing that with your industrial view of AR: the industry has different requirements and more money to set up big tracking volumes or render farms. John Doe on the street might not have all this available, but is willing to use AR, too. How do you see the maturity level of AR for consumers today? As you say, will the smartphone carry consumer AR for a longer period? Will e.g. ARKit cause a big bang? Or what is missing?

Jan: With regard to operability and scaling, we must also take other paths and challenge classical infrastructures. Solutions we want to bring to the customer must ultimately also work on a generally accessible basis. Everything that simplifies access to and use of AR content will promote its penetration into our everyday life. ARKit makes the platform discussion exciting again and, of course, also the field of applications we can expect in the near future. If we talk about AR HMDs, which in our context are usually more interesting, the consumer market is much more limited.

Jens: I believe that without John Doe using AR, it will stay a niche for the industry, too. So I’m hoping for a big bang in AR – regardless of what causes it.

Speaking of missing pieces: what are you hoping comes next in this field of tech? What do you wish for? Or maybe… can you even give us a short sneak peek into Audi’s activities here? What can the consumer expect in the future?

Jan: On the tech side, it would be great to overcome limitations such as the low field of view, poor tracking experience and especially system latency. But the real challenge is another one: we need to transform our processes to be ready for the future. Only then can we not only support our current development in a better way, but also enter new areas of product development and product experience. We are combining areas such as machine learning and AI with current data analytics and sensor/IoT topics and are trying to bring the findings into reality. Audi is working in a lot of areas today to push not only technology but also customer experience to a new level. So I am confident that you will see more and more in the future.

Jens: I will show some of our activities in my presentation and talk about THE killer app for xR: being together. I believe there is nothing more important for the future of xR than the possibility to communicate and collaborate over great distances. And of course, Audi will have a booth at the expo and the hackathon. So there are really good reasons to attend Digility 2017, I believe.

augmented.org: Well, thank you so much. I’m very glad for the insights you shared. Looking forward to your presentation for more to be revealed! Good luck with the hackathon during Digility!

Conclusion

So, there you have it. Another voice supporting the smartphone as the main gateway to AR. For me, it’s tough to hear from Jens that regular glasses already annoy him during work – even though they are super slim (and without AR). But let’s see how things turn out; I’d be happy to put on slim AR glasses if they helped me with my task. No, I’m not willing to put in AR contact lenses – ever! Anyway, a good chat, and I’m happy to share some opinions from an industry perspective. There are still some hackathon spots left, so feel free to jump over to the Digility page and sign up.

Digility Conference July, 5-6 – Win your ticket!
http://www.augmented.org/blog/2017/06/digility-conference-july-5-6-win-ticket/
Wed, 14 Jun 2017 11:30:16 +0000

On July 5th and 6th, the Digility Conference on AR and VR will open its doors in Cologne, Germany, for the second time. augmented.org is a media partner again, and today I’d like to give you the heads-up on this conference, my recommendations on where to go – and you will even get the chance to win a free ticket for the show! So, let’s dive into it!

Digility Conference on AR/VR

The conference focuses on everything augmented and virtual reality, including 360° imaging and video, wearables, and bigger-picture topics such as societal impact and the latest research on AI, for instance. Last year I was on location and was very pleased with the mixture of business-related and developer-friendly discussions, framed by more philosophical discussions on technology and where it could lead. (You can find my posts from 2016 here.)

This year the show will offer two days of panels and presentations, plus the show floor where exhibitors, live demos and partners can be checked out. In 2017 a new spin has been added to the event: a HACKATHON for developers and hackers interested in AR/VR speed programming in a team. Everybody (up to a hundred people) can join, there will be tutors to help you out, and results will be shown at the very end of the conference. So, if you are more interested in actively coding and showing off your results on stage at the end – you might want to check out their info here. But if you prefer to hear talks and panel discussions and see more, read on below. (Also to win your ticket.) Before we jump into it, a quick 2016 recap from Digility to give you the idea:

Things you cannot miss

During the two days, two tracks will set different focuses. The “big” track on day 1 goes for brands and enterprise-related business developments; track 2 aims at designers and developers. Day 2 will be more philosophical, envisioning the future of AR/VR and where the bigger impacts might happen. It will be rounded off by track 2 giving start-ups a chance and discussing investments.

On day 1, I highly recommend the opening talk “Making Superhumans” (no, it’s not AWE), which will be held by Jody Medich (Singularity University), discussing the future of human-machine interaction. Nvidia will also be on stage to show off their latest R&D activities and name “initiatives like near-eye holographic displays, foveated rendering, 16Khz panels, varifocal displays like lightfield displays”. So, hopefully we can get some updated insights here, too. The Captury will show their markerless body capture for VR. I’m sure they have made good progress since the last presentation I saw (two years ago at FMX). Sticking with hardware and body capture: Manus VR will also be there to show their VR gloves and discuss human interaction for MR.

On day 2 you need to pick again, so take a close look at their program. I’d say it’s definitely worth checking out the German research institute Fraunhofer HHI from Berlin. They are very dedicated to AR/VR, video capture and 3D reconstruction tech, as you can see on their pages beforehand to get inspired. Patrick Ehlen (Loop AI Labs) will talk about the future of AI and how it could – or should – impact our lives and influence the technology. What should we do with all of this? The philosophical part continues with Jelle van Dijk (University of Twente) giving his talk “Out of your mind, into the body? From theory to inspiration” – seems like we dive into some transcendence discussions here, including thoughts on first-person phenomenology for VR and its impacts. The following panel discussions will continue these human-computer connections and also address the more down-to-earth question “What Happens When Apple Goes AR?”. Well, now Apple did go AR. So, let’s learn about their updated thoughts on this one during the panel.

If you want to be part of this, you need to get 539 € from your ATM for a two-day ticket… or…

Win your ticket with augmented.org!

I’m glad that I can again give away 7 two-day tickets to my dearest readers for free! This year it’s incredibly easy, just:

send me an email to win at augmented.org

I’d be happy if you follow augmented.org on Facebook and Twitter and spread the AR word! When writing me, please include your thoughts or feedback – always keen to discuss AR. The deadline for the lottery draw is Monday, 26.06.2017 at 23:59:59 CET. Winners will be informed by June 28th. So, after that day you have one week left to plan your trip to Cologne!

See you on the other side at Digility if all goes well and you are one of the lucky winners. :-)

Well, well, well, look who wants to join and play! Apple’s developer conference WWDC just started, and it’s finally the moment to see some in-house augmented reality development from Apple hit the stage! I didn’t even have time to sort all my AWE conference notes, check all the videos or talk about Ori’s keynote pushing superheroes out into the world! Hm, guess I can’t resist and need to write up the Apple news today:

One more thing… AR

So, Apple talks about their own big-brother speaker for your living room, some other hardware, iOS updates, etc. – but then we finally get to learn about Apple’s plans to jump into AR! Pokémon serves as the well-known example for the masses again. But this time using the new “ARKit” by Apple: their new SDK toolset that brings AR to developers…

The presentation of this new toolkit is nicely done, and it feels as if AR has never been seen before. Craig Federighi is really excited – “you guys are actually in the shot here” – so much that one could think people at Apple were only thinking about VR lately and are surprised to see a camera feed in the same scene. He claims that so many fake videos have been around and now Apple is finally showing “something for real”. (Nice chat, but honestly, there have been others before. But let’s focus.) Obviously Apple is good at marketing and knows their tech well. They have been investing a lot in this, and now we can see the first public piece: in the demo we see how the RGB camera of the tablet finds the plain wooden surface of a table and how the presenter can easily add a coffee cup, a vase or a lamp to it. The objects are nicely rendered (as expected in 2017) and have fun little details like steam rising from the coffee. The demo shown is a developer demo snippet illustrating how to move the objects around – and how they influence each other regarding lighting and shadows. The lamp causes the cup to cast a shadow on the real table, and the shadow updates accordingly as objects move. In the demo section one could try it out and get a closer look – I’ve edited the short clip below to summarize this. Next, we see a pretty awesome Unreal-rendered “Wingnut AR” demo showing some gaming content in AR on the table. Let’s take a look now! Scrub to 1:25:29 in the linked video below or jump to the YouTube page to start at the right time code directly.

The demos show pretty stable tracking (under the prepared demo conditions); Apple states that the mobile sensors (gyro, etc.) support the strong visual software part using the RGB camera. They talk about “fast stable motion tracking” and, as shown, this can be given a thumbs-up. The starting point seems to be plane estimation, to register a surface to place objects on. They don’t talk about the basic boundaries in detail – how is a surface registered? Does it have clear borders? In the Unreal demo we briefly see a character fall off the scenery into darkness, but maybe this works only in the prepped demo context. Would it work at home? Can the system register more than one surface? Or is it (today) limited to a single height level to augment stuff? We don’t learn about this, and the demo (I would have done the same) avoids these questions. But let’s find out more below when looking at the SDK.

Apple seems pretty happy about the real-time light calculation that gives a more realistic look. They talk about “ambient light estimation”, but in the demo we only see the shadows of the cup and vase moving in reference to the (also virtual) lamp. This is out-of-the-box functionality of any 3D graphics engine. But it seems they plan far bigger things, actually considering the real-world light, hue, white balance or other details to better integrate AR objects. Metaio (now part of Apple and probably leading this development) showed some of these concepts during their 2014 conference in Munich (see my video from back then), using the secondary (user-facing) camera to estimate the real-world light situation. I would have been more pleased if Apple had shown some more on this, too. After all, it’s the developer conference, not the consumer marketing event. Why didn’t they switch off the lights or use a changing spotlight with a real reference object on the table?

Federighi briefly talks about scale estimation, support for Unity, Unreal and SceneKit for rendering, and that developers will get Xcode app templates to start quickly. With so many existing iOS devices out in the market, they claim to have become “the largest AR platform in the world” overnight. I don’t know the numbers, but agreed: the phone will stay the AR platform of everybody’s (= big-time consumer market) choice these days. No doubt about that. But also no innovation from Apple seen today.

The Unreal Engine demo afterwards shows some more details on tracking stability (going closer, moving faster – it really looks rock solid to me! Well done!) and how good the rendering quality and performance can be. No real interaction concept is shown, though – what is the advantage of playing this in AR? Also, the presentation felt a bit uninspired – read from the teleprompter in a monotone voice. Let’s get more excited, shall we? Or won’t we? Maybe we are not so excited because it has all been seen before? Even the fun Lego demo reminds us of the really cool Lego Digital Box by metaio.

A look at the ARKit SDK

The toolkit’s documentation is now also available online, so I planned to spend hours there last night. Admittedly, it’s quite slim as of today (good for getting some more sleep), but it gives developers a solid initial overview. We learn a thing or two:

First, multiple planes are possible. The world detection might be (today) more limited than on a Tango or HoloLens device, since their system focuses on close-to-horizontal surfaces. The documentation says: “If you enable horizontal plane detection […] notifies you […] whenever its analysis of captured video images detects an area that appears to be a flat surface.” and mentions “orientations of a detected plane with respect to gravity”. Further, it seems that surfaces are rectangular areas, since “the estimated width and length of the detected plane” can be read as attributes.
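Going by those quotes, a minimal plane-detection sketch in Swift might look like this. Note that this is my own illustration, not Apple sample code: the type and property names are taken from the online documentation, but the beta API may still change, and the class and method wiring here are assumptions.

```swift
import ARKit

// Sketch: enable horizontal plane detection and log each plane ARKit finds.
// Class name and delegate wiring are illustrative, not from Apple's docs.
class PlaneWatcher: NSObject, ARSessionDelegate {

    func start(_ session: ARSession) {
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = .horizontal   // only near-horizontal surfaces, as documented
        session.delegate = self
        session.run(config)
    }

    // Called whenever the analysis of captured video detects a new flat area.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            // extent.x / extent.z hold the estimated width and length of the plane
            print("plane at \(plane.center), ~\(plane.extent.x) x \(plane.extent.z) m")
        }
    }
}
```

This matches the rectangular-area reading above: each anchor exposes a center and an extent, nothing about irregular borders.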

Second, the lighting estimation seems to include only one usable value: “var ambientIntensity: CGFloat”, which returns the estimated intensity, in lumens, of ambient light throughout the currently recognized scene. No light direction for cast shadows or other info so far. But obviously a solid start to help with better integration.
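To illustrate how that single value could be used, here is a small hedged sketch: it assumes the documented `lightEstimate` property on `ARFrame`, while the helper function itself is hypothetical.

```swift
import ARKit
import SceneKit

// Sketch: feed ARKit's single ambient value into a SceneKit light each frame.
// The function is my own illustration; only the ARKit property names are documented.
func applyAmbientEstimate(from frame: ARFrame, to lightNode: SCNNode) {
    guard let estimate = frame.lightEstimate else { return }
    // ambientIntensity is in lumens (~1000 corresponds to a well-lit scene).
    // There is no direction, so only uniform brightness can follow the real room.
    lightNode.light?.intensity = estimate.ambientIntensity
}
```

Since no direction or color temperature is exposed (so far), virtual shadows would still have to come from a light you place yourself.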

They don’t mention other aspects of world recognition. For example, there is no reconstruction listed that would provide assumed geometry for occlusions. But, well, let’s hit F5 in our browsers over the next weeks to see what’s coming. Relying on ambient light only and on stable 2D surfaces as world anchors feels like a play-it-safe decision, which allows less nerdy fun stuff today but will probably give the best and most stable user experience.

AR in the fall?

Speaking about what’s next – what is next? Apple made a move that, to me, was overdue. I don’t want to ruin it for third-party developers creating great AR toolkits, but it was inevitable. While a third-party SDK has the huge advantage of taking care of cross-platform support, it is obvious that companies like Apple or Google want to squeeze the best out of their devices by coding better low-level features into their systems (like ARKit or Tango). The announcement during WWDC felt more like: “Ah, yeah, finally! Now, please, can we play with it until you release something worthy of it in the fall?” Maybe we will see the iPhone 8 shipping a tri-cam setup like Tango – or is the twin-camera setup enough for more world scanning?

I definitely want to see more possibilities to include the real world, be it lighting conditions, reflections or object recognition and room awareness (for walls, floors and mobile objects)… AR is just more fun and useful if you really integrate it into your world and allow easier interaction. Real interaction – not only walking around a hologram. The Unreal demo surely was only meant to show off rendering capabilities, but what do I do with it? Where is the advantage over a VR game (possibly with added positional tracking for my device)? AR only wins if it plays to its advantage: seamlessly integrating into our life, our real-world vision and our current situation, and enabling natural interaction.

I guess now it’s wait and see (and code and develop) with the SDK until we get a consumer update in November. This week it was a geeky developer event, but we will only see whether it all prevails when it hits the stores for consumers. The race is on. While Microsoft claims the phone will be dead soon (but does not show a consumer alternative just yet), Google could surely step up and push some more Tango devices out there to take the lead over the summer.

So, … let’s enjoy the sunny days waiting for more AR to arrive in 2017!

The next 10 years – all through augmented windows?
http://www.augmented.org/blog/2017/05/the-next-10-years-of-ar-all-through-windows/
Fri, 26 May 2017 10:56:44 +0000

How will real augmented reality hit the mass market? Which system will take the lead? … and I am not talking about Microsoft Windows here – I used a lowercase “w”. Today it’s rather about the devices than the brand or operating system. Will we all wear tiny AR glasses soon, or will we stay with a hand-held or fixed-position augmented-window metaphor for a while longer – let’s say, 10 more years?

The Augmented Window

The first AR experiences I compiled myself were the magic book and the snowman ARToolKit demo on a Hiro marker. I had to use a webcam on a long cable to walk around the marker and then twist my head to watch the result on a nearby screen. Movement was limited, and obviously the usefulness of this setup was zero. But it was good fun and laid the foundation for my whole career. I was excited and wanted my AR glasses on the spot!

But then reality hit and made me (and us all) realize that it was still a bit sci-fi back then. Other concepts filled the gap: the augmented mirror was seen a lot – people standing in front of a big screen pretending to be a mirror (showing the flipped camera image). Many successful demos hit the stores to try on sunglasses, check the content of a LEGO box or try on other virtual clothing, besides all the marketing-effect demos.

The mirror carried the wow effect nicely, but only let us observe reality through pixel copies. Video see-through is kind of annoying: unlike in Futurama, television will never have better resolution than the real world. Optical concepts showed up more, and “holographic” mirror demos could be seen at every fair, simply placing a screen face-down on a pyramid of glass and mirrors – or bringing Tupac back to the stage. Optical overlays that don’t need digital devices worn by the user can be very helpful and are obviously easy to use. I experienced this as a child back at Disney World – the very same mirror trick placed ghosts in my cart and set them dancing in the hall.

Fun, but probably not useful for real-life “productive environments”? The company Realfiction just presented their “DeepFrame” lens concept; the results can be seen a bit better in the linked video below:

You can look through a window frame and see augmented content placed in the landscape or in your room. Perspective changes seem to work nicely. The frame itself seems to have a fixed position – no tracking/movement of the frame itself is shown. But it seems to work well enough that multiple users can stand in front of the frame and each get a matching perspective. No tracking needed, no limitations. Too good to be true? How about focus and convergence issues and stereo depth cues? Well, that’s not the point today. The idea is clear: AR window frames could add extended data access to our life and also create new fun experiences – without the need to block our vision or pretty faces with dorky glasses.

Glassholes 2.0?

Talking about glasses: Google Glass (not being AR, but pretending to be back then) failed. Not only because of the missing use cases, the small screen or the missing tracking to achieve real AR – but for social reasons. Microsoft Research just showed a smaller pair of glasses, but Snap glasses are already kind of a thing of the past again.

Today, glasses are too big (or even remind us of Dark Helmet), have too little battery life and too slim a field of view, tracking is not yet world-scale without suffering under too large a spatial-map triangle mass, etc. The existing glasses are a lot of fun for us developers, but are still not ready for the mass market. Meta is “soon” starting to ship their dev kits, we still don’t know how well gesture interaction will really work without tactile feedback, and not all legal questions are answered yet. How about work safety with a frozen blue screen of death on your MR glasses blocking your vision?

We just don’t have enough data to tell whether smaller AR glasses would be accepted within the next years. I feel uncomfortable if a phone is pointed at me – what if the glasses’ camera is constantly observing? Will the next Snap generation not care?

But even then – let’s say this concept is widely accepted and no one cares about privacy anymore – how would the social experience be? If I see my virtual rabbit next to me, but my (real-world) friend can’t see it? The idea that “glasses don’t take us out of reality” seems to be a fraud to me. We are even more distracted all the time! We create our own reality within our reality, for us alone! … unless we share it. But how can we share our virtual assets if everyone is so keen on creating gated digital communities? Apple versus Meta versus Microsoft versus Snap versus Facebook versus some small open-source crew… If we don’t create a common parallel digital space to share, the whole AR glasses idea will die as an EPIC FAIL.

Never change a winning team?

All right, I got a bit carried away here. Getting back to the technology issues to solve first: if glasses are still too far off for consumers, what to do? Well, again the smartphone will fill the gap as the augmented window. Stationary solutions like DeepFrame work at the office, on the train window or in theme parks, etc. They are a good medium to share digital information easily with others. It’s a good social experience with reduced presentation and invisible technology. I’m quite positive that we will see more of this style soon – after all, we are already getting the fully connected home, where even fridges have touchscreens. But on the go we still need something portable.

With the likes of Pokemon, object recognition, Google's AR translator and all the Snap- and Facebook-style augmented mirror apps, we can see AR getting slowly but surely into all consumers' hands – without them even knowing the term AR. When Tango (or similar concepts) kicks in on mobile devices, we will see the real world and virtual objects integrated even more seamlessly. The new video by Johnny Lee lets me believe once more that Tango needs to become a standard feature of new Android phones, asap!

Smartphone AR will be socially accepted, since everybody accepts people with smartphones in their hands all the time (no comment on whether this is good or bad). You can also easily share your digital data by showing the phone to your friend. – The smartphone as an augmented window to the mixed reality world. – AR glasses, in contrast, would always carry secret information that would scare people away (like in this odd marketing video). This could have a bigger impact on the acceptance of AR glasses than previously thought, I'd say. Let me put it a bit more catchy, hehe:

“The concept of secret information in AR glasses will scare people away until we find a social solution to share digital data seamlessly with others. We must start working on open standards today.”

– Tobias Kammann, augmented.org

Change the winning team… step by step

So, what will happen next? When will we see AR glasses on the streets big time? The tech needs to shrink, the operating systems need to get ready, etc. But as said, the social component is the biggest hurdle, or challenge. We will probably keep using our AR-enabled phones for the next 10 years, adding some stationary AR screens to our environment. But when will we finally switch to AR glasses? It took around 10 years until everyone had a mobile phone and the distraction of parallel texting with remote friends was accepted in local social situations. The same will probably happen with glasses. Technology updates could be way quicker, but social changes take their time. If we do it right and offer a social experience that allows sharing digital AR information – without blocking out people with other (or no) devices – it could be sped up.

… but then again: if AI develops faster and faster and takes care of all the details of our lives… maybe we won't need puny peasant-like HUDs anymore while running through the streets. Maybe it will all run in the background by then, and we can be more social without digital helpers or interruptions. :-)

Future Education in a Mixed Reality
http://www.augmented.org/blog/2017/05/future-education-in-a-mixed-reality/
Wed, 03 May 2017 10:14:21 +0000

Kicking off into May 2017. Lots to see! The vfx conference FMX is running in Stuttgart right now, animayo in Las Palmas has dedicated parts on the Mixed Reality world, and the Unity Vision VR/AR Summit just ended a day ago! Neat. The Unity team continues to position themselves nicely in the AR/VR space. Facebook Spaces is all done in Unity, and native integration of Tango will follow in Unity! Yesterday Microsoft presented some more on MR at a New York event, including the Hololens. Let's take a brief look at their news today.

Microsoft showed off some fun stuff that is out of scope here, but they also presented the next update for Windows 10 – while the Creators Update is still being rolled out for some of us. The new update should seamlessly integrate a mixed reality viewer into Windows 10! They presented it by editing the NASA rover in 3D in Windows and then placing it on stage using a regular tablet/laptop webcam. The markerless tracking seems to work well – but is only shown really, really briefly. The bigger news is that Microsoft is already establishing (so it seems) a solid pipeline for full 3D and MR integration into their world. The new and upcoming MR device from Acer is shown as well, diving into a 3D solar system representation in Virtual Reality.

The new "view mixed reality" mode in Windows 10 can be seen in the middle of the clip recorded by The Verge. Microsoft is pushing further towards easy access and easy creation of 3D content. Paint 3D was laughed at recently – but honestly, it boils things down perfectly to the core idea. Their nice marketing videos make us believe it, I know. I'm not falling for that. But the idea is to make 3D and MR content creation as easy as 2D painting in Windows. What will this enable?

The Future of Education and Society

If everyone could create content as easily as a quick and simple 2D painting, a website (using whatever CMS) or a Facebook profile page, we would reach a new level of access to digital data. If the next generation is as used to it as we are to taking pictures with our phones, writing text messages and surfing the web, what will this enable and change? What are the dangers?

No, well, let's not talk about dangers again today. The biggest dangers are, as always, dumbing down our brains, handing our decision-making processes and memories over to a machine (and a company of our choice), becoming dependent and addicted. Becoming (further) enslaved by the companies that hold our data and allow access to our beloved mixed reality. Entry is not free. Open standards? I don't see them coming yet. But let's take a look at the bright side!

If content creation is that easy, what could everyone do with it? If the process is dumbed down to the level of writing a Word file, many more people will use it. Who? Microsoft (and other companies) show nice marketing videos of a future with MR:

Learning in the future is presented through different demos. The well-known anatomy/skeleton demo for biology class is shown, and the 3D periodic table as one simple example for chemistry. A physics visualizer shows magnetic fields (so it seems), and a laser-and-mirror game presents the concept of reflection angles. The best demo I've seen (for me): a quick snippet of a trumpet player wearing a Hololens, the notes to play scrolling by before his eyes!

Overall, the message is clear. We must adapt to support a 21st-century education approach and get the next generation playfully ready to work with the new media. We will move from books to mixed reality; 3D chalkboards could support the presentation of data far better, and diving into a solar system would just be more fun and more visual in 3D and VR than in a boring, dusty book, where the scale of the planets is off, where nothing moves, where imagination has to fill the gaps.

This could be the only criticism to state: it's not always good to present everything as-is, in 1:1 scale and with realistic looks. Sometimes we need to abstract away from the presentation and let our brains work, fill the gaps, extrapolate and think about it – not just look at it. Yeah yeah, I know, quote – I hear and I forget. I see and I remember. I do and I understand – unquote. Experiencing things makes sense – sometimes. Sometimes it's just a gimmick. We must focus on the cases where it really makes sense. I don't want many more marketing videos to destroy really useful scenarios and let the companies burn and bury the potential. Let's create concepts that make sense. A 2D periodic table on the Hololens does not make sense at all – unless you underlay it with additional interactive information. Let's build collaborative experiences where we learn better together.

The video shows it nicely. Damn, rich school! I would want to be a student there. Everybody-gets-a-Hololens day, it seems! How will learning processes change once everyone has slimmer, smaller MR glasses in their pockets? Will it remain a first-world luxury fun thing in the 10s and 20s of the 21st century? The new devices, if spread democratically and with easy access, could enable so much more. The potential is big. Hey, can't wait to see more. Maybe the metaverse is only 3 years away, as Epic's Tim Sweeney stated recently. Can't wait to dive deeper into it. :-)

Last time I presented the 2007 TV series Denno Coil, which has AR technology and its impacts as its central topic. I also showed the Strange Beasts AR short film, which captures the same thought: what happens if augmented reality becomes indistinguishable from reality? Three parts need to be tackled to make us believe it – realistic behavior, realistic visuals, and realistic interaction with (virtual) haptics. So, what if? What if it all felt so real?

Impact of too real augmented reality

Well, there are some minor technical problems, like bumping into real walls (when we see an augmented open door) or falling off a cliff (when we see a bridge), that need to be addressed. A fail-safe system that passes governmental and insurance approval will take care of it. Some chaperone/Guardian-like system (like for VR today) will warn us. No worries. But what are the bigger questions? How can we profit from it, what will change, and what will get worse?

On the plus side, we get all the goodies we are waiting for today, like all the AR videos show us. We can work spatially wherever we want, get a fully AR-extended view, enjoy great new mixed reality entertainment, use telepresence to jump to friends, buy virtual decoration for ourselves and our living rooms, etc. Like in Strange Beasts, we get a companion, a helper, tutor or buddy that is always with us. A pet could help reduce anxiety and stress. Our digital personal trainer or hologram (like Al in the good old TV series Quantum Leap) will help us out. Never lost or speechless again. Learn more, experience more, discover more. All marketing can go wild.

Let's look at a few: buying only virtual decoration could help reduce our carbon footprint and save resources (not talking about the resources needed to create a perfect AR infrastructure, of course). We could exchange all our living room decoration with one click. We would only need the furniture, kitchen and bathroom objects our bodies actually require. A bed, a shower – no way around that! (Though you might tend to shower less when you are living in an Ernest Cline-like VR space most of the time…) I like this idea of removing more stuff from my physical room or library. After all, it's just the next step: mp3s instead of vinyl, ebooks instead of paper, virtual decoration up next! In a shared AR space, we could share AR'ed decoration with friends and make it an actual part of our reality.

AR technology would help in all areas of our daily life. It could help us to self-improve dramatically. On the clinical, psychological side we could also profit from new forms of therapy. Fear of heights is already being treated in VR (well, in trials), we've seen arachnophobia desensitization with AR (virtual spiders crawling over your hand), and the aforementioned companion (a holographic prompter, tutor or pet) could have the biggest impact. Your virtual dog could help you overcome trauma and loss. It could reduce your anxiety or stress (as advertised in Strange Beasts, too).

A huge feature: this AI AR companion would never let you down. It would never go away, abandon you (unless programmed to) or die. You could actually keep the very same pet, one that never ages, from your early childhood days to your last breath! This could give you stability and a feeling of safety.

The problem of disconnecting

You can hear the criticism coming: what if… the augmented pet felt real and we pulled the plug on it? What if a system crash deleted our companion and its memory (a virtual friend with amnesia)?

We will be depending on augmented content and its existence. Be it the navigation system that lets us find our way, the job monkey application that earns us money (as shown in Keiichi's Hyper-Reality), or the disappearing status symbols we carry around or stuff into our physical homes to show off to friends or strangers.

If this content goes away, we get scared – like today with no 4G. During my longer trials with the Hololens at home I already experienced it, too: the missing virtual calendar or the missing holographic cat on the floor irritated me when I was not wearing the glasses. What happened?, you will ask. I can easily imagine feeling lost and helpless.

Augmented Reality – Dangerous for our minds?

But it gets even harder if it concerns a companion: a virtual friend that we have connected with emotionally. He or she knows all about us, reacts to our jokes, helps us out and supports us. If the plug is pulled, we could experience the same loss we feel today at the death of a family member. With a long-term connection and a realistic appearance that integrates seamlessly into our reality, it could be even harder than losing a real friend. You heard me. What if your virtual friend was with you all your life, while a real friend was with you only for a few years? Which loss would hurt more?

The bigger problem: we don't want to disconnect

A system failure could be resolved with a backup. But what if we don't want to disconnect anymore? What if we are hooked on the augmented part of our reality?

We could escape ugly reality, replace parts of it, avoid confrontations. We could replace a lost item with an augmented version of it. We could actually replace our dead pet or our dead daughter by AR (you should have seen Strange Beasts by now). Recently, the German TV show "Tatort" had a moment when an A.I. version of a dead person called a family member – unnoticed (episode "Echolot"). We would, on purpose or unconsciously, avoid coping with loss and lose a part of being human. We could supposedly escape many problems and flee into a happy AR reality – until we die alone.

I'm aware that I'm getting carried away and that we are far away from this dystopian scenario. I love AR and want to look on the bright side. But it's all the more important to think today about possible crossroads, about the right path to pursue, the right way to live. Not thinking about it before it's too late could result in total dependency and a helpless drone existence – enslaved under total surveillance. Will society crash? Will people suffer from new psychoses when virtual friends are lost, or if we don't let go of real, dead family members or friends? What happens to those not wearing glasses/lenses? Do we split society in two? On which side would you stand?

Mixed Reality is at its beginning. VR devices and experiences are being discussed a lot today, and further studies are needed to see what they might cause. The same applies to AR in the future. What needs to be done along the road?

For example, the virtual companion could stay abstract on purpose. Maybe the emotional impact does not go all the way then? Maybe it needs to switch off with a disclaimer once in a while? Or will companies that sell us virtual pets make them die after a while (to sell us new ones)? What rules apply, and who defines them?

Will privacy prevail? Or will we give up all our private data – to feed the Facebook/Google-like machines, to "give us a better experience"? I do understand that cloud computing power will enable better A.I. companions. But do I want to share all my life with a big cheese company for smart AR fun? As always, a great open standard would be key. Time to support it! Will the AR hardware of the future allow usage independent of the big players? Or will we be left out? Can we hack it? (Please do!)

If a younger generation learns how to interact with virtual characters and trains a brain connection to actually feel them, they should also be trained in telling the difference. Keep the worlds apart, step back, learn AR competence. Disconnect.

Some good might come from being left out or from disconnecting. We don't need to blow up all our physical belongings or delete all our virtual ones. But clean up your life, remove unnecessary rubbish and focus on the real content. Discover real-world emotions that can help your goals in the physical space your body will live in until its last day. Let's switch off the tech once in a while, go out and play, and meet our family and friends.