It was a short piece, mostly referencing an email from Fabian Giesen, a demoscene coder (and more) who was doing some VR work at Valve as a contractor. I’ll be honest, his message was a real downer for me, and I had my own Notch moment. Why was I working towards something that, if successful, would ultimately be used just to provide value to Facebook?

Over the past nine months, a surprising number of you have told me how those early Metaverse articles had actually been very helpful to you. A few of you said that you had a Metaverse effort going, but most of you were creating multiplayer virtual environments. Thank you all for your feedback and support!

I think the moment that it all crystallized and brought me back to Metaversing was seeing the return of Valve with the HTC Vive. Suddenly, it seemed like there were possibilities once again. Thanks, Gabe. I’m looking forward to learning more about your shared entertainment universe… perhaps a non-traditional Metaverse?

As I look back over last year’s body of work, I think most of the pieces have held up well enough. Perhaps the most controversial article was on the Virtual Home. The name alone drew an immediate comparison to PlayStation Home (closed in March 2015), which turns out to be wildly unpopular with VR enthusiasts as the basis for a Metaverse implementation.

PlayStation Home was not where I was heading, so I can agree with much of the upset. Still, the article itself was far too ambitious. I tried to compress way too many ideas into a short amount of space. I’ve learned my lesson — I’ll try to keep future articles more contained.

The PlayStation Home, now abandoned by Sony

What many of you may not have realized was that most of the articles from last year formed the discrete parts of a global design for a Metaverse. That Metaverse, ultimately, was never described in its entirety. I still have what appears to be a unique blueprint for a Metaverse that I hope to describe in detail. I’m convinced that this model is not only viable (from multiple vantage points), but that it also has the ability to become wildly successful.

This year I intend to return to my work of laying down more of the design elements and then finally tying it all together. For now, I’ve got to see what happened to some illustrative artwork that was commissioned last year in JanusVR in support of an article I never published. It seems that some of the recent work by Valve (and now Oculus) has made that topic extremely relevant…

This blog is about going beyond the science fiction descriptions of the Metaverse and actually fleshing out some of the concepts, designs, and details that are useful in bringing it to life. The ideas described here are not to be interpreted as the exclusive way for the Metaverse to be designed. We’re here to put a stake in the ground. We hope to start the conversation (where it doesn’t already exist) and to move the conversation forward.

How do you navigate between unrelated virtual worlds?

Back in August 2013 when I first envisioned how I wanted a different model of the Metaverse to work, one of the fundamental questions I had was in how to glue everything together. Instead of building one large Metaverse and splitting it into pieces, as has been done before, I looked at a different solution. How do we start with a bunch of unrelated pieces of software and combine them together to form a larger Metaverse?

Our universe starts with completely different and unconnected virtual environments, games, and virtual worlds. There are different authors, languages, graphics libraries, and more. If you wanted to create a way for players (avatars) to actually move between them, how could it be done? How would you move from JanusVR to Minecraft? How do you walk from Minecraft into VRChat?

What do we want to communicate between worlds?

To start, we should try to figure out what kind of things we might want to communicate.

Identity information. (Who is the user? Are they under 13 years of age? What country are they in?)

Technical capabilities. (How much network bandwidth and latency? What kind of input devices and settings are they using? What kind of output devices and settings?)

Real-time positional information. (Position and orientation relative to a shared reference point in both worlds? Specific destination in the remote virtual world? Limb positions? Standing or sitting?)

This really doesn’t strike me as a difficult problem. We might want to start simple and evolve into greater capabilities as time goes by. I, for one, believe that the most important elements for version 1.0 might be only two factors:

Basic identity information

Real-time positional information
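To make this concrete, here is a minimal sketch of what such a version-1.0 hand-off record might look like. The field names and structure are entirely hypothetical, not drawn from any real protocol; they only cover the two elements listed above.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical record covering only the two version-1.0 elements:
# basic identity and real-time positional information.
@dataclass
class HandoffPayload:
    display_name: str       # basic identity: who is arriving
    user_id: str            # a stable identifier in some shared namespace
    position: tuple         # (x, y, z) relative to the shared reference point
    orientation: tuple      # (yaw, pitch, roll) in degrees
    destination: str        # where in the remote world to continue

def serialize(payload):
    """Encode the hand-off record as JSON for transport."""
    return json.dumps(asdict(payload))

payload = HandoffPayload("Ava", "user-42", (1.0, 0.0, 2.5),
                         (90.0, 0.0, 0.0), "janus://example.room/lobby")
print(serialize(payload))
```

A standard like this could start exactly this small and grow new fields (technical capabilities, limb positions) as the ecosystem matures.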

Why would I put real-time positioning as a key feature? The transfer will be facilitated by a limited and pre-defined shared environment, which a later post will explain and illustrate in more detail.

If we could get two different pieces of software to closely render a very limited shared environment, maintaining the relative position and orientation of the player would avoid an abrupt jump during the hand-off between environments. If I am trying to create the feeling of a cohesive virtual world using completely different software elements, at a minimum I need to create the visual illusion of continuity.
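To illustrate, assume (purely for this sketch) a 2D world and a shared doorway whose position and facing each world knows in its own coordinates. Expressing the player relative to that doorway lets the destination world re-create the pose without a visible jump:

```python
import math

# Both worlds agree on a shared reference point (say, a doorway) but place
# it at different coordinates and rotations in their own maps. Converting
# through the reference frame preserves the player's relative pose.

def to_reference_frame(pos, origin, yaw_deg):
    """World coordinates -> coordinates relative to the shared reference."""
    dx, dy = pos[0] - origin[0], pos[1] - origin[1]
    a = math.radians(-yaw_deg)
    return (dx * math.cos(a) - dy * math.sin(a),
            dx * math.sin(a) + dy * math.cos(a))

def from_reference_frame(rel, origin, yaw_deg):
    """Coordinates relative to the shared reference -> world coordinates."""
    a = math.radians(yaw_deg)
    x = rel[0] * math.cos(a) - rel[1] * math.sin(a)
    y = rel[0] * math.sin(a) + rel[1] * math.cos(a)
    return (x + origin[0], y + origin[1])

# World A places the doorway at (10, 5) facing 0 degrees; world B places
# the same doorway at (100, 200) facing 90 degrees.
rel = to_reference_frame((12.0, 5.0), (10.0, 5.0), 0.0)
print(from_reference_frame(rel, (100.0, 200.0), 90.0))
```

A real hand-off would do the same with full 3D orientation (quaternions rather than a single yaw angle), but the principle is identical.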

How do we want to communicate that information between worlds?

It is pretty obvious that we’d want a standard data format for communicating that information. The client could simply initiate a new session at the destination and pass along the information. We could also have the source and the destination talk with each other. Or maybe we could go through a third party service which handles some of the identity services. Say, wasn’t Brendan Iribe saying something about that here recently?

While Iribe admits that a billion-person MMO is “going to take a bigger network than exists in the world today,” he says Facebook’s network makes a great place to start, and suggested it could be a Metaverse that joins disparate virtual worlds. Source: The Verge, Oculus wants to build a billion-person MMO with Facebook

Well, it looks like Oculus/Facebook may already be pursuing this direction. Disparate virtual worlds joined together by Facebook’s network.

Wait a second, did he say disparate virtual worlds?

Myself, I had been searching for a better way to communicate the idea of connecting unrelated virtual worlds which were based on totally different platforms. When I saw the phrase “disparate virtual worlds”, I found it to be astonishingly accurate, if not somewhat curious. It struck me that someone had really given some thought about how to communicate a very specific idea.

I took to Google to see if I could find how and when the phrase had been used. [I have since watched the TechCrunch Disrupt video and did not hear Iribe say those actual words.] As it turns out, it has been used a few times since 2006 and… OH NO… wait a second…

Patent US 8,584,025 B2 – VIRTUAL WORLD TELEPORTATION

Image Source: US 8,584,025 B2 via Google Patents

As I was writing this article, I discovered that there is actually a patent which does a pretty good job of covering this whole idea. It was submitted by IBM employees in 2008. Crap. This is bad. They patented the whole freaking idea, didn’t they?

SIDE TOPIC: This illustrates an aspect of software patents that makes people angry. It isn’t so much that there are any really interesting or unique software solutions being patented here. Instead, it seems to be more about applying standard solutions to situations that have been identified before anyone else has had the chance to evaluate them. At times, once the problem is identified, it is all too easy to arrive at the same solution as the person who addressed it years earlier. Do some software patents boil down to nothing more than a race to identify the problem first?

I know from my own experience of submitting and being granted a patent that you cannot simply patent ideas (no matter what the TV commercials may tell you). You have to patent methods or specific ways of implementing those ideas. When you’re evaluating someone else’s patent, the important part to pay attention to is the claims section. That’s what they’re really claiming the exclusive rights to.

How would I defend myself from the claims in this patent?

I know that I have to focus on the claims. I’ll work in a few pieces from my own Metaverse design to illustrate where I think the patent has holes. Do I have enough to work around IBM’s claims?

Basically, they’re describing the migration of a resource inside of a cluster.

The entire procedure which is documented in claim 1 is little more than the migration of a resource inside of a computing cluster. Freeze the resource to prevent it from being altered, find the best place to put the resource, copy the resource, disable the resource, and start it up on a different node.

The difference is that instead of migrating a computer program or a computer resource (an IP address, SAN storage, etc.) between servers, they are migrating a record that is associated with a user’s avatar.

Even if I acknowledge that they were ahead of their time in attaching significance to the problem, I can award them no points for the novelty of their solution. “A cluster failover procedure, except, with an avatar resource.”

It is interesting that IBM specifically makes claims about teleporting avatars.

What if there is no avatar to teleport? If I am in my Virtual Home and I use a themed virtual interface to launch a local copy of Team Fortress 2 or Minecraft, there is no avatar transfer involved. I’m simply launching a local binary.

What if instead of teleporting, we create a shared space to migrate the users between applications? Program A creates a version of the holodeck. Once the user steps inside the holodeck, it transfers the relative position and orientation over to Program B. Program B then continues the simulation from the holodeck and into their own custom environment. Or perhaps we could simply simulate the remote environment closely enough to cut the user over.

Is there any importance in the definitions of the words used in their claims? I don’t see “teleporting” actually defined anywhere in their patent. What exactly does and doesn’t teleporting consist of? I don’t see it. On the other hand, they seem to clearly define what a disparate virtual environment is. “Each environment is disparate in that each may be instantiated by different service providers, utilize different proprietary systems, and require the creation of a unique account to participate.”

IBM’s claim is based, in part, on analyzing information in the received persona profile and automatically selecting a region.

…analyzing information in the received persona profile and automatically selecting a region in the first virtual world to locate the inbound avatar based on the analyzed information in the received persona profile;

If a user, inside a different application, wishes to open a JanusVR virtual room explicitly at the location http://www.reddit.com/r/VRsites, is any analysis needed? Is the intended destination even part of the “persona profile”? There is no need to automatically select a region on the basis of the user’s profile if the destination is explicitly stated (or if the region is chosen by some other means).

[EDIT: May 16, 2014] This really is more about automatically putting your avatar with similar people, be they selected friends, others with like-minded interests, associations, or more. By inserting an extra step, perhaps where the user manually confirms (by pop-up GUI, by movement through a second doorway at the destination, or by some other means) that they wish to be placed with others based on their shared profile, an automatic selection would be avoided. Beyond avoiding a patent claim, identifying good destinations and giving the user the choice might create a better user experience.
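The two escape hatches just described (an explicitly stated destination, and a manual confirmation step before any placement) could be sketched as follows. Every name here is hypothetical; this is not drawn from any real system:

```python
def choose_destination(profile, explicit_url=None, confirm=None):
    """Hypothetical routing logic for an inbound user."""
    if explicit_url:
        # Explicitly stated destination: no profile analysis is involved.
        return explicit_url
    # Otherwise a suggestion could be derived from the profile, but the
    # user confirms it manually rather than being placed automatically.
    suggestion = profile.get("home_url", "about:blank")
    if confirm is not None and not confirm(suggestion):
        return "about:blank"
    return suggestion

print(choose_destination({}, "http://www.reddit.com/r/VRsites"))
```

The point is structural: the profile is only ever consulted on the fallback path, and even there a human choice sits between analysis and placement.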

IBM has a serious procedural error in their claim.

“…automatically selecting a region in the first virtual world to locate the inbound avatar…“

IBM’s claims involve migrating from a first virtual world to a destination virtual world. There is no value in selecting a region in the first virtual world to locate the inbound avatar. Instead, the obvious implementation would be to select a region in the destination virtual world to locate the inbound avatar. In the claims section (where it matters), they seem to have chosen the wrong destination.

[UPDATE: A continuation of the patent which was published in January 2014 changes the language in a way that seems to avoid this mistake. It is interesting to note that it completely reverses the role of the first virtual world and the disparate virtual world, as described in the rest of the application.

They have a second continuation, similar to the first, which slightly changes the preamble. I have since found a third continuation of the patent. It looks like they were trying to cover a number of challenges based on the specifics of the wording.]

The first claim contains multiple (A,B,C) requirements which must all be met.

The multiple steps in the first claim include: creating a persona profile, transferring the unchanged profile, disabling the avatar in the original virtual world, and granting or denying access to the first virtual world.

Since the claim only applies when every listed step is practiced, we need only avoid one of them (though avoiding several is safer). We could change the avatar’s record (in the originating virtual world) after the transfer. We don’t have to disable the avatar (a process which isn’t clearly defined — did they mean logging out?) in the originating virtual world. We don’t have to be responsible for granting or denying access to the destination virtual world.

The remaining claims (2-7) are dependent on the conditions of the first claim having been met.

In claim #2, IBM envisioned a remote shared database being used to transfer that information. I think that if Oculus were to use Facebook services to connect disparate virtual worlds, they’d be looking at a remote shared database as well. There might be a clever way around this claim, but my architecture didn’t involve this component so I’ll leave that to someone else to defend.

In claim #3, IBM envisioned that the user profile would be transferred directly between servers. I originally envisioned the client being responsible for transmitting the information necessary to coordinate the hand-off between two virtual worlds. The servers would communicate and synchronize through the client.

That’s one way around the problem. If I were Oculus, I’d use the APIs in the Oculus SDK to provide that service. But what about the audit logs and user reputation information? If it became necessary to transfer sensitive information between the servers, it could still be communicated between the servers, indirectly, through the client by using a carefully chosen encryption protocol.
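As a sketch of that indirect relay, assume the two servers share a secret provisioned out of band (a real design would more likely use public-key cryptography, and would add actual encryption for confidentiality). The source server seals the record, the client carries the opaque blob, and the destination verifies the client did not alter it:

```python
import hmac, hashlib, json

# Hypothetical shared secret; only illustrates tamper-evidence, not privacy.
SHARED_SECRET = b"demo-secret"

def seal(record):
    """Source server: serialize the record and attach an integrity tag."""
    body = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "tag": tag}

def verify(blob):
    """Destination server: reject anything the client modified in transit."""
    expected = hmac.new(SHARED_SECRET, blob["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, blob["tag"]):
        raise ValueError("record was modified in transit")
    return json.loads(blob["body"])

blob = seal({"user": "user-42", "reputation": 0.97})
print(verify(blob)["reputation"])
```

The servers never open a direct connection; the trust lives in the sealed record, not in the channel.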

The remaining claims are very broad.

Claims 4-7 seem tough to counter. They basically claim the idea of sending virtual world information in a standardized format between virtual worlds. Perhaps the defense against claim #1 would be enough to knock these out? I really don’t know. Where is Pamela Jones from Groklaw when you need her?

Where to go from here?

I’d be interested in what others have to say about IBM’s patent (and the overall topic of virtual reality patents). Perhaps related, I was excited to see the video of Michael Abrash and Dov Katz of Oculus VR talking about the substantial R&D investment that Facebook is going to fund for virtual reality research over the next 5-10 years. This also has me concerned. Corporate-funded research will likely yield patents. Could VR become a terrible patent minefield (once again) in just a few years’ time?

In any case, in my next few posts, I hope to get back to spelling out exactly how a live user transfer between disparate virtual worlds could be accomplished. If you’ve got some JanusVR virtual room coding skills, I might have a small job for you to create some illustrations.

Images: VRChat, JanusVR, Anarchy Arcade, Minecraft

Today’s Glimpse into the Virtual Home (April 27, 2014)

Earlier, I wrote about the concept of the Virtual Home as the center of your activities in virtual reality. It is personal space, lounge, hangout, and launching pad. There are a number of ways to handle the user interface for the Virtual Home.

We’re going to quickly look at VirtualReality.io, cover the concept for the Rift Navigator, and go back and pick up a great Virtual Home that I missed called Anarchy Arcade. After that, the conversation will switch gears to highlight a fundamental problem inside the Virtual Home (and virtual reality as a whole).

VirtualReality.io

Screenshot from VirtualReality.io

VirtualReality.io is a no-nonsense launching pad for VR software. It doesn’t do a lot. It doesn’t have the personal space, lounge, or hangout. What it does do, though, it does correctly. They’ve got the launching pad covered for the novice user.

The user selects an application from a catalog of third-party software, and it is installed onto their system. When the user selects the installed program, the interface quickly moves out of the way, but returns when the application terminates. There is no need to remove a head-mounted display. At a basic level, a Virtual Home needs to behave similarly.

Currently, the major downside to VirtualReality.io is content. Content is centrally managed. In order for an application to be listed, a request has to be submitted, and the application has to meet certain requirements. My hope is that as VR software matures, these simple technical requirements will be a non-issue for most applications. Perhaps a set of command-line parameters can be standardized to make the adaptation easier.
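For illustration, a launcher loop built around such standardized parameters might look like the sketch below. The flags are imagined; no such standard exists yet. It shows the behavior described above: start the title with agreed-upon parameters, step out of the way, and resume the launcher interface when the process exits.

```python
import subprocess

def build_args(executable, destination=None):
    """Assemble a launch command from hypothetical standard flags."""
    args = [executable, "--vr", "--no-splash"]   # imagined standard flags
    if destination:
        args += ["--destination", destination]   # imagined hand-off parameter
    return args

def launch(executable, destination=None):
    """Run the title and block until it terminates."""
    proc = subprocess.run(build_args(executable, destination))
    return proc.returncode    # the launcher UI would resume here

print(build_args("game.exe", "janus://room/lobby"))
```

Agreeing on even this small a command-line contract would let a Virtual Home hand users into third-party software without anyone removing a headset.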

Rift Navigator

Image Source: Rift Navigator Concept

Some concept images for Rift Navigator were released yesterday. It has good art direction. From the images, it appears to have everything we’re looking for: a private space with some customization, a place to hang out and invite others, a place to launch VR software or Metaverse destinations from, and also a place to consume media from.

As a design choice, they aren’t currently looking at creating an environment which can be fully explored. Transportation from room-to-room or within a room would be via teleportation. Why? They state that they wish to avoid the inconsistency between our real bodies and our virtual bodies, which may break immersion. The unique touch in Rift Navigator is the user interface: a virtual tablet which can be invoked and dismissed at will.

Anarchy Arcade

Image Source: Metaversing, an in-world screenshot from Anarchy Arcade

It is easy to be too complimentary of a concept image because it may have yet to be compromised by real-world design choices. These choices can force us to accept something that is less than our own vision.

I’m a little late to discover Anarchy Arcade, a virtual world that seems to have just about everything we’re looking for in a Virtual Home. If this is the kind of thing that you might be interested in, I’d encourage you to go and download the free prototype. A Steam client is required to run the demo (because the code is based on Source SDK Base 2013 Multiplayer).

Starting from a base room, you fill it with props (objects such as chairs, tables) and media (YouTube pages, images, movies, Steam software, PC applications). In this particular implementation, you don’t start out with the full library of props, but you continue to unlock more objects as you add additional content.

This unlock system is mildly motivating, but it is annoying to be denied the big-screen TV and other must-haves until you’ve accomplished specific goals. I manipulated my own score to quickly gain several unlocks by moving high-valued objects and redeploying them.

The art style could use a bit of work, but to be fair, I wasn’t using one of the standard demonstration maps. Once populated (and populating the map was not an experience for a novice user), I was ready to go into multiuser mode.

The fact that this programmer is out there and ahead of everyone else also means that his is the first real demonstration of where the Virtual Home concept starts to run into problems.

Image Source: Manipulating movie metadata in Anarchy Arcade

There is a lot of metadata that needs to be managed in order to put a clickable movie on the wall. We’ve got the title, description, media type, the actual media, optional trailer media, a static logo, a marquee, and a URL to send someone to in order to purchase the media. A lot of that is pulled together automatically using scrapers.

Scrapers are an unfortunate problem. A fully automatic scraper probably isn’t going to do a great job. A targeted scraper may not be difficult to code, but when you have to maintain a number of different scrapers, you’ll be working with third parties who don’t make collecting metadata so easy and won’t notify you before things break.
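For a sense of scale, even a scraper’s very first step, guessing a lookup title from a local filename, takes some care. This sketch handles only two common naming patterns and is not drawn from Anarchy Arcade’s actual code:

```python
import re
from pathlib import PurePosixPath

def guess_title(path):
    """Guess a (title, year) pair from a media filename.

    Handles plain names ("Guardians of the Galaxy.mp4") and dotted
    release-style names ("Blade.Runner.1982.mkv"). Everything past this
    point -- querying a metadata site, parsing its pages -- is the fragile,
    site-specific layer that breaks without warning.
    """
    stem = PurePosixPath(path).stem
    m = re.match(r"(?P<title>.+?)[ .(_-]*(?P<year>(19|20)\d{2})?$", stem)
    title = m.group("title").replace(".", " ").strip()
    return title, m.group("year")

print(guess_title("C:/Movies/Guardians of the Galaxy/Guardians of the Galaxy.mp4"))
```

Multiply this by every naming convention, every media type, and every third-party site, and the maintenance burden becomes clear.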

While you may share the filename of the movie, the actual content of the movie is not shared. It assumes that the file distribution for images, movies, and software all happens outside of Anarchy Arcade (or not at all). “Anarchy Arcade is not screen-sharing or peer-to-peer file sharing software,” says software designer Elijah Newman-Gomez.

In Multiuser mode, if a piece of media (images, movies, and more) is missing on the other end, it doesn’t play. The other users must literally have “C:/Movies/Guardians of the Galaxy/Guardians of the Galaxy.mp4”. The final version of the software will try to bridge the gap by offering the user choices of where the media may be obtained. There is no way to guarantee that the two versions will be synchronized.

As a launching pad for software (Steam or otherwise), Anarchy Arcade does a fair job. As a virtual reality launcher, it has some of the same immersion-preserving requirements as VirtualReality.io, but it doesn’t actively enforce them. It is up to the user to configure the connected software to behave appropriately, or to deal with the consequences.

Additionally, the Source Engine doesn’t lend itself to being a lightweight app launcher. When launching outside applications, the program suspends itself to conserve system resources. Once control is given back to Anarchy Arcade, the current version requires a click from the user and a fair amount of time to establish control again.

Anarchy Arcade is still in development and has a few more twists waiting for you. If you’d like to read more about where the developer is heading, then visit his Oculus Developer Forum post from January 29th, 2014.

Bonus Topic: Fix the Internet Hole

Image Source: origin is unknown

The timing of these media management examples couldn’t be better, considering that they will be the topic of an upcoming post. Media management is even more necessary for Virtual Worlds than for the Internet. A piece of the Internet is missing. When and if someone is able to solve it, they’re going to be the king of the Metaverse.

How do we handle content? VirtualReality.io goes for a centrally managed approach. They themselves define what media is available, where it can be downloaded, and provide the necessary screenshots and decisions for you to choose.

Anarchy Arcade attempts to lean on Steam as a content provider (today for games, tomorrow perhaps for media as well). Where that doesn’t work, Anarchy Arcade hacks around it with scrapers. Considering that they’re up against a problem as fundamental as media management, they do as good of a job as you can expect.

Image Source: Marvel Studios, Guardians of the Galaxy

There is no common registry that allows two users to recognize that they are both legitimate owners of the same movie title. Even if we both own legitimate copies of the same movie, we can’t establish that fact, and we can’t be given permission to bridge the technical divide between us.

Alternatively, if I have the Guardians of the Galaxy movie and you don’t, there is no direct way to codify what exactly I have so that the system can automatically determine how you need to go through your preferred media provider to receive a copy. There is no authority that I can go to in order to get permission… even voluntarily pay for permission… to share my media with you. That’s big.

ASIDE: Good for Marvel. As recently popularized by John Carmack, they’ve provided an API which describes their comic book media. Having a common dictionary to map media to is at least part of the problem that needs to be solved… for all media.

There is no method of matching movie capabilities (Full HD, SD), variants (European version, Director’s Cut), or media (Blu-Ray, streaming rental). If I want a marginal upgrade, there is no recognition for what I currently own. My resource is stranded, and no amount of money will convert my participation into something else. I either have to live with what I have, or I have to pay full price for only a marginal difference.
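What would an entry in such a registry need to capture? A hypothetical data model follows; no such shared registry or identifier scheme exists today, and the IDs and category names here are invented purely to illustrate what matching would have to express:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OwnedMedia:
    canonical_id: str   # an imagined industry-wide ID for the film itself
    edition: str        # "theatrical", "directors-cut", ...
    quality: str        # "SD", "HD", "UHD"
    medium: str         # "bluray", "stream", "file"

def same_title(a, b):
    """Both parties own some legitimate copy of the same film."""
    return a.canonical_id == b.canonical_id

def is_upgrade(owned, offered):
    """Would the offered variant be a quality upgrade over what is owned?"""
    ranks = {"SD": 0, "HD": 1, "UHD": 2}
    return (same_title(owned, offered)
            and ranks[offered.quality] > ranks[owned.quality])
```

With a shared vocabulary like this, a system could finally recognize two legitimate owners of the same title, and could price an upgrade as the marginal difference rather than a full repurchase.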

Fess up. Is the business model to screw me? Or does it just happen because there isn’t a better way? Will piracy be the only relief that I am offered?

Image Source: Futurama, “Attack of the Killer App”

In writing to the Australian Government, Google says, “We believe there is significant, credible evidence emerging that online piracy is primarily an availability and pricing problem.” You better believe it. I can’t even throw money at them!

In Closing

In this article, we reviewed VirtualReality.io, the Rift Navigator concept, and the demonstration release of Anarchy Arcade. We showed how the thread of media management runs through them all and illustrates an upcoming problem in virtual worlds.

If the issue of media continues to go unsolved, either virtual reality will be a diminished social experience, or it’ll be the most clear and unfortunate demonstration yet of why media, content, and consumption need to be brought under a rational framework.


“The Basement” from Ready Player One, recreated in Second Life. Image Source: New World Notes Blog

I’m convinced that the Virtual Home is at the center of the user experience in the Metaverse. There is so much ground to cover, more than will fit in a single post. How do I convey a universe?

My design sensibility tells me that we’re going to have to iterate this over time in order to figure out what exactly this space needs to be. My gut tells me that we’re going to need quite a bit of competition to make those iterations happen.

The Virtual Home is born out of four concepts: the Launch Pad, the Personal Space, the Utility Space, and the Trusted Space. We’ll talk about each of these, and then we’ll talk about three different ways that this set of concepts plays out.

The set of four concepts for the Virtual Home

Launch Pad: A beginning. A spawn point. The Windows desktop. Home base. The 1995 Yahoo! guide to the Internet. In real life, it is where you wake up in the morning and start your day. The Virtual Home should be the point from which your journey through the virtual reality begins, and where you eventually return.

The Windows 8.1 start screen. It worked well for touchscreen tablets, but not for the desktop PC. Can Microsoft avoid the temptation to extend the design into Virtual Reality with a Minority Report set of inputs? Do our users want an interface with the casual ease of a touchscreen tablet? The utilitarian function of a PC? Something else? (Image Source: TechCrunch)

Personal Space: The ability to create without seeking outside permission. A place for building, for customizing, and for self expression. The place where you bring back some of the cool things you’ve found in the Metaverse. A trophy wall (as mentioned in an earlier post) which is filled with rewards from other worlds. Pinterest. Your Minecraft castle (or cave).

Utility Space: Useful widgets. Locally saved functions. Management of your preferred providers (media providers, music providers, pizza providers, storage providers, etc). A shared standard. In a later post, I may describe this as a bridge to a single virtual reality experience.

Trusted Space: The privacy of one’s residence. Complete control over content. Information you might not want to reveal or actions you might not want to do in a public space. A place for safe objects that aren’t allowed to report back on user behavior.

The Launch Pad, Personal Space, Utility Space, and Trusted space are at the core of the Virtual Home concept. As mentioned, we’re going to discuss how these four pieces play with three other topics: the pocket universe, service providers, and finally, the worlds adjacent to the Metaverse.

The Personal Space and the Trusted Space are the core properties that I want to embrace in the Virtual Home. We want a navigable environment that is closed off from most of the outside world, and we want to keep the contents in the Virtual Home secure.

The Personal and Trusted aspects suggest that we want a well defined trust boundary to surround our code. I’ll have to defer that conversation to a later article.

It also suggests that we don’t want someone else in the Metaverse to be able to randomly stumble upon our home, even if only just to view the models and textures contained within. For this reason, I’m going to select a non-Euclidean space as the area in which our Virtual Home resides. Access is tightly controlled (whitelisted) and what happens inside the space is not visible to those on the outside.

Land and property rental are common in some virtual environments. In Second Life, you do not start with any land; it must be purchased. Here is one such parcel for rent in Second Life. (Listing was selected at random, no known relation.)

It is apparent that users are going to start with a piece of property: their own home. They’ll be able to populate it as they see fit. That isn’t to say that there won’t be a market for virtual real estate. Real estate (which also includes a hosting service) has been a core revenue generator for years at Linden Lab. It’s just that private personal space won’t need to be bought and sold at a premium. Eventually, we may need a mechanism to keep the homescape from accumulating (lag-inducing) clutter.

It is worth noting that extensive use of private space would tend to reduce a feeling of community in a greater virtual environment. Perhaps as a nod to a more traditional notion of the Metaverse, the shell of user’s Virtual Home could exist in a publicly accessible virtual landscape, but the actual interior would belong in a pocket universe. (The TARDIS from Doctor Who is a great example of how this concept works out.) Fortunately, the Virtual Home is not the whole of the Metaverse.

Basing the player in a pocket universe also informs us how we might travel in the Metaverse. The prevailing notion has been that the Metaverse is a singular and contiguous three-dimensional space. Conventionally, you might walk, fly, or teleport to the X, Y, and Z coordinates of where you want to go. With a non-Euclidean pocket universe, that model can work out a little differently. We’ll revisit this later in another article.

How will the Launch Pad function work? The casual user will want something more than a list of destinations as icons or text results in a dialog box. In an advanced implementation, perhaps the user tells The Librarian (a Siri with a virtual presence) something about the destination they’re looking for. The matches are displayed on screens along the wall; the user presses one of them, then walks through the door and into their selection. The power user will likely prefer a no-nonsense approach. (Image Source: LitReactor, The Matrix)
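Underneath the virtual presence, The Librarian’s job is a ranked search: take what the user said, score the candidate destinations, and surface the best matches as the “screens along the wall.” A back-of-the-napkin sketch, with an invented destination catalog and a deliberately naive tag-overlap ranking:

```python
# Sketch of the Launch Pad search flow. The destination catalog,
# tags, and ranking scheme are all hypothetical.

DESTINATIONS = [
    {"id": "jazz-club", "tags": {"music", "jazz", "social"}},
    {"id": "observatory", "tags": {"space", "astronomy", "quiet"}},
    {"id": "arcade", "tags": {"games", "social", "retro"}},
]

def librarian_search(query_words):
    """Rank destinations by how many query words match their tags."""
    words = set(query_words)
    scored = [(len(words & d["tags"]), d["id"]) for d in DESTINATIONS]
    # Best matches first; drop anything with no overlap at all.
    return [d_id for score, d_id in sorted(scored, reverse=True) if score > 0]

matches = librarian_search(["social", "music"])
assert matches[0] == "jazz-club"
```

A real implementation would use natural-language understanding rather than tag overlap, but the shape of the interaction — describe, rank, display, walk through — stays the same.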

The Trusted Space suggests that we want to keep our Virtual Home private not only from other users but, where possible, from service providers as well. The Utility Space aspect tells us that the Virtual Home will have basic services that we always want available and configurable. Taken together, these tell me that the Virtual Home is something we want running on our own hardware. That hardware could be built into the HMD, a laptop, desktop, game console, or smartphone, or it could be running on a dedicated media server.

I’ve said several times now that we need to be careful about using the sci-fi Metaverse as an example to follow, but this is another case where sci-fi seems to have mostly gotten it right. Ready Player One and Snow Crash both have VR gear with a limited off-network internal environment built into it. In the Snow Crash universe, the functionality was far more complex and allowed for an Intelligent Agent to run inside that space. We have to be careful about the flow of information. Without a set of enforced rules, most users will blindly give up security for convenience.

The service provider: resource location, privacy, benefits, and advertising.

If I were a service provider, I might follow the Virtual Home approach… with a twist. I’d appreciate the overall idea, but I would want something where I was the provider of a service, not of software. I may not overtly bring this to the users’ attention, but I would assume a role inside the users’ Trusted Space, and I would collect or even host their personal details on my servers.

As a service provider, I would offer some additional features as part of the arrangement. The Home would automatically be backed up and accessible from any location with Internet connectivity. I would address security concerns globally on my centralized architecture. I would manage the VR marketplace where people can obtain objects and customizations for their personal environment.

Because the Virtual Home acts as the Launch Pad, I would let my advertising partners take advantage of my control of the Home environment to promote their destinations. As far as I could convince my users to accept it, I’d throw in some overt advertisements or perhaps some subtle product placements. I could temporarily add a bonus room to their house to run a very special in-world promotion.

I would want to know what kind of things my users have in their Virtual Home, what kind of activities they perform there, where they go, and what their preferences are. All of these things may give me better insight into how to sell advertisements to them. I would steer them towards my preferred set of providers for the Utility Space, and every time they sit down with a good friend in the Metaverse and use their preferred provider to share a real-world pizza, I would receive a commission.

The Virtual Home: more than just the Metaverse

It is important to note that the Virtual Home won’t act only as a launch pad for Metaverse destinations. It also launches local software: the intelligent agent mentioned earlier, video games, and any manner of virtual reality applications that exist outside of the Metaverse.
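In other words, the Home is one launcher with two kinds of entries: remote Metaverse destinations and local applications. A minimal sketch of that unified registry — the entry names, URI scheme, and paths are all made up for illustration:

```python
# Sketch: the Virtual Home as a single launcher for both Metaverse
# destinations and local VR applications. Entries are illustrative.

LAUNCHER = {}

def register(name, kind, target):
    LAUNCHER[name] = {"kind": kind, "target": target}

def launch(name):
    entry = LAUNCHER[name]
    if entry["kind"] == "metaverse":
        # A remote destination: connect over the network.
        return f"connecting to {entry['target']}"
    elif entry["kind"] == "local":
        # A local app: run it on the user's own hardware.
        return f"starting {entry['target']}"

register("Town Square", "metaverse", "metaverse://town-square")
register("Chess VR", "local", "/apps/chess-vr")
assert launch("Chess VR").startswith("starting")
```

From the user’s point of view the two kinds are indistinguishable — a door is a door — which is what lets non-Metaverse software share the same front end.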

If I’m a service provider, I’ll also need my own marketplace to sell VR software. Perhaps I can offer some bonus integration features? The Virtual Home will become the Windows Desktop, the Yahoo! Guide, and the Steam Store of virtual reality.

For years to come, non-Metaverse software is going to be the driver for consumer adoption for VR hardware. Games will help us figure out the details of critical Metaverse components (for example: the user interface, diverse sensor inputs, and avatar movement). As mentioned in the introduction, the Metaverse will need a lot of iterative refinement. We can achieve some of this through video games.

For the Metaverse to succeed, it needs to not just run side-by-side with the gaming market, but to latch itself onto it. The Virtual Home will help keep that marriage together. I expect to see some cross-pollination, such as Metaverse destinations and reward items for virtual reality games. (I would expect Valve Software to be the master of this.)

In this article, I originally included an introduction to a related concept which I called the Real Home. It served as an additional bridge between the Metaverse and Augmented Reality. I have so many concepts to share and so little space; I’m going to have to save that for another article as well.

Wrapping up

I’ve gone way over budget with the number of words in this article, and yet there is so much more about the Virtual Home that we could discuss. I realize that I have yet to talk about the structure of the Metaverse, which ties up more of the loose ends. That, too, will be the topic of an upcoming article.

In this article, we outlined one core component of the Metaverse: the Virtual Home. We covered the Launch Pad, Personal Space, Utility Space, and Trusted Space. We then looked at three ways those work together: in the pocket universe, with service providers, and in bridging the worlds adjacent to the Metaverse.

Does the Virtual Home make sense? Does the overall design resonate with you? Let me know.

[Image: “The Basement” from Ready Player One, recreated in Second Life, via New World Notes Blog]
[Image: The Windows 8.1 start menu. Wrong for the PC desktop, completely wrong for Virtual Reality. (Image Source: TechCrunch)]
[Image: Parcel for rent in Second Life for US$5/month.]
[Image: If you can’t get past the desktop icon model, at least work it into the theme. (Image Source: LitReactor)]
[Image: The Gallery: Six Elements — “The Gallery Greenlit & Valve VR Experience!”]