Travelling Between Unrelated Virtual Worlds
If this is your first time visiting Metaversing, please read:
This blog is about going beyond the science fiction descriptions of the Metaverse and actually fleshing out some of the concepts, designs, and details that are useful in bringing it to life. The ideas described here are not to be interpreted as the exclusive way for the Metaverse to be designed. We’re here to put a stake in the ground. We hope to start the conversation (where it doesn’t already exist) and to move the conversation forward.
How do you navigate between unrelated virtual worlds?
Back in August 2013 when I first envisioned how I wanted a different model of the Metaverse to work, one of the fundamental questions I had was how to glue everything together. Instead of building one large Metaverse and splitting it into pieces, as has been done before, I looked at a different solution. How do we start with a bunch of unrelated pieces of software and combine them together to form a larger Metaverse?
Our universe starts with completely different and unconnected virtual environments, games, and virtual worlds. There are different authors, languages, graphics libraries, and more. If you wanted to create a way for players (avatars) to actually move between them, how could it be done? How would you move from JanusVR to Minecraft? How do you walk from Minecraft into VRChat?
What do we want to communicate between worlds?
To start, we should try to figure out what kind of things we might want to communicate.
- Identity information. (Who is the user? Are they under 13 years of age? What country are they in?)
- Avatar information. (Models? Textures? Appearances at different resolutions? Avatar scale?)
- Inventory information. (What virtual items does the avatar possess? What media items does the avatar possess?)
- Preferences. (Volume? Graphics quality? Control schemes? Player height?)
- Technical capabilities. (How much network bandwidth and latency? What kind of input devices and settings are they using? What kind of output devices and settings?)
- Real-time positional information. (Position and orientation relative to a shared reference point in both worlds? Specific destination in the remote virtual world? Limb positions? Standing or sitting?)
This really doesn’t strike me as a difficult problem. We might want to start simple and evolve into greater capabilities as time goes by. I, for one, believe that the most important elements for version 1.0 might be only two factors:
- Basic identity information
- Real-time positional information
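As a thought experiment, those two version 1.0 elements could be sketched as a small, language-neutral payload. Everything here (the field names, the `HandoffPayload` class, and the example values) is hypothetical, not a proposed standard:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical minimal hand-off payload: basic identity plus real-time
# position, the two elements proposed above for a version 1.0 protocol.
@dataclass
class HandoffPayload:
    user_id: str            # opaque identifier from an identity service
    display_name: str
    position: tuple         # (x, y, z) relative to the shared reference point
    orientation: tuple      # (yaw, pitch, roll) in degrees
    destination: str        # e.g. a room address in the target world

payload = HandoffPayload(
    user_id="user-1234",
    display_name="Ari",
    position=(1.5, 0.0, -2.0),
    orientation=(90.0, 0.0, 0.0),
    destination="janus://example.com/room",
)

# Serialize to JSON so any engine, in any language, can parse it.
wire_format = json.dumps(asdict(payload))
print(wire_format)
```

Keeping the payload this small sidesteps most interoperability arguments: the two applications only need to agree on identity and pose, not on avatars, inventory, or rendering.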
Why would I put real-time positioning as a key feature? The transfer will be facilitated by a limited and pre-defined shared environment, which a later post will explain and illustrate in more detail.
If we could get two different pieces of software to closely render a very limited shared environment, maintaining the relative position and orientation of the player would avoid an abrupt jump during the hand-off between environments. If I am trying to create the feeling of a cohesive virtual world using completely different software elements, at a minimum I need to create the visual illusion of continuity.
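To make that hand-off concrete, here is one way the positional math could work, assuming each world publishes the location and heading of the shared reference point in its own coordinate system. The function and its parameters are illustrative, not taken from any existing engine:

```python
import math

def to_world_b(pos_a, ref_a, ref_b, heading_delta_deg):
    """Map a player's 2D position from world A into world B, given the
    location of the shared reference point in each world and the
    difference in heading between the two worlds (all hypothetical)."""
    # Position relative to the shared reference point in world A.
    rx = pos_a[0] - ref_a[0]
    rz = pos_a[1] - ref_a[1]
    # Rotate by the heading difference so "forward" matches in world B.
    t = math.radians(heading_delta_deg)
    rbx = rx * math.cos(t) - rz * math.sin(t)
    rbz = rx * math.sin(t) + rz * math.cos(t)
    # Offset by the reference point's location in world B.
    return (ref_b[0] + rbx, ref_b[1] + rbz)

# A player standing 2 m east of the shared reference point in world A
# should appear 2 m east of it in world B when the headings agree.
print(to_world_b((12.0, 5.0), (10.0, 5.0), (100.0, 200.0), 0.0))
```

If both applications apply this mapping every frame while the player is inside the shared environment, the cut-over point becomes invisible: the player's pose is continuous even though the renderer changed underneath them.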
How do we want to communicate that information between worlds?
It is pretty obvious that we’d want a standard data format for communicating that information. The client could simply initiate a new session at the destination and pass along the information. We could also have the source and the destination talk with each other. Or maybe we could go through a third party service which handles some of the identity services. Say, wasn’t Brendan Iribe saying something about that here recently?
While Iribe admits that a billion-person MMO is “going to take a bigger network than exists in the world today,” he says Facebook’s network makes a great place to start, and suggested it could be a Metaverse that joins disparate virtual worlds. Source: The Verge, Oculus wants to build a billion-person MMO with Facebook
Well, it looks like Oculus/Facebook may already be pursuing this direction. Disparate virtual worlds joined together by Facebook’s network.
Wait a second, did he say disparate virtual worlds?
I had been searching for a better way to communicate the idea of connecting unrelated virtual worlds built on totally different platforms. When I saw the phrase “disparate virtual worlds”, I found it astonishingly accurate, if somewhat curious. It struck me that someone had really thought about how to communicate a very specific idea.
I took to Google to see if I could find how and when the phrase had been used. [I have since watched the TechCrunch Disrupt video and did not hear Iribe say those actual words.] As it turns out, it has been used a few times since 2006 and… OH NO… wait a second…
Patent US 8,584,025 B2 – VIRTUAL WORLD TELEPORTATION
As I was writing this article, I discovered that there is actually a patent which does a pretty good job of covering this whole idea. It was submitted by IBM employees in 2008. Crap. This is bad. They patented the whole freaking idea, didn’t they?
SIDE TOPIC: This illustrates an aspect of software patents that makes people angry. It isn’t so much that there are any really interesting or unique software solutions being patented here. Instead, it seems to be more about applying standard solutions to situations that have been identified before anyone else has had the chance to evaluate them. At times, once the problem is identified, it is all too easy to arrive at the same solution as the person who addressed it years earlier. Do some software patents boil down to nothing more than a race to identify the problem first?
I know from my own experience of submitting and being granted a patent that you cannot simply patent ideas (no matter what the TV commercials may tell you). You have to patent methods or specific ways of implementing those ideas. When you’re evaluating someone else’s patent, the important part to pay attention to is the claims section. That’s what they’re really claiming the exclusive rights to.
- Overall specification of methods and components for teleporting avatars between disparate virtual worlds
- Loading the user’s profile to a remote shared database
- Transferring the user’s profile directly between disparate virtual worlds
- Including rendering information (the avatar) in the profile
- Also including user traits and inventory in the profile
- Also including an audit trail and violation history in the profile
- Using a common data construct to communicate this information
How would I defend myself from the claims in this patent?
I know that I have to focus on the claims. I’ll work in a few pieces from my own Metaverse design to illustrate where I think the patent has holes. Do I have enough to work around IBM’s claims?
Basically, they’re describing the migration of a resource inside of a cluster.
The entire procedure which is documented in claim 1 is little more than the migration of a resource inside of a computing cluster. Freeze the resource to prevent it from being altered, find the best place to put the resource, copy the resource, disable the resource, and start it up on a different node.
The difference is that instead of migrating a computer program or a computer resource (IP address, SAN storage, etc.) between servers, they are migrating a record that is associated with a user’s avatar.
Even if I acknowledge that they were ahead of their time in attaching significance to the problem, I can award them no points for the novelty of their solution. “A cluster failover procedure, except, with an avatar resource.”
It is interesting that IBM specifically makes claims about teleporting avatars.
1. A method for teleporting avatars between disparate virtual worlds, comprising:
What if there is no avatar to teleport? If I am in my Virtual Home and I use a themed virtual interface to launch a local copy of Team Fortress 2 or Minecraft, there is no avatar transfer involved. I’m simply launching a local binary.
What if instead of teleporting, we create a shared space to migrate the users between applications? Program A creates a version of the holodeck. Once the user steps inside the holodeck, it transfers the relative position and orientation over to Program B. Program B then continues the simulation from the holodeck and into their own custom environment. Or perhaps we could simply simulate the remote environment closely enough to cut the user over.
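The holodeck idea described above can be sketched as a tiny state machine; the state and event names here are invented for illustration:

```python
from enum import Enum, auto

class HandoffState(Enum):
    IN_WORLD_A = auto()      # normal play in the source application
    IN_HOLODECK = auto()     # both applications render the shared space
    IN_WORLD_B = auto()      # destination application has taken over

def step(state, event):
    """Advance the hand-off state machine (events are hypothetical)."""
    transitions = {
        (HandoffState.IN_WORLD_A, "enter_holodeck"): HandoffState.IN_HOLODECK,
        (HandoffState.IN_HOLODECK, "b_confirmed"): HandoffState.IN_WORLD_B,
        (HandoffState.IN_HOLODECK, "abort"): HandoffState.IN_WORLD_A,
    }
    # Unknown events leave the state unchanged.
    return transitions.get((state, event), state)

state = HandoffState.IN_WORLD_A
for event in ["enter_holodeck", "b_confirmed"]:
    state = step(state, event)
print(state)  # HandoffState.IN_WORLD_B
```

Note that nothing here "teleports": the player simply walks through a space that both programs agree to render, which is part of why the mechanism may sit outside the patent's teleportation framing.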
Is there any importance in the definitions of the words used in their claims? I don’t see “teleporting” actually defined anywhere in their patent. What exactly does and doesn’t teleporting consist of? I don’t see it. On the other hand, they seem to clearly define what a disparate virtual environment is. “Each environment is disparate in that each may be instantiated by different service providers, utilize different proprietary systems, and require the creation of a unique account to participate.”
IBM’s claim is based, in part, on analyzing information in the received persona profile and automatically selecting a region.
…analyzing information in the received persona profile and automatically selecting a region in the first virtual world to locate the inbound avatar based on the analyzed information in the received persona profile;
If a user, inside a different application, wishes to open a JanusVR virtual room explicitly at the location http://www.reddit.com/r/VRsites, is any analysis needed? Is the intended destination even part of the “persona profile”? There is no need to automatically select a region on the basis of the user’s profile if the destination is explicitly stated (or if the region is chosen by some other means).
[EDIT: May 16, 2014] This really is more about automatically putting your avatar with similar people, be they selected friends, others with like-minded interests, associations, or more. By inserting an extra step, perhaps where the user manually confirms (by pop-up GUI, by movement through a second doorway at the destination, or by some other means) that they wished to be placed with others based on their shared profile, an automatic selection would be avoided. Beyond avoiding a patent claim, identifying good destinations and giving the user the choice might create a better user experience.
IBM has a serious procedural error in their claim.
“…automatically selecting a region in the first virtual world to locate the inbound avatar…“
IBM’s claims involve migrating from a first virtual world to a destination virtual world. There is no value in selecting a region in the first virtual world to locate the inbound avatar. Instead, the obvious implementation would be to select a region in the destination virtual world to locate the inbound avatar. In the claims section (where it matters), they seem to have chosen the wrong destination.
[UPDATE: A continuation of the patent which was published in January 2014 changes the language in a way that seems to avoid this mistake. It is interesting to note that it completely reverses the role of the first virtual world and the disparate virtual world, as described in the rest of the application.
They have a second continuation, similar to the first, which slightly changes the preamble. I have since found a third continuation of the patent. It looks like they were trying to cover a number of challenges based on the specifics of the wording.]
The first claim contains multiple (A,B,C) requirements which must all be met.
The multiple steps in the first claim include: creating a persona profile, transferring the unchanged profile, disabling the avatar in the original virtual world, and granting or denying access to the first virtual world.
Since every step must be met for the claim to apply, we only need to avoid one of the listed steps — though avoiding several is safer. We could change the avatar’s record (in the originating virtual world) after the transfer. We don’t have to disable the avatar (a process which isn’t clearly defined — did they mean logging out?) in the originating virtual world. We don’t have to be responsible for granting or denying access to the destination virtual world.
The remaining claims (2-7) are dependent on the conditions of the first claim having been met.
IBM’s additional claims involve transferring the profile through a remote shared database, or directly between virtual worlds.
2. The method for deploying of claim 1, wherein the transfer includes loading the persona profile to a remote shared database.
3. The method for deploying of claim 1, wherein the transfer sends the persona profile directly to the disparate virtual world.
In claim #2, IBM envisioned a remote shared database being used to transfer that information. I think that if Oculus were to use Facebook services to connect disparate virtual worlds, they’d be looking at a remote shared database as well. There might be a clever way around this claim, but my architecture didn’t involve this component so I’ll leave that to someone else to defend.
In claim #3, IBM envisioned that the user profile would be transferred directly between servers. I originally envisioned the client being responsible for transmitting the information necessary to coordinate the hand-off between two virtual worlds. The servers would communicate and synchronize through the client.
That’s one way around the problem. If I were Oculus, I’d use the APIs in the Oculus SDK to provide that service. But what about the audit logs and user reputation information? If it became necessary to transfer sensitive information between the servers, it could still be communicated between the servers, indirectly, through the client by using a carefully chosen encryption protocol.
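The indirect server-to-server channel could look something like the sketch below: server A seals the sensitive record, the client relays it as an opaque blob, and server B verifies it. The shared key and the use of bare HMAC are stand-ins — HMAC only proves the client did not tamper with the record, and a real design would add authenticated encryption so the client cannot read it either:

```python
import hmac
import hashlib
import json

# Key known only to the two servers (hypothetical; real systems would
# negotiate keys per-session rather than hard-code them).
SHARED_KEY = b"example-key-known-only-to-servers"

def server_a_seal(record: dict) -> dict:
    """Server A packages a record with a tamper-evident tag."""
    body = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "tag": tag}

def client_relay(blob: dict) -> dict:
    """The client treats the blob as opaque and forwards it unchanged."""
    return blob

def server_b_verify(blob: dict) -> dict:
    """Server B rejects any blob the client modified in transit."""
    expected = hmac.new(SHARED_KEY, blob["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, blob["tag"]):
        raise ValueError("blob was tampered with in transit")
    return json.loads(blob["body"])

record = {"user_id": "user-1234", "violations": 0}
print(server_b_verify(client_relay(server_a_seal(record))))
```

The design point is that the servers never open a direct connection to each other, which is exactly the condition claim #3 depends on.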
The remaining claims are very broad.
Claims 4-7 seem tough to counter. They basically claim the idea of sending virtual world information in a standardized format between virtual worlds. Perhaps the defense against claim #1 would be enough to knock these out? I really don’t know. Where is Pamela Jones from Groklaw when you need her?
Where to go from here?
I’d be interested in what others have to say about IBM’s patent (and the overall topic of virtual reality patents). Perhaps related, I was excited to see the video of Michael Abrash and Dov Katz of Oculus VR talking about the large R&D investment in virtual reality research that Facebook plans to fund over the next 5-10 years. This also has me concerned. Corporate-funded research will likely yield patents. Could VR become a terrible patent minefield (once again) in just a few years’ time?
In any case, in my next few posts, I hope to get back to the specifics of how a live user transfer between disparate virtual worlds could be accomplished. If you’ve got some JanusVR virtual room coding skills, I might have a small job for you to create some illustrations.