The Valve/Oculus Layered Compositors (Magic Glue for VR)


UPDATE: The speculation didn’t last long. Valve has just released their OpenVR SDK which includes documentation for the Compositor. The actual implementation differs in some interesting ways, but the Use and Features section, below, is still a good summary of what Valve and Oculus are trying to achieve here. More details are at the end of this article.


INTRODUCTION

In March, Valve introduced a new SteamVR component called the VR Compositor. Like everything else at this point, the specification is not yet public. (So insert the standard speculative disclaimers here. If I flubbed something, please be forgiving, but let me know.) Still, it shouldn’t be too hard for us to tease out what its function and purpose might be.

VR Compositor:

  • This is a new component of SteamVR that simplifies the process of adding VR support to an application.
  • Continues to draw an environment even if the application hangs.
  • Simplifies handing off from one application to another without full screen context changes by owning the window on the headset.

-Programmer Joe (Valve)

Let’s break that down a bit. The compositor grabs the VR display, owns it, and keeps running. When a compositor-aware application wants to use the HMD, it requests access from the compositor. The compositor hands the application a buffer and tells it to render into that buffer.
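To make that flow concrete, here is a minimal sketch of a compositor-aware frame loop. It uses the OpenVR C++ names from the later public headers (the original March 2015 release used slightly different texture types), it assumes vr::VR_Init has already succeeded as a scene application, and the eye texture ids and the actual rendering are placeholders for the application’s own renderer.

```cpp
// Minimal sketch of a compositor-aware render loop (OpenVR C++ API, later headers).
// Assumes vr::VR_Init(&err, vr::VRApplication_Scene) has already succeeded.
#include <cstdint>
#include <openvr.h>

void RunFrameLoop(uint32_t leftEyeGlTex, uint32_t rightEyeGlTex) {
    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];

    while (true) {
        // Block until the compositor is ready for the next frame and get
        // predicted poses to render with.
        vr::VRCompositor()->WaitGetPoses(poses, vr::k_unMaxTrackedDeviceCount, nullptr, 0);

        // The application renders each eye into its own texture here,
        // using the predicted poses (rendering code not shown).

        // Submit the finished textures; the compositor owns distortion,
        // layer ordering, and the actual present to the HMD.
        vr::Texture_t left  = { (void*)(uintptr_t)leftEyeGlTex,
                                vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
        vr::Texture_t right = { (void*)(uintptr_t)rightEyeGlTex,
                                vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
        vr::VRCompositor()->Submit(vr::Eye_Left,  &left);
        vr::VRCompositor()->Submit(vr::Eye_Right, &right);
    }
}
```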

The buffer that the Compositor hands to the application can be thought of as a layer in Photoshop. I realize that the next analogy may be a little dated, but it is also like using multiple transparent overlays on a classroom overhead projector. Each transparency is a layer that adds something to the overall scene, but also can be independently added or removed at any time.

So you’re probably wondering how layers work for VR. That’s the tricky part. Each layer is rendered and then placed over the previous layer, with the highest priority layer being placed on top. Anything drawn in one layer will cover up anything that was rendered directly below it. If the order of the layers matches the depth of the objects on screen, this works out very well.
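As a rough illustration of that ordering rule, here is a hedged CPU-side sketch: layers are sorted by priority and drawn bottom to top with a standard “over” blend, so the highest-priority layer ends up covering everything beneath it. A real compositor does this on the GPU with textures; the types below are hypothetical.

```cpp
// Hypothetical CPU-side illustration of layer compositing by priority.
#include <algorithm>
#include <vector>

struct Pixel { float r, g, b, a; };          // premultiplied RGBA

struct Layer {
    int priority;                             // higher priority ends up on top
    std::vector<Pixel> pixels;                // one entry per display pixel
};

// Standard "over" operator: src is drawn on top of dst.
Pixel Over(const Pixel& src, const Pixel& dst) {
    float ia = 1.0f - src.a;
    return { src.r + dst.r * ia, src.g + dst.g * ia,
             src.b + dst.b * ia, src.a + dst.a * ia };
}

std::vector<Pixel> Composite(std::vector<Layer> layers, size_t pixelCount) {
    // Draw lowest-priority layers first so higher-priority layers cover them.
    std::sort(layers.begin(), layers.end(),
              [](const Layer& a, const Layer& b) { return a.priority < b.priority; });

    std::vector<Pixel> frame(pixelCount, Pixel{0, 0, 0, 0});
    for (const Layer& layer : layers)
        for (size_t i = 0; i < pixelCount; ++i)
            frame[i] = Over(layer.pixels[i], frame[i]);
    return frame;
}
```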

If the order of the layers is not respected, then you could have a distant object that covers up part of a closer object. Such a conflict may create a 3D image that the human brain cannot correctly process, resulting in discomfort.

The image on the left is from Valve’s HelloVR. The image on the right is also from HelloVR, but the Compositor placed a layer from a completely different program on top of it. The entire scene tracks with any movement of the HMD. (Apologies for the picture quality. Because a second layer does not mirror onto the desktop in SteamVR, each picture was taken inside of the left eye of a DK2.)


The Compositor is not a tool which should be used casually — layers must be well thought out. Luckily, you can still take advantage of some of the Compositor’s other benefits even without carving your display into multiple layers.

It is important to know that the Compositor is ultimately responsible for ordering all of the layers and sending them to the GPU for rendering. The application (or applications) no longer tasks the GPU directly.

USES AND FEATURES

So, how do we see the Compositor being used? The most well-known example is the Chaperone, Valve’s warning system for the HTC Vive, which indicates when you’re reaching the physical edge of your working area. Earlier, before you launched your compositor-aware application, the Chaperone had already asked for and received its own layer to work with.

The Chaperone is, in fact, an independent process which runs on your PC. It is pre-programmed with the boundaries of your working area. In the background, it regularly monitors your absolute position via the SteamVR API, and it is responsible for making you aware of the boundary as you approach it. How does it warn you?

When it detects that you are closing in on a boundary, it simply renders a representation of the boundary into the buffer. Because it uses the Compositor, the image of the boundary is automatically pasted into the scene which is being generated by your other application and is sent to your HMD.
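Here is a hedged sketch of the kind of per-frame check such a process might run. The play-area rectangle, the warning distance, and the Show/Hide helpers are all hypothetical; the real Chaperone’s internals are not public.

```cpp
// Hypothetical per-frame proximity check for a Chaperone-like process.
#include <algorithm>
#include <cmath>

struct PlayArea { float halfWidthMeters; float halfDepthMeters; };

// Distance from the head position (x, z on the floor plane) to the nearest wall.
float DistanceToBoundary(const PlayArea& area, float headX, float headZ) {
    float dx = area.halfWidthMeters - std::fabs(headX);
    float dz = area.halfDepthMeters - std::fabs(headZ);
    return std::min(dx, dz);      // negative means the user is already outside
}

void UpdateWarning(const PlayArea& area, float headX, float headZ) {
    const float kWarnDistance = 0.5f;   // hypothetical: warn within half a meter
    if (DistanceToBoundary(area, headX, headZ) < kWarnDistance) {
        // Render the boundary into this process's own layer; the compositor
        // pastes it over whatever the foreground application is drawing.
        // ShowBoundaryLayer();   // hypothetical helper
    } else {
        // HideBoundaryLayer();   // hypothetical helper
    }
}
```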

What is slightly unusual in this situation is that the Chaperone is likely to be drawing in the same area of your display that your application is. This can result in mismatched depth cues. I have not seen the Chaperone in action, so it is unclear to me how this issue is resolved. I suspect that some degree of transparency (transparency of the boundary markers, or transparency of the scene) would be used to reduce the visual conflict.

The Chaperone starts to illustrate how we can let multiple applications share the HMD at the same time, but having to draw into the same space is unfortunate. What if we could define specific portions of the screen (static or dynamic) that we want other applications to be responsible for?

“The Facebook advertisements are coming through the compositor!” (Just kidding, guys. I don’t think we’ll ever live that down.) But give some thought to what innovative things a multi-application approach might allow you to do.

So, what else does the Compositor do for us? The Compositor’s responsibility for rendering is, by itself, a feature. Instead of the application being responsible for knowing all of the underlying details and optimizations of any particular HMD, that task is offloaded to the Compositor. It is now the Compositor’s job to figure that out for you.

If done correctly, that should allow developers to focus more on content and less on some of the intricate details of a particular display (including a subset of the arcane and evolving optimization tricks that are out there). All of those goodies will now find their home inside of the Compositor.


Worth noting: this means that new performance-tuning algorithms can be added dynamically, improving applications even after they’ve been published. At the same time, this opens the possibility of changes which break applications after they’ve been published. This isn’t unprecedented. Companies like Nvidia incorporate some very specific performance tweaks into their video card drivers. It works well, as long as it is done carefully.


Joe mentioned two other simple functions which add to the overall quality of the VR experience. If your application hangs, the Compositor is still responsible for processing the scene. As a result, your display won’t lock up on you.

It will still track with your HMD (although there won’t be any new content or outside movement to render).  This also means that you’ll still get a Chaperone warning even if the game you are playing has locked up. Perhaps you could still summon a SteamVR overlay and exit out? In any case, the Compositor provides continuity.

The Compositor also provides continuity in a different way: we don’t have to re-initialize the display each time the HMD is handed off to another program. So if you are going from a launcher like SteamVR into a game, your HMD doesn’t need to black out and come back to life each time that happens. It can be a very smooth experience when you go from one application to another.

As it turns out, Oculus has also been working towards its own compositor. In a March 5th talk at GDC 2015 titled “Developing VR Apps with Oculus SDK,” Anuj Gosalia shared details on how their VR Compositor would work. (The Oculus VR Compositor “VRC” is anticipated to be available in an upcoming release of the Oculus SDK.)

In his presentation, he gives us yet another use for the Compositor: a single application can use layers to render different parts of the same scene at different resolutions.

In their theoretical example with Elite Dangerous, a small and quality-sensitive element like text could use high sampling and high resolution, while the remainder of a more complex scene is rendered at a lower resolution which promotes a high framerate.

When the small, high-quality text area is pasted over the larger area, you’ve combined the benefits of what were two mutually exclusive approaches. You are able to mix quality and speed in the same frame. This can work out well for a number of different applications.
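As a sketch of how that mixing might look in code, the layer types that later shipped in the public Oculus PC SDK (ovrLayerEyeFov for the 3D scene, ovrLayerQuad for a flat text panel) can be submitted together in a single frame. The SDK described at GDC was not yet released when this was written, so the names below follow the released LibOVR headers and may not match it exactly.

```cpp
// Hedged sketch: submit a low-resolution scene layer plus a high-resolution
// text quad in one frame, using the released Oculus PC SDK layer types.
#include <OVR_CAPI.h>

void SubmitMixedResolutionFrame(ovrSession session, long long frameIndex,
                                const ovrLayerEyeFov& sceneLayer,   // filled in by the renderer
                                const ovrLayerQuad&   textLayer)    // small, high-res text panel
{
    // The scene layer can use a reduced-resolution eye buffer to protect framerate;
    // the quad layer carries crisp text and is composited on top by the runtime.
    const ovrLayerHeader* layers[2] = { &sceneLayer.Header, &textLayer.Header };
    ovr_SubmitFrame(session, frameIndex, nullptr, layers, 2);
}
```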

You can experiment with basic compositor functions in Windows using the VR Workbench, which is part of the SteamVR Beta. It is located in the SteamVR/tools/bin/win32 subdirectory in your local Steam installation.


SUMMARY

In summary, we see that Valve and Oculus are both working towards a VR Compositor. The Compositor can allow for multiple processes to work together to render a single scene. It can be expected to abstract some of the underlying display details and optimizations from the developer. It provides continuity when a program fails or a new program takes control of the HMD. Finally, it can allow different areas of the display to be rendered at different resolutions.

The Compositor looks like it’ll be a great addition to VR. I’ve been hoping for this function for some time, and it dovetails perfectly with last year’s Virtual Home concept (in terms of launching local programs and summoning a utility space). Now if I could just work Carmack’s Time Warp into letting me rewrite that entire article to make more sense…


UPDATE: Now that Valve has released their OpenVR SDK, we can see that their implementation of the Compositor does not currently include the mixed rendering layers that Oculus describes. Instead, Valve supports 2D overlays. It also appears that the function of the Chaperone is integrated into the Compositor itself.
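For reference, here is a minimal sketch of that 2D overlay path, using the IVROverlay names from the later public OpenVR headers (they may differ from the initial release). The overlay key, name, size, and texture id are placeholders for illustration.

```cpp
// Minimal sketch of creating and showing a 2D overlay through OpenVR's IVROverlay.
#include <cstdint>
#include <openvr.h>

vr::VROverlayHandle_t CreateHudOverlay(uint32_t hudGlTexture) {
    vr::VROverlayHandle_t handle = vr::k_ulOverlayHandleInvalid;
    vr::VROverlay()->CreateOverlay("example.app.hud", "Example HUD", &handle);

    // Hand the compositor a texture to draw; it is presented as a flat 2D panel
    // in the headset rather than mixed into the scene as a full 3D layer.
    vr::Texture_t tex = { (void*)(uintptr_t)hudGlTexture,
                          vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
    vr::VROverlay()->SetOverlayTexture(handle, &tex);
    vr::VROverlay()->SetOverlayWidthInMeters(handle, 1.5f);
    vr::VROverlay()->ShowOverlay(handle);
    return handle;
}
```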

Should an application go non-responsive, the Compositor will continue to process the user’s head tracking. It will also fade into a grid scene to give the user the ability to reorient themselves. (My experimentation has shown that with the Oculus DK2, it draws a white screen with a dark line on the horizon.)

The functions of the Compositor are subject to change in future releases of Valve’s OpenVR or the Oculus SDK.

