Examining the Valve/HTC Vive Ecosystem: Basic Sensors and Processing
This is the second article in a series on the Valve/HTC Vive Ecosystem. If you have not already done so, please begin with the first article in the series.
Today’s article will provide additional information on the Lighthouse units, explain the Lighthouse sensor system, and take a brief look at the sensor processing which is used to return the absolute position of a tracked device.
This particular article will try to tread carefully. There’s no way around it, folks. This article is going to contain facts, rumors, innuendos, and outright lies about the operation of Valve’s Lighthouse sensor system.
- We’re working with publicly available information, which is scarce.
- There is no documentation.
- It is still in development and very subject to change.
- There is no need for regular users to understand the underlying details.
- Software developers can expect to be given an API that reports position without knowing any of the underlying hardware details.
Finally, for the time being, Valve employees are busy getting this stuff ready, and their time is better spent working on the product than answering all the outside questions. See page #9 of the Valve Handbook for New Employees for more details on how that process works.
We’ll have to assume that we’re on our own, for now.
Back to the Lighthouse for a Moment
I’m going to use the earlier research and development model for a reference.
Toward the upper left of the enclosure is a panel mounted with LEDs. Their apparent purpose is to emit a wide flash of infrared light, with roughly the same perspective and range as the laser beams.
As outsiders, we don’t actually know what they are used for (see disclaimer section, above). But such a panel could be used for any number of purposes, and the two most relevant suggestions include:
- Transmitting a Lighthouse unit ID number
- Transmitting a mark to synchronize timing
We will speculate on additional uses of the LEDs in a later article.
Do you remember from the previous article how each Lighthouse unit would do an X sweep (10 milliseconds) and a Y sweep (10 milliseconds), and then go dark for an equal period of time (20 milliseconds)? We believe that pattern is designed to allow two Lighthouse units to tag-team an area. For the timing to work out properly, the two Lighthouses have to be in sync.
- Lighthouse A – X Sweep Laser On, Y Sweep Laser Off (10ms)
- Lighthouse A – X Sweep Laser Off, Y Sweep Laser On (10ms)
- Lighthouse B – X Sweep Laser On, Y Sweep Laser Off (10ms)
- Lighthouse B – X Sweep Laser Off, Y Sweep Laser On (10ms)
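The interleaving above can be sketched as a simple lookup table. This is purely illustrative: the 10 ms slots and the A/B ordering come from the speculation in this article, and the function name is invented for the sketch.

```python
# Hypothetical sketch of the interleaved sweep schedule described above.
# Assumes a 40 ms combined cycle: A-X, A-Y, B-X, B-Y, each 10 ms.

SWEEP_MS = 10
CYCLE_MS = 4 * SWEEP_MS  # 40 ms for two tag-teaming Lighthouses

SCHEDULE = [
    ("A", "X"),  # 0-10 ms
    ("A", "Y"),  # 10-20 ms
    ("B", "X"),  # 20-30 ms
    ("B", "Y"),  # 30-40 ms
]

def active_sweep(t_ms: float):
    """Return (lighthouse, axis) sweeping at time t_ms within the cycle."""
    slot = int(t_ms % CYCLE_MS) // SWEEP_MS
    return SCHEDULE[slot]
```

Note that for this schedule to hold, both Lighthouses must agree on when the 40 ms cycle starts, which is exactly the synchronization problem discussed next.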
How can they coordinate so closely? Early speculation was that the area-flash LEDs were being used to transmit a timing mark to the Vive and the other Lighthouse, but analysis of the video from the previous article did not find any LED flash. Alan Yates, our Pharologist at Valve, reminds us that the iPhone has a good IR filter, and that we were in fact missing some LED flash activity.
Remember earlier, how we said this document will contain facts, rumors, innuendos, and outright lies about the operation of Valve’s Lighthouse? There is room for all sorts of optimizations and variants.
Each beam could be active for only 8ms. The drums could be running slower. The sweeps could overlap and still be usable. Please consider these and any other specifics as plausible but potentially inaccurate examples, used for illustrative purposes.
Back to the picture of the enclosure: apart from the LEDs, you’ll also see the two drums on the bottom and right sides. The two laser beams responsible for the horizontal and vertical sweeps are emitted from these spinning drums. It is a sure bet that the speed and position of those drums are set and carefully regulated by the Lighthouse unit itself, and that the speed is kept constant. The Lighthouse also carefully regulates the relative alignment of the two drums, keeping them from facing the room at the same time.
Changes to when the LEDs are powered, when the lasers are powered, and how the sweep motors behave can all be combined into a different mode of operation. A different mode might provide functionality which is slightly enhanced, or even completely different than what we see the Lighthouse being used for today.
You might want to keep that thought in the back of your head. Lighthouses can be reprogrammed to support goals other than the absolute tracking of HMDs and controllers.
You should be picking up on how open-ended that statement is, and some of you might be getting some ideas. There are lots of them — an upcoming article will contain concrete examples. One of them is so stunningly obvious, you’ll wonder how everyone missed it.
Okay! Now Onward to the Sensors!
Any tracked object will have a number of infrared sensors (photodiodes) mounted on the surface. These sensors are particularly sensitive to the same infrared wavelength used by the Lighthouse lasers. We don’t know for sure if they are sensitive to the infrared LEDs or not.
We expect each sensor to be wired into a specialized integrated circuit, which performs some initial signal processing, such as the rejection of false signals and other light sources (such as sunlight). When a valid hit is registered, the chip will know which sensor was hit, and using an extremely precise clock, will time when it was hit (and potentially more).
Using the synchronizing flash from the Lighthouse LEDs, we can adjust the clock inside of our tracked object to match the clock inside of the Lighthouse. That same flash can tell us (or we might be able to determine on our own) exactly when a new cycle starts and the laser is starting a new sweep (at the 0 degree mark).
Why is time so important? Recall that each drum in the lighthouse is spinning at a known rate – one full revolution every 20 milliseconds (50 times a second). If we know when the drum is at the 0 degree mark, how fast the drum is spinning, and how long the drum has been spinning since the 0 degree mark, we know exactly what angle it is facing (up to the precision of our measurements, and the characteristics of the physical components).
Knowing this, inside the tracked device, we can use a precise timer to tell us how long it took the laser’s sweep to hit a photosensor, and what angle that represents. With that basic unit of measurement, you are on your way to determining position and orientation of the entire tracked device. But you still need more data. It just so happens that more data quickly follows.
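As a concrete (and hedged) illustration of that timing-to-angle conversion: the 20 ms rotation period comes from this article, while the function name and units are invented for the sketch.

```python
# A minimal sketch of converting sweep timing into a beam angle.
# Assumes one full drum revolution every 20 ms (50 Hz), per the article.

ROTATION_PERIOD_MS = 20.0

def sweep_angle_deg(elapsed_ms: float) -> float:
    """Angle of the beam, in degrees, elapsed_ms after the 0-degree mark."""
    return (elapsed_ms % ROTATION_PERIOD_MS) / ROTATION_PERIOD_MS * 360.0
```

With a sufficiently precise timer, a sensor hit at 5 ms after the 0-degree mark corresponds to a 90-degree beam angle.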
The outside of a tracked device is covered by a number of these same photosensors. As the laser sweeps across the room, it also sweeps across the tracked device, and the excited sensors quickly yield an exact list of which sensors were hit by the Lighthouse, and at what angle.
The list of which exact sensor was hit at what time is combined with another set of information: the exact position and orientation of the sensor relative to the body of the tracked object. If you look at the SteamVR controllers above, you’ll see the careful placement of a number of sensors at the top of the controller. They have recorded the X/Y/Z position and orientation of every individual sensor.
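A hypothetical sketch of what that combined data set might look like. The structures and field names here are invented for illustration; Valve’s actual formats are not public.

```python
# Illustrative data structures: sweep hits paired with the factory-recorded
# body-frame placement of each sensor. Not Valve's actual format.
from dataclasses import dataclass

@dataclass
class SensorModel:
    sensor_id: int
    position: tuple   # (x, y, z) relative to the device body
    normal: tuple     # outward-facing orientation of the sensor

@dataclass
class SweepHit:
    sensor_id: int
    lighthouse: str   # "A" or "B"
    axis: str         # "X" or "Y"
    angle_deg: float  # derived from the hit timing

def pair_hits_with_model(hits, model_by_id):
    """Attach each hit to the known body-frame placement of its sensor."""
    return [(hit, model_by_id[hit.sensor_id]) for hit in hits]
```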
Placement of sensors is critical. The number of sensors is important… to a point. You do not need to blanket the outside of your tracked object with sensors.
At the same time, if you use them sparingly, you need to keep them spread out so that even if the device is held at an odd angle and partially blocked by a user’s arm, enough of them will still be able to acquire the Lighthouse signal. A sensor must be bathed in the signal from a Lighthouse in order to work.
Remember that picture of the SteamVR controller from earlier? They designed it like the hilt of a sword. The guard is above the grip (where you shouldn’t be putting your hands), and that is where they placed the sensors.
To initialize tracking, a minimum number of sensors need to be able to see the Lighthouse, but fewer are required to hold tracking. An IMU (inertial measurement unit) inside the tracked unit also reduces the number of sensors required at any time, and increases the tracking resolution. (Again, see the disclaimer. There are a number of different ways that this can be implemented. The IMU is not a required component.)
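A hedged sketch of why an IMU helps: between optical fixes, the device can dead-reckon from accelerometer and gyroscope samples, so fewer sensors need to see a Lighthouse at any instant. This toy Euler integrator is illustrative only; real sensor-fusion filters are considerably more sophisticated.

```python
# Toy dead-reckoning step between optical fixes (simple Euler integration).
# Illustrative only; not a real sensor-fusion filter.

def dead_reckon(position, velocity, accel, dt):
    """Advance position and velocity by one IMU sample of duration dt."""
    velocity = tuple(v + a * dt for v, a in zip(velocity, accel))
    position = tuple(p + v * dt for p, v in zip(position, velocity))
    return position, velocity
```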
At this point, if enough sensors on our tracked device are lit up by the Lighthouse, we’re good to go. It becomes a well understood geometry problem, and a matter of performing a computationally light set of trigonometric calculations to arrive at the absolute position and orientation of the tracked device within a room. (There is no big number crunching algorithm to steal time away from processing more important things, like graphics and content.)
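To make that geometry slightly less abstract: the X-sweep and Y-sweep angles from a single Lighthouse define a ray from that Lighthouse toward the sensor, and many such rays combined with the known sensor layout pin down the device pose. The sketch below treats the two angles as simple azimuth and elevation, which is a toy simplification, not Valve’s actual solver.

```python
# Toy illustration: turn the two sweep angles from one Lighthouse into a
# unit ray pointing from the Lighthouse toward a sensor. Treats the angles
# as azimuth/elevation for simplicity; not Valve's actual math.
import math

def ray_direction(x_angle_deg: float, y_angle_deg: float):
    """Unit ray from the Lighthouse toward a sensor, given both sweep angles."""
    az = math.radians(x_angle_deg)
    el = math.radians(y_angle_deg)
    return (
        math.cos(el) * math.sin(az),
        math.sin(el),
        math.cos(el) * math.cos(az),
    )
```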
“And then the magic happens.” I know that some of you are itching for hard technical details, and this answer is unsatisfying. Can they do it with one sweep, or do they have to process them in pairs? What about the relativistic effects of movement? After the initial acquisition process, do they only look at the sensors that they expect to have data? I think we’ll have to wait to find out more. If there is a great answer in the near future, I’m happy to link it into the article.
The specific method that Valve uses is not known at this time, but the method is expected to be not unlike what has been discussed for the Oculus Development Kit 2’s camera-based tracking. Actually, come to think of it, Valve had a hand in that, as well. There is reason to be confident in their solution.
Looks like I’ve blown my word budget for this article. In the next article, I hope to touch a bit more on processing, and to provide some interesting examples of different ways that Lighthouse technology can be used.
Until then, I’ll leave you with a question. Doctor Miranda Jones is blind. What would it take for her to benefit from using Lighthouse technology in her own home? In public spaces?
This series concludes with the third and final article, “Valve’s Lighthouse as USB: Anything More than a Bunch of Spin?”