Here’s a preview of the talk we gave at the Embedded Vision Summit 2017 in Santa Clara:

The full talk is available on YouTube, after free registration at the Embedded Vision Alliance.

Abstract:

360-degree video systems use multiple cameras to capture a complete view of their surroundings. These systems are being adopted in cars, drones, virtual reality, and online streaming systems. At first glance, these systems wouldn’t seem to require computer vision, since they’re simply presenting the images that the cameras capture. But even relatively simple 360-degree video systems require computer vision techniques to geometrically align the cameras, both in the factory and while in use. Additionally, differences in illumination between the cameras cause color and brightness mismatches, which must be addressed when combining images from different cameras.
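As a rough illustration of the alignment and brightness-matching steps the abstract mentions, here is a minimal OpenCV sketch that registers two overlapping camera views with a feature-based homography and applies a crude global gain to reduce brightness mismatch. The file names, feature counts, and blend weights are illustrative placeholders rather than details from the talk; a production 360-degree system relies on factory calibration, online refinement, and far more careful seam finding and blending.

```python
# Minimal sketch: align two overlapping camera views and reduce brightness
# mismatch before blending (OpenCV, Python). Inputs are hypothetical.
import cv2
import numpy as np

left = cv2.imread("cam_left.jpg")     # placeholder image paths
right = cv2.imread("cam_right.jpg")

# 1) Geometric alignment: match ORB features and estimate a homography.
gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(gray_l, None)
kp2, des2 = orb.detectAndCompute(gray_r, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the left image into the right camera's frame.
h, w = right.shape[:2]
warped = cv2.warpPerspective(left, H, (w, h))

# 2) Photometric matching: scale the warped image so the overlap region's
#    mean brightness matches the other camera (simple global gain).
overlap = warped.sum(axis=2) > 0
gain = right[overlap].mean() / max(warped[overlap].mean(), 1e-6)
warped = np.clip(warped * gain, 0, 255).astype(np.uint8)

# 3) Naive 50/50 blend in the overlap; real systems use seam finding
#    and multi-band blending instead.
blend = right.copy()
blend[overlap] = (0.5 * right[overlap] + 0.5 * warped[overlap]).astype(np.uint8)
cv2.imwrite("stitched_preview.jpg", blend)
```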

Computer vision also comes into play when rendering the captured 360-degree video. For example, some automotive systems provide only a top-down view, while more sophisticated systems let the driver select the desired viewpoint. In this talk, Jacobs explores the challenges, trade-offs, and lessons learned while developing 360-degree video systems, with a focus on the crucial role that computer vision plays in these systems.
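To make the top-down rendering idea concrete, here is a small sketch that maps a single camera's view of the ground plane into a bird's-eye image using a perspective warp. The point correspondences, image names, and output size are made-up placeholders; a real surround-view system derives this mapping from per-camera factory calibration and composites all cameras onto a common ground-plane (or bowl) surface.

```python
# Rough illustration of a top-down ("bird's-eye") rendering from one camera.
# The four ground-plane correspondences below are placeholders; a real system
# obtains them from calibration of each camera.
import cv2
import numpy as np

frame = cv2.imread("rear_camera.jpg")  # hypothetical rear-camera frame

# Pixel locations of four known ground markers in the camera image ...
image_pts = np.float32([[420, 560], [860, 555], [980, 710], [300, 715]])
# ... and where those markers should land in the top-down output (pixels).
topdown_pts = np.float32([[200, 100], [600, 100], [600, 500], [200, 500]])

# Homography from camera pixels to the ground-plane (top-down) view.
H = cv2.getPerspectiveTransform(image_pts, topdown_pts)
birdseye = cv2.warpPerspective(frame, H, (800, 600))
cv2.imwrite("topdown_preview.jpg", birdseye)
```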