
Towards occlusion-aware multifocal displays

Note: We do not have the ability to review this paper.

PubDate: July 2020

Teams: Carnegie Mellon University

Writers: Jen-Hao Rick Chang; Anat Levin; B. V. K. Vijaya Kumar; Aswin C. Sankaranarayanan

PDF: Towards occlusion-aware multifocal displays

Abstract

The human visual system uses numerous cues for depth perception, including disparity, accommodation, motion parallax and occlusion. It is incumbent upon virtual-reality displays to satisfy these cues to provide an immersive user experience. Multifocal displays, one of the classic approaches to satisfy the accommodation cue, place virtual content at multiple focal planes, each at a different depth. However, the content on focal planes close to the eye does not occlude content on planes farther away; this degrades the occlusion cue and reduces contrast at depth discontinuities due to leakage of the defocus blur. This paper enables occlusion-aware multifocal displays using a novel ConeTilt operator that provides an additional degree of freedom: tilting the light cone emitted at each pixel of the display panel. We show that, for scenes with relatively simple occlusion configurations, tilting the light cones provides the same effect as physical occlusion. We demonstrate that ConeTilt can be easily implemented by a phase-only spatial light modulator. Using a lab prototype, we show results that demonstrate the presence of occlusion cues and the increased contrast of the display at depth edges.
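As a rough illustration of the phase-only implementation mentioned in the abstract, the sketch below shows the standard blazed-grating relationship by which a linear phase ramp on a spatial light modulator steers (tilts) the light cone leaving a pixel. This is not the paper's method or prototype code; the wavelength, SLM pitch, pixel grouping, and the `cone_tilt_phase` helper are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: a linear phase ramp phi(x) = (2*pi/lambda) * sin(theta) * x
# acts like a blazed grating and shifts a pixel's emitted light cone by roughly theta.
# All parameter values below are assumptions, not details from the paper.

wavelength = 532e-9             # assumed illumination wavelength (m)
slm_pitch = 8e-6                # assumed SLM pixel pitch (m)
slm_pixels_per_display_pixel = 16  # assumed SLM pixels covering one display pixel


def cone_tilt_phase(tilt_angle_rad):
    """Return a phase ramp, wrapped to [0, 2*pi), that tilts one pixel's light cone."""
    x = np.arange(slm_pixels_per_display_pixel) * slm_pitch  # positions across the pixel
    phase = (2 * np.pi / wavelength) * np.sin(tilt_angle_rad) * x
    return np.mod(phase, 2 * np.pi)  # phase-only SLMs can only apply wrapped phase


# Example: tilt the cone of a pixel near a depth edge by 1 degree,
# e.g. away from foreground content that should appear to occlude it.
ramp = cone_tilt_phase(np.deg2rad(1.0))
print(ramp)
```

The phase wrap in the last step reflects the constraint that a phase-only modulator can shape only the phase, not the amplitude, of the emitted light, which is why a simple linear ramp suffices to redirect the cone.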
