Virtual reality (VR) allows users to experience and/or interact with an immersive artificial environment, such that the user may feel as if they were physically in that environment. For example, virtual reality systems may display stereoscopic scenes to users in order to create an illusion of depth, and a computer may adjust the scene content in real-time to provide the illusion of moving within the scene. When a user views images through a virtual reality system, the user may thus feel as if they are moving within the scenes from a first-person point of view. Mixed reality (MR) covers a spectrum from augmented reality (AR) systems, which combine computer-generated information (referred to as virtual content) with views of the real world to augment, or add virtual content to, a user's view of their real environment, to augmented virtuality (AV) systems, which combine representations of real-world objects with views of a computer-generated three-dimensional (3D) virtual world. The simulated environments of virtual reality systems and/or the mixed environments of mixed reality systems may thus be utilized to provide an interactive user experience for multiple applications, such as adding virtual content to a real-time view of the viewer's environment, generating 3D virtual worlds, interacting with virtual training environments, gaming, remotely controlling drones or other mechanical systems, viewing digital media content, interacting with the Internet, and exploring virtual landscapes or environments.
Various embodiments of scene cameras are described. The scene cameras may, for example, be used in video see-through devices in mixed reality (MR) or virtual reality (VR) systems. In conventional video see-through devices, one or more scene cameras may be mounted at the front of the device. However, the entrance pupils, and thus the points of view (POV), of the scene cameras are typically substantially offset from, and thus substantially different from, the POV of the user's eyes. Embodiments of scene camera configurations are described that at least partially correct the POV of the cameras to more closely match the POV of a user by imaging the entrance pupils of the cameras at a location closer to the user's eyes.
In embodiments, a scene camera system may include mirrors and cameras that capture light from the scene reflected by the mirrors. By using the mirrors to reflect the light, the cameras' entrance pupils are imaged at a location closer to a user's eyes to thus achieve a more accurate representation of the perspective of the user. In some embodiments, there are two mirrors arranged horizontally, with a first (top) mirror that reflects light from an upper region of the FOV to one or more cameras, and a second (bottom) mirror that reflects light from a lower region of the FOV to one or more cameras. The two mirrors may be straight, curved, or segmented. In some embodiments, there may be two cameras for each eye, with one that captures light reflected by the top mirror, and a second that captures light reflected by the bottom mirror. However, more or fewer cameras and/or mirrors may be used.
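The principle of imaging a camera's entrance pupil closer to the user's eye can be illustrated with simple reflection geometry: for a planar mirror, the virtual image of a point is its reflection across the mirror plane. The following is a minimal sketch of that computation; the coordinates, mirror position, and tilt angle are illustrative assumptions, not values from this disclosure.

```python
# Hypothetical sketch: locating the virtual image of a camera's entrance
# pupil formed by a planar fold mirror. Reflecting the pupil position across
# the mirror plane gives the virtual image location; placing that image near
# the user's eye is the goal described in the text. All numbers are
# illustrative assumptions.
import numpy as np

def reflect_point(point, mirror_point, mirror_normal):
    """Reflect `point` across the plane through `mirror_point` with normal `mirror_normal`."""
    n = mirror_normal / np.linalg.norm(mirror_normal)
    d = np.dot(point - mirror_point, n)  # signed distance from point to plane
    return point - 2.0 * d * n

# Assume the camera entrance pupil sits 40 mm in front of the eye (+z toward the scene).
pupil = np.array([0.0, 0.0, 40.0])
# Assume a mirror plane midway along the path, tilted 45 degrees about the x axis.
mirror_pt = np.array([0.0, 0.0, 20.0])
mirror_n = np.array([0.0, np.sqrt(0.5), -np.sqrt(0.5)])

virtual_pupil = reflect_point(pupil, mirror_pt, mirror_n)
print(virtual_pupil)  # virtual image of the entrance pupil, folded off-axis
```

With this assumed 45° tilt the pupil's virtual image is folded from 40 mm in front of the eye to a point 20 mm forward and 20 mm above it, showing how a tilted mirror relocates the effective viewpoint.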
This specification includes references to “one embodiment” or “an embodiment.” The appearances of the phrases “in one embodiment” or “in an embodiment” do not necessarily refer to the same embodiment. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure.
“Comprising.” This term is open-ended. As used in the claims, this term does not foreclose additional structure or steps. Consider a claim that recites: “An apparatus comprising one or more processor units . . . .” Such a claim does not foreclose the apparatus from including additional components (e.g., a network interface unit, graphics circuitry, etc.).
“Configured To.” Various units, circuits, or other components may be described or claimed as “configured to” perform a task or tasks. In such contexts, “configured to” is used to connote structure by indicating that the units/circuits/components include structure (e.g., circuitry) that performs those tasks during operation. As such, the unit/circuit/component can be said to be configured to perform the task even when the specified unit/circuit/component is not currently operational (e.g., is not on). The units/circuits/components used with the “configured to” language include hardware—for example, circuits, memory storing program instructions executable to implement the operation, etc. Reciting that a unit/circuit/component is “configured to” perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112, paragraph (f), for that unit/circuit/component. Additionally, “configured to” can include generic structure (e.g., generic circuitry) that is manipulated by software or firmware (e.g., an FPGA or a general-purpose processor executing software) to operate in a manner that is capable of performing the task(s) at issue. “Configured to” may also include adapting a manufacturing process (e.g., a semiconductor fabrication facility) to fabricate devices (e.g., integrated circuits) that are adapted to implement or perform one or more tasks.
“First,” “Second,” etc. As used herein, these terms are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.). For example, a buffer circuit may be described herein as performing write operations for “first” and “second” values. The terms “first” and “second” do not necessarily imply that the first value must be written before the second value.
“Based On” or “Dependent On.” As used herein, these terms are used to describe one or more factors that affect a determination. These terms do not foreclose additional factors that may affect a determination. That is, a determination may be solely based on those factors or based, at least in part, on those factors. Consider the phrase “determine A based on B.” While in this case, B is a factor that affects the determination of A, such a phrase does not foreclose the determination of A from also being based on C. In other instances, A may be determined based solely on B.
“Or.” When used in the claims, the term “or” is used as an inclusive or and not as an exclusive or. For example, the phrase “at least one of x, y, or z” means any one of x, y, and z, as well as any combination thereof.
Various embodiments of scene camera systems are described. Embodiments of a system are described that include mirrors and cameras that capture light from the scene reflected by the mirrors. By using the mirrors to reflect the light, the cameras' entrance pupils are imaged at a location closer to a user's eyes to thus achieve a more accurate representation of the perspective of the user. The scene camera systems may, for example, be used in mixed reality (MR) or virtual reality (VR) systems such as video see-through head-mounted displays (HMDs).
In conventional MR/VR systems that include HMDs, one or more scene cameras may be mounted at the front of the HMD that capture images of the real-world scene in front of a user; the images are processed and displayed on display panels of the HMD. However, typically the entrance pupil and thus the point of view (POV) of these conventional scene cameras is substantially offset from and thus different than the POV of the user's eyes. Embodiments of scene camera configurations that may, for example, be used in HMDs are described that at least partially correct the POV of the cameras to match the POV of the user by causing the entrance pupils of the cameras to be imaged at a location closer to the user's eyes. Thus, the scene cameras may capture images of the environment from substantially the same perspective as the user's eyes.
In embodiments as illustrated in
As an alternative to using arrays of cameras to capture the full FOV as described above in reference to
The system 1600 may include two horizontally oriented mirrors 1618A and 1618B that extend across the front of the system 1600 to reflect light from the FOV of the system 1600 to cameras 1612A-1612D. Mirror 1618A may be tilted to reflect light from a top portion of the FOV to top cameras 1612A and 1612C that capture images of the top portion of the FOV, and mirror 1618B may be tilted to reflect light from a bottom portion of the FOV to bottom cameras 1612B and 1612D that capture images of the bottom portion of the FOV. The two mirrors 1618A and 1618B may connect at a front edge, and may be tilted downwards with respect to a center axis of the system 1600 as shown in
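The split-FOV arrangement of two tilted mirrors meeting at a front edge can be sketched with basic vector reflection. The sketch below is a hypothetical illustration, not the configuration of mirrors 1618A/1618B themselves: the 45° tilt, the mirror normals, and the routing rule (rays from the upper FOV descend, so they hit the top mirror) are all assumed values.

```python
# Hypothetical sketch: routing incoming scene rays to top or bottom cameras
# via two mirrors tilted about the horizontal (x) axis and meeting at a front
# edge. Tilt angles and normals are assumptions for a 45-degree fold, chosen
# only to illustrate the split of the field of view.
import numpy as np

def reflect_dir(v, n):
    """Reflect direction vector `v` off a mirror with normal `n`."""
    n = n / np.linalg.norm(n)
    return v - 2.0 * np.dot(v, n) * n

# Assumed mirror normals: +z is toward the scene, +y is up.
top_n = np.array([0.0, -np.sqrt(0.5), -np.sqrt(0.5)])  # top mirror, 45° tilt
bot_n = np.array([0.0, np.sqrt(0.5), -np.sqrt(0.5)])   # bottom mirror, 45° tilt

def route_ray(incoming):
    """Route an incoming scene ray (traveling toward the device, -z) to a mirror.

    A ray arriving from the upper FOV travels downward (y < 0), so it strikes
    the top mirror; a ray from the lower FOV strikes the bottom mirror.
    """
    incoming = np.asarray(incoming, dtype=float)
    mirror = "top" if incoming[1] < 0 else "bottom"
    n = top_n if mirror == "top" else bot_n
    return mirror, reflect_dir(incoming, n)
```

For example, a ray from the upper FOV such as `(0, -0.5, -1)` is routed to the top mirror and reflected upward (positive y), consistent with top-mounted cameras capturing the upper region.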
As shown in
While
The images 1640 captured by the cameras 1612 are sent to the controller 1630. The captured images 1640 are processed by one or more image processing pipelines of the controller 1630 to generate composite left and right images 1642 of the scene that are displayed on respective left and right display panels 1614 of the system 1600. A user views the displayed images through respective left and right lenses 1616 of the system 1600.
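One step such a pipeline would need is combining the top-mirror and bottom-mirror captures for each eye into a single composite frame. The following is a minimal sketch of that compositing step under assumed conventions; the function name, array layout (rows × columns × channels), and the optional linear seam blend are illustrative assumptions, not details from this disclosure.

```python
# Hypothetical sketch: compositing per-eye frames from the top and bottom
# captures. Assumes both captures are rectified arrays of shape (H, W, 3)
# covering adjacent vertical regions of the FOV; an optional row overlap is
# blended linearly across the seam.
import numpy as np

def composite_eye_image(top_img, bottom_img, overlap=0):
    """Stack top and bottom captures into one frame, blending `overlap` seam rows."""
    if overlap == 0:
        return np.vstack([top_img, bottom_img])
    # Per-row blend weights fade from all-top (1.0) to all-bottom (0.0).
    a = np.linspace(1.0, 0.0, overlap)[:, None, None]
    seam = a * top_img[-overlap:] + (1.0 - a) * bottom_img[:overlap]
    return np.vstack([top_img[:-overlap], seam, bottom_img[overlap:]])
```

In practice a pipeline like this would also handle distortion correction and color matching between the two cameras before stacking; the sketch shows only the geometric assembly.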
In some embodiments, the scene camera systems shown in any of
The methods described herein may be implemented in software, hardware, or a combination thereof, in different embodiments. In addition, the order of the blocks of the methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. The various embodiments described herein are meant to be illustrative and not limiting. Many variations, modifications, additions, and improvements are possible. Accordingly, plural instances may be provided for components described herein as a single instance. Boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of claims that follow. Finally, structures and functionality presented as discrete components in the example configurations may be implemented as a combined structure or component. These and other variations, modifications, additions, and improvements may fall within the scope of embodiments as defined in the claims that follow.
This application claims benefit of priority of U.S. Provisional Application Ser. No. 62/739,092 filed Sep. 28, 2018, the content of which is incorporated by reference herein in its entirety.
Number | Date | Country
---|---|---
62739092 | Sep. 2018 | US