The present disclosure relates generally to the field of head-mounted devices.
Head-mounted devices may be used to show computer-generated reality content to users. These devices may include a housing and a face seal that is designed to be positioned in contact with a user's face.
One aspect of the disclosure is a head-mounted device to be worn on a head of a user. The head-mounted device includes a device housing, a support structure, and an optical module. The device housing includes a peripheral wall, an intermediate wall that is bounded by the peripheral wall, an eye chamber on a first side of the intermediate wall, a component chamber on a second side of the intermediate wall, and a face seal. The support structure is connected to the device housing and is configured to secure the device housing with respect to the head of the user. The optical module includes an optical module housing that is connected to the device housing and extends through an opening in the intermediate wall of the device housing, has an inner end that is located in the component chamber, has an outer end that is located in the eye chamber, and defines an interior space that extends between the inner end and the outer end. The optical module also includes a display that is located at the inner end of the optical module housing, and a lens assembly that is located at the outer end of the optical module housing. The optical module also includes a conformable portion that is located at the outer end of the optical module housing, is located adjacent to the lens assembly, extends at least partially around a periphery of the lens assembly, and is engageable with a face portion of the user.
Another aspect of the disclosure is a head-mounted device that includes a device housing and an optical module. The optical module is connected to the device housing. The optical module includes a lens assembly and a conformable portion. The lens assembly is configured to be positioned adjacent to an eye of a user, and the conformable portion is engageable with a face portion of the user.
Another aspect of the disclosure is an optical module that includes an optical module housing that has a first end, a second end, and an interior space that extends from the first end to the second end. The optical module also includes a display that is connected to the first end of the optical module housing and a lens assembly that is connected to the second end of the optical module housing. The optical module also includes a conformable portion at the second end of the optical module housing, wherein the conformable portion is configured to deform in response to engagement with a face portion of a user.
Another aspect of the disclosure is an optical module that includes an optical module housing that has a first end, a second end, and an interior space that extends from the first end to the second end. The optical module also includes a display that is connected to the first end of the optical module housing and a lens assembly that is connected to the second end of the optical module housing. The optical module also includes a conformable portion at the second end of the optical module housing, wherein the conformable portion includes a cover portion that defines an enclosed interior space and a fluid in the enclosed interior space.
The disclosure herein relates to head-mounted devices that are used to show computer-generated reality (CGR) content to users and that incorporate design features to accommodate users who have a wide variety of face shapes. The devices described herein position lens assemblies in close proximity to the user's eyes. Support structures extend around the lens assemblies to hold them in a desired position and to protect them from damage. The support structures incorporate a conformable portion that deforms upon contact with the user's face in order to increase user comfort and accommodate users having varied facial shapes.
The device housing 102 is a structure that supports various other components that are included in the head-mounted device. The device housing 102 may be an enclosed structure such that certain components of the head-mounted device 100 are contained within the device housing 102 and thereby protected from damage. The support structure 104 is connected to the device housing 102. The support structure 104 is a component or collection of components that function to secure the device housing 102 in place with respect to the user's head so that the device housing 102 is restrained from moving with respect to the user's head and maintains a comfortable position during use. The support structure 104 can be implemented using rigid structures, elastic flexible straps, or inelastic flexible straps. Although not illustrated, the support structure 104 may include passive or active adjustment components, which may be mechanical or electromechanical. In the illustrated example, the support structure 104 is a headband type device that is connected to left and right lateral sides of the device housing 102 and is intended to extend around the user's head. Other configurations may be used for the support structure 104, such as a halo-type configuration in which the device housing 102 is supported by a structure that is connected to a top portion of the device housing 102, engages the user's forehead above the device housing 102, and extends around the user's head, or a mohawk-type configuration in which a structure extends over the user's head.
The face seal 106 is connected to the device housing 102 and is located at areas around a periphery of the device housing 102 where contact with the user's face is likely. The face seal 106 functions to conform to portions of the user's face to allow the support structure 104 to be tensioned to an extent that will restrain motion of the device housing 102 with respect to the user's head. The face seal 106 may also function to reduce the amount of light from the physical environment around the user that reaches the user's eyes. The face seal 106 may contact areas of the user's face, such as the user's forehead, temples, and cheeks. The face seal 106 may be formed from a compressible material, such as open-cell foam or closed-cell foam.
The optical modules 108 are each assemblies that include multiple components. The components that are included in the optical modules 108 support the function of displaying content to the user in a manner that supports CGR experiences. Two of the optical modules 108 are shown in the illustrated example, including a left-side optical module that is configured to display content to a user's left eye and a right-side optical module that is configured to display content to a user's right eye in a manner that supports stereo vision. Components that may be included in each of the optical modules 108 include an optical module housing that supports and contains components of the optical module 108, a display screen (which may be a common display screen shared by the optical modules 108 or a separate display screen for each optical module), and a lens assembly that includes one or more lenses to direct light from the display screen to the user's eye. Other components may also be included in each of the optical modules.
The optical module 108 includes an optical module housing 320, a display 322, a lens assembly 324, and a conformable portion 326. Each of the optical module housings 320 is supported with respect to the device housing 102 either in a fixed position or by an assembly that allows controlled movement of the optical modules 108, for example, for interpupillary distance adjustment or for eye relief adjustment. The optical module housing 320 provides a structure that supports other components, including the display 322, the lens assembly 324, and the conformable portion 326. The optical module housing 320 also protects the other components of the optical module 108 from mechanical damage and provides a structure that other components can be sealed against to seal an interior space 328 relative to the exterior and prevent foreign particles (e.g., dust) from entering the interior space 328.
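For implementations that allow controlled movement, the adjustment logic can be illustrated with a brief sketch. The following is a minimal, hypothetical example and not part of the disclosure; the function name, travel limits, and symmetric-offset convention are all assumptions.

```python
# Hypothetical sketch of an interpupillary distance (IPD) adjustment.
# All names and numeric limits are illustrative assumptions.

def module_offsets_for_ipd(target_ipd_mm: float,
                           min_ipd_mm: float = 54.0,
                           max_ipd_mm: float = 74.0) -> tuple[float, float]:
    """Return lateral offsets (left, right) of the two optical modules,
    in millimeters from the device centerline, for a target IPD.

    The target is clamped to the mechanical travel range so the modules
    never run past their stops.
    """
    ipd = max(min_ipd_mm, min(max_ipd_mm, target_ipd_mm))
    half = ipd / 2.0
    return (-half, +half)  # left module, right module

# Example: a user with a measured IPD of 63 mm.
left, right = module_offsets_for_ipd(63.0)
print(left, right)  # -31.5 31.5
```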
The optical module housing 320 may be a generally cylindrical, tubular structure having wall portions that extend around the interior space 328. Although shown in the illustrated example as a cylinder having a generally circular cross-section along the optical axis of the optical module 108, the optical module housing 320 may instead utilize another shape, such as an oval shape or a rectangular shape. The shape of the optical module housing 320 need not be a regular geometric shape, and may instead be an irregular, compound shape that incorporates various features and structures that have specific functions. The optical module housing 320 may be formed from a generally rigid and inflexible material, such as plastic or metal.
The interior space 328 of the optical module housing 320 may extend between open ends that are spaced along the optical axis of the optical module 108 (e.g., between a first end of the optical module housing 320 and a second end of the optical module housing 320). For example, an outer open end may be located in the eye chamber 210 and an inner open end may be located in the component chamber 316. The display 322 is located at the inner open end of the optical module housing 320 and the lens assembly 324 is located at the outer open end of the optical module housing 320. This configuration allows light from the display 322 to be projected along the optical axis of the optical module 108 such that the light is incident on the lens assembly 324 and is shaped by the lens assembly 324 in a manner that causes images that are projected by the display 322 to be displayed to each of the user's eyes by the optical modules 108.
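The geometry of this arrangement can be made concrete with the standard thin-lens model, in which a display placed just inside the focal length of the lens assembly appears as a magnified virtual image at a comfortable apparent distance. The sketch below uses illustrative numbers that are not taken from the disclosure.

```python
# Thin-lens sketch of the display-to-eye optical path.
# 1/f = 1/d_o + 1/d_i; a display placed just inside the focal length
# yields a magnified virtual image at a comfortable viewing distance.
# All numbers are illustrative assumptions, not values from the disclosure.

def virtual_image(focal_mm: float, display_mm: float) -> tuple[float, float]:
    """Return (image distance, magnification) for an object inside f."""
    d_i = 1.0 / (1.0 / focal_mm - 1.0 / display_mm)  # negative => virtual image
    magnification = -d_i / display_mm
    return d_i, magnification

d_i, m = virtual_image(focal_mm=40.0, display_mm=36.0)
print(f"virtual image at {abs(d_i):.0f} mm, magnified {m:.1f}x")
# virtual image at 360 mm, magnified 10.0x
```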
The conformable portion 326 of the optical module housing 320 is configured such that it is able to conform to the user's face in the area of the orbital cavity. The conformable portion 326 is flexible and may be elastic to permit deformation and return to a nominal (e.g., uncompressed) shape. Deformation of the conformable portion 326 may occur primarily in the radial or lateral direction relative to the optical axis of the optical modules 108 (e.g., in a direction generally perpendicular to the optical axis), but some degree of compression in the longitudinal direction (e.g., in a direction aligned with the optical axis) will typically be present as well. As will be described further herein, the conformable portion 326 may be a passive structure that deforms in response to application of force without any active control of deformation, or may be an active structure that includes components that control deformation using some manner of controlled actuation.
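As one way to picture the active case, the sketch below shows a simple proportional control loop that firms or relaxes a conformable ring based on measured contact pressure. Everything here, including the sensor, the actuator state, and the gain, is a hypothetical stand-in for "some manner of controlled actuation."

```python
# Hypothetical sketch of an actively controlled conformable portion.
# Nothing here comes from the disclosure: the sensor, actuator state,
# and gain are illustrative stand-ins for controlled actuation.

class ActiveConformableRing:
    def __init__(self, target_pressure_kpa: float = 2.0, gain: float = 0.1):
        self.target = target_pressure_kpa  # desired contact pressure
        self.gain = gain                   # proportional control gain
        self.fill_level = 0.5              # normalized actuator state [0, 1]

    def update(self, measured_pressure_kpa: float) -> float:
        """One proportional control step: relax the ring when contact
        pressure is too high, firm it up when pressure is too low."""
        error = self.target - measured_pressure_kpa
        self.fill_level = min(1.0, max(0.0, self.fill_level + self.gain * error))
        return self.fill_level

ring = ActiveConformableRing()
for reading in (3.5, 3.0, 2.4, 2.1):  # contact pressure easing toward the target
    ring.update(reading)
```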
The conformable portion 326 is located at the outer open end of the optical module housing 320 and is adjacent to the lens assembly 324. The conformable portion 326 may form part or all of an axial end surface of the optical module housing 320 and may form part of the radial surface of the optical module housing 320. The axial end surface of the conformable portion 326 may extend outward (toward the user) relative to the axial end surface of the lens assembly 324, the axial end surface of the conformable portion 326 may be substantially flush with the axial end surface of the lens assembly 324, or the axial end surface of the lens assembly 324 may extend outward (toward the user) relative to the axial end surface of the conformable portion 326.
In some implementations, the conformable portion 326 extends continuously around the lens assembly 324.
In the illustrated implementations, the conformable portions 626, 826, 1026, 1226, 1426, and 1626 are each shown in an uncompressed position in the corresponding figures.
The processor 2051 is a device that is operable to execute computer program instructions and is operable to perform operations that are described by the computer program instructions. The processor 2051 may be implemented using a conventional device, such as a central processing unit, and provided with computer-executable instructions that cause the processor 2051 to perform specific functions. Alternatively, the processor 2051 may be a special-purpose processor (e.g., an application-specific integrated circuit or a field-programmable gate array) that implements a limited set of functions. The memory 2052 may be a volatile, high-speed, short-term information storage device such as a random-access memory module. The storage device 2053 is intended to allow for long-term storage of computer program instructions and other data. Examples of suitable devices for use as the storage device 2053 include non-volatile information storage devices of various types, such as a flash memory module, a hard drive, or a solid-state drive.
The communications device 2054 supports wired or wireless communications with other devices. Any suitable wired or wireless communications protocol may be used.
The display 2055 is a display device that is operable to output images according to signals received from the processor 2051 and/or from external devices using the communications device 2054 in order to output CGR content to the user. As an example, the display 2055 may output still images and/or video images in response to received signals. The display 2055 may include, as examples, an LED screen, an LCD screen, an OLED screen, a micro LED screen, or a micro OLED screen.
The optics 2056 are configured to guide light that is emitted by the display 2055 to the user's eyes to allow content to be presented to the user. The optics 2056 may include lenses or other suitable components. The optics 2056 allow stereoscopic images to be presented to the user in order to display CGR content to the user in a manner that causes the content to appear three-dimensional.
The sensors 2057 are components that are incorporated in the head-mounted device 100 to provide inputs to the processor 2051 for use in generating the CGR content. The sensors 2057 include components that facilitate motion tracking (e.g., head tracking and optionally handheld controller tracking in six degrees of freedom). The sensors 2057 may also include additional sensors that are used by the device to generate and/or enhance the user's experience in any way. The sensors 2057 may include conventional components such as cameras, infrared cameras, infrared emitters, depth cameras, structured-light sensing devices, accelerometers, gyroscopes, and magnetometers. The sensors 2057 may also include biometric sensors that are operable to measure physical or physiological features of a person, for example, for use in user identification and authorization. Biometric sensors may include fingerprint scanners, retinal scanners, and face scanners (e.g., two-dimensional and three-dimensional scanning components operable to obtain image and/or three-dimensional surface representations). Other types of devices can be incorporated in the sensors 2057. The information that is generated by the sensors 2057 is provided to other components of the head-mounted device 100, such as the processor 2051, as inputs.
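As an illustration of how such sensor inputs might be combined for head tracking, the sketch below implements a one-axis complementary filter that blends gyroscope and accelerometer data. The function name and blend factor are assumptions; the disclosure does not specify a fusion method.

```python
# Hypothetical sketch of sensor fusion for head tracking: a one-axis
# complementary filter blending gyroscope and accelerometer readings.
import math

def complementary_filter(pitch_deg: float,
                         gyro_rate_dps: float,
                         accel_xyz: tuple[float, float, float],
                         dt: float,
                         alpha: float = 0.98) -> float:
    """Blend the integrated gyro rate (smooth, but drifts) with the
    accelerometer's gravity direction (noisy, but drift-free)."""
    ax, ay, az = accel_xyz
    accel_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    gyro_pitch = pitch_deg + gyro_rate_dps * dt
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

# One 10 ms update with the head pitching up at 5 degrees per second.
pitch = 0.0
pitch = complementary_filter(pitch, gyro_rate_dps=5.0,
                             accel_xyz=(0.0, 0.0, 9.81), dt=0.01)
```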
The power source 2058 supplies electrical power to components of the head-mounted device 100. In some implementations, the power source 2058 is a wired connection to electrical power. In some implementations, the power source 2058 may include a battery of any suitable type, such as a rechargeable battery. In implementations that include a battery, the head-mounted device 100 may include components that facilitate wired or wireless recharging.
In some implementations of the head-mounted device 100, some or all of these components may be included in a separate device that is removable. For example, any or all of the processor 2051, the memory 2052, the storage device 2053, the communications device 2054, the display 2055, and the sensors 2057 may be incorporated in a device such as a smart phone that is connected to (e.g., by docking) the other portions of the head-mounted device 100.
In some implementations of the head-mounted device 100, the processor 2051, the memory 2052, and/or the storage device 2053 are omitted, and the corresponding functions are performed by an external device that communicates with the head-mounted device 100. In such an implementation, the head-mounted device 100 may include components that support a data transfer connection with the external device using a wired connection or a wireless connection that is established using the communications device 2054.
A physical environment refers to a physical world that people can sense and/or interact with without aid of electronic systems. Physical environments, such as a physical park, include physical articles, such as physical trees, physical buildings, and physical people. People can directly sense and/or interact with the physical environment, such as through sight, touch, hearing, taste, and smell.
In contrast, a computer-generated reality (CGR) environment refers to a wholly or partially simulated environment that people sense and/or interact with via an electronic system. In CGR, a subset of a person's physical motions, or representations thereof, are tracked, and, in response, one or more characteristics of one or more virtual objects simulated in the CGR environment are adjusted in a manner that comports with at least one law of physics. For example, a CGR system may detect a person's head turning and, in response, adjust graphical content and an acoustic field presented to the person in a manner similar to how such views and sounds would change in a physical environment. In some situations (e.g., for accessibility reasons), adjustments to characteristic(s) of virtual object(s) in a CGR environment may be made in response to representations of physical motions (e.g., vocal commands).
A person may sense and/or interact with a CGR object using any one of their senses, including sight, sound, touch, taste, and smell. For example, a person may sense and/or interact with audio objects that create a three-dimensional or spatial audio environment that provides the perception of point audio sources in three-dimensional space. In another example, audio objects may enable audio transparency, which selectively incorporates ambient sounds from the physical environment with or without computer-generated audio. In some CGR environments, a person may sense and/or interact only with audio objects.
Examples of CGR include virtual reality and mixed reality.
A virtual reality (VR) environment refers to a simulated environment that is designed to be based entirely on computer-generated sensory inputs for one or more senses. A VR environment comprises a plurality of virtual objects with which a person may sense and/or interact. For example, computer-generated imagery of trees, buildings, and avatars representing people are examples of virtual objects. A person may sense and/or interact with virtual objects in the VR environment through a simulation of the person's presence within the computer-generated environment, and/or through a simulation of a subset of the person's physical movements within the computer-generated environment.
In contrast to a VR environment, which is designed to be based entirely on computer-generated sensory inputs, a mixed reality (MR) environment refers to a simulated environment that is designed to incorporate sensory inputs from the physical environment, or a representation thereof, in addition to including computer-generated sensory inputs (e.g., virtual objects). On a virtuality continuum, a mixed reality environment is anywhere between, but not including, a wholly physical environment at one end and a virtual reality environment at the other end.
In some MR environments, computer-generated sensory inputs may respond to changes in sensory inputs from the physical environment. Also, some electronic systems for presenting an MR environment may track location and/or orientation with respect to the physical environment to enable virtual objects to interact with real objects (that is, physical articles from the physical environment or representations thereof). For example, a system may account for movements so that a virtual tree appears stationary with respect to the physical ground.
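The stationary-tree example can be sketched directly: a world-locked object keeps fixed world coordinates, and its position in the viewer's frame is recomputed from the tracked head pose every frame. The sketch below is a minimal, hypothetical illustration using a yaw-only head pose; the names and conventions are assumptions.

```python
# Hypothetical sketch of world-locking: a virtual object stays fixed in
# world coordinates, and its position in the viewer's frame is recomputed
# from the tracked head pose each frame.
import numpy as np

def world_to_view(point_world: np.ndarray,
                  head_position: np.ndarray,
                  head_yaw_rad: float) -> np.ndarray:
    """Transform a world-space point into the head's local frame
    (yaw-only rotation about the vertical z-axis, for brevity)."""
    c, s = np.cos(head_yaw_rad), np.sin(head_yaw_rad)
    rot_inv = np.array([[c, s, 0.0],
                        [-s, c, 0.0],
                        [0.0, 0.0, 1.0]])  # transpose of a z-axis yaw rotation
    # Inverse of the head's world transform: un-translate, then un-rotate.
    return rot_inv @ (point_world - head_position)

tree_world = np.array([2.0, 0.0, 0.0])  # virtual tree, fixed in the world
pose = np.array([0.0, 0.0, 0.0])
print(world_to_view(tree_world, pose, 0.0))        # [2. 0. 0.]: dead ahead
print(world_to_view(tree_world, pose, np.pi / 2))  # tree now off to the side
```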
Examples of mixed realities include augmented reality and augmented virtuality.
An augmented reality (AR) environment refers to a simulated environment in which one or more virtual objects are superimposed over a physical environment, or a representation thereof. For example, an electronic system for presenting an AR environment may have a transparent or translucent display through which a person may directly view the physical environment. The system may be configured to present virtual objects on the transparent or translucent display, so that a person, using the system, perceives the virtual objects superimposed over the physical environment. Alternatively, a system may have an opaque display and one or more imaging sensors that capture images or video of the physical environment, which are representations of the physical environment. The system composites the images or video with virtual objects, and presents the composition on the opaque display. A person, using the system, indirectly views the physical environment by way of the images or video of the physical environment, and perceives the virtual objects superimposed over the physical environment. As used herein, a video of the physical environment shown on an opaque display is called “pass-through video,” meaning a system uses one or more image sensor(s) to capture images of the physical environment, and uses those images in presenting the AR environment on the opaque display. Further alternatively, a system may have a projection system that projects virtual objects into the physical environment, for example, as a hologram or on a physical surface, so that a person, using the system, perceives the virtual objects superimposed over the physical environment.
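A minimal sketch of the opaque-display compositing step might look like the following, where a rendered virtual layer is alpha-blended over the captured camera frame. The array shapes and names are illustrative assumptions.

```python
# Hypothetical sketch of pass-through compositing on an opaque display:
# alpha-blend a rendered virtual layer over the captured camera frame.
import numpy as np

def composite(camera_frame: np.ndarray,
              virtual_layer: np.ndarray,
              alpha: np.ndarray) -> np.ndarray:
    """Per-pixel blend: alpha == 1 shows the virtual object,
    alpha == 0 passes the physical environment through."""
    a = alpha[..., None]  # broadcast the (H, W) mask over RGB channels
    return (a * virtual_layer + (1.0 - a) * camera_frame).astype(camera_frame.dtype)

h, w = 480, 640
frame = np.zeros((h, w, 3), dtype=np.float32)  # stand-in camera image
layer = np.ones((h, w, 3), dtype=np.float32)   # stand-in virtual render
mask = np.zeros((h, w), dtype=np.float32)
mask[100:200, 100:200] = 1.0                   # region covered by a virtual object
out = composite(frame, layer, mask)
```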
An augmented reality environment also refers to a simulated environment in which a representation of a physical environment is transformed by computer-generated sensory information. For example, in providing pass-through video, a system may transform one or more sensor images to impose a select perspective (e.g., viewpoint) different than the perspective captured by the imaging sensors. As another example, a representation of a physical environment may be transformed by graphically modifying (e.g., enlarging) portions thereof, such that the modified portion may be representative but not photorealistic versions of the originally captured images. As a further example, a representation of a physical environment may be transformed by graphically eliminating or obfuscating portions thereof.
An augmented virtuality (AV) environment refers to a simulated environment in which a virtual or computer-generated environment incorporates one or more sensory inputs from the physical environment. The sensory inputs may be representations of one or more characteristics of the physical environment. For example, an AV park may have virtual trees and virtual buildings, but people with faces photorealistically reproduced from images taken of physical people. As another example, a virtual object may adopt a shape or color of a physical article imaged by one or more imaging sensors. As a further example, a virtual object may adopt shadows consistent with the position of the sun in the physical environment.
There are many different types of electronic systems that enable a person to sense and/or interact with various CGR environments. Examples include head-mounted systems, projection-based systems, heads-up displays (HUDs), vehicle windshields having integrated display capability, windows having integrated display capability, displays formed as lenses designed to be placed on a person's eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers. A head-mounted system may have one or more speaker(s) and an integrated opaque display. Alternatively, a head-mounted system may be configured to accept an external opaque display (e.g., a smartphone). The head-mounted system may incorporate one or more imaging sensors to capture images or video of the physical environment, and/or one or more microphones to capture audio of the physical environment. Rather than an opaque display, a head-mounted system may have a transparent or translucent display. The transparent or translucent display may have a medium through which light representative of images is directed to a person's eyes. The display may utilize digital light projection, OLEDs, LEDs, uLEDs, liquid crystal on silicon, laser scanning light source, or any combination of these technologies. The medium may be an optical waveguide, a hologram medium, an optical combiner, an optical reflector, or any combination thereof. In one embodiment, the transparent or translucent display may be configured to become opaque selectively. Projection-based systems may employ retinal projection technology that projects graphical images onto a person's retina. Projection systems also may be configured to project virtual objects into the physical environment, for example, as a hologram or on a physical surface.
As described above, one aspect of the present technology is the gathering and use of data available from various sources to adjust the fit and comfort of a head-mounted device. The present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, Twitter IDs, home addresses, data or records relating to a user's health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information.
The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For example, a user profile may be established that stores fit and comfort related information that allows the head-mounted device to be actively adjusted for a user. Accordingly, use of such personal information data enhances the user's experience.
The present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. Such policies should be easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence different privacy practices should be maintained for different personal data types in each country.
Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of storing a user profile to allow automatic adjustment of a head-mounted device, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services or anytime thereafter. In another example, users can select not to provide data regarding usage of specific applications. In yet another example, users can select to limit the length of time that application usage data is maintained or entirely prohibit the development of an application usage profile. In addition to providing “opt in” and “opt out” options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app.
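One hypothetical way to realize such opt-in gating in software is sketched below: profile data is persisted only when the user has opted in, and reads return nothing otherwise. The profile fields and API are assumptions; the disclosure describes the behavior, not an implementation.

```python
# Hypothetical sketch of consent gating for a stored fit/comfort profile.
# The profile fields and API are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FitProfile:
    ipd_mm: float
    eye_relief_mm: float

@dataclass
class ProfileStore:
    opted_in: bool = False
    _profile: Optional[FitProfile] = None

    def save(self, profile: FitProfile) -> bool:
        """Persist a profile only if the user has opted in."""
        if not self.opted_in:
            return False  # respect opt-out: nothing is stored
        self._profile = profile
        return True

    def load(self) -> Optional[FitProfile]:
        return self._profile if self.opted_in else None

store = ProfileStore(opted_in=False)
assert store.save(FitProfile(ipd_mm=63.0, eye_relief_mm=14.0)) is False
```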
Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
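The de-identification steps described above, removing direct identifiers and coarsening location to a city level, might be sketched as follows; the record fields are illustrative assumptions.

```python
# Hypothetical sketch of the de-identification steps described above:
# drop direct identifiers and coarsen location to city level.
# Field names are illustrative assumptions.

IDENTIFIERS = {"name", "date_of_birth", "email"}

def deidentify(record: dict) -> dict:
    """Remove direct identifiers and reduce location specificity."""
    cleaned = {k: v for k, v in record.items() if k not in IDENTIFIERS}
    if "address" in cleaned:
        # Keep only the city, not the street address.
        cleaned["location"] = cleaned.pop("address").get("city")
    return cleaned

record = {"name": "...", "date_of_birth": "...",
          "address": {"street": "...", "city": "Springfield"},
          "ipd_mm": 63.0}
print(deidentify(record))  # {'ipd_mm': 63.0, 'location': 'Springfield'}
```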
Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, fit and comfort related parameters may be determined each time the head-mounted device is used, such as by scanning a user's face as they place the device on their head, and without subsequently storing the information or associating it with the particular user.
This application claims the benefit of U.S. Provisional Application No. 62/869,710, filed on Jul. 2, 2019, the content of which is hereby incorporated by reference in its entirety for all purposes.