The present description relates generally to head-mountable devices, and, more particularly, to fit detection systems for head-mountable devices.
A head-mountable device can be worn by a user to display visual information within the field of view of the user. The head-mountable device can be used as a virtual reality (VR) system, an augmented reality (AR) system, and/or a mixed reality (MR) system. A user may observe outputs provided by the head-mountable device, such as visual information provided on a display. The display can optionally allow a user to observe an environment outside of the head-mountable device. Other outputs provided by the head-mountable device can include speaker output and/or haptic feedback. A user may further interact with the head-mountable device by providing inputs for processing by one or more components of the head-mountable device. For example, the user can provide tactile inputs, voice commands, and other inputs while the device is mounted to the user's head.
Certain features of the subject technology are set forth in the appended claims. However, for purpose of explanation, several embodiments of the subject technology are set forth in the following figures.
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology may be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a thorough understanding of the subject technology. However, it will be clear and apparent to those skilled in the art that the subject technology is not limited to the specific details set forth herein and may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
Head-mountable devices, such as head-mountable displays, headsets, visors, smartglasses, head-up displays, etc., can perform a range of functions that are managed by the components (e.g., sensors, circuitry, and other hardware) included with the wearable device.
Many of the functions performed by a head-mountable device are optimally experienced when the components are in their most preferred position and orientation with respect to a user wearing the head-mountable device. For example, the head-mountable device can include a display that visually outputs display-based information toward the eyes of the user. The position and orientation of the displays relative to the eyes depends, at least in part, on how the head-mountable device is positioned on the face of the user. Due to variations in facial features across different users, a given head-mountable device may require adjustment to accommodate different users. For example, different users can have different facial features (e.g., face plane slope, forehead size, eye location). Accordingly, different users may perceive the displayed information differently unless a preferred arrangement is provided.
It can be costly to require each user to acquire an entire head-mountable device that is custom-made and specifically tailored to their facial features. In particular, such an approach would require customization of each head-mountable device and/or the ability to choose from a wide variety of head-mountable devices. It can be beneficial to provide modular features that can be individually chosen to achieve the desired fit. However, it is important to properly detect the features of the user's head so the optimal components can be selected to provide a desired fit.
Systems of the present disclosure can provide a fitting device that can be worn by a user to facilitate detection of the user's features and guide the user to selecting components (e.g., modules) of a head-mountable device that will provide the best fit when assembled together. By providing head-mountable devices with modular features, certain modules can provide a custom fit without requiring the entire head-mountable device to be custom fitted to each user. An electronic device can be operated to guide a user to select the optimal components, such as a face seal and/or head engager for use with an HMD module.
These and other embodiments are discussed below with reference to
According to some embodiments, for example as shown in
The frame 108 can be supported on a user's head with the head engager 300. The head engager 300 can wrap around or extend along opposing sides of a user's head. The head engager 300 can optionally include earpieces for wrapping around, engaging with, or resting on a user's ears. It will be appreciated that other configurations can be applied for securing the head-mountable device 100 to a user's head. For example, one or more bands, straps, belts, caps, hats, or other components can be used in addition to or in place of the illustrated components of the head-mountable device 100. By further example, the head engager 300 can include multiple components to engage a user's head. The head engager 300 can extend from the HMD module 110 and/or the face seal 200.
The frame 108 can provide structure around a peripheral region thereof to support any internal components of the frame 108 in their assembled position. For example, the frame 108 can enclose and support various internal components (including for example integrated circuit chips, processors, memory devices and other circuitry) to provide computing and functional operations for the head-mountable device 100, as discussed further herein. While several components are shown within the frame 108, it will be understood that some or all of these components can be located anywhere within or on the head-mountable device 100. For example, one or more of these components can be positioned within the head engager 300, the face seal 200, and/or the HMD module 110 of the head-mountable device 100.
The frame 108 can include and/or support one or more cameras 130. The cameras 130 can be positioned on or near an outer side 112 of the frame 108 to capture images of views external to the head-mountable device 100. As used herein, an outer side of a portion of a head-mountable device is a side that faces away from the user and/or towards an external environment. The captured images can be used for display to the user or stored for any other purpose. Each of the cameras 130 can be movable along the outer side 112. For example, a track or other guide can be provided for facilitating movement of the camera 130 therein.
The head-mountable device 100 can include displays 140 that provide visual output for viewing by a user wearing the head-mountable device 100. One or more displays 140 can be positioned on or near an inner side 114 of the frame 108. As used herein, an inner side 114 of a portion of a head-mountable device is a side that faces toward the user and/or away from the external environment.
A display 140 can transmit light from a physical environment (e.g., as captured by a camera) for viewing by the user. Such a display 140 can include optical properties, such as lenses for vision correction based on incoming light from the physical environment. Additionally or alternatively, a display 140 can provide information as a display within a field of view of the user. Such information can be provided to the exclusion of a view of a physical environment or in addition to (e.g., overlaid with) a physical environment.
A physical environment relates to a physical world that people can sense and/or interact with without necessarily requiring the aid of an electronic device. A computer-generated reality environment relates to a wholly or partially simulated environment that people sense and/or interact with via an electronic device. Examples of computer-generated reality include mixed reality and virtual reality. Examples of mixed realities can include augmented reality and augmented virtuality. Some examples of electronic devices that enable a person to sense and/or interact with various computer-generated reality environments include head-mountable systems, projection-based systems, heads-up displays (HUDs), vehicle windshields having integrated display capability, windows having integrated display capability, displays formed as lenses designed to be placed on a person's eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers. A head-mountable device can have an integrated opaque display, have a transparent or translucent display, or be configured to accept an external opaque display (e.g., smartphone).
Each display 140 can be adjusted to align with a corresponding eye of the user. For example, each display 140 can be moved along one or more axes until a center of each display 140 is aligned with a center of the corresponding eye. Accordingly, the distance between the displays 140 can be set based on an interpupillary distance (IPD) of the user, where the IPD is defined as the distance between the centers of the pupils of a user's eyes.
The pair of displays 140 can be mounted to the frame 108 and separated by a distance. The distance between the pair of displays 140 can be designed to correspond to the IPD of a user. The distance can be adjustable to account for different IPDs of different users that may wear the head-mountable device 100. For example, either or both of the displays 140 may be movably mounted to the frame 108 to permit the displays 140 to move or translate laterally to make the distance larger or smaller. Any type of manual or automatic mechanism may be used to permit the distance between the displays 140 to be an adjustable distance. For example, the displays 140 can be mounted to the frame 108 via slidable tracks or guides that permit manual or electronically actuated movement of one or more of the displays 140 to adjust the distance therebetween.
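The adjustment described above can be pictured as a simple calculation. The following sketch is purely illustrative; the function name, the adjustable range limits, and the centerline coordinate convention are assumptions for explanation and are not taken from the disclosure:

```python
def display_offsets(ipd_mm: float, min_ipd: float = 54.0, max_ipd: float = 74.0):
    """Clamp a measured interpupillary distance (IPD) to the mechanism's
    adjustable range and return the lateral offset of each display from
    the frame centerline, in millimeters (left display is negative)."""
    clamped = max(min_ipd, min(max_ipd, ipd_mm))
    half = clamped / 2.0
    return (-half, half)
```

For example, `display_offsets(64.0)` returns `(-32.0, 32.0)`, placing each display 32 mm from the centerline, while an out-of-range request is held at the mechanical limit.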
Additionally or alternatively, the displays 140 can be moved to a target location based on a desired visual effect that corresponds to user's perception of the display 140 when it is positioned at the target location. The target location can be determined based on a focal length of the user and/or optical components of the system. For example, the user's eye and/or optical components of the system can determine how the visual output of the display 140 will be perceived by the user. The distance between the display 140 and the user's eye and/or the distance between the display 140 and one or more optical components can be altered to place the display 140 at, within, or outside of a corresponding focal distance. Such adjustments can be useful to accommodate a particular user's eye, corrective lenses, and/or a desired optical effect.
The head-mountable device 100 can include one or more user sensors for tracking features of the user wearing the head-mountable device 100. Such a sensor can be located at, included with, and/or associated with the HMD module 110, the face seal 200, and/or the head engager 300.
For example, a user sensor 170 can perform facial feature detection, facial movement detection, facial recognition, eye tracking, user mood detection, user emotion detection, voice detection, etc. Such eye tracking may be used to determine a location of information to be displayed on the displays 140 and/or a portion (e.g., object) of a view to be analyzed by the head-mountable device 100. By further example, the user sensor 170 can be a bio-sensor for tracking biometric characteristics, such as health and activity metrics. The user sensor 170 can include a bio-sensor that is configured to measure biometrics such as electrocardiographic (ECG) characteristics, galvanic skin resistance, and other electrical properties of the user's body. Additionally or alternatively, a bio-sensor can be configured to measure body temperature, exposure to UV radiation, and other health-related information.
As further shown in
The components of the head-mountable device 100 can be provided with modular configurations that facilitate engagement (e.g., assembly) and release. As used herein, “modular” or “module” can refer to a characteristic that allows an item, such as a face seal, to be connected, installed, removed, swapped, and/or exchanged by a user in conjunction with another item, such as an HMD module of a head-mountable device. Connection of a face seal, a head engager, and/or an HMD module can be performed and reversed, followed by disconnection and connection of another module replacing the prior module. As such, multiple modules can be exchangeable with each other with respect to another module.
Engagers can facilitate coupling of the HMD module 110 to the face seal 200 in a relative position and orientation that aligns the displays 140 of the HMD module 110 in a preferred position and orientation for viewing by the user. The HMD module 110 and the face seal 200 can be coupled to prevent ingress of light from an external environment. For example, HMD module engagers 180 can releasably engage face seal engagers 280. One or more of various mechanisms can be provided to secure the modules to each other. For example, mechanisms such as locks, latches, snaps, screws, clasps, threads, magnets, pins, an interference (e.g., friction) fit, knurl presses, bayoneting, and/or combinations thereof can be included to couple and/or secure the HMD module 110 and the face seal 200 together. The modules can remain secured to each other until an optional release mechanism is actuated. The release mechanism can be provided on an outer surface of the head-mountable device 100 for access by a user.
While the face seal 200 is shown schematically with a particular size and shape, it will be understood that the size and shape of the face seal 200, particularly at the inner side 214 of the face seal 200, can have a size and shape that accommodates the face of a user wearing the head-mountable device 100. For example, the inner side 214 can provide a shape that generally matches the contours of the user's face around the eyes of the user, as described further herein. The inner side 214 can be provided with one or more features that allow the face seal 200 to conform to the face of the user to enhance comfort and block light from entering the face seal 200 at the points of contact with the face. For example, the inner side 214 can provide a flexible, soft, elastic, and/or compliant structure.
While the head-mountable device 100 is worn by a user, with the inner side 214 of the face seal 200 against the face of the user and/or with the head engager 300 against the head of the user, the face seal 200 can remain in a fixed location and orientation with respect to the face and head of the user. Furthermore, in such a configuration the HMD module 110 can also be maintained in a fixed location and orientation with respect to the face and head of the user. Given the variety of head and face shapes that different users may have, it can be desirable to provide a face seal 200 with customization and exchangeability so that the HMD module 110 is in a desired position and orientation with respect to the face and head of the user during use.
Referring now to
As shown in
The sensor 412 can include one or more types of sensors. For example, the sensor 412 can include one or more image sensors, depth sensors, thermal (e.g., infrared) sensors, and the like. By further example, a depth sensor can be configured to measure a distance (e.g., range) to an object (e.g., a region of the user's face) via stereo triangulation, structured light, time-of-flight, interferometry, and the like. Additionally or alternatively, the sensor 412 and/or the electronic device 400 can capture and/or process an image based on one or more of hue space, brightness, color space, luminosity, and the like.
In
The sensor 412 can measure a distance from the sensor 412 to each of multiple regions of the face of the user. For example, the sensor 412 can measure a forehead distance to a forehead 20 of the user 10. By further example, the sensor 412 can measure a nose distance to a nose 30 of the user 10. By further example, the sensor 412 can measure a cheek distance to a cheek 40 of the user 10. By further example, the sensor 412 can measure an ear distance to an ear 50 of the user 10. The sensor 412 can measure any other regions of the face, such as the hair, the eyes, and/or other portions that are not to be directly engaged by the face seal and/or the head engager. It will be understood that other regions of the face can be detected and/or measured. Additionally or alternatively, one or multiple distance measurements can be made to each of various regions, such as with respect to multiple sections of the forehead 20, nose 30, cheeks 40, and/or ears 50. Additionally or alternatively, the measurements can be made from different locations (e.g., positions and/or orientations with respect to the head of the user 10).
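Where multiple distance measurements are made to each region, as described above, they can be reduced to a single representative distance per region. The sketch below illustrates one way to do this; the data layout, region names, and millimeter units are assumptions for illustration, not part of the disclosure:

```python
from statistics import mean

def reduce_region_distances(samples):
    """Collapse repeated depth-sensor readings (in mm) for each facial
    region (e.g., forehead, nose, cheek, ear) into one mean distance.

    samples: region name -> list of range readings from the sensor
    """
    return {region: mean(readings) for region, readings in samples.items()}
```

For example, two forehead readings of 412 mm and 414 mm reduce to a single forehead distance of 413 mm.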
Optionally, as shown in
As shown in
The fitting device 500 can include one or more fiducial markers that are detectable by the electronic device 400 to provide visual references of the position and/or orientation of the components of the fitting device 500. For example, the frame 510 can include one or more frame fiducial markers 516, and the band 520 can include one or more band fiducial markers 526.
A detection can be facilitated by capturing a view of the fiducial markers 516 and 526. The fiducial markers 516 and 526 can be optically or otherwise distinguishable from other structures within the field of view of the sensor 412. The fiducial markers 516 and 526 can be or have known visual features. For example, the fiducial markers 516 and 526 may be or include a particular color scheme, a particular shape, a particular size, a particular marking, such as quick response (QR) codes or other bar codes or markings, a visual feature or marking that is exposed through image processing, and/or generally any combination thereof. It will be understood that the shape of the frame 510 and/or the band 520 can form a fiducial marker.
The image of the fiducial markers 516 and 526 as captured by the sensor 412 can be compared to the known visual feature represented by the fiducial markers 516 and 526. Additionally or alternatively, the fiducial markers 516 and 526 can have known relative positions and/or orientations with respect to each other in a nominal state, such that any changes to the relative positions and/or orientations can be detected by the electronic device 400.
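A minimal sketch of that comparison follows, assuming 2D marker coordinates have already been extracted from the captured image; the marker names and the nominal layout used in the example are hypothetical:

```python
import math

def marker_displacements(nominal, observed):
    """Compare observed fiducial-marker positions against their known
    nominal layout and return the displacement magnitude per marker.

    nominal:  marker id -> (x, y) position in the nominal state
    observed: marker id -> (x, y) position detected in the captured view
    """
    return {marker: math.dist(nominal[marker], observed[marker])
            for marker in nominal}
```

Any marker whose displacement is nonzero indicates a change from the nominal state that the electronic device 400 can act on.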
The sensor(s) used for different operations described herein can be the same or different. For example, the sensor(s) used for mapping features of the user's head can include depth sensors and/or thermal (e.g., infrared) sensors, and the sensor(s) used for mapping fiducial markers can include one or more image sensors. It will be understood that other sensors can be employed and/or that multiple sensors can be used in any one operation.
The electronic device 400 can operate one or more sensors 412 to detect regions of the head of the user 10 while the user 10 wears the fitting device 500. Such regions can include the regions that are not covered by the fitting device 500. Where both the fitting device 500 and the features of the user 10 are detected (in the same or different operations), the relative position of the fitting device 500 on the head of the user 10 can be determined. For example, by comparing the detected positions and/or orientations of the forehead 20, the nose 30, the cheeks 40, and/or the ears 50 with respect to the detected positions and/or orientations of the fiducial markers 516 and 526, the electronic device 400 can determine how the user is wearing the fitting device 500.
The electronic device 400 can then determine how a head-mountable device can be worn in a recommended configuration. Factors in such a determination can include a desired position and/or orientation with respect to the user's eyes, a desired distribution of forces on the face of the user (e.g., to reduce fatigue), and the like. The electronic device 400 can determine recommended components (e.g., face seal and/or head engager) that, when used as part of or with the head-mountable device, would achieve the desired outcomes.
Based at least in part on the distance measurements and/or the views of the user and/or the fiducial markers, a face seal can be selected with various portions that match the contours of the face of the user. Different face seals can differ from each other at least with respect to the dimensions along different portions thereof. For example, different face seals can have different thicknesses along different portions to accommodate the face of various different users. The determination of a recommended face seal can include a determination of what thicknesses at each portion of a face seal are needed to place an HMD module at a desired position and/or orientation relative to the head, face, and/or eyes of the user. Where such a desired position and/or orientation are known, the face seal can be selected as the one having the appropriate thickness to place the HMD module at the desired position and/or orientation when the face seal is engaged to the HMD module and the face of the user.
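One way such a selection could proceed is sketched below, under the assumptions that each candidate face seal is described by a per-portion thickness table and that the required thicknesses have already been derived from the distance measurements; the names, the worst-case-deviation criterion, and the sample values are all hypothetical:

```python
def select_face_seal(required, catalog):
    """Pick the face seal whose per-portion thicknesses (mm) come
    closest to the required values, judged by worst-case deviation.

    required: portion name -> needed thickness
    catalog:  seal name -> {portion name: thickness}
    """
    def worst_deviation(seal_name):
        seal = catalog[seal_name]
        return max(abs(seal[p] - required[p]) for p in required)
    return min(catalog, key=worst_deviation)
```

For example, if the forehead portion requires 18 mm and the cheek portion 12 mm, a seal offering 17 mm and 12.5 mm (worst deviation 1 mm) would be preferred over one offering 20 mm and 11 mm (worst deviation 2 mm).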
Additionally or alternatively, based at least in part on the distance measurements and/or the views of the user and/or the fiducial markers, a head engager can be selected to fit the user's head. Different head engagers can differ from each other at least with respect to the size and/or amount of tightness provided when worn by a user. For example, different head engagers can have different lengths, elastic properties, and/or ranges of adjustability to accommodate different head sizes. The determination of a recommended head engager can include a determination of what amount of tension is preferred and can be provided to securely and comfortably hold an HMD module against a head of the user (e.g., when coupled to the face seal).
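Analogously, candidate head engagers could be screened by whether the measured head circumference falls within each engager's adjustable range. The sketch below is illustrative only; the engager names, the ranges, and the midpoint tie-break (preferring an engager whose range centers on the user rather than one used at its extreme) are assumptions:

```python
def select_head_engager(circumference_mm, engagers):
    """engagers: name -> (min_mm, max_mm) adjustable range.
    Among engagers whose range covers the measured circumference,
    prefer the one whose range midpoint is nearest to the user, so the
    user sits in the middle of its adjustability. Returns None when no
    engager fits."""
    fitting = {name: rng for name, rng in engagers.items()
               if rng[0] <= circumference_mm <= rng[1]}
    if not fitting:
        return None
    return min(fitting,
               key=lambda n: abs(sum(fitting[n]) / 2 - circumference_mm))
```

For example, a 555 mm head falls within both a (500, 560) and a (540, 600) range, but the latter's midpoint (570) is closer, so it would be recommended.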
Referring now to
As shown in
As shown in
As such, the assembly (e.g., sheet 592) can be provided in a compact, low-profile form for later assembly by the user. In such examples, the fitting device can be transported to a user for fitting prior to ordering and/or obtaining a head-mountable device. Accordingly, the fitting, detections, and determinations may occur in a manner that allows the user to be informed regarding the recommended components, so that such recommended components can be obtained along with the other portions of the head-mountable device.
Referring now to
As shown in
As shown in
Referring now to
As shown in
As further shown in
As further shown in
In use, the deformation arms 530 extend from the frame 510 and directly abut the head (e.g., forehead, cheek, nose, etc.) of the user. A deformation arm fiducial marker 536 can indicate the location of abutment, and the deformation arm fiducial marker 536 can move as the corresponding deformation arm 530 compresses, deflects, or otherwise deforms when the fitting device is worn tightly on the head of the user. A distance between a deformation arm fiducial marker 536 and a frame fiducial marker 516 can correspondingly change. Accordingly, by detecting the distance between the deformation arm fiducial marker 536 and the frame fiducial marker 516, the amount of compression can be determined, and the amount of force and/or pressure on the face of the user can be calculated. Such a detection can be made by an electronic device capturing a view of the deformation arm fiducial marker 536 and the frame fiducial marker 516 and/or by a user, who can input the measurement into an electronic device.
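The force calculation described above can be approximated with a Hooke's-law model, where compression is the reduction in the measured marker-to-marker distance. The linear-spring assumption and the stiffness value in the example are hypothetical, not taken from the disclosure:

```python
def contact_force_newtons(nominal_gap_mm, observed_gap_mm, stiffness_n_per_mm):
    """Estimate the force at the point of abutment. The compression of
    the deformation arm equals the reduction in the distance between its
    fiducial marker and the frame fiducial marker; a linear spring model
    then gives force = stiffness * compression (clamped at zero when the
    arm is not compressed)."""
    compression = max(0.0, nominal_gap_mm - observed_gap_mm)
    return stiffness_n_per_mm * compression
```

For example, a gap that shrinks from 20 mm to 17 mm on an arm with an assumed stiffness of 0.5 N/mm implies roughly 1.5 N at that contact point.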
The process 600 can begin when the electronic device measures head features of a user (602). Such a measurement can be made by one or more sensors of the electronic device. Optionally, the measurement can be performed in response to a detection of the user, a user input, and/or an operational state of the electronic device (e.g., on/off state, application launch, and the like). A sensor of the electronic device (e.g., a depth sensor) can measure one or more distances to one or more regions of the head. Such regions can include a forehead, nose, cheeks, ears, and/or eyes of the user. Optionally, the measurements can be made while no fitting device is worn. It will be understood that the electronic device can detect the absence of a fitting device and determine whether and how to measure the head features.
A sensor of the electronic device (e.g., an image sensor) can capture a view containing one or more head features of the user and/or a fitting device while being worn by the user (604). Such regions can include a forehead, nose, cheeks, ears, and/or eyes of the user. The view can be captured while the fitting device is worn. It will be understood that the electronic device can detect the presence of a fitting device and determine whether and how to capture the view.
Based on the measured distances and/or captured views, the electronic device can determine a recommended face seal and/or head engager for use with the HMD module (606). For example, a variety of available face seals with known dimensions (e.g., thicknesses, widths, and/or heights) can be compared to the optimal thicknesses, widths, and/or heights that, based on the distance measurements, would place an HMD module at a desired position and/or orientation. The electronic device or other device can communicate with another device to retrieve information regarding the available face seals, including the dimensions thereof. By further example, a variety of available head engagers with known dimensions (e.g., sizes, tightness ranges, etc.) can be compared to the desired tightness and/or fit for the user's head. The electronic device or other device can communicate with another device to retrieve information regarding the available head engagers, including the dimensions thereof.
The electronic device or other device can provide an output to a user based on the recommended face seal and/or head engager (608). For example, the electronic device can provide a visual output on the displays, a sound, or other output that communicates to the user an indication of the recommended face seal and/or head engager. The user can then take appropriate actions to acquire, install, and/or employ the recommended face seal and/or head engager. In some examples, the electronic device can communicate with another system to order a recommended face seal and/or head engager. The output can further include instructions for installation of the face seal and/or head engager with the HMD module.
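Taken together, steps (602) through (608) of process 600 can be sketched as a small pipeline. The callables below stand in for the sensor, matching, and output subsystems described above and are assumptions for illustration only:

```python
def run_fitting_process(measure_head, capture_view, recommend, present):
    """Sketch of process 600:
    (602) measure head features (e.g., with a depth sensor),
    (604) capture a view of the user and/or the fitting device,
    (606) determine a recommended face seal and head engager,
    (608) output the recommendation to the user."""
    features = measure_head()                  # (602)
    view = capture_view()                      # (604)
    seal, engager = recommend(features, view)  # (606)
    present(seal, engager)                     # (608)
    return seal, engager
```

In use, each stage could be swapped independently, e.g., replacing the presentation stage with one that orders the recommended modules from another system, as the disclosure contemplates.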
The electronic device 400 may include, among other components, a host processor 402, a memory 404, one or more input/output devices 406, a communication element 408, and/or one or more sensors 412.
The host processor 402, which may also be referred to as an application processor or a processor, may include suitable logic, circuitry, and/or code that enable processing data and/or controlling operations of the electronic device 400. In this regard, the host processor 402 may be enabled to provide control signals to various other components of the electronic device 400. The host processor 402 may also control transfers of data between various portions of the electronic device 400. Additionally, the host processor 402 may enable implementation of an operating system or otherwise execute code to manage operations of the electronic device 400. The memory 404 may include suitable logic, circuitry, and/or code that enable storage of various types of information such as received data, generated data, code, and/or configuration information. The memory 404 may include, for example, random access memory (RAM), read-only memory (ROM), flash, and/or magnetic storage.
The communication element 408 may include suitable logic, circuitry, and/or code that enables wired or wireless communication. The communication element 408 of any given device can provide a communication link with the communication element of any other device. Such communication can be direct or indirect (e.g., through an intermediary). The communication element 408 may include, for example, one or more of a Bluetooth communication element, an NFC interface, a Zigbee communication element, a WLAN communication element, a USB communication element, or generally any communication element.
The one or more sensors 412 may include, for example, one or more image sensors, one or more depth sensors, one or more thermal (e.g., infrared) sensors, and/or generally any sensors that may be used to detect and/or measure a user and/or a fitting device.
In one or more implementations, one or more of the host processor 402, the memory 404, the one or more sensors 412, the communication element 408, and/or one or more portions thereof, may be implemented in software (e.g., subroutines and code), may be implemented in hardware (e.g., an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a state machine, gated logic, discrete hardware components, or any other suitable devices) and/or a combination of both.
Referring now to
As shown in
The memory 182 can store electronic data that can be used by the head-mountable device 100. For example, the memory 182 can store electrical data or content such as, for example, audio and video files, documents and applications, device settings and user preferences, timing and control signals or data for the various modules, data structures or databases, and so on. The memory 182 can be configured as any type of memory. By way of example only, the memory 182 can be implemented as random access memory, read-only memory, Flash memory, removable memory, or other types of storage elements, or combinations of such devices.
The head-mountable device 100 can further include a display 140 for displaying visual information for a user. The display 140 can provide visual (e.g., image or video) output. The display 140 can be or include an opaque, transparent, and/or translucent display. The display 140 may have a transparent or translucent medium through which light representative of images is directed to a user's eyes. The display 140 may utilize digital light projection, OLEDs, LEDs, uLEDs, liquid crystal on silicon, laser scanning light source, or any combination of these technologies. The medium may be an optical waveguide, a hologram medium, an optical combiner, an optical reflector, or any combination thereof. In one embodiment, the transparent or translucent display may be configured to become opaque selectively. Projection-based systems may employ retinal projection technology that projects graphical images onto a person's retina. Projection systems also may be configured to project virtual objects into the physical environment, for example, as a hologram or on a physical surface. The head-mountable device 100 can include an optical subassembly configured to help optically adjust and correctly project the image-based content being displayed by the display 140 for close up viewing. The optical subassembly can include one or more lenses, mirrors, or other optical devices.
The head-mountable device 100 can further include a camera 130 for capturing a view of an external environment, as described herein. The view captured by the camera can be presented by the display 140 or otherwise analyzed to provide a basis for an output on the display 140.
The head-mountable device 100 can include an input/output component 186, which can include any suitable component for connecting head-mountable device 100 to other devices. Suitable components can include, for example, audio/video jacks, data connectors, or any additional or alternative input/output components. The input/output component 186 can include buttons, keys, or another feature that can act as a keyboard for operation by the user.
The head-mountable device 100 can include the microphone 188 as described herein. The microphone 188 can be operably connected to the processor 150 for detection of sound levels and communication of detections for further processing, as described further herein.
The head-mountable device 100 can include the speakers 194 as described herein. The speakers 194 can be operably connected to the processor 150 for control of speaker output, including sound levels, as described further herein.
The head-mountable device 100 can include communications circuitry 192 for communicating with one or more servers or other devices using any suitable communications protocol. For example, communications circuitry 192 can support Wi-Fi (e.g., an 802.11 protocol), Ethernet, Bluetooth, high frequency systems (e.g., 900 MHz, 2.4 GHz, and 5.6 GHz communication systems), infrared, TCP/IP (e.g., any of the protocols used in each of the TCP/IP layers), HTTP, BitTorrent, FTP, RTP, RTSP, SSH, any other communications protocol, or any combination thereof. Communications circuitry 192 can also include an antenna for transmitting and receiving electromagnetic signals.
The head-mountable device 100 can include a battery 172, which can charge and/or power components of the head-mountable device 100. The battery 172 can also charge and/or power components connected to the head-mountable device 100.
Accordingly, embodiments of the present disclosure provide systems that include a fitting device that can be worn by a user to facilitate detection of the user's features and guide the user in selecting components (e.g., modules) of a head-mountable device that will provide the best fit when assembled together. By providing head-mountable devices with modular features, certain modules can provide a custom fit without requiring the entire head-mountable device to be custom fitted to each user. An electronic device can be operated to guide a user to select the optimal components, such as a face seal and/or head engager for use with an HMD module.
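By way of illustration only, the recommendation described above can be sketched as selecting the candidate component whose stored shape profile best matches the measured facial distances. The component names, distance values, and least-squares matching rule below are hypothetical assumptions for explanatory purposes and do not limit the claimed subject matter.

```python
# Illustrative sketch only: the catalog entries, measured distances, and the
# least-squares matching rule are hypothetical, not part of the claims.

def recommend_face_seal(measured, catalog):
    """Return the catalog entry whose shape profile best matches the
    measured forehead/nose/cheek distances (smallest sum of squared
    differences between measured and stored distances)."""
    def mismatch(profile):
        return sum((measured[k] - profile[k]) ** 2
                   for k in ("forehead", "nose", "cheek"))
    return min(catalog, key=lambda entry: mismatch(entry["profile"]))

# Hypothetical catalog of face seal modules (distances in millimeters).
catalog = [
    {"name": "seal_small",  "profile": {"forehead": 40.0, "nose": 28.0, "cheek": 35.0}},
    {"name": "seal_medium", "profile": {"forehead": 45.0, "nose": 32.0, "cheek": 39.0}},
    {"name": "seal_large",  "profile": {"forehead": 50.0, "nose": 36.0, "cheek": 43.0}},
]
measured = {"forehead": 44.0, "nose": 31.5, "cheek": 38.0}
print(recommend_face_seal(measured, catalog)["name"])  # seal_medium
```

Other matching rules (e.g., weighting certain facial regions more heavily) could equally be used; the sketch only conveys that measured distances are compared against stored component profiles.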
Various examples of aspects of the disclosure are described below as clauses for convenience. These are provided as examples, and do not limit the subject technology.
Clause A: An electronic device comprising: a depth sensor configured to measure distances from the depth sensor to head features of a user; an image sensor configured to capture a view of a fitting device on a head of the user; and a processor configured to: based on the distances and the view, determine a recommended component for use when the user wears a head-mountable device; and provide an output to the user, the output comprising an indication of the recommended component.
Clause B: A method comprising: while a user is not wearing a fitting device, measuring, with one or more sensors of an electronic device, distances from the one or more sensors to head features of the user; while the user is wearing the fitting device, capturing, with the one or more sensors of the electronic device, a view of fiducial markers of the fitting device; based on the distances and the view of the fiducial markers, determining a recommended component of a head-mountable device; and providing an output comprising an indication of the recommended component.
Clause C: A fitting device comprising: a frame configured to be worn on a face of a user, the frame having multiple frame fiducial markers; and a band configured to secure the frame to a head of the user, the band having multiple band fiducial markers that are movable relative to each other based on an amount of tension applied to the band.
One or more of the above clauses can include one or more of the features described below. It is noted that any of the following clauses may be combined in any combination with each other, and placed into a respective independent clause, e.g., clause A, B, or C.
Clause 1: the distances comprise: a distance to a forehead of the user; a distance to a nose of the user; and a distance to a cheek of the user; and the recommended component comprises a face seal having a shape corresponding to the distance to the forehead, the distance to the nose, and the distance to the cheek.
Clause 2: the distances comprise a distance to an ear of the user; and the recommended component comprises a head engager having a shape corresponding to the distance to the ear.
Clause 3: an input device configured to provide a user interface for receiving an indication of a measurement of the user or the fitting device.
Clause 4: the fitting device comprises fiducial markers, wherein the image sensor is configured to detect at least one of a color of the fiducial markers, a shape of the fiducial markers, or a distance between a pair of the fiducial markers.
Clause 5: a display, wherein the processor is further configured to operate the display to provide the output.
Clause 6: the one or more sensors comprises: a depth sensor for measuring the distances; and an image sensor for capturing the view.
Clause 7: the distances comprise: a distance to a forehead of the user; a distance to a nose of the user; and a distance to a cheek of the user; and the recommended component comprises a face seal having a shape corresponding to the distance to the forehead, the distance to the nose, and the distance to the cheek.
Clause 8: the distances comprise a distance to an eye of the user; and the recommended component comprises a face seal having a shape corresponding to the distance to the eye.
Clause 9: receiving, from a user and with an input device of the electronic device providing a user interface, an indication of a measurement of the user or the fitting device.
Clause 10: the fitting device comprises fiducial markers, wherein the method further comprises comparing the view of the fiducial markers to an expected color of the fiducial markers, an expected shape of the fiducial markers, or an expected distance between a pair of the fiducial markers.
Clause 11: providing the output on a display of the electronic device.
Clause 12: a deflection arm extending from the frame, being biased to an extended configuration, and being configured to move to a deflected configuration when the deflection arm abuts a head of the user and when the fitting device is worn on the face of the user.
Clause 13: one of the frame fiducial markers comprises a deflection arm fiducial marker on the deflection arm.
Clause 14: the deflection arm fiducial marker is movable relative to at least one other frame fiducial marker as the deflection arm moves from the extended configuration to the deflected configuration.
Clause 15: a pair of lenses coupled to the frame.
Clause 16: each of the lenses comprises a lens fiducial marker.
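As a purely illustrative sketch of how the band fiducial markers described above could be used, the spacing between a pair of band markers, which changes with the tension applied to the band, can be mapped to an estimated tension. The relaxed spacing, the linear tension model, and the numeric constants below are hypothetical assumptions introduced only for explanation.

```python
# Hypothetical sketch: the relaxed spacing, linear stretch model, and
# constants are illustrative assumptions, not measured device values.

def estimate_band_tension(observed_spacing_mm,
                          relaxed_spacing_mm=20.0,
                          spacing_per_newton_mm=0.5):
    """Estimate the tension applied to the band from the observed spacing
    between a pair of band fiducial markers, assuming the spacing grows
    linearly with tension from a known relaxed spacing."""
    stretch_mm = observed_spacing_mm - relaxed_spacing_mm
    # Negative stretch (markers closer than relaxed) is clamped to zero.
    return max(0.0, stretch_mm / spacing_per_newton_mm)  # newtons

print(estimate_band_tension(22.5))  # 5.0
```

In practice, a calibrated (possibly nonlinear) relationship between marker spacing and band tension could be determined for each band type; the linear model above only conveys the principle that relative marker movement encodes applied tension.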
As described above, one aspect of the present technology may include the gathering and use of data. The present disclosure contemplates that in some instances, this gathered data may include personal information or other data that uniquely identifies or can be used to locate or contact a specific person. The present disclosure contemplates that the entities responsible for the collection, disclosure, analysis, storage, transfer, or other use of such personal information or other data will comply with well-established privacy policies and/or privacy practices. The present disclosure also contemplates embodiments in which users can selectively block the use of or access to personal information or other data (e.g., managed to minimize risks of unintentional or unauthorized access or use).
A reference to an element in the singular is not intended to mean one and only one unless specifically so stated, but rather one or more. For example, “a” module may refer to one or more modules. An element preceded by “a,” “an,” “the,” or “said” does not, without further constraints, preclude the existence of additional same elements.
Headings and subheadings, if any, are used for convenience only and do not limit the invention. The word exemplary is used to mean serving as an example or illustration. To the extent that the term include, have, or the like is used, such term is intended to be inclusive in a manner similar to the term comprise as comprise is interpreted when employed as a transitional word in a claim. Relational terms such as first and second and the like may be used to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, the subject technology, the disclosure, the present disclosure, other variations thereof and the like are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology or that such disclosure applies to all configurations of the subject technology. A disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations. A disclosure relating to such phrase(s) may provide one or more examples. A phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other foregoing phrases.
A phrase “at least one of” preceding a series of items, with the terms “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list. The phrase “at least one of” does not require selection of at least one item; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, each of the phrases “at least one of A, B, and C” or “at least one of A, B, or C” refers to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C.
It is understood that the specific order or hierarchy of steps, operations, or processes disclosed is an illustration of exemplary approaches. Unless explicitly stated otherwise, it is understood that the specific order or hierarchy of steps, operations, or processes may be performed in a different order. Some of the steps, operations, or processes may be performed simultaneously. The accompanying method claims, if any, present elements of the various steps, operations, or processes in a sample order, and are not meant to be limited to the specific order or hierarchy presented. These may be performed serially, linearly, in parallel, or in a different order. It should be understood that the described instructions, operations, and systems can generally be integrated together in a single software/hardware product or packaged into multiple software/hardware products.
In one aspect, a term coupled or the like may refer to being directly coupled. In another aspect, a term coupled or the like may refer to being indirectly coupled.
Terms such as top, bottom, front, rear, side, horizontal, vertical, and the like refer to an arbitrary frame of reference, rather than to the ordinary gravitational frame of reference. Thus, such a term may extend upwardly, downwardly, diagonally, or horizontally in a gravitational frame of reference.
The disclosure is provided to enable any person skilled in the art to practice the various aspects described herein. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology. The disclosure provides various examples of the subject technology, and the subject technology is not limited to these examples. Various modifications to these aspects will be readily apparent to those skilled in the art, and the principles described herein may be applied to other aspects.
All structural and functional equivalents to the elements of the various aspects described throughout the disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for”.
The title, background, brief description of the drawings, abstract, and drawings are hereby incorporated into the disclosure and are provided as illustrative examples of the disclosure, not as restrictive descriptions. It is submitted with the understanding that they will not be used to limit the scope or meaning of the claims. In addition, in the detailed description, it can be seen that the description provides illustrative examples and the various features are grouped together in various implementations for the purpose of streamlining the disclosure. The method of disclosure is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, as the claims reflect, inventive subject matter lies in less than all features of a single disclosed configuration or operation. The claims are hereby incorporated into the detailed description, with each claim standing on its own as a separately claimed subject matter.
The claims are not intended to be limited to the aspects described herein, but are to be accorded the full scope consistent with the language of the claims and to encompass all legal equivalents. Notwithstanding, none of the claims are intended to embrace subject matter that fails to satisfy the requirements of the applicable patent law, nor should they be interpreted in such a way.
This application claims the benefit of U.S. Provisional Application No. 63/186,725, entitled “FIT DETECTION SYSTEM FOR HEAD-MOUNTABLE DEVICES,” filed May 10, 2021, the entirety of which is incorporated herein by reference.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US22/28628 | 5/10/2022 | WO |

Number | Date | Country
---|---|---
63186725 | May 2021 | US