An example of an apparatus for displaying images to a user is a head-mounted display system. Head-mounted display systems are generally referred to as “wearable displays” because they are supported by the user while in use. Wearable display systems typically include image-generating devices for generating images viewable by the user. Wearable display systems may convey visual information, such as data from sensing devices, programmed entertainment such as moving or still images, and computer-generated information. The visual information may be accompanied by audio signals for reception by the user's ears.
In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific examples in which the disclosure may be practiced. It is to be understood that other examples may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. The following detailed description, therefore, is not to be taken in a limiting sense, and the scope of the present disclosure is defined by the appended claims. It is to be understood that features of the various examples described herein may be combined, in part or whole, with each other, unless specifically noted otherwise.
Head-mounted display systems or “wearable displays” typically display information to a user or wearer of the display. In contrast, some examples disclosed herein are directed to a wearable head-mounted projector for displaying images on a user's face for viewing by people other than the user. Some examples use depth sensing and/or eye tracking and adjust the position and/or content of the projected images to prevent impeding the user's vision with the projected images. Some examples may detect the location of facial features and project the images onto selected locations of the face determined based on the detected locations of the facial features. The projected images may provide visual effects, such as swirls around the eyes, coloring of the skin, or an altered appearance of the user, among other effects. Some examples incorporate translation technology for translating the user's speech into text that is projected onto the user's face (e.g., forehead).
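As a rough illustration of how such a placement adjustment could work in software, the following sketch is a minimal, hypothetical example (not taken from the disclosure). It assumes that eye regions reported by an eye tracking unit and a set of candidate placements are already expressed in the projector's image coordinates, and it simply rejects any candidate placement that would overlap an eye.

```python
# Hypothetical sketch: choose a projection region that avoids the wearer's eyes.
# Eye regions and candidate placements are assumed to be given in projector
# image coordinates; nothing here is specified by the disclosure itself.

from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

    def intersects(self, other: "Rect") -> bool:
        # Two axis-aligned rectangles overlap unless one lies entirely
        # to the left/right of, or above/below, the other.
        return not (self.x + self.w <= other.x or other.x + other.w <= self.x or
                    self.y + self.h <= other.y or other.y + other.h <= self.y)

def choose_projection_region(candidates: list[Rect], eye_regions: list[Rect]) -> Rect | None:
    """Return the first candidate region that does not overlap any eye region."""
    for region in candidates:
        if not any(region.intersects(eye) for eye in eye_regions):
            return region
    return None  # no safe region; a caller could dim or disable projection

# Example: two tracked eye regions and three candidate placements on the face.
eyes = [Rect(220, 300, 80, 40), Rect(340, 300, 80, 40)]
candidates = [Rect(250, 280, 150, 60),   # overlaps the eyes -> rejected
              Rect(240, 120, 160, 80),   # forehead -> accepted
              Rect(260, 420, 120, 60)]   # cheek/chin area
print(choose_projection_region(candidates, eyes))
```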
In some examples, rather than projecting a single static image, the apparatus projects a series of images (e.g., video or animation). Some examples use multiple projectors to allow the series of images to be moved dynamically around the user's face. Some examples use the projected images along with facial recognition technology to provide two-factor authentication of the user.
Processor 102 includes a central processing unit (CPU) or another suitable processor. In one example, memory 104 stores machine readable instructions executed by processor 102 for operating the projection apparatus 100. Memory 104 includes any suitable combination of volatile and/or non-volatile memory, such as combinations of Random Access Memory (RAM), Read-Only Memory (ROM), flash memory, and/or other suitable memory. These are examples of non-transitory computer readable storage media. The memory 104 is non-transitory in the sense that it does not encompass a transitory signal but instead is made up of at least one memory component to store machine executable instructions for performing techniques described herein.
Some or all of the functionality of microphone 106, speech-to-text translation unit 108, projection unit 110, camera 112, depth sensing unit 114, and eye tracking unit 116 may be implemented as machine executable instructions stored in memory 104 and executed by processor 102. Processor 102 may execute these instructions to perform techniques described herein. It is noted that some or all of the functionality of microphone 106, speech-to-text translation unit 108, projection unit 110, camera 112, depth sensing unit 114, and eye tracking unit 116 may be implemented using cloud computing resources.
Microphone 106 senses speech of a user and converts the speech into corresponding electrical signals. Speech-to-text translation unit 108 receives electrical signals representing speech of a user from microphone 106, and converts the signals into text. Speech-to-text translation unit 108 may also translate speech in one language (e.g., Spanish) to text of a different language (e.g., English). Projection unit 110 projects images onto a face of a user. The projected images may include images of the text generated by speech-to-text translation unit 108. Camera 112 captures images of a user's face to facilitate the detection of the locations of the user's facial features (e.g., eyes, nose, and mouth). Depth sensing unit 114 detects the distance between the unit 114 and the user's face, which may be used to facilitate the detection of the locations of the user's facial features. Eye tracking unit 116 tracks the positions of the user's eyes.
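As a rough illustration of how microphone 106, speech-to-text translation unit 108, and projection unit 110 could be chained in software, the sketch below models the pipeline in Python. The functions recognize_speech and translate_text are hypothetical placeholders (the disclosure does not imply any particular speech recognition or translation engine); only the text rendering step uses a real library (Pillow).

```python
# Minimal sketch of the microphone -> speech-to-text -> projection pipeline.
# recognize_speech() and translate_text() are hypothetical placeholders for
# whatever speech recognition and translation services an implementation uses.

from PIL import Image, ImageDraw

def recognize_speech(audio_samples: bytes, language: str = "es") -> str:
    """Placeholder: convert raw microphone audio to text in the spoken language."""
    raise NotImplementedError("wire up a speech recognition engine here")

def translate_text(text: str, target_language: str = "en") -> str:
    """Placeholder: translate recognized text into the target language."""
    raise NotImplementedError("wire up a translation service here")

def render_text_image(text: str, size=(640, 200)) -> Image.Image:
    """Render translated text as an image suitable for the projection unit."""
    img = Image.new("RGB", size, color="black")
    draw = ImageDraw.Draw(img)
    # Default font for brevity; a real implementation would pick font size,
    # contrast, and layout suited to projection onto skin.
    draw.text((10, 10), text, fill="white")
    return img

def speech_to_projected_image(audio_samples: bytes) -> Image.Image:
    spoken = recognize_speech(audio_samples, language="es")
    translated = translate_text(spoken, target_language="en")
    return render_text_image(translated)
```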
In one example, the various subcomponents or elements of the projection apparatus 100 may be embodied in a plurality of different systems, where different modules may be grouped or distributed across the plurality of different systems. To achieve its desired functionality, projection apparatus 100 may include various hardware components. Among these hardware components may be a number of processing devices, a number of data storage devices, a number of peripheral device adapters, and a number of network adapters. These hardware components may be interconnected through the use of a number of busses and/or network connections. The processing devices may include a hardware architecture to retrieve executable code from the data storage devices and execute the executable code. The executable code may, when executed by the processing devices, cause the processing devices to implement at least some of the functionality disclosed herein. Projection apparatus 100 is described in further detail below with reference to
Head-mountable apparatus 200 translates speech of the user 202 into text that is projected onto the face 214 of the user 202. As shown in
Some examples of apparatus 200 may use depth sensing by depth sensing unit 114 (
At least one of the projection apparatuses 100 of the head-mountable apparatus 400 includes a projection unit 110 (
Like apparatus 200, head-mountable apparatus 400 may also translate speech of the user into text that is projected onto the face 414 of the user 402. Some examples of apparatus 400 may use depth sensing by depth sensing unit 114 (
The head-mountable apparatuses 200 and 400 discussed above are two examples of head-mountable apparatuses that can incorporate at least one projection apparatus 100. Other types of head-mountable apparatuses may also be used to incorporate at least one projection apparatus 100, including, for example, earrings, a tiara, a hijab, or any other apparatus that can be positioned on a user's head.
One example is directed to a method of projecting images onto a face of a user.
The text in method 600 may be in a different language than the sensed speech. The method 600 may further include detecting a location of a facial feature of the user; and identifying a position to project the images of the text onto the user's face based on the detected location of the facial feature. The method 600 may further include projecting, with the head-mountable apparatus, the images of the text onto the user's face at the identified position. The head-mountable apparatus in method 600 may project the images of the text onto a forehead region of the user's face.
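A minimal sketch of the "detect a facial feature, then identify a projection position" step of method 600 might look like the following. The landmark detection itself is out of scope here (the coordinates are assumed to come from camera 112 or a facial feature detector), and the forehead offset heuristic is purely illustrative, not specified by the disclosure.

```python
# Hypothetical sketch: derive a forehead projection position from detected
# facial feature locations, given in camera image coordinates (y grows downward).

from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def forehead_anchor(left_eye: Point, right_eye: Point) -> Point:
    """Place the text block centered above the eyes, offset by the eye spacing."""
    center_x = (left_eye.x + right_eye.x) / 2.0
    eye_distance = abs(right_eye.x - left_eye.x)
    eye_level = min(left_eye.y, right_eye.y)
    # Move up by roughly one eye-to-eye distance to land on the forehead;
    # the exact offset would be tuned (or derived from depth sensing) in practice.
    return Point(center_x, eye_level - eye_distance)

print(forehead_anchor(Point(220.0, 310.0), Point(340.0, 305.0)))
```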
Another example is directed to an apparatus that includes a head-mountable structure that is wearable on a user's head. The apparatus includes a plurality of projection apparatuses positioned on the head-mountable structure to detect a location of a facial feature of the user, identify a position to project images onto the user's face based on the detected location of the facial feature, and project the images onto the user's face at the identified position.
The head-mountable structure may be a hat. The plurality of projection apparatuses may be positioned on a bottom surface of a brim of the hat. The head-mountable structure may be an eyeglasses apparatus. The eyeglasses apparatus may include a frame supporting two lenses, and the plurality of projection apparatuses may be positioned on the frame. The plurality of projection apparatuses may include three projection apparatuses positioned on the frame above the lenses. The plurality of projection apparatuses may project a predictably generated set of images onto the face of the user and perform a two-factor authentication of the user based on whether the face of the user is present and based on whether the predictably generated set of images is present. The images projected onto the user's face may comprise a video.
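The two-factor check described above could be sketched as follows. This is a hypothetical illustration, not the disclosed implementation: recognize_face and detect_projected_pattern are assumed detector functions supplied by the caller, and the "predictably generated" pattern is modeled here as an identifier derived from a shared secret and a time window so that a verifier can predict which pattern should currently appear on the face.

```python
# Hypothetical sketch of two-factor authentication: the user is authenticated
# only if (1) their face is recognized and (2) the predictably generated set of
# images currently being projected is detected on that face.

import hashlib
import hmac
import time

def expected_pattern_id(secret: bytes, window_seconds: int = 30) -> str:
    """Derive the pattern identifier both projector and verifier expect right now."""
    window = int(time.time() // window_seconds)
    return hmac.new(secret, str(window).encode(), hashlib.sha256).hexdigest()[:8]

def two_factor_authenticate(frame, secret: bytes,
                            recognize_face, detect_projected_pattern) -> bool:
    """Both factors must hold: a known face, and the currently expected pattern."""
    face_ok = recognize_face(frame)                                   # factor 1
    pattern_ok = detect_projected_pattern(frame) == expected_pattern_id(secret)  # factor 2
    return face_ok and pattern_ok
```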
Yet another example is directed to an apparatus that includes a head-mountable structure that is wearable on a user's head. The apparatus includes a projection apparatus positioned on the head-mountable structure to sense speech of the user, translate the sensed speech into text, and project images of the text onto the user's face. The head-mountable structure may be one of a hat or eyeglasses.
Although specific examples have been illustrated and described herein, a variety of alternate and/or equivalent implementations may be substituted for the specific examples shown and described without departing from the scope of the present disclosure. This application is intended to cover any adaptations or variations of the specific examples discussed herein. Therefore, it is intended that this disclosure be limited only by the claims and the equivalents thereof.