This invention relates to head worn computing. More particularly, this invention relates to technologies for the presentation of digital content in head worn computing.
Wearable computing systems have been developed and are beginning to be commercialized. Many problems persist in the wearable computing field that need to be resolved to make them meet the demands of the market.
Aspects of the present invention relate to methods and systems for presenting digital content in a field of view of a head-worn computer.
These and other systems, methods, objects, features, and advantages of the present invention will be apparent to those skilled in the art from the following detailed description of the preferred embodiment and the drawings. All documents mentioned herein are hereby incorporated in their entirety by reference.
Embodiments are described with reference to the following Figures. The same numbers may be used throughout to reference like features and components that are shown in the Figures:
a illustrate structured eye lighting systems according to the principles of the present invention.
While the invention has been described in connection with certain preferred embodiments, other embodiments would be understood by one of ordinary skill in the art and are encompassed herein.
Aspects of the present invention relate to head-worn computing (“HWC”) systems. HWC involves, in some instances, a system that mimics the appearance of head-worn glasses or sunglasses. The glasses may be a fully developed computing platform, such as including computer displays presented in each of the lenses of the glasses to the eyes of the user. In embodiments, the lenses and displays may be configured to allow a person wearing the glasses to see the environment through the lenses while also seeing, simultaneously, digital imagery, which forms an overlaid image that is perceived by the person as a digitally augmented image of the environment, or augmented reality (“AR”).
HWC involves more than just placing a computing system on a person's head. The system may need to be designed as a lightweight, compact and fully functional computer display, such as wherein the computer display includes a high resolution digital display that provides a high level of immersion comprised of the displayed digital content and the see-through view of the environmental surroundings. User interfaces and control systems suited to the HWC device may be required that are unlike those used for a more conventional computer such as a laptop. For the HWC and associated systems to be most effective, the glasses may be equipped with sensors to determine environmental conditions, geographic location, relative positioning to other points of interest, objects identified by imaging and movement by the user or other users in a connected group, and the like. The HWC may then change the mode of operation to match the conditions, location, positioning, movements, and the like, in a method generally referred to as a contextually aware HWC. The glasses also may need to be connected, wirelessly or otherwise, to other systems either locally or through a network. Controlling the glasses may be achieved through the use of an external device, automatically through contextually gathered information, through user gestures captured by the glasses sensors, and the like. Each technique may be further refined depending on the software application being used in the glasses. The glasses may further be used to control or coordinate with external devices that are associated with the glasses.
Referring to
We will now describe each of the main elements depicted on
The HWC 102 is a computing platform intended to be worn on a person's head. The HWC 102 may take many different forms to fit many different functional requirements. In some situations, the HWC 102 will be designed in the form of conventional glasses. The glasses may or may not have active computer graphics displays. In situations where the HWC 102 has integrated computer displays, the displays may be configured as see-through displays such that the digital imagery can be overlaid with respect to the user's view of the environment 114. There are a number of see-through optical designs that may be used, including ones that have a reflective display (e.g. LCoS, DLP), emissive displays (e.g. OLED, LED), hologram, TIR waveguides, and the like. In embodiments, lighting systems used in connection with the display optics may be solid state lighting systems, such as LED, OLED, quantum dot, quantum dot LED, etc. In addition, the optical configuration may be monocular or binocular. It may also include vision corrective optical components. In embodiments, the optics may be packaged as contact lenses. In other embodiments, the HWC 102 may be in the form of a helmet with a see-through shield, sunglasses, safety glasses, goggles, a mask, fire helmet with see-through shield, police helmet with see-through shield, military helmet with see-through shield, utility form customized to a certain work task (e.g. inventory control, logistics, repair, maintenance, etc.), and the like.
The HWC 102 may also have a number of integrated computing facilities, such as an integrated processor, integrated power management, communication structures (e.g. cell net, WiFi, Bluetooth, local area connections, mesh connections, remote connections (e.g. client server, etc.)), and the like. The HWC 102 may also have a number of positional awareness sensors, such as GPS, electronic compass, altimeter, tilt sensor, IMU, and the like. It may also have other sensors such as a camera, rangefinder, hyper-spectral camera, Geiger counter, microphone, spectral illumination detector, temperature sensor, chemical sensor, biologic sensor, moisture sensor, ultrasonic sensor, and the like.
The HWC 102 may also have integrated control technologies. The integrated control technologies may be contextual based control, passive control, active control, user control, and the like. For example, the HWC 102 may have an integrated sensor (e.g. camera) that captures user hand or body gestures 116 such that the integrated processing system can interpret the gestures and generate control commands for the HWC 102. In another example, the HWC 102 may have sensors that detect movement (e.g. a nod, head shake, and the like) including accelerometers, gyros and other inertial measurements, where the integrated processor may interpret the movement and generate a control command in response. The HWC 102 may also automatically control itself based on measured or perceived environmental conditions. For example, if it is bright in the environment the HWC 102 may increase the brightness or contrast of the displayed image. In embodiments, the integrated control technologies may be mounted on the HWC 102 such that a user can interact with it directly. For example, the HWC 102 may have a button(s), touch capacitive interface, and the like.
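As an illustrative, non-limiting sketch of the automatic environment-based control described above, the following Python example maps an ambient light reading to a display brightness level. The sensor value, thresholds, and function names are assumptions made for illustration and do not correspond to any particular HWC 102 implementation.

```python
# Illustrative sketch only: map an ambient light reading (lux) to a display
# brightness level.  The sensor hook and threshold values are hypothetical.
import math

def brightness_for_ambient(lux: float,
                           min_level: float = 0.15,
                           max_level: float = 1.0,
                           dark_lux: float = 10.0,
                           bright_lux: float = 10000.0) -> float:
    """Return a display brightness in [min_level, max_level] that rises with
    ambient illuminance, interpolated on a log scale since perceived
    brightness is roughly logarithmic in luminance."""
    lux = max(dark_lux, min(bright_lux, lux))
    t = (math.log10(lux) - math.log10(dark_lux)) / (
        math.log10(bright_lux) - math.log10(dark_lux))
    return min_level + t * (max_level - min_level)

# A bright outdoor scene drives the display toward full brightness,
# while a dim room keeps it low.
print(round(brightness_for_ambient(5000.0), 2))   # ~0.91
print(round(brightness_for_ambient(20.0), 2))     # ~0.24
```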
As described herein, the HWC 102 may be in communication with external user interfaces 104. The external user interfaces may come in many different forms. For example, a cell phone screen may be adapted to take user input for control of an aspect of the HWC 102. The external user interface may be a dedicated UI, such as a keyboard, touch surface, button(s), joystick, and the like. In embodiments, the external controller may be integrated into another device such as a ring, watch, bike, car, and the like. In each case, the external user interface 104 may include sensors (e.g. IMU, accelerometers, compass, altimeter, and the like) to provide additional input for controlling the HWC 102.
As described herein, the HWC 102 may control or coordinate with other local devices 108. The external devices 108 may be an audio device, visual device, vehicle, cell phone, computer, and the like. For instance, the local external device 108 may be another HWC 102, where information may then be exchanged between the separate HWCs 102.
Similar to the way the HWC 102 may control or coordinate with local devices 108, the HWC 102 may control or coordinate with remote devices 112, such as the HWC 102 communicating with the remote devices 112 through a network 110. Again, the remote device 112 may take many forms. Included in these forms is another HWC 102. For example, each HWC 102 may communicate its GPS position such that all the HWCs 102 know where all of the HWCs 102 are located.
The light that is provided by the polarized light source 302, which is subsequently reflected by the reflective polarizer 310 before it reflects from the DLP 304, will generally be referred to as illumination light. The light that is reflected by the “off” pixels of the DLP 304 is reflected at a different angle than the light reflected by the “on” pixels, so that the light from the “off” pixels is generally directed away from the optical axis of the field lens 312 and toward the side of the upper optical module 202 as shown in
The DLP 304 operates as a computer controlled display and is generally thought of as a MEMS device. The DLP pixels are comprised of small mirrors that can be directed. The mirrors generally flip from one angle to another angle. The two angles are generally referred to as states. When light is used to illuminate the DLP the mirrors will reflect the light in a direction depending on the state. In embodiments herein, we generally refer to the two states as “on” and “off,” which is intended to depict the condition of a display pixel. “On” pixels will be seen by a viewer of the display as emitting light because the light is directed along the optical axis and into the field lens and the associated remainder of the display system. “Off” pixels will be seen by a viewer of the display as not emitting light because the light from these pixels is directed to the side of the optical housing and into a light trap or light dump where the light is absorbed. The pattern of “on” and “off” pixels produces image light that is perceived by a viewer of the display as a computer generated image. Full color images can be presented to a user by sequentially providing illumination light with complementary colors such as red, green and blue, where the sequence is presented in a recurring cycle that is faster than the user can perceive as separate images; as a result, the user perceives a full color image comprised of the sum of the sequential images. Bright pixels in the image are provided by pixels that remain in the “on” state for the entire time of the cycle, while dimmer pixels in the image are provided by pixels that switch between the “on” state and “off” state within the time of the cycle, or frame time when in a video sequence of images.
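The time-division behavior of the DLP pixels can be illustrated with a short, hedged sketch: a pixel's perceived gray level corresponds to the fraction of the frame (or color subframe) during which its mirror is in the “on” state. The frame rate, subframe structure, and bit depth below are illustrative assumptions, not values taken from this disclosure.

```python
# Sketch of how a DLP-style display renders gray levels by time division:
# a pixel's perceived brightness is the fraction of the frame time its
# mirror spends in the "on" state.  Numbers here are illustrative only.

FRAME_TIME_MS = 1000.0 / 60.0          # one video frame at an assumed 60 Hz
SUBFRAMES_PER_COLOR = 3                # red, green, blue shown sequentially

def on_time_ms(gray_level: int, bit_depth: int = 8) -> float:
    """Time (ms) a mirror spends 'on' within one color subframe to show
    gray_level out of 2**bit_depth - 1."""
    subframe = FRAME_TIME_MS / SUBFRAMES_PER_COLOR
    return subframe * gray_level / (2 ** bit_depth - 1)

# A fully bright pixel stays "on" for the whole subframe; a mid-gray pixel
# toggles between "on" and "off" so it is lit roughly half the time.
print(round(on_time_ms(255), 2))   # ~5.56 ms, the entire color subframe
print(round(on_time_ms(128), 2))   # ~2.79 ms
```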
The configuration illustrated in
The configuration illustrated in
Critical angle = arc-sin(1/n)   Eqn 1
Where the critical angle is the angle beyond which the illumination light is reflected from the internal surface when the internal surface comprises an interface from a solid with a higher refractive index (n) to air with a refractive index of 1 (e.g. for an interface of acrylic, with a refractive index of n=1.5, to air, the critical angle is 41.8 degrees; for an interface of polycarbonate, with a refractive index of n=1.59, to air, the critical angle is 38.9 degrees). Consequently, the TIR wedge 418 is associated with a thin air gap 408 along the internal surface to create an interface between a solid with a higher refractive index and air. By choosing the angle of the light source 404 relative to the DLP 402 in correspondence to the angle of the internal surface of the TIR wedge 418, illumination light is turned toward the DLP 402 at an angle suitable for providing image light 414 as reflected from “on” pixels. The illumination light is provided to the DLP 402 at approximately twice the angle of the pixel mirrors in the DLP 402 that are in the “on” state, such that after reflecting from the pixel mirrors, the image light 414 is directed generally along the optical axis of the field lens. Depending on the state of the DLP pixels, the illumination light from “on” pixels may be reflected as image light 414 which is directed towards a field lens and a lower optical module 204, while illumination light reflected from “off” pixels (generally referred to herein as “dark” state light, “off” pixel light or “off” state light) 410 is directed in a separate direction, which may be trapped and not used for the image that is ultimately presented to the wearer's eye.
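Eqn 1 can be checked with a short worked example that reproduces the acrylic and polycarbonate values cited above (to within rounding).

```python
# Worked example of Eqn 1: the TIR critical angle for a solid-to-air
# interface, using the refractive indices cited above.

import math

def critical_angle_deg(n: float) -> float:
    """Critical angle (degrees) for total internal reflection at an
    interface from a medium of refractive index n to air (n = 1)."""
    return math.degrees(math.asin(1.0 / n))

print(round(critical_angle_deg(1.5), 1))    # acrylic (n = 1.5): ~41.8 degrees
print(round(critical_angle_deg(1.59), 1))   # polycarbonate (n = 1.59): ~39 degrees
```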
The light trap for the dark state light 410 may be located along the optical axis defined by the direction of the dark state light 410 and in the side of the housing, with the function of absorbing the dark state light. To this end, the light trap may be comprised of an area outside of the cone of image light 414 from the “on” pixels. The light trap is typically made up of materials that absorb light including coatings of black paints or other light absorbing materials to prevent light scattering from the dark state light degrading the image perceived by the user. In addition, the light trap may be recessed into the wall of the housing or include masks or guards to block scattered light and prevent the light trap from being viewed adjacent to the displayed image.
The embodiment of
The embodiment illustrated in
The angles of the faces of the wedge set 456 correspond to the needed angles to provide illumination light 452 at the angle needed by the DLP mirrors when in the “on” state so that the reflected image light 414 is reflected from the DLP along the optical axis of the field lens. The wedge set 456 provides an interior interface where a reflective polarizer film can be located to redirect the illumination light 452 toward the mirrors of the DLP 402. The wedge set also provides a matched wedge on the opposite side of the reflective polarizer 450 so that the image light 414 from the “on” pixels exits the wedge set 456 substantially perpendicular to the exit surface, while the dark state light from the “off” pixels 410 exits at an oblique angle to the exit surface. As a result, the image light 414 is substantially unrefracted upon exiting the wedge set 456, while the dark state light from the “off” pixels 410 is substantially refracted upon exiting the wedge set 456 as shown in
By providing a solid transparent matched wedge set, the need for flatness of the interface is reduced, because variations in the flatness have a negligible effect as long as they are within the cone angle of the illuminating light 452, which can be f/#2.2 with a 26 degree cone angle. In a preferred embodiment, the reflective polarizer is bonded between the matched internal surfaces of the wedge set 456 using an optical adhesive so that Fresnel reflections at the interfaces on either side of the reflective polarizer 450 are reduced. The optical adhesive can be matched in refractive index to the material of the wedge set 456, and the pieces of the wedge set 456 can all be made from the same material such as BK7 glass or cast acrylic. The wedge material can also be selected to have low birefringence to reduce non-uniformities in brightness. The wedge set 456 and the quarter wave film 454 can also be bonded to the DLP 402 to further reduce Fresnel reflection losses at the DLP interface. In addition, since the image light 414 is substantially normal to the exit surface of the wedge set 456, the flatness of the surface is not critical to maintain the wavefront of the image light 414, so that high image quality can be obtained in the displayed image without requiring very tightly toleranced flatness on the exit surface.
A yet further embodiment of the invention that is not illustrated, combines the embodiments illustrated in
The combiner 602 may include a holographic pattern, to form a holographic mirror. If a monochrome image is desired, there may be a single wavelength reflection design for the holographic pattern on the surface of the combiner 602. If the intention is to have multiple colors reflected from the surface of the combiner 602, a multiple wavelength holographic mirror may be included on the combiner surface. For example, in a three-color embodiment, where red, green and blue pixels are generated in the image light, the holographic mirror may be reflective to wavelengths substantially matching the wavelengths of the red, green and blue light provided by the light source. This configuration can be used as a wavelength specific mirror where pre-determined wavelengths of light from the image light are reflected to the user's eye. This configuration may also be made such that substantially all other wavelengths in the visible pass through the combiner element 602 so the user has a substantially clear view of the surroundings when looking through the combiner element 602. The transparency between the user's eye and the surrounding may be approximately 80% when using a combiner that is a holographic mirror. Holographic mirrors can be made using lasers to produce interference patterns in the holographic material of the combiner, where the wavelengths of the lasers correspond to the wavelengths of light that are subsequently reflected by the holographic mirror.
In another embodiment, the combiner element 602 may include a notch mirror comprised of a multilayer coated substrate wherein the coating is designed to substantially reflect the wavelengths of light provided by the light source and substantially transmit the remaining wavelengths in the visible spectrum. For example, in the case where red, green and blue light is provided by the light source to enable full color images to be provided to the user, the notch mirror is a tristimulus notch mirror wherein the multilayer coating is designed to reflect narrow bands of red, green and blue light that are matched to what is provided by the light source, and the remaining visible wavelengths are transmitted through the coating to enable a view of the environment through the combiner. In another example where monochrome images are provided to the user, the notch mirror is designed to reflect a single narrow band of light that is matched to the wavelength range of the light provided by the light source while transmitting the remaining visible wavelengths to enable a see-thru view of the environment. The combiner 602 with the notch mirror would operate, from the user's perspective, in a manner similar to the combiner that includes a holographic pattern on the combiner element 602. The combiner, with the tristimulus notch mirror, would reflect the “on” pixels to the eye because of the match between the reflective wavelengths of the notch mirror and the color of the image light, and the wearer would be able to see the surroundings with high clarity. The transparency between the user's eye and the surrounding may be approximately 80% when using the tristimulus notch mirror. In addition, the image provided by the upper optical module 202 with the notch mirror combiner can provide higher contrast images than the holographic mirror combiner due to less scattering of the imaging light by the combiner.
Light can escape through the combiner 602 and may produce face glow as the light is generally directed downward onto the cheek of the user. When using a holographic mirror combiner or a tristimulus notch mirror combiner, the escaping light can be trapped to avoid face glow. In embodiments, if the image light is polarized before the combiner, a linear polarizer can be laminated onto, or otherwise associated with, the combiner, with the transmission axis of the polarizer oriented relative to the polarized image light so that any escaping image light is absorbed by the polarizer. In embodiments, the image light would be polarized to provide S polarized light to the combiner for better reflection. As a result, the linear polarizer on the combiner would be oriented to absorb S polarized light and pass P polarized light. This is also the preferred orientation for polarized sunglasses.
If the image light is unpolarized, a microlouvered film such as a privacy filter can be used to absorb the escaping image light while providing the user with a see-thru view of the environment. In this case, the absorbance or transmittance of the microlouvered film is dependent on the angle of the light, where steep angle light is absorbed and light at a shallower angle is transmitted. For this reason, in an embodiment, the combiner with the microlouvered film is angled at greater than 45 degrees to the optical axis of the image light (e.g. the combiner can be oriented at 50 degrees so the image light from the field lens is incident on the combiner at an oblique angle).
While many of the embodiments of the present invention have been referred to as upper and lower modules containing certain optical components, it should be understood that the image light and dark light production and management functions described in connection with the upper module may be arranged to direct light in other directions (e.g. upward, sideward, etc.). In embodiments, it may be preferred to mount the upper module 202 above the wearer's eye, in which case the image light would be directed downward. In other embodiments it may be preferred to produce light from the side of the wearer's eye, or from below the wearer's eye. In addition, the lower optical module is generally configured to deliver the image light to the wearer's eye and allow the wearer to see through the lower optical module, which may be accomplished through a variety of optical components.
Another aspect of the present invention relates to eye imaging. In embodiments, a camera is used in connection with an upper optical module 202 such that the wearer's eye can be imaged using pixels in the “off” state on the DLP.
In embodiments, the eye imaging camera may image the wearer's eye at a moment in time where there are enough “off” pixels to achieve the required eye image resolution. In another embodiment, the eye imaging camera collects eye image information from “off” pixels over time and forms a time lapsed image. In another embodiment, a modified image is presented to the user wherein enough “off” state pixels are included that the camera can obtain the desired resolution and brightness for imaging the wearer's eye and the eye image capture is synchronized with the presentation of the modified image.
The eye imaging system may be used for security systems. The HWC may not allow access to the HWC or other system if the eye is not recognized (e.g. through eye characteristics including retina or iris characteristics, etc.). The HWC may be used to provide constant security access in some embodiments. For example, the eye security confirmation may be a continuous, near-continuous, real-time, quasi real-time, periodic, etc. process so the wearer is effectively constantly being verified as known. In embodiments, the HWC may be worn and eye security tracked for access to other computer systems.
The eye imaging system may be used for control of the HWC. For example, a blink, wink, or particular eye movement may be used as a control mechanism for a software application operating on the HWC or associated device.
The eye imaging system may be used in a process that determines how or when the HWC 102 delivers digitally displayed content to the wearer. For example, the eye imaging system may determine that the user is looking in a direction and then the HWC may change the resolution in an area of the display or provide some content that is associated with something in the environment that the user may be looking at. Alternatively, the eye imaging system may identify different users and change the displayed content or enabled features provided to the user. Users may be identified from a database of user eye characteristics either located on the HWC 102 or remotely located on the network 110 or on a server 112. In addition, the HWC may identify a primary user or a group of primary users from eye characteristics, wherein the primary user(s) are provided with an enhanced set of features and all other users are provided with a different set of features. Thus, in this use case, the HWC 102 uses identified eye characteristics to either enable features or not, and eye characteristics need only be analyzed in comparison to a relatively small database of individual eye characteristics.
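A hedged sketch of this feature-gating idea is shown below: a measured eye signature is compared against a small enrolled database, and a feature set is enabled only when the closest match is within a threshold. The signature representation, distance metric, and threshold are illustrative assumptions rather than the method of any particular embodiment.

```python
# Illustrative sketch: gate feature sets on a match against a small on-device
# database of eye characteristics.  Signature format and metric are assumed.

from math import dist

EYE_DB = {
    # user id -> (feature vector derived from iris/retina characteristics,
    #             feature set granted to that user)
    "primary_user": ([0.62, 0.18, 0.91, 0.40], "enhanced"),
    "guest_user":   ([0.15, 0.77, 0.33, 0.58], "basic"),
}

def features_for_eye(measured: list, threshold: float = 0.1) -> str:
    """Return the feature set for the closest enrolled user, or 'locked'
    if no enrolled signature is within the match threshold."""
    best_user, best_d = None, float("inf")
    for user, (template, _) in EYE_DB.items():
        d = dist(measured, template)
        if d < best_d:
            best_user, best_d = user, d
    return EYE_DB[best_user][1] if best_d <= threshold else "locked"

print(features_for_eye([0.60, 0.20, 0.90, 0.41]))  # -> "enhanced"
print(features_for_eye([0.0, 0.0, 0.0, 0.0]))      # -> "locked"
```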
Another aspect of the present invention relates to the generation of peripheral image lighting effects for a person wearing a HWC. In embodiments, a solid state lighting system (e.g. LED, OLED, etc), or other lighting system, may be included inside the optical elements of a lower optical module 204. The solid state lighting system may be arranged such that lighting effects outside of a field of view (FOV) of the presented digital content are presented to create an immersive effect for the person wearing the HWC. To this end, the lighting effects may be presented to any portion of the HWC that is visible to the wearer. The solid state lighting system may be digitally controlled by an integrated processor on the HWC. In embodiments, the integrated processor will control the lighting effects in coordination with digital content that is presented within the FOV of the HWC. For example, a movie, picture, game, or other content, may be displayed or playing within the FOV of the HWC. The content may show a bomb blast on the right side of the FOV and at the same moment, the solid state lighting system inside of the upper module optics may flash quickly in concert with the FOV image effect. The effect need not be fast; it may be more persistent to indicate, for example, a general glow or color on one side of the user. The solid state lighting system may be color controlled, with red, green and blue LEDs, for example, such that color control can be coordinated with the digitally presented content within the field of view.
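The coordination between displayed content and peripheral lighting can be sketched as follows; the LED driver interface and timing values are hypothetical placeholders used only to illustrate the idea of flashing the lighting system in concert with an event rendered in the FOV.

```python
# Illustrative sketch: drive peripheral LEDs on one side of the frame in
# concert with a content event.  `set_leds` stands in for whatever LED
# driver the HWC exposes; colors and timings are assumptions.

import time

def flash_peripheral(side: str, rgb: tuple, duration_s: float,
                     set_leds=lambda side, rgb: None):
    """Briefly drive the LEDs on one side of the frame, then fade out."""
    steps = 10
    for i in range(steps, -1, -1):
        level = i / steps
        set_leds(side, tuple(int(c * level) for c in rgb))
        time.sleep(duration_s / steps)

# A bomb blast rendered on the right side of the FOV triggers a quick
# orange flash on the right peripheral LEDs at the same moment.
flash_peripheral("right", (255, 120, 0), duration_s=0.2)
```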
In the embodiment illustrated in
Another aspect of the present invention relates to the mitigation of light escaping from the space between the wearer's face and the HWC itself. Another aspect of the present invention relates to maintaining a controlled lighting environment in proximity to the wearer's eyes. In embodiments, both the maintenance of the lighting environment and the mitigation of light escape are accomplished by including a removable and replaceable flexible shield for the HWC. The removable and replaceable shield can be provided for one eye or both eyes in correspondence with the use of the displays for each eye. For example, in a night vision application, the display to only one eye could be used for night vision while the display to the other eye is turned off to provide good see-thru when moving between areas where visible light is available and dark areas where night vision enhancement is needed.
In embodiments, an opaque front light shield 1412 may be included and the digital content may include images of the surrounding environment such that the wearer can visualize the surrounding environment. One eye may be presented with night vision environmental imagery and this eye's surrounding environment optical path may be covered using an opaque front light shield 1412. In other embodiments, this arrangement may be associated with both eyes.
Another aspect of the present invention relates to automatically configuring the lighting system(s) used in the HWC 102. In embodiments, the display lighting and/or effects lighting, as described herein, may be controlled in a manner suitable for when an eye cover 1408 is attached to or removed from the HWC 102. For example, at night, when the light in the environment is low, the lighting system(s) in the HWC may go into a low light mode to further control any amounts of stray light escaping from the HWC and the areas around the HWC. Covert operations at night, while using night vision or standard vision, may require a solution that prevents as much light as possible from escaping, so a user may clip on the eye cover(s) 1408 and then the HWC may go into a low light mode. In some embodiments, the HWC may only go into the low light mode when the eye cover 1408 is attached if the HWC identifies that the environment is in low light conditions (e.g. through environmental light level sensor detection). In embodiments, the low light level may be determined to be at an intermediate point between full and low light dependent on environmental conditions.
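The mode selection logic described above can be summarized with a small, hedged sketch in which the low light mode is entered only when the eye cover is attached and the sensed environment is dark; the lux thresholds are assumptions chosen for illustration.

```python
# Illustrative sketch: choose a lighting mode from eye-cover attachment and
# an ambient light reading.  Threshold values are assumptions.

def select_lighting_mode(eye_cover_attached: bool,
                         ambient_lux: float,
                         dark_threshold_lux: float = 5.0,
                         dim_threshold_lux: float = 50.0) -> str:
    if eye_cover_attached and ambient_lux < dark_threshold_lux:
        return "low_light"           # minimize stray/escaping light
    if eye_cover_attached and ambient_lux < dim_threshold_lux:
        return "intermediate"        # between full and low light
    return "normal"

print(select_lighting_mode(True, 1.0))    # -> "low_light"
print(select_lighting_mode(True, 20.0))   # -> "intermediate"
print(select_lighting_mode(False, 1.0))   # -> "normal"
```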
Another aspect of the present invention relates to automatically controlling the type of content displayed in the HWC when eye covers 1408 are attached or removed from the HWC. In embodiments, when the eye cover(s) 1408 is attached to the HWC, the displayed content may be restricted in amount or in color amounts. For example, the display(s) may go into a simple content delivery mode to restrict the amount of information displayed. This may be done to reduce the amount of light produced by the display(s). In an embodiment, the display(s) may change from color displays to monochrome displays to reduce the amount of light produced. In an embodiment, the monochrome lighting may be red to limit the impact on the wearer's eyes to maintain an ability to see better in the dark.
Referring to
While the pen 1500 may follow the general form of a conventional pen, it contains numerous technologies that enable it to function as an external user interface 104.
The pen 1500 may also include a pressure monitoring system 1504, such as to measure the pressure exerted on the lens 1502. As will be described in greater detail herein, the pressure measurement can be used to predict the user's intention for changing the weight of a line, type of a line, type of brush, click, double click, and the like. In embodiments, the pressure sensor may be constructed using any force or pressure measurement sensor located behind the lens 1502, including for example, a resistive sensor, a current sensor, a capacitive sensor, a voltage sensor such as a piezoelectric sensor, and the like.
The pen 1500 may also include a communications module 1518, such as for bi-directional communication with the HWC 102. In embodiments, the communications module 1518 may be a short distance communication module (e.g. Bluetooth). The communications module 1518 may be security matched to the HWC 102. The communications module 1518 may be arranged to communicate data and commands to and from the microprocessor 1510 of the pen 1500. The microprocessor 1510 may be programmed to interpret data generated from the camera 1508, IMU 1512, and pressure sensor 1504, and the like, and then pass a command onto the HWC 102 through the communications module 1518, for example. In another embodiment, the data collected from any of the input sources (e.g. camera 1508, IMU 1512, pressure sensor 1504) by the microprocessor may be communicated by the communication module 1518 to the HWC 102, and the HWC 102 may perform data processing and prediction of the user's intention when using the pen 1500. In yet another embodiment, the data may be further passed on through a network 110 to a remote device 112, such as a server, for the data processing and prediction. The commands may then be communicated back to the HWC 102 for execution (e.g. display writing in the glasses display, make a selection within the UI of the glasses display, control a remote external device 112, control a local external device 108), and the like. The pen may also include memory 1514 for long or short term uses.
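The pen's sensing-to-command flow can be sketched, in a hedged and simplified form, as the microprocessor 1510 interpreting camera, IMU, and pressure inputs and forwarding a command through the communications module 1518. The data structure, thresholds, and interfaces below are illustrative assumptions.

```python
# Illustrative sketch of the pen data flow: interpret sensor readings into a
# command and forward it to the HWC.  All interfaces here are assumptions.

from dataclasses import dataclass

@dataclass
class PenSample:
    pressure: float        # normalized 0..1 from the pressure sensor 1504
    moving: bool           # derived from the IMU 1512
    on_surface: bool       # derived from the camera 1508 imaging the surface

def interpret(sample: PenSample):
    """Very small stand-in for the intent-prediction step."""
    if sample.on_surface and sample.pressure > 0.6 and not sample.moving:
        return "click"
    if sample.on_surface and sample.moving:
        return "stroke"
    return None

def send_to_hwc(command: str):
    # Placeholder for the short-range (e.g. Bluetooth) communications module.
    print(f"-> HWC 102: {command}")

cmd = interpret(PenSample(pressure=0.8, moving=False, on_surface=True))
if cmd:
    send_to_hwc(cmd)       # -> HWC 102: click
```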
The pen 1500 may also include a number of physical user interfaces, such as quick launch buttons 1522, a touch sensor 1520, and the like. The quick launch buttons 1522 may be adapted to provide the user with a fast way of jumping to a software application in the HWC system 100. For example, the user may be a frequent user of communication software packages (e.g. email, text, Twitter, Instagram, Facebook, Google+, and the like), and the user may program a quick launch button 1522 to command the HWC 102 to launch an application. The pen 1500 may be provided with several quick launch buttons 1522, which may be user programmable or factory programmable. The quick launch button 1522 may be programmed to perform an operation. For example, one of the buttons may be programmed to clear the digital display of the HWC 102. This would create a fast way for the user to clear the screens on the HWC 102 for any reason, such as for example to better view the environment. The quick launch button functionality will be discussed in further detail below. The touch sensor 1520 may be used to take gesture style input from the user. For example, the user may be able to take a single finger and run it across the touch sensor 1520 to affect a page scroll.
The pen 1500 may also include a laser pointer 1524. The laser pointer 1524 may be coordinated with the IMU 1512 to coordinate gestures and laser pointing. For example, a user may use the laser 1524 in a presentation to help with guiding the audience with the interpretation of graphics and the IMU 1512 may, either simultaneously or when the laser 1524 is off, interpret the user's gestures as commands or data input.
The domed cover lens, or other lens 1608 used to physically interact with the writing surface, will be transparent or transmissive within the active bandwidth of the camera 1602. In embodiments, the domed cover lens 1608 may be spherical or other shape and comprised of glass, plastic, sapphire, diamond, and the like. In other embodiments where low resolution imaging of the surface is acceptable, the pen 1500 can omit the domed cover lens 1608 and the ball lens 1604 can be in direct contact with the surface.
Another aspect of the pen 1500 relates to sensing the force applied by the user to the writing surface with the pen 1500. The force measurement may be used in a number of ways. For example, the force measurement may be used as a discrete value, or discontinuous event tracking, and compared against a threshold in a process to determine a user's intent. The user may want the force interpreted as a ‘click’ in the selection of an object, for instance. The user may intend multiple force exertions interpreted as multiple clicks. There may be times when the user holds the pen 1500 in a certain position or holds a certain portion of the pen 1500 (e.g. a button or touch pad) while clicking to affect a certain operation (e.g. a ‘right click’). In embodiments, the force measurement may be used to track force and force trends. The force trends may be tracked and compared to threshold limits, for example. There may be one such threshold limit, multiple limits, groups of related limits, and the like. For example, when the force measurement indicates a fairly constant force that generally falls within a range of related threshold values, the microprocessor 1510 may interpret the force trend as an indication that the user desires to maintain the current writing style, writing tip type, line weight, brush type, and the like. In the event that the force trend appears to have gone outside of a set of threshold values intentionally, the microprocessor may interpret the action as an indication that the user wants to change the current writing style, writing tip type, line weight, brush type, and the like. Once the microprocessor has made a determination of the user's intent, a change in the current writing style, writing tip type, line weight, brush type, and the like may be executed. In embodiments, the change may be noted to the user (e.g. in a display of the HWC 102), and the user may be presented with an opportunity to accept the change.
While a threshold value may be used to assist in the interpretation of the user's intention, a signature force event trend may also be used. The threshold and signature may be used in combination or either method may be used alone. For example, a single-click signature may be represented by a certain force trend signature or set of signatures. The single-click signature(s) may require that the trend meet a criteria of a rise time between x and y values, a hold time of between a and b values and a fall time of between c and d values, for example. Signatures may be stored for a variety of functions such as click, double click, right click, hold, move, etc. The microprocessor 1510 may compare the real-time force or pressure tracking against the signatures from a signature library to make a decision and issue a command to the software application executing in the GUI.
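A minimal sketch of the signature-matching step is shown below, where a candidate force event is accepted as a single click only if its rise, hold, and fall times each fall inside a stored window; the window values are illustrative assumptions, not values taken from this disclosure.

```python
# Illustrative sketch: accept a force event as a single click only if its
# rise, hold, and fall times fall inside a stored signature window.

SINGLE_CLICK = {"rise_ms": (5, 60), "hold_ms": (20, 250), "fall_ms": (5, 80)}

def matches_signature(rise_ms: float, hold_ms: float, fall_ms: float,
                      sig: dict = SINGLE_CLICK) -> bool:
    return (sig["rise_ms"][0] <= rise_ms <= sig["rise_ms"][1]
            and sig["hold_ms"][0] <= hold_ms <= sig["hold_ms"][1]
            and sig["fall_ms"][0] <= fall_ms <= sig["fall_ms"][1])

print(matches_signature(20, 100, 30))   # True  -> issue a 'click' command
print(matches_signature(20, 900, 30))   # False -> likely a press-and-hold
```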
Generally, in the present disclosure, instrument stroke parameter changes may be referred to as a change in line type, line weight, tip type, brush type, brush width, brush pressure, color, and other forms of writing, coloring, painting, and the like.
Another aspect of the pen 1500 relates to selecting an operating mode for the pen 1500 dependent on contextual information and/or selection interface(s). The pen 1500 may have several operating modes. For instance, the pen 1500 may have a writing mode where the user interface(s) of the pen 1500 (e.g. the writing surface end, quick launch buttons 1522, touch sensor 1520, motion based gesture, and the like) is optimized or selected for tasks associated with writing. As another example, the pen 1500 may have a wand mode where the user interface(s) of the pen is optimized or selected for tasks associated with software or device control (e.g. the HWC 102, external local device, remote device 112, and the like). The pen 1500, by way of another example, may have a presentation mode where the user interface(s) is optimized or selected to assist a user with giving a presentation (e.g. pointing with the laser pointer 1524 while using the button(s) 1522 and/or gestures to control the presentation or applications relating to the presentation). The pen may, for example, have a mode that is optimized or selected for a particular device that a user is attempting to control. The pen 1500 may have a number of other modes and an aspect of the present invention relates to selecting such modes.
As with other examples presented herein, the microprocessor 1510 may monitor the contextual trend (e.g. the angle of the pen over time) in an effort to decide whether to stay in a mode or change modes. For example, through signatures, thresholds, trend analysis, and the like, the microprocessor may determine that a change is an unintentional change and therefore no user interface mode change is desired.
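The contextual trend monitoring can be sketched as follows: the pen angle history is examined, and a mode change is proposed only when the trend has settled, so brief or unintentional posture changes do not switch modes. The angle thresholds and sample counts are assumptions for illustration.

```python
# Illustrative sketch: propose a pen mode from the recent angle trend, and
# decline to switch while the trend is still unsettled.  Values are assumed.

def select_mode(angle_history_deg: list,
                writing_max_deg: float = 40.0,
                settle_samples: int = 5):
    """Return 'writing', 'wand', or None if the trend is not yet settled.
    Angle is measured from vertical."""
    if len(angle_history_deg) < settle_samples:
        return None
    recent = angle_history_deg[-settle_samples:]
    if max(recent) - min(recent) > 15.0:      # still moving -> don't switch
        return None
    return "writing" if sum(recent) / len(recent) < writing_max_deg else "wand"

print(select_mode([25, 27, 26, 28, 26]))      # -> "writing"
print(select_mode([70, 72, 71, 69, 70]))      # -> "wand"
print(select_mode([25, 70, 30, 65, 40]))      # -> None (unsettled trend)
```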
In embodiments, a confirmation selection may be presented to the user in the event a mode is going to change. The presentation may be physical (e.g. a vibration in the pen 1500), through a GUI, through a light indicator, etc.
Use scenario 1900 is a writing scenario where the pen 1500 is used as a writing instrument. In this example, quick launch button 1522A is pressed to launch a note application 1910 in the GUI 1908 of the HWC 102 display 1904. Once the quick launch button 1522A is pressed, the HWC 102 launches the note program 1910 and puts the pen into a writing mode. The user uses the pen 1500 to scribe symbols 1902 on a writing surface; the pen records the scribing and transmits it to the HWC 102, where symbols representing the scribing are displayed 1912 within the note application 1910.
Use scenario 1901 is a gesture scenario where the pen 1500 is used as a gesture capture and command device. In this example, the quick launch button 1522B is activated and the pen 1500 activates a wand mode such that an application launched on the HWC 102 can be controlled. Here, the user sees an application chooser 1918 in the display(s) of the HWC 102 where different software applications can be chosen by the user. The user gestures (e.g. swipes, spins, turns, etc.) with the pen to cause the application chooser 1918 to move from application to application. Once the correct application is identified (e.g. highlighted) in the chooser 1918, the user may gesture or click or otherwise interact with the pen 1500 such that the identified application is selected and launched. Once an application is launched, the wand mode may be used to scroll, rotate, change applications, select items, initiate processes, and the like, for example.
In an embodiment, the quick launch button 1522A may be activated and the HWC 102 may launch an application chooser presenting to the user a set of applications. For example, the quick launch button may launch a chooser to show all communication programs (e.g. SMS, Twitter, Instagram, Facebook, email, etc.) available for selection such that the user can select the program the user wants and then go into a writing mode. By way of further example, the launcher may bring up selections for various other groups that are related or categorized as generally being selected at a given time (e.g. Microsoft Office products, communication products, productivity products, note products, organizational products, and the like).
The watchband controller 2000 may have quick launch interfaces 2008 (e.g. to launch applications and choosers as described herein), a touch pad 2014 (e.g. to be used as a touch style mouse for GUI control in a HWC 102 display) and a display 2012. The clip 2018 may be adapted to fit a wide range of watchbands so it can be used in connection with a watch that is independently selected for its function. The clip, in embodiments, is rotatable such that a user can position it in a desirable manner. In embodiments the clip may be a flexible strap. In embodiments, the flexible strap may be adapted to be stretched to attach to a hand, wrist, finger, device, weapon, and the like.
In embodiments, the watchband controller may be configured as a removable and replaceable watchband. For example, the controller may be incorporated into a band with a certain width, segment spacings, etc. such that the watchband, with its incorporated controller, can be attached to a watch body. The attachment, in embodiments, may be mechanically adapted to attach with a pin upon which the watchband rotates. In embodiments, the watchband controller may be electrically connected to the watch and/or watch body such that the watch, watch body and/or the watchband controller can communicate data between them.
The watchband controller may have 3-axis motion monitoring (e.g. through an IMU, accelerometers, magnetometers, gyroscopes, etc.) to capture user motion. The user motion may then be interpreted for gesture control.
In embodiments, the watchband controller may comprise fitness sensors and a fitness computer. The sensors may track heart rate, calories burned, strides, distance covered, and the like. The data may then be compared against performance goals and/or standards for user feedback.
Another aspect of the present invention relates to visual display techniques relating to micro Doppler (“mD”) target tracking signatures (“mD signatures”). mD is a radar technique that uses a series of angle dependent electromagnetic pulses that are broadcast into an environment and return pulses are captured. Changes between the broadcast pulse and return pulse are indicative of changes in the shape, distance and angular location of objects or targets in the environment. These changes provide signals that can be used to track a target and identify the target through the mD signature. Each target or target type has a unique mD signature. Shifts in the radar pattern can be analyzed in the time domain and frequency domain based on mD techniques to derive information about the types of targets present (e.g. whether people are present), the motion of the targets and the relative angular location of the targets and the distance to the targets. By selecting a frequency used for the mD pulse relative to known objects in the environment, the pulse can penetrate the known objects to enable information about targets to be gathered even when the targets are visually blocked by the known objects. For example, pulse frequencies can be used that will penetrate concrete buildings to enable people to be identified inside the building. Multiple pulse frequencies can be used as well in the mD radar to enable different types of information to be gathered about the objects in the environment. In addition, the mD radar information can be combined with other information such as distance measurements or images captured of the environment that are analyzed jointly to provide improved object identification and improved target identification and tracking. In embodiments, the analysis can be performed on the HWC or the information can be transmitted to a remote network for analysis and results transmitted back to the HWC. Distance measurements can be provided by laser range finding, structured lighting, stereoscopic depth maps or sonar measurements. Images of the environment can be captured using one or more cameras capable of capturing images from visible, ultraviolet or infrared light. The mD radar can be attached to the HWC, located adjacently (e.g. in a vehicle) and associated wirelessly with the HWC or located remotely. Maps or other previously determined information about the environment can also be used in the analysis of the mD radar information. Embodiments of the present invention relate to visualizing the mD signatures in useful ways.
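A heavily simplified, hedged sketch of one step of this kind of mD processing is shown below: a slow-time series of radar returns is converted to a frequency-domain signature and compared against stored signatures. Real mD processing involves substantially more (range gating, time-frequency analysis, sensor fusion); the toy data, signature library, and similarity metric here are illustrative assumptions only.

```python
# Toy sketch of one mD-style step: build a frequency-domain signature from a
# return series and pick the best-matching stored signature.  Illustrative
# data and metric; not the processing chain of any particular embodiment.

import numpy as np

def md_signature(returns: np.ndarray) -> np.ndarray:
    """Normalized magnitude spectrum of the return series (the 'signature')."""
    spectrum = np.abs(np.fft.rfft(returns * np.hanning(len(returns))))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

def classify(returns: np.ndarray, library: dict) -> str:
    """Pick the library entry whose signature best correlates with the input."""
    sig = md_signature(returns)
    return max(library, key=lambda name: float(np.dot(sig, library[name])))

# Toy library: a 'person' signature with limb-like modulation components vs.
# a 'static' signature concentrated at zero Doppler.
t = np.linspace(0, 1, 256, endpoint=False)
library = {
    "person": md_signature(np.sin(2 * np.pi * 12 * t)
                           + 0.3 * np.sin(2 * np.pi * 35 * t)),
    "static": md_signature(np.ones_like(t)),
}
measurement = np.sin(2 * np.pi * 12 * t) + 0.1 * np.random.randn(256)
print(classify(measurement, library))   # -> "person" (in this toy example)
```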
There are several traces 2108 and 2104 presented to the wearer in the embodiment illustrated in
In embodiments, certain user positions may be known and thus identified in the FOV. For example, the shooter of the friendly fire trace 2108 may be a known friendly combatant and as such his location may be known. The position may be known based on the GPS location reported by a mobile communication system he carries, such as another HWC 102. In other embodiments, the friendly combatant may be marked by another friendly. For example, if the friendly position in the environment is known through visual contact or communicated information, a wearer of the HWC 102 may use a gesture or external user interface 104 to mark the location. If a friendly combatant location is known, the originating position of the friendly fire trace 2108 may be color coded or otherwise distinguished from unidentified traces on the displayed digital content. Similarly, enemy fire traces 2104 may be color coded or otherwise distinguished on the displayed digital content. In embodiments, there may be an additional distinguished appearance on the displayed digital content for unknown traces.
In addition to situationally associated trace appearance, the trace colors or appearance may be different from the originating position to the terminating position. This path appearance change may be based on the mD signature. The mD signature may indicate that the bullet, for example, is slowing as it propagates and this slowing pattern may be reflected in the FOV 2102 as a color or pattern change. This can create an intuitive understanding of where the shooter is located. For example, the originating color may be red, indicative of high speed, and it may change over the course of the trace to yellow, indicative of a slowing trace. This pattern changing may also be different for a friendly, enemy and unknown combatant. A friendly trace may go from blue to green, for example.
Another aspect of the present invention relates to mD radar techniques that trace and identify targets through other objects, such as walls (referred to generally as through wall mD), and visualization techniques related therewith.
mD target recognition methods can identify a target based on the vibrations and other small movements of the target. This can provide a personal signature for the target. In the case of humans, this may result in a personal identification of a target that has been previously characterized. The cardiac motion, heartbeat, lung expansion and other small movements within the body may be unique to a person and if those attributes are pre-identified they may be matched in real time to provide a personal identification of a person in the FOV 2202. The person's mD signatures may be determined based on the position of the person. For example, the database of personal mD signature attributes may include mD signatures for a person standing, sitting, laying down, running, walking, jumping, etc. This may improve the accuracy of the personal data match when a target is tracked through mD signature techniques in the field. In the event a person is personally identified, a specific indication of the person's identity may be presented in the FOV 2202. The indication may be a color, shape, shade, name, indication of the type of person (e.g. enemy, friendly, etc.), etc. to provide the wearer with intuitive real time information about the person being tracked. This may be very useful in a situation where there is more than one person in an area of the person being tracked. If just one person in the area is personally identified, that person or the avatar of that person can be presented differently than other people in the area.
An aspect of the present invention relates to suppression of extraneous or stray light. As discussed herein elsewhere, eyeglow and faceglow are two such artifacts that develop from such light. Eyeglow and faceglow can be caused by image light escaping from the optics module. The escaping light is then visible, particularly in dark environments when the user is viewing bright displayed images with the HWC. Light that escapes through the front of the HWC is visible as eyeglow, as it is light that is visible in the region of the user's eyes. Eyeglow can appear in the form of a small version of the displayed image that the user is viewing. Light that escapes from the bottom of the HWC shines onto the user's face, cheek or chest so that these portions of the user appear to glow. Eyeglow and faceglow can both increase the visibility of the user and highlight the use of the HWC, which may be viewed negatively by the user. As such, reducing eyeglow and faceglow is advantageous. In combat situations (e.g. the mD trace presentation scenarios described herein) and certain gaming situations, the suppression of extraneous or stray light is very important.
The disclosure relating to
An example of the source for the faceglow light can come from wide cone angle light associated with the image light incident onto the combiner 602. The combiner can include a holographic mirror or a notch mirror in which the narrow bands of high reflectivity are matched to the wavelengths of light provided by the light source. The wide cone angle associated with the image light corresponds with the field of view provided by the HWC. Typically the reflectivity of holographic mirrors and notch mirrors is reduced as the cone angle of the incident light is increased above 8 degrees. As a result, for a field of view of 30 degrees, substantial image light can pass through the combiner and cause faceglow.
In embodiments, the combiner 602 may include a notch mirror coating to reflect the wavelengths of light in the image light, and a notch filter 2620 can be selected in correspondence to the wavelengths of light provided by the light source and the narrow bands of high reflectivity provided by the notch mirror. In this way, image light that is not reflected by the notch mirror is absorbed by the notch filter 2620. In embodiments of the invention the light source can provide one narrow band of light for monochrome imaging or three narrow bands of light for full color imaging. The notch mirror and associated notch filter would then each provide one narrow band or three narrow bands of high reflectivity and absorption, respectively.
We now turn back to a description of eye imaging technologies. Aspects of the present invention relate to various methods of imaging the eye of a person wearing the HWC 102. In embodiments, technologies are described for imaging the eye using an optical path involving the “off” state and the “no power” state, which is described in detail below. In embodiments, technologies are described for imaging the eye with optical configurations that do not involve reflecting the eye image off of DLP mirrors. In embodiments, unstructured light, structured light, or controlled lighting conditions, are used to predict the eye's position based on the light reflected off of the front of the wearer's eye. In embodiments, a reflection of a presented digital content image is captured as it reflects off of the wearer's eye and the reflected image may be processed to determine the quality (e.g. sharpness) of the image presented. In embodiments, the image may then be adjusted (e.g. focused differently) to increase the quality of the image presented based on the image reflection.
For comparison, illuminating light rays 2973 from the light source 2958 are also shown being reflected by the partially reflective layer 2960. The angle of the illuminating light 2973 is such that the DLP mirrors, when in the “on” state, reflect the illuminating light 2973 to form image light 2969 that substantially shares the same optical axis as the light from the wearer's eye 2971. In this way, images of the wearer's eye are captured in a field of view that overlaps the field of view for the displayed image content. In contrast, light reflected by DLP mirrors in the “off” state forms dark light 2975 which is directed substantially to the side of the image light 2969 and the light from eye 2971. Dark light 2975 is directed toward a light trap 2962 that absorbs the dark light to improve the contrast of the displayed image as has been described above in this specification.
In an embodiment, partially reflective layer 2960 is a reflective polarizer. The light that is reflected from the eye 2971 can then be polarized prior to entering the corrective wedge 2966 (e.g. with an absorptive polarizer between the upper module 202 and the lower module 204), with a polarization orientation relative to the reflective polarizer that enables the light reflected from the eye 2971 to be substantially transmitted by the reflective polarizer. A quarter wave retarder layer 2957 is then included adjacent to the DLP 2955 (as previously disclosed in
In a further embodiment illustrated by
Alternatively, the “no power” state can be applied to a subset of the DLP mirrors (e.g. 10% of the DLP mirrors) while another subset is busy generating image light for content to be displayed. This enables the capture of an eye image(s) during the display of digital content to the wearer. The DLP mirrors used for eye imaging can, for example, be distributed randomly across the area of the DLP to minimize the impact on the quality of the digital content being displayed to the wearer. To improve the displayed image perceived by the wearer, the individual DLP mirrors put into the “no power” state for capturing each eye image can be varied over time, such as in a random pattern, for example. In yet a further embodiment, the DLP mirrors put into the “no power” state for eye imaging may be coordinated with the digital content in such a way that the “no power” mirrors are taken from a portion of the image that requires less resolution.
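Selecting which mirrors to place in the “no power” state can be sketched, under illustrative assumptions about the mirror array size and sampling fraction, as drawing a fresh random subset of mirror coordinates for each captured eye image frame.

```python
# Illustrative sketch: choose a random ~10% subset of DLP mirror coordinates
# to hold in the "no power" state this frame, re-drawn each frame so no
# fixed region of the displayed image is degraded.  Sizes are assumptions.

import random

def no_power_mirrors(rows: int, cols: int, fraction: float = 0.10,
                     rng=None) -> set:
    """Return a random set of (row, col) mirror coordinates covering about
    `fraction` of the array."""
    rng = rng or random.Random()
    count = int(rows * cols * fraction)
    return set(rng.sample([(r, c) for r in range(rows) for c in range(cols)],
                          count))

frame1 = no_power_mirrors(8, 8)     # tiny array used only for illustration
frame2 = no_power_mirrors(8, 8)     # a different subset on the next frame
print(len(frame1), len(frame1 & frame2))   # ~6 mirrors each, typically little overlap
```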
In the embodiments of the invention as illustrated in
In the embodiment illustrated in
Eye imaging systems where the polarization state of the light from the eye 2971 needs to be opposite to that of the image light 2969 (as shown in
In a further embodiment shown in
In yet another embodiment shown in
In embodiments directed to capturing images of the wearer's eye, light to illuminate the wearer's eye can be provided by several different sources including: light from the displayed image (i.e. image light); light from the environment that passes through the combiner or other optics; light provided by a dedicated eye light, etc.
In an embodiment of the eye imaging system, the lens for the camera is designed to take into account the optics associated with the upper module 202 and the lower module 204. This is accomplished by designing the camera to include the optics in the upper module 202 and optics in the lower module 204, so that a high MTF image is produced, at the image sensor in the camera, of the wearer's eye. In yet a further embodiment, the camera lens is provided with a large depth of field to eliminate the need for focusing the camera to enable a sharp image of the eye to be captured; a large depth of field is typically provided by a high f/# lens (e.g. f/# > 5). In this case, the reduced light gathering associated with high f/# lenses is compensated by the inclusion of a dedicated eye light to enable a bright image of the eye to be captured. Further, the brightness of the dedicated eye light can be modulated and synchronized with the capture of eye images so that the dedicated eye light has a reduced duty cycle and the brightness of infrared light on the wearer's eye is reduced.
In a further embodiment,
is an illustration of another embodiment using eye imaging, in which the sharpness of the displayed image is determined based on the eye glint produced by the reflection of the displayed image from the wearer's eye surface. By capturing images of the wearer's eye 3611, an eye glint 3622, which is a small version of the displayed image, can be captured and analyzed for sharpness. If the displayed image is determined not to be sharp, then an automated adjustment to the focus of the HWC optics can be performed to improve the sharpness. This ability to measure the sharpness of a displayed image at the surface of the wearer's eye can provide a very accurate measurement of image quality. Having the ability to measure and automatically adjust the focus of displayed images can be very useful in augmented reality imaging, where the focus distance of the displayed image can be varied in response to changes in the environment or changes in the method of use by the wearer.
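One plausible way to score the sharpness of the captured eye glint is a variance-of-Laplacian focus measure, sketched below in Python. The glint patch, the threshold, and the scalar focus setting are illustrative assumptions; a real implementation would crop the glint region from the eye image and drive the actual HWC focus mechanism.

```python
import numpy as np

def laplacian_variance(patch: np.ndarray) -> float:
    """Variance of a discrete Laplacian; higher values suggest a sharper image."""
    lap = (-4 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float(lap.var())

def autofocus_step(glint_patch, current_focus, step=0.05, threshold=50.0):
    """If the glint looks soft, nudge the (hypothetical) focus setting and report the score."""
    score = laplacian_variance(glint_patch.astype(np.float64))
    if score < threshold:
        return current_focus + step, score   # caller would re-capture and iterate
    return current_focus, score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    soft_glint = rng.normal(128, 1, (32, 32))   # low-contrast stand-in for a soft glint
    focus, score = autofocus_step(soft_glint, current_focus=0.0)
    print(f"sharpness score {score:.1f}, next focus setting {focus}")
```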
An aspect of the present invention relates to controlling the HWC 102 through interpretations of eye imagery. In embodiments, eye-imaging technologies, such as those described herein, are used to capture an eye image or series of eye images for processing. The image(s) may be processed to determine a user-intended action, an HWC predetermined reaction, or other action. For example, the imagery may be interpreted as an affirmative user control action for an application on the HWC 102. Or, the imagery may cause, for example, the HWC 102 to react in a pre-determined way such that the HWC 102 is operating safely, intuitively, etc.
In embodiments, the digital content that is in line with the virtual target line may not be displayed in the FOV until the eye position is in the right position. This may be a predetermined process. For example, the system may be set up such that a particular piece of digital content (e.g. an advertisement, guidance information, object information, etc.) will appear in the event that the wearer looks at a certain object(s) in the environment. A virtual target line(s) may be developed that virtually connects the wearer's eye with an object(s) in the environment (e.g. a building, portion of a building, mark on a building, gps location, etc.) and the virtual target line may be continually updated depending on the position and viewing direction of the wearer (e.g. as determined through GPS, ecompass, IMU, etc.) and the position of the object. When the virtual target line suggests that the wearer's pupil is substantially aligned with the virtual target line or about to be aligned with the virtual target line, the digital content may be displayed in the FOV 3704.
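A minimal sketch of the virtual-target-line test described above follows, assuming the wearer's geo-spatial position, the object's position, a compass-derived head heading, and an eye-heading offset from eye imaging are available; the function names and the 3-degree tolerance are illustrative assumptions.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Approximate compass bearing from the wearer to the object (degrees from north)."""
    d_lon = math.radians(lon2 - lon1)
    lat1, lat2 = math.radians(lat1), math.radians(lat2)
    x = math.sin(d_lon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon)
    return math.degrees(math.atan2(x, y)) % 360

def angle_diff(a, b):
    """Smallest absolute difference between two headings, in degrees."""
    return abs((a - b + 180) % 360 - 180)

def should_display(wearer_pos, object_pos, head_heading_deg, eye_offset_deg, tol_deg=3.0):
    """Display the content when the eye/sight direction is substantially on the target line."""
    target = bearing_deg(*wearer_pos, *object_pos)
    gaze = (head_heading_deg + eye_offset_deg) % 360
    return angle_diff(gaze, target) <= tol_deg

if __name__ == "__main__":
    wearer = (40.7128, -74.0060)       # illustrative GPS fix of the wearer
    building = (40.7135, -74.0050)     # illustrative position of the object of interest
    print(should_display(wearer, building, head_heading_deg=47.0, eye_offset_deg=2.0))
```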
In embodiments, the time spent looking along the virtual target line and/or a particular portion of the FOV 3708 may indicate that the wearer is interested in an object in the environment and/or digital content being displayed. In the event there is no digital content being displayed at the time a predetermined period of time is spent looking in a direction, digital content may be presented in the area of the FOV 3708. The time spent looking at an object may be interpreted as a command to display information about the object, for example. In other embodiments, the content may not relate to the object and may be presented because of the indication that the person is relatively inactive. In embodiments, the digital content may be positioned in proximity to the virtual target line, but not in line with it, such that the wearer's view of the surroundings is not obstructed but information can augment the wearer's view of the surroundings. In embodiments, the time spent looking along a target line in the direction of displayed digital content may be an indication of interest in the digital content. This may be used as a conversion event in advertising. For example, an advertiser may pay more for an ad placement if the wearer of the HWC 102 looks at a displayed advertisement for a certain period of time. As such, in embodiments, the time spent looking at the advertisement, as assessed by comparing eye position with the content placement, target line, or other appropriate position, may be used to determine a rate of conversion or other compensation amount due for the presentation.
An aspect of the invention relates to removing content from the FOV of the HWC 102 when the wearer of the HWC 102 apparently wants to view the surrounding environments clearly.
Another aspect of the present invention relates to determining a focal plane based on the wearer's eye convergence. Eyes are generally converged slightly and converge more when the person focuses on something very close. This is generally referred to as convergence. In embodiments, convergence is calibrated for the wearer. That is, the wearer may be guided through certain focal plane exercises to determine how much the wearer's eyes converge at various focal planes and at various viewing angles. The convergence information may then be stored in a database for later reference. In embodiments, a general table may be used in the event there is no calibration step or the person skips the calibration step. The two eyes may then be imaged periodically to determine the convergence in an attempt to understand what focal plane the wearer is focused on. In embodiments, the eyes may be imaged to determine a virtual target line and then the eye's convergence may be determined to establish the wearer's focus, and the digital content may be displayed or altered based thereon.
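The following sketch assumes a per-wearer calibration table relating measured convergence angle to focal distance, as might be produced by the guided exercises described above, and interpolates a focal plane estimate from a new convergence measurement; the table values are illustrative only.

```python
import bisect

# Hypothetical per-wearer calibration: (convergence angle in degrees, focal distance in meters),
# gathered during guided calibration exercises and sorted by increasing angle.
CALIBRATION = [(0.5, 10.0), (1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (8.0, 0.5)]

def focal_distance_m(convergence_deg: float) -> float:
    """Linearly interpolate the focal plane distance from the measured convergence angle."""
    angles = [a for a, _ in CALIBRATION]
    if convergence_deg <= angles[0]:
        return CALIBRATION[0][1]
    if convergence_deg >= angles[-1]:
        return CALIBRATION[-1][1]
    i = bisect.bisect_left(angles, convergence_deg)
    (a0, d0), (a1, d1) = CALIBRATION[i - 1], CALIBRATION[i]
    t = (convergence_deg - a0) / (a1 - a0)
    return d0 + t * (d1 - d0)

if __name__ == "__main__":
    # Eye images suggest the optical axes converge by about 3 degrees.
    print(f"estimated focal plane: {focal_distance_m(3.0):.2f} m")
```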
An aspect of the present invention relates to controlling the HWC 102 based on events detected through eye imaging. A wearer winking, blinking, moving his eyes in a certain pattern, etc. may, for example, control an application of the HWC 102. Eye imaging (e.g. as described herein) may be used to monitor the eye(s) of the wearer and once a pre-determined pattern is detected an application control command may be initiated.
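A simple pattern-matching step of the kind described above might look like the following sketch, assuming an eye-imaging classifier already reports discrete events such as blinks and winks with timestamps; the event names and command mapping are hypothetical.

```python
import time

# Hypothetical mapping from detected eye-event patterns to HWC application commands.
COMMANDS = {
    ("wink_left",): "next_page",
    ("wink_right",): "previous_page",
    ("blink", "blink", "blink"): "dismiss_content",
}

def match_command(recent_events, window_s=1.5):
    """Return a command if the most recent events within the time window match a known pattern."""
    now = time.time()
    events = tuple(name for name, t in recent_events if now - t <= window_s)
    for pattern, command in COMMANDS.items():
        if events[-len(pattern):] == pattern:
            return command
    return None

if __name__ == "__main__":
    t = time.time()
    # Three rapid blinks as reported by the (hypothetical) eye-imaging classifier.
    history = [("blink", t - 0.9), ("blink", t - 0.5), ("blink", t - 0.1)]
    print(match_command(history))     # -> dismiss_content
```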
An aspect of the invention relates to monitoring the health of a person wearing a HWC 102 by monitoring the wearer's eye(s). Calibrations may be made such that the normal performance of a wearer's eyes, under various conditions (e.g. lighting conditions, image light conditions, etc.), may be documented. The wearer's eyes may then be monitored through eye imaging (e.g. as described herein) for changes in their performance. Changes in performance may be indicative of a health concern (e.g. concussion, brain injury, stroke, loss of blood, etc.). If detected, the data indicative of the change or event may be communicated from the HWC 102.
Aspects of the present invention relate to security and access of computer assets (e.g. the HWC itself and related computer systems) as determined through eye image verification. As discussed herein elsewhere, eye imagery may be compared to known person eye imagery to confirm a person's identity. Eye imagery may also be used to confirm the identity of people wearing the HWCs 102 before allowing them to link together or share files, streams, information, etc.
A variety of use cases for eye imaging are possible based on technologies described herein. An aspect of the present invention relates to the timing of eye image capture. The timing of the capture of the eye image and the frequency of the capture of multiple images of the eye can vary depending on the use case for the information gathered from the eye image. For example, capturing an eye image to identify the user of the HWC may be required only when the HWC has been turned ON or when the HWC determines that the HWC has been put onto a wearer's head, to control the security of the HWC and the associated information that is displayed to the user. The orientation, movement pattern, stress, or position of the ear horns (or other portions of the HWC) can be used to determine that a person has put the HWC onto their head with the intention to use the HWC. Those same parameters may be monitored in an effort to understand when the HWC is removed from the user's head. This may enable a situation where the capture of an eye image for identifying the wearer is completed only when a change in the wearing status is identified. In a contrasting example, capturing eye images to monitor the health of the wearer may require images to be captured periodically (e.g. every few seconds, minutes, hours, days, etc.). For example, the eye images may be taken at one-minute intervals when the images are being used to monitor the health of the wearer and detected movements indicate that the wearer is exercising. In a further contrasting example, capturing eye images to monitor the health of the wearer for long-term effects may only require that eye images be captured monthly. Embodiments of the invention relate to selection of the timing and rate of capture of eye images to be in correspondence with the selected use scenario associated with the eye images. These selections may be done automatically, as with the exercise example above where movements indicate exercise, or these selections may be set manually. In a further embodiment, the selection of the timing and rate of eye image capture is adjusted automatically depending on the mode of operation of the HWC. The selection of the timing and rate of eye image capture can further be made in correspondence with input characteristics associated with the wearer, including age and health status, or sensed physical conditions of the wearer, including heart rate, chemical makeup of the blood, and eye blink rate.
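A capture-timing policy along the lines described above might be expressed as in the following sketch; the use-case names, intervals, and activity flag are illustrative assumptions.

```python
# Hypothetical schedule relating eye-image use cases to capture timing, mirroring the
# examples above: identification on wearing-status changes, health monitoring periodically
# during exercise, long-term monitoring roughly monthly. Values are illustrative only.
CAPTURE_POLICY = {
    "identify_user":    {"trigger": "on_wear_change", "interval_s": None},
    "health_exercise":  {"trigger": "periodic",       "interval_s": 60},                # once a minute
    "health_long_term": {"trigger": "periodic",       "interval_s": 30 * 24 * 3600},    # ~monthly
}

def capture_interval(use_case: str, is_exercising: bool = False):
    """Select the eye-image capture policy for the current use case and sensed activity."""
    if use_case == "health" and is_exercising:
        return CAPTURE_POLICY["health_exercise"]
    if use_case == "health":
        return CAPTURE_POLICY["health_long_term"]
    return CAPTURE_POLICY.get(use_case, {"trigger": "manual", "interval_s": None})

if __name__ == "__main__":
    print(capture_interval("identify_user"))
    print(capture_interval("health", is_exercising=True))
```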
In embodiments, the sensor that assesses the wearer's movements may be a GPS sensor, IMU, accelerometer, etc. The content position may be shifted from a neutral position to a position towards a side edge of the field of view as the forward motion increases. The content position may be shifted from a neutral position to a position towards a top or bottom edge of the field of view as the forward motion increases. The content position may shift based on a threshold speed of the assessed motion. The content position may shift linearly based on the speed of the forward motion. The content position may shift non-linearly based on the speed of the forward motion. The content position may shift outside of the field of view. In embodiments, the content is no longer displayed if the speed of movement exceeds a predetermined threshold and will be displayed again once the forward motion slows.
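As a rough illustration of speed-dependent content shifting, the sketch below maps an assessed forward speed to a lateral offset with an onset threshold, a linear ramp, and a removal speed; all numeric values are illustrative assumptions.

```python
def content_offset(speed_mps, fov_half_width=1.0, onset=1.0, full=4.0, hide_above=6.0):
    """Map forward speed to a lateral content offset (0 = neutral, 1 = FOV edge).

    Below `onset` the content stays in its neutral position; between `onset` and
    `full` it shifts linearly toward the edge; above `hide_above` it is removed
    entirely (returned as None). Units are meters per second; values are illustrative.
    """
    if speed_mps >= hide_above:
        return None                       # content no longer displayed
    if speed_mps <= onset:
        return 0.0
    fraction = min((speed_mps - onset) / (full - onset), 1.0)
    return fraction * fov_half_width

if __name__ == "__main__":
    for v in (0.5, 2.5, 5.0, 7.0):
        print(f"{v} m/s -> offset {content_offset(v)}")
```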
In embodiments, the change in content position may generally be referred to as shifting; it should be understood that the term shifting encompasses a process where the movement from one position to another within the see-through FOV or out of the FOV is visible to the wearer (e.g. the content appears to slowly or quickly move and the user perceives the movement itself), as well as movement from one position to another that is not visible to the wearer (e.g. the content appears to jump in a discontinuous fashion, or the content disappears and then reappears in the new position).
Another aspect of the present invention relates to removing the content from the field of view or shifting it to a position within the field of view that increases the wearer's view of the surrounding environment when a sensor causes an alert command to be issued. In embodiments, the alert may be due to a sensor or combination of sensors that sense a condition above a threshold value. For example, if an audio sensor detects a loud sound of a certain pitch, content in the field of view may be removed or shifted to provide a clear view of the surrounding environment for the wearer. In addition to the shifting of the content, in embodiments, an indication of why the content was shifted may be presented in the field of view or provided through audio feedback to the wearer. For instance, if a carbon monoxide sensor detects a high concentration in the area, content in the field of view may be shifted to the side of the field of view or removed from the field of view and an indication may be provided to the wearer that there is a high concentration of carbon monoxide in the area. This new information, when presented in the field of view, may similarly be shifted within or outside of the field of view depending on the movement speed of the wearer.
Another aspect of the present invention relates to identification of various vectors or headings related to the HWC 102, along with sensor inputs, to determine how to position content in the field of view. In embodiments, the speed of movement of the wearer is detected and used as an input for position of the content and, depending on the speed, the content may be positioned with respect to a movement vector or heading (i.e. the direction of the movement), or a sight vector or heading (i.e. the direction of the wearer's sight direction). For example, if the wearer is moving very fast the content may be positioned within the field of view with respect to the movement vector because the wearer is only going to be looking towards the sides of himself periodically and for short periods of time. As another example, if the wearer is moving slowly, the content may be positioned with respect to the sight heading because the user may more freely be shifting his view from side to side.
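A minimal sketch of this speed-dependent choice between the movement heading and the sight heading follows; the threshold value and function names are illustrative assumptions.

```python
def positioning_reference(speed_mps, fast_threshold=3.0):
    """Pick which heading the content should be locked to, per the logic above:
    fast movement -> movement heading, slow movement -> sight heading."""
    return "movement_heading" if speed_mps >= fast_threshold else "sight_heading"

def content_anchor(speed_mps, movement_heading_deg, sight_heading_deg):
    """Return the selected reference and the heading the content is positioned against."""
    ref = positioning_reference(speed_mps)
    heading = movement_heading_deg if ref == "movement_heading" else sight_heading_deg
    return ref, heading

if __name__ == "__main__":
    print(content_anchor(5.0, movement_heading_deg=90.0, sight_heading_deg=120.0))
    print(content_anchor(0.8, movement_heading_deg=90.0, sight_heading_deg=120.0))
```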
Another aspect of the present invention relates to damping a rate of content position change within the field of view. As illustrated in
Another aspect of the present invention relates to simultaneously presenting more than one content in the field of view of a see-through optical system of a HWC 102 and positioning one content with the sight heading and one content with the movement heading.
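The damping described above can be realized in many ways; the sketch below applies simple first-order (exponential) smoothing to a one-dimensional content position as one illustrative possibility, not as a required method.

```python
import math

def damped_position(current, target, dt_s, time_constant_s=0.5):
    """First-order (exponential) damping of a content position toward its target.

    A larger time constant makes the content lag rapid sight-heading changes more.
    The 1-D position and the values are illustrative; a real system would damp a
    2-D position within the FOV.
    """
    alpha = 1.0 - math.exp(-dt_s / time_constant_s)
    return current + alpha * (target - current)

if __name__ == "__main__":
    pos, target = 0.0, 10.0           # degrees within the FOV, for illustration
    for frame in range(5):
        pos = damped_position(pos, target, dt_s=0.1)
        print(f"frame {frame}: {pos:.2f} deg")
```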
In a further embodiment, in an operating mode such as when the user is moving in an environment, digital content is presented at the side of the user's see-through FOV so that the user can only view the digital content by turning their head. In this case, when the user is looking straight ahead, such as when the movement heading matches the sight heading, the see-through FOV does not include digital content. The user then accesses the digital content by turning their head to the side, whereupon the digital content moves laterally into the user's see-through FOV. In another embodiment, the digital content is ready for presentation and will be presented if an indication for its presentation is received. For example, the information may be ready for presentation, and if the sight heading or a predetermined position of the HWC 102 is achieved, the content may then be presented. The wearer may look to the side and the content may be presented. In another embodiment, the user may cause the content to move into an area in the field of view by looking in a direction for a predetermined period of time, blinking, winking, or displaying some other pattern that can be captured through eye imaging technologies (e.g. as described herein elsewhere).
In yet another embodiment, an operating mode is provided wherein the user can define sight headings wherein the associated see-through FOV includes digital content or does not include digital content. In an example, this operating mode can be used in an office environment where when the user is looking at a wall digital content is provided within the FOV, whereas when the user is looking toward a hallway, the FOV is unencumbered by digital content. In another example, when the user is looking horizontally digital content is provided within the FOV, but when the user looks down (e.g. to look at a desktop or a cellphone) the digital content is removed from the FOV.
Another aspect of the present invention relates to collecting and using eye position and sight heading information. Head worn computing with motion heading, sight heading, and/or eye position prediction (sometimes referred to as “eye heading” herein) may be used to identify what a wearer of the HWC 102 is apparently interested in and the information may be captured and used. In embodiments, the information may be characterized as viewing information because the information apparently relates to what the wearer is looking at. The viewing information may be used to develop a personal profile for the wearer, which may indicate what the wearer tends to look at. The viewing information from several or many HWC's 102 may be captured such that group or crowd viewing trends may be established. For example, if the movement heading and sight heading are known, a prediction of what the wearer is looking at may be made and used to generate a personal profile or portion of a crowd profile. In another embodiment, if the eye heading and location, sight heading and/or movement heading are known, a prediction of what is being looked at may be predicted. The prediction may involve understanding what is in proximity of the wearer and this may be understood by establishing the position of the wearer (e.g. through GPS or other location technology) and establishing what mapped objects are known in the area. The prediction may involve interpreting images captured by the camera or other sensors associated with the HWC 102. For example, if the camera captures an image of a sign and the camera is in-line with the sight heading, the prediction may involve assessing the likelihood that the wearer is viewing the sign. The prediction may involve capturing an image or other sensory information and then performing object recognition analysis to determine what is being viewed. For example, the wearer may be walking down a street and the camera that is in the HWC 102 may capture an image and a processor, either on-board or remote from the HWC 102, may recognize a face, object, marker, image, etc. and it may be determined that the wearer may have been looking at it or towards it.
The eye imaging system can also be used for the assessment of aspects of health of the user. In this case, information gained from analyzing captured images of the iris 5012 is different from information gained from analyzing captured images of the retina 5014. Images of the retina 5014 are captured using light 5357 that illuminates the inner portions of the eye including the retina 5014. The light 5357 can be visible light, but in an embodiment, the light 5357 is infrared light (e.g. wavelength 1 to 5 microns) and the camera 3280 is an infrared light sensor (e.g. an InGaAs sensor) or a low resolution infrared image sensor that is used to determine the relative amount of light 5357 that is absorbed, reflected or scattered by the inner portions of the eye. The majority of the light that is absorbed, reflected or scattered can be attributed to materials in the inner portion of the eye, including the retina, where there are densely packed blood vessels with thin walls so that the absorption, reflection and scattering are caused by the material makeup of the blood. These measurements can be conducted automatically when the user is wearing the HWC, either at regular intervals, after identified events or when prompted by an external communication. In a preferred embodiment, the illuminating light is near infrared or mid infrared (e.g. 0.7 to 5 microns wavelength) to reduce the chance for thermal damage to the wearer's eye. In another embodiment, the polarizer 3285 is antireflection coated to reduce reflections of the light 5357, the light 2969 or the light 3275 from this surface and thereby increase the sensitivity of the camera 3280. In a further embodiment, the light source 5355 and the camera 3280 together comprise a spectrometer wherein the relative intensity of the light reflected by the eye is analyzed over a series of narrow wavelengths within the range of wavelengths provided by the light source 5355 to determine a characteristic spectrum of the light that is absorbed, reflected or scattered by the eye. For example, the light source 5355 can provide a broad range of infrared light to illuminate the eye, and the camera 3280 can include a grating to laterally disperse the reflected light from the eye into a series of narrow wavelength bands that are captured by a linear photodetector, so that the relative intensity by wavelength can be measured and a characteristic absorbance spectrum for the eye can be determined over the broad range of infrared. In a further example, the light source 5355 can provide a series of narrow wavelengths of light (ultraviolet, visible or infrared) to sequentially illuminate the eye, and the camera 3280 includes a photodetector that is selected to measure the relative intensity of the series of narrow wavelengths in a series of sequential measurements that together can be used to determine a characteristic spectrum of the eye. The determined characteristic spectrum is then compared to known characteristic spectra for different materials to determine the material makeup of the eye. In yet another embodiment, the illuminating light 5357 is focused on the retina 5014 and a characteristic spectrum of the retina 5014 is determined, and the spectrum is compared to known spectra for materials that may be present in the user's blood. For example, in the visible wavelengths, 540 nm is useful for detecting hemoglobin and 660 nm is useful for differentiating oxygenated hemoglobin.
In a further example, in the infrared, a wide variety of materials can be identified as is known by those skilled in the art, including: glucose, urea, alcohol and controlled substances.
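As a rough illustration of comparing a determined characteristic spectrum to known spectra, the following sketch scores a measured spectrum against a small set of reference spectra by normalized correlation; the wavelengths and reference values are placeholders, not real spectroscopic data.

```python
import numpy as np

# Hypothetical reference absorbance spectra sampled at the same wavelengths as the
# measurement (arbitrary units); a real system would use a calibrated spectral library.
WAVELENGTHS_NM = np.array([540, 660, 940, 1300, 1600], dtype=float)
REFERENCES = {
    "oxygenated_hemoglobin":   np.array([0.90, 0.20, 0.30, 0.10, 0.05]),
    "deoxygenated_hemoglobin": np.array([0.85, 0.55, 0.25, 0.10, 0.05]),
    "glucose":                 np.array([0.05, 0.05, 0.20, 0.60, 0.80]),
}

def best_match(measured: np.ndarray) -> str:
    """Compare the measured characteristic spectrum to known spectra by normalized
    correlation and return the closest material."""
    m = (measured - measured.mean()) / (measured.std() + 1e-9)
    scores = {}
    for name, ref in REFERENCES.items():
        r = (ref - ref.mean()) / (ref.std() + 1e-9)
        scores[name] = float(np.dot(m, r) / len(m))
    return max(scores, key=scores.get)

if __name__ == "__main__":
    measured = np.array([0.88, 0.22, 0.28, 0.12, 0.06])   # illustrative measurement
    print(best_match(measured))    # expected: oxygenated_hemoglobin
```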
In embodiments, sight headings may be used in conjunction with eye headings, or eye and/or sight headings may be used alone. Sight headings can do a good job of predicting what direction a wearer is looking because many times the eyes are looking forward, in the same general direction as the sight heading. In other situations, eye headings may be a more desirable metric because the eye and sight headings are not always aligned. In embodiments herein, examples may be provided with the term “eye/sight” heading, which indicates that either or both eye heading and sight heading may be used in the example.
In embodiments, the process involves collecting eye and/or sight heading information from a plurality of head-worn computers that come into proximity with an object in an environment. For example, a number of people may be walking through an area and each of the people may be wearing a head-worn computer with the ability to track the position of the wearer's eye(s) as well as possibly the wearer's sight and movement headings. The various HWC-wearing individuals may then walk, ride, or otherwise come into proximity with some object in the environment (e.g. a store, sign, person, vehicle, box, bag, etc.). When each person passes by or otherwise comes near the object, the eye imaging system may determine if the person is looking towards the object. All of the eye/sight heading information may be collected and used to form impressions of how the crowd reacted to the object. A store may be running a sale and so the store may put out a sign indicating such. The storeowners and managers may be very interested to know if anyone is looking at their sign. The sign may be set as the object of interest in the area, and as people navigate near the sign, possibly determined by their GPS locations, the eye/sight heading determination system may record information relative to the environment and the sign. Once, or as, the eye/sight heading information is collected and associations between the eye headings and the sign are determined, feedback may be sent back to the storeowner, managers, advertiser, etc. as an indication of how well their sign is attracting people. In embodiments, the sign's effectiveness at attracting people's attention, as indicated through the eye/sight headings, may be considered a conversion metric and impact the economic value of the sign and/or the sign's placement.
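Aggregation of the collected eye/sight heading information into a crowd-level impression might be sketched as follows; the Observation fields, alignment tolerance, and dwell threshold are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One eye/sight heading sample from one HWC near the object of interest."""
    wearer_id: str
    heading_error_deg: float   # angle between the eye/sight heading and the direction to the sign
    dwell_s: float             # how long that alignment persisted

def sign_attention_report(observations, align_tol_deg=5.0, min_dwell_s=0.5):
    """Aggregate per-wearer observations into a simple crowd attention metric."""
    wearers = {o.wearer_id for o in observations}
    lookers = {o.wearer_id for o in observations
               if o.heading_error_deg <= align_tol_deg and o.dwell_s >= min_dwell_s}
    rate = len(lookers) / len(wearers) if wearers else 0.0
    return {"passers_by": len(wearers), "looked_at_sign": len(lookers), "attention_rate": rate}

if __name__ == "__main__":
    data = [Observation("a", 2.0, 1.2), Observation("b", 40.0, 0.1), Observation("c", 3.5, 0.8)]
    print(sign_attention_report(data))   # 2 of 3 wearers looked at the sign
```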
In embodiments, a map of the environment with the object may be generated by mapping the locations and movement paths of the people in the crowd as they navigate by the object (e.g. the sign). Layered on this map may be an indication of the various eye/sight headings. This may be useful in indicating where people were in relation to the object when they viewed the object. The map may also have an indication of how long people looked at the object from the various positions in the environment and where they went after seeing the object.
In embodiments, the process involves collecting a plurality of eye/sight headings from a head-worn computer, wherein each of the plurality of eye/sight headings is associated with a different pre-determined object in an environment. This technology may be used to determine which of the different objects attracts more of the person's attention. For example, if there are three objects placed in an environment and a person enters the environment, navigating his way through it, he may look at one or more of the objects and his eye/sight heading may persist on one or more objects longer than on others. This may be used in making or refining the person's personal attention profile, and/or it may be used in connection with other such people's data on the same or similar objects to determine an impression of how the population or crowd reacts to the objects. Testing advertisements in this way may provide good feedback on their effectiveness.
In embodiments, the process may involve capturing eye/sight headings once there is substantial alignment between the eye/sight heading and an object of interest. For example, the person with the HWC may be navigating through an environment and once the HWC detects substantial alignment or the projected occurrence of an upcoming substantial alignment between the eye/sight heading and the object of interest, the occurrence and/or persistence may be recorded for use.
In embodiments, the process may involve collecting eye/sight heading information from a head-worn computer and collecting a captured image from the head-worn computer that was taken at substantially the same time as the eye/sight heading information was captured. These two pieces of information may be used in conjunction to gain an understanding of what the wearer was looking at and possibly interested in. The process may further involve associating the eye/sight heading information with an object, person, or other thing found in the captured image. This may involve processing the captured image looking for objects or patterns. In embodiments, gaze time or persistence may be measured and used in conjunction with the image processing. The process may still involve object and/or pattern recognition, but it may also involve attempting to identify what the person gazed at for the period of time by more particularly identifying a portion of the image in conjunction with image processing.
In embodiments, the process may involve setting a pre-determined eye/sight heading from a pre-determined geospatial location and using them as triggers. In the event that a head-worn computer enters the geospatial location and an eye/sight heading associated with the head-worn computer aligns with the pre-determined eye/sight heading, the system may collect the fact that there was an apparent alignment and/or the system may record information identifying how long the eye/sight heading remains substantially aligned with the pre-determined eye/sight heading to form a persistence statistic. This may eliminate or reduce the need for image processing, as the triggers can be used without having to image the area. In other embodiments, image capture and processing is performed in conjunction with the triggers. In embodiments, the triggers may be a series of geospatial locations with corresponding eye/sight headings such that many spots can be used as triggers that indicate when a person entered an area in proximity to an object of interest and/or when that person actually appeared to look at the object.
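A trigger of the kind described above might be evaluated as in the following sketch, which accumulates a persistence statistic while the wearer is inside the trigger region and the eye/sight heading stays within tolerance; the flat-earth distance approximation and all numeric values are illustrative assumptions.

```python
import math

def within_radius(pos, trigger_pos, radius_m=25.0):
    """Rough flat-earth proximity test; adequate over tens of meters."""
    dlat = (pos[0] - trigger_pos[0]) * 111_320.0
    dlon = (pos[1] - trigger_pos[1]) * 111_320.0 * math.cos(math.radians(pos[0]))
    return math.hypot(dlat, dlon) <= radius_m

def update_persistence(sample, trigger, persistence_s, dt_s):
    """Accumulate how long the wearer's eye/sight heading stays aligned with the
    pre-determined heading while inside the trigger's geospatial region.

    sample = (lat, lon, heading_deg); trigger = (lat, lon, heading_deg, tol_deg).
    All names and values are illustrative.
    """
    lat, lon, heading = sample
    t_lat, t_lon, t_heading, tol = trigger
    aligned = (within_radius((lat, lon), (t_lat, t_lon))
               and abs((heading - t_heading + 180) % 360 - 180) <= tol)
    return persistence_s + dt_s if aligned else persistence_s

if __name__ == "__main__":
    trigger = (40.7128, -74.0060, 45.0, 5.0)
    persistence = 0.0
    for heading in (44.0, 46.0, 90.0, 43.0):
        persistence = update_persistence((40.71281, -74.00601, heading), trigger, persistence, dt_s=0.5)
    print(f"aligned for {persistence:.1f} s")   # 1.5 s of the 2.0 s sampled
```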
In embodiments, eye imaging may be used to capture images of both eyes of the wearer in order to determine the amount of convergence of the eyes (e.g. through technologies described herein elsewhere) to get an understanding of what focal plane is being concentrated on by the wearer. For example, if the convergence measurement suggests that the focal plane is within 15 feet of the wearer, then, even though the eye/sight headings may align with an object that is more than 15 feet away, it may be determined that the wearer was not looking at the object. If the object were within the 15-foot suggested focal plane, the determination may be that the wearer was looking at the object.
The three dimensionally positioned virtual target line can be recalculated periodically (e.g. every millisecond, second, minute, etc.) to reposition the environmentally position locked content 5912 to remain in-line with the virtual target line. This can create the illusion that the content 5912 is staying positioned within the environment at a point that is associated with the other person's location 5902 independent of the location of the first person 5908 wearing the HWC 102 and independent of the compass heading of the HWC 102.
In embodiments, the environmentally locked digital content 5912 may be positioned with an object 5904 that is between the first person's location 5908 and the other person's location 5902. The virtual target line may intersect the object 5904 before intersecting with the other person's location 5902. In embodiments, the environmentally locked digital content 5912 may be associated with the object intersection point 5904. In embodiments, the intersecting object 5904 may be identified by comparing the two persons' locations 5902 and 5908 with obstructions identified on a map. In embodiments, the intersecting object 5904 may be identified by processing images captured from a camera, or other sensor, associated with the HWC 102. In embodiments, the digital content 5912 has an appearance that is indicative of being at the location of the other person 5902, even when displayed at the location of the intersecting object 5904, to provide a clearer indication of the other person's position 5902 in the FOV 5914.
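A periodic recalculation of the virtual target line and the corresponding FOV placement of the environmentally locked marker might be sketched as follows; the flat-earth geometry, field-of-view width, and return conventions are illustrative assumptions.

```python
import math

def target_line(first_pos, other_pos):
    """Bearing (degrees from north) and approximate range (m) from the wearer to the other person."""
    dlat = (other_pos[0] - first_pos[0]) * 111_320.0
    dlon = (other_pos[1] - first_pos[1]) * 111_320.0 * math.cos(math.radians(first_pos[0]))
    return math.degrees(math.atan2(dlon, dlat)) % 360, math.hypot(dlat, dlon)

def content_fov_position(first_pos, other_pos, compass_heading_deg, fov_deg=30.0):
    """Re-evaluated periodically: returns the horizontal FOV position (-1..1) of the
    environmentally locked marker, or None when the target line falls outside the FOV."""
    bearing, distance_m = target_line(first_pos, other_pos)
    offset = (bearing - compass_heading_deg + 180) % 360 - 180
    if abs(offset) > fov_deg / 2:
        return None, distance_m
    return offset / (fov_deg / 2), distance_m

if __name__ == "__main__":
    first, other = (40.7128, -74.0060), (40.7137, -74.0050)   # illustrative positions
    print(content_fov_position(first, other, compass_heading_deg=40.0))
```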
Presented object B 6020 is aligned with a different virtual target line than presented object A 6018. Presented object B 6020 is also presented at content position B 6004, at a different focal plane than content position A 6012. Presented content B 6020 is presented at a further focal plane, which is indicative that the other person 5902 is physically located at a further distance. If the focal planes are sufficiently different, the content at position A will come into focus at a different time than the content at position B because the two focal planes require different focus from the eye 6002.
Continuing to refer to
BlueForce member 6108 is obscured from the primary BlueForce member's 6102 view by an obstacle that is in close proximity to the obscured member 6108. As depicted, the obscured member 6108 is in a building but close to one of the front walls. In this situation, the digital content provided in the FOV of the primary member 6102 may be indicative of the general position of the obscured member 6108 and the digital content may indicate that, while the other person's location is fairly well marked, it is obscured so it is not as precise as if the person was in direct view. In addition, the digital content may be virtually positionally locked to some feature on the outside of the building that the obscured member is in. This may make the environmental locking more stable and also provide an indication that the location of the person is somewhat unknown.
BlueForce member 6110 is obscured by multiple obstacles. The member 6110 is in a building and there is another building 6112 in between the primary member 6102 and the obscured member 6110. In this situation, the digital content in the FOV of the primary member will be spatially quite short of the actual obscured member and as such the digital content may need to be presented in a way that indicates that the obscured member 6110 is in a general direction but that the digital marker is not a reliable source of information for the particular location of obscured member 6110.
Another aspect of the present invention relates to predicting the movement of BlueForce members to maintain proper virtual marking of the BlueForce member locations.
Another aspect of the present invention relates to monitoring the health of BlueForce members. Each BlueForce member may be automatically monitored for health and stress events. For example, the members may have a watchband as described herein elsewhere, or other wearable biometric monitoring device, and the device may continually monitor the biometric information and predict health concerns or stress events. As another example, the eye imaging systems described herein elsewhere may be used to monitor pupil dilations, as compared to normal conditions, to predict head trauma. Each eye may be imaged to check for differences in pupil dilation for indications of head trauma. As another example, an IMU in the HWC 102 may monitor a person's walking gait, looking for changes in pattern, which may be an indication of head or other trauma. Biometric feedback from a member indicative of a health or stress concern may be uploaded to a server for sharing with other members, or the information may be shared with local members, for example. Once shared, the digital content in the FOV that indicates the location of the person having the health or stress event may include an indication of the health event.
Another aspect of the present invention relates to virtually marking various prior acts and events. For example, as depicted in
Another aspect of the present invention relates to the physical location at which digital content is going to be presented to a person wearing a HWC 102. In embodiments, content is presented in a FOV of a HWC 102 when the HWC 102 is at a physical location that was selected based on personal information particular to the wearer of the HWC 102. In embodiments, the physical location is identified by a geo-spatial location and an attribute in the surroundings proximate the geo-spatial location. The attribute may be something that more precisely places the content within the environment located at the geo-spatial location. The attribute may be selected such that content appears in a hallway, office, near a billboard, rooftop, outside wall, object, etc. Personal information relating to the person may be stored such that it can be retrieved during a process of determining at what physical location in the world certain digital content should be presented to the person. In embodiments, the content may relate to the physical location. In other embodiments, the content does not necessarily relate to the physical location. In instances where the physical location is selected based on personal information and the content does not relate to the location, the location may have been selected because it is of a type at which the person is likely to spend more time viewing or interacting with content of the type to be presented.
In embodiments, a method of presenting digital content in a FOV of a HWC 102 may include identifying that the HWC 102 has arrived at a physical location, wherein the physical location is pre-determined based on personal information relating to the person wearing the HWC, and presenting the digital content in relation to an attribute in the surroundings where the attribute was pre-selected based on the personal information. The personal information may relate to personal attributes, demographics, behaviors, prior visited locations, stored personal locations, preferred locations, travel habits, etc. For example, the person wearing the HWC 102 may frequent a venue often (e.g. a place of work), and the system may present content to the person when he arrives at the venue. The type of content may also be particular to the venue, or other location selection criteria, such that the person is more apt to view and/or interact with the content. The content, for example, may relate to services or products relating to the person's work and as such the system may present the content at or near the person's place of work under the assumption that the person is going to be more interested in content relating to his work when he is at or near his place of work. In another example, content may be presented to the person when the person is passing by a sports complex because the person is generally characterized as being interested in sports. This presentation may be based on the assumption that the person may be more interested in content presented in connection with a venue that he finds interesting.
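One way such a process might be organized is sketched below, with a hypothetical personal profile, placement rules, and proximity helper standing in for the stored personal information and location services described above.

```python
import math

# Hypothetical personal profile and placement rules; in a real system these would be
# derived from the wearer's stored personal information as described above.
PROFILE = {"workplace": (40.7580, -73.9855), "interests": ["sports"]}

PLACEMENTS = [
    {"location": PROFILE["workplace"], "radius_m": 100, "attribute": "entrance wall",
     "content": "work-related services ad"},
    {"location": (40.7505, -73.9934), "radius_m": 150, "attribute": "external wall",
     "content": "sports complex promotion", "requires_interest": "sports"},
]

def near(a, b, radius_m):
    """Rough flat-earth proximity test between two (lat, lon) pairs."""
    d = math.hypot((a[0] - b[0]) * 111_320.0,
                   (a[1] - b[1]) * 111_320.0 * math.cos(math.radians(a[0])))
    return d <= radius_m

def content_for_position(pos, profile, placements):
    """Return the content items eligible for presentation at the current position."""
    eligible = []
    for p in placements:
        interest = p.get("requires_interest")
        if interest and interest not in profile["interests"]:
            continue
        if near(pos, p["location"], p["radius_m"]):
            eligible.append((p["content"], p["attribute"]))
    return eligible

if __name__ == "__main__":
    print(content_for_position((40.7579, -73.9856), PROFILE, PLACEMENTS))
```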
In embodiments, the placement of the digital content may be based on the selection of an environment's attribute, which was selected based on personal information relating to the person wearing the HWC 102. The personal information may suggest that the person would be more apt to interact with content if it is presented indoors, outdoors, in a room, in a hallway, on a blank wall, over a TV, near a table, near a chair, near a vehicle, while sitting, while standing, while walking, etc. For example, a 50 year old man may be more apt to interact with content that is presented in an area where he will likely be sitting, while a 17 year old man may be more apt to interact with content while he is moving. This may cause the system to choose an internal wall of a building for the presentation to the 50 year old and an external wall of the building for the presentation to the 17 year old. The 50 year old may be more apt to interact with content presented near an entrance to his place of work and the 17 year old may be more apt to interact with content when presented proximate a vehicle. Each of these attributes near the geo-spatial location are eligible candidates for the presentation of the content and the selection of the one(s) to be used may be based on the personal information known about the person wearing the HWC 102.
As illustrated in
As illustrated in
As illustrated in
In embodiments, a person's movements may be traced to identify the areas of general or particular interest to the person. The traced movements may indicate a place of work or workday place of interest, evening or home locations, driving habits, transportation habits, store locations and identities, etc. and the traced movements may influence the physical location for the content presentation. The movements may be traced by tracing GPS movements, IMU movements, or other sensor feedback from the HWC 102 or other device, for example.
In embodiments, external user interfaces and gestures as described herein elsewhere may be used to interact with the content and/or assist in the positioning of the content. For example, the content may be set to appear when a user is at a particular location based on the user's personal traits or information and the user may then use an external user interface to interact with the content to reposition the content within the environment, for posting at another environment, sharing the content, storing the content for later viewing, etc. The content, for example, may appear in proximity to a doorway, as illustrated in
In embodiments, the display presentation technologies as described herein elsewhere may be used in connection with the presentation technologies based on physical placement based on personal information. For example, the content may be presented at a physical location selected based on the person's personal information and the content may be presented to be viewed at a particular focal plane such that the user perceives it in focus when the user looks at a distance associated with the focal plane. The content may also be presented when the person is at the physical location; however, the content presentation may further be based on obstacle management technologies. In the event that the user is proximate the physical location where the content is to be presented, an evaluation of obstacles in the area may be completed and then the content presentation may be altered based on any obstacles that obscure the user's view of the content presentation location.
In embodiments, the content presented at a physical location based on personal information may further be positioned in the FOV of the HWC 102 based on sensor information as described herein elsewhere. For example, the sensor may indicate that the person is moving and the content may be re-positioned within the FOV, out of the FOV, or removed entirely, based on the detected movement. In the event that the person is deemed to be moving forward quickly, for example, it may be assumed that the person wants the center of the FOV clear so the content may be shifted towards the side of the FOV. The content may be of a type that is sensor dependent, or not sensor dependent, and it may be presented with other content that is of the opposite dependency. In embodiments, the content position may move depending on the sight heading and if the sight heading is rapidly moving the position of the content in the FOV may move but the positioning may also be damped, as described herein elsewhere. The physical location presentation based on personal information may be presented in a ‘side panel’ such that it is presented when the person looks to the side or turns his head to a side when the person is at the physical location.
In embodiments, eye imaging and sight heading technologies as described herein elsewhere may be used in connection with the physical location content presentation based on personal information. For example, the content may be ready for presentation once the person has reached the physical location identified but the presentation may be conditioned on the person looking in a particular direction. The direction may be indicative of the person's eye heading or sight heading.
Another aspect of the present invention relates to providing a user interface for a participant, such as a friend or other affiliated person, such that the participant can send content to a wearer of a HWC 102 to be presented at a physical location selected by the participant. The user interface may be a selectable criteria presented when the participant is sending content to a friend. For example, the participant may want to send a video to a friend from the participant's phone, HWC, or other computing system, and the participant may select an option to have the content presented at a point in time when the friend is in proximity to a physical location. Similar to other embodiments disclosed herein elsewhere, the user interface may further provide for the selection of an attribute or attribute type to target a more particular area proximate the geo-spatial location. For example, the friend may indicate in the user interface that the content is to be delivered in an office, internal wall, external wall, etc. proximate the geo-spatial location. In embodiments, a particular attribute may be targeted, such as a particular wall or near a particular window. In other embodiments, the attribute may be a type of attribute, such as a non-specific wall or doorway. Setting a type of attribute allows flexibility in that the HWC 102 can then select the particular attribute for placement of the content. The placement selection may be based on a priority set by the sending participant or user, for example. The presentation process may further involve presenting the content influenced by the recipient's personal information or other information as disclosed herein. For example, the participant may select to send the video to a friend with a presentation setting such that the content is presented near the person's place of work, coffee shop, or other location on the way to the place of work.
Embodiments of the present invention may involve a computer implemented process that involves receiving a personally selected geo-spatial location from a sender of digital content for the presentation of the digital content in a recipient's head-worn see-through display; and presenting, based on data indicating that the recipient is near the geo-spatial location, the digital content in the head-worn see-through display such that the digital content is perceived by the recipient to be associated with a physical attribute proximate the geo-spatial location. In embodiments, the personal selection may be facilitated through the use of a user interface with menu selections. The menu selections may include an indication of the recipient's frequented locations. In embodiments, the digital content relates to the selected geo-spatial location. For example, the digital content may be an advertisement and it may relate to a sale at a location proximate the presentation location. In other embodiments, the digital content may not relate to the selected geo-spatial location. For example, a friend may have sent the recipient a virtual gift or personal message and the friend would prefer to have the recipient see, read and/or interact with the content at a pre-determined location.
In embodiments, the method further includes presenting the digital content when the head-worn computer is aligned in a pre-selected direction. For example, the sight heading, as described herein elsewhere, may align with a pre-set vector or place seen from the geo-spatial location and then the content may be presented. In embodiments, the step of presenting further includes presenting the digital content when an eye of the user is predicted to be aligned in a pre-selected direction. For example, the user's eye heading, as described herein elsewhere, such as through the use of eye imaging, may align with a pre-set vector or place seen from the geo-spatial location and then the content may be presented.
In embodiments, the user may select the digital content with an external user interface or gesture, as described herein elsewhere, to re-position or otherwise interact with the digital content. In embodiments, the step of presenting may further include identifying a potentially obstructive view object, as described herein elsewhere, and altering the digital content to indicate the potentially obstructive view. In embodiments, the step of presenting may include altering a position of the content presentation based on sensor feedback, as described herein elsewhere. For example, the sensor feedback may be indicative of the user moving forward quickly so the content may be shifted towards an edge of the FOV of the HWC to provide the user with a more clear view of his surroundings. In embodiments, the step of presenting may also include presenting the content at a location based on the recipient's personal information, or on other information as described herein.
Another aspect of the present invention relates to a user presetting preferred content receipt physical locations based on the type of content received. A HWC 102 user may preset physical locations, including geo-spatial locations and particular locations in proximity to the geo-spatial location, for the presentation of certain types of content. For example, the user may set a location near or at his place of work for the presentation of work content. The user may set nutrition and exercise content for presentation at the gym or home or while out dining. This technology provides the user with a way of organizing the presentation of content such that the user will be more apt to review and/or interact with the content and thus the content may be of more use to the user.
In embodiments, a computer implemented process in accordance with the principles of the present invention involves establishing a plurality of content presentation settings wherein each of the plurality of settings includes an indication of a physical location where a recipient desires a type of digital content to be presented in the recipient's head-worn see-through display; and presenting content intended for delivery to the recipient when data indicates that the head-worn see-through display is proximate one of the plurality of physical locations when the content type corresponds to the type of digital content to be presented at the physical location based on the content presentation setting.
In embodiments, the plurality of content presentation settings may be established through the recipient's selections on a settings user interface. The plurality of content presentation settings may be established based on the recipient's prior interactions with digital content. The presented content may relate to the physical location where the presentation is made; the presented content may not relate to the physical location where the presentation is made. The indication of physical location may include a geo-spatial location and an object attribute type. The object attribute type may be a specific object's attribute. The step of presenting may also include presenting the content when data indicates that the head-worn see-through display has a sight heading aligned with a selected object attribute. The step of presenting may also include presenting the content when data indicates that the head-worn see-through display has an eye heading aligned with a selected object attribute. The process may also include receiving an indication that the recipient has selected the content with an external user interface to re-position the digital content. The step of presenting may also include identifying a potentially obstructive view object and altering the content to indicate the potentially obstructive view. The step of presenting may further include altering a position of the presentation based on sensor feedback. The sensor feedback may be indicative of the user moving forward. The step of presenting may also include presenting based on the recipient's personal information. The step of presenting may also include presenting based on a sender's location specific presentation request. Embodiments of the present invention involve processes described herein elsewhere.
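A content-type-to-location lookup of the kind described above might be sketched as follows; the preset locations, radii, attribute names, and the proximity helper are illustrative assumptions.

```python
import math

# Hypothetical recipient presets: content type -> (geo-spatial location, radius, attribute type).
PRESENTATION_SETTINGS = {
    "work":      {"location": (40.7580, -73.9855), "radius_m": 100, "attribute": "office wall"},
    "nutrition": {"location": (40.7540, -73.9840), "radius_m": 80,  "attribute": "gym entrance"},
}

def near(a, b, radius_m):
    """Rough flat-earth proximity test between two (lat, lon) pairs."""
    d = math.hypot((a[0] - b[0]) * 111_320.0,
                   (a[1] - b[1]) * 111_320.0 * math.cos(math.radians(a[0])))
    return d <= radius_m

def should_present(content_type, current_pos, settings):
    """Return the target attribute when the HWC is proximate the preset location whose
    content type matches the incoming content; otherwise return None (hold the content)."""
    setting = settings.get(content_type)
    if setting is None:
        return None
    if near(current_pos, setting["location"], setting["radius_m"]):
        return setting["attribute"]
    return None

if __name__ == "__main__":
    print(should_present("work", (40.7581, -73.9854), PRESENTATION_SETTINGS))       # office wall
    print(should_present("nutrition", (40.7581, -73.9854), PRESENTATION_SETTINGS))  # None
```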
Another aspect of the present invention relates to providing supplemental content to provide hints, guidance, and/or directions to a physical location where primary content may be presented. There are situations where content is scheduled to be presented to the wearer of a HWC 102 when the wearer enters a region or physical location or looks towards a physical location and there are times when it is useful to provide additional information outside or inside of the region to provide the wearer with indications that the primary content may be presented. The supplemental content may indicate that the wearer has a personal message waiting to be viewed, an advertisement waiting to be viewed, or other information waiting to be viewed.
In embodiments, a computer operated process may involve pre-determining a physical environment position for the presentation of primary virtual content, wherein the primary virtual content is presented in a see-through head-worn display; and pre-determining a position for each of a plurality of directionally informative content, wherein each of the plurality of directionally informative content is associated with a different physical environment position and indicates to a person wearing the see-through head-worn computer in what direction the primary virtual content will be found. The step of pre-determining the physical environment position for the presentation of the primary virtual content may involve determining the physical environment position based on personal information relating to the person. The step of pre-determining the physical environment position for the presentation of the primary virtual content may involve determining the physical environment position based on a content sender's requested delivery criteria. The step of pre-determining the physical environment position for the presentation of the primary virtual content may involve determining the physical environment position based on the person's pre-set content delivery preferences. In embodiments, each of the plurality of directionally informative content may include video content. In embodiments, each of the plurality of directionally informative content may include an image. The physical environment position for the presentation of primary virtual content may include a geo-spatial position, environment attribute type, specific environment attribute, etc. In embodiments, the primary virtual content relates to the physical environment position for the presentation of primary virtual content. In other embodiments, the primary virtual content does not relate to the physical environment position for the presentation of primary virtual content. The primary virtual content may be presented when data indicates that the head-worn see-through display is proximate the physical environment position for the presentation of primary virtual content. The step of presenting may also include presenting the content when data indicates that the head-worn see-through display has an eye heading aligned with a selected object attribute. The step of presenting may also include presenting the content when data indicates that the head-worn see-through display has a sight heading aligned with a selected object attribute. In embodiments, the process may also include receiving an indication that the recipient has selected the primary virtual content with an external user interface to re-position the digital content. In embodiments, the step of presenting may also include identifying a potentially obstructive view object and altering the primary virtual content to indicate the potentially obstructive view. The step of presenting may also include altering a position of the presentation based on sensor feedback. The sensor feedback may be indicative of the user moving forward. Embodiments of the present invention involve processes described herein elsewhere.
Another aspect of the present invention relates to sending a group message to a plurality of recipients, where the message is presented to each of the group's recipients when each individual recipient enters a respective physical location. The physical location at which the message is presented may be based in part on the sender's preferences and in part on the recipient's preferences. As described herein, senders and recipients may have physical location presentation preferences and the system may reconcile these preferences before presenting the content. In a group message situation, in embodiments, a server may be involved in reconciling the sender's presentation preferences and each of the recipients' preferences.
In embodiments, a computer operated process may involve receiving content to be delivered to a plurality of recipients, wherein each recipient of the plurality of recipients has a preference for the physical location at which the content is to be presented in the recipient's see-through head-worn display and a sender of the content has a preference for the physical location at which the content is to be presented to each recipient of the plurality of recipients; identifying a final physical location for the presentation of the content for each recipient of the plurality of recipients, wherein the final physical location is based on both the recipient's preference and the sender's preference; and causing the content to be presented to each recipient of the plurality of recipients when each recipient is proximate the final physical location identified for the recipient. In embodiments, the sender's preference overrides the recipient's preference. In embodiments, the recipient's preference overrides the sender's preference. In embodiments, the final physical location represents a location that is within acceptable positions identified for the recipient and is a priority position for the sender. In embodiments, the physical location preference for at least one recipient of the plurality of recipients is established based on personal information relating to the at least one recipient. In embodiments, the physical location preference for at least one recipient of the plurality of recipients is established based on the at least one recipient's selections. In embodiments, the physical location preference for the sender is established based on the sender's selection in a user interface. The physical location for at least one recipient of the plurality of recipients may include a geo-spatial position. The physical location for at least one recipient of the plurality of recipients may include an environment attribute type. The physical location for at least one recipient of the plurality of recipients may include an environment attribute. The content may or may not relate to the physical location for at least one recipient of the plurality of recipients. The content may be presented when data indicates that the head-worn see-through display is proximate the final physical location for each of the recipients of the plurality of recipients. In embodiments, the step of presenting may also include presenting the content when data indicates that the head-worn see-through display has an eye heading aligned with a selected object attribute. The step of presenting may also include presenting the content when data indicates that the head-worn see-through display has a sight heading aligned with a selected object attribute. The process may also include receiving an indication that at least one recipient of the plurality of recipients has selected the content with an external user interface to re-position the digital content. In embodiments, the step of presenting may also include, for at least one recipient of the plurality of recipients, identifying a potentially obstructive view object and altering the primary virtual content to indicate the potentially obstructive view. The step of presenting may also include altering a position of the presentation based on sensor feedback. The sensor feedback may be indicative of the user moving forward.
In embodiments, the content also has a presentation location preference, and the step of identifying the final physical location for each recipient of the plurality of recipients may also be based on the content presentation location preference. Embodiments of the present invention involve processes described herein elsewhere.
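A minimal sketch of how a server might reconcile these preferences follows. The function names, the specific reconciliation rule (walking the sender's priority list and keeping only locations the recipient finds acceptable, with the content's own preference as a tie-breaker), and the example locations are assumptions for illustration only and are not the disclosed reconciliation method.

```python
# Illustrative sketch only: server-side reconciliation of location
# preferences for a group message delivered to multiple HWC recipients.
from typing import Dict, List, Optional


def reconcile_location(sender_priority: List[str],
                       recipient_acceptable: List[str],
                       content_preference: Optional[str] = None) -> Optional[str]:
    """Return the final physical location for one recipient, or None if the
    sender's and recipient's preferences cannot be reconciled."""
    acceptable = set(recipient_acceptable)
    # Keep the sender's locations, in priority order, that the recipient accepts.
    candidates = [loc for loc in sender_priority if loc in acceptable]
    if not candidates:
        return None
    if content_preference in candidates:
        return content_preference  # content-level preference breaks ties
    return candidates[0]


def plan_group_delivery(sender_priority: List[str],
                        recipients: Dict[str, List[str]],
                        content_preference: Optional[str] = None) -> Dict[str, Optional[str]]:
    """Compute a final presentation location for each recipient of the group message."""
    return {name: reconcile_location(sender_priority, prefs, content_preference)
            for name, prefs in recipients.items()}


# Hypothetical example: the message is shown to Ann in her kitchen and to Bob
# at his office, each when that recipient arrives at the reconciled location.
plan = plan_group_delivery(["kitchen", "office"],
                           {"ann": ["kitchen", "car"], "bob": ["office"]})
```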
Another aspect of the present invention relates to the activation of a marker recognition system for a HWC 102. The marker recognition activation process includes identifying when the HWC 102 is physically proximate a location where content is to be presented in the HWC 102. This process can save power by reducing the amount of time the marker recognition system is active. As described in connection with other embodiments herein, content may be set to be presented to a HWC 102 when the HWC 102 is proximate a physical location. Further embodiments may involve identifying to the HWC 102 at what location(s) it should expect to find markers for the presentation of content. With the locations pre-identified to the HWC 102, the HWC 102 can activate the marker recognition system when proximate the pre-identified locations.
In a further embodiment, a haptic alert may be presented to the user when the HWC nears a geospatial location that has associated content to be presented to the user. The haptic alert can be in the form of a vibration, electric stimulus or a change of the displayed image such as a flash, etc.
In embodiments, a computer operated process may involve pre-setting a geo-spatial location where content will be displayed to a user of a see-through head-worn display; establishing a region proximate the geo-spatial location; and causing a marker recognition system of the head-worn see-through display to activate when data indicates that the see-through head-worn display is within the region, wherein the marker recognition system monitors a surrounding environment to identify a marker that will act as a virtual anchor for the presentation of the content. In embodiments, the marker may be a pre-established physical attribute. In embodiments, the marker is a priority marker selected from a plurality of markers, wherein the priority marker was identified by scanning the surrounding environment in a search for the plurality of markers. In embodiments, the geo-spatial location may be selected based on personal information known about the user. In embodiments, the geo-spatial location may be selected based on a presentation preference setting established by the user. In embodiments, the geo-spatial location may be selected based on a preference of a sender of the content. In embodiments, the geo-spatial location may be selected based on a preference setting of the content. In embodiments, the content may or may not relate to the geo-spatial location. In embodiments, the content may have been sent by another user characterized as a friend of the user through a known affiliation. The content may be a text message, image file, video, etc. In embodiments, informational content may be presented to the user in the see-through head-worn display to indicate to the user that primary content is queued for presentation and will be presented when the user goes to the presentation location. The informational content may also include an indication of a direction in which the user will be presented with the content. Embodiments of the present invention involve processes described herein elsewhere.
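The sketch below illustrates, under stated assumptions, how such region-gated activation might be implemented. The haversine_m helper, the fixed region radius, and the MarkerRecognitionGate class are hypothetical names chosen for the example and do not describe a particular HWC 102 implementation.

```python
# Illustrative sketch only: gating a (hypothetical) marker recognition
# subsystem on a region around a pre-set geo-spatial location so the
# camera-based marker scan only runs when content could be anchored nearby.
from math import asin, cos, radians, sin, sqrt


def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in meters between two lat/lon points."""
    r = 6371000.0
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))


class MarkerRecognitionGate:
    def __init__(self, content_lat, content_lon, region_radius_m=75.0):
        self.lat, self.lon, self.radius = content_lat, content_lon, region_radius_m
        self.active = False  # marker recognition starts powered down

    def update(self, hwc_lat, hwc_lon):
        """Called on each location fix; toggles the marker recognition system."""
        inside = haversine_m(hwc_lat, hwc_lon, self.lat, self.lon) <= self.radius
        if inside and not self.active:
            self.active = True    # start scanning the environment for markers
        elif not inside and self.active:
            self.active = False   # stop scanning to save power
        return self.active
```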
Another aspect of the present invention relates to confirming that content intended to be presented in a HWC 102 when the HWC 102 is proximate a physical location is actually presented and/or viewed. Senders of content tend to be interested in knowing whether the content was delivered, presented, viewed, interacted with, etc. This may be important to advertisers, businesses, individuals, etc. With content intended to be delivered at a point in the future and dependent on the user being proximate a physical location, it may be even more important to have a presentation confirmation system in accordance with the principles of the present invention.
In embodiments, a computer operated process may involve pre-setting a geo-spatial location where content will be displayed to a user of a see-through head-worn display; establishing a region proximate the geo-spatial location; presenting the content to the user in the see-through head-worn display when data indicates that the head-worn see-through display has entered the region; and causing a presentation confirmation to be communicated to a server based on the fact that the content was presented in the head-worn see-through display. In embodiments, the user's identity may be verified through biometric information such that the presentation confirmation represents a confirmation that the content was presented to a verified user. The user's identity may have been identified through eye imaging. In embodiments, the step of presenting to the user may also involve presenting when data indicates that the head-worn see-through display was aligned with a pre-established sight heading. The step of presenting to the user may also include presenting when data indicates that the user's eye was aligned with a pre-established eye heading. The presentation confirmation may be communicated from the server to a sender of the content. The sender may be a person characterized as a friend of the user through a known affiliation, an advertiser, etc. The presentation confirmation may include a persistence statistic indicative of how long the user viewed the content. The persistence statistic may be based on a recorded sight heading relating to the head-worn see-through display. The persistence statistic may be based on a recorded eye heading relating to the user's eye position. In embodiments, the geo-spatial location may be selected based on personal information known about the user. The geo-spatial location may be selected based on a presentation preference established for the user. The geo-spatial location may be selected based on a presentation preference of a sender of the content. The geo-spatial location may be selected based on a preference setting relating to the content. Embodiments of the present invention involve processes described herein elsewhere.
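For illustration only, the following sketch shows one way a persistence statistic and a presentation confirmation payload could be assembled from recorded heading samples. The field names, the 10-degree alignment tolerance, and the JSON format are assumptions for the example rather than a disclosed message format.

```python
# Illustrative sketch only: building a presentation confirmation record with
# a persistence statistic derived from recorded sight or eye heading samples.
import json
import time
from typing import List, Tuple


def persistence_seconds(samples: List[Tuple[float, float]],
                        target_heading: float,
                        tolerance_deg: float = 10.0) -> float:
    """samples: (timestamp_s, heading_deg). Sums the time intervals during which
    the recorded heading stayed within tolerance of the content's heading."""
    total, prev_t, prev_ok = 0.0, None, False
    for t, heading in samples:
        ok = abs((heading - target_heading + 180) % 360 - 180) <= tolerance_deg
        if prev_t is not None and prev_ok:
            total += t - prev_t
        prev_t, prev_ok = t, ok
    return total


def confirmation_payload(content_id: str, verified_user: bool,
                         persistence: float) -> str:
    """Serialize the confirmation the HWC would communicate to the server."""
    return json.dumps({
        "content_id": content_id,
        "presented_at": time.time(),
        "user_verified_by_eye_imaging": verified_user,
        "persistence_seconds": round(persistence, 2),
    })
```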
Although embodiments of HWC have been described in language specific to features, systems, computer processes and/or methods, the appended claims are not necessarily limited to the specific features, systems, computer processes and/or methods described. Rather, the specific features, systems, computer processes and/or methods are disclosed as non-limiting example implementations of HWC. All documents referenced herein are hereby incorporated by reference.
This application is a continuation of and claims the priority of the following patent application: U.S. patent application Ser. No. 14/300,387, entitled CONTENT PRESENTATION IN HEAD WORN COMPUTING, filed Jun. 10, 2014. U.S. patent application Ser. No. 14/300,387 is a continuation-in-part of and claims the priority of U.S. patent application Ser. No. 14/299,474, entitled CONTENT PRESENTATION IN HEAD WORN COMPUTING, filed Jun. 9, 2014. All of the above applications are incorporated herein by reference in their entirety.
Number | Date | Country
---|---|---
20150355468 A1 | Dec 2015 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 14300387 | Jun 2014 | US
Child | 14309233 | | US
Relation | Number | Date | Country
---|---|---|---
Parent | 14299474 | Jun 2014 | US
Child | 14300387 | | US