The present disclosure relates to wearable, near-eye displays with particular emphasis on see-through near-eye displays having high optical performance that are easy to fabricate, aesthetically pleasing, compact and lightweight enough to wear comfortably.
Progress in miniaturization and in the capability of electronic image display devices is a key enabling technology for compact, high-performance near-eye displays (NEDs), which have been extensively researched in recent years in view of increasingly popular augmented reality (AR) applications.
A challenge in the design of NEDs for AR applications has been to make such a system truly “wearable”. For this purpose, the NED system must present high quality virtual images, be comfortable to wear, and not look awkward in appearance. Further, the system must not impair the user's perception of real-world views and be sufficiently rugged for daily handling. All these requirements must be met under the constraints of low cost in mass production for wide adoption.
There are a number of prior art NEDs which attempt to meet the above requirements with varying degrees of success. See, for example, U.S. Pat. Nos. 5,696,521, 5,886,822 and 8,508,851B2 whose exemplary NEDs are shown in
The resulting off-axis near-eye display system is undesirably difficult to fabricate, looks very different from ordinary eyeglasses, and is often too bulky, and thus not aesthetically pleasing from a consumer standpoint. For example, U.S. Pat. No. 9,134,535B2 discloses an exemplary NED as illustrated in
Another challenge in the design of prior art near-eye display systems has been the need to increase the system's field of view (FOV) so more information can be presented to the user for an improved user experience.
Optically, the FOV of a near-eye display can be increased merely by increasing the image display device size while keeping the system effective focal length (EFL) and numerical aperture (NA) unchanged. The product of NA and EFL defines the radius of the NED exit pupil within which an unobstructed virtual image can be viewed. This approach leads to a more complex optical system due to the associated size increase of optical surfaces/components and the need for aberration control over an increased FOV. Even accepting the trade-off between system complexity and virtual image resolution at the corner fields, the resulting near-eye display will still grow in size due to the larger image display device and the larger optical surfaces/components that must be employed to avoid vignetting. This in turn makes the near-eye display less wearable due to its excessive bulkiness. A larger display device is also undesirably less power efficient and more costly.
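The pupil and FOV relationships described above can be sketched numerically. The values below are illustrative assumptions, not parameters from the disclosure:

```python
import math

# Exit pupil radius is the product of numerical aperture (NA) and
# effective focal length (EFL); the full FOV follows from the display
# half-size and the EFL.  All numeric values are assumed for illustration.

def exit_pupil_radius_mm(na: float, efl_mm: float) -> float:
    return na * efl_mm

def full_fov_deg(display_half_size_mm: float, efl_mm: float) -> float:
    return 2.0 * math.degrees(math.atan(display_half_size_mm / efl_mm))

pupil_r = exit_pupil_radius_mm(0.2, 20.0)   # 4.0 mm unobstructed pupil radius
fov = full_fov_deg(5.0, 20.0)               # ~28 degrees full field of view

# Enlarging the display (5 mm -> 7.5 mm half-size) at fixed EFL and NA
# widens the FOV but forces larger optical surfaces to avoid vignetting:
wider_fov = full_fov_deg(7.5, 20.0)         # ~41 degrees
```

This makes the stated trade-off concrete: with EFL and NA held fixed, the only way to widen the FOV is a larger display, which drags the optics size with it.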
To increase the FOV of a near-eye display, another approach has been to divide the FOV into multiple zones and to cover each zone with a channel which uses a smaller image display device. This angular tiling method has the advantage of each channel being relatively compact and the growth of system volume is proportional to the number of channels and the extent of the FOV. Also, the use of a number of smaller display devices is more cost-effective than the use of a single large device. See, for example, U.S. Pat. No. 6,529,331B2, and U.S. Pat. No. 9,244,277B2.
To increase the FOV of a NED, yet another prior art approach has been to divide the FOV into a number of zones and to cover each zone in a time sequence with light from a single display device. A smaller display device can be used in this approach which in turn typically makes the NED smaller. The imaged light from the display device is switched among different zones using switchable optics components. See, for example, U.S. Pub. No. US2015/0125109A1 and U.S. Pub. No. US2014/0232651A1.
Another significant technical hurdle to the miniaturization of near-eye display systems is the availability of high-brightness, compact display devices, as can be inferred from the above discussion. Conventional display device technologies include the Digital Micromirror Device (DMD), Liquid Crystal Display (LCD), Liquid Crystal on Silicon (LCOS) and Organic Light Emitting Diode (OLED). Systems such as DMD and LCOS require an accompanying illumination optical system, which adds system volume. LCD technology suffers from low brightness and lower resolution. OLED technology is more compact than DMD and LCOS and offers better brightness and resolution than LCD. OLED is thus a promising display device format for near-eye displays, but OLED still needs further improvement in brightness and durability for wide adoption in NED applications.
A new class of emissive micro-scale pixel array imager devices has been introduced as disclosed in U.S. Pat. Nos. 7,623,560, 7,767,479, 7,829,902, 8,049,231, 8,243,770, 8,567,960, and 8,098,265, the contents of each of which are fully incorporated herein by reference. The disclosed light emitting structures and devices referred to herein may be based on the Quantum Photonic Imager or “QPI®” imager. QPI® is a registered trademark of Ostendo Technologies, Inc. These disclosed devices desirably feature high brightness, very fast multi-color light intensity and spatial modulation capabilities, all in a very small single device size that includes all necessary image processing drive circuitry. The solid-state light (SSL) emitting pixels of the disclosed devices may be either a light emitting diode (LED) or laser diode (LD), or both, whose on-off state is controlled by drive circuitry contained within a CMOS chip (or device) upon which the emissive micro-scale pixel array of the imager is bonded and electronically coupled. The size of the pixels comprising the disclosed emissive arrays of such imager devices is typically in the range of approximately 5-20 microns, with a typical emissive surface area in the range of approximately 15-150 square millimeters. The pixels within the above emissive micro-scale pixel array devices are individually addressable spatially, chromatically and temporally, typically through the drive circuitry of the CMOS chip. The brightness of the light generated by such imager devices can reach multiple 100,000 cd/m2 at reasonably low power consumption.
The QPI imager referred to in the exemplary embodiments described below is well-suited for use in the wearable devices described herein. See U.S. Pat. Nos. 7,623,560, 7,767,479, 7,829,902, 8,049,231, 8,243,770, 8,567,960, and 8,098,265. However, it is to be understood that the QPI imagers are merely examples of the types of devices that may be used in the present disclosure. Thus, in the description to follow, references to the QPI imager, display, display device or imager are to be understood to be for purposes of specificity in the embodiments disclosed, and not for any limitation of the present disclosure.
The embodiments herein are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. It should be noted that references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and they mean at least one.
In the drawings:
The present disclosure and various of its embodiments are set forth in the following description of the embodiments which are presented as illustrated examples of the disclosure in the subsequent claims. It is expressly noted that the disclosure as defined by such claims may be broader than the illustrated embodiments described below. The word “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs.
In one example embodiment, a near-eye display device is disclosed comprising a lens prism assembly comprising a viewer-facing surface, a scene-facing surface, a vertical dimension, a horizontal dimension and a lens thickness. The lens prism assembly may comprise an upper portion and a lower portion bonded together along a horizontal interface comprising a beam-splitting element. Two or more display devices may be disposed on a first edge surface, each configured to couple its optical image through the lens thickness.
In one embodiment, a reflective element is disposed on a second edge surface that generally opposes the first edge surface. The reflective element, the beam-splitting element, or both may comprise a region comprising a plurality of overlapping prismatic facet elements configured to reflect or transmit, in a predetermined interlaced optical pattern, the separate optical images from the display devices where they overlap on the reflective element, directing them back toward the beam-splitting element. The beam-splitting element may be further configured to couple the reflected optical images through the viewer-facing surface to an eye pupil of a user.
By virtue of the example embodiments herein, a compact NED is enabled that resembles the appearance of a pair of ordinary spectacle glasses (i.e., eyeglasses in the format of a typical consumer eyeglass frame), has good quality in both the displayed image and the “see-through” real-world view, is easy to fabricate in mass production and is comfortable to wear by accommodating large inter-pupil distance (IPD) variation among people.
The embodiments herein may be provided to take advantage of advances made in display device technologies, particularly self-emissive display devices with micro-pixels such as the Quantum Photonic Imager or “QPI®” imager. QPI® is a registered trademark of Ostendo Technologies, Inc. See U.S. Pat. Nos. 7,623,560, 7,767,479, 7,829,902, 8,049,231, 8,243,770, 8,567,960, and 8,098,265. Display devices such as the QPI imager offer high brightness and high resolution images in a very compact format and are particularly well-suited for the compact NED of present disclosure.
The embodiments herein may also take advantage of the ergonomic fact that, although IPD varies greatly among the general population and users tend to scan their eyes over a wide latitude to their left or right, a human's vertical eye-scan movement is much narrower and less frequent. Humans generally tend to scan their vertical field of view by tilting their head back or forward. The embodiments herein exploit this behavior so that the disclosed NED does not require a circular eye box but rather provides an elongated eye box or exit pupil having a horizontal dimension that is larger than its vertical dimension. The eye box is the 2-D or 3-D viewing region within which the viewer's eye can move and still see the entire image. The NED system may be vertically aligned with a user's eyes by adjusting the system up or down along the wearer's nose bridge, just as an ordinary pair of eyeglasses is worn and adjusted. Since much of the bulkiness of prior art NEDs comes from the requirement of a large circular eye box, reducing the eye box size in the vertical direction beneficially reduces system bulkiness while a large eye box is retained in the horizontal (eye sight) direction, which is the dominant eye-scan direction.
A conventional eyeglass or spectacle lens generally has its largest dimension along its width (horizontal) and the smallest dimension through its thickness with the height (vertical) dimension somewhere between the width and thickness dimension. The shape of the front (scene-facing) and the back (viewer-facing) of a spectacle lens is typically decided by the ophthalmic function or fashion. Thus, the lens edge surfaces may be modified to implement the function of coupling and reflecting an electronic image that is displayed from a display device to the eye pupil of a viewer. By disposing one or more display devices on the upper (“first”) or the lower (“second”) edge surface of a lens and coupling the image outputs from the display device(s) using a reflective optic assembly at the opposing edge surface, the larger lens horizontal dimension can correspond to the larger horizontal eye box dimension and the smaller lens vertical dimension can correspond to the smaller vertical eye box dimension.
By virtue of embodiments described herein, it is possible to simplify the light path from the display device to the eye pupil and to minimize any break of symmetry, ensuring maximum optical performance. A lens suitable for an embodiment of the NED of the present disclosure may comprise a beam-splitting bonding interface embedded along the horizontal dimension of the lens, about mid-way along the lens' height dimension, which divides the lens into two parts: an upper portion and a lower portion. The embedded interface can be provided with a partially-reflective coating whereby the interface functions as an optical beam-splitting element. In one embodiment, at least one, but preferably two or more, imaging source display devices such as QPI imagers are disposed on a first edge surface (“upper” in the described embodiment) facing the embedded beam-splitting interface, and a novel reflective element surface, with or without optical power, is disposed on a second edge (“lower” in the described embodiment) facing the interface. It is expressly noted that the above-described embodiment is not limited to having the imager disposed on the first surface and the reflective element disposed on the second surface; the terms “first” and “second” surfaces are used for convenience only and are considered interchangeable positions. For instance, disposing the imager on the second surface and the reflective element on the first surface is contemplated as being within the scope of the embodiments disclosed herein.
In one embodiment, a displayed image received from each of the display devices is coupled into the thickness of the lens body by optical transmission from the first edge. The information light transmits through the embedded partially-reflective interface and traverses the lens body along a straight path to the second edge, upon which the reflective element optics are disposed. Upon reflection from the reflective element disposed on the second edge, the information light travels back to the embedded partially-reflective interface. Upon reflection at the partially-reflective interface, the information light transmits through the lens' viewer-facing surface and enters the eye pupil, where it is perceived as a virtual image superimposed on the real-world view along the vision line.
In this optical path layout, much of the imaging work is performed by the edge reflective element optics, which may be generally centered relative to the opposing display devices. The folding reflection at the embedded plane interface introduces little to no aberration. Although the spectacle lens viewer-facing surface is generally curved and tilted relative to the vision line for fashion or vision-correction reasons, the aberration introduced at the viewer-facing surface is manageable due to the low divergence of the information light at that surface, and can be corrected at the reflective element or by clocking the display device and reflective element around the vision line as a group. Although the optical path of the information light of the present disclosure is inherently three-dimensional, the break of symmetry is gentle and good optical performance is attainable.
Another advantageous aspect of the present disclosure is the increase in horizontal field of view through the use of two QPI imagers or other suitable display devices disposed on the first edge of the lens. In one embodiment, the total FOV may be divided into two tiled zones: a first zone and a second zone, each supported by a single QPI imager. To ensure the complete overlap of light paths from these two zones over the eye box of the NED, micro-prismatic facet features similar to those found in a Fresnel lens can be employed on the reflective element surface at the second edge of the lens and on the partially-reflective interface embedded in the thickness of the lens. These micro-prismatic facet features are categorized into two types, each type working with a respective display device. The two types of micro-prismatic facet features are interspersed at a period comparable to the micro-prismatic facet feature size. A typical size of the micro-prismatic facet feature may be about 20 to 150 um, at which scale the light is reflected/refracted rather than diffracted. Thus, the wavelength dependence can be much less than that of diffractive elements. As a result, the eye box of the NED is composed of interspersed zones for the different display devices at a pitch of 20 to 150 um. Since this pitch is much smaller than the typical 4 mm eye pupil of a user, a user's eyes can move over the entire eye box without image gaps being observed in the FOV. An advantageous effect of this embodiment is the compactness of the near-eye device and a large effective eye box for the total FOV.
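As a rough check of the interlacing argument above, one can count how many interlace periods a typical eye pupil spans. The period model (one facet of each of the two types per period) and the numbers are illustrative assumptions:

```python
def periods_within_pupil(pupil_diameter_um: float, facet_size_um: float) -> float:
    # Assume one interlace period comprises one facet of each of the two
    # types, so the period is twice the facet size.
    period_um = 2.0 * facet_size_um
    return pupil_diameter_um / period_um

# Even at the coarse end of the 20-150 um facet range, a 4 mm pupil spans
# many periods, so both display channels are always sampled and no gaps
# appear in the FOV as the eye moves over the eye box:
n_coarse = periods_within_pupil(4000.0, 150.0)  # ~13.3 periods
n_fine = periods_within_pupil(4000.0, 20.0)     # 100 periods
```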
In one embodiment, a Fresnel slope angle is contoured along a closed circle of a Fresnel optical prismatic facet element to implement free-form optical surfaces which are useful in optical aberration correction.
The description of the various embodiments of the NED of the disclosure is made with regard to one lens or one eye, but it is expressly noted that the description is intended to include two lenses or both eyes, which together provide stereoscopic or binocular vision.
Except for the embedded beam-splitting surfaces 1330L and 1330R, the lens prism assemblies 1320L and 1320R resemble a pair of ophthalmic lenses and function as such in transmitting light received from the real-world view, and may be provided with or without vision correction. Other components such as computers, sensors, antennas, control circuit boards and batteries may also be incorporated into the holding frame assembly 1300; alternatively, the display devices 1310L and 1310R can be connected to an external computer wirelessly or through cables exiting the two ends of the temples of the glasses frame. Because of the mirror symmetry between the left-eye and right-eye systems, only the right-eye system is described in the following, but use in binocular applications is contemplated as within the scope of the disclosure and the claims.
Earlier attempts have been made to make the vertical dimension of the beam footprint as large as the horizontal dimension; the resulting systems can be bulky and/or complicated. The optical system of the present disclosure, on the other hand, beneficially has most of its optical power contributed by the reflective element 1440R, which may be centered relative to the display device 1410R. In one embodiment, reflective element 1440R can be provided as a weak toroidal surface with a small difference in curvature radius along two orthogonal directions. This departure from rotational symmetry of reflective surface 1440R accounts for the fact that the viewer-facing surface of the lens prism assembly 1420R may be provided with a different curvature radius along two orthogonal directions. Although the viewer-facing surface of lens prism assembly 1420R may be generally of toroidal shape and tilted with respect to the vision line 1460R, the information rays as focused by 1440R have low divergence at the viewer-facing surface and form moderate angles with respect to the vision line 1460R. In addition, the curvature on the scene-facing and viewer-facing surfaces of the lens prism assembly 1420R is not strong for practical reasons, and all of these features combine to provide an acceptable aberration level and good optical performance.
For a given symmetrical optical surface, its equivalent Fresnel surface has concentric circular features with a constant slope angle. Optical power is imparted by providing a predetermined variation of slope angle across the various circular features. In a tilted optical system of the disclosure, a free-form optical surface may be used for aberration correction. A simple method to implement a free-form surface in the Fresnel type lens disclosed herein is to modulate the slope angle periodically at predetermined periods along the azimuth direction on a circle. An embodiment comprising the above approach is depicted in
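A minimal numeric sketch of the azimuthal slope modulation described above; the sinusoidal profile, modulation depth and period count are assumptions for illustration only:

```python
import math

def facet_slope_deg(base_slope_deg: float, azimuth_rad: float,
                    modulation_depth_deg: float = 0.5,
                    periods_per_circle: int = 4) -> float:
    # A constant base slope on each concentric circular facet gives
    # rotationally symmetric optical power; a periodic modulation of the
    # slope along the azimuth superimposes a free-form correction term.
    return base_slope_deg + modulation_depth_deg * math.cos(
        periods_per_circle * azimuth_rad)

# With zero modulation depth the surface reduces to an ordinary
# rotationally symmetric Fresnel zone:
symmetric = facet_slope_deg(10.0, 1.234, modulation_depth_deg=0.0)  # 10.0
peak = facet_slope_deg(10.0, 0.0)                                   # 10.5
```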
Turning to
In the non-telecentric embodiments disclosed herein, the respective pixels of each of display elements 2000A and 2000B are provided with an associated array of pixel-level micro optical elements (pixel-level micro lens array or “PMLA”) that is configured to collimate light emitted from each pixel to match a reflector optical aperture. Each pixel element is configured to direct the light emitted from the pixel toward the reflector optical aperture by means of the associated PMLA micro optical element. The individual light emission from each pixel is thus directionally modulated by the PMLA in a unique direction to enable a predetermined non-telecentric pixel light emission pattern from the pixel array of non-telecentric display element 2000.
In one aspect, the PMLA layer of pixel-level micro optical elements is disposed above the pixels and is used to directionally modulate light coupled onto the PMLA micro optical elements from the corresponding pixels in a predetermined respective direction relative to an axis that is perpendicular to the surface of non-telecentric display element 2000A or 2000B.
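The per-pixel directional modulation can be sketched as a chief-ray aiming rule: each PMLA element deflects its pixel's output toward the center of the reflector optical aperture. The paraxial geometry and distance below are assumed for illustration, not taken from the disclosed design:

```python
import math

def pmla_deflection_deg(pixel_x_mm: float, pixel_y_mm: float,
                        aperture_distance_mm: float) -> tuple:
    # Deflection angles (about the display-normal axis) that aim the
    # pixel's chief ray at the center of a reflector aperture a given
    # distance away.  Pixels farther from the array center receive
    # proportionally larger deflections, producing the non-telecentric
    # emission pattern.
    return (math.degrees(math.atan(-pixel_x_mm / aperture_distance_mm)),
            math.degrees(math.atan(-pixel_y_mm / aperture_distance_mm)))

center = pmla_deflection_deg(0.0, 0.0, 25.0)   # on-axis pixel: no deflection
edge = pmla_deflection_deg(2.0, 1.0, 25.0)     # edge pixel aimed inward
```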
Non-telecentric display element 2000A or 2000B may be incorporated into either a flat lens element design (as shown in
In the embodiment of
Non-telecentric display elements 2000C and 2000D may each comprise refractive or diffractive micro optical elements which may be fabricated from a UV curable polymer. The diffractive micro optical elements of non-telecentric display elements 2000C and 2000D may each comprise blazed gratings or rail gratings. Blazed gratings used in the non-telecentric display element may be configured whereby the directional modulation of the pixel outputs is determined at least in part by a slant angle, a pitch or both a slant angle and a pitch, of the blazed grating elements.
Many alterations and modifications may be made by those having ordinary skill in the art without departing from the spirit and scope of the disclosure. Therefore, it must be understood that the illustrated embodiment has been set forth only for the purposes of example and that it should not be taken as limiting the disclosure as defined by any claims in any subsequent application claiming priority to this application.
For example, notwithstanding the fact that the elements of such a claim may be set forth in a certain combination, it must be expressly understood that the disclosure includes other combinations of fewer, more or different elements, which are disclosed above even when not initially claimed in such combinations.
The words used in this specification to describe the disclosure and its various embodiments are to be understood not only in the sense of their commonly defined meanings, but to include by special definition in this specification structure, material or acts beyond the scope of the commonly defined meanings. Thus, if an element can be understood in the context of this specification as including more than one meaning, then its use in a subsequent claim must be understood as being generic to all possible meanings supported by the specification and by the word itself.
The definitions of the words or elements of any claims in any subsequent application claiming priority to this application should be, therefore, defined to include not only the combination of elements which are literally set forth, but all equivalent structure, material or acts for performing substantially the same function in substantially the same way to obtain substantially the same result. In this sense, it is therefore contemplated that an equivalent substitution of two or more elements may be made for any one of the elements in such claims below or that a single element may be substituted for two or more elements in such a claim.
Although elements may be described above as acting in certain combinations and even subsequently claimed as such, it is to be expressly understood that one or more elements from a claimed combination can in some cases be excised from the combination and that such claimed combination may be directed to a subcombination or variation of a subcombination.
Insubstantial changes from any subsequently claimed subject matter as viewed by a person with ordinary skill in the art, now known or later devised, are expressly contemplated as being equivalently within the scope of such claims. Therefore, obvious substitutions now or later known to one with ordinary skill in the art are defined to be within the scope of the defined elements.
Any claims in any subsequent application claiming priority to this application are thus to be understood to include what is specifically illustrated and described above, what is conceptually equivalent, what can be obviously substituted and also what essentially incorporates the essential idea of the disclosure.
This application claims the benefit of U.S. Provisional Patent Application No. 62/318,468, filed Apr. 5, 2016, the entirety of which is incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4427912 | Bui et al. | Jan 1984 | A |
4987410 | Berman | Jan 1991 | A |
5162828 | Furness et al. | Nov 1992 | A |
5368042 | O'Neal et al. | Nov 1994 | A |
5619373 | Meyerhofer et al. | Apr 1997 | A |
5696521 | Robinson et al. | Dec 1997 | A |
5818359 | Beach | Oct 1998 | A |
5886822 | Spitzer | Mar 1999 | A |
5986811 | Wohlstadter | Nov 1999 | A |
6128003 | Smith et al. | Oct 2000 | A |
6147807 | Droessler et al. | Nov 2000 | A |
6151167 | Melville | Nov 2000 | A |
6353503 | Spitzer et al. | Mar 2002 | B1 |
6433907 | Lippert et al. | Aug 2002 | B1 |
6456438 | Lee et al. | Sep 2002 | B1 |
6522794 | Bischel et al. | Feb 2003 | B1 |
6529331 | Massof et al. | Mar 2003 | B2 |
6666825 | Smith et al. | Dec 2003 | B2 |
6710902 | Takeyama | Mar 2004 | B2 |
6719693 | Richard | Apr 2004 | B2 |
6795221 | Urey | Sep 2004 | B1 |
6803561 | Dunfield | Oct 2004 | B2 |
6804066 | Ha et al. | Oct 2004 | B1 |
6829095 | Amitai | Dec 2004 | B2 |
6924476 | Wine et al. | Aug 2005 | B2 |
6937221 | Lippert et al. | Aug 2005 | B2 |
6984208 | Zheng | Jan 2006 | B2 |
6999238 | Glebov et al. | Feb 2006 | B2 |
7061450 | Bright et al. | Jun 2006 | B2 |
7071594 | Yan et al. | Jul 2006 | B1 |
7106519 | Aizenberg et al. | Sep 2006 | B2 |
7190329 | Lewis et al. | Mar 2007 | B2 |
7193758 | Wiklof et al. | Mar 2007 | B2 |
7209271 | Lewis et al. | Apr 2007 | B2 |
7215475 | Woodgate et al. | May 2007 | B2 |
7232071 | Lewis et al. | Jun 2007 | B2 |
7369321 | Ren et al. | May 2008 | B1 |
7482730 | Davis et al. | Jan 2009 | B2 |
7486255 | Brown et al. | Feb 2009 | B2 |
7545571 | Garoutte et al. | Jun 2009 | B2 |
7580007 | Brown et al. | Aug 2009 | B2 |
7619807 | Baek et al. | Nov 2009 | B2 |
7623560 | El-Ghoroury et al. | Nov 2009 | B2 |
7724210 | Sprague et al. | May 2010 | B2 |
7747301 | Cheng et al. | Jun 2010 | B2 |
7767479 | El-Ghoroury et al. | Aug 2010 | B2 |
7791810 | Powell | Sep 2010 | B2 |
7829902 | El-Ghoroury et al. | Nov 2010 | B2 |
7952809 | Takai | May 2011 | B2 |
8049231 | El-Ghoroury et al. | Nov 2011 | B2 |
8098265 | El-Ghoroury et al. | Jan 2012 | B2 |
8243770 | El-Ghoroury et al. | Aug 2012 | B2 |
8279716 | Gossweiler, III et al. | Oct 2012 | B1 |
8292833 | Son et al. | Oct 2012 | B2 |
8405618 | Colgate et al. | Mar 2013 | B2 |
8471967 | Miao et al. | Jun 2013 | B2 |
8477425 | Border et al. | Jul 2013 | B2 |
8482859 | Border | Jul 2013 | B2 |
8508830 | Wang | Aug 2013 | B1 |
8508851 | Miao et al. | Aug 2013 | B2 |
8510244 | Carson et al. | Aug 2013 | B2 |
8553910 | Dong et al. | Oct 2013 | B1 |
8567960 | El-Ghoroury et al. | Oct 2013 | B2 |
8619049 | Harrison et al. | Dec 2013 | B2 |
8725842 | Al-Nasser | May 2014 | B1 |
8743145 | Price | Jun 2014 | B1 |
8773599 | Saeedi et al. | Jul 2014 | B2 |
8854724 | El-Ghoroury et al. | Oct 2014 | B2 |
8928969 | Alpaslan et al. | Jan 2015 | B2 |
8975713 | Sako et al. | Mar 2015 | B2 |
9097890 | Miller et al. | Aug 2015 | B2 |
9110504 | Lewis et al. | Aug 2015 | B2 |
9134535 | Dobschal et al. | Sep 2015 | B2 |
9179126 | El-Ghoroury et al. | Nov 2015 | B2 |
9195053 | El-Ghoroury et al. | Nov 2015 | B2 |
9239453 | Cheng et al. | Jan 2016 | B2 |
9244277 | Cheng et al. | Jan 2016 | B2 |
9244539 | Venable et al. | Jan 2016 | B2 |
9274608 | Katz et al. | Mar 2016 | B2 |
9286730 | Bar-Zeev et al. | Mar 2016 | B2 |
9529191 | Sverdrup et al. | Dec 2016 | B2 |
9538182 | Mishourovsky et al. | Jan 2017 | B2 |
9681069 | El-Ghoroury et al. | Jun 2017 | B2 |
9712764 | El-Ghoroury et al. | Jul 2017 | B2 |
9774800 | El-Ghoroury et al. | Sep 2017 | B2 |
9779515 | El-Ghoroury et al. | Oct 2017 | B2 |
9965982 | Lapstun | May 2018 | B2 |
20020008854 | Leigh Travis | Jan 2002 | A1 |
20020017567 | Connolly et al. | Feb 2002 | A1 |
20020024495 | Lippert et al. | Feb 2002 | A1 |
20020075232 | Daum et al. | Jun 2002 | A1 |
20020083164 | Katayama et al. | Jun 2002 | A1 |
20020141026 | Wiklof et al. | Oct 2002 | A1 |
20020158814 | Bright et al. | Oct 2002 | A1 |
20020181115 | Massof et al. | Dec 2002 | A1 |
20020194005 | Lahr | Dec 2002 | A1 |
20030032884 | Smith et al. | Feb 2003 | A1 |
20030086135 | Takeyama | May 2003 | A1 |
20030122066 | Dunfield | Jul 2003 | A1 |
20030138130 | Cohen et al. | Jul 2003 | A1 |
20030184575 | Reho et al. | Oct 2003 | A1 |
20030187357 | Richard | Oct 2003 | A1 |
20040004585 | Brown et al. | Jan 2004 | A1 |
20040024312 | Zheng | Feb 2004 | A1 |
20040051392 | Badarneh | Mar 2004 | A1 |
20040080807 | Chen et al. | Apr 2004 | A1 |
20040080938 | Holman et al. | Apr 2004 | A1 |
20040085261 | Lewis et al. | May 2004 | A1 |
20040119004 | Wine et al. | Jun 2004 | A1 |
20040125076 | Green | Jul 2004 | A1 |
20040138935 | Johnson et al. | Jul 2004 | A1 |
20040179254 | Lewis et al. | Sep 2004 | A1 |
20040240064 | Dutta | Dec 2004 | A1 |
20050002074 | McPheters et al. | Jan 2005 | A1 |
20050024730 | Aizenberg et al. | Feb 2005 | A1 |
20050053192 | Sukovic et al. | Mar 2005 | A1 |
20050116038 | Lewis et al. | Jun 2005 | A1 |
20050117195 | Glebov et al. | Jun 2005 | A1 |
20050168700 | Berg et al. | Aug 2005 | A1 |
20050179976 | Davis et al. | Aug 2005 | A1 |
20050264502 | Sprague et al. | Dec 2005 | A1 |
20060017655 | Brown et al. | Jan 2006 | A1 |
20060132383 | Gally et al. | Jun 2006 | A1 |
20060152812 | Woodgate et al. | Jul 2006 | A1 |
20060253007 | Cheng et al. | Nov 2006 | A1 |
20060285192 | Yang | Dec 2006 | A1 |
20060290663 | Mitchell | Dec 2006 | A1 |
20070052694 | Holmes | Mar 2007 | A1 |
20070083120 | Cain et al. | Apr 2007 | A1 |
20070236450 | Colgate et al. | Oct 2007 | A1 |
20070269432 | Nakamura et al. | Nov 2007 | A1 |
20070276658 | Douglass | Nov 2007 | A1 |
20080002262 | Chirieleison | Jan 2008 | A1 |
20080049291 | Baek et al. | Feb 2008 | A1 |
20080130069 | Cernasov | Jun 2008 | A1 |
20080141316 | Igoe et al. | Jun 2008 | A1 |
20080239452 | Xu et al. | Oct 2008 | A1 |
20090073559 | Woodgate et al. | Mar 2009 | A1 |
20090086170 | El-Ghoroury et al. | Apr 2009 | A1 |
20090096746 | Kruse et al. | Apr 2009 | A1 |
20090161191 | Powell | Jun 2009 | A1 |
20090199900 | Bita et al. | Aug 2009 | A1 |
20090222113 | Fuller et al. | Sep 2009 | A1 |
20090256287 | Fu et al. | Oct 2009 | A1 |
20090268303 | Takai | Oct 2009 | A1 |
20090278998 | El-Ghoroury et al. | Nov 2009 | A1 |
20090327171 | Tan et al. | Dec 2009 | A1 |
20100003777 | El-Ghoroury et al. | Jan 2010 | A1 |
20100026960 | Sprague | Feb 2010 | A1 |
20100046070 | Mukawa | Feb 2010 | A1 |
20100053164 | Imai et al. | Mar 2010 | A1 |
20100066921 | El-Ghoroury et al. | Mar 2010 | A1 |
20100091050 | El-Ghoroury et al. | Apr 2010 | A1 |
20100156676 | Mooring et al. | Jun 2010 | A1 |
20100171922 | Sessner et al. | Jul 2010 | A1 |
20100199232 | Mistry et al. | Aug 2010 | A1 |
20100220042 | El-Ghoroury et al. | Sep 2010 | A1 |
20100241601 | Carson et al. | Sep 2010 | A1 |
20100245957 | Hudman et al. | Sep 2010 | A1 |
20100259472 | Radivojevic et al. | Oct 2010 | A1 |
20100267449 | Gagner et al. | Oct 2010 | A1 |
20110054360 | Son et al. | Mar 2011 | A1 |
20110115887 | Yoo et al. | May 2011 | A1 |
20110221659 | King, III et al. | Sep 2011 | A1 |
20110285666 | Poupyrev et al. | Nov 2011 | A1 |
20110285667 | Poupyrev et al. | Nov 2011 | A1 |
20120033113 | El-Ghoroury et al. | Feb 2012 | A1 |
20120075173 | Ashbrook et al. | Mar 2012 | A1 |
20120075196 | Ashbrook et al. | Mar 2012 | A1 |
20120105310 | Sverdrup et al. | May 2012 | A1 |
20120113097 | Nam et al. | May 2012 | A1 |
20120120498 | Harrison et al. | May 2012 | A1 |
20120143358 | Adams et al. | Jun 2012 | A1 |
20120154441 | Kim | Jun 2012 | A1 |
20120157203 | Latta et al. | Jun 2012 | A1 |
20120195461 | Lawrence Ashok Inigo | Aug 2012 | A1 |
20120212398 | Border et al. | Aug 2012 | A1 |
20120212399 | Border et al. | Aug 2012 | A1 |
20120218301 | Miller | Aug 2012 | A1 |
20120236201 | Larsen et al. | Sep 2012 | A1 |
20120249409 | Toney et al. | Oct 2012 | A1 |
20120249741 | Maciocci et al. | Oct 2012 | A1 |
20120288995 | El-Ghoroury et al. | Nov 2012 | A1 |
20120290943 | Toney et al. | Nov 2012 | A1 |
20120293402 | Harrison et al. | Nov 2012 | A1 |
20120299962 | White et al. | Nov 2012 | A1 |
20120319940 | Bress et al. | Dec 2012 | A1 |
20120320092 | Shin et al. | Dec 2012 | A1 |
20130016292 | Miao et al. | Jan 2013 | A1 |
20130021658 | Miao et al. | Jan 2013 | A1 |
20130027341 | Mastandrea | Jan 2013 | A1 |
20130041477 | Sikdar et al. | Feb 2013 | A1 |
20130050260 | Reitan | Feb 2013 | A1 |
20130080890 | Krishnamurthi | Mar 2013 | A1 |
20130083303 | Hoover et al. | Apr 2013 | A1 |
20130100362 | Saeedi et al. | Apr 2013 | A1 |
20130141895 | Alpaslan et al. | Jun 2013 | A1 |
20130162505 | Crocco et al. | Jun 2013 | A1 |
20130169536 | Wexler et al. | Jul 2013 | A1 |
20130176622 | Abrahamsson et al. | Jul 2013 | A1 |
20130187836 | Cheng et al. | Jul 2013 | A1 |
20130196757 | Latta et al. | Aug 2013 | A1 |
20130215516 | Dobschal et al. | Aug 2013 | A1 |
20130225999 | Banjanin et al. | Aug 2013 | A1 |
20130258451 | El-Ghoroury et al. | Oct 2013 | A1 |
20130271679 | Sakamoto et al. | Oct 2013 | A1 |
20130285174 | Sako et al. | Oct 2013 | A1 |
20130286053 | Fleck et al. | Oct 2013 | A1 |
20130286178 | Lewis et al. | Oct 2013 | A1 |
20130321581 | El-Ghoroury et al. | Dec 2013 | A1 |
20140009845 | Cheng et al. | Jan 2014 | A1 |
20140024132 | Jia et al. | Jan 2014 | A1 |
20140049417 | Abdurrahman et al. | Feb 2014 | A1 |
20140049983 | Nichol et al. | Feb 2014 | A1 |
20140055352 | Davis et al. | Feb 2014 | A1 |
20140055692 | Kroll et al. | Feb 2014 | A1 |
20140085177 | Lyons et al. | Mar 2014 | A1 |
20140091984 | Ashbrook et al. | Apr 2014 | A1 |
20140098018 | Kim et al. | Apr 2014 | A1 |
20140098067 | Yang et al. | Apr 2014 | A1 |
20140118252 | Kim et al. | May 2014 | A1 |
20140129207 | Bailey et al. | May 2014 | A1 |
20140139454 | Mistry et al. | May 2014 | A1 |
20140139576 | Costa et al. | May 2014 | A1 |
20140147035 | Ding et al. | May 2014 | A1 |
20140168062 | Katz et al. | Jun 2014 | A1 |
20140176417 | Young et al. | Jun 2014 | A1 |
20140185142 | Gupta et al. | Jul 2014 | A1 |
20140200496 | Hyde et al. | Jul 2014 | A1 |
20140232651 | Kress et al. | Aug 2014 | A1 |
20140292620 | Lapstun | Oct 2014 | A1 |
20140300869 | Hirsch et al. | Oct 2014 | A1 |
20140301662 | Justice et al. | Oct 2014 | A1 |
20140304646 | Rossmann | Oct 2014 | A1 |
20140340304 | Dewan et al. | Nov 2014 | A1 |
20150001987 | Masaki et al. | Jan 2015 | A1 |
20150035832 | Sugden et al. | Feb 2015 | A1 |
20150054729 | Minnen et al. | Feb 2015 | A1 |
20150058102 | Christensen et al. | Feb 2015 | A1 |
20150125109 | Robbins et al. | May 2015 | A1 |
20150138086 | Underkoffler et al. | May 2015 | A1 |
20150148886 | Rao et al. | May 2015 | A1 |
20150193984 | Bar-Zeev et al. | Jul 2015 | A1 |
20150205126 | Schowengerdt | Jul 2015 | A1 |
20150220109 | von Badinski et al. | Aug 2015 | A1 |
20150235467 | Schowengerdt et al. | Aug 2015 | A1 |
20150277126 | Hirano et al. | Oct 2015 | A1 |
20150301256 | Takiguchi | Oct 2015 | A1 |
20150301383 | Kimura | Oct 2015 | A1 |
20150033539 | El-Ghoroury et al. | Nov 2015 | A1 |
20150323990 | Maltz | Nov 2015 | A1 |
20150323998 | Kudekar et al. | Nov 2015 | A1 |
20150326842 | Huai | Nov 2015 | A1 |
20150381782 | Park | Dec 2015 | A1 |
20160018948 | Parvarandeh et al. | Jan 2016 | A1 |
20160026059 | Chung et al. | Jan 2016 | A1 |
20160028935 | El-Ghoroury et al. | Jan 2016 | A1 |
20160116738 | Osterhout | Apr 2016 | A1 |
20160182782 | El-Ghoroury et al. | Jun 2016 | A1 |
20160191765 | El-Ghoroury et al. | Jun 2016 | A1 |
20160191823 | El-Ghoroury et al. | Jun 2016 | A1 |
20160220232 | Takada et al. | Aug 2016 | A1 |
20160342151 | Dey, IV et al. | Nov 2016 | A1 |
20170065872 | Kelley | Mar 2017 | A1 |
20170069134 | Shapira et al. | Mar 2017 | A1 |
20170184776 | El-Ghoroury et al. | Jun 2017 | A1 |
20170236295 | El-Ghoroury | Aug 2017 | A1 |
20170261388 | Ma et al. | Sep 2017 | A1 |
20170310956 | Perdices-Gonzalez et al. | Oct 2017 | A1 |
Number | Date | Country |
---|---|---|
103298410 | Sep 2013 | CN |
103546181 | Jan 2014 | CN |
103558918 | Feb 2014 | CN |
104460992 | Mar 2015 | CN |
0431488 | Jan 1996 | EP |
WO-2014124173 | Aug 2014 | WO |
Entry |
---|
Ahumada, Jr., Albert J. et al., “Spatio-temporal discrimination model predicts temporal masking functions”, Proceedings of SPIE, the International Society for Optical Engineering, Human vision and electronic imaging III, vol. 3299, 1998, 6 pp. total. |
Beulen, Bart W. et al., “Toward Noninvasive Blood Pressure Assessment in Arteries by Using Ultrasound”, Ultrasound in Medicine & Biology, vol. 37, No. 5, 2011, pp. 788-797. |
Bickel, Bernd et al., “Capture and Modeling of Non-Linear Heterogeneous Soft Tissue”, ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH 2009, vol. 28, Issue 3, Article No. 89, Aug. 2009, 9 pp. total. |
Castellini, Claudio et al., “Using Ultrasound Images of the Forearm to Predict Finger Positions”, IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 20, No. 6, Nov. 2012, pp. 788-797. |
Cobbold, Richard S. , “Foundations of Biomedical Ultrasound”, Oxford University Press, 2007, pp. 3-95. |
Guo, Jing-Yi et al., “Dynamic monitoring of forearm muscles using one-dimensional sonomyography system”, Journal of Rehabilitation Research & Development, vol. 45, No. 1, 2008, pp. 187-195. |
Harrison, Chris et al., “Skinput: Appropriating the Body as an Input Surface”, CHI '10 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2010, pp. 453-462. |
Hsiao, Tzu-Yu et al., “Noninvasive Assessment of Laryngeal Phonation Function Using Color Doppler Ultrasound Imaging”, Ultrasound in Med. & Biol., vol. 27, No. 8, 2001, pp. 1035-1040. |
Keir, Peter J. et al., “Changes in geometry of the finger flexor tendons in the carpal tunnel with wrist posture and tendon load: an MRI study on normal wrists”, Clinical Biomechanics, vol. 14, 1999, pp. 635-645. |
Khuri-Yakub, Butrus T. et al., “Capacitive micromachined ultrasonic transducers for medical imaging and therapy”, J. Micromech. Microeng., vol. 21, No. 5, May 2011, pp. 054004-054014. |
Koutsouridis, G. G. et al., “Towards a Non-Invasive Ultrasound Pressure Assessment in Large Arteries”, Eindhoven University of Technology, Mate Poster Award 2010: 15th Annual Poster Contest, 2010, 1 page total. |
Legros, M. et al., “Piezocomposite and CMUT Arrays Assessment Through In Vitro Imaging Performances”, 2008 IEEE Ultrasonics Symposium, Nov. 2-5, 2008, pp. 1142-1145. |
Martin, Joel R. et al., “Changes in the flexor digitorum profundus tendon geometry in the carpal tunnel due to force production and posture of metacarpophalangeal joint of the index finger: An MRI study”, Clinical Biomechanics, vol. 28, 2013, pp. 157-163. |
Martin, Joel R. et al., “Effects of the index finger position and force production on the flexor digitorum superficialis moment arms at the metacarpophalangeal joints—a magnetic resonance imaging study”, Clinical Biomechanics, vol. 27, 2012, pp. 453-459. |
Mujibiya, Adiyan et al., “The Sound of Touch: On-body Touch and Gesture Sensing Based on Transdermal Ultrasound Propagation”, ITS '13 Proceedings of the 2013 ACM international conference on Interactive tabletops and surfaces, Oct. 6-9, 2013, pp. 189-198. |
Paclet, Florent et al., “Motor control theories improve biomechanical model of the hand for finger pressing tasks”, Journal of Biomechanics, vol. 45, 2012, pp. 1246-1251. |
Pinton, Gianmarco F. et al., “A Heterogeneous Nonlinear Attenuating Full-Wave Model of Ultrasound”, IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 56, No. 3, Mar. 2009, pp. 474-488. |
Richard, William D. et al., “A scalable architecture for real-time synthetic-focus imaging”, Ultrasonic Imaging, vol. 25, 2003, pp. 151-161. |
Shi, Jun et al., “Feasibility of controlling prosthetic hand using sonomyography signal in real time: Preliminary study”, Journal of Rehabilitation Research & Development, vol. 47, No. 2, 2010, pp. 87-97. |
Sikdar, Siddhartha et al., “Novel Method for Predicting Dexterous Individual Finger Movements by Imaging Muscle Activity Using a Wearable Ultrasonic System”, IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 22, No. 1, Jan. 2014, pp. 69-76. |
Sueda, Shinjiro et al., “Musculotendon Simulation for Hand Animation”, ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH 2008, vol. 27, Issue 3, Article No. 83, Aug. 2008, 8 pp. total. |
Szabo, Thomas L. , “Diagnostic Ultrasound Imaging: Inside Out, Second Edition”, Elsevier Inc., 2013, 829 pp. total. |
Van Den Branden Lambrecht, Christian J. , “A Working Spatio-Temporal Model of the Human Visual System for Image Restoration and Quality Assessment Applications”, ICASSP-96, Conference Proceedings of the 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing, May 1996, 4 pp. total. |
Watson, Andrew B. et al., “Model of human visual-motion sensing”, Journal of the Optical Society of America A, vol. 2, No. 2, Feb. 1985, pp. 322-342. |
Watson, Andrew B. et al., “Model of visual contrast gain control and pattern masking”, Journal of the Optical Society of America A, vol. 14, No. 9, Sep. 1997, pp. 2379-2391. |
Watson, Andrew B. , “The search for optimal visual stimuli”, Vision Research, vol. 38, 1998, pp. 1619-1621. |
Watson, Andrew B. , “The Spatial Standard Observer: A Human Visual Model for Display Inspection”, Society for Information Display, SID 06 Digest, Jun. 2006, pp. 1312-1315. |
Watson, Andrew B. , “Visual detection of spatial contrast patterns: Evaluation of five simple models”, Optics Express, vol. 6, No. 1, Jan. 3, 2000, pp. 12-33. |
Williams III, T. W. , “Progress on stabilizing and controlling powered upper-limb prostheses”, Journal of Rehabilitation Research & Development, Guest Editorial, vol. 48, No. 6, 2011, pp. ix-xix. |
Willis, Karl D. et al., “MotionBeam: A Metaphor for Character Interaction with Handheld Projectors”, CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, May 7-12, 2011, pp. 1031-1040. |
Yun, Xiaoping et al., “Design, Implementation, and Experimental Results of a Quaternion-Based Kalman Filter for Human Body Motion Tracking”, IEEE Transactions on Robotics, vol. 22, No. 6, Dec. 2006, pp. 1216-1227. |
Zhang, Cha et al., “Maximum Likelihood Sound Source Localization and Beamforming for Directional Microphone Arrays in Distributed Meetings”, IEEE Transactions on Multimedia, vol. 10, No. 3, Apr. 2008, pp. 538-548. |
“International Search Report and Written Opinion of the International Searching Authority Dated Apr. 19, 2017; International Application No. PCT/US2016/069042”, dated Apr. 19, 2017. |
“International Search Report and Written Opinion of the International Searching Authority Dated Jun. 29, 2017; International Application No. PCT/US2017/026238”, dated Jun. 29, 2017. |
“Invitation to Pay Additional Fees Dated Feb. 13, 2017; International Application No. PCT/US2016/069042”, dated Feb. 13, 2017. |
Fattal, David et al., “A multi-directional backlight for a wide-angle, glasses-free three-dimensional display”, Nature, vol. 495, Mar. 21, 2013, pp. 348-351. |
Grossberg, Stephen et al., “Neural dynamics of saccadic and smooth pursuit eye movement coordination during visual tracking of unpredictably moving targets”, Neural Networks, vol. 27, 2012, pp. 1-20. |
Hua, Hong et al., “A 3D integral imaging optical see-through head-mounted display”, Optics Express, vol. 22, No. 11, May 28, 2014, pp. 13484-13491. |
Lanman, Douglas et al., “Near-Eye Light Field Displays”, ACM Transactions on Graphics (TOG), vol. 32, Issue 6, Article 220, Nov. 2013, 27 pp. total. |
Marwah, Kshitij et al., “Compressive Light Field Photography using Overcomplete Dictionaries and Optimized Projections”, Proc. of SIGGRAPH 2013 (ACM Transactions on Graphics, 32, 4), 2013, 12 pp. total. |
Rolland, Jannick P. et al., “Dynamic focusing in head-mounted displays”, Part of the IS&T/SPIE Conference on The Engineering Reality of Virtual Reality, SPIE vol. 3639, Jan. 1999, pp. 463-470. |
Wikipedia, “List of refractive indices”, https://en.wikipedia.org/wiki/List_of_refractive_indices, Dec. 7, 2003, 5 pp. total. |
“Office Action Dated Aug. 31, 2018; U.S. Appl. No. 15/391,583”, dated Aug. 31, 2018. |
“Office Action Dated Mar. 5, 2018; U.S. Appl. No. 15/391,583”, dated Mar. 5, 2018. |
Number | Date | Country | |
---|---|---|---|
20170285347 A1 | Oct 2017 | US |
Number | Date | Country | |
---|---|---|---|
62318468 | Apr 2016 | US |