The invention is a system, method, and apparatus (collectively, the “system”) for displaying an image. More specifically, the system uses a tuning assembly to modify the focal point of light in the displayed image. The system can display an image comprised of light with more than one focal point.
Human beings are used to interacting in a world of three dimensions. A single field of view of a human being may include images from objects less than 3 feet away, between 3-5 feet away, between 5-8 feet away, between 8-12 feet away, and further than 12 feet away. Eyes can rapidly change focus on different objects located at different distances. As some objects are brought into heightened focus, other objects may fall out of focus. When a human eye is focused on an object that is 10 feet away, an adjacent object that is also 10 feet away can remain in sharp focus, an object merely 7 feet away may be somewhat blurred, and the image of an object merely two feet away is likely to be substantially blurred. Different objects within the field of view have different focal points and different focal lengths.
In contrast, prior art image display technologies display images using light that does not vary with respect to focal length or focal point. Prior art 3D technologies give an illusion of depth by presenting a separate image to each eye, but the images often appear unrealistic because the focal distance of all objects in the displayed image is the same, regardless of whether the image pertains to a small object within arm's reach or the view of the moon on the horizon. The illusion of depth can be enhanced somewhat by expressly blurring a background or foreground image, but such an approach does not allow the viewer to shift their focus to the blurred area.
Prior art image displays often provide viewers with unrealistic images because the focal point of light throughout the image is constant.
The invention is a system, method, and apparatus (collectively, the “system”) for displaying an image. More specifically, the system can use a tuning assembly to modify the focal point of light in the displayed image.
The system displays an image comprised of light with more than one focal point.
Many features and inventive aspects of the system are illustrated in the various drawings described briefly below. However, no patent application can expressly disclose, in words or in drawings, all of the potential embodiments of an invention. Variations of known equivalents are implicitly included. In accordance with the provisions of the patent statutes, the principles, functions, and modes of operation of the systems, apparatuses, and methods (collectively the “system”) are explained and illustrated in certain preferred embodiments. However, it must be understood that the inventive systems may be practiced otherwise than is specifically explained and illustrated without departing from their spirit or scope. All components illustrated in the drawings below and associated with element numbers are named and described in Table 1 provided in the Detailed Description section.
The invention is a system, method, and apparatus (collectively, the “system”) for displaying an image. More specifically, the system can use a tuning assembly to modify the focal point of light in the displayed image.
Prior art image display technologies often display images that are unrealistic. In the real world, we encounter images from objects that are close by as well as from objects that are far away. Human beings are used to changing focus when shifting from looking at something close to looking at something far away. Images originating from different distances involve different focal points and focal lengths. Focus far away, and the nearby images will blur. Focus nearby, and the far-off images will blur. Displayed images are representations, imitations, or even simulations of reality, but the field of view in a displayed image will be comprised of one focal point and one focal length. Foreground or background may be artificially blurred, but the light used to display the image will be of a single focal point.
A system that can display an image comprised of light with more than one focal point can be implemented in a variety of different ways. A single image can be broken into more than one subframe. Each subframe can pertain to a portion of the displayed image. The various subframes can be displayed in a variety of different sequences and at sufficient speeds such that a viewer sees an integrated image with more than one focal point instead of multiple partial images each with a single focal point. The concept of a video comprised of multiple still frame images works on the same basis. Each subframe within each frame of a video can be displayed quickly enough that the viewer cannot distinctly perceive the individual subframes of the image.
Different subframes of the image pertain to different depth regions within the image. Portions of the image that are far away from the viewer can be displayed in the same subframe, while portions of the image that are close by can be displayed in their own subframe. Different embodiments of the system can involve a different number of subframes and a different number of depth regions. Some systems may only have two subframes and two depth regions. Other systems could have two, three, four, five, or even more subframes and depth regions in a single image. In many instances, it is believed that five depth regions are desirable, with region 1 being from about 2-3 feet from the viewer, region 2 being from about 3-5 feet from the viewer, region 3 being from about 5-8 feet from the viewer, region 4 being from about 8-12 feet from the viewer, and region 5 being from about 12 or more feet from the viewer. Different embodiments of the system can involve different configurations of depth regions.
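The decomposition of a single frame into per-region subframes described above can be illustrated with a short sketch. The Python below is provided solely for illustration and is not part of the disclosed embodiments; the data structures and function names are assumptions, and the five region boundaries follow the example ranges described in the preceding paragraph.

```python
# Minimal illustrative sketch (not part of the disclosure): assign each pixel of a
# frame to one of five depth regions and build one subframe per region.
# Region boundaries follow the example above: 2-3, 3-5, 5-8, 8-12, and >12 feet.

DEPTH_REGIONS_FT = [(2.0, 3.0), (3.0, 5.0), (5.0, 8.0), (8.0, 12.0), (12.0, float("inf"))]

def decompose_frame(pixels, depths):
    """pixels: {(x, y): color}, depths: {(x, y): distance in feet} -> one subframe per region."""
    subframes = [{} for _ in DEPTH_REGIONS_FT]
    for xy, color in pixels.items():
        for i, (near, far) in enumerate(DEPTH_REGIONS_FT):
            if near <= depths[xy] < far:
                subframes[i][xy] = color  # each pixel appears in exactly one subframe
                break
    return subframes

frame = {(0, 0): "red", (0, 1): "blue"}
depth_map = {(0, 0): 2.5, (0, 1): 14.0}
print(decompose_frame(frame, depth_map))  # (0, 0) lands in region 1, (0, 1) in region 5
```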
The ability to utilize light with more than one focal point in a single image can be implemented in interactive as well as non-interactive ways. In a non-interactive embodiment, depth regions are defined within the media itself. The movie, video game, or other type of content can have these depth regions pre-defined within it. The system can be implemented in an interactive manner by tracking the eye movement of the user. The system can identify which area of the image the user is focusing on, and use focused light for that area and unfocused light for the other areas. A third approach, which can be referred to as a hybrid approach, combines the pre-defined depth regions of the non-interactive approach with the user tracking functionality of the interactive approach. User attention is captured with respect to the pre-defined depth regions, and the impact of that attention is factored into the displayed image based on the configuration of depth regions. So, for example, if the user is focusing on something in depth region 1, depth regions 2-5 will be displayed with an increasing lack of focus.
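The contrast between the three approaches can be sketched as follows. This is a hypothetical illustration only; the mode names and the notion of per-region "focus weights" are assumptions introduced here and are not part of the disclosure.

```python
# Hypothetical sketch contrasting the three modes described above; mode names and
# per-region "focus weights" are assumptions for illustration only.

def region_focus_weights(mode, region_ids, gazed_region=None):
    """region_ids: e.g. [1, 2, 3, 4, 5]; returns 1.0 = fully focused, lower = blurred."""
    if mode == "non-interactive":
        # depth regions are pre-defined in the media; each is shown at its own focal point
        return {r: 1.0 for r in region_ids}
    if mode == "interactive":
        # only the region the viewer's eyes are focused on is shown with focused light
        return {r: (1.0 if r == gazed_region else 0.0) for r in region_ids}
    if mode == "hybrid":
        # pre-defined regions are kept; focus falls off with distance from the gazed region
        return {r: 1.0 / (1 + abs(r - gazed_region)) for r in region_ids}
    raise ValueError("unknown mode: " + mode)

print(region_focus_weights("hybrid", [1, 2, 3, 4, 5], gazed_region=1))
# regions 2-5 receive progressively lower weights, i.e. an increasing lack of focus
```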
A. Prior Art—Static Focal Uniformity within an Image
An optical system such as the human eye or a camera focuses on an image by having the light from that image converge to a single point referred to as a focal point. The display of an image works on a similar principle, but in reverse.
B. Inventive System—Dynamic Focal Variation within an Image
In contrast to the static focal uniformity of the prior art, the inventive system provides for dynamic focal variation within a displayed image.
If there is no focal variation with the light 800 in an individual image 880, then there is only one depth region 860 within that image 880. If there is focal variation with the light 800 in an individual image 880, then there are multiple depth regions 860 in that image 880.
C. Subframes and Depth Regions
The timing of subframes 852 can be coordinated with the pulses of light 800 from the light source 210 to create those subframes 852. The tuning assembly 700 can however be positioned at a wide variety of positions within the image display process after the light 800 has been created by the light source 210. Typically, the tuning assembly 700 will be positioned between the light source 210 and the modulator 320.
Some embodiments of the system 100 can utilize predetermined depth regions 860 that are based solely on the media content and that do not “factor in” any eye-tracking attribute 530 captured by the tracking/sensor assembly 500. Other embodiments may give primary importance to the eye-tracking attribute 530 of the viewer 96 and relatively little weight to depth regions 860 as defined within the media content itself. Still other embodiments may utilize a hybrid approach, in which user actions navigate the user within an image that is otherwise defined with respect to predefined depth regions 860. Some embodiments will allow the user 90 to determine which mode of operation is preferable, offering users 90 a menu of options with respect to the potential impact of an eye-tracking attribute 530 that is captured by the system 100.
D. Process Flow View
The system 100 can be described as a method 900 or process for displaying an image 880 with light that includes more than one focal point 870.
At 910, light 800 is supplied or generated.
At 920, the light 800 from 910 is modulated into an image 880 (or at least an interim image 850 that is subject to further modification/focusing downstream).
At 940, the light 800 in the image 880 (or interim image 850 in some circumstances) is modified so that the light 800 comprising the final image 880 is comprised of more than one focal point 870. The process at 940 can be performed at a variety of different places in the light/modulation/projection process. The tuning assembly 700 can be positioned between the light source 210 and the modulator 320, between the modulator 320 and the projection assembly 400, or between the projection assembly 400 and the eye 92 of the user 90.
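A hypothetical end-to-end sketch of method 900 is given below to show the ordering of steps 910, 920, and 940. The component interfaces are assumptions introduced only for illustration, and the focal tuning is shown after modulation, which is only one of the tuning assembly positions named above.

```python
# Hypothetical sketch of method 900: supply light (910), modulate it into an image (920),
# then tune the focal point of the light in the image (940). All interfaces are assumed.

def display_frame(light_source, modulator, tuning_assembly, projector, subframes):
    """subframes: iterable of (focal_distance_ft, subframe_image) pairs for one frame."""
    for focal_distance_ft, subframe in subframes:
        light = light_source.emit()                             # 910: supply or generate light
        interim_image = modulator.modulate(light, subframe)     # 920: modulate light into an image
        tuning_assembly.set_focal_distance(focal_distance_ft)   # 940: set this subframe's focal point
        projector.project(interim_image)                        # deliver the subframe to the viewer
```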
E. Tuning Assembly
A tuning assembly 700 is the configuration of devices, components, and processes that allow the focal point 870 of light 800 making up the image 880 to be changed.
1. Tuning Lens
A tuning lens 710 is a lens that can have its curvature 711 changed so that the focal point 870 of light traveling through the lens 710 also changes.
Examples of tuning lenses 710 are illustrated in the accompanying drawings.
2. Tunable Lens Array
A tuning assembly 700 can use multiple non-dynamic lenses 160 instead of one dynamically changing tuning lens 710. A splitter 724 can be used to direct light 800 to different lenses 160 with each lens 160 possessing a different curvature 711 and resulting in different focal points 870.
3. Movable Lens
A tuning assembly 700 can utilize a moving lens 730 to change the focal point 870 of the light 800 travelling through it.
4. Deformable Mirror
A tuning assembly 700 can also utilize a deformable mirror to change the focal point 870 of the light 800 that is reflected off of it.
The system 100 can be described in terms of assemblies of components that perform various functions in support of the operation of the system 100.
The assemblies of the system 100 can include an illumination assembly 200, an imaging assembly 300, a projection assembly 400, a sensor/tracking assembly 500, and a tuning assembly 700, each of which is discussed below.
A. Illumination Assembly
An illumination assembly 200 performs the function of supplying light 800 to the system 100 so that an image 880 can be displayed.
In many instances, it will be desirable to use a 3 LED lamp as a light source, with one LED designated for each primary color of red, green, and blue.
B. Imaging Assembly
An imaging assembly 300 performs the function of creating the image 880 from the light 800 supplied by the illumination assembly 200.
Imaging assemblies 300 can vary significantly based on the type of technology used to create the image. Display technologies such as DLP (digital light processing), LCD (liquid-crystal display), LCOS (liquid crystal on silicon), and other methodologies can involve substantially different components in the imaging assembly 300.
A modulator 320 (sometimes referred to as a light modulator 320) is the device that modifies or alters the light 800, creating the image 880 that is to be displayed. Modulators 320 can operate using a variety of different attributes of the modulator 320. A reflection-based modulator 322 uses the reflective-attributes of the modulator 320 to fashion an image 880 from the supplied light 800. Examples of reflection-based modulators 322 include but are not limited to the DMD 324 of a DLP display and some LCOS (liquid crystal on silicon) panels 340. A transmissive-based modulator 321 uses the transmissive-attributes of the modulator 320 to fashion an image 880 from the supplied light 800. Examples of transmissive-based modulators 321 include but are not limited to the LCD (liquid crystal display) 330 of an LCD display and some LCOS panels 340. The imaging assembly 300 for an LCOS or LCD system 100 will typically have a combiner cube or some similar device for integrating the different one-color images into a single image 880.
The imaging assembly 300 can also include a wide variety of supporting components 150.
C. Projection Assembly
A projection assembly 400 performs the function of delivering the image 880 created by the imaging assembly 300 to the eye 92 of the viewer 96.
The projection assembly 400 can also include a variety of supporting components 150 as discussed below.
D. Sensor/Tracking Assembly
A sensor/tracking assembly 500 performs the function of capturing information about the user 90, such as an eye-tracking attribute 530, through one or more sensors 510.
E. Tuning Assembly
The focal point 870 of the light 800 can be adjusted at a rate faster than the eye 92 of the viewer 96 can perceive, and that focal point 870 can accurately be driven to a given set-point within its range of operation. The tuning assembly 700 is used to change the focal point 870 of the projected image 880. Changes in the focal point 870 of the projected image 880 effectively change the distance from the eye 92 that the viewer 96 perceives the projected image 880 to be.
The tuning assembly 700 can be incorporated into the system 100 in a variety of different ways, and can be particularly beneficial in a system 100 in which the image 880 is a 3D or stereoscopic image 881 (i.e., a slightly different image is projected to each eye, mimicking the way that our left and right eyes see slightly different views of real-world objects).
In some embodiments, the image 880 presented to a given eye is decomposed into a series of subframes 852 based on the intended distance of objects in the image 880. Then the subframes 852 are presented sequentially to the viewer while the tuning assembly 700 is used to vary the focal point 870 accordingly.
In some embodiments, the system 100 can employ a tracking assembly 500 to capture eye-tracking attributes 530 pertaining to the viewer's interactions with the image 880. An eye tracking assembly 500 can be used to determine where, within the projected image 880, the viewer 96 is looking, and to correlate that to an object/region in the image 880. The system 100 can then use the tuning assembly 700 to adjust the focal point 870 of the entire projected image 880 to match the distance of the object/region that the viewer 96 is focusing on. In some embodiments, both of the approaches described above can be combined into a hybrid approach. In this technique the image 880 is decomposed based on depth regions 860, but the number and extent of each depth region 860 is based on the current gaze direction and/or focus of the viewer 96.
The tuning assembly 700 can be particularly useful in presenting realistic holographic 3D images 881 (including video) to the user 90 of a near-eye display apparatus 114, such as a visor apparatus 115 or VRD visor apparatus 116. Prior art approaches to near-eye displays often suffer from a lack of realism because the focal points 870 of all objects in the displayed image are the same, regardless of whether they are intended to be close to the viewer 96 or far away from the viewer 96. This means that the entire scene is in focus, regardless of where the user 90 is looking. Background and/or foreground images can be blurred to enhance the illusion, but the viewer 96 cannot shift their focus to the blurred objects. The use of the tuning assembly 700 by the system 100 allows for the addition of various focal points 870 within the projected image 880. This means that if the viewer 96 is looking at an object, areas of the scene that are intended to appear closer or farther than that object will be out of focus, in the same way that they would be when viewing a real world scene. If the user shifts their gaze to a different area of the scene, they are able to bring that area into focus. The same overall effect is achieved by all three of the methods presented. The use of the tunable lens 710 or other similar focal modulating device of the tuning assembly 700 is important because it allows the use of non-coherent light sources such as LEDs and simple waveguide structures.
The tuning assembly 700 can be used with a wide variety of different modulators 320 and incorporated into DLP systems 141, LCD systems 142, LCOS systems 143, and other display technologies that utilize micro-mirror arrays, reflective or transmissive liquid crystal displays, and other forms of modulators 320. The tuning assembly 700 can be positioned in a variety of different locations along the light pathway from the light source 210 to the displayed image 880. The tuning assembly 700 can be placed between the illumination source 210 and the image generator 320, between the image generator 320 and the splitter plate 430, or between the splitter plate 430 and the viewer's eye 92. In addition to this configuration, the system 100 may also incorporate additional optical elements that are not shown in order to achieve correct focus and/or mitigate distortion of the images 880.
To create a 3D image, 3D depth information about the scene is required, rather than a simple two-dimensional image. In all three approaches the scene is first decomposed based on whether the image will be presented to the left or right eye (as in conventional stereoscopic 3D), and each image is then decomposed a second time based on the distance within the scene from the viewer. The three approaches are further detailed below.
In this approach the scene 880 is decomposed into a number of depth regions 860. For each image 880 or frame of video, a series of subframes 852 are generated. Each subframe 852 is an image of a particular depth region 860, with all other depth regions 860 removed from the scene. The subframes 852 are presented sequentially to the viewer 96. The tuning assembly 700 is used to modulate the focal point 870 of the projected images according to the depth region 860 that each represents. As an example, a scene may be decomposed into 5 depth regions, with region 1 being from 2-3 feet from the user, region 2 being from 3-5 feet from the user, region 3 being from 5-8 feet from the user, region 4 being from 8-12 feet from the user, and region 5 being everything farther than 12 feet away. When the image of region 1 is presented, the tunable lens is adjusted so that the image has a focal distance of 2.5 feet; next, the image of region 2 is presented and the focal distance is adjusted to 4 feet, and so on for each depth region. The subframes 852 are cycled at a rapid pace so that the user 90 perceives a complete scene with objects at various distances.
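The cycling in this example can be sketched as follows. This is a hypothetical illustration: the 2.5 foot and 4 foot set-points come from the example above, while the remaining set-points, the 90 Hz frame rate, and the component interfaces are assumptions.

```python
# Hypothetical sketch of the five-region example above: present each subframe in
# sequence while the tuning assembly is driven to that region's focal distance.
import time

# 2.5 ft and 4 ft are from the example; the other set-points are assumed midpoints.
FOCAL_DISTANCE_FT = {1: 2.5, 2: 4.0, 3: 6.5, 4: 10.0, 5: 20.0}

def present_frame(subframes, tuning_assembly, projector, frame_rate_hz=90):
    """subframes: {region_number: subframe_image}; cycles every region within one frame period."""
    dwell = (1.0 / frame_rate_hz) / len(subframes)
    for region, subframe in sorted(subframes.items()):
        tuning_assembly.set_focal_distance(FOCAL_DISTANCE_FT[region])  # e.g. 2.5 ft for region 1
        projector.project(subframe)
        time.sleep(dwell)  # cycle rapidly so the viewer perceives one integrated scene
```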
In this approach the scene is decomposed into a number of depth regions 860. The system 100 uses one or more sensors 510 from the tracking assembly 500 to track the viewer's pupils and determine where in the scene the viewer 96 is looking. The system 100 then presents a single frame 880 to the viewer 96, with the focal point 870 set to the depth region 860 of the object that the user is looking at. The projected image contains the full scene, composed of all the depth regions, but only the region 860 that the user 90 is looking in will appear in focus. The tuning assembly 700 is used to adjust the focal point 870 of the projected image 880 to match that of the depth region 860 of focus. The eye 92 is tracked at a high rate, and the image presented to the viewer, together with the focal distance of the image, are updated in real time as the viewer's focus shifts.
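A minimal sketch of this eye-tracked approach is shown below, assuming hypothetical tracker, scene, tuning-assembly, and projector interfaces; none of these names are the disclosed design.

```python
# Minimal sketch of the eye-tracked approach above; all interfaces are assumptions.

def run_eye_tracked_loop(tracker, scene, tuning_assembly, projector):
    while True:
        gaze_xy = tracker.read_gaze()                         # track the viewer's pupils at a high rate
        region = scene.depth_region_at(gaze_xy)               # depth region the viewer is looking into
        tuning_assembly.set_focal_distance(region.distance)   # focus the whole frame at that depth
        projector.project(scene.full_frame())                 # full scene; only the gazed region appears sharp
```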
Alternative approaches can “factor in” eye-tracking information to dynamically determine the number and position of the depth regions used in the scene. For example, if the user is looking at an object that is intended to appear 1 foot from their face, the depth regions may be broken up as 8-10 inches, 10-12 inches, 12-15 inches, 15-20 inches, and >20 inches. But if the viewer is looking at an object 15 feet away, the depth regions might be broken out differently, with depth regions 860 being measured in feet, not inches. The subdivisions can be adjusted each time the viewer shifts their gaze to another object in the scene. This mimics how human beings interact with the real world at different scales of distance at different times.
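The gaze-dependent subdivision described here can be sketched as a simple rule that scales region boundaries with the gazed distance. The inch-scale breakpoints below are taken from the example above; the foot-scale multipliers for farther gazes are assumptions added purely for illustration.

```python
# Hypothetical sketch of gaze-dependent depth-region boundaries. The inch-scale
# breakpoints (8-10, 10-12, 12-15, 15-20, >20 inches) come from the example above;
# the foot-scale multipliers for farther gazes are illustrative assumptions.

def depth_region_bounds_ft(gaze_distance_ft):
    if gaze_distance_ft <= 2.0:
        inches = [8, 10, 12, 15, 20]                        # fine-grained boundaries in inches
        return [round(b / 12.0, 2) for b in inches]         # converted to feet
    # farther gazes: coarser boundaries measured in feet, scaled around the gazed distance
    return [round(gaze_distance_ft * f, 2) for f in (0.6, 0.85, 1.0, 1.3, 1.8)]

print(depth_region_bounds_ft(1.0))   # [0.67, 0.83, 1.0, 1.25, 1.67] (inch-scale regions)
print(depth_region_bounds_ft(15.0))  # [9.0, 12.75, 15.0, 19.5, 27.0] (foot-scale regions)
```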
F. Supporting Components
Light 800 can be a challenging resource to manage. Light 800 moves quickly and cannot be constrained in the same way that most inputs or raw materials can be.
The system 100 can be implemented with respect to a wide variety of different display technologies, including but not limited to DLP.
A. DLP Embodiments
As discussed above, the illumination assembly 200 includes a light source 210 and multiple diffusers 282. The light 800 then passes to the imaging assembly 300. Two TIR prisms 311 direct the light 800 to the DMD 324, the DMD 324 creates an image 880 with that light 800, and the TIR prisms 311 then direct the light 800 embodying the image 880 to the display 410 where it can be enjoyed by one or more users 90.
The tuning lens 710 or other focal modifying component of the tuning assembly 700 can be positioned in a variety of different locations within the light pathway that begins with the light source 210 generating light 800 and ends with the eye 92 of the viewer 96.
The system 100 can be implemented in a wide variety of different configurations and scales of operation. However, the original inspiration for the conception of using subframe sequences 854 that differentiate different areas of the image 880 based on focal points 870 occurred in the context of a VRD visor system 106 embodied as a VRD visor apparatus 116. A VRD visor apparatus 116 projects the image 880 directly onto the eyes of the user 90. The VRD visor apparatus 116 is a device that can be worn on the head of the user 90. In many embodiments, the VRD visor apparatus 116 can include sound as well as visual capabilities. Such embodiments can include multiple modes of operation, such as visual only, audio only, and audio-visual modes. When used in a non-visual mode, the VRD apparatus 116 can be configured to look like ordinary headphones.
A 3 LED light source 213 generates the light 800, which passes through a condensing lens 160 that directs the light 800 to a mirror 151, which in turn reflects the light 800 to a shaping lens 160 prior to the entry of the light 800 into an imaging assembly 300 comprised of two TIR prisms 311 and a DMD 324. The interim image 850 from the imaging assembly 300 passes through another lens 160 that focuses the interim image 850 into a final image 880 that is viewable to the user 90 through the eyepiece 416. The tuning assembly 700 is used in conjunction with the subframe sequence 854 to change the focal points 870 of the light 800 on a depth region 860 by depth region 860 basis before the viewer 96 has access to the image 880.
No patent application can expressly disclose, in words or in drawings, all of the potential embodiments of an invention. Variations of known equivalents are implicitly included. In accordance with the provisions of the patent statutes, the principles, functions, and modes of operation of the systems 100, methods 900, and apparatuses 110 (collectively the “system” 100) are explained and illustrated in certain preferred embodiments. However, it must be understood that the inventive systems 100 may be practiced otherwise than is specifically explained and illustrated without departing from their spirit or scope.
The description of the system 100 provided above and below should be understood to include all novel and non-obvious alternative combinations of the elements described herein, and claims may be presented in this or a later application to any novel non-obvious combination of these elements. Moreover, the foregoing embodiments are illustrative, and no single feature or element is essential to all possible combinations that may be claimed in this or a later application.
The system 100 represents a substantial improvement over prior art display technologies. Just as there are a wide range of prior art display technologies, the system 100 can be similarly implemented in a wide range of different ways. The innovation of altering the subframe illumination sequence 854 within a particular frame 882 can be implemented at a variety of different scales, utilizing a variety of different display technologies, in both immersive and augmenting contexts, and in both one-way (no sensor feedback from the user 90) and two-way (sensor feedback from the user 90) embodiments.
A. Variations of Scale
Display devices can be implemented in a wide variety of different scales. The monster scoreboard at EverBank Field (home of the Jacksonville Jaguars) is a display system that is 60 feet high, 362 feet long, and comprised of 35.5 million LED bulbs. The scoreboard is intended to be viewed simultaneously by tens of thousands of people. At the other end of the spectrum, the GLYPH™ visor by Avegant Corporation is a device that is worn on the head of a user and projects visual images directly into the eyes of a single viewer. Between those ends of the continuum are a wide variety of different display systems.
The system 100 displays visual images 880 to users 90 using enhanced light with reduced coherence. The system 100 can potentially be implemented at a wide variety of different scales.
1. Large Systems
A large system 101 is intended for use by more than one simultaneous user 90. Examples of large systems 101 include movie theater projectors, large screen TVs in a bar, restaurant, or household, and other similar displays. Large systems 101 include the subcategory of giant systems 102, such as stadium scoreboards 102a, Times Square displays 102b, or other large outdoor displays such as billboards along the expressway.
2. Personal Systems
A personal system 103 is an embodiment of the system 100 that is designed for viewing by a single user 90. Examples of personal systems 103 include desktop monitors 103a, portable TVs 103b, laptop monitors 103c, and other similar devices. The category of personal systems 103 also includes the subcategory of near-eye systems 104.
a. Near-Eye Systems
A near-eye system 104 is a subcategory of personal systems 103 where the eyes of the user 90 are within about 12 inches of the display. Near-eye systems 104 include tablet computers 104a, smart phones 104b, and eye-piece applications 104c such as cameras, microscopes, and other similar devices. The subcategory of near-eye systems 104 includes a subcategory of visor systems 105.
b. Visor Systems
A visor system 105 is a subcategory of near-eye systems 104 where the portion of the system 100 that displays the visual image 200 is actually worn on the head 94 of the user 90. Examples of such systems 105 include virtual reality visors, Google Glass, and other conventional head-mounted displays 105a. The category of visor systems 105 includes the subcategory of VRD visor systems 106.
c. VRD Visor Systems
A VRD visor system 106 is an implementation of a visor system 105 where visual images 200 are projected directly on the eyes of the user. The technology of projecting images directly on the eyes of the viewer is disclosed in a published patent application titled “IMAGE GENERATION SYSTEMS AND IMAGE GENERATING METHODS” (U.S. Ser. No. 13/367,261) that was filed on Feb. 6, 2012, the contents of which are hereby incorporated by reference. It is anticipated that a VRD visor system 106 is particularly well suited for the implementation of the multiple diffuser 140 approach for reducing the coherence of light 210.
3. Integrated Apparatus
Media components tend to become compartmentalized and commoditized over time. It is possible to envision display devices where an illumination assembly 120 is only temporarily connected to a particular imaging assembly 160. However, in most embodiments, the illumination assembly 120 and the imaging assembly 160 of the system 100 will be permanently combined (at least from the practical standpoint of users 90) into a single integrated apparatus 110.
B. Different Categories of Display Technology
The prior art includes a variety of different display technologies, including but not limited to DLP (digital light processing), LCD (liquid crystal displays), and LCOS (liquid crystal on silicon).
C. Immersion Vs. Augmentation
Some embodiments of the system 100 can be configured to operate either in immersion mode or augmentation mode, at the discretion of the user 90, while other embodiments of the system 100 may possess only a single operating mode 120.
D. Display Only Vs. Display/Detect/Track/Monitor
Some embodiments of the system 100 will be configured only for a one-way transmission of optical information. Other embodiments can provide for capturing information from the user 90 as visual images 880 and potentially other aspects of a media experience are made accessible to the user 90.
E. Media Players—Integrated Vs. Separate
Display devices are sometimes integrated with a media player. In other instances, a media player is totally separate from the display device. By way of example, a laptop computer can include, in a single integrated device, a screen for displaying a movie, speakers for projecting the sound that accompanies the video images, and a DVD or BLU-RAY player for playing the source media off a disk. Such a device is also capable of streaming media content over a network connection.
F. Users—Viewers Vs. Operators
G. Attributes of Media Content
Table 1 below sets forth a list of element numbers, names, and descriptions/definitions.