The invention is a system, method, and apparatus (collectively the “system”) for displaying images. More specifically, the invention is a system that reduces the color breakup or rainbow effect in the display of video images.
The “rainbow effect” is a well-known anomaly with respect to digital light processing (DLP). The phenomenon is even described in Wikipedia and illustrated in a video posted on YouTube. The color in a DLP-produced image is traditionally produced by a spinning filter commonly referred to as a color wheel. However, even DLP projectors that no longer use a mechanical color wheel still produce a “rainbow effect” in the displayed images.
The “rainbow effect” has been described as a brief flash of colors when the viewer rapidly looks from side to side on the screen or looks rapidly from the screen to the side of the room. These flashes of color look like small flickering rainbows.
The “rainbow effect” is not a desirable anomaly for viewers. It would be desirable to eliminate or at least further reduce instances of the “rainbow effect”.
The system uses subframe illumination sequences that are not identical to each other in order to eliminate or at least substantially reduce the “color breakup” or “rainbow effect” of conventional DLP projectors. In some instances, the differences between the subframe illumination sequences can be relatively significant. In other instances, there may be only a relatively subtle difference in a single sequence attribute. It only takes one difference in one subframe illumination sequence attribute for two sequences to be non-identical.
Many features and inventive aspects of the system are illustrated in the various drawings described briefly below. All components illustrated in the drawings below and associated with element numbers are named and described in Table 1 provided in the Detailed Description section.
The system utilizes subframe illumination sequences that are not identical to each other. Doing this can eliminate or at least substantially reduce the “rainbow effect” complained of by some viewers. The prior art utilizes identical subframe illumination sequences. This practice originates from the dependence on color wheels, but the practice continues today even though there are alternative mechanisms for imbuing color into a projected image.
A subframe illumination sequence is a sequence of pulses of light (each a pulse) used to create a partial image (a subframe).
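As a purely illustrative sketch (the Python representation and the field names are the editor's assumptions, not part of the disclosure), a subframe illumination sequence can be modeled as an ordered list of pulses, each carrying the attributes discussed in this application:

```python
from dataclasses import dataclass
from typing import List, FrozenSet, Tuple

@dataclass(frozen=True)
class Pulse:
    """One pulse of colored light used to build a partial image (a subframe)."""
    color: str                  # e.g. "red", "green", "blue"
    intensity: float            # relative pulse intensity, 0.0 to 1.0
    duration_us: int            # pulse duration, in microseconds
    gap_after_us: int           # gap before the next pulse, in microseconds
    pixels: FrozenSet[Tuple[int, int]] = frozenset()  # pulsed pixels (color map)

# A subframe illumination sequence is simply an ordered list of pulses.
SubframeSequence = List[Pulse]

red = Pulse(color="red", intensity=1.0, duration_us=2000, gap_after_us=100)
sequence: SubframeSequence = [red]
```

Under this model, two sequences are "identical" exactly when every pulse, compared field by field, is equal.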
A. The Prior Art—Identical Subframe Illumination Sequences
In prior art approaches, even if multiple subframe illumination sequences 854 are formed for a single frame 882, those multiple subframe illumination sequences 854 are identical to each other. Unlike the frames 882 which run from 1 to N, the subframe illumination sequences 854 run from 1 to 1 because they are all identical. Whatever the color order, the intensity of the pulses, the gap (if any) between pulses, and the duration of the pulses, all of the subframe illumination sequences 854 are identical. If multiple sequences 854 are used with respect to a single frame 882, then the pulsed pixels (i.e. color map) are precisely the same for the multiple sequences 854 as well. In other words, in the example of
B. System—Non-Identical Subframe Illumination Sequences
The core innovation of the system 100 is the use of subframe illumination sequences 854 that are not identical to each other. The differences between sequences 854 can be substantial or relatively minor while still serving to eliminate or at least reduce the “rainbow effect”.
That is not to say that all subframe illumination sequence attributes 870 must differ from each other. To the contrary, in many instances, all that may be required is a single deviation in one subframe illumination sequence attribute 870 with respect to one subframe or pulse. Even a single difference between sequences 854 is a departure from the prior art, and a potentially valuable tool in addressing the “rainbow effect”.
For example in
Non-identical subframe sequences 854 mean that there is at least one difference between the collective attributes 870. The difference could be in the color order 871. For example, sequence 1 could have a color order 871 of red-green-blue while sequence 2 could have a color order 871 of blue-green-red, red-blue-green, or some other different color order 871, with all other attributes 870 remaining identical.
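A minimal sketch (hypothetical Python, not the disclosed implementation) of two sequences that are non-identical solely because of color order, with every other attribute held constant:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pulse:
    color: str
    intensity: float = 1.0     # all non-color attributes held identical
    duration_us: int = 2000    # assumed illustrative figure, microseconds
    gap_after_us: int = 0

def make_sequence(color_order):
    """Build a sequence whose pulses share every attribute except color."""
    return [Pulse(color=c) for c in color_order]

seq_1 = make_sequence(["red", "green", "blue"])
seq_2 = make_sequence(["blue", "green", "red"])

# Non-identical solely because of color order; same colors overall.
assert seq_1 != seq_2
assert {p.color for p in seq_1} == {p.color for p in seq_2}
```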
The difference in sequences 854 could pertain to pulse intensity 872. The intensity of the pulses 860 used to create the subframes 852 can vary between pulses, or even over the duration of a single pulse 860.
Gap length 873 (which can also be referred to as gap duration 873) is another potentially useful attribute 870 for variation. Traditional color wheels 240 do not utilize gaps between colors; there are no gaps, and thus the gap lengths are zero. In some prior art approaches, there may be periods of white light or of no light whatsoever. Such periods are “gaps”, and the durations of those periods are gap lengths 873. In some embodiments of the system 100, altering the gap lengths 873 between sequences 854 can be a highly effective tool.
Duration 874 (which can also be referred to as pulse duration 874) refers to the duration of a pulse 860. The variables of pulse intensity 872, gap duration 873, and pulse duration 874 can involve substantial interplay between them.
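That interplay can be sketched numerically (the time figures below are the editor's assumptions, chosen only for illustration): shortening pulse durations 874 frees time that can be spent as gaps 873, while the total duration of each sequence stays fixed.

```python
# Assumed time budget per subframe illumination sequence (~1/120 s), in microseconds.
FRAME_BUDGET_US = 8333

def sequence_time(pulse_durations, gap_lengths):
    """Total time consumed by a sequence: all pulses plus all gaps."""
    return sum(pulse_durations) + sum(gap_lengths)

# Sequence 1: longer pulses, no gaps (gap lengths of zero).
durations_1, gaps_1 = [2778, 2778, 2777], [0, 0, 0]
# Sequence 2: shorter pulses, with the reclaimed time spent as gaps.
durations_2, gaps_2 = [2278, 2278, 2277], [500, 500, 500]

# Both sequences fill the same budget, yet they are non-identical.
assert sequence_time(durations_1, gaps_1) == FRAME_BUDGET_US
assert sequence_time(durations_2, gaps_2) == FRAME_BUDGET_US
```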
The attribute 870 of pulsed pixels 875 (which can also be referred to as a color map) refers to the pixels being pulsed. For example, a first red pulse 860 in one sequence 854 may pulse additional pixels, or conversely fewer pixels, than the corresponding red pulse 860 in another sequence 854.
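A hypothetical sketch (pixel grid and coordinates are assumptions, not part of the disclosure) of two red-pulse color maps that differ by only two pixels, which is already enough to make the sequences non-identical:

```python
# A tiny 4x4 pixel grid; coordinates are (x, y) pairs. Purely illustrative.
map_1 = frozenset((x, y) for x in range(4) for y in range(4))  # all 16 pixels
map_2 = map_1 - {(0, 0), (3, 3)}                               # two fewer pixels

# A difference in the color map of a single pulse suffices for non-identity.
assert map_1 != map_2
assert len(map_1) - len(map_2) == 2
```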
C. Process Flow View
Different embodiments of the system 100 may implement a wide variety of different approaches in differentiating between two or more subframe illumination sequences 854.
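One simple approach among many (a sketch under assumed names, not the disclosed implementation) is to cycle through permutations of a base color order so that consecutive subframe illumination sequences within a frame are never identical:

```python
import itertools

BASE_ORDER = ("red", "green", "blue")

def sequence_orders():
    """Yield a different permutation of the base color order for each
    successive subframe illumination sequence, repeating after all six."""
    return itertools.cycle(itertools.permutations(BASE_ORDER))

orders = sequence_orders()
first, second = next(orders), next(orders)
assert first != second                   # consecutive sequences differ
assert sorted(first) == sorted(second)   # same colors, different order
```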
The system 100 can be described in terms of assemblies of components that perform various functions in support of the operation of the system 100.
A. Illumination Assembly
An illumination assembly 200 performs the function of supplying light 800 to the system 100 so that an image 880 can be displayed. As illustrated in
In many instances, it will be desirable to use a 3 LED lamp as a light source, with one LED designated for each primary color of red, green, and blue.
B. Imaging Assembly
An imaging assembly 300 performs the function of creating the image 880 from the light 800 supplied by the illumination assembly 200. As illustrated in
Imaging assemblies 300 can vary significantly based on the type of technology used to create the image. Display technologies such as DLP (digital light processing), LCD (liquid-crystal display), LCOS (liquid crystal on silicon), and other methodologies can involve substantially different components in the imaging assembly 300.
A modulator 320 (sometimes referred to as a light modulator 320) is the device that modifies or alters the light 800, creating the image 880 that is to be displayed. Modulators 320 can operate using a variety of different attributes of the modulator 320. A reflection-based modulator 322 uses the reflective attributes of the modulator 320 to fashion an image 880 from the supplied light 800. Examples of reflection-based modulators 322 include but are not limited to the DMD 324 of a DLP display and some LCOS (liquid crystal on silicon) panels 340. A transmissive-based modulator 321 uses the transmissive attributes of the modulator 320 to fashion an image 880 from the supplied light 800. Examples of transmissive-based modulators 321 include but are not limited to the LCD (liquid crystal display) 330 of an LCD display and some LCOS panels 340. The imaging assembly 300 for an LCOS or LCD system 100 will typically have a combiner cube or some similar device for integrating the different one-color images into a single image 880.
The imaging assembly 300 can also include a wide variety of supporting components 150.
C. Projection Assembly
As illustrated in
The projection assembly 400 can also include a variety of supporting components 150 as discussed below.
D. Supporting Components
Light 800 can be a challenging resource to manage. Light 800 moves quickly and cannot be constrained in the same way that most inputs or raw materials can be.
E. Process Flow View
The system 100 can be described as the interconnected functionality of an illumination assembly 200, an imaging assembly 300, and a projection assembly 400. The system 100 can also be described in terms of a method 900 that includes an illumination process 910, an imaging process 920, and a projection process 930.
The system 100 can be implemented with respect to a wide variety of different display technologies, including but not limited to DLP.
A. DLP Embodiments
As discussed above, the illumination assembly 200 includes a light source 210 and multiple diffusers 282. The light 800 then passes to the imaging assembly 300. Two TIR prisms 311 direct the light 800 to the DMD 314, the DMD 314 creates an image 880 with that light 800, and the TIR prisms 311 then direct the light 800 embodying the image 880 to the display 410 where it can be enjoyed by one or more users 90.
The system 100 can be implemented in a wide variety of different configurations and scales of operation. However, the original inspiration for the conception of using non-identical subframe illumination sequences 854 occurred in the context of a VRD visor system 106 embodied as a VRD visor apparatus 116. A VRD visor apparatus 116 projects the image 880 directly onto the eyes of the user 90. The VRD visor apparatus 116 is a device that can be worn on the head of the user 90. In many embodiments, the VRD visor apparatus 116 can include sound as well as visual capabilities. Such embodiments can include multiple modes of operation, such as visual only, audio only, and audio-visual modes. When used in a non-visual mode, the VRD apparatus 116 can be configured to look like ordinary headphones.
A 3 LED light source 213 generates the light 800, which passes through a condensing lens 160 that directs the light 800 to a mirror 151, which in turn reflects the light 800 to a shaping lens 160 before the light 800 enters an imaging assembly 300 comprised of two TIR prisms 311 and a DMD 314. The interim image 850 from the imaging assembly 300 passes through another lens 160 that focuses the interim image 850 into a final image 880 that is viewable to the user 90 through the eyepiece 416.
No patent application can expressly disclose, in words or in drawings, all of the potential embodiments of an invention. Variations of known equivalents are implicitly included. In accordance with the provisions of the patent statutes, the principles, functions, and modes of operation of the systems 100, methods 900, and apparatuses 110 (collectively the “system” 100) are explained and illustrated in certain preferred embodiments. However, it must be understood that the inventive systems 100 may be practiced otherwise than is specifically explained and illustrated without departing from its spirit or scope.
The description of the system 100 provided above and below should be understood to include all novel and non-obvious alternative combinations of the elements described herein, and claims may be presented in this or a later application to any novel non-obvious combination of these elements. Moreover, the foregoing embodiments are illustrative, and no single feature or element is essential to all possible combinations that may be claimed in this or a later application.
The system 100 represents a substantial improvement over prior art display technologies. Just as there are a wide range of prior art display technologies, the system 100 can be similarly implemented in a wide range of different ways. The innovation of altering the subframe illumination sequence 854 within a particular frame 882 can be implemented at a variety of different scales, utilizing a variety of different display technologies, in both immersive and augmenting contexts, and in both one-way (no sensor feedback from the user 90) and two-way (sensor feedback from the user 90) embodiments.
A. Variations of Scale
Display devices can be implemented in a wide variety of different scales. The monster scoreboard at EverBank Field (home of the Jacksonville Jaguars) is a display system that is 60 feet high and 362 feet long, composed of 35.5 million LED bulbs. The scoreboard is intended to be viewed simultaneously by tens of thousands of people. At the other end of the spectrum, the GLYPH™ visor by Avegant Corporation is a device that is worn on the head of a user and projects visual images directly into the eyes of a single viewer. Between those ends of the continuum are a wide variety of different display systems.
The system 100 displays visual images 880 to users 90 using light with reduced coherence. The system 100 can potentially be implemented in a wide variety of different scales.
1. Large Systems
A large system 101 is intended for use by more than one simultaneous user 90. Examples of large systems 101 include movie theater projectors, large screen TVs in a bar, restaurant, or household, and other similar displays. Large systems 101 include a subcategory of giant systems 102, such as stadium scoreboards 102a, the Times Square displays 102b, or other large outdoor displays such as billboards off the expressway.
2. Personal Systems
A personal system 103 is an embodiment of the system 100 that is designed for viewing by a single user 90. Examples of personal systems 103 include desktop monitors 103a, portable TVs 103b, laptop monitors 103c, and other similar devices. The category of personal systems 103 also includes the subcategory of near-eye systems 104.
a. Near-Eye Systems
A near-eye system 104 is a subcategory of personal systems 103 where the eyes of the user 90 are within about 12 inches of the display. Near-eye systems 104 include tablet computers 104a, smart phones 104b, and eye-piece applications 104c such as cameras, microscopes, and other similar devices. The subcategory of near-eye systems 104 includes a subcategory of visor systems 105.
b. Visor Systems
A visor system 105 is a subcategory of near-eye systems 104 where the portion of the system 100 that displays the visual image 200 is actually worn on the head 94 of the user 90. Examples of such systems 105 include virtual reality visors, Google Glass, and other conventional head-mounted displays 105a. The category of visor systems 105 includes the subcategory of VRD visor systems 106.
c. VRD Visor Systems
A VRD visor system 106 is an implementation of a visor system 105 where visual images 200 are projected directly on the eyes of the user. The technology of projecting images directly on the eyes of the viewer is disclosed in a published patent application titled “IMAGE GENERATION SYSTEMS AND IMAGE GENERATING METHODS” (U.S. Ser. No. 13/367,261) that was filed on Feb. 6, 2012, the contents of which are hereby incorporated by reference. It is anticipated that a VRD visor system 106 is particularly well suited for the implementation of the multiple diffuser 140 approach for reducing the coherence of light 210.
3. Integrated Apparatus
Media components tend to become compartmentalized and commoditized over time. It is possible to envision display devices where an illumination assembly 120 is only temporarily connected to a particular imaging assembly 160. However, in most embodiments, the illumination assembly 120 and the imaging assembly 160 of the system 100 will be permanently combined (at least from the practical standpoint of users 90) into a single integrated apparatus 110.
B. Different Categories of Display Technology
The prior art includes a variety of different display technologies, including but not limited to DLP (digital light processing), LCD (liquid crystal displays), and LCOS (liquid crystal on silicon).
C. Immersion vs. Augmentation
Some embodiments of the system 100 can be configured to operate either in immersion mode or augmentation mode, at the discretion of the user 90, while other embodiments of the system 100 may possess only a single operating mode 120.
D. Display Only vs. Display/Detect/Track/Monitor
Some embodiments of the system 100 will be configured only for a one-way transmission of optical information. Other embodiments can provide for capturing information from the user 90 while visual images 880 and potentially other aspects of a media experience are made accessible to the user 90. FIG. 1F is a hierarchy diagram that reflects the categories of a one-way system 124 (a non-sensing operating mode 124) and a two-way system 123 (a sensing operating mode 123). A two-way system 123 can include functionality such as retina scanning and monitoring. Users 90 can be identified, the focal point of the eyes 92 of the user 90 can potentially be tracked, and other similar functionality can be provided. In a one-way system 124, there is no sensor or array of sensors capturing information about or from the user 90.
E. Media Players—Integrated vs. Separate
Display devices are sometimes integrated with a media player. In other instances, a media player is totally separate from the display device. By way of example, a laptop computer can include, in a single integrated device, a screen for displaying a movie, speakers for projecting the sound that accompanies the video images, and a DVD or BLU-RAY player for playing the source media off a disk. Such a device is also capable of streaming
F. Users—Viewers vs. Operators
G. Attributes of Media Content
As illustrated in
As illustrated in
Table 1 below sets forth a list of element numbers, names, and descriptions/definitions.
This utility patent application both (i) claims priority to and (ii) incorporates by reference in its entirety, the provisional patent application titled “NEAR-EYE DISPLAY APPARATUS AND METHOD” (Ser. No. 61/924,209) that was filed on Jan. 6, 2014.