1. Field of the Invention
The present invention relates to image display, and more particularly, to techniques for mapping an angular viewing area of an image display panel according to viewer position.
2. Description of the Related Art
It is contemplated that image display panels may be designed for enhanced operation based on a detailed knowledge of the viewer's position.
For example, it is contemplated that an image display panel could display three-dimensional (3D) images, without requiring the user to wear special 3D glasses, as described in copending U.S. patent application Ser. No. ______ (Docket No. H00011896), entitled “Directional Display,” filed on the same date and by the same inventor as the present application, the entire contents of which are incorporated herein by reference. This application describes a display panel designed to create a 3D visual effect by precisely aiming different images toward the left and right eyes, respectively, of a viewer. It would be beneficial for such a device to be capable of tracking the precise location of the viewer's eyeballs, so that 3D images are displayed even if the viewer moves within the viewing environment.
However, there are other ways that image display could be enhanced based on knowledge of the precise location of the viewer(s). For instance, it is contemplated that different types of information could be displayed to the viewer based on his/her position in the viewing environment. An example of this is to display location-based information prompting the viewer to move toward a desired location within the viewing environment (e.g., closer to the center). This type of display might be useful in applications where the subject is to be photographed for security, professional portrait, etc.
A camera might be used to identify and track the position of a viewer's eyes using conventional image recognition techniques, which involve video digital signal processing (DSP) procedures. However, such techniques are time consuming and computationally expensive, because they require mathematical transformations of image frames and other types of image processing to account for illumination conditions, etc.
Disclosed embodiments of this application segment the viewing environment of an image display panel into angular regions that correspond to the current positions of the people viewing the display panel.
Particularly, the present invention identifies and tracks pairs of eyes associated with people in the viewing environment. To do this, exemplary embodiments of the invention detect occurrences of the “red-eye” effect in people who are viewing the display panel. Thus, the present invention may include a mechanism (light source) for creating the red-eye effect in the viewers and an image capture device for obtaining image data of the environment. The captured image data may be analyzed to find occurrences of red-eye and, thus, identify the position of a viewer in the display panel environment.
Although the red-eye effect is more commonly associated with flash photography (i.e., in the visible spectrum), the red-eye effect also occurs in the infrared (IR) spectrum. Thus, an embodiment of the present invention may create the red-eye effect by emitting IR light into the room, so as not to interfere with normal viewing of the images. Further, image data of the environment may be captured by an IR camera and analyzed to detect occurrence of red-eye.
In an exemplary embodiment of the invention, the captured image data is analyzed to confirm potential red-eye regions as corresponding to the eyes of a viewer. This may be accomplished by applying a “blink filter.” Specifically, after potential red-eye regions are paired off, the blink filter analyzes each pair of red-eye regions to determine whether they disappear and reappear simultaneously, in accordance with the normal eye blinking of a viewer. Thus, each blinking pair of red-eye regions is confirmed to be a pair of a viewer's eyes.
In a further embodiment, the location of each pair of eyes detected from the captured image data may be mapped to a particular angular region in the viewing environment. For example, each eye of each viewer may be mapped to a particular angular region. Furthermore, this map may be used for driving the image display operation of an image display panel. For example, if the display panel is specially configured for precise directional display, the map may be used for creating a three-dimensional (3D) visual effect by directing slightly different images to the viewer's left and right eyes.
Further aspects and advantages of the present invention will become apparent upon reading the following detailed description in conjunction with the accompanying drawings, which are given by way of illustration only and, thus, are not limitative of the present invention. In these drawings, similar elements are referred to using similar reference numbers, wherein:
Aspects of the invention are more specifically set forth in the accompanying description with reference to the appended figures.
As illustrated in
The control unit 77 of
For instance, if the display panel 80 is configured for precise directional display (as will be described in more detail below), the display driver 116 may control the display panel 80 to generate various images and aim them directly to specific viewers. Thus, the display panel 80 may be configured to display different images to different viewers, based on their positions. Also, in an exemplary embodiment, the control unit 77 may be capable of detecting the precise locations of each eye of the viewer. Thus, by controlling the display panel 80 to aim slightly different images to the viewer's left and right eyes, respectively, the viewer may be able to view three-dimensional (3D) images without needing to wear special eyeglasses or other headgear (as will be described in more detail below).
Referring again to
However, as described above, the display panel 80 may be configured as a directional display, capable of generating images and aiming them in different programmable directions in the viewing environment. Examples of directional display panels 80 are described in copending U.S. patent application Ser. No. ______ (Docket No. H00011896), entitled “Directional Display,” filed on the same date and by the same inventor as the present application, the entire contents of which are incorporated herein by reference. Particularly, as described in detail in the aforementioned copending patent application, a directional display panel 80 may include microscopically small light deflecting devices corresponding to respective pixel positions. Each light deflecting device may be selectively switched between different states for precisely deflecting light (image pixel) in different directions, under the control of electrical, mechanical, and/or magnetic signals. For instance, the light deflecting devices may be implemented using existing Digital Micromirror Device™ (DMD) technology, manufactured by Texas Instruments, or using microfluidic devices described in more detail in the aforementioned copending patent application.
For embodiments in which the display panel 80 is a directional display, it is possible to display 3D images to each viewer. Thus, a concise description of 3D imaging will now be provided. When a person views an object, e.g., in a room, the 3D effect is created because the viewer's left eye sees something slightly different than the right eye at any particular moment. Specifically, when a person looks at the object, the left eye forms a left-eye image IL of the object and the right eye forms another, slightly different, right-eye image IR of the object. The differences between the left-eye image IL and the right-eye image IR can be seen by looking at an object with the left eye while the right eye is covered, and then with the right eye while the left eye is covered. Both images IL and IR are sent to the viewer's brain, which processes them to obtain a 3D image of the object.
Thus, if display panel 80 is a directional display, it may be capable of mimicking the effect of left and right eye imaging by generating two separate images IL and IR to be sent to the viewer's left and right eyes, respectively, using eye positioning information mapped by the control unit 77. If the images IL and IR are transmitted to the respective eyes at nearly the same time, the viewer's brain will process them to create the 3D effect.
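By way of illustration only, the following is a minimal Python sketch of how eye positioning information mapped by the control unit 77 might be turned into a per-eye display schedule. All names here (EyePair, build_3d_schedule, the angular fields) are hypothetical and are not part of the disclosed embodiments; the actual control of a directional panel is described in the copending application.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class EyePair:
        left_angle_deg: float   # angular position of the viewer's left eye (assumed units)
        right_angle_deg: float  # angular position of the viewer's right eye

    def build_3d_schedule(eye_pairs: List[EyePair], image_left, image_right):
        """Aim the left-eye image IL at each left eye and the right-eye
        image IR at each right eye, per the control unit's eye map."""
        schedule: List[Tuple[float, object]] = []
        for pair in eye_pairs:
            schedule.append((pair.left_angle_deg, image_left))    # IL -> left eye
            schedule.append((pair.right_angle_deg, image_right))  # IR -> right eye
        return schedule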
The above description of directional displays and 3D imaging applications is only provided for the purpose of enablement of a particular embodiment. Such description is not meant to limit the present application to the use of directional displays or the application of displaying 3D images.
Referring again to
As described above, the control unit 77 may identify the positions of the eyes of viewers P1, P2, and P3 while the viewers move within the environment. Since the control unit 77 may track position in terms of angular position,
Referring again to
Alternatively, by dynamically tracking the current position and number of viewers, the display panel 80 may be designed to display images tailored to such information. For example, such an application may be used for prompting a viewer to move toward a desired location within the viewing environment (e.g., closer to the center). This might be useful in applications in which the subjects are to be photographed for security, professional portrait, etc. Another use might be to wait until a predetermined number of viewers are present before starting a movie or television program. Also, if the display panel 80 is a directional display, the mapping of the viewing environment could be used for displaying 3D images (described above) or for displaying different types of data to viewers at different locations.
Now, the operation of the control unit 77 and other components of the system illustrated in
Particularly,
According to an exemplary embodiment, the control unit 77 implements a method for individual eye detection using the red-eye effect. The red-eye effect refers to the reflection of light off the retinas at the back of a person's eyes. This commonly occurs in flash photography, where the photographed person's eyes do not have time to adjust to the sudden brightness before the picture is taken. Thus, the person's eyes appear red in the photograph. However, the red-eye effect is present in both the visible and infrared (IR) regions of the spectrum. Thus, the present invention may use the red-eye effect in either the visible or the infrared spectrum to detect the positions of viewers' eyes.
However, it might be preferable to take advantage of the IR red-eye effect, which does not require the flashing of visible light that would otherwise interfere with the viewing of the image. Accordingly, in an exemplary embodiment, the control unit 77 uses the IR red-eye effect to perform a quick and computationally inexpensive mapping of the angular viewing area of display 80, without being noticed by the viewers.
While the red-eye effect is being created in the eyes of viewers in the viewing environment, image data of the viewing environment is captured in real-time by camera 301. This is illustrated in step S10 of
As the light is emitted by source 304, the camera 301 (e.g., IR camera) captures image data of the viewing environment (step S110 in
The camera 301 should be controlled to capture enough image frames so that the red-eye regions can be distinguished from other image elements. This is illustrated in step S120 of
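By way of illustration only, the following minimal Python sketch shows one way the capture step might be realized, assuming a typical viewer blinks spontaneously every few seconds. The capture window, frame rate, and the grab_ir_frame() camera interface are hypothetical design parameters, not values fixed by this application.

    CAPTURE_SECONDS = 10.0  # long enough that each viewer should blink at least once (assumed)
    FRAME_RATE_HZ = 30.0    # assumed camera frame rate

    def capture_frames(grab_ir_frame):
        """Buffer IR frames of the viewing environment while the red-eye
        effect is being created."""
        frames = []
        for _ in range(int(CAPTURE_SECONDS * FRAME_RATE_HZ)):
            frames.append(grab_ir_frame())  # one IR image of the environment
        return frames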
As illustrated by steps S20-S40 of
Particularly, the control unit 77 may obtain each frame of the captured image data and apply standard noise filtration on the image frame (steps S200 and S210 in
If IR image data is captured, the red-eye pixel clusters may appear as bright (nearly white) spots in the image data. Thus, in an exemplary embodiment, the intensity values of the filtered image frame may be inverted, as illustrated by frame 621 of
Thus, the spatial eye filter 106 may identify and tag pixel clusters of a particular color and/or intensity level. Other criteria may also be applied to such pixel clusters, e.g., only those pixel clusters having a contiguous area of an expected size and/or shape may qualify as candidate red-eye regions. The criteria for color, intensity, size, and/or shape may be determined, e.g., by off-line training using a large number of face images exhibiting the red-eye effect.
The control unit 77 may also use various criteria, as well as different types of feature recognition technologies, to detect candidate red-eye regions. Such feature recognition technologies may use, e.g., neural-network techniques, Principal Components Analysis based techniques, etc.
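By way of illustration only, the following is a minimal Python sketch of one possible implementation of the candidate detection described above (noise filtration, inversion, and cluster tagging), using NumPy and SciPy. The intensity threshold and the size bounds are illustrative stand-ins for criteria that, as noted above, may be determined by off-line training.

    import numpy as np
    from scipy import ndimage

    def find_candidate_red_eyes(frame, min_area=4, max_area=200):
        """Return (x, y) centroids of bright spots that could be red-eye
        reflections in an 8-bit grayscale IR frame."""
        smoothed = ndimage.median_filter(frame, size=3)  # standard noise filtration
        inverted = 255 - smoothed                        # near-white red-eyes become the darkest pixels
        mask = inverted < 60                             # keep only the (formerly) brightest spots
        labels, n = ndimage.label(mask)                  # tag contiguous pixel clusters
        candidates = []
        for i in range(1, n + 1):
            ys, xs = np.nonzero(labels == i)
            if min_area <= len(xs) <= max_area:          # expected pupil size (illustrative)
                candidates.append((float(xs.mean()), float(ys.mean())))
        return candidates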
After the candidate red-eye regions are found, the spatial filter 106 may process the candidate eye regions in the image data. Specifically, the spatial filter 106 attempts to find a potential pairing of each candidate red-eye region with another (step S30 in
Of course, spatial filter 106 may apply other criteria to determine potential pairings, e.g., similar pixel intensity or color, etc.
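By way of illustration only, a minimal Python sketch of the pairing step follows, assuming two candidates may be paired when they lie at a plausible interocular pixel distance and at roughly the same height in the image; the numeric bounds are hypothetical.

    import math
    from itertools import combinations

    def pair_candidates(centroids, min_dist=10.0, max_dist=80.0, max_dy=6.0):
        """Pair off candidate red-eye regions that sit at a plausible
        interocular distance and at roughly the same height."""
        pairs = []
        for c1, c2 in combinations(centroids, 2):
            (x1, y1), (x2, y2) = c1, c2
            if (min_dist <= math.hypot(x2 - x1, y2 - y1) <= max_dist
                    and abs(y2 - y1) <= max_dy):
                left, right = sorted([c1, c2])  # order each pair left-to-right
                pairs.append((left, right))
        return pairs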
As potential pairings of candidate red-eye regions are determined for each frame of the captured image data, the control unit 77 may generate a corresponding frame of an eye signature map in which the potential pairings are demarcated. This is illustrated in step S40 of
As shown in
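By way of illustration only, the following minimal Python sketch shows one possible representation of the eye signature map, in which each frame retains only the demarcated pairings; the application does not fix a particular data format, so this structure is an assumption.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    Point = Tuple[float, float]

    @dataclass
    class SignatureFrame:
        # only the demarcated pairings survive; the image content is dropped
        pairs: List[Tuple[Point, Point]] = field(default_factory=list)

    def build_signature_map(frames, detect, pair):
        """Run detection and pairing on every buffered frame."""
        return [SignatureFrame(pairs=pair(detect(f))) for f in frames]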
After each of the buffered frames of image data has been processed to generate the eye signature map, the control unit 77 may verify whether each potential pairing of candidate red-eye regions actually corresponds to a pair of a viewer's eyes. This is shown in step S50 of
Since the potential pairings are verified as eyes based on blinking, the camera 301 should be designed to capture image data of the viewing environment over a long enough time period that each viewer can be expected to blink at least once during this period. Accordingly, this is one criterion for determining whether the camera 301 has captured enough image frames, referring back to step S130 in
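By way of illustration only, the following is a minimal Python sketch of a temporal blink filter, assuming pairings can be matched from frame to frame by the proximity of their midpoints; a pairing is confirmed when it is present, vanishes, and then reappears within the capture window, i.e., both regions blink together.

    import math

    def _midpoint(pair):
        (x1, y1), (x2, y2) = pair
        return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

    def _present(pair, frame_pairs, tol=5.0):
        """True if a pairing with a nearby midpoint appears in this frame."""
        mx, my = _midpoint(pair)
        return any(math.hypot(mx - _midpoint(p)[0], my - _midpoint(p)[1]) <= tol
                   for p in frame_pairs)

    def blink_filter(signature_map):
        """Confirm pairings that disappear and reappear together (a blink)."""
        confirmed = []
        if not signature_map:
            return confirmed
        for pair in signature_map[0].pairs:  # track the pairings seen in frame 0
            presence = [_present(pair, f.pairs) for f in signature_map]
            # a blink: present, then absent, then present again later
            blinked = any(presence[i] and not presence[i + 1] and True in presence[i + 2:]
                          for i in range(len(presence) - 2))
            if blinked:
                confirmed.append(pair)
        return confirmed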
As discussed above, the temporal blink filter 112 verifies whether each potential pairing of candidate red-eye regions in the eye signature map actually corresponds to the eye positions of a viewer. Thus, for each verified set of red-eye regions, the control unit 77 extracts information about the positions of the corresponding eyes so that they can be mapped to the viewing environment.
For example, the control unit 77 performs the necessary calculations, mathematical transformations, etc., on the pixel elements in the eye signature map demarcating a verified pair of red-eyes, to determine the angular position of each of the red-eyes with respect to the central axis. Various techniques for deriving such information from the eye signature map will be readily understood by those of ordinary skill in the art.
Accordingly, the control unit 77 may be designed to segment the angular space of the viewing environment based on the detected positions of the viewers (specifically, their eyes). To do this, the control unit 77 may generate a map (or, alternatively, revise an existing map) of the viewing environment indicating the angular regions of the environment that correspond to the positions of each viewer's eyes. This is illustrated in step S60 of
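By way of illustration only, the following minimal Python sketch converts a confirmed pairing into angular positions and angular regions, assuming a pinhole-camera model in which the horizontal pixel offset from the image center maps to an angle about the central axis. The image width, field of view, and region width are hypothetical parameters.

    import math

    IMAGE_WIDTH = 640    # pixels (assumed)
    HALF_FOV_DEG = 30.0  # half the camera's horizontal field of view (assumed)

    def pixel_to_angle(x):
        """Angle in degrees of a pixel column relative to the central axis,
        under a pinhole-camera model."""
        f = (IMAGE_WIDTH / 2.0) / math.tan(math.radians(HALF_FOV_DEG))  # focal length in pixels
        return math.degrees(math.atan((x - IMAGE_WIDTH / 2.0) / f))

    def map_viewing_environment(confirmed_pairs, region_halfwidth_deg=1.0):
        """Segment the viewing environment into one angular region per eye."""
        regions = []
        for (x1, _), (x2, _) in confirmed_pairs:
            for x in (x1, x2):
                a = pixel_to_angle(x)
                regions.append((a - region_halfwidth_deg, a + region_halfwidth_deg))
        return regions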
The control unit 77 may be configured to periodically repeat the procedure for mapping the locations of viewers' eyes, described above in connection with
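By way of illustration only, the sketches above might be composed into a periodic re-mapping loop as follows; the refresh interval is an assumed design parameter, and on_new_map stands in for whatever consumes the updated map (e.g., the display driver 116).

    import time

    REFRESH_SECONDS = 2.0  # how often the angular map is recomputed (assumed)

    def run_mapping_loop(grab_ir_frame, on_new_map):
        """Repeatedly rebuild the angular map so it tracks moving viewers."""
        while True:
            frames = capture_frames(grab_ir_frame)
            signature_map = build_signature_map(frames,
                                                find_candidate_red_eyes,
                                                pair_candidates)
            confirmed = blink_filter(signature_map)
            on_new_map(map_viewing_environment(confirmed))
            time.sleep(REFRESH_SECONDS)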
Particularly,
Consider, for example, an application in which the display driver 116 in
Further details regarding the specific control and operation of a directional display panel 80 are provided in copending U.S. patent application Ser. No. ______ (Docket No. H00011896), entitled “Directional Display,” filed on the same date and by the same inventor as the present application, the entire contents of which are incorporated herein by reference.
Referring again to
Furthermore, although various components of
With various exemplary embodiments described above, it should be noted that such descriptions are provided for illustration only and, thus, are not meant to limit the present invention, which is defined by the claims below. The present invention is intended to cover any variations or modifications of these embodiments that do not depart from the spirit or scope of the present invention.
For example, although some aspects of the methods and apparatuses disclosed in this application have been described in the context of eye detection, it is contemplated that the principles disclosed in this application might be used for detection and tracking of other objects besides eyes of viewers.
Also, the principles of the present invention are applicable to, and can be incorporated in, a variety of imaging systems and projective displays. The methods and apparatuses disclosed in this application may be implemented in LCDs, light-boxes, backlit advertising panels, theatre screen displays, etc.