Stereoscopic image displays, or 3D displays, have become increasingly popular for use in, for example, home televisions, movie theaters, portable display devices, etc. These 3D displays provide an immersive experience for a viewer by allowing the viewer to perceive depth in the displayed images. While some stereo image displays require the use of special eyewear in order to perceive 3D images, autostereoscopic (“autostereo”) displays are 3D displays in which each eye sees a different image without the need for any special eyewear.
Generally, image content for 3D displays is created with the expectation that the viewer will watch the images with their head in a vertical upright position (e.g., with no head roll), and from a position directly in front of the display (e.g., with no oblique viewing position) relative to the screen. However, if the viewer desires to relax their posture and view the 3D images with their head in a non-vertical position (e.g., with head roll), the viewer may perceive a loss of the depth sensation, and may experience image crosstalk, eyestrain, and/or discomfort. Also, if the viewer is at a position other than directly in front of the display (e.g., at an oblique viewing position), the 3D images may appear to shear (or skew) towards the viewer, and may appear distorted.
The above information disclosed in this Background section is only for enhancement of understanding of the background of the invention and therefore it may contain information that does not form the prior art that is already known to a person of ordinary skill in the art.
According to an embodiment of the present invention, a display includes: a sensor coupled to the display to detect a first position corresponding to an eye of a first viewer; and an image renderer coupled to the sensor to adjust a first 3D image to be displayed towards the eye of the first viewer according to the first position.
The sensor may detect a head roll of the first viewer, and the image renderer may adjust the first 3D image according to the head roll of the first viewer.
The sensor may detect an oblique angle between the first position and the display, and the image renderer may apply a keystone distortion to the first 3D image to generate a keystone distorted first 3D image in response to the detecting of the oblique angle.
The sensor may detect a head roll of the first viewer, and the image renderer may adjust the keystone distorted first 3D image according to the head roll of the first viewer.
The sensor may detect a second position corresponding to an eye of a second viewer, and the image renderer may adjust a second 3D image to be displayed towards the eye of the second viewer according to the second position.
The sensor may detect a vertical overlap between the eye of the first viewer and the eye of the second viewer, the eye of the first viewer may be a left eye or a right eye, and the eye of the second viewer may be a right eye when the eye of the first viewer is the left eye, and the eye of the second viewer may be a left eye when the eye of the first viewer is the right eye, and the image renderer may adjust the first 3D image according to the first position, and may adjust the second 3D image according to the second position.
The sensor may detect a head roll of the first viewer and a head roll of the second viewer, and the image renderer may adjust the first 3D image according to the head roll of the first viewer, and may adjust the second 3D image according to the head roll of the second viewer.
The display may further include: an optical layer overlapping the display and arranged to direct light to be emitted by pixels toward the eye of the first viewer to display the first 3D image.
The optical layer may include a lenticular array or a lenticular lens.
Only the pixels that emit light directed toward the eye of the first viewer may be utilized to display the first 3D image.
According to another embodiment of the present invention, a method for adjusting a 3D image includes: detecting, by a sensor, a first position corresponding to an eye of a first viewer; and adjusting, by an image renderer coupled to the sensor, a first 3D image to be displayed towards the eye of the first viewer according to the first position.
The method may further include: detecting, by the sensor, a head roll of the first viewer; and adjusting, by the image renderer, the first 3D image according to the head roll of the first viewer.
The method may further include: detecting, by the sensor, an oblique angle between the first position and a display; and applying, by the image renderer, a keystone distortion to the first 3D image to generate a keystone distorted first 3D image in response to the detecting of the oblique angle.
The method may further include: detecting, by the sensor, a head roll of the first viewer; and adjusting, by the image renderer, the keystone distorted first 3D image according to the head roll of the first viewer.
The method may further include: detecting, by the sensor, a second position corresponding to an eye of a second viewer; and adjusting, by the image renderer, a second 3D image to be displayed towards the eye of the second viewer according to the second position.
The method may further include: detecting, by the sensor, a vertical overlap between the eye of the first viewer and the eye of the second viewer, the eye of the first viewer may be a left eye or a right eye, and the eye of the second viewer may be a right eye when the eye of the first viewer is the left eye, and the eye of the second viewer may be a left eye when the eye of the first viewer is the right eye; and adjusting, by the image renderer, the first 3D image according to the first position, and the second 3D image according to the second position.
The method may further include: detecting, by the sensor, a head roll of the first viewer and a head roll of the second viewer; and adjusting, by the image renderer, the first 3D image according to the head roll of the first viewer, and the second 3D image according to the head roll of the second viewer.
The method may further include: directing, by an optical layer overlapping a display, light being emitted by pixels toward the eye of the first viewer to display the first 3D image.
The optical layer may include a lenticular array or a lenticular lens.
Only the pixels that emit light directed toward the eye of the first viewer may be utilized to display the first 3D image.
The above and other aspects and features of the present invention will become apparent to those skilled in the art from the following detailed description of the example embodiments with reference to the accompanying drawings.
Hereinafter, example embodiments will be described in more detail with reference to the accompanying drawings, in which like reference numbers refer to like elements throughout. The present invention, however, may be embodied in various different forms, and should not be construed as being limited to only the illustrated embodiments herein. Rather, these embodiments are provided as examples so that this disclosure will be thorough and complete, and will fully convey some of the aspects and features of the present invention to those skilled in the art. Accordingly, processes, elements, and techniques that are not necessary to those having ordinary skill in the art for a complete understanding of the aspects and features of the present invention are not described with respect to some of the embodiments of the present invention. Unless otherwise noted, like reference numerals denote like elements throughout the attached drawings and the written description, and thus, descriptions thereof will not be repeated. In the drawings, the relative sizes of elements, layers, and regions may be exaggerated for clarity.
It will be understood that, although the terms “first,” “second,” “third,” etc., may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section described below could be termed a second element, component, region, layer or section, without departing from the spirit and scope of the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present invention. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. Further, the use of “may” when describing embodiments of the present invention refers to “one or more embodiments of the present invention.” Also, the term “exemplary” is intended to refer to an example or illustration.
It will be understood that when an element or layer is referred to as being “on,” “connected to,” “coupled to,” or “adjacent to” another element or layer, it can be directly on, connected to, coupled to, or adjacent to the other element or layer, or one or more intervening elements or layers may be present. However, when an element or layer is referred to as being “directly on,” “directly connected to,” “directly coupled to,” or “immediately adjacent to” another element or layer, there are no intervening elements or layers present.
The discomfort and the degradation of the 3D experience that result from viewing a 3D image with a tilted head (e.g., a head “roll,” as in yaw, pitch, and roll) are primarily due to 3D image content being conventionally designed for horizontally separated eyes that are aligned with the horizontal axis of the display. That is, the separation, or disparity, between a right image and a left image (e.g., a right eye image and a left eye image) of a given 3D image is conventionally designed to be in a horizontal direction such that horizontally disparate points of the right and left images fall within the same lateral plane as the eyes of a viewer with no head roll. In other words, the interocular axis of the viewer (e.g., a line connecting both eyes of the viewer, passing through the center of both eyes, and rotating about a point between both eyes) is parallel to an axis corresponding to the disparity (e.g., positional disparity) of the left image and the right image of the 3D image.
Disparity of a 3D image, as used herein, refers to the difference in physical location on a display between a left image and a right image, which combine to form a 3D image. The right image and the left image are typically similar images, except for a difference in physical locations of the right and left images on a display. The disparity between the left image and the right image includes a direction, for example, the general direction on the display in which the right image is separate from the left image, or vice versa. As discussed above, related-art 3D displays incorporate only a horizontal direction of disparity between right and left images.
The direction of disparity between a left image and a right image may correspond to differences in set reference points between the left image and the right image. For example, the direction of disparity between a left image and a right image may refer to the common direction of disparities between every pixel of the right image and every pixel of the left image.
The disparity between the left image and the right image also includes a magnitude, that is, the amount of separation between the two images. A magnitude of disparity between a left image and a right image of a 3D image may vary throughout the 3D image (e.g., from pixel to pixel), depending on the desired 3D effect of certain points of the 3D image corresponding to the degree of depth that is intended to be conveyed.
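A disparity map of this kind may be estimated directly from a stereo image pair. The following is a minimal sketch using block matching via OpenCV; the library choice and parameters are illustrative assumptions, as this disclosure does not mandate a particular estimation technique.

```python
import cv2

def extract_disparity_map(left_gray, right_gray):
    """Estimate a horizontal disparity map from a rectified stereo pair
    using block matching (one conventional technique; illustrative only).
    Inputs are single-channel (grayscale) images of equal size."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16
    return matcher.compute(left_gray, right_gray).astype('float32') / 16.0
```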
Accordingly, to compensate for the vertical convergence that occurs, and for the failure of the vectors to intersect, during head roll while viewing a 3D display, in some embodiments of the present invention, disparities of 3D images generated by the 3D display are adjusted, thereby reducing the negative effects of head roll associated with related-art 3D display devices.
Referring to
Referring to
Referring to
By adjusting the orientation of the disparities in conjunction with reducing their magnitudes as the degree of head roll increases, not only are vertical convergence eye movement and the associated negative effects reduced, but also image quality is maintained despite the increasing viewer head roll.
In some other embodiments of the present invention, the attenuation of the magnitude of the disparities is such that the disparities are not completely eliminated, but are instead limited to a fraction of the original depth (e.g., 10% of the original depth), thereby retaining some of the depth sensation at the more extreme head roll positions while still decreasing challenges associated with maintaining a desirable image quality.
Referring to
Referring to
To achieve the rotation of disparities within a 3D image as described above with respect to
For each pixel in the left image, the disparity specifies the position of the pixels in the right image, with the exception of occluded points. Thus, each pixel in the right image may be repositioned based on the nominal position in the left image, and based on the disparity estimated from the image pair.
Given a measured disparity between a point shown in the right and left images, the pixels in the right image are repositioned based on the disparity (Δ), the gain factor (γ), and the head roll angle (AR), with reference to the position of the left image. At the conclusion of the operation, all disparities will have the same orientation as that of the interocular axis of the viewer, regardless of the gain factor.
Referring to
Original Disparity: Δ = X1 − Xr1
Disparity Gain Factor: γ = 1 or γ = cos(AR)
Warped Right Image Position: Xr2 = X1 + γΔ·cos(AR); Yr2 = Y1 + γΔ·sin(AR)
Wherein X1 and Y1 represent X-Y coordinates of a pixel of an original left image, Xr1 and Yr1 represent X-Y coordinates of a pixel of an original right image, Xr2 and Yr2 represent X-Y coordinates of a warped/adjusted pixel of a right image, AR represents an angle of head roll of a viewer, and γ represents a disparity gain factor.
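A minimal sketch of the above warp follows, with the sign conventions exactly as defined above; the function name and vectorized array layout are illustrative assumptions.

```python
import numpy as np

def warp_right_from_left(left_xy, disparity, roll_angle, attenuate=True):
    """Compute warped right-image pixel positions (Xr2, Yr2) from
    left-image positions (X1, Y1), per the equations above.
    left_xy:    (N, 2) array of left-image pixel coordinates.
    disparity:  (N,) array of per-pixel original disparities (X1 - Xr1).
    roll_angle: viewer head roll AR, in radians.
    attenuate:  if True, gain factor is cos(AR); otherwise full disparity (1).
    """
    gain = np.cos(roll_angle) if attenuate else 1.0              # γ
    xr2 = left_xy[:, 0] + gain * disparity * np.cos(roll_angle)  # Xr2
    yr2 = left_xy[:, 1] + gain * disparity * np.sin(roll_angle)  # Yr2
    return np.stack([xr2, yr2], axis=1)
```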
The warping operation according to the present embodiment calculates the new right eye view based on the left eye view.
The above equations describe one method of warping a left image to create a new right image according to an embodiment of the present invention. Other embodiments may utilize other formulas to achieve warping.
In cases in which an edge of an object of an image is shifted, the shifting may occlude or unocclude a region behind it. In cases of occlusion, information may be simply discarded as a result. In cases of unocclusion, the missing information may be estimated to avoid holes in the image. Texture in-filling algorithms may be used to fill the unoccluded regions with statistically similar texture as that of the regions abutting the unoccluded regions. The infilling techniques may include texture stretching, statistical texture generation, texture copying, or other techniques known to those skilled in the art.
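A minimal sketch of the texture-stretching variant follows, assuming a boolean mask that marks the unoccluded (hole) pixels; each hole pixel simply copies the nearest valid pixel in its row, which is one of the simplest of the infilling techniques named above.

```python
import numpy as np

def fill_unoccluded(image, hole_mask):
    """Fill hole pixels by copying the nearest valid pixel in the same row,
    preferring the left neighbor (a crude texture-stretching infill).
    image:     (H, W, C) warped image containing holes.
    hole_mask: (H, W) boolean array, True where information is missing.
    """
    filled = image.copy()
    h, w = hole_mask.shape
    for y in range(h):
        for x in range(w):
            if hole_mask[y, x]:
                # scan left, then right, for the nearest valid pixel
                left = x - 1
                while left >= 0 and hole_mask[y, left]:
                    left -= 1
                right = x + 1
                while right < w and hole_mask[y, right]:
                    right += 1
                if left >= 0:
                    filled[y, x] = image[y, left]
                elif right < w:
                    filled[y, x] = image[y, right]
    return filled
```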
Although the warping operation embodied by the above equations is effective in generating a desired magnitude and direction of disparity, the warping may introduce artifacts into an image. After a rotation, some edges of objects may shift and may occlude other content, or may reveal or unocclude portions of the image for which there is no valid information in the original right image.
In cases where a shifted edge occludes other content, it is desirable that these shifted pixels overwrite the occluded values. On the other hand, in situations in which a hole is opened in the image, a variety of techniques known in the art may be utilized to fill in the missing portions of the image, such as texture extrapolation of the unoccluded surface, recruitment of the missing pixel values from the left image, any of the infilling techniques mentioned above, or any other technique known to those skilled in the art.
In the present embodiment, the original disparity Δ is characterized by X1−Xr1, which is the disparity between the X-coordinate of the pixel of the left image and the X-coordinate of the pixel of the right image. This equation simply embodies the concept of disparity as discussed throughout the application, that is, the concept of localized positional differences between the right and left images.
Furthermore, the disparity gain factor (γ) may be 1 or may be cos(AR) depending on whether full disparity is maintained at all head roll angles, or whether the disparity is attenuated according to the degree of head roll, respectively. The concepts of full disparity and attenuated disparity are discussed above with reference to
For example, a person having ordinary skill in the art may attempt to limit disparity to less than 3% of the screen width. For a resolution that is 1920 pixels wide, this would correspond to about 60 pixels. Under an extreme head roll, it is possible to have a region as wide as 60 pixels that has been unoccluded, which presents an opportunity for objectionable artifacts. By throttling back the depth in proportion to the magnitude of the head roll, it is possible to greatly reduce the size of these regions that may be filled in.
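As an illustration of this scaling: with the attenuated gain factor γ = cos(AR), a head roll of AR = 60 degrees halves every disparity, so a worst-case unoccluded band of roughly 58 pixels (3% of a 1920-pixel-wide screen) shrinks to roughly 29 pixels that must be filled in.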
Referring to
In some embodiments of the present invention, the operation for warping a stereoscopic image in response to head roll, as shown in
Those having skill in the art will understand that the above operation for warping/modifying a stereoscopic image in response to head roll according to the embodiment of the present invention shown in
In the embodiment of the present invention shown in
Referring to
Original Disparity: Δ = X1 − Xr1
Disparity Gain Factor: γ = 1 or γ = cos(AR)
Warped Right Image Position: Xr2 = Xr1 − Δ·√[1 + γ² − 2γ·cos(AR) − γ²·sin²(AR)]; Yr2 = Y1 + γΔ·sin(AR)
Wherein X1 and Y1 represent X-Y coordinates of a pixel of an original left image, Xr1 and Yr1 represent X-Y coordinates of a pixel of an original right image, Xr2 and Yr2 represent X-Y coordinates of a warped/adjusted pixel of a right image, AR represents an angle of head roll of a viewer, and γ represents a disparity gain factor.
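Although the radical in the above expression appears complicated, the radicand simplifies: 1 + γ² − 2γ·cos(AR) − γ²·sin²(AR) = 1 − 2γ·cos(AR) + γ²·cos²(AR) = (1 − γ·cos(AR))². The warped X-position may therefore be written equivalently as Xr2 = Xr1 − Δ·(1 − γ·cos(AR)), making explicit that the displacement applied to the original right-image pixel goes to zero as γ·cos(AR) approaches 1, i.e., for small head roll angles.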
The warping operation according to the present embodiment of the present invention calculates the new right-eye view based on the original right-eye view, thereby using less aggressive image warping in instances of small viewer head roll, as compared to some of the embodiments of the present invention shown in
Locations of a left image pixel 600 (X1, Y1) and an original right image pixel 602 (Xr1, Yr1) are shown. Circle 604 represents the full magnitude of the disparity (e.g., the original disparity) between the left image pixel 600 and the original right image pixel 602. During the warping process of the present embodiment, the left image pixel 600 remains static, while the right image pixel 602 is warped about, or moved with respect to, the left image pixel 600 corresponding to the degree of head roll (AR) of a viewer, thereby relocating the original right image pixel 602 to its new position depicted by adjusted right image pixel 606. Additionally, the magnitude of the disparity between the left image pixel 600 and the adjusted right image pixel 606 is attenuated (when compared to the disparity between the left image pixel 600 and the original right image pixel 602) by the disparity gain factor (γ), as illustrated by the adjusted right image pixel 606 not being located on the full magnitude circle 604.
In some embodiments of the present invention, the operation for warping a stereoscopic image in response to head roll, as shown in
Those having skill in the art will understand that the above operation for warping/modifying a stereoscopic image in response to head roll according to the embodiment of the present invention shown in
In the embodiment of the present invention shown in
In another embodiment of the present invention, the image warping may be based predominantly on the right view, but regions from the original left image may be used to fill in texture in unoccluded regions of the warped right image.
Furthermore, the embodiments of the present invention shown in
Referring to
When the optical layer 704 includes a lenticular array, the lenticular array may include a plurality of tightly spaced lenses that are cylindrically or spherically shaped, and that are arranged to repeat one-dimensionally (e.g., in a horizontal direction) or two-dimensionally (e.g., in horizontal and vertical directions).
The image renderer 706 may calculate disparity maps for 3D images displayed on the autostereo display 700. The image renderer 706 may also adjust or compensate the calculated disparity maps to generate 3D images based on the adjusted/compensated disparity maps. The image renderer 706 may calculate the adjusted disparity maps based on detected viewer head roll information and/or oblique viewing position sent to the image renderer 706 from, for example, an optical tracking sensor 708. After the image renderer 706 generates 3D images having adjusted disparities according to the viewer head roll and/or oblique viewing position, the image renderer 706 sends the adjusted 3D images to the display area 702 for viewing.
As will be described in further detail below with reference to
The autostereo display 700 may be coupled to the optical tracking sensor 708. In some embodiments, the optical tracking sensor 708 may be a camera or other imaging device that is used in conjunction with face detection algorithms to detect a viewing position (e.g., corresponding to an oblique viewing position), an orientation of the viewer's head 710 (e.g., corresponding to head roll), position corresponding to the viewer's right eye 712 and left eye 714, and/or a number of different viewers (e.g., multiple viewers). The position corresponding to the viewer's right eye 712 and left eye 714 may be detected in any suitable manner, for example, the actual positions of the right eye 712 and left eye 714 may be detected, or some other feature of the viewer (e.g., nose, mouth, ears, etc.) may be detected and the position of the right eye 712 and left eye 714 may be estimated (e.g., calculated) therefrom.
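One possible realization of such a sensor is sketched below using OpenCV's stock Haar-cascade detectors; the library, cascade files, and parameters are assumptions for illustration, as this disclosure does not prescribe a particular face or eye detection algorithm.

```python
import cv2

FACE = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
EYES = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_eye.xml')

def detect_eye_positions(frame):
    """Detect faces, then eyes within each face, and return eye centers
    (x, y) in frame coordinates; empty list if nothing is detected."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    centers = []
    for (fx, fy, fw, fh) in FACE.detectMultiScale(gray, 1.3, 5):
        roi = gray[fy:fy + fh, fx:fx + fw]
        for (ex, ey, ew, eh) in EYES.detectMultiScale(roi):
            centers.append((fx + ex + ew / 2.0, fy + ey + eh / 2.0))
    return centers
```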
Referring to
In typical eye-tracking autostereo displays, a camera may detect where a viewer is located with respect to a front of the display to ensure that the appropriate right and left views are visible from the respective viewing zones where the right and left eyes are located. For example, the display may allocate right and left images to the pixels in the viewing zones where the right and left eyes are located, respectively.
Referring to
In operation 1002, the head roll angle of the viewer is detected and estimated. The head roll angle may be estimated in a number of ways. For example, the head roll angle may be estimated by using a horizontal reference axis between the viewer's eyes (e.g., the interocular axis) and by determining the head roll according to the degree of rotational displacement of the viewer's interocular axis. The interocular axis may be an axis that laterally intersects the viewer's eyes and that rotates around a center point between the eyes. However, embodiments of the present invention are not limited to the above, as the reference axis and the axis used to measure the degree of head roll from the reference axis may be any suitable measurement locations, such as vertical axes.
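A minimal sketch of this estimate follows, assuming the sensor reports 2D eye coordinates in a frame whose x-axis is parallel to the display's horizontal axis.

```python
import math

def estimate_head_roll(left_eye, right_eye):
    """Estimate the head roll angle AR (radians) as the rotation of the
    interocular axis away from the horizontal reference axis.
    left_eye, right_eye: (x, y) eye positions from the tracking sensor.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.atan2(dy, dx)  # 0 when the eyes are level (no head roll)
```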
At operation 1004, it is determined whether the degree of the head roll angle is greater than a reference head roll angle or degree. If the detected head roll angle is less than or equal to the reference head roll angle, the process bypasses operations 1006, 1008, and 1010. In this case, the display presents the original uncompensated right view and left view images to the right-eye and left-eye viewing zones of the viewer, and thus the original 3D image is displayed to the viewer, with no adjustment.
In other embodiments of the present invention, the estimation of the head roll angle at operation 1002 and the determination of whether or not the head roll angle is greater than a threshold angle at operation 1004 may occur before the extraction of the disparity map at operation 1000. In this alternative embodiment, if the head roll angle is determined to be less than or equal to the threshold angle, the process ends and the original 3D image is displayed to the respective viewing zones of the viewer. On the other hand, if the head roll angle is determined to be greater than the threshold angle, the process advances to extraction of the disparity map (e.g., operation 1000), adjustment of the disparity map according to the head roll (e.g., operation 1006), application of the adjusted disparity map (e.g., operation 1008), and to distribution of the adjusted right and/or left views to the appropriate viewing zones (e.g., operation 1010).
If the estimated head roll angle is greater than the reference head roll angle, the process continues. As an example, the reference head roll angle may be 10 degrees, and if the estimated head roll angle is 10 degrees or less, the process will simply bypass the head roll compensation steps 1006 through 1010. Alternatively, if the estimated head roll angle is greater than 10 degrees, the process proceeds to operation 1006. However, embodiments of the present invention are not limited to the above, as the reference head roll angle may be any angle, or operation 1004 may be omitted altogether, and operation 1002 may directly precede operation 1006.
In operation 1006, the disparity map extracted in operation 1000 is adjusted or compensated according to the calculated head roll angle. Several different image adjustment techniques have been described in detail above, and thus, their description will not be repeated. The adjusted disparity map is then applied to the right and/or left view of the viewer in operation 1008, which corresponds to the right and/or left image, respectively. In other words, the adjustment may be applied to the right view/right image, left view/left image, or may be concurrently applied to both of the left view and the right view.
In operation 1010, the compensated right and/or left images are respectively distributed to the appropriate viewing zones for the right and/or left views. In other words, the compensated or uncompensated right view/right image is distributed to the appropriate viewing zone for the viewer's right eye, the compensated or uncompensated left view/left image is distributed to the appropriate viewing zone for the viewer's left eye, or both the left and right images are compensated and distributed to the appropriate viewing zones for the left and right eyes of the viewer, respectively.
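The flow of operations 1000 through 1010 may be summarized in a single routine; in the sketch below, the injected callables stand in for the disparity extraction, adjustment, warping, and distribution steps described above and are assumptions, not elements of this disclosure.

```python
import math

def render_frame(left_img, right_img, estimate_roll, extract_disparity,
                 adjust_disparity, apply_disparity, distribute,
                 reference_roll=math.radians(10.0)):
    """Head roll compensation pipeline. reference_roll uses the 10-degree
    example threshold discussed above; any angle may be substituted."""
    disparity_map = extract_disparity(left_img, right_img)   # operation 1000
    roll = estimate_roll()                                   # operation 1002
    if abs(roll) <= reference_roll:                          # operation 1004
        distribute(left_img, right_img)                      # bypass: original views
        return
    adjusted = adjust_disparity(disparity_map, roll)         # operation 1006
    warped_right = apply_disparity(left_img, adjusted)       # operation 1008
    distribute(left_img, warped_right)                       # operation 1010
```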
Accordingly, in some embodiments of the present invention, a 3D image whose right and/or left images are compensated for a viewer's head roll may be displayed at the appropriate viewing zone for the viewer of an autostereo display. As will be described in further detail below, the above-described compensation techniques for head roll may be applied to viewers at an oblique viewing position with respect to the viewing surface of the autostereo display, in addition to adjusting the images for the oblique viewing position.
Referring to
For example, referring to
The left image 1202 and the right image 1204 are respectively displayed to a left eye 1208 and a right eye 1210 of the viewer 1206 from the viewing surface of the 3D display 1200, to depict a 3D image 1212 to the viewer 1206. When the viewer 1206 is positioned directly in front of the 3D display 1200, the 3D image 1212 appears, for example, in the shape of a cube or a rectangular cuboid from the viewing surface of the 3D display 1200.
However, as shown in
Accordingly, to compensate for this shearing (or skewing) of the object in the 3D image 1212 when viewing a 3D display from an oblique viewing position, in some embodiments of the present invention, inverse keystone image distortion (“keystone compensation”) is applied to the images and distributed to the appropriate viewing zones of the viewer's eyes positioned at the oblique viewing position. As would be known to those having ordinary skill in the art, inverse keystone image distortion shifts the center of projection for the image from the front surface normal to an oblique viewing position. In the corrected image, appropriate foreshortening is applied to the images in both the vertical and horizontal axes.
Therefore, as shown in
Thus, according to some embodiments of the present invention, the object of the 3D image 1312 may retain its intended geometry, with the front surface of the cube or rectangular cuboid facing the viewer, as shown in
Referring to
At operation 1404, it is determined whether the oblique viewing position is greater than a threshold central viewing position. The threshold central viewing position may include a threshold viewing angle and/or a threshold viewing distance between a desired viewing position (e.g., directly in front) of the viewer and the viewing surface of the display. If the oblique viewing position is less than or equal to the threshold central viewing position, the process bypasses operations 1406 and 1408. In this case, the display presents the original uncompensated right and left view image to the right-eye and left-eye viewing zones, respectively, of the viewer, and thus the original 3D image is displayed to the viewer, with no keystone distortion adjustment.
If the estimated oblique viewing position is greater than the threshold central viewing position, the process continues. For example, the threshold viewing angle may be 10 degrees, and if the estimated oblique viewing angle of the oblique viewing position is 10 degrees or less, the process will present the unaltered images to the respective right and left viewing zones. If the estimated oblique viewing angle is greater than 10 degrees, the process proceeds to operation 1406. However, embodiments of the present invention are not limited to the above, as the threshold oblique viewing angle may be any angle, or operation 1404 may be omitted altogether, and operation 1402 may directly precede operation 1406.
In operation 1406, keystone compensation according to the calculated oblique viewing position is applied to the left and right images, as would be known to a person having ordinary skill in the art. For example, the image renderer may use (or utilize) a projection matrix to re-project the image to a slanted surface with respect to the position of the viewer, an operation that could be implemented by a graphics processor.
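A hedged sketch of such a re-projection is given below; it replaces the full projection-matrix formulation with a simplified trapezoidal (perspective) remap, assumes the viewer is offset horizontally toward the right of the screen, and keeps only the vertical foreshortening of the far edge.

```python
import numpy as np
import cv2  # assumes the opencv-python package

def keystone_compensate(image, oblique_deg):
    """Apply an inverse keystone distortion: the screen edge farther from
    the viewer (here, the left edge) is foreshortened in proportion to
    cos(oblique angle), so the image appears undistorted when viewed
    obliquely. Simplified, illustrative mapping only."""
    h, w = image.shape[:2]
    s = np.cos(np.radians(oblique_deg))  # foreshortening factor
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32([[0, h * (1 - s) / 2], [w, 0],
                      [w, h], [0, h * (1 + s) / 2]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, H, (w, h))
```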
The adjusted images are then distributed to the right and/or left views of the viewer in operation 1408, which correspond to the right and/or left viewing zones, respectively. In other words, the adjusted right view/right image is distributed to the appropriate viewing zone for the viewer's right eye, and the adjusted left view/left image is distributed to the appropriate viewing zone for the viewer's left eye.
Accordingly, in some embodiments of the present invention, keystone compensation may be applied to the left and right views of the 3D images of an object displayed on an autostereo display, so that the object of the 3D image appears to be rotated to face a viewer at an oblique position of the autostereo display, as if the viewer is viewing the 3D image directly in front of the 3D display.
Although the above embodiments have been primarily directed toward use by a single viewer, in other embodiments of the present invention, stereoscopic images may be adjusted in response to multiple viewers' individual head rolls and/or oblique viewing positions when watching a multi-viewer autostereo display.
Referring to
In a typical multi-viewer autostereo display without an eye tracking system, the display will typically set the original left (e.g., A) and right (e.g., H) views to extreme viewing positions and interpolate the views between them. Thus, the sign of the disparity would be correct, though of reduced magnitude, for the second viewer 1502, but would be reversed for the first viewer 1500.
In a multi-viewer autostereo display including an optical tracking sensor 1508 (e.g., a camera), the optical tracking sensor may determine in which viewing zones the eyes of each of the viewers 1500 and 1502 are located, and may display the left and right images to the appropriate left and right viewing zones for each of the viewers 1500 and 1502.
In some embodiments of the present invention, the optical tracking sensor may determine that the viewing position of the first viewer 1500 is within the threshold central viewing position, and thus, a left and right image may be displayed to the H and A viewing zones of the first viewer 1500, respectively (noting that the left eye and the right eye each see the appropriate image), without any adjustment for head roll or keystone compensation. The optical tracking sensor may determine that the viewing position of the second viewer 1502 is at an oblique viewing position λ that is greater than the threshold central viewing position. Accordingly, the left and right images for the second viewer 1502 may be adjusted for keystone compensation corresponding to the oblique viewing angle λ as described above with reference to
In some embodiments of the present invention, the optical tracking sensor may detect the head roll α for the first viewer 1500 and the head roll β for the second viewer 1502. Accordingly, the right and/or left images for the first and second viewers 1500 and 1502 may be respectively adjusted for the detected head rolls α and β, and may be distributed to the appropriate viewing zones, but without keystone compensation.
In some embodiments of the present invention, the right and left images of the second viewer 1502 may be adjusted for keystone compensation according to the oblique viewing angle λ, the right and/or left images may also be adjusted for the head roll β, and the adjusted right and left images may be respectively distributed to viewing zones G and E of the second viewer 1502.
Accordingly, in some embodiments of the present invention, the right and left images for each of the multiple viewers may be adjusted according to each viewer's head roll and/or oblique viewing position.
While the above embodiments of the present invention have been described for use with autostereo displays having a plurality of predetermined (e.g., fixed) viewing zones, as will be described in further detail below, according to other embodiments of the present invention, custom viewing zones may be created around the detected position of the viewer's eyes.
Unlike the autostereo displays described above, light field displays offer an enhanced level of control over the direction of light exiting each pixel of the display. These light field displays do not have predetermined viewing zones in the sense described above, but typically show a continuum of viewing angles of the object in the horizontal direction (or in the horizontal and vertical directions). In other words, light field displays emit light in multiple directions and display various viewing angles of the object as the viewer moves in front of the display (e.g., looks around the object), and thus, information corresponding to the position of the viewer or the location of the viewer's eyes is not typically determined.
According to some embodiments of the present invention, however, an appropriate viewing zone (or sweet spot) may be created around the detected positions of the viewer's eyes, and appropriate pixels (or sub-pixels) may be controlled to emit light towards the position of the viewer's eye. In other words, instead of constructing a full light field to display various viewing angles of the object, in some embodiments of the present invention, a custom viewing zone may be created around the detected positions of the viewer's eyes to deliver a stereo image pair corresponding to the custom viewing zones.
Referring to
The light field display 1600 is coupled to an optical tracking sensor 1606. In some embodiments, the optical tracking sensor 1606 may be a camera or other imaging device that is used in conjunction with face detection algorithms to detect a viewing position of a viewer (e.g., corresponding to an oblique viewing position), an orientation of the viewer's head (e.g., corresponding to head roll), position corresponding to the viewer's right eye and left eye, and/or a number of different viewers (e.g., multiple viewers). The position corresponding to the viewer's right eye and left eye may be detected in any suitable manner, for example, the actual positions of the right eye and left eye may be detected, or some other feature of the viewer (e.g., nose, mouth, ears, etc.) may be detected and the position of the right eye and left eye may be estimated (e.g., calculated) therefrom.
Based on the detected eye position K, the light field display 1600 presents the appropriate image intended for that eye (e.g., left view image or right view image) on the appropriate sub-pixels to create a custom viewing zone. The sub-pixels that are not used for the custom viewing zones may not emit light. Thus, only the sub-pixels that are used for the custom viewing zones may display the corresponding images. However, the present invention is not limited thereto.
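A minimal sketch of this sub-pixel selection follows, assuming a hypothetical geometric model in which the emission direction of every sub-pixel is known; the names and array layout are illustrative.

```python
import numpy as np

def assign_subpixels(emission_dirs, pixel_positions, eye_pos, tol_deg=2.0):
    """For each pixel, pick the sub-pixel whose emission direction points
    closest to the detected eye position K (the custom viewing zone);
    all other sub-pixels remain dark.
    emission_dirs:   (P, S, 2) unit direction vectors, S sub-pixels per pixel.
    pixel_positions: (P, 2) display-plane positions of the pixels.
    eye_pos:         (2,) detected eye position, same coordinate frame.
    Returns (P,) selected sub-pixel indices, or -1 where none qualifies.
    """
    to_eye = eye_pos[None, :] - pixel_positions              # (P, 2)
    to_eye /= np.linalg.norm(to_eye, axis=1, keepdims=True)  # unit vectors
    cos_sim = np.einsum('psk,pk->ps', emission_dirs, to_eye)  # (P, S)
    best = np.argmax(cos_sim, axis=1)
    ok = np.max(cos_sim, axis=1) >= np.cos(np.radians(tol_deg))
    return np.where(ok, best, -1)
```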
As shown in
In some embodiments, head roll of the viewer may be detected by the optical tracking sensor 1606, and the image displayed to the detected right and/or left eyes may be adjusted for the detected head roll and displayed to the appropriate custom viewing zones.
In some embodiments, an oblique viewing position of the viewer may be detected by the optical tracking sensor 1606, and the image to be displayed to the detected left and right eyes may be keystone compensated according to the oblique viewing position of the viewer and displayed to the appropriate custom viewing zones.
In some embodiments, head roll and oblique viewing position of the viewer may be detected by the tracking sensor 1606, and the image displayed to the detected left and right eyes may be appropriately adjusted for the detected head roll and keystone compensated.
Referring to
As shown in
Because each localized lenticular feature 1702 is capable of emitting light in a vertical direction as well as a horizontal direction, the light field display 1700 may create custom viewing zones in the vertical direction as well as in the horizontal direction, unlike the autostereo displays typically having only horizontal viewing zones. Thus, as will be described in further detail below, custom viewing zones may be created for each detected viewer, even when there are vertically offset viewers. These light field displays can also be used to create custom viewing zones at the correct depth for the viewer's eyes, such that the created custom viewing zones have the appropriate horizontal position, vertical position, and depth to stimulate the designated eye.
Referring to
The left and right views for the third and fourth viewers 1818 and 1824 may be adjusted for head roll and/or keystone distortion according to some of the embodiments described above, for example, with reference to
Because the autostereo display 1800 shown in
According to some embodiments of the present invention, the autostereo display may be a display device (e.g., light field display) capable of creating custom viewing zones in both the horizontal and vertical direction (e.g., via a localized lenticular feature having tightly spaced lenses that are arranged two dimensionally). Thus, as shown in
The autostereo display 1900 may also include an optical layer 1903 (e.g., a localized lenticular feature) to control the direction of light emitted from sub-pixels of the autostereo display 1900 in horizontal and vertical directions. Accordingly, the autostereo display 1900 may create custom viewing zones for each detected eye of each detected viewer in both the horizontal and vertical directions.
In other words, a custom viewing zone may be created for the first viewer 1904 around the left eye to show a custom left image 1906 and a custom viewing zone may be created around the right eye to show a custom right image 1908. Both the left image 1906 and the right image 1908 may be keystone distorted according to an oblique viewing angle β, and the right image 1908 may be adjusted for head roll of the first viewer 1904.
A custom viewing zone may be created for the second viewer 1910 around the left eye to show a custom left image 1912 and a custom viewing zone may be created around the right eye to show a custom right image 1914. Both the left image 1912 and the right image 1914 may be keystone distorted according to an oblique viewing angle δ, and the right image 1914 may be adjusted for head roll of the second viewer 1910.
A custom viewing zone may be created for the third viewer 1916 around the left eye to show a custom left image 1918 and a custom viewing zone may be created around the right eye to show a custom right image 1920. Both the left image 1918 and the right image 1920 may be keystone distorted according to an oblique viewing angle α, and the right image 1920 may be adjusted for head roll of the third viewer 1916.
A custom viewing zone may be created for the fourth viewer 1922 around the left eye to show a custom left image 1924 and a custom viewing zone may be created around the right eye to show a custom right image 1926. Both the left image 1924 and the right image 1926 may be keystone distorted according to an oblique viewing angle γ, and the right image 1926 may be adjusted for head roll of the fourth viewer 1922.
Referring to
In operation 2010, the head roll and oblique viewing position for each viewer are detected and estimated. As described above with reference to
In other embodiments of the present invention, the estimation of the head roll angles and oblique viewing positions at operation 2010 may occur before the extraction of the disparity map at operation 2000.
In operation 2020, the disparity map extracted in operation 2000 is adjusted or compensated according to the calculated head roll angle for each of the viewers. Several different image adjustment techniques have been described in detail above, and thus, their description will not be repeated. The adjusted disparity map for each viewer is then respectively applied to the right and/or left views for each of the viewers in operation 2030 according to their head roll angles. In other words, the adjustment may be applied to the right view/right image, left view/left image, or may be concurrently applied to both of the left view and the right view for each viewer according to their head roll angles.
In operation 2040, keystone distortion according to the calculated oblique viewing position for each of the viewers is respectively applied to the left and right images for each viewer, as would be known to a person having ordinary skill in the art. For example, as described above, each of the left and right images may be remapped such that the images project toward the eyes as they would from a central viewing position.
In operation 2050, the compensated right and/or left images for each viewer are respectively distributed to the appropriate (or custom) viewing zones for the right and/or left views. In other words, the compensated or uncompensated right view/right image is distributed to the appropriate viewing zone for each viewer's right eye, the compensated or uncompensated left view/left image is distributed to the appropriate viewing zone for each viewer's left eye, or both the left and right images are compensated and distributed to the appropriate viewing zones for each viewer's left and right eyes, respectively.
Accordingly, the right and left images for one or more viewers of an autostereo display may be adjusted according to the head roll for each viewer and/or oblique viewing angle for each viewer, such that a loss of depth sensation, image crosstalk, eyestrain, and/or discomfort may be prevented or reduced.
In the above figures and described embodiments, the estimating of the disparity in which the disparity is aligned with the interocular axis is described with respect to a format including a stereoscopic image pair. However, as would be realized by a person having ordinary skill in the art, the present invention is not limited thereto, and other formats may include 2D plus depth, wherein disparity estimation is not required and the depth can be converted directly to the disparity map and the disparity applied in the direction of the head roll. Another possible format is left image plus difference, wherein the right image may be reconstructed prior to estimating the disparity map, after which the right image can be processed according to the above-described methods. In any case, the above described embodiments may be applied to reprocess the stereo image data in any received format from known disparity information.
In the above figures and described embodiments, re-rendering of the right eye image in response to head roll has been shown and described, as it is typical in stereo imaging to assign the left eye image as a key image and the right eye image as a secondary image. However, the image rendering is not limited thereto, and could be performed for the left eye instead, or for both the left and right eyes concurrently (e.g., simultaneously).
Although the present invention has been described with reference to the example embodiments, those skilled in the art will recognize that various changes and modifications to the described embodiments may be performed, all without departing from the spirit and scope of the present invention. Furthermore, those skilled in the various arts will recognize that the present invention described herein will suggest solutions to other tasks and adaptations for other applications. It is the applicant's intention to cover by the claims herein, all such uses of the present invention, and those changes and modifications which could be made to the example embodiments of the present invention herein chosen for the purpose of disclosure, all without departing from the spirit and scope of the present invention. Thus, the example embodiments of the present invention should be considered in all respects as illustrative and not restrictive, with the spirit and scope of the present invention being indicated by the appended claims and their equivalents.
This patent application claims priority to and the benefit of Provisional Application No. 61/907,931, filed Nov. 22, 2013, entitled “COMPENSATION TECHNIQUE FOR HEAD ROLL POSITION IN AUTOSTEREOSCOPIC DISPLAYS” the entire content of which is incorporated herein by reference.