The present disclosure relates to a three-dimensional image display apparatus. More specifically, the present disclosure relates to a three-dimensional image display apparatus capable of reducing the feelings of unnaturalness and discomfort caused by the so-called reverse view.
There has been known a variety of three-dimensional image display apparatus each used for implementing binocular vision for an image observer observing two images having disparities. There are two principal methods adopted by the three-dimensional image display apparatus. One of them is an eyeglass method of making use of eyeglasses to separate images having disparities into an image for the left eye and an image for the right eye. The other is a naked-eye method of separating images having disparities into an image for the left eye and an image for the right eye without making use of eyeglasses.
In the case of the three-dimensional image display apparatus adopting the naked-eye method, progress has been made in an effort to put a specific three-dimensional image display apparatus to practical use. The specific three-dimensional image display apparatus is constructed typically by combining an optical separation section and an image display section, which is actually a two-dimensional image display apparatus. In this case, the optical separation section includes a parallax barrier (also referred to as a disparity barrier) or a lens sheet having an array of lenses.
For example, the three-dimensional image display apparatus making use of a parallax barrier as an optical separation section is normally configured from an image display section and the parallax barrier having apertures extending substantially in the vertical direction (also referred to as the longitudinal direction). In this case, the image display section is typically an image display panel having a plurality of pixels laid out in the horizontal direction (also referred to as the lateral direction) and the vertical direction to form a two-dimensional matrix.
The three-dimensional image display apparatus making use of an optical separation section can be typically an apparatus wherein the optical separation section is provided between the image display section and the image observer as shown in FIG. 7 of Japanese Patent Laid-open No. Hei 5-122733. As an alternative, the three-dimensional image display apparatus making use of an optical separation section can also be an apparatus wherein a transmission-type liquid-crystal display panel serves as the image display section and an illumination section is additionally provided, as shown in FIG. 10 of Japanese Patent No. 3565391. In this case, the optical separation section is provided between the image display section and the illumination section.
As shown in
Let the left and right eyes of the image observer be positioned at the view points 1 and 2 respectively. In this case, if the group of pixels denoted by notations L2, L4, L6, L8 and L10 is used for displaying an image for the left eye whereas the group of pixels denoted by notations R1, R3, R5, R7 and R9 is used for displaying an image for the right eye, the observer will recognize the image for the left eye and the image for the right eye as a three-dimensional image. That is to say, when the image observer is present in an area wherein the left eye receives the image for the view point 1 whereas the right eye receives the image for the view point 2, the observer will recognize the image for the left eye and the image for the right eye as a three-dimensional image.
If the image observer moves to a location at which the left eye receives the image for the view point 2 whereas the right eye receives the image for the view point 1, however, the image for the left eye is received by the right eye whereas the image for the right eye is received by the left eye in the so-called reverse-view state. In this state, the image observer conversely perceives the front portion of the observation subject as the back portion of the observation subject and vice versa, and hence feels unnaturalness and discomfort.
Efforts made to reduce unnaturalness and discomfort feelings both of which are caused by the so-called reverse view are described in Japanese Patent Laid-open No. 2000-47139. To put it concretely, Japanese Patent Laid-open No. 2000-47139 discloses a three-dimensional image display apparatus which detects the position of the image observer and, in accordance with the detected position of the image observer, changes the shape of a mask pattern of a light modulator corresponding to the optical separation section. Japanese Patent Laid-open No. 2000-47139 also describes a three-dimensional image display apparatus which detects the position of the image observer and, in accordance with the detected position of the image observer, changes the contents of an image displayed on the image display section.
The three-dimensional image display apparatus having a configuration in which the position of the image observer is detected and, on the basis of the detected position, the image display section and the optical separation section are controlled entails a complicated configuration and complex control, resulting in high cost. In addition, when a plurality of image observers are observing one three-dimensional image display apparatus from different positions, there is further raised a problem that the control of the three-dimensional image display apparatus becomes even more difficult.
It is thus a desire of the present disclosure to provide a three-dimensional image display apparatus capable of reducing unnaturalness and discomfort feelings, both of which are caused by the so-called reverse view, with no difficulty and without entailing a complicated configuration and complex control even if a plurality of image observers are observing the three-dimensional image display apparatus from different positions.
In order to achieve the desire described above, in accordance with a first embodiment of the present disclosure, there is provided a three-dimensional image display apparatus in which an image for each of a plurality of view points in each of a plurality of observation areas can be observed, wherein the three-dimensional image display apparatus displays one or both of a pair of images put in a reverse-view relation in the vicinity of an edge of the observation areas by making use of data different from image data for the view points.
In order to achieve the desire described above, in accordance with a second embodiment of the present disclosure, there is provided a three-dimensional image display apparatus in which an image for each of view points in each of a plurality of observation areas can be observed, wherein the three-dimensional image display apparatus creates one or both of a pair of images put in a reverse-view relation in the vicinity of an edge of the observation areas by displaying pieces of image data having a variety of types on a time-division basis.
The three-dimensional image display apparatuses according to the first and second embodiments of the present disclosure are capable of lowering the degree of the reverse view in the vicinity of an edge of an observation area without detecting the position of the image observer and without controlling the image display section or the like in accordance with the detected position. In addition, even if a plurality of image observers are observing the three-dimensional image display apparatus from different positions, the three-dimensional image display apparatus is capable of reducing the feelings of unnaturalness and discomfort caused by the so-called reverse view.
Embodiments of the present disclosure are explained below by referring to the diagrams. However, implementations of the present disclosure are by no means limited to the embodiments. In addition, a variety of numerical values used in the embodiments and materials for making elements employed in the embodiments are merely typical examples. In the following description, the same elements and elements having the same function are denoted by the same reference numeral and explained only once in order to avoid duplications of explanations. It is to be noted that the following description is divided into chapters arranged as follows.
1. Explanation of a three-dimensional image display apparatus provided by the disclosure, its driving method and general matters
2. Explanations of three-dimensional image display apparatus according to embodiments
3. Operations carried out by the three-dimensional image display apparatus without reverse view
4. First embodiment
5. Second embodiment
6. Third embodiment
7. Fourth embodiment
8. Fifth embodiment
9. Sixth embodiment
10. Seventh embodiment
11. Eighth embodiment
12. Ninth embodiment
13. Tenth embodiment
14. Eleventh embodiment
15. Twelfth embodiment
16. Thirteenth embodiment
17. Fourteenth embodiment (and others)
As a three-dimensional image display apparatus provided by the present disclosure, there is widely used a three-dimensional image display apparatus capable of displaying images for a plurality of points of view on the basis of image data for the points of view and usable for observing images for the points of view in a plurality of observation areas. In this specification, a point of view is also referred to as a view point.
As described above, the three-dimensional image display apparatus according to the first embodiment of the present disclosure displays one or both of a pair of images put in a reverse-view relation in the vicinity of an edge of an observation area by making use of data different from image data for points of view. Thus, it is possible to decrease the absolute value of the magnitude of a disparity between images forming a pair of images put in a reverse-view relation in the vicinity of an edge of an observation area. As a result, it is possible to lower the degree of the reverse view in the vicinity of an edge of an observation area. In this case, the data different from image data for points of view can be configured from data obtained as a result of combining pieces of image data having a variety of types.
From a viewpoint of simplifying the configuration of the three-dimensional image display apparatus, it is desirable to provide a configuration in which each of the pieces of image data having a variety of types is image data for a different point of view. However, each of the pieces of image data having a variety of types is not necessarily limited to such a configuration. For example, it is also possible to provide a configuration in which pieces of image data are generated separately from each other and these generated pieces of data are used as the pieces of image data having a variety of types. In this case, the pieces of image data include image data generated by reworking some or all of image data for a point of view and image data generated for a virtual point of view.
It is possible to provide a configuration in which the image data displayed on the basis of data obtained by combining the pieces of image data having a variety of types is put in an array obtained by alternately laying out components of the pieces of image data having a variety of types to create a stripe state, or a configuration in which the components of the pieces of image data having a variety of types are laid out to form a checker board pattern.
Typical examples of the configuration in which components of an image are alternately laid out to create a stripe state are a configuration in which components of an image are alternately laid out in pixel-column units or pixel units and a configuration in which components of an image are alternately laid out in pixel-column-group units each having a plurality of pixel columns adjacent to each other or alternately laid out in pixel-row-group units each having a plurality of pixel rows adjacent to each other. Also, typical examples of the configuration in which components of an image are laid out to form a checker board pattern are a configuration in which the components of an image are laid out in pixel units to form a checker board pattern and a configuration in which the components of an image are laid out in pixel-group units each having a plurality of pixels to form a checker board pattern.
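Although the disclosure itself presents no program code, the stripe-state and checker-board layouts described above can be sketched as follows. This is a minimal illustration in Python with NumPy; the function names and the choice of alternating in single-pixel-column and single-pixel units are assumptions made purely for illustration.

```python
import numpy as np

def combine_stripe(img_a, img_b):
    """Alternate pixel columns of two equally sized images (stripe state)."""
    out = img_a.copy()
    out[:, 1::2] = img_b[:, 1::2]   # odd-numbered columns taken from the second image
    return out

def combine_checkerboard(img_a, img_b):
    """Alternate pixels of two equally sized images in a checker board pattern."""
    out = img_a.copy()
    rows, cols = np.indices(img_a.shape[:2])
    mask = (rows + cols) % 2 == 1   # pixels whose row + column index is odd
    out[mask] = img_b[mask]
    return out
```

The group-wise variants described above (pixel-column-group, pixel-row-group or pixel-group units) follow in the same manner by replacing the single-index selections with block-wise masks.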
As an alternative, it is also possible to provide the three-dimensional image display apparatus according to the first embodiment of the present disclosure with a configuration in which the data different from the image data for a point of view is data obtained by finding an average of the pieces of image data having a variety of types. In this case, it is desirable to configure each of the pieces of image data having a variety of types from image data for a different point of view. However, each of the pieces of image data having a variety of types is by no means limited to such a configuration. For example, as described above, it is also possible to provide a configuration in which pieces of image data are generated separately from each other and these generated pieces of data are used as the pieces of image data having a variety of types. In this case, the pieces of image data include image data generated by reworking some or all of image data for a point of view and image data generated for a virtual point of view. It is to be noted that the data obtained by finding an average of the pieces of image data having a variety of types implies a set of data obtained by averaging pieces of data for the same pixel. In addition, the word ‘average’ is not limited to an arithmetic average, also referred to as an arithmetic mean. That is to say, the word ‘average’ may also imply a weighted average. In the case of a weighted average, weight coefficients used for computing the weighted average can be properly selected in accordance with the design of the three-dimensional image display apparatus.
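The per-pixel averaging described above, including the weighted case, can likewise be sketched as follows. This is an illustrative Python/NumPy fragment; the function name and argument layout are assumptions, not part of the disclosure.

```python
import numpy as np

def average_views(views, weights=None):
    """Per-pixel (weighted) average of several equally sized view images.

    views   -- sequence of arrays, one per point of view
    weights -- optional weight coefficients; an arithmetic mean is used if omitted
    """
    stack = np.stack([v.astype(float) for v in views])
    return np.average(stack, axis=0, weights=weights)
```

With equal weights this reduces to the arithmetic mean; the weight coefficients would be chosen in accordance with the design of the apparatus, as stated above.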
As an alternative, it is also possible to provide the three-dimensional image display apparatus according to the first embodiment of the present disclosure with a configuration in which the data different from the image data for a point of view is data for another point of view.
As described above, the three-dimensional image display apparatus according to the second embodiment of the present disclosure is capable of displaying an image for every point of view in each of a plurality of observation areas. The three-dimensional image display apparatus forms one or both of a pair of images put in a reverse-view relation in the vicinity of an edge of an observation area by displaying pieces of image data having a variety of types on a time-division basis. In this case, it is desirable to configure each of the pieces of image data having a variety of types from image data for a different point of view. However, each of the pieces of image data having a variety of types is by no means limited to such a configuration. For example, as described above, it is also possible to provide a configuration in which pieces of image data are generated separately from each other and these generated pieces of data are used as the pieces of image data having a variety of types. In this case, the pieces of image data include image data generated by reworking some or all of image data for a point of view and image data generated for a virtual point of view.
In the three-dimensional image display apparatus according to the second embodiment of the present disclosure, the display obtained in an operation carried out on a time-division basis can be configured as a display obtained by performing the so-called progressive scanning or the so-called interlace scanning.
If the three-dimensional image display apparatus is provided with an image display section for displaying a multi-view-point image and an optical separation section, which is used for separating the multi-view-point image to be displayed on the image display section and for allowing an image for each point of view in every observation area to be observed, the three-dimensional image display apparatus can be configured to include the optical separation section provided between the image display section and the image observer or to include the optical separation section provided between the image display section and an illumination section. In the case of the first configuration including the optical separation section provided between the image display section and the image observer, a commonly known display unit can be used as the image display section. Typical examples of the commonly known display unit are a liquid-crystal display panel, an electro luminescence display panel and a plasma display panel. In the case of the second configuration including the optical separation section provided between the image display section and an illumination section, on the other hand, a commonly known transmission-type display panel such as a transmission-type liquid-crystal display panel can be used as the image display section. In addition, the image display section can be a monochrome or color display section.
The configuration of the optical separation section, a position at which the optical separation section is to be installed and other things related to the optical separation section are properly set in accordance with, among others, the specifications of the three-dimensional image display apparatus and the like. If a parallax barrier is selected to serve as the optical separation section, a fixed parallax barrier can be employed or, as an alternative, a dynamically switchable parallax barrier can be used.
The fixed parallax barrier can be created by adoption of a commonly known method making use of a base material made from a commonly known transparent material such as an acrylic resin, a PC (polycarbonate) resin, an ABS resin, PMMA (poly(methyl methacrylate)), PAR (polyarylate resin), PET (polyethylene terephthalate) or glass. Typical examples of the commonly known method are a combination of a photolithographic method and an etching method, a variety of printing methods such as a screen printing method, an ink jet method and a metal mask printing method, a plating method (an electroplating method or an electroless plating method) and a lift-off method. On the other hand, the dynamically switchable parallax barrier can be configured by making use of typically a light valve provided with a liquid-crystal material layer to serve as a valve that can be electrically switched. The type of a material used for making the light valve using a liquid-crystal material layer and the operating mode of the liquid-crystal material layer are not limited in particular. As a matter of fact, in some cases, the liquid-crystal display panel of a monochrome display unit can be used as the dynamically switchable parallax barrier. The size of each aperture of the parallax barrier, the aperture pitch and the like can be properly set in accordance with the specifications of the three-dimensional image display apparatus and the like.
If a lens sheet is used as the optical separation section, the design and structure of the lens sheet are not prescribed in particular. For example, it is possible to make use of a lens sheet formed in an integrated fashion by utilizing a commonly known transparent material described above, or a lens sheet in which a lens array is created by using a light-sensitive resin material or the like on a sheet-shaped base made from the material described above. The optical power of the lens array, the pitch at which the lens array is created and other attributes of the lens array are properly determined in accordance with, among others, the specifications of the three-dimensional image display apparatus and the like.
In the configuration of the three-dimensional image display apparatus including a transmission-type display panel and an illumination section, a widely known illumination section can be used. The configuration of the illumination section is not limited in particular. In general, however, the illumination section can be configured to make use of commonly known members such as a light source, a prism sheet, a diffusion sheet and a light guiding plate.
In embodiments to be described later, a transmission-type color liquid-crystal display panel adopting the active matrix method is used as the image display section, and a fixed parallax barrier is employed as the optical separation section. In addition, in the embodiments, the optical separation section is provided between the image display section and an illumination section. However, implementations of the present disclosure are by no means limited to the embodiments.
The liquid-crystal display panel is typically configured to include a front panel having a first transparent electrode, a rear panel having a second transparent electrode as well as a liquid-crystal material provided between the front panel and the rear panel.
To put it more concretely, the front panel typically includes a first substrate, a first transparent electrode and a polarization film. The first substrate is a substrate made from glass. Also referred to as a common electrode, the first transparent electrode is provided on the inner surface of the first substrate. The first transparent electrode is typically made from ITO (Indium Tin Oxide). Also, the polarization film is provided on the outer surface of the first substrate. In addition, in the case of a color liquid-crystal display panel, the front panel has a configuration in which a color filter is provided on the inner surface of the first substrate and the color filter is covered with an overcoat layer made from an acrylic resin or an epoxy resin. The first transparent electrode is created on the overcoat layer. On the first transparent electrode, an orientation film is created. The layout pattern of the color filter can be a delta layout pattern, a stripe layout pattern, a diagonal layout pattern or a rectangular layout pattern.
On the other hand, to put it more concretely, the rear panel typically includes a second substrate, a switching device, a second transparent electrode and a polarization film. The second substrate is a glass substrate. The switching device is created on the inner surface of the second substrate. The second transparent electrode (referred to as a pixel electrode and typically made from ITO (Indium Tin Oxide)) is controlled by the switching device to enter a conductive or nonconductive state. The polarization film is provided on the outer surface of the second substrate. An orientation film is created on the entire surface including the second transparent electrode. Commonly known members can be used as the variety of members composing the transmission-type liquid-crystal display panel. By the same token, commonly known materials can be used as the variety of liquid-crystal materials composing the transmission-type liquid-crystal display panel. Typical examples of the switching device are a three-terminal device and a two-terminal device. A typical example of the three-terminal device is a TFT (Thin Film Transistor), and typical examples of the two-terminal device are an MIM (Metal Insulator Metal) device, a varistor device and a diode.
It is to be noted that, in the color liquid-crystal display panel, the first and second transparent electrodes are created in areas overlapping each other and an area including a liquid-crystal cell corresponds to a sub-pixel. In addition, a red-color light emitting sub-pixel is configured from a combination of a relevant area and a color filter passing light having a red color. By the same token, a green-color light emitting sub-pixel is configured from a combination of a relevant area and a color filter passing light having a green color. In the same way, a blue-color light emitting sub-pixel is configured from a combination of a relevant area and a color filter passing light having a blue color. The layout pattern of the red-color light emitting sub-pixels, the layout pattern of the green-color light emitting sub-pixels and the layout pattern of the blue-color light emitting sub-pixels match the layout pattern described above as the layout pattern of the color filters.
In addition, it is also possible to provide a configuration in which sub-pixels of one type or a plurality of types are added to the sub-pixels of the three types described above. Typical examples of the additional sub-pixels are a sub-pixel emitting light having a white color to increase the luminance, a sub-pixel emitting light having a supplementary color to enlarge a color reproduction range, a sub-pixel emitting light having a yellow color to enlarge a color reproduction range and a sub-pixel emitting light having yellow and cyan colors to enlarge a color reproduction range.
When notation (M0, N0) denotes a pixel count of M0×N0 for a case in which the image display section is assumed to display an ordinary planar image, the values of the pixel count (M0, N0) are, specifically, VGA (640, 480), S-VGA (800, 600), XGA (1024, 768), APRC (1152, 900), S-XGA (1280, 1024), U-XGA (1600, 1200), HD-TV (1920, 1080) and Q-XGA (2048, 1536). In addition, other values of the pixel count (M0, N0) include (1920, 1035), (720, 480) and (1280, 960). These values of the pixel count (M0, N0) are each a typical image display resolution. However, the values of the pixel count (M0, N0) are by no means limited to the examples given above.
A driving section for driving the image display section can be configured from a variety of circuits such as an image-signal processing section, a timing control section, a data driver and a gate driver. Each of these circuits can be configured by using commonly known circuit devices and the like.
As shown in
The image display section 10 is a section for displaying a multi-view-point image for view points A1 to A9. A driving section 100 is a section for generating multi-view-point image display data on the basis of pieces of image data D1 to D9 for the points of view and supplying the multi-view-point image display data to the image display section 10 in order to drive the image display section 10. Operations carried out by the driving section 100 will be described later in detail by referring to
M×N sub-pixels 12 are laid out in a display area 11 of the image display section 10 to form a matrix having M columns and N rows. M sub-pixels 12 are laid out in the horizontal direction (in the X direction of the figure), and N sub-pixels 12 are laid out in the vertical direction (in the Y direction of the figure). A sub-pixel 12 placed at the intersection of the mth column (where m=1, 2, . . . and M) and the nth row (where n=1, 2, . . . and N) is referred to as an (m, n)th sub-pixel 12 or a sub-pixel 12(m, n). In addition, a sub-pixel 12 on the mth column is referred to as a sub-pixel 12m in some cases.
The image display section 10 is a color liquid-crystal display panel adopting the active matrix method. The sub-pixels 12 are laid out in such an order that a sub-pixel 12 on the first column is a sub-pixel emitting light having a red color, a sub-pixel 12 on the second column is a sub-pixel emitting light having a green color and a sub-pixel 12 on the third column is a sub-pixel emitting light having a blue color. This layout order is repeated for sub-pixels 12 on the fourth and subsequent columns. Generally speaking, a sub-pixel 12 on the mth column is a sub-pixel emitting light having a red color if the remainder of dividing (m−1) by 3 is 0, a sub-pixel 12 on the mth column is a sub-pixel emitting light having a green color if the remainder of dividing (m−1) by 3 is 1, and a sub-pixel 12 on the mth column is a sub-pixel emitting light having a blue color if the remainder of dividing (m−1) by 3 is 2.
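The column-to-color rule stated above (the remainder of dividing (m−1) by 3 selecting red, green or blue) can be expressed directly as follows; the function name below is an assumption made for illustration only.

```python
def subpixel_color(m):
    """Emission color of the sub-pixel 12 on the mth column (m = 1, 2, ...),
    following the repeating red, green, blue layout described above."""
    return ("red", "green", "blue")[(m - 1) % 3]
```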
As described earlier, notation (M0, N0) denotes a pixel count of M0×N0 for a case in which the image display section 10 is assumed to display an ordinary planar image. A typical pixel count is (1920, 1080). In the case of an ordinary planar image, each pixel on the ordinary planar image is a set including three sub-pixels laid out in the horizontal direction, that is, a set including a sub-pixel emitting light having a red color, a sub-pixel emitting light having a green color and a sub-pixel emitting light having a blue color so that the equations M=M0×3 and N=N0 hold true. That is to say, in the case of the pixel count of (1920, 1080), the equations M=5,760 and N=1,080 hold true.
The image display section 10 is configured to include typically a front panel, a rear panel and a liquid-crystal material provided between the front and rear panels. The front panel is a panel provided on a side close to the observation area WA, and the rear panel is a panel provided on a side close to the optical separation section 30. For the sake of drawing simplicity, however,
The optical separation section 30 has a plurality of apertures 31 laid out in the vertical direction to form vertical columns and a plurality of light shielding sections 32 between every two adjacent vertical aperture columns. That is to say, each of the vertical aperture columns consists of a plurality of apertures 31 substantially laid out in the vertical direction (in the Y direction in the figure). An aperture-column count P is the number of the aperture columns described above in the optical separation section 30. The aperture columns are laid out in the horizontal direction (in the X direction in the figure). An aperture 31 on the pth aperture column (where p=1, 2, . . . and P) is referred to as an aperture 31p. As will be described later in detail, the pixel-column count M and the aperture-column count P satisfy the following relation: M≈P×9.
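Combining the relation M≈P×9 stated above with the relation M=M0×3 described above for the RGB sub-pixel layout, the counts can be sketched numerically as follows. This is an illustrative Python fragment; the function name is an assumption, and the HD-TV value is merely the example used in this description.

```python
def panel_geometry(m0, views_per_aperture=9):
    """Illustrative sub-pixel-column and aperture-column counts for an RGB panel.

    m0 -- horizontal pixel count of the planar image (e.g. 1920 for HD-TV)
    """
    m = m0 * 3                   # M = M0 x 3: one R, G and B sub-pixel per pixel
    p = m // views_per_aperture  # M ~ P x 9: nine sub-pixel columns per aperture column
    return m, p
```

For the HD-TV example this gives M=5,760 sub-pixel columns and about 640 aperture columns.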
Every aperture column is basically configured to include N apertures 31. As will be described later, the direction in which apertures 31 are laid out on an aperture column and the Y direction form a small angle. For this reason, an aperture column on an edge of the optical separation section 30 includes apertures 31, the number of which is smaller than N.
The optical separation section 30 is typically made by creating a light sensitive material layer including black-color pigments on a PET film and, then, removing the light sensitive material layer by adoption of a combination of the photolithographic and etching methods in order to leave light shielding sections 32 on the PET film. Portions from which the light sensitive material layer is removed become apertures 31.
It is to be noted that, in
The illumination section 20 is configured to make use of commonly known members such as a light source, a prism sheet, a diffusion sheet and a light guiding plate (these members are shown in none of the figures). Diffusion light passing through the diffusion sheet and the other members is radiated from a light emitting surface 21 of the illumination section 20 to the back surface of the image display section 10. Since the optical separation section 30 blocks some of the light radiated by the illumination section 20, an image to be displayed on the image display section 10 is separated into a plurality of images each provided for a point of view.
When light originating from the illumination section 20 and passing through an aperture 31 of the optical separation section 30 hits the image display section 10, some of the light is reflected by the image display section 10 back to the optical separation section 30 and illuminates it. This reflected light may deteriorate the directivity of the disparity image. In order to solve this problem, a reflection preventing film is provided on the side of the image display section 10 close to the optical separation section 30 or, as an alternative, on the side of the optical separation section 30 close to the image display section 10. If the reflection preventing film is provided on the optical separation section 30, it is desirable to provide the film on only the light shielding sections 32. The configuration of the reflection preventing film is not prescribed in particular; a commonly known reflection preventing film can be used.
The distance between the optical separation section 30 and the image display section 10, the sub-pixel pitch and the aperture pitch are set at values satisfying conditions that allow a desirable three-dimensional image to be observed in the observation area WA determined in the specifications of the three-dimensional image display apparatus 1. The sub-pixel pitch is the pitch of the sub-pixels 12 in the X direction of the figure, and the aperture pitch is the pitch of the apertures 31 in the same direction. These conditions are described in concrete terms as follows.
The number of view points of an image displayed in the three-dimensional image display apparatus according to the embodiments is nine for each of the observation areas WAL, WAC and WAR shown in
As shown in
In
In order to make the explanation to be read by referring to
In the figure, notations ND and RD denote a sub-pixel pitch [mm] and an aperture pitch [mm] respectively. Notation Z1 denotes the distance [mm] between the aperture 31 and the image display section 10, and notation Z2 denotes the distance [mm] between the image display section 10 and each of the observation areas WAL, WAC and WAR. In addition, notation DP denotes the distance [mm] between every two adjacent points of view on each of the observation areas WAL, WAC and WAR.
Notation PW denotes the width of the aperture 31, and notation SW denotes the width of the light shielding section 32. Thus, the equation RD=SW+PW holds true. Qualitatively, the smaller the value of the expression PW/RD=PW/(SW+PW), the better the directivity of the image for each point of view, but the lower the luminance of the observed image. Thus, it is only necessary to set PW/RD at a proper value according to the specifications of the three-dimensional image display apparatus.
Light rays coming from an aperture 31p and passing through sub-pixels 12(m−4, n), 12(m−3, n), . . . and 12(m+4, n) propagate to the view points A1, A2, . . . and A9, respectively, in the center observation area WAC. Conditions for the propagation of the light rays from the aperture 31p to the view points A1, A2, . . . and A9 in the center observation area WAC are discussed as follows. In order to make the discussion easy to understand, the aperture width PW of the aperture 31 is assumed to be sufficiently small, and the discussion focuses attention on the locus of light passing through the center of the aperture 31.
A virtual straight line stretched in the Z direction to pass through the center of the aperture 31p is taken as a reference. Notation X1 denotes the distance between the reference and the center of the sub-pixel 12(m−4, n), and notation X2 denotes the distance between the reference and the view point A1 in the center observation area WAC. In order for light coming from the aperture 31p and passing through the sub-pixel 12(m−4, n) to propagate to the view point A1 in the center observation area WAC, from a homothetic relation, Eq. (1) given below is satisfied.
Z1:X1=(Z1+Z2):X2 (1)
Since X1 and X2 in Eq. (1) given above satisfy the equations X1=4×ND and X2=4×DP respectively, substitution of these equations into Eq. (1) results in Eq. (1′) given as follows:
Z1:4×ND=(Z1+Z2):4×DP (1′)
If Eq. (1′) given above is satisfied, it is geometrically obvious that light rays coming from the aperture 31p and passing through the sub-pixels 12(m−3, n), 12(m−2, n), . . . and 12(m+4, n) also propagate to respectively the view points A2, A3, . . . and A9 in the center observation area WAC.
Light rays coming from an aperture 31p−1 and passing through sub-pixels 12(m−4, n), 12(m−3, n), . . . and 12(m+4, n) propagate to respectively the view points A1, A2, . . . and A9 in the right-side observation area WAR. Conditions for the propagations of the light rays from the aperture 31p−1 to the view points A1, A2, . . . and A9 in the right-side observation area WAR are discussed as follows.
A virtual straight line stretched in the Z direction to pass through the center of the aperture 31p−1 is taken as a reference. Notation X3 denotes the distance between the reference and the center of the sub-pixel 12(m−4, n) whereas notation X4 denotes the distance between the reference and the view point A1 in the right-side observation area WAR. In order for light coming from the aperture 31p−1 and passing through the sub-pixel 12(m−4, n) to propagate to the view point A1 in the observation area WAR, from a homothetic relation, Eq. (2) given below is satisfied.
Z1:X3=(Z1+Z2):X4 (2)
Since X3 and X4 in Eq. (2) given above satisfy the equations X3=RD−X1=RD−4×ND and X4=RD+5×DP respectively, substitution of these equations into Eq. (2) results in Eq. (2′) given as follows:
Z1:(RD−4×ND)=(Z1+Z2):(RD+5×DP) (2′)
If Eq. (2′) given above is satisfied, it is geometrically obvious that light rays coming from the aperture 31p−1 and passing through the sub-pixels 12(m−3, n), 12(m−2, n), . . . and 12(m+4, n) also propagate to the view points A2, A3, . . . and A9, respectively, in the observation area WAR.
Light rays coming from an aperture 31p+1 and passing through sub-pixels 12(m−4, n), 12(m−3, n), . . . and 12(m+4, n) propagate to respectively the view points A1, A2, . . . and A9 in the left-side observation area WAL. Conditions for the propagations of the light rays from the aperture 31p+1 to the view points A1, A2, . . . and A9 in the left-side observation area WAL are obtained by inverting the conditions shown in
Each of the distances Z2 and DP is set at a value determined in advance on the basis of the specifications of the three-dimensional image display apparatus 1. In addition, the sub-pixel pitch ND is determined in accordance with the structure of the image display section 10. The distance Z1 and the aperture pitch RD are expressed by respectively Eqs. (3) and (4) which are derived from Eqs. (1′) and (2′).
Z1=Z2×ND/(DP−ND) (3)
RD=9×DP×ND/(DP−ND) (4)
If the sub-pixel pitch ND of the image display section 10 is 0.175 [mm], the distance Z2 is 3,000 [mm], the distance DP is 65.0 [mm] for example, the distance Z1 is found to be about 8.10 [mm], and the aperture pitch RD is found to be about 1.58 [mm].
It is to be noted that, if the configuration of the three-dimensional image display apparatus 1 is set so that the image observer is capable of observing an image for another point of view when the image observer moves by a distance about equal to half the distance between the left and right eyes of the image observer, the value of the distance DP merely needs to be reduced to half. If the value of the distance DP is reduced to 32.5 [mm], the distance Z1 is found to be about 16.2 [mm], and the aperture pitch RD is found to be about 1.58 [mm].
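The two worked examples above follow directly from Eqs. (3) and (4); the short sketch below, with a hypothetical function name and parameter names, reproduces both results.

```python
def barrier_geometry(nd_mm, z2_mm, dp_mm, n_views=9):
    """Return (Z1, RD) in mm from Eqs. (3) and (4)."""
    z1 = z2_mm * nd_mm / (dp_mm - nd_mm)            # Eq. (3)
    rd = n_views * dp_mm * nd_mm / (dp_mm - nd_mm)  # Eq. (4)
    return z1, rd

# ND = 0.175 mm, Z2 = 3,000 mm, DP = 65.0 mm
z1, rd = barrier_geometry(0.175, 3000.0, 65.0)
print(round(z1, 2), round(rd, 2))  # → 8.1 1.58

# Halving DP roughly doubles Z1 while RD is almost unchanged.
z1h, rdh = barrier_geometry(0.175, 3000.0, 32.5)
print(round(z1h, 1), round(rdh, 2))  # → 16.2 1.58
```

As the second call shows, reducing DP to half trades a larger barrier-to-panel gap Z1 for a finer sampling of view points while leaving the aperture pitch RD essentially unchanged.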
In the three-dimensional image display apparatus 1, a spacer shown in none of the figures is used for separating the image display section 10 and the optical separation section 30 from each other by the distance Z1 described above.
It is to be noted that the distance between the light emitting surface 21 of the illumination section 20 and the optical separation section 30 is not limited in particular. It is only necessary to set the distance between the light emitting surface 21 of the illumination section 20 and the optical separation section 30 at a proper value according to the specifications of the three-dimensional image display apparatus 1.
In the typical configuration described above, the value of the aperture pitch RD is about nine times the value of the sub-pixel pitch ND. Thus, M and P satisfy the relation M ≈ P × 9.
The distance Z1 and the aperture pitch RD are set so that the conditions described above are satisfied. With the conditions satisfied, at each of the view points A1, A2, . . . and A9 in each of the observation areas WAL, WAC and WAR, an image for a view point determined in advance can be observed.
As explained before by referring to
Accordingly, if attention is paid to sub-pixels 12 on three rows adjacent to each other, it is obvious from
Letting the nth row be a row in the middle of the pixel rows, in
First, the pixels composing an image observed at the view point A4 are discussed. The image observed at the view point A4 is configured from sub-pixels each marked by notation A4 in the table shown in
As shown in
Next, the pixels composing an image observed at the view point A5 are discussed. The image observed at the view point A5 is configured from sub-pixels each marked by notation A5 in the table shown in
The pixels 512 are laid out in the same way as the pixels 412 explained above by referring to
As described above, an image observed at the view point A4 is configured to include J×K pixels 412 laid out to form a matrix. By the same token, an image observed at the view point A5 is configured to include J×K pixels 512 laid out to form a matrix.
The explanation of pixels composing an image observed at another point of view is the same as the explanation described above, except that the combination of sub-pixels composing each pixel is different. Thus, the explanation and the arrangement for the other points of view are omitted. It is to be noted that, in the following description, each of the pixels composing an image observed at the view point A1 is referred to as a pixel 112. By the same token, each of the pixels composing an image observed at the view point A2 is referred to as a pixel 212. Likewise, each of the pixels composing an image observed at the view point A8 is referred to as a pixel 812, and each of the pixels composing an image observed at the view point A9 is referred to as a pixel 912.
The above description has explained relations between pixels composing an image for each point of view and sub-pixels composing the image display section. Next, the following description explains multi-view-point image display data used for displaying a multi-view-point image on the image display section.
As shown in
The image data D1
The driving section 100 shown in
As shown in
Thus, the view point toward which light coming from a sub-pixel 12(m, n) placed at the intersection of the mth column and the nth row propagates is referred to as a view point AQ, where suffix Q is an integer in the range 1 to 9. The value of Q is expressed by Eq. (5) given below. In Eq. (5), notation mod (dividend, divisor) denotes the remainder of dividing the dividend by the divisor.
Q=mod(m+n−2,9)+1 (5)
In addition, if the sub-pixel 12(m, n) is one of sub-pixels composing a pixel placed at the intersection of the jth column and the kth row in an image for the view point AQ (where j=1, 2, . . . and J and k=1, 2, . . . and K), the values of j and k are expressed by respectively Eqs. (6) and (7) given below. It is to be noted that, in Eqs. (6) and (7), notation INT (argument) is a function of finding an integer from the argument by truncating the fraction part of the argument.
j=INT([mod(n−1,3)+m−1]/9)+1 (6)
k=INT((n−1)/3)+1 (7)
In addition, a sub-pixel on the mth column is a sub-pixel emitting light having a red color if the remainder of dividing (m−1) by 3 is 0, a sub-pixel on the mth column is a sub-pixel emitting light having a green color if the remainder of dividing (m−1) by 3 is 1, and a sub-pixel on the mth column is a sub-pixel emitting light having a blue color if the remainder of dividing (m−1) by 3 is 2.
Thus, a sub-pixel 12(m, n) placed at the intersection of the mth column and the nth row is associated with red-color display data for the view point AQ if mod (m−1, 3)=0, the sub-pixel 12(m, n) is associated with green-color display data for the view point AQ if mod (m−1, 3)=1, and the sub-pixel 12(m, n) is associated with blue-color display data for the view point AQ if mod (m−1, 3)=2.
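Eqs. (5) to (7) and the color-assignment rule above can be gathered into one sketch; the function name is chosen here for illustration only, and m and n are the 1-based column and row indices of sub-pixel 12(m, n).

```python
def subpixel_mapping(m, n):
    """Map sub-pixel 12(m, n) (1-based indices) to its view point and pixel."""
    q = (m + n - 2) % 9 + 1               # Eq. (5): view point A_Q
    j = ((n - 1) % 3 + m - 1) // 9 + 1    # Eq. (6): column in the image for A_Q
    k = (n - 1) // 3 + 1                  # Eq. (7): row in the image for A_Q
    color = ("red", "green", "blue")[(m - 1) % 3]
    return q, j, k, color

print(subpixel_mapping(1, 1))   # → (1, 1, 1, 'red')
print(subpixel_mapping(2, 1))   # → (2, 1, 1, 'green')
print(subpixel_mapping(10, 4))  # → (4, 2, 2, 'red')
```

The third call illustrates the inclined layout: moving 9 columns to the right and 3 rows down lands on the same view point one pixel over and one pixel down.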
If the effect of the reverse view is not to be reduced, the view points A1 to A9 are associated with image data D1 to image data D9 respectively as they are. On the other hand, the embodiments carry out processing including an operation of properly replacing image data for some points of view with image data for other points of view.
In order to make the explanation easier to understand, this paragraph explains selection of data for a case in which the effect of the reverse view is not to be reduced. If the effect of the reverse view is not to be reduced, a sub-pixel 12(m, n) placed at the intersection of the mth column and the nth row is associated with image data DQ
Due to a relation based on sets each having sub-pixels 12 for which pixels composing an image for a point of view are laid out in an inclined direction, as shown in
By selecting image data in accordance with the procedure described above, it is possible to generate multi-view-point image display data used for displaying a multi-view-point image on the image display section.
Notations D1 to D9 shown in
When the left and right eyes of the image observer are both located in the same observation area, the image observer recognizes the image as a three-dimensional image. For example, assume that the left and right eyes of the image observer are located at the view points A4 and A5 respectively in the observation area WAC shown in
Notations A4 and A5 shown in
The image observer makes use of the left eye to observe an image created by sub-pixels driven by the image data D4 and makes use of the right eye to observe an image created by sub-pixels driven by the image data D5.
The image observer makes use of the left eye to observe an image created by pixels 412 on the basis of image data D4 (1, 1) to image data D4 (J, K) (as shown in
When the left and right eyes of the image observer are located in different observation areas, on the other hand, a reverse-view phenomenon occurs, in which the image for the left eye is observed by the right eye whereas the image for the right eye is observed by the left eye. The image observer perceives an image in which the front and rear portions are swapped with each other. As a result, the image observer feels unnaturalness and discomfort.
For example, if the left eye of the image observer is located at a view point A9 in the left-side observation area WAL shown in
Notations A1 and A9 shown in
The image observer makes use of the left eye to observe an image created by sub-pixels driven by the image data D9 and makes use of the right eye to observe an image created by sub-pixels driven by the image data D1.
The image observer makes use of the left eye to observe an image created by pixels 912 on the basis of image data D9 (1, 1) to image data D9 (J, K) as shown in
A first embodiment implements a three-dimensional image display apparatus according to the first embodiment of the present disclosure and a method for driving the three-dimensional image display apparatus.
In the first embodiment, a pair of images are put in a reverse-view relation in the vicinity of an edge of an observation area. In order to solve this problem, both images of the pair are displayed by making use of data different from the pieces of image data provided for the individual points of view, namely, data obtained by combining pieces of image data of a plurality of types, more concretely, pieces of image data provided for different points of view. In an image displayed on the basis of such combined data, the components of the pieces of image data are alternately laid out to form a stripe pattern.
An outline of the operation carried out by the first embodiment to generate multi-view-point image display data is explained as follows. A plurality of pieces of image data provided for a plurality of different view points are combined in order to generate data DS1 to be described later. To put it more concretely, the pieces of image data are image data D1 and image data D9. Then, a view point A1 is associated with the data DS1 replacing the image data D1. By the same token, a view point A9 is also associated with the data DS1 replacing the image data D9. It is to be noted that the view points A2 to A8 are associated with image data D2 to image data D8 respectively without modifying these pieces of image data. Then, multi-view-point image display data is generated in accordance with the flowchart shown in
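The exact combining equation for the data DS1 appears in a figure not reproduced here; as a hedged illustration, the sketch below assumes the simplest stripe rule, taking whole pixel columns alternately from the image data D1 and D9 by the parity of the column index j. The function name and data layout are hypothetical.

```python
def make_ds1(d1, d9):
    """Stripe combination (assumed rule): pixel columns with odd j come from
    the image data D1 and columns with even j come from D9."""
    return {(j, k): (d1 if j % 2 == 1 else d9)[(j, k)] for (j, k) in d1}

# Tiny 4 x 2 images, each pixel tagged with its source for clarity.
d1 = {(j, k): ("D1", j, k) for j in range(1, 5) for k in range(1, 3)}
d9 = {(j, k): ("D9", j, k) for j in range(1, 5) for k in range(1, 3)}
ds1 = make_ds1(d1, d9)
print(ds1[(1, 1)][0], ds1[(2, 1)][0])  # → D1 D9
```

Whatever the exact rule, the point is that the resulting image interleaves components of both source images, so either eye at the reverse-view boundary sees a mixture rather than a pure image for one view point.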
The image display section 10 is driven to operate on the basis of the multi-view-point image display data generated as described above. By driving the image display section 10 to operate in this way, even if a pair of images are put in a reverse-view relation in the vicinity of an edge of an observation area, both the images of the pair can each be displayed by combining pieces of image data associated with images for a plurality of view points.
As is obvious from an equation shown in
The image data D1 to the image data D9 are supplied to the driving section 100 without modifying these pieces of image data. Then, the driving section 100 generates the data DS1 on the basis of the operation shown in
As shown in
Thus, when the left and right eyes of the image observer are put at the view points A9 and A1 respectively, the image observer recognizes a planar image obtained as a result of superposing the two images for the view points A9 and A1 on each other. As a result, the image observer does not feel the unnaturalness and discomfort caused by a reverse-view phenomenon. Even if a plurality of image observers observe an image displayed on the same three-dimensional image display apparatus from different locations, the unnaturalness and discomfort caused by a reverse-view phenomenon can be reduced for all of them.
Each of the images observed at the view points A1 and A9 includes components of the images for the view points A1 and A9. Thus, when the left and right eyes of the image observer are put at the view points A1 and A2 respectively, the image component for the view point A9 included in the image observed by the left eye and the image for the view point A2 observed by the right eye are put in a reverse-view relation. However, the image observed by the left eye also includes image components for the view point A1, and these components and the image for the view point A2 observed by the right eye are put in a normal three-dimensional view relation. Thus, the image observer does not strongly feel the unnaturalness and discomfort caused by the reverse-view phenomenon described above. In addition, when the left and right eyes of the image observer are put at the view points A8 and A9 respectively, the image for the view point A8 observed by the left eye and the image component for the view point A1 included in the image observed by the right eye are put in a reverse-view relation. However, the image observed by the right eye also includes image components for the view point A9, and these components and the image for the view point A8 observed by the left eye are put in a normal three-dimensional view relation. Thus, the image observer never strongly feels the unnaturalness and discomfort caused by the reverse-view phenomenon described above.
As described above, the image data D1 and the image data D9 are combined in order to generate the data DS1. However, it is also possible to provide a configuration in which the image data D2 and the image data D8 are combined or a configuration in which the image data D3 and the image data D7 are combined. As another alternative, it is also possible to provide a configuration in which data obtained by reworking the image data D1 and data obtained by reworking the image data D9 are typically combined. It is only necessary to properly select a combination of pieces of image data in accordance with the design of the three-dimensional image display apparatus.
In addition, it is also possible to provide a configuration in which three or more pieces of image data with different types are combined in order to generate the data DS1. For example, it is also possible to provide a configuration in which the image data D1, the image data D5 and the image data D9 are combined or a configuration in which the image data D2, the image data D5 and the image data D8 are combined. As an alternative, it is also possible to provide a configuration in which data obtained by reworking the image data D1, data obtained by reworking the image data D5 and data obtained by reworking the image data D9 are typically combined.
A second embodiment is obtained by modifying the first embodiment. In the case of the first embodiment, each of the view points A1 and A9 is associated with the same data DS1. In the case of the second embodiment, on the other hand, the view points A1 and A9 are associated with different pieces of data as follows.
An outline of the operation carried out by the second embodiment to generate multi-view-point image display data is explained as follows. A plurality of pieces of image data provided for a plurality of different view points are combined in order to generate data DS2 to be described later in addition to the data DS1 explained above in the description of the first embodiment. To put it more concretely, the pieces of image data are image data D1 and image data D9. Then, a view point A1 is associated with the data DS1 replacing the image data D1. On the other hand, a view point A9 is associated with the data DS2 replacing the image data D9. It is to be noted that the view points A2 to A8 are associated with image data D2 to image data D8 respectively without modifying these pieces of image data. Then, multi-view-point image display data is generated in accordance with the flowchart shown in
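The relation between the data DS1 and DS2 is defined by equations in figures not reproduced here; a natural reading, assumed in the sketch below, is that DS2 uses the stripe phase complementary to DS1, so the eyes at A1 and A9 see the D1 and D9 components at swapped positions. The function name and data layout are hypothetical.

```python
def make_striped(d1, d9, phase):
    """Vertical-stripe combination with a selectable phase: with phase 0 the
    odd columns come from D1 (data DS1); with phase 1 they come from D9
    (data DS2), so the two data sets interleave at complementary positions."""
    return {(j, k): (d1 if (j + phase) % 2 == 1 else d9)[(j, k)]
            for (j, k) in d1}

d1 = {(j, k): "D1" for j in range(1, 4) for k in range(1, 3)}
d9 = {(j, k): "D9" for j in range(1, 4) for k in range(1, 3)}
ds1 = make_striped(d1, d9, 0)
ds2 = make_striped(d1, d9, 1)
print(ds1[(1, 1)], ds2[(1, 1)])  # → D1 D9
```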
The image display section 10 is driven to operate on the basis of the multi-view-point image display data generated as described above. By driving the image display section 10 to operate in this way, even if a pair of images are put in a reverse-view relation in the vicinity of an edge of an observation area, both the images of the pair can each be displayed by combining pieces of image data associated with images for a plurality of view points.
The method for generating the data DS1 (j, k) has been explained by referring to
As is obvious from comparison of
As shown in
Thus, when the left and right eyes of the image observer are put at the view points A9 and A1 respectively, the image observer recognizes a planar image obtained as a result of superposing two images for the view points A9 and A1 on each other. As a result, the image observer never feels unnaturalness and a discomfort which are caused by a reverse-view phenomenon.
As explained earlier in the description of the first embodiment, it is also possible to provide a configuration in which the image data D2 and the image data D8 are typically combined in order to generate the data DS2. As another alternative, it is also possible to provide a configuration in which three or more pieces of image data of different types are combined in order to generate the data DS2. It is only necessary to properly select a combination of pieces of image data in accordance with the design of the three-dimensional image display apparatus.
A third embodiment also implements a three-dimensional image display apparatus according to the first embodiment of the present disclosure and a method for driving the three-dimensional image display apparatus.
Also in the case of the third embodiment, a pair of images are put in a reverse-view relation in the vicinity of an edge of an observation area. In order to solve this problem, both images of the pair are displayed by making use of data different from the pieces of image data provided for the individual points of view, namely, data obtained by combining pieces of image data of a plurality of types, more concretely, pieces of image data provided for different points of view. In an image displayed on the basis of such combined data, the components of the pieces of image data are laid out to form a checker board pattern.
In the third embodiment, a plurality of pieces of image data provided for a plurality of different view points are combined in order to generate data DC1 to be described later. To put it more concretely, the pieces of image data are image data D1 and image data D9. Then, a view point A1 is associated with the data DC1 replacing the image data D1. By the same token, a view point A9 is also associated with the data DC1 replacing the image data D9. It is to be noted that the view points A2 to A8 are associated with image data D2 to image data D8 respectively without modifying these pieces of image data. Then, multi-view-point image display data is generated in accordance with the flowchart shown in
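The actual equation for the checker board combination appears in a figure not reproduced here; the sketch below assumes the natural rule of selecting D1 or D9 by the parity of j + k. The function name and data layout are hypothetical.

```python
def make_dc1(d1, d9):
    """Checker board combination (assumed rule): the pixel at (j, k) comes
    from D1 when j + k is even and from D9 when j + k is odd."""
    return {(j, k): (d1 if (j + k) % 2 == 0 else d9)[(j, k)]
            for (j, k) in d1}

d1 = {(j, k): "D1" for j in range(1, 4) for k in range(1, 4)}
d9 = {(j, k): "D9" for j in range(1, 4) for k in range(1, 4)}
dc1 = make_dc1(d1, d9)
print(dc1[(1, 1)], dc1[(2, 1)], dc1[(2, 2)])  # → D1 D9 D1
```

Compared with the stripe rule of the first embodiment, the selection here alternates in both the column and the row direction, which is what yields the smoother superposed appearance described below.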
The image display section 10 is driven to operate on the basis of the multi-view-point image display data generated as described above. By driving the image display section 10 to operate in this way, even if a pair of images are put in a reverse-view relation in the vicinity of an edge of an observation area, both the images of the pair can each be displayed by combining pieces of image data associated with images for a plurality of view points.
As is obvious from an equation shown in
The image data D1 to the image data D9 are supplied to the driving section 100 without modifying these pieces of image data. Then, the driving section 100 generates the data DC1 on the basis of the operation shown in
As shown in
Thus, when the left and right eyes of the image observer are put at the view points A9 and A1 respectively, the image observer recognizes a planar image obtained as a result of superposing the two images for the view points A9 and A1 on each other. As a result, the image observer never feels the unnaturalness and discomfort caused by a reverse-view phenomenon. Even if a plurality of image observers observe an image displayed on the same three-dimensional image display apparatus from different locations, the unnaturalness and discomfort caused by a reverse-view phenomenon can be reduced for all of them.
In addition, unlike the first embodiment, the components of the two images are laid out to form a checker board pattern. Thus, the image observer is capable of recognizing a planar image obtained as a result of superposing two images for the two points of view on each other as a smoother image. The operation carried out by the third embodiment to generate multi-view-point image display data is slightly more complicated than the operation carried out by the first embodiment to generate the multi-view-point image display data. However, the third embodiment has a merit that the displayed image can be made smoother.
Also in the case of the third embodiment, each of the images observed at the view points A1 and A9 includes image components for the view points A1 and A9. Thus, the image observer never strongly feels unnaturalness and a discomfort which are caused by a reverse-view phenomenon when the image observer observes an image at the view points A1 and A2 or observes an image at the view points A8 and A9.
As described above, the data DC1 is generated by combining the image data D1 with the image data D9. As explained earlier in the description of the first embodiment, however, it is also possible to provide a configuration in which the image data D2 and the image data D8 are typically combined in order to generate the data DC1. As another alternative, it is also possible to provide a configuration in which three or more pieces of image data of different types are combined in order to generate the data DC1. It is only necessary to properly select a combination of pieces of image data in accordance with the design of the three-dimensional image display apparatus.
A fourth embodiment is obtained by modifying the third embodiment. In the case of the third embodiment, each of the view points A1 and A9 is associated with the same data DC1. In the case of the fourth embodiment, on the other hand, the view points A1 and A9 are associated with different pieces of data as follows.
An outline of the operation carried out by the fourth embodiment to generate multi-view-point image display data is explained as follows. A plurality of pieces of image data provided for a plurality of different view points are combined in order to generate data DC2 to be described later in addition to the data DC1 explained above in the description of the third embodiment. Then, a view point A1 is associated with the data DC1 replacing the image data D1. On the other hand, a view point A9 is associated with the data DC2 replacing the image data D9. It is to be noted that the view points A2 to A8 are associated with image data D2 to image data D8 respectively without modifying these pieces of image data. Then, multi-view-point image display data is generated in accordance with the flowchart shown in
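As with the stripe case, the precise relation between DC1 and DC2 is given in figures not reproduced here; the sketch below assumes they are checker board patterns of complementary phase, which matches the "different phases of the array" described later. The function name and data layout are hypothetical.

```python
def make_checker(d1, d9, phase):
    """Checker board with a selectable phase: phase 0 reproduces the assumed
    DC1 rule (D1 where j + k is even), phase 1 the complementary DC2 rule."""
    return {(j, k): (d1 if (j + k + phase) % 2 == 0 else d9)[(j, k)]
            for (j, k) in d1}

d1 = {(j, k): "D1" for j in range(1, 4) for k in range(1, 4)}
d9 = {(j, k): "D9" for j in range(1, 4) for k in range(1, 4)}
dc1 = make_checker(d1, d9, 0)
dc2 = make_checker(d1, d9, 1)
print(dc1[(1, 1)], dc2[(1, 1)])  # → D1 D9
```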
The image display section 10 is driven to operate on the basis of the multi-view-point image display data generated as described above. By driving the image display section 10 to operate in this way, even if a pair of images are put in a reverse-view relation in the vicinity of an edge of an observation area, both the images of the pair can each be displayed by combining pieces of image data associated with images for a plurality of view points.
The method for generating the data DC1 (j, k) has been explained by referring to
As is obvious from comparison of
As shown in
Thus, when the left and right eyes of the image observer are put at the view points A9 and A1 respectively, the image observer recognizes a planar image obtained as a result of superposing two images for the view points A9 and A1 on each other. As a result, the image observer never feels unnaturalness and a discomfort which are caused by a reverse-view phenomenon. In addition, in the case of the fourth embodiment, the two images observed at the view points A9 and A1 respectively have different phases of the array of the checker board pattern. Thus, the image observer recognizes a planar image more smoothly.
As explained earlier in the description of the first embodiment, also in the case of the fourth embodiment, it is possible to provide a configuration in which the image data D2 and the image data D8 are typically combined in order to generate the data DC2. As another alternative, it is also possible to provide a configuration in which three or more pieces of image data of different types are combined in order to generate the data DC2. It is only necessary to properly select a combination of pieces of image data in accordance with the design of the three-dimensional image display apparatus.
A fifth embodiment also implements a three-dimensional image display apparatus according to the first embodiment of the present disclosure and a method for driving the three-dimensional image display apparatus.
Also in the case of the fifth embodiment, a pair of images are put in a reverse-view relation in the vicinity of an edge of an observation area. In order to solve this problem, both images of the pair are displayed by making use of data different from the pieces of image data provided for the individual points of view, namely, data obtained by computing the average of pieces of image data of a plurality of types, more concretely, pieces of image data provided for different points of view. In the following description, the average is assumed to be an arithmetic average, also referred to as an arithmetic mean.
An outline of the operation carried out by the fifth embodiment to generate multi-view-point image display data is explained as follows. In the fifth embodiment, data Dav is generated on the basis of data found by computing an arithmetic average from a plurality of pieces of image data provided for a plurality of different view points. To put it more concretely, the pieces of image data are image data D1 and image data D9. Then, a view point A1 is associated with the data Dav replacing the image data D1. By the same token, a view point A9 is associated with the data Dav replacing the image data D9. It is to be noted that the view points A2 to A8 are associated with image data D2 to image data D8 respectively without modifying these pieces of image data. Then, multi-view-point image display data is generated in accordance with the flowchart shown in
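Purely by way of illustration, the generation of the data Dav described above can be sketched as follows; the list-based pixel representation, the function names and the toy image values are assumptions made for this sketch and are not part of the disclosure.

```python
# Sketch of the fifth embodiment's data generation: the edge view
# points A1 and A9 are both associated with Dav, the pixel-wise
# arithmetic average of the image data D1 and D9, while the view
# points A2 to A8 keep their own image data unmodified.

def arithmetic_average(*planes):
    """Pixel-wise arithmetic average of several pieces of image data."""
    return [sum(pixels) / len(planes) for pixels in zip(*planes)]

def assign_view_point_data(d):
    """d maps view-point numbers 1..9 to image data (flat pixel lists)."""
    dav = arithmetic_average(d[1], d[9])
    mapping = {n: d[n] for n in range(2, 9)}  # A2..A8 unchanged
    mapping[1] = dav                          # A1: Dav replaces D1
    mapping[9] = dav                          # A9: Dav replaces D9
    return mapping

# Toy example: each image is four pixels, all equal to its view number.
d = {n: [float(n)] * 4 for n in range(1, 10)}
m = assign_view_point_data(d)   # m[1] == m[9] == [5.0, 5.0, 5.0, 5.0]
```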
As shown in
The image data D1 to the image data D9 are supplied to the driving section 100 without modifying these pieces of image data. Then, the driving section 100 generates the data Dav on the basis of the operation shown in
As shown in
Thus, when the left and right eyes of the image observer are put at the view points A9 and A1 respectively, the image observer recognizes a planar image obtained as a result of superposing two images for the view points A9 and A1 on each other. As a result, the image observer never feels unnaturalness and a discomfort which are caused by a reverse-view phenomenon. Even if a plurality of image observers observe an image displayed on the same three-dimensional image display apparatus from different locations, it is possible to reduce, for every one of the observers, the unnaturalness and discomfort which are caused by a reverse-view phenomenon.
The data Dav reflects the values of the image data D1 and the image data D9. Thus, when the left and right eyes of the image observer are put at the view points A1 and A2 respectively, there is a reverse-view relation between the image to be observed by the left eye at the view point A1 and the image to be observed by the right eye at the view point A2. Since the data Dav also reflects the value of the image data D1, however, the image observer never strongly feels unnaturalness and a discomfort which are caused by the reverse-view relation. It is to be noted that, even for a case in which the left and right eyes of the image observer are put at the view points A8 and A9 respectively, the above description basically holds true.
As described above, the data Dav is found by making use of the image data D1 and the image data D9. However, it is also possible to provide a configuration in which the data Dav is found by making use of the image data D2 and the image data D8 or the like. In addition, it is also possible to provide a configuration in which the data Dav is found by making use of the image data D3 and the image data D7 or the like. It is only necessary to properly select, in accordance with the design of the three-dimensional image display apparatus, a combination of pieces of image data to be used for finding the data Dav.
In addition, it is also possible to provide a configuration in which the data Dav is found by making use of three or more pieces of image data. For example, it is possible to provide a configuration in which the data Dav is found by making use of the image data D1, the image data D5 and the image data D9 or the like, or a configuration in which the data Dav is found by making use of the image data D2, the image data D5 and the image data D8 or the like.
A sixth embodiment implements a three-dimensional image display apparatus according to the second embodiment of the present disclosure and a method for driving the three-dimensional image display apparatus.
Also in the case of the sixth embodiment, a pair of images are put in a reverse-view relation in the vicinity of an edge of an observation area. In order to solve this problem, both the images of the pair are created by displaying a plurality of pieces of image data having different types on a time-division basis. The pieces of image data are pieces of image data for different points of view.
An outline of the operation carried out by the sixth embodiment to generate multi-view-point image display data is explained as follows. In the sixth embodiment, an image configured to include a pair of frames having a typical frame frequency of 120 hertz is displayed. The two frames of the pair are referred to as a first half frame and a second half frame respectively. For each of the view points A1 and A9, the first half frame and the second half frame of the frame pair are associated with different pieces of image data. To put it more concretely, the view point A1 is associated with the image data D1 and the image data D9 as the first half frame and the second half frame respectively. On the other hand, the view point A9 is associated with the image data D9 and the image data D1 as the first half frame and the second half frame respectively. It is to be noted that, for the view points A2 to A8, both the first half frame and the second half frame are associated with the image data D2 to the image data D8 respectively without modifying these pieces of image data. Thus, the images for the view points A1 and A9 are each created by displaying a plurality of pieces of image data for a plurality of view points on a time-division basis.
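The time-division assignment described above can be sketched, purely for illustration, as follows; the function name and the string placeholders standing in for image data are assumptions made for this sketch.

```python
# Sketch of the sixth embodiment's half-frame assignment: at the edge
# view points A1 and A9 the image data D1 and D9 are swapped between
# the first and second half frames, so that over one 120-hertz frame
# pair each edge view point shows a superposition of both images.

def half_frames(view_point, d):
    """Return (first_half, second_half) image data for a view point."""
    if view_point == 1:
        return d[1], d[9]
    if view_point == 9:
        return d[9], d[1]
    return d[view_point], d[view_point]  # A2..A8: same data in both halves

d = {n: "D%d" % n for n in range(1, 10)}  # placeholder image data
```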
The switching between the first half frame and the second half frame is carried out at such a speed that the image observer does not perceive the individual images. Thus, the image observer perceives an image obtained as a result of superposing the images of the first half frame and the second half frame on each other due to the effect of a residual-image phenomenon of the perception. As a result, since the two images observed at the view points A9 and A1 respectively are virtually the same image, there is no disparity between the two images. In this way, it is possible to decrease the absolute value of the magnitude of a disparity between the two images included in a pair as two images put in a reverse-view relation.
Thus, when the left and right eyes of the image observer are put at the view points A9 and A1 respectively, the image observer recognizes a planar image obtained as a result of superposing two images for the view points A9 and A1 on each other. As a result, the image observer never feels unnaturalness and a discomfort which are caused by a reverse-view phenomenon. Even if a plurality of image observers observe an image displayed on the same three-dimensional image display apparatus from different locations, it is possible to reduce, for every one of the observers, the unnaturalness and discomfort which are caused by a reverse-view phenomenon.
Also in the case of the sixth embodiment, each of the images observed at the view points A1 and A9 includes components of images for the view points A1 and A9. Thus, when the left and right eyes of the image observer are put at the view points A1 and A2 respectively or when the left and right eyes of the image observer are put at the view points A8 and A9 respectively, the image observer never strongly feels unnaturalness and a discomfort which are caused by the reverse-view phenomenon described above.
In addition, in order to further reduce the unnaturalness caused by the reverse-view relation between the view points A1 and A2 or between the view points A8 and A9, it is possible to provide a configuration in which a plurality of pieces of image data for a plurality of view points are displayed on a time-division basis for the first half frame and the second half frame also at the view points A2 and A8.
In this typical example, in addition to the operation explained earlier by referring to
In an image observed at the view point A2, image information for the view point A2 is mixed with image information for the view point A3. In an image observed at the view point A8, on the other hand, image information for the view point A8 is mixed with image information for the view point A7. It is thus possible to reduce the unnaturalness caused by the reverse-view relations between the view points A1 and A2 as well as between the view points A8 and A9.
In the configuration described above, at the view point A1, the first half frame and the second half frame are associated with the image data D1 and the image data D9 respectively. At the view point A9, on the other hand, the first half frame and the second half frame are associated with the image data D9 and the image data D1 respectively. However, implementations of the sixth embodiment are by no means limited to this configuration. For example, as an alternative to the configuration described above, it is also possible to provide a configuration in which the image data D1 and the image data D9 are replaced respectively with the image data D2 and the image data D8 or the like, or a configuration in which the image data D1 and the image data D9 are replaced respectively with the image data D3 and the image data D7 or the like. As another alternative, it is also possible to provide a configuration like one shown in
In the operations explained above by referring to
In a configuration wherein the interlace scanning is carried out, one frame is configured to include first and second fields as shown in
The image observed at the view point A1 is an image obtained as a result of superposing odd-numbered rows of the image data D1 on even-numbered rows of the image data D9. On the other hand, the image observed at the view point A9 is an image obtained as a result of superposing odd-numbered rows of the image data D9 on even-numbered rows of the image data D1. Since the two images observed at the view points A9 and A1 respectively are perceived as virtually the same image, there is essentially no disparity between the two images. Thus, in the same way as the operation described before by referring to
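One plausible sketch of the interlaced superposition perceived at the view point A1 follows; the 0-based row indexing convention and the toy row values are assumptions made for this sketch.

```python
# Sketch of the interlaced case: the image perceived at the view
# point A1 superposes the odd-numbered rows of D1 (the first field)
# on the even-numbered rows of D9 (the second field).  With 0-based
# indexing, rows 0, 2, ... stand for the odd-numbered display rows.

def perceived_image(d1_rows, d9_rows):
    """Row-wise combination of two images of identical size."""
    return [d1_rows[i] if i % 2 == 0 else d9_rows[i]
            for i in range(len(d1_rows))]

d1 = [["a"] * 3 for _ in range(4)]   # toy 4-row, 3-column images
d9 = [["b"] * 3 for _ in range(4)]
img = perceived_image(d1, d9)        # rows alternate between d1 and d9
```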
A seventh embodiment also implements a three-dimensional image display apparatus according to the first embodiment of the present disclosure and a method for driving the three-dimensional image display apparatus.
Also in the case of the seventh embodiment, a pair of images are put in a reverse-view relation in the vicinity of an edge of an observation area. In order to solve this problem, both the images of the pair are displayed by making use of data different from pieces of image data provided for points of view. To put it more concretely, the data different from the pieces of image data provided for those points of view is image data provided for other points of view.
An outline of the operation carried out by the seventh embodiment to generate multi-view-point image display data is explained as follows. A view point A1 is associated with the image data D2 replacing the image data D1. By the same token, a view point A9 is associated with the image data D8 replacing the image data D9. It is to be noted that the view points A2 to A8 are associated with the image data D2 to the image data D8 respectively without modifying these pieces of image data. Then, multi-view-point image display data is generated in accordance with the flowchart shown in
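The view-point remapping described above can be sketched, purely for illustration, as follows; the dictionary representation and the placeholder values are assumptions made for this sketch.

```python
# Sketch of the seventh embodiment's mapping: A1 shows D2 in place of
# D1 and A9 shows D8 in place of D9, so D1 and D9 are never used and
# need not be supplied at all.  A2..A8 keep their own image data.

def remap_edge_view_points(d):
    """d maps view-point numbers 2..8 to image data; D1/D9 are omitted."""
    mapping = {n: d[n] for n in range(2, 9)}
    mapping[1] = d[2]   # view point A1 reuses the image data D2
    mapping[9] = d[8]   # view point A9 reuses the image data D8
    return mapping

d = {n: "D%d" % n for n in range(2, 9)}  # note: no D1 and no D9
m = remap_edge_view_points(d)
```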
Thus, since the image data D1 and the image data D9 are not used, it is not necessary to supply these pieces of image data to the driving section 100. As a matter of fact, it is possible to omit the image data D1 and the image data D9.
As shown in
In this way, it is possible to decrease the absolute value of the magnitude of a disparity between the two images included in a pair as two images put in a reverse-view relation. Thus, in comparison with a case in which the effect of the reverse view is not to be reduced, it is possible to observe an image having a smaller effect of the reverse view phenomenon. As a result, it is possible to reduce unnaturalness and a discomfort which are caused by a reverse-view phenomenon.
When the left and right eyes of the image observer are positioned at the view points A1 and A2 respectively or when the left and right eyes are positioned at the view points A8 and A9 respectively, the image observed by the left eye and the image observed by the right eye are the same image. Thus, qualitatively speaking, as the image observer moves toward the edge of the observation area, the observer perceives less three-dimensionality in the image.
As described above, it is possible to provide a configuration in which the view point A1 is associated with the image data D2 and the view point A9 is associated with the image data D8. However, implementations of the seventh embodiment are by no means limited to such a configuration. For example, the view point A1 can be associated with data obtained as a result of reworking the image data D2 and the view point A9 can be associated with data obtained as a result of reworking the image data D8.
An eighth embodiment also implements a three-dimensional image display apparatus according to the first embodiment of the present disclosure and a method for driving the three-dimensional image display apparatus. The eighth embodiment is obtained by modifying the seventh embodiment.
In the case of the seventh embodiment, the view point A1 is associated with the image data D2 and the view point A9 is associated with the image data D8. In the case of the eighth embodiment, on the other hand, the view point A1 is associated with the image data D3 and the view point A9 is associated with the image data D7. Then, multi-view-point image display data is generated in accordance with the flowchart shown in
Thus, since the image data D1 and the image data D9 are not used also in the case of the eighth embodiment, it is not necessary to supply these pieces of image data to the driving section 100. As a matter of fact, it is possible to omit the image data D1 and the image data D9.
As shown in
Thus, in comparison with the seventh embodiment, it is possible to observe an image having an even smaller effect of the reverse view. As a result, it is possible to reduce unnaturalness and a discomfort which are caused by a reverse-view phenomenon.
It is to be noted that, in the case of the eighth embodiment, when the left and right eyes of the image observer are positioned at view points A1 and A2 respectively or when the left and right eyes are positioned at the view points A8 and A9 respectively, the image observed by the left eye is swapped with the image observed by the right eye.
In the states shown in
A ninth embodiment also implements a three-dimensional image display apparatus according to the first embodiment of the present disclosure and a method for driving the three-dimensional image display apparatus. The ninth embodiment is obtained by modifying the eighth embodiment.
In the case of the eighth embodiment, the view point A1 is associated with the image data D3 and the view point A9 is associated with the image data D7. In the case of the ninth embodiment, on the other hand, in addition to this operation carried out by the eighth embodiment, the view point A2 is associated with the image data D3 in the same way as the view point A1 and the view point A8 is associated with the image data D7 in the same way as the view point A9. Then, multi-view-point image display data is generated in accordance with the flowchart shown in
Since the image data D1, the image data D2, the image data D8 and the image data D9 are not used in the case of the ninth embodiment, it is not necessary to supply these pieces of image data to the driving section 100. As a matter of fact, it is possible to omit the image data D1, the image data D2, the image data D8 and the image data D9.
In the case of the ninth embodiment, when the left and right eyes of the image observer are positioned at view points A1 and A2 respectively or when the left and right eyes are positioned at the view points A8 and A9 respectively, the image observed by the left eye and the image observed by the right eye are the same image. Thus, unlike the eighth embodiment, the image observer never sees images having a disparity between the images in a reverse-view state.
A tenth embodiment also implements a three-dimensional image display apparatus according to the first embodiment of the present disclosure and a method for driving the three-dimensional image display apparatus. The tenth embodiment is obtained by modifying the first embodiment.
Also in the tenth embodiment, a pair of images are put in a reverse-view relation in the vicinity of an edge of an observation area. In order to solve this problem, one of the images of the pair is displayed by making use of data different from pieces of image data provided for points of view. The data different from pieces of image data provided for points of view is data obtained by combining pieces of image data having a plurality of types. To put it more concretely, the pieces of image data having a plurality of types are pieces of image data provided for different points of view. In an image displayed on the basis of data obtained by combining pieces of image data having a plurality of types, components of the pieces of image data having a plurality of types are alternately laid out to create a stripe state.
In the case of the first embodiment, both the view points A1 and A9 are associated with the data DS1. In the case of the tenth embodiment, on the other hand, only the view point A1 is associated with the data DS1. In addition, the view points A2 to A9 are associated with image data D2 to image data D9 respectively without modifying these pieces of image data. Then, multi-view-point image display data is generated in accordance with the flowchart shown in
A method for generating the data DS1 (j, k) is the same as the method explained earlier by referring to
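The disclosure defines the generation of the data DS1 (j, k) with reference to its figures, which are not reproduced here; the following is therefore only one plausible sketch, assuming a column-wise alternation of the two source images to form vertical stripes.

```python
# Hypothetical sketch of stripe-combined data: even-indexed columns
# are taken from one image and odd-indexed columns from the other,
# so components of the two images alternate in vertical stripes.

def stripe_combine(a, b):
    """Column-wise alternation of two images of identical size."""
    return [[row_a[k] if k % 2 == 0 else row_b[k]
             for k in range(len(row_a))]
            for row_a, row_b in zip(a, b)]

a = [[1, 1, 1, 1] for _ in range(2)]   # toy 2-row, 4-column images
b = [[9, 9, 9, 9] for _ in range(2)]
s = stripe_combine(a, b)               # every row becomes [1, 9, 1, 9]
```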
As shown in
As described above, only the view point A1 is associated with the data DS1. However, it is also possible to provide a configuration in which only the view point A9 is associated with the data DS1. As another alternative, in place of the data DS1, it is also possible to make use of the data DS2 explained before in the description of the third embodiment.
An eleventh embodiment also implements a three-dimensional image display apparatus according to the first embodiment of the present disclosure and a method for driving the three-dimensional image display apparatus. The eleventh embodiment is obtained by modifying the third embodiment.
Also in the eleventh embodiment, a pair of images are put in a reverse-view relation in the vicinity of an edge of an observation area. In order to solve this problem, one of the images of the pair is displayed by making use of data different from pieces of image data provided for points of view. The data different from pieces of image data provided for points of view is data obtained by combining pieces of image data having a plurality of types. To put it more concretely, the pieces of image data having a plurality of types are pieces of image data provided for different points of view. In an image displayed on the basis of data obtained by combining pieces of image data having a plurality of types, components of the pieces of image data having a plurality of types are laid out to form a checker board pattern.
In the case of the third embodiment, both the view points A1 and A9 are associated with the data DC1. In the case of the eleventh embodiment, on the other hand, only the view point A1 is associated with the data DC1. In addition, the view points A2 to A9 are associated with image data D2 to image data D9 respectively without modifying these pieces of image data. Then, multi-view-point image display data is generated in accordance with the flowchart shown in
A method for generating the data DC1 (j, k) is the same as the method explained earlier by referring to
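Again, the generation of the data DC1 (j, k) is defined with reference to the disclosure's figures; the following sketch merely assumes the natural checker board rule in which the source of the pixel at (j, k) depends on the parity of j + k.

```python
# Hypothetical sketch of checker-board-combined data: pixel (j, k)
# comes from image a when j + k is even and from image b otherwise,
# laying the components of the two images out in a checker board.

def checker_combine(a, b):
    """Checker-board combination of two images of identical size."""
    return [[a[j][k] if (j + k) % 2 == 0 else b[j][k]
             for k in range(len(a[0]))]
            for j in range(len(a))]

a = [[1] * 4 for _ in range(4)]   # toy 4-by-4 images
b = [[9] * 4 for _ in range(4)]
c = checker_combine(a, b)         # rows alternate [1,9,1,9] / [9,1,9,1]
```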
As shown in
As described above, only the view point A1 is associated with the data DC1. However, it is also possible to provide a configuration in which only the view point A9 is associated with the data DC1. As another alternative, in place of the data DC1, it is also possible to make use of the data DC2 explained before in the description of the fourth embodiment.
A twelfth embodiment also implements a three-dimensional image display apparatus according to an embodiment of the present disclosure and a method for driving the three-dimensional image display apparatus. The twelfth embodiment is obtained by modifying the fifth embodiment.
Also in the twelfth embodiment, a pair of images are put in a reverse-view relation in the vicinity of an edge of an observation area. In order to solve this problem, one of the images of the pair is displayed by making use of image data provided for at least two points of view. To put it more concretely, one of the images of the pair is displayed on the basis of data representing an arithmetic average of pieces of image data provided for at least two points of view.
In the case of the fifth embodiment, both the view points A1 and A9 are associated with the data Dav. In the case of the twelfth embodiment, on the other hand, only the view point A1 is associated with the data Dav. In addition, the view points A2 to A9 are associated with image data D2 to image data D9 respectively without modifying these pieces of image data. Then, multi-view-point image display data is generated in accordance with the flowchart shown in
A method for generating the data Dav (j, k) is the same as the method explained earlier by referring to
For an image shown in
As described above, only the view point A1 is associated with the data Dav. However, it is also possible to provide a configuration in which only the view point A9 is associated with the data Dav.
A thirteenth embodiment also implements a three-dimensional image display apparatus according to the second embodiment of the present disclosure and a method for driving the three-dimensional image display apparatus. The thirteenth embodiment is obtained by modifying the sixth embodiment.
Also in the case of the thirteenth embodiment, a pair of images are put in a reverse-view relation in the vicinity of an edge of an observation area. In order to solve this problem, one of the images of the pair is created by displaying a plurality of pieces of image data having different types on a time-division basis. The pieces of image data are pieces of image data for different points of view.
A displayed image is configured from a pair including a first half frame and a second half frame. In the case of the sixth embodiment, the first-half and second-half frames of the image for each of the view points A1 and A9 are associated with pieces of image data of different types. In the case of the thirteenth embodiment, on the other hand, the first-half and second-half frames of the image for only the view point A1 are associated with the image data D1 and the image data D9 respectively. It is to be noted that, for the view points A2 to A9, both the first half frame and the second half frame are associated with the image data D2 to the image data D9 respectively without modifying these pieces of image data. In this way, the image for the view point A1 is created by displaying a plurality of pieces of image data for a plurality of view points on a time-division basis.
Since essential effects provided by the thirteenth embodiment are approximately the same as those provided by the eleventh and the twelfth embodiments, the effects provided by the thirteenth embodiment are not described. As explained above, the first-half and second-half frames of the image for only the view point A1 are associated with different pieces of image data which are the image data D1 and the image data D9 respectively. However, it is also possible to provide a configuration in which the first-half and second-half frames of the image for only the view point A9 are associated with different pieces of image data which are the image data D1 and the image data D9 respectively. In addition, it is also possible to provide a configuration in which the image data D2 and the image data D8 are used in place of the image data D1 and the image data D9 respectively or a configuration in which the image data D3 and the image data D7 are used in place of the image data D1 and the image data D9 respectively. On top of that, it is also possible to provide a configuration in which the interlace scanning is carried out as explained earlier by referring to
A fourteenth embodiment also implements a three-dimensional image display apparatus according to the first embodiment of the present disclosure and a method for driving the three-dimensional image display apparatus. The fourteenth embodiment is obtained by modifying the seventh embodiment.
Also in the case of the fourteenth embodiment, a pair of images are put in a reverse-view relation in the vicinity of an edge of an observation area. In order to solve this problem, one of the images of the pair is displayed by making use of data different from pieces of image data provided for points of view. To put it more concretely, the data different from pieces of image data provided for points of view is image data provided for another point of view.
In the case of the seventh embodiment, the view point A1 is associated with the image data D2 and the view point A9 is associated with the image data D8. In the case of the fourteenth embodiment, on the other hand, the view point A1 is associated with typically the image data D5 in place of the image data D1. It is to be noted that the view points A2 to A9 are associated with image data D2 to image data D9 respectively without modifying these pieces of image data. Then, multi-view-point image display data is generated in accordance with the flowchart shown in
Since the image data D1 is not used also in the case of the fourteenth embodiment, it is not necessary to supply the image data D1 to the driving section 100. As a matter of fact, it is possible to omit the image data D1.
As shown in
Thus, in comparison with a configuration in which the effect of the reverse view is not to be reduced, it is possible to observe an image having an even smaller effect of the reverse view. As a result, it is possible to reduce unnaturalness and a discomfort which are caused by a reverse-view phenomenon.
As described above, only the view point A1 is associated with the image data for another point of view. However, it is also possible to provide a configuration in which only the view point A9 is associated with the image data for another point of view. Further, as described above, the view point A1 is associated with the image data D5. However, it is also possible to provide a configuration in which the view point A1 is associated with another image data.
Embodiments of the present disclosure have been described in concrete terms. However, implementations of the present disclosure are not limited to these embodiments. That is to say, it is possible to make any changes to the embodiments as long as the changes are based on the technological concepts of the present disclosure.
For example, in a configuration wherein the value of DP is set at 32.5 mm, when the left and right eyes of the image observer are positioned at view points A8 and A1 respectively to form reverse relation 1 shown in
In addition, as shown in
As an alternative, it is also possible to provide a configuration in which every aperture of the optical separation section is stretched in the vertical direction as shown in
The present disclosure contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2010-293219 filed in the Japan Patent Office on Dec. 28, 2010, the entire content of which is hereby incorporated by reference.
Number | Date | Country | Kind |
---|---|---|---
2010-293219 | Dec 2010 | JP | national |