The present disclosure relates to an image processing apparatus, an image processing method, and a computer program.
A time-division driven video display device is a video display device that outputs multiple video streams while sequentially switching among them in a time-division manner. Examples of such time-division driven video display devices include a time-division stereoscopic video display system using a pair of so-called shutter glasses (for example, see JP H9-138384A, JP 2000-36969A, and JP 2003-45343A) and a multi-video display system using a pair of shutter glasses to allow multiple viewers to view different videos without dividing the screen.
A person extracts and combines a plurality of depth cues from the difference between the two-dimensional retinal images obtained by the right and left eyes (binocular disparity), thereby perceiving three-dimensional information and recognizing an object as a stereoscopic video. Rotational movements of the eyeballs change the convergence, i.e., the crossing angle of the lines of sight, and a person determines the distance to an object on the basis of the convergence, thus recognizing space in a three-dimensional manner. Showing an image in a stereoscopic manner using this principle is called stereoscopic vision. An image shown using separate images for the right and left eyes is called a stereoscopic image. A video shown by preparing a plurality of images for the right and left eyes and continuously changing them is called a stereoscopic video. An apparatus capable of displaying such stereoscopic images and videos is called a stereoscopic video display device.
The time-division stereoscopic video display system is a video display system using a stereoscopic video display device that alternately displays a left-eye video and a right-eye video on the entire screen in an extremely short cycle, while separately providing the videos to the right and left eyes in synchronization with that display cycle. For example, in the shutter glasses method, while the left-eye video is displayed, the left-eye unit of the shutter glasses passes light and the right-eye unit shields light. On the other hand, while the right-eye video is displayed, the right-eye unit of the shutter glasses passes light and the left-eye unit shields light.
In order to display an arbitrary object in stereo, two kinds of images need to be generated: an image of the object as seen by the right eye and an image of the object as seen by the left eye. Broadly, there are two methods for generating such images.
The first method is to prepare three-dimensional shape data of the object and the positions of the viewer's right and left viewpoints, and to draw images for the right and left eyes having an exact binocular disparity by calculation based on their positional relationship. In this method, since the images are generated by an exact disparity calculation, it is possible to create images for the right and left eyes having a natural stereoscopic effect.
The second method adds a stereoscopic effect not by preparing separate images for the right and left eyes, but by displaying the same image shifted for the right eye and for the left eye. Since this method can use a drawing method for ordinary two-dimensional images as it is, it can be realized on many machines without a high calculation cost.
The first method described above can create images for the right and left eyes having a natural stereoscopic effect, since the images are created by an exact disparity calculation. However, the three-dimensional calculation and drawing processing required to obtain the disparity involve a high calculation cost and dedicated drawing hardware or the like. For that reason, it is difficult for electronic devices with only limited processing capacity, such as televisions, to realize this method, and its application range is limited to devices having high processing capacity, such as a personal computer having a high-performance CPU and a graphics board.
Further, while the second method described above can be realized on many devices without a high calculation cost, there is an issue: since the images presented to the right and left eyes are exactly identical, the result tends to be seen as a flat image emerging in space rather than as an exact stereoscopic model.
Therefore, for a stereoscopic video display apparatus that displays a stereoscopic video, a method that can further improve the stereoscopic effect without a high calculation cost is desired.
In light of the foregoing, it is desirable to provide an image processing apparatus, an image processing method, and a computer program, which are novel and improved, and which are able to further improve the stereoscopic effect of an object by simple calculation and to display the object on a screen.
According to an embodiment of the present disclosure, there is provided an image processing apparatus including a drawing position calculation unit for calculating a drawing position of each image of a group of at least two images, the group being obtained by dividing, at a predetermined distance in the width direction, a shape that is expected to be displayed on a screen as a stereoscopic image, so that each of the images is displayed on the screen as the stereoscopic image, and an image drawing unit for drawing each image of the group of images at the drawing position calculated by the drawing position calculation unit.
The image processing apparatus may further include an image dividing unit for dividing a shape, which is expected to be displayed on a screen as a stereoscopic image, at a predetermined distance in the width direction, and for generating a group of at least two images. The drawing position calculation unit may calculate a drawing position of each image of the group of at least two images generated by the image dividing unit so that each of the images is displayed on the screen as the stereoscopic image.
The image dividing unit may calculate, when generating the group of images, a distance in the width direction of the group of images and a gap between the drawing positions of an innermost image and a second innermost image.
The image processing apparatus may further include an image storage unit for storing the group of at least two images generated by the image dividing unit. The drawing position calculation unit may calculate a drawing position of each image of the group of at least two images stored in the image storage unit so that each of the images is displayed on the screen as the stereoscopic image.
The image dividing unit may store the generated images in the image storage unit while linking the images to information identifying the shape which is expected to be displayed on the screen as a stereoscopic image.
The image dividing unit may calculate, when generating the group of images, a distance in the width direction of the group of images and a gap between the drawing positions of the innermost image and the second innermost image, and store the result of the calculation in the image storage unit.
The drawing position calculation unit may calculate drawing positions of the third innermost and further inner images using the distance in the width direction of the group of images and the gap between the drawing positions of the innermost image and the second innermost image.
The information on the gap between the drawing positions may be added to the group of images in advance.
The calculation of the drawing position of each image to be displayed as a stereoscopic image, performed by the drawing position calculation unit, may be a calculation of the drawing positions of images for the right and left eyes with a predetermined disparity.
Further, according to another embodiment of the present disclosure, there is provided a method for image processing which includes calculating a drawing position of each image of a group of at least two images, the group being obtained by dividing, at a predetermined distance in the width direction, a shape that is expected to be displayed on a screen as a stereoscopic image, so that each of the images is displayed on the screen as the stereoscopic image, and drawing each image of the group of images at the drawing position calculated in the step of calculating the drawing position.
Further, according to another embodiment of the present disclosure, there is provided a computer program for causing a computer to execute calculating a drawing position of each image of a group of at least two images, the group being obtained by dividing, at a predetermined distance in the width direction, a shape that is expected to be displayed on a screen as a stereoscopic image, so that each of the images is displayed on the screen as the stereoscopic image, and drawing each image of the group of images at the drawing position calculated in the step of calculating the drawing position.
According to the embodiments of the present disclosure described above, it is possible to provide an image processing apparatus, an image processing method, and a computer program, which are novel and improved, and which are able to further improve the stereoscopic effect of an object by simple calculation and to display the object on a screen.
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the appended drawings. Note that, in this specification and the appended drawings, structural elements that have substantially the same function and structure are denoted with the same reference numerals, and repeated explanation of these structural elements is omitted.
The following explanation will be made in the order listed below.
<1. Principle of stereoscopic vision>
<2. An embodiment of the present disclosure>
<3. Conclusion>
First, a principle of stereoscopic vision used in a stereoscopic display device will be explained with reference to the drawings. As shown in
Rotational movements of eyeballs change a convergence as shown in
Hereinafter, an explanation will be given on a video display system according to an embodiment of the present disclosure. First, a configuration of a video display system according to the embodiment of the present disclosure will be explained.
The display device 100 shown in
Although the configuration of the image display unit 110 will be explained later in detail, it is briefly explained here. The image display unit 110 is configured to include a light source, a liquid crystal panel, and a pair of polarization plates provided to sandwich the liquid crystal panel. Light emitted by the light source passes through the liquid crystal panel and the polarization plates and is thereby converted into light polarized in a predetermined direction.
The shutter glasses 200 are configured to include a right eye image transmission unit 212 and a left eye image transmission unit 214, which are made of, for example, liquid crystal shutters. The shutter glasses 200 perform opening and closing operations of the right eye image transmission unit 212 and the left eye image transmission unit 214, each made of a liquid crystal shutter, in response to a signal transmitted from the display device 100. The opening and closing operations performed by the right eye image transmission unit 212 and the left eye image transmission unit 214 are controlled by a shutter control unit 130. The viewer can perceive an image displayed on the image display unit 110 as a stereoscopic image by looking at the light emitted from the image display unit 110 through the right eye image transmission unit 212 and the left eye image transmission unit 214 of the shutter glasses 200.
On the other hand, when a normal image is displayed on the image display unit 110, the viewer can perceive the image as the normal image by seeing the light output from the image display unit 110 as it is.
In
As above, the configuration of the video display system 10 according to an embodiment of the present disclosure has been explained. Next, an explanation will be given on a functional configuration of the display device 100 according to the embodiment of the present disclosure.
As shown in
The image display unit 110 displays images in the manner described above, and when a signal is applied from an external source, images are displayed in accordance with the applied signal. The image display unit 110 is configured to include a display panel 112, a gate driver 113, a data driver 114, and a backlight 115.
The display panel 112 displays images in accordance with the signal applied from an external source. The display panel 112 displays images by sequentially scanning a plurality of scanning lines. The space between the transparent plates, made of glass or the like, of the display panel 112 is filled with liquid crystal molecules having a predetermined orientation. The drive system of the display panel 112 may be a twisted nematic (TN) system, a vertical alignment (VA) system, or an in-plane switching (IPS) system. In the following explanation, the drive system of the display panel 112 is the VA system unless otherwise specified, but it is to be understood that the present disclosure is not limited to this example. It should be noted that the display panel 112 according to the present embodiment is a display panel that can rewrite the screen at a high frame rate (120 Hz or 240 Hz, for example). In the present embodiment, an image for the right eye and an image for the left eye are displayed alternately on the display panel 112 with a predetermined timing, thereby causing the viewer to perceive a stereoscopic image.
The gate driver 113 is a driver that drives a gate bus line (not shown in the figures) of the display panel 112. A signal is transmitted from the timing control unit 140 to the gate driver 113, and the gate driver 113 outputs a signal to the gate bus line in accordance with the signal transmitted from the timing control unit 140.
The data driver 114 is a driver that generates a signal that is applied to a data line (not shown in the figures) of the display panel 112. A signal is transmitted from the timing control unit 140 to the data driver 114. The data driver 114 generates a signal to be applied to the data line, in accordance with the signal transmitted from the timing control unit 140, and outputs the generated signal.
The backlight 115 is provided on the furthermost side of the image display unit 110 as seen from the side of the viewer. When an image is displayed on the image display unit 110, white light that is not polarized (unpolarized light) is output from the backlight 115 to the display panel 112 positioned on the side of the viewer. The backlight 115 may use a light-emitting diode, for example, or may use a cold cathode tube. It should be noted that the backlight 115 shown in
When the video signal control unit 120 receives a video signal transmitted from a source outside the video signal control unit 120, it executes various kinds of signal processing on the received video signal so that the video signal becomes suitable for displaying a three-dimensional image on the image display unit 110, and outputs the processed signal. The video signal processed by the video signal control unit 120 is transmitted to the timing control unit 140. When the video signal control unit 120 executes the signal processing, it transmits a predetermined signal to the shutter control unit 130 in accordance with the signal processing. Examples of the signal processing performed by the video signal control unit 120 include the following.
When a video signal to display the image for the right eye (a right-eye video signal) on the image display unit 110 and a video signal to display the image for the left eye (a left-eye video signal) on the image display unit 110 are transmitted to the video signal control unit 120, the video signal control unit 120 generates, from the two received video signals, a video signal for a three-dimensional image. In the present embodiment, the video signal control unit 120 generates, from the received right-eye video signal and the left-eye video signal, video signals to display images on the display panel 112 in the following order in a time-division manner: image for the right eye, image for the right eye, image for the left eye, image for the left eye, image for the right eye, image for the right eye, and so on.
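For reference only, the following is a minimal sketch of how such a time-division frame sequence might be assembled from separate right-eye and left-eye frame lists; the function name and the per-eye repetition count of two are assumptions chosen for illustration and are not taken from the disclosure.

```python
def interleave_time_division(right_frames, left_frames, repeat=2):
    """Build a display sequence of the form R, R, L, L, R, R, ... from
    per-eye frame lists (hypothetical helper, not from the disclosure)."""
    sequence = []
    for right, left in zip(right_frames, left_frames):
        sequence.extend([right] * repeat)  # image for the right eye, shown `repeat` times
        sequence.extend([left] * repeat)   # then image for the left eye, shown `repeat` times
    return sequence

# Example: three stereo pairs produce the order R0, R0, L0, L0, R1, R1, L1, L1, ...
print(interleave_time_division(["R0", "R1", "R2"], ["L0", "L1", "L2"]))
```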
The shutter control unit 130 receives the predetermined signal that is generated in accordance with the signal processing performed by the video signal control unit 120, and generates a shutter control signal that controls shutter operation of the shutter glasses 200 in accordance with the predetermined signal. The shutter glasses 200 perform opening and closing operations of the right eye image transmission unit 212 and the left eye image transmission unit 214, on the basis of the shutter control signal that is generated by the shutter control unit 130 and transmitted wirelessly based on, for example, IEEE802.15.4. The backlight control unit 155 receives a predetermined signal generated based on the signal processing performed by the video signal control unit 120, and generates a backlight control signal for controlling lighting operation of the backlight according to the signal.
In accordance with the signal transmitted from the video signal control unit 120, the timing control unit 140 generates a pulse signal that is used to operate the gate driver 113 and the data driver 114. When the pulse signal is generated by the timing control unit 140, and the gate driver 113 and the data driver 114 receive the pulse signal generated by the timing control unit 140, an image corresponding to the signal transmitted from the video signal control unit 120 is displayed on the display panel 112.
The memory 150 stores computer programs for operating the display device 100, various settings of the display device 100, and the like. Further, in the present embodiment, the memory 150 stores data of images (for example, images of icons or the like) expected to be displayed by the image display unit 110 as stereoscopic images. Using the images stored in the memory 150, the video signal control unit 120 performs image drawing processing for causing the image display unit 110 to display them as a stereoscopic image.
As above, the functional configuration of the display device 100 according to an embodiment of the present disclosure has been explained. Subsequently, an explanation will be given on a configuration of the video signal control unit 120 included in the display device 100 according to the embodiment of the present disclosure.
As shown in
The drawing position calculation unit 121 calculates drawing positions for causing the image display unit 110 to display an object as a stereoscopic image, using the information of the object data supplied from a source outside the video signal control unit 120. The calculation of the drawing positions executed by the drawing position calculation unit 121 is a calculation of the drawing positions of the images for the right and left eyes to be displayed on the image display unit 110 with a predetermined disparity between them. After calculating the drawing positions of the images, the drawing position calculation unit 121 transmits information on the drawing positions to the image drawing unit 122.
The image drawing unit 122 executes processing for drawing the images for displaying the object as a stereoscopic image, based on the information on the drawing positions of the images calculated by the drawing position calculation unit 121. When the image drawing unit 122 executes the processing for drawing the images based on this information, the images are displayed on the image display unit 110 as a stereoscopic image.
In the present embodiment, groups of a plurality of images are prepared by dividing, at each depth, a shape that is expected to be displayed on the image display unit 110 as a stereoscopic image. Further, depending upon the position on the screen, the image drawing unit 122 draws each image of such a group while shifting it appropriately, so that disparity images without any sense of unnaturalness are created at high speed.
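As a rough sketch only of this drawing step, each depth-slice image could be composited onto separate left-eye and right-eye frames at its previously calculated position; the frame representation, the blitting routine, and the assumed back-to-front ordering of the slices below are illustrative assumptions rather than details of the disclosure.

```python
def blit(frame, image, top_left):
    """Copy a small 2D image into `frame` at integer position (x, y), clipping at the edges."""
    x0, y0 = top_left
    for dy, row in enumerate(image):
        for dx, value in enumerate(row):
            y, x = y0 + dy, x0 + dx
            if 0 <= y < len(frame) and 0 <= x < len(frame[0]):
                frame[y][x] = value

def draw_stereo_frames(slices, positions, width=64, height=48):
    """Draw each depth-slice image at its per-eye drawing position.

    `slices` is a list of 2D images assumed ordered back-to-front; `positions`
    is the list of ((left_x, left_y), (right_x, right_y)) pairs calculated for
    those slices. Returns the left-eye and right-eye frames.
    """
    left_frame = [[0] * width for _ in range(height)]
    right_frame = [[0] * width for _ in range(height)]
    # Drawing back-to-front lets nearer slices overwrite farther ones.
    for image, (left_pos, right_pos) in zip(slices, positions):
        blit(left_frame, image, (int(left_pos[0]), int(left_pos[1])))
        blit(right_frame, image, (int(right_pos[0]), int(right_pos[1])))
    return left_frame, right_frame

# Example: two 1x1 slice images drawn at slightly different per-eye positions.
left, right = draw_stereo_frames([[[1]], [[2]]], [((10, 5), (12, 5)), ((11, 5), (13, 5))])
print(left[5][10], left[5][11], right[5][12], right[5][13])
```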
Therefore, in the present embodiment, as described above, groups of a plurality of images obtained by dividing, at each depth, a shape that is expected to be displayed on the image display unit 110 as a stereoscopic image are prepared in advance, and drawing positions are calculated to cause the images to be displayed as a three-dimensional image.
The object data is composed of a plurality of images indicating the shape of the object at each depth, a group of depth values corresponding to those images, and a virtual three-dimensional coordinate of the object itself in the stereoscopic video (the position at which the object is arranged in the space expressed as the stereoscopic video). The images of the object data may be created manually in advance, or may be created by calculation based on three-dimensional data of the object.
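As an illustration only, such object data might be represented by a structure like the following; the class and field names are assumptions and do not appear in the disclosure.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DepthSlice:
    """One image indicating the shape of the object at a given depth."""
    depth: float              # depth value of this slice (0, D, 2D, ...)
    pixels: List[List[int]]   # pixel data of the slice image

@dataclass
class ObjectData:
    """Hypothetical container for the object data described above."""
    slices: List[DepthSlice]              # images of the object divided by depth
    position: Tuple[float, float, float]  # virtual 3D coordinate of the object in the stereoscopic space

# Example: an object represented by two depth slices D = 0.1 apart, placed at (0, 0.2, 2.0).
pot = ObjectData(
    slices=[DepthSlice(depth=0.0, pixels=[[1]]), DepthSlice(depth=0.1, pixels=[[1]])],
    position=(0.0, 0.2, 2.0),
)
```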
The drawing position calculation unit 121 calculates drawing positions for causing each of the images 161a, 161b, 161c, and 161d to be displayed on the image display unit 110 as a stereoscopic image. By doing this, each of the images 161a, 161b, 161c, and 161d is displayed with a predetermined disparity, and a user can recognize the image of a pot 160 shown in
As above, the configuration of the video signal control unit 120 included in the display device 100 according to an embodiment of the present disclosure has been explained. Subsequently, an explanation will be given on an operation of the display device 100 according to the embodiment of the present disclosure.
First, a group of images obtained by dividing, in the depth direction, a three-dimensional model of an object expected to be displayed as a stereoscopic image is input to the video signal control unit 120 (step S101). The group of images may be stored, for example, in the memory 150, or may be included in video signals broadcast by broadcast stations.
When the group of images obtained by dividing, in the depth direction, the three-dimensional model of the object expected to be displayed as a stereoscopic image is input to the video signal control unit 120, the drawing position calculation unit 121 calculates a drawing position for each of the images input to the video signal control unit 120 (step S102).
The drawing position calculation unit 121 calculates the two-dimensional coordinates of the images for the right and left eyes for each image using the following formulas, based on relative coordinates obtained by transforming the three-dimensional coordinates included in the object data with respect to virtually defined positions of the right and left eyes, and on information on the depth of each image.
X2d = X3d / (Z3d + depth) * coefficient x
Y2d = Y3d / (Z3d + depth) * coefficient y
Coefficient x and coefficient y are coefficients for adjusting the scale of the transformation, and may take any arbitrary value larger than 0. Further, the value of depth corresponds to 0, D, 2D shown in
In addition, the relativized three-dimensional coordinates set the virtual viewpoint position as the origin, the direction into the screen as +Z, the upward direction of the screen as +Y, and the rightward direction of the screen as +X. The two-dimensional coordinates set the center of the screen as the origin, the rightward direction as +X, and the upward direction as +Y.
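As a non-authoritative illustration of the formulas above, the following sketch projects a three-dimensional point onto two-dimensional drawing positions for the left and right eyes under the coordinate conventions just described; the eye separation used to relativize the coordinates and the coefficient values are assumptions chosen only for the example.

```python
def project_point(x3d, y3d, z3d, depth, coeff_x=500.0, coeff_y=500.0):
    """Apply X2d = X3d / (Z3d + depth) * coefficient x (and likewise for Y)."""
    x2d = x3d / (z3d + depth) * coeff_x
    y2d = y3d / (z3d + depth) * coeff_y
    return x2d, y2d

def drawing_positions_for_eyes(point, depth, eye_separation=0.065):
    """Relativize the object coordinate to each virtual eye position and project.

    `point` is the object's coordinate in the space with the viewpoint at the
    origin, +Z into the screen, +Y up, +X right. The eye separation value is an
    assumption for illustration only.
    """
    x, y, z = point
    half = eye_separation / 2.0
    # The left eye sits at -half on the X axis, so relative to it the point is shifted by +half.
    left = project_point(x + half, y, z, depth)
    right = project_point(x - half, y, z, depth)
    return left, right

# Example: a point 2.0 units into the screen, on a slice at depth D = 0.1.
print(drawing_positions_for_eyes((0.0, 0.2, 2.0), 0.1))
```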
If distances (D) between each image divided as shown
First, the drawing position calculation unit 121 calculates the drawing position of the image where depth = 0 using the above formulas. Similarly, the drawing position calculation unit 121 calculates the drawing position of the image where depth = D. Subsequently, the drawing position calculation unit 121 calculates the gap (denoted dx) between the drawing positions when the depth is shifted by D, using the gap between the drawing positions of the image where depth = 0 and the image where depth = D.
The drawing positions of the subsequent images, such as those where the depth is 2D, 3D, or the like, are calculated using the gap dx per depth step D calculated as above. For example, the gap in drawing position for a depth of 2D becomes 2dx, while the gap for a depth of 3D becomes 3dx.
Further, when using the simplified calculation above, the gap dx in drawing position for a change of D in depth may be calculated in advance as an approximately fixed value and provided as object data paired with the images. This further reduces the cost of the drawing position calculation performed by the drawing position calculation unit 121.
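The following is a minimal sketch, under the assumptions already noted and reusing the drawing_positions_for_eyes helper from the previous sketch, of this simplified calculation: the exact projection is used only for the slices at depth 0 and depth D, the gap dx per depth step is derived from them (or could instead be read from the object data as described above), and the remaining slices are placed by adding multiples of dx.

```python
def simplified_drawing_positions(point, slice_depths, eye_separation=0.065):
    """Place each depth slice using the gap dx derived from the first two slices only.

    `slice_depths` is the list 0, D, 2D, 3D, ... The per-eye positions of the
    first two slices are computed exactly; deeper slices reuse the gap dx.
    """
    d0, d1 = slice_depths[0], slice_depths[1]
    left0, right0 = drawing_positions_for_eyes(point, d0, eye_separation)
    left1, right1 = drawing_positions_for_eyes(point, d1, eye_separation)
    # Gap in drawing position caused by shifting the depth by D, per eye and axis.
    dx_left = (left1[0] - left0[0], left1[1] - left0[1])
    dx_right = (right1[0] - right0[0], right1[1] - right0[1])

    positions = []
    for i, _ in enumerate(slice_depths):
        # A slice i depth steps away is shifted by i * dx from the first slice.
        left = (left0[0] + i * dx_left[0], left0[1] + i * dx_left[1])
        right = (right0[0] + i * dx_right[0], right0[1] + i * dx_right[1])
        positions.append((left, right))
    return positions

# Example: four slices at depths 0, D, 2D, 3D with D = 0.1.
print(simplified_drawing_positions((0.0, 0.2, 2.0), [0.0, 0.1, 0.2, 0.3]))
```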
When the drawing position calculation unit 121 has calculated the drawing position for each of the images input to the video signal control unit 120 in step S102 above, the image drawing unit 122 subsequently executes the processing of drawing the images for displaying the object as a stereoscopic image, based on the information on the drawing positions calculated by the drawing position calculation unit 121 (step S103). When the image drawing unit 122 executes the processing of drawing the images in step S103, the images input to the video signal control unit 120 can be displayed as a stereoscopic image on the image display unit 110.
As above, the operation of the display device according to an embodiment of the present disclosure has been explained. Subsequently, an explanation will be given on modified examples of the video signal control unit 120 included in the display device 100 according to the embodiment of the present disclosure.
Depending on the data format of the object, information may be provided not as images of the object divided at a predetermined distance in the depth direction, but as three-dimensional data of the object (a three-dimensional model). In such cases, as preparation prior to the processing of calculating the drawing positions, drawing the images, and the like described above, cross-sectional diagrams of the stereoscopic shape of the three-dimensional model may be created, which makes the image processing according to the present embodiment applicable and reduces the calculation cost of the subsequent drawing. As the divided images, the cross-sectional diagrams obtained when the three-dimensional model is divided in the Z direction at a predetermined distance (equally spaced, for example) are used. At this time, if the dividing distance in the Z direction becomes narrower, a stereoscopic image closer to the three-dimensional model can be generated; if the dividing distance becomes broader, faster drawing can be achieved.
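As one possible realization only, a voxel-based three-dimensional model could be sliced into such cross-sectional images as follows; the voxel representation and the step-size handling are assumptions for illustration, and the disclosure does not prescribe a particular model format.

```python
def slice_voxel_model(voxels, step=1):
    """Cut a voxel model (voxels[z][y][x] of 0/1 occupancy) into cross-sectional
    images taken every `step` layers along the Z (depth) direction.

    Returns a list of (depth_index, cross_section) pairs, where each
    cross_section is a 2D occupancy grid usable as one divided image.
    A smaller `step` approximates the model more closely; a larger `step`
    yields fewer images and faster drawing.
    """
    sections = []
    for z in range(0, len(voxels), step):
        cross_section = [row[:] for row in voxels[z]]  # copy one Z layer
        sections.append((z, cross_section))
    return sections

# Example: a tiny 4-layer model sliced every 2 layers gives 2 cross sections.
tiny_model = [[[1]], [[1]], [[0]], [[1]]]
print(slice_voxel_model(tiny_model, step=2))
```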
As shown in
The video signal control unit 120 shown in
The image dividing unit 123 uses the three-dimensional shape data input to the video signal control unit 120 to generate images divided at a predetermined distance in the depth direction (the Z direction), as shown in
The drawing position calculation unit 121 calculates drawing positions for causing the image display unit 110 to display, as a stereoscopic image, each image that the image dividing unit 123 has generated, or that the image dividing unit 123 has generated and stored in the image storage unit 124. This enables each of the plurality of images that the image dividing unit 123 has generated to be displayed on the image display unit 110 with a predetermined disparity. The user can then recognize the three-dimensional shape data of the object input to the video signal control unit 120 as a stereoscopic image by looking at the plurality of images generated by the image dividing unit 123 through the shutter glasses 200.
Next, an operation of the video signal control unit 120 shown in
When the three-dimensional shape data is input to the video signal control unit 120 (step S111), the image dividing unit 123 generates images divided at a predetermined distance in the depth direction based on the input three-dimensional data (step S112). For example, when three-dimensional data similar to the reference sign 160 in
When the image dividing unit 123 has generated the images divided at the predetermined distance in the depth direction based on the input three-dimensional data in step S112 above, the drawing position calculation unit 121 subsequently calculates a drawing position for each image that the image dividing unit 123 has generated (step S113).
When the drawing position calculation unit 121 has calculated the drawing position for each of the images generated by the image dividing unit 123 in step S113 above, the image drawing unit 122 subsequently executes the processing of drawing the images for displaying the object as a stereoscopic image, based on the information on the drawing positions calculated by the drawing position calculation unit 121 (step S114). When the image drawing unit 122 executes the processing of drawing the images in step S114, the three-dimensional data input to the video signal control unit 120 can be displayed as a stereoscopic image on the image display unit 110.
As described above, using the three-dimensional shape data input to it, the video signal control unit 120 generates the images divided at the predetermined distance in the depth direction (the Z direction) shown as
As described above, the modified examples of the video signal control unit 120 included in the display device 100 according to an embodiment of the present disclosure have been explained.
As described above, the display device 100 according to an embodiment of the present disclosure calculates drawing positions in order to display, as a stereoscopic image on the image display unit 110, a group of images obtained by dividing in the depth direction the three-dimensional model of an object expected to be displayed as a stereoscopic image. The video signal control unit 120 displays the group of images at the calculated drawing positions on the image display unit 110.
This enables the display device 100 according to an embodiment of the present disclosure to display an object expected to be displayed as a stereoscopic image on the image display unit 110 using a simple calculation.
The display device 100 according to the embodiment of the present disclosure can generate a group of images by dividing, in the depth direction, the three-dimensional model of an object expected to be displayed as a stereoscopic image, and can calculate drawing positions for causing the group of images to be displayed as a stereoscopic image on the image display unit 110. This enables the display device 100 according to the embodiment of the present disclosure to transform the three-dimensional model of the object expected to be displayed as a stereoscopic image using a simple calculation and to display it on the image display unit 110.
The operation of the display device 100 according to the embodiment of the present disclosure described above may be performed by hardware or may be performed by software. If the operation is performed by software, it may be performed by a CPU or other control device inside the display device 100 that reads out a computer program from a medium on which the computer program is recorded and sequentially executes it.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
For example, in the above embodiment, the display device 100 has been explained as an example where a user views an image displayed on the display device 100 through the shutter glasses 200 and perceives the image as a three-dimensional image. However, the present disclosure is not limited to such an example. The present disclosure can also be applied to a video display device where a user directly views an image displayed on the display device 100 and recognizes the image as a three-dimensional image.
Further, for example, in the above embodiment, when the three-dimensional shape data is input to the video signal control unit 120, images divided at a predetermined distance in the depth direction (the Z direction) are generated, and a drawing position for displaying the object as a stereoscopic image is calculated with respect to each of the images. However, if the images divided at a predetermined distance in the depth direction have already been generated from the three-dimensional shape data, the divided images do not necessarily need to be generated again; it is enough to read the images from the image storage unit 124. At that time, information for identifying the three-dimensional shape data input to the video signal control unit 120 may be generated, or information for identifying the three-dimensional shape data may be added to the three-dimensional shape data in advance and supplied to the video signal control unit 120. When the video signal control unit 120 stores the divided images in the image storage unit 124, it may store the divided images linked with the information identifying the three-dimensional shape data.
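Purely as an illustrative sketch of such reuse, the divided images could be cached keyed by an identifier of the three-dimensional shape data; the identifier derivation and the storage interface below are assumptions and not part of the disclosure. The sketch reuses slice_voxel_model from the earlier sketch.

```python
import hashlib

class ImageStore:
    """Hypothetical stand-in for the image storage unit 124: divided images are
    stored linked to information identifying the three-dimensional shape data."""
    def __init__(self):
        self._store = {}

    def get_or_divide(self, model_id, voxels, step=1):
        # Generate the divided images only if they are not already stored.
        if model_id not in self._store:
            self._store[model_id] = slice_voxel_model(voxels, step)
        return self._store[model_id]

def model_identifier(voxels):
    """Derive an identifying key from the shape data (an illustrative choice)."""
    return hashlib.sha1(repr(voxels).encode()).hexdigest()

# Example: the second request for the same model reads the stored images.
store = ImageStore()
tiny_model = [[[1]], [[1]], [[0]], [[1]]]
key = model_identifier(tiny_model)
first = store.get_or_divide(key, tiny_model, step=2)
second = store.get_or_divide(key, tiny_model, step=2)
print(first is second)  # True: the divided images are not generated again
```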
Further, in the above embodiment, for example, when the three-dimensional shape data is input to the video signal control unit 120, images divided at a predetermined distance in the depth direction (the Z direction) are generated, and drawing positions of the images for displaying the object as a stereoscopic image are calculated with respect to each of the images. However, at the time of generating the divided images, information on the gap between the drawing positions of the innermost image and the second innermost image may be stored in the image storage unit 124.
The present disclosure contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2010-196647 filed in the Japan Patent Office on Sep. 2, 2010, the entire content of which is hereby incorporated by reference.