This application claims priority to Japanese Patent Application No. 2010-255501, filed on Nov. 16, 2010 and Japanese Patent Application No. 2011-206781, filed on Sep. 22, 2011. The entire disclosures of Japanese Patent Application No. 2010-255501 and Japanese Patent Application No. 2011-206781 are hereby incorporated herein by reference.
1. Technical Field
The present technology relates to a display device that displays image information, and more particularly relates to a display device that displays depth direction position information expressing the positional relation in front of and behind a subject in a three-dimensional image. The present technology also relates to a display method that is executed by the above-mentioned display device.
2. Background Information
Display devices that display three-dimensional images (3D images), as well as imaging devices that capture these three-dimensional images, have been attracting attention in recent years. Many different display methods have been proposed, but they all share the same basic principle: parallax is artificially created for the left and right eyes, so that the brain of the person viewing the image perceives a three-dimensional image.
When a three-dimensional image is captured, if the subject is located too close and there is too much parallax, the human brain cannot successfully fuse the three-dimensional image, which makes the viewer feel that something is not right. Accordingly, when imaging is performed, depth direction position information expressing the convergence angle, the horizontal field of view, and/or the front and rear positional relation of the subject, etc., is adjusted so that there will not be a large parallax.
A “distance image” is one way of making depth direction position information visible on a two-dimensional display device. The primary display method for a distance image is one in which pixels are expressed by hue on the basis of the depth direction position information for the pixels of the original image. In this case, for example, the hue of the pixels changes from a color with a long optical wavelength (red) to a color with a short optical wavelength (violet) as the position of the pixels moves from closer to farther away. In another method, pixels are expressed by achromatic brightness (gray scale) on the basis of the depth direction position information for the pixels of the original image. In this case, for example, the brightness of the pixels changes from white (high brightness) to black (low brightness) as the position of the pixels moves from closer to farther away.
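By way of a non-limiting illustration, the gray-scale variant described above could be realized as in the following sketch, which assumes a NumPy depth map (a 2-D array of distances) and display-range limits; the function name and parameters are illustrative assumptions, not part of any disclosed implementation.

```python
import numpy as np

def depth_to_grayscale(depth, near, far):
    # Normalize depth into [0, 1]: 0 at the near limit, 1 at the far limit.
    t = np.clip((depth - near) / (far - near), 0.0, 1.0)
    # Nearer pixels become white (high brightness), farther pixels black.
    return ((1.0 - t) * 255).astype(np.uint8)

# Example: a small depth map with distances between 1 m and 5 m.
d = np.array([[1.0, 3.0, 5.0],
              [2.0, 4.0, 5.0]])
print(depth_to_grayscale(d, near=1.0, far=5.0))
# [[255 127   0]
#  [191  63   0]]
```

The hue-based variant would differ only in the final mapping step, converting the normalized value into a color ramp from red to violet instead of a brightness value.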
One method for displaying depth direction position information that has been disclosed involves choosing the orientation of various points on a surface from among three-dimensional shape data, and outputting the chosen orientations so that they can be identified (see Japanese Laid-Open Patent Application H6-111027). Here, the chosen orientations are expressed by the direction and length of line segments, or are expressed by the hue and saturation of colors.
With prior art, however, the above-mentioned distance image was produced and displayed by subjecting the original image to conversion processing, and depth information was ascertained from the hue, brightness, etc., of the distance image. Even though viewers could get a rough sense of position from the hue or brightness of a distance image, it was difficult to ascertain a given position accurately. Thus, a problem with prior art was that information in the depth direction could not be accurately and easily recognized.
The present technology was conceived in an effort to solve the above problems encountered in the past. The display device disclosed herein comprises an input unit, a depth direction position information processing unit, and a display processing unit. Depth direction position information is inputted to the input unit. The depth direction position information corresponds to information about the depth direction of a subject and/or the position in the depth direction of the subject. The depth direction position information is set for at least part of a two-dimensional image corresponding to the subject. The depth direction position information processing unit is configured to process two-dimensional information. The two-dimensional information is made up of the depth direction position information and either horizontal direction position information or vertical direction position information. The horizontal direction position information corresponds to the position of the subject in the horizontal direction and is set for at least part of the two-dimensional image. The vertical direction position information corresponds to the position of the subject in the vertical direction and is set for at least part of the two-dimensional image. The display processing unit is configured to display an image corresponding to the two-dimensional information processed by the depth direction position information processing unit.
With the above constitution, information in the depth direction can be accurately and easily recognized. Also, object cropping adjustment of the display screen edges, reference plane adjustment, and so forth can be easily carried out on the basis of this information in the depth direction.
Referring now to the attached drawings, which form a part of this original disclosure:
Selected embodiments of the present technology will now be explained with reference to the drawings. It will be apparent to those skilled in the art from this disclosure that the following descriptions of the embodiments of the present technology are provided for illustration only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.
Embodiments will now be described through reference to the drawings.
A depth direction position information display device 100 is made up of an input component 101, an image processor 102, a depth direction position information processor 103, a display processor 104, and a setting component 109.
Images and depth direction position information are inputted to the input component 101. The input component 101 then outputs the image to the image processor 102, and outputs the depth direction position information to the depth direction position information processor 103. The depth direction position information is set for at least part of the region (such as a subject, a block, a pixel, etc.) of an image in a two-dimensional plane when the image is expressed as a two-dimensional plane composed of a horizontal direction and a vertical direction.
The image inputted here (input image) is, for example, a still picture for each frame or for each field. The input image may also be continuous still pictures, for example. In this case, the input image is treated as a moving picture.
The input image includes, for example, a single image, a left-eye image and a right-eye image having binocular parallax, a three-dimensional image expressed by CG or another such surface model, and the like.
The depth direction position information includes, for example, the distance in the depth direction to the subject obtained by metering (depth coordinate), the amount of parallax between a left-eye image and a right-eye image having binocular parallax, coordinates indicating position information in a three-dimensional space for each apex of a model or texture information in a three-dimensional image, and the like. The amount of parallax corresponds, for example, to the amount of change in position information in the horizontal direction in regions corresponding to a left-eye image and a right-eye image.
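As a hedged sketch of how such an amount of parallax might be measured, block matching with a sum-of-absolute-differences cost is one common technique; the window size, search range, and function name below are assumptions, not values from this disclosure.

```python
import numpy as np

def parallax_for_block(left, right, y, x, size=8, search=64):
    # Compare the size x size block of the left-eye image at (y, x)
    # against horizontally shifted blocks of the right-eye image and
    # return the shift (in pixels) with the lowest SAD cost.
    block = left[y:y + size, x:x + size].astype(np.float64)
    best_shift, best_cost = 0, np.inf
    for shift in range(-search, search + 1):
        xs = x + shift
        if xs < 0 or xs + size > right.shape[1]:
            continue  # shifted block would fall outside the image
        cand = right[y:y + size, xs:xs + size].astype(np.float64)
        cost = np.abs(block - cand).sum()
        if cost < best_cost:
            best_cost, best_shift = cost, shift
    return best_shift
```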
The following is a description of a mode in which depth direction position information is set on the basis of an input image. First, a mode will be described in specific terms in which the distance in the depth direction is calculated on the basis of a single image. The camera shown in
Next, a mode in which the amount of parallax is set on the basis of a left-eye image and a right-eye image having binocular parallax will be described in specific terms. When the subjects shown in
In the following description, a plane in which the left-eye image and the right-eye image coincide, that is, a plane with no binocular parallax made up of the horizontal direction and the vertical direction, is defined as the reference plane. For example, the plane in which the parallax is zero is calculated on the basis of the left-eye image and the right-eye image, and this plane is set as the reference plane. For instance, when an image is captured using a two-lens camera, the positional relation between the various subjects can be ascertained from this reference plane, which makes it easier for three-dimensional imaging to be performed. The phrases “a plane with no binocular parallax” and “the plane in which the parallax is zero” here include the meaning of “a plane in which binocular parallax is within a specified range of error” and “a plane in which parallax is within a specified range of error using zero as a reference.”
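A minimal sketch of the "within a specified range of error" test, assuming a per-pixel parallax map and an arbitrary tolerance (the names and the 1-pixel default are assumptions):

```python
import numpy as np

def reference_plane_mask(parallax, tol=1.0):
    # True where the parallax is zero within `tol` pixels; those
    # pixels are treated as lying on the reference plane.
    return np.abs(parallax) <= tol
```

In practice the tolerance would be chosen to match the accuracy of whatever parallax estimation precedes this test.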
Finally, a mode will be described in specific terms in which coordinates indicating position information about the three-dimensional space for each apex in CG or another such surface model are set on the basis of a three-dimensional image expressed by this model. When the subjects shown in
The setting component 109 performs settings related to the display of depth direction position information. The setting component 109 outputs corresponding setting information to the image processor 102, the depth direction position information processor 103, and the display processor 104. Settings related to the display of depth direction position information include the following, for example.
(1) Setting of Depth Direction Position Information Serving as a Reference, Namely, Reference Plane Position Information (such as the Parallax Measurement Start Position, and a Position with no Binocular Parallax)
Depth direction position information that will serve as a reference, that is, reference plane position information, may be set on the basis of actual measurement, or may be set by relative calculation from an image. For instance, when the subjects shown in
In another mode, information indicating the position where there will be no binocular parallax in the display of a three-dimensional image is set as reference plane position information. For instance, when the subjects shown in
(2) Setting of Position Information (Horizontal Direction Position Information or Vertical Direction Position Information) used Along with Depth Direction Position Information
Horizontal direction position information and vertical direction position information are information for defining the position of the image in the horizontal direction (horizontal coordinate) and the position in the vertical direction (vertical coordinate). When the image is expressed as a two-dimensional plane composed of the horizontal direction and the vertical direction, the horizontal direction position information and vertical direction position information are set for at least a partial region (such as a subject, a block, a pixel, etc.) of the image on the two-dimensional plane (two-dimensional image).
Here, the horizontal direction position information or the vertical direction position information is selected in order to display a depth direction position information image. For example, the depth direction position information image for the subjects shown in
Also, the depth direction position information image for the subjects shown in
When an image has three-dimensional information, the perspective position may be changed. For example, when the perspective position is changed, the above-mentioned processing may be performed using horizontal direction position information after the change in perspective position, vertical direction position information after the change in perspective position, and depth direction position information after the change in perspective position.
Thus, the positional relation between subjects can be confirmed from various perspectives by using horizontal direction position information and vertical direction position information to display the depth direction position information. Specifically, the positional relation between the subjects during three-dimensional imaging can be ascertained, and three-dimensional imaging can be easily carried out.
(3) Setting the Range of the Depth Direction for the Displayed Depth Direction Position Information
Here, the display range of the depth direction position information is set. For example, when the display range for depth direction position information (the broken line in
In another mode, the depth direction position information may be converted so that the depth direction position information for subjects outside the display range (subjects A and C in
Here, the depth direction position information for subjects outside the display range may be displayed by using a different color or by flashing. This notifies the user that a subject is outside the display range. Also, within the display range of depth direction position information, if there is a range in which the parallax is too great in 3D view and causes discomfort, then this range may be set as a parallax warning region (the shaded region in
Similarly, if a subject that stands out forward in a 3D view is cut off at the edge of the screen, that range may be indicated as a parallax warning region (shaded region). Furthermore, the regions shown in
Thus, when a subject is outside the display range of depth direction position information, the depth direction position information for that subject is highlighted in its display, so the user can be notified in advance during 3D imaging about a subject that may cause discomfort because of excessive parallax in 3D view. Also, the depth direction position information for all subjects can be ascertained, without changing the display range of depth direction position information, by converting depth direction position information for subjects outside the display range so that they can be displayed within the display range.
(4) Setting Whether or not to use Depth Direction Position Information for a Partial Region of a Two-Dimensional Image, and Whether or not to use Depth Direction Position Information for the Entire Region of a Two-Dimensional Image
Here, for example, a region for the entire image shown in
Also, here, for example, a partial region (employed region) of the image shown in
(5) Setting the Image Display Position and Display Size
Here, the display position and display size of an image are set. For example, setting related to combining images, setting related to switching the display of images, setting for producing an image outputted to another display device, and the like are carried out.
For example, a setting for displaying the entire image as shown in
Also, as shown in
The settings shown here are just an example, and setting of the display position of another image and setting of another display size may also be performed.
(6) Setting Whether or not to Display a Depth Direction Position Information Image as a Combination with the Image; Setting Whether to Display a Combined Display of a Depth Direction Position Information Image in a Partial Region or Superposed with the Entire Screen; and Setting the Display Position and Display Size of a Depth Direction Position Information Image
Here, whether or not to display an image in combination with a depth direction position information image is set. For example, settings that direct the various displays given below are executed.
If there is no combination, then just the depth direction position information is displayed. In this case, the depth direction position information may be displayed on another display device besides the main display device. Also, in this case, display of the image and the depth direction position information image may be switched on a single display device.
Meanwhile, for example, when the image shown in
Also, when the entire configuration of the image is to be ascertained, as shown in
The image processor 102 processes the image inputted from the input component 101, thereby producing a two-dimensional image, and outputs this to the display processor 104. As a result of this image processing, a two-dimensional image corresponding to the display mode set with the setting component 109 is produced. Examples of image processing include the production of a combined image in which a left-eye image and a right-eye image having binocular parallax are superposed as in the setting in (5) above, the production of a monochromatic image, the conversion of the image size, the selection of a partial display region, and the conversion of a three-dimensional image into a two-dimensional image from a certain perspective.
The depth direction position information processor 103 processes the depth direction position information outputted from the input component 101 so as to create the display set with the setting component 109, thereby producing a two-dimensional image (depth direction position information image), and outputs this to the display processor 104.
Here, the depth direction position information is processed by choosing the information required for the production of a two-dimensional image. For example, depth direction position information and the horizontal direction position information thereof, or depth direction position information and the vertical direction position information thereof, in the set region of the image set in (2) above are chosen as the information required to produce a depth direction position information image. The chosen information is subjected, for example, to conversion in which the depth direction position information is put into a specified display range as set in (3) or (5) above, or to conversion into depth direction position information corresponding to a two-dimensional image as seen from the perspective set in the three-dimensional image.
A two-dimensional image indicating depth direction position information (depth direction position information image) is produced, for example, by converting depth direction position information and the horizontal direction position information thereof into image information made up of the vertical direction and the horizontal direction, and plotting this on a two-dimensional graph. Also, a depth direction position information image is produced by converting depth direction position information and the vertical direction position information thereof into image information made up of the horizontal direction and the vertical direction, and plotting this on a two-dimensional graph. The two-dimensional image may be subjected to colorization processing, flash processing, or the like to make the graph easier to read. Also, the reference plane set in (1) above may be displayed in the two-dimensional image, for example.
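One hedged way to render such a two-dimensional graph is sketched below with matplotlib, assuming the depth direction position information is a per-pixel parallax map whose columns give the horizontal coordinate; the plotting style and function name are assumptions, not the claimed display processing.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_depth_vs_horizontal(parallax, reference=0.0):
    # One point per pixel: horizontal coordinate on the x axis,
    # depth direction position information (parallax) on the y axis.
    h, w = parallax.shape
    xs = np.repeat(np.arange(w), h)
    zs = parallax.T.reshape(-1)
    plt.scatter(xs, zs, s=1)
    # Draw the reference plane set in (1) above as a horizontal line.
    plt.axhline(reference, color="red", label="reference plane")
    plt.xlabel("horizontal position (pixels)")
    plt.ylabel("depth (parallax, pixels)")
    plt.legend()
    plt.show()
```

The variant that pairs depth with vertical direction position information would plot rows instead of columns on the position axis.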
The display processor 104 combines the image inputted from the image processor 102 with the image inputted from the depth direction position information processor 103 and displays the combined image. More specifically, the display processor 104 displays a combination of the image outputted by the image processor 102 and the image outputted by the depth direction position information processor 103 on the basis of the display settings at the setting component 109 and the display settings of (4) and (6) above, for example.
An example of the configuration of the depth direction position information display device in an embodiment was described above through reference to a block diagram.
In this case, the setting component 109 processes as follows. First, the setting component 109 sets the depth direction position information that will serve as a reference. For instance, as shown in
Next, the setting component 109 sets the parallax warning region to the range shown in
Then, the setting component 109 uses the depth direction position information with respect to the entire region of the two-dimensional image shown in
When the above-mentioned settings are performed by the setting component 109, various processing is executed as follows on the basis of these settings.
First, in input processing P1, a left-eye image and a right-eye image having binocular parallax, and the amount of parallax for each pixel of the left-eye image and the right-eye image are inputted to the input component 101. The amount of parallax may be with respect to the left-eye image or to the right-eye image, or may be with respect to both the left-eye image and the right-eye image. Next, the input component 101 outputs the image to the image processor 102, and outputs depth direction position information to the depth direction position information processor 103. Here, the position of the depth direction position information corresponding to the image is defined by the horizontal direction position information and the vertical direction position information.
In image conversion processing P2, the image processor 102 produces an image according to the display settings of the setting component 109. For example, the image processor 102 adds the pixel values for the same position in the left-eye image and right-eye image having binocular parallax, and divides the result by two. As a result, the image processor 102 produces the image in
In
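A hedged sketch of the pixel-wise superposition described above, assuming same-sized 8-bit arrays for the two images (the function name is illustrative):

```python
import numpy as np

def superpose(left, right):
    # Add the pixel values at the same position in the left-eye and
    # right-eye images and divide by two (widen to 16 bits first so
    # the addition cannot overflow).
    return ((left.astype(np.uint16) + right.astype(np.uint16)) // 2).astype(np.uint8)
```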
In depth direction position information conversion processing P3, the depth direction position information processor 103 chooses depth direction position information and the horizontal direction position information thereof with respect to the entire region of the two-dimensional image on the basis of the display settings at the setting component 109. The depth direction position information processor 103 displays the reference plane set by the reference plane position information in the above-mentioned two-dimensional graph. Also, the depth direction position information processor 103 plots in a two-dimensional graph the amount of parallax within ±3% of the horizontal direction size of the image, using this reference plane as a reference. For example, when the horizontal size of the image is 1920 pixels, the amount of parallax that is ±3% of the horizontal direction size of the image is ±58 pixels. Here, the plus side is defined in the direction of shifting horizontally to the right, and the minus side is defined in the direction of shifting horizontally to the left. In this case, the depth direction position information processor 103 plots the range over which the amount of parallax for each pixel is from +58 pixels to −58 pixels, that is, the amount of parallax in the depth direction display range, in a two-dimensional graph.
Here, for an object whose amount of parallax exceeds the depth direction display range, the amount of parallax of this object is set to the upper or lower limit value for the amount of parallax within the employed region (in this case, the total amount of parallax since the entire screen is the employed region). For example, if the amount of parallax of the object is over +58 pixels, the amount of parallax of this object is set to +58 pixels. If the amount of parallax of the object is under −58 pixels, the amount of parallax of this object is set to −58 pixels.
Also, the region corresponding to the amount of parallax outside the range of ±2% of the horizontal direction size of the image is set as the parallax warning region. For example, when the horizontal size of the image is 1920 pixels, the amount of parallax that is ±2% of the horizontal direction size of the image is ±38 pixels. In this case, the depth direction position information processor 103 plots the amount of parallax outside the range of ±38 pixels in a two-dimensional graph just as with the above-mentioned depth direction display range. The depth direction position information processor 103 then adds color, such as yellow or red, to the graph background to highlight the parallax warning region, and produces the depth direction position information image in
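The ±3% display-range clamp and the ±2% warning test used in this processing could look like the following sketch; the 1920-pixel width and the resulting ±58/±38 pixel limits come from the example above, while the function names are assumptions.

```python
import numpy as np

WIDTH = 1920
DISPLAY_LIMIT = round(WIDTH * 0.03)  # +/-58 pixels: depth direction display range
WARNING_LIMIT = round(WIDTH * 0.02)  # +/-38 pixels: parallax warning threshold

def clamp_to_display_range(parallax):
    # Parallax beyond the display range is set to the upper or lower
    # limit value, as described for objects outside the range.
    return np.clip(parallax, -DISPLAY_LIMIT, DISPLAY_LIMIT)

def warning_mask(parallax):
    # True where the parallax falls in the parallax warning region.
    return np.abs(parallax) > WARNING_LIMIT
```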
In combination setting determination P4, it is determined whether or not a combination of the image and the depth direction position information image has been set. In this embodiment, since the display setting shown in
In the combination processing P5, the display processor 104 combines the image with the depth direction position information image, and produces a combined display image. For example, when the depth direction position information image is produced in full size, the display processor 104 converts the depth direction position information image to semi-transparent, and reduces the vertical axis (depth direction) size of the depth direction position information image to one-third. The display processor 104 then matches the horizontal direction position information of the depth direction position information image with the horizontal position of the pixels of the image in the lower one-third region of the image. As a result, the converted depth direction position information image is superposed with the lower one-third region of the image, and a combined display image is produced. More precisely, the image brightness is increased in the lower one-third region of the image, the pixel values for the same position are added in the converted depth direction position information image and the image with increased brightness, and this result is divided by two. This processing produces a combined display image.
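As a non-limiting sketch of this combination step, assuming both inputs are 8-bit grayscale arrays of the same width and an illustrative brightness gain (the gain value and function name are assumptions):

```python
import numpy as np

def combine_lower_third(image, depth_image, brightness_gain=1.2):
    h, w = image.shape
    third = h // 3
    # Reduce the depth direction position information image to
    # one-third height by nearest-neighbour row sampling so that its
    # horizontal coordinates still line up with the image pixels.
    rows = np.arange(third) * depth_image.shape[0] // third
    reduced = depth_image[rows, :w].astype(np.float64)
    out = image.astype(np.float64)
    # Increase the brightness of the lower one-third region, then
    # average it with the reduced depth image, as described above.
    lower = np.clip(out[h - third:] * brightness_gain, 0, 255)
    out[h - third:] = (lower + reduced) / 2.0
    return out.astype(np.uint8)
```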
In display processing P6, the display processor 104 displays the combined image produced in the combination processing P5. Consequently, for example, an image corresponding to the display setting at the setting component 109 is displayed.
An example of the operation of a depth direction position information display device in an embodiment was described above through reference to a flowchart.
The various settings and processing performed with the above-mentioned depth direction position information display device 100 can be executed by a controller (not shown). This controller may be made up of a ROM (read only memory) and a CPU, for example. The ROM holds programs for executing various settings and processing. The controller has the CPU execute the programs in the ROM, and thereby controls the settings and processing of the input component 101, the image processor 102, the depth direction position information processor 103, the display processor 104, and the setting component 109. When the various settings and processing are executed, a recording component (not shown) such as a RAM (random access memory) is used for temporarily storing the data.
With the above-mentioned depth direction position information display device 100, information in the depth direction can be accurately and easily recognized. Also, object cropping adjustment of the display screen edges, reference plane adjustment, and so forth can be easily carried out on the basis of this information in the depth direction. Furthermore, the depth direction position information image can be used to notify the user during imaging so that a subject is not in the parallax warning region, or to correct a recorded image, or to adjust the depth direction position information of a reference plane in which there is no left and right parallax.
(a) In the above embodiment, the amount of parallax for each pixel was used as depth direction position information, but the amount of parallax for each block, each region, or each subject may be used instead as depth direction position information. Also, a person, thing, or other such subject was used as the object to simplify the description, but processing can be performed as discussed above on all subjects having parallax. For example, as shown in
(b) In the above embodiment, an example was given in which a display image was produced by superposing the left-eye image and right-eye image as shown in
(c) In the above embodiment, depth direction position information may be reported by incorporating a measure that indicates depth distance (a meter, pixel count, etc.) into the depth direction position information image. This allows the user to easily grasp the depth direction position information.
(d) In the above embodiment, an example was given in which a combined image was produced by a mean superposition of the left-eye image and right-eye image, but how the combined image is produced is not limited to what is given in the above embodiment, and the image may be produced in any way. For instance, a combined image may be produced by alternately displaying the left-eye image and right-eye image at high speed. Or, a combined image may be produced by superposing the left-eye image and right-eye image once every line.
(e) In the above embodiment, the setting component 109 made settings related to the display of depth direction position information, but the settings related to the display of depth direction position information are not limited to what was given in (1) to (6) above, and any such settings may be made. The device can be made to operate according to the settings even if the settings related to the display of depth direction position information are made in a different mode from what was given in (1) to (6) above.
(f) In the above embodiment, an example was given in which depth direction position information was inputted to the input component 101, but depth direction position information may be produced on the basis of an image inputted to the input component 101.
(g) In the above embodiment, an example was given in which the horizontal direction position information included a horizontal direction coordinate (such as an x coordinate), the vertical direction position information included a vertical direction coordinate (such as a y coordinate), and the depth direction position information included a depth direction coordinate (such as a z coordinate). Specifically, in the above embodiment, three-dimensional position information about the image was defined in a three-dimensional orthogonal coordinate system, but the three-dimensional position information about the image may be defined in some other way. For example, three-dimensional position information about the image may be defined by a polar coordinate system composed of the distance r from a reference point, the angle θ around the reference point, and the height y from the reference point. The reference point (home point) can be set as desired.
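A hedged sketch of such a polar definition, converting between an orthogonal (x, y, z) point and the (r, θ, height) representation around a reference point (all names are illustrative):

```python
import math

def to_polar(x, y, z, ref=(0.0, 0.0, 0.0)):
    dx, dy, dz = x - ref[0], y - ref[1], z - ref[2]
    r = math.hypot(dx, dz)       # distance r from the reference point
    theta = math.atan2(dz, dx)   # angle around the reference point
    return r, theta, dy          # height y from the reference point

def from_polar(r, theta, height, ref=(0.0, 0.0, 0.0)):
    # Inverse conversion back to orthogonal coordinates.
    return (ref[0] + r * math.cos(theta),
            ref[1] + height,
            ref[2] + r * math.sin(theta))
```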
In understanding the scope of the present disclosure, the term “comprising” and its derivatives, as used herein, are intended to be open ended terms that specify the presence of the stated features, elements, components, groups, integers, and/or steps, but do not exclude the presence of other unstated features, elements, components, groups, integers and/or steps. The foregoing also applies to words having similar meanings such as the terms, “including”, “having” and their derivatives. Also, the terms “part,” “section,” “portion,” “member” or “element” when used in the singular can have the dual meaning of a single part or a plurality of parts. Also as used herein to describe the above embodiment(s), the following directional terms “forward”, “rearward”, “above”, “downward”, “vertical”, “horizontal”, “below” and “transverse” as well as any other similar directional terms refer to those directions of a display device and a display method. Accordingly, these terms, as utilized to describe the present technology should be interpreted relative to a display device and a display method.
The term “configured” as used herein to describe a component, section, or part of a device implies the existence of other unclaimed or unmentioned components, sections, members or parts of the device to carry out a desired function.
The terms of degree such as “substantially”, “about” and “approximately” as used herein mean a reasonable amount of deviation of the modified term such that the end result is not significantly changed.
While only selected embodiments have been chosen to illustrate the present technology, it will be apparent to those skilled in the art from this disclosure that various changes and modifications can be made herein without departing from the scope of the technology as defined in the appended claims. For example, the size, shape, location or orientation of the various components can be changed as needed and/or desired. Components that are shown directly connected or contacting each other can have intermediate structures disposed between them. The functions of one element can be performed by two, and vice versa. The structures and functions of one embodiment can be adopted in another embodiment. It is not necessary for all advantages to be present in a particular embodiment at the same time. Every feature which is unique from the prior art, alone or in combination with other features, also should be considered a separate description of further technologies by the applicant, including the structural and/or functional concepts embodied by such feature(s). Thus, the foregoing descriptions of the embodiments according to the present technology are provided for illustration only, and not for the purpose of limiting the technology as defined by the appended claims and their equivalents.
The technology disclosed herein can be broadly applied to display devices that display information about a three-dimensional image.
Number | Date | Country | Kind |
---|---|---|---
2010-255501 | Nov 2010 | JP | national |
2011-206781 | Sep 2011 | JP | national |