This invention relates to a representative image decision apparatus, an image compression apparatus and methods and programs for controlling the operation thereof.
It has become possible to image three-dimensional objects and display them as stereoscopic images. In the case of a display device that cannot display stereoscopic images, consideration has been given to selecting a representative image from a plurality of images representing a stereoscopic image and displaying the selected representative image. To achieve this, there is, for example, a technique (Japanese Patent Application Laid-Open No. 2009-42900) for selecting, from a moving image obtained by imaging a three-dimensional object, an image that captures the features of the object. However, there are cases where an important subject does not appear in the selected image even though it appears in the other images. Furthermore, there is a technique (Japanese Patent Application Laid-Open No. 6-203143) for extracting an occlusion region (a hidden region), which indicates a portion of an image that does not appear in other images, from among images of a plurality of frames obtained by imaging from a plurality of different viewpoints, and finding the outline of a subject with a high degree of accuracy. However, this technique cannot decide upon a representative image. Further, when compression is applied to a plurality of images at a uniform ratio, there are instances where the image quality of important images declines.
An object of the present invention is to decide a representative image in which an important subject portion also appears. A further object of the present invention is to prevent a decline in the image quality of important images.
A representative image decision apparatus according to a first aspect of the present invention is characterized by comprising: an occlusion region detection device (occlusion region detection means) for detecting, from each image of a plurality of images captured from different viewpoints and having at least one portion in common, an occlusion region that does not appear in the other images; a score calculation device (score calculation means) for calculating scores, which represent degrees of importance of the occlusion regions detected by the occlusion region detection device, based upon ratios of a prescribed object contained in the occlusion regions of each of the plurality of images; and a decision device (decision means) for deciding that an image containing an occlusion region for which the score calculated by the score calculation device is high is a representative image.
The first aspect of the present invention also provides an operation control method suited to the above-described representative image decision apparatus. Specifically, the method comprises: an occlusion region detection device detecting, from each of a plurality of images captured from different viewpoints and having at least one portion in common, an occlusion region that does not appear in the other images; a score calculation device calculating scores, which represent degrees of importance of the occlusion regions detected by the occlusion region detection device, based upon ratios of a prescribed object contained in the occlusion regions of each of the plurality of images; and a decision device deciding that an image containing an occlusion region for which the score calculated by the score calculation device is high is a representative image.
The first aspect of the present invention also provides a program for implementing the method of controlling the operation of the above-described representative image decision apparatus. It may be arranged so that a recording medium on which such an operation program has been stored is provided.
In accordance with the present invention, an occlusion region is detected from each image of a plurality of images, the detected occlusion region not appearing in the other images. Scores representing degrees of importance of the occlusion regions are calculated based upon ratios of a prescribed object in the occlusion regions of each of the plurality of images. An image containing an occlusion region for which the calculated score is high is decided upon as a representative image. In accordance with the present invention, since an image for which the degree of importance of an image portion in an occlusion region is high (namely an image having a large proportion of the prescribed object) is decided upon as a representative image, an image in which a highly important image portion (the prescribed object) does not appear can be prevented from being decided upon as a representative image.
For example, the score calculation device calculates scores, which represent degrees of importance of the occlusion regions detected by the occlusion region detection device, based upon at least one among ratios of a prescribed object contained in the occlusion regions of respective ones of the plurality of images, edge strengths of images within the occlusion regions, saturations of images within the occlusion regions, brightnesses of images within the occlusion regions, areas of the occlusion regions and variance of images within the occlusion regions.
For example, the score calculation device performs calculation so as to raise the score of a region where occlusion regions overlap.
In a case where the plurality of images are three or more, the decision device decides upon images of two or more frames, which contain occlusion regions for which the scores calculated by the score calculation device are high, as representative images, by way of example.
The apparatus may further comprise a compression device for performing compression in such a manner that the more an image is one containing an occlusion region for which the score calculated by the score calculation device is high, the smaller the ratio of compression applied.
The apparatus may further comprise a first notification device (first notification means) for giving notification in such a manner that imaging is performed from a viewpoint (on at least one of both sides of the representative image) that is near a viewpoint of the representative image decided by the decision device.
In a case where the plurality of images are three or more, the decision device decides upon images of two or more frames, which contain occlusion regions for which the scores calculated by the score calculation device are high, as representative images, by way of example. In addition, the apparatus further comprises: a determination device (determination means) for determining whether the images of the two frames decided by the decision device were captured from adjacent viewpoints; and a second notification device (second notification means), responsive to a determination made by the determination device that the images of the two frames decided by the decision device were captured from adjacent viewpoints, for giving notification in such a manner that imaging will be performed from a viewpoint between viewpoints at two locations at which the two frames of images were captured, and responsive to a determination made by the determination device that the images of the two frames decided by the decision device were not captured from adjacent viewpoints, for giving notification in such a manner that imaging will be performed from a viewpoint close to the viewpoint of an image containing an occlusion region having the highest score.
The decision device decides that an image containing an occlusion region with the highest score is a representative image, by way of example. In this case, the apparatus further comprises a recording control device (recording control means) for correlating, and recording on a recording medium, image data representing each image of the plurality of images and data identifying the representative image decided by the decision device.
The prescribed object is, for example, a face.
An image compression apparatus according to a second aspect of the present invention is characterized by comprising: an occlusion region detection device (occlusion region detection means) for detecting, from each image of a plurality of images captured from different viewpoints and having at least one portion in common, an occlusion region that does not appear in the other images; a score calculation device (score calculation means) for calculating scores, which represent degrees of importance of the occlusion regions detected by the occlusion region detection device, based upon ratios of a prescribed object contained in the occlusion regions of each of the plurality of images; and a compression device (compression means) for performing compression in such a manner that the more an image is one containing an occlusion region for which the score calculated by the score calculation device is high, the smaller the ratio of compression applied.
The second aspect of the present invention also provides an operation control method suited to the above-described image compression apparatus. Specifically, the method comprises: an occlusion region detection device detecting, from each of a plurality of images captured from different viewpoints and having at least one portion in common, an occlusion region that does not appear in other images; a score calculation device calculating scores, which represent degrees of importance of the occlusion regions detected by the occlusion region detection device, based upon ratios of a prescribed object contained in the occlusion regions of each of the plurality of images; and a compression device performing compression in such a manner that the more an image is one containing an occlusion region for which the score calculated by the score calculation device is high, the smaller the ratio of compression applied.
The second aspect of the present invention also provides a program for implementing the method of controlling the operation of the above-described image compression apparatus. Further, it may be arranged so that a recording medium on which such an operation program has been stored is provided as well.
In accordance with the second aspect of the present invention, occlusion regions are detected from respective ones of a plurality of images, the occlusion regions not appearing in other images. Scores representing degrees of importance of the occlusion regions are calculated based upon ratios of a prescribed object in the occlusion regions of respective ones of the plurality of images. Compression (low compression) is performed in such a manner that the more an image is one containing an occlusion region for which the calculated score is high, the smaller the ratio of compression applied. The more an image is one for which the degree of importance of an occlusion region is high, the higher the image quality of the image obtained.
FIGS. 1a and 1b illustrate a left-eye image and a right-eye image, respectively;
FIGS. 10a to 10c illustrate three images having different viewpoints;
FIGS. 14a to 14c illustrate three images having different viewpoints;
FIGS. 16a to 16c illustrate three images having different viewpoints;
FIGS. 1a and 1b illustrate images captured by a stereoscopic imaging digital still camera.
The left-eye image 1L contains person images 2L and 3L, and the right-eye image 1R contains person images 2R and 3R. The person image 2L contained in the left-eye image 1L and the person image 2R contained in the right-eye image 1R represent the same person, and the person image 3L contained in the left-eye image 1L and the person image 3R contained in the right-eye image 1R represent the same person.
The left-eye image 1L and the right-eye image 1R have been captured from different viewpoints. How the person image 2L and person image 3L contained in the left-eye image 1L look differs from how the person image 2R and person image 3R contained in the right-eye image 1R look. There is an image portion that appears in the left-eye image 1L but not in the right-eye image 1R. Conversely, there is an image portion that appears in the right-eye image 1R but not in the left-eye image 1L.
This embodiment decides a representative image from among a plurality of images that have been captured from different viewpoints and share at least a portion in common. In the example shown in FIGS. 1a and 1b, a representative image is decided from the left-eye image 1L and the right-eye image 1R.
The left-eye image 1L and the right-eye image 1R, which are multiple images captured from different viewpoints as shown in FIGS. 1a and 1b, are read first.
First, occlusion regions in the left-eye image 1L are detected (occlusion regions in the right-eye image 1R may just as well be detected). The left-eye image 1L and the right-eye image 1R are compared, and regions represented by pixels for which pixels corresponding to the pixels constituting the left-eye image 1L do not exist in the right-eye image 1R are adopted as the occlusion regions in the left-eye image 1L.
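The correspondence test just described can be sketched as follows; the list-of-lists correspondence map and the use of `None` as the "no corresponding pixel" sentinel are illustrative assumptions rather than details taken from this description.

```python
# Hypothetical sketch: match_x[y][x] holds the x-coordinate of the pixel
# in the right-eye image that corresponds to pixel (x, y) of the
# left-eye image, or None when no corresponding pixel exists.

def detect_occlusion_mask(match_x):
    """Return a boolean mask that is True exactly where the left-eye
    pixel has no corresponding right-eye pixel (an occlusion region)."""
    return [[col is None for col in row] for row in match_x]

# A 2x3 toy correspondence map with two unmatched (occluded) pixels.
toy_map = [
    [0, None, 2],
    [None, 1, 2],
]
mask = detect_occlusion_mask(toy_map)
# mask -> [[False, True, False], [True, False, False]]
```

Swapping the roles of the two images yields the occlusion regions of the right-eye image in the same way.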
FIGS. 3a and 3b show the left-eye image 1L and the right-eye image 1R, in which occlusion regions are illustrated.
In the left-eye image 1L shown in FIG. 3a, the occlusion regions 4L are illustrated.
When the occlusion regions 4L in the left-eye image 1L are detected, the score of the occlusion regions 4L is calculated (step 13). The method of calculating scores will be described later.
If the detection of occlusion regions and the calculation of the scores of occlusion regions have not ended with regard to all images of the plurality of images read (“NO” at step 14), then the detection of occlusion regions and the calculation of the scores of occlusion regions are performed with regard to the remaining images. In this case, occlusion regions regarding the right-eye image are detected (step 12).
FIG. 3b shows the right-eye image 1R, in which occlusion regions are illustrated.
Regions represented by pixels for which pixels corresponding to the pixels constituting the right-eye image 1R do not exist in the left-eye image 1L are adopted as occlusion regions 4R in the right-eye image 1R. In the right-eye image 1R shown in FIG. 3b, the occlusion regions 4R are illustrated.
The score of the occlusion regions 4L in the left-eye image 1L and the score of the occlusion regions 4R in the right-eye image 1R are calculated (step 13).
When the detection of occlusion regions and the calculation of the scores of occlusion regions is finished with regard to all of the images read (“YES” at step 14), the image containing the occlusion regions having the highest score is decided upon as a representative image (step 15).
If the proportion of a face contained in an occlusion region is 0% to 49%, 50% to 99% or 100%, then the score Sf is 0, 40 or 100, respectively.
If, in a case where edge strength takes on levels from 0 to 255, the average edge strength of the image portion of an occlusion region takes on a level from 0 to 127, a level from 128 to 191 or a level from 192 to 255, then the score Se is 0, 50 or 100, respectively.
If, in a case where average saturation takes on levels from 0 to 100, the average saturation of the image portion of an occlusion region takes on a level from 0 to 59, a level from 60 to 79 or a level from 80 to 100, then the score Sc is 0, 50 or 100, respectively.
If, in a case where average brightness takes on levels from 0 to 100, the average brightness of the image portion of an occlusion region takes on a level from 0 to 59, a level from 60 to 79 or a level from 80 to 100, then the score Sb is 0, 50 or 100, respectively.
If the area ratio is 0% to 9%, 10% to 29% or 30% or greater, then the score Sa is 0, 50 or 100, respectively.
If the variance takes on a value of 0 to 99, a value of 100 to 999 or a value of 1000 or greater, then the score Sv is 10, 60 or 100, respectively.
A total score St is thus calculated, as shown in Equation 1, from the score Sf in accordance with face-region area ratio, the score Se in accordance with average edge strength, the score Sc in accordance with average saturation, the score Sb in accordance with average brightness, the score Sa in accordance with occlusion-region area ratio, and the score Sv in accordance with variance value. In Equation 1, α1 to α6 are arbitrary coefficients.
St = α1×Sf + α2×Se + α3×Sc + α4×Sb + α5×Sa + α6×Sv … Equation 1
The image containing the occlusion region for which the score St thus calculated is highest is decided upon as the representative image.
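The threshold tables and Equation 1 above can be sketched as follows; the helper name `step_score`, the argument names, and the default choice of 1 for every coefficient α1 to α6 are illustrative assumptions.

```python
# Map a measured value onto a stepped score, per the tables above:
# scores[i] applies while value is below thresholds[i]; the last score
# applies beyond all thresholds.
def step_score(value, thresholds, scores):
    for threshold, score in zip(thresholds, scores):
        if value < threshold:
            return score
    return scores[-1]

# Total score St of Equation 1; alphas stand in for the arbitrary
# coefficients alpha1 to alpha6 (all 1 here purely for illustration).
def total_score(face_pct, edge, saturation, brightness, area_pct, variance,
                alphas=(1, 1, 1, 1, 1, 1)):
    sf = step_score(face_pct, (50, 100), (0, 40, 100))     # face-region ratio
    se = step_score(edge, (128, 192), (0, 50, 100))        # average edge strength
    sc = step_score(saturation, (60, 80), (0, 50, 100))    # average saturation
    sb = step_score(brightness, (60, 80), (0, 50, 100))    # average brightness
    sa = step_score(area_pct, (10, 30), (0, 50, 100))      # occlusion-area ratio
    sv = step_score(variance, (100, 1000), (10, 60, 100))  # variance
    a1, a2, a3, a4, a5, a6 = alphas
    return a1*sf + a2*se + a3*sc + a4*sb + a5*sa + a6*sv   # Equation 1
```

Given one St value per image, the frame with the largest St would be taken as the representative image, e.g. `max(range(len(st_list)), key=st_list.__getitem__)`.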
In the embodiment described above, a representative image is decided using the total score St. However, the image adopted as the representative image may just as well be the image that contains the occlusion region for which any one score is highest from among the score Sf in accordance with face-region area ratio, the score Se in accordance with average edge strength, the score Sc in accordance with average saturation, the score Sb in accordance with average brightness, the score Sa in accordance with occlusion-region area ratio and the score Sv in accordance with variance value, or the image that contains the occlusion region for which the sum of any combination of these scores is highest. For example, the representative image may be decided from the score Sf obtained based solely upon area ratios of face regions contained in occlusion regions (here the object is a face, but the object may be other than a face). Further, the representative image may just as well be decided from the score Sf of the face-region area ratio and at least one from among the score Se in accordance with average edge strength, the score Sc in accordance with average saturation, the score Sb in accordance with average brightness, the score Sa in accordance with occlusion-region area ratio and the score Sv in accordance with variance value.
FIGS. 10a, 10b and 10c illustrate a modification of this embodiment.
This modification decides a representative image from the images of three frames. Operation is similar for images of four frames or more.
FIGS. 10a, 10b and 10c are examples of a first image 31A, a second image 31B and a third image 31C captured from different viewpoints and having at least a portion of the imaging zone in common. The second image 31B is an image captured from the front side of the subject. The first image 31A is an image captured from a viewpoint leftward of the second image 31B (to the left of the subject). The third image 31C is an image captured from a viewpoint rightward of the second image 31B (to the right of the subject).
The first image 31A contains a person image 32A and a person image 33A, the second image 31B contains a person image 32B and a person image 33B, and the third image 31C contains a person image 32C and a person image 33C. The person images 32A, 32B and 32C represent the same person, and the person images 33A, 33B and 33C represent the same person.
The occlusion regions of the second image 31B include first occlusion regions that appear in the second image 31B but not in the first image 31A, second occlusion regions that appear in the second image 31B but not in the third image 31C, and a third occlusion region that appears in the second image 31B but neither in the first image 31A nor in the third image 31C.
Occlusion regions 34 on the right side of the person image 32B and on the right side of the person image 33B are first occlusion regions 34 that appear in the second image 31B but not in the first image 31A. Occlusion regions 35 on the left side of the person image 32B and on the left side of the person image 33B are second occlusion regions 35 that appear in the second image 31B but not in the third image 31C. A region in which the first occlusion region 34 on the right side of the person image 32B and the second occlusion region 35 on the left side of the person image 33B overlap is a third occlusion region 36 that appears in the second image 31B but neither in the first image 31A nor in the third image 31C. Thus, in the case of images of three or more frames, there exists an occlusion region (the third occlusion region 36) which indicates an image portion that does not appear in any image other than the image for which the score of the occlusion regions is calculated, as well as occlusion regions (the first occlusion regions 34 and second occlusion regions 35) which indicate image portions that do not appear in only some of the other images. When scores are calculated, the weighting of a score obtained from an occlusion region which indicates an image portion that does not appear in any other image is increased, and the weighting of a score obtained from an occlusion region which indicates an image portion that does not appear in only some of the other images is decreased (that is, the score of the overlap occlusion region 36 is raised). Naturally, such weighting need not be changed.
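One way to realize the weighting just described is sketched below; the per-pixel miss-count representation and the particular weight values are illustrative assumptions.

```python
# miss_counts[y][x]: in how many of the other images the pixel at (x, y)
# does not appear (0 = the pixel appears in every other image).
def weighted_occlusion_score(miss_counts, per_pixel=1.0, overlap_boost=2.0):
    """Sum a per-pixel score over all occluded pixels, boosting pixels
    hidden in every other image (such as the overlap region 36)."""
    # Highest miss count observed, used here as a proxy for
    # "missing from every other image".
    n_other = max(max(row) for row in miss_counts)
    score = 0.0
    for row in miss_counts:
        for c in row:
            if c == 0:
                continue  # visible in every other image: not occluded
            weight = overlap_boost if c == n_other else 1.0
            score += per_pixel * weight
    return score

# 2x2 toy map: one pixel missing from one other image, two pixels
# missing from both other images (the overlap region).
# -> 1*1.0 + 2*2.0 = 5.0
weighted_occlusion_score([[0, 1], [2, 2]])
```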
If a representative image is decided as set forth above, the representative image decided is displayed on a display device that displays two-dimensional images. Further, it may be arranged so that, in a case where image data representing images having a plurality of different viewpoints is stored in a single image file, image data representing a thumbnail image of the decided representative image is recorded in the header of the file. Naturally, it may be arranged so that identification data of the representative image is recorded in the header of the file.
In this embodiment, images of three frames are read (the number of frames may be more than three) (step 11A). The scores of occlusion regions are calculated in each of the images of the three frames (steps 12 to 14). From among the images of the three frames, the images of two frames having high scores are decided upon as representative images (step 15A). Thus, representative images may be of two frames and not one frame. By deciding upon the images of two frames as representative images, a stereoscopic image can be displayed using the images of the two frames that have been decided. In a case where images of four or more frames have been read, the representative images may be of three or more frames, as a matter of course.
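The choice of the two highest-scoring frames at step 15A amounts to a top-k selection; the sketch below is a minimal illustration with assumed score values.

```python
def pick_representatives(scores, n=2):
    """Return the indices of the n frames whose occlusion-region scores
    are highest, best first."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order[:n]

# Three frames with assumed scores: frames 1 and 2 become the
# representative images (and can be paired for stereoscopic display).
pick_representatives([120, 340, 260])  # -> [1, 2]
```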
The representative image is decided in the manner described above (step 15). The scores of occlusion regions have been stored with regard to respective ones of all read images. A compression ratio is then selected: the higher the score of an occlusion region, the lower the compression ratio selected, resulting in less compression (step 16). Compression ratios are predetermined, and a selection is made from among these predetermined compression ratios. Each of the read images is compressed using the selected compression ratio (step 17). The higher the score of an occlusion region, the more important the image is deemed to be, and the more important an image, the higher the image quality thereof becomes.
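The selection at step 16 from predetermined compression ratios can be sketched as follows; the text does not give the actual ratio values or score cut-offs, so the numbers below are purely illustrative assumptions.

```python
# Predetermined (score threshold, compression ratio) pairs, highest
# threshold first: a higher occlusion score selects a lower compression
# ratio, i.e. lighter compression and better image quality.
RATIO_TABLE = [(400, 0.2), (200, 0.5), (0, 0.8)]

def select_compression_ratio(score):
    for min_score, ratio in RATIO_TABLE:
        if score >= min_score:
            return ratio
    return RATIO_TABLE[-1][1]

select_compression_ratio(500)  # -> 0.2 (important image, light compression)
select_compression_ratio(50)   # -> 0.8 (less important, heavy compression)
```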
In the foregoing embodiment, a compression ratio is selected (decided) upon deciding that an image having the highest calculated score is a representative image, and the image is compressed at the compression ratio selected. However, the compression ratio may be selected without deciding that an image having the highest score is a representative image. That is, an arrangement may be adopted in which occlusion regions are detected from each image of a plurality of images, a compression ratio is selected in accordance with the scores of the detected occlusion regions, and each image is compressed at the compression ratio selected.
In the above-described embodiment as well, a representative image may be decided using the total score St, as mentioned above, and a compression ratio may be selected in accordance with any one score from among score Sf in accordance with face-region area ratio, score Se in accordance with average edge strength, score Sc in accordance with average saturation, score Sb in accordance with average brightness, score Sa in accordance with occlusion-region area ratio and score Sv in accordance with variance value, or in accordance with the sum of any combination of these scores. For example, the representative image may be decided from the score Sf obtained based solely upon area ratios of face regions contained in occlusion regions (here the face is an object, but the object may be other than a face). Further, the compression ratio may be selected from the score Sf of the face-region area ratio and at least one from among the score Se in accordance with average edge strength, score Sc in accordance with average saturation, score Sb in accordance with average brightness, score Sa in accordance with occlusion-region area ratio and score Sv in accordance with variance value.
FIGS. 14a, 14b and 14c show a first image 41A, a second image 41B and a third image 41C captured from different viewpoints.
The first image 41A contains subject images 51A, 52A, 53A and 54A, the second image 41B contains subject images 51B, 52B, 53B and 54B, and the third image 41C contains subject images 51C, 52C, 53C and 54C. The subject images 51A, 51B and 51C represent the same subject, the subject images 52A, 52B and 52C represent the same subject, the subject images 53A, 53B and 53C represent the same subject, and the subject images 54A, 54B and 54C represent the same subject. It is assumed that the first image 41A, second image 41B and third image 41C have been captured from adjacent viewpoints.
In the manner described above, occlusion regions are detected from each of the first image 41A, the second image 41B and the third image 41C (the occlusion regions are not shown in FIGS. 14a to 14c), and the scores of the occlusion regions are calculated.
In this embodiment, if the two images having the higher-order scores are adjacent, then an image captured from a viewpoint between the two viewpoints from which these two images were captured is considered to be an important image. Accordingly, the user is notified so as to shoot from the viewpoint between the two viewpoints from which the two images having the higher-order scores were captured. In the example shown in FIGS. 14a to 14c, an image 41D is obtained by imaging from such a viewpoint.
The image 41D contains subject images 51D, 52D, 53D and 54D. The subject image 51D represents the same subject as that of the subject image 51A of the first image 41A, the subject image 51B of the second image 41B and the subject image 51C of the third image 41C shown in FIGS. 14a to 14c.
FIGS. 16a, 16b and 16c show a first image 61A, a second image 61B and a third image 61C captured from different viewpoints.
The first image 61A contains subject images 71A, 72A, 73A and 74A, the second image 61B contains subject images 71B, 72B, 73B and 74B, and the third image 61C contains subject images 71C, 72C, 73C and 74C. The subject images 71A, 71B and 71C represent the same subject, the subject images 72A, 72B and 72C represent the same subject, the subject images 73A, 73B and 73C represent the same subject, and the subject images 74A, 74B and 74C represent the same subject. It is assumed that the first image 61A, second image 61B and third image 61C also have been captured from adjacent viewpoints.
Occlusion regions are detected from each of the first image 61A, the second image 61B and the third image 61C (the occlusion regions are not shown in FIGS. 16a to 16c), and the scores of the occlusion regions are calculated.
If the two images having the higher-order scores are adjacent, then an image captured from a viewpoint between the two viewpoints from which these two images were captured is considered to be an important image, as mentioned above. However, in a case where the two images having the higher-order scores are not adjacent, the image having the highest score is considered important and the user is notified so as to shoot from a viewpoint that is in the vicinity of the viewpoint from which this image was captured. In the example shown in FIGS. 16a to 16c, an image 61D is obtained by imaging from such a viewpoint.
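The two-branch recommendation described above can be sketched as follows; modelling viewpoints as integer positions where adjacency means a difference of 1 is an illustrative assumption.

```python
def recommend_viewpoint(positions, scores):
    """positions[i]: assumed integer viewpoint position of frame i;
    scores[i]: its occlusion-region score.  Returns the position the
    user should be asked to shoot from."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    best, second = order[0], order[1]
    if abs(positions[best] - positions[second]) == 1:
        # Top two frames were shot from adjacent viewpoints:
        # recommend the point between them.
        return (positions[best] + positions[second]) / 2
    # Otherwise recommend a point near the highest-scoring viewpoint.
    return positions[best]

recommend_viewpoint([0, 1, 2], [300, 250, 100])  # -> 0.5 (between frames 0 and 1)
recommend_viewpoint([0, 1, 2], [300, 100, 250])  # -> 0 (near frame 0)
```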
The image 61D contains subject images 71D, 72D, 73D and 74D. The subject image 71D represents the same subject as that of the subject image 71A of the first image 61A, the subject image 71B of the second image 61B and the subject image 71C of the third image 61C shown in FIGS. 16a to 16c.
In this way, the user can be prompted to capture an image thought to be important.
This processing procedure is started by setting the shooting assist mode. If the shooting mode per se has not ended owing to end of imaging or the like (“NO” at step 41), then whether captured images obtained by imaging the same subject are more than two frames in number is ascertained (step 42). If the captured images are not more than two frames in number (“NO” at step 42), then a shooting viewpoint cannot be decided using images of three or more frames in the manner set forth above. Accordingly, imaging is performed from a different viewpoint decided by the user.
If the captured images are more than two frames in number (“YES” at step 42), then image data representing the captured images is read from the memory card and score calculation processing is executed for every image in the manner described above (step 43).
In a case where the viewpoints of the two frames of images having the higher-order scores for the occlusion regions are adjacent (“YES” at step 44), as illustrated in FIGS. 14a to 14c, the viewpoint between these two viewpoints is reported to the user as a candidate for the shooting viewpoint.
When the user ascertains the candidate for the shooting viewpoint, the user shoots the subject upon referring to this candidate (step 47). An image thought to be important is thus obtained. Highly precise shooting assist becomes possible.
In a case where the viewpoints of the two frames of images having the higher-order scores are not adjacent (“NO” at step 44), as in the embodiment shown in FIGS. 16a to 16c, a viewpoint in the vicinity of the viewpoint of the image containing the occlusion region having the highest score is reported to the user as a candidate for the shooting viewpoint.
When the user ascertains the candidate for a shooting viewpoint, the user shoots the subject upon referring to the candidate (step 47). An image thought to be important is thus obtained in this embodiment as well. Highly precise shooting assist becomes possible.
The program for controlling the above-described operation has been stored in a memory card 132; the program is read by a media control unit 131 and installed in the stereoscopic imaging digital camera. Naturally, the operation program may be pre-installed in the stereoscopic imaging digital camera or may be applied to the camera via a network.
The overall operation of the stereoscopic imaging digital camera is controlled by a main CPU 81. The stereoscopic imaging digital camera is provided with an operating unit 88 that includes various buttons such as a mode setting button for setting a shooting assist mode, a stereoscopic imaging mode, a two-dimensional imaging mode, a stereoscopic playback mode and a two-dimensional playback mode, etc., and a shutter-release button of two-stage stroke type. An operation signal that is output from the operating unit 88 is input to the main CPU 81.
The stereoscopic imaging digital camera includes a left-eye image capture device 90 and a right-eye image capture device 110. When the stereoscopic imaging mode is set, a subject is imaged continuously (periodically) by the left-eye image capture device 90 and right-eye image capture device 110. When the shooting assist mode or the two-dimensional imaging mode is set, a subject is imaged only by the left-eye image capture device 90 (or the right-eye image capture device 110).
The left-eye image capture device 90 images the subject, thereby outputting image data representing a left-eye image that constitutes a stereoscopic image. The left-eye image capture device 90 includes a first CCD 94. A first zoom lens 91, a first focusing lens 92 and a diaphragm 93 are provided in front of the first CCD 94. The first zoom lens 91, first focusing lens 92 and diaphragm 93 are driven by a zoom lens control unit 95, a focusing lens control unit 96 and a diaphragm control unit 97, respectively. When the stereoscopic imaging mode is set and the left-eye image is formed on the photoreceptor surface of the first CCD 94, a left-eye video signal representing the left-eye image is output from the first CCD 94 based upon clock pulses supplied from a timing generator 98.
The left-eye video signal that has been output from the first CCD 94 is subjected to prescribed analog signal processing in an analog signal processing unit 101 and is converted to digital left-eye image data in an analog/digital converting unit 102. The left-eye image data is input to a digital signal processing unit 104 from an image input controller 103. The left-eye image data is subjected to prescribed digital signal processing in the digital signal processing unit 104. Left-eye image data that has been output from the digital signal processing unit 104 is input to a 3D image generating unit 139.
The right-eye image capture device 110 includes a second CCD 114. A second zoom lens 111, a second focusing lens 112 and a second diaphragm 113, driven by a zoom lens control unit 115, a focusing lens control unit 116 and a diaphragm control unit 117, respectively, are provided in front of the second CCD 114. When the stereoscopic imaging mode is set and the right-eye image is formed on the photoreceptor surface of the second CCD 114, a right-eye video signal representing the right-eye image is output from the second CCD 114 based upon clock pulses supplied from a timing generator 118.
The right-eye video signal that has been output from the second CCD 114 is subjected to prescribed analog signal processing in an analog signal processing unit 121 and is converted to digital right-eye image data in an analog/digital converting unit 122. The right-eye image data is input to a digital signal processing unit 124 from an image input controller 123. The right-eye image data is subjected to prescribed digital signal processing in the digital signal processing unit 124. Right-eye image data that has been output from the digital signal processing unit 124 is input to the 3D image generating unit 139.
Image data representing the stereoscopic image is generated in the 3D image generating unit 139 from the left-eye image and right-eye image and is input to a display control unit 133. A monitor display unit 134 is controlled by the display control unit 133, whereby the stereoscopic image is displayed on the display screen of the monitor display unit 134.
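The patent does not specify how the 3D image generating unit 139 combines the two images. Purely as an illustration, the sketch below composes a side-by-side frame, one common stereoscopic format; the function name and the row-of-lists image representation are assumptions, not part of the disclosure.

```python
# Hypothetical sketch of stereoscopic frame composition; the actual 3D format
# used by the 3D image generating unit 139 is not specified in the patent.

def side_by_side(left, right):
    """Concatenate each row of the left-eye image with the same row of the
    right-eye image, producing a side-by-side stereoscopic frame."""
    return [l_row + r_row for l_row, r_row in zip(left, right)]

# Tiny 2x2 example images (each row is a list of pixel values).
left = [[1, 2], [3, 4]]
right = [[5, 6], [7, 8]]
print(side_by_side(left, right))  # → [[1, 2, 5, 6], [3, 4, 7, 8]]
```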
When the shutter-release button is pressed through the first stage of its stroke, the items of left-eye image data and right-eye image data are input to an AF detecting unit 142 as well. Focus-control amounts of the first focusing lens 92 and second focusing lens 112 are calculated in the AF detecting unit 142. The first focusing lens 92 and second focusing lens 112 are positioned at in-focus positions in accordance with the calculated focus-control amounts.
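The patent does not say how the AF detecting unit 142 computes the focus-control amounts. One conventional possibility is contrast-based autofocus, sketched below under that assumption: a sharpness score is computed for frames captured at candidate lens positions, and the position with the highest score is taken as the in-focus position.

```python
# Hedged sketch of contrast-based autofocus, one way the AF detecting unit 142
# could derive a focus-control amount (the method is an assumption, not stated
# in the patent).

def sharpness(image):
    """Sum of squared horizontal pixel differences as a simple focus measure;
    sharp images have larger local differences than blurred ones."""
    return sum(
        (row[i + 1] - row[i]) ** 2
        for row in image
        for i in range(len(row) - 1)
    )

def best_focus_position(images_by_position):
    """Return the candidate lens position whose frame scores highest."""
    return max(images_by_position, key=lambda pos: sharpness(images_by_position[pos]))

# Synthetic frames at three lens positions; position 2 is the sharpest.
frames = {
    1: [[10, 12, 11, 10]],   # blurred: small differences
    2: [[0, 50, 0, 50]],     # sharp: large differences
    3: [[10, 14, 12, 10]],
}
print(best_focus_position(frames))  # → 2
```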
The left-eye image data is input to an AE/AWB detecting unit 144. Respective amounts of exposure of the left-eye image capture device 90 and right-eye image capture device 110 are calculated in the AE/AWB detecting unit 144 using the data representing the face detected from the left-eye image (which may just as well be the right-eye image). The f-stop value of the first diaphragm 93, the electronic-shutter time of the first CCD 94, the f-stop value of the second diaphragm 113 and the electronic-shutter time of the second CCD 114 are decided in such a manner that the calculated amounts of exposure will be obtained. An amount of white balance adjustment is also calculated in the AE/AWB detecting unit 144 from the data representing the face detected from the entered left-eye image (or right-eye image). Based upon the calculated amount of white balance adjustment, the left-eye video signal is subjected to a white balance adjustment in the analog signal processing unit 101 and the right-eye video signal is subjected to a white balance adjustment in the analog signal processing unit 121.
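How the AE/AWB detecting unit 144 turns the face data into an amount of exposure is not detailed. As a minimal sketch under assumed conventions, the snippet below derives an EV correction from the mean luminance of the face region relative to a target level; the target value 118 and the log2 formula are assumptions for illustration only.

```python
# Hypothetical face-weighted auto-exposure calculation; the target level and
# the EV formula are illustrative assumptions, not taken from the patent.
import math

def exposure_correction_ev(face_pixels, target=118.0):
    """EV correction that would bring the mean face luminance to `target`.
    Positive values mean the exposure should be increased."""
    mean = sum(face_pixels) / len(face_pixels)
    return math.log2(target / mean)

# A face region metering at half the target level needs one stop more exposure.
print(round(exposure_correction_ev([59, 59, 59, 59]), 2))  # → 1.0
```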
When the shutter-release button is pressed through the second stage of its stroke, the image data (left-eye image and right-eye image) representing the stereoscopic image generated in the 3D image generating unit 139 is input to a compression/expansion unit 140. The image data representing the stereoscopic image is compressed in the compression/expansion unit 140. The compressed image data is recorded on the memory card 132 by the media control unit 131. In a case where the compression ratio is selected, as described above, in accordance with the degrees of importance of the left-eye image and right-eye image, the left-eye image data and right-eye image data are stored temporarily in an SDRAM 136, and which of the left- and right-eye images is important is determined as set forth above.
Compression is carried out in the compression/expansion unit 140 upon lowering the compression ratio of whichever of the left- and right-eye images is determined to be important (raising the compression ratio, i.e., the percentage of compression, of the other image), so that the image quality of the important image does not decline. The compressed image data is recorded on the memory card 132.
The stereoscopic imaging digital camera further includes a VRAM 135 for storing various types of data, the SDRAM 136 in which the above-described score tables have been stored, a flash ROM 137 and a ROM 138 for storing various data. The stereoscopic imaging digital camera further includes a battery 83. Power supplied from the battery 83 is applied to a power control unit, which supplies power to each device constituting the stereoscopic imaging digital camera. The stereoscopic imaging digital camera further includes a flash unit 86 controlled by a flash control unit 85.
When the stereoscopic image playback mode is set, the left-eye image data and right-eye image data recorded on the memory card 132 is read and input to the compression/expansion unit 140. The left-eye image data and right-eye image data is expanded in the compression/expansion unit 140. The expanded left-eye image data and right-eye image data is applied to the display control unit 133, whereupon a stereoscopic image is displayed on the display screen of the monitor display unit 134.
If, in a case where the stereoscopic image playback mode has been set, images captured from three or more different viewpoints exist with regard to the same subject, two images from among these three or more images are decided upon as representative images in the manner described above. A stereoscopic image is displayed by applying the two decided images to the monitor display unit 134.
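The decision of two representative images from three or more viewpoint images can be sketched as below, assuming each image already has the occlusion-region score described earlier; the score values shown are illustrative.

```python
# Sketch of choosing two representative images from three or more viewpoint
# images by their occlusion-region scores (higher score = more important).
# The concrete score values here are illustrative assumptions.

def two_representatives(scores):
    """Return the indices of the two highest-scoring images, best first."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:2]

# Three viewpoint images with scores 0.2, 0.9 and 0.5:
print(two_representatives([0.2, 0.9, 0.5]))  # → [1, 2]
```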
When the two-dimensional image playback mode is set, the left-eye image data and right-eye image data (which may just as well be image data representing three or more images captured from different viewpoints) that has been recorded on the memory card 132 is read and is expanded in the compression/expansion unit 140 in a manner similar to that of the stereoscopic image playback mode. Either the left-eye image represented by the expanded left-eye image data or the right-eye image represented by the expanded right-eye image data is decided upon as the representative image in the manner described above. The image data representing the image decided is applied to the monitor display unit 134 by the display control unit 133.
If, in a case where the shooting assist mode has been set, three or more images captured from different viewpoints with regard to the same subject have been stored on the memory card 132, as mentioned above, shooting-viewpoint assist information (an image or message, etc.) is displayed on the display screen of the monitor display unit 134. The subject is then shot from the indicated shooting viewpoint using the left-eye image capture device 90 (or the right-eye image capture device 110).
In the above-described embodiment, a stereoscopic imaging digital camera is used. However, a digital camera for two-dimensional imaging may be used rather than a stereoscopic imaging digital camera.
In a case where a representative image is decided, as described above, left-eye image data, right-eye image data and data identifying a representative image (e.g., a frame number or the like) are correlated and recorded on the memory card 132. For example, in a case where left-eye image data and right-eye image data is stored in the same file, data indicating which of the left-eye image or right-eye image is the representative image would be stored in the header of the file.
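The header scheme described above can be sketched as follows. The binary layout (a one-byte representative flag followed by two length fields) is entirely hypothetical; the patent only requires that data identifying the representative image be correlated with the image data.

```python
# Hypothetical file layout for storing a left-eye/right-eye pair in one file:
# a small header records which image is the representative (0 = left-eye,
# 1 = right-eye) and the lengths of the two payloads. The layout itself is an
# assumption for illustration, not specified in the patent.
import struct

def pack_pair(left_data, right_data, representative):
    """Pack both images into one blob with a representative flag in the header."""
    header = struct.pack("<BII", representative, len(left_data), len(right_data))
    return header + left_data + right_data

def unpack_representative(blob):
    """Read back only the representative flag from the header."""
    rep, _, _ = struct.unpack_from("<BII", blob, 0)
    return rep

blob = pack_pair(b"LEFT", b"RIGHT", representative=0)
print(unpack_representative(blob))  # → 0
```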
Furthermore, in the above-described embodiment, the description is rendered with regard to two images, namely a left-eye image and a right-eye image. However, it goes without saying that a decision regarding a representative image and selection of compression ratio can be carried out in similar fashion with regard to three or more images and not just two images.
| Number | Date | Country | Kind |
|---|---|---|---|
| 2010-147755 | Jun 2010 | JP | national |

| | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/JP2011/060687 | Apr 2011 | US |
| Child | 13726389 | | US |