The present invention relates to an image generation device and method for generating virtual view images corresponding to viewing an object from virtual viewpoints on the basis of view images of the object captured from two viewpoints, and to a printer having such an image generation device.
A technique for viewing a 3-dimensional image by use of a lenticular sheet having a great number of lenticules arranged horizontally is known. Linear images are arranged alternately on a back of the lenticular sheet, the linear images being formed by linearly splitting L and R view images captured from two viewpoints on the left and right. The linear images adjacent to one another are positioned under each one of the lenticules. The left and right eyes view the L and R view images with disparity through the lenticules, so that the 3-dimensional image is observed.
Incidentally, if only two linear images for the L and R viewpoints are recorded on the back of each one of the lenticules, the observed 3-dimensional image appears as an unnatural double image.
Patent Document 1 discloses a printing system in which virtual view images, corresponding to viewing an object from plural virtually set virtual viewpoints different from the L and R viewpoints, are created by electronic interpolation on the basis of the L and R view images obtained by a multi-view camera, and the linear images are recorded on the lenticular sheet according to the original L and R view images and the newly created virtual view images. Thus, n (three or more) linear images can be arranged on the back of each of the lenticules, and stereoscopic appearance of the 3-dimensional image can be enhanced.
However, if a portion of one of the taking lenses is blocked by a finger of a user during photography with the multi-view camera, one of the L and R view images cannot be created properly. If the linear images are recorded on the lenticular sheet according to the virtual view images created from such L and R view images, the 3-dimensional image is viewed in an unnatural form. To prevent this, it is conceivable to dispose a sensor near the taking lenses for detecting a touch of a finger, and to indicate a warning message if the sensor detects the touch of the finger. However, providing the sensor in every multi-view camera is not practical because of the high manufacturing cost.
The present invention has been made to solve the foregoing problems. An object of the present invention is to provide an image generation device, an image generation method, and a printer in which acceptable virtual view images can be obtained even if a failure has occurred in one of the object images of the L and R viewpoints.
In order to achieve the above object, an image generation device of the present invention for generating a virtual view image according to first and second view images captured with disparity by imaging an object from different viewpoints is provided, the virtual view image corresponding to viewing the object from a predetermined number of virtual viewpoints different from the viewpoints, the image generation device being characterized in including a detection unit for detecting whether there is a failure in the first and second view images, a disparity map generator for operating if one of the first and second view images is an abnormal image with the failure according to a result of detection of the detection unit, for extracting a corresponding point in the abnormal image corresponding to each pixel in a normal image included in the first and second view images, and for generating a disparity map expressing a depth distribution of the object according to a result of extraction, and an image generating unit for generating the virtual view image according to the disparity map and the normal image.
Preferably, an image output unit for outputting the normal image and the virtual view image to a predetermined receiving device is provided. Preferably, a viewpoint setting unit for setting a larger number of the virtual viewpoints than the predetermined number between the viewpoints of the abnormal image and the normal image is provided. The image generating unit selects the virtual viewpoints of the predetermined number among the virtual viewpoints set by the viewpoint setting unit in a sequence according to nearness to the viewpoint of the normal image.
Preferably, the virtual viewpoints are disposed equiangularly from each other about the object. Preferably, an area detector for detecting an area of a region where the failure has occurred in the abnormal image is provided, and the viewpoint setting unit makes a set number of the virtual viewpoints higher according to an increase of the area.
Preferably, an image acquisition unit for acquiring the first and second view images from an imaging apparatus which includes plural imaging units for imaging the object from the different viewpoints is provided. Preferably, the failure includes at least any one of flare and an image of a blocking portion blocking a taking lens of the imaging units at least partially.
Also, a printer of the present invention is characterized in including an image generation device as defined in any one of claims 1-7, and a recording unit for, if either one of the first and second view images is the abnormal image, recording a stereoscopically viewable image to a recording medium according to the normal image and the virtual view image. Preferably, a warning device for displaying a warning if the failure has occurred with both of the first and second view images is provided.
Also, an image generation method of generating a virtual view image according to first and second view images captured with disparity by imaging an object from different viewpoints is provided, the virtual view image corresponding to viewing the object from a predetermined number of virtual viewpoints different from the viewpoints, the image generation method being characterized in including a detection step of detecting whether there is a failure in the first and second view images, a disparity map generating step of, if one of the first and second view images is an abnormal image with the failure according to a result of detection of the detection step, extracting a corresponding point in the abnormal image corresponding to each pixel in a normal image included in the first and second view images, and generating a disparity map expressing a depth distribution of the object according to a result of extraction, and an image generating step of generating the virtual view image according to the disparity map and the normal image.
In the image generation device and method, and the printer, if one of the first and second view images is an abnormal image with the failure according to a result of detection of the detection unit, a corresponding point is extracted in the abnormal image corresponding to each pixel in the normal image. A disparity map is generated according to the result of the extraction. The virtual view image is generated according to the disparity map and the normal image. Consequently, good virtual view images can be obtained even if either one of the first and second view images is abnormal.
As shown in
The printer 12 operates according to L and R view image data I(L) and I(R) recorded in the memory card 16, and prints plural view image data to a back surface of a lenticular sheet 17 (hereinafter referred to as sheet 17), as shown in
As shown in
The image areas 19 are divided in an arrangement direction of the lenticules 18 according to the number of view images. For image recording of six viewpoints, for example, the image areas 19 are divided into six areas, namely first to sixth small areas 19a-19f, in which linear images formed by linearly splitting the images of the six viewpoints are respectively recorded. The small areas 19a-19f correspond to the images of the six viewpoints in a one-to-one correspondence.
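The interleaved arrangement described above can be sketched as follows. This is an illustrative sketch only; the function name and the column-major ordering of the small areas are assumptions, not taken from the description.

```python
# Hypothetical sketch: interleaving columns of n = 6 view images so that one
# column of each view lies side by side under every lenticule (small areas
# 19a-19f). Each "image" is a list of columns, left to right.
def interleave_views(views):
    """Return the printed column sequence: under lenticule j, the columns
    views[0][j] .. views[n-1][j] appear in order."""
    n = len(views)
    width = len(views[0])
    printed = []
    for j in range(width):      # one lenticule per source column index
        for v in range(n):      # first to sixth small area under lenticule j
            printed.append(views[v][j])
    return printed
```

In an actual lenticular print the within-lenticule order may be reversed to match the optics; only the one-column-per-view grouping is taken from the description.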
Again in
To the CPU 21 are connected the input device unit 22, the memory 23, a sheet transport mechanism 26, an image recording unit 27, an image input interface 28 (I/F), an image processing device 29 (image generation device), a monitor 30 and the like by means of a bus 25.
The input device unit 22 is used for turning on/off of a power source of the printer 12, starting image recording and the like. The sheet transport mechanism 26 transports the sheet 17 in a sub scan direction in parallel with the arrangement direction of the lenticules 18.
The image recording unit 27 records the linear images extending in a main scan direction to a back surface of the sheet 17. The image recording unit 27 records the linear images line by line at each time of transporting the sheet 17 in the sub scan direction by one line. It is therefore possible to record the linear images arranged in the sub scan direction.
In the image input interface 28, the memory card 16 is set. The image input interface 28 reads the image file 15 from the memory card 16 and sends this to the image processing device 29.
The image processing device 29 generates virtual view image data for a plurality of virtual viewpoints different from the L and R viewpoints according to the L and R view image data I(L) and I(R) in the image file 15. Also, the image processing device 29, upon generation of the virtual view image data, supplies the image recording unit 27 with disparity image data of n viewpoints, which include at least one of the L and R view image data I(L) and I(R) and the virtual view image data. Note that the disparity image data are a group of discrete view image data obtained by viewing an object from different viewpoints.
The monitor 30 displays a selection screen for selecting a menu for image recording, a setting screen for setting various parameters, and a warning message upon occurrence of difficulties.
As shown in
The image reader 31 reads and memorizes the image file 15 from the memory card 16 through the image input interface 28 according to designation in the input device unit 22.
The imaging error detection circuit 32 analyzes the image file 15 in the image reader 31, and detects whether an imaging error has occurred with the L and R view image data I(L) and I(R). Examples of the imaging error include finger presence as a physical failure, and flare as an optical failure. The "finger presence" means interference of a finger (obstacle) of a user with at least one portion of the taking lenses 14a, causing an image of the finger to appear in the object image. (See
Occurrence of the finger presence is detectable, for example, by previously storing a plurality of image patterns captured upon occurrence of finger presence and by checking similarity of those image patterns to the L and R view image data I(L) and I(R). Occurrence of the flare is detectable, for example, by comparing the L and R view image data I(L) and I(R), and by checking whether a difference in brightness between portions of those data is higher than a predetermined threshold.
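The brightness-comparison check for flare described above might be sketched as follows. The threshold value and the function name are assumptions for illustration; the description states only that a brightness difference above a predetermined threshold indicates flare.

```python
# Hedged sketch of the flare check: compare the brightness of corresponding
# pixels of the L and R images; a large difference suggests flare in one image.
# The threshold of 50 levels is an invented example value.
def detect_flare(img_l, img_r, threshold=50):
    """img_l, img_r: 2-D lists of pixel brightness values of equal size.
    Returns True if any pixel-wise brightness difference exceeds threshold."""
    for row_l, row_r in zip(img_l, img_r):
        for p_l, p_r in zip(row_l, row_r):
            if abs(p_l - p_r) > threshold:
                return True
    return False
```

A practical detector would compare block averages rather than single pixels to avoid false positives from noise; the pixel-wise form is kept here for brevity.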
The disparity map generation circuit 33 generates a disparity map expressing distribution of a depth of an object according to the L and R view image data I(L) and I(R) in the image reader 31, and outputs the disparity map to the image generation circuit 34. The disparity map generation circuit 33 generates at least one of a disparity map 38L with reference to the L view image data I(L) and a disparity map 38R with reference to the R view image data I(R). Description is made now for an example of method of generating the disparity map 38L.
As shown in
Then a position shift of the corresponding point 40 in the R view image data I(R) relative to each of the pixels 39 in the L view image data I(L) in the horizontal direction is obtained. Thus, disparity for each of the pixels 39 of the L view image data I(L) is obtained, so as to constitute the disparity map 38L shown in
On the other hand, the disparity map 38R shown in
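The corresponding-point search used to build the disparity map 38L could be sketched, under assumptions, as a per-pixel horizontal search in the same row. A practical matcher would compare windows of pixels (block matching) rather than single pixels; this minimal form is for illustration only.

```python
# Hedged sketch of disparity map generation with reference to the L image:
# for each pixel of the L image, search the same row of the R image for the
# most similar pixel within max_disp positions to the left, and record the
# horizontal shift as the disparity (the map 38L).
def disparity_map_l(img_l, img_r, max_disp=8):
    h = len(img_l)
    w = len(img_l[0])
    disp = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best_d, best_cost = 0, float("inf")
            for d in range(0, max_disp + 1):
                if x - d < 0:
                    break
                cost = abs(img_l[y][x] - img_r[y][x - d])
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y][x] = best_d
    return disp
```

The map 38R would be built symmetrically, searching to the right with reference to the R image.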
Again in
The viewpoint setting unit 43 sets plural virtual viewpoints between the L and R viewpoints. The viewpoint setting unit 43 selectively carries out either of normal viewpoint setting and special viewpoint setting which will be hereinafter described.
In
In
Again in
In the normal image generation, L normal image generation and R normal image generation are carried out successively. In the L normal image generation, virtual view image data is generated from a virtual viewpoint disposed on a side of an L viewpoint V(L) from the center defined between the L and R viewpoints V(L) and V(R). (See
In the R normal image generation, on the other hand, virtual view image data is generated from a virtual viewpoint disposed on a side of an R viewpoint V(R) from the center defined between the L and R viewpoints V(L) and V(R). Specifically, virtual view image data (hereinafter referred to as R virtual view image data) is generated according to the R view image data I(R) and the disparity map 38R. Consequently, (n−2) L and R virtual view image data in all on the right and left sides are generated.
In the special image generation, one of the L and R special image generations is selectively carried out. In the L special image generation, (n−1) virtual viewpoints are selected in an order of nearness to the L viewpoint V(L), so as to generate L virtual view image data corresponding to the virtual viewpoints. (See
The image output unit 35, when the virtual view image data is input by the image generating unit 44, outputs disparity image data of n viewpoints to the image recording unit 27. In case of no input of virtual view image data, the image output unit 35 outputs L and R view image data I(L) and I(R) in the image reader 31 to the image recording unit 27. The disparity image data of n viewpoints is any one of normal disparity image data, L disparity image data and R disparity image data described below, according to the number and type of the virtual view image data input from the image generating unit 44.
The normal disparity image data are constituted by (n−2) data of L and R virtual view image data from the image generating unit 44, and L and R view image data I(L) and I(R) from the image reader 31. The L disparity image data is constituted by (n−1) L virtual view image data from the image generating unit 44, and L view image data I(L) from the image reader 31. The R disparity image data is constituted by (n−1) R virtual view image data from the image generating unit 44, and R view image data I(R) from the image reader 31.
The CPU 21 selectively carries out an output process for the normal disparity image data, an output process for the L disparity image data, an output process for the R disparity image data, and an output process for the L and R disparity image data, according to the result of the detection of the imaging error detection circuit 32.
The data output process for the normal disparity image data is carried out when both of the L and R view image data I(L) and I(R) have been captured properly. The data output process for the L disparity image data is carried out when an imaging error has occurred with the R view image data I(R). The data output process for the R disparity image data is carried out when an imaging error has occurred with the L view image data I(L).
The data output process for the L and R view image data is carried out when an imaging error has occurred with both of the L and R view image data I(L) and I(R). The CPU 21 causes the monitor 30 to display a warning message and the like indicating that the imaging error has occurred with both of the L and R view image data I(L) and I(R).
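The selection among the four output processes can be summarized in a minimal sketch; the return-value names are assumed labels for the processes described above.

```python
# Sketch of the selection logic of the CPU 21: choose the data output process
# from which of the two view images the imaging error has occurred in.
def select_output_process(error_l, error_r):
    if not error_l and not error_r:
        return "normal_disparity"      # both images normal
    if error_r and not error_l:
        return "l_disparity"           # R image abnormal: use L image only
    if error_l and not error_r:
        return "r_disparity"           # L image abnormal: use R image only
    return "warn_and_output_lr"        # both abnormal: warn, output L/R as-is
```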
Image recording of the 3-dimensional printing system 10 constructed above is described by use of the flow chart of
At first, the memory card 16 removed from the multi-view camera 11 is set on the image input interface 28. After the setting, the input device unit 22 selects the image file 15 for start of the recording. The CPU 21 sends a command for reading the image file 15 to the image reader 31. Thus, the image reader 31 reads the designated image file 15 from the memory card 16 through the image input interface 28, and stores this in a temporary manner.
Then the CPU 21 sends a command of detecting an imaging error to the imaging error detection circuit 32. The imaging error detection circuit 32 in response to the command analyzes the L and R view image data I(L) and I(R) in the image reader 31, checks occurrence of the imaging error of the L and R view image data I(L) and I(R), and sends a result of the detection to the CPU 21.
The CPU 21 carries out the data output process for the normal disparity image data in case of no occurrence of an imaging error with any of the L and R view image data I(L) and I(R). Also, the CPU 21 carries out the data output process for the L disparity image data in case of occurrence of an imaging error with the R view image data I(R), and carries out the data output process for the R disparity image data in case of occurrence of an imaging error with the L view image data I(L). The CPU 21 performs the data output process for the L and R view image data in case of occurrence of an imaging error with both of the L and R view image data I(L) and I(R).
[Data Output Process for Normal Disparity Image Data]
As shown in
The CPU 21 sends a command for the normal viewpoint setting to the viewpoint setting unit 43. The viewpoint setting unit 43 responsively starts the normal viewpoint setting shown in
Then the CPU 21 sends a command for carrying out the L normal image generation to the image generating unit 44. Thus, the image generating unit 44 generates L virtual view image data IL(1) and IL(2) corresponding to the virtual viewpoints V(1) and V(2) according to the disparity map 38L and the L view image data I(L). Note that a method of generating virtual view image data according to the disparity map and the L view image data is a well-known technique, which is not described further herein. (For example, see JP-A 2001-346226 and JP-A 2003-346188.)
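One well-known approach of this kind is forward warping, in which each pixel of the L view image is shifted horizontally by a fraction of its disparity. The following is a hedged sketch under that assumption, not the specific method of the cited documents; the sign convention and hole marking are illustrative choices.

```python
# Sketch of virtual view synthesis from the L image and its disparity map:
# shift each pixel by alpha (0..1) of its disparity. alpha=0 reproduces the
# L viewpoint; increasing alpha moves the virtual viewpoint toward R.
def render_virtual_view(img_l, disp_l, alpha):
    """Positions left uncovered by the warp (occlusion holes) are None;
    a real renderer would fill them from neighboring pixels."""
    h = len(img_l)
    w = len(img_l[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nx = x - int(round(alpha * disp_l[y][x]))
            if 0 <= nx < w:
                out[y][nx] = img_l[y][x]
    return out
```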
After the L virtual view image data IL(1) and IL(2) are generated, the CPU 21 sends a command for generating the disparity map 38R to the disparity map generation circuit 33. The disparity map generation circuit 33 in response to the command extracts the corresponding point 40 in the L view image data I(L) corresponding to each of the pixels 39 in the R view image data I(R), and generates the disparity map 38R according to the result of the extraction.
Then the CPU 21 sends a command for setting a normal viewpoint to the viewpoint setting unit 43, and sends a command of carrying out the R normal image generation to the image generating unit 44. Thus, R virtual view image data IR(3) and IR(4) are created in association with the virtual viewpoints V(3) and V(4).
As shown in
The CPU 21 sends a command for outputting normal disparity image data to the image output unit 35. The image output unit 35 in response to the command reads the L and R view image data I(L) and I(R) from the image reader 31. Then the image output unit 35 outputs normal disparity image data of the six viewpoints to the image recording unit 27, the normal disparity image data including the virtual view image data IL(1), IL(2), IR(3) and IR(4) and the L and R view image data I(L) and I(R). Finally, the data output process for the normal disparity image data is completed.
[Data Output Process for L Disparity Image Data]
As shown in
As shown in
As shown in
Upon the search of corresponding points with reference to the R view image data I(R) after generation of the finger image 46, no disparity value is found, because no corresponding point is present in association with respective pixels of the finger image 46 in the L view image data I(L). As shown in
For the reasons described heretofore, the disparity map 38L is generated in case of occurrence of an imaging error in the R view image data I(R). The disparity map 38L is outputted to the image generation circuit 34. Then the CPU 21 sends a command for setting special viewpoints to the viewpoint setting unit 43.
As shown in
The image generating unit 44 upon receiving the command from the CPU 21 generates L virtual view image data IL(1) to IL(5) corresponding to the virtual viewpoints V(1) to V(5) according to the disparity map 38L and the L view image data I(L). The finger image 46 is not included in the virtual view image data, because the virtual view image data is generated according to the normal L view image data I(L).
The abnormal area 47 as shown in
The image generating unit 44 generates the L virtual view image data IL(1) to IL(5) corresponding to the five virtual viewpoints V(1) to V(5) according to nearness to the L viewpoint V(L). Probability of occurrence of a failure with those image data will be low. Should such a failure occur, the degree of the error will be small. The L virtual view image data IL(1) to IL(5) are input to the image output unit 35.
The CPU 21 sends a command for outputting L disparity image data to the image output unit 35. Thus, the image output unit 35 outputs the L disparity image data of the six viewpoints to the image recording unit 27, the L disparity image data including the L virtual view image data IL(1) to IL(5) and the L view image data I(L) read from the image reader 31. Then the data output process for the L disparity image data is completed.
[Data Output Process for R Disparity Image Data]
As shown in
[Data Output Process for L and R View Image Data]
As shown in
When the input device unit 22 is operated for continuing the image recording, the CPU 21 sends a command for outputting the L and R view image data to the image output unit 35. The image output unit 35 reads the L and R view image data I(L) and I(R) from the image reader 31 and sends those to the image recording unit 27. If the input device unit 22 is operated for stopping the image recording, the CPU 21 stops the image recording.
Again in
On the other hand, when only the L and R view image data I(L) and I(R) are input to the image recording unit 27, the CPU 21 sends a command of image recording of two viewpoints to the image recording unit 27. In response to the command, the image recording unit 27 records linear images to the back of the sheet 17, the linear images being formed by respectively splitting the L and R view image data I(L) and I(R) linearly. Also, the process described above is carried out repeatedly for image recording of the remaining image file 15 in the memory card 16.
In the first embodiment described above, the description has been made for the structure in which disparity image data of the six viewpoints is recorded to the sheet 17. It is possible to use the present invention for a structure in which disparity image data of three or more viewpoints is recorded to the sheet 17. The L and R virtual view image data generated by the data output processes for the respective disparity image data, for recording disparity image data of n viewpoints to the sheet 17, are expressed by expressions 1-3 as follows:
1. Data Output Process for the Normal Disparity Image Data
(1) Division number of viewpoints: K=n−1
(2) Set number of virtual viewpoints: n
(3) L virtual view image data: IL(1), IL(2), . . . , IL((K+1)/2−1)
(4) R virtual view image data: IR((K+1)/2), IR((K+1)/2+1), . . . , IR(K−1)
2. Data Output Process for the L Disparity Image Data
(1) Division number of viewpoints: K=2n−1
(2) Set number of virtual viewpoints: n
(3) L virtual view image data: IL(1), IL(2), . . . , IL((K+1)/2−1)
3. Data Output Process for the R Disparity Image Data
(1) Division number of viewpoints: K=2n−1
(2) Set number of virtual viewpoints: n
(3) R virtual view image data: IR((K+1)/2), IR((K+1)/2+1), . . . , IR(K−1)
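Expressions 1-3 above can be checked with a small sketch that computes the virtual-view index lists for a given n; the function name is assumed.

```python
# Sketch computing the index lists of expressions 1-3 for n recorded
# viewpoints. "normal" uses K = n - 1; "l" and "r" use K = 2n - 1.
def virtual_view_indices(n, process):
    if process == "normal":
        k = n - 1
        left = list(range(1, (k + 1) // 2))        # IL(1) .. IL((K+1)/2 - 1)
        right = list(range((k + 1) // 2, k))       # IR((K+1)/2) .. IR(K-1)
        return left, right
    k = 2 * n - 1
    if process == "l":
        return list(range(1, (k + 1) // 2))        # IL(1) .. IL((K+1)/2 - 1)
    return list(range((k + 1) // 2, k))            # IR((K+1)/2) .. IR(K-1)
```

For n = 6 this reproduces the first embodiment: IL(1), IL(2) and IR(3), IR(4) in the normal process, and five one-sided virtual views in the L or R process.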
A printer 52 of a second embodiment of the invention is described by referring to
The printer 52 is constructed in a basically equal manner to the printer 12 of the first embodiment. However, the imaging error detection circuit 32 has an area detector 53. The CPU 21 operates as a viewpoint setting control unit 54 for virtual viewpoints. A number setting table 55 for the set number is stored in the memory 23.
The number setting table 55 stores an area S of the imaging error region and the set number of the virtual viewpoints in association with one another. In the number setting table 55, the association is made so that the set number of the virtual viewpoints increases with each predetermined increase in the area S.
The area detector 53 operates when the imaging error detection circuit 32 detects occurrence of an imaging error, acquires an area S of the imaging error region, and outputs a result of the acquisition to the CPU 21. The area S is obtained, for example, by designating the imaging error region in image data and by counting a number of pixels in the region. Also, it is possible to designate the imaging error region by various matching methods for use in comparison of the image data captured normally to the image data with occurrence of the imaging error.
The viewpoint setting control unit 54 operates at the time of the special viewpoint setting and determines the set number of the virtual viewpoints. The viewpoint setting control unit 54 determines the set number of the virtual viewpoints by referring to the number setting table 55 of the memory 23 and according to a value of the area S input by the area detector 53, and sends a result of the determination to the viewpoint setting unit 43. Therefore, the set number of the virtual viewpoints in the special viewpoint setting can be increased or decreased according to the area S of the imaging error region.
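The lookup in the number setting table 55 might be sketched as follows. The threshold areas and set numbers in the table are invented for illustration; only the monotone relation between the area S and the set number follows the description.

```python
# Hedged sketch of number setting table 55: pairs of (minimum area S in
# pixels, set number of virtual viewpoints). Values are placeholders.
NUMBER_SETTING_TABLE = [(0, 5), (1000, 7), (4000, 9)]

def set_number_of_viewpoints(area_s):
    """Return the set number for the largest threshold not exceeding area_s,
    so that a larger imaging error region yields more virtual viewpoints."""
    chosen = NUMBER_SETTING_TABLE[0][1]
    for min_area, count in NUMBER_SETTING_TABLE:
        if area_s >= min_area:
            chosen = count
    return chosen
```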
If the area S of the imaging error region is large, it is possible to increase the set number of virtual viewpoints. Positions of the virtual viewpoints of respectively the virtual view image data can be set nearer to the L and R viewpoints where no imaging error has occurred. In
A 3-dimensional printing system 58 of a third embodiment of the present invention is described now by use of
The multi-view camera 59 is basically the same as the multi-view camera 11 of the first embodiment. Imaging modes of the multi-view camera 59 are a portrait imaging mode, a landscape imaging mode and a normal imaging mode. The portrait imaging mode is a mode for imaging in an imaging condition suitable for portrait imaging, for example, by focusing on a near field. The landscape imaging mode is a mode for imaging in an imaging condition suitable for landscape imaging, for example, by focusing on a far field. The normal imaging mode is a mode for widely covering imaging conditions suitable for both portrait imaging and landscape imaging. The multi-view camera 59 assigns the image file 15 with auxiliary information 62 expressing the mode setting of the imaging modes at the time of recording the image file 15 to the memory card 16.
The printer 60 is constructed in a basically equal manner to the printer 12 of the first embodiment described above. However, an image processing device 64 of the printer 60 has a disparity map storage medium 65, a disparity map output unit 66, and an image generation circuit 67 for a virtual view image.
The disparity map storage medium 65 stores a disparity map 71 for normal imaging, a disparity map 72 for portrait imaging, and a disparity map 73 for landscape imaging.
As shown in
The area A(0) is substantially trapezoidal in shape, and is set at the center of the map. This is because a viewer is most likely to gaze at the principal object H, and his or her eye fatigue may increase in case of occurrence of disparity at the center of the map. The other areas, including the area A(−10), area A(+10) and area A(+20), are disposed in a lower portion, an intermediate portion and an upper portion of the map, away from the center of the map.
As shown in
As shown in
Again in
The object scene detector 75 refers to the auxiliary information 62 of the image file 15, checks the mode setting of the imaging mode upon obtaining the image file 15, and judges to which of the portrait imaging, landscape imaging and normal imaging the object scene of the image file 15 belongs.
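The selection of the stored disparity map from the category judged by the object scene detector 75 can be sketched minimally; the dictionary values are placeholder strings standing in for the stored disparity maps 71-73, and the fallback to the normal-imaging map is an assumption.

```python
# Sketch of the disparity map output unit 66: pick a stored disparity map by
# the object-scene category detected from auxiliary information 62.
STORED_MAPS = {
    "normal": "disparity map 71",
    "portrait": "disparity map 72",
    "landscape": "disparity map 73",
}

def select_optimum_map(imaging_mode):
    """imaging_mode: 'portrait', 'landscape' or 'normal', from the auxiliary
    information; unknown modes fall back to the normal-imaging map."""
    return STORED_MAPS.get(imaging_mode, STORED_MAPS["normal"])
```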
The image generation circuit 67 generates virtual view image data in a basically similar manner to the image generation circuit 34 of the first embodiment. However, if an imaging error has occurred with either one of the L and R view image data I(L) and I(R), a viewpoint setting unit 77 for virtual viewpoints carries out a special viewpoint setting (hereinafter referred to as special viewpoint setting X) different from that of the first embodiment, and an image generation unit 78 for a virtual view image carries out a special image generation (hereinafter referred to as special image generation X) different from that of the first embodiment.
In the special viewpoint setting X, (n−1)=5 virtual viewpoints V(1) to V(5) are set in relation to n=6 viewpoints. (See
In the special image generation X, the L and R special image generations X described below are selectively carried out according to any one of the L and R view image data I(L) and I(R) in which an imaging error has occurred.
The L special image generation X generates L virtual view image data IL(1) to IL(5) corresponding to respectively the virtual viewpoints V(1) to V(5) by use of the L view image data I(L) and the optimum disparity map. The R special image generation X generates R virtual view image data IR(1) to IR(5) corresponding to respectively the virtual viewpoints V(1) to V(5) by use of the R view image data I(R) and the optimum disparity map.
The image recording in the 3-dimensional printing system 58 constructed above is described now by referring to a flow chart in
In case of occurrence of an imaging error in either one of the L and R view image data I(L) and I(R), the CPU 21 sends a command to the disparity map output unit 66 for outputting an optimum disparity map. The disparity map output unit 66 in response to this command drives the object scene detector 75. The object scene detector 75 detects the mode setting of the imaging mode recorded in the auxiliary information 62 of the image file 15 in the image reader 31. Thus, it is judged to which of the portrait imaging, landscape imaging and normal imaging the object scene belongs.
Then the disparity map output unit 66 selects an optimum disparity map from the disparity map storage medium 65 to correspond to a result of detection in the object scene detector 75, and sends the optimum disparity map to the image generation circuit 67. In case of occurrence of an imaging error in the R view image data I(R), the CPU 21 carries out the data output process for the L disparity image data.
[Data Output Process for L Disparity Image Data]
As shown in
As shown in
The image generation unit 78, upon receiving a command from the CPU 21, generates L virtual view image data IL(1) to IL(5) corresponding to the virtual viewpoints V(1) to V(5) according to the optimum disparity map and the L view image data I(L). Unlike the first embodiment, it is unnecessary to generate the disparity map 38L, so that the load on the image processing device 64 can be reduced and the processing time can be made shorter than in the first embodiment. Also, since a previously stored disparity map is used, virtual view image data of a reasonably good quality can be obtained even when the area of the region of the imaging error having occurred in one of the L and R view image data I(L) and I(R) is large.
The L virtual view image data IL(1) to IL(5) are input to the image output unit 35. Then the CPU 21 sends a command for outputting L disparity image data to the image output unit 35. Thus, the image output unit 35 outputs the L disparity image data of the six viewpoints to the image recording unit 27, the L disparity image data including the L virtual view image data IL(1) to IL(5) and the L view image data I(L). Then the data output process for the L disparity image data is completed.
[Data Output Process for R Disparity Image Data]
As shown in
The steps after outputting the L and R disparity image data are the same as in the first embodiment, and are not described further. Also in the third embodiment, the 3-dimensional image can be viewed acceptably, because the virtual view image data are generated from image data in which no imaging error has occurred.
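The subsequent recording step is the interleaving of the viewpoint images into linear images behind the lenticules, as outlined in the background. A hedged sketch under simple assumptions: images are equal-size 2-D lists, and column x of view v is written to output column x * n + v, so that one vertical strip from each of the n viewpoints lies behind each lenticule. The left-to-right strip ordering is an assumption for illustration.

```python
def interleave_for_lenticular(views):
    """Interleave n equal-size view images column by column."""
    n = len(views)
    h, w = len(views[0]), len(views[0][0])
    out = [[0] * (w * n) for _ in range(h)]
    for v, img in enumerate(views):
        for y in range(h):
            for x in range(w):
                # Strip v of lenticule x goes to output column x*n + v.
                out[y][x * n + v] = img[y][x]
    return out
```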
In the third embodiment described above, the description has been given of the structure in which disparity image data of six viewpoints is recorded to the sheet 17. However, the present invention can also be used for a structure in which disparity image data of three or more viewpoints is recorded to the sheet 17.
In the third embodiment described above, the disparity maps 71-73 for normal, portrait and landscape imaging are examples of the disparity maps stored in the disparity map storage medium 65; disparity maps corresponding to various other object scenes can be stored. Also, in the third embodiment, the object scene is detected according to the auxiliary information 62 of the image file 15. However, it is possible, for example, to use well-known face detection processing and detect the object scene according to the result of detecting the presence and size of a face in the L and R view image data I(L) and I(R).
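The face-based alternative could be sketched as below. The face detector itself is outside the sketch; it is assumed to return bounding boxes (x, y, w, h), and the area-ratio threshold deciding between portrait and landscape is an illustrative assumption.

```python
def classify_scene(faces, image_width, image_height, portrait_ratio=0.05):
    """Classify as portrait if a sufficiently large face is present,
    else as landscape. `faces` is a list of (x, y, w, h) boxes as
    returned by some face detector (not implemented here)."""
    image_area = image_width * image_height
    for (_, _, w, h) in faces:
        if w * h / image_area >= portrait_ratio:
            return "portrait"
    return "landscape"
```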
In the third embodiment described above, five virtual viewpoints are set in the data output process for each of the L and R disparity image data of six viewpoints. However, ten virtual viewpoints can be set to generate five virtual view image data in a manner similar to the first embodiment. In that case, the virtual view image data can be generated in the same manner as in the first embodiment, except that the optimum disparity map is used instead of the disparity maps 38L and 38R.
A 3-dimensional printing system 80 of the fourth embodiment of the present invention is described by referring to
The multi-view camera 81 includes the pair of imaging units 14L and 14R. In addition to the taking lenses 14a, the imaging units 14L and 14R each include an image sensor (not shown) and the like.
A CPU 85 operates according to control signals from an input device unit 86, successively runs various programs and the like read from a memory 87, and controls the various elements of the multi-view camera 81 as a whole. The input device unit 86, the memory 87, a signal processing unit 89, a display driver 90, a monitor 91, an image processing device 92, a recording control unit 93 and the like are connected to the CPU 85 via a bus 88.
The input device unit 86 is constituted by, for example, a power switch, a mode changer switch for switching the operation modes of the multi-view camera 81 (for example, an imaging mode and a playback mode), a shutter button and the like.
An AFE (analog front end) 95 subjects the analog image signals output by the imaging units 14L and 14R to noise reduction, amplification and digitization, to generate L and R image signals. The L and R image signals are output to the signal processing unit 89.
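The three AFE stages can be illustrated with a minimal sketch, modeling the analog signal as a list of floats. The 3-tap averaging filter, the gain value and the 12-bit ADC range are assumptions for illustration; a real AFE implements these stages in analog and mixed-signal circuitry.

```python
def afe_process(signal, gain=2.0, bits=12):
    """Noise-reduce, amplify, and digitize one line of samples."""
    full_scale = (1 << bits) - 1
    out = []
    for i, _ in enumerate(signal):
        # Simple noise reduction: 3-tap moving average.
        window = signal[max(0, i - 1): i + 2]
        v = sum(window) / len(window)
        # Amplify, then quantize and clip to the ADC's integer range.
        out.append(min(full_scale, max(0, round(v * gain))))
    return out
```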
The signal processing unit 89 subjects the L and R image signals input from the AFE 95 to image processing of various kinds, such as gradation conversion, white balance correction, gamma correction and YC conversion, and creates L and R view image data I(L) and I(R). The signal processing unit 89 causes the memory 87 to store the L and R view image data I(L) and I(R).
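As one concrete example of this chain, the YC conversion step can be written out per pixel using the standard ITU-R BT.601 coefficients (luminance Y plus chrominance Cb, Cr). The other steps (gradation conversion, white balance, gamma) are omitted; this is a sketch of one stage, not the signal processing unit's full implementation.

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range RGB -> YCbCr conversion for one pixel."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr
```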
Each time the L and R view image data I(L) and I(R) are stored to the memory 87, the display driver 90 reads the L and R view image data I(L) and I(R) from the memory 87, generates a signal for displaying an image, and outputs the signal to the monitor 91 at predetermined timing. Thus, a live image is displayed on the monitor 91.
The image processing device 92 operates when the shutter button of the input device unit 86 is depressed. The image processing device 92 is constructed basically in the same manner as the image processing device 29 of the first embodiment. See
The image reader 31 of the fourth embodiment reads the L and R view image data I(L) and I(R) from the memory 87. The image output unit 35 of the fourth embodiment causes the memory 87 to store the disparity image data of the six viewpoints or the L and R view image data.
When the shutter button of the input device unit 86 is fully depressed, the recording control unit 93 reads the disparity image data or the L and R view image data from the memory 87, and creates an image file 97 in which these are unified. The recording control unit 93 records the image file 97 to the memory card 16.
The printer 82 is constructed in the same manner as the printer 12 of the first embodiment described above, except that the image processing device 29 is omitted. See
In the fourth embodiment described above, the multi-view camera 81 generates the disparity image data, in contrast to the first embodiment described above, in which the printer 10 generates it. However, the disparity image data can also be generated in the multi-view camera 81 in the manner of its generation in the printer 60 according to the third embodiment.
In the above embodiments, the virtual view image data are generated according to the L and R view image data obtained by a dual lens camera as the multi-view camera. However, the present invention can also be used for generating virtual view image data by use of any two of the view image data of three or more viewpoints obtained by a multi-view camera of three or more views. Also, in each of the above embodiments, the virtual viewpoints are defined between the L and R viewpoints. However, virtual viewpoints may also be defined to the left of the L viewpoint or to the right of the R viewpoint.
In each of the above embodiments, the description has been given with examples of a printer or a multi-view camera for generating virtual view image data. However, the present invention can be used in various apparatuses for generating virtual view image data, such as a 3-dimensional image display apparatus for 3-dimensional display according to disparity images, and a display apparatus for displaying disparity images in a predetermined sequence.
Number | Date | Country | Kind
---|---|---|---
2010-087751 | Apr 2010 | JP | national

Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/JP2011/056561 | 3/18/2011 | WO | 00 | 9/12/2012