1. Field of the Invention
The present invention relates to an imaging device and, more particularly, to an imaging device for displaying an image, a display control method, and a program for executing the method on a computer.
2. Description of the Related Art
Recently, imaging devices, such as digital cameras and digital video cameras (for example, camcorders), which capture a subject, such as a person or an animal, so as to generate image data and record the image data as image content, have come into wide use. An imaging device has been proposed which displays an image to be recorded on a display unit when an imaging action is finished so that the image content can be confirmed (a so-called review display).
An imaging device for generating a plurality of images by a series of imaging actions and recording the plurality of generated images in association with each other exists. For example, there is an imaging device for recording a plurality of images generated by consecutive photographing in association with each other. In the case where the plurality of recorded images is reproduced, for example, a list of representative images set in a consecutive photographing unit is displayed and a desired representative image is selected from the list of representative images. A plurality of images corresponding to the selected representative image may be displayed.
For example, an image display device for adjusting the display size of each consecutive image according to the number of consecutive images to be displayed as an image list and displaying a list of a plurality of consecutive images by the adjusted display size is proposed (for example, see Japanese Unexamined Patent Application Publication No. 2009-296380 (
According to the above-described related art, since the list of the plurality of consecutive images is displayed at the adjusted display size, it is possible to simultaneously display the list of consecutive images.
Here, a case where an imaging action is performed using an imaging device for recording a plurality of images generated by a series of imaging actions in association with each other is considered. In the case of performing the series of imaging actions using this imaging device, if the plurality of images generated by the imaging actions is to be confirmed after the imaging actions are finished, at least a part of the images is review-displayed.
For example, in the case where photographing is performed at a tourist spot of a travel destination, since each person may move, photographing timing becomes important. To this end, even after a series of imaging actions is finished, it is important to rapidly confirm the composition and desired subject. For example, as described above, after the series of imaging actions is finished, at least a part of the plurality of images generated by the imaging actions is review-displayed.
Although the plurality of images generated by the imaging actions may be confirmed by performing display after the series of imaging actions is finished, if the number of images to be generated is large, the processing time thereof becomes relatively long. If the progress situation cannot be checked while the processing time associated with the generation of the plurality of images increases, preparation for the next imaging action may not be adequately performed.
It is desirable to be able to easily check the progress situation of image generation when a plurality of synthesized images is generated by a series of imaging actions.
According to an embodiment of the present invention, there are provided an imaging device including: an imaging unit that captures a subject and generates a plurality of consecutive captured images in time series; a synthesis unit that performs synthesis using at least a part of each of the plurality of generated captured images and generates a plurality of synthesized images having an order relationship based on a predetermined rule; and a control unit that performs control for displaying information about the progress of the generation of the synthesized images by the synthesis unit on a display unit as progress information after the process of generating the plurality of captured images by the imaging unit is finished; a display control method thereof; and a program for causing a computer to execute the method. Accordingly, a subject is captured and a plurality of consecutive captured images in time series is generated, synthesis is performed using at least a part of each of the plurality of generated captured images so that a plurality of synthesized images having an order relationship based on a predetermined rule is generated, and information about the progress of the generation of the synthesized images is displayed as progress information after the process of generating the plurality of captured images is finished.
The synthesis unit may generate multi-viewpoint images as the plurality of synthesized images, and the control unit may perform control for displaying a central image or an image near the central image of the multi-viewpoint images as a representative image on the display unit along with the progress information, immediately after the process of generating the plurality of captured images by the imaging unit is finished. Accordingly, immediately after the process of generating the plurality of captured images is finished, the central image or the image near the central image of the multi-viewpoint images is displayed as the representative image along with the progress information.
The control unit may perform control for displaying the progress information based on the ratio of the number of synthesized images generated by the synthesis unit to the total number of the plurality of synthesized images to be generated by the synthesis unit. Accordingly, the progress information is displayed based on the ratio of the number of synthesized images generated by the synthesis unit to the total number of the plurality of synthesized images to be generated by the synthesis unit.
The control unit may perform control for displaying a progress bar indicating to what extent the synthesized images have been generated by the synthesis unit using a bar graph as the progress information. Accordingly, the progress bar indicating to what extent the synthesized images have been generated by the synthesis unit using a bar graph is displayed.
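As a rough illustration of the progress bar described above, the following sketch renders a bar-graph-style indication from the number of synthesized images generated so far and the total number to be generated. The function name and the text layout are hypothetical, not part of the specification; an actual device would draw the bar graphically on the display unit.

```python
def render_progress_bar(generated, total, width=20):
    """Render a text progress bar for synthesized-image generation.

    generated: number of synthesized images generated so far
    total:     total number of synthesized images to be generated
    width:     number of character cells in the bar (hypothetical)
    """
    if total <= 0:
        raise ValueError("total must be positive")
    ratio = min(generated / total, 1.0)
    filled = int(ratio * width)
    bar = "#" * filled + "-" * (width - filled)
    return "[%s] %d/%d (%d%%)" % (bar, generated, total, int(ratio * 100))

# For example, 5 of 15 viewpoint images synthesized:
# render_progress_bar(5, 15) -> "[######--------------] 5/15 (33%)"
```

The bar is simply refreshed each time the synthesis unit finishes one more synthesized image, which matches the behavior of updating the display state as generation proceeds.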
The control unit may perform control for displaying the progress information on the display unit immediately after the process of generating the plurality of captured images by the imaging unit is finished. Accordingly, the progress information is displayed immediately after the process of generating the plurality of captured images by the imaging unit is finished.
The control unit may perform control for sequentially displaying at least a part of the generated synthesized images on the display unit along with the progress information. Accordingly, at least a part of the generated synthesized images is sequentially displayed along with the progress information.
The control unit may perform control for initially displaying a synthesized image which is arranged in a predetermined order of the generated synthesized images on the display unit as a representative image. Accordingly, a synthesized image which is arranged in the predetermined order of the generated synthesized images is initially displayed as a representative image.
The imaging device may further include a recording control unit that associates representative image information indicating the representative image and the order relationship with the plurality of generated synthesized images and records the plurality of generated synthesized images on a recording medium. Accordingly, representative image information indicating the representative image and the order relationship are associated with the plurality of generated synthesized images and the plurality of synthesized images is recorded on a recording medium.
The recording control unit may record the plurality of generated synthesized images associated with the representative image information and the order relationship on the recording medium as an MP file. Accordingly, the plurality of synthesized images associated with the representative image information and the order relationship is recorded on the recording medium as an MP file.
According to the embodiment of the present invention, it is possible to easily identify the progress situation of the generation of the plurality of synthesized images by a series of imaging actions.
Hereinafter, modes (hereinafter, referred to as embodiments) for carrying out the present invention will be described. The description is given in the following order.
1. First Embodiment (display control: Example of displaying representative image and progress situation notification information after imaging actions of multi-viewpoint images are finished)
2. Second Embodiment (display control: Example of sequentially review-displaying representative image candidates of multi-viewpoint images according to change in device attitude and deciding on representative image)
The imaging unit 110 converts incident light from the subject, generates the image data (captured image), and supplies the generated image data to the RAM 150, based on the control of the CPU 160. Specifically, the imaging unit 110 includes an optical unit 112 (shown in
The gyro sensor 115 detects an angular velocity of the imaging device 100 and outputs the detected angular velocity to the CPU 160. Acceleration, motion, inclination and the like of the imaging device 100 may be detected using a sensor (for example, an acceleration sensor) other than the gyro sensor, and the CPU 160 may detect a change in the attitude of the imaging device 100 based on the detected result.
The resolution conversion unit 120 converts resolution of a variety of input image data into resolution to suit image processes, based on a control signal from the CPU 160.
The image compression/decompression unit 130 compresses or decompresses the variety of input image data according to image processes, based on a control signal from the CPU 160. For example, the image compression/decompression unit 130 compresses the input image data into image data of a Joint Photographic Experts Group (JPEG) format, or decompresses such image data.
The ROM 140 is a read only memory and stores various control programs and the like.
The RAM 150 is a memory used as the main memory (main storage device) of the CPU 160, includes a working region and the like for a program executed by the CPU 160, and temporarily holds a program or data necessary for the CPU 160 to perform various processes. The RAM 150 includes an image storage region for various image processes.
The CPU 160 controls the units of the imaging device 100 based on various control programs stored in the ROM 140. The CPU 160 controls the units of the imaging device 100 based on an operation input or the like received by the operation unit 182.
The LCD controller 171 displays a variety of image data on the LCD 172 based on a control signal from the CPU 160.
The LCD 172 is a display unit for displaying an image corresponding to the variety of image data supplied from the LCD controller 171. The LCD 172 sequentially displays, for example, the captured images corresponding to the image data generated by the imaging unit 110 (a so-called monitoring display). The LCD 172 displays, for example, an image corresponding to an image file stored in the removable medium 192. Instead of the LCD 172, for example, a display panel such as an organic Electro Luminescence (EL) panel may be used. As the display panel, a touch panel on which an operation input is performed by a user's finger touching or approaching the display surface may be used.
The input control unit 181 performs control of the operation input received by the operation unit 182 based on an instruction from the CPU 160.
The operation unit 182 receives the operation input manipulated by the user and outputs a signal corresponding to the received operation input to the CPU 160. For example, in a multi-viewpoint photographing mode for recording a multi-viewpoint image, an operation member such as a shutter button 183 (shown in
The removable media controller 191 is connected to the removable medium 192, and reads and records data in the removable medium 192 based on a control signal from the CPU 160. For example, the removable media controller 191 records a variety of image data such as the image data generated by the imaging unit 110 in the removable medium 192 as an image file (image content). The removable media controller 191 reads content such as the image file from the removable medium 192 and outputs the content to the RAM 150 or the like through the bus 101.
The removable medium 192 is a recording device (recording medium) for recording the image data supplied from the removable media controller 191. In the removable medium 192, for example, a variety of data such as JPEG format image data is recorded. As the removable medium 192, for example, a tape (for example, a magnetic tape) or an optical disc (for example, a recordable Digital Versatile Disc (DVD)) may be used. As the removable medium 192, for example, a magnetic disk (for example, a hard disk), a semiconductor memory (for example, a memory card) or a magneto-optical disc (for example, a Mini Disc (MD)) may be used.
In the file structure shown in
Between the SOI and the EOI, Application Segment (APP) 1, APP2 and JPEG image data are arranged. APP1 and APP2 are application marker segments for storing auxiliary information of the JPEG image data. Marker segments of Define Quantization Table (DQT), Define Huffman Table (DHT), Start of Frame (SOF) and Start of Scan (SOS) are inserted in front of the compressed image data and are not shown. The recording order of DQT, DHT and SOF is arbitrary. In images 304 and 305 for monitor display shown in
The APP2 segments (301 to 303) located at the uppermost sides of the file structures play an important role in representing the file structures; in them, the image position (offset address) of each viewpoint, the byte size, and information indicating whether or not an image is the representative image are recorded.
Now, recording of multi-viewpoint images will be briefly described by referring to “6.2.2.2 stereoscopic image” and “A.2.1.2.3 selection of representative image” of “CIPA DC-007-2009 Multi Picture Format”. The following (1) is described in “6.2.2.2 stereoscopic image” and the following (2) is described in “A.2.1.2.3 selection of representative image”.
(1) In a stereoscopic image, a viewpoint number is applied toward a subject in ascending order from a left viewpoint to a right viewpoint.
(2) In the case where a stereoscopic image is recorded, it is recommended that an image used as a representative image uses an image having a viewpoint number represented by (number of viewpoints/2) or ((number of viewpoints/2)+1) if the number of viewpoints is an even number and uses an image (image near the center of all viewpoints) having a viewpoint number represented by (number of viewpoints/2+0.5) if the number of viewpoints is an odd number.
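The recommendation in (2) can be sketched as a small helper that returns the recommended representative viewpoint number(s) for a given number of viewpoints; the function name is hypothetical and viewpoints are numbered from 1 as in rule (1).

```python
def representative_viewpoints(num_viewpoints):
    """Viewpoint numbers recommended for use as the representative image
    (following the rule quoted from CIPA DC-007-2009, A.2.1.2.3)."""
    if num_viewpoints % 2 == 0:
        # Even number of viewpoints: either of the two central viewpoints,
        # (N/2) or (N/2)+1, may be used.
        return [num_viewpoints // 2, num_viewpoints // 2 + 1]
    # Odd number of viewpoints: the single central viewpoint (N/2 + 0.5).
    return [num_viewpoints // 2 + 1]

# representative_viewpoints(15) -> [8]    (the central viewpoint used later)
# representative_viewpoints(2)  -> [1, 2]
```

For 15 viewpoints this yields viewpoint 8, which matches the central image (multi-viewpoint image of viewpoint 8) referred to in the synthesis description below.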
In the case of following this rule, since a left viewpoint image is packed at a higher-level address, the left viewpoint image is first subjected to a synthesis process or an encoding process. In this case, for example, if a representative image which is a central image is review-displayed, the review display of the representative image is not performed until the synthesis process or the like of the central image is finished. In the first embodiment of the present invention, an example of rapidly displaying the representative image after finishing the imaging action is described. However, the display timing of the representative image may be appropriately changed according to the taste or liking of the user. The review display is a display operation in which, when a recording instruction operation for a still image is performed in a state in which a still-image photographing mode is set, the captured image generated by the imaging process is automatically displayed for a predetermined period of time after the imaging process triggered by the recording instruction operation is finished.
The 2-viewpoint image photographing mode selection button 351 is pressed when the 2-viewpoint image photographing mode is set as the photographing mode of the multi-viewpoint image. The 2-viewpoint image photographing mode is a photographing mode for photographing a 2-viewpoint image. When the 2-viewpoint image photographing mode is set by the pressing operation of the 2-viewpoint image photographing mode selection button 351, an image generated by the imaging unit 110 is recorded as an image file of a 2-viewpoint image shown in
The multi-viewpoint image photographing mode selection button 352 is pressed when a multi-viewpoint image photographing mode is set as the photographing mode of the multi-viewpoint image. The multi-viewpoint image photographing mode is a photographing mode for photographing a multi-viewpoint image of 3 viewpoints or more. The number of viewpoints to be recorded may be set in advance or may be changed by a user operation. This change example is shown in
The confirm button 353 is pressed when the selection is decided on after the pressing operation for selecting the 2-viewpoint image photographing mode or the multi-viewpoint image photographing mode. The return button 354 is pressed, for example, when returning to a display screen displayed immediately before.
The number-of-viewpoints axis 361 represents the number of viewpoints to be specified by a user operation, and each scale mark on the number-of-viewpoints axis 361 corresponds to a number of viewpoints. For example, among the scale marks on the number-of-viewpoints axis 361, the scale mark closest to the minus display region 362 corresponds to 3 viewpoints. Among the scale marks on the number-of-viewpoints axis 361, the scale mark closest to the plus display region 363 corresponds to the maximum number of viewpoints (for example, 15 viewpoints).
The specified position marker 364 indicates the number of viewpoints specified by a user operation. For example, through an operation using a cursor 367 or a touch operation (in the case of including a touch panel), the specified position marker 364 is moved to a position on the number-of-viewpoints axis 361 desired by the user so as to specify the number of viewpoints to be recorded.
A confirm button 365 is pressed when the specification is decided on after the specified position marker 364 is moved to the position on the number-of-viewpoints axis 361 desired by the user. A return button 366 is pressed, for example, when returning to a display screen displayed immediately beforehand.
Imaging Action Example of Multi-viewpoint Images and Notification Example of Progress Situation
The progress bar 381 is a bar graph for notifying the user of the progress situation of the user operation (the panning operation of the imaging device 100) when the multi-viewpoint image photographing mode is set. Specifically, the progress bar 381 indicates to what extent the current operation amount (a gray portion 384) of the entire operation amount (for example, a rotation angle of the panning operation) necessary for the multi-viewpoint image photographing mode has progressed. In addition, for the progress bar 381, based on the results of detecting the movement amount and the movement direction between adjacent captured images on a time axis, the CPU 160 calculates the current operation amount and changes the display state based on the calculated amount. As the movement amount and the movement direction, for example, a motion vector (Global Motion Vector (GMV)) corresponding to motion of the entire captured image generated by the movement of the imaging device 100 is detected. In addition, based on an angular velocity detected by the gyro sensor 115, the CPU 160 may calculate the current operation amount. Using both the results of detecting the movement amount and the movement direction and the angular velocity detected by the gyro sensor 115, the CPU 160 may calculate the current operation amount. By displaying the progress bar 381 while photographing the multi-viewpoint image, the user may easily check to what extent the panning operation still needs to be performed.
The operation assisting information 382 and 383 is to assist a user operation (the panning operation of the imaging device 100) when the multi-viewpoint image photographing mode is set. As the operation assisting information 382, for example, a message assisting the user operation is displayed. As the operation assisting information 383, for example, an arrow (arrow indicating the operation direction) assisting the user operation is displayed.
Imaging Action Example of Multi-viewpoint Images and Recording Example of Captured Image Generated by Imaging Action
Next, a setting method of setting an extraction region for the images (#1) 401 to (#M) 405 held in the RAM 150 will be described.
IEl=p×h (1)
In addition, p [μm] denotes a value indicating the pixel pitch of the imaging element 111 and h [pixel] denotes a value indicating the number of horizontal pixels of the imaging element 111.
The angle of view of the imaging device 100 of the example shown in
α=(180/π)×2×tan⁻¹((p×h×10⁻³)/(2×f)) (2)
In addition, f [mm] denotes a value indicating a focal length of the imaging device 100.
By using the calculated angle α of view, the angle of view per pixel (pixel density) μ [deg/pixel] configuring the imaging element 111 may be obtained by the following equation 3.
μ=α/h (3)
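Equations 1 to 3 can be sketched together as follows. The function name and the numeric values used in the example are illustrative assumptions, not values from the specification; only the formulas follow the equations above.

```python
import math

def pixel_density(p_um, h_pixels, f_mm):
    """Compute the angle of view and the angle of view per pixel,
    following equations 1 to 3.

    p_um:     pixel pitch p of the imaging element [um]
    h_pixels: number h of horizontal pixels of the imaging element
    f_mm:     focal length f of the imaging device [mm]
    """
    # Equation 1: horizontal length of the imaging element IEl [um]
    ie_l = p_um * h_pixels
    # Equation 2: horizontal angle of view alpha [deg]
    # (the factor 10^-3 converts the element length from um to mm
    # so that it matches the focal length's unit)
    alpha = (180.0 / math.pi) * 2.0 * math.atan(
        (p_um * h_pixels * 1e-3) / (2.0 * f_mm))
    # Equation 3: angle of view per pixel mu [deg/pixel]
    mu = alpha / h_pixels
    return alpha, mu

# Illustrative values: 1.5 um pitch, 4000 horizontal pixels, 28 mm focal length.
alpha, mu = pixel_density(1.5, 4000, 28.0)
```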
Here, if the multi-viewpoint image photographing mode is set in the imaging device 100, the consecutive speed (that is, the number of frames per second) of the image in the multi-viewpoint image photographing mode is set to s [fps]. In this case, the length w [pixel] of the horizontal direction (width of the extraction region) of the extraction region (maximum extraction region) of one viewpoint of one captured image may be obtained by the following equation 4.
w=(d/s)×(1/μ) (4)
In addition, d [deg/sec] denotes a value indicating a shake angular velocity of a user who operates the imaging device 100. By using the shake angular velocity d of the user who operates the imaging device 100, the width w of the extraction region (width of the maximum extraction region) may be obtained.
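Equation 4 follows from the units: d/s is the angle [deg] swept between consecutive frames, and dividing by μ [deg/pixel] converts that angle into a width in pixels. A minimal sketch, with illustrative values that are not from the specification:

```python
def max_strip_width(d_deg_per_sec, s_fps, mu_deg_per_pixel):
    """Width w [pixel] of the maximum extraction region per captured
    image (equation 4): the angle swept between frames, d/s [deg],
    divided by the angle of view per pixel mu [deg/pixel]."""
    return (d_deg_per_sec / s_fps) * (1.0 / mu_deg_per_pixel)

# Illustrative values: panning at 30 deg/sec, 10 frames/sec,
# 0.003 deg/pixel -> 3 degrees swept per frame -> 1000-pixel-wide strip.
w = max_strip_width(30.0, 10.0, 0.003)
```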
As described above, if the synthesis process of the multi-viewpoint image is performed, images (strip images) as objects to be synthesized of the multi-viewpoint image are extracted from each of the captured images (images (#1) 401 to (#M) 405) generated by the imaging unit 110 and held in the RAM 150. That is, images (strip images) as objects to be synthesized are sequentially extracted while shifting the position of the extraction region (strip region) of one captured image held in the RAM 150. In this case, the extracted images are synthesized so as to be superimposed based on correlation between images. Specifically, the movement amount and the movement direction between two adjacent captured images (that is, relative displacement between adjacent captured images) on a time axis are detected. Based on the detected movement amount and movement direction (movement amount and movement direction between the adjacent images), the extracted images are synthesized such that the overlapped regions are superimposed on each other so as to generate the multi-viewpoint image.
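The strip-extraction step described above can be sketched minimally as follows. Each frame is modeled here as a simple list of columns, and the strips are merely concatenated in time order; the real synthesis process additionally aligns the strips using the detected movement amount and movement direction and superimposes the overlapped regions. The function name and data layout are illustrative assumptions.

```python
def synthesize_viewpoint(frames, strip_x, strip_w):
    """Minimal sketch of strip synthesis for one viewpoint: extract the
    strip region starting at horizontal position strip_x, strip_w
    columns wide, from each captured frame and concatenate the strips
    in time order. (A real implementation would blend overlapped
    regions based on the inter-frame movement amount/direction.)"""
    panorama = []
    for frame in frames:
        panorama.extend(frame[strip_x:strip_x + strip_w])
    return panorama

# Three 8-column "frames"; extract a 2-column strip starting at column 3.
frames = [["f%d_c%d" % (i, c) for c in range(8)] for i in range(3)]
pano = synthesize_viewpoint(frames, 3, 2)
# pano -> ['f0_c3', 'f0_c4', 'f1_c3', 'f1_c4', 'f2_c3', 'f2_c4']
```

Shifting `strip_x` per viewpoint, as described next, yields the synthesized image of each viewpoint from the same set of held frames.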
Now, the method of calculating the size and position of the extraction region (strip region) of one captured image held in the RAM 150 and the shift amount of the viewpoint j will be described.
After the imaging process by the imaging unit 110 and the recording process in the RAM 150 are finished, it is calculated which region is an extraction region, in each of the plurality of captured images held in the RAM 150. Specifically, as shown in Equation 4, the width of the extraction region is calculated and the position of the horizontal direction of the extraction region used for the synthesis of the central image (multi-viewpoint image of viewpoint 8) is set to the central position of the captured images held in the RAM 150.
Here, the position of the horizontal direction of the extraction region used for the synthesis of the multi-viewpoint image other than the central image (multi-viewpoint image of viewpoint 8) is calculated based on the position of the horizontal direction of the extraction region used for the synthesis of the central image (multi-viewpoint image of viewpoint 8). Specifically, the position shifted from the first position (central position) is calculated according to a difference in viewpoint number between the central viewpoint (viewpoint 8) and the viewpoint j. That is, the shift amount MQj of the viewpoint j may be obtained by the following equation 5.
MQj=(CV−OVj)×β (5)
In addition, CV denotes a value indicating a central viewpoint of the multi-viewpoint image, and OVj denotes a value indicating a viewpoint (viewpoint j) other than the central viewpoint of the multi-viewpoint image. In addition, β denotes a value indicating the shift amount (strip position shift amount) of the position of the extraction region per viewpoint. In addition, the size (strip size) of the extraction region is not changed.
Now, the method of calculating the strip position shift amount β will be described. The strip position shift amount β may be obtained by the following equation 6.
β=(W1−w×2)/VN (6)
In addition, W1 denotes a value indicating a horizontal size per captured image held in the RAM 150, w denotes a value indicating the width of the extraction region (width of the maximum extraction region), and VN denotes a value indicating the number of viewpoints of the multi-viewpoint image. That is, a value obtained by dividing W3 (=W1−w×2) shown in
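Equations 5 and 6 can be sketched together: equation 6 gives the per-viewpoint shift β of the extraction-region position, and equation 5 shifts viewpoint j's extraction region from the central position in proportion to the viewpoint-number difference. The numeric values in the example are illustrative assumptions (only the 15 viewpoints and central viewpoint 8 come from the description).

```python
def strip_shift_amount(w1, w, vn):
    """Equation 6: strip position shift amount beta per viewpoint.
    w1: horizontal size W1 of a captured image [pixel]
    w:  width of the maximum extraction region [pixel]
    vn: number VN of viewpoints of the multi-viewpoint image"""
    return (w1 - w * 2.0) / vn

def viewpoint_shift(cv, ovj, beta):
    """Equation 5: shift amount MQj of viewpoint j's extraction region
    from the central position, (CV - OVj) x beta."""
    return (cv - ovj) * beta

# Illustrative values: a 4000-pixel-wide captured image, 200-pixel
# strips, 15 viewpoints with central viewpoint 8.
beta = strip_shift_amount(4000, 200, 15)   # 240 pixels per viewpoint
shift_vp1 = viewpoint_shift(8, 1, beta)    # leftmost viewpoint: +1680
shift_vp15 = viewpoint_shift(8, 15, beta)  # rightmost viewpoint: -1680
```

The symmetric shifts for the leftmost and rightmost viewpoints place their strips toward the opposite ends of the held captured image, consistent with the arrangement described next.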
In this way, the strip position shift amount β is calculated such that the images (strip images) extracted when the synthesis processes of the leftmost-viewpoint image and the rightmost-viewpoint image are performed are arranged at the positions of at least the left end and the right end, respectively, of the captured image held in the RAM 150.
In addition, if the synthesis process of a panoramic plane image (two-dimensional image) is performed, the central strip image (image corresponding to viewpoint 8) corresponding to the width w of the extraction region (width of the maximum extraction region) is sequentially extracted and synthesized. If the synthesis process of the 2-viewpoint image is performed, two extraction regions are set such that the shift amount (offset amount) OF from the central strip image is identical at the left viewpoint and the right viewpoint. In this case, an allowable offset amount (minimum strip offset amount) OFmin [pixel] in the shake angular velocity d of the user who operates the imaging device 100 may be obtained by the following equation 7.
OFmin=w/2 (7)
In addition, the minimum strip offset amount OFmin is the minimum allowable strip offset amount in which a left-eye strip image and a right-eye strip image are not superimposed (overlapped).
A maximum allowable strip offset amount (maximum strip offset amount) OFmax, which keeps the extraction regions used for the synthesis process of the 2-viewpoint image from protruding outside the image region of the captured image held in the RAM 150, may be obtained by the following equation 8.
OFmax=(t−OFmin)/2 (8)
Here, t [pixel] denotes a horizontal valid size of one image generated by the imaging unit 110. The horizontal valid size t corresponds to the number of horizontal pixels which is the horizontal width of the captured image held in the RAM 150.
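Equations 7 and 8 together bound the offset of the two extraction regions: OFmin keeps the left-eye and right-eye strips from overlapping, and OFmax keeps both strips inside the captured image. A minimal sketch, with illustrative values that are not from the specification:

```python
def strip_offset_limits(w, t):
    """Equations 7 and 8: allowable strip offset range for the
    2-viewpoint synthesis process.

    w: width of the maximum extraction region [pixel]
    t: horizontal valid size of one captured image [pixel]
    Returns (OFmin, OFmax)."""
    of_min = w / 2.0             # equation 7: strips just avoid overlapping
    of_max = (t - of_min) / 2.0  # equation 8: strips stay inside the image
    return of_min, of_max

# Illustrative values: 200-pixel strips in a 4000-pixel-wide image.
of_min, of_max = strip_offset_limits(200.0, 4000.0)
# of_min -> 100.0, of_max -> 1950.0
```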
As described above, the images (#1) 401 to (#M) 405 generated by the imaging unit 110 are sequentially recorded in the RAM 150. Subsequently, for each of the images (#1) 401 to (#M) 405 held in the RAM 150, the CPU 160 calculates the extraction region of the viewpoint j and acquires the image included in the extraction region. Subsequently, by using the images acquired from the extraction regions of the images (#1) 401 to (#M) 405, the CPU 160 generates the synthesized image (viewpoint j image 411) of the viewpoint j. Although the CPU 160 generates the synthesized images of the multi-viewpoint image in this example, dedicated image synthesis hardware or software (an accelerator) may be provided separately to generate the synthesized images of the multi-viewpoint image.
Subsequently, the resolution conversion unit 120 performs resolution conversion on the viewpoint j image 411 to set a final image (viewpoint j image 420) of the viewpoint j. Subsequently, the image compression/decompression unit 130 compresses the viewpoint j image 420 into JPEG format image data. Subsequently, the CPU 160 performs a packing process (a packing process including header addition and the like) of the JPEG-format viewpoint j image 420 into the MP file 430. The same process is performed similarly for the generation of the other multi-viewpoint images. If the synthesis process of all multi-viewpoint images is finished, the removable media controller 191 records the MP file 430 in the removable medium 192 based on the control of the CPU 160.
Since the generation of the synthesized image (representative image 441) of the viewpoint 8 and the final image (representative image 442) of the viewpoint 8 is the same as in the example shown in
After the representative image 442 is generated, the resolution conversion unit 120 performs resolution conversion on the representative image 442 so as to obtain an optimal screen size for display and sets a display image (representative image 443) of the viewpoint 8. Subsequently, the LCD controller 171 displays the representative image 443 on the LCD 172 based on the control of the CPU 160. That is, the representative image 443 is review-displayed. Even after the review display, the generated representative image 442 is held in the RAM 150 until the packing process into the MP file 430 shown in
In this way, the multi-viewpoint images are generated using the plurality of images generated by the imaging unit 110. A representative image of the generated multi-viewpoint images is initially displayed on the LCD 172.
The operation reception unit 210 receives operation content operated by the user and supplies an operation signal corresponding to the received operation content to the control unit 230. The operation reception unit 210, for example, corresponds to the input control unit 181 and the operation unit 182 shown in
The attitude detection unit 220 detects a change in attitude of the imaging device 100 by detecting acceleration, motion, inclination and the like of the imaging device 100 and outputs attitude change information of the detected change in attitude to the control unit 230. In addition, the attitude detection unit 220 corresponds to the gyro sensor 115 shown in
The control unit 230 controls the units of the imaging device 100 based on the operation content from the operation reception unit 210. For example, when a setting operation of a photographing mode is received by the operation reception unit 210, the control unit 230 sets a photographing mode corresponding to the setting operation. For example, the control unit 230 analyzes the change amount (the movement direction, the movement amount, or the like) of the attitude of the imaging device 100 based on the attitude change information output from the attitude detection unit 220 and outputs the analyzed result to the synthesis unit 270 and the display control unit 280. For example, the control unit 230 performs control for displaying a multi-viewpoint image which is located in a predetermined order (for example, a central viewpoint) among the plurality of multi-viewpoint images to be generated by the synthesis unit 270 on the display unit 285 as a representative image, after a process of generating a plurality of captured images by the imaging unit 240 is finished. After the representative image is displayed, the control unit 230, for example, performs control for sequentially displaying at least a part of the generated multi-viewpoint images on the display unit 285 according to a predetermined rule (for example, each viewpoint). For example, the control unit 230 performs control for displaying information (for example, the progress bar 521 shown in FIGS. 19A to 21D) about progress of the generation of the multi-viewpoint image by the synthesis unit 270 on the display unit 285, after the process of generating the plurality of captured images by the imaging unit 240 is finished. In this case, the control unit 230, for example, performs control for displaying the progress information on the display unit 285 immediately after the process of generating the plurality of captured images by the imaging unit 240 is finished.
In addition, the control unit 230 corresponds to the CPU 160 shown in
The imaging unit 240 captures a subject and generates captured images based on the control of the control unit 230 and supplies the generated captured images to the captured image holding unit 250. In addition, if a 2-viewpoint image photographing mode or a multi-viewpoint image photographing mode is set, the imaging unit 240 captures the subject, generates a plurality of consecutive captured images in time series, and supplies the generated captured images to the captured image holding unit 250. In addition, the imaging unit 240 corresponds to the imaging unit 110 shown in
The captured image holding unit 250 is an image memory for holding the captured images generated by the imaging unit 240 and supplies the held captured image to the synthesis unit 270. The captured image holding unit 250 corresponds to the RAM 150 shown in
The movement amount detection unit 260 detects the movement amount and the movement direction between captured images adjacent on the time axis with respect to the captured images held in the captured image holding unit 250 and outputs the detected movement amount and movement direction to the synthesis unit 270. For example, the movement amount detection unit 260 performs a matching process (that is, a matching process of discriminating a photographing region of the same subject) between pixels configuring two adjacent captured images and calculates the number of pixels moved between the captured images. In this matching process, fundamentally, a process of supposing that the subject is stationary is performed. If a movable body is included in the subject, a motion vector different from the motion vector of the entire captured image is detected, and the motion vector corresponding to the movable body is excluded from the detection object. That is, only the motion vector (GMV: global motion vector) corresponding to the motion of the entire captured image generated by the movement of the imaging device 100 is detected. In addition, the movement amount detection unit 260 corresponds to the CPU 160 shown in
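The exclusion of movable-body motion vectors described above can be sketched as follows. This is a minimal illustration only, not the embodiment's implementation: the per-block matching step is assumed to have already produced a list of block motion vectors, and the median-based consensus and the outlier threshold are assumptions introduced for the sketch.

```python
def estimate_gmv(block_vectors, threshold=5.0):
    """Estimate the global motion vector (GMV) from per-block motion vectors.

    Vectors that differ greatly from the consensus (for example, those caused
    by a movable body in the subject) are excluded, so that only the motion of
    the entire captured image caused by moving the device remains.
    """
    # Consensus: the component-wise median of all block vectors
    # (a simplification; len // 2 is used even for even-length lists).
    xs = sorted(v[0] for v in block_vectors)
    ys = sorted(v[1] for v in block_vectors)
    mid = len(block_vectors) // 2
    consensus = (xs[mid], ys[mid])

    # Keep only vectors close to the consensus (reject movable bodies).
    inliers = [v for v in block_vectors
               if abs(v[0] - consensus[0]) <= threshold
               and abs(v[1] - consensus[1]) <= threshold]

    # The GMV is the average of the remaining (global) vectors.
    n = len(inliers)
    return (sum(v[0] for v in inliers) / n,
            sum(v[1] for v in inliers) / n)
```

With eight blocks agreeing on a shift of 10 pixels and one block covering a movable body, the outlier is rejected and the GMV reflects only the device motion.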
The synthesis unit 270 generates the multi-viewpoint image using the plurality of captured images held in the captured image holding unit 250 based on the control of the control unit 230 and supplies the generated multi-viewpoint image to the display control unit 280 and the recording control unit 290. That is, the synthesis unit 270 calculates the extraction regions in the plurality of captured images held in the captured image holding unit 250 based on the analysis result (analysis result of the change amount of the attitude of the imaging device 100) output from the control unit 230. The synthesis unit 270 extracts the images (strip images) from the extraction regions of the plurality of captured images and synthesizes the extracted images so as to generate the multi-viewpoint image. In this case, the synthesis unit 270 synthesizes the extracted images so as to be superimposed based on the movement amount and the movement direction output from the movement amount detection unit 260 in order to generate the multi-viewpoint image. The generated multi-viewpoint image is a plurality of synthesized images having an order relationship (each viewpoint) based on a predetermined rule. For example, the synthesis unit 270 initially generates the representative image immediately after the process of generating the plurality of captured images by the imaging unit 240 is finished. In addition, the initially generated image may be changed by the user operation or the setting content. In addition, the synthesis unit 270 corresponds to the resolution conversion unit 120, the RAM 150 and the CPU 160 shown in
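The strip synthesis performed by the synthesis unit 270 can be sketched as follows. This is a simplified illustration under stated assumptions: images are represented as nested lists of pixels, and the per-image strip offsets are assumed to have been derived already from the movement amounts; the actual unit also superimposes overlapping regions, which is omitted here.

```python
def synthesize_viewpoint_image(captured_images, strip_width, offsets):
    """Synthesize one multi-viewpoint image by extracting a strip from each
    captured image and concatenating the strips horizontally.

    captured_images: list of images, each a list of rows (row = list of pixels).
    offsets: per-image horizontal position of each strip's left edge, derived
             from the movement amount between adjacent captured images.
    """
    height = len(captured_images[0])
    # Start with empty rows, then append each extracted strip in order.
    result = [[] for _ in range(height)]
    for image, x0 in zip(captured_images, offsets):
        for row_index in range(height):
            strip_row = image[row_index][x0:x0 + strip_width]
            result[row_index].extend(strip_row)
    return result
```

Shifting the extraction regions of all captured images by a common amount, as described later for the viewpoint j shift amount, yields the image of a different viewpoint from the same captured images.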
The display control unit 280 displays the multi-viewpoint image generated by the synthesis unit 270 on the display unit 285 based on the control of the control unit 230. For example, the display control unit 280 displays the multi-viewpoint image which is located at a predetermined order (for example, a central viewpoint) among the plurality of multi-viewpoint images as an object to be generated by the synthesis unit 270 on the display unit 285 as a representative image, after the process of generating the plurality of captured images by the imaging unit 240 is finished. After the representative image is displayed, the display control unit 280, for example, sequentially displays at least a part of the generated multi-viewpoint images on the display unit 285 according to a predetermined rule (for example, each viewpoint). For example, the display control unit 280 displays information (for example, the progress bar 521 shown in
The display unit 285 displays an image supplied from the display control unit 280. Various menu screens or various images are displayed on the display unit 285. In addition, the display unit 285 corresponds to the LCD 172 shown in
The recording control unit 290 performs control for recording the multi-viewpoint image generated by the synthesis unit 270 in the content storage unit 300 based on the control of the control unit 230. That is, the recording control unit 290 records the multi-viewpoint image on the recording medium as the MP file in a state in which representative image information indicating the representative image of the multi-viewpoint image and the order relationship (for example, a viewpoint number) of the multi-viewpoint image is associated with the generated multi-viewpoint image. In addition, the recording control unit 290 corresponds to the image compression/decompression unit 130 and the removable media controller 191 shown in
The content storage unit 300 stores the multi-viewpoint image generated by the synthesis unit 270 as an image file (image content). The content storage unit 300 corresponds to the removable medium 192 shown in
In
In the above description, the example of review-displaying only the representative image when multi-viewpoint images of 3 viewpoints or more are recorded is described. However, the multi-viewpoint images other than the representative image may be sequentially displayed according to the taste of the user. Hereinafter, an example of sequentially review-displaying the multi-viewpoint images other than the representative image will be described.
In
In
The representative image may be initially review-displayed and the multi-viewpoint images generated by the synthesis process may be sequentially review-displayed according to a predetermined rule after the display of the representative image. Thus, it is possible to initially and rapidly confirm the representative image of the multi-viewpoint images and easily confirm the other multi-viewpoint images after confirmation.
For example, if the multi-viewpoint images are reproduced, the representative image of the multi-viewpoint images may be list-displayed on a selection screen for selecting a desired multi-viewpoint image. Immediately after the imaging process by the imaging unit 240 is finished, the representative image of the multi-viewpoint images is initially review-displayed. Accordingly, during review display, it is possible to easily confirm the same image as the representative image that is list-displayed during reproduction. Thus, it is possible to reduce a sense of unease during reproduction.
By initially synthesizing and review-displaying the representative image of the multi-viewpoint images immediately after the imaging process by the imaging unit 240 is finished, it is unnecessary for the user to wait while the images are synthesized in order starting from the left-viewpoint image. Accordingly, the timing at which the user confirms the multi-viewpoint image as the object to be recorded may be hastened, and it is possible to avoid the problem that the timing for canceling photographing is delayed until the multi-viewpoint image as the object to be recorded is confirmed. The display order of the multi-viewpoint images may be changed according to the taste of the user. Hereinafter, display transition examples thereof will be described.
In
The synthesis process of the multi-viewpoint images in ascending order by viewpoint number may be performed and the multi-viewpoint images generated by this synthesis process may be sequentially review-displayed. Thus, it is possible to easily confirm the other multi-viewpoint images in ascending or descending order by viewpoint number of the multi-viewpoint images along with the representative image of the multi-viewpoint images. By performing review display in ascending or descending order by viewpoint number, it is possible to easily confirm the multi-viewpoint images according to reproduction order of multi-viewpoint images.
Although the review display is performed in ascending order or descending order by viewpoint number in
If the 7-viewpoint image is generated as the multi-viewpoint image, the display control unit 280 calculates a value obtained by dividing the horizontal length of the progress bar 500 by 7 and sets 7 rectangular regions in the progress bar 500 by the calculated value. That is, the length L11 (=L12 to L17) is calculated as the value obtained by dividing the horizontal length of the progress bar 500 by 7, and 7 rectangular regions corresponding to the lengths L11 to L17 are set. These rectangular regions become units for sequentially changing the display state when the synthesis process of one multi-viewpoint image is finished.
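The division of the progress bar 500 into equal rectangular regions can be expressed as the following sketch (the function name is introduced for illustration; lengths are expressed in the bar's horizontal units):

```python
def progress_bar_regions(bar_length, viewpoint_count):
    """Divide the horizontal length of the progress bar into equal
    rectangular regions, one per multi-viewpoint image to be synthesized."""
    region_length = bar_length / viewpoint_count
    # Each region spans [left, right) along the bar's horizontal axis.
    return [(i * region_length, (i + 1) * region_length)
            for i in range(viewpoint_count)]
```

For a 7-viewpoint image and a bar 700 units long, each of the 7 regions (L11 to L17) is 100 units long; one region changes its display state each time the synthesis of one multi-viewpoint image is finished.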
For example, the progress situation notification screen (for example, the progress situation notification screen 520 shown in
As shown in
Whenever the synthesis process of the multi-viewpoint images is finished, the display state of the progress bar 500 is changed and the progress situation of the synthesis process of the multi-viewpoint image is indicated such that the user can easily identify the situation of the synthesis process.
In this example, the display state of the progress bar 500 is changed whenever the synthesis process of one multi-viewpoint image is finished. However, if the number of multi-viewpoint images as an object to be synthesized is large, a plurality of multi-viewpoint images may be set as one unit and the display state of the progress bar 500 may be changed whenever the synthesis process of each unit is finished. For example, if 5 multi-viewpoint images are set as one unit, the display state of the progress bar 500 is changed whenever the synthesis process of every fifth multi-viewpoint image is finished. Accordingly, it is possible to prevent the display state of the progress bar 500 from being frequently updated and enable the user to easily view the progress bar.
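The unit-based update condition described above reduces to a simple modulus test, sketched below (the function name is an illustrative assumption):

```python
def should_update_progress_bar(finished_count, images_per_unit):
    """Return True when the display state of the progress bar should be
    changed: only after a complete unit of multi-viewpoint images has been
    synthesized.

    Grouping several images into one unit keeps the bar from being updated
    too frequently when many viewpoints are synthesized.
    """
    return finished_count % images_per_unit == 0
```

With 5 images per unit, the bar is updated after the 5th, 10th, ... synthesized image; with 1 image per unit, it is updated after every image.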
Display Example of Progress Situation Notification Screen of Synthesis Process of 2-viewpoint Images
The during-processing message 511 is a character string indicating that the synthesis process of the 2-viewpoint images is being executed. In addition, on the progress situation notification screen 510, only the during-processing message 511 is displayed until the synthesis process of the representative image of the 2-viewpoint images is finished.
If the recording process of the 2-viewpoint images is performed as described above, since the number of images to be synthesized is small, the synthesis process may be finished relatively quickly. To this end, on the progress situation notification screen displayed in the case where the recording process of the 2-viewpoint images is performed, the progress bar notifying the progress situation may not be displayed. In addition, the progress bar may be displayed according to the taste of the user.
Display Example of Progress Situation Notification Screen of Synthesis Process of Multi-viewpoint Images (3 viewpoints or more)
In the above description, the example of displaying the representative image of the multi-viewpoint image and the progress bar while the synthesis process of the multi-viewpoint images is performed is described. As shown in
Since the progress situation notification screen 540 shown in
In this way, it is possible to more easily identify the progress situation by displaying the progress bar 521 and the progress situation notification information 541 while the synthesis process of the multi-viewpoint images is performed. Although the example of simultaneously displaying the progress bar 521 and the progress situation notification information 541 is described in this example, only the progress situation notification information 541 may be displayed. Other progress situation notification information (progress situation notification information of the synthesis process of the multi-viewpoint images) indicating to what extent the synthesis process of the multi-viewpoint images has progressed may be displayed. As the other progress situation notification information, for example, the ratio may be displayed as a numerical value (for example, a percentage) or a circular graph.
Although the example of setting the total number of multi-viewpoint images as the object to be synthesized as the denominator is described in
First, a determination as to whether or not a recording instruction operation of multi-viewpoint images is performed is made (step S901) and monitoring is continuously performed if the recording instruction operation is not performed. If the recording instruction operation is performed (step S901), a captured image recording process is performed (step S910). The captured image recording process will be described in detail with reference to
Subsequently, a representative image decision process is performed (step S920). The representative image decision process will be described in detail with reference to
Subsequently, a determination as to whether or not the multi-viewpoint images are displayed on the display unit 285 is made (step S902) and, if the multi-viewpoint images are displayed on the display unit 285, a viewpoint j image generation process is performed (step S950). The viewpoint j image generation process will be described in detail with reference to
Subsequently, the display control unit 280 converts the resolution of the representative image generated by the synthesis unit 270 into a resolution for display (step S903) and displays the representative image for display with the converted resolution on the display unit 285 (step S904).
After the viewpoint j image generation process (step S950), the recording control unit 290 records a plurality of multi-viewpoint images generated by the viewpoint j image generation process in the content storage unit 300 as an MP file (step S905).
First, the imaging unit 240 generates captured images (step S911) and sequentially records the generated captured images in the captured image holding unit 250 (step S912). Subsequently, a determination as to whether or not an imaging action end instruction operation is performed is made (step S913) and the action of the captured image recording process is finished if the imaging action end instruction operation is performed. If the imaging action end instruction operation is not performed (step S913), the process returns to step S911.
First, the photographing mode set by the user operation is acquired (step S921). A determination as to whether or not the 2-viewpoint image photographing mode is set is made (step S922) and the control unit 230 decides on the left-viewpoint image as the representative image if the 2-viewpoint image photographing mode is set (step S923).
In contrast, if the 2-viewpoint image photographing mode is not set (that is, a multi-viewpoint image photographing mode of 3 viewpoints or more is set) (step S922), the control unit 230 acquires the number of viewpoints of the set multi-viewpoint image photographing mode (step S924). Subsequently, a determination as to whether or not the acquired number of viewpoints is an odd number is made (step S925) and the control unit 230 decides on a central image as the representative image (step S926) if the acquired number of viewpoints is an odd number.
In contrast, if the acquired number of viewpoints is an even number (step S925), the control unit 230 decides on the left image of two images near the center as the representative image (step S927).
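The representative image decision of steps S921 to S927 can be expressed as the following sketch, assuming 1-based viewpoint numbers counted from the left (the function name is an assumption introduced for illustration):

```python
def decide_representative_viewpoint(viewpoint_count):
    """Decide which viewpoint number (1-based) is the representative image.

    2 viewpoints: the left-viewpoint image (viewpoint 1).
    Odd count:    the central image.
    Even count:   the left image of the two images nearest the center.
    """
    if viewpoint_count == 2:
        return 1
    if viewpoint_count % 2 == 1:
        return (viewpoint_count + 1) // 2   # central viewpoint
    return viewpoint_count // 2             # left of the two central images
```

For example, for a 7-viewpoint image the representative image is viewpoint 4 (the central viewpoint), and for a 6-viewpoint image it is viewpoint 3 (the left of the two central images).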
First, the control unit 230 acquires the number of viewpoints of the set multi-viewpoint image photographing mode (step S931) and acquires the recording time per viewpoint (step S932). Subsequently, the control unit 230 calculates a recording time of the total number of viewpoints based on the acquired number of viewpoints and the recording time per one viewpoint (step S933).
Subsequently, a determination as to whether or not the calculated recording time of the total number of viewpoints is equal to or greater than a predefined value is made (step S934). If the calculated recording time of the total number of viewpoints is equal to or greater than the predefined value (step S934), the control unit 230 calculates a display region of a progress bar based on the acquired number of viewpoints (step S935). In this case, for example, if the number of multi-viewpoint images as an object to be synthesized is large, a plurality of multi-viewpoint images is set as one unit and the display state of the progress bar is set to be changed whenever the synthesis process of the multi-viewpoint images corresponding to each unit is finished. Subsequently, the display control unit 280 displays the progress bar on the display unit 285 (step S936). Step S936 is an example of a control step described in the claims.
If the calculated recording time of the total number of viewpoints is less than the predefined value (step S934), the control unit 230 decides that the progress bar is not displayed (step S937). In this case, the progress bar is not displayed on the display unit 285.
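The decision of steps S931 to S937 (display the progress bar only when the estimated total recording time meets the predefined value) can be sketched as follows; the function name and the unit of time are assumptions for illustration:

```python
def decide_progress_bar(viewpoint_count, time_per_viewpoint, threshold):
    """Decide whether the progress bar is displayed.

    The recording time of the total number of viewpoints is estimated from
    the number of viewpoints and the recording time per viewpoint; the bar
    is displayed only when this total is equal to or greater than the
    predefined value (threshold).
    """
    total_time = viewpoint_count * time_per_viewpoint
    return total_time >= threshold
```

This reflects the rationale given earlier for 2-viewpoint images: with few images to synthesize, the process finishes quickly and the progress bar may be omitted.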
First, the synthesis unit 270 calculates the positions and sizes of extraction regions (strip regions) of the captured images held in the captured image holding unit 250 based on the analyzed result output from the control unit 230 (step S941). Subsequently, the synthesis unit 270 acquires the strip images from the captured images held in the captured image holding unit 250 based on the calculated positions and sizes of the extraction regions (step S942).
Subsequently, the synthesis unit 270 synthesizes the strip images acquired from the captured images and generates the representative image (step S943). In this case, the synthesis unit 270 synthesizes the acquired images so as to be superimposed based on the movement amount and the movement direction output from the movement amount detection unit 260 and generates the representative image.
Subsequently, the synthesis unit 270 converts the resolution of the generated representative image into a resolution for recording (step S944) and acquires a viewpoint number of the synthesized representative image (step S945). Subsequently, a determination as to whether it is necessary to update the progress bar is made (step S946). For example, if the display state of the progress bar using a plurality of multi-viewpoint images as one unit is set to be changed, it is determined that it is not necessary to update the progress bar until the synthesis process of each multi-viewpoint image corresponding to each unit is finished. If it is necessary to update the progress bar (step S946), the display control unit 280 changes the display state of the progress bar (step S947) and finishes the action of the representative image generation process. If it is not necessary to update the progress bar (step S946), the action of the representative image generation process is finished.
First, j is set to 1 (step S951). Subsequently, the synthesis unit 270 calculates the strip position shift amount β using the size of the extraction region (strip region) calculated in step S941 (step S952). Subsequently, the synthesis unit 270 calculates the shift amount (for example, MQj shown in Equation 5) of the viewpoint j using the calculated strip position shift amount β (step S953).
Subsequently, the synthesis unit 270 acquires the strip image from each captured image held in the captured image holding unit 250 based on the calculated shift amount of the viewpoint j and the position and size of the extraction region (step S954).
Subsequently, the synthesis unit 270 synthesizes the strip image acquired from each captured image and generates the viewpoint j image (multi-viewpoint image) (step S955). At this time, the synthesis unit 270 synthesizes the acquired image so as to be superimposed based on the movement amount and the movement direction output from the movement amount detection unit 260 so as to generate the viewpoint j image.
Subsequently, the synthesis unit 270 converts the resolution of the generated viewpoint j image into the resolution for recording (step S956) and acquires the viewpoint number of the synthesized viewpoint j image (step S957). Subsequently, a determination as to whether or not it is necessary to update the progress bar is made (step S958) and, if it is necessary to update the progress bar, the display control unit 280 changes the display state of the progress bar (step S959). In contrast, if it is not necessary to update the progress bar (step S958), the process proceeds to step S960.
Subsequently, the recording control unit 290 encodes the viewpoint j image with the converted resolution (step S960) and records the encoded viewpoint j image in the MP file (step S961). Subsequently, a determination as to whether or not the viewpoint j is the last viewpoint is made (step S962) and, if the viewpoint j is the last viewpoint, the action of the viewpoint j image generation process is finished. In contrast, if the viewpoint j is not the last viewpoint (step S962), j is incremented (step S963) and a determination as to whether or not the viewpoint j image is the representative image is made (step S964). If the viewpoint j image is the representative image (step S964), the process returns to step S960 and, if the viewpoint j image is not the representative image, the process returns to step S953.
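The control flow of the viewpoint j image generation process can be summarized as the sketch below. The callback-style interface is an assumption for illustration; the point shown is that the representative image, having already been synthesized, skips the synthesis steps and proceeds directly to encoding and recording:

```python
def generate_and_record_viewpoints(viewpoint_count, representative_j,
                                   synthesize, encode, record):
    """Sketch of the viewpoint j image generation loop (steps S951 to S964).

    The representative image was synthesized earlier, so its synthesis is
    skipped and it goes straight to encoding and recording; every other
    viewpoint is synthesized first. The processed viewpoint numbers are
    returned in order for illustration.
    """
    processed = []
    for j in range(1, viewpoint_count + 1):
        if j != representative_j:
            synthesize(j)        # steps S953 to S957
        record(encode(j))        # steps S960 and S961
        processed.append(j)
    return processed
```

For a 5-viewpoint image with viewpoint 3 as the representative image, viewpoints 1, 2, 4 and 5 are synthesized in the loop while all five are encoded and recorded.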
In the first embodiment of the present invention, the example of displaying the plurality of images generated by the series of imaging actions based on the predetermined rule is described. In the case of confirming the multi-viewpoint images generated by the imaging actions after the imaging actions of the multi-viewpoint image photographing mode are finished, the user may wish to display a multi-viewpoint image of a specific viewpoint. Therefore, in the second embodiment of the present invention, an example of changing and displaying an image as an object to be displayed according to the attitude of the imaging device after the imaging actions of the multi-viewpoint images are finished will be described. The configuration of the imaging device of the second embodiment of the present invention is substantially the same as that of the examples shown in
The input/output panel 710 displays various images and detects a touch action on the input/output panel 710 so as to receive an operation input from a user. That is, the input/output panel 710 includes a touch panel. The touch panel is, for example, provided so as to be superimposed on the display panel such that the screen of the display panel is visible through the touch panel, and detects an object touching the display surface so as to receive an operation input from the user.
The imaging device 700 includes other operation members such as a power switch and a mode switch, a lens unit, and the like, which are not shown or described for ease of description. The optical unit 112 is partially mounted in the imaging device 700.
Now, the change of the attitude of the imaging device 700 will be described. For example, in a state in which the user holds the imaging device 700 in both hands, the rotation angles (that is, the yaw angle, the pitch angle and the roll angle) around orthogonal 3 axes may be changed. For example, in the state of the imaging device 700 shown in
In the second embodiment of the present invention, as shown in
Association Example with Rotation Angle
The multi-viewpoint images (viewpoints 1 to 5) shown in
On the display screen shown in
The confirm button 751 is pressed when the multi-viewpoint image (representative image candidate) displayed on the input/output panel 710 is newly decided on as the representative image. That is, if the confirm button 751 is pressed, the multi-viewpoint image displayed on the input/output panel 710 when the pressing operation is performed is decided on as the new representative image. The recording control unit 290 associates the representative image information indicating the newly decided representative image and the order relationship (for example, viewpoint number) of the multi-viewpoint image with the generated multi-viewpoint images and records the multi-viewpoint images on the recording medium as an MP file.
The re-take button 752 is pressed, for example, when the imaging actions of the multi-viewpoint image are performed again. That is, after the multi-viewpoint image displayed on the input/output panel 710 is confirmed, if the user determines that it is necessary to photograph the multi-viewpoint image again, it is possible to rapidly photograph the multi-viewpoint image again by pressing the re-take button 752.
The operation assisting information 753 and 754 is an operation guide to assist the operation for changing the multi-viewpoint image displayed on the input/output panel 710. The message 755 is an operation guide to assist that operation and the decision operation of the representative image.
For example, as shown in
In addition, for example, if the person 800 inclines the imaging device 700 to the left side by γ degrees or more in a state in which the multi-viewpoint image of viewpoint 3 is review-displayed on the input/output panel 710, the multi-viewpoint image of viewpoint 2 is review-displayed on the input/output panel 710. In addition, for example, if the person 800 inclines the imaging device 700 to the left side by γ degrees or more in a state in which the multi-viewpoint image of viewpoint 2 is review-displayed on the input/output panel 710, the multi-viewpoint image of viewpoint 1 is review-displayed on the input/output panel 710. In this way, the multi-viewpoint images other than the representative image may be review-displayed on the input/output panel 710 as the representative image candidate by the operation for inclining the imaging device 700.
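The mapping from the inclining operation to the displayed viewpoint can be sketched as follows. The sign convention (positive roll change for inclining to the right) and the clamping to the valid viewpoint range are assumptions for illustration; γ corresponds to the threshold angle described above:

```python
def next_viewpoint(current_j, roll_change, gamma, viewpoint_count):
    """Map a change in the device's roll angle to the viewpoint to display.

    Inclining by gamma degrees or more to the right advances to the next
    viewpoint; inclining to the left returns to the previous one. The result
    is clamped to the valid viewpoint range.
    """
    if roll_change >= gamma:
        current_j += 1
    elif roll_change <= -gamma:
        current_j -= 1
    return max(1, min(viewpoint_count, current_j))
```

A change smaller than γ leaves the displayed viewpoint unchanged, and inclining further at either end of the range has no effect.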
If the confirm button 751 is pressed in a state in which the representative image candidate is review-displayed on the input/output panel 710 by the operation for inclining the imaging device 700, the representative image candidate is decided on as a new representative image. For example, if the confirm button 751 is pressed in a state in which the multi-viewpoint image of viewpoint 2 is review-displayed on the input/output panel 710 by the operation for inclining the imaging device 700, the multi-viewpoint image of viewpoint 2 is decided on as a new representative image, instead of the multi-viewpoint image of viewpoint 3.
For example, if the person 800 inclines the imaging device 700 in any one direction by γ degrees or more in a state in which the multi-viewpoint image of viewpoint 3 is review-displayed on the input/output panel 710, another multi-viewpoint image is review-displayed. In this case, the synthesis unit 270 may not finish the synthesis process of the multi-viewpoint image as an object to be displayed. Therefore, in the case where an image as an object to be displayed is changed by the operation for inclining the imaging device 700, if the synthesis process of the multi-viewpoint image as the object to be displayed is not finished, the synthesis process of the multi-viewpoint image as the object to be displayed is preferentially performed rather than the other multi-viewpoint images. That is, in the case where the change of the image as the object to be displayed by the operation for inclining the imaging device 700 is not performed, the synthesis process is sequentially performed in the same order as the first embodiment of the present invention. In contrast, in the case where the image as the object to be displayed is changed by the operation for inclining the imaging device 700 and the synthesis process of the multi-viewpoint image as the object to be displayed is not finished, the synthesis unit 270 preferentially performs the synthesis process of the multi-viewpoint image as the object to be displayed.
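The prioritization described above amounts to reordering the synthesis queue so that the viewpoint requested for display is synthesized first; the sketch below illustrates this under the assumption that pending viewpoints are held in a simple list:

```python
def reorder_synthesis_queue(pending, requested_j):
    """Move the viewpoint requested for display to the front of the
    synthesis queue so it is synthesized before the remaining viewpoints.

    pending: viewpoint numbers not yet synthesized, in the default order.
    If the requested viewpoint is not pending (already synthesized), the
    default order is kept unchanged.
    """
    if requested_j in pending:
        return [requested_j] + [j for j in pending if j != requested_j]
    return list(pending)
```

When no inclining operation occurs, the queue is consumed in the same order as in the first embodiment; when the user requests a not-yet-synthesized viewpoint, that viewpoint jumps to the front.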
Accordingly, it is possible to easily and rapidly review display the multi-viewpoint image desired by the user according to the inclination of the imaging device 700. To this end, in the case where the user confirms the multi-viewpoint image, it is possible to easily perform confirmation. By the pressing of the confirm button 751, it is possible to decide on a desired multi-viewpoint image as the representative image.
Although, in the example shown in
That is, the attitude detection unit 220 detects the change in attitude of the imaging device 700 based on the attitude of the imaging device 700 when the representative image is displayed on the input/output panel 710 as a reference. The control unit 230 performs control for sequentially displaying the multi-viewpoint image (representative image candidate) on the input/output panel 710 based on the detected change in attitude and the predetermined rule, after the representative image is displayed on the input/output panel 710. The predetermined rule, for example, indicates association between the multi-viewpoint images (viewpoints 1 to 5) shown in
Although, in the second embodiment of the present invention, the example of initially displaying the representative image on the input/output panel 710 is described, an initially displayed multi-viewpoint image may be decided on based on the change in attitude immediately after the process of generating the plurality of captured images by the imaging unit 240 is finished. That is, the attitude detection unit 220 detects the change in attitude of the imaging device 700 based on the attitude of the imaging device 700 immediately after the process of generating the plurality of captured images by the imaging unit 240 is finished as a reference. The control unit 230 may display the multi-viewpoint image corresponding to the order (viewpoint) according to the detected change in attitude on the input/output panel 710 as the initially displayed representative image. In this case, if the synthesis process of the multi-viewpoint image as the object to be displayed is not finished, the synthesis unit 270 preferentially performs the synthesis process of the multi-viewpoint image as the object to be displayed.
Although, in the second embodiment of the present invention, an example of using an operation method for inclining the imaging device 700 as an operation method for displaying a representative image candidate is described, the representative image candidate may be displayed using an operation member such as a key button.
In the second embodiment of the present invention, the example of displaying the representative image candidate by the user operation and deciding on the representative image is described. As described in the first embodiment of the present invention, if the multi-viewpoint images are automatically and sequentially displayed, the representative image may be decided on from the displayed multi-viewpoint images by the user operation. In this case, for example, if a desired multi-viewpoint image is displayed, the representative image may be decided on by a decision operation, using an operation member such as a confirm button.
After the encoded viewpoint j image is recorded in the MP file (step S961), the display control unit 280 converts the resolution of the viewpoint j image generated by the synthesis unit 270 into the resolution for displaying (step S971). Subsequently, the display control unit 280 displays the viewpoint j image for display with the converted resolution on the display unit 285 (step S972).
Subsequently, a determination is made as to whether or not a decision operation of the representative image is performed (step S973) and, if the decision operation of the representative image is performed, the control unit 230 decides on the viewpoint j image displayed on the display unit 285 as a new representative image (step S974). In contrast, if the decision operation of the representative image is not performed (step S973), the process proceeds to step S962.
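The loop of steps S971 through S974 can be sketched as follows. This is a minimal illustration with placeholder names; the row/column subsampling stands in for whatever resolution converter the display control unit 280 actually uses, and the boolean decision flag stands in for the confirm-button state:

```python
# Hypothetical sketch of steps S971-S974: after recording, the viewpoint j
# image is converted to display resolution and shown; if the user performs
# the decision operation, it becomes the new representative image.

def downscale(image, factor):
    """S971: crude resolution conversion by row/column subsampling."""
    return [row[::factor] for row in image[::factor]]

def review_step(viewpoint_image, decision_pressed, factor=2):
    shown = downscale(viewpoint_image, factor)  # S971
    # S972: the device would now draw `shown` on the display unit 285
    if decision_pressed:                        # S973
        return shown                            # S974: new representative image
    return None                                 # otherwise proceed to step S962
```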
After the strip position shift amount β is calculated (step S952), a determination is made as to whether or not the attitude of the imaging device 700 has changed by a predetermined level or more (step S981) and, if the attitude of the imaging device 700 has not changed by the predetermined level or more, the process proceeds to step S985. In contrast, if the attitude of the imaging device 700 has changed by the predetermined level or more (step S981), the viewpoint j corresponding to the change is set (step S982). Subsequently, a determination is made as to whether or not the synthesis process of the multi-viewpoint image of viewpoint j is finished (step S983) and, if the synthesis process of the multi-viewpoint image of viewpoint j is finished, a determination is made as to whether or not the recording process of the multi-viewpoint image of viewpoint j is finished (step S984). Here, the case where the synthesis process of the multi-viewpoint image of viewpoint j is finished corresponds to, for example, the case where the conversion of resolution for recording is performed with respect to the viewpoint j image (multi-viewpoint image) generated by the synthesis of the strip images (for example, the viewpoint j image (final image) 420 shown in
If the synthesis process of the multi-viewpoint image of viewpoint j is not finished (step S983), the process proceeds to step S953. If the recording process of the multi-viewpoint image of viewpoint j is finished (step S984), the process proceeds to step S971 and, if the recording process of the multi-viewpoint image of viewpoint j is not finished, the process proceeds to step S985.
In step S985, a determination is made as to whether or not the recording process of a viewpoint (j−1) image is finished and, if the recording process of the viewpoint (j−1) image is finished, the process proceeds to step S960. In contrast, if the recording process of the viewpoint (j−1) image is not finished (step S985), the process proceeds to step S971.
If the attitude of the imaging device 700 has not changed by the predetermined level or more (step S981), j is set to 0 (step S986) and j is incremented (step S987). Subsequently, a determination is made as to whether or not the synthesis process of the multi-viewpoint image of viewpoint j is finished (step S988) and, if the synthesis process of the multi-viewpoint image of viewpoint j is finished, a determination is made as to whether or not the recording process of the multi-viewpoint image of viewpoint j is finished (step S989). If the recording process of the multi-viewpoint image of viewpoint j is finished (step S989), the process returns to step S987 and, if the recording process of the multi-viewpoint image of viewpoint j is not finished, the process returns to step S985. If the synthesis process of the multi-viewpoint image of viewpoint j is not finished (step S988), the process returns to step S953.
If all the recording processes of the multi-viewpoint images are finished (step S990), the action of the viewpoint j image generation process is finished. In contrast, if all the recording processes of the multi-viewpoint images are not finished (step S990), the process returns to step S981.
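Under the simplifying assumption that the per-viewpoint synthesis and recording states are tracked as two boolean lists, the scan of steps S986 through S989 (finding, when the attitude has not changed, the next viewpoint that still needs work) might be sketched as follows; the list representation and function name are assumptions, not part of the specification:

```python
# Hypothetical sketch of steps S986-S989: starting from j = 0 and incrementing,
# return the first viewpoint j whose synthesis (S988) or recording (S989) is
# not yet finished; None corresponds to step S990 with all processes finished.

def first_unfinished_viewpoint(synthesis_done, recording_done):
    for j in range(1, len(synthesis_done) + 1):  # S986: j = 0, S987: increment j
        if not synthesis_done[j - 1]:            # S988: synthesize viewpoint j next
            return j
        if not recording_done[j - 1]:            # S989: record viewpoint j next
            return j
    return None                                  # S990: all recording finished
```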
In the embodiments of the present invention, the display example of the review display in the case where the multi-viewpoint images are generated using a plurality of captured images consecutive in time series is described. The embodiments of the present invention are also applicable to the case of performing the review display with respect to consecutive images generated using a plurality of captured images consecutive in time series. For example, if a consecutive shooting mode is set, the imaging unit 240 generates a plurality (for example, 15) of captured images consecutive in time series. The recording control unit 290 assigns an order relationship based on a predetermined rule to at least a part (or all) of the plurality of generated captured images and records the captured images in the content storage unit 300 in association with each other. That is, the order relationship according to the generation order is assigned to the plurality of captured images consecutive in time series, and the plurality of captured images are recorded as the image file of the consecutive images in association with each other. In this case, after the process of generating the plurality of captured images by the imaging unit 240 is finished, the control unit 230 performs control for displaying, on the display unit 285 as the representative image, a captured image (for example, a central image (a seventh image)) which is arranged in a predetermined order among the plurality of captured images as the object to be recorded.
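The representative-image choice for a consecutive burst can be sketched as below. This is an assumed illustration using 0-based indexing; whether the central frame of a 15-image burst is called the seventh or the eighth image depends on the counting convention:

```python
# Hypothetical sketch: pick a central frame of a consecutive burst as the
# representative image for the review display. The 0-based indexing and the
# frame-naming scheme are assumptions for illustration only.

def representative_index(num_images):
    return num_images // 2  # central image of the burst, 0-indexed

burst = ["frame%02d" % i for i in range(15)]       # 15 consecutive captured images
representative = burst[representative_index(len(burst))]
```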
The embodiments of the present invention are applicable to an imaging device such as a mobile phone having an imaging function or a mobile terminal device having an imaging function.
In addition, the embodiments of the present invention are examples for realizing the present invention and, as described in the embodiments of the present invention, matters of the embodiments of the present invention respectively correspond to the specific matters of claims. Similarly, the specific matters of claims correspond to the matters of the embodiments of the present invention having the same names. The present invention is not limited to the embodiments and may be modified without departing from the scope of the present invention.
The procedures described in the embodiments of the present invention may be a method having a series of procedures or a program for executing, on a computer, the series of procedures or a recording medium for storing the program. As the recording medium, for example, a Compact Disc (CD), a Mini Disc (MD), a Digital Versatile Disc (DVD), a memory card, a Blu-ray Disc (registered trademark) or the like may be used.
The present application contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2010-090118 filed in the Japan Patent Office on Apr. 9, 2010, the entire contents of which are hereby incorporated by reference.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
Number | Date | Country | Kind
---|---|---|---
P2010-090118 | Apr 2010 | JP | national