This application is a national stage application of International Application No. PCT/JP2008/070539, filed Nov. 5, 2008, the benefit of which is claimed, and which claims the benefit of Japanese Patent Application No. 2007-315210, filed Dec. 5, 2007, the entire disclosure of which is incorporated herein by reference.
The present invention relates to an image processing apparatus, a control method of the image processing apparatus, and a program. More particularly, the present invention relates to a technique of processing a plurality of picked-up still images.
As a method for obtaining an image having a wider angle of view than the image pickup angle of view of an image pickup apparatus, there is a known method of picking up a plurality of images while shifting the picked-up area a small amount at a time so that any two successively picked-up images have shared portions (hereinafter referred to as a “division pickup method”).
As a conventional technique relating to the division pickup method, Japanese Patent Laid-Open No. H06-121226 (FIG. 7) discloses a technique of generating a single image having a large angle of view from a panning image taken by a video camera. According to the technique of Japanese Patent Laid-Open No. H06-121226, a motion vector is calculated from a panning image by image pattern matching, a shared image area (a crosshatched portion at S5 in FIG. 7) is detected, and the shared image area and the following portion are combined (a memory 20C at S5 in FIG. 7).
Japanese Patent Laid-Open No. H06-121226 also discloses reproduction and displaying of the thus-generated synthesized image data (FIG. 20). According to the technique of Japanese Patent Laid-Open No. H06-121226, a size-reduced version of the whole synthesized image is displayed in a lower right area (sub-screen) of a display unit, while a portion (the crosshatched portion of the sub-screen) of the whole synthesized image is displayed on a main screen occupying the area from the middle to the left side. In other words, a portion of the size-reduced image is enlarged and displayed on the main screen. If the size-reduced image were directly enlarged, the resolution would be insufficient. Therefore, image data corresponding to the portion to be enlarged and displayed is read out from the storage medium again.
Japanese Patent Laid-Open No. 2005-197785 discloses a method for displaying a plurality of related picked-up images without synthesis (FIG. 14). According to Japanese Patent Laid-Open No. 2005-197785, a play list with which the order of reproduction of a plurality of related picked-up images is controlled is generated during image pickup. If a user selects this play list for reproduction, the user can sequentially browse picked-up images within a whole image pickup range while maintaining the resolution, without performing a complicated operation.
However, in the technique of Japanese Patent Laid-Open No. H06-121226, every time the enlarged display portion of a synthesized image is moved, the corresponding image data needs to be read out from a storage medium. Therefore, it takes time to read out the image data, and there are cases in which the enlarged display portion does not move smoothly. Also, since the user needs to select the portion to be enlarged and displayed, a complicated procedure is required to display the whole synthesized image at its original high image quality.
In the technique of Japanese Patent Laid-Open No. 2005-197785, displayed images are switched in units of a picked-up image. Therefore, it is not easy for the user to intuitively recognize the relevance between successively displayed picked-up images.
In view of such circumstances, the present invention has been achieved. A feature of the present invention is to provide a technique that enables a whole image to be reproduced from a plurality of still images picked up in such a manner that two successively picked-up still images have shared portions, while suppressing deterioration in image quality and allowing the user to recognize the image more easily.
According to an aspect of the present invention, there is provided an image processing apparatus for processing at least two still images that are picked up in such a manner that two still images successively picked up have shared portions, the apparatus comprising: a synthesis unit which generates a synthesized image from the at least two still images by synthesizing the shared portions; a capturing unit which captures a plurality of frame images from a plurality of areas in the synthesized image so that a frame image is captured from an area straddling two adjacent still images across the shared portions; and a generation unit which generates a moving image in which the synthesized image is scrolled and displayed, from the plurality of frame images captured by the capturing unit.
According to another aspect of the present invention, there is provided a method for controlling an image processing apparatus which processes at least two still images that are picked up in such a manner that two still images successively picked up have shared portions, the method comprising: generating a synthesized image from the at least two still images by synthesizing the shared portions; capturing a plurality of frame images from a plurality of areas in the synthesized image so that a frame image is captured from an area straddling two adjacent still images across the shared portions; and generating a moving image in which the synthesized image is scrolled and displayed, from the plurality of frame images captured by the capturing.
According to another aspect of the present invention, there is provided a computer program stored in a computer-readable storage medium, the program causing an image processing apparatus which processes at least two still images that are picked up in such a manner that two still images successively picked up have shared portions, to function as: a synthesis unit which generates a synthesized image from the at least two still images by synthesizing the shared portions; a capturing unit which captures a plurality of frame images from a plurality of areas in the synthesized image so that a frame image is captured from an area straddling two adjacent still images across the shared portions; and a generation unit which generates a moving image in which the synthesized image is scrolled and displayed, from the plurality of frame images captured by the capturing unit.
According to another aspect of the present invention, there is provided an image processing apparatus for capturing a plurality of frame images from at least two still images that are picked up in such a manner that two still images successively picked up have shared portions, the apparatus comprising: a temporary storage unit which stores the at least two still images; a capturing unit which, for each of combinations of two adjacent still images across the shared portions, of the at least two still images, captures a plurality of frame images from a plurality of areas in the two adjacent still images so that a frame image is captured from an area straddling the two adjacent still images across the shared portions of the two adjacent still images; and a generation unit which generates a moving image in which the two adjacent still images are scrolled and displayed, from the plurality of frame images captured by the capturing unit.
According to another aspect of the present invention, there is provided a method for controlling an image processing apparatus for capturing a plurality of frame images from at least two still images that are picked up in such a manner that two still images successively picked up have shared portions, the method comprising: storing temporarily the at least two still images; for each of combinations of two adjacent still images across the shared portions, of the at least two still images, capturing a plurality of frame images from a plurality of areas in the two adjacent still images so that a frame image is captured from an area straddling the two adjacent still images across the shared portions of the two adjacent still images; and generating a moving image in which the two adjacent still images are scrolled and displayed, from the plurality of frame images captured by the capturing.
According to another aspect of the present invention, there is provided a computer program stored in a computer-readable storage medium for causing an image processing apparatus for capturing a plurality of frame images from at least two still images that are picked up in such a manner that two still images successively picked up have shared portions, to function as: a temporary storage unit which stores the at least two still images; a capturing unit which, for each of combinations of two adjacent still images across the shared portions, of the at least two still images, captures a plurality of frame images from a plurality of areas in the two adjacent still images so that a frame image is captured from an area straddling the two adjacent still images across the shared portions of the two adjacent still images; and a generation unit which generates a moving image in which the two adjacent still images are scrolled and displayed, from the plurality of frame images captured by the capturing unit.
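By way of illustration only, the synthesis, capturing, and generation units recited in the aspects above can be pictured as a simple pipeline over a one-dimensional model in which an image is a list of pixel columns. All names below are hypothetical, and averaging the shared columns merely stands in for a real synthesis process; this is a sketch, not the disclosed implementation.

```python
# One-dimensional model: an image is a list of pixel columns.

def synthesize(still_a, still_b, shared):
    """Synthesis unit: join two stills whose trailing/leading `shared`
    columns depict the same scene portion (here simply averaged)."""
    blended = [(a + b) / 2 for a, b in zip(still_a[-shared:], still_b[:shared])]
    return still_a[:-shared] + blended + still_b[shared:]

def capture_frames(synthesized, frame_w, step):
    """Capturing unit: crop a frame at each extraction position; the
    intermediate positions straddle the two original stills."""
    positions = range(0, len(synthesized) - frame_w + 1, step)
    return [synthesized[p:p + frame_w] for p in positions]

def generate_moving_image(frames):
    """Generation unit: arrange the frames in order so the panorama
    scrolls; a real implementation would compression-encode them."""
    return list(frames)

# Two 6-column stills sharing 2 columns yield a 10-column panorama,
# scanned as four 4-column frames.
a = [0, 1, 2, 3, 4, 4]
b = [4, 4, 5, 6, 7, 8]
pano = synthesize(a, b, shared=2)
movie = generate_moving_image(capture_frames(pano, frame_w=4, step=2))
```
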
Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
Preferred embodiments of the present invention will now be described with reference to attached drawings. Each embodiment described below will be helpful in understanding a variety of concepts from the generic to the more specific.
It should be noted that the technical scope of the present invention is defined by claims, and is not limited by each embodiment described below. In addition, not all combinations of the features described in the embodiments are necessarily required for realizing the present invention.
(Embodiment 1)
An embodiment in which an image processing apparatus according to the present invention is applied to a digital camera will be described. Firstly, an image pickup process (division pickup process), and a process of generating a single synthesized image from a plurality of still images obtained by the division pickup process (hereinafter referred to as a “stitch process”), in a division pickup method will be described with reference to
In
Reference numeral 120 denotes a compression unit for compressing image data by a still image compression-encoding process, such as JPEG encoding or the like. Reference numeral 130 denotes a recording control unit for causing the format of compression-encoded image data (hereinafter referred to as “compressed image data”) to conform to the recording format of the storage medium 140 by converting it into the JPEG file format, for example.
Reference numeral 160 denotes a display unit for displaying image data or the like obtained by the image pickup unit 105. Reference numeral 170 denotes a synthesis unit for synthesizing two pieces of image data (e.g., image data obtained by the image pickup unit 105, and image data stored in the temporary storage unit 110). Reference numeral 180 denotes a control unit for controlling overall operation of the digital camera 100 (e.g., recording, reproduction, displaying and the like of image data), including a micro-computer or the like. Reference numeral 190 denotes an input unit with which the user gives an instruction to the control unit 180, including a menu button, a determination button or the like.
Reference numeral 210 denotes a reproduction control unit for reading out a JPEG file or the like from the storage medium 140 and extracting compressed image data or the like. Reference numeral 220 denotes an expansion unit for decoding compressed image data to obtain non-compressed image data.
Before describing the division pickup process, firstly, a process of picking up separate images on a one-by-one basis (hereinafter referred to as a “normal image pickup process”) will be described.
In
In the normal image pickup process, the synthesis unit 170 outputs input image data to the display unit 160 without performing a synthesis process. The display unit 160 sequentially displays image data input from the synthesis unit 170.
The user confirms an object image by viewing the display unit 160 and presses the shutter button 190a with desired timing, thereby instructing the control unit 180 to start a process of recording image data. The control unit 180 outputs a recording starting operation signal to the temporary storage unit 110 and the compression unit 120.
In the normal image pickup process, the temporary storage unit 110, when receiving the recording starting signal, outputs latest image data input from the image pickup unit 105 to the compression unit 120. The compression unit 120, when receiving the recording starting signal of the control unit 180, compresses one screen of image data input from the temporary storage unit 110 and outputs the resultant data to the recording control unit 130. The recording control unit 130 converts the format of the compressed image data input from the compression unit 120 and records the resultant data into the storage medium 140.
The process described above is the normal image pickup process that is executed by a digital camera.
Next, the division pickup process and the stitch process will be described. Here, for the sake of simplicity, the division pickup process and the stitch process that employ two picked-up images will be described as an example.
The user instructs the control unit 180 to start the division pickup process via the input unit 190 of the digital camera 100. The control unit 180 then outputs a signal indicating the start of the division pickup process to the temporary storage unit 110 and the compression unit 120. When receiving this signal, the temporary storage unit 110 operates to temporarily accumulate image data input from the image pickup unit 105. The compression unit 120 operates to add, to image data, identification data indicating that the image data has been obtained in the division pickup process. Note that the data format of the identification data may be any known data format and will not be described in detail.
Firstly, an image pickup process of the first piece of image data that is used as the starting point during the division pickup process, will be described. Note that a process similar to the normal image pickup process will not be described.
The image data generated in the image pickup unit 105 is sequentially displayed on the display unit 160. When the user presses the shutter button 190a with desired timing, a recording starting signal for the first piece of image data is input to the control unit 180.
The control unit 180 instructs the temporary storage unit 110 to execute division image pickup with the timing of the input recording starting signal. The temporary storage unit 110 stores the first piece of image data and outputs the stored image data to the compression unit 120 in accordance with the instruction from the control unit 180. The compression unit 120, when receiving the division image pickup instruction from the control unit 180, compresses the input image data, further adds the identification data of division image pickup to the compressed data, and outputs the resultant data to the recording control unit 130. The recording control unit 130 converts the format of the compressed image data input from the compression unit 120 and records the resultant data into the storage medium 140.
Next, the image pickup process of the second piece of image data in the division pickup process will be described.
Unlike the case where the first piece of image data is picked up, the temporary storage unit 110 now stores the first piece of image data. The synthesis unit 170 reads out the first piece of image data, sequentially reads out from the image pickup unit 105 the digital data of the object image input thereto, synthesizes the first piece of image data with the digital data of the object image, and displays the result on the display unit 160.
The user adjusts the orientation of the digital camera 100 so that the image data 310 and the image data 320 are substantially identical to each other in the overlapping portion 330, and presses the shutter button 190a. Thereby, as with the first piece of image data 310, the image data 320 is recorded as the second piece of image data into the storage medium 140. The image data 310 and the image data 320 have shared portions (portions corresponding to the overlapping portion 330). Also, identification data is added to the image data 320 by the compression unit 120.
The two pieces of image data thus picked up are synthesized (stitched) and then displayed on the display unit 160 in accordance with an instruction from the user. The image data of the synthesized image (synthesized image data) generated by the stitch process is also recorded into the storage medium 140 in accordance with an instruction from the user. Hereinafter, the details will be described.
The user instructs, via the input unit 190, the control unit 180 to execute the stitch process. In accordance with this instruction, the control unit 180 instructs the reproduction control unit 210 to reproduce a plurality of pieces of image data obtained by the division image pickup. The reproduction control unit 210 retrieves and reads out compressed image data to be stitched to which identification data is added, with reference to file management information of the storage medium 140. The compressed image data thus read out is decoded by the expansion unit 220 and the resultant data is input to the temporary storage unit 110. For the sake of simplicity, it is here assumed that the two pieces of image data are read out and are stored into the temporary storage unit 110.
The synthesis unit 170 synthesizes shared portions of the two pieces of image data stored in the temporary storage unit 110, thereby generating a single synthesized image. The thus-generated synthesis result is displayed on the display unit 160 (see
Typically, the aspect ratio of the display unit 160 is different from that of the synthesized image. Therefore, as shown in
The synthesized image thus size-reduced is also compressed by the compression unit 120, and the resultant data is recorded into the storage medium 140 by the recording control unit 130. Thereafter, when given an instruction to execute the stitch process with respect to the same image data, the control unit 180 causes the reproduction control unit 210 to obtain the synthesized image recorded in the storage medium 140 instead of actual execution of the stitch process. Thereby, a time required to display the synthesized image is reduced.
Although it has been described above that the picked-up area is shifted from left to right in the division pickup method, the picked-up area may instead be shifted from right to left. Also, if the picked-up area is shifted from left to right, then downward, and then from right to left, a synthesized image having a wide angle of view in both the vertical and horizontal directions can be generated.
The stitch process may be performed by, for example, a personal computer (PC) instead of the digital camera 100. Particularly, when the stitch process is performed with respect to a large number of pieces of image data, a large resource (a large memory capacity, etc.) is required, and therefore, the stitch process may often be performed using a PC or the like.
As described above, when the entirety of a synthesized image is displayed on the display unit 160, the size of the synthesized image is reduced, so that the visibility decreases (see
In
Next, an outline of a moving image generating process will be described with reference to
Initially, the reproduction control unit 210 reads out a plurality of pieces of compressed image data obtained by division image pickup from the storage medium 140, and outputs them to the expansion unit 220. The expansion unit 220 expands the compressed image data thus input, and stores the resultant data to the temporary storage unit 110. In
The synthesis unit 170 obtains original images from the temporary storage unit 110 and synthesizes their shared portions to generate a single synthesized image 600. The extraction unit 10 extracts images from a plurality of areas of the synthesized image 600 to capture frame images. Here, the extraction unit 10 preferably extracts images having an aspect ratio that is the same as (or at least close to) the aspect ratio of the display device used to reproduce and display the moving image.
As shown in
Next, the image extracting process will be described in detail with reference to
In S110 of
In S117 of
In S119, the control unit 180 obtains synthesis reference position information about an original image. The synthesis reference position information refers to information indicating a position of an original image that serves as a reference in a synthesized image that is generated when the original image is synthesized. In this embodiment, the reference position of the image data 510 is x1, and the reference position of the image data 520 is x2.
In S121, the control unit 180 compares the image pickup times t1 and t2. If t1&lt;t2, the control unit 180 sets the extraction direction (the direction in which the extraction position is shifted) to be from x1 to x2, and sets the extraction position x to x1 in S123. Otherwise, the control unit 180 sets the extraction direction to be from x2 to x1, and sets the extraction position x to x2 in S125. In other words, the extraction direction is set so that extraction proceeds from a previously picked-up still image to a subsequently picked-up still image.
Next, in S127, the control unit 180 sets an extraction position change amount d1 in accordance with the following expression:
d1=α×|x2−x1|/(|t2−t1|/tset)/fs (1)
fs: frame sampling frequency (Hz)
tset: division image pickup adjusting time (sec)
In Expression (1), tset adjusts an interval in image pickup time between two original images. Also, α may be a fixed value (typically, 1) or may be caused to correlate with a spacing between reference positions (|x2−x1|).
When α=1, the extraction position change amount d1 is set in accordance with an image pickup interval during division image pickup.
When α is caused to correlate with the reference position spacing, α is set to be large when the reference position spacing is large (α>1) and small when the reference position spacing is small (α<1) in accordance with the following expression:
α=|x2−x1|/Δxα1 (2)
Δxα1: reference position spacing set as a standard (α=1) by the user
According to Expression (2), as the image pickup interval during division image pickup increases (the shared portions of two still images successively picked up decrease), the extraction position change amount d1 is set to be larger.
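By way of illustration only, Expressions (1) and (2) can be sketched as follows. The grouping of Expression (1) is ambiguous in the text as reproduced here; this sketch reads it as traversing the spacing |x2−x1| over the tset-adjusted pickup interval at fs frames per second. The helper name and all numeric values are hypothetical.

```python
# Sketch of Expressions (1) and (2): the per-frame shift d1 of the
# extraction position. Reading assumed (an interpretation, not verbatim):
#   d1 = alpha * |x2 - x1| / ((|t2 - t1| / tset) * fs)

def change_amount(x1, x2, t1, t2, fs, tset, dx_std=None):
    span = abs(x2 - x1)                               # reference position spacing
    alpha = 1.0 if dx_std is None else span / dx_std  # Expression (2)
    frames = (abs(t2 - t1) / tset) * fs               # frames spent scrolling
    return alpha * span / frames                      # Expression (1), as read above

# 800-pixel spacing, pictures taken 4 s apart, 30 fps playback; tset = 2
# compresses the interval to 2 s, i.e. 60 frames of scrolling.
d1 = change_amount(x1=0, x2=800, t1=0.0, t2=4.0, fs=30, tset=2.0)
```

With `dx_std` supplied, a larger-than-standard spacing yields α&gt;1 and a proportionally larger per-frame shift, matching the behavior described after Expression (2).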
Finally, in S129, the control unit 180 sets the number of times Ned that an original image is repeatedly displayed. In an extraction process described below, if extraction of an original image is repeatedly performed a plurality of times (Ned), the original image is displayed for a long time. Note that Ned may be one (Ned=1).
By the processes described above, the setting of the extraction method is completed.
Referring back to
In S152, the control unit 180 determines whether or not the original image from which extraction is to be performed is a starting image (the initially picked-up image). If the original image is a starting image, the process goes to S156; otherwise, the process goes to S162. In the initial pass, the original image is determined to be a starting image, so the process goes to S156.
In S156, the extraction unit 10 repeatedly extracts a frame image, Ned times at frame updating intervals (fs), from the area of the synthesized image corresponding to the extraction position x. Note that the number of times of extraction here may be one; in that case, when the compression unit 20 generates a moving image, a plurality of frame images (Ned frame images) captured from areas corresponding to original images may be arranged in sequence to generate the moving image. Also, since in S156 a frame image is captured from an area corresponding to one of the original images, the extraction unit 10 may capture the original image itself as a frame image instead of extracting an image from the synthesized image. An original image is not affected by the deterioration in image quality caused by the synthesis of shared portions. Therefore, if an original image itself is captured as a frame image, the image quality of the frame image can be improved (the same is true of S162).
As shown in
Next, in S164, the control unit 180 changes the extraction position x (x=x+d1). In the example of
The determination of S150 is performed again. In this case, the extraction position x does not coincide with the position of an original image, so the process goes to S154. In S154, the extraction unit 10 extracts an image from the extraction position x of the synthesized image once. Thereby, a frame image 711 of
When the extraction position x coincides with the extraction position 620, the process goes from S150 to S152. Further, since the image at the extraction position 620 is not a starting image but a final image, the process goes to S162. In S162, the extraction unit 10 performs the extraction process Ned times (this process is similar to that of S156 and will not be described).
By the processes described above, the extraction process is completed.
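The loop of S150 through S164 can be sketched, for a scroll from x1 to x2, as follows. This is illustrative only; the helper name is hypothetical, and `positions` simply records where each frame would be cropped from the synthesized image.

```python
# Sketch of the extraction loop (S150-S164): the starting and final
# original images are dwelled on for n_ed frames, and intermediate
# positions are extracted once each as the position advances by d1.

def capture_frame_positions(x1, x2, d1, n_ed):
    """Return the extraction position of every captured frame."""
    positions = [x1] * n_ed         # S156: repeat the starting image n_ed times
    x = x1 + d1                     # S164: advance the extraction position
    while x < x2:
        positions.append(x)         # S154: one frame per intermediate position
        x += d1
    positions += [x2] * n_ed        # S162: repeat the final image n_ed times
    return positions

pos = capture_frame_positions(x1=0, x2=100, d1=25, n_ed=3)
# Start dwelled 3x, intermediates 25/50/75 once each, end dwelled 3x.
```
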
During the processes described above, as shown in
Thus, by extracting and displaying portions of a synthesized image, the entirety of the synthesized image can be displayed while securing the high image quality that existed during image pickup. The initial and final frame images are displayed for a longer time than the other frame images. The initial and final frame images are original images and therefore suffer less deterioration in image quality due to distortion or the like caused by synthesis. Therefore, the initial and final frame images are suitable for viewing for a long time. Moreover, from the user's standpoint as well, it is preferable that an actually picked-up image be displayed for a long time when an image is shown again as it was during image pickup.
Also, since the extraction direction and the extraction position change amount are controlled based on the image pickup times in the division pickup process, the order of image pickup is reflected in the generated moving image.
Moreover, the extraction position change amount can be caused to correlate with the extraction reference position spacing of the original images. Therefore, when the image pickup position is shifted by a large amount during division image pickup (the spacing is large), the position change amount can be set to be large. As a result, the speed of movement toward an original image, to which more attention is paid, increases, so that the visibility of such an image is improved.
Next, the moving image generating process in the compression unit 20 will be described with reference to
Initially, in S182, the compression unit 20 determines whether or not a frame image to be encoded was captured from an area corresponding to an original image. If the result is positive, the process goes to S184, and if otherwise, the process goes to S186.
In S184, the compression unit 20 determines whether or not a frame image captured from an area corresponding to an original image was captured by the first one of the Ned extraction operations. If the result is positive, the process goes to S188, and if otherwise, the process goes to S186.
In S188, the compression unit 20 performs intra-frame coding with respect to a frame image. As shown in
On the other hand, in S186, the compression unit 20 encodes a frame image by inter-frame predictive coding or bi-directional predictive coding. For example, as shown in
In S194, the compression unit 20 determines whether or not all frame images have been processed. If the result is positive, the process of this flowchart is ended. If otherwise, the process returns to S182, in which the next frame image is processed.
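The coding-type decision of S182 through S188 can be sketched as follows. This is illustrative only; the helper name is hypothetical, and bi-directional (B) frames are folded in with predictive (P) frames for simplicity.

```python
# Sketch of S182-S188: the first capture of each dwell on an original
# image is intra-frame coded ('I'); every other frame is predictive
# coded ('P'), which keeps the data amount of the moving image small.

def assign_coding_types(frames):
    """frames: (is_original_area, repeat_index) pairs in display order."""
    return ['I' if is_orig and rep == 0 else 'P'
            for is_orig, rep in frames]

# Start image dwelled on twice, two intermediate frames, final image
# dwelled on twice: only the first capture of each dwell is intra-coded.
types = assign_coding_types([(True, 0), (True, 1),
                             (False, 0), (False, 0),
                             (True, 0), (True, 1)])
```
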
For frame images other than an original image, a portion in which the image is somewhat distorted by the synthesis process (i.e., a shared portion) is often positioned near the middle of the screen. Therefore, pursuing particularly high image quality for these frames yields little benefit. Therefore, as shown in
By the processes described above, the moving image generating process is completed. Note that, according to the flowchart of
It has thus been described in this embodiment that frame images are arranged in the order in which they were extracted by the extraction unit 10 to generate a moving image, and that the moving image is compressed as a whole. However, since the same image is displayed for a predetermined time at the beginning and end of the moving image, the initial and final frames may instead be displayed as normal still images, and only the portion in which the extraction position moves from the initial frame to the final frame may be generated and compressed as a moving image as described above. In this case, a play list is generated with which reproduction is controlled so as to display a still image for a predetermined time (calculated from the number of times of repeated displaying, Ned), then perform moving image reproduction, and finally display a still image for a predetermined time. This case also falls within this embodiment.
As described above, according to this embodiment, the extraction unit 10 captures frame images from a plurality of areas of a synthesized image. In this case, the extraction unit 10 captures a larger number of frame images than the number of still images constituting the synthesized image. Also, for each of all combinations of two still images successively picked up, a frame image is captured from at least one area straddling the two still images across a shared portion. The compression unit 20 generates a moving image in which the synthesized image is scrolled and displayed, from the captured frame images.
Thereby, the entirety of a plurality of still images, any two successively picked-up ones of which have shared portions, can be reproduced in a manner that allows the user to recognize it more easily while suppressing deterioration in image quality.
Also, since a large portion of the frame images is predictive-coded by the compression unit 20 so that the data amount of the moving image is reduced, smooth reproduction and display are possible even if the resources of the moving image reproduction apparatus are limited.
(Embodiment 2)
In Embodiment 2, a case where three or more original images exist will be specifically described for ease of understanding; such a case is not excluded from Embodiment 1 either. Also, a technique will be described in which, even if three or more original images exist, a partial synthesized image is generated by performing synthesis in units of two original images, thereby reducing the memory capacity required for generation of a moving image.
Note that the configuration of the digital camera 200 as an image processing apparatus is similar to that of Embodiment 1 and will not be described (see
In this embodiment, as shown in
An image extracting process will be described in detail with reference to
In S1110 of
In S1113 of
Next, in S1115, the control unit 180 determines whether or not the counter i has reached the number of original images n. If the result is negative, the process goes to S1117, and if the result is positive, the process goes to S1133.
In Embodiment 1, where the value of n is two (n=2), the processes of S1117 through S1127 are executed only when i=1. Specifically, the processes of S1117 through S1127 are similar to S117 through S127 of
In S1129, the control unit 180 sets the number of times of repetitive displaying of an original image (N_i) in accordance with the following expression, only when i ≥ 2.
N_i = β × (Δx_{i−1} + Δx_i)
Δx_i = |x_{i+1} − x_i|    (3)
The number of repetitions N_i is proportional to the magnitudes of the extraction reference position spacings Δx_{i−1} and Δx_i of adjacent original images. In other words, the larger the magnitudes of the adjacent extraction reference position spacings, the larger the number of repetitions that is set. β is a constant of proportionality.
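As an illustrative sketch only (the function name `repetitions` and the Python list representation of the positions x_1 … x_n are assumptions, not part of the embodiment), Expression (3) might be computed as:

```python
def repetitions(x, i, beta):
    """Repetition count N_i for the i-th original image (Expression (3)).

    x    : extraction reference positions x_1 .. x_n (0-indexed list)
    i    : 1-based index of an intermediate original image (2 <= i <= n-1)
    beta : constant of proportionality
    """
    dx_prev = abs(x[i - 1] - x[i - 2])  # Delta x_{i-1} = |x_i - x_{i-1}|
    dx_curr = abs(x[i] - x[i - 1])      # Delta x_i = |x_{i+1} - x_i|
    return beta * (dx_prev + dx_curr)
```

The larger the spacings on either side of the i-th image, the longer that image stays on screen, consistent with the proportionality stated above.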
In S1131, the control unit 180 adds one to i, and the process then returns to S1115.
In S1115, when the counter i reaches n, the process goes to S1133, in which the control unit 180 sets the number of times of repetitive displaying of an original image, N_ed, in accordance with the following expression.
N_ed = γ × Σ_i Δx_i    (4)
As can be seen from Expression (4), the number of repetitions N_ed is set to be sufficiently larger than the number of repetitions N_i of an intermediate original image. γ is a constant of proportionality.
Note that, in this embodiment, N_ed is applied only to the original images that were picked up first and last (i = 1 and i = n).
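A hedged sketch of Expression (4) (the function name is hypothetical; the positions are again represented as a plain list):

```python
def end_repetitions(x, gamma):
    """Repetition count N_ed for the first and last original images (Expression (4)).

    Sums every extraction reference position spacing Delta x_i = |x_{i+1} - x_i|
    and scales by the constant of proportionality gamma.
    """
    total = sum(abs(b - a) for a, b in zip(x, x[1:]))
    return gamma * total
```

Because N_ed aggregates all spacings rather than just two adjacent ones, the first and last images are displayed far longer than any intermediate image, as the text notes.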
By the processes described above, the extraction method setting process is completed.
Referring back to
In S158, the control unit 180 determines whether or not the original image from which extraction is to be performed is the final image (the image picked up last). If the result is positive, the process goes to S162, and if otherwise, the process goes to S160.
In S160, the extraction unit 10 performs extraction N_i times with respect to the i-th original image. In the example of
Note that, in
By the processes described above, extraction of frame images is completed.
The compression unit 20 encodes the thus-captured frame images to generate a moving image as in Embodiment 1 (see
In this embodiment, for the intermediate original images (reference numerals 1720 and 1730 in
In the example of
Specifically, the process of
Also, since frame images are extracted in order of image pickup, a moving image can be generated which is reproduced and displayed in order of image pickup. Note that reference numerals 2710 through 2740 correspond to reference numerals 1710 through 1740 of
Specifically, when division image pickup is performed from an original image 3610 to an original image 3640 of
Also, the synthesis unit 170 may synthesize all picked-up images to generate a single synthesized image before extraction of an initial frame image, or may synthesize only two picked-up images corresponding to an area to be extracted with required timing, to generate a partial synthesized image.
In the latter case, the extraction unit 10 captures a frame image from a partial synthesized image instead of from the full synthesized image. The temporary storage unit 110 does not have to hold the full synthesized image and may instead hold a partial synthesized image, which is smaller; it therefore needs only the capacity required to synthesize two original images. The process of this embodiment can thus be executed in a compact device, such as a digital camera, in which the capacity of the temporary storage unit 110 is relatively limited.
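The memory-saving approach above can be sketched as follows. This is a minimal illustration only: the function names and the callback decomposition are assumptions, and real synthesis and extraction operate on pixel data rather than on the placeholder values used here.

```python
def frames_from_pairs(originals, synthesize_pair, extract_frames):
    """Generate scroll frames while holding only two originals in memory at a time.

    originals       : still images in pickup order
    synthesize_pair : callback that synthesizes two adjacent originals into a
                      partial synthesized image
    extract_frames  : callback that extracts the frame images straddling the
                      shared portion of that partial synthesized image
    """
    frames = []
    prev = None
    for img in originals:
        if prev is not None:
            # Only the current pair is synthesized and held in memory.
            partial = synthesize_pair(prev, img)
            frames.extend(extract_frames(partial))
        prev = img
    return frames
```

The design choice is the trade-off stated in the text: peak memory is bounded by two original images plus one partial synthesized image, at the cost of re-synthesizing each shared portion as it is reached.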
As described above, according to this embodiment, even when three or more original images exist, the technique of the present invention can be used as in Embodiment 1.
(Embodiment 3)
In Embodiment 3, when the synthesis unit 170 synthesizes shared portions of two still images, the ratio of synthesis is changed, depending on the extraction position of an image extracted by the extraction unit 10. Specifically, the synthesis unit 170 performs so-called α blending, and changes the value of α, depending on the extraction position of an image extracted by the extraction unit 10. Hereinafter, this will be described with reference to
Embodiment 3 differs from Embodiment 1 in that S148 is provided before S150. In S148, the synthesis unit 170 synthesizes the shared portions of the two still images corresponding to an extraction position x, using weighting based on the extraction position x, to generate a partial synthesized image.
Here, a case shown in
Also, when the extraction position corresponds to the area of an original image, the value of α is set to 0 or 1 so that only that original image is used. For example, when an image is extracted from the extraction position 4610, the value of α is 0. Thereby, a frame image extracted from a position corresponding to the area of an original image is not affected by deterioration in image quality due to image distortion or the like during synthesis.
As the extraction position comes closer to the extraction position 4620, the synthesis unit 170 sets the value of α larger. For example, the value of α may be determined based on the proportion of the current extraction position shift amount |x − x_1| to the extraction reference position spacing Δx_1 = |x_2 − x_1|. When the extraction position x coincides with x_1, α = 0; when it coincides with x_2, α = 1; between these positions, α is calculated by linear interpolation. Thus, the value of α is determined depending on the extraction position.
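The linear interpolation of α described above can be sketched as follows (function names are illustrative, and real α blending operates per pixel on image arrays rather than on the scalar values used here):

```python
def alpha_for_position(x, x1, x2):
    """Blend weight alpha at extraction position x (Embodiment 3).

    alpha = 0 at x1 (only the first original image is used),
    alpha = 1 at x2 (only the second original image is used),
    linearly interpolated in between.
    """
    return abs(x - x1) / abs(x2 - x1)

def blend(pixel_a, pixel_b, alpha):
    """Alpha-blend two shared-portion pixel values: (1 - alpha)*a + alpha*b."""
    return (1.0 - alpha) * pixel_a + alpha * pixel_b
```

Because α varies only slightly between successive frame images, adjacent frames differ smoothly, which is what improves the bi-directional prediction efficiency discussed next.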
In this embodiment, the value of α is changed between successive frame images. Therefore, when the compression unit 20 encodes a frame image by a bi-directional predictive coding scheme, the prediction efficiency is improved, so that the code amount is reduced. Also, as shown in
The image synthesizing method described above is performed using α blending by pixel addition. However, the image synthesizing method of the present invention is not limited to α blending. For example, there is a technique called morphing that links two similar images by coordinate conversion. A method of linking images of synthesis shared areas of original images in time series using such an image conversion technique may fall within another embodiment of the present invention.
Moreover, as another image synthesizing method, a method of switching images in synthesis shared areas of original images at a predetermined position, and changing the switching positions in time series, may fall within another embodiment of the present invention.
In the case of an image synthesizing method employing the α blending technique, the morphing technique, or the switching technique, a frame image corresponding to an original image completely matches the original image. In other words, when an image is displayed for a long time, an original image without distortion can be displayed.
Moreover, there is another method for synthesizing an image in which a distortion occurring in a synthesis shared area portion is reduced and the occurrence of an unnatural discontinuous image is prevented. In this case, since it is not necessary to change a synthesis process in time series, the processing load is reduced. Note that although even an image having a long display time does not completely match an original image, the image is smoothly synthesized, so that unnaturalness can be reduced for a display method employing division image pickup.
Note that, in this embodiment, even when the extraction unit 10 extracts an image from more than one area between two predetermined still images, the synthesis unit 170 generates a separate partial synthesized image for each area.
As described above, according to this embodiment, frame images are captured while the synthesis ratio of the shared portions of two still images is changed stepwise.
Thereby, the deterioration in image quality of a frame image due to synthesis can be reduced, and the encoding efficiency can be improved.
(Other Embodiment)
The processing described in the above embodiments may be realized by providing a storage medium, storing program codes of software realizing the above-described functions, to a computer system or apparatus. By reading the program codes stored in the storage medium with a computer (or a CPU or MPU) of the system or apparatus and executing them, the functions of the above-described embodiments can be realized. In this case, the program codes read from the storage medium realize the functions according to the embodiments, and the storage medium storing the program codes constitutes the invention. A storage medium such as a Floppy® disk, a hard disk, an optical disk, or a magneto-optical disk can be used for providing the program codes. A CD-ROM, a CD-R, a magnetic tape, a non-volatile memory card, a ROM, and the like can also be used.
Furthermore, the functions according to the above embodiments are realized not only by executing the program codes read by the computer. The present invention also includes a case where an OS (operating system) or the like working on the computer performs part or the entire processes in accordance with designations of the program codes and realizes the functions according to the above embodiments.
Furthermore, the program codes read from the storage medium may be written in a function expansion card which is inserted into the computer or in a memory provided in a function expansion unit which is connected to the computer. Thereafter, a CPU or the like contained in the function expansion card or unit may perform part or the entire processes in accordance with designations of the program codes and may realize the functions of the above embodiments.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2007-315210, filed on Dec. 5, 2007, which is hereby incorporated by reference herein in its entirety.
Number | Date | Country | Kind |
---|---|---|---|
2007-315210 | Dec 2007 | JP | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/JP2008/070539 | 11/5/2008 | WO | 00 | 4/23/2010 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2009/072375 | 6/11/2009 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
4019036 | Hiramatsu et al. | Apr 1977 | A |
4602251 | Sawada et al. | Jul 1986 | A |
4920504 | Sawada et al. | Apr 1990 | A |
5018023 | Kubota | May 1991 | A |
5142616 | Kellas et al. | Aug 1992 | A |
5153716 | Smith | Oct 1992 | A |
5364270 | Aoyama et al. | Nov 1994 | A |
5601353 | Naimark et al. | Feb 1997 | A |
5666459 | Ohta et al. | Sep 1997 | A |
5841473 | Chui et al. | Nov 1998 | A |
5867208 | McLaren | Feb 1999 | A |
5880778 | Akagi | Mar 1999 | A |
5886742 | Hibi et al. | Mar 1999 | A |
6058212 | Yokoyama | May 2000 | A |
6416477 | Jago | Jul 2002 | B1 |
6542642 | Takizawa et al. | Apr 2003 | B2 |
6867801 | Akasawa et al. | Mar 2005 | B1 |
6891561 | Achituv et al. | May 2005 | B1 |
6982749 | Matsui | Jan 2006 | B2 |
7206017 | Suzuki | Apr 2007 | B1 |
7409105 | Jin et al. | Aug 2008 | B2 |
7412155 | Kasai | Aug 2008 | B2 |
7424218 | Baudisch et al. | Sep 2008 | B2 |
7428007 | Kitaguchi et al. | Sep 2008 | B2 |
7453479 | Le et al. | Nov 2008 | B2 |
7551203 | Nakayama et al. | Jun 2009 | B2 |
7577314 | Zhou et al. | Aug 2009 | B2 |
7580952 | Logan et al. | Aug 2009 | B2 |
7593635 | Jeon | Sep 2009 | B2 |
7596177 | Imagawa et al. | Sep 2009 | B2 |
7602993 | Nishiyama | Oct 2009 | B2 |
7671894 | Yea et al. | Mar 2010 | B2 |
7710462 | Xin et al. | May 2010 | B2 |
7728877 | Xin et al. | Jun 2010 | B2 |
7728878 | Yea et al. | Jun 2010 | B2 |
8040952 | Park et al. | Oct 2011 | B2 |
8068693 | Sorek et al. | Nov 2011 | B2 |
8228994 | Wu et al. | Jul 2012 | B2 |
8311108 | Lee et al. | Nov 2012 | B2 |
8350892 | Hayashi | Jan 2013 | B2 |
8717412 | Linder et al. | May 2014 | B2 |
20020196852 | Yamada et al. | Dec 2002 | A1 |
20050058330 | Mitsuhashi et al. | Mar 2005 | A1 |
20050084175 | Olszak | Apr 2005 | A1 |
20050097442 | Green | May 2005 | A1 |
20050120094 | Tuli | Jun 2005 | A1 |
20060279568 | Matsumoto | Dec 2006 | A1 |
20070002372 | Sekizawa | Jan 2007 | A1 |
20070046771 | Luellau et al. | Mar 2007 | A1 |
20070132784 | Easwar et al. | Jun 2007 | A1 |
20100245540 | Fukuzawa | Sep 2010 | A1 |
Number | Date | Country |
---|---|---|
06-121226 | Apr 1994 | JP |
10-155109 | Jun 1998 | JP |
11-004398 | Jan 1999 | JP |
11-146243 | May 1999 | JP |
11-289515 | Oct 1999 | JP |
11-308618 | Nov 1999 | JP |
2000-101916 | Apr 2000 | JP |
2000-197003 | Jul 2000 | JP |
2001-016591 | Jan 2001 | JP |
2002-094870 | Mar 2002 | JP |
2002342753 | Nov 2002 | JP |
2005-175620 | Jun 2005 | JP |
2005-197785 | Jul 2005 | JP |
2005-217902 | Aug 2005 | JP |
2005-303991 | Oct 2005 | JP |
2005-328497 | Nov 2005 | JP |
2006011862 | Jan 2006 | JP |
2006-033353 | Feb 2006 | JP |
2006-166208 | Jun 2006 | JP |
2006-174178 | Jun 2006 | JP |
2006-270676 | Oct 2006 | JP |
2006-345400 | Dec 2006 | JP |
2007079644 | Mar 2007 | JP |
Entry |
---|
Nov. 2, 2011 Chinese Office Action, English Translation, Chinese Patent Application No. 200880119619.7. |
Jan. 16, 2012 Japanese Office Action, English Translation, Japanese Patent Application No. 2007-315210. |
May 25, 2012 Japanese Office Action, English Translation, Japanese Patent Application No. 2007-315210. |
Number | Date | Country | |
---|---|---|---|
20100245540 A1 | Sep 2010 | US |