This nonprovisional application claims priority under 35 U.S.C. §119(a) on Patent Application No. 2009-191072 filed in Japan on Aug. 20, 2009 and on Patent Application No. 2010-150739 filed in Japan on Jul. 1, 2010, the entire contents of which are hereby incorporated by reference.
1. Field of the Invention
The present invention relates to an image sensing apparatus such as a digital still camera or a digital video camera, and an image processing apparatus which performs image processing on an image.
2. Description of Related Art
As illustrated in
In this case, if a sequential photography interval that is a photography interval between temporally neighboring taken images is appropriate, images of the specific subject at different photography time points are arranged at an appropriate position interval on a taken image sequence and the stroboscopic image. However, when the sequential photography interval is too short, as illustrated in
In a first conventional method, slit frames for dividing a photography region into a plurality of regions are displayed on the display unit, and the photographer is guided to press a shutter button at timings when the specific subject exists in the individual slit frames, so as to obtain the taken image sequence in which the images of the specific subject are arranged at an appropriate position interval. However, in this conventional method, the photographer is required to decide whether or not the specific subject exists in each of the slit frames so as to press the shutter button at appropriate timings. Therefore, a large load is put on the photographer, and the photographer may often let the appropriate timing for pressing the shutter button slip away.
On the other hand, there is proposed another method in which a frame image sequence is taken at a constant frame rate and is recorded in a recording medium, and in a reproduction process, images of a subject part having a motion are extracted from the recorded frame image sequence and combined.
In a second conventional method, partial images of a subject whose motion from the previous frame image is larger than a predetermined level are extracted from the frame images, and only those partial images are combined in decoding order. However, in this method, if the speed of the specific subject to be noted is low, it is decided that the motion of the specific subject between neighboring frame images is not larger than the predetermined level, so that the specific subject is excluded from the targets of combination (as a result, a stroboscopic image noting the specific subject cannot be generated).
In addition, as to the above-mentioned second conventional method, in the method of reproduction, as illustrated in
As to the second conventional method, in the method of reproduction, a so-called background image in which there is no specific subject having a motion (an image like the frame image 901) is necessary.
With reference to
A first image sensing apparatus according to the present invention includes an imaging unit which outputs image data of images obtained by photography, and a photography control unit which controls the imaging unit to take sequentially a plurality of target images including a specific object as a subject. The photography control unit sets a photography interval of the plurality of target images in accordance with a moving speed of the specific object.
A second image sensing apparatus according to the present invention includes an imaging unit which outputs image data of images obtained by photography, and a photography control unit which controls the imaging unit to take sequentially a plurality of frame images including a specific object as a subject. The photography control unit includes a target image selection unit which selects a plurality of target images from the plurality of frame images on the basis of a moving speed of the specific object.
A first image processing apparatus according to the present invention includes an image selection unit which selects p selected images from among m input images obtained by sequential photography and including a specific object as a subject (m and p denote integers of two or larger, and m>p holds). The image selection unit selects the p selected images, including the i-th and the (i+1)th selected images, so that a distance between the specific object on the i-th selected image and the specific object on the (i+1)th selected image becomes larger than a reference distance corresponding to a size of the specific object (i denotes an integer in a range from one to (p−1)).
A third image sensing apparatus according to the present invention includes an imaging unit which outputs image data of images obtained by photography, a sequential photography control unit which controls the imaging unit to perform sequential photography of a plurality of target images including a specific object as a subject, and an object characteristic deriving unit which detects a moving speed of the specific object on the basis of image data output from the imaging unit before the plurality of target images are photographed. The sequential photography control unit sets a sequential photography interval of the plurality of target images in accordance with the detected moving speed.
A second image processing apparatus according to the present invention includes an image selection unit which selects p selected images from m input images obtained by sequential photography and including a specific object as a subject (m and p denote integers of two or larger, and m>p holds), and an object detection unit which detects a position and a size of the specific object on each input image via a tracking process for tracking the specific object on the m input images on the basis of image data of each input image. The image selection unit selects the p selected images, including the i-th and the (i+1)th selected images, so that a distance between the specific object on the i-th selected image and the specific object on the (i+1)th selected image based on a detection result of position by the object detection unit is larger than a reference distance corresponding to the size of the specific object on the i-th and the (i+1)th selected images based on a detection result of size by the object detection unit (i denotes an integer in a range from one to (p−1)).
Meanings and effects of the present invention will be apparent from the following description of embodiments. However, the embodiments described below are merely examples of the present invention, and meanings of the present invention and terms of elements thereof are not limited to those in the embodiments described below.
Hereinafter, embodiments of the present invention will be described with reference to attached drawings. In the referred drawings, the same portion is denoted by the same numeral or symbol, so that overlapping description of the same portion is omitted as a rule.
A first embodiment of the present invention will be described.
The imaging unit 11 is equipped with an image sensor 33 as well as an optical system, an aperture stop and a driver that are not shown. The image sensor 33 is constituted of a plurality of light receiving pixels arranged in the horizontal and the vertical directions. The image sensor 33 is a solid-state image sensor constituted of a charge coupled device (CCD), a complementary metal oxide semiconductor (CMOS) image sensor or the like. Each light receiving pixel of the image sensor 33 performs photoelectric conversion of an optical image of a subject entering through the optical system and the aperture stop, and an electric signal obtained by the photoelectric conversion is output to an AFE (analog front end) 12. Individual lenses constituting the optical system form an optical image of the subject on the image sensor 33.
The AFE 12 amplifies an analog signal output from the image sensor 33 (each light receiving pixel), and converts the amplified analog signal into a digital signal, which is output to a video signal processing unit 13 from the AFE 12. An amplification degree of the signal amplification in the AFE 12 is controlled by a CPU (central processing unit) 23. The video signal processing unit 13 performs necessary image processing on an image expressed by the output signal of the AFE 12, so as to generate a video signal of the image after the image processing. A microphone 14 converts sounds around the image sensing apparatus 1 into an analog sound signal, and a sound signal processing unit 15 converts the analog sound signal into a digital sound signal.
A compression processing unit 16 compresses the video signal from the video signal processing unit 13 and the sound signal from the sound signal processing unit 15 by using a predetermined compression method. An internal memory 17 is constituted of a dynamic random access memory (DRAM) or the like for temporarily storing various data. An external memory 18 as a recording medium is a nonvolatile memory such as a semiconductor memory or a magnetic disk, which records the video signal and the sound signal after the compression process performed by the compression processing unit 16, in association with each other.
An expansion processing unit 19 expands the compressed video signal and sound signal read from the external memory 18. The video signal after the expansion process performed by the expansion processing unit 19 or the video signal from the video signal processing unit 13 is sent via a display processing unit 20 to the display unit 27 constituted of a liquid crystal display or the like and is displayed as an image. In addition, the sound signal after the expansion process performed by the expansion processing unit 19 is sent via a sound output circuit 21 to the speaker 28 and is output as sounds.
A timing generator (TG) 22 generates a timing control signal for controlling timings of operations in the entire image sensing apparatus 1, and the generated timing control signal is imparted to individual units in the image sensing apparatus 1. The timing control signal includes a vertical synchronizing signal Vsync and a horizontal synchronizing signal Hsync. The CPU 23 integrally controls operations of individual units in the image sensing apparatus 1. An operating unit 26 includes a record button 26a for instructing start and stop of taking and recording a moving image, a shutter button 26b for instructing to take and record a still image, an operation key 26c and the like, for receiving various operations performed by a user. The contents of operation to the operating unit 26 are transmitted to the CPU 23.
Operation modes of the image sensing apparatus 1 include a photography mode in which images (still images or moving images) can be taken and recorded, and a reproduction mode in which images (still images or moving images) recorded in the external memory 18 are reproduced and displayed on the display unit 27. In accordance with the operation to the operation key 26c, a transition between modes is performed. The image sensing apparatus 1 operating in the reproduction mode functions as an image reproduction apparatus.
In the photography mode, photography of a subject is performed sequentially so that taken images of the subject are sequentially obtained. The digital video signal expressing an image is also referred to as image data.
Note that compression and expansion of the image data are not relevant to the essence of the present invention. Therefore, in the following description, compression and expansion of the image data are ignored (i.e., for example, recording of compressed image data is simply referred to as recording of image data). Further, in this specification, image data of a certain image may be simply referred to as an image.
As illustrated in
As one type of the photography mode of the image sensing apparatus 1, there is a special sequential photography mode. In the special sequential photography mode, as illustrated in
The user can set the operation mode of the image sensing apparatus 1 to the special sequential photography mode by performing a predetermined operation to the operating unit 26. Hereinafter, in the first embodiment, an operation of the image sensing apparatus 1 in the special sequential photography mode will be described.
Target images taken in the first, second, . . . , and p-th order among the p target images are denoted by symbols In, In+1, . . . , and In+p−1, respectively (n is an integer). A taken image obtained by photography before taking the first target image In is referred to as a preview image. Preview images are taken sequentially at a constant frame rate (e.g., 60 frames per second (fps)). Symbols I1 to In−1 are assigned to the preview image sequence. As illustrated in
The tracking process unit 51 performs a tracking process for tracking a noted object on an input moving image on the basis of image data of the input moving image. Here, the input moving image means a moving image constituted of the preview image sequence including the preview images I1 to In−1 and the target image sequence including the target images In to In+p−1. The noted object is a subject noted by the image sensing apparatus 1 when the input moving image is taken. The noted object to be tracked in the tracking process is referred to as a tracking target in the following description.
The user can specify the tracking target. For instance, the display unit 27 is equipped with a so-called touch panel function. Further, when the preview image is displayed on the display screen of the display unit 27, the user touches a display region in which the noted object is displayed on the display screen, so that the noted object is set as the tracking target. Alternatively, for example, the user can specify the tracking target also by a predetermined operation to the operating unit 26. Further, alternatively, it is possible that the image sensing apparatus 1 automatically sets the tracking target by using a face recognition process. Specifically, for example, a face region that is a region including a human face is extracted from the preview image on the basis of image data of the preview image, and then it is checked by the face recognition process whether or not a face included in the face region matches a face of a person enrolled in advance. If matching is confirmed, the person having the face included in the face region may be set as the tracking target.
Further, alternatively, it is possible to set the moving object on the preview image sequence automatically to the tracking target. In this case, a known method may be used so as to extract the moving object to be set as the tracking target from an optical flow between two temporally neighboring preview images. The optical flow is a bundle of motion vectors indicating direction and amplitude of a movement of an object on an image.
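As a minimal sketch of such extraction (assuming, for illustration only, OpenCV's Farneback dense optical flow as the known method; the function name, threshold and centroid rule below are not part of the embodiment), the moving object can be located as the centroid of the pixels whose motion vectors have a large amplitude:

```python
import cv2
import numpy as np

def detect_moving_object(prev_gray, curr_gray, mag_thresh=2.0):
    """Locate a moving object from the optical flow between two temporally
    neighboring preview images (8-bit grayscale, same size).
    Returns the (x, y) centroid of the moving region, or None."""
    # Dense optical flow: one (dx, dy) motion vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)   # amplitude of each motion vector
    moving = magnitude > mag_thresh            # pixels moving significantly
    if not moving.any():
        return None                            # no moving object found
    ys, xs = np.nonzero(moving)
    return float(xs.mean()), float(ys.mean())  # centroid of the moving region
```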
For convenience of description, it is supposed that the tracking target is set on the preview image I1 in the following description. After the tracking target is set, the position and size of the tracking target are sequentially detected on the preview images and the target images in the tracking process on the basis of image data of the input moving image. Actually, an image region in which image data indicating the tracking target exists is set as the tracking target region in each preview image and each target image, and a center position and a size of the tracking target region are detected as the position and size of the tracking target. The image in the tracking target region set in the preview image is a partial image of the preview image (the same is true for the target image and the like). A size of the tracking target region detected as the size of the tracking target can be expressed by the number of pixels belonging to the tracking target region. Note that it is possible to replace the term “center position” in the description of each embodiment of the present invention with “barycenter position”.
The tracking process unit 51 outputs tracking result information including information indicating the position and size of the tracking target in each preview image and each target image. It is supposed that a shape of the tracking target region is also defined by the tracking result information. For instance, although it is different from the situation illustrated in
The tracking process between the first and the second images to be calculated can be performed as follows. Here, the first image to be calculated means a preview image or a target image in which the position and size of the tracking target are already detected. The second image to be calculated means a preview image or a target image in which the position and size of the tracking target are to be detected. The second image to be calculated is usually an image that is taken after the first image to be calculated.
For instance, the tracking process unit 51 can perform the tracking process on the basis of image characteristics of the tracking target. The image characteristics include luminance information and color information. More specifically, for example, a tracking frame that is estimated to have the same order of size as a size of the tracking target region is set in the second image to be calculated, and a similarity evaluation between image characteristics of an image in the tracking frame on the second image to be calculated and image characteristics of an image in the tracking target region on the first image to be calculated is performed while changing a position of the tracking frame in a search region. Then, it is decided that the center position of the tracking target region in the second image to be calculated exists at the center position of the tracking frame having the maximum similarity. The search region with respect to the second image to be calculated is set on the basis of a position of the tracking target in the first image to be calculated.
After the center position of the tracking target region in the second image to be calculated is determined, a closed region enclosed by an edge including the center position can be extracted as the tracking target region in the second image to be calculated by using a known contour extraction process or the like. Alternatively, an approximation of the closed region may be performed by a region having a simple figure shape (such as a rectangle or an ellipse) so as to extract the same as the tracking target region. In the following description, it is supposed that the tracking target is a person and that the approximation of the tracking target region is performed by an ellipse region including a body and a head of the person as illustrated in
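The search described above may be sketched as follows. The embodiment leaves the concrete similarity measure open (only luminance information and color information are named), so a coarse color histogram compared by histogram intersection is assumed here; the function names and the search radius and step are likewise illustrative.

```python
import numpy as np

def _crop(img, cx, cy, w, h):
    """Crop a w-by-h patch centered at (cx, cy), clipped to the image."""
    H, W = img.shape[:2]
    x0, y0 = max(cx - w // 2, 0), max(cy - h // 2, 0)
    return img[y0:min(y0 + h, H), x0:min(x0 + w, W)]

def _histogram(patch):
    """Coarse RGB histogram standing in for the image characteristics."""
    hist, _ = np.histogramdd(patch.reshape(-1, 3).astype(float),
                             bins=(8, 8, 8), range=((0, 256),) * 3)
    return hist / max(hist.sum(), 1.0)

def track_center(first_img, second_img, prev_center, frame_wh,
                 radius=32, step=4):
    """Estimate the tracking-target center on the second image to be
    calculated by sliding a tracking frame over a search region centered
    on the position found in the first image to be calculated."""
    w, h = frame_wh   # tracking frame sized like the tracking target region
    ref = _histogram(_crop(first_img, prev_center[0], prev_center[1], w, h))
    best, best_sim = prev_center, -1.0
    for dy in range(-radius, radius + 1, step):
        for dx in range(-radius, radius + 1, step):
            cx, cy = prev_center[0] + dx, prev_center[1] + dy
            cand = _histogram(_crop(second_img, cx, cy, w, h))
            sim = np.minimum(ref, cand).sum()   # histogram intersection
            if sim > best_sim:                  # keep the position of the
                best, best_sim = (cx, cy), sim  # maximum-similarity frame
    return best
```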
Note that it is possible to adopt any other method different from the above-mentioned method as the method of detecting position and size of the tracking target on the image (e.g., it is possible to adopt a method described in JP-A-2004-94680 or a method described in JP-A-2009-38777).
The tracking target characteristic calculation unit 52 calculates, on the basis of the tracking result information of the tracking process performed on the preview image sequence, moving speed SP of the tracking target on the image space and a subject size (object size) SIZE in accordance with the size of the tracking target on the image space. The moving speed SP functions as an estimated value of the moving speed of the tracking target on the target image sequence, and the subject size SIZE functions as an estimated value of the size of the tracking target on each target image.
The moving speed SP and the subject size SIZE can be calculated on the basis of the tracking result information of two or more preview images, i.e., positions and sizes of the tracking target region on two or more preview images.
A method of calculating the moving speed SP and the subject size SIZE from the tracking result information of two preview images will be described. The two preview images for calculating the moving speed SP and the subject size SIZE are denoted by IA and IB. The preview image IB is a preview image taken at a time as close as possible to a photography time point of the target image In, and the preview image IA is a preview image taken before the preview image IB. For instance, the preview images IA and IB are the preview images In−2 and In−1, respectively. However, it is possible to set the preview images IA and IB to the preview images In−3 and In−1, respectively, or to the preview images In−3 and In−2, respectively, or to other preview images. In the following description, it is supposed that the preview images IA and IB are the preview images In−2 and In−1, respectively, unless otherwise stated.
The moving speed SP can be calculated in accordance with the equation (1) below, from a center position (xA,yA) of the tracking target region on the preview image IA and a center position (xB,yB) of the tracking target region on the preview image IB. As illustrated in
SP = dAB/INTPR    (1)

Here, dAB denotes a distance between the center positions (xA,yA) and (xB,yB) on the image space, and INTPR denotes the photography interval of the preview images.
On the other hand, the subject size SIZE can be calculated from a specific direction size LA of the tracking target region in the preview image IA and a specific direction size LB of the tracking target region in the preview image IB. For instance, an average value of the specific direction sizes LA and LB can be determined as the subject size SIZE.
A method of calculating the moving speed SP and the subject size SIZE by using the tracking result information of the preview images IA and IB that are the preview images In−2 and In−1, as well as the tracking result information of the preview image IC that is the preview image In−3, will be described. In this case, the moving speed SP can be calculated in accordance with the equation SP=(dCA+dAB)/(2·INTPR). Here, dCA denotes a distance between the center positions (xC,yC) and (xA,yA) on the image space, and the center position (xC,yC) is a center position of the tracking target region in the preview image IC. In addition, positions of two intersection points at which the straight line connecting the center positions (xC,yC) and (xA,yA) crosses the contour of the tracking target region 330C on the preview image IC are specified, and a distance between the two intersection points is determined as the specific direction size LC, so that an average value of the specific direction sizes LA, LB and LC can be determined as the subject size SIZE. Also in the case where the moving speed SP and the subject size SIZE are calculated from the tracking result information of four or more preview images, they can be calculated in the same manner.
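In code, the calculation generalizes naturally to any number of preview images: the moving speed SP is the average center displacement per photography interval (reducing to equation (1) for two images and to SP=(dCA+dAB)/(2·INTPR) for three), and the subject size SIZE is the average of the specific direction sizes. The following Python sketch assumes this; the function name and argument layout are illustrative.

```python
import math

def moving_speed_and_size(centers, sizes, INT_PR):
    """Moving speed SP and subject size SIZE from the tracking result
    information of two or more preview images. `centers` holds the
    tracking-target center positions [(x, y), ...] in photography order,
    `sizes` the corresponding specific direction sizes, and INT_PR is the
    photography interval of the preview images."""
    # Average displacement per interval; for two images this is eq. (1):
    # SP = dAB / INT_PR.
    total = sum(math.dist(centers[k], centers[k + 1])
                for k in range(len(centers) - 1))
    SP = total / ((len(centers) - 1) * INT_PR)
    SIZE = sum(sizes) / len(sizes)    # average specific direction size
    return SP, SIZE
```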
The moving speed SP (an average moving speed of the tracking target) and the subject size SIZE (an average size of the tracking target) determined by the method described above are sent to the sequential photography control unit 53.
The sequential photography control unit 53 sets the sequential photography interval INTTGT in photography of the target image sequence in accordance with the equation, (sequential photography interval INTTGT)=(target subject interval α)/(moving speed SP), more specifically, in accordance with the equation (2) below.
INTTGT = α/SP    (2)
The sequential photography interval INTTGT means an interval between photography time points of two temporally neighboring target images (e.g., In and In+1). The photography time point of the target image In means, in a strict sense, for example, a start time or a mid time of exposure period of the target image In (the same is true for the target image In+1 and the like).
The target subject interval α indicates a target value of a distance between center positions of tracking target regions on the two temporally neighboring target images. Specifically, for example, a target value of a distance between the center position (xn,yn) of the tracking target region on the target image In and the center position (xn+1,yn+1) of the tracking target region on the target image In+1 is the target subject interval α. The sequential photography control unit 53 determines the target subject interval α in accordance with the subject size SIZE. For instance, the target subject interval α is determined from the subject size SIZE so that “α=SIZE” or “α=k0×SIZE” or “α=SIZE+k1” is satisfied. Symbols k0 and k1 are predetermined coefficients. However, it is possible to determine the target subject interval α in accordance with a user's instruction. In addition, it is possible that the user determines the values of the coefficients k0 and k1.
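A sketch of the interval setting, combining equation (2) with the determination α = k0×SIZE described above (the default value of k0 and the optional user-supplied α are illustrative choices):

```python
def sequential_photography_interval(SP, SIZE, k0=1.0, alpha=None):
    """Sequential photography interval INT_TGT per equation (2).
    The target subject interval alpha defaults to k0 x SIZE, but it may
    instead be supplied directly in accordance with a user's instruction."""
    if alpha is None:
        alpha = k0 * SIZE      # e.g. alpha = k0 x SIZE
    return alpha / SP          # INT_TGT = alpha / SP   ... (2)
```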
The sequential photography control unit 53 controls the imaging unit 11 in cooperation with the TG 22 (see
Therefore, the sequential photography possibility decision unit 55 (see
The position 350 is a position shifted from the center position of the tracking target region on the preview image In−1 in the movement direction 360 by a distance (SP×INTPR). Here, however, it is supposed that a time difference between photography time points of the preview image In−1 and the target image In is equal to the photography interval INTPR of the preview images.
The sequential photography possibility decision unit 55 calculates a movable distance DISAL of the tracking target on the target image sequence on the assumption that the tracking target moves in the movement direction 360 at the moving speed SP on the target image sequence during a photography period of the target image sequence. A line segment 361 extending from the position 350 in the movement direction 360 is defined, and an intersection point 362 of the line segment 361 and the contour of the virtual target image In′ is determined. A distance between the position 350 and the intersection point 362 is calculated as the movable distance DISAL.
On the other hand, the sequential photography possibility decision unit 55 estimates a moving distance DISEST of the tracking target on the image space (and on the target image sequence) during the photography period of the p target images.
The positions 351, 352, 353 and 354 are estimated center positions of the tracking target region on the target images In+1, In+2, In+3 and In+4, respectively. The position 351 is a position shifted from the position 350 in the movement direction 360 by the target subject interval α. The positions 352, 353 and 354 are positions shifted from the position 350 in the movement direction 360 by (2×α), (3×α) and (4×α), respectively.
The sequential photography possibility decision unit 55 estimates a distance between the positions 350 and 354 as the moving distance DISEST. Specifically, the moving distance DISEST is estimated on the assumption that the tracking target moves in the movement direction 360 by the moving speed SP on the target image sequence during the photography period of the target image sequence. Since p is five, an estimation equation (3) of the moving distance DISEST is as follows (see the above-mentioned equation (2)).
DISEST = (4×α) = α×(p−1) = INTTGT×SP×(p−1)    (3)
The sequential photography possibility decision unit 55 decides that the sequential photography of the p target images can be performed only in the case where it is estimated that the entire tracking target region is included in each of the p (five in this example) target images. Otherwise, it is decided that the sequential photography of the p target images cannot be performed. As understood also from
DISAL ≧ DISEST + SIZE/2    (4)

DISAL ≧ DISEST + SIZE/2 + Δ    (5)
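A sketch of the decision follows. The movable distance DISAL is computed as the distance from the estimated position on the first target image to the image contour along the movement direction (the line segment 361 and intersection point 362 described above); the unit-vector representation of the movement direction 360 and the argument names are assumptions of this sketch. With Δ = 0 the test is inequality (4); a positive Δ gives inequality (5).

```python
def can_perform_sequential_photography(center, direction, SP, INT_PR,
                                       alpha, SIZE, p, frame_wh, delta=0.0):
    """Decide whether sequential photography of p target images is possible.
    `center` is the tracking-target center on the latest preview image,
    `direction` a unit vector of the movement direction, and `frame_wh`
    the width and height of a target image."""
    W, H = frame_wh
    # Estimated center on the first target image (position 350 above):
    # shifted by SP x INT_PR along the movement direction.
    x0 = center[0] + direction[0] * SP * INT_PR
    y0 = center[1] + direction[1] * SP * INT_PR
    # Movable distance DIS_AL: distance from (x0, y0) to the image contour
    # along the movement direction.
    ts = []
    if direction[0] > 0: ts.append((W - x0) / direction[0])
    if direction[0] < 0: ts.append(-x0 / direction[0])
    if direction[1] > 0: ts.append((H - y0) / direction[1])
    if direction[1] < 0: ts.append(-y0 / direction[1])
    DIS_AL = min(ts) if ts else 0.0
    # Estimated moving distance over the p target images, equation (3).
    DIS_EST = alpha * (p - 1)
    # Inequality (5); with delta = 0 this reduces to inequality (4).
    return DIS_AL >= DIS_EST + SIZE / 2 + delta
```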
If the sequential photography possibility decision unit 55 decides that the sequential photography cannot be performed, the notification control unit 56 (
In the following description, it is supposed that the sequential photography possibility decision unit 55 decides that the sequential photography of p target images can be performed, and that the entire tracking target region (i.e., the entire image of the tracking target) is included in each of the actually taken p target images, unless otherwise stated.
The stroboscopic image generation unit 54 generates the stroboscopic image by combining images in the tracking target regions of the target images In to In+p−1 on the basis of tracking result information for the target images In to In+p−1 and image data of the target images In to In+p−1. The generated stroboscopic image can be recorded in the external memory 18. Note that the target images In to In+p−1 can also be recorded in the external memory 18.
Specifically, images in the tracking target regions on the target images In+1 to In+p−1 are extracted from the target images In+1 to In+p−1 on the basis of the tracking result information for the target images In+1 to In+p−1, and the images extracted from the target images In+1, In+2, . . . In+p−1 are sequentially overwritten on the target image In, so that a stroboscopic image like the stroboscopic image 315 illustrated in
Alternatively, it is possible to extract images in the tracking target regions on the target images In to In+p−1 from the target images In to In+p−1 on the basis of the tracking result information for the target images In to In+p−1, and to prepare a background image such as a white image or a black image so as to sequentially overwrite the images extracted from the target images In, In+1, In+2, . . . In+p−1 on the background image for generating the stroboscopic image.
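Both generation variants described above can be sketched as one compositing routine. Here each tracking target region is represented as a boolean mask derived from the tracking result information; the mask representation and function name are assumptions of this sketch.

```python
import numpy as np

def generate_stroboscopic_image(target_images, masks, background=None):
    """Combine the images in the tracking target regions of the target
    images into a stroboscopic still image. `masks[k]` is a boolean array
    marking the tracking target region on target_images[k]."""
    if background is None:
        # First variant: use the first target image as the base, so only
        # the regions of the second and later target images are pasted.
        strobe, start = target_images[0].copy(), 1
    else:
        # Second variant: paste every region onto a prepared background
        # image (e.g., a white image or a black image).
        strobe, start = background.copy(), 0
    for img, mask in zip(target_images[start:], masks[start:]):
        strobe[mask] = img[mask]   # sequentially overwrite the region
    return strobe
```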
It is also possible to generate the stroboscopic image without using the tracking result information for the target images. For instance, when p is five, images in the regions 371 to 374 illustrated in
<<Operational Flow>>
Next, with reference to
After the tracking process is started, it is checked in Step S13 whether or not the shutter button 26b is in the half-pressed state. When it is checked that the shutter button 26b is in the half-pressed state, the moving speed SP and the subject size SIZE are calculated on the basis of the latest tracking result information (tracking result information of two or more preview images) obtained at that time point, and then setting of the sequential photography interval INTTGT and decision of the sequential photography possibility by the sequential photography possibility decision unit 55 are performed (Steps S14 and S15).
When the sequential photography possibility decision unit 55 decides that the sequential photography can be performed (Y in Step S16), the notification control unit 56 notifies information corresponding to the sequential photography interval INTTGT to the outside of the image sensing apparatus 1 in Step S17. This notification is performed by using visual or auditory means so that the user can recognize the information. Specifically, for example, an intermittent electronic sound is output from the speaker 28. When the sequential photography interval INTTGT is relatively short, an output interval of the electronic sound is set to a relatively short value (e.g., the sound “pi-pi-pi” is output from the speaker 28 at 0.5-second intervals). When the sequential photography interval INTTGT is relatively long, an output interval of the electronic sound is set to a relatively long value (e.g., the sound “pi-pi-pi” is output from the speaker 28 at 1.5-second intervals). It is possible to display an icon or the like corresponding to the sequential photography interval INTTGT on the display unit 27. The notification in Step S17 enables the user to recognize a sequential photography speed of the sequential photography that will be performed after that and to estimate the overall photography time of the target image sequence. As a result, it is possible to avoid a situation where the user changes the photography direction or turns off the power of the image sensing apparatus 1 during the photographing operation of the target image sequence under the mistaken belief that the photography of the target image sequence is already finished.
After the notification in Step S17, it is checked whether or not the shutter button 26b is in a fully-pressed state in Step S18. If the shutter button 26b is not in the fully-pressed state, the process goes back to Step S12. If the shutter button 26b is in the fully-pressed state, the sequential photography of p target images is performed in Step S19. Further, also in the case where it is checked during the notification in Step S17 that the shutter button 26b is in the fully-pressed state, the process goes promptly to Step S19 in which the sequential photography of p target images is performed.
As the sequential photography interval INTTGT of the p target images that are taken sequentially in Step S19, the one set in Step S14 can be used. However, it is possible to recalculate the moving speed SP and the subject size SIZE and to reset the sequential photography interval INTTGT by using the tracking result information for a plurality of preview images (e.g., the preview images In−2 and In−1) including the latest preview image obtained at the time point when the fully-pressed state of the shutter button 26b is confirmed, and to perform the sequential photography in Step S19 in accordance with the reset sequential photography interval INTTGT.
In Step S20 following the Step S19, the stroboscopic image is generated from the p target images obtained in Step S19.
If the sequential photography possibility decision unit 55 decides that the sequential photography cannot be performed (N in Step S16), the process goes to Step S21 in which a warning display is performed. Specifically, for example, in Step S21, as illustrated in
In Step S22 following the Step S21, it is checked whether or not the shutter button 26b is maintained in the half-pressed state. If the half-pressed state of the shutter button 26b is canceled, the process goes back to Step S12. If the half-pressed state of the shutter button 26b is not canceled, the process goes to Step S17. When the process goes from Step S22 to Step S17, and then the shutter button 26b becomes the fully-pressed state, the sequential photography of p target images is performed. However, in this case, there is a case where the tracking target is not included in a target image that is taken at a later timing (e.g., the target image In+p−1). Therefore, the number of tracking targets on the stroboscopic image generated in Step S20 becomes smaller than p with high probability.
According to this embodiment, the sequential photography interval is optimized so that the tracking target is arranged at a desired position interval in accordance with a moving speed of the tracking target. Specifically, it is possible to adjust the position interval between tracking targets at different time points to a desired value. As a result, for example, it is possible to avoid overlapping of tracking targets at different time points on the stroboscopic image (see
Further, the stroboscopic image is generated from the p target images in this embodiment, but the generation of the stroboscopic image is not essential. The p target images have a function as so-called frame advance images (top forwarding images) noting the tracking target. In the case where the p target images are noted, the action and the effect of adjusting the position interval between tracking targets at different time points to a desired one are realized.
A second embodiment of the present invention will be described. An image sensing apparatus according to the second embodiment is also the image sensing apparatus 1 illustrated in
The tracking process unit 61 illustrated in
The tracking process unit 61 performs the tracking process on each frame image in accordance with the method described above in the first embodiment after the tracking target is set, so as to generate the tracking result information including information indicating the position and size of the tracking target region on each frame image. The generation method of the tracking result information is the same as that described above in the first embodiment. The tracking result information generated by the tracking process unit 61 is sent to the image selection unit 62 and the stroboscopic image generation unit 63.
The image selection unit 62 selects and extracts a plurality of frame images from the frame image sequence as a plurality of selected images on the basis of the tracking result information from the tracking process unit 61, so as to send image data of each selected image to the stroboscopic image generation unit 63. The number of the selected images is smaller than the number of frame images forming the frame image sequence.
The stroboscopic image generation unit 63 generates the stroboscopic image by combining images in the tracking target regions of the selected images based on the tracking result information for each selected image and image data of each selected image. The generated stroboscopic image can be recorded in the external memory 18. The generation method of the stroboscopic image by the stroboscopic image generation unit 63 is the same as that of the stroboscopic image generation unit 54 according to the first embodiment, except that the images serving as a base of the stroboscopic image are named differently between the stroboscopic image generation units 63 and 54.
Now, supposing that the frame image sequence read out from the external memory 18 is constituted of ten frame images FI1 to FI10 illustrated in
In the special reproduction mode, the first frame image FI1 is displayed first on the display unit 27, and in this state of the display, a user's operation of setting the tracking target is received. For instance, as illustrated in
The tracking process unit 61 derives a position and size of the tracking target region on each frame image based on image data of the frame images FI1 to FI10. Center positions of the tracking target regions on the frame images FIi and FIj are denoted by (xi,yi) and (xj,yj), respectively (i and j denote integers, and i is not equal to j). In addition, as illustrated in
The image selection unit 62 first extracts the first frame image FI1 as a first selected image. Frame images that are taken after the frame image FI1 as the first selected image are candidates of a second selected image. In order to extract the second selected image, the image selection unit 62 substitutes integers in the range from 2 to 10 for the variable j one by one so as to compare the distance between tracking targets d[1,j] with the target subject interval β. Then, among one or more frame images satisfying the inequality d[1,j]>β, a frame image FIj that is taken after the first selected image and at a time closest to the first selected image is selected as the second selected image. Here, it is supposed that the inequality d[1,j]>β is not satisfied whenever j is two or three, while the inequality d[1,j]>β is satisfied whenever j is an integer in the range from four to ten. Then, the frame image FI4 is extracted as the second selected image.
The target subject interval β means a target value of the distance between center positions of the tracking target regions on two temporally neighboring selected images. Specifically, for example, a target value of the distance between center positions of the tracking target regions on the i-th and (i+1)th selected images is the target subject interval β. The image selection unit 62 can determine the target subject interval β, which may be said to be a reference distance, in accordance with the subject size SIZE′. As the subject size SIZE′ in the case where it is decided whether or not the inequality d[i,j]>β is satisfied, an average value of the specific direction sizes Li and Lj can be used. However, it is possible to determine the subject size SIZE′ on the basis of three or more specific direction sizes. Specifically, for example, an average value of the specific direction sizes L1 to L10 may be substituted for the subject size SIZE′.
The image selection unit 62 determines the target subject interval β from the subject size SIZE′ so that β=SIZE′ is satisfied, or β=k0×SIZE′ is satisfied, or β=SIZE′+k1 is satisfied. Symbols k0 and k1 are predetermined coefficients. However, it is possible to determine the target subject interval β in accordance with a user's instruction. In addition, the values of the coefficients k0 and k1 may be determined by the user.
In this way, the extraction process of selected images is performed so that the distance between tracking targets (in this example, d[1,4]) on the first and the second selected images, based on the detection result of position of the tracking target by the tracking process unit 61, is larger than the target subject interval β serving as a reference distance (e.g., an average value of L1 and L4) based on the detection result of size of the tracking target by the tracking process unit 61. The same is true for the third and later selected images to be extracted.
Specifically, frame images taken after the frame image FI4 as the second selected image are candidates for the third selected image. In order to extract the third selected image, the image selection unit 62 substitutes integers in the range from five to ten for the variable j one by one so as to compare the distance between tracking targets d[4,j] with the target subject interval β. Then, among one or more frame images satisfying the inequality d[4,j]>β, a frame image FIj that is taken after the second selected image and at a time closest to the second selected image is selected as the third selected image. Here, it is supposed that the inequality d[4,j]>β is not satisfied whenever j is within the range from 5 to 8, while the inequality d[4,j]>β is satisfied whenever j is nine or ten. Then, the frame image FI9 is extracted as the third selected image.
Frame images taken after the frame image FI9 as the third selected image are candidates for the fourth selected image. In this example, only the frame image FI10 is a candidate for the fourth selected image. In order to extract the fourth selected image, the image selection unit 62 substitutes 10 for the variable j so as to compare the distance between tracking targets d[9,j] and the target subject interval β. Then, if the inequality d[9,j]>β is satisfied, the frame image FI10 is extracted as the fourth selected image. On the other hand, if the inequality d[9,j]>β is not satisfied, the extraction process of selected images is completed without extracting the frame image FI10 as the fourth selected image. Here, it is supposed that the inequality d[9,j]>β is not satisfied when the variable j is 10. Then, eventually, three selected images including the frame images FI1, FI4 and FI9 are extracted.
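The extraction process amounts to the following greedy selection (a sketch; the k0-based form of β and the function name are illustrative). Given the comparison results stated above, it returns the indices of the frame images FI1, FI4 and FI9 for the ten-frame example.

```python
import math

def select_images(centers, sizes, k0=1.0, needed=None):
    """Greedy extraction of selected images from a frame image sequence.
    `centers[j]` and `sizes[j]` are the tracking-target center and the
    specific direction size on frame image FI(j+1). FI1 is always the
    first selected image; each later frame image is selected as soon as
    its distance d[i, j] from the last selected image exceeds the target
    subject interval beta."""
    selected = [0]                 # index 0 corresponds to FI1
    i = 0
    for j in range(1, len(centers)):
        d_ij = math.dist(centers[i], centers[j])
        beta = k0 * (sizes[i] + sizes[j]) / 2.0   # SIZE' = average of Li, Lj
        if d_ij > beta:
            selected.append(j)     # FI(j+1) becomes the next selected image
            i = j
            if needed is not None and len(selected) == needed:
                break              # predetermined necessary number reached
    return selected
```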
<<Operational Flow>>
Next, with reference to
In the next Step S65, on the basis of the tracking result information from the tracking process unit 61, the above-mentioned comparison between the distance between tracking targets (corresponding to d[i,j]) and the target subject interval β is performed. Then, if the former is larger than the latter (β), the frame image FIn is extracted as the selected image in Step S66. Otherwise, the process goes directly to Step S68. In Step S67 following the Step S66, it is checked whether or not the number of extracted selected images is the same as a predetermined necessary number. If the numbers are identical, the extraction of selected images is finished at that time point. On the contrary, if the numbers are not identical, the process goes from Step S67 to Step S68. The user can specify the necessary number described above.
In Step S68, the variable n is compared with a total number of frame images forming the frame image sequence (ten in the example illustrated in
According to this embodiment, it is possible to realize extraction of the selected image sequence and generation of the stroboscopic image, in which the tracking targets are arranged at a desired position interval. Specifically, it is possible to adjust the position interval between tracking targets at different time points to a desired one. As a result, for example, overlapping of images of tracking targets at different time points on the stroboscopic image can be avoided (see
Further, in this embodiment, unlike the method described in JP-A-2008-147721, the extraction of selected images is performed by using the tracking process. Therefore, a so-called background image in which no tracking target exists is not necessary, and extraction of a desired selected image sequence and generation of the stroboscopic image can be performed even if the background image does not exist. In addition, it is possible to set the target subject interval β to be smaller than the subject size SIZE′ in accordance with a user's request. In this case, it is possible to generate a stroboscopic image on which the images of the tracking targets at different time points overlap one another a little (such generation of the stroboscopic image cannot be performed by the method described in JP-A-2008-147721).
Further, although the stroboscopic image is generated from the plurality of selected images in this embodiment, generation of the stroboscopic image is not essential. The plurality of selected images have a function as so-called frame advance images (top forwarding images) noting the tracking target. Also in the case where the plurality of selected images are noted, the action and the effect of adjusting the position interval between tracking targets at different time points to a desired one are realized.
A third embodiment of the present invention will be described. The plurality of taken images (images 311 to 314 in the example illustrated in
It is supposed that the moving image obtained by photography using the imaging unit 11 includes images I1, I2, I3, . . . , In, In+1, In+2, and so on (n denotes an integer). In the first embodiment, the images In to In+p−1 are regarded as the target images, and the image In−1 and images taken before the same are regarded as preview images (see
The tracking process unit 151, the tracking target characteristic calculation unit 152, the photography control unit 153 and the stroboscopic image generation unit 154 illustrated in
Specifically, the tracking process unit 151 performs the tracking process for tracking the tracking target on the moving image 600 on the basis of image data of the moving image 600, so as to output the tracking result information including information indicating a position and size of the tracking target in each frame image.
The tracking target characteristic calculation unit 152 calculates, on the basis of the tracking result information of the tracking process performed on the non-target frame image sequence, moving speed SP of the tracking target on the image space and a subject size (object size) SIZE in accordance with the size of the tracking target on the image space. The moving speed SP functions as an estimated value of the moving speed of the tracking target on the target frame image sequence, and the subject size SIZE functions as an estimated value of the size of the tracking target on each target frame image. The moving speed SP and the subject size SIZE can be calculated on the basis of positions and sizes of the tracking target regions of two or more non-target frame images. This calculation method is the same as the method described above in the first embodiment, i.e., the method of calculating the moving speed SP and the subject size SIZE on the basis of positions and sizes of the tracking target regions of two or more preview images. For instance, when the two non-target frame images are denoted by IA and IB, the moving speed SP and the subject size SIZE can be calculated from the positions and sizes of the tracking target regions on the non-target frame images IA and IB (see
The photography control unit 153 determines a value of INTTGT in accordance with the equation (2) as described above in the first embodiment on the basis of the moving speed SP calculated by the tracking target characteristic calculation unit 152. In this case, as described above in the first embodiment, the target subject interval α in the equation (2) can be determined on the basis of the subject size SIZE calculated by the tracking target characteristic calculation unit 152 or on the basis of a user's instruction. In the first embodiment, the physical quantity represented by INTTGT is referred to as the sequential photography interval, but in this embodiment the physical quantity represented by INTTGT is referred to as the photography interval. The photography interval INTTGT means an interval between photography time points of temporally neighboring two target frame images (e.g., In and In+1). The photography time point of the target frame image In means, in a strict sense, for example, a start time or a mid time of exposure period of the target frame image In (the same is true for any other frame images).
The photography control unit 153 sets the photography interval INTTGT and then controls the imaging unit 11 together with the TG 22 (see
The stroboscopic image generation unit 154 generates a stroboscopic image by combining images in the tracking target regions of the target frame images In to In+p−1 on the basis of the tracking result information for the target frame images In to In+p−1 and image data of the target frame images In to In+p−1. The generated stroboscopic image can be recorded in the external memory 18. The generation method of the stroboscopic image on the basis of the images In to In+p−1 is as described above in the first embodiment. Note that any stroboscopic image described above is a still image. To distinguish the stroboscopic image as a still image from the stroboscopic image of a moving image format described below, the stroboscopic image as a still image is also referred to as a stroboscopic still image, if necessary in the following description.
The stroboscopic image generation unit 154 can also generate a stroboscopic moving image. It is supposed that p is three, and the target frame images In to In+2 are respectively images 611 to 613 illustrated in
With reference to
After starting the tracking process, it is checked in Step S113 whether or not the stroboscopic specifying operation is performed. When it is checked that the stroboscopic specifying operation is performed, the moving speed SP and the subject size SIZE are calculated on the basis of the latest tracking result information obtained at that time point (tracking result information of two or more non-target frame images). Further, the photography interval INTTGT is set by using the moving speed SP and the subject size SIZE, so that the target frame image sequence is photographed (Steps S114 and S115). Specifically, the frame rate (1/INTTGT) for the target frame image sequence is set, and in accordance with the set contents, the frame rate of the imaging unit 11 is actually changed from a reference rate to (1/INTTGT). Then, the target frame images In to In+p−1 are photographed. The reference rate is a frame rate for non-target frame images.
When the photography of the target frame images In to In+p−1 is completed, the frame rate is reset to the reference rate (Step S116). After that, the stroboscopic still image (e.g., stroboscopic still image 633 illustrated in
When the stroboscopic specifying operation is performed, the photography possibility decision unit 155 may perform the photography possibility decision of the target frame image and/or the notification control unit 156 may perform the photography interval notification before (or during) the photography of the target frame image sequence. Specifically, for example, when the stroboscopic specifying operation is performed, the process in Steps S121 to S123 illustrated in
According to this embodiment, the frame rate is optimized so that the tracking targets are arranged at a desired position interval in accordance with the moving speed of the tracking target. Specifically, the position interval of the tracking targets at the different time points is optimized, so that overlapping of tracking targets at different time points on the stroboscopic image can be avoided, for example (see
There are many common features between the first and the third embodiments. In the first embodiment, the target image sequence including p target images is obtained by the sequential photography. In contrast, in the third embodiment, the target frame image sequence including p target frame images is obtained by photography of the moving image 600. The sequential photography control unit 53 in the first embodiment or the photography control unit 153 in the third embodiment (see
Note that the generation of the stroboscopic image is not essential (the same is true in other embodiments described later). The plurality of target frame images (or a plurality of selected images described later) have a function as so-called frame advance images (top forwarding images) noting the tracking target. Also in the case where a plurality of target frame images (or a plurality of selected images described later) are noted, the action and the effect of adjusting the position interval between tracking targets at different time points to a desired one are realized.
In addition, it is possible to set a time length of exposure period of each target frame image (hereinafter referred to as exposure time) on the basis of the moving speed SP calculated by the tracking target characteristic calculation unit 152. Specifically, for example, it is preferred to set the exposure time of each target frame image so that the exposure time of each target frame image decreases along with an increase of the moving speed SP. Thus, it is possible to suppress image blur of the tracking target on each target frame image. This setting operation of the exposure time can be applied also to the first embodiment described above. Specifically, in the first embodiment, it is preferred to set the exposure time of each target image so that the exposure time of each target image decreases along with an increase of the moving speed SP on the basis of the moving speed SP calculated by the tracking target characteristic calculation unit 52.
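For instance, the setting might take the following form; the inverse-proportional relation and the coefficient k are assumptions of this sketch, since the embodiments only require that the exposure time decrease along with an increase of the moving speed SP.

```python
def exposure_time(SP, base_time, k=1.0):
    """Exposure time of each target (frame) image, shortened as the moving
    speed SP of the tracking target increases so as to suppress image blur."""
    if SP <= 0:
        return base_time            # no motion: keep the base exposure
    return min(base_time, k / SP)   # never longer than the base exposure
```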
A fourth embodiment of the present invention will be described. Another method of generating a stroboscopic image from a frame image sequence forming a moving image will be described in a fourth embodiment. The fourth embodiment is an embodiment based on the first and the third embodiment, and the description in the first or the third embodiment can be applied also to this embodiment concerning matters that are not described in particular in the fourth embodiment, as long as no contradiction arises. The following description in the fourth embodiment is a description of a structure of the image sensing apparatus 1 working effectively in the photography mode and an operation of the image sensing apparatus 1 in the photography mode, unless otherwise stated.
Also in the fourth embodiment, it is supposed that the moving image 600 including the frame images I1, I2, I3, . . . In, In+1, In+2, and so on is obtained by photography similarly to the third embodiment.
The tracking process unit 151, the tracking target characteristic calculation unit 152, the photography control unit 153a and the stroboscopic image generation unit 154 in
The target image selection unit 157 determines a value of INTTGT in accordance with the equation (2) described above in the first embodiment on the basis of the moving speed SP calculated by the tracking target characteristic calculation unit 152. In this case, as described above in the first embodiment, the target subject interval α in the equation (2) can be determined on the basis of the subject size SIZE calculated by the tracking target characteristic calculation unit 152 or on the basis of a user's instruction. In the first embodiment, the physical quantity represented by INTTGT is referred to as the sequential photography interval, but in this embodiment the physical quantity represented by INTTGT is referred to as a reference interval. The reference interval INTTGT means an ideal interval between photography time points of temporally neighboring two target frame images (e.g., In and In+3).
Unlike the third embodiment, in the fourth embodiment, the frame rate in the photography of the moving image 600 is fixed to a constant rate. The target image selection unit 157 selects the p target frame images from the target frame image candidates on the basis of the reference interval INTTGT. After this selection, the stroboscopic image generation unit 154 can generate the stroboscopic still image or the stroboscopic moving image on the basis of the p target frame images and the tracking result information at any timing in accordance with the method described above in the third embodiment.
For specific description, it is supposed that the frame rate in the photography of the moving image 600 is fixed to 60 frames per second (fps) and that p is three, and the selection method of the target frame images will be described. In this case, the photography interval between temporally neighboring frame images is 1/60 seconds. As illustrated in
First, the target image selection unit 157 selects the frame image In, which is the first target frame image candidate, as the first target frame image regardless of the reference interval INTTGT. Next, among all the target frame image candidates, the target image selection unit 157 sets the candidate whose photography time point is closest to the time (tO+1×INTTGT) as the second target frame image. Next, among all the target frame image candidates, it sets the candidate whose photography time point is closest to the time (tO+2×INTTGT) as the third target frame image. The same is true for the cases where p is four or larger. In general, among all the target frame image candidates, the target image selection unit 157 sets the candidate whose photography time point is closest to the time (tO+(j−1)×INTTGT) as the j-th target frame image (here, j denotes an integer of two or larger).
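This selection rule can be sketched as follows, assuming the candidates carry their photography time points and tO is the photography time point of the first candidate In (the list layout is an assumption for illustration):

```python
def select_target_frames(candidate_times, t0, int_tgt, p):
    """Select p target frame images from the target frame image candidates.

    candidate_times: photography time points [s] of the candidates in time
    order, with candidate_times[0] == t0 (the photography time of I_n).
    The first candidate is always selected; the j-th target frame image
    (j >= 2) is the candidate whose photography time point is closest to
    t0 + (j - 1) * int_tgt, where int_tgt is the reference interval INT_TGT.
    Returns the indices of the selected candidates.
    """
    selected = [0]  # the first candidate is selected regardless of INT_TGT
    for j in range(2, p + 1):
        ideal = t0 + (j - 1) * int_tgt
        nearest = min(range(len(candidate_times)),
                      key=lambda i: abs(candidate_times[i] - ideal))
        selected.append(nearest)
    return selected
```

With candidate_times at 1/60-second steps, t0 = 0, int_tgt = 1/20 and p = 3, this returns the indices 0, 3 and 6, which matches the example below.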
Therefore, for example, in the case where the frame rate of the moving image 600 is 60 fps, if the reference interval INTTGT is 1/20 seconds, the images In+3 and In+6 are selected as the second and the third target frame images (see
With reference to
After the tracking process is started, it is checked in Step S133 whether or not the stroboscopic specifying operation has been performed. When it is confirmed that the stroboscopic specifying operation has been performed, the moving speed SP and the subject size SIZE are calculated in Step S134 on the basis of the latest tracking result information obtained at that time point (the tracking result information of two or more non-target frame images). Then, the reference interval INTTGT is calculated by using the moving speed SP and the subject size SIZE. The target image selection unit 157 selects the p target frame images from the target frame image candidates by using the reference interval INTTGT, as described above.
The image data of the frame images forming the moving image 600 is recorded in the external memory 18 in time sequence order. In this case, a combining tag is assigned to each target frame image (Step S135). Specifically, for example, a header region of the image file for storing the image data of the moving image 600 may store the combining tag indicating which frame images are the target frame images. When the image file is stored in the external memory 18, the image data of the moving image 600 and the combining tag are associated with each other and recorded in the external memory 18.
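As a rough illustration of this association, the combining tag might be recorded as follows; the JSON side file and the field name are hypothetical stand-ins, since the concrete header format of the image file is not specified here:

```python
import json

def write_combining_tag(header_path, target_frame_indices):
    """Record which frame images of the moving image are target frame images.

    Writes a hypothetical JSON header; the real apparatus stores the
    combining tag in the header region of the image file in the external
    memory 18, whose concrete format is not described here.
    """
    with open(header_path, "w") as f:
        json.dump({"combining_tag": sorted(target_frame_indices)}, f)

def read_combining_tag(header_path):
    """Return the indices of the target frame images recorded by the tag."""
    with open(header_path) as f:
        return json.load(f)["combining_tag"]
```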
After the moving image 600 is recorded, at any timing, the stroboscopic image generation unit 154 can read the p target frame images from the external memory 18 on the basis of the combining tag recorded in the external memory 18. From the read p target frame images, the stroboscopic still image (e.g., the stroboscopic still image 633 illustrated in
Further, when the stroboscopic specifying operation is performed, similarly to the third embodiment, the photography possibility decision for the target frame images by the photography possibility decision unit 155 and/or the photography interval notification by the notification control unit 156 can be performed before (or during) the photography of the target frame image candidates.
According to this embodiment, the target frame images are selected so that the tracking targets are arranged at a desired position interval. Specifically, the position interval between tracking targets at different time points is optimized on the target frame image sequence. As a result, for example, overlapping of tracking targets at different time points on the stroboscopic image can be avoided (see
Further, as described above in the third embodiment, the exposure time of each target frame image candidate can be set on the basis of the moving speed SP calculated by the tracking target characteristic calculation unit 152. Specifically, for example, it is preferred to set the exposure time of each target frame image candidate so that it decreases along with an increase of the moving speed SP. Thus, image blur of the tracking target on each target frame image candidate and each target frame image can be suppressed.
A fifth embodiment of the present invention will be described. The fifth embodiment is based on the second embodiment. Concerning matters that are not particularly described in the fifth embodiment, the description in the second embodiment can also be applied to this embodiment as long as no contradiction arises. Also in the fifth embodiment, similarly to the second embodiment, the operation of the image sensing apparatus 1 in the special reproduction mode will be described. In the special reproduction mode, the tracking process unit 61, the image selection unit 62 and the stroboscopic image generation unit 63 illustrated in
As described above in the second embodiment, the image sequence obtained by the sequential photography performed by the imaging unit 11 at a predetermined frame rate is stored as the frame image sequence in the external memory 18, and in the special reproduction mode, the image data of the frame image sequence is read out from the external memory 18. Unless otherwise stated, a frame image in this embodiment means a frame image read out from the external memory 18 in the special reproduction mode.
The tracking process unit 61 performs the tracking process on each frame image after the tracking target is set, so as to generate the tracking result information including information indicating the position and size of the tracking target region on each frame image. The image selection unit 62 selects and extracts a plurality of frame images as a plurality of selected images from the frame image sequence on the basis of the tracking result information from the tracking process unit 61, and sends the image data of each selected image to the stroboscopic image generation unit 63. The stroboscopic image generation unit 63 combines the images in the tracking target regions of the selected images on the basis of the tracking result information for each selected image and the image data of each selected image, so as to generate the stroboscopic image. The generated stroboscopic image can be recorded in the external memory 18. The stroboscopic image to be generated may be a stroboscopic still image such as the stroboscopic still image 633 illustrated in
The moving image as the frame image sequence read from the external memory 18 is referred to as a moving image 700.
The user can freely specify, from among the frame images forming the moving image 700, the frame images to be candidates of the selected images. Usually, a plurality of temporally consecutive frame images are set as the candidates of the selected images. Here, it is supposed that m frame images FIn to FIn+m−1 are set as the candidates of the selected images as illustrated in
The image selection unit 62 can use the detection result of the moving speed SP of the tracking target so as to determine the selected images. The detection methods of the moving speed SP performed by the image selection unit 62 are roughly divided into a moving speed detection method based on the non-candidate image and a moving speed detection method based on the candidate image.
In the moving speed detection method based on the non-candidate image, the tracking result information for the non-candidate images is utilized, and the moving speed SP of the tracking target on the candidate image sequence is estimated on the basis of the positions of the tracking target regions on a plurality of non-candidate images. For instance, let two different non-candidate images be the frame images FIi and FIj illustrated in
In the moving speed detection method based on the candidate image, the tracking result information for the candidate images is used, and the moving speed SP of the tracking target on the candidate image sequence is detected on the basis of the positions of the tracking target regions on a plurality of candidate images. For instance, let two different candidate images be the frame images FIi and FIj illustrated in
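Both detection methods reduce to the same computation on a pair of frames; a minimal sketch, assuming the tracking result information supplies the center position (in pixels) of the tracking target region on each frame and that the frame rate is FR frames per second:

```python
import math

def moving_speed(pos_i, pos_j, i, j, fr):
    """Estimate the moving speed SP [pixels/s] of the tracking target from
    its tracking-target-region positions on two frame images FI_i and FI_j.

    pos_i, pos_j: (x, y) center positions of the tracking target region.
    i, j: frame numbers with i < j; fr: frame rate FR of the moving image.
    """
    distance = math.hypot(pos_j[0] - pos_i[0], pos_j[1] - pos_i[1])
    elapsed = (j - i) / fr  # time between the two photography time points
    return distance / elapsed
```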
On the other hand, the image selection unit 62 determines the target subject interval β in accordance with the method described above in the second embodiment. Specifically, for example, the target subject interval β can be determined in accordance with the subject size SIZE′. As a calculation method of the subject size SIZE′, the method described above in the second embodiment can be used. Specifically, for example, an average value of the specific direction sizes Li and Lj (more specifically, the specific direction sizes Ln and Ln+1, for example) may be determined as the subject size SIZE′. If the value of m is fixed before the subject size SIZE′ is derived, an average value of the specific direction sizes Ln to Ln+m−1 may be determined as the subject size SIZE′.
The image selection unit 62 first sets the frame image FIn, which is the first candidate image, as the first selected image. Then, based on the detected moving speed SP, the moving distance of the tracking target between different candidate images is estimated. Since the frame rate of the moving image 700 is FR, the estimated moving distance of the tracking target between the frame images FIn and FIn+i is "i×SP/FR", as illustrated in
The image selection unit 62 extracts the second selected image from the candidate image sequence so that the distance between the tracking targets on the first and the second selected images, based on the estimated moving distance, is larger than the target subject interval β, which serves as a reference distance determined on the basis of the detection result of the size of the tracking target by the tracking process unit 61. A frame image photographed after the frame image FIn, which is the first selected image, can be a candidate of the second selected image. In order to extract the second selected image, the image selection unit 62 substitutes integers from (n+1) to (n+m−1) for the variable j one by one, and compares the estimated moving distance "(j−n)×SP/FR", which is an estimated value of the distance d[n,j] between the tracking targets, with the target subject interval β. Then, among the one or more candidate images satisfying the inequality (j−n)×SP/FR>β, the candidate image FIj that is photographed after the first selected image at the time point closest to the first selected image is selected as the second selected image. Here, it is supposed that the inequality is not satisfied when j is (n+1) or (n+2), while it is satisfied when j is an integer of (n+3) or larger. Then, the candidate image FIn+3 is extracted as the second selected image.
Third and later selected images are selected in the same manner. Specifically, the image selection unit 62 extracts the third selected image from the candidate image sequence so that the distance between the tracking targets on the second and the third selected images, based on the estimated moving distance, is larger than the target subject interval β (in this case, however, the condition is imposed that the photography time difference between the second and the third selected images be as small as possible).
Note that the third selected image may be automatically determined from the photography interval between the first and the second selected images. Specifically, the third selected image may be determined so that the photography interval between the second and the third selected images becomes the same as that between the first and the second selected images. In this case, for example, when the frame image FIn+3 is extracted as the second selected image, the frame image FIn+6 is automatically determined as the third selected image. The same is true for the fourth and later selected images.
With reference to
In the next Step S165, based on the tracking result information from the tracking process unit 61, the estimated value of the distance between the tracking targets is compared with the target subject interval β. Then, if the former is larger than the latter (β), the frame image FIn+i is extracted as a selected image in Step S166. Otherwise, the process goes directly to Step S168. In Step S167 following Step S166, it is checked whether or not the number of extracted selected images equals a predetermined necessary number (i.e., the value of p). If the numbers are identical, the extraction of the selected images is finished at that time point. Otherwise, the process goes from Step S167 to Step S168. The user can specify the necessary number.
In Step S168, the variable i is compared with the total number of the candidate images (i.e., the value of m). Then, if the current variable i is identical to the total number, it is decided that the reproduction of the candidate image sequence is finished, and the extraction process of the selected images is finished. Otherwise, one is added to the variable i (Step S169) and the process goes back to Step S164, so that the above-mentioned processes are repeated.
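The following sketch is one illustrative reading of the flow of Steps S163 to S169, under the assumption that the estimated distance between tracking targets is measured from the most recently extracted selected image, as in the description above:

```python
def extract_selected_images(n, m, p, sp, fr, beta):
    """Extract up to p selected images from the candidate images FI_n to
    FI_{n+m-1}, following the flow of Steps S163 to S169.

    FI_n is always the first selected image; each later selected image is
    the earliest candidate whose estimated distance from the previous
    selected image, (frame offset) * SP / FR, exceeds the target subject
    interval beta. Returns the frame numbers of the selected images.
    """
    selected = [n]         # FI_n is set as the first selected image
    last = n               # frame number of the most recent selected image
    for i in range(1, m):  # scan the candidates FI_{n+1} .. FI_{n+m-1}
        j = n + i
        estimated_distance = (j - last) * sp / fr
        if estimated_distance > beta:   # comparison of Step S165
            selected.append(j)          # extraction of Step S166
            last = j
            if len(selected) == p:      # check of Step S167
                break
    return selected
```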
In this embodiment too, the same effect as in the second embodiment can be obtained.
A sixth embodiment of the present invention will be described. In the sixth embodiment, compression and expansion of the image data are considered, and a method that can be applied to the second and the fifth embodiments will be described. For specific description, it is supposed that the moving image 700 illustrated in
When the moving image 700 is recorded in the external memory 18, the image data of the moving image 700 is compressed by a predetermined compression method performed by the compression processing unit 16 illustrated in
The non-compressed image data of the moving image 700 is a set of still images that are independent of each other. Therefore, the non-compressed image data that is the same as that transmitted to the display processing unit 20 is written in the internal memory 17 illustrated in
Further, in MPEG, an MPEG moving image that is a compressed moving image is generated by utilizing differences between frames. As is well known, the MPEG moving image is constituted of three types of pictures: an I-picture, which is an intra-coded picture; a P-picture, which is a predictive-coded picture; and a B-picture, which is a bidirectionally predictive-coded picture. Since the I-picture is obtained by coding a video signal of one frame image within that frame image, the video signal of the one frame image can be decoded from the single I-picture. In contrast, the video signal of one frame image cannot be decoded from a single P-picture. In order to decode the frame image corresponding to the P-picture, it is necessary to perform a differential operation or the like with another picture. The same is true for the B-picture. Therefore, the operational load necessary for decoding a frame image corresponding to a P-picture or a B-picture is larger than that for an I-picture.
Considering this, in order to reduce the operational load, the candidate image sequence in the fifth embodiment can be constituted by using only I-pictures (similarly, the frame images FI1 to FI10 in the second embodiment can be constituted by using only I-pictures). In this case, even if the frame rate of the moving image 700 is 60 fps, the frame rate of the candidate image sequence is only approximately 3 to 10 fps, for example. However, this causes little problem in the case where the moving speed of the tracking target is not so high.
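A minimal sketch of this restriction, assuming the decoder reports each frame's picture type (the record layout used here is hypothetical):

```python
def i_picture_candidates(frames):
    """Keep only the frames decodable on their own (I-pictures) as candidate
    images, reducing the decoding load at the cost of a coarser candidate
    frame rate (roughly 3 to 10 fps for a 60 fps MPEG moving image).

    frames: iterable of (frame_number, picture_type) pairs, where
    picture_type is one of "I", "P", "B"; this layout is an assumption.
    """
    return [num for num, ptype in frames if ptype == "I"]
```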
<<Variations>>
Specific numerical values indicated in the above description are merely examples, and they can be changed to various values as a matter of course. As variations or annotations of the embodiments described above, Note 1 and Note 2 are described below. Descriptions in the Notes can be combined in any way as long as no contradiction arises.
[Note 1]
In the examples described above, the image processing apparatus including the tracking process unit 61, the image selection unit 62 and the stroboscopic image generation unit 63 illustrated in
[Note 2]
The image sensing apparatus 1 can be realized by hardware or a combination of hardware and software. In particular, the whole or a part of the processes performed by the units illustrated in