These and other objects and advantages of the present invention will become more apparent upon reading the following detailed description and the accompanying drawings in which:
Hereinafter, an embodiment of the present invention will be described with reference to the accompanying drawings.
The DSP/CPU 3 is connected to a TG (Timing Generator) 4 that drives the CCD 2 at a frame rate (e.g., 60 fps or 240 fps). The TG 4 is connected to a unit circuit 5, to which is input an analog image pickup signal output from the CCD 2 that varies according to the optical image of an object. The unit circuit 5 includes a correlated double sampling circuit (CDS circuit) for reducing the CCD 2 drive noise contained in the image pickup signal output from the CCD 2, an automatic gain control circuit (AGC circuit) for adjusting the gain of the noise-reduced signal, and an A/D converter for converting the gain-adjusted signal into a digital signal. In this way, the unit circuit 5 converts the analog image pickup signal from the CCD 2 into a digital image signal and sends the resulting Bayer data to the DSP/CPU 3.
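The following is a minimal, illustrative sketch (in Python) of the three stages the unit circuit 5 applies in sequence; the function name, the reset-level input, the linear gain model, the 1.0 V full scale, and the 10-bit depth are assumptions added for illustration and are not specified by the embodiment.

```python
import numpy as np

def unit_circuit(analog_signal, reset_level, gain, bits=10):
    """Illustrative model of the unit circuit 5: CDS, then AGC, then A/D.

    analog_signal and reset_level are per-pixel voltage arrays; the 10-bit
    depth, the 1.0 V full scale, and the linear gain are assumptions.
    """
    # CDS circuit: subtract the reference (reset) level to reduce CCD drive noise
    sampled = analog_signal - reset_level
    # AGC circuit: adjust the gain of the noise-reduced signal
    amplified = sampled * gain
    # A/D converter: quantize the gain-adjusted signal into a digital value
    full_scale = 1.0
    codes = np.clip(amplified / full_scale, 0.0, 1.0) * (2 ** bits - 1)
    return codes.astype(np.uint16)  # digital Bayer data sent on to the DSP/CPU 3
```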
The DSP/CPU 3 is connected to a display device 6 and a key input part 7, and buffer memory (DRAM) 11, ROM 12, storage memory 13, and an input/output interface 14 are also connected to the DSP/CPU 3 via an address/data bus 10. The buffer memory 11 is a buffer that temporarily saves the above-described Bayer data and also serves as the working memory of the DSP/CPU 3.
In other words, the DSP/CPU 3 performs pedestal clamp processing on the Bayer data sent from the unit circuit 5, subsequently converts it into RGB data, and furthermore converts the RGB data into luminance (Y) and chrominance (UV) signals (YUV data). One frame's worth of this YUV data is then stored in the buffer memory 11. When picking up a through-image, one frame's worth of YUV data stored in the buffer memory 11 is sent to the display device 6, wherein it is converted into a video signal and displayed as a through-image.
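As an illustration of this processing path, the sketch below follows the pedestal clamp, RGB conversion, and YUV conversion in order; the pedestal value, the simplified half-resolution RGGB demosaic, and the BT.601-style luminance/chrominance coefficients are assumptions, since the embodiment does not specify them.

```python
import numpy as np

def bayer_to_yuv(bayer, pedestal=64):
    """Sketch of the DSP/CPU 3 path: pedestal clamp -> RGB -> YUV data.

    The pedestal value, the half-resolution RGGB demosaic, and the BT.601
    coefficients are illustrative assumptions.
    """
    x = np.clip(bayer.astype(np.float64) - pedestal, 0, None)  # pedestal clamp
    # Naive demosaic: treat each RGGB 2x2 cell as one RGB pixel (half resolution)
    r = x[0::2, 0::2]
    g = (x[0::2, 1::2] + x[1::2, 0::2]) / 2.0
    b = x[1::2, 1::2]
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance (Y)
    u = 0.492 * (b - y)                    # chrominance (U)
    v = 0.877 * (r - y)                    # chrominance (V)
    return np.dstack([y, u, v])            # one frame's worth of YUV data for buffer memory 11
```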
In addition, when a shutter key operation by the user is detected in the still picture photographing mode, the DSP/CPU 3 executes still picture photographing processing by switching the CCD 2 and the unit circuit 5 to a drive system or drive timing for still picture photographing that differs from that used for the through-image. The one frame's worth of YUV data stored in the buffer memory 11 as a result of this still picture photographing processing is then compressed and encoded by the DSP/CPU 3 according to JPEG or another system, made into a file inside the buffer memory 11, and recorded to the storage memory 13 (recording unit) via the address/data bus 10 as still picture data (a still picture file).
In addition, in the ordinary consecutive photographing (moving picture photographing) mode, when a start instruction for consecutive photographing is detected by a first shutter key operation of the user, consecutive photographing processing starts, and until an end instruction for consecutive photographing is detected by a second shutter key operation, a plurality of frames' worth of YUV data is stored in the buffer memory 11. This plurality of frames' worth of YUV data stored in the buffer memory 11 is sent in series to the DSP/CPU 3, compressed and encoded according to JPEG or another system (in the case of moving picture photographing, a predetermined MPEG codec), and then written as frame data with a given file name to the storage memory 13 via the buffer memory 11 and the address/data bus 10. Furthermore, when still pictures or consecutively photographed (moving) pictures are reviewed, the still picture or consecutive photographing (moving picture) data read from the storage memory 13 is expanded by the DSP/CPU 3 and opened in an image data working area of the buffer memory 11 as still picture data or consecutive photographing (moving picture) frame data. Moreover, in the high-speed consecutive photographing (moving picture photographing) mode, when a start instruction for video recording is detected by a shutter key operation, the CCD 2 and the unit circuit 5 start high-speed consecutive photographing processing by switching from the through-image photographing (ordinary consecutive photographing) drive timing (60 fps) to a different drive timing (240 fps).
The display device 6 (display unit) includes a color LCD and its driving circuit. In a photographing standby state, the object image picked up by the CCD 2 is displayed as a through-image, and during the reproduction of a recorded image, recorded images read and expanded from the storage memory 13 are displayed. The key input part 7 includes a plurality of operation keys, such as a shutter key, a mode setting key, and a power key, and outputs key input signals to the DSP/CPU 3 according to key operations by the user. The shutter key also functions as a consecutive photographing (recording) start/stop button during consecutive (moving picture) photographing.
In addition, a program chart is stored in the ROM 12, showing combinations of aperture values (F) and shutter speeds corresponding to EVs (Exposure Values) suitable for a variety of photographing modes, e.g., the still picture photographing mode, the consecutive photographing mode, and the through-image photographing mode. The program chart consists of program AE (Auto Exposure) data and an EV value list. The DSP/CPU 3 also configures a charge accumulation time based on the shutter speed set by the program chart. The configured charge accumulation time is supplied as a shutter pulse to the CCD 2 via the TG 4. The CCD 2 operates according to this shutter pulse, and as a result the charge accumulation time, i.e., the exposure time, is controlled. In other words, the CCD 2 functions as an electronic shutter. Furthermore, the ROM 12 stores various programs necessary for the digital camera functions, as well as the programs corresponding to the flowcharts described hereinafter.
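A hedged sketch of how such a program chart might be consulted is shown below; the chart entries, the mode names, and the consistency check are illustrative assumptions, with only the general exposure relationship EV = log2(F²/t) and the equality of shutter speed and charge accumulation time relied upon.

```python
import math

# Hypothetical program chart: per photographing mode, (aperture F, shutter
# speed in seconds) pairs indexed by EV. The numbers are illustrative only;
# the actual chart is the program AE data stored in the ROM 12.
PROGRAM_CHART = {
    "still":   {12: (5.6, 1 / 125), 13: (5.6, 1 / 250), 14: (8.0, 1 / 250)},
    "through": {12: (4.0, 1 / 250), 13: (4.0, 1 / 500), 14: (5.6, 1 / 500)},
}

def exposure_settings(mode, ev):
    """Look up the aperture/shutter combination for a metered EV and derive
    the charge accumulation time supplied to the CCD 2 via the TG 4."""
    f_number, shutter_s = PROGRAM_CHART[mode][ev]
    # Consistency check: EV = log2(F^2 / t) for the chosen combination
    assert abs(math.log2(f_number ** 2 / shutter_s) - ev) < 0.1
    accumulation_time_s = shutter_s  # electronic shutter: exposure time = accumulation time
    return f_number, accumulation_time_s
```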
It is possible to connect this digital camera 1 to external devices, such as printers, computers, TV receivers, etc., via the input/output interface 14.
In other words, the DSP/CPU 3 controls the TG 4 to drive the CCD 2 at a frame rate of 60 fps (step S2), and based on the frame image data acquired at this frame rate, causes the display device 6 to display a through-image (step S3). During this time, a first shutter key operation is monitored (step S4), and if the first shutter key operation is detected, the frame image data acquired at the above-described frame rate is successively recorded to the buffer memory 11 (step S5).
Furthermore, until a second shutter key operation is detected (step S6), the processing in steps S5 and S6 is repeated. If the second shutter key operation is detected, recording processing starts, wherein the plurality of frame image data accumulated in the buffer memory 11 is given file names and recorded to the storage memory 13 as frame data (step S7). At this time, the data may be collectively recorded as one moving picture file, or each frame may be recorded separately as a still picture file. Thereafter, the quickview display is executed, wherein all of the recorded frame image data is displayed by the display device 6 (step S8), and the process then returns to step S1.
On the other hand, if the result of the determination in step S1 is that the high-speed consecutive photographing mode is configured (step S1; YES), the DSP/CPU 3 controls the TG 4 to drive the CCD 2 at a frame rate of 60 fps (step S9), and based on the frame image data acquired at this frame rate, causes the display device 6 to display a through-image (step S10). During this time, the first shutter key operation is monitored (step S11), and if the first shutter key operation is detected, the DSP/CPU 3 controls the TG 4 to drive the CCD 2 at a frame rate of 240 fps (step S12). In other words, the frame rate of the CCD 2 is switched from 60 fps to 240 fps. Additionally, while the frame image data acquired at this frame rate is successively recorded to the buffer memory 11 (step S13), one frame of image data is extracted for every four shots (step S14), and through-images based on the extracted frame image data are displayed by the display device 6 (step S15).
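The loop below is a minimal sketch of steps S12 through S15; the ccd, buffer, display, and stop_requested interfaces are hypothetical stand-ins for the TG 4/CCD 2, the buffer memory 11, the display device 6, and the detection of the second shutter key operation.

```python
def high_speed_capture(ccd, buffer, display, stop_requested):
    """Sketch of steps S12-S15: record every 240 fps frame, but refresh the
    through-image only once per four frames (an effective 60 fps display).
    ccd, buffer, display, and stop_requested are hypothetical interfaces."""
    ccd.set_frame_rate(240)           # step S12: switch the drive timing from 60 fps to 240 fps
    frame_count = 0
    while not stop_requested():       # loop until a stop (second shutter key operation) is detected
        frame = ccd.read_frame()
        buffer.append(frame)          # step S13: successively record each frame to buffer memory 11
        if frame_count % 4 == 0:      # step S14: extract one frame out of every four
            display.show(frame)       # step S15: display the extracted frame as the through-image
        frame_count += 1
```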
Consequently, as shown in
Moreover, until a second shutter key operation is detected (step S16), the processing in steps S13-S16 is repeated. If a second shutter key operation is detected, recording processing starts, wherein the plurality of frame image data accumulated in the buffer memory 11 is given file names and recorded to the storage memory 13 as frame data (step S17). As described above, at this time the data may be collectively recorded as one moving picture file, or each frame may be recorded separately as a still picture file. Thereafter, the quickview display processing is executed (step S18), and the process subsequently returns to step S1.
In the foregoing description of the present embodiment, steps S12-S15 are conducted during the time between the first shutter key operation and the second shutter key operation. However, the present embodiment may also be configured such that the above processing steps are conducted while the shutter key is held down, or for a predetermined time after the shutter key operation. Additionally, in the foregoing description of the present embodiment, when the high-speed consecutive photographing ends with the second shutter key operation, the quickview display processing (step S18) is automatically executed. However, the present embodiment may also be configured such that a determination step is provided to determine whether or not a predetermined key operation has occurred, and the quickview display processing (step S18) is executed in the case where this predetermined key operation has occurred. Furthermore, the present embodiment may also be configured such that a determination step is provided to determine whether or not a predetermined key operation has occurred within a predetermined time, and the quickview display processing (step S18) is executed in the case where this predetermined key operation has occurred within the predetermined time.
Consequently, by executing the processing steps according to this flowchart, the following occurs. As a result of the processing in the first iteration of step S102, the value of L becomes (L1=4), as shown in
Additionally, as a result of the processing in steps S104 and S105, images based on the image data read from every fourth frame are displayed by the display device 6 at a predetermined time interval. Consequently, ¼ of the recorded consecutive photographing images are sampled and displayed by the display device 6, and as a result, the amount of time necessary for quickview display can be shortened while still clearly displaying quickview images.
In the foregoing description of the present embodiment, one frame image out of every four is extracted from the frame data, but it should be appreciated that the number of sampled frame images is not limited thereto, and the present embodiment may also be configured such that more or fewer frame images are sampled.
Subsequently, it is determined whether or not the final frame of image data in the plurality of frame data acquired by consecutive photographing has been read in step S202 (step S207), and this process repeats from step S202 until the final frame of image data is read. Consequently, successive frames of image data are iteratively read from the frame data (step S202), and compared to the image read in the previous iteration (step S203). Subsequently, in the case where the result of this comparison is such that the difference (change) between the two images meets or exceeds a set value (step S204; YES), an image based on the image data read in the present iteration is displayed by the display device 6 (step S205). This image display is maintained until a predetermined time (for example, 1 second) has passed (step S206), and once the predetermined time has passed (step S206; YES), the process proceeds to the above-described step S207.
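A minimal sketch of this difference-based selection (steps S202 through S207) follows; the mean-absolute-difference metric, the threshold value, and the use of time.sleep for the roughly one-second hold are assumptions made for illustration.

```python
import time
import numpy as np

def difference_quickview(frames, display, threshold=10.0, hold_s=1.0):
    """Sketch of steps S202-S207: display only frames whose difference from
    the previously read frame meets or exceeds a set value; the metric,
    threshold, and time.sleep hold are assumptions."""
    previous = None
    for frame in frames:                                            # step S202: read the next frame
        if previous is not None:
            change = np.mean(np.abs(frame.astype(np.int32)
                                    - previous.astype(np.int32)))   # step S203: compare the two images
            if change >= threshold:                                 # step S204: change >= set value?
                display.show(frame)                                 # step S205: display this frame
                time.sleep(hold_s)                                  # step S206: hold for about 1 second
        previous = frame                                            # the next comparison uses this frame
```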
Consequently, by executing the processing steps according to this flowchart, from among the plurality of frame data acquired by consecutive photographing, only the image data corresponding to scenes wherein the object changes greatly or there is intense movement are extracted and displayed as quickview images. As a result, the amount of time necessary for quickview display is shortened while efficiently and clearly displaying quickview images of the movement of the object shot by high-speed consecutive photographing.
In other words, as a result of the processing in steps S302 and S303, if the current value of N is taken to be N=1, then the range N=N to N+3 is equivalent to N=1 to 4. Using these values, as shown in
Additionally, as a result of the processing in steps S304 and S305, images based on the synthesized image data L, each synthesized from four of these frames of image data, are displayed by the display device 6 at a predetermined time interval. Consequently, the number of frames of image data acquired by consecutive photographing is reduced to ¼ before being displayed by the display device 6, and the time required for quickview display is shortened while still clearly displaying quickview images.
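The sketch below illustrates this group-of-four synthesis; simple averaging stands in for the multiple-synthesis operation, which is an assumption, as the embodiment does not state how the frames are combined.

```python
import numpy as np

def synthesized_quickview(frames, display, group=4):
    """Sketch of the group-of-four synthesis: every `group` consecutive frames
    (N to N+3) are combined into one synthesized image L and displayed;
    averaging is an assumed stand-in for the synthesis operation."""
    frames = np.asarray(frames, dtype=np.float64)
    for n in range(0, len(frames) - group + 1, group):   # N = 1, 5, 9, ... (0-based here)
        synthesized = frames[n:n + group].mean(axis=0)   # combine frames N..N+3 into image L
        display.show(synthesized.astype(np.uint8))       # each image is held for the preset interval
```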
In the foregoing description of the first, second, and third embodiments, one frame of image data is extracted every four shots, and through-images based on this extracted frame image data are displayed by the display device 6, as shown in steps S14 and S15 of
In addition, in the foregoing description of the first, second, and third embodiments, a value of 1 second was given as an example of the predetermined time for maintaining the display of each image in the quickview display. However, the predetermined time is not limited to this value, and it may be set to any suitable value within the range in which the object of the present invention is achieved; namely, the range in which the quickview display time is shortened while confirming the contents of the last shot remains simple.
In addition, in the descriptions of each of the foregoing embodiments, images were successively switched at a predetermined time interval, but the present invention may also be configured such that the extracted images are displayed on one screen as a multi-screen display. In this case, the time required for the quickview display is shortened even further. In addition, by reducing the number of images displayed in the multi-screen view to far fewer than the number of consecutively photographed frames, each image does not become extremely small in the multi-screen view, and therefore each image can be clearly displayed.
In addition, in the case where the number of images to be displayed is large, it is preferable to conduct the multi-screen view display using a plurality of screens, displaying images by switching multi-screen views. In this case, reducing the display frame rate when switching the display of the multi-screen view to lower than that of the above-described high-speed consecutive photographing frame rate makes it easier to confirm the last shot. As a result, even in the case where there are many extracted images, each image can be clearly displayed, and furthermore by using the multi-screen view, the time required for the quickview display can be shortened.
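As an illustration of such a multi-screen view, the sketch below tiles the extracted images into a single grid image; the grid width and the zero-filled unused cells are assumptions.

```python
import numpy as np

def multi_screen(images, columns=4):
    """Sketch of a multi-screen quickview: tile the extracted images into one
    grid image instead of switching them over time; the grid width and the
    zero-filled unused cells are assumptions."""
    h, w = images[0].shape[:2]
    rows = -(-len(images) // columns)  # ceiling division: number of grid rows
    canvas = np.zeros((rows * h, columns * w) + images[0].shape[2:], dtype=images[0].dtype)
    for i, img in enumerate(images):
        r, c = divmod(i, columns)
        canvas[r * h:(r + 1) * h, c * w:(c + 1) * w] = img  # place image i in its grid cell
    return canvas  # one composite screen for the display device 6
```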
In addition, in the case where the “synthesize all images” option has not been selected (step S401; NO), a counter N in the DSP/CPU 3 is set to an initial value of 1 (step S403), and another counter L is set to 4 times the counter N (step S404). Additionally, the Lth frame of image data among the above-described frame data is extracted from the buffer memory 11 (or alternatively the storage memory 13) and stored in the buffer memory 11 (step S405). Subsequently, it is determined whether or not L≧Nmax−3; in other words, when the processing that sets L=N×4 is conducted in the next iteration of step S404, it is determined whether or not L will exceed the total number of image data frames (Nmax) in the above-described frame data (step S406). Until this determination in step S406 results in a YES, the value of the counter N is incremented (step S407), and the processing in steps S404-S407 is repeated.
Consequently, by executing the processing steps according to this flowchart, the following occurs, in a manner equivalent to that of the above-described first embodiment. As a result of the processing in the first iteration of step S404, the value of L becomes (L1=4), as shown in
Additionally, when the determination of step S406 results in a YES, the plurality of frames of image data extracted in the above-described iterations of step S405 are multiply synthesized to create a single frame of synthesized image data LP (step S408). Subsequently, an image based on either the synthesized image data LO multiply synthesized in the above-described step S402, or alternatively the synthesized image data LP multiply synthesized in the above-described step S408, is displayed by the display device 6 (step S409). This image display is maintained until a predetermined time has passed (step S410), and if the predetermined time has passed (step S410; YES), the image display (i.e., the quickview display) ends.
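The following sketch condenses steps S401 through S408 into a single function; averaging again stands in for the multiple-synthesis operation, and the every-fourth-frame slice mirrors the L=4×N extraction, both as illustrative assumptions.

```python
import numpy as np

def single_image_quickview(frames, synthesize_all):
    """Sketch of steps S401-S408: build one multiply synthesized image, either
    from all frames (image data LO) or from every fourth frame, L = 4 x N
    (image data LP); averaging is an assumed synthesis operation."""
    frames = np.asarray(frames, dtype=np.float64)
    if synthesize_all:                             # step S401; YES -> synthesize all images (step S402)
        selected = frames
    else:                                          # steps S403-S407: extract the Lth frame, L = 4, 8, ...
        selected = frames[3::4]
    return selected.mean(axis=0).astype(np.uint8)  # the single image displayed in step S409
```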
Consequently, in the present embodiment, since a single image is displayed in the quickview display, the time required for quickview display is shortened. In addition, since the displayed image is either an image multiply synthesized from all the images constituting the above-described frame data, or alternatively an image multiply synthesized from a plurality of images intermittently extracted at a set interval, a synthesized image reflecting the movement of the object is created. As a result, an effective image display can be conducted as the quickview display following high-speed consecutive photographing.
In the foregoing description of the present embodiment, a plurality of images intermittently extracted at a predetermined interval were multiply synthesized. However, it should be appreciated that, as shown in the second embodiment, images showing change in the object may also be extracted and multiply synthesized. As a result, a synthesized image reflecting the particular movement of the object is created, and therefore a more effective image display can be conducted as the quickview display following high-speed consecutive photographing. In addition, in the description of the foregoing embodiments, the phrases “consecutive photographing” and “high-speed consecutive photographing” were solely used. However, these phrases are synonymous with “moving picture photographing” and “high-speed moving picture photographing”, and therefore “moving picture photographing” and “high-speed moving picture photographing” may be respectively substituted in the foregoing description.
Various embodiments and changes may be made thereunto without departing from the broad spirit and scope of the invention. The above-described embodiments are intended to illustrate the present invention, not to limit the scope of the present invention. The scope of the present invention is shown by the attached claims rather than the embodiments.
Various modifications made within the meaning of equivalents of the claims of the invention and within the claims are to be regarded as being within the scope of the present invention.
This application is based on Japanese Patent Application No. 2006-253633 filed on Sep. 20, 2006, including the specification, claims, drawings, and summary. The disclosure of the above Japanese Patent Application is incorporated herein by reference in its entirety.