1. Field of the Invention
The present invention relates to an imaging device, and more particularly to an imaging device which is capable of performing a ranging process for measuring the distance up to a subject, a captured image recording method, and a program for enabling a computer to perform such a captured image recording method.
2. Description of the Related Art
In recent years, there have widely been used imaging devices such as digital still cameras for imaging a subject such as an urban area or the like to generate a captured image, and recording the captured image. There has been proposed a panoramic image generating method for joining a plurality of captured images recorded by such an imaging device into a panoramic image of a subject that is present in a relatively wide range.
For generating a panoramic image according to the proposed panoramic image generating method, feature points need to be extracted from a plurality of captured images, and the extracted feature points are superimposed to join the captured images into a panoramic image. As the feature points need to be extracted from the captured images, the proposed panoramic image generating method poses a relatively large image processing burden for extracting such feature points.
There has also been proposed another panoramic image generating method for sampling a plurality of slits, each having a constant vertical length and a variable small width, from image data captured by a video camera, the variable small width depending on the speed at which the imager moves, and joining the slits into a panoramic image (see, for example, Japanese Patent Laid-open No. 2000-156818).
According to the proposed other panoramic image generating method, since it is not necessary to extract feature points from captured images for the purpose of joining the captured images, the image processing burden is reduced.
However, inasmuch as slits have to be sampled from captured image data, the amount of data that is discarded from the captured image data is large. Therefore, the captured image data cannot effectively be utilized, and the processing rate at the time the captured image data are recorded tends to be low.
It is accordingly desirable to reduce the image processing burden involved in generating a panoramic image.
According to a first embodiment of the present invention, there is provided an imaging device, including an imager configured to image a subject to generate a captured image thereof, a subject distance calculator configured to calculate a subject distance up to the subject, a captured image recording commander configured to give a recording instruction to record the captured image based on a present position and the subject distance, and a recording controller configured to record the captured image based on the recording instruction. There are also provided a method of recording captured images and a program for enabling a computer to carry out such a method. The imaging device, the method, and the program make it possible to calculate the subject distance and give the recording instruction to record the captured image based on the present position and the subject distance.
In the first embodiment of the present invention, the imaging device may further include a captured image recording position calculator configured to calculate, based on the subject distance, a captured image recording position which is a position to record a second captured image generated by the imager after the captured image is recorded, wherein the captured image recording commander may give a recording instruction to record the second captured image when the imaging device reaches the captured image recording position. Therefore, the captured image recording position may be calculated based on the subject distance, and the recording instruction may be given to record the second captured image when the imaging device reaches the captured image recording position.
In the first embodiment of the present invention, the imaging device may further include a traveled distance acquirer configured to acquire a traveled distance in a particular direction of the imaging device, wherein the subject distance calculator may calculate, as the subject distance, a distance up to an end in the particular direction of the subject included in the captured image, the captured image recording position calculator may calculate the captured image recording position in the particular direction based on the subject distance, and the captured image recording commander may determine whether the imaging device reaches the captured image recording position or not based on the acquired traveled distance, and gives the recording instruction to record the second captured image if the imaging device reaches the captured image recording position. Therefore, the traveled distance in the particular direction of the imaging device may be acquired, it may be determined whether the imaging device reaches the captured image recording position or not based on the acquired traveled distance, and the recording instruction may be given to record the second captured image if the imaging device reaches the captured image recording position.
In the first embodiment of the present invention, the imaging device may further include a present position acquirer configured to acquire a present position in a particular direction of the imaging device, wherein the subject distance calculator may calculate, as the subject distance, a distance up to an end in the particular direction of the subject included in the captured image, the captured image recording position calculator may calculate the captured image recording position in the particular direction based on the subject distance, and the captured image recording commander may give the recording instruction to record the second captured image if the acquired present position reaches the captured image recording position. The present position in the particular direction of the imaging device may be acquired, and the recording instruction may be given to record the second captured image if the acquired present position reaches the captured image recording position.
In the first embodiment of the present invention, the imaging device may further include a present position acquirer configured to acquire the present position, and an angular displacement detector configured to detect an angular displacement of the imaging device about a vertical axis thereof, wherein the subject distance calculator may calculate, as the subject distance, a distance up to an end in a particular direction of the subject included in the captured image, and the captured image recording commander may give a recording instruction to record a second captured image generated by the imager after the captured image is recorded, based on the acquired present position and the detected angular displacement. Therefore, the present position of the imaging device may be acquired, the angular displacement of the imaging device may be detected, and the recording instruction may be given to record the second captured image based on the acquired present position and the detected angular displacement.
In the first embodiment of the present invention, the captured image recording commander may make constant an imaging angle which is an angle to specify a recording range for the captured image, and give the recording instruction to record the second captured image if the acquired present position reaches a position to record the second captured image which is determined based on the subject distance and the detected angular displacement. Therefore, the imaging angle may be made constant, and the recording instruction may be given to record the second captured image if the present position reaches the captured image recording position which is determined based on the detected angular displacement and the subject distance.
In the first embodiment of the present invention, the captured image recording commander may make constant a captured image recording position which is a position to record the second captured image, determine an imaging angle which is an angle to specify a recording range for the captured image based on the subject distance and the detected angular displacement if the acquired present position reaches the captured image recording position, and give the recording instruction to record the second captured image in the recording range specified by the determined imaging angle. Therefore, the captured image recording position may be made constant, the imaging angle may be determined based on the detected angular displacement and the subject distance, and the recording instruction may be given to record the second captured image in the recording range which is specified by the imaging angle.
In the first embodiment of the present invention, the imaging device may further include a vibration detector configured to detect vibration of the imaging device, and a corrector configured to correct a vertical recording range for the captured image based on the detected vibration. Therefore, vibration of the imaging device may be detected, and the vertical recording range for the captured image may be corrected based on the detected vibration.
In the first embodiment of the present invention, the imaging device may further include a display controller configured to display captured images in a recorded sequence as a panoramic image. Therefore, the recorded captured images may be displayed in the recorded sequence as the panoramic image.
According to the present invention, processing burdens imposed for generating a panoramic image are reduced.
As shown in
The optical system 111 includes a plurality of lenses, including a zoom lens, a focusing lens, etc., for converging light from a subject. The light from the subject is applied through the lenses and an iris, not shown, to the imager 112.
The imager 112 serves to convert the applied light from the subject to generate a captured image according to given imaging parameters, and output the captured image to the recording controller 113 and the display controller 190. Specifically, the imager 112 includes a photoelectric transducer and a signal processor, not shown. The photoelectric transducer converts a light signal applied from the subject through the optical system 111 into an analog image signal. The analog image signal is then processed by the signal processor for noise removal, A/D (Analog-to-Digital) conversion, etc. The processed digital image signal is then supplied to the recording controller 113 and the display controller 190. The imager 112 also performs a ranging process to focus on a given position in an image to be captured.
When the recording controller 113 is instructed to record a captured image by the captured image recording commander 170, the recording controller 113 records the captured image, which is output from the imager 112 at the time the recording controller 113 is instructed to record same, in the image storage 200.
The lenses 121, 122 are used when the subject distance calculator 130 calculates the distance up to the subject. The lenses 121, 122 converge light from the subject and supply the converged light to the subject distance calculator 130.
Based on the converged light from the subject, the subject distance calculator 130 calculates the distance up to the subject. The subject distance calculator 130 supplies the calculated distance up to the subject (subject distance) to the captured image recording position calculator 140. Specifically, the subject distance calculator 130 includes a first sensor array and a second sensor array, not shown, which are associated with the lenses 121, 122, respectively. The first sensor array converts a light signal applied from the subject through the lens 121 into an analog image signal, and the second sensor array converts a light signal applied from the subject through the lens 122 into an analog image signal. Based on the analog image signals generated by the first and second sensor arrays, the subject distance calculator 130 calculates the distance up to the subject and the angle of the imaging device 100 with respect to the subject according to the principles of a triangulation ranging process. In the present embodiment, it is assumed that the subject distance calculator 130 calculates the distance up to the subject which corresponds to a central area at the right end of the captured image, as the subject distance.
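For reference, a passive triangulation range finder of this type typically derives the subject distance from the lateral displacement (disparity) between the two sensor-array images, the baseline between the lenses 121, 122, and their focal length. The sketch below only illustrates that standard relation; the parameter values and the way the disparity is obtained are assumptions for illustration, not details of the embodiment.

```python
def triangulation_distance(disparity_pixels, pixel_pitch_m, baseline_m, focal_length_m):
    """Standard passive-triangulation relation: distance = focal_length * baseline / disparity.

    disparity_pixels : lateral shift of the subject between the two sensor-array images
    pixel_pitch_m    : physical width of one sensor element (assumed value)
    baseline_m       : spacing between the optical axes of the lenses 121 and 122 (assumed)
    focal_length_m   : focal length of the ranging lenses (assumed)
    """
    disparity_m = disparity_pixels * pixel_pitch_m
    if disparity_m <= 0:
        raise ValueError("no usable disparity; subject too distant or not matched")
    return focal_length_m * baseline_m / disparity_m

# Illustrative values only: a 12-pixel disparity on a 6-micrometre-pitch sensor array,
# a 3 cm baseline and a 2 cm focal length give roughly 8.3 m.
# triangulation_distance(12, 6e-6, 0.03, 0.02)
```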
Based on the subject distance output from the subject distance calculator 130, the captured image recording position calculator 140 calculates a captured image recording position, which is a position where a next captured image is to be recorded after the preceding captured image has been recorded. The captured image recording position calculator 140 then outputs the calculated captured image recording position to the captured image recording commander 170. A process of calculating a captured image recording position will be described in detail later with reference to
The timer 150 supplies the traveled distance calculator 160 with successive timer values to be used by the traveled distance calculator 160 to calculate a distance. The timer 150 counts up its timer value by 1 per 0.1 second, for example.
The traveled distance calculator 160 calculates the distance that the imaging device 100 has traveled, and outputs the calculated distance to the captured image recording commander 170. Specifically, the traveled distance calculator 160 holds the value of the speed received by the user action receiver 180, and multiplies the held value of the speed by the timer value supplied from the timer 150, thereby calculating the distance that the imaging device 100 has traveled. The traveled distance calculator 160 serves as an example of a traveled distance acquirer as claimed.
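Restated as a minimal sketch, assuming the speed is entered in meters per second (the unit is not specified above) and using an illustrative class name:

```python
class TraveledDistanceCalculator:
    """Holds the speed received via the user action receiver and multiplies it by
    the elapsed time derived from the timer value (one count per 0.1 second)."""

    TICK_SECONDS = 0.1  # the timer 150 counts up by 1 per 0.1 second

    def __init__(self, speed_m_per_s):
        self.speed = speed_m_per_s

    def traveled_distance(self, timer_value):
        elapsed_seconds = timer_value * self.TICK_SECONDS
        return self.speed * elapsed_seconds

# A speed of 5 m/s and a timer value of 40 (4 seconds) give 20 m:
# TraveledDistanceCalculator(5.0).traveled_distance(40)  -> 20.0
```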
The captured image recording commander 170 determines a timing to record a captured image based on the captured image recording position output from the captured image recording position calculator 140 and the traveled distance output from the traveled distance calculator 160. Then, the captured image recording commander 170 outputs an instruction to record a captured image to the recording controller 113 according to the determined timing. A process of determining a timing to record a captured image will be described in detail later with reference to
The user action receiver 180 receives a user action made by the user of the imaging device 100, and outputs a signal depending on the received user action to the traveled distance calculator 160, the captured image recording commander 170, or the display controller 190. For example, the user action receiver 180 includes a manually operable member such as a shutter button 181 on the imaging device 100, as shown in
The display controller 190 displays captured images stored in the image storage 200 and captured images generated by the imager 112 on the display 300 based on a user action for a display instruction which is received by the user action receiver 180. The display controller 190 also displays various screens shown in
The image storage 200 stores captured images supplied from the recording controller 113. The image storage 200 also supplies stored captured images to the display controller 190. The captured images that are stored in the image storage 200 are in the form of image data according to the JPEG (Joint Photographic Experts Group) format. The image storage 200 may include a recording medium such as a memory card, an optical disk, a magnetic disk, a magnetic tape, or the like, which may be fixedly or removably mounted in the imaging device 100.
The display 300 displays various captured images supplied from the display controller 190.
The “panoramic imaging mode” setting button 311 is a button which the user touches for setting a panoramic imaging mode.
The “panoramic display mode” setting button 312 is a button which the user touches for setting a panoramic display mode for displaying captured images recorded in the panoramic imaging mode as a panoramic image.
The “portrait imaging mode” setting button 313 is a button which the user touches for setting a portrait imaging mode that is an imaging mode for imaging portraits. The “landscape imaging mode” setting button 314 is a button which the user touches for setting a landscape imaging mode that is an imaging mode for imaging landscapes. The displayed mode setting screen may include setting buttons for setting other imaging modes and display modes. These setting buttons will not be described in detail below.
The “speed” selecting area 321 is an area for selecting a speed in the panoramic imaging mode. When the user wants to select a speed in the “speed” selecting area 321, the user touches the pull-down button 322 to display a pull-down menu including a list of speeds, and then selects a desired speed from the displayed list of speeds. In
The “confirm” button 323 is a button which the user touches to go to a panoramic imaging mode starting screen shown in
The presently set mode display area 331 is an area wherein the letters “PANORAMIC IMAGING MODE IS PRESENTLY SET” are displayed, indicating to the user that the panoramic imaging mode is presently set.
The panoramic imaging mode description display area 332 is an area wherein a description of the panoramic imaging mode is displayed. The description of the panoramic imaging mode will be described in detail later with reference to
The “return” button 333 is a button which the user touches to change from the panoramic imaging mode starting screen shown in
The panoramic imaging mode finishing screen shown in
The next panoramic imaging mode guidance display area 334 is an area wherein a description about how to perform a next panoramic imaging mode and how to cancel a panoramic imaging mode is displayed when the present panoramic imaging mode is finished.
The “next panoramic imaging mode” button 335 is a button which the user touches to perform a next panoramic imaging mode. When the user touches the “next panoramic imaging mode” button 335, the panoramic imaging mode starting screen shown in
The “cancel” button 336 is a button which the user touches to cancel the panoramic imaging mode and return to the mode setting screen shown in
It is assumed in
When the shutter button 181 is pressed in captured image recording position Pi, the subject distance calculator 130 calculates distance Fi from captured image recording position Pi to point Xi. Based on calculated distance Fi and imaging angle θ, the captured image recording position calculator 140 calculates distance Li up to a next captured image recording position according to the following equation (1):
Li=2×Fi×sin(θ÷2) (1)
The captured image recording commander 170 monitors whether the distance calculated successively by the traveled distance calculator 160 has reached the distance Li calculated by the captured image recording position calculator 140 or not. If the distance calculated successively by the traveled distance calculator 160 has reached the distance Li calculated by the captured image recording position calculator 140, i.e., if the imaging device 100 has reached captured image recording position Pi+1, then the captured image recording commander 170 outputs an instruction to record a captured image to the recording controller 113. The recording controller 113 now records a captured image in captured image recording position Pi+1. An imaging range at captured image recording position Pi+1 has a left end or boundary whose center intersects with the subject surface 430 at point Xi, and a right end or boundary whose center intersects with the subject surface 430 at point Xi+1, and the imaging range whose horizontal extent is defined between points Xi, Xi+1 is represented by B. When the imaging device 100 has reached captured image recording position Pi+1, the subject distance calculator 130 calculates distance Fi+1 from captured image recording position Pi+1 to point Xi+1. Based on calculated distance Fi+1 and imaging angle θ, the captured image recording position calculator 140 calculates distance Li+1 up to a next captured image recording position according to the above equation (1). Subsequently, the above process of recording a captured image is repeated until the user presses the shutter button 181. For example, as shown in
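A minimal sketch of the distance given by equation (1), assuming the imaging angle is supplied in radians; the numeric values in the comment are illustrative only.

```python
import math

def next_recording_distance(subject_distance_f, imaging_angle_theta_rad):
    """Equation (1): Li = 2 * Fi * sin(theta / 2).

    subject_distance_f      : Fi, the distance to point Xi at the right end of the imaging range
    imaging_angle_theta_rad : theta, the (constant) imaging angle, in radians
    """
    return 2.0 * subject_distance_f * math.sin(imaging_angle_theta_rad / 2.0)

# With an assumed subject distance of 10 m and an imaging angle of 60 degrees,
# the next captured image recording position lies 10 m farther along the travel direction:
# next_recording_distance(10.0, math.radians(60.0))  -> 10.0
```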
Each of the panoramic image IDs 201 represents identification information for identifying a series of captured images recorded in the panoramic imaging mode. For example, “#1,” “#2,” or “#3” representing a group of captured images is stored as each of the panoramic image IDs 201. Specifically, one ID is assigned to a group of captured images recorded after the shutter button 181 is pressed while the panoramic imaging mode starting screen shown in
The imaging sequence numbers 202 represent a sequence of captured images belonging to one group which are recorded in the panoramic imaging mode. For example, “1” through “3” are assigned as imaging sequence numbers 202 to the respective captured images 431, 432, 433 which have been recorded according to the sequence shown in
The captured images 203 represent a succession of captured images recorded in the panoramic imaging mode. In
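One possible in-memory representation of these three items is sketched below; the field and function names are illustrative assumptions, not taken from the embodiment.

```python
from dataclasses import dataclass

@dataclass
class StoredCapturedImage:
    panoramic_image_id: str       # e.g. "#1": the group recorded in one panoramic imaging mode run
    imaging_sequence_number: int  # e.g. 1, 2, 3: recording order within the group
    jpeg_data: bytes              # the captured image itself, JPEG-encoded

def images_for_panorama(storage, panoramic_image_id):
    """Return the captured images of one group in their recorded sequence."""
    group = [img for img in storage if img.panoramic_image_id == panoramic_image_id]
    return sorted(group, key=lambda img: img.imaging_sequence_number)
```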
The panoramic image selection buttons 451, 452, 453 are buttons which the user touches to display captured images stored in the image storage 200 as a panoramic image. In
The “return” button 454 is a button which the user touches to change from the panoramic image display selection screen shown in
The panoramic image display screen also includes a “return” button 455. The “return” button 455 is a button which the user touches to change from the panoramic image display screen shown in
As described above, for displaying captured images recorded in the panoramic imaging mode as a panoramic image, the captured images stored in the image storage 200 do not need to be specially processed, but can simply be displayed as the panoramic image on the liquid crystal panel 310. Specifically, when the captured images stored in the image storage 200 are displayed in an array according to the sequence in which they have been recorded, they are displayed as a panoramic image.
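Because the recorded captured images have no overlapping areas, tiling them side by side in their recorded sequence is all that is needed. The sketch below uses the Pillow imaging library purely for illustration; the device itself simply draws the stored images in sequence on the display 300.

```python
from PIL import Image

def compose_panorama(jpeg_paths):
    """Lay the recorded captured images side by side in their recorded sequence."""
    images = [Image.open(path) for path in jpeg_paths]
    total_width = sum(image.width for image in images)
    height = max(image.height for image in images)
    panorama = Image.new("RGB", (total_width, height))
    x = 0
    for image in images:
        panorama.paste(image, (x, 0))
        x += image.width
    return panorama

# compose_panorama(["001.jpg", "002.jpg", "003.jpg"]).show()
```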
Operation of the imaging device 100 according to the present embodiment will be described below.
First, it is determined whether the panoramic imaging mode has been set or not (step S901). If the panoramic imaging mode has been set, then the panoramic imaging mode starting screen shown in
If the panoramic imaging mode has been set (Yes in step S901), then the captured image recording commander 170 determines whether an imaging start button is pressed or not (step S902). In the present embodiment, it is judged that the imaging start button is pressed if the shutter button 181 is pressed while the panoramic imaging mode starting screen shown in
Then, the imager 112 generates a captured image (step S903). Then, the captured image recording commander 170 determines whether the present time is immediately after the imaging start button is pressed or not (step S904). If the present time is immediately after the imaging start button is pressed (Yes in step S904), then the captured image recording commander 170 outputs an instruction to record a captured image to the recording controller 113, which records the captured image generated by the imager 112 in the image storage 200 (step S905). Then, the subject distance calculator 130 calculates the distance up to the subject (subject distance) included in an area corresponding to the center of the right end of the imaging range (step S906).
Then, the captured image recording position calculator 140 calculates a next captured image recording position based on the subject distance calculated by the subject distance calculator 130 (step S907). Then, the captured image recording commander 170 determines whether an imaging end button is pressed or not (step S908). In the present embodiment, it is judged that the imaging end button is pressed if the shutter button 181 is pressed while a panoramic image is being recorded. If the imaging end button is pressed (Yes in step S908), then the panoramic image recording process is put to an end.
If the imaging end button is not pressed (No in step S908), then control returns to step S903, and the panoramic image recording process (steps S903 to S910) is repeated.
If the present time is not immediately after the imaging start button is pressed (No in step S904), then the traveled distance calculator 160 calculates the distance from the preceding captured image recording position up to the present position (step S909). Then, the captured image recording commander 170 determines whether the present traveled distance calculated by the traveled distance calculator 160 has reached the next captured image recording position calculated by the captured image recording position calculator 140 or not (step S910). If the present traveled distance has reached the next captured image recording position (Yes in step S910), then the captured image recording commander 170 outputs an instruction to record a captured image to the recording controller 113, which records the captured image generated by the imager 112 in the image storage 200 (step S905). If the present traveled distance has not reached the next captured image recording position (No in step S910), then control goes to step S908.
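The processing sequence of steps S901 through S910 can be restated as the following sketch; every device.* call is a hypothetical stand-in for the corresponding component described above, not an actual interface of the imaging device 100.

```python
def panoramic_image_recording_process(device):
    """Control flow of steps S901 to S910; all device.* calls are hypothetical stand-ins."""
    if not device.panoramic_imaging_mode_set():                  # step S901
        return
    while not device.imaging_start_button_pressed():             # step S902 (waits for the shutter button)
        pass
    immediately_after_start = True
    distance_to_next_position = None
    while True:
        image = device.generate_captured_image()                 # step S903
        if immediately_after_start:                               # step S904: Yes
            device.record(image)                                  # step S905
            f = device.calculate_subject_distance()               # step S906 (center of right end)
            distance_to_next_position = device.calculate_next_recording_position(f)  # step S907
            immediately_after_start = False
        else:                                                     # step S904: No
            traveled = device.distance_from_preceding_recording_position()  # step S909
            if traveled >= distance_to_next_position:             # step S910: Yes
                device.record(image)                              # step S905
                f = device.calculate_subject_distance()           # step S906
                distance_to_next_position = device.calculate_next_recording_position(f)  # step S907
        if device.imaging_end_button_pressed():                   # step S908
            return
```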
According to the above processing sequence, the user manually enters a speed value, and the traveled distance calculator 160 calculates a present traveled distance based on the entered speed value. However, the traveled distance calculator 160 may calculate a present traveled distance based on an acceleration value from an acceleration sensor. If a speed value can be acquired from a moving apparatus, such as a vehicle or the like, which moves the imaging device 100, then the traveled distance calculator 160 may calculate a present traveled distance using the speed value acquired from the moving apparatus. Furthermore, if a traveled distance can be acquired from a moving apparatus, such as a vehicle or the like, which moves the imaging device 100, then the acquired traveled distance from the moving apparatus may directly be used. For example, the vehicle for moving the imaging device 100 may have an output terminal for outputting a speed or a traveled distance, and the imaging device 100 may acquire a speed or a traveled distance from the output terminal.
According to the above processing sequence, a timing to record a captured image is determined based on a present traveled distance. However, a timing to record a captured image may be determined based on the present position of the imaging device 100. For example, as shown in
As shown in
The captured image recording commander 470 determines a timing to record a captured image based on the captured image recording position output from the captured image recording position calculator 140 and the present positional information output from the GPS signal processor 460. Specifically, the captured image recording commander 470 determines a timing to record a captured image when the present position of the imaging device 102 which is represented by the positional information output from the GPS signal processor 460 has reached the captured image recording position output from the captured image recording position calculator 140.
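A sketch of that comparison, under the assumption that the positional information is compared as a great-circle distance between latitude/longitude pairs and that recording is triggered within a small tolerance; both the tolerance and the distance formula are assumptions for illustration.

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two latitude/longitude points, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def reached_recording_position(present, target, tolerance_m=1.0):
    """True when the present position output by the GPS signal processor has
    reached the captured image recording position (both as (lat, lon) tuples)."""
    return distance_m(present[0], present[1], target[0], target[1]) <= tolerance_m
```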
In
In
The subject distance calculator 130 supplies the calculated subject distance to the captured image recording commander 530.
The GPS signal processor 510 calculates positional information based on a GPS signal received from a GPS reception antenna, not shown, and outputs the calculated positional information to the captured image recording commander 530. The calculated positional information includes data about the present position of the imaging device 500, such as the latitude, longitude, altitude, etc. The GPS signal processor 510 serves as an example of a present position acquirer as claimed.
The gyrosensor 520 comprises an angular velocity sensor for detecting angular velocities about three axes which extend perpendicularly to each other. The gyrosensor 520 outputs detected angular velocities to the captured image recording commander 530.
The captured image recording commander 530 determines a timing to record a captured image based on the subject distance output from the subject distance calculator 130, the positional information output from the GPS signal processor 510, and the angular velocities output from the gyrosensor 520. The captured image recording commander 530 outputs an instruction to record a captured image to the recording controller 540 according to the determined timing. A process of determining a timing to record a captured image will be described in detail later with reference to
When the recording controller 540 is instructed to record a captured image by the captured image recording commander 530, the recording controller 540 records the captured image, which is output from the imager 112 at the time the recording controller 540 is instructed to record same, in the image storage 200. A process of recording captured images will be described in detail later with reference to
The panoramic imaging mode starting screen shown in
The panoramic imaging mode description display area 552 is an area wherein a description of the panoramic imaging mode is displayed. In the present embodiment, the user does not need to manually enter a speed value, and the imaging device 500 does not need to move straight. The panoramic imaging mode description display area 552 differs from the panoramic imaging mode description display area 332 shown in
It is assumed in
It is assumed that a next captured image recording position is represented by Pi+1, the distance from captured image recording position Pi to captured image recording position Pi+1 by Li, and an imaging angle at captured image recording position Pi+1 by θi. An imaging range at captured image recording position Pi+1 has a left end or boundary whose center intersects with the subject surface 620 at point Xi, and a right end or boundary whose center intersects with the subject surface 620 at point Xi+1, and the imaging range whose horizontal extent is defined between points Xi, Xi+1 serves as the imaging range of a second captured image.
A line segment starting from captured image recording position Pi and ending at captured image recording position Pi+1 is referred to as vector Li having length Li. A line segment starting from captured image recording position Pi and ending at point Xi is referred to as vector Fi having length Fi. A line segment starting from captured image recording position Pi+1 and ending at point Xi is referred to as vector Ri having length Ri. These vectors are related to each other according to the following equation (2):
vector Fi−vector Li=vector Ri (2)
The vectors will be described below with respect to an xy plane coordinate system having its origin (0, 0) at captured image recording position Pi, a y-axis extending in a direction aligned with vector Fi, and an x-axis extending in a direction perpendicular to vector Fi. On the xy plane coordinate system, it is assumed that the angle formed between vector Li and vector Fi is represented by φi, and the angle formed between the optical axis of the imaging device 500 at captured image recording position Pi+1 and vector Fi is represented by ψi. The angle formed between vector Ri at captured image recording position Pi+1 and the y-axis is represented by ψi−(θi/2).
Putting the values thus defined on the xy plane coordinate system into the equation (2), the following equation (3) is satisfied:
(0,Fi)−(Li sin φi, Li cos φi)=(−Ri sin((θi/2)−ψi), Ri cos((θi/2)−ψi)) (3)
Length Li can be calculated based on the present positional information output from the GPS signal processor 510. Angles φi, ψi can be calculated from the angular velocities output from the gyrosensor 520. Length Fi can be specified from the subject distance output from the subject distance calculator 130. In the equation (3), length Ri is unknown. By eliminating length Ri from the equation (3) using an equation about x coordinates and an equation about y coordinates, the following equation (4) is satisfied:
Li sin φi×cos(ψi−(θi/2))+(Fi−Li cos φi)×sin(ψi−(θi/2))=0 (4)
The captured image recording commander 530 determines a timing to record a captured image, using distance Li which satisfies the equation (4) and the imaging angle θi. Since the equation (4) includes two variables Li, θi, either one of these variables needs to be held constant in determining a timing to record a captured image. If the distance Li from preceding captured image recording position Pi to next captured image recording position Pi+1 is constant (Li−1=Li), then the captured image recording commander 530 sets the imaging angle θi to a value which satisfies the equation (4) when the present position has reached a position represented by the constant distance Li. The captured image recording commander 530 outputs an instruction to record a captured image and also outputs the set imaging angle θi to the recording controller 540. If the imaging angle θi is constant (θi−1=θi), then the captured image recording commander 530 outputs an instruction to record a captured image and also outputs the constant imaging angle θi to the recording controller 540 when the present position has reached a position represented by the distance Li which satisfies the equation (4). Timings to record captured images at subsequent captured image recording positions Pi+2, Pi+3, . . . can similarly be determined.
If the imaging device 500 moves straight, then θi=θ, φi=π/2−θ/2, and ψi=−θ/2. By putting these values into the equation (4), the equation (4) becomes equal to the equation (1). Therefore, the example shown in
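Equation (4) can be rearranged in closed form for whichever of the two variables is left free. The sketch below shows one such rearrangement, derived here from equation (4) rather than taken from the embodiment: solving for the imaging angle θi when the distance Li is held constant, and for Li when θi is held constant. The final lines check that the straight-travel case reduces to equation (1), as stated above.

```python
import math

def imaging_angle_for_constant_distance(L, F, phi, psi):
    """Solve equation (4) for theta_i when Li is held constant.

    From Li*sin(phi)*cos(psi - theta/2) + (Fi - Li*cos(phi))*sin(psi - theta/2) = 0
    it follows that tan(psi - theta/2) = -Li*sin(phi) / (Fi - Li*cos(phi)),
    hence theta = 2*psi + 2*atan(Li*sin(phi) / (Fi - Li*cos(phi))).
    """
    return 2.0 * psi + 2.0 * math.atan2(L * math.sin(phi), F - L * math.cos(phi))

def distance_for_constant_angle(theta, F, phi, psi):
    """Solve equation (4) for Li when the imaging angle theta_i is held constant:
    Li = Fi*sin(theta/2 - psi) / sin(phi - psi + theta/2)."""
    return F * math.sin(theta / 2.0 - psi) / math.sin(phi - psi + theta / 2.0)

# Straight travel (phi = pi/2 - theta/2, psi = -theta/2) reduces to equation (1):
theta = math.radians(60.0)
F = 10.0
L = distance_for_constant_angle(theta, F, math.pi / 2 - theta / 2, -theta / 2)
assert abs(L - 2.0 * F * math.sin(theta / 2.0)) < 1e-9
```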
Operation of the imaging device 500 according to the present embodiment will be described below with reference to
If the present time is not immediately after the imaging start button is pressed (No in step S904), then the captured image recording commander 530 determines whether the present position has reached a position which satisfies the equation (4) or not (step S921), based on the subject distance output from the subject distance calculator 130, the positional information output from the GPS signal processor 510, and the angular velocities output from the gyrosensor 520. If the present position has reached a position which satisfies the equation (4) (Yes in step S921), then control goes to step S905.
If the present position has not reached a position which satisfies the equation (4) (No in step S921), then the GPS signal processor 510 acquires positional information (step S922), and the gyrosensor 520 acquires angular velocities (step S923).
If the present time is not immediately after the imaging start button is pressed (No in step S904), then the captured image recording commander 530 determines whether the present position has reached a position which is represented by the constant distance or not, based on the positional information output from the GPS signal processor 510. If the present position has reached a position which is represented by the constant distance (Yes in step S931), then the captured image recording commander 530 sets an imaging angle which satisfies the equation (4) based on the subject distance output from the subject distance calculator 130 and the angular velocities output from the gyrosensor 520 (step S932). Then, the recording controller 540 determines a recording range for the captured image based on the set imaging angle, and records the captured image in the determined recording range in the image storage 200 (step S933). If the present time is immediately after the imaging start button is pressed (Yes in step S904), then the recording controller 540 determines a recording range for the captured image based on the set imaging angle, and records the captured image in the determined recording range in the image storage 200 (step S933). If the present position has not reached a position which is represented by the constant distance (No in step S931), then control goes to step S922.
If a vehicle is used as a moving apparatus for moving the imaging device which records captured images for generating a panoramic image, then the vehicle tends to vibrate vertically due to surface irregularities of the road on which the vehicle is moving. In addition, the imaging device may vibrate along its optical axis as it is carried by the hands of the user. A process of correcting captured images against vibrations or small movements of the imaging device for generating an appropriate panoramic image will be described below.
The gyrosensor 520 outputs detected angular velocities to the captured image recording commander 530 and the recording controller 730. The gyrosensor 520 serves as an example of a vibration detector as claimed.
The acceleration sensor 710 includes an acceleration sensor for detecting accelerations (changes in speed per unit time) along three axes which extend perpendicularly to each other. The acceleration sensor 710 outputs the detected accelerations to the recording controller 730. The acceleration sensor 710 serves as an example of a vibration detector as claimed.
When the recording controller 730 is instructed to record a captured image by the captured image recording commander 530, the recording controller 730 records the captured image, which is output from the imager 112 at the time the recording controller 730 is instructed to record same, in the image storage 200. When the captured image is to be recorded, the recording controller 730 corrects the captured image based on the angular velocities output from the gyrosensor 520 and the acceleration output from the acceleration sensor 710, and records the corrected captured image in the image storage 200. Details of a process of correcting the captured image will be described below with reference to
First, a captured image correcting process in which the imaging device 700 turns or oscillates about a horizontal axis across the optical axis thereof will be described in detail below with reference to
When the imaging device 700 turns in the direction indicated by the arrow 810 through an angle η, the recording controller 730 corrects the captured image in a direction to cancel the angle η. The angle η can be detected by the gyrosensor 520. In other words, even when the imaging device 700 turns in the direction indicated by the arrow 810 through the angle η, the captured image can be recorded in the same vertical recording range as before the imaging device 700 turns, by correcting the captured image using the correcting angle η.
A captured image correcting process in which the imaging device 700 moves vertically will be described in detail below with reference to
As shown in
It is assumed that a vertical angle γi−1 with respect to the ranging point Si−1 at the position Pi−1 of the imaging device 700 before it moves is preset. The subject distance fi−1 at the position Pi−1 of the imaging device 700 before it moves is determined by the imager 112. The distance h can be detected by the acceleration sensor 710. A point where a straight line extending parallel to the optical axis from the ranging point Si−1 intersects with a vertical straight line passing through the position Pi−1 is indicated by Qi−1. The straight line Si−1Qi−1 and the straight line Qi−1Pi are determined by the following equations (5), (6):
Si−1Qi−1=fi−1×sin γi−1 (5)
Qi−1Pi=h+Qi−1Pi−1=h−fi−1×cos γi−1 (6)
Since tan(π−γi)=Si−1Qi−1/Qi−1Pi is satisfied for a triangle Si−1Qi−1Pi, an angle γi is determined by the following equation:
γi=π−tan−1(Si−1Qi−1÷Qi−1Pi)
Using the angle γi thus determined, the correcting angle η is determined according to the following equation (7):
The captured image can be recorded in the same vertical recording range as before the imaging device 700 moves, by correcting the captured image using the correcting angle η thus determined.
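Putting equations (5) and (6) together, the vertical correction can be sketched as follows. Since equation (7) is not reproduced above, the correcting angle is taken here as the change in the angle to the ranging point, η=γi−γi−1; that relation is an assumption consistent with the surrounding description, not a quotation of the original equation.

```python
import math

def vertical_correction_angle(gamma_prev, f_prev, h):
    """Correction for a vertical displacement h of the imaging device 700.

    gamma_prev : vertical angle gamma_(i-1) to the ranging point S(i-1) before the move
    f_prev     : subject distance f_(i-1) measured before the move
    h          : vertical displacement detected by the acceleration sensor 710

    Equations (5) and (6) give the two legs of the triangle S(i-1) Q(i-1) P(i):
        S(i-1)Q(i-1) = f_(i-1) * sin(gamma_(i-1))
        Q(i-1)P(i)   = h - f_(i-1) * cos(gamma_(i-1))
    and tan(pi - gamma_i) = S(i-1)Q(i-1) / Q(i-1)P(i) yields gamma_i.
    The returned correcting angle eta = gamma_i - gamma_(i-1) is an assumption.
    """
    s_q = f_prev * math.sin(gamma_prev)       # equation (5)
    q_p = h - f_prev * math.cos(gamma_prev)   # equation (6)
    gamma_i = math.pi - math.atan2(s_q, q_p)
    return gamma_i - gamma_prev

# Sanity check: with no vertical displacement (h = 0) the correction is zero:
# vertical_correction_angle(math.radians(100.0), 10.0, 0.0)  -> ~0.0
```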
A captured image correcting process in which the imaging device 700 moves slightly along the optical axis thereof will be described below.
As shown in
It is assumed that an angle δi−1 from the optical axis with respect to the ranging point Si−1 at the position Pi−1 of the imaging device 700 before it moves is preset. The subject distance fi−1 at the position Pi−1 of the imaging device 700 before it moves is determined by the imager 112. A point where a straight line extending downwardly from the ranging point Si−1 intersects with a straight line extending along the optical axis and passing through the positions Pi−1, Pi is indicated by Ti−1. The following equation (8) with respect to δi for a triangle Si−1Ti−1Pi is satisfied:
tan δi=fi−1×sin δi−1÷(fi−1×cos δi−1−L) (8)
If the gradient of the subject surface 800 is nearly vertical and a change in ωi (Δω=ωi−ωi−1) and a change in δi (Δδ=δi−δi−1) are of sufficiently small values, then the following approximating equation is satisfied:
ωi=ωi−1×(δi/δi−1)
Using the above approximating equation, the imaging angle ωi of the imaging device 700 after it moves is calculated according to the following equation (9):
The captured image can be recorded in the same vertical recording range as before the imaging device 700 moves, by correcting the captured image using the imaging angle ωi thus determined.
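A sketch of this correction along the optical axis: equation (8) gives δi, and the approximating equation then scales the preceding imaging angle by δi/δi−1. Combining the two in this way is a reconstruction of the unreproduced equation (9) and should be treated as an assumption.

```python
import math

def corrected_vertical_imaging_angle(omega_prev, delta_prev, f_prev, L):
    """Correction for a small movement L of the imaging device 700 along its optical axis.

    omega_prev : vertical imaging angle omega_(i-1) before the move
    delta_prev : angle delta_(i-1) from the optical axis to the ranging point S(i-1)
    f_prev     : subject distance f_(i-1) measured before the move
    L          : displacement along the optical axis

    Equation (8): tan(delta_i) = f_(i-1)*sin(delta_(i-1)) / (f_(i-1)*cos(delta_(i-1)) - L)
    Approximation: omega_i = omega_(i-1) * (delta_i / delta_(i-1))
    """
    delta_i = math.atan2(f_prev * math.sin(delta_prev),
                         f_prev * math.cos(delta_prev) - L)
    return omega_prev * (delta_i / delta_prev)

# Sanity check: with no movement along the optical axis (L = 0) the angle is unchanged:
# corrected_vertical_imaging_angle(math.radians(40.0), math.radians(20.0), 10.0, 0.0)
```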
The three captured image correcting processes have been described above with reference to
Even when vibrations are applied to the imaging device 700, the imaging device 700 can correct captured images according to the above correcting process for aligning joints of areas at upper and lower ends of the captured images.
Operation of the imaging device 700 according to the present embodiment will be described below with reference to
If the present time is immediately after the imaging start button is pressed (Yes in step S904), or if the present position has reached a position which satisfies the equation (4) (Yes in step S921), then the captured image recording commander 530 outputs an instruction to record a captured image to the recording controller 730. The recording controller 730 corrects the captured image generated by the imager 112 based on the angular velocities output from the gyrosensor 520 and the acceleration output from the acceleration sensor 710 (step S941). Then, the recording controller 730 records the corrected captured image in the image storage 200 (step S905).
According to the embodiments of the present invention, as described above, for generating a panoramic image, captured images free of overlapping areas are successively recorded without the need for image processing such as trimming. Therefore, a panoramic image can quickly be recorded without any special image processing. Since a panoramic image is displayed using all the recorded captured images, image data are prevented from being wasted. Furthermore, if a series of captured images are to be displayed as a panoramic image, then they can be displayed as a panoramic image simply by being displayed in the recorded sequence. Consequently, the panoramic image can simply be displayed without any special image processing. Processing burdens imposed for recording and displaying a panoramic image are thus reduced. A small-size digital still camera, a cellular phone, etc. can generate a panoramic image without the need for any device for performing special image processing. If captured images that are recorded as a panoramic image in the image storage 200 are to be displayed by a personal computer, for example, then the personal computer can simply display a panoramic image without the need for any device for performing special image processing.
In the embodiments of the present invention, a vehicle has been described as a moving apparatus for moving the imaging device. However, the imaging device may be moved by a moving apparatus such as a train or the like. Alternatively, the imaging device may be moved by a moving apparatus such as a radio-controlled car or the like under wireless control.
In some of the embodiments of the invention, the imaging device has the GPS signal processor as the present position acquirer. Instead, GPS information may be acquired from a car navigation system or the like incorporated in the vehicle which carries the imaging device. The gyrosensor is used as a device for detecting angular displacements of the imaging device. However, a sensor such as a geomagnetic sensor may be used as such a device.
In the embodiments of the invention, the subject distance is calculated by the subject distance calculator 130 according to the principles of the triangulation ranging process. If the speed at which the imaging device moves is sufficiently low, then the subject distance may be measured according to an AF (automatic focusing) distance measuring process. Specifically, for recording a captured image, the imaging device is focused on the central position of the captured image according to an AF process, and immediately before or after the captured image is recorded, the subject distance used to calculate a captured image recording position is calculated according to the AF process.
The principles of the invention are applicable to an imaging device such as a camcorder (camera and recorder), a cellular phone with an imager, or the like.
Although certain preferred embodiments of the present invention have been shown and described in detail, it should be understood that various changes and modifications may be made therein without departing from the scope of the appended claims.
Each of the processing sequences according to the embodiments of the invention may be grasped as a method for carrying out the processing sequence, a program for enabling a computer to carry out the processing sequence, or a recording medium storing such a program. The recording medium may comprise a CD (Compact Disc), an MD (MiniDisc), a DVD (Digital Versatile Disk), a memory card, a Blu-ray Disc (registered trademark), or the like.
The present application contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2008-102942 filed in the Japan Patent Office on Apr. 10, 2008, the entire content of which is hereby incorporated by reference.