Some embodiments described in the present application relate to an image capture apparatus and an image capture method which are applicable to image-capture functions of digital cameras and mobile electronic apparatuses, such as mobile phones.
Hitherto, schemes have been conceived for assisting users so that they can take good pictures. For example, Japanese Unexamined Patent Application Publication No. 2009-239397 discloses a technology in which a position detection function is provided and, when a posted picture captured in the vicinity of the current position is available, the posted picture and its composition information are received from a server. Guide information for guiding the user to the shooting position of the posted picture is generated and is displayed on a display section with an arrow and numeric values. In accordance with the guide information, the user is guided to the shooting position.
When the user reaches the shooting position in accordance with the guide information, a prompt for shooting is given. That is, an actual image to be captured is displayed on a finder with a transparent image of a reference picture being superimposed thereon. The user makes fine adjustment so that the actual image to be captured matches the reference picture. With this arrangement, even when the user is not familiar with how to take a picture or even when the user is taking a picture at an unfamiliar place, it is possible to take a picture with a more preferable composition.
In general, the shooting point and the composition for shooting are elements that are inseparable from each other, and all three elements, i.e., the position, the attitude, and the angle of view of the camera, are involved in defining a specific composition. When the flow of the shooting is considered from the standpoint of the user of the camera, three steps are taken. That is, the user first moves to the shooting point, then determines the direction in which the camera is aimed, and lastly determines the angle of view for shooting by zoom adjustment and so on. Thus, for example, when user convenience is taken into account, it is important that guidance through these three steps toward a recommended composition for shooting be given to the user seamlessly.
However, in the technology disclosed in Japanese Unexamined Patent Application Publication No. 2009-239397, the screen display is switched between the flow of the guidance for guiding the user up to the shooting point and a prompt for shooting after the user reaches the shooting point. Thus, there is a problem in that the technology lacks consistency in giving information to the user and does not provide a shooting guide that enables sufficiently intuitive operations. In addition, in the technology disclosed in Japanese Unexamined Patent Application Publication No. 2009-239397, since the image data of the reference picture is stored, there is a problem in that the amount of data of the reference image increases.
Accordingly, it is desirable to provide an image capture apparatus and an image capture method which are capable of giving easy-to-understand, intuitive guidance through a coherent user interface without switching the displayed images, and which are capable of reducing the amount of data processed.
With this arrangement, guidance through the steps of moving to the shooting point, determining the shooting direction, and determining the angle of view can be seamlessly given to the user without switching images. Accordingly, it is possible to give guidance that is easy for the user to understand. In addition, since no reference picture data is used, it is possible to suppress an increase in the amount of data.
Accordingly, in some embodiments, an image capture apparatus configured to obtain at least one image including a first image is disclosed. The image capture apparatus comprises at least one processor configured to produce a first superimposed image by superimposing first navigation information with the first image at least in part by using first composition data associated with the first image and reference composition data associated with a reference image, and at least one display configured to display the first superimposed image. The first composition data comprises shooting angle information and/or angle-of-view information of the image capture apparatus when the image capture apparatus obtained the first image.
In some embodiments, a method is disclosed for assisting a user in obtaining at least one image, including a first image, with an image capture apparatus. The method comprises producing, with the image capture apparatus, a first superimposed image by superimposing first navigation information with the first image at least in part by using first composition data associated with the first image and reference composition data associated with a reference image; and displaying the first superimposed image, wherein the first composition data comprises shooting angle information and/or angle-of-view information of the image capture apparatus when the image capture apparatus obtained the first image.
In some embodiments at least one computer-readable storage medium is disclosed. The at least one computer-readable storage medium stores processor-executable instructions that, when executed by an image capture apparatus, cause the image capture apparatus to perform a method for assisting a user in obtaining at least one image including a first image. The method comprises producing, with the image capture apparatus, a first superimposed image by superimposing first navigation information with the first image at least in part by using first composition data associated with the first image and reference composition data associated with a reference image, and displaying the first superimposed image. The first composition data comprises shooting angle information and/or angle-of-view information of the image capture apparatus when the image capture apparatus obtained the first image.
The foregoing is a non-limiting summary of the invention, which is defined by the attached claims.
Hereinafter, some illustrative embodiments will be described. The scope of the present disclosure, however, is not limited to the embodiments described below, unless otherwise specifically stated.
One embodiment of the present disclosure will now be described. One example of an image capture apparatus to which the present disclosure is applicable will first be described with reference to
The image capture apparatus 20 has, at its rear surface, an LCD (liquid crystal display) 30, a strap attachment portion 31, a moving-image button 32, a playback button 33, a deletion button 34, a menu button 35, and a control button 36. As illustrated in a magnified view in
As illustrated in
The recording medium 10 is, for example, a memory card using a semiconductor memory or the like. Instead of the memory card, the recording medium 10 may be implemented by, for example, a hard disk device, a magnetic disk, or an optical recording medium, such as a recordable DVD (digital versatile disc) or recordable CD (compact disc).
The camera section 1 has an optical block 11, an imaging device 12, a pre-processing circuit 13, an optical-block driver 14, an imaging-device driver 15, and a timing-signal generating circuit 16. Examples of the imaging device 12 include a CCD (charge coupled device) and a CMOS (complementary metal oxide semiconductor). The optical block 11 has a lens, a focusing mechanism, a shutter mechanism, a diaphragm (iris) mechanism, and so on.
The control section 5 may be a microcomputer that controls the individual sections of the image capture apparatus 20 according to this embodiment. The control section 5 may have a configuration in which a CPU (central processing unit) 51, a RAM (random access memory) 52, a flash ROM (read only memory) 53, and a clock circuit 54 are interconnected through a system bus 55. The RAM 52 is primarily used as a work area for temporarily storing results obtained during processing. The flash ROM 53 stores various programs executed by the CPU 51, data used for processing, and so on. The clock circuit 54 has a function for providing the current year, month, and day, the current day of the week, the current time, the shooting date and time, and so on, and a function for adding date-and-time information, such as the shooting date and time, to a captured-image file.
During shooting, the optical-block driver 14 generates a drive signal for driving the optical block 11, in accordance with control performed by the control section 5, and supplies the drive signal to the optical block 11 to operate the optical block 11. In response to the drive signal supplied from the optical-block driver 14, the optical block 11 controls the focusing mechanism, the shutter mechanism, and the diaphragm mechanism to capture an image of a subject. The optical block 11 then supplies the subject image to the imaging device 12. The optical block 11 may have a replaceable lens device. For example, the lens device has a microcomputer therein to transmit information, such as the type of lens device and the current focal length, to the CPU 51.
The imaging device 12 photoelectrically transforms the subject image supplied from the optical block 11 and then outputs the resulting subject image. In response to a drive signal from the imaging-device driver 15, the imaging device 12 operates to capture the subject image. On the basis of a timing signal from the timing-signal generating circuit 16 controlled by the control section 5, the imaging device 12 supplies the captured subject image to the pre-processing circuit 13 as an electrical signal.
Under the control of the control section 5, the timing-signal generating circuit 16 generates a timing signal for providing predetermined-timing information. On the basis of the timing signal from the timing-signal generating circuit 16, the imaging-device driver 15 generates the drive signal to be supplied to the imaging device 12.
The pre-processing circuit 13 performs CDS (correlated double sampling) processing on a supplied captured-image signal to improve the S/N (signal-to-noise) ratio, performs AGC (automatic gain control) processing to control the gain, and performs A/D (analog-to-digital) conversion to generate captured-image data in the form of a digital signal.
The pre-processing circuit 13 supplies the digital captured-image data to the digital-signal processing section 2. The digital-signal processing section 2 performs camera-signal processing on the captured-image data. Examples of the camera-signal processing include AF (auto focus) processing, AE (auto exposure) processing, and AWB (auto white balance) processing. Image data resulting from the camera-signal processing is compressed by a predetermined compression system. The compressed image data is supplied through the system bus 55 to the recording medium 10, attached to the media interface 4, and/or to the hard disk drive 17, and is recorded there as an image file that complies with, for example, the DCF (Design rule for Camera File system) standard.
An intended piece of the image data recorded on the recording medium 10 is read from the recording medium 10 via the media interface 4 in accordance with an operation input received from the user via the operation section 6. The read piece of image data is then supplied to the digital-signal processing section 2. The operation section 6 includes, for example, a lever, a dial, and various buttons, such as a shutter release button. The LCD 30 may be implemented as a touch panel so that the user can perform an input operation by touching/pressing the screen with his or her finger or a pointing device.
The digital-signal processing section 2 performs decompression processing (extraction processing) on the compressed image data, read from the recording medium 10 and supplied via the media interface 4, and supplies the decompressed image data to the LCD controller 8 through the system bus 55. The LCD controller 8 generates a display image signal by using the image data and supplies the generated display image signal to the LCD 30. As a result, an image corresponding to the image data recorded on the recording medium 10 is displayed on the screen of the LCD 30. In addition, under the control of the control section 5 and the LCD controller 8, graphics and text for a menu and so on can be displayed on the screen of the LCD 30. An image may be displayed in a format according to a display-processing program recorded in the flash ROM 53.
The image capture apparatus has the external interface 9, as described above. The image capture apparatus 20 may be connected to, for example, an external personal computer via the external interface 9. In such a case, upon receiving image data from the personal computer, the image capture apparatus 20 can record the image data to the recording medium loaded thereinto. Also, the image capture apparatus 20 can supply image data recorded on the recording medium, loaded thereinto, to the external personal computer.
A communication module may also be connected to the external interface 9 to connect to a network, such as the Internet. In such a case, the image capture apparatus can obtain various types of image data or other information through the network and can record the image data or the information to the loaded recording medium. Alternatively, the image capture apparatus 20 can transmit data, recorded on the loaded recording medium, to intended equipment through the network.
The image capture apparatus 20 can also read and reproduce information regarding the image data, obtained from the external personal computer or through the network and recorded on the recording medium, and can display the information on the screen of the LCD 30.
The external interface 9 may also be provided as a wired interface, such as an IEEE (Institute of Electrical and Electronics Engineers) 1394 interface or a USB (universal serial bus) interface, or may also be provided as a wireless interface utilizing light or radio waves. That is, the external interface 9 may be any of such wired and wireless interfaces. For example, through connection to an external computer apparatus (not illustrated) via the external interface 9, the image capture apparatus 20 can receive image data supplied from the computer apparatus and can record the received image data to the recording medium 10 and/or the hard disk drive 17. The image capture apparatus 20 can also supply the image data, recorded on the recording medium 10 and/or the hard disk drive 17, to the external computer device or the like.
After capturing an image (a still image or a moving image) of a subject, the image capture apparatus 20 can record the subject image to the loaded recording medium 10 and/or the hard disk drive 17. In addition, the image data recorded on the recording medium 10 and/or the hard disk drive 17 can be read and a corresponding image can be displayed for arbitrary viewing and editing. An index file for managing the image data is recorded in a specific area of the recording medium 10 and/or the hard disk drive 17.
As illustrated in
The position data, the azimuth data, and the attitude data are supplied from the sensor section 7 to an AR (augmented reality) display control section 56. The AR display control section 56 is one function of the control section 5. The AR is a technology that allows a virtual object to be displayed on the screen of the LCD 30 with the virtual object being superimposed on an image (a captured image) in the real-life environment. The configuration (illustrated in
Now, an operation of the above-described image capture apparatus will be briefly described. The imaging device 12 receives light, photoelectrically converts the light into a signal, and supplies the signal to the pre-processing circuit 13. The pre-processing circuit 13 performs the CDS processing and the AGC processing on the signal, converts the signal into a digital signal through the A/D conversion, and supplies the digital signal to the digital-signal processing section 2. The digital-signal processing section 2 performs image-quality correction processing on the image data and supplies the resulting image data to the control section 5 as image data of a through-the-camera image. The image data is then supplied from the control section 5 to the LCD controller 8, and the through-the-camera image is displayed on the screen of the LCD 30.
With this arrangement, the user can adjust the angle of view while viewing the through-the-camera image displayed on the screen of the LCD 30. As described below, in the present disclosure, the AR is used to display virtual objects on the screen of the LCD 30 on which a subject image is displayed. By displaying the virtual objects, the image capture apparatus 20 is adapted to guide the user in taking a recommended picture.
When the shutter button of the operation section 6 is pressed, the CPU 51 outputs a control signal to the camera section 1 to operate the shutter of the optical block 11. In response, the digital-signal processing section 2 processes image data (record-image data) for one frame, the image data being supplied from the pre-processing circuit 13, and then stores the image data in the SDRAM 3. The digital-signal processing section 2 further compresses and encodes the record-image data. The resulting encoded data may be stored on the hard disk drive 17 and may also be stored on the recording medium 10 through the system bus 55 and the media interface 4.
With respect to still-image data, the CPU 51 obtains the shooting date and time from the clock circuit 54, adds the shooting date and time to the still-image data, and stores the resulting image data on the hard disk drive 17 and/or the recording medium 10. In addition, the position data, the azimuth data, and the attitude data obtained from the sensor section 7 may also be added to the image data. Additionally, with respect to a still image, data of a reduced-size image (a thumbnail) thereof is generated and is stored on the hard disk drive 17 and/or the recording medium 10 in association with the original still image.
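The date and time, position, azimuth, and attitude added to each still image here are exactly the kind of composition data that can later serve as reference data. The following is a minimal sketch of assembling such a record; the clock and sensors interfaces and the field names are hypothetical stand-ins for the clock circuit 54 and the sensor section 7, and the dictionary form is for illustration only (the apparatus itself adds this information to the image file).

```python
def tag_still_image(image_bytes, clock, sensors):
    """Bundle a captured still image with the shooting date and time and the
    sensor readings described above. The clock/sensors interfaces and the
    field names are assumptions made for illustration."""
    return {
        "image": image_bytes,
        "shot_at": clock.now().isoformat(),       # shooting date and time (clock circuit 54)
        "position": sensors.read_position(),      # latitude/longitude from GPS
        "azimuth_deg": sensors.read_azimuth(),    # shooting direction
        "attitude_deg": sensors.read_attitude(),  # camera tilt
    }
```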
On the other hand, when the record-image data stored on the hard disk drive 17 and/or the recording medium 10 is to be reproduced, the record-image data selected by the CPU 51 is read to the SDRAM 3 in accordance with an operation input from the operation section 6. The digital-signal processing section 2 then decodes the record-image data. The decoded image data is then supplied to the LCD 30 via the LCD controller 8, and a reproduced image is displayed on the screen of the LCD 30.
In the present disclosure, the image capture apparatus 20 has a function for guiding a photographing person (user) in taking a good picture. For the guidance, the AR is used to display virtual objects on the screen of the LCD 30 on which a subject image is displayed. The virtual objects change like real-life subjects, in accordance with the shooting position, the shooting angle, and the angle of view. The virtual objects include a first display object and a second display object. The image capture apparatus 20 is adapted to detect the orientation of the camera with high responsiveness in order to present the virtual objects so that they correspond to the real-life environment image obtained by the image capture apparatus 20.
The AR display control section 56 generates information of the first and second display objects. As described above, the signal output from the sensor section 7 is supplied to the AR display control section 56. In addition, composition data is supplied from a storage device 57 (illustrated in
The reference position data, the reference angle data, and the reference angle-of-view data (these data may be collectively referred to as “reference data” hereinafter) are pre-stored in the storage device 57. For example, the reference data may be obtained through the Internet and stored in the storage device 57. When the user sets the shooting mode to a guide mode, reference data of pictures taken in the vicinity of the current position of the image capture apparatus 20 (the user) is searched for, and the found reference data is read from the storage device 57 and supplied to the AR display control section 56.
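One simple way to perform this vicinity search is to filter the stored records by great-circle distance from the camera's current GPS position. The sketch below is illustrative only: the record layout, the function names, and the 500 m radius are assumptions, and the disclosure does not specify how the search is actually implemented.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    earth_radius_m = 6_371_000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

def references_near(cur_lat, cur_lon, reference_records, radius_m=500.0):
    """Return the reference data of pictures taken within radius_m of the camera."""
    return [rec for rec in reference_records
            if haversine_m(cur_lat, cur_lon, rec["lat"], rec["lon"]) <= radius_m]
```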
The AR display control section 56 uses the current data supplied from the sensor section 7 and the reference data to generate the first and second display objects. The display objects are supplied to a screen-display control section 58, which generates a display signal for display on the screen of the LCD 30. In addition, a signal resulting from the user's camera operation is supplied to a camera control section 59 and is used for image-capture control. In addition, angle-of-view information regarding the current angle of view is supplied to the screen-display control section 58.
The angle of view refers to the range that can be captured through a lens and varies according to the focal length of the lens. Typically, the angle of view increases as the focal length decreases and decreases as the focal length increases. Thus, even when an image of the same subject is captured, a difference in the angle of view causes the shooting range, and hence the composition obtained at that angle of view, to vary. In addition, since the angle of view is affected not only by the focal length but also by the lens characteristics, information on the lens characteristics is also used as part of the angle-of-view information. Additionally, even at the same focal length, the angle of view increases as the area of the imaging device increases and decreases as that area decreases; the area of the imaging device is a fixed value determined by the model of the image capture apparatus. The angle of view comprises three types of information, i.e., a horizontal angle of view, a vertical angle of view, and a diagonal angle of view, and all or part of this information may be used. The angle-of-view information is expressed in units of degrees.
Considering the factors described above, the angle-of-view information is calculated from the focal length, the lens characteristics, and other information. Data such as the focal length and the lens characteristics is supplied from the camera control section 59 to the screen-display control section 58, and the screen-display control section 58 determines the angle-of-view information on the basis of that data. A display object indicating the angle of view used for the shooting guidance is then generated on the basis of the relationship between that angle of view and the current angle of view.
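As a concrete illustration of these relationships, the sketch below computes horizontal, vertical, and diagonal angles of view from the focal length and the imaging-device dimensions using the standard thin-lens relation θ = 2·arctan(d / 2f). This is only an assumed formulation: the disclosure does not state the exact formula used by the screen-display control section 58, and corrections for lens characteristics are omitted here.

```python
import math

def angle_of_view_deg(focal_length_mm: float, sensor_dim_mm: float) -> float:
    """Thin-lens angle of view: theta = 2 * atan(d / (2 * f)), in degrees."""
    return math.degrees(2.0 * math.atan(sensor_dim_mm / (2.0 * focal_length_mm)))

# Example: a 24 mm lens on a 36 mm x 24 mm (43.3 mm diagonal) imaging device.
horizontal = angle_of_view_deg(24.0, 36.0)  # ~73.7 degrees
vertical = angle_of_view_deg(24.0, 24.0)    # ~53.1 degrees
diagonal = angle_of_view_deg(24.0, 43.3)    # ~84.1 degrees
```

As the example shows, shortening the focal length or enlarging the imaging device widens the angle of view, matching the qualitative behavior described above.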
Now, display objects generated by the AR display control section 56 will be briefly described with reference to
When an image of the subject O is captured at a distance equal to the distance of the reference position data and at a different shooting angle as illustrated in
That is, since an actual subject is displayed on the screen of the LCD 30 with the virtual object (the frame) superimposed thereon, it is possible to take a picture equivalent to the recommended image by setting the shooting position and the shooting angle so that the frame assumes an undistorted shape, such as a square, and either becomes largest on the screen or goes out of and disappears from the screen. Since the virtual display object is, for example, a graphic obtained by transforming a three-dimensional object into a two-dimensional representation, the user can easily recognize the current shooting position and shooting angle.
The present disclosure will further be described below. As illustrated in
When the photographing person moves closer to the subject than the shooting position illustrated in
When the direction (the shooting direction) of the image capture apparatus 20 relative to the subject is changed leftward at the same shooting position as the shooting position illustrated in
As described above, the pin and the frame displayed on the screen of the LCD 30 change in the same manner as the subject, as in the real-life environment, in response to the motion of the image capture apparatus 20. As illustrated in
The user views the screen of the LCD 30 illustrated in
As illustrated in
Additionally, in the present disclosure, when the shooting position, the shooting angle, and the angle of view match the reference shooting position, the reference shooting angle, and the reference angle of view, respectively, the frame F and the pin P disappear from the screen of the LCD 30, as illustrated in
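One conceivable way to implement this disappearance condition is a tolerance test on the three quantities, as sketched below. The container type, the local-coordinate convention, and the tolerance values are all assumptions; the disclosure does not state how the match is detected.

```python
from dataclasses import dataclass

@dataclass
class Composition:
    """Hypothetical container for shooting position, angle, and angle of view."""
    x_m: float          # east of a local origin, meters
    y_m: float          # north of a local origin, meters
    azimuth_deg: float  # shooting direction
    fov_deg: float      # angle of view

def guidance_complete(cur: Composition, ref: Composition,
                      pos_tol_m=2.0, angle_tol_deg=2.0, fov_tol_deg=1.0) -> bool:
    """True when the frame F and the pin P should disappear: the current
    shooting position, shooting angle, and angle of view all match the
    reference data within the (illustrative) tolerances."""
    dist = ((cur.x_m - ref.x_m) ** 2 + (cur.y_m - ref.y_m) ** 2) ** 0.5
    return (dist <= pos_tol_m
            and abs(cur.azimuth_deg - ref.azimuth_deg) <= angle_tol_deg
            and abs(cur.fov_deg - ref.fov_deg) <= fov_tol_deg)
```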
As described above, when the user changes the direction in which the image capture apparatus 20 is to be aimed, signals output from the azimuth detector 72 and the attitude detector 73 vary. The positions at which the pin and the frame are displayed are varied according to the values of the output signals. When the user changes the azimuth of the image capture apparatus 20 to the left by 10 degrees as illustrated in the example of
This display transformation may be processed in real time without inconsistency by, for example, viewing-pipeline processing widely used in three-dimensional games and so on.
The world coordinates are coordinates defined by GPS longitude and latitude information. Next, the world coordinates are transformed into view coordinates, since the virtual objects are viewed from the viewpoint of an individual. The position, attitude, and azimuth of the image capture apparatus 20 are used to perform this transformation. As a result of the transformation into the view coordinates, the image capture apparatus 20 is located at the coordinate origin.
Since the angle of view changes according to zooming or the like, the image capture apparatus 20 is adapted to transform the view coordinates into perspective coordinates on the basis of the angle-of-view information. The coordinate transformation may be implemented by parallel projection or perspective projection. The transformation of the view coordinates into the perspective coordinates means that a 3D object is transformed into a 2D object.
In addition, in order to match the display screen with the screen of the LCD 30 of the image capture apparatus 20, the perspective coordinates are transformed into display coordinates corresponding to the display screen size (e.g., 480×640). Thus, the virtual objects constituted by the frame and the pin are displayed on the screen of the LCD 30 of the image capture apparatus 20 in accordance with the current position, attitude, azimuth, and angle of view of the image capture apparatus 20.
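The stages described above (world to view, view to perspective, perspective to display) can be sketched as follows. This is one conventional formulation, assuming a yaw/pitch camera rotation, a y-up local metric frame converted from the GPS coordinates, and a vertical angle of view; the actual transformation used by the AR display control section 56 is not specified.

```python
import numpy as np

def world_to_view(points_w, cam_pos, azimuth_deg, pitch_deg):
    """Translate world points so the camera sits at the origin, then rotate by
    the camera's azimuth (yaw about the vertical axis) and attitude (pitch)."""
    yaw, pitch = np.radians(azimuth_deg), np.radians(pitch_deg)
    r_yaw = np.array([[np.cos(yaw), 0.0, -np.sin(yaw)],
                      [0.0, 1.0, 0.0],
                      [np.sin(yaw), 0.0, np.cos(yaw)]])
    r_pitch = np.array([[1.0, 0.0, 0.0],
                        [0.0, np.cos(pitch), -np.sin(pitch)],
                        [0.0, np.sin(pitch), np.cos(pitch)]])
    return (r_pitch @ r_yaw @ (points_w - cam_pos).T).T

def view_to_display(points_v, fov_vertical_deg, width=480, height=640):
    """Perspective-project view-space points (+z forward) and map them to
    pixel coordinates on a width x height screen."""
    f = 1.0 / np.tan(np.radians(fov_vertical_deg) / 2.0)  # scale set by the angle of view
    aspect = width / height
    x_ndc = points_v[:, 0] * (f / aspect) / points_v[:, 2]  # normalized coords in [-1, 1]
    y_ndc = points_v[:, 1] * f / points_v[:, 2]
    px = (x_ndc + 1.0) * 0.5 * width
    py = (1.0 - y_ndc) * 0.5 * height  # screen y grows downward
    return np.stack([px, py], axis=1)

# Example: a virtual-object vertex 10 m ahead of and 2 m above the camera.
pts_v = world_to_view(np.array([[0.0, 2.0, 10.0]]), np.zeros(3), 0.0, 0.0)
print(view_to_display(pts_v, fov_vertical_deg=53.1))  # roughly [240, 192]
```

Note that zooming in (a smaller angle of view) increases the focal scale f, so the same vertex moves outward from the screen center, which is consistent with the frame appearing to grow as the user zooms toward the reference angle of view.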
The display transformation may be accomplished by not only the above-described viewing-pipeline processing but also other processing that can display a 3D virtual object on the screen of a 2D display device by changing the shape and the position of the 3D virtual object in accordance with the current position, attitude, azimuth, and angle of view of the image capture apparatus 20.
In the present disclosure, the processing is performed as illustrated in the flowchart in
In step S5, a determination is made as to whether or not the current position data, azimuth data, and attitude data of the image capture apparatus 20 and the reference data are obtained. When it is determined that all of the data are obtained, the process proceeds to step S6 in which the viewing-pipeline processing is performed. In the viewing-pipeline processing, display objects (e.g., the frame and the pin) are generated. In step S7, superimposition display processing is performed. The process then returns to step S1 and the processing in step S1 and the subsequent steps is repeated after a predetermined time passes.
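Put together, the loop might be outlined as follows. Every callable passed in is a hypothetical interface standing in for the sensor section 7, the storage device 57, the AR display control section 56, and the screen-display control section 58; the step numbers in the comments follow the flowchart.

```python
import time

def guidance_loop(read_sensors, find_reference, viewing_pipeline, superimpose,
                  period_s=0.1):
    """Hypothetical outline of the guide-mode loop: acquire the current data,
    check that everything needed is available (step S5), run the viewing
    pipeline (step S6), and superimpose the result (step S7)."""
    while True:
        position, azimuth, attitude = read_sensors()  # current data (sensor section 7)
        reference = find_reference(position)          # reference data (storage device 57)
        if all(v is not None for v in (position, azimuth, attitude, reference)):  # step S5
            frame, pin = viewing_pipeline(position, azimuth, attitude, reference)  # step S6
            superimpose(frame, pin)                   # step S7: draw on the through image
        time.sleep(period_s)  # repeat after a predetermined time
```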
In the present disclosure, throughout the three-step shooting procedure of moving to the shooting point, determining the direction in which the image capture apparatus is aimed, and determining the angle of view by zoom adjustment and so on, the coherent user interface can provide intuitive guidance that is easy for anyone to understand. The user of the image capture apparatus can easily recognize a shooting point within sight by starting the image capture apparatus and viewing the scenery therethrough, and can also intuitively understand the significance of moving to that point. Additionally, the frame indicating the composition for shooting is displayed in a three-dimensional manner, so that the user can easily understand the direction in which a desirable composition can be obtained at the shooting point. Thus, even a user who is not good at taking pictures can take a picture with a preferable composition.
Although the embodiment of the present disclosure has been specifically described above, the present disclosure is not limited thereto and various modifications can be made based on the technical ideas of the present disclosure. For example, in the above-described embodiment, the pin indicating a position and the frame are used as display objects. However, any other mark that can give guidance for a composition for shooting may also be used. For example, a mark, such as a + (cross) or x, may be used. In addition, the subject for shooting is not limited to stationary scenery and may be a moving subject.
The configurations, the methods, the processes, the shapes, the materials, the numeric values, and so on in the above-described embodiment may be combined together without departing from the spirit and scope of the present disclosure.
Some embodiments may comprise a computer-readable storage medium (or multiple computer-readable media) (e.g., a computer memory, one or more floppy discs, compact discs (CD), optical discs, digital video disks (DVD), magnetic tapes, flash memories, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, or other tangible computer storage media) encoded with one or more programs (e.g., a plurality of processor-executable instructions) that, when executed on one or more computers or other processors, perform methods that implement the various embodiments discussed above. As is apparent from the foregoing examples, a computer-readable storage medium may retain information for a sufficient time to provide computer-executable instructions in a non-transitory form.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed. Such ordinal terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term).
Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having,” “containing,” “involving,” and variations thereof herein, is meant to encompass the items listed thereafter and equivalents thereof as well as additional items.
The present disclosure contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2011-176498 filed in the Japan Patent Office on Aug. 12, 2011, the entire contents of which are hereby incorporated by reference. In addition, the following configurations are included in the technical scope of the present disclosure.
(1) An image capture apparatus configured to obtain at least one image including a first image, the image capture apparatus comprising:
at least one processor configured to produce a first superimposed image by superimposing first navigation information with the first image at least in part by using first composition data associated with the first image and reference composition data associated with a reference image; and
at least one display configured to display the first superimposed image,
wherein the first composition data comprises shooting angle information and/or angle-of-view information of the image capture apparatus when the image capture apparatus obtained the first image.
(2) The image capture apparatus of (1), wherein the at least one image includes a second image, and wherein the at least one processor is further configured to:
produce a second superimposed image by superimposing second navigation information with the second image at least in part by using second composition data associated with the second image and the reference composition data, wherein the second navigation information is different from the first navigation information.
(3) The image capture apparatus of (1), wherein the first composition data associated with the first image comprises position information, shooting angle information, and angle-of-view information of the image capture apparatus obtained when the image capture apparatus obtained the first image.
(4) The image capture apparatus of (1), wherein the at least one processor is configured to superimpose the first navigation information by superimposing at least one virtual object with the first image.
(5) The image capture apparatus of (4), wherein the at least one processor is configured to superimpose the at least one virtual object by superimposing at least one virtual object having a size and an orientation that are determined based at least in part on the composition data.
(6) The image capture apparatus of (2), wherein the at least one processor is configured to:
superimpose the first navigation information at least by superimposing at least one virtual object with the first image; and
superimpose the second navigation information at least by superimposing at least another virtual object with the second image,
wherein the at least another virtual object has a size and an orientation that are determined based at least in part on the second composition data.
(7) The image capture apparatus of (1), wherein the at least one display is further configured to display the reference image concurrently with the first superimposed image.
(8) The image capture apparatus of (1), wherein the at least one display is further configured to display at least one cursor concurrently with the first superimposed image, wherein the at least one cursor is displayed so as to indicate a direction toward a position from which the reference image was obtained.
(9) The image capture apparatus of (1), wherein the image capture apparatus is a smart phone.
(10) A method for assisting a user in obtaining at least one image, including a first image, with an image capture apparatus, the method comprising:
producing, with the image capture apparatus, a first superimposed image by superimposing first navigation information with the first image at least in part by using first composition data associated with the first image and reference composition data associated with a reference image; and
displaying the first superimposed image,
wherein the first composition data comprises shooting angle information and/or angle-of-view information of the image capture apparatus when the image capture apparatus obtained the first image.
(11) The method of (10), wherein the at least one image includes a second image, and wherein the method further comprises:
producing a second superimposed image by superimposing second navigation information with the second image at least in part by using second composition data associated with the second image and the reference composition data, wherein the second navigation information is different from the first navigation information.
(12) The method of (10), wherein the first composition data associated with the first image comprises position information, shooting angle information, and angle-of-view information of the image capture apparatus obtained when the image capture apparatus obtained the first image.
(13) The method of (10), wherein superimposing the first navigation information comprises superimposing at least one virtual object with the first image.
(14) The method of (13), wherein superimposing the at least one virtual object comprises superimposing at least one virtual object having a size and an orientation that are determined based at least in part on the composition data.
(15) The method of (10), further comprising displaying the reference image concurrently with the first superimposed image or displaying at least one cursor concurrently with the first superimposed image, wherein the at least one cursor is displayed so as to indicate a direction toward a position from which the reference image was obtained.
(16) At least one computer-readable storage medium storing processor-executable instructions that, when executed by an image capture apparatus, cause the image capture apparatus to perform a method for assisting a user in obtaining at least one image including a first image, the method comprising:
producing, with the image capture apparatus, a first superimposed image by superimposing first navigation information with the first image at least in part by using first composition data associated with the first image and reference composition data associated with a reference image; and
displaying the first superimposed image,
wherein the first composition data comprises shooting angle information and/or angle-of-view information of the image capture apparatus when the image capture apparatus obtained the first image.
(17) The at least one computer-readable storage medium of (16), wherein the at least one image includes a second image, and wherein the method further comprises:
producing a second superimposed image by superimposing second navigation information with the second image at least in part by using second composition data associated with the second image and the reference composition data, wherein the second navigation information is different from the first navigation information.
(18) The at least one computer-readable storage medium of (16), wherein the first composition data associated with the first image comprises position information, shooting angle information, and angle-of-view information of the image capture apparatus obtained when the image capture apparatus obtained the first image.
(19) The at least one computer-readable storage medium of (16), wherein superimposing the first navigation information comprises superimposing at least one virtual object with the first image, the at least one virtual object having a size and an orientation that are determined based at least in part on the composition data.
(20) The at least one computer-readable storage medium of (16), wherein the method further comprises displaying the reference image concurrently with the first superimposed image or displaying at least one cursor concurrently with the first superimposed image, wherein the at least one cursor is displayed so as to indicate a direction toward a position from which the reference image was obtained.