1. Field of the Invention
The present invention relates to an imaging apparatus that presents appropriate content to the user to support the user's imaging action, and an imaging method using the same.
2. Description of the Related Art
In recent years, imaging apparatuses having a function of assisting the user in setting composition in imaging have been proposed. For example, the display control apparatus disclosed in the specification of US Patent Application Publication No. 2013/0293746 is configured to generate a plurality of pieces of assistant image data with compositions different from each other, based on the original image data stored in a frame memory as a result of imaging, and cause a display to display a plurality of assistant images based on the generated assistant image data. In addition, the display control apparatus disclosed in the specification of US Patent Application Publication No. 2013/0293746 is configured to display arrows indicating moving directions of the imaging circuit required for the user to image the individual assistant images. The imaging apparatus disclosed in Japanese Patent Application KOKAI Publication No. 2013-183306 is configured to recognize the main subject and other subjects in a live image acquired as a result of imaging, and sense a composition frame to achieve a composition satisfying a predetermined composition condition based on the positional relation between the recognized main subject and the other subjects, the areas occupied by the respective subjects, and the ratio of the occupied areas thereof. The imaging apparatus disclosed in Japanese Patent Application KOKAI Publication No. 2013-183306 is also configured to display, on the live image, a movement mark indicating the moving direction of the imaging apparatus to achieve the predetermined composition condition, when a composition frame is sensed.
According to a first aspect of the invention, an imaging apparatus comprises: an imaging circuit configured to acquire image data from a subject image; a display configured to display an image based on the image data; an operation interface configured to provide an instruction to the imaging apparatus; and a controller configured to cause the display to display a live view image based on image data acquired as live view image data by the imaging circuit, cause, when a first instruction serving as the instruction is provided from the operation interface, the imaging circuit to acquire the image data as first photographed image data, generate at least one piece of model image data based on the first photographed image data, generate guide information based on the generated model image data, and cause the display to display the generated guide information together with the live view image.
According to a second aspect of the invention, an imaging method comprises: acquiring image data from a subject image with an imaging circuit; causing a display to display a live view image based on image data acquired as live view image data with the imaging circuit; causing the imaging circuit to acquire the image data as first photographed image data, when a first instruction is provided from an operation interface; generating at least one piece of model image data based on the first photographed image data; generating guide information based on the model image data; and causing the display to display the guide information together with the live view image.
Advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out hereinafter.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention, and together with the general description given above and the detailed description of the embodiments given below, serve to explain the principles of the invention.
Embodiments of the present invention will be explained hereinafter with reference to the drawings.
A first embodiment of the present invention will be explained hereinafter.
An imaging apparatus 100 illustrated in
The imaging lens 102 is an optical system to guide an imaging light beam from a subject (not illustrated) onto a light-receiving surface of the imaging circuit 114. The imaging lens 102 includes a plurality of lenses such as a focus lens, and may be formed as a zoom lens. The lens drive circuit 104 includes a motor and a drive circuit thereof and the like. The lens drive circuit 104 drives various types of lenses forming the imaging lens 102 in its optical axis direction (direction of a long-dash-short-dash line in the drawing), in accordance with control of a CPU 1281 in the controller 128.
The aperture 106 is configured to be openable and closable, to adjust the amount of the imaging light beam made incident on the imaging circuit 114 through the imaging lens 102. The aperture drive circuit 108 includes a drive circuit to drive the aperture 106. The aperture drive circuit 108 drives the aperture 106 in accordance with control of the CPU 1281 in the controller 128.
The shutter 110 is configured to change the light-receiving surface of the imaging circuit 114 to a light shielding state or an exposure state. The shutter 110 adjusts the exposure time of the imaging circuit 114. The shutter drive circuit 112 includes a drive circuit to drive the shutter 110, and drives the shutter 110 in accordance with control of the CPU 1281 in the controller 128.
The imaging circuit 114 includes the light-receiving surface on which an imaging light beam from the subject that is condensed through the imaging lens 102 is imaged. The light-receiving surface of the imaging circuit 114 is formed by arranging a plurality of pixels in a two-dimensional manner, and a light-incident side of the light-receiving surface is provided with a color filter. The imaging circuit 114 as described above converts an image (subject image) corresponding to the imaging light beam imaged on the light-receiving surface into an electrical signal (hereinafter referred to as “image signal”) corresponding to the light quantity thereof. The imaging circuit 114 subjects the image signal to analog processing such as CDS (Correlated Double Sampling) and AGC (Automatic Gain Control), in accordance with control of the CPU 1281 in the controller 128. In addition, the imaging circuit 114 converts the image signal subjected to analog processing into a digital signal (hereinafter referred to as “image data”).
The volatile memory 116 includes a work area as a storage area. The work area is a storage area provided in the volatile memory 116 to temporarily store data generated in the circuits of the imaging apparatus 100, such as image data acquired in the imaging circuit 114.
The display 118 is, for example, a liquid crystal display (LCD), and displays various images such as an image (live view image) for live view and images recorded on the recording medium 126. The display drive circuit 120 drives the display 118 based on the image data that is input from the CPU 1281 of the controller 128, to cause the display 118 to display the image.
The touch panel 122 is formed as one unitary piece on the display screen of the display 118, and detects a contact position of a user's finger or the like on the display screen. The touch panel detection circuit 124 drives the touch panel 122, and outputs a contact detection signal from the touch panel 122 to the CPU 1281 of the controller 128. The CPU 1281 detects a contact operation of the user on the display screen of the display 118 from the contact detection signal, and executes processing corresponding to the contact operation.
The recording medium 126 is, for example, a memory card, and records an image file acquired by an imaging operation. The image file is a file formed by providing a predetermined header to image data acquired by an imaging operation. The image file may record model image data and guide information, as well as the image data acquired by an imaging operation. The model image data is image data of other imaging conditions that is generated based on the image data imaged according to the user's intention. The imaging conditions herein include, for example, conditions of changing (framing) the composition. The guide information is information to guide imaging, when the user wishes to image an image similar to a model image. For example, when the model image data is generated by changing the framing conditions, the guide information is information to cause the user to recognize the moving direction of the imaging apparatus 100 required for imaging an image similar to the model image.
The controller 128 is a control circuit to control operations of the imaging apparatus 100, and includes the CPU 1281, an AE circuit 1282, an AF circuit 1283, an image processor 1284, and a motion detection circuit 1285.
The CPU 1281 controls operations of blocks outside the controller 128, such as the lens drive circuit 104, the aperture drive circuit 108, the shutter drive circuit 112, the imaging circuit 114, the display drive circuit 120, and the touch panel detection circuit 124, and controls operations of the control circuits inside the controller 128. The CPU 1281 also performs processing to generate the guide information described above. The details of the processing of generating the guide information will be explained later.
The AE circuit 1282 controls AE processing. More specifically, the AE circuit 1282 calculates subject luminance using the image data acquired by the imaging circuit 114. The CPU 1281 calculates the aperture amount (aperture value) of the aperture 106 in exposure, the opening time (shutter speed value) of the shutter 110, and the sensitivity of the imaging element, and the like, in accordance with the subject luminance.
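The relation between the metered subject luminance and the exposure settings can be pictured with the standard APEX relation EV = AV + TV. The sketch below is a minimal illustration only, not the apparatus's actual program: it assumes a hypothetical program line that fixes the aperture at f/5.6 and solves for the shutter time, and the function name and parameters are illustrative.

```python
import math

def exposure_settings(subject_luminance_ev, iso=100.0):
    """Derive a shutter time from the metered subject luminance using the
    APEX relation EV = AV + TV, where AV = 2*log2(f-number) and
    TV = log2(1 / shutter_time). A hypothetical program line fixes the
    aperture at f/5.6 and solves for the shutter time (base ISO 100)."""
    av = 2.0 * math.log2(5.6)                        # aperture value for f/5.6
    ev = subject_luminance_ev + math.log2(iso / 100.0)
    tv = ev - av                                     # time value
    return 5.6, 2.0 ** (-tv)                         # (f-number, shutter time in s)
```

For a daylight-level scene metered at EV 13, this program line yields roughly 1/250 s at f/5.6.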
The AF circuit 1283 detects the focal state in the imaging screen, and controls AF processing. More specifically, the AF circuit 1283 evaluates the contrast of the image data in accordance with an AF evaluation value calculated from the image data, and controls the lens drive circuit 104 to cause the focus lens to change to a focused state. Such AF processing is referred to as the contrast method. A phase-difference method may be used as AF processing.
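In the contrast method, the AF evaluation value mentioned above is a measure of image sharpness that peaks when the focus lens reaches the focused position. The exact metric used by the AF circuit 1283 is not specified in this description; the sketch below uses one common choice, the sum of squared horizontal gradients, for illustration.

```python
import numpy as np

def af_evaluation_value(image):
    """Contrast-method AF evaluation value: sum of squared differences
    between horizontally adjacent pixels. Sharper (better-focused)
    images yield a larger value."""
    img = np.asarray(image, dtype=np.float64)
    dx = np.diff(img, axis=1)        # horizontal gradient
    return float(np.sum(dx * dx))
```

Sweeping the focus lens while tracking this value and stopping at its maximum is the hill-climbing search typically associated with contrast AF.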
The image processor 1284 performs various types of image processing on the image data. Examples of the image processing include white balance correction, color correction, gamma (γ) correction, enlargement and reduction processing, compression, and the like. The image processor 1284 also performs expansion processing on the compressed image data. The image processor 1284 also performs processing to generate the model image data described above. The details of processing to generate model image data will be explained later.
The motion detection circuit 1285 detects movement of the imaging apparatus 100. Movement of the imaging apparatus 100 is detected by, for example, motion vector detection using image data of a plurality of frames, or based on output of the motion sensor 134.
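Motion vector detection between two frames can be illustrated with phase correlation, one well-known technique for estimating a global translation; whether the motion detection circuit 1285 uses this particular method is not specified here, so the following is a sketch under that assumption.

```python
import numpy as np

def global_motion_vector(prev, curr):
    """Estimate the dominant (dy, dx) translation between two frames by
    phase correlation: the peak of the inverse FFT of the normalized
    cross-power spectrum marks the shift."""
    f_prev, f_curr = np.fft.fft2(prev), np.fft.fft2(curr)
    cross = np.conj(f_prev) * f_curr
    cross /= np.abs(cross) + 1e-12           # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    if dy > h // 2:                          # unwrap circular shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```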
The operation interface 130 includes various types of operation interfaces operated by the user. The operation interface 130 includes, for example, an operation button 1301, a release button 1302, a mode dial 1303, and a zoom switch 1304, and the like. The operation button 1301 is provided, for example, on the back surface of the imaging apparatus 100, as illustrated in
The nonvolatile memory 132 stores program codes to execute various types of processing with the CPU 1281. The nonvolatile memory 132 also stores various control parameters, such as control parameters necessary for operations of the imaging lens 102, the aperture 106, and the imaging circuit 114, and the like, and control parameters necessary for image processing in the image processor 1284.
The motion sensor 134 includes an angular velocity sensor 1341 and a posture sensor 1342. The angular velocity sensor 1341 is, for example, a gyro sensor, and detects angular velocity around three axes generated in the imaging apparatus 100. The posture sensor 1342 is, for example, a triaxial acceleration sensor, and detects acceleration generated in the imaging apparatus 100.
The wireless communication circuit 136 is, for example, a wireless LAN communication circuit, and performs processing in communications between the imaging apparatus 100 and the external device 200. The external device 200 is, for example, a smartphone.
The following is explanation of an imaging method using the imaging apparatus 100 according to the present embodiment.
The process of
During the live view operation, the user selects the operation mode of the imaging apparatus 100 (Step S102). The operation mode is selected, for example, by an operation of the operation button 1301 or an operation of the touch panel 122. After selection of the operation mode, the CPU 1281 determines whether the composition guide mode is selected as the operation mode (Step S104).
When it is determined that the composition guide mode is selected as the operation mode in Step S104, the CPU 1281 determines whether an imaging start instruction is sensed as a first instruction (Step S106). The imaging start instruction is, for example, an operation of pushing the release button 1302, or a touch release operation using the touch panel 122. The CPU 1281 waits until an imaging start instruction is sensed in Step S106. The process may return to Step S100 when a predetermined time passes without an imaging start instruction being sensed.
With the framing as described above, the user changes other imaging conditions including imaging parameters such as the aperture and the shutter speed, and image processing parameters such as white balance setting, if necessary. When the desired imaging conditions are set, the user issues an imaging start instruction as a first instruction. For example, suppose that the user issues an imaging start instruction in the composition illustrated in
When it is determined that an imaging start instruction is sensed in Step S106, the CPU 1281 suspends the live view operation, and starts an imaging operation by the imaging circuit 114. In the imaging operation, the CPU 1281 operates the imaging circuit 114 in accordance with imaging parameters set by the user, and acquires image data (first photographed image data) as a first photographed image. Thereafter, the CPU 1281 processes the first photographed image data in the image processor 1284, and records the processed first photographed image data on the recording medium 126 (Step S108).
After the imaging operation, the CPU 1281 generates at least one piece of model image data from the first photographed image data with the image processor 1284. Thereafter, the CPU 1281 inputs the first photographed image data and the model image data to the display drive circuit 120, and displays the first photographed image based on the first photographed image data and a model image based on the model image data on the display 118 (Step S110).
The processing in Step S110 will be further explained hereinafter.
Trimming is performed on the image data in the state where the subject is disposed, for example, at intersection points P1, P2, P3, and P4 of the trisection lines. When trimming is performed in the state where the subject is disposed at the point P1, model image data i1 illustrated in
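The trimming step can be sketched as follows: a crop window is positioned so that the detected subject lands on a chosen trisection intersection point of the cropped frame, clamped to the image bounds. The function name and its parameters are illustrative only, not part of the apparatus's actual implementation.

```python
def thirds_crop(img_w, img_h, subj_x, subj_y, crop_w, crop_h, point=(1 / 3, 1 / 3)):
    """Return the (left, top) corner of a crop_w x crop_h trimming window
    positioned so the subject at (subj_x, subj_y) falls on the given
    trisection intersection point of the cropped frame. The window is
    clamped so it stays inside the img_w x img_h source image."""
    px, py = point                      # e.g. (1/3, 1/3) is the upper-left point P1
    left = subj_x - px * crop_w
    top = subj_y - py * crop_h
    left = min(max(left, 0), img_w - crop_w)
    top = min(max(top, 0), img_h - crop_h)
    return int(round(left)), int(round(top))
```

Running the same computation with the four intersection points (1/3, 1/3), (2/3, 1/3), (1/3, 2/3), and (2/3, 2/3) would yield the four trimmed model images i1 through i4.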
In the present embodiment, trisection lines are used for generating model image data. However, the method for generating model image data is not limited thereto. For example, a frame formed of generally known composition lines such as golden section lines and triangular composition lines may be used as the composition frame f. The aspect ratio of the region to be trimmed may be an aspect ratio generally used for photographs, such as 16:9, 3:2, and 1:1.
In the present embodiment, the subject is a subject in the focus position. However, the subject may be determined using a well-known feature extraction technique, such as face detection, instead of determining the subject according to the focus position.
In the present embodiment, reduced images of the model images are displayed in a tiled manner, but the method for displaying the model images is not limited thereto. For example, when the number of model images is large, only some of the model images may be displayed, and display may be switched to display of other model images when a predetermined time passes or by a user's operation. In addition, the first photographed image and the model images may be displayed such that their image regions overlap or image regions of the model images overlap. In addition, the images may be displayed one by one on the whole screen of the display 118, without reducing the images. In this case, the images are successively displayed when a predetermined time passes or by a user's operation.
In the present embodiment, a plurality of pieces of model image data are generated from the first photographed image data, but only one piece of model image data may be generated. In this case, determination in the following Step S112 is unnecessary.
When it is determined in Step S112 that a model image is selected, the CPU 1281 controls the display drive circuit 120 to display the selected model image enclosed with a thick frame, for example, as illustrated in
The processing in Step S114 will be explained hereinafter. As an example, the processing to display guide information for framing will be explained. First, the CPU 1281 resumes the live view operation, to acquire live view image data. Thereafter, the CPU 1281 calculates a matching region between the acquired live view image data and the selected model image data. A well-known technique such as template matching may be used as a method for searching the matching region between image data. After calculation of the matching region, the CPU 1281 generates guide information based on the matching region. Specifically, the CPU 1281 determines coordinates of the matching region in the live view image data corresponding to the model image data, as the guide information. Thereafter, the CPU 1281 inputs the guide information to the display drive circuit 120, to display the guide information.
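The matching-region search can be illustrated with an exhaustive sum-of-squared-differences scan, one simple form of template matching; a production implementation would normally use an optimized routine, so the helper below is a hypothetical sketch only.

```python
import numpy as np

def match_region(live, model):
    """Exhaustive template matching: slide the model image over the live
    view image and return the (top, left) of the window with the smallest
    sum of squared differences, i.e. the best-matching region."""
    live = np.asarray(live, dtype=np.float64)
    model = np.asarray(model, dtype=np.float64)
    H, W = live.shape
    h, w = model.shape
    best, best_pos = None, (0, 0)
    for top in range(H - h + 1):
        for left in range(W - w + 1):
            window = live[top:top + h, left:left + w]
            score = np.sum((window - model) ** 2)
            if best is None or score < best:
                best, best_pos = score, (top, left)
    return best_pos
```

The returned coordinates of the matching region correspond to the guide information determined by the CPU 1281 in this step.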
While the guide information G is displayed, the user finds a desired composition while moving the imaging apparatus 100 in the x, y, and z directions, with reference to the guide information G. For example,
In the same manner as the time of imaging the first photographed image, the user may change other imaging conditions including imaging parameters such as the aperture value and the shutter speed, and image processing parameters such as white balance setting, if necessary, together with change of framing.
When it is determined that no imaging start instruction is sensed in Step S118, the CPU 1281 returns the process to Step S116. When it is determined that an imaging start instruction is sensed in Step S118, the CPU 1281 starts an imaging operation with the imaging circuit 114. In the imaging operation, the CPU 1281 operates the imaging circuit 114 in accordance with imaging parameters that are set by the user, and acquires image data (second photographed image data) serving as a second photographed image. Thereafter, the CPU 1281 processes the second photographed image data in the image processor 1284, and records the processed second photographed image data as an image file associated with the first photographed image data on the recording medium 126 (Step S120). Thereafter, the CPU 1281 ends the process in
In Step S104, when it is determined that the composition guide mode is not set as the operation mode, that is, the normal imaging mode is set, the CPU 1281 performs processing of the normal imaging mode (Step S122). The processing of the normal imaging mode is the same as a conventional imaging mode, and will be briefly explained hereinafter. Specifically, when the user issues an imaging start instruction, the CPU 1281 operates the imaging circuit 114 in accordance with imaging parameters that are set by the user, to acquire photographed image data. Thereafter, the CPU 1281 processes the photographed image data in the image processor 1284, and records the processed photographed image data as an image file on the recording medium 126. After the processing of the normal imaging mode, the CPU 1281 ends the process of
As described above, according to the first embodiment, model images generated from the first photographed image that is intentionally imaged by the user are presented to the user. This structure enables the user to obtain a perception with respect to framing, for example, by comparing the first photographed image with the model images. In addition, the guide information corresponding to the model image selected by the user is displayed together with the live view image on the display 118. This structure enables the user to reflect the perception obtained with the model image in the next photographing. As a result, the user's photographing technique can be improved.
The first embodiment described above illustrates the example of generating guide information for framing based on the matching region between image data. By contrast, guide information for framing may be generated, by detecting the change amount of the posture of the imaging apparatus 100 during display of the live view image before imaging of the second photographed image with the motion sensor 134. The guide information is obtained by converting the change amount of the posture detected with the motion sensor 134 into a movement amount on the image.
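Under a pinhole-camera assumption, the conversion from a detected posture change to a movement amount on the image reduces to shift = f · tan(Δθ), scaled by the pixel pitch of the imaging circuit. The parameter values in the sketch below are illustrative assumptions, not values taken from the apparatus.

```python
import math

def angle_to_pixel_shift(delta_deg, focal_mm, pixel_pitch_um):
    """Convert a pitch/yaw change of the apparatus (in degrees) into the
    corresponding image shift in pixels, using the pinhole-camera model
    shift = f * tan(delta). focal_mm is the focal length and
    pixel_pitch_um the sensor pixel pitch; both are illustrative inputs."""
    shift_mm = focal_mm * math.tan(math.radians(delta_deg))
    return shift_mm / (pixel_pitch_um / 1000.0)   # mm -> pixels
```

For instance, with an assumed 50 mm focal length and 4 µm pixel pitch, a 1-degree rotation corresponds to an on-image movement of roughly 220 pixels.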
In the first embodiment described above, a frame image indicating the region of the model image is displayed as the guide information for framing. By contrast, instead of the frame image, for example, a frame image G may be displayed in only corner portions of the region of the model image, as illustrated in
The guide information may be displayed only for a predetermined time from display of the live view image before imaging of the second photographed image.
In addition, the model image data generated in the first embodiment described above is image data with a composition different from that of the first photographed image data. However, the method for generating the model image data is not limited thereto. For example, the model image data may be generated also with a change relating to picture taking, such as white balance, as well as change of composition.
In the first embodiment described above, the model images are displayed on the display 118 of the imaging apparatus 100.
By contrast, in Step S110 of
Next, a second embodiment of the present invention will be explained hereinafter. Explanation of constituent elements of the second embodiment that are the same as those in the first embodiment is omitted. Specifically, the configuration of the imaging apparatus 100 of the second embodiment is the same as that of the first embodiment, and explanation thereof is omitted. In addition, because the process of the flowchart illustrated in
In the second embodiment, model image data for change of angle (rotation in the pitch direction) for the subject is generated.
The model image data as illustrated in
When a model image is selected, guide information is displayed on the display 118 also in the second embodiment, in the same manner as the first embodiment.
As described above, the present embodiment enables the user to obtain a perception with respect to change of angle, by comparing the first photographed image with the model images. In addition, because the guide information is displayed on the display 118 as in the present embodiment, the user is enabled to recognize the angle to photograph an image equivalent to the model image, as a bodily sensation, while viewing the live view image.
As the guide information, a rotation axis G2 of rotation to acquire the model image as illustrated in
The embodiment described above illustrates generation of model image data based on change of angle only in the pitch direction. By contrast, change of angle other than the pitch direction, that is, change of angle in the yaw direction may be considered.
In addition, the method for indicating the degree of change of angle is not limited to change in shape of the rectangular image. For example, a normal line of a model image surface for the live view image may be displayed, or a rotation arrow corresponding to the angle change amount may be displayed. As another example, as illustrated in
Number | Date | Country | Kind
---|---|---|---
2014-134529 | Jun 2014 | JP | national
This application is a Continuation application of PCT Application No. PCT/JP2015/063874, filed May 14, 2015 and based upon and claiming the benefit of priority from the prior Japanese Patent Application No. 2014-134529, filed Jun. 30, 2014, the entire contents of both of which are incorporated herein by reference.
Number | Date | Country |
---|---|---|---
Parent | PCT/JP2015/063874 | May 2015 | US
Child | 15391665 | | US