The technology of the present disclosure relates to an imaging support device, an imaging apparatus, an imaging support method, and a program.
JP2007-006033A discloses a target decision device that selects a face to be processed from among a plurality of faces included in an image. The target decision device disclosed in JP2007-006033A includes a face detection unit that detects the face from the image, a face information recording unit that records the face detected in the past by the face detection unit and a detection history related to the detection in association with each other, and a face selection unit that selects the face to be processed from the faces included in the image based on the detection history.
JP2009-252069A discloses an image processing device comprising a face recognition dictionary, an image acquisition unit, a face region detection unit, a feature extraction unit, a discrimination unit, a face recognition dictionary correction unit, and a classification unit. In the face recognition dictionary, a face feature for discriminating whether or not persons are the same person is registered for each person. The image acquisition unit acquires an image including a person. The face region detection unit detects a face region from the image acquired by the image acquisition unit. The feature extraction unit extracts the face feature in the face region based on the face region detected by the face region detection unit. The discrimination unit discriminates whether or not the face feature of the same person is registered in the face recognition dictionary based on the face feature extracted by the feature extraction unit and the face feature registered in the face recognition dictionary. The face recognition dictionary correction unit corrects the registered face feature based on the extracted face feature in a case in which it is discriminated by the discrimination unit that the face feature of the same person is registered in the face recognition dictionary, and registers the extracted face feature as a face feature of a new person in a case in which it is discriminated by the discrimination unit that the face feature of the same person is not registered in the face recognition dictionary. The classification unit classifies the face of the person in the image acquired by the image acquisition unit as a known face in a case in which it is discriminated by the discrimination unit that the face feature of the same person is registered in the face recognition dictionary, and classifies the face of the person in the image acquired by the image acquisition unit as an unknown face in a case in which it is discriminated by the discrimination unit that the face feature of the same person is not registered in the face recognition dictionary.
JP2012-099943A discloses an image processing device comprising a storage unit that stores face image data, a face detection unit that detects a face from a video signal, a face recognition unit that determines whether or not the face detected by the face detection unit is included in the face image data stored in the storage unit, and an image processing unit. The image processing unit performs image processing of making regions of the face determined to be included in the face image data and the face determined not to be included have higher image quality than other regions, in a case in which the face determined to be included in the face image data stored in the storage unit by the face recognition unit and the face determined not to be included in the face image data are detected from the video signal.
JP2009-003012A discloses an imaging apparatus including an imaging lens which drives at least a part of a plurality of lenses arranged along an optical axis direction along the optical axis direction and of which a lens focal position can be changed, an image acquisition unit, a subject detection unit, a focus evaluation value calculation unit, a selection unit, and a recording unit. The image acquisition unit acquires a plurality of image data by continuously executing imaging while changing the lens focal position of the imaging lens. The subject detection unit detects a main subject region in accordance with a movement state of the subject between the plurality of image data acquired by the image acquisition unit. The focus evaluation value calculation unit calculates a focus evaluation value of the main subject region obtained by the subject detection unit for each of the plurality of image data. The selection unit selects at least one of the plurality of image data based on the focus evaluation value obtained by the focus evaluation value calculation unit. The recording unit records the image data selected by the selection unit in a recording medium.
One embodiment according to the technology of the present disclosure provides an imaging support device, an imaging apparatus, an imaging support method, and a program capable of supporting imaging by an imaging apparatus in accordance with a frequency at which a feature of a subject is classified into a category.
A first aspect according to the technology of the present disclosure relates to an imaging support device comprising a processor, and a memory connected to or built in the processor, in which the processor acquires frequency information indicating a frequency of a feature of a subject specified from a captured image obtained by imaging with an imaging apparatus, the feature being classified into a category based on the feature, and performs support processing of supporting the imaging with the imaging apparatus based on the frequency information.
A second aspect according to the technology of the present disclosure relates to the imaging support device according to the first aspect, in which the category is categorized into a plurality of categories including at least one target category, the target category is a category determined based on the frequency information, and the support processing is processing including processing of supporting the imaging for a target category subject having the feature belonging to the target category.
A third aspect according to the technology of the present disclosure relates to the imaging support device according to the second aspect, in which the support processing is processing including display processing of performing display for recommending to image the target category subject.
A fourth aspect according to the technology of the present disclosure relates to the imaging support device according to the third aspect, in which the display processing is processing of displaying an image for display obtained by the imaging with the imaging apparatus on a display and displaying a target category subject image indicating the target category subject in the image for display in an aspect that is distinguishable from other image regions.
A fifth aspect according to the technology of the present disclosure relates to the imaging support device according to any one of the second to fourth aspects, in which the processor detects the target category subject based on an imaging result of the imaging apparatus, and acquires an image including an image corresponding to the target category subject on a condition that the target category subject is detected.
A sixth aspect according to the technology of the present disclosure relates to the imaging support device according to any one of the second to fifth aspects, in which the processor displays an object indicating a designated imaging range determined in accordance with a given instruction from an outside and an object indicating the target category subject in different display aspects.
A seventh aspect according to the technology of the present disclosure relates to the imaging support device according to any one of the second to sixth aspects, in which, in a case in which a degree of difference between a first imaging condition given from an outside and a second imaging condition given to the target category subject is equal to or larger than a predetermined degree of difference, the processor performs predetermined processing.
An eighth aspect according to the technology of the present disclosure relates to the imaging support device according to any one of the second to seventh aspects, in which the target category is a low-frequency category having a relatively low frequency among the plurality of categories.
A ninth aspect according to the technology of the present disclosure relates to the imaging support device according to any one of the second to eighth aspects, in which, in a case in which the target category subject is imaged by the imaging apparatus, the target category is a category into which the feature for the target category subject is classified, the category being determined in accordance with a state of the target category subject.
A tenth aspect according to the technology of the present disclosure relates to the imaging support device according to any one of the second to ninth aspects, in which, in a case in which a plurality of objects are imaged by the imaging apparatus, the target category is an object target category in which each of the plurality of objects themselves is able to be specified.
An eleventh aspect according to the technology of the present disclosure relates to the imaging support device according to any one of the first to tenth aspects, in which the category is created for at least one unit.
A twelfth aspect according to the technology of the present disclosure relates to the imaging support device according to the eleventh aspect, in which one of the units is a period.
A thirteenth aspect according to the technology of the present disclosure relates to the imaging support device according to the eleventh or twelfth aspect, in which one of the units is a position.
A fourteenth aspect according to the technology of the present disclosure relates to the imaging support device according to any one of the first to thirteenth aspects, in which the processor causes a classifier to classify the feature, and in a case in which a scene to be imaged by the imaging apparatus matches a specific scene, the classifier classifies the feature.
A fifteenth aspect according to the technology of the present disclosure relates to the imaging support device according to the fourteenth aspect, in which the specific scene is a scene imaged in the past.
A sixteenth aspect according to the technology of the present disclosure relates to the imaging support device according to any one of the first to fifteenth aspects, in which the support processing is processing including processing of displaying the frequency information.
A seventeenth aspect according to the technology of the present disclosure relates to the imaging support device according to the sixteenth aspect, in which the support processing is processing including processing of, in a case in which the frequency information is designated by a reception device in a state in which the frequency information is displayed, supporting the imaging related to the category corresponding to the designated frequency information.
An eighteenth aspect according to the technology of the present disclosure relates to an imaging support device comprising a processor, and a memory connected to or built in the processor, in which the processor acquires frequency information indicating a frequency of a captured image obtained by imaging with an imaging apparatus, the captured image being classified into a category based on a feature of a subject included in the captured image, and performs support processing of supporting the imaging with the imaging apparatus based on the frequency information.
A nineteenth aspect according to the technology of the present disclosure relates to an imaging apparatus comprising the imaging support device according to any one of the first to eighteenth aspects, and an image sensor, in which the processor supports the imaging with the image sensor by performing the support processing.
A twentieth aspect according to the technology of the present disclosure relates to an imaging support method comprising acquiring frequency information indicating a frequency of a feature of a subject specified from a captured image obtained by imaging with an imaging apparatus, the feature being classified into a category based on the feature, and performing support processing of supporting the imaging with the imaging apparatus based on the frequency information.
A twenty-first aspect according to the technology of the present disclosure relates to an imaging support method comprising acquiring frequency information indicating a frequency of a captured image obtained by imaging with an imaging apparatus, the captured image being classified into a category based on a feature of a subject specified from the captured image, and performing support processing of supporting the imaging with the imaging apparatus based on the frequency information.
A twenty-second aspect according to the technology of the present disclosure relates to a program causing a computer to execute a process comprising acquiring frequency information indicating a frequency of a feature of a subject specified from a captured image obtained by imaging with an imaging apparatus, the feature being classified into a category based on the feature, and performing support processing of supporting the imaging with the imaging apparatus based on the frequency information.
A twenty-third aspect according to the technology of the present disclosure relates to a program causing a computer to execute a process comprising acquiring frequency information indicating a frequency of a captured image obtained by imaging with an imaging apparatus, the captured image being classified into a category based on a feature of a subject specified from the captured image, and performing support processing of supporting the imaging with the imaging apparatus based on the frequency information.
Exemplary embodiments of the technology of the disclosure will be described in detail based on the following figures, wherein:
In the following, an example of an embodiment of an imaging support device, an imaging apparatus, an imaging support method, and a program according to the technology of the present disclosure will be described with reference to accompanying drawings.
First, the terms used in the following description will be described.
CPU refers to an abbreviation of “Central Processing Unit”. RAM refers to an abbreviation of “Random Access Memory”. IC refers to an abbreviation of “Integrated Circuit”. ASIC refers to an abbreviation of “Application Specific Integrated Circuit”. PLD refers to an abbreviation of “Programmable Logic Device”. FPGA refers to an abbreviation of “Field-Programmable Gate Array”. SoC refers to an abbreviation of “System-on-a-chip”. SSD refers to an abbreviation of “Solid State Drive”. USB refers to an abbreviation of “Universal Serial Bus”. HDD refers to an abbreviation of “Hard Disk Drive”. EEPROM refers to an abbreviation of “Electrically Erasable and Programmable Read Only Memory”. EL refers to an abbreviation of “Electro-Luminescence”. I/F refers to an abbreviation of “Interface”. UI refers to an abbreviation of “User Interface”. TOF refers to an abbreviation of “Time of Flight”. fps refers to an abbreviation of “frame per second”. MF refers to an abbreviation of “Manual Focus”. AF refers to an abbreviation of “Auto Focus”. CMOS refers to an abbreviation of “Complementary Metal Oxide Semiconductor”. CCD refers to an abbreviation of “Charge-Coupled Device”. RTC refers to an abbreviation of “real time clock”. GPS refers to an abbreviation of “global positioning system”. LAN refers to an abbreviation of “local area network”. WAN refers to an abbreviation of “wide area network”. GNSS is an abbreviation of “global navigation satellite system”. In the following, for convenience of description, a CPU is described as an example of a “processor” according to the technology of the present disclosure. However, the “processor” according to the technology of the present disclosure may be a combination of a plurality of processing devices, such as the CPU and a GPU. In a case in which the combination of the CPU and the GPU is applied as an example of the “processor” according to the technology of the present disclosure, the GPU is operated under the control of the CPU and is responsible for executing the image processing.
In the description of the present specification, “vertical” refers to the verticality in the sense of including an error generally allowed in the technical field to which the technology of the present disclosure belongs, that is, an error to the extent that it does not contradict the purpose of the technology of the present disclosure, in addition to the exact verticality. In the description of the present specification, “match” refers to the match in the sense of including an error generally allowed in the technical field to which the technology of the present disclosure belongs, that is, an error to the extent that it does not contradict the purpose of the technology of the present disclosure, in addition to the exact match.
As an example, as shown in
An image sensor 16 is provided in the imaging apparatus body 12. The image sensor 16 is a CMOS image sensor. The image sensor 16 images an imaging region including a subject group. In a case in which the interchangeable lens 14 is mounted on the imaging apparatus body 12, subject light indicating a subject is transmitted through the interchangeable lens 14 and imaged on the image sensor 16, so that image data indicating the image of the subject is generated by the image sensor 16.
It should be noted that, in the present embodiment, the CMOS image sensor is described as the image sensor 16, but the technology of the present disclosure is not limited to this. For example, the technology of the present disclosure is established even in a case in which the image sensor 16 is another type of image sensor, such as a CCD image sensor.
A release button 18 and a dial 20 are provided on an upper surface of the imaging apparatus body 12. The dial 20 is operated in a case of setting an operation mode of an imaging system, an operation mode of a playback system, and the like, and by operating the dial 20, the imaging apparatus 10 selectively sets an imaging mode and a playback mode as the operation modes.
The release button 18 functions as an imaging preparation instruction unit and an imaging instruction unit, and a two-stage push operation consisting of an imaging preparation instruction state and an imaging instruction state can be detected. For example, the imaging preparation instruction state refers to a state in which the release button 18 is pushed to an intermediate position (half push position) from a standby position, and the imaging instruction state refers to a state in which the release button 18 is pushed to a final push position (full push position) beyond the intermediate position. It should be noted that, in the following, the “state in which the release button 18 is pushed to the half push position from the standby position” will be referred to as a “half push state”, and the “state in which the release button 18 is pushed to the full push position from the standby position” will be referred to as a “full push state”. Depending on the configuration of the imaging apparatus 10, the imaging preparation instruction state may be a state in which a finger of a user is in contact with the release button 18, and the imaging instruction state may be a state in which the finger of the user who performs the operation proceeds from the state of being in contact with the release button 18 to a state of being separated from the release button 18.
As an example, as shown in
The touch panel display 22 comprises a display 26 and a touch panel 28 (see also
The display 26 displays an image and/or text information. The display 26 is used for the imaging for the live view image, that is, for displaying the live view image obtained by performing the continuous imaging in a case in which the imaging apparatus 10 is in the imaging mode. The imaging for the live view image is performed in accordance with, for example, a frame rate of 60 fps. 60 fps is merely an example, and a frame rate lower than 60 fps or a frame rate exceeding 60 fps may be used.
Here, the “live view image” refers to a video for display based on the image data obtained by the imaging performed by the image sensor 16. The live view image is also generally referred to as a live preview image. It should be noted that the live view image is an example of an “image for display” according to the technology of the present disclosure.
The display 26 is also used for displaying the still picture obtained by performing the imaging for the still picture in a case in which the instruction for the imaging for the still picture is given to the imaging apparatus 10 via the release button 18. Further, the display 26 is used for displaying a playback image and displaying a menu screen and the like in a case in which the imaging apparatus 10 is in the playback mode.
The touch panel 28 is a transmissive touch panel, and is superimposed on a surface of a display region of the display 26. The touch panel 28 receives an instruction from the user by detecting a contact of an indicator, such as a finger or a stylus pen. It should be noted that, in the following, for convenience of description, a state in which the user turns on the soft key for starting the imaging via the touch panel 28 is included in the “full push state” described above.
In addition, in the present embodiment, examples of the touch panel display 22 include an out-cell type touch panel display in which the touch panel 28 is superimposed on the surface of the display region of the display 26, but this is merely an example. For example, the on-cell type or in-cell type touch panel display can be applied as the touch panel display 22.
The instruction key 24 receives various instructions. Here, the “various instructions” include, for example, an instruction for displaying a menu screen on which various menus can be selected, an instruction for selecting one or a plurality of menus, an instruction for confirming a selected content, an instruction for deleting the selected content, and instructions for zooming in, zooming out, and frame advance. In addition, these instructions may be given by the touch panel 28.
As an example, as shown in
A color filter is disposed on the photodiode PD. The color filters include a green (G) filter corresponding to a G wavelength range which most contributes to obtaining a brightness signal, a red (R) filter corresponding to an R wavelength range, and a blue (B) filter corresponding to a B wavelength range.
Generally, the non-phase difference pixel N is also referred to as a normal pixel. The photoelectric conversion element 30 has three types of photosensitive pixels of R pixel, G pixel, and B pixel, as the non-phase difference pixel N. The R pixel, the G pixel, the B pixel, and the phase difference pixel P are regularly disposed with a predetermined periodicity in a row direction (for example, a horizontal direction in a state in which a bottom surface of the imaging apparatus body 12 is in contact with a horizontal surface) and a column direction (for example, a vertical direction which is a direction vertical to the horizontal direction). The R pixel is a pixel corresponding to the photodiode PD in which the R filter is disposed, the G pixel and the phase difference pixel P are pixels corresponding to the photodiode PD in which the G filter is disposed, and the B pixel is a pixel corresponding to the photodiode PD in which the B filter is disposed.
A plurality of phase difference pixel lines 32A and a plurality of non-phase difference pixel lines 32B are arranged on the light-receiving surface 30A. The phase difference pixel line 32A is a horizontal line including the phase difference pixels P. Specifically, the phase difference pixel line 32A is the horizontal line in which the phase difference pixels P and the non-phase difference pixels N are mixed. The non-phase difference pixel line 32B is a horizontal line including only a plurality of non-phase difference pixels N.
On the light-receiving surface 30A, the phase difference pixel lines 32A and the non-phase difference pixel lines 32B for a predetermined number of lines are alternately disposed along the column direction. For example, the “predetermined number of lines” used herein refers to two lines. It should be noted that, here, the predetermined number of lines is described as two lines, but the technology of the present disclosure is not limited to this, and the predetermined number of lines may be three or more lines, a dozen lines, a few tens of lines, a few hundred lines, and the like.
The phase difference pixel lines 32A are arranged in the column direction by skipping two lines from the first row to the last row. A part of the pixels of the phase difference pixel lines 32A is the phase difference pixel P. Specifically, the phase difference pixel line 32A is a horizontal line in which the phase difference pixels P and the non-phase difference pixels N are periodically arranged. The phase difference pixels P are roughly divided into a first phase difference pixel L and a second phase difference pixel R. In the phase difference pixel lines 32A, the first phase difference pixels L and the second phase difference pixels R are alternately disposed at intervals of several pixels in a line direction as the G pixels.
The first phase difference pixels L and the second phase difference pixels R are disposed to be alternately present in the column direction. In the example shown in
The photoelectric conversion element 30 is divided into two regions. That is, the photoelectric conversion element 30 includes a non-phase difference pixel divided region 30N and a phase difference pixel divided region 30P. The phase difference pixel divided region 30P is a phase difference pixel group composed of a plurality of phase difference pixels P, and receives the subject light to generate phase difference image data as the electric signal in accordance with the light-receiving amount. The phase difference image data is used, for example, for distance measurement. The non-phase difference pixel divided region 30N is a non-phase difference pixel group composed of the plurality of non-phase difference pixels N, and receives the subject light to generate non-phase difference image data as the electric signal in accordance with the light-receiving amount. The non-phase difference image data is displayed on the display 26 (see
As an example, as shown in
The second phase difference pixel R comprises a light shielding member 34B, the microlens 36, and the photodiode PD. In the second phase difference pixel R, the light shielding member 34B is disposed between the microlens 36 and the light-receiving surface of the photodiode PD. A right half (right side in a case of facing the subject from the light-receiving surface (in other words, a left side in a case of facing the light-receiving surface from the subject)) of the light-receiving surface of the photodiode PD in the row direction is shielded against the light by the light shielding member 34B. It should be noted that, in the following, for convenience of description, in a case in which the distinction is not needed, the light shielding members 34A and 34B are referred to as a “light shielding member” without designating the reference numeral.
The interchangeable lens 14 comprises an imaging lens 40. Luminous flux passing through an exit pupil of the imaging lens 40 is roughly divided into left region passing light 38L and right region passing light 38R. The left region passing light 38L refers to the left half luminous flux of the luminous flux passing through the exit pupil of the imaging lens 40 in a case of facing the subject side from the phase difference pixel P side. The right region passing light 38R refers to the right half luminous flux of the luminous flux passing through the exit pupil of the imaging lens 40 in a case of facing the subject side from the phase difference pixel P side. The luminous flux passing through the exit pupil of the imaging lens 40 is divided into the right and left by the microlens 36, the light shielding member 34A, and the light shielding member 34B functioning as a pupil division unit. The first phase difference pixel L receives the left region passing light 38L as the subject light, and the second phase difference pixel R receives the right region passing light 38R as the subject light. As a result, first phase difference image data corresponding to the subject image corresponding to the left region passing light 38L and second phase difference image data corresponding to the subject image corresponding to the right region passing light 38R are generated by the photoelectric conversion element 30.
In the imaging apparatus 10, for example, in the same phase difference pixel line 32A, the distance to the subject, that is, a subject distance, is measured based on a deviation amount α (hereinafter, also simply referred to as a “deviation amount α”) between the first phase difference image data for one line and the second phase difference image data for one line. It should be noted that, since a method of deriving the subject distance from the deviation amount α is a known technology, the detailed description thereof will be omitted here.
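Although the derivation of the subject distance from the deviation amount α is a known technology and is not detailed in the present disclosure, the following is a minimal sketch, in Python, of one common way of estimating such a deviation amount from the left and right phase difference signals of one line by a sum-of-absolute-differences shift search. The function names, the search range, and the conversion helper mentioned in the final comment are illustrative assumptions and are not part of the disclosed method.

```python
import numpy as np

def estimate_deviation(left_line: np.ndarray, right_line: np.ndarray, max_shift: int = 32) -> int:
    """Estimate the deviation amount (in pixels) between the first phase difference
    signal and the second phase difference signal of one line by searching the shift
    that minimizes the mean absolute difference. Sub-pixel refinement is omitted."""
    left_line = np.asarray(left_line, dtype=float)
    right_line = np.asarray(right_line, dtype=float)
    n = len(left_line)
    best_shift, best_cost = 0, float("inf")
    for shift in range(-max_shift, max_shift + 1):
        lo, hi = max(0, shift), min(n, n + shift)
        if hi - lo < n // 2:  # require sufficient overlap between the two signals
            continue
        cost = np.abs(left_line[lo:hi] - right_line[lo - shift:hi - shift]).mean()
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    return best_shift

# The subject distance would then be obtained from the deviation amount through an
# optics-dependent conversion, e.g. subject_distance = deviation_to_distance(best_shift)
# (deviation_to_distance is a hypothetical helper, not defined here).
```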
As an example, as shown in
As an example, as shown in
In addition, the interchangeable lens 14 comprises a slide mechanism 42, a motor 44, and a motor 46. The focus lens 40B is attached to the slide mechanism 42 in a slidable manner along the optical axis OA. In addition, the motor 44 is connected to the slide mechanism 42, and the slide mechanism 42 moves the focus lens 40B along the optical axis OA by receiving power of the motor 44 to operate. The stop 40C is a stop with an aperture having a variable size. The motor 46 is connected to the stop 40C, and the stop 40C adjusts exposure by receiving the power of the motor 46 to operate. It should be noted that a structure and/or an operation method of the interchangeable lens 14 can be changed as needed.
The motors 44 and 46 are connected to the imaging apparatus body 12 via a mount (not shown), and driving of the motors 44 and 46 is controlled in accordance with a command from the imaging apparatus body 12. It should be noted that, in the present embodiment, stepping motors are adopted as an example of the motors 44 and 46. Therefore, the motors 44 and 46 operate in synchronization with a pulse signal in accordance with the command from the imaging apparatus body 12. In addition, in the example shown in
In the imaging apparatus 10, in a case of the imaging mode, an MF mode and an AF mode are selectively set in accordance with an instruction given to the imaging apparatus body 12. The MF mode is an operation mode for manually focusing. In the MF mode, for example, in a case in which a focus ring of the interchangeable lens 14 is operated by the user, the focus lens 40B is moved along the optical axis OA with a movement amount corresponding to an operation amount of the focus ring to adjust the focus.
In the AF mode, the imaging apparatus body 12 calculates a focus position in accordance with the subject distance, and moves the focus lens 40B toward the calculated focus position to adjust the focus. Here, the “focus position” refers to a position of the focus lens 40B on the optical axis OA in an in-focus state.
It should be noted that, in the following, for convenience of description, the control of aligning the focus lens 40B with the focus position is also referred to as an “AF control”. In addition, in the following, for convenience of description, the calculation of the focus position is also referred to as an “AF calculation”. In the imaging apparatus 10, a CPU 48A described below performs the AF calculation to detect the focus for a plurality of subjects. Moreover, the CPU 48A described below performs focusing on the subject based on a result of the AF calculation, that is, a detection result of the focus.
The imaging apparatus body 12 comprises the image sensor 16, a controller 48, an image memory 50, a UI system device 52, an external I/F 54, a photoelectric conversion element driver 56, a motor driver 58, a motor driver 60, a mechanical shutter driver 62, and a mechanical shutter actuator 64. In addition, the imaging apparatus body 12 comprises a mechanical shutter 72. In addition, the image sensor 16 comprises a signal processing circuit 74.
An input/output interface 70 is connected to the controller 48, the image memory 50, the UI system device 52, the external I/F 54, the photoelectric conversion element driver 56, the motor driver 58, the motor driver 60, the mechanical shutter driver 62, and the signal processing circuit 74.
The controller 48 comprises the CPU 48A, a storage 48B, and a memory 48C. The CPU 48A is an example of a “processor” according to the technology of the present disclosure, the memory 48C is an example of a “memory” according to the technology of the present disclosure, and the controller 48 is an example of an “imaging support device” and a “computer” according to the technology of the present disclosure.
The CPU 48A, the storage 48B, and the memory 48C are connected via a bus 76, and the bus 76 is connected to the input/output interface 70.
It should be noted that, in the example shown in
Various parameters and various programs are stored in the storage 48B. The storage 48B is a non-volatile storage device. Here, an EEPROM is adopted as an example of the storage 48B. The EEPROM is merely an example, and an HDD and/or SSD or the like may be applied as the storage 48B instead of the EEPROM or together with the EEPROM. In addition, the memory 48C transitorily stores various pieces of information and is used as a work memory. Examples of the memory 48C include a RAM, but the technology of the present disclosure is not limited to this, and other types of storage devices may be used.
Various programs are stored in the storage 48B. The CPU 48A reads out a needed program from the storage 48B, and executes the read out program on the memory 48C. The CPU 48A controls the entire imaging apparatus body 12 in accordance with the program executed on the memory 48C. In the example shown in
The photoelectric conversion element driver 56 is connected to the photoelectric conversion element 30. The photoelectric conversion element driver 56 supplies an imaging timing signal for defining a timing of the imaging performed by the photoelectric conversion element 30 to the photoelectric conversion element 30 in accordance with an instruction from the CPU 48A. The photoelectric conversion element 30 performs reset, exposure, and output of the electric signal in response to the imaging timing signal supplied from the photoelectric conversion element driver 56. Examples of the imaging timing signal include a vertical synchronizing signal and a horizontal synchronizing signal.
In a case in which the interchangeable lens 14 is mounted on the imaging apparatus body 12, the subject light incident on the imaging lens 40 is imaged on the light-receiving surface 30A by the imaging lens 40. Under the control of the photoelectric conversion element driver 56, the photoelectric conversion element 30 photoelectrically converts the subject light received by the light-receiving surface 30A, and outputs the electric signal in accordance with the light amount of the subject light to the signal processing circuit 74 as analog image data indicating the subject light. Specifically, the signal processing circuit 74 reads out the analog image data from the photoelectric conversion element 30 in one frame unit and for each horizontal line by an exposure sequential read-out method. The analog image data is roughly divided into analog phase difference image data generated by the phase difference pixel P and analog non-phase difference image data generated by the non-phase difference pixel N.
The signal processing circuit 74 generates digital image data by digitizing the analog image data input from the photoelectric conversion element 30. The signal processing circuit 74 comprises a non-phase difference image data processing circuit 74A and a phase difference image data processing circuit 74B. The non-phase difference image data processing circuit 74A generates digital non-phase difference image data by digitizing the analog non-phase difference image data. The phase difference image data processing circuit 74B generates digital phase difference image data by digitizing the analog phase difference image data.
It should be noted that, in the following, for convenience of description, in a case in which the distinction is not needed, the digital non-phase difference image data and the digital phase difference image data are referred to as “digital image data”. In addition, in the following, for convenience of description, in a case in which the distinction is not needed, the analog image data and the digital image data are referred to as “image data”.
The mechanical shutter 72 is a focal plane shutter and is disposed between the stop 40C and the light-receiving surface 30A. The mechanical shutter 72 comprises a front curtain (not shown) and a rear curtain (not shown). Each of the front curtain and the rear curtain comprises a plurality of blades. The front curtain is disposed on the subject side with respect to the rear curtain.
The mechanical shutter actuator 64 is an actuator including a front curtain solenoid (not shown) and a rear curtain solenoid (not shown). The front curtain solenoid is a drive source for the front curtain, and is mechanically connected to the front curtain. The rear curtain solenoid is a drive source for the rear curtain, and is mechanically connected to the rear curtain. The mechanical shutter driver 62 controls the mechanical shutter actuator 64 in accordance with an instruction from the CPU 48A.
The front curtain solenoid selectively performs winding and pulling down of the front curtain by generating power under the control of the mechanical shutter driver 62 and giving the generated power to the front curtain. The rear curtain solenoid selectively performs winding and pulling down of the rear curtain by generating power under the control of the mechanical shutter driver 62 and giving the generated power to the rear curtain. In the imaging apparatus 10, the opening and closing of the front curtain and the opening and closing of the rear curtain are controlled by the CPU 48A, so that an exposure amount with respect to the photoelectric conversion element 30 is controlled.
In the imaging apparatus 10, the imaging for the live view image and the imaging for a recording image for recording the still picture and/or the video are performed by the exposure sequential read-out method (rolling shutter method). The image sensor 16 has an electronic shutter function, and the imaging for the live view image is realized by activating the electronic shutter function while keeping the mechanical shutter 72 in the fully opened state without operating it.
On the other hand, imaging accompanied by the main exposure, that is, the imaging for the still picture (hereinafter, also referred to as “main exposure imaging”) is realized by activating the electronic shutter function and operating the mechanical shutter 72 such that the mechanical shutter 72 transitions from the front curtain closed state to the rear curtain closed state. It should be noted that the image obtained by performing the main exposure imaging by the imaging apparatus 10 (hereinafter, also referred to as a “main exposure image”) is an example of a “captured image” according to the technology of the present disclosure.
The digital image data is stored in the image memory 50. That is, the non-phase difference image data processing circuit 74A stores the non-phase difference image data in the image memory 50, and the phase difference image data processing circuit 74B stores the phase difference image data in the image memory 50. The CPU 48A acquires the digital image data from the image memory 50 and executes various pieces of processing by using the acquired digital image data.
The UI system device 52 comprises the display 26, and the CPU 48A displays various pieces of information on the display 26. In addition, the UI system device 52 comprises a reception device 80. The reception device 80 comprises the touch panel 28 and a hard key unit 82. The hard key unit 82 is a plurality of hard keys including the instruction key 24 (see
The external I/F 54 controls the exchange of various pieces of information with the device (hereinafter, also referred to as an “external device”) that is present outside the imaging apparatus 10. Examples of the external I/F 54 include a USB interface. External devices (not shown), such as a smart device, a personal computer, a server, a USB memory, a memory card, and/or a printer, are directly or indirectly connected to the USB interface.
The motor driver 58 is connected to the motor 44 and controls the motor 44 in accordance with the instruction from the CPU 48A. The position of the focus lens 40B on the optical axis OA is controlled via the slide mechanism 42 by controlling the motor 44. The focus lens 40B is moved in accordance with the instruction from the CPU 48A while avoiding a main exposure period by the image sensor 16.
The motor driver 60 is connected to the motor 46 and controls the motor 46 in accordance with the instruction from the CPU 48A. The size of the aperture of the stop 40C is controlled by controlling the motor 46.
As an example, as shown in
By executing the imaging support processing, first, the CPU 48A acquires frequency information indicating a frequency of a feature of the subject (hereinafter, also referred to as a “subject feature”) specified from the main exposure image obtained by the imaging with the imaging apparatus 10, the subject feature being classified into a category. Moreover, the CPU 48A executes support processing of supporting the imaging by the imaging apparatus 10 (hereinafter, also simply referred to as “support processing”) based on the acquired frequency information. In the following, the contents of the imaging support processing will be described in more detail.
As an example, as shown in
As an example, as shown in
It should be noted that, in the present embodiment, for convenience of description, a person is described as the subject of the imaging apparatus 10, but the technology of the present disclosure is not limited to this, and the subject may be a subject other than the person. Examples of the subject other than the person include small animals, insects, plants, architectures, landscapes, an organ of a living body, and/or a cell of the living body. That is, the imaging region does not have to include the person, and need only include a subject that can be imaged by the image sensor 16.
Each time the acquisition unit 48A1 acquires the live view image data for one frame, the control unit 48A5 displays the live view image indicated by the live view image data acquired by the acquisition unit 48A1 on the display 26. The live view image includes a plurality of person images indicating the plurality of persons as a plurality of subject images indicating the plurality of subjects.
As an example, as shown in
In the example shown in
As an example, as shown in
Examples of the trained model 92 include a trained model using a cascade classifier. The trained model using the cascade classifier is constructed as a trained model for image recognition, for example, by performing supervised machine learning on a neural network. It should be noted that the trained model 92 is not limited to the trained model using the cascade classifier, and may be a dictionary for pattern matching. That is, the trained model 92 may be any trained model as long as it is a trained model used in image analysis performed in a case in which the subject is recognized.
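As a rough, non-limiting illustration of how a cascade classifier of the kind mentioned above can be used for face detection, the following sketch relies on the Haar cascade face detector bundled with OpenCV; it merely stands in for, and is not, the trained model 92.

```python
import cv2

# Haar cascade face detector shipped with OpenCV, used here only as an illustration.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(image_bgr):
    """Return a list of (x, y, w, h) rectangles for the faces found in the image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```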
The subject recognition unit 48A2 performs the image analysis on the main exposure image data to recognize the person included in the imaging region as the subject. In addition, the subject recognition unit 48A2 performs the image analysis on the main exposure image data to recognize the feature of the person, such as a face (expression) of the person, a posture of the person, the opening and closing of eyes of the person, and the presence or absence of the person within the designated imaging range, as the subject feature.
The subject recognition unit 48A2 recognizes the face of the person as one of the subject features. The face of the person is, for example, any of a smiling face, a crying face, an angry face, or a straight face. The subject recognition unit 48A2 recognizes, for example, the smiling face, the crying face, the angry face, and the straight face as the subject features belonging to the category of “face of the person”. It should be noted that, here, the smiling face refers to an expression of the person smiling, the crying face refers to an expression of the person crying, the angry face refers to an expression of the person who is angry, and the straight face refers to an expression that does not correspond to any of the smiling face, the crying face, or the angry face.
In addition, the subject recognition unit 48A2 recognizes the posture of the person as one of the subject features. The posture of the person is, for example, either front or non-front. For example, the subject recognition unit 48A2 recognizes the front and the non-front as the subject features belonging to the category of “posture of the person”. It should be noted that, here, the front means a state in which the person faces the front with respect to the light-receiving surface 30A (see
In addition, the subject recognition unit 48A2 recognizes the eyes of the person as one of the subject features. The eyes of the person are, for example, either open eyes or closed eyes. The subject recognition unit 48A2 recognizes, for example, open eyes and closed eyes as the subject features belonging to the category of “eyes of the person”. It should be noted that, here, the open eyes refer to a state in which the person has his/her eyes open, and the closed eyes refer to a state in which the person has his/her eyes closed.
In addition, the subject recognition unit 48A2 recognizes the presence or absence of the person within the designated imaging range as one of the subject features. The subject recognition unit 48A2 recognizes “within the designated imaging range” and “out of the designated imaging range” as the subject features belonging to the category of “designated imaging range”. It should be noted that, here, “within the designated imaging range” refers to a state in which the person is present within the designated imaging range, and “out of the designated imaging range” refers to a state in which the person is present out of the designated imaging range.
In addition, the subject recognition unit 48A2 stores recognition result information 94 indicating a result of recognizing the subject (here, the person as an example) included in the imaging region in the memory 48C. The recognition result information 94 is overwritten and saved in the memory 48C in a one frame unit. The recognition result information 94 is information including a subject name, the subject feature, and recognition region specification coordinates, and is stored in the memory 48C in a unit of the subject included in the imaging region in a state in which the subject name, the subject feature, and the recognition region specification coordinates are associated with each other.
Here, the recognition region specification coordinates refer to coordinates indicating the position in the live view image of the quadrangular frame (hereinafter, also referred to as a “subject frame”) that surrounds a feature region (for example, a face region indicating the face of the person) of a subject image indicating the subject recognized by the subject recognition unit 48A2. Examples of the recognition region specification coordinates include the coordinates of two vertices on a diagonal line of the subject frame in the live view image (for example, coordinates of an upper left corner and coordinates of a lower right corner). It should be noted that, as long as the shape of the subject frame is quadrangular, the recognition region specification coordinates may be coordinates of three vertices or may be coordinates of four vertices. In addition, the shape of the subject frame is not limited to be quadrangular and may be another shape. In this case as well, coordinates for specifying the position of the subject image in the live view image need only be used as the recognition region specification coordinates.
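Purely as an illustration of the record described above, one entry of the recognition result information 94 could be modeled as follows; the field names are assumptions introduced for this sketch and do not appear in the disclosure.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class RecognitionResult:
    """One entry of the recognition result information for a recognized subject."""
    subject_name: str                 # e.g. "person A"
    subject_features: Dict[str, str]  # e.g. {"face": "smiling face", "posture": "front"}
    # Recognition region specification coordinates: two vertices on a diagonal line
    # of the subject frame in the live view image.
    upper_left: Tuple[int, int] = (0, 0)
    lower_right: Tuple[int, int] = (0, 0)
```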
As an example, as shown in
As an example, as shown in
The classification unit 48A4 specifies the subject-specific category group 98 corresponding to the subject name from the subject name included in the subject-specific feature information, and classifies the subject feature corresponding to the subject name into the specified subject-specific category group 98.
The subject-specific category group 98 includes a plurality of categories. Each of the plurality of categories included in the subject-specific category group 98 is created for a corresponding one of units that are independent of each other. The unit refers to a unit of the subject feature. In the example shown in
As an example, as shown in
In the example shown in
The number of times of classification is also associated with the face category. The number of times of classification for the face category is the sum of the number of times of classification for the smiling face category, the number of times of classification for the crying face category, the number of times of classification for the angry face category, and the number of times of classification for the straight face category.
In addition, in the example shown in
The number of times of classification is also associated with the posture category. The number of times of classification for the posture category is the sum of the number of times of classification for the front category and the number of times of classification for the non-front category.
In addition, in the example shown in
The number of times of classification is also associated with the eye category. The number of times of classification for the eye category is the sum of the number of times of classification for the open eye category and the number of times of classification for the closed eye category.
Further, in the example shown in
The number of times of classification is also associated with the designated imaging range category. The number of times of classification for the designated imaging range category is the sum of the number of times of classification for the within-designated imaging range category and the number of times of classification for the out-of-designated imaging range category.
The subject-specific category group 98 is a category determined by the subject feature in a unit of “subject name”, and the subject feature of the subject name is classified into the subject-specific category group 98. The number of times of classification is also associated with the subject-specific category group 98. The number of times of classification associated with the subject-specific category group 98 is the sum of the number of times of classification of a plurality of large categories belonging to the subject-specific category group 98.
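The relationship described above between the subcategory counts, the large-category counts, and the count of the subject-specific category group can be pictured with the following sketch, which assumes a simple nested-dictionary layout; the layout and the function names are illustrative, not the disclosed data structure.

```python
# Illustrative category database for one subject; every leaf holds a number of
# times of classification, and larger categories are derived as sums of their parts.
category_database = {
    "person A": {
        "face": {"smiling face": 0, "crying face": 0, "angry face": 0, "straight face": 0},
        "posture": {"front": 0, "non-front": 0},
        "eye": {"open eyes": 0, "closed eyes": 0},
        "designated imaging range": {"within": 0, "out of": 0},
    },
}

def classify(subject_name: str, unit: str, feature: str) -> None:
    """Increment the number of times of classification for one recognized subject feature."""
    category_database[subject_name][unit][feature] += 1

def large_category_count(subject_name: str, unit: str) -> int:
    """Count of a large category (e.g. the face category) = sum of its subcategories."""
    return sum(category_database[subject_name][unit].values())

def subject_group_count(subject_name: str) -> int:
    """Count of the subject-specific category group = sum of its large categories."""
    return sum(large_category_count(subject_name, unit) for unit in category_database[subject_name])
```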
As an example, as shown in
The subject recognition unit 48A2 recognizes the person included in the imaging region as the subject by performing the image analysis on the live view image data. In addition, the subject recognition unit 48A2 performs the image analysis on the live view image data to recognize the feature of the person, such as the face of the person, the posture of the person, the opening and closing of eyes of the person, and the presence or absence of the person within the designated imaging range, as the subject feature.
That is, the subject recognition unit 48A2 recognizes the smiling face, the crying face, the angry face, and the straight face as the subject features belonging to the face category by performing the image analysis on the live view image data. In addition, the subject recognition unit 48A2 performs the image analysis on the live view image data to recognize the front and the non-front as the subject features belonging to the posture category. In addition, the subject recognition unit 48A2 performs the image analysis on the live view image data to recognize the open eyes and closed eyes as the subject features belonging to the eye category. Further, the subject recognition unit 48A2 performs the image analysis on the live view image data to recognize “within the designated imaging range” and “out of the designated imaging range” as the subject features belonging to the designated imaging range category. The subject recognition unit 48A2 stores the recognition result information 94 indicating the result of recognizing the person included in the imaging region in the memory 48C. The recognition result information 94 is overwritten and saved in the memory 48C in a one frame unit.
As an example, as shown in
The control unit 48A5 generates an imaging support screen 100 based on the number of times of classification acquired from each category included in the category database 96, and displays the generated imaging support screen 100 on the live view image in a superimposed manner. In the example shown in
As an example, as shown in
In the example shown in
In the imaging support screen 100, in addition to the face category bubble chart, a bubble chart related to the posture category (hereinafter, also referred to as a “posture category bubble chart”), a bubble chart related to the eye category (hereinafter, also referred to as an “eye category bubble chart”), and a bubble chart related to the designated imaging range category (hereinafter, also referred to as a “designated imaging range category bubble chart”) are selectively displayed in accordance with the instruction given from the outside. Here, examples of the instruction given from the outside include the instruction received by the reception device 80. In the example shown in
In addition, in a case in which any person in the bubble chart 100A is selected via the touch panel by the operation with the finger of the user, for the selected person, the number of times of classification of each category of the bubble chart 100A which is currently displayed is displayed as a histogram 100C (see
In the example shown in
In the example shown in
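As a hedged illustration of presenting the numbers of times of classification as a histogram for a selected person, the following sketch uses matplotlib; in the imaging apparatus 10 itself, the histogram 100C is rendered on the display 26 rather than with a plotting library, and the example counts are hypothetical.

```python
import matplotlib.pyplot as plt

def show_classification_histogram(counts: dict, title: str) -> None:
    """Display the numbers of times of classification of one large category as a bar chart."""
    plt.bar(list(counts.keys()), list(counts.values()))
    plt.title(title)
    plt.ylabel("number of times of classification")
    plt.show()

# Hypothetical counts for person A's face category:
show_classification_histogram(
    {"smiling face": 2, "crying face": 7, "angry face": 5, "straight face": 11},
    "person A - face category")
```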
Here, the reception device 80 (in the example shown in
In addition, in the example shown in
In a case in which the target category subject is imaged by the imaging apparatus 10, the target category is a category determined in accordance with a state of the target category subject S (for example, the expression of the face, the posture, and an opening/closing state of eyes, and/or a positional relationship between the person and the designated imaging range), and is a category into which the subject feature for the target category subject S is classified. In the example shown in
The smiling face category designated as the target category by the user from the histogram 100C is a category of which the number of times of classification is lowest as compared with the crying face category, the angry face category, and the straight face category. This means that the number of main exposure images in which the person A is reflected with the smiling face is smaller than the number of main exposure images in which the person A is reflected with other expressions. It should be noted that the smiling face category in the histogram 100C is an example of a “low-frequency category” according to the technology of the present disclosure.
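A minimal sketch of determining such a low-frequency category, assuming the counts of one large category are available as a simple mapping, is shown below; it is one conceivable realization, and the counts in the example are hypothetical.

```python
def low_frequency_category(counts: dict) -> str:
    """Return the subcategory whose number of times of classification is lowest."""
    return min(counts, key=counts.get)

# With hypothetical counts for person A's face category, the smiling face category
# is returned as the low-frequency category:
# low_frequency_category({"smiling face": 2, "crying face": 7,
#                         "angry face": 5, "straight face": 11})  -> "smiling face"
```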
As described above, in a case in which the target category is designated from the histogram 100C, as shown in
In addition, the display processing includes processing of displaying the target category subject image S1 in the live view image in an aspect that is distinguishable from other image regions. In this case, for example, the control unit 48A5 detects the face region of the target category subject image (image region indicating the face of the target category subject S) based on the recognition result information 94 stored in the memory 48C and displays a detection frame 102 that surrounds the detected face region in the live view image to display the target category subject image S1 in the live view image in the aspect that is distinguishable from other image regions.
It should be noted that the method of displaying the target category subject image S1 in the live view image in the aspect that is distinguishable from other image regions is not limited to this, and for example, only the target category subject image S1 may be displayed in the live view image using a peaking method.
Next, the action of the imaging apparatus 10 will be described with reference to
In the imaging support processing shown in
In next step ST102, the control unit 48A5 displays the live view image indicated by the live view image data, which is acquired in step ST100, on the display 26.
In next step ST104, the subject recognition unit 48A2 recognizes the subject in the imaging region by using the trained model 92 based on the live view image data acquired in step ST100.
In next step ST106, the control unit 48A5 acquires the number of times of classification for each category of the subject, which is recognized in step ST104, from the category database 96.
In next step ST108, the control unit 48A5 creates the imaging support screen 100 based on the number of times of classification acquired in step ST106, and displays the created imaging support screen 100 in a part of regions in the live view image.
In next step ST110, the control unit 48A5 determines whether or not any number of times of classification has been designated from the histogram 100C in the imaging support screen 100. In step ST110, in a case in which any number of times of classification is not yet designated from the histogram 100C in the imaging support screen 100, a negative determination is made, and the imaging support processing proceeds to step ST116 shown in
In step ST112, the control unit 48A5 displays the detection frame 102 in the live view image so as to surround the face region of the target category subject image S1 indicating the target category subject S having the subject feature belonging to the target category designated based on the designated number of times of classification.
In next step ST114, the control unit 48A5 displays the imaging recommend information in the live view image. After the processing of step ST114 is executed, the imaging support processing proceeds to step ST116 shown in
In step ST116, the control unit 48A5 determines whether or not a condition for starting the main exposure (hereinafter, referred to as a “main exposure start condition”) is satisfied. Examples of the main exposure start condition include a condition that the full push state is set in accordance with the instruction received by the reception device 80. In step ST116, in a case in which the main exposure start condition is not satisfied, a negative determination is made, and the imaging support processing proceeds to step ST130. In step ST116, in a case in which the main exposure start condition is satisfied, a positive determination is made, and the imaging support processing proceeds to step ST118.
In step ST118, the control unit 48A5 causes the image sensor 16 to perform the main exposure imaging, for the imaging region including the target category subject S. The main exposure image data indicating the main exposure image of the imaging region including the target category subject S is stored in the image memory 50 by performing the main exposure imaging.
In next step ST120, the acquisition unit 48A1 acquires the main exposure image data from the image memory 50.
In next step ST122, the subject recognition unit 48A2 recognizes the subject in the imaging region by using the trained model 92 based on the main exposure image data acquired in step ST120, and stores the recognition result information 94 to the memory 48C.
In next step ST124, the feature extraction unit 48A3 extracts the subject-specific feature information for each subject from the recognition result information 94 stored in the memory 48C.
In next step ST126, the classification unit 48A4 specifies the subject-specific category group 98 corresponding to the subject name from the subject name included in the subject-specific feature information extracted in step ST124, and classifies the subject feature into the corresponding category in the specified subject-specific category group 98.
In next step ST128, the classification unit 48A4 updates the number of times of classification by adding “1” to the number of times of classification of the category into which the subject feature is classified.
In next step ST130, the control unit 48A5 deletes the live view image and the like (for example, the live view image, the imaging support screen 100, the detection frame 102, and the imaging recommend information) from the display 26.
In next step ST132, the control unit 48A5 determines whether or not a condition for ending the imaging support processing (hereinafter, also referred to as “imaging support processing end condition”) is satisfied. Examples of the imaging support processing end condition include a condition that the imaging mode set for the imaging apparatus 10 is released, and a condition that an instruction to end the imaging support processing is received by the reception device 80. In step ST132, in a case in which the imaging support processing end condition is not satisfied, a negative determination is made, and the imaging support processing proceeds to step ST100. In step ST132, in a case in which the imaging support processing end condition is satisfied, a positive determination is made, and the imaging support processing ends.
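The flow of steps ST100 to ST132 can be summarized, purely for illustration, by the following sketch; the camera, database, and user-interface calls (acquire_live_view, recognize, counts_for, and so on) are assumed interfaces that do not appear in the disclosure.

```python
# Illustrative-only sketch of the loop through steps ST100 to ST132; every
# camera-facing or UI-facing call is assumed to be provided elsewhere.
def imaging_support_loop(camera, db, ui):
    while not ui.end_condition():                      # ST132
        frame = camera.acquire_live_view()             # ST100
        ui.show_live_view(frame)                       # ST102
        subjects = camera.recognize(frame)             # ST104 (trained model 92)
        counts = db.counts_for(subjects)               # ST106
        ui.show_support_screen(counts)                 # ST108 (imaging support screen 100)
        target = ui.designated_category()              # ST110 (from histogram 100C)
        if target is not None:
            ui.show_detection_frame(target)            # ST112 (detection frame 102)
            ui.show_recommend_info(target)             # ST114
        if ui.main_exposure_requested():               # ST116
            image = camera.main_exposure()             # ST118
            recognized = camera.recognize(image)       # ST120, ST122
            for subject, feature in camera.extract_features(recognized):  # ST124
                db.classify(subject, feature)          # ST126
                # ST128: classify() is assumed to add 1 to the matching counter.
        ui.clear_overlays()                            # ST130
```

In an actual apparatus, each of these assumed calls would correspond to the processing of the acquisition unit 48A1, the subject recognition unit 48A2, the feature extraction unit 48A3, the classification unit 48A4, and the control unit 48A5 described above.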
As described above, in the imaging apparatus 10, the control unit 48A5 acquires the number of times of classification of the subject feature classified into the category based on the subject feature of the subject specified from the main exposure image. Moreover, the control unit 48A5 performs the support processing of supporting the imaging with the imaging apparatus 10 based on the number of times of classification. Therefore, with the present configuration, it is possible to support the imaging with the imaging apparatus 10 in accordance with the number of times of classification in which the subject feature is classified into the category.
In addition, in the imaging apparatus 10, the category is categorized into the plurality of categories including the target category. In addition, the target category is the category determined based on the number of times of classification. In the example shown in
It should be noted that, in the example shown in
The designation is not limited to the small categories belonging to the face category; also in a case in which a small category belonging to another large category is designated by the user selecting the number of times of classification of the subject feature classified into that small category, the processing of supporting the imaging is performed on the target category subject S having the subject feature belonging to the small category designated as the target category.
In addition, in a case in which the subject-specific category group 98 is designated by the user selecting the number of times of classification of the subject name classified as the subject feature in the subject-specific category group 98, the processing of supporting the imaging is performed on the target category subject S having the subject feature belonging to the subject-specific category group 98 designated as the target category, that is, the target category subject S of the subject name corresponding to the subject-specific category group 98 designated as the target category.
Therefore, with the present configuration, it is possible to more efficiently image the target category subject desired by the user than in a case in which the imaging is performed by determining whether or not it is the target category subject S desired by the user only from the intuition of an imaging person.
In addition, in the present embodiment, the imaging recommend information (in the example shown in
In addition, in the present embodiment, the control unit 48A5 displays the live view image on the display 26 and displays the target category subject image S1 in the live view image in the aspect that is distinguishable from other image regions. Therefore, with the present configuration, it is possible to make the user visually recognize the target category subject.
In addition, in the present embodiment, the control unit 48A5 displays the live view image on the display 26 and displays the detection frame 102 for the face region of the target category subject image S1 in the live view image. Therefore, with the present configuration, it is possible to make the user recognize that the subject corresponding to the display region in which the detection frame 102 is displayed is the target category subject.
In addition, in the present embodiment, as the target category, the category having a relatively low number of times of classification among the plurality of categories (in the example shown in
In addition, in the present embodiment, the target category is the category determined in accordance with the state of the target category subject S (for example, the expression of the person). In the example shown in
In addition, in the present embodiment, the plurality of subject-specific category groups 98 are included in the category database 96 as the category in which each of the plurality of persons themselves can be specified. Moreover, the control unit 48A5 performs the processing of supporting the imaging for the target category subject S which is the subject corresponding to the subject-specific category group 98 designated by the user. Therefore, with the present configuration, it is possible to increase the number of times of the imaging for the same subject, or conversely, decrease the number of times of the imaging of the same subject.
In addition, in the present embodiment, the category into which the subject feature is classified is created for each of a plurality of units. Examples of the plurality of units include a unit of “subject name (for example, a name of the person or an identifier for specifying the person)”, a unit of “face (expression) of the person”, a unit of “posture of the person”, a unit of “eyes of the person”, and a unit of “designated imaging range”. Therefore, with the present configuration, it is possible to support the imaging with the imaging apparatus 10 in accordance with the number of times of classification in which the subject feature is classified into the category in the designated unit.
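As a minimal sketch of how a subject feature could be classified per unit and how the number of times of classification could be incremented by 1 (in the manner of steps ST126 and ST128), the following example is given; the nested-dictionary layout and the function name classify are assumptions for illustration only.

```python
from collections import defaultdict

# Counters keyed by subject name, then by unit ("face", "posture", "eyes",
# "designated_range"), then by small category; missing entries default to 0.
category_db = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))

def classify(subject_name, features):
    """Add 1 to the number of times of classification for each unit's category."""
    for unit, small_category in features.items():
        category_db[subject_name][unit][small_category] += 1

# Example: one main exposure image in which person A smiles, faces front,
# has open eyes, and is within the designated imaging range.
classify("person A", {"face": "smiling", "posture": "front",
                      "eyes": "open", "designated_range": "within"})
print(category_db["person A"]["face"]["smiling"])  # -> 1
```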
In addition, in the present embodiment, as the support processing of supporting the imaging with the imaging apparatus 10, the control unit 48A5 performs the processing including the processing of displaying the number of times of classification on the display 26. In the example shown in
In addition, in the present embodiment, in a case in which the number of times of classification in the histogram 100C is designated by the reception device 80 in a state in which the histogram 100C is displayed on the display 26, the processing of supporting the imaging related to the category corresponding to the designated number of times of classification is performed by the control unit 48A5. Therefore, with the present configuration, it is possible to support the imaging for the subject having the subject feature intended by the user.
It should be noted that, in the embodiment described above, the form example has been described in which the target category subject S is positioned within the designated imaging range, but the technology of the present disclosure is not limited to this. For example, in a case in which the target category subject S is positioned out of the designated imaging range, as shown in
In addition, in the embodiment described above, the imaging related to the category corresponding to the designated number of times of classification (in the example shown in
As described above, in a case in which the processing of supporting the imaging for the interest subject having the subject feature classified into the low-frequency category is performed by the control unit 48A5, the imaging support processing is executed by the CPU 48A as shown in
In step ST200 of the imaging support processing shown in
In next step ST202, the control unit 48A5 specifies the subject image (hereinafter, also referred to as an “interest subject image”) indicating the interest subject having the subject feature classified into the low-frequency category among the interest subjects designated by the user in the live view image, and displays the detection frame 102 (see
Therefore, in the example shown in
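A minimal sketch of specifying the low-frequency category, that is, the small category with the fewest classifications for a designated interest subject, might look as follows; the example counts are invented for illustration.

```python
# Pick the "low-frequency category", i.e. the small category with the fewest
# classifications, from the counters of a user-designated interest subject.
def low_frequency_category(face_counts):
    # face_counts maps each face small category to its number of times of
    # classification, e.g. {"smiling": 1, "crying": 4, "angry": 3, "straight": 7}.
    return min(face_counts, key=face_counts.get)

counts_for_person_a = {"smiling": 1, "crying": 4, "angry": 3, "straight": 7}
print(low_frequency_category(counts_for_person_a))  # -> "smiling"
```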
In the embodiment described above, the form example has been described in which the main exposure imaging is performed in a case in which the condition that the instruction from the user is received by the reception device 80 is satisfied as the main exposure start condition in the imaging support processing, but the technology of the present disclosure is not limited to this. For example, the control unit 48A5 may detect the target category subject based on the imaging result of the imaging apparatus 10, and automatically acquire the image including the image corresponding to the target category subject on a condition that the target category subject is detected.
In this case, for example, first, the control unit 48A5 specifies the low-frequency category as the target category, in the same manner as the example shown in
As described above, in a case in which the image including the image corresponding to the target category subject is automatically acquired by the control unit 48A5 on a condition that the target category subject is detected, as shown in
In step ST300 of the imaging support processing shown in
In next step ST302, the control unit 48A5 determines whether or not the low-frequency interest subject image is present in the live view image based on the recognition result information 94. In step ST302, in a case in which the low-frequency interest subject image is not present in the live view image, a negative determination is made, and the imaging support processing proceeds to step ST130. In step ST302, in a case in which the low-frequency interest subject image is present in the live view image, a positive determination is made, and the imaging support processing proceeds to step ST118.
As described above, since the main exposure imaging is started in a case in which the control unit 48A5 detects that the low-frequency interest subject image is present in the live view image, it is possible to reduce the time and effort needed for imaging the target category subject as compared with a case in which the imaging is started on the condition that the target category subject is found by the visual observation and the instruction from the user is received by the reception device 80.
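Purely as an illustrative sketch of the ST300/ST302-style branch, the following example checks whether the recognition result of one live view frame contains the interest subject showing the low-frequency feature and, if so, signals that the main exposure should start; the subject names and features shown are assumptions.

```python
# Start the main exposure automatically when the live view frame contains the
# interest subject showing the target (low-frequency) feature.
def should_start_main_exposure(recognized_subjects, interest_name, target_feature):
    # recognized_subjects: list of (subject_name, feature) pairs for one frame.
    return any(name == interest_name and feature == target_feature
               for name, feature in recognized_subjects)

frame_result = [("person B", "straight"), ("person A", "smiling")]
print(should_start_main_exposure(frame_result, "person A", "smiling"))  # -> True
```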
In the example shown in
In step ST352 of the imaging support processing shown in
As described above, since the main exposure imaging is performed on the condition that the target category subject is included within the designated imaging range, it is possible to reduce the time and effort needed for imaging the target category subject as compared with a case in which the imaging is started on the condition that the target category subject included within the designated imaging range is found by the visual observation and the instruction from the user is received by the reception device 80.
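The ST352-style condition, namely that the target category subject is included within the designated imaging range, could be checked as in the following sketch, in which both the face region and the designated imaging range are assumed to be axis-aligned rectangles given as (left, top, right, bottom); the coordinate values are illustrative only.

```python
# Start the main exposure only when the target category subject's face region
# lies entirely inside the designated imaging range.
def inside(inner, outer):
    # Both rectangles are (left, top, right, bottom) in pixel coordinates.
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

designated_range = (100, 100, 500, 400)
face_region = (220, 150, 300, 240)
print(inside(face_region, designated_range))  # -> True: the condition is satisfied
```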
In the example shown in
In this case, the imaging support processing shown in
In step ST400 of the imaging support processing shown in
As described above, since the main exposure imaging is performed after the target category subject is in focus within the designated imaging range, it is possible to reduce the time and effort needed for focusing and imaging the target category subject after positioning the target category subject within the designated imaging range.
In the example shown in
In this case, for example, the imaging support processing shown in
In step ST450 of the imaging support processing shown in
In next step ST452, the control unit 48A5 calculates the degree of difference between the first imaging condition and the second imaging condition acquired in step ST450 (for example, a value indicating how much the first imaging condition and the second imaging condition deviate from each other).
In next step ST454, the control unit 48A5 determines whether or not the degree of difference calculated in step ST452 is equal to or larger than the predetermined degree of difference. In step ST454, in a case in which the degree of difference calculated in step ST452 is smaller than the predetermined degree of difference, a negative determination is made, and the control unit 48A5 executes the processing corresponding to the processing of step ST400, and the processing (see
In step ST456, the control unit 48A5 executes the predetermined processing. Although the details will be described below, the predetermined processing is processing including imaging processing after depth-of-field adjustment of performing the main exposure imaging after a depth of field is adjusted and/or focus bracket imaging processing of performing the main exposure imaging using a focus bracket method. After the processing of step ST456 is executed, the imaging support processing proceeds to step ST122.
As described above, in the example shown in
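For illustration only, the branching of steps ST450 to ST456 can be sketched as follows, treating the first and second imaging conditions as single numeric values (for example, focus positions) and the predetermined degree of difference as a simple threshold; these simplifications are assumptions, not the disclosed implementation.

```python
# Decide between ordinary main exposure and the predetermined processing based
# on the degree of difference between the two imaging conditions.
def choose_processing(first_condition, second_condition, predetermined_degree):
    degree_of_difference = abs(first_condition - second_condition)   # ST452
    if degree_of_difference >= predetermined_degree:                 # ST454
        # ST456: depth-of-field adjustment and/or focus bracket imaging.
        return "predetermined processing (ST456)"
    return "ordinary main exposure"

print(choose_processing(2.0, 5.5, 3.0))  # -> predetermined processing (ST456)
```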
In the examples shown in
In a case in which the degree of difference is equal to or larger than the predetermined degree of difference, as shown in
The acquisition unit 48A1 calculates the depth of field in which the within-designated imaging range subject and the target category subject are included, based on the plurality of focus positions calculated for each of the within-designated imaging range subject and the target category subject. The depth of field is calculated by using a first calculation expression. The first calculation expression is, for example, a calculation expression in which the plurality of focus positions are used as independent variables and the depth of field is used as a dependent variable. It should be noted that a first table in which the plurality of focus positions and the depth of field are associated with each other may be used instead of the first calculation expression.
The acquisition unit 48A1 calculates an F-number for realizing the calculated depth of field. The acquisition unit 48A1 calculates the F-number by using a second calculation expression. The second calculation expression used here is, for example, a calculation expression in which the depth of field is used as an independent variable and the F-number is used as a dependent variable. It should be noted that a second table in which the value indicating the depth of field and the F-number are associated with each other may be used instead of the second calculation expression.
The control unit 48A5 operates the stop 40C by controlling the motor 46 via the motor driver 60 in accordance with the F-number calculated by the acquisition unit 48A1.
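The disclosure states only that a first calculation expression (or first table) maps the plurality of focus positions to a depth of field and that a second calculation expression (or second table) maps the depth of field to an F-number. As a stand-in, the following sketch uses textbook thin-lens approximations; the focal length and the circle of confusion are hypothetical example values, and these particular formulas are not taken from the disclosure.

```python
# Assumed stand-in for the first and second calculation expressions: given the
# distances of the near and far subjects, compute a focus distance and the
# F-number whose depth of field covers both, using the standard approximations
#   near limit ~= H*s/(H+s), far limit ~= H*s/(H-s), H ~= f^2/(N*c).
def required_f_number(d_near_mm, d_far_mm, focal_length_mm=50.0, coc_mm=0.03):
    focus_mm = 2 * d_near_mm * d_far_mm / (d_near_mm + d_far_mm)        # focus distance
    hyperfocal_mm = 2 * d_near_mm * d_far_mm / (d_far_mm - d_near_mm)   # required H
    n = focal_length_mm ** 2 / (hyperfocal_mm * coc_mm)                 # N ~= f^2/(H*c)
    return focus_mm, n

focus, n = required_f_number(2000.0, 5000.0)   # subjects at 2 m and 5 m
print(round(focus), round(n, 1))               # -> 2857 12.5 (focus in mm, F-number)
```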
As described above, in the examples shown in
By the way, due to the structure of the imaging apparatus 10, a situation is also considered that both the within-designated imaging range subject and the target category subject S are not included within the depth of field. In a case in which such a situation is reached, the control unit 48A5 causes the imaging apparatus 10 to image the within-designated imaging range subject and the target category subject using the focus bracket method.
In this case, as shown in
In a case in which the first and second focus positions are calculated by the acquisition unit 48A1, as shown in
As described above, in the examples shown in
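A minimal sketch of the focus bracket branch, in which one main exposure frame is captured at each of the first and second focus positions, is shown below; capture_at is an assumed camera call introduced only for the example.

```python
# When a single depth of field cannot cover both subjects, capture one main
# exposure frame at each of the first and second focus positions.
def focus_bracket(capture_at, first_focus_position, second_focus_position):
    frames = []
    for position in (first_focus_position, second_focus_position):
        frames.append(capture_at(position))   # one main exposure per focus position
    return frames

# Usage with a stand-in capture function that simply records the position.
frames = focus_bracket(lambda p: {"focus_position": p}, 2000.0, 5000.0)
print(frames)
```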
In the example shown in
In addition, in a case in which the degree of difference in brightness is smaller than the predetermined degree of difference in brightness, the control unit 48A5 causes the imaging apparatus 10 to image the reference subject and the target category subject S with the exposure determined with reference to the reference subject.
The exposure bracket method imaging processing is realized, for example, by executing the imaging support processing shown in
In step ST500 of the imaging support processing shown in
In next step ST502, the control unit 48A5 calculates the degree of difference in brightness by using the two photometric values acquired for the within-designated imaging range subject and the target category subject S, respectively, in step ST500. The degree of difference in brightness is an absolute value of the difference between the two photometric values, for example.
In next step ST504, the control unit 48A5 determines whether or not the degree of difference in brightness calculated in step ST502 is equal to or larger than the predetermined degree of difference in brightness. The predetermined degree of difference in brightness may be a fixed value or may be a variable value that is changed in accordance with the given instruction and/or the given condition. In step ST504, in a case in which the degree of difference in brightness is smaller than the predetermined degree of difference in brightness, a negative determination is made, and the imaging support processing proceeds to step ST508. In step ST504, in a case in which the degree of difference in brightness is equal to or larger than the predetermined degree of difference in brightness, a positive determination is made, and the imaging support processing proceeds to step ST506.
In step ST506, the control unit 48A5 controls the imaging apparatus 10 to execute the main exposure imaging using the exposure bracket method for each of the within-designated imaging range subject and the target category subject S. After the processing of step ST506 is executed, the imaging support processing proceeds to step ST122. It should be noted that the main exposure image data of each frame obtained by performing the main exposure imaging using the exposure bracket method may be individually stored in a predetermined storage region, or may be stored in the predetermined storage region as composite image data of one frame obtained by composition.
In step ST508, the control unit 48A5 controls the imaging apparatus 10 to execute the main exposure imaging on the within-designated imaging range subject and the target category subject S with the exposure determined with reference to the within-designated imaging range subject. After the processing of step ST508 is executed, the control unit 48A5 executes the processing corresponding to steps ST120 to ST132 (see
As described above, in the example shown in
In addition, in the example shown in
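For illustration, the exposure-related branch of steps ST500 to ST508 can be sketched as follows, with the photometric values and the predetermined degree of difference in brightness represented as plain numbers; the returned exposure plans are placeholders, not the disclosed control values.

```python
# Use the exposure bracket method only when the brightness difference between
# the within-designated imaging range subject and the target category subject
# is at least the predetermined degree of difference in brightness.
def plan_exposure(photometric_reference, photometric_target, predetermined_difference):
    difference = abs(photometric_reference - photometric_target)      # ST502
    if difference >= predetermined_difference:                        # ST504
        # ST506: one frame exposed for each subject (exposure bracket method).
        return [("exposure for reference subject", photometric_reference),
                ("exposure for target category subject", photometric_target)]
    # ST508: a single exposure determined with reference to the reference subject.
    return [("single exposure", photometric_reference)]

print(plan_exposure(10.0, 14.5, 3.0))   # -> bracketed, since the difference 4.5 >= 3.0
```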
In the embodiment described above, the face category, the posture category, the eye category, and the designated imaging range category are described as examples of the large category included in the subject-specific category group 98, but the technology of the present disclosure is not limited to this. For example, as shown in
The period category includes a plurality of date categories in which the dates are different from each other as the small categories. The number of times of classification is associated with each of the plurality of date categories. In addition, the number of times of classification is also associated with the period category. The number of times of classification of the period category is the sum of the number of times of classification of the plurality of date categories.
The position category includes a plurality of small position categories having different positions from each other as the small categories. The number of times of classification is associated with each of the plurality of small position categories. In addition, the number of times of classification is also associated with the position category. The number of times of classification of the position category is the sum of the number of times of classification of the plurality of small position categories.
In the example shown in
In the example shown in
The GPS receiver 108 receives radio waves from a plurality of GPS satellites (not shown), which are an example of a plurality of GNSS satellites, and calculates the position coordinates for specifying the current position of the imaging apparatus 10 based on the reception result.
Each time the main exposure imaging for one frame is performed, the classification unit 48A4 acquires the current time point from the RTC 106 and classifies the acquired current time point as an imaging time point into the corresponding date category among the plurality of date categories included in the period category. Each time the classification unit 48A4 classifies the imaging time point into the date category, “1” is added to the number of times of classification of the date category into which the imaging time point is classified. It should be noted that, here, the classification unit 48A4 acquires the current time point from the RTC 106, but the classification unit 48A4 may acquire the current time point via a communication network, such as the Internet.
Each time the main exposure imaging for one frame is performed, the classification unit 48A4 acquires the position coordinates from the GPS receiver 108 as a position at which the imaging is performed (hereinafter, also referred to as an “imaging position”). The classification unit 48A4 specifies the address corresponding to the acquired imaging position from the map data 104. Moreover, the classification unit 48A4 classifies the imaging position into the small position category corresponding to the specified address. Each time the classification unit 48A4 classifies the imaging position into the small position category, “1” is added to the number of times of classification of the small position category into which the imaging position is classified.
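A minimal sketch of classifying the imaging time point into a date category and the imaging position into a small position category, incrementing the corresponding counters each time one frame of main exposure imaging is performed, might look as follows; lookup_address stands in for the address lookup against the map data 104 and is an assumption.

```python
from datetime import datetime

date_counts, position_counts = {}, {}

# Called once per frame of main exposure imaging; increments the date category
# for the current time point and the small position category for the address
# corresponding to the current position.
def classify_shot(now, position, lookup_address):
    date_key = now.date().isoformat()                       # e.g. "2021-06-08"
    date_counts[date_key] = date_counts.get(date_key, 0) + 1
    address = lookup_address(position)
    position_counts[address] = position_counts.get(address, 0) + 1

classify_shot(datetime(2021, 6, 8, 10, 30), (35.68, 139.77), lambda pos: "Tokyo")
print(date_counts, position_counts)   # -> {'2021-06-08': 1} {'Tokyo': 1}
```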
In the example shown in
In addition, in a case in which the position category is selected from the category selection screen 100B of the reception device 80 (for example, the touch panel 28), the imaging support screen 100 displays a bubble chart related to the position category (hereinafter, a position category bubble chart) as the bubble chart 100A. In the same manner as the bubble chart related to the face category (see
In the example shown in
As described above, the subject-specific category group 98 includes, as the large category, the period category determined by the subject feature in a unit of “period”. In addition, the period category includes the plurality of date categories in which the dates are different from each other. Moreover, each time the main exposure imaging for one frame is performed, the imaging time point is classified into the date category, and the period category bubble chart and the period category histogram corresponding to the number of times of classification are displayed on the display 26 together with the live view image. The period category bubble chart and the period category histogram are used in the same manner as the face category bubble chart and the face category histogram described in the embodiment described above. Therefore, with the present configuration, it is possible to support the imaging with the imaging apparatus 10 in accordance with the number of times of classification counted by classifying the imaging time point into the date category. It should be noted that, although the date category divided by year, month, and day is described here, this is merely an example, and a category may be used in which the period is divided by year, month, day, hour, minute, or second.
In addition, the subject-specific category group 98 includes, as the large category, the position category determined by the subject feature in a unit of “position”. In addition, the position category includes the plurality of small position categories in which the positions are different from each other. Moreover, each time the main exposure imaging for one frame is performed, the imaging position is classified into the small position categories, and the position category bubble chart and the position category histogram corresponding to the number of times of classification are displayed on the display 26 together with the live view image. The position category bubble chart and the position category histogram are used in the same method as the face category bubble chart and the face category histogram described in the embodiment described above. Therefore, with the present configuration, it is possible to support the imaging with the imaging apparatus 10 in accordance with the number of times of classification counted by classifying the imaging position into the small position category.
In the embodiment described above, the form example has been described in which it is assumed that the imaging support processing is continuously executed while the imaging mode is set, but the technology of the present disclosure is not limited to this, and the imaging support processing may be intermittently executed in accordance with the time point and/or the position. For example, as shown in
In addition, in the embodiment described above, the form example has been described in which the subject feature is classified into the category by the classification unit 48A4 regardless of an imaging scene imaged by the imaging apparatus 10, but the technology of the present disclosure is not limited to this. For example, the classification unit 48A4 may classify the subject feature into the category in a case in which a scene to be imaged by the imaging apparatus 10 matches a specific scene (for example, a scene of a sports day, a scene of a beach, and a scene of a concert). The specific scene may be a scene imaged in the past.
In this case, as an example, the imaging support processing shown in
In step ST550 of the imaging support processing shown in
As described above, since the classification unit 48A4 classifies the subject feature into the category for each subject only in a case in which the current imaging scene and the specific scene match, the subject feature specified from the main exposure image data obtained by performing the main exposure imaging on the scene that is not intended by the user can be prevented from being classified into the category.
In addition, since the classification unit 48A4 classifies the subject feature into the category for each subject only in a case in which the current imaging scene and the past imaging scene match, the subject feature specified from the main exposure image data obtained by performing the main exposure imaging on the current imaging scene that matches the past imaging scene can be classified into the category.
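Purely as an illustration of the ST550-style gate, the following sketch performs the classification only when the current imaging scene matches one of the specific scenes; the scene labels are examples taken from the text above, and the classify callback is an assumed stand-in for the classification unit 48A4.

```python
# Classify the subject feature only when the current imaging scene matches one
# of the specific scenes (which may themselves be scenes imaged in the past).
def classify_if_scene_matches(current_scene, specific_scenes, classify):
    if current_scene in specific_scenes:
        classify()
        return True
    return False

count = {"n": 0}
matched = classify_if_scene_matches("sports day", {"sports day", "beach", "concert"},
                                    lambda: count.update(n=count["n"] + 1))
print(matched, count["n"])   # -> True 1
```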
In addition, in the embodiment described above, the form example has been described in which the subject feature is classified into each of the plurality of categories, but the technology of the present disclosure is not limited to this. For example, as shown in
In addition, in the embodiment described above, the face category histogram (see
In addition, in the embodiment described above, the bubble chart 100A and the histogram 100C are described, but the technology of the present disclosure is not limited to this. Another graph may be used or a numerical value indicating the number of times of classification may be displayed in a form divided for each subject or for each category.
In addition, in the embodiment described above, the form example has been described in which the imaging related to the category corresponding to the number of times of classification selected by the user from the histogram 100C is supported, but the technology of the present disclosure is not limited to this, and the category in which the imaging is supported may be directly selected by the user from the histogram 100C and the like via the reception device 80 (for example, the touch panel 28).
In addition, in the embodiment described above, the form example has been described in which the main exposure image data is stored in the image memory 50, but data including the main exposure image data obtained by performing the main exposure imaging supported by performing the support processing described above may be used in the machine learning of the trained model 92 as training data. Accordingly, it is possible to create the trained model 92 based on the main exposure image data obtained by performing the main exposure imaging supported by performing the support processing.
In addition, in the embodiment described above, the form example has been described in which the imaging support processing is executed by the controller 48 in the imaging apparatus 10, but the technology of the present disclosure is not limited to this. For example, as shown in
The imaging apparatus 10 requests the external device 112 to execute the imaging support processing via the network 110. In response to this, the CPU 116 of the external device 112 reads out the imaging support processing program 84 from the storage 118, and executes the imaging support processing program 84 on the memory 120. The CPU 116 performs the imaging support processing in accordance with the imaging support processing program 84 executed on the memory 120. Moreover, the CPU 116 provides a processing result obtained by executing the imaging support processing to the imaging apparatus 10 via the network 110.
In addition, the imaging apparatus 10 and the external device 112 may be configured to execute the imaging support processing in a distributed manner, or a plurality of devices including the imaging apparatus 10 and the external device 112 may execute the imaging support processing in a distributed manner. In a case of performing the distribution processing, for example, the CPU 48A of the imaging apparatus 10 may be operated as the acquisition unit 48A1 and the control unit 48A5, and the CPU of the device (for example, the external device 112) other than the imaging apparatus 10 may be operated as the subject recognition unit 48A2, the feature extraction unit 48A3, and the classification unit 48A4. That is, the processing load applied to the imaging apparatus 10 may be reduced by causing the external device having a higher operation power than the imaging apparatus 10 to perform the processing having a relatively large processing load.
In addition, in the embodiment described above, the still picture is described as the main exposure image, but the technology of the present disclosure is not limited to this, and the video may be used as the main exposure image. The video may be a video for recording or a video for display, that is, the live view image or a postview image.
In addition, in the embodiment described above, the imaging recommend information is displayed in the live view image, but it is not always necessary to display the imaging recommend information in the live view image. For example, during the main exposure imaging of the video, the display (at least one of the arrow, the face frame, or the message) indicating the target category subject may be performed in the same manner as the imaging recommend information. By displaying the target category subject during the main exposure imaging of the video in this way, it is possible for the user to recognize that the target category subject is included in the video by using the imaging apparatus 10. In addition, it is possible for the user to acquire the still picture including the target category subject by using the imaging apparatus 10 to cut out the frame including the target category subject after the main exposure imaging of the video. In this case, the imaging apparatus 10 may perform the classification into the same category each time the target category subject is specified by the main exposure imaging of the video, and perform the display by updating the histogram and/or the bubble chart. As described above, it is possible for the user to grasp what kind of subject is included only by performing the imaging for the video and acquiring the video by using the imaging apparatus 10. In addition, in this case, a value based on the still picture and a value based on the video may be displayed in different aspects in the histogram and/or the bubble chart. For example, in the histogram, a histogram based on the still picture and a histogram based on the video are displayed in different colors as a stacked bar graph. As described above, it is possible for the user to grasp whether each number of times of classification is based on the still picture or the video. In addition, the histogram and/or the bubble chart may be created based only on the classification of the subject indicated by the subject region included in one video. As described above, it is possible for the user to grasp into what kind of category the subject region included in the one video is classified. It should be noted that such a histogram and/or bubble chart may be displayed on the display 26 based on the user operation or the like after imaging for the still picture or the video, or in a playback mode in which the live view image is not displayed.
In addition, in the embodiment described above, the form example has been described in which the number of times of classification is continuously increased, but the technology of the present disclosure is not limited to this. For example, the number of times of classification corresponding to at least one category included in the subject-specific category group 98 may be reset on a regular basis or at a designated timing. For example, it may be reset in accordance with the time and/or the position. Specifically, it may be reset once a day, may be reset once an hour, or may be reset every 100 meters of position change.
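As a sketch of one possible reset rule combining the time-based and position-based examples given above (once a day, or every 100 meters of movement), the following illustration may be considered; the flat-plane distance calculation and all values other than those two examples are assumptions.

```python
from datetime import datetime, timedelta
from math import hypot

# Return True when the classification counters should be reset, either because
# the reset period has elapsed or because the apparatus has moved far enough.
def should_reset(last_reset_time, now, last_position_m, current_position_m,
                 period=timedelta(days=1), distance_m=100.0):
    moved = hypot(current_position_m[0] - last_position_m[0],
                  current_position_m[1] - last_position_m[1])
    return (now - last_reset_time) >= period or moved >= distance_m

print(should_reset(datetime(2021, 6, 8), datetime(2021, 6, 9),
                   (0.0, 0.0), (30.0, 40.0)))   # -> True: one day has elapsed
```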
In addition, in the embodiment described above, the smiling face category is described as the target category, but the technology of the present disclosure is not limited to this, and other categories may be used as the target category or a plurality of categories may be used as the target category. In this case, for example, in the face category histogram shown in
In addition, in the embodiment described above, the person A is described as the target category subject, but the technology of the present disclosure is not limited to this, and a plurality of target category subjects may be used. In this case, for example, in the face category bubble chart shown in
In addition, in the embodiment described above, the number of times of classification, which is a simple number of times the subject feature is classified into the category, is described, but the technology of the present disclosure is not limited to this. For example, the number of times of classification per unit time may be used.
In addition, in the embodiment described above, the detection frame 102 (see
In addition, in the embodiment described above, a physically existing camera (hereinafter, also referred to as a “physical camera”) is described as the imaging apparatus 10, but the technology of the present disclosure is not limited to this. A virtual camera that generates virtual viewpoint image data by virtually imaging the subject from a virtual viewpoint based on captured image data obtained by the imaging with a plurality of physical cameras set at different positions may be applied instead of the physical camera. In this case, an image indicated by the virtual viewpoint image data, that is, a virtual viewpoint image is an example of a “captured image” according to the technology of the present disclosure.
In the embodiment described above, the form example is described in which the non-phase difference pixel divided region 30N and the phase difference pixel divided region 30P are used in combination, but the technology of the present disclosure is not limited to this. For example, an area sensor may be used in which the phase difference image data and the non-phase difference image data are selectively generated and read out instead of the non-phase difference pixel divided region 30N and the phase difference pixel divided region 30P. In this case, on the area sensor, a plurality of photosensitive pixels are two-dimensionally arranged. For the photosensitive pixels included in the area sensor, for example, a pair of independent photodiodes in which the light shielding member is not provided are used. In a case in which the non-phase difference image data is generated and read out, the photoelectric conversion is performed by the entire region of the photosensitive pixels (pair of photodiodes), and in a case in which the phase difference image data is generated and read out (for example, a case in which passive method distance measurement is performed), the photoelectric conversion is performed by one photodiode of the pair of photodiodes. Here, one photodiode of the pair of photodiodes is a photodiode corresponding to the first phase difference pixel L described in the above embodiment, and the other photodiode of the pair of photodiodes is a photodiode corresponding to the second phase difference pixel R described in the above embodiment. It should be noted that the phase difference image data and the non-phase difference image data may be selectively generated and read out by all the photosensitive pixels included in the area sensor, but the technology of the present disclosure is not limited to this, and the phase difference image data and the non-phase difference image data may be selectively generated and read out by a part of the photosensitive pixels included in the area sensor.
In the embodiment described above, the image plane phase difference pixel is described as the phase difference pixel P, but the technology of the present disclosure is not limited to this. For example, the non-phase difference pixels N may be disposed in place of the phase difference pixels P included in the photoelectric conversion element 30, and a phase difference AF plate including a plurality of phase difference pixels P may be provided in the imaging apparatus body 12 separately from the photoelectric conversion element 30.
In the embodiment described above, an AF method using the distance measurement result based on the phase difference image data, that is, the phase difference AF method is described, but the technology of the present disclosure is not limited to this. For example, the contrast AF method may be adopted instead of the phase difference AF method. In addition, the AF method based on the distance measurement result using the parallax of a pair of images obtained from a stereo camera, or the AF method using a TOF method distance measurement result using a laser beam or the like may be adopted.
In the embodiment described above, the focal plane shutter is described as an example of the mechanical shutter 72, but the technology of the present disclosure is not limited to this, and the technology of the present disclosure is established even in a case in which another type of mechanical shutter, such as a lens shutter, is applied instead of the focal plane shutter.
In the embodiment described above, the form example is described in which the imaging support processing program 84 is stored in the storage 48B, but the technology of the present disclosure is not limited to this. For example, as shown in
The imaging support processing program 84, which is stored in the storage medium 200, is installed in the controller 48. The CPU 48A executes the imaging support processing in accordance with the imaging support processing program 84.
In addition, the imaging support processing program 84 may be stored in a storage unit of another computer or server device connected to the controller 48 via a communication network (not shown), and the imaging support processing program 84 may be downloaded in response to a request of the imaging apparatus 10 and installed in the controller 48.
It should be noted that it is not required to store the entire imaging support processing program 84 in the storage unit or the storage 48B of another computer or server device connected to the controller 48, and a part of the imaging support processing program 84 may be stored.
In the example shown in
In the example shown in
In the example shown in
As a hardware resource for executing the imaging support processing described in the embodiment, the following various processors can be used. Examples of the processor include a CPU which is a general-purpose processor functioning as the hardware resource for executing the imaging support processing by executing software, that is, a program. In addition, examples of the processor include a dedicated electric circuit which is a processor having a circuit configuration designed to be dedicated for executing specific processing, such as the FPGA, the PLD, or the ASIC. A memory is built in or connected to any processor, and any processor executes the imaging support processing by using the memory.
The hardware resource for executing the imaging support processing may be composed of one of these various processors, or may be composed of a combination (for example, a combination of a plurality of FPGAs or a combination of a CPU and an FPGA) of two or more processors of the same type or different types. In addition, the hardware resource for executing the imaging support processing may be one processor.
As an example of configuring one processor, first, there is a form in which one processor is composed of a combination of one or more CPUs and software and the processor functions as the hardware resource for executing the imaging support processing. Second, as represented by SoC, there is a form in which a processor that realizes the functions of the entire system including a plurality of hardware resources for executing the imaging support processing with one IC chip is used. As described above, the imaging support processing is realized by using one or more of the various processors described above as the hardware resource.
Further, as the hardware structure of these various processors, more specifically, it is possible to use an electric circuit in which circuit elements, such as semiconductor elements, are combined. In addition, the imaging support processing is merely an example. Therefore, it is needless to say that the deletion of an unneeded step, the addition of a new step, and the change of a processing order may be employed within a range not departing from the gist.
The description contents and the shown contents above are the detailed description of the parts according to the technology of the present disclosure, and are merely examples of the technology of the present disclosure. For example, the description of the configuration, the function, the action, and the effect above is the description of examples of the configuration, the function, the action, and the effect of the parts according to the technology of the present disclosure. Accordingly, it is needless to say that unneeded parts may be deleted, new elements may be added, or replacements may be made with respect to the description contents and the shown contents above within a range that does not deviate from the gist of the technology of the present disclosure. In addition, in order to avoid complications and facilitate understanding of the parts according to the technology of the present disclosure, in the description contents and the shown contents above, the description of common technical knowledge and the like that does not particularly require description for enabling the implementation of the technology of the present disclosure is omitted.
In the present specification, “A and/or B” is synonymous with “at least one of A or B”. That is, “A and/or B” means that it may be only A, only B, or a combination of A and B. In addition, in the present specification, in a case in which three or more matters are associated and expressed by “and/or”, the same concept as “A and/or B” is applied.
All documents, patent applications, and technical standards described in the present specification are incorporated into the present specification by reference to the same extent as in a case in which the individual documents, patent applications, and technical standards are specifically and individually stated to be incorporated by reference.
With respect to the embodiment described above, the following supplementary notes will be further disclosed.
(Supplementary Note 1)
An imaging support device comprising a processor, and a memory connected to or built in the processor, in which the processor acquires frequency information indicating a frequency of a feature of a subject specified from a captured image obtained by imaging with an imaging apparatus, the feature being classified into a category based on the feature, and performs support processing of supporting the imaging with the imaging apparatus based on the frequency information.
(Supplementary Note 2)
The imaging support device according to Supplementary Note 1, in which the category is categorized into a plurality of categories including at least one target category, the target category is a category determined based on the frequency information, and the support processing is processing including processing of supporting the imaging for a target category subject having the feature belonging to the target category.
(Supplementary Note 3)
The imaging support device according to Supplementary Note 2, in which the support processing is processing including display processing of performing display for recommending to image the target category subject.
(Supplementary Note 4)
The imaging support device according to Supplementary Note 3, in which the display processing is processing of displaying an image for display on a display and displaying a frame that surrounds at least a part of a target category subject image in the image for display.
(Supplementary Note 5)
The imaging support device according to any one of Supplementary Notes 2 to 4, in which the processor detects the target category subject based on an imaging result of the imaging apparatus, and acquires an image including an image corresponding to the target category subject on a condition that the target category subject is detected.
(Supplementary Note 6)
The imaging support device according to Supplementary Note 5, in which the processor detects the target category subject and causes the imaging apparatus to perform the imaging accompanied by main exposure on a condition that the target category subject is included in a predetermined imaging range.
(Supplementary Note 7)
The imaging support device according to any one of Supplementary Notes 2 to 6, in which, in a case in which the target category subject is positioned out of a designated imaging range determined in accordance with a given instruction from an outside, the processor controls the imaging apparatus to include the designated imaging range and the target category subject within a depth of field.
(Supplementary Note 8)
The imaging support device according to Supplementary Note 7, in which, in a case in which both the designated imaging range and the target category subject are not included within the depth of field due to a structure of the imaging apparatus, the processor causes the imaging apparatus to image the designated imaging range and the target category subject using a focus bracket method.
(Supplementary Note 9)
The imaging support device according to Supplementary Note 7 or 8, in which, in a case in which the target category subject is positioned within the designated imaging range, the processor causes the imaging apparatus to image the target category subject in a state in which the target category subject is in focus.
(Supplementary Note 10)
The imaging support device according to any one of Supplementary Notes 2 to 9, in which, in a case in which a degree of difference between brightness of a reference subject and brightness of the target category subject is equal to or larger than a predetermined degree of difference, the processor causes the imaging apparatus to image the reference subject and the target category subject using an exposure bracket method.
(Supplementary Note 11)
The imaging support device according to Supplementary Note 10, in which, in a case in which the degree of difference is smaller than the predetermined degree of difference, the processor causes the imaging apparatus to image the reference subject and the target category subject with an exposure determined with reference to the target category subject.
(Supplementary Note 12)
The imaging support device according to any one of Supplementary Notes 1 to 11, in which the image obtained by performing the imaging supported by performing the support processing is used in learning.
Foreign application priority data: Application No. 2020-113524, filed June 2020, Japan (national).
This application is a continuation application of International Application No. PCT/JP2021/021756, filed Jun. 8, 2021, the disclosure of which is incorporated herein by reference in its entirety. Further, this application claims priority under 35 USC 119 from Japanese Patent Application No. 2020-113524 filed Jun. 30, 2020, the disclosure of which is incorporated by reference herein.
Related application data: parent application PCT/JP2021/021756, filed June 2021, US; child application No. 18145016, US.