The present invention relates to an information processing apparatus, an information processing method, and an information processing program, and particularly to an information processing apparatus, an information processing method, and an information processing program for processing information for imaging a desired subject with desired composition.
JP2020-017912A discloses an imaging apparatus that displays an icon resembling a subject on a display unit, sets composition on a screen using the icon, calculates a distance in which the subject can be imaged with the set composition, and presents the calculated distance to a user.
One embodiment according to the disclosed technology provides an information processing apparatus, an information processing method, and an information processing program that can acquire information for imaging a desired subject with desired composition.
(1) An information processing apparatus comprises a processor, in which the processor is configured to acquire information about an imaging apparatus and information about a first subject and a second subject to be set as an imaging target, and perform a control of estimating an imaging position at which the first subject and the second subject are imageable based on display states of a first image representing the first subject and a second image representing the second subject on a display device and/or another display device.
(2) In the information processing apparatus according to (1), the processor is configured to perform a control of displaying information about the estimated imaging position on the display device and/or the other display device.
(3) In the information processing apparatus according to (1) or (2), the processor is configured to receive input of composition of the first image and the second image on the display device and/or the other display device, and perform a control of estimating the imaging position in the composition of which input is received.
(4) In the information processing apparatus according to (3), the composition is a position and a size of the first image and a position and a size of the second image on the display device and/or the other display device.
(5) In the information processing apparatus according to any one of (1) to (4), the information about the first subject and the second subject is information about positions and sizes of the first subject and the second subject.
(6) In the information processing apparatus according to (5), the information about the position of the first subject and/or the information about the position of the second subject is information about an altitude and an azimuth of the first subject and/or information about an altitude and an azimuth of the second subject.
(7) In the information processing apparatus according to (5), the processor is configured to receive input of information for specifying the first subject and the second subject, and perform a control of acquiring the information about the position and the size of the specified first subject and the information about the position and the size of the specified second subject from a database.
(8) In the information processing apparatus according to (5), the processor is configured to perform a control of acquiring information obtained by measuring the position and the size of the first subject and the position and the size of the second subject.
(9) In the information processing apparatus according to any one of (1) to (8), the processor is configured to acquire information about a current location, generate guidance information for leading to the imaging position based on the acquired information about the current location, and perform a control of displaying the generated guidance information on the display device and/or the other display device.
(10) In the information processing apparatus according to any one of (1) to (9), the processor is configured to calculate a time slot in which backlight and/or insufficiency of a light quantity occurs in a case of imaging the first subject and the second subject at the imaging position, and perform a control of displaying the calculated time slot or a time slot excluding the calculated time slot on the display device and/or the other display device.
(11) In the information processing apparatus according to (3) or (4), the processor is configured to, in a case where the imaging position is not estimable, perform a control of performing correction to the composition in which the imaging position is estimable.
(12) In the information processing apparatus according to (3) or (4), the processor is configured to perform a control of displaying a frame having the same aspect ratio as an image captured by the imaging apparatus on the display device and/or the other display device.
(13) In the information processing apparatus according to (12), the processor is configured to, in a case where the imaging position is not estimable because sizes of the first image and the second image in the composition of which input is received are unsuitable, perform a control of performing correction to the composition in which the imaging position is estimable by changing the size of the first image and the size of the second image within the frame and/or a front and rear positional relationship.
(14) In the information processing apparatus according to (12) or (13), the processor is configured to, in a case where the imaging position is not estimable because a front and rear positional relationship between the first image and the second image in the composition of which input is received is unsuitable, perform a control of performing correction to the composition in which the imaging position is estimable by changing a size of the first image and a size of the second image within the frame and/or the front and rear positional relationship.
(15) In the information processing apparatus according to any one of (12) to (14), the processor is configured to, in a case where the imaging position is not estimable because a distance between the first image and the second image in the composition of which input is received is unsuitable, perform correction to the composition in which the imaging position is estimable by changing the distance between the first image and the second image within the frame.
(16) In the information processing apparatus according to any one of (12) to (15), the processor is configured to detect an obstacle in a case of imaging the first subject and the second subject at the estimated imaging position, and in a case where the obstacle is detected, perform a control of performing correction to the composition in which the first subject and the second subject are imageable by avoiding the obstacle.
(17) In the information processing apparatus according to any one of (1) to (16), the processor is configured to determine whether or not the estimated imaging position is an enterable location, and perform a control of issuing an alert in a case where the estimated imaging position is an unenterable location.
(18) In the information processing apparatus according to any one of (1) to (17), the imaging position includes information about an altitude.
(19) In the information processing apparatus according to any one of (1) to (18), the processor is configured to determine whether or not the estimated imaging position is a reachable position, and perform a control of issuing an alert in a case where the estimated imaging position is an unreachable position.
(20) In the information processing apparatus according to (12), the processor is configured to further perform a control of receiving input of directions of the first image and the second image within the frame.
(21) An information processing method comprises a step of acquiring information about an imaging apparatus and information about a first subject and a second subject to be set as an imaging target, and a step of estimating an imaging position at which the first subject and the second subject are imageable based on display states of a first image representing the first subject and a second image representing the second subject on a display device and/or another display device.
(22) An information processing program causes a computer to implement a function of acquiring information about an imaging apparatus and information about a first subject and a second subject to be set as an imaging target, and a function of estimating an imaging position at which the first subject and the second subject are imageable based on display states of a first image representing the first subject and a second image representing the second subject on a display device and/or another display device.
Hereinafter, preferred embodiments of the present invention will be described in detail in accordance with the accompanying drawings.
A case where the present invention is applied to a system (imaging position estimation system) that estimates and presents a position (imaging position) at which a desired subject can be imaged with desired composition will be illustratively described. The “imaging position” is a geographical position.
The subject to be set as an imaging target is limited to a subject of which a position can be specified at least at the time of imaging. Accordingly, the subject as the imaging target can include even a moving subject of which the position at the time of imaging can be specified. The “position” is a geographical position.
In the following description of the imaging position estimation system, a case where two subjects are selected and imaged with the desired composition will be illustratively described.
As illustrated in
The user terminal 10 is composed of a portable terminal, particularly a camera-equipped portable terminal. In a case where the user terminal 10 is a camera-equipped information terminal, imaging is performed by the camera comprised in the information terminal. In the present embodiment, a case where the user terminal 10 is composed of a camera-equipped smartphone will be described.
As illustrated in
The sensor unit 23 comprises various sensors such as a geomagnetic sensor, a gyrocompass, an acceleration sensor, and a range measurement sensor. The range measurement sensor is a two-dimensional scan-type optical distance sensor that measures a distance to a detection object while scanning light. The range measurement sensor includes a laser scanner, a laser range finder (LRF), light detection and ranging or laser imaging detection and ranging (LIDAR), and the like. By providing the range measurement sensor, a space can be three-dimensionally scanned.
The camera 18 is a so-called digital camera and comprises a lens, an image sensor, and the like. In the present embodiment, the camera 18 is assumed to have a zoom function. The camera 18 is an example of an imaging apparatus. In addition, the CPU 11 is an example of a processor. The display 15 is an example of a display device.
As illustrated in
The camera information acquisition unit 10A performs processing of acquiring information (camera information) about the camera required for estimating the imaging position. The camera information includes information about a size of an image sensor, information about a settable focal length of a lens, and the like. As described above, in the present embodiment, imaging is performed by the camera 18 comprised in the user terminal 10. Accordingly, in the present embodiment, the camera information of the camera 18 comprised in the user terminal 10 is acquired. This information is stored in the ROM 12 and/or the EEPROM 14. The camera information acquisition unit 10A acquires the camera information from the ROM 12 and/or the EEPROM 14. The acquired camera information is provided to the imaging position estimation unit 10E.
The subject selection unit 10B performs processing of receiving selection of the subject to be set as the imaging target. In the present embodiment, selection of the subject is received using a map.
As illustrated in
The map M displayed in the map display region 15A is enlarged by a pinch-out operation and is shrunk by a pinch-in operation. In addition, the map M displayed in the map display region 15A is moved in an operation direction by a swipe operation.
The subject selection unit 10B acquires data of the map M from the information provision server 100 and displays the map M on the display 15.
As illustrated in
A user selects the subject to be set as the imaging target by touching the subject candidate Ob desired to be imaged for a predetermined time period or longer (selects by performing a so-called long press). The subject selection unit 10B receives selection of the subject based on an operation input into the touch panel 16.
In a case where selection processing is confirmed, an icon of the selected subject is displayed in a predetermined information display region 15B set within the screen of the display 15. In the present embodiment, the information display region 15B is set within the same screen as the map display region 15A.
As described above, two subjects are selected in the imaging position estimation system 1 of the present embodiment. Accordingly, two icons are displayed in the information display region 15B.
As illustrated in
As illustrated in
The icon is composed of an image representing the subject. In the present embodiment, the icon is composed of an image that is an illustration of the subject. As will be described later, the icon is used for setting the composition. Accordingly, the icon is preferably composed of an image that reproduces the subject as closely as possible. In addition to an image that is an illustration of the subject, an image obtained by cutting out the subject part from an image in which the subject is captured, or the like, can be used as the icon.
The subject selection unit 10B acquires data of the icon of the selected subject from the information provision server 100 and displays the icon on the display 15.
The subject information acquisition unit 10C performs processing of acquiring information (subject information) about the subject selected by the subject selection unit 10B.
The subject information includes information required for estimating the imaging position. The information required for estimating the imaging position includes information about a position, information about a size, and the like of the subject. The position is a geographical position. The geographical position is specified by, for example, a latitude and a longitude. In the case of specifying the geographical position in more detail, an altitude is included. The information about the size includes at least information about a height of the subject. The subject information acquisition unit 10C acquires the subject information from the information provision server 100. The acquired subject information is provided to the imaging position estimation unit 10E.
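As a concrete illustration, the subject information might be held in a structure like the following; the field names are hypothetical and simply mirror the items listed above (a minimal sketch, not the actual data format of the system).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SubjectInfo:
    """Subject information used for estimating the imaging position.

    Field names are hypothetical; they mirror the items described above:
    a geographical position (latitude/longitude, with an altitude when the
    position is specified in more detail) and a size that includes at
    least the height of the subject.
    """
    name: str
    latitude_deg: float
    longitude_deg: float
    height_m: float                      # at least the height of the subject
    altitude_m: Optional[float] = None   # only when specified in more detail
```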
The composition setting unit 10D performs processing of receiving input of the composition. The composition to be input is the composition used in imaging the subjects (the first subject and the second subject) selected by the subject selection unit 10B. The composition is input using the icons Ic1 and Ic2 of the subject. In addition, the composition is input in a predetermined composition setting region 15C set within the screen of the display 15. As illustrated in
As illustrated in
As illustrated in
The setting operation of the composition is performed by moving and/or enlarging and shrinking the icons Ic1 and Ic2 of the subject within the frame F displayed in the composition setting region 15C. The user adjusts positions and sizes of the icons Ic1 and Ic2 to obtain the desired composition by moving and/or enlarging and shrinking the icons Ic1 and Ic2 of the subject within the frame F.
As illustrated in
As illustrated in
As illustrated in
The setting operation of the front and rear relationship is performed by touching the icon of the subject as an operation target for a certain time period or longer (so-called long press). In a case where the icon of the subject as the operation target is touched for the certain time period or longer, a setting menu Me of the front and rear relationship is displayed on the screen as illustrated in
The imaging position estimation unit 10E performs processing of estimating the position (imaging position) at which the first subject and the second subject can be imaged with the set composition based on the subject information and on the camera information. The imaging position here includes a position at which imaging can be performed with almost the same composition as the set composition. That is, the position at which imaging can be performed with approximately the same composition is estimated. In addition, as described above, the composition is set on the display using the icons Ic1 and Ic2 of the subject. Accordingly, the imaging position is estimated based on display states of the icons Ic1 and Ic2 on the display 15. More specifically, the imaging position is estimated based on the display states of the icons Ic1 and Ic2 within the frame F displayed on the display 15.
As described above, the subject information includes the information about the position, the information about the size, and the like of the subject to be set as the imaging target. In addition, the camera information includes the information about the size of the image sensor, the information about the settable focal length of the lens, and the like. The imaging position estimation unit 10E estimates the imaging position using these pieces of information.
In
Here, a case where a subject having a known size is imaged is considered. In this case, a distance to the subject can be obtained from the information (the size of the image sensor, the focal length of the lens, and the like) of the camera during imaging in a case where the size of the subject within the captured image is known.
In the imaging position estimation system 1 of the present embodiment, the size and the position of the subject to be set as the imaging target are known, and the size of the image sensor and the settable focal length of the lens are also known. Accordingly, in a case where the size of the subject in the set composition is known, a distance for imaging the subject with the size can be obtained. Here, the size of the subject in the set composition is the size of the subject on the image sensor.
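In other words, the ordinary pinhole relationship applies: a subject of real height H that must occupy height s on the image sensor at focal length f is imaged from distance D = f × H / s. A minimal sketch under that model (names hypothetical):

```python
def imaging_distance_m(real_height_m: float,
                       height_on_sensor_mm: float,
                       focal_length_mm: float) -> float:
    """Distance at which a subject of known height appears with the
    desired height on the image sensor (simple pinhole model; lens
    distortion and focusing effects are ignored).
    """
    # Similar triangles: height_on_sensor / focal_length = real_height / distance
    return focal_length_mm * real_height_m / height_on_sensor_mm

# Example: a 50 m tall subject that should occupy 10 mm of the sensor at
# f = 35 mm must be imaged from 35 * 50 / 10 = 175 m away.
```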
In
In addition, in
Accordingly, in
Here, the intersection between the circles of the same type occurs at two locations. In a case where the first subject and the second subject are imaged from the position of each intersection, a left and right positional relationship between the first subject and the second subject is reversed in the captured image. Accordingly, the intersection at which the left and right positional relationship is the same as that in the set composition is the intersection to be selected. That is, the intersection is a position at which the first subject and the second subject can be imaged with the same left and right positional relationship as the set composition. For example, in the example in
In the example illustrated in
However, an interval between the first subject and the second subject to be imaged is different between a case where imaging is performed by setting the focal length to the second focal length f2 [mm] and a case where imaging is performed by setting the focal length to the third focal length f3 [mm]. Accordingly, the position at which the first subject and the second subject can be imaged with the set composition can be obtained by further obtaining the focal length with which the first subject and the second subject can be imaged with the set interval.
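Geometrically, for a given focal length the positions at which each subject appears with the set size form a circle around that subject, and the candidate imaging positions are the intersections of the two circles, as described above. The standard circle-circle intersection can be sketched as follows (planar coordinates; names hypothetical):

```python
import math

def circle_intersections(c1, r1, c2, r2):
    """Intersections of two circles; returns zero, one, or two (x, y) points.

    c1, c2: circle centers (the planar positions of the two subjects),
    r1, r2: the distances at which each subject appears with the set size.
    """
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []  # no imaging position exists for this focal length
    # Distance from c1 to the chord joining the intersection points
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))
    xm = x1 + a * (x2 - x1) / d
    ym = y1 + a * (y2 - y1) / d
    if h == 0:
        return [(xm, ym)]  # the circles touch at a single point
    # Two candidates; the one preserving the left/right order of the set
    # composition is selected afterwards.
    return [(xm + h * (y2 - y1) / d, ym - h * (x2 - x1) / d),
            (xm - h * (y2 - y1) / d, ym + h * (x2 - x1) / d)]
```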
The imaging position estimation unit 10E searches for the position at which the first subject and the second subject can be imaged with the set composition by changing the focal length in a stepwise manner.
First, the sizes and the positional relationship of the first subject and the second subject in the set composition are calculated (step S1). Specifically, the sizes and the positional relationship of the first subject and the second subject on the image sensor are calculated.
Next, an operation parameter for calculating the imaging position is set (step S2). The focal length is set to a value at a wide end as an initial value.
Next, a candidate of the imaging position is calculated based on the set operation parameter (step S3). As described above, the candidate of the imaging position is calculated as the position at which the first subject and the second subject can be imaged with the size in the set composition. This position is calculated at two locations as an intersection of circles. Out of the two intersections, the intersection at which the first subject and the second subject can be imaged with the same left and right positional relationship as the set composition is selected as the candidate of the imaging position.
Next, whether or not the same image as the composition can be captured in a case where the first subject and the second subject are imaged at the selected imaging position is determined (step S4). That is, whether or not the first subject and the second subject can be imaged with the same left and right positional relationship (interval) as the set composition is determined.
In a case where it is determined that the same image as the set composition can be captured, the selected position is estimated as the imaging position (step S5), and the processing is finished.
On the other hand, in a case where it is determined that the same image as the set composition cannot be captured, whether or not the focal length can be changed is determined (step S6). In a case where the focal length can be changed, the focal length is changed (step S7). The focal length is changed to a telephoto side by an amount of change set in advance. Then, the candidate of the imaging position is calculated again based on the focal length after change (step S3).
In a case where the focal length cannot be changed, it is determined that the imaging position is inestimable (step S9), and the processing is finished. In the present embodiment, a case where the focal length cannot be changed is a case where the focal length has reached a telephoto end.
The imaging position estimation unit 10E searches for the imaging position by switching the focal length in a stepwise manner. In a case where the focal length cannot be changed, it is determined that the imaging position is inestimable. Thus, the processing performed by the imaging position estimation unit 10E includes processing of determining availability of imaging with the set composition.
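Condensed into code, the search of steps S1 to S9 might look like the following sketch; `candidate_positions` and `matches_composition` stand in for the processing of steps S3 and S4, and all names and the step size are hypothetical:

```python
def estimate_imaging_position(wide_end_mm, tele_end_mm, step_mm,
                              candidate_positions, matches_composition):
    """Stepwise focal-length search (a sketch of steps S1 to S9).

    candidate_positions(f): intersection candidate for focal length f with
        the correct left/right order, or None (step S3).
    matches_composition(pos, f): True when imaging from pos at focal length
        f reproduces the set interval between the subjects (step S4).
    """
    f = wide_end_mm  # step S2: start at the wide end
    while f <= tele_end_mm:
        pos = candidate_positions(f)                          # step S3
        if pos is not None and matches_composition(pos, f):   # step S4
            return pos, f          # step S5: imaging position found
        f += step_mm               # steps S6/S7: shift toward the telephoto side
    return None                    # step S9: imaging position is inestimable
```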
The imaging position estimation unit 10E performs the estimation processing of the imaging position in accordance with an instruction to start processing from the user. The user provides the instruction to start processing by touching the Check & Search button Bt1 (refer to
A processing result of the imaging position estimation unit 10E is provided to the result output unit 10F. Information about the processing result includes information about the focal length in addition to information about the estimated imaging position. In a case where it is determined that the imaging position is inestimable, information indicating that imaging is unavailable is provided to the result output unit 10F as the processing result.
The result output unit 10F performs processing of outputting the processing result of the imaging position estimation unit 10E. This processing is performed by displaying the processing result on the display 15 in a predetermined form.
As illustrated in
On the other hand, in a case where the imaging position is not estimated, that is, in a case where the set composition is uncapturable, a predetermined error message is displayed in the information display region 15B as illustrated in
In a case where the error message is displayed, the user sets the composition again. In a case where the user touches the composition setting region 15C in order to set the composition again, the error message disappears. In a case where the error message disappears, the Check & Search button Bt1 is displayed again in the information display region 15B as illustrated in
The information provision server 100 provides various types of information to the user terminal. The information provision server 100 is composed of a general server computer.
As illustrated in
As illustrated in
The request reception unit 100A performs processing of receiving a request to provide information from the user terminal 10. That is, a request to provide the map data, provide the subject information, and the like is received.
The request processing unit 100B processes the request received by the request reception unit 100A. For example, processing of acquiring the corresponding map data in response to the request to provide the map data is performed. In addition, processing of acquiring the corresponding subject information in response to the request to provide the subject information is performed.
The map data, the subject information, and the like are stored in an information database (DB) 100D. Accordingly, the request processing unit 100B acquires the map data, the subject information, and the like from the information database 100D. The information database 100D is stored in, for example, the HDD 104.
The request processing result output unit 100C performs processing of acquiring a processing result of the request received by the request reception unit 100A from the request processing unit 100B and outputting (transmitting) the processing result to the user terminal 10.
First, the processing of acquiring the camera information is performed (step S11). As described above, in the present embodiment, the camera information is stored in the ROM 12 and/or the EEPROM 14. Thus, the camera information is acquired from the ROM 12 and/or the EEPROM 14.
Next, the processing of receiving selection of the subject to be set as the imaging target is performed (step S12). As illustrated in
Next, the processing of acquiring the subject information of the selected subject is performed (step S13). The subject information is acquired from the information provision server 100.
Next, the processing of receiving input of the composition is performed (step S14). The composition is input using the icon of the subject. As illustrated in
After setting of the composition is completed, the estimation processing of the imaging position is performed in accordance with the instruction to start processing from the user (step S15). The instruction to start processing is provided by touching the Check & Search button Bt1 (refer to
After the estimation processing is completed, the processing of outputting the result is performed (step S16). The result is displayed on the display 15. At this point, display content changes between a case where the imaging position is estimated and a case where the imaging position is not estimated. As illustrated in
As described above, according to the imaging position estimation system of the present embodiment, the position (imaging position) at which imaging can be performed can be obtained by simply selecting the subject to be imaged and inputting the composition. Accordingly, the image of the desired composition can be simply captured.
While it is configured to designate (select) the imaging target using the map in the embodiment, a method of designating the imaging target is not limited thereto. For example, it can be configured to designate the imaging target by inputting a name. In this case, the subject information is stored in the information database 100D of the information provision server 100 in association with the name of the subject.
The icon used in setting the composition may have a shape with which a target can be approximately recognized. That is, the icon of a form with which approximate composition can be set may be used. Accordingly, for example, the icon represented by a silhouette can also be used.
While it is configured to set the composition using a region (composition setting region) of a part of the screen of the display in the embodiment, it may be configured to set the composition using, for example, the entire screen. In this case, display may be switched to the entire screen in accordance with an instruction from the user.
While it is configured to estimate only the imaging position in the embodiment, it can be configured to also estimate an imaging direction in addition to the imaging position.
In addition, it can be configured to estimate the imaging position further including the altitude. Accordingly, imaging on which a vertical position is reflected can be performed. In the case of estimating the imaging position including the altitude, a spherical surface is set instead of a circle, and the position at which each subject can be imaged is estimated. That is, in a case where the lens is set to have a predetermined focal length fx [mm], the position at which each subject can be imaged with the size in the set composition is represented by a spherical surface, and the imaging position is estimated by obtaining an intersection of the two spherical surfaces.
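One way to reduce this three-dimensional case back to the planar computation, under the assumption that candidate camera altitudes are swept: at a fixed camera altitude, each sphere intersects the horizontal plane in a circle whose radius follows from the Pythagorean theorem, after which the circle intersection sketched earlier applies. A sketch:

```python
import math

def sphere_radius_at_altitude(sphere_radius_m: float,
                              subject_altitude_m: float,
                              camera_altitude_m: float):
    """Radius of the horizontal cross-section of the sphere around a
    subject at the given camera altitude, or None when the horizontal
    plane misses the sphere entirely.
    """
    dz = camera_altitude_m - subject_altitude_m
    if abs(dz) > sphere_radius_m:
        return None  # this camera altitude lies outside the sphere
    return math.sqrt(sphere_radius_m ** 2 - dz ** 2)

# Sweeping camera_altitude_m over plausible values and intersecting the two
# resulting circles yields imaging-position candidates that include altitude.
```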
In addition, in setting the composition, it can be configured to set the composition including a direction of the subject and estimate the position at which the subject can be imaged in the set direction. For example, for the subject having a front surface, it can be configured to estimate the position at which imaging can be performed by setting a direction of the front surface. In setting the direction of the subject, it is configured to adjust the direction using, for example, an icon having a three-dimensional shape.
Furthermore, whether or not the estimated imaging position is an enterable location may be determined. In this case, for example, information about unenterable locations is held in the information database 100D, and whether or not the estimated imaging position corresponds to an unenterable location is determined. In a case where the estimated imaging position is an unenterable location, a predetermined alert is issued. For example, an alert message is displayed on the display 15. The unenterable locations include a sea, a river, a lake, and the like in addition to private land. A dangerous area and a region that is difficult to enter are also included. For an area that is difficult to enter, such as a mountain, information about the history of passage of people can be collected, and an area without such a history can be set as an area that is difficult to enter. Alternatively, it can be configured to determine whether or not a location is enterable by referring to a separately generated heat map of the history of passage.
In addition, in the case of estimating the imaging position including the altitude, whether or not the estimated imaging position is a reachable position may be further determined. For example, in a case where the altitude from a ground surface exceeds 2 m, it can be determined that the position is unreachable.
In a case where the estimated imaging position is the unreachable position, a predetermined alert is issued. For example, an alert message is displayed on the display 15.
A function of calculating a time slot in which backlight occurs in the case of imaging the first subject and the second subject from the location estimated as the imaging position and presenting information about the calculated time slot to the user may be further comprised. For example, the information about the time slot in which backlight occurs is displayed in the information display region 15B together with the information about the imaging position.
It may be configured to present information about a time slot in which backlight does not occur instead of the information about the time slot in which backlight occurs. In this case, a time slot excluding the time slot in which backlight occurs is presented.
Furthermore, a function of calculating a time slot in which insufficiency of a light quantity occurs and presenting information about the calculated time slot to the user may be comprised. For example, a time slot in which brightness is less than or equal to a threshold value is calculated as the time slot in which insufficiency of the light quantity occurs.
It may be configured to present both the information about the time slot in which backlight occurs and the information about the time slot in which insufficiency of the light quantity occurs, or to present only one of them.
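As one hedged illustration of such a calculation, backlight can be flagged when the sun is above the horizon and stands roughly in the direction the camera points (that is, behind the subjects). The sketch below assumes solar positions are supplied by some external ephemeris; the input format, the tolerance, and all names are hypothetical:

```python
def backlight_time_slots(imaging_azimuth_deg, sun_positions,
                         azimuth_tolerance_deg=45.0):
    """Times at which backlight occurs at the imaging position.

    sun_positions: iterable of (time, sun_azimuth_deg, sun_altitude_deg)
        samples from any solar-ephemeris source (hypothetical input format).
    Backlight is assumed when the sun is above the horizon and its azimuth
    lies within the tolerance of the direction the camera points.
    """
    slots = []
    for t, sun_az, sun_alt in sun_positions:
        # Signed azimuth difference folded into [-180, 180)
        diff = abs((sun_az - imaging_azimuth_deg + 180.0) % 360.0 - 180.0)
        if sun_alt > 0.0 and diff <= azimuth_tolerance_deg:
            slots.append(t)
    return slots
```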
While a case of performing imaging using the camera 18 comprised in the user terminal 10 has been illustratively described in the embodiment, imaging can be performed using a camera separated from the user terminal. In this case, it is required to separately acquire the camera information of the camera to be used. A method of acquiring the camera information is not particularly limited. For example, it can be configured to manually input the camera information into the user terminal. In addition, it can be configured to automatically acquire the camera information by communicating with the camera. Furthermore, it can be configured to acquire the camera information by acquiring identification information of the camera to be used. In this case, for example, the camera information of each camera is stored in the information database 100D of the information provision server 100, and the camera information of the corresponding camera is acquired from the information provision server 100.
In addition, while a case where the lens has a zoom function has been described in the embodiment, the lens of the camera may be a so-called single focal length lens. In this case, the imaging position is estimated with a constant focal length.
Furthermore, it can be configured to perform imaging by mounting the camera in, for example, an unmanned vehicle, an unmanned aerial vehicle (so-called drone), or an unmanned surface vehicle. In this case, imaging at a location at which a person cannot enter can be performed, and a selectable range of the imaging position can be expanded. Particularly, in the case of performing imaging by mounting the camera in the unmanned aerial vehicle, altitude constraints can be lifted.
While the user terminal implements functions of an information processing apparatus in the embodiment, the camera, for example, may comprise the functions of the information processing apparatus.
In addition, while it is configured to receive provision of various types of information from the information provision server 100 in the embodiment, the user terminal may have the functions of the information provision server 100. In this case, the information provision server is not required.
A case of setting an unmoving subject, that is, a subject (fixed object) of which the geographical position is fixed, as the imaging target has been described in the embodiment. The subject to be set as the imaging target is not limited thereto. The subject as the imaging target can include an object of which the geographical position can be specified at least at the time of imaging. For example, the subject as the imaging target can include an object such as a celestial body that moves with regularity.
Hereinafter, an example of a case where the subject candidates include a celestial body will be described. Since a basic configuration of the system is the same as that in the embodiment, only a method of selecting the subject and a method of estimating the imaging position will be described here.
In the present embodiment, the subject to be set as the imaging target is selected by selecting a target object on a map or inputting a name of the target object.
As illustrated in
On the other hand, in the case of selecting the subject by inputting the name of the target object, search boxes B1 and B2 displayed in the information display region 15B are used. As illustrated in
Then, the icon displayed in each icon display region is dragged to the composition setting region 15C to set the composition. The setting operation of the composition is the same as that in the first embodiment. In a case where the celestial body is selected as the imaging target, it is assumed that the size of the celestial body cannot be adjusted. Accordingly, for the celestial body, only the position is adjusted.
The user sets the desired composition by adjusting the positions and the sizes of the icons Ic1 and Ic2 in the composition setting region 15C. Then, the instruction to start the estimation processing of the imaging position is provided by touching the Check & Search button Bt1 displayed in the information display region 15B.
In
As illustrated in
tan θ = (H1/r) × (h2/h1)
A distance within the composition can be converted into an angle using the above expression. Thus, the distance w between the first subject and the second subject within the composition can also be converted into an actual angle φ using the following expression.
φ = θ × (w/h2)
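Read literally, the two expressions convert composition measurements into real angles. The symbol meanings below (H1 the actual height of the first subject, r the distance to it, h1 and h2 the heights of the two subjects in the composition, w their separation) are inferred from context, since the accompanying figure is not reproduced here. A sketch:

```python
import math

def elevation_and_separation_angles(H1_m, r_m, h1, h2, w):
    """Angles implied by the expressions in the text:
        tan(theta) = (H1 / r) * (h2 / h1)
        phi        = theta * (w / h2)
    All angles are handled in radians; symbol meanings are inferred.
    """
    # Elevation angle of the second subject as seen from the imaging location
    theta = math.atan((H1_m / r_m) * (h2 / h1))
    # Composition distance converted to an actual angle
    phi = theta * (w / h2)
    return theta, phi
```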
A date and time at which the second subject is at the elevation angle θ as seen from the location at which the first subject is present is searched for. An azimuth at which the second subject can be observed (imaged) at the searched date and time is obtained.
For the celestial body, a position (an elevation angle and an azimuth) of observation can be specified in a case where a location and a date and time of observation can be specified. Information about the position of the celestial body may be configured to be acquired from the information provision server 100 or may be configured to be obtained by calculation.
In
The imaging position is estimated through the above series of steps. As described above, in a case where the celestial body is set as the imaging target, a date and time of imaging are also estimated in addition to the imaging position.
As illustrated in
While it is configured to also estimate the date and time of imaging in addition to the imaging position in the embodiment, it may be configured to determine the availability of imaging by receiving a setting of the date and time of imaging from the user. That is, it may be configured to determine whether or not the image of the set composition can be captured on the set date and time. The availability of imaging is determined by determining whether or not the date and time in which the second subject is at the elevation angle θ from the location at which the first subject is present matches the designated date and time. In a case where the date and time do not match the designated date and time, it is determined that imaging is unavailable. It may be configured to designate the date and time by setting a range.
According to the imaging position estimation system of the first and second embodiments, the position at which the desired composition can be captured can be estimated. However, in a case where imaging is actually performed from the estimated position, an obstacle may be present, and a desired image may not be captured. In the present embodiment, a case of determining the availability of imaging by determining whether or not the obstacle is present will be described.
As illustrated in
The imaging availability determination unit 10G acquires the information about the imaging position estimated by the imaging position estimation unit 10E and the focal length. Based on the acquired information about the imaging position and the focal length and on the information about the positions of the first subject and the second subject, it determines whether or not an object (obstacle) that obstructs imaging is present. That is, whether or not an obstacle is present within the angle of view in the case of imaging the first subject and the second subject from the estimated imaging position is determined. Specifically, an obstacle between the imaging position and the first subject and the second subject is detected. Information about obstacles is stored in the information database 100D of the information provision server 100. For example, an object exceeding a defined size is registered as an obstacle together with information about its position and size. The imaging availability determination unit 10G detects the obstacle based on the information stored in the information database 100D. In a case where the obstacle is detected, it is determined that imaging is unavailable.
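A simplified two-dimensional version of this determination might test whether a registered obstacle lies close to the line of sight between the camera and a subject and in front of that subject; the geometry below is illustrative only and all names are hypothetical:

```python
import math

def obstructs(camera_xy, subject_xy, obstacle_xy, obstacle_radius_m):
    """True when a registered obstacle lies on (or near) the line of sight
    from the camera to a subject and in front of that subject.
    A simplified 2-D check against the registered position and size.
    """
    cx, cy = camera_xy
    sx, sy = subject_xy
    ox, oy = obstacle_xy
    dx, dy = sx - cx, sy - cy
    seg_len_sq = dx * dx + dy * dy
    # Projection of the obstacle onto the camera-to-subject segment
    t = ((ox - cx) * dx + (oy - cy) * dy) / seg_len_sq
    if not 0.0 < t < 1.0:
        return False  # obstacle is behind the camera or beyond the subject
    # Perpendicular distance from the obstacle to the line of sight
    px, py = cx + t * dx, cy + t * dy
    return math.hypot(ox - px, oy - py) <= obstacle_radius_m
```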
In a case where the imaging availability determination unit 10G determines that imaging is unavailable, a message indicating that imaging is unavailable because of the presence of the obstacle is displayed on the display 15. In a case where it is determined that imaging is available, the information about the imaging position and the focal length is displayed on the display 15 as usual (refer to
According to the present embodiment, the imaging position is estimated by considering the obstacle. Accordingly, the position at which the image of the desired composition can be more securely captured can be presented.
According to the imaging position estimation system of the first and second embodiments, in a case where the imaging position cannot be estimated, it is determined that imaging is unavailable. However, imaging may become available by making simple correction to the composition set by the user. For example, in a case where imaging cannot be performed with the set composition because the front and rear disposition relationship between two subjects is unsuitable, imaging may become available by reversing the front and rear disposition relationship between the two subjects. In addition, in a case where imaging cannot be performed with the set composition because two subjects are excessively close to each other, imaging may become available by increasing the interval between the two subjects. Similarly, in a case where imaging cannot be performed with the set composition because two subjects are excessively separated from each other, imaging may become available by narrowing the interval between the two subjects. In the present embodiment, a case of automatically correcting the composition and presenting the composition to the user in a case where the imaging position cannot be estimated in the composition set by the user will be described.
As illustrated in
The composition correction unit 10H performs processing of correcting the composition set by the user in a case where the imaging position estimation unit 10E determines that imaging is unavailable. As described above, in a case where the imaging position cannot be estimated, the imaging position estimation unit 10E determines that imaging is unavailable with the set composition. In a case where the imaging position cannot be estimated, the composition correction unit 10H corrects the composition set by the user through a predetermined procedure.
In the set composition, in a case where the imaging position cannot be estimated because the sizes of the first subject and the second subject are unsuitable, the composition is corrected by correcting the sizes of the first subject and the second subject. The imaging position estimation unit 10E performs the processing of estimating the imaging position again with respect to the composition after correction.
As illustrated in
In a case where, for example, the first subject and the second subject are disposed in a superimposed manner in the set composition, the composition may be corrected by changing the front and rear positional relationship between the first subject and the second subject. In this case, front and rear disposition of the first subject and the second subject is reversed.
In a case where the imaging position is estimated in the corrected composition, the corrected composition is displayed in the composition setting region 15C.
In the set composition, in a case where the imaging position cannot be estimated because the front and rear positional relationship between the first subject and the second subject is unsuitable, the composition is corrected by correcting the front and rear positional relationship between the first subject and the second subject. An unsuitable front and rear positional relationship means, for example, a case where a subject that should originally appear large in the image is set to appear small and yet is disposed at a front position.
As illustrated in
In a case where the imaging position is estimated in the corrected composition, the corrected composition is displayed in the composition setting region 15C.
In the set composition, in a case where the imaging position cannot be estimated because the distance (interval) between the subjects is unsuitable, the composition is corrected by correcting the distance between the subjects.
As illustrated in
In a case where the imaging position cannot be estimated because the distance between the subjects is excessively long, correction is performed in the direction of decreasing the distance between the subjects.
In correcting the distance, for example, correction is first performed in the direction of decreasing the distance. In a case where the imaging position cannot be estimated by performing correction in the direction of decreasing the distance, correction is performed in the direction of increasing the distance. Alternatively, the corrections may be performed in the reverse order.
In a case where the imaging position is estimated in the corrected composition, the corrected composition is displayed in the composition setting region 15C.
According to the present embodiment, in a case where imaging cannot be performed with the composition set by the user, the imaging position can be presented to the user by automatically performing correction to similar composition. Accordingly, convenience can be improved.
The above correction can be performed in combination with each other, as appropriate. In this case, for example, each correction is performed in a predetermined order, and processing is finished in a stage where the imaging position is estimated.
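The combined procedure could be organized as an ordered list of correction attempts that stops as soon as the imaging position becomes estimable; the function names below are hypothetical:

```python
def correct_composition(composition, estimate, corrections):
    """Try corrections in a predetermined order (a sketch).

    estimate(c): returns an imaging position for composition c, or None
        when the position is inestimable.
    corrections: ordered correction functions, e.g. resize the subjects,
        reverse their front/rear relationship, widen or narrow their interval.
    """
    for correct in corrections:
        candidate = correct(composition)
        position = estimate(candidate)
        if position is not None:
            return candidate, position  # finish once the position is estimable
    return None  # no simple correction makes the composition capturable
```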
Even in a case where it is determined that imaging is unavailable because of the presence of the obstacle, it may be configured to correct the composition in the same manner. That is, correction is performed to composition in which the first subject and the second subject can be imaged by avoiding the obstacle. In this case, for example, the composition is corrected by correcting the position of the subject blocked by the obstacle.
As illustrated in
In addition, in a case where the imaging position cannot be estimated because the front and rear positional relationship between the first subject and the second subject is unsuitable, a method of performing correction to composition in which a depth relationship is not known can also be employed in addition to reversing the front and rear positional relationship.
According to the imaging position estimation system of the first embodiment, the position at which the desired composition can be captured can be estimated. However, the estimated position may be a location that a person cannot enter. In this case, it is required to change the composition or the position for imaging. In the case of changing the position for imaging, convenience is improved if how the composition will change can be perceived in advance. In the present embodiment, a case will be described in which a function of correcting the imaging direction in addition to the estimated imaging position, and a function of presenting the composition that can be captured in a case where the imaging position and/or the imaging direction is corrected, are comprised.
As illustrated in
The correction reception unit 10I performs processing of receiving correction of the imaging position and the imaging direction. The imaging position is corrected by moving the mark Mx indicating the imaging position displayed on the display 15. In addition, the imaging direction is corrected by changing a direction of the arrow Dx indicating the imaging direction displayed on the display 15.
In a case where the imaging position is corrected, the composition estimation unit 10J performs processing of estimating the composition to be captured from the corrected imaging position and the imaging direction.
The result output unit 10F displays the composition estimated by the composition estimation unit 10J in the composition setting region 15C.
According to the present embodiment, in a case where the imaging position and/or the imaging direction is corrected, the composition of the image to be captured after correction can be presented. Accordingly, the image of the composition set by the user can be simply captured.
While a case of correcting the imaging position and the imaging direction has been described in the embodiment, it may be configured to also correct the focal length.
A case of imaging the subject having a known position and size has been described in the series of embodiments. In the present embodiment, a case of imaging an object having an unknown position and/or size as the subject will be described.
In a case where the position and/or the size of the subject to be set as the imaging target is unknown, these pieces of information are acquired by actual measurement. In the present embodiment, the position and the size are measured by three-dimensionally scanning the imaging target using the range measurement sensor.
As illustrated in
The measurement control unit 10K performs processing of controlling the sensor unit 23 to measure the position and the size of the subject to be set as the imaging target. In the present embodiment, the position and the size are measured by three-dimensionally scanning the subject to be set as the imaging target using the range measurement sensor.
As illustrated in
The icon acquisition unit 10L performs processing of acquiring information about the icon of the subject to be set as the imaging target. In the present embodiment, the icon is acquired by imaging the subject to be set as the imaging target. Specifically, an image obtained by extracting the subject part from the image obtained by imaging the subject to be set as the imaging target is used as the icon.
The composition setting unit 10D receives input of the composition using the icon acquired by the icon acquisition unit 10L.
The imaging position estimation unit 10E estimates the position at which the image of the composition set by the composition setting unit 10D can be captured based on the information about the position and the size of the subject (the first subject and the second subject) acquired by measurement performed by the measurement control unit 10K.
According to the present embodiment, even in the case of imaging the subject having an unknown position and/or size, the position at which the image of the desired composition can be captured can be estimated.
While it is configured to generate and acquire the icon from the image obtained by actually imaging the subject in the embodiment, a method of acquiring the icon of the subject is not limited thereto. As another method, for example, it can be configured to prepare icons of a plurality of target objects (subject candidates) in advance and cause the user to select the icon. In addition, the icon can be substituted with, for example, a rectangular frame. In this case, it is preferable to have the user input an aspect ratio so that the frame matches the target subject.
A function of providing guidance to the estimated imaging position in a case where the imaging position can be estimated may be further comprised. That is, a navigation function may be further comprised. In this case, the user terminal 10 further comprises a function of acquiring information about a current location, a function of generating guidance information for leading to the imaging position based on the acquired information about the current location, a guiding function based on the generated guidance information, and the like. For example, the guiding function is performed by displaying the generated guidance information on the display 15. These functions can be implemented using a well-known navigation technology.
A guiding function of the imaging direction in a case where the imaging direction is estimated may be further comprised. That is, a function of showing a direction for imaging at the actual imaging position to the user may be comprised. In this case, the user terminal comprises a function of acquiring information about a direction of the camera (information about a direction of an optical axis), a function of obtaining a direction in which the camera is to be directed based on the acquired information about the direction of the camera, a function of providing notification of the obtained direction, and the like. For example, notification of the obtained direction is performed by displaying the information about the direction on the display. At this point, it is more preferable to display the information about the direction in a superimposed manner on a live view image.
In addition, the image of the composition set by the user may be displayed in a superimposed manner on the live view image. At this point, for example, the image of the composition is displayed to be semi-transparent. Accordingly, the image of the set composition can be more simply captured.
In a case where the imaging position and the imaging direction are estimated, it can be configured to automatically perform imaging based on the estimated imaging position and the estimated imaging direction. For example, in the case of performing imaging using a camera mounted in an unmanned vehicle, an unmanned aerial vehicle, an unmanned surface vehicle, or the like, it can be configured to automatically perform imaging by providing information about the estimated imaging position and the estimated imaging direction to these apparatuses. Since the unmanned vehicle, the unmanned aerial vehicle, the unmanned surface vehicle, and the like in which the camera is mounted, and the technology itself for automatically performing imaging using these apparatuses are well known, description of their details will be omitted.
While the information provision server is configured to only provide information required for estimating the imaging position in the embodiment, the information provision server may be configured to perform the estimation processing of the imaging position and the like. That is, the information provision server is configured to perform the estimation processing of the imaging position and the like using the user terminal as an interface. In this case, the information provision server functions as the information processing apparatus. In addition, in this case, the user terminal constitutes another display device.
In addition, the user terminal may have the functions of the information provision server. That is, the functions of the user terminal and the functions of the information provision server may be implemented by one apparatus.
The functions of the user terminal are implemented by various processors. The various processors include a CPU and/or a graphic processing unit (GPU) that is a general-purpose processor functioning as various processing units by executing a program, a programmable logic device (PLD) such as a field programmable gate array (FPGA) that is a processor having a circuit configuration changeable after manufacture, a dedicated electric circuit such as an application specific integrated circuit (ASIC) that is a processor having a circuit configuration dedicatedly designed to execute specific processing, and the like. The program is synonymous with software.
One processing unit may be composed of one of the various processors or may be composed of two or more processors of the same type or different types. For example, one processing unit may be composed of a plurality of FPGAs or of a combination of a CPU and an FPGA. In addition, a plurality of processing units may be composed of one processor. As an example of a plurality of processing units composed of one processor, first, as represented by a computer used as a client or a server, a form in which one processor is composed of a combination of one or more CPUs and software and the processor functions as a plurality of processing units is possible. Second, as represented by a system on chip (SoC) or the like, a form of using a processor that implements functions of the entire system including a plurality of processing units in one integrated circuit (IC) chip is possible. Accordingly, various processing units are configured using one or more of the various processors as a hardware structure.
Number | Date | Country | Kind
---|---|---|---
2021-139098 | Aug 2021 | JP | national
The present application is a Continuation of PCT International Application No. PCT/JP2022/025910 filed on Jun. 29, 2022, claiming priority under 35 U.S.C. § 119(a) to Japanese Patent Application No. 2021-139098 filed on Aug. 27, 2021. Each of the above applications is hereby expressly incorporated by reference, in its entirety, into the present application.
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/JP2022/025910 | Jun 2022 | WO
Child | 18584939 | | US