The present invention relates to a technology for determining installation positions of a plurality of cameras in creating a video of a subject range by combining videos captured by the cameras.
Patent Literatures 1 to 3 disclose camera installation simulators to simulate a virtual captured video from a camera for assisting installation of surveillance cameras. Such a camera installation simulator creates a three-dimensional model space of a facility in which a surveillance camera is to be installed by using a map image of the facility and three-dimensional models of vehicles, obstacles, and the like. The camera installation simulator then simulates a coverage range, a blind range, and a captured image when a camera is installed at a specified position within the space in a particular orientation.
Patent Literature 1: JP 2009-105802 A
Patent Literature 2: JP 2009-239821 A
Patent Literature 3: JP 2009-217115 A
The camera installation simulators disclosed in Patent Literatures 1 to 3 simulate a captured range and how a captured video looks when a camera is installed at a particular position in a particular orientation. Thus, for monitoring a particular region, a user needs to find, by changing the installation condition of the camera, an optimum camera installation position at which the entire subject region can be captured.
In addition, the camera installation simulators disclosed in Patent Literatures 1 to 3 are based on the assumption of a single camera, and do not consider determining an optimum arrangement of a plurality of cameras for creating a synthetic video from a plurality of camera videos. Thus, how a video obtained by combining videos from a plurality of cameras looks cannot be known.
An object of the present invention is to enable simple determination of installation positions of a plurality of cameras, which allows a video of a subject region desired by a user to be obtained by combining videos captured by the cameras.
An installation position determining device according to the present invention includes:
a condition receiving unit to receive input of a camera condition indicating capturing conditions of cameras;
a position specifying unit to specify installation positions of cameras at which a subject region can be captured according to the camera condition received by the condition receiving unit; and
a virtual video generating unit to generate virtual captured videos obtained by capturing a virtual model with the cameras in a case where the cameras are installed at the installation positions specified by the position specifying unit, and perform overhead-view conversion on the generated virtual captured videos and combine the virtual captured videos to generate a virtual synthetic video.
According to the present invention, the installation positions of a plurality of cameras at which the plurality of cameras can capture a subject region are specified, the number of cameras being equal to or smaller than a number indicated by a camera condition, and a virtual synthetic video in a case where the cameras are installed at the specified installation positions is generated. This allows the user to determine the installation positions of the cameras at which a desired video can be obtained, simply by checking the virtual synthetic video while changing the camera condition.
***Description of Configuration***
A configuration of an installation position determining device 10 according to a first embodiment will be described with reference to
The installation position determining device 10 is a computer.
The installation position determining device 10 includes a processor 11, a storage unit 12, an input interface 13, and a display interface 14. The processor 11 is connected to other hardware components via a signal line, and controls these hardware components.
The processor 11 is an integrated circuit (IC) to perform processing. Specifically, the processor 11 is a central processing unit (CPU), a digital signal processor (DSP), or a graphics processing unit (GPU).
The storage unit 12 includes a memory 121 and a storage 122. Specifically, the memory 121 is a random access memory (RAM). Specifically, the storage 122 is a hard disk drive (HDD). Alternatively, the storage 122 may be a portable storage medium such as a Secure Digital (SD) memory card, a CompactFlash (CF), a NAND flash, a flexible disk, an optical disk, a compact disk, a Blu-ray (registered trademark) disk, or a DVD.
The input interface 13 is a unit to which an input device 31 such as a keyboard, a mouse, or a touch panel is connected. Specifically, the input interface 13 is a connector such as a universal serial bus (USB), IEEE 1394, or PS/2 connector.
The display interface 14 is a unit for connecting a display 32. Specifically, the display interface 14 is a connector such as a high-definition multimedia interface (HDMI: registered trademark) or a digital visual interface (DVI).
The installation position determining device 10 includes, as functional components, a condition receiving unit 21, a region receiving unit 22, a position specifying unit 23, a virtual video generating unit 24, and a display unit 25. The position specifying unit 23 includes an X position specifying unit 231 and a Y position specifying unit 232. The functions of the condition receiving unit 21, the region receiving unit 22, the position specifying unit 23, the X position specifying unit 231, the Y position specifying unit 232, the virtual video generating unit 24, and the display unit 25 are implemented by software.
The storage 122 of the storage unit 12 stores programs to implement the functions of the respective units of the installation position determining device 10. The programs are read by the processor 11 into the memory 121, and executed by the processor 11. In this manner, the functions of the respective units of the installation position determining device 10 are implemented. In addition, the storage 122 stores map data of regions including a subject region 42 of which a virtual synthetic video 46 is to be acquired.
Information, data, signal values, and variable values representing results of processing of the functions of the respective units implemented by the processor 11 are stored in the memory 121, or in a register or a cache memory in the processor 11. In the description below, the information, data, signal values, and variable values representing the results of processing of the functions of the respective units implemented by the processor 11 are assumed to be stored in the memory 121.
The programs to implement the functions implemented by the processor 11 are assumed to be stored in the storage unit 12. The programs, however, may be stored in a portable storage medium such as a magnetic disk, a flexible disk, an optical disk, a compact disk, a Blu-ray (registered trademark) disk, or a DVD.
***Description of Operation***
Operation of the installation position determining device 10 according to the first embodiment will be explained with reference to
The operation of the installation position determining device 10 according to the first embodiment corresponds to an installation position determining method according to the first embodiment. In addition, the operation of the installation position determining device 10 according to the first embodiment corresponds to processes of an installation position determining program according to the first embodiment.
An outline of the operation of the installation position determining device 10 according to the first embodiment will be explained with reference to
As illustrated in
<Step S1: Region Receiving Process>
The region receiving unit 22 receives input of a subject region 42 of which a virtual synthetic video 46 is to be acquired.
Specifically, the region receiving unit 22 reads the map data from the storage 122, and performs texture mapping and the like to generate a two-dimensional or three-dimensional computer graphics (CG) space 43. The region receiving unit 22 then receives specification of the subject region 42 on a top view 44 of the CG space 43.
In the first embodiment, the CG space 43 is assumed to be a three-axis space expressed by X, Y, and Z axes. In addition, the subject region 42 is assumed to be a rectangle with sides parallel to the X axis and the Y axis on a plane expressed by the X and Y axes. Furthermore, the subject region 42 is assumed to be specified by upper-left coordinate values (x1, y1), a width Wx in an x direction parallel to the X axis, and a width Wy in a y direction parallel to the Y axis.
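For reference, the rectangle specification described above can be sketched as a small Python data structure; the class and field names below are illustrative and not part of the embodiment.

```python
from dataclasses import dataclass

@dataclass
class SubjectRegion:
    """Rectangular subject region 42: upper-left corner (x1, y1) and widths Wx, Wy."""
    x1: float
    y1: float
    wx: float  # width Wx in the x direction
    wy: float  # width Wy in the y direction
```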
<Step S2: Condition Receiving Process>
The condition receiving unit 21 receives input of a camera condition 41.
Specifically, the camera condition 41, which indicates information such as the maximum number 2N of cameras 50 to be installed, a critical elongation ratio K, a critical height Zh, an installation height Zs, an angle of view θ, a resolution, and the types of the cameras 50, is input by the user through the input device 31, and the condition receiving unit 21 receives the input camera condition 41. The critical elongation ratio K is an upper limit of an elongation ratio (Q/P) of a subject in overhead-view conversion of video (see
In the first embodiment, the condition receiving unit 21 displays a GUI screen on the display 32 via the display interface 14 to prompt the user to input the respective items indicated by the camera condition 41 by selecting the items or the like. The condition receiving unit 21 writes the received camera condition 41 into the memory 121.
For the camera type, the condition receiving unit 21 displays a list of the types of the cameras 50 to prompt the user to select the camera type. In addition, for the angle of view, the condition receiving unit 21 displays the maximum angle of view and the minimum angle of view of the cameras 50 of the selected type to prompt input of an angle of view between the maximum angle of view and the minimum angle of view.
Note that the installation height Zs is specified as the lowest height at which the cameras 50 can be installed. The cameras 50 are installed at a position at a certain height, such as on a pole located near the subject region 42.
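The items of the camera condition 41 can likewise be sketched as a data structure. The field names below are hypothetical, and the angle of view is assumed to be held in radians.

```python
from dataclasses import dataclass

@dataclass
class CameraCondition:
    """Camera condition 41 as described in step S2."""
    max_cameras: int   # maximum number 2N of cameras 50 to be installed
    k: float           # critical elongation ratio K (upper limit of Q/P)
    zh: float          # critical height Zh of subjects to be captured
    zs: float          # installation height Zs
    theta: float       # angle of view θ in radians
    resolution: tuple  # horizontal and vertical resolution (Wθ, Hθ)
    camera_type: str   # selected type of the cameras 50
```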
<Step S3: Position Specifying Process>
The position specifying unit 23 specifies the installation positions 45 of the respective cameras 50 at which the cameras 50 can capture a subject at a height equal to or lower than the critical height Zh in the subject region 42, the number of cameras 50 being equal to or smaller than the number 2N indicated by the camera condition 41 received by the condition receiving unit 21 in step S2. When video is subjected to overhead-view conversion by the virtual video generating unit 24 in step S5, the position specifying unit 23 specifies installation positions 45 at which the elongation ratio of the subject at the critical height Zh or lower in the subject region 42 is equal to or lower than the critical elongation ratio K.
<Step S4: Specification Determining Process>
The position specifying unit 23 advances the processing to step S5 if the installation positions 45 are specified in step S3, or returns the processing to step S2 and prompts re-entry of the camera condition 41 if the installation positions 45 cannot be specified.
A case in which the installation positions 45 cannot be specified refers to a case in which installation positions 45 at which the subject region 42 can be captured with the number of cameras being equal to or smaller than 2N indicated by the camera condition 41 cannot be specified or a case in which installation positions 45 at which the elongation ratio of the subject is equal to or lower than the critical elongation ratio K cannot be specified.
<Step S5: Virtual Video Generating Process>
The virtual video generating unit 24 generates virtual captured videos obtained by capturing a virtual model with the cameras 50 in a case where the cameras 50 are installed at the installation positions 45 specified by the position specifying unit 23 in step S3. The virtual video generating unit 24 then performs overhead-view conversion on the generated virtual captured videos and combines the converted virtual captured videos to generate a virtual synthetic video 46.
In the first embodiment, the CG space 43 generated in step S1 is used as the virtual model.
<Step S6: Displaying Process>
The display unit 25 displays the virtual synthetic video 46 generated by the virtual video generating unit 24 in step S5 on the display 32 via the display interface 14. This allows the user to check whether or not the obtained video is in a desired state on the basis of the virtual synthetic video 46.
Specifically, the display unit 25 displays the virtual synthetic video 46 generated in step S5 and the virtual captured videos captured by the respective cameras 50 as illustrated in
<Step S7: Quality Determining Process>
According to the user's operation, the processing is terminated if the obtained video is in the desired state, or the processing is returned to step S2 for re-entry of the camera condition 41 if the obtained video is not in the desired state.
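The flow of steps S2 to S7 can be summarized in a minimal Python sketch. Every callable below is a placeholder standing in for the corresponding unit described above, not an implementation disclosed by the embodiment.

```python
def determine_installation_positions(receive_condition, specify_positions,
                                     generate_video, show_video, is_desired):
    """Hypothetical driver loop for steps S2 to S7."""
    while True:
        condition = receive_condition()           # step S2: camera condition 41
        positions = specify_positions(condition)  # step S3: installation positions 45
        if positions is None:                     # step S4: prompt re-entry of condition
            continue
        video = generate_video(positions)         # step S5: virtual synthetic video 46
        show_video(video)                         # step S6: display on the display 32
        if is_desired():                          # step S7: user judges the video
            return positions
```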
Step S3 according to the first embodiment will be explained with reference to
Step S3 includes step S31 and step S32.
In the first embodiment, as illustrated in
The installation positions 45 specified in step S3 include installation positions X in the x direction parallel to the X axis, installation positions Y in the y direction parallel to the Y axis, installation positions Z in the z direction parallel to the Z axis, yaw attitudes that are rotation angles about the Z axis, pitch attitudes that are rotation angles about the Y axis, and roll attitudes that are rotation angles about the X axis.
In the first embodiment, the installation positions Z of the respective cameras 50 are the installation height Zs included in the camera condition 41. In addition, the yaw attitudes are such that the x direction is defined as 0 degrees, and one of the two cameras 50 facing each other has a yaw attitude of 0 degrees while the other has a yaw attitude of 180 degrees.
Thus, in step S3, the remaining installation positions X, installation positions Y, and pitch attitudes are specified. The pitch attitudes will hereinafter be referred to as angles of depression α.
<Step S31: Position X Specifying Process>
The X position specifying unit 231 of the position specifying unit 23 specifies installation positions X and angles of depression α of two cameras 50 with which the entire subject region 42 in the x direction can be captured and with which at least the elongation ratio of a subject in front of the cameras 50 is equal to or lower than the critical elongation ratio K.
Specifically, the X position specifying unit 231 reads the subject region 42 received in step S1 and the camera condition 41 received in step S2 from the memory 121. The X position specifying unit 231 then determines a use range Hk* to be actually used within a coverage range H of the cameras 50 so that the use range Hk* is within a range Hk expressed by Expression 4 and satisfies Expression 6, which will be explained below. The X position specifying unit 231 then calculates the installation position X of one of the two cameras 50 facing each other by Expression 7, and calculates the installation position X of the other camera 50 by Expression 8. In addition, the X position specifying unit 231 determines an angle between an upper limit and a lower limit expressed by Expressions 10 and 12 as an angle of depression α.
A method by which the X position specifying unit 231 specifies the installation positions X and the angles of depression α will be explained in detail.
An offset O and a coverage range H of the camera 50 are expressed by Expression 1 in a case where the camera 50 cannot capture the position right below itself, and by Expression 2 in a case where the camera 50 can capture the position right below itself.
O=Zs·tan(π/2−α−θ/2)
H=Zs·tan(π/2−α+θ/2)−O (Expression 1)
O=Zs·tan(π/2+α+θ/2)
H=Zs·tan(π/2−α+θ/2)+O (Expression 2)
In the description below, the case where the camera 50 can capture the position right below is assumed and explained with use of Expression 2. In the case where the camera 50 cannot capture the position right below, Expression 1 is used instead of Expression 2.
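A direct Python transcription of Expressions 1 and 2 may make the geometry easier to check; it assumes angles in radians and is only a sketch.

```python
import math

def offset_and_coverage(zs, alpha, theta, can_capture_below):
    """Offset O and coverage range H of a camera at height zs with angle of
    depression alpha and angle of view theta (Expressions 1 and 2)."""
    if can_capture_below:
        o = zs * math.tan(math.pi / 2 + alpha + theta / 2)  # Expression 2
        h = zs * math.tan(math.pi / 2 - alpha + theta / 2) + o
    else:
        o = zs * math.tan(math.pi / 2 - alpha - theta / 2)  # Expression 1
        h = zs * math.tan(math.pi / 2 - alpha + theta / 2) - o
    return o, h
```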
The subject region 42 has a width Wx in the x direction. Thus, in a case where the entire coverage range of the two cameras 50 facing each other is to be used, the angle of depression α is obtained such that Wx=2H is satisfied. In a case where a tall subject is captured as illustrated in
When the height of the subject is not considered, a range Hk where the elongation ratio of the subject is equal to or lower than the critical elongation ratio K within the coverage range is expressed by Expression 3 using the critical elongation ratio K and the installation position Zs that is the installation height of the camera 50.
Hk=K·Zs (Expression 3)
Furthermore, when the height of the subject is considered, a range Hk where the elongation ratio of the subject not taller than the critical height Zh is equal to or lower than the critical elongation ratio K is expressed by Expression 4.
Hk=K(Zs−Zh) (Expression 4)
In addition, when the height of the subject is considered, the offset O and the coverage range H of the camera 50 are expressed by Expression 5.
O=(Zs−Zh)tan(π/2+α+θ/2)
H=(Zs−Zh)tan(π/2−α+θ/2)+O (Expression 5)
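Expressions 3 to 5 translate directly into small helper functions; again a sketch assuming angles in radians, with hypothetical function names.

```python
import math

def elongation_limited_range(k, zs, zh=0.0):
    """Range Hk where the elongation ratio stays at or below K
    (Expression 3 when zh == 0, Expression 4 otherwise)."""
    return k * (zs - zh)

def offset_and_coverage_with_height(zs, zh, alpha, theta):
    """Offset O and coverage range H with subject height considered (Expression 5)."""
    o = (zs - zh) * math.tan(math.pi / 2 + alpha + theta / 2)
    h = (zs - zh) * math.tan(math.pi / 2 - alpha + theta / 2) + o
    return o, h
```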
The entire width Wx of the subject region 42 can be captured by the two cameras 50 facing each other when the use range Hk* satisfies Expression 6.
Wx<2Hk*+2O (Expression 6)
In this case, when the use range Hk* is determined so that the right side of Expression 6 is larger than the left side thereof to some extent, the two cameras 50 facing each other capture regions that partially overlap with each other. This allows a superimposing process such as α blending to be applied in combining videos, which makes the resulting video more seamless.
Specifically, the X position specifying unit 231 displays a range of values within the range Hk expressed by Expression 4 and satisfying Expression 6 on the display 32, and receives input of a use range Hk* within the displayed range from the user, to determine the use range Hk*. Alternatively, the X position specifying unit 231 determines a value with which an overlapping region captured by both of the two cameras 50 facing each other has a reference width as the use range Hk* from the values within the range Hk expressed by Expression 4 and satisfying Expression 6. The reference width is a width required for producing a certain effect by the superimposing process.
Note that, when a use range Hk* within the range expressed by Expression 4 and satisfying Expression 6 cannot be determined, this means that a region with an elongation ratio not higher than the critical elongation ratio and with a width Wx cannot be captured under the camera condition 41. Thus, in this case, since the position specifying unit 23 cannot specify the installation positions 45 in step S4, the position specifying unit 23 returns the processing to step S2. In step S2, the condition receiving unit 21 then receives input of a camera condition 41 in which information such as the installation height Zs or the critical elongation ratio K is changed.
The X position specifying unit 231 then calculates the installation position X1 of one of the two cameras 50 facing each other by Expression 7, and calculates the installation position X2 of the other camera 50 by Expression 8. In the case of
X1=x1+Wx/2−Hk* (Expression 7)
X2=x1+Wx/2+Hk* (Expression 8)
The X position specifying unit 231 also specifies the angles of depression α.
Note that, since a coverage range needs to cover from right below the camera 50 to the use range Hk* in front of the camera 50, the angles of depression α satisfy Expression 9. An upper limit of the angles of depression α is defined by Expression 10 obtained from Expression 9.
(Zs−Zh)tan(π/2−α+θ/2)>Hk* (Expression 9)
α<(π+θ)/2−arctan(Hk*/(Zs−Zh)) (Expression 10)
In addition, since a coverage range needs to cover up to the position right below the camera 50, the angles of depression α satisfy Expression 11. A lower limit of the angles of depression α is defined by Expression 12 obtained from Expression 11.
(Zs−Zh)tan(π/2−α−θ/2)<(Wx/2−Hk*) (Expression 11)
α>(π−θ)/2−arctan((Wx/2−Hk*)/(Zs−Zh)) (Expression 12)
The X position specifying unit 231 then determines an angle between the upper limit and the lower limit expressed by Expressions 10 and 12 as an angle of depression α.
Specifically, the X position specifying unit 231 displays the upper limit and the lower limit expressed by Expressions 10 and 12 on the display 32, and receives input of an angle of depression α between the displayed upper limit and lower limit from the user, to determine the angle of depression α. Alternatively, the X position specifying unit 231 determines a certain angle, such as the median angle between the upper limit and the lower limit expressed by Expressions 10 and 12, to be the angle of depression α.
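The conditions and formulas of Expressions 6 to 8, 10, and 12 can be collected into the following sketch; the function names are hypothetical.

```python
import math

def covers_full_width(wx, hk_use, offset):
    """Expression 6: the two facing cameras can cover the width Wx."""
    return wx < 2 * hk_use + 2 * offset

def x_positions(x1, wx, hk_use):
    """Installation positions X1 and X2 of the facing cameras (Expressions 7 and 8)."""
    return x1 + wx / 2 - hk_use, x1 + wx / 2 + hk_use

def depression_angle_limits(hk_use, wx, zs, zh, theta):
    """Lower and upper limits of the angle of depression (Expressions 12 and 10)."""
    upper = (math.pi + theta) / 2 - math.atan(hk_use / (zs - zh))
    lower = (math.pi - theta) / 2 - math.atan((wx / 2 - hk_use) / (zs - zh))
    return lower, upper
```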
Note that, in the description above, a case is assumed where not only a subject T near a boundary between the cameras 50 facing each other but also the subjects S and U behind the cameras 50 can be captured up to the critical height Zh. In a case where the subjects behind the cameras 50 need not be captured up to the critical height Zh, the lower limit of the angles of depression α is defined by Expression 13.
α>(π−θ)/2−arctan((Wx/2−Hk*)/Zs) (Expression 13)
<Step S32: Position Y Specifying Process>
The Y position specifying unit 232 of the position specifying unit 23 specifies installation positions Y with which the entire subject region 42 in the y direction can be captured.
Specifically, the Y position specifying unit 232 reads the subject region 42 received in step S1 and the camera condition 41 received in step S2 from the memory 121. The Y position specifying unit 232 then calculates the installation position Y of an M-th camera 50 from the coordinate value y1 in the y direction by using Expression 16 explained below.
A method by which the Y position specifying unit 232 specifies the installation position Y will be explained in detail.
The coverage range of a camera 50 viewed from above is a trapezoid.
When a ratio of a horizontal resolution and a vertical resolution of the camera 50 is represented by Wθ:Hθ, an aspect ratio of the trapezoid of the coverage range is expressed by Expression 14.
W1:H=Wθ sin(α−θ/2):Hθ cos(θ/2) (Expression 14)
Thus, the base W1 is as expressed by Expression 15.
W1=((Wθ sin(α−θ/2))/(Hθ cos(θ/2)))H (Expression 15)
The installation position YM of the M-th camera 50 from the coordinate value y1 in the y direction is expressed by Expression 16.
YM=y1+((2M−1)W1)/2 (Expression 16)
For example, the installation position Y1 of the first camera 50 and the installation position Y2 of the second camera 50 from the coordinate value y1 are expressed by Expressions 17 and 18.
Y1=y1+W1/2 (Expression 17)
Y2=y1+(3·W1)/2 (Expression 18)
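Expressions 15 and 16 can be sketched as follows, assuming angles in radians and illustrative parameter names.

```python
import math

def base_width(w_theta, h_theta, alpha, theta, h):
    """Base W1 of the trapezoidal coverage range (Expression 15)."""
    return (w_theta * math.sin(alpha - theta / 2)) / (h_theta * math.cos(theta / 2)) * h

def y_position(y1, m, w1):
    """Installation position Y of the M-th camera from y1 (Expression 16)."""
    return y1 + (2 * m - 1) * w1 / 2
```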
Note that, for capturing the entire width Wy with 2N cameras 50, 2N being the maximum number of cameras 50 indicated by the camera condition 41, NW1 obtained by multiplying the number N of cameras arranged in parallel along the y direction by the width W1 needs to be equal to or larger than the width Wy. Note that, although the number of cameras 50 is 2N, the number of cameras 50 arranged in parallel along the y direction is N since two cameras 50 are positioned to face each other along the x direction. When NW1 is not equal to or larger than the width Wy, the position specifying unit 23 cannot specify the installation positions 45 in step S4, and thus returns the processing to step S2. In step S2, the condition receiving unit 21 then receives input of a camera condition 41 in which information such as the maximum number 2N of cameras 50, the installation height Zs, or the critical elongation ratio K is changed.
In the description above, the installation positions Y with which the entire subject region 42 in the y direction can be captured are specified. In the y direction as well, however, similarly to the x direction, for making the elongation ratio of a subject be equal to or lower than the critical elongation ratio K, the Y position specifying unit 232 calculates the installation positions Y by replacing W1 in Expression 16 with 2Hk*. In this case as well, when 2NHk* obtained by multiplying the number N of cameras installed in parallel along the y direction by 2Hk* is not equal to or larger than the width Wy, the position specifying unit 23 cannot specify the installation positions 45 in step S4, and thus returns the processing to step S2.
In this case as well, however, a region in which the elongation ratio is higher than the critical elongation ratio K may be present near the middle of the four cameras 50 or the like in the subject region 42, such as a region 47 illustrated in
When NW1 is sufficiently larger than the width Wy, the Y position specifying unit 232 can calculate the installation positions Y so that a range captured by the cameras 50 in an overlapping manner becomes larger. In this case, the number of regions that overlap among N cameras 50 is N−1. Thus, the Y position specifying unit 232 calculates a length L in the y direction of an overlapping region between cameras 50 by Expression 19.
L=(W1×N−Wy)/(N−1) (Expression 19)
The Y position specifying unit 232 then calculates the installation position YM of the M-th camera 50 by Expression 20 for each of the second and subsequent cameras 50 from the coordinate value y1 in the y direction. The installation position YM of the first camera 50 from the coordinate value y1 is calculated by Expression 16.
YM=y1+((2M−1)W1)/2−L(M−1) (Expression 20)
Note that, in the y direction as well, for making the elongation ratio of a subject be equal to or lower than the critical elongation ratio K, the Y position specifying unit 232 replaces W1 in Expressions 19 and 20 with 2Hk*.
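A sketch of Expressions 19 and 20 follows; the shift term is read here as L(M−1) so that the first camera (M=1) reduces to Expression 16, which is an interpretation rather than a quotation.

```python
def overlap_length(w1, n, wy):
    """Length L of the overlap between adjacent cameras (Expression 19)."""
    return (w1 * n - wy) / (n - 1)

def y_position_with_overlap(y1, m, w1, overlap):
    """Installation position Y of the M-th camera when overlaps are used (Expression 20)."""
    return y1 + (2 * m - 1) * w1 / 2 - overlap * (m - 1)
```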
Step S5 according to the first embodiment will be explained with reference to
Step S5 includes steps S51 to S53.
<Step S51: Virtual Captured Video Generating Process>
The virtual video generating unit 24 generates virtual captured videos obtained by capturing the CG space 43 generated in step S1 with the cameras 50 in a case where the cameras 50 are installed at the installation positions 45 specified by the position specifying unit 23 in step S3.
Specifically, the virtual video generating unit 24 reads the CG space 43 generated in step S1 from the memory 121. The virtual video generating unit 24 then generates a video, as a virtual captured video for each of the cameras 50, obtained by capturing the CG space 43 in the direction of the optical axis 51 obtained from the orientation of the camera 50 as the center of point of view at the installation position 45 specified in step S3. The virtual video generating unit 24 writes the generated virtual captured videos into the memory 121.
<Step S52: Overhead-View Conversion Process>
The virtual video generating unit 24 performs overhead-view conversion on the virtual captured videos for the respective cameras 50 generated in step S51 to generate overhead-view videos.
Specifically, the virtual video generating unit 24 reads the virtual captured videos for the respective cameras 50 generated in step S51 from the memory 121. The virtual video generating unit 24 then uses homography conversion to project each of the virtual captured videos generated in step S51 from a capturing plane of each of the cameras 50 onto a plane where a coordinate value of the Z axis is 0.
Each virtual captured video is thereby projected from the capturing plane 52 of the camera 50 onto the plane, so that an overhead-view video as seen from above is generated.
Note that the plane for projection is not limited to the plane where the coordinate value of the Z axis is 0 but may be a plane at any height. In addition, the shape of the projection plane is not limited to flat but may be curved.
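As an illustration of the projection in step S52, the following sketch uses OpenCV's homography utilities to warp a captured frame onto the ground plane. The four point correspondences and the file name are placeholder values.

```python
import cv2
import numpy as np

# Four reference points in the captured frame and their known coordinates on the
# ground plane (Z = 0); all values here are illustrative.
src = np.float32([[100, 400], [540, 400], [620, 80], [20, 80]])
dst = np.float32([[0, 600], [400, 600], [400, 0], [0, 0]])

homography = cv2.getPerspectiveTransform(src, dst)   # capturing plane -> ground plane
captured = cv2.imread("virtual_captured_frame.png")  # a virtual captured video frame
overhead = cv2.warpPerspective(captured, homography, (400, 600))
```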
<Step S53: Video Combining Process>
The virtual video generating unit 24 combines the overhead-view videos for the respective cameras 50 generated in step S52 to generate a virtual synthetic video 46.
Specifically, the virtual video generating unit 24 reads the overhead-view videos for the respective cameras 50 generated in step S52 from the memory 121. The virtual video generating unit 24 arranges the overhead-view videos according to the installation positions 45 of the respective cameras 50, and performs a superimposing process such as α blending on parts where the overhead-view videos overlap with one another.
The virtual video generating unit 24 also discards a part out of the use range Hk* in the x direction from each of the overhead-view videos before combining.
The virtual video generating unit 24 then extracts the part of the subject region 42 to be a virtual synthetic video 46 from the video resulting from combining. The virtual video generating unit 24 writes the generated virtual synthetic video 46 into the memory 121.
Note that, when no overlapping part is present, the superimposing process need not be performed, and the overhead-view videos are simply arranged adjacent to one another and combined.
In addition, when the installation positions Y are specified such that the elongation ratio is equal to or lower than the critical elongation ratio K, the virtual video generating unit 24 discards a part out of the use range Hk* in the y direction as well from each of the overhead-view videos before combining.
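A simplified stand-in for the superimposing process is linear α blending over the overlapping columns of two already-aligned overhead-view images, as sketched below; actual combining would follow the installation positions 45.

```python
import numpy as np

def blend_overlap(left, right, overlap_px):
    """Combine two horizontally adjacent overhead-view images whose last/first
    `overlap_px` columns cover the same ground area, with linear alpha blending."""
    weights = np.linspace(1.0, 0.0, overlap_px)[None, :, None]
    seam = left[:, -overlap_px:] * weights + right[:, :overlap_px] * (1.0 - weights)
    return np.concatenate(
        [left[:, :-overlap_px], seam.astype(left.dtype), right[:, overlap_px:]], axis=1
    )
```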
As described above, the installation position determining device 10 according to the first embodiment specifies the installation positions 45 of cameras 50 at which a subject region 42 can be captured, the number of cameras 50 being equal to or smaller than a number indicated by a camera condition 41, and generates a virtual synthetic video 46 in a case where the cameras 50 are installed at the specified installation positions 45. This allows the user to determine the installation positions 45 of the cameras 50 at which a desired video can be obtained, simply by checking the virtual synthetic video 46 while changing the camera condition 41.
In particular, the installation position determining device 10 according to the first embodiment also takes the height of a subject into consideration, and specifies the installation positions 45 at which a subject not taller than the critical height Zh present in the subject region 42 can be captured. This eliminates such cases where the face of a person present in the subject region 42 cannot be captured from the specified installation positions 45.
In addition, the installation position determining device 10 according to the first embodiment also takes the elongation of a subject in overhead-view conversion into consideration, and specifies the installation positions 45 at which the elongation ratio of a subject is equal to or lower than the critical elongation ratio K. This eliminates such cases where the elongation ratio of a subject captured in a virtual synthetic video 46 is too high at the specified installation positions 45 and the virtual synthetic video 46 may be hard to see.
***Other Configurations***
<First Modification>
In the first embodiment, the functions of the respective units of the installation position determining device 10 are implemented by software. As a first modification, however, the functions of the respective units of the installation position determining device 10 may be implemented by hardware. For the first modification, the differences from the first embodiment will be described.
A configuration of the installation position determining device 10 according to the first modification will be described with reference to
When the functions of the respective units are implemented by hardware, the installation position determining device 10 includes a processing circuit 15 instead of the processor 11 and the storage unit 12. The processing circuit 15 is a dedicated electronic circuit implementing the functions of the respective units of the installation position determining device 10 and the functions of the storage unit 12.
The processing circuit 15 is assumed to be a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, a logic IC, a gate array (GA), an application specific integrated circuit (ASIC), or a field-programmable gate array (FPGA).
The functions of the respective units may be implemented by one processing circuit 15 or may be distributed to a plurality of processing circuits 15.
<Second Modification>
As a second modification, some functions may be implemented by hardware and others may be implemented by software. More specifically, some functions of the respective units of the installation position determining device 10 may be implemented by hardware and other functions may be implemented by software.
The processor 11, the storage unit 12, and the processing circuit 15 are collectively referred to as “processing circuitry.” Thus, the functions of the respective units are implemented by the processing circuitry.
<Third Modification>
In the first embodiment, the subject region 42 is a rectangular region. The subject region 42, however, is not limited to a rectangle but may be a region of another shape. For example, the subject region 42 may be a circular region, or a region having a shape with a bent corner such as an L shape.
A case where the subject region 42 is a circular region will be described with reference to
As illustrated in
As illustrated in
When the cameras 50 are arranged at the central position of the region as illustrated in
A case where the subject region 42 is a region of an L shape will be described with reference to
As illustrated in
As illustrated in
<Fourth Modification>
In the first embodiment, two cameras 50 are arranged to face each other along the short-side direction of a rectangle as illustrated in
In this case, as illustrated in
The back-to-back arrangement of two cameras 50 at the central position allows a synthetic video with little distortion of a subject present at a boundary between two cameras 50 to be obtained.
There are cases where a subject region 42 is so large that mere arrangement of two cameras 50 to face each other is not sufficient to capture the entire rectangle in the short-side direction. In such a case, face-to-face arrangement and back-to-back arrangement are combined as illustrated in
As illustrated in
In addition, as illustrated in
In addition, as illustrated in
<Fifth Modification>
A 360-degree camera capable of capturing a range of 360 degrees around the camera may be used as a camera 50. In a case where a 360-degree camera is used as a camera 50, a circular region around the installation position of the camera 50 is a range in which the elongation ratio of a subject is equal to or lower than the critical elongation ratio K as illustrated in
As illustrated in
A second embodiment is different from the first embodiment in that a range in which cameras 50 cannot be installed is specified. In the second embodiment, the differences will be described and description of the features that are the same will not be repeated.
***Description of Operation***
In step S2, the condition receiving unit 21 receives input of an unusable range 47 in which cameras 50 cannot be installed, in addition to the camera condition 41.
In a case where the unusable range 47 is rectangular, the unusable range 47 is specified by upper-left coordinate values (xi, yi), a width Wxi in the x direction parallel to the X axis, and a width Wyi in the y direction parallel to the Y axis. In a case where the unusable range 47 is circular, the unusable range 47 is specified by the coordinates (xi, yi) of the center and the radius ri of the circle. Note that the specification of the unusable range 47 is not limited thereto, and the unusable range 47 may be specified in another manner such as a formula. In addition, the unusable range 47 may have a shape other than a rectangle and a circle.
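A containment test for candidate installation positions against an unusable range 47 specified as above might look like the following sketch; the dictionary keys mirror the parameters described in this section.

```python
def in_unusable_range(x, y, rng):
    """Whether a candidate installation position (x, y) lies in an unusable range 47."""
    if rng["shape"] == "rect":
        return (rng["xi"] <= x <= rng["xi"] + rng["wxi"]
                and rng["yi"] <= y <= rng["yi"] + rng["wyi"])
    if rng["shape"] == "circle":
        return (x - rng["xi"]) ** 2 + (y - rng["yi"]) ** 2 <= rng["ri"] ** 2
    raise ValueError("unsupported shape")
```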
In step S3, the position specifying unit 23 specifies the installation positions 45 of the respective cameras 50 so as to avoid the unusable range 47.
For example, as illustrated in
In this process, the installation positions X and the angles of depression α are specified through the same process as in step S31 in the first embodiment. The position specifying unit 23 first specifies the installation positions Y of the cameras 50 for capturing the unusable range 47: with yh representing the position of the unusable range 47 in the y direction, the installation positions YA and YB of these cameras 50 are calculated by Expressions 21 and 22.
YA=yh−W1/2 (Expression 21)
YB=yh+W1/2 (Expression 22)
Subsequently, the position specifying unit 23 specifies installation positions Y of the remaining cameras 50. The remaining cameras 50 are cameras 50C1 and 50C2 and cameras 50D1 and 50D2 in
The position specifying unit 23 uses the previously specified installation positions Y of the cameras 50 for capturing the unusable range 47 as references for specification of the installation positions Y of the remaining cameras 50. The installation position YM of the M-th remaining camera 50 on the side of the installation position YA is calculated by Expression 23, and the installation position YM of the M-th remaining camera 50 on the side of the installation position YB is calculated by Expression 24.
YM=YA−((2M−1)W1)/2 (Expression 23)
YM=YB+((2M−1)W1)/2 (Expression 24)
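Expressions 21 to 24 translate into the following sketch, in which yh is taken to be the position of the unusable range 47 in the y direction (an assumption based on the surrounding description).

```python
def y_positions_around_unusable(yh, w1, m):
    """YA, YB (Expressions 21 and 22) and the M-th remaining camera positions
    on each side (Expressions 23 and 24)."""
    ya = yh - w1 / 2                      # Expression 21
    yb = yh + w1 / 2                      # Expression 22
    ym_minus = ya - (2 * m - 1) * w1 / 2  # Expression 23
    ym_plus = yb + (2 * m - 1) * w1 / 2   # Expression 24
    return ya, yb, ym_minus, ym_plus
```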
As described above, the installation position determining device 10 according to the second embodiment is capable of specifying the installation positions 45 of the respective cameras 50 that can capture the subject region 42 even when a range in which cameras 50 cannot be installed is specified.
Various equipment such as air conditioners and fire alarms is installed on ceilings of indoor facilities. Thus, there are places where cameras 50 cannot be installed. The installation position determining device 10 according to the second embodiment, however, allows appropriate installation positions 45 of cameras 50 to be determined avoiding the places where cameras 50 cannot be installed.
<Sixth Modification>
Cameras 50 cannot be installed in places without ceilings. A place without a ceiling therefore corresponds to the unusable range 47. However, a mobile camera 53, which is a camera 50 mounted on a flying mobile object such as a drone or a balloon, may be used. When a mobile camera 53 can be flown, the mobile camera 53 can also be installed in a place without a ceiling.
For example, mobile cameras 53 can be arranged in a place without ceilings in an outdoor stadium. In the case of an outdoor stadium, as illustrated in
The position specifying unit 23 thus specifies the installation positions 45 separately for the part with ceilings and the part without ceilings. As a result, the installation positions 45 of the normal cameras 50 and the mobile cameras 53 are specified as in
Use of the mobile cameras 53 allows arrangement of cameras in a place without ceilings. As a result, a video with high resolution can also be obtained for a place without ceilings.
A third embodiment is different from the first and second embodiments in that a capturing range 55 of an existing camera 54 is specified. In the third embodiment, the differences will be described and description of the features that are the same will not be repeated.
***Description of Operation***
In step S2, the condition receiving unit 21 receives input of the capturing range 55 of the existing camera 54 in addition to the camera condition 41.
The capturing range 55 of the existing camera 54 is specified in the same manner as the subject region 42. Specifically, in a case where the capturing range 55 of the existing camera 54 is rectangular, the capturing range 55 of the existing camera 54 is specified by upper-left coordinate values (xj, yj), a width Wxj in the x direction parallel to the X axis, and a width Wyj in the y direction parallel to the Y axis. In the third embodiment, the capturing range 55 of the existing camera 54 is rectangular.
In step S3, the position specifying unit 23 specifies the installation positions 45 of the cameras 50 to be newly installed so that the part of the subject region 42 other than the capturing range 55 of the existing camera 54 can be captured.
For example, as illustrated in
In step S5, the virtual video generating unit 24 generates the virtual synthetic video 46 by combining a virtual captured video of the existing camera 54 with the virtual captured videos of the cameras 50 to be newly installed.
In the example of
As described above, the installation position determining device 10 according to the third embodiment is capable of specifying the installation positions 45 of cameras 50 to be newly installed taking the existing camera 54 into consideration. This allows the installation positions 45 of the cameras 50 capable of capturing the entire subject region 42 to be specified without installing an unnecessarily large number of cameras 50 in a case where an existing camera 54 is present.
10: installation position determining device, 11: processor, 12: storage unit, 13: input interface, 14: display interface, 15: processing circuit, 21: condition receiving unit, 22: region receiving unit, 23: position specifying unit, 231: X position specifying unit, 232: Y position specifying unit, 24: virtual video generating unit, 25: display unit, 31: input device, 32: display, 41: camera condition, 42: subject region, 43: CG space, 44: top view, 45: installation position, 46: virtual synthetic video, 47: unusable range, 50: camera, 51: optical axis, 52: capturing plane, 53: mobile camera, 54: existing camera, 55: capturing range
Number | Date | Country | Kind |
---|---|---|---
PCT/JP2016/053309 | Feb 2016 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---
PCT/JP2017/000512 | 1/10/2017 | WO | 00 |