The present invention relates to an information display system that displays information for an operator or the like who drives and operates an automobile or the like, a control method of the information display system, and a control program of the information display system.
Conventionally, proposals have been made for displaying, when an operator operates an apparatus such as a work machine, the situation of the operator's surroundings on a display visible to the operator (for example, WO 2017/191853).
However, in such proposals, when information on the surroundings, such as information on a blind spot of the operator, is displayed on the display, shapes viewed from various angles that differ from the point of view of the operator are displayed as reference information. Therefore, there has been a problem in that it is difficult for the operator to promptly ascertain the situation of the surroundings. In addition, there has also been a problem in that, when external information imaged by a camera is displayed on the display in order to have the operator ascertain the external information, the image taken by the camera does not enable the operator to ascertain external information on portions of structures and the like that constitute blind spots of the operator.
In consideration thereof, an object of the present invention is to provide an information display system capable of accurately showing, to an object person viewing a display portion such as a display, an object displayed on the display portion from the point of view of the object person, as well as a control method of the information display system and a control program of the information display system.
According to the present invention, the object described above can be achieved by an information display system including: a plurality of imaging apparatuses that image an object from a point of view that differs from a point of view of a user; and a display portion that displays the object from the point of view of the user, wherein the information display system generates three-dimensional information of the object on the basis of imaging information obtained by imaging the object by the plurality of imaging apparatuses and displays, on the basis of the three-dimensional information, the object on the display portion from the point of view of the user.
According to the configuration described above, three-dimensional information of an object is generated on the basis of imaging information obtained by imaging the object by the plurality of imaging apparatuses, and the object can be displayed on the display portion from the point of view of a user on the basis of the three-dimensional information. Accordingly, the object can be displayed on the display portion accurately and from the point of view of the user.
Preferably, the three-dimensional information of the object is generated using a photogrammetric technique.
According to the configuration described above, since the three-dimensional information is generated using a photogrammetric technique such as SfM (Structure from Motion), the three-dimensional information of the object can be generated accurately.
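As a concrete, non-limiting illustration of such photogrammetric processing, the following Python sketch performs a minimal two-view reconstruction with OpenCV: feature points are matched between two images, the relative camera pose is recovered from the essential matrix, and matched points are triangulated into three-dimensional coordinates. The function and variable names are illustrative assumptions, as is the shared, pre-calibrated intrinsic matrix K; a real SfM pipeline would handle many views and add bundle adjustment.

```python
import cv2
import numpy as np

def reconstruct_two_view(img1, img2, K):
    """Minimal SfM-style two-view reconstruction: detect and match
    features, estimate the relative pose, and triangulate 3D points.
    K is a shared 3x3 camera intrinsic matrix (assumed calibrated)."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC, then relative pose (R, t up to scale).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate the inlier correspondences into 3D points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inliers = mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    return (pts4d[:3] / pts4d[3]).T, R, t
```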
Preferably, the object is present in a blind spot portion from the point of view of the user.
According to the configuration described above, even when the object is present in a blind spot portion from the point of view of the user, the object can be accurately displayed as three-dimensional information.
Preferably, the information display system includes a survey apparatus that performs a three-dimensional survey by irradiating the object with ranging light and receiving reflected ranging light from the object.
According to the configuration described above, three-dimensional shape information of the object can be accurately measured by a survey performed by a survey apparatus. Therefore, by displaying three-dimensional information (point group information or the like) obtained by a survey together with three-dimensional information based on imaging information on the display portion, information with higher accuracy can be displayed.
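As an illustrative sketch of displaying both kinds of information together, the surveyed point group must first be brought into the same coordinate frame as the photogrammetric points. The sketch below assumes the aligning rotation R_align and translation t_align have already been obtained separately (for example, by registering shared reference points); all names are hypothetical.

```python
import numpy as np

def merge_point_groups(photo_points, survey_points, R_align, t_align):
    """Merge photogrammetric points (N x 3) with a surveyed point group
    (M x 3) by first transforming the survey frame into the
    photogrammetry frame. R_align (3x3) and t_align (3,) are assumed to
    come from a separate registration step."""
    aligned = survey_points @ R_align.T + t_align
    return np.vstack([photo_points, aligned])
```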
According to the present invention, the object described above can be achieved by a control method of an information display system including: a plurality of imaging apparatuses that image an object from a point of view that differs from a point of view of a user; and a display portion that displays the object from the point of view of the user, the control method including generating three-dimensional information of the object on the basis of imaging information obtained by imaging the object by the plurality of imaging apparatuses and displaying, on the basis of the three-dimensional information, the object on the display portion from the point of view of the user.
According to the present invention, the object described above can be achieved by a control program of an information display system that causes an information display system including: a plurality of imaging apparatuses that image an object from a point of view that differs from a point of view of a user; and a display portion that displays the object from the point of view of the user, to execute: a function of generating three-dimensional information of the object on the basis of imaging information obtained by imaging the object by the plurality of imaging apparatuses; and a function of displaying, on the basis of the three-dimensional information, the object on the display portion from the point of view of the user.
The present invention can advantageously provide an information display system capable of accurately showing, to an object person viewing a display portion such as a display, an object displayed on the display portion from the point of view of the object person, as well as a control method of the information display system and a control program of the information display system.
Hereinafter, a preferred embodiment of the present invention will be described in detail with reference to the accompanying drawings and the like. Although the embodiment described below is a preferred specific example of the present invention and is therefore given various technically preferable limitations, it is to be understood that the scope of the present invention is by no means limited by the described aspects unless specifically noted otherwise hereinafter.
On the other hand, a building V that is visible from the user of the automobile B is arranged at the intersection A and, at the same time, for example, an automobile W and a pedestrian P, which are objects, are positioned in a blind spot portion of the user of the automobile B created by the building V. The automobile W and the pedestrian P are positioned so as to be visible from the surveillance cameras X and Y and surveyable from the survey apparatuses T and U.
In addition, an imaging lens 48 and an imaging element 49 are provided on the imaging optical axis 5. The imaging element 49 is a CCD or CMOS sensor, which is an aggregate of pixels, and the position of each pixel on the imaging element can be specified. For example, the position of each pixel is specified in a coordinate system whose origin lies on the optical axis of the imaging unit 27.
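As a small illustration of this convention, a pixel index can be converted into physical coordinates on the imaging element by taking the principal point, i.e., the point where the optical axis meets the sensor, as the origin; the pixel pitch and example values below are hypothetical.

```python
def pixel_to_optical_axis_coords(u, v, cx, cy, pixel_pitch_mm):
    """Convert a pixel index (u, v) into millimetre coordinates on the
    imaging element, with the principal point (cx, cy) -- where the
    optical axis intersects the sensor -- taken as the origin."""
    return ((u - cx) * pixel_pitch_mm, (v - cy) * pixel_pitch_mm)

# Example: a 1920 x 1080 sensor with a 3 micrometre pixel pitch and the
# optical axis at the sensor centre (hypothetical values).
x_mm, y_mm = pixel_to_optical_axis_coords(1000, 600, 960.0, 540.0, 0.003)
```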
The base unit 3 of the survey apparatus T and the like has a protractor plate 8, which rotates in a horizontal direction and is capable of detecting a rotational angle, and a vertical rotation portion 9, which is capable of rotating in a vertical direction and can be fixed at a predetermined angle; the survey apparatuses T and U are directly attached to the vertical rotation portion 9. Therefore, the survey apparatuses T and U rotate in the vertical direction around a machine reference point and rotate in the horizontal direction around the machine reference point.
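As an illustration of how the detected horizontal rotational angle and the fixed vertical angle determine where the apparatus is pointing, the sketch below converts the two angles into a unit sighting vector from the machine reference point; the axis conventions (x east, y north, z up) are assumptions for illustration, and actual instruments define their own zero directions.

```python
import math

def sight_direction(horizontal_deg, vertical_deg):
    """Unit direction vector for a horizontal rotation (about the
    vertical axis) and a vertical rotation (elevation), measured from
    the machine reference point."""
    az = math.radians(horizontal_deg)
    el = math.radians(vertical_deg)
    return (math.cos(el) * math.sin(az),  # x: east
            math.cos(el) * math.cos(az),  # y: north
            math.sin(el))                 # z: up
```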
In addition, the measurement unit 20, the attitude detecting unit 26, and the optical axis deflecting unit 36 are integrally arranged. The measurement unit 20 has a ranging light emitting portion 21, a light-receiving portion 22, and a ranging portion 23. The ranging light emitting portion 21 emits ranging light and has an emission optical axis 31; a light-emitting element 32 (such as a laser diode (LD)) and a projection lens 33 are provided on the emission optical axis 31.
In addition, a first reflecting mirror 34 as a deflecting optical member is provided on the emission optical axis 31, and a second reflecting mirror 35 as a deflecting optical member is arranged on a reception optical axis 37 so as to face the first reflecting mirror 34. Due to the first reflecting mirror 34 and the second reflecting mirror 35, the emission optical axis 31 is configured to coincide with the ranging optical axis 4. In addition, the optical axis deflecting unit 36 is arranged on the ranging optical axis 4.
The light-receiving portion 22 receives reflected ranging light from the automobile W and the pedestrian P, which are measurement objects. The light-receiving portion 22 has a reception optical axis 37 that is parallel to the emission optical axis 31, and the reception optical axis 37 is common to the ranging optical axis 4. A light-receiving element 38 such as a photodiode (PD) and an imaging lens 39 are arranged on the reception optical axis 37. The imaging lens 39 focuses the reflected ranging light on the light-receiving element 38, and the light-receiving element 38 receives the reflected ranging light and generates a light reception signal. The light reception signal is input to the ranging portion 23.
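The embodiment does not spell out how the ranging portion 23 derives a distance from the light reception signal; as one common approach, a time-of-flight computation is sketched below for illustration (many survey instruments instead measure a phase difference).

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds):
    """Distance to the measurement point from the round-trip time of
    the ranging light; the light travels out and back, hence the 2."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a 400 ns round trip corresponds to roughly 60 m.
print(tof_distance(400e-9))  # ~59.96
```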
The optical axis deflecting unit 36 is arranged on the object side of the imaging lens 39 on the reception optical axis 37 shown in the drawings.
Specifically, the laser beam is emitted toward the automobile W and the pedestrian P, and the reflected ranging light having been reflected by the automobile W and the pedestrian P, which are the measurement objects, enters the light-receiving portion 22 via the optical axis deflecting unit 36 (a reflected ranging light deflecting portion 36b) and the imaging lens 39. The reflected ranging light deflecting portion 36b re-deflects the ranging optical axis 4, which has been deflected by the ranging light deflecting portion 36a, back to its original state and causes the light-receiving element 38 to receive the reflected ranging light.
The light-receiving element 38 sends a light reception signal to the ranging portion 23, and the ranging portion 23 performs ranging of the measurement point (the automobile W and the pedestrian P) on the basis of the light reception signal from the light-receiving element 38. As shown in the drawings, ranging light is emitted from the light-emitting element 32, is made into a parallel luminous flux by the projection lens 33, passes through the ranging light deflecting portion 36a (the prism elements 42a and 42b), and is emitted toward the automobile W and the pedestrian P, which are the measurement objects. By passing through the ranging light deflecting portion 36a, the ranging light is deflected by the prism elements 42a and 42b and output in the required direction, toward the automobile W and the pedestrian P. In addition, the reflected ranging light having been reflected by the automobile W and the pedestrian P is incident on and passes through the reflected ranging light deflecting portion 36b (the prism elements 43a and 43b) and is focused on the light-receiving element 38 by the imaging lens 39.
Subsequently, as the reflected ranging light passes through the reflected ranging light deflecting portion 36b, the optical axis of the reflected ranging light is deflected by the prism elements 43a and 43b so as to coincide with the reception optical axis 37. Furthermore, a configuration is adopted in which, due to a combination of the rotational positions of the prism element 42a and the prism element 42b, a deflection direction and a deflection angle of the ranging light to be emitted can be arbitrarily changed.
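A pair of independently rotatable wedge prisms of this kind behaves like a Risley prism pair: in a small-angle approximation, the combined deflection is the vector sum of the two individual deflections, so equal rotational positions give the maximum deflection and opposed positions cancel. The following sketch illustrates that approximation and is not taken from the embodiment itself.

```python
import math

def risley_deflection(delta_deg, theta1_deg, theta2_deg):
    """Approximate combined deflection of two identical wedge prisms,
    each deflecting the beam by delta_deg, rotated to theta1_deg and
    theta2_deg. Returns (total deflection angle, deflection direction)
    in degrees, using a small-angle vector-sum approximation."""
    d = math.radians(delta_deg)
    x = d * (math.cos(math.radians(theta1_deg)) + math.cos(math.radians(theta2_deg)))
    y = d * (math.sin(math.radians(theta1_deg)) + math.sin(math.radians(theta2_deg)))
    return math.degrees(math.hypot(x, y)), math.degrees(math.atan2(y, x))

# Equal angles add (to 2 * delta); opposed angles cancel (to 0).
print(risley_deflection(1.0, 30.0, 30.0))   # (~2.0, 30.0)
print(risley_deflection(1.0, 0.0, 180.0))   # (~0.0, ...)
```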
While the survey apparatus according to the present embodiment does not have a camera, a configuration may be adopted in which the survey apparatus has a camera function.
The surveillance cameras X and Y, the survey apparatuses T and U, and the vehicle-mounted camera Z of the automobile B are arranged as shown in the drawings.
In addition, the operation example shown in the drawings will be described below.
In step (hereinafter, referred to as "ST") 1 shown in the drawings, when the automobile B approaches the intersection A and comes within a radius of 50 m of the surveillance cameras X and Y, the approach is detected by Bluetooth, a "network" is constructed among the vehicle-mounted camera Z and the surveillance cameras X and Y, and the three cameras communicate with each other.
While a description using Bluetooth has been given in the present embodiment, the present invention is not limited thereto and GPS (Global Positioning System), the Internet, or the like may be used instead.
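As an illustration of how such a Bluetooth-based proximity check might be approximated, the sketch below estimates distance from a received signal strength (RSSI) reading with a log-distance path-loss model and compares it against the 50 m radius used in the embodiment; the calibration constants are hypothetical, and RSSI ranging is coarse in practice.

```python
def estimate_distance_m(rssi_dbm, rssi_at_1m_dbm=-59.0, path_loss_exponent=2.0):
    """Rough distance estimate from an RSSI reading via the log-distance
    path-loss model. rssi_at_1m_dbm and path_loss_exponent depend on the
    hardware and environment; the defaults are illustrative."""
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

def within_network_range(rssi_dbm, threshold_m=50.0):
    """Decide whether a detected camera is within the 50 m radius at
    which the embodiment constructs its ad hoc 'network'."""
    return estimate_distance_m(rssi_dbm) <= threshold_m
```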
Next, in processes subsequent to ST2, positional information and attitude information of the three cameras (the surveillance camera X, the surveillance camera Y, and the vehicle-mounted camera Z) are generated. While the processes are executed by each of the three cameras, hereinafter, the vehicle-mounted camera Z will be described as an example. First, in ST2, a "feature point processing portion (program) 111" shown in the drawings operates, extracts feature points from an image captured by the vehicle-mounted camera Z, and causes a "feature point information storage portion 112" to store the extracted feature points.
The process then proceeds to ST3. In ST3, the processing portion 111 operates, refers to the "feature point information storage portion 112", specifies a common feature point on the images of the vehicle-mounted camera Z, the surveillance camera X, and the surveillance camera Y, and causes a "common feature portion storage portion 113" to store the specified common feature point.
The process then proceeds to ST4. In ST4, a "matching processing portion (program) 114" shown in the drawings operates and performs matching of the common feature points among the images of the three cameras.
Next, a “relative positional attitude information generating portion (program) 116” shown in
Next, a process of detecting the "automobile W" and the "pedestrian P", which are positioned in a blind spot portion created by the building V shown in the drawings, will be described.
First, in ST11, three-dimensional information of the objects appearing in the images of the respective cameras is restored.
Specifically, using a method such as SfM (Structure from Motion), which is a photogrammetric technique, three-dimensional information of the "building V", the "pedestrian P", and the "automobile W" appearing in the supplied images is restored, and the restoration information is shared by the cameras. More specifically, the "building V" is restored by the vehicle-mounted camera Z, the surveillance camera X, and the surveillance camera Y, and the "pedestrian P" and the "automobile W" are restored by the surveillance camera X and the surveillance camera Y.
The process then proceeds to ST12. In ST12, an image of the "pedestrian P" and the "automobile W", whose three-dimensional information has been restored by the surveillance camera X and the surveillance camera Y, is converted to the point of view of the vehicle-mounted camera Z of the automobile B and displayed on the display of the automobile B.
Since positional information and attitude information of each camera (each of the vehicle-mounted camera Z, the surveillance camera X, and the surveillance camera Y) have been acquired by the process described above, the three-dimensional information restored by the surveillance cameras X and Y can be converted into an image viewed from the point of view of the vehicle-mounted camera Z.
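As an illustrative sketch of this point-of-view conversion, the restored three-dimensional points can simply be projected with the vehicle-mounted camera Z's pose and intrinsic matrix; the function below draws them onto a blank canvas, and all names and the rendering style are assumptions rather than the embodiment's actual implementation.

```python
import cv2
import numpy as np

def render_from_camera_z(points_3d, R_z, t_z, K_z, image_size):
    """Project restored 3D points (N x 3, float32) into the view of the
    vehicle-mounted camera Z, whose pose (R_z, t_z) and intrinsics K_z
    were obtained by the preceding steps, and plot them on a canvas."""
    canvas = np.zeros((image_size[1], image_size[0], 3), np.uint8)
    rvec, _ = cv2.Rodrigues(R_z)
    pixels, _ = cv2.projectPoints(points_3d, rvec, t_z, K_z, None)
    for u, v in pixels.reshape(-1, 2).astype(int):
        if 0 <= u < image_size[0] and 0 <= v < image_size[1]:
            cv2.circle(canvas, (u, v), 2, (255, 255, 255), -1)
    return canvas
```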
Therefore, a driver of the automobile B and the like can readily ascertain and be wary of the "pedestrian P" and the like in the blind spot portion by simply viewing a single screen on the display. In other words, since the blind spot portion is not displayed in a separate window or the like showing a different image from a different point of view as in conventional systems, the driver and the like can promptly and accurately ascertain a risk or the like.
While the display portion has been described in the present embodiment as a display provided in the automobile B as shown in the drawings, the present invention is not limited thereto.
Furthermore, the automobile W and the pedestrian P may be displayed together with map information on a display used by a car navigation system.
Since the survey apparatuses T and U are arranged at the intersection A shown in the drawings, three-dimensional information obtained by a survey can also be used for display.
In this case, when the automobile B approaches the intersection A and comes within 50 m of the survey apparatus T and the survey apparatus U, a "network" is constructed and communication can be performed in a similar manner to the surveillance camera X and the like described earlier. Next, the survey apparatuses T and U perform a three-dimensional survey of the automobile W and the pedestrian P by irradiating them with ranging light, and the obtained three-dimensional point group information can be displayed on the display of the automobile B together with the three-dimensional information based on the imaging information.
Since many components of the present embodiment are similar to those of the first embodiment described above, the following description will focus on differences instead. In the present embodiment, the automobile B shown in
Since many components of the present embodiment are similar to those of the first embodiment described above, the following description will focus on differences instead. The present embodiment represents an example in which a construction vehicle E and heavy machinery F are present at a construction/building site D and an automobile G is being driven on the site D. In this case, vehicle-mounted cameras H, I, and J are respectively installed on the automobile G, the construction vehicle E, and the heavy machinery F.
In the present embodiment, when the automobile G is traveling and the vehicle-mounted camera I of the construction vehicle E and the vehicle-mounted camera J of the heavy machinery F approach the automobile G within a radius of 50 m, the approach is detected by Bluetooth in a similar manner to the first embodiment, a “network” is constructed among the three vehicle-mounted cameras H, I, and J, and the three vehicle-mounted cameras H, I, and J communicate with each other.
Next, in a similar manner to the first embodiment, relative positional attitude information of the three vehicle-mounted cameras H, I, and J is specified. Accordingly, a camera positional attitude of each of the vehicle-mounted cameras H, I, and J is specified.
Next, in a similar manner to the first embodiment, an object in a blind spot portion is detected. Specifically, a "fill L" that is visible from the vehicle-mounted camera H of the automobile G and a "worker K" present in a blind spot portion hidden by the "fill L" are detected, and three-dimensional information is obtained by SfM (Structure from Motion) processing, which is a photogrammetric technique, among the three vehicle-mounted cameras H, I, and J in a similar manner to the first embodiment.
In other words, using a method such as SfM, three-dimensional information of the “worker K” and the “fill L” which appear in a supplied image is restored, and the restoration information is shared by each of the cameras H, I, and J.
Specifically, the “worker K” is restored by the vehicle-mounted cameras I and J of the construction vehicle E and the heavy machinery F and the “fill L” is restored by the vehicle-mounted cameras H, I, and J of the automobile G, the construction vehicle E, and the heavy machinery F. In addition, an image of the “worker K” of which three-dimensional information has been restored by the vehicle-mounted cameras I and J of the construction vehicle E and the heavy machinery F can be changed to a point of view of the vehicle-mounted camera H of the automobile G and displayed on the display of the automobile G. Accordingly, the point of view of the vehicle-mounted camera H of the automobile G and the “worker K” in a blind spot of the “fill L” can be ascertained and a state thereof can be displayed.
Even in the present embodiment, the display serving as the display portion is not limited to the display shown in the drawings.
Furthermore, a configuration may be adopted in which the distances between the automobile G and each of the construction vehicle E and the heavy machinery F are calculated and, when the distances are too short, a warning is output to the automobile G and, further, the construction vehicle E and the heavy machinery F are controlled and stopped.
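As an illustration of such a check, the sketch below compares the automobile G's position with those of the construction vehicle E and the heavy machinery F and flags any that come too close; the 10 m threshold and the coordinates are hypothetical values, since the embodiment does not specify them.

```python
import math

def check_separation(pos_g, others, warn_m=10.0):
    """Return the vehicles whose distance from the automobile G falls
    below warn_m; a caller could then output a warning to G and issue a
    stop command to the flagged vehicle. Positions are (x, y) metres."""
    return [(name, math.dist(pos_g, pos))
            for name, pos in others.items()
            if math.dist(pos_g, pos) < warn_m]

# Example with hypothetical coordinates: E is ~7.2 m away and flagged.
print(check_separation((0.0, 0.0), {"E": (6.0, 4.0), "F": (30.0, 2.0)}))
```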
While the present embodiment has been described above using an example in which the present invention is realized as an apparatus, the present invention is not limited thereto, and a program to be executed by a computer may be distributed while being stored in a storage medium such as a magnetic disk (a floppy (registered trademark) disk, a hard disk, or the like), an optical disk (a CD-ROM, a DVD, or the like), a magneto-optical disk (MO), or a semiconductor memory.
In addition, the storage media may be any storage media that are capable of storing a program and readable by a computer. A storage format of the storage media is not particularly limited.
Furthermore, a part of the processing steps for realizing the present embodiment may be executed by an OS (operating system) running on a computer on the basis of instructions of a program installed on the computer from a storage medium, or by MW (middleware) such as database management software and network software.
Moreover, the storage media in the present invention are not limited to media that are independent of a computer and include storage media to which a program transmitted through a LAN, the Internet, or the like has been downloaded and which store or temporarily store the downloaded program.
In addition, the computer in the present invention need only execute respective processing steps in the present embodiment based on a program stored in a storage medium and may be an apparatus constituted by a single personal computer (PC) or the like or may be a system or the like in which a plurality of apparatuses are connected via a network.
Furthermore, the computer in the present invention is not limited to a personal computer and collectively refers to devices and apparatuses capable of realizing functions of the present invention including arithmetic processing units and microcomputers included in information processing devices.
An embodiment of the present invention has been described above. However, it is to be understood that the present invention is not limited to the embodiment described above and that various modifications can be made without departing from the scope of the appended claims.
This application claims priority to Japanese Patent Application No. 2020-160225, filed in September 2020.