The present application claims priority under 35 U.S.C. § 119 to Japanese Patent Application No. 2023-009491, filed Jan. 25, 2023, the contents of which application are incorporated herein by reference in their entirety.
The present disclosure relates to a remote support system, a remote support method, and a remote support program for providing remote support to a vehicle.
The prior art disclosed in US 2021/0089018 A1 receives a safety condition required for remotely controlling a vehicle, and generates a remote-control signal when the safety condition is satisfied.
The above-described prior art is a kind of remote support by a remote operator. In such remote support, a video image acquired by a monitoring camera may be used together with, or instead of, a video image acquired by an onboard camera. However, when there is a plurality of monitoring cameras, there remains the question of which monitoring camera should be used to acquire the video image, because the performance of remote support varies depending on the video image used. The above-described conventional technique does not provide any solution for selecting a monitoring camera in remote support.
The present disclosure has been made in view of the above problems. The present disclosure provides a technique capable of effectively utilizing a plurality of monitoring cameras for remote support of a vehicle.
To achieve the above objective, the present disclosure provides a remote support system for providing remote support to a vehicle. The remote support system includes a display device for displaying a video image to a remote operator, at least one processor, and at least one memory storing a plurality of instructions executable by the at least one processor. The plurality of instructions is configured to cause the at least one processor to perform the following processes. The first process is to acquire video images related to operation of the vehicle from a plurality of monitoring cameras installed on or near an operation route of the vehicle. The second process is to select a video image to be displayed on the display device from among the video images acquired by the plurality of monitoring cameras based on information regarding the current and future relative relationship between each of the plurality of monitoring cameras and the vehicle. The third process is to display the selected video image on the display device.
In order to achieve the above object, the present disclosure also provides a remote support method for providing remote support to a vehicle. The remote support method of the present disclosure includes the following steps. The first step is a step of acquiring video images related to operation of the vehicle from a plurality of monitoring cameras installed on or near an operation route of the vehicle. The second step is a step of selecting a video image to be displayed on a display device for a remote operator from among the video images acquired by the plurality of monitoring cameras based on information regarding the current and future relative relationship between each of the plurality of monitoring cameras and the vehicle. The third step is a step of displaying the selected video image on the display device.
In order to achieve the above object, the present disclosure further provides a remote support program for providing remote support to a vehicle. A remote support program according to the present disclosure is configured to cause a computer to execute the remote support method described above. The remote support program of the present disclosure may be recorded in a non-transitory computer-readable recording medium.
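As an illustration only, the acquisition, selection, and display processes summarized above might be organized as in the following Python sketch. All names in the sketch (MonitoringCamera, remote_support_cycle, score, show) are hypothetical and are not prescribed by the present disclosure; the scoring rule stands in for the selection methods described later.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class MonitoringCamera:
    camera_id: str
    capture: Callable[[], bytes]  # returns the latest encoded video frame

def remote_support_cycle(cameras: List[MonitoringCamera],
                         score: Callable[[MonitoringCamera], float],
                         show: Callable[[str, bytes], None]) -> None:
    """One cycle of the three-step method: acquire, select, display."""
    # Process 1: acquire video images from the monitoring cameras installed
    # on or near the operation route of the vehicle.
    frames: Dict[str, bytes] = {cam.camera_id: cam.capture() for cam in cameras}
    # Process 2: select the video image to display, scoring each camera by
    # its current and future relative relationship to the vehicle
    # (selection methods 1 to 6 described later are candidate scoring rules).
    selected = max(cameras, key=score)
    # Process 3: display the selected video image to the remote operator.
    show(selected.camera_id, frames[selected.camera_id])
```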
According to the technique of the present disclosure, it is possible to determine which monitoring camera provides a more useful video image for performing remote support by taking into consideration not only the current relative relationship but also the future relative relationship between each of the plurality of monitoring cameras and the vehicle. Thus, it is possible to select a video image useful for remote support from among the video images acquired by the plurality of monitoring cameras and display the video image to the remote operator.
The remote support system of the present disclosure is applicable to any of a remote driving system that remotely drives a vehicle, a remote assistance system that remotely assists a vehicle, and a remote monitoring system that remotely monitors a vehicle. In the present embodiment, the remote support system of the present disclosure is applied to a remote driving system that remotely drives a vehicle.
The vehicle 20 is a vehicle that supports remote driving. The vehicle 20 is provided with at least one onboard camera 24. The onboard camera 24 includes at least a camera that captures a video image of the front of the vehicle 20. Preferably, the onboard camera 24 includes a camera that captures a video image of the diagonally right front of the vehicle 20 and a camera that captures a video image of the diagonally left front of the vehicle 20. The onboard camera 24 may include a camera that captures a video image of the rear of the vehicle 20, a camera that captures a video image of the diagonally right rear of the vehicle 20, and a camera that captures a video image of the diagonally left rear of the vehicle 20.
The vehicle 20 includes an onboard computer 21. The onboard computer 21 includes a remote driving kit 22 for remote driving. The remote driving kit 22 is connected to a communication network by using wireless communication. The remote driving kit 22 transmits the onboard camera video image 40 captured by the onboard camera 24 to the management server 30 via the communication network. In addition, when the vehicle 20 is a vehicle having an automatic driving function, the onboard computer 21 includes an automatic driving kit 23 for automatic driving. The automatic driving kit 23 uses a video image captured by the onboard camera 24, particularly, a video image captured by a front camera, for recognition of surrounding objects for automatic driving.
The remote driving kit 22 acquires information regarding the state of the vehicle 20 such as a vehicle speed, a travel distance, and a remaining fuel amount from internal sensors 26 mounted on the vehicle 20. The remote driving kit 22 transmits the information acquired from the internal sensors 26 and the information acquired by the calculation by the onboard computer 21 to the management server 30 as vehicle information 42. The information obtained by the calculation includes information on a scheduled traveling route and a traveling direction of the vehicle 20.
The remote driving kit 22 receives a control amount command 48 for remote driving from the management server 30. The remote driving kit 22 controls actuators 28 of the vehicle 20 in accordance with the control amount command 48. The actuators 28 include a driving actuator that drives the vehicle 20, a braking actuator that brakes the vehicle 20, and a steering actuator that steers the vehicle 20. When the automatic driving is performed, the control amount of each actuator 28 is calculated by the automatic driving kit 23 so that the vehicle 20 travels along a target trajectory.
The monitoring camera 10 is a monitoring camera installed as part of social infrastructure, in particular traffic infrastructure. A large number of monitoring cameras 10 are installed along a road. The monitoring camera 10 is a network camera connected to the communication network. The monitoring camera 10 transmits a captured video image to the management server 30 via the communication network.
Here, an example of installation of the monitoring camera 10 will be described with reference to
Returning to
The remote cockpit 50 includes a controller 52. The display contents of the display device 53 are controlled by the controller 52. The controller 52 is connected to the management server 30. The controller 52 displays the onboard camera video image 41 transmitted from the management server 30 on the main screens 54C, 54L, and 54R, and displays the monitoring camera video image 45 transmitted from the management server 30 on the sub-screens 56C, 56L, and 56R. The controller 52 has a function of receiving the vehicle information 42 from the management server 30 and presenting the received vehicle information 42 to the remote operator. Further, the controller 52 has a function of transmitting the control amount command 48 input by the remote operator to the management server 30.
The management server 30 comprises a processor 32 and a program memory 34 communicatively coupled to the processor 32. The number of processors 32 and the number of program memories 34 constituting the management server 30 may be plural. The program memory 34 is a computer-readable recording medium. The program memory 34 stores at least one program executable by the processor 32. The program comprises a plurality of instructions 36. The instructions 36 stored in the program memory 34 include instructions for causing the processor 32 to implement the remote support method of the present disclosure. The instructions correspond to the remote support program of the present disclosure.
The management server 30 comprises a video image memory 38 communicatively coupled to the processor 32. The onboard camera video image 40 received from the onboard computer 21 and the monitoring camera video image 44 received from the monitoring camera 10 are stored in the video image memory 38. The processor 32 reads the onboard camera video image 40 stored in the video image memory 38, and generates the onboard camera video image 41 to be displayed on the main screens 54C, 54L, and 54R. The onboard camera video image 41 may be the onboard camera video image 40 itself, or may be the onboard camera video image 40 processed as necessary. The processor 32 reads out a video image selected from the many monitoring camera video images 44 stored in the video image memory 38 according to a predetermined rule, and generates the monitoring camera video image 45 to be displayed on the sub-screens 56C, 56L, and 56R. The monitoring camera video image 45 may be the monitoring camera video image 44 itself, or may be the monitoring camera video image 44 processed as necessary. The function of selecting a monitoring camera video image to be displayed from among the plurality of monitoring camera video images 44 and processing the selected video image as necessary may be provided in the controller 52 instead of the management server 30.
As in the example illustrated in
The selection of the monitoring camera video image is performed based on information on the current and future relative relationship between the vehicle 20 and each monitoring camera 10 related to the vehicle 20. By taking into consideration not only the current relative relationship with the vehicle 20 but also the future relative relationship with the vehicle 20 for the plurality of monitoring cameras 10, it becomes clear which monitoring camera provides a more useful video image for performing remote driving. As a result, it is possible to select a video image useful for remote driving from the plurality of monitoring camera video images 44. A method of selecting a monitoring camera video image will be described below.
The video image displayed on the display device 53 according to the selection method 1 is a video image in which the area where the vehicle 20 is scheduled to pass is shown. According to the selection method 1, as the information regarding the current and future relative relationship between each monitoring camera 10 and the vehicle 20, the scheduled passing time at which the vehicle 20 passes through the visual field of each monitoring camera 10 is referred to. Then, the video image of a monitoring camera whose visual field includes the area through which the vehicle 20 is scheduled to pass within a predetermined time is preferentially selected.
At time T11, the scheduled passing area of the vehicle 20 is included only in the visual field 11A of the monitoring camera 10A. Therefore, at time T11, the video image acquired by the monitoring camera 10A is preferentially selected.
At time T12, a part of the scheduled passing area of the vehicle 20 enters the visual field 11B of the monitoring camera 10B, and the remaining part is included in the visual field 11A of the monitoring camera 10A. In this case, the monitoring camera having the larger scheduled passing area in its visual field is prioritized. In addition, in a case where the sizes of the scheduled passing areas included in the visual fields are the same between the monitoring cameras, for example, the monitoring camera installed further forward, that is, the monitoring camera whose visual field includes the more distant future scheduled passing area, is prioritized. Therefore, at time T12, the video image acquired by the monitoring camera 10B is preferentially selected.
At time T13, the scheduled passing area of the vehicle 20 is included only in the visual field 11B of the monitoring camera 10B. Therefore, at time T13, the video image acquired by the monitoring camera 10B is preferentially selected.
At time T14, most of the scheduled passing area of the vehicle 20 is within the visual field 11C of the monitoring camera 10C, and the remaining part is within the visual field 11B of the monitoring camera 10B. Therefore, at time T14, the video image acquired by the monitoring camera 10C is preferentially selected.
As in the above example, the video image of the monitoring camera whose visual field includes the scheduled passing area of the vehicle 20 in the near future is preferentially selected, and thus the remote operator can remotely drive the vehicle 20 while anticipating a situation that the vehicle 20 may encounter.
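The following Python sketch illustrates one possible implementation of selection method 1, assuming the scheduled route is available as timestamped waypoints and that each camera's visual field can be tested for point containment. These representations, and all names in the sketch, are assumptions made for illustration and are not prescribed by the present disclosure.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

Point = Tuple[float, float]

@dataclass
class CandidateCamera:
    camera_id: str
    contains: Callable[[Point], bool]  # True if a point lies in the camera's visual field
    forward_rank: int                  # larger value = installed further forward on the route

def select_by_scheduled_area(cameras: List[CandidateCamera],
                             waypoints: List[Tuple[float, Point]],  # (eta_seconds, position)
                             horizon_s: float) -> Optional[str]:
    """Prefer the camera whose visual field covers the largest part of the area
    the vehicle is scheduled to pass within horizon_s seconds; on a tie, prefer
    the camera installed further forward (the future scheduled passing area)."""
    near_future = [pos for eta, pos in waypoints if eta <= horizon_s]
    best_id, best_key = None, (-1, -1)
    for cam in cameras:
        covered = sum(1 for pos in near_future if cam.contains(pos))
        key = (covered, cam.forward_rank)
        if covered > 0 and key > best_key:
            best_id, best_key = cam.camera_id, key
    return best_id
```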
The video image displayed on the display device 53 according to the selection method 2 is a video image in which the area where the vehicle 20 is scheduled to pass is shown. According to the selection method 2, as the information regarding the current and future relative relationship between each monitoring camera 10 and the vehicle 20, the distance from a predetermined area on the operation route of the vehicle 20 to the vehicle 20 is referred to. Then, when the vehicle 20 approaches a point at a predetermined distance from the predetermined area, the video image of the monitoring camera in which the predetermined area is included in the visual field is preferentially selected. The predetermined area can be arbitrarily set, but is preferably an area to which the remote operator needs to pay particular attention. Intersections, pedestrian crossings, curved roads with poor visibility, locations with frequent accidents, etc. are examples of such areas.
At time T21, the vehicle 20 has not reached the point P1. Therefore, at time T21, the video image acquired by the monitoring camera 10D is preferentially selected. At time T22, the vehicle 20 has not reached the point P1. Therefore, even at time T22, the video image acquired by the monitoring camera 10D is preferentially selected.
At time T23, the vehicle 20 has reached the point P1. Therefore, at time T23, the video image acquired by the monitoring camera 10E is preferentially selected. Thus, the remote operator can remotely drive the vehicle 20 while checking the situation of the intersection 6, and cause the vehicle 20 to safely pass through the intersection 6.
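A minimal sketch of selection method 2 is shown below, assuming the vehicle position and each attention area are given as planar coordinates and that each attention area is mapped in advance to the monitoring camera whose visual field includes it. The AttentionArea structure and all names are illustrative assumptions.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class AttentionArea:
    center: Point              # e.g., the center of an intersection
    switch_distance_m: float   # the predetermined distance (point P1 in the example)
    camera_id: str             # camera whose visual field includes this area

def select_by_distance(areas: List[AttentionArea],
                       vehicle_pos: Point,
                       default_camera_id: str) -> str:
    """Switch to the camera covering an attention area once the vehicle comes
    within the predetermined distance of it; otherwise keep the default camera."""
    for area in areas:
        if math.dist(vehicle_pos, area.center) <= area.switch_distance_m:
            return area.camera_id
    return default_camera_id
```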
The video image displayed on the display device 53 according to the selection method 3 is a video image in which the vehicle 20 is shown. According to the selection method 3, as the information regarding the current and future relative relationship between each monitoring camera 10 and the vehicle 20, the time during which the vehicle 20 is in the visual field of each monitoring camera 10 is referred to. Then, the video image of the monitoring camera having a longer time in which the vehicle 20 is in the visual field is preferentially selected.
The period during which the vehicle 20 is in the visual field 11F of the monitoring camera 10F is a period from time T31 to time T34. On the other hand, the period during which the vehicle 20 is in the visual field 11G of the monitoring camera 10G is a period from time T31 to time T33. Therefore, at least during the period from time T31 to time T34, the video image acquired by the monitoring camera 10F is preferentially selected.
As described above, the video image of the monitoring camera in which the vehicle 20 is displayed for a longer time is preferentially selected, and thus it is possible to reduce the frequency of switching the video image on the display device 53. As a modification of the selection method 3, the priority given to the monitoring camera that keeps the vehicle 20 in its visual field for a longer time may be increased as the speed of the vehicle 20 increases. The higher the vehicle speed, the shorter the time during which the vehicle 20 is captured by one monitoring camera. Therefore, the modification can reduce the frequency of switching the video image when the vehicle 20 is moving at high speed.
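A minimal sketch of selection method 3, including the speed-dependent modification, is shown below. It assumes that the predicted entry and exit times of the vehicle 20 in each camera's visual field are available; how those times are predicted, and the base_score hook used to combine this criterion with others, are assumptions made for illustration.

```python
from typing import Callable, Dict, Optional, Tuple

def select_by_dwell_time(in_view_window: Dict[str, Tuple[float, float]],
                         vehicle_speed_mps: float,
                         base_score: Optional[Callable[[str], float]] = None,
                         speed_gain: float = 0.1) -> Optional[str]:
    """Prefer the camera that keeps the vehicle in its visual field the longest.
    With speed_gain > 0, the weight on dwell time grows with vehicle speed
    (the modification described above), further reducing video switching at
    high speed when combined with other criteria via base_score."""
    weight = 1.0 + speed_gain * vehicle_speed_mps
    best_id, best_total = None, float("-inf")
    for camera_id, (t_enter, t_exit) in in_view_window.items():
        dwell = max(0.0, t_exit - t_enter)          # predicted time in view
        other = base_score(camera_id) if base_score else 0.0
        total = other + weight * dwell
        if total > best_total:
            best_id, best_total = camera_id, total
    return best_id
```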
The video image displayed on the display device 53 according to the selection method 4 is a video image in which the vehicle 20 is shown. According to the selection method 4, as the information regarding the current and future relative relationship between each monitoring camera 10 and the vehicle 20, the ratio of the visual field of each monitoring camera 10 occupied by the vehicle 20 is referred to. Then, the video image of the monitoring camera in which the ratio of the visual field occupied by the vehicle 20 is equal to or greater than the first predetermined ratio and less than the second predetermined ratio is preferentially selected.
When the vehicle 20 further advances, the ratio of the visual field 11J of the monitoring camera 10J occupied by the vehicle 20 becomes equal to or greater than the second predetermined ratio, and the ratio of the visual field 11K of the monitoring camera 10K occupied by the vehicle 20 becomes equal to or greater than the first predetermined ratio and less than the second predetermined ratio. In this case, the video image acquired by the monitoring camera 10K is preferentially selected instead of the video image acquired by the monitoring camera 10J.
If the ratio of the visual field occupied by the vehicle 20 is too small, it is difficult to recognize the state of the vehicle 20 and the situation in which the vehicle 20 is placed. On the other hand, when the ratio of the visual field occupied by the vehicle 20 is too large, information useful for remote driving, particularly, the situation around the vehicle 20 cannot be acquired from the video image of the monitoring camera. The first predetermined ratio and the second predetermined ratio are set to values that allow the state of the vehicle 20 and the situation around the vehicle 20 to be recognized from the video image of the monitoring camera.
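A minimal sketch of selection method 4 is shown below. The occupancy_ratio input (for example, the ratio of the vehicle's bounding box to the image area) and the default threshold values are illustrative assumptions; the disclosure only requires the first and second predetermined ratios.

```python
from typing import Dict, Optional

def select_by_occupancy(occupancy_ratio: Dict[str, float],
                        first_ratio: float = 0.05,
                        second_ratio: float = 0.5) -> Optional[str]:
    """Prefer a camera whose visual field is occupied by the vehicle at least
    first_ratio but less than second_ratio, so that both the vehicle and its
    surroundings remain recognizable in the video image."""
    eligible = {cid: r for cid, r in occupancy_ratio.items()
                if first_ratio <= r < second_ratio}
    if not eligible:
        return None
    # Among eligible cameras, pick the one closest to the middle of the band.
    target = (first_ratio + second_ratio) / 2.0
    return min(eligible, key=lambda cid: abs(eligible[cid] - target))
```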
The video image displayed on the display device 53 according to the selection method 5 is a video image in which the vehicle 20 is shown. According to the selection method 5, the camera direction of each monitoring camera 10 with respect to the vehicle 20 is referred to. Then, a video image of a monitoring camera that captures a video image of the vehicle 20 from a specific direction, for example, from the rear, is preferentially selected. A video image that captures the vehicle 20 from behind causes less discomfort for the remote operator who drives while viewing it.
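A minimal sketch of selection method 5 is shown below, assuming camera positions, the vehicle position, and the vehicle heading are available in a common planar coordinate system; the angle convention and tolerance are illustrative assumptions. A camera views the vehicle from behind when the direction from the camera to the vehicle roughly matches the vehicle's heading.

```python
import math
from typing import Dict, Optional, Tuple

Point = Tuple[float, float]

def select_by_view_direction(camera_pos: Dict[str, Point],
                             vehicle_pos: Point,
                             vehicle_heading_rad: float,
                             max_offset_rad: float = math.pi / 4) -> Optional[str]:
    """Prefer the camera that captures the vehicle from behind, i.e. whose
    bearing toward the vehicle deviates least from the vehicle's heading."""
    best_id, best_offset = None, max_offset_rad
    for camera_id, (cx, cy) in camera_pos.items():
        bearing = math.atan2(vehicle_pos[1] - cy, vehicle_pos[0] - cx)
        diff = bearing - vehicle_heading_rad
        offset = abs(math.atan2(math.sin(diff), math.cos(diff)))  # wrap to [0, pi]
        if offset <= best_offset:
            best_id, best_offset = camera_id, offset
    return best_id
```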
The video image displayed on the display device 53 according to the selection method 6 is a video image showing the surroundings of the vehicle 20. According to the selection method 6, the difference between the visual field of each monitoring camera 10 and the visual field of the onboard camera 24 is referred to. For example, the video image of the monitoring camera in which a place that is difficult to see or a place that is a blind spot in the video image of the onboard camera 24 is captured is preferentially selected.
A video image of a monitoring camera that captures a target approaching the course of the vehicle 20, or a target that is likely to move into or near the course, may also be preferentially selected.
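A minimal sketch of selection method 6 is shown below, assuming each visual field can be represented as a set of grid cells over the road surface and that targets approaching the course are counted per camera; both representations are assumptions made for illustration.

```python
from typing import Dict, Optional, Set, Tuple

Cell = Tuple[int, int]   # grid cell identifying a small patch of road surface

def select_by_blind_spot_coverage(camera_fov: Dict[str, Set[Cell]],
                                  onboard_fov: Set[Cell],
                                  approaching_targets: Dict[str, int]) -> Optional[str]:
    """camera_fov: visual field of each monitoring camera as grid cells;
    onboard_fov: union of the onboard cameras' visual fields;
    approaching_targets: per-camera count of detected targets approaching or
    likely to move into the course of the vehicle.
    Prefer the camera covering the most area the onboard cameras cannot see,
    breaking ties by the number of approaching targets it captures."""
    best_id, best_key = None, (-1, -1)
    for camera_id, fov in camera_fov.items():
        blind_coverage = len(fov - onboard_fov)
        key = (blind_coverage, approaching_targets.get(camera_id, 0))
        if key > best_key:
            best_id, best_key = camera_id, key
    return best_id
```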
In the present embodiment, the video image of the monitoring camera is selected by any of the above-described selection methods. Each monitoring camera is associated with any one of the three sub-screens 56C, 56L, and 56R according to a predetermined distribution rule. The display destination of the video image of the selected monitoring camera, that is, the screen on which the video image of the selected monitoring camera is displayed, is determined in advance for each monitoring camera. If it is known in advance which monitoring camera's video image is displayed on which screen, the remote operator can understand, without confusion, what kind of video image is displayed on each screen.
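One possible way to encode the predetermined distribution rule is a static table mapping each monitoring camera to its sub-screen, as in the following sketch. The particular assignments listed are only illustrative and merely follow the reference numerals used later in this description.

```python
from typing import Dict, Optional

# Illustrative assignment only; the actual distribution rule is predetermined
# per installation. Labels follow the reference numerals used in this
# description (cameras 10L, 10M, 10N, 10P; sub-screens 56C, 56L, 56R).
SCREEN_ASSIGNMENT: Dict[str, str] = {
    "10P": "56C",
    "10M": "56C",
    "10N": "56L",
    "10L": "56R",
}

def display_destination(camera_id: str) -> Optional[str]:
    """Return the sub-screen fixed in advance for this monitoring camera,
    or None if the camera has no assigned screen (its video is not shown)."""
    return SCREEN_ASSIGNMENT.get(camera_id)
```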
Here, the example shown in
In the present embodiment, the video images of the monitoring cameras displayed on the sub-screens 56C, 56L, and 56R of the display device 53 are changed according to the distance of the vehicle 20 to the intersection 6 and the traveling route passing through the intersection 6. Thus, the video images useful for the remote driving can be displayed to the remote operator in an optimum way using the plurality of screens.
First, the example on the left side of
At time T42 when the vehicle 20 approaches the entrance of the intersection 6, the video image of the center screen 56C is turned off. On the other hand, the video image 45N of the monitoring camera 10N is displayed on the left screen 56L, and the video image 45L of the monitoring camera 10L is displayed on the right screen 56R. These video images 45N and 45L best show the situation of the area that the vehicle 20 enters when turning right at the intersection 6, that is, the area to be watched when the vehicle 20 turns right at the intersection 6.
At time T43 when the vehicle 20 reaches the vicinity of the center of the intersection 6, the video image of the center screen 56C remains turned off. Further, the video image 45N of the monitoring camera 10N is continuously displayed on the left screen 56L, and the video image 45L of the monitoring camera 10L is continuously displayed on the right screen 56R.
At time T44 when the vehicle 20 passes through the intersection 6, the video image 45M of the monitoring camera 10M is displayed on the central screen 56C. On the other hand, the video images of the left screen 56L and the right screen 56R are turned off.
Next, the example on the right side of
At time T42 when the vehicle 20 approaches the entrance of the intersection 6, the video image 45N of the monitoring camera 10N is displayed on the left screen 56L, and the video image 45L of the monitoring camera 10L is displayed on the right screen 56R. Further, the video image in the central screen 56C is switched from the video image 45P of the monitoring camera 10P to the video image 45M of the monitoring camera 10M.
At time T43 when vehicle 20 reaches the vicinity of the center of intersection 6, the video image 45M of the monitoring camera 10M is continuously displayed on the center screen 56C. Further, the video image 45N of the monitoring camera 10N is continuously displayed on the left screen 56L, and the video image 45L of the monitoring camera 10L is continuously displayed on the right screen 56R.
At time T44 when the vehicle 20 passes through the intersection 6, the video images of the left screen 56L and the right screen 56R are turned off. On the other hand, the video image 45M of the monitoring camera 10M is continuously displayed on the central screen 56C. That is, after the vehicle 20 enters the intersection 6, the video image 45M of the monitoring camera 10M is continuously displayed on the center screen 56C. This makes it possible to reduce the frequency of switching the display of the video image.
Although the above description is made on the assumption that the display device 53 is provided with the three sub-screens 56C, 56L, and 56R, there may be a case where only one screen is provided to display the monitoring camera video image. In this case, the monitoring camera video image is selected in the following priority order, for example.
When the vehicle 20 travels straight, turns right, or turns left at the intersection 6 at a high or low vehicle speed, the priorities of the monitoring camera video images are, for example, the monitoring camera 10P, the monitoring camera 10Q, and then the monitoring camera 10L and the monitoring camera 10N, in descending order of priority. This means that, when the monitoring camera 10P exists, the video image of the monitoring camera 10P is displayed, and when the monitoring camera 10P does not exist, the video image of the monitoring camera 10Q is displayed. The monitoring camera 10L and the monitoring camera 10N have the same priority.
When the vehicle 20 turns right at the intersection 6 at a medium speed, the monitoring camera 10N is prioritized over the monitoring camera 10L in a comparison between the monitoring camera 10N and the monitoring camera 10L. When the vehicle 20 turns right at the intersection 6, the place to be watched is hidden behind the vehicle 20 in the video image of the monitoring camera 10L, whereas the monitoring camera 10N can acquire a video image in which that place is not hidden behind the vehicle 20.
In a comparison between the monitoring camera 10M and the monitoring camera 10N, the monitoring camera 10M is prioritized over the monitoring camera 10N. Since the video image of the monitoring camera 10M most clearly shows the situation of the place that the vehicle 20 enters by turning right at the intersection 6, it is preferable to preferentially select that video image when turning right. In a comparison among the monitoring camera 10L, the monitoring camera 10P, and the monitoring camera 10Q, the priorities are, in descending order, the monitoring camera 10L, the monitoring camera 10P, and the monitoring camera 10Q.
When the vehicle 20 travels straight at the intersection 6 at a medium speed, an example of the priorities of the monitoring camera video images is the monitoring camera 10N and the monitoring camera 10L, the monitoring camera 10P, and the monitoring camera 10Q in descending order of priorities. The monitoring camera 10L and the monitoring camera 10N have the same priorities.
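The single-screen priority rules described above might be encoded as a table keyed by maneuver and speed band, as in the following sketch. The table entries merely restate the examples given in this description; the key names and the tie-breaking within an equal-priority tier are assumptions.

```python
from typing import Dict, Iterable, List, Optional, Tuple

# Each entry is a list of priority tiers; cameras within a tier have equal
# priority. The entries encode the examples described above.
PRIORITY_TABLE: Dict[Tuple[str, str], List[List[str]]] = {
    ("any_maneuver", "high_or_low"): [["10P"], ["10Q"], ["10L", "10N"]],
    ("right_turn", "medium"):        [["10M"], ["10N"], ["10L"], ["10P"], ["10Q"]],
    ("straight", "medium"):          [["10N", "10L"], ["10P"], ["10Q"]],
}

def pick_for_single_screen(maneuver: str, speed_band: str,
                           available: Iterable[str]) -> Optional[str]:
    """Return the highest-priority installed camera for the single screen."""
    tiers = PRIORITY_TABLE.get((maneuver, speed_band))
    if tiers is None:
        return None
    installed = set(available)
    for tier in tiers:
        for camera_id in tier:
            if camera_id in installed:
                return camera_id
    return None
```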
The monitoring camera video image displayed on the display device 53 is processed as necessary. An example of processing of the monitoring camera video image in a case where the monitoring camera video image is switched as illustrated in
Judging from the visual fields of the monitoring cameras 10R and 10S, the vehicle 20 entering the intersection 8 should be shown on both the left screen 56L and the right screen 56R at each of time T52 and time T53. However, according to the processing method 1, the processing of hiding the vehicle 20 with the mask 60L and the mask 60R is performed on the respective monitoring camera video images of the left screen 56L and the right screen 56R.
At the intersection 8 where the traveling direction of the vehicle 20 changes, the remote operator is likely to be confused when remotely driving the vehicle 20 while viewing a video image captured from outside the vehicle. By hiding the vehicle 20 with the mask 60L and the mask 60R, it is possible to prevent the remote operator from being confused while leaving the information useful for remote driving included in the monitoring camera video image. Instead of hiding the vehicle 20 by the mask processing, the vehicle 20 may be removed from the monitoring camera video image using an image processing application.
According to the processing method 2, the vehicle 20 is not hidden, so that the relationship between the vehicle 20 and its surroundings can be seen. However, in order to make the traveling direction of the vehicle 20 at the intersection 8 recognizable, processing is performed to superimpose and display AR arrows 62L and 62R indicating the traveling direction of the vehicle 20 on the respective monitoring camera video images of the left screen 56L and the right screen 56R. The remote operator can intuitively understand the direction of the vehicle 20 from the arrows 62L and 62R, and thus confusion caused by the vehicle 20 being shown on the screens 56L and 56R can be reduced.
When the vehicle 20 is moving slowly, the remote operator has more time to resolve any confusion. Therefore, the processing by the processing method 2 may be performed when the speed of the vehicle 20 is less than a predetermined speed, and the processing by the processing method 1 may be performed when the speed of the vehicle 20 is equal to or greater than the predetermined speed.
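The speed-dependent switch between the two processing methods might be implemented as in the following sketch; apply_mask and draw_direction_arrow are placeholders for the actual image processing, which is not specified here.

```python
from typing import Callable

def process_monitoring_frame(frame,
                             vehicle_speed_mps: float,
                             threshold_mps: float,
                             apply_mask: Callable,
                             draw_direction_arrow: Callable):
    """Choose between the two processing methods based on vehicle speed."""
    if vehicle_speed_mps >= threshold_mps:
        # Processing method 1: hide the vehicle with a mask.
        return apply_mask(frame)
    # Processing method 2: keep the vehicle visible and superimpose an arrow
    # indicating its traveling direction at the intersection.
    return draw_direction_arrow(frame)
```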
By displaying the monitoring camera video image on the onboard camera video image in a superimposed manner, the remote operator can view the monitoring camera video image while viewing the onboard camera video image. The monitoring camera video image may be displayed in a superimposed manner on the onboard camera video image only during a period from when the vehicle 20 approaches the intersection to when the vehicle 20 passes through the intersection. By performing the superimposed display of the monitoring camera video image only in a scene in which remote driving is particularly necessary, it is possible to reduce the calculation load of the processor 32 while preventing the video image superimposed and displayed on the onboard camera video image from interfering with the remote operator.
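A minimal sketch of limiting the superimposed display to the period from approaching the intersection until passing through it is shown below; the overlay function and the approach distance threshold are illustrative assumptions.

```python
from typing import Callable, Optional

def compose_display(onboard_frame,
                    monitoring_frame,
                    distance_to_intersection_m: Optional[float],
                    overlay: Callable,
                    approach_distance_m: float = 50.0):
    """Superimpose the monitoring camera video image on the onboard camera
    video image only while the vehicle is approaching or passing through the
    intersection; otherwise show the onboard camera video image alone."""
    near_intersection = (distance_to_intersection_m is not None
                         and distance_to_intersection_m <= approach_distance_m)
    if near_intersection:
        return overlay(onboard_frame, monitoring_frame)
    # Outside that period, skip the superimposition and save processor load.
    return onboard_frame
```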