The present disclosure claims priority to Japanese Patent Application No. 2023-090363, filed on May 31, 2023, the contents of which application are incorporated herein by reference in their entirety.
The present disclosure relates to a technique for controlling a vehicle traveling in a predetermined area.
Patent Literature 1 discloses a driving assistance device for a vehicle. The driving assistance device sets a determination region for collision risk determination in front of the vehicle. The driving assistance device detects a preceding vehicle ahead of the vehicle using an in-vehicle camera. When the preceding vehicle enters the determination region, the driving assistance device determines that there is a collision risk and issues a warning to an occupant.
According to the technique described in Patent Literature 1, only a target visible to the in-vehicle camera is treated as a target for the collision risk determination. For example, a target beyond a curve ahead of the vehicle is not visible to the in-vehicle camera, and thus a collision risk cannot be determined for that target. When such a target finally comes into the field of vision of the in-vehicle camera, the vehicle may already be very close to it, and sudden braking may be caused.
An object of the present disclosure is to provide a technique capable of improving safety when controlling a vehicle traveling in a predetermined area.
A first aspect is directed to a vehicle control system for controlling a vehicle traveling in a predetermined area.
The vehicle control system includes one or more processors.
The one or more processors acquire vehicle information indicating a position of the vehicle.
The one or more processors set a determination region around the vehicle based on the vehicle information.
The one or more processors acquire an image captured by an infrastructure camera that is installed outside the vehicle and images a situation of the predetermined area.
The one or more processors determine whether or not a target is present in the determination region based on the image captured by the infrastructure camera.
The one or more processors decelerate the vehicle when the target is present in the determination region.
A second aspect is directed to a vehicle control method for controlling a vehicle in a predetermined area by a computer.
The vehicle control method includes: acquiring vehicle information indicating a position of the vehicle; setting a determination region around the vehicle based on the vehicle information; acquiring an image captured by an infrastructure camera that is installed outside the vehicle and images a situation of the predetermined area; determining whether or not a target is present in the determination region based on the image captured by the infrastructure camera; and decelerating the vehicle when the target is present in the determination region.
According to the present disclosure, the infrastructure camera is used for controlling the vehicle. Using the infrastructure camera makes it possible to detect a target that cannot be detected by an in-vehicle sensor such as an in-vehicle camera. The vehicle control is then performed in consideration of the target that cannot be detected by the in-vehicle sensor. More specifically, when the target is present in the determination region around the vehicle, the vehicle is decelerated. Since the target that cannot be detected by the in-vehicle sensor is taken into consideration as well, the vehicle can be decelerated well in advance, which reduces the need for sudden braking. Therefore, the safety in controlling the vehicle is improved.
Embodiments of the present disclosure will be described with reference to the accompanying drawings.
The predetermined area AR in which the vehicle 1 travels is, for example, an elongated region extending in a first direction S. It can be said that the first direction S is a longitudinal direction of the predetermined area AR. Examples of the predetermined area AR include a roadway (e.g., expressway, general road), a passage in a parking lot, a passage in a factory, and the like. A center line of the predetermined area AR parallel to the first direction S may be the same as a center line of the roadway or the passage. On the other hand, a width of the predetermined area AR orthogonal to the first direction S does not need to be completely equal to a width of the roadway or the passage. The width of the predetermined area AR may be a width obtained by adding a margin width to a vehicle width of a general vehicle. The width of the predetermined area AR may be increased in a curve section.
The vehicle control system 100 may include at least a part of an in-vehicle system 10 installed on the vehicle 1. The in-vehicle system 10 acquires a current position of the vehicle 1 by using a global navigation satellite system (GNSS). In addition, the in-vehicle system 10 acquires an image captured (taken) by an in-vehicle camera. The in-vehicle system 10 may control the vehicle 1 based on the image captured by the in-vehicle camera. For example, the in-vehicle system 10 may control automated driving of the vehicle 1 based on the image captured by the in-vehicle camera. That is, the vehicle 1 may be an automated driving vehicle. Here, the automated driving means that at least a part of steering, acceleration, and deceleration of the vehicle 1 is automatically performed independently of a driver's driving operation. As an example, the automated driving may be Level 3 or higher.
The vehicle control system 100 may include an external device that is located outside the vehicle 1. For example, the external device is a management server that manages the vehicle 1. The management server may be a distributed server that performs distributed processing. The external device communicates with the vehicle 1 (i.e., the in-vehicle system 10) and remotely controls the vehicle 1.
The vehicle control system 100 may be distributed to the in-vehicle system 10 and the external device.
According to the present embodiment, an “infrastructure camera CAM” installed outside the vehicle 1 is also used for controlling the vehicle 1. The infrastructure camera CAM is installed so as to be able to image a situation of the predetermined area AR and its surroundings. Typically, a plurality of infrastructure cameras CAM are installed along the longitudinal direction (i.e., the first direction S) of the predetermined area AR. Respective angles of view of the plurality of infrastructure cameras CAM may partially overlap each other. An image 250, which is captured (taken) by the infrastructure camera CAM, indicates the situation of the predetermined area AR and its surroundings.
The vehicle control system 100 communicates with the infrastructure camera CAM to acquire the image 250 captured (taken) by the infrastructure camera CAM. A target TGT present in the predetermined area AR may be shown in the image 250. Here, the target TGT is an object other than the vehicle 1 which is the control target. Examples of the target TGT include a pedestrian, a bicycle, another vehicle (for example, a preceding vehicle, a parked vehicle) other than the vehicle 1, and the like.
The vehicle control system 100 is able to detect (recognize) the target TGT present in the predetermined area AR based on the image 250 captured by the infrastructure camera CAM. For example, the vehicle control system 100 utilizes an image recognition AI (Artificial Intelligence) to recognize the target TGT shown in the image 250. The image recognition AI is generated in advance through machine learning. Installation information (an installation position, an installation direction, an angle of view, and the like) of the infrastructure camera CAM is known information. Based on the installation information of the infrastructure camera CAM and an in-image position of the target TGT in the image 250, the vehicle control system 100 is able to detect the target TGT present in the predetermined area AR and further calculate a position of the target TGT in an absolute coordinate system.
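The mapping from an in-image position of the target TGT to a position in the absolute coordinate system can be sketched as a ground-plane projection. The following is a minimal illustration, not the disclosed implementation: it assumes a distortion-free pinhole camera whose installation position, yaw, pitch, focal length, and principal point are known (all field names here are illustrative assumptions).

```python
import math

def pixel_to_ground(u, v, cam):
    """Project an in-image pixel (u, v) onto the ground plane (z = 0).

    `cam` is an assumed dict: position (x, y, z) in world coordinates,
    yaw/pitch in radians, focal length f and principal point (cx, cy)
    in pixels. A simple pinhole model without lens distortion is assumed.
    """
    # Ray direction in the camera frame (camera looks along +x).
    ray = (1.0, -(u - cam["cx"]) / cam["f"], -(v - cam["cy"]) / cam["f"])
    # Rotate by pitch (downward tilt about y), then yaw (about z).
    cp, sp = math.cos(cam["pitch"]), math.sin(cam["pitch"])
    rx = ray[0] * cp + ray[2] * sp
    rz = -ray[0] * sp + ray[2] * cp
    cy_, sy = math.cos(cam["yaw"]), math.sin(cam["yaw"])
    wx = rx * cy_ - ray[1] * sy
    wy = rx * sy + ray[1] * cy_
    if rz >= 0:
        return None  # ray never reaches the ground
    t = -cam["position"][2] / rz  # scale so the ray's z reaches 0
    return (cam["position"][0] + t * wx, cam["position"][1] + t * wy)
```

For a camera mounted 10 m high and tilted 45 degrees downward, the optical-center pixel projects to a ground point 10 m ahead of the mast, as expected from the geometry.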
By using the infrastructure camera CAM, it may be possible to detect a target TGT that cannot be detected by an in-vehicle sensor such as an in-vehicle camera.
The target TGT present in the predetermined area AR may be a “risk” for the vehicle 1 traveling in the predetermined area AR. Therefore, the vehicle control system 100 performs “risk avoidance control” for avoiding the risk, as necessary. More specifically, the vehicle control system 100 automatically performs at least one of steering and deceleration of the vehicle 1 in order to avoid a collision between the vehicle 1 and the target TGT. That is, the risk avoidance control includes at least one of steering control and deceleration control.
In the following, in particular, the deceleration control in the risk avoidance control will be described in detail. The steering control may be performed in addition to the deceleration control described below.
A vehicle position PV is a position of the vehicle 1 in the absolute coordinate system. A direction of travel X is a direction in which the vehicle 1 travels. A “front direction (forward)” is the direction of travel X, and a “rear direction (rearward)” is a direction opposite to the direction of travel X. The determination region D may be divided into a “front determination region Df” and a “rear determination region Dr.”
The front determination region Df is the determination region D in front of the vehicle 1 and is located in the front direction (i.e., the direction of travel X) when viewed from the vehicle position PV. A front boundary DBf is a front end of the front determination region Df. It can be said that the front determination region Df is a region between the front boundary DBf and the vehicle position PV. A front distance Lf is a distance from the vehicle position PV to the front boundary DBf along the first direction S. The front distance Lf is set to a distance within which the vehicle 1 is able to stop easily without sudden braking. A position of the front boundary DBf may change in conjunction with the vehicle position PV. In that case, the front determination region Df also changes in conjunction with the vehicle position PV.
On the other hand, the rear determination region Dr is the determination region D behind the vehicle 1 and is located in the rear direction when viewed from the vehicle position PV. A rear boundary DBr is a rear end of the rear determination region Dr. It can be said that the rear determination region Dr is a region between the rear boundary DBr and the vehicle position PV. A rear distance Lr is a distance from the vehicle position PV to the rear boundary DBr along the first direction S. A position of the rear boundary DBr may change in conjunction with the vehicle position PV. In that case, the rear determination region Dr also changes in conjunction with the vehicle position PV.
The vehicle control system 100 sets the determination region D (the front determination region Df and the rear determination region Dr) based on the vehicle position PV. In addition, as described above, the vehicle control system 100 detects the target TGT present in the predetermined area AR based on the image 250 captured by the infrastructure camera CAM. Further, the vehicle control system 100 calculates the position of the target TGT in the absolute coordinate system based on the installation information (the installation position, the installation direction, the angle of view, and the like) of the infrastructure camera CAM. Additionally, the vehicle control system 100 may detect the target TGT around the vehicle 1 by using an in-vehicle sensor such as an in-vehicle camera.
Then, the vehicle control system 100 determines whether or not the target TGT is present in the determination region D. An example of the determination process is as follows. The vehicle control system 100 sets a target region that covers the detected target TGT and the vicinity thereof. For example, the target region has a circular shape. As a size of the target TGT becomes larger, the target region also becomes larger. The target region may become larger as a moving speed of the target TGT becomes higher. The target region may have a shape in which a direction of movement of the target TGT is widened. When the target region and the determination region D at least partially overlap each other, the vehicle control system 100 determines that the target TGT is present in the determination region D.
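The overlap determination described above can be illustrated as a circle-versus-rectangle test, assuming for illustration that the determination region D is an axis-aligned rectangle and the target region is a circle whose radius grows with the target's size and moving speed. The gains and base radius below are made-up example values, not values from the disclosure.

```python
def target_region_radius(target_size, speed, base=1.0,
                         size_gain=0.5, speed_gain=0.5):
    """Target-region radius: grows with target size and speed (assumed gains)."""
    return base + size_gain * target_size + speed_gain * speed

def circle_rect_overlap(cx, cy, r, xmin, ymin, xmax, ymax):
    """True if the circular target region and the rectangular
    determination region D at least partially overlap."""
    # Closest point of the rectangle to the circle center.
    nx = min(max(cx, xmin), xmax)
    ny = min(max(cy, ymin), ymax)
    return (cx - nx) ** 2 + (cy - ny) ** 2 <= r ** 2
```

The closest-point formulation avoids case analysis: if the nearest rectangle point lies within one radius of the circle center, the regions intersect.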
When the target TGT is present in the determination region D, the vehicle control system 100 activates the risk avoidance control (i.e., the deceleration control). That is to say, when the target TGT is present in the determination region D, the vehicle control system 100 decelerates the vehicle 1. Here, decelerating is a concept including decelerating to stop. That is, the vehicle control system 100 may stop the vehicle 1.
The front determination region Df is divided into a first front determination region D1f and a second front determination region D2f. The first front determination region D1f is located on the inner side, and the second front determination region D2f is located on the outer side. In other words, the second front determination region D2f surrounds the first front determination region D1f. A first front boundary DB1f is a front end of the first front determination region D1f. A second front boundary DB2f is a front end of the second front determination region D2f.
The rear determination region Dr is divided into a first rear determination region D1r and a second rear determination region D2r. The first rear determination region D1r is located on the inner side, and the second rear determination region D2r is located on the outer side. In other words, the second rear determination region D2r surrounds the first rear determination region D1r. A first rear boundary DB1r is a rear end of the first rear determination region D1r. A second rear boundary DB2r is a rear end of the second rear determination region D2r.
The vehicle control system 100 determines whether or not the target TGT is present in the determination region D. When the target TGT is present in the first determination region D1 on the inner side, the vehicle control system 100 decelerates to stop the vehicle 1. This corresponds to emergency stop control. On the other hand, when the target TGT is present in the second determination region D2 on the outer side, the vehicle control system 100 decelerates the vehicle 1 more slowly than in the case of the emergency stop control. This corresponds to normal deceleration control. A deceleration in the normal deceleration control is lower than a deceleration in the emergency stop control.
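The two-tier behavior can be sketched as a simple selector. The deceleration magnitudes below are assumed example values for illustration, not values from the disclosure.

```python
def select_deceleration(in_first_region, in_second_region,
                        emergency_decel=4.0, normal_decel=1.5):
    """Pick a deceleration command in m/s^2 (assumed example magnitudes).

    Inner (first) determination region -> emergency stop control;
    outer (second) region -> gentler normal deceleration control;
    otherwise no deceleration request.
    """
    if in_first_region:
        return emergency_decel   # decelerate to a stop
    if in_second_region:
        return normal_decel      # milder than the emergency stop
    return 0.0
```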
As described above, according to the present embodiment, the infrastructure camera CAM is used for controlling the vehicle 1. Using the infrastructure camera CAM makes it possible to detect the target TGT that cannot be detected by an in-vehicle sensor such as an in-vehicle camera. For example, even a target TGT present beyond a curve in front of the vehicle 1 can be detected by the infrastructure camera CAM. The control of the vehicle 1 is then performed in consideration of the target TGT that cannot be detected by the in-vehicle sensor. More specifically, when the target TGT is present in the determination region D around the vehicle 1, the vehicle 1 is decelerated. Since the target TGT that cannot be detected by the in-vehicle sensor is taken into consideration as well, the vehicle 1 can be decelerated well in advance, which reduces the need for sudden braking. Therefore, the safety in controlling the vehicle 1 is improved.
The sensor group 20 includes a recognition sensor, a vehicle state sensor, and a position sensor. The recognition sensor recognizes (detects) a situation around the vehicle 1. Examples of the recognition sensor include a camera, a laser imaging detection and ranging (LIDAR), a radar, and the like. The vehicle state sensor detects a state of the vehicle 1. For example, the vehicle state sensor includes a speed sensor, an acceleration sensor, a yaw rate sensor, a steering angle sensor, and the like. The position sensor detects a position and an orientation of the vehicle 1. For example, the position sensor includes a GNSS sensor.
The communication device 30 communicates with the outside via a communication network. For example, the communication device 30 communicates with the infrastructure camera CAM, the management server, and the like.
The travel device 40 includes a steering device, a driving device, and a braking device. The steering device steers wheels. For example, the steering device includes an electric power steering (EPS) device. The driving device is a power source that generates a driving force. Examples of the driving device include an engine, an electric motor, an in-wheel motor, and the like. The braking device generates a braking force.
The control device (controller) 50 is a computer that controls the vehicle 1. The control device 50 includes one or more processors 60 (hereinafter, simply referred to as a processor 60 or processing circuitry) and one or more storage devices 70 (hereinafter, simply referred to as a storage device 70). The processor 60 executes a variety of processing. Examples of the processor 60 include a central processing unit (CPU), a graphics processing unit (GPU), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), and the like. The storage device 70 stores a variety of information. Examples of the storage device 70 include a hard disk drive (HDD), a solid state drive (SSD), a volatile memory, a non-volatile memory, and the like.
A vehicle control program 80 is a computer program for controlling the vehicle 1. The functions of the control device 50 may be implemented by a cooperation of the processor 60 executing the vehicle control program 80 and the storage device 70. The vehicle control program 80 is stored in the storage device 70. Alternatively, the vehicle control program 80 may be recorded on a non-transitory computer-readable recording medium.
The control device 50 acquires driving environment information 90 indicating a driving environment for the vehicle 1. The driving environment information 90 is stored in the storage device 70.
The driving environment information 90 includes surrounding situation information acquired based on the recognition sensor. For example, the surrounding situation information includes an image captured by the camera. As another example, the surrounding situation information may include point group information acquired by the LIDAR. The surrounding situation information includes object information regarding an object (target) around the vehicle 1. Examples of the object around the vehicle 1 include a pedestrian, another vehicle, an obstacle, a white line, a landmark, a traffic light, and the like. The object information indicates a relative position and a relative speed of the object with respect to the vehicle 1.
The driving environment information 90 further includes position information indicating the vehicle position PV (i.e., the current position of the vehicle 1) in the absolute coordinate system. The control device 50 acquires the position information from a result of detection by the position sensor. The control device 50 may acquire highly accurate position information by commonly-known localization processing that uses the object information and map information.
The driving environment information 90 further includes vehicle state information detected by the vehicle state sensor.
Moreover, the control device 50 executes vehicle travel control that controls travel of the vehicle 1. The vehicle travel control includes steering control, acceleration control, and deceleration control. The control device 50 executes the vehicle travel control by controlling the travel device 40 (i.e., the steering device, the driving device, and the braking device).
Further, the control device 50 may execute automated driving control that controls automated driving of the vehicle 1. Here, the automated driving means that at least a part of steering, acceleration, and deceleration of the vehicle 1 is automatically performed independently of a driver's driving operation. The control device 50 generates a travel plan of the vehicle 1 based on the driving environment information 90. Examples of the travel plan include maintaining a current travel lane, making a lane change, making a right or left turn, avoiding collision with an object, and the like. Further, the control device 50 generates a target trajectory for achieving the travel plan. The target trajectory includes a target position and a target speed of the vehicle 1. Then, the control device 50 executes the above-described vehicle travel control so that the vehicle 1 follows the target trajectory.
A vehicle control program 140 is a computer program for controlling the vehicle 1. The functions of the vehicle control system 100 may be implemented by a cooperation of the processor 110 executing the vehicle control program 140 and the storage device 120. The vehicle control program 140 is stored in the storage device 120. Alternatively, the vehicle control program 140 may be recorded on a non-transitory computer-readable recording medium.
The vehicle control system 100 may be partially the same as the in-vehicle system 10 described above. That is, the processor 60 and the processor 110 may be the same. The storage device 70 and the storage device 120 may be the same. The vehicle control program 80 and the vehicle control program 140 may be the same.
The processor 110 acquires a variety of information 200. The variety of information 200 is stored in the storage device 120. The variety of information 200 includes map information 210, vehicle information 220, determination region information 230, infrastructure camera information 240, the image 250, target information 260, and the like.
The map information 210 is map information of the predetermined area AR in which the vehicle 1 travels. The map information 210 is provided to the vehicle control system 100 in advance.
The vehicle information 220 is information related to the vehicle 1 being the control target. The vehicle information 220 includes at least the position information indicating the vehicle position PV (i.e., the current position of the vehicle 1) in the absolute coordinate system. The position information is obtained from the in-vehicle system 10. The vehicle information 220 may further include information on the direction of travel X of the vehicle 1. The direction of travel X of the vehicle 1 can be determined based on the vehicle position PV. The vehicle information 220 may further include speed information of the vehicle 1. The speed information is also obtained from the in-vehicle system 10.
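One possible way to determine the direction of travel X from the vehicle position PV is a heading computation over two consecutive position samples. This is an illustrative sketch, not the disclosed method.

```python
import math

def direction_of_travel(prev_pv, cur_pv):
    """Heading angle in radians (world frame, measured from the +x axis)
    estimated from two consecutive vehicle positions PV.
    Returns None if the vehicle has not moved between samples."""
    dx = cur_pv[0] - prev_pv[0]
    dy = cur_pv[1] - prev_pv[1]
    if dx == 0 and dy == 0:
        return None
    return math.atan2(dy, dx)
```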
The determination region information 230 is information regarding the determination region D used for the risk avoidance control. For example, the determination region information 230 includes a setting policy for setting the determination region D. The determination region information 230 may include a set value of the front distance Lf from the vehicle position PV to the front boundary DBf along the first direction S. The determination region information 230 may include a set value of the rear distance Lr from the vehicle position PV to the rear boundary DBr along the first direction S.
The infrastructure camera information 240 indicates the installation position, the installation direction, the angle of view, and the like for each infrastructure camera CAM. The infrastructure camera information 240 is provided to the vehicle control system 100 in advance.
The image 250, which is captured (taken) by the infrastructure camera CAM, indicates the situation of the predetermined area AR and its surroundings. The processor 110 acquires the image 250 from the infrastructure camera CAM via the communication device 130.
The target information 260 is information on the target TGT present in the predetermined area AR. The processor 110 detects the target TGT present in the predetermined area AR based on the image 250 captured by the infrastructure camera CAM. For example, the processor 110 recognizes the target TGT shown in the image 250 by using the image recognition AI. The processor 110 calculates the position of the target TGT in the absolute coordinate system based on the infrastructure camera information 240 and the in-image position of the target TGT in the image 250. Additionally, the target information 260 may include information on a target detected by the in-vehicle system 10.
It should be noted that the vehicle 1 being the control target and the other target TGT are distinguished from each other. For example, comparing the vehicle position PV with the position of the target TGT makes it possible to distinguish the vehicle 1 from the other targets TGT. For example, a target region covering the detected target TGT and the vicinity thereof is set. When the vehicle position PV is within the target region, the target TGT is regarded as the vehicle 1 being the control target.
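This ego-filtering step can be sketched as follows, assuming for illustration a circular target region with a fixed radius (the radius value is an assumption).

```python
def filter_out_ego(targets, pv, radius=2.0):
    """Drop any detected target whose target region contains the vehicle
    position PV: such a detection is regarded as the controlled vehicle 1
    itself. `radius` is an assumed target-region radius in meters.
    `targets` is a list of (x, y) positions in the absolute frame."""
    others = []
    for tx, ty in targets:
        if (tx - pv[0]) ** 2 + (ty - pv[1]) ** 2 > radius ** 2:
            others.append((tx, ty))  # genuinely another target TGT
    return others
```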
The processor 110 controls the vehicle 1. When the vehicle control system 100 includes the in-vehicle system 10, the processor 110 controls the vehicle 1 by controlling the travel device 40. When the vehicle control system 100 is provided outside the in-vehicle system 10, the processor 110 remotely controls the vehicle 1 by issuing a control instruction to the in-vehicle system 10 via the communication device 130.
In particular, the processor 110 performs the risk avoidance control in order to avoid a collision between the target TGT and the vehicle 1.
In Step S100, the processor 110 acquires the variety of information 200.
In Step S110, the processor 110 sets the determination region D based on the map information 210, the vehicle information 220, and the determination region information 230. The processor 110 may set the front determination region Df and the rear determination region Dr in consideration of the direction of travel X of the vehicle 1.
In Step S120, the processor 110 determines whether or not the target TGT is present in the determination region D on the basis of the target information 260 and the determination region D. For example, the processor 110 sets a target region that covers the detected target TGT and the vicinity thereof. For example, the target region has a circular shape. As a size of the target TGT becomes larger, the target region also becomes larger. The target region may become larger as a moving speed of the target TGT becomes higher. The target region may have a shape in which a direction of movement of the target TGT is widened. When the target region and the determination region D at least partially overlap each other, the processor 110 determines that the target TGT is present in the determination region D.
In a case where the target TGT is not present in the determination region D (Step S120; No), the processing proceeds to Step S130. In Step S130, the processor 110 executes normal vehicle travel control.
On the other hand, in a case where the target TGT is present in the determination region D (Step S120; Yes), the processing proceeds to Step S140. In Step S140, the processor 110 performs deceleration control for decelerating the vehicle 1. The processor 110 may decelerate to stop the vehicle 1.
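Steps S100 through S140 can be sketched as a single control cycle. The sketch below simplifies the determination region D to a box aligned with a +x direction of travel; the distances are assumed example values, and the function names are illustrative.

```python
def risk_avoidance_step(vehicle_pos, targets, lf=30.0, lr=10.0, half_width=2.0):
    """One cycle of Steps S100-S140, simplified to a box-shaped region.

    The determination region D spans `lf` ahead of and `lr` behind the
    vehicle along the +x travel direction, and `half_width` to each side.
    All distances are assumed example values in meters.
    """
    x, y = vehicle_pos
    for tx, ty in targets:                      # Step S120: presence check
        if x - lr <= tx <= x + lf and abs(ty - y) <= half_width:
            return "decelerate"                 # Step S140: deceleration control
    return "normal_travel"                      # Step S130: normal travel control
```

A target 20 m ahead within the lane triggers deceleration; one 50 m ahead lies beyond the front boundary DBf and does not.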
As described above, according to the present embodiment, the infrastructure camera CAM is used for controlling the vehicle 1. Using the infrastructure camera CAM makes it possible to detect the target TGT that cannot be detected by the in-vehicle sensor such as the in-vehicle camera. For example, even a target TGT present beyond a curve in front of the vehicle 1 can be detected by the infrastructure camera CAM. The control of the vehicle 1 is then performed in consideration of the target TGT that cannot be detected by the in-vehicle sensor. More specifically, when the target TGT is present in the determination region D around the vehicle 1, the vehicle 1 is decelerated. Since the target TGT that cannot be detected by the in-vehicle sensor is taken into consideration as well, the vehicle 1 can be decelerated well in advance, which reduces the need for sudden braking. Therefore, the safety in controlling the vehicle 1 is improved.
Other embodiments will be described below. The configurations of the in-vehicle system 10 and the vehicle control system 100 are the same as those in the first embodiment.
According to pattern (A) shown in
In view of the above, according to the second embodiment, as shown in Pattern (B) in
The processor 110 may decrease the update frequency of the front boundary DBf as the speed of the vehicle 1 becomes lower. The speed of the vehicle 1 is obtained from the vehicle information 220. The update frequency of the front boundary DBf may decrease monotonically or stepwise in accordance with the speed of the vehicle 1. For example, the update frequency when the speed of the vehicle 1 is equal to or higher than a predetermined threshold is a first update frequency, and the update frequency when the speed of the vehicle 1 is lower than the predetermined threshold is a second update frequency lower than the first update frequency. Changing the update frequency of the front boundary DBf in consideration of the speed of the vehicle 1 makes it possible to more effectively reduce the processing load on the processor 110.
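The stepwise variant of the speed-dependent update frequency can be sketched as follows; the threshold and the two frequencies are assumed example values, not values from the disclosure.

```python
def boundary_update_frequency(speed, threshold=5.0,
                              first_freq_hz=10.0, second_freq_hz=2.0):
    """Stepwise update frequency for the front boundary DBf.

    At or above the speed threshold (m/s), the boundary is refreshed at
    the first (higher) frequency; below it, at the second (lower) one.
    All numeric values are assumed examples.
    """
    return first_freq_hz if speed >= threshold else second_freq_hz
```

A monotonic variant would replace the single threshold with a continuous function of speed; the stepwise form matches the two-frequency example in the text.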
Block arrangement information indicates an arrangement of a plurality of blocks BK in the absolute coordinate system. That is, the block arrangement information indicates positions of the front block boundary BKBf and the rear block boundary BKBr of each block BK in the absolute coordinate system. The block arrangement information is registered in the map information 210 in advance.
The processor 110 sets and updates the front boundary DBf of the front determination region Df based on the vehicle position PV and the block arrangement information. For the sake of description, a block BK in which the vehicle 1 is currently present is hereinafter referred to as a “first block BK1.” A block BK adjacent to and in front of the first block BK1 is hereinafter referred to as a “second block BK2.” That is, the second block BK2 is located in the direction of travel X of the vehicle 1 when viewed from the first block BK1. The processor 110 recognizes the first block BK1 and the second block BK2 based on the vehicle position PV and the block arrangement information. Then, the processor 110 sets the front block boundary BKBf of the second block BK2 as the front boundary DBf of the front determination region Df.
In the example shown in
The vehicle 1 eventually reaches the front block boundary BKBf (i) of the block BK (i). The processor 110 determines whether or not a predetermined portion of the vehicle 1 has passed the front block boundary BKBf (i) of the block BK (i) based on the vehicle position PV and the block arrangement information. The predetermined portion is arbitrary. When the predetermined portion of the vehicle 1 passes the front block boundary BKBf (i) of the block BK (i), the processor 110 determines that the vehicle 1 has passed the front block boundary BKBf (i) and has entered the adjacent block BK (i+1).
When the vehicle 1 enters the block BK (i+1), the block BK (i+1) becomes a new first block BK1, and the block BK (i+2) becomes a new second block BK2. That is, the first block BK1 and the second block BK2 are updated. Therefore, the processor 110 sets the front block boundary BKBf (i+2) of the block BK (i+2) as a new front boundary DBf. That is, the front boundary DBf of the front determination region Df is updated.
Thereafter, the same processing is repeated. Each time the vehicle 1 passes the front block boundary BKBf of the first block BK1, the processor 110 updates the first block BK1, the second block BK2, and the front boundary DBf of the front determination region Df. This method makes it possible to set the update frequency of the front boundary DBf of the front determination region Df to be lower than the update frequency of the vehicle position PV.
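The block-based update can be illustrated in one dimension along the first direction S, with the front block boundaries BKBf given as sorted arc-length positions. This is an illustrative sketch under that assumption; the function name and the last-block fallback are assumptions.

```python
def update_front_boundary(vehicle_s, block_boundaries):
    """Front boundary DBf for a vehicle at arc-length `vehicle_s`.

    `block_boundaries` are the sorted front block boundaries BKBf.
    The vehicle's current block BK1 is the first block it has not yet
    exited; DBf is set to the front boundary of the next block BK2, so
    DBf changes only when the vehicle crosses into a new block.
    """
    for i, boundary in enumerate(block_boundaries):
        if vehicle_s < boundary:  # vehicle is inside block i (BK1)
            # Front boundary of block i+1 (BK2); fall back to the last
            # boundary when no further block exists (an assumption).
            if i + 1 < len(block_boundaries):
                return block_boundaries[i + 1]
            return boundary
    return block_boundaries[-1]
```

Between two crossings the returned DBf is constant even though the vehicle position PV keeps updating, which is exactly the load-reducing property described above.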
As shown in
As described above, according to the second embodiment, the update frequency of the front boundary DBf of the front determination region Df is set to be lower than the update frequency of the vehicle position PV. This makes it possible to reduce the processing load on the processor 110. In particular, when a large number of vehicles 1 are controlled simultaneously in parallel based on the images 250 captured by a large number of infrastructure cameras CAM, it is preferable to reduce the processing load.
As shown in
In view of the above, according to the third embodiment, the rear determination region Dr is set to be narrower than the front determination region Df. More specifically, the processor 110 sets the front determination region Df and the rear determination region Dr such that the rear distance Lr is shorter than the front distance Lf (Lf>Lr). The front distance Lf and the rear distance Lr may be predetermined values, respectively.
According to the third embodiment, it is possible to suppress excessive activation of the deceleration control while ensuring the safety. As a result, the continuity of travel is secured. Further, the user of the vehicle 1 is prevented from feeling a sense of discomfort.
A combination of the second embodiment and the third embodiment described above is also possible. In this case, the update frequency of the front boundary DBf of the front determination region Df is set to be lower than the update frequency of the vehicle position PV. In addition, the rear determination region Dr is set to be narrower than the front determination region Df. As a result, both effects of reducing the processing load and suppressing the excessive activation of the deceleration control can be achieved.
In the fifth embodiment, an update frequency of the rear boundary DBr of the rear determination region Dr is set to be lower than the update frequency of the vehicle position PV. A description overlapping with the above-described second embodiment will be omitted as appropriate.
The processor 110 may decrease the update frequency of the rear boundary DBr as the speed of the vehicle 1 becomes lower. This makes it possible to reduce the processing load more effectively.
According to the fifth embodiment, it is possible to reduce the processing load on the processor 110. In particular, when a large number of vehicles 1 are controlled simultaneously in parallel based on the images 250 captured by a large number of infrastructure cameras CAM, it is preferable to reduce the processing load. A combination of the fifth embodiment and any of the above-described embodiments is also possible.
Number | Date | Country | Kind |
---|---|---|---
2023-090363 | May 2023 | JP | national |