With the advancement of technology, a variety of vehicles are used in our daily lives. No matter the kind of vehicle, however, someone must drive it. To avoid traffic accidents caused by human error, the sophisticated technology of "self-driving" emerged. Self-driving is not just a cool symbol in science-fiction movies; it can improve our future life.
To put it simply, a self-driving vehicle is equipped with many sophisticated sensors that sense the surrounding environment and navigate automatically without human intervention. If a self-driving car is compared to a person, the sensors are like eyes that perceive the surrounding road environment. After the eyes see, a decision-making mechanism like the brain, the built-in computing system, calculates and selects the driving route. Then a control system, which acts like the hands and feet, operates functions such as the brakes and the accelerator.
Self-driving cars can sense their environment using technologies such as radar, lidar, sensors, GPS, and computer vision. Advanced control systems convert sensor data into appropriate navigation paths and identify obstacles and relevant signage. Through simultaneous localization and mapping, the map information is updated so that the vehicle can continuously track its location. A fleet composed of multiple self-driving vehicles can effectively reduce traffic pressure and thus improve the efficiency of the transportation system.
In the United States, the National Highway Traffic Safety Administration has proposed a formal five-level classification system for self-driving.
At present, most of the self-driving vehicles with a high market share are level 2. When such a vehicle turns on its level 2 assisted-driving system, the vehicle can autonomously complete lane changes, ramp entries, and ramp exits under the driver's supervision, just like a person driving a car. In this state, the car has the ability to sense (collect road information), make decisions (know how to drive), and execute (carry out the planned strategy).
Recently, however, it has been found that level 2 self-driving vehicles, after more than ten kilometers of assisted driving on highways, have suddenly encountered a car rolled over on the road and, without any alarm or deceleration, struck it directly, leading to tragedy.
It is obvious that these vehicles have problems in sensing. Currently, most of these level 2 self-driving vehicles sense through forward-looking cameras and millimeter-wave radars to identify the objects ahead and determine road conditions. Regardless of whether they use rule-based vision algorithms or deep learning, these approaches have an inherent, unavoidable flaw: some targets cannot be recognized.
The first is a situation that has never been seen before. A training data set cannot completely cover all targets in the real world; covering even 10% is considered good, and the remaining 90%, never having been seen, cannot be recognized. Moreover, in the real world, new irregular targets appear every moment, such as cars broken down on the road.
The second situation is an image lacking texture features. It is like putting a piece of white paper in front of the camera: undoubtedly, it is impossible to identify what the object is. At a certain moment, the side of a large truck with a high chassis, or a white wall, looks just like white paper. The deep-learning-based machine vision is then effectively blind, and the car hits the target without decelerating.
To make up for this flaw, millimeter-wave radar is introduced alongside vision. However, millimeter-wave radar directly filters out stationary objects or suspected stationary objects through algorithms to avoid erroneous reactions. Unfortunately, a truck that is stationary or moving very slowly is filtered out as well, and accidents may still occur.
Accordingly, self-driving vehicles cannot rely on such sensors to identify slow-moving or even stopped obstacles, and they may collide with those obstacles directly, without warning or slowing down. This is the problem to be solved by those skilled in the art.
An objective of the present invention is to provide a method for detecting stationary objects by vehicles moving at high speed, so as to prevent self-driving vehicles from failing to recognize obstacles and hitting them at full speed, causing irreparable tragedies.
To achieve the above objective, the present invention provides a method for detecting stationary objects by vehicles moving at high speed. The method is applied to a vehicle, which includes an on-board computer, a depth image capture unit, a laser image capture unit, and an optical image capture unit. The on-board computer is electrically connected to the depth image capture unit, the laser image capture unit, and the optical image capture unit. The on-board computer executes the following steps: when the vehicle moves at a first speed, the depth image capture unit, the laser image capture unit, and the optical image capture unit independently capturing a depth image, a laser image, and an optical image of a front side of the vehicle, respectively; the on-board computer receiving and merging the depth image, the laser image, and the optical image according to a fusion algorithm to give a merged image; the on-board computer analyzing and judging a stationary object in the merged image according to an image optical flow method and generating stationary object information including a relative distance between the stationary object and the vehicle and a second speed of the stationary object; and when the relative distance is smaller than a distance threshold value and the second speed is smaller than a speed threshold value, the on-board computer generating an alarm message. The depth image capture unit, the laser image capture unit, and the optical image capture unit each capture an angular range of the front side of the vehicle. Thereby, slow-moving or stationary obstacles can be identified, preventing self-driving vehicles from failing to identify obstacles and causing accidents.
According to an embodiment of the present invention, the depth image capture unit, the laser image capture unit, and the optical image capture unit independently perform capture in an angular range, respectively.
According to an embodiment of the present invention, the angular range extends from a central line of the front side of the vehicle along the horizontal plane to both sides by 120 degrees.
According to an embodiment of the present invention, the alarm message includes a sound message and/or a picture message.
According to an embodiment of the present invention, after the step of the on-board computer generating an alarm message when the relative distance is smaller than the distance threshold value and the second speed is smaller than the speed threshold value, the method further comprises a step of the on-board computer controlling a brake unit to brake.
According to an embodiment of the present invention, the distance threshold value is 100, 70, 50, or 20 meters.
According to an embodiment of the present invention, the speed threshold value is 5 kilometers per hour.
According to an embodiment of the present invention, the capture range of the depth image capture unit is within 20 meters from the vehicle; the capture range of the laser image capture unit is within 100 meters from the vehicle; and the capture range of the optical image capture unit is within 200 meters from the vehicle.
According to an embodiment of the present invention, the first speed of the vehicle is greater than or equal to 60 kilometers per hour.
Accordingly, the present invention provides a method for detecting stationary objects by vehicles moving at high speed to solve the problem in the field.
In order that the structure, characteristics, and effectiveness of the present invention be further understood and recognized, a detailed description is provided below along with embodiments and accompanying figures.
Most commonly known self-driving vehicles use forward-looking cameras and millimeter-wave radars to sense the environment and collect road information. The road information is then transmitted to the control unit, and the control unit controls the vehicle's acceleration, deceleration, or steering. However, forward-looking cameras cannot judge unidentified objects or images without texture features. Millimeter-wave radars have a flaw when detecting stationary targets: they can detect stationary targets including buildings, vehicles, and pedestrians, but they cannot distinguish and identify these targets well, and they are overly sensitive to metal objects. Therefore, to avoid false operations, after the radar reflection data is obtained, some stationary objects or suspected stationary objects are directly filtered out by algorithms to avoid erroneous responses. Unfortunately, this leaves the autonomous driving system unable to recognize crashed cars, slowly moving construction vehicles, or large trucks with high chassis.
The present invention adopts three different image capture units on a high-speed moving vehicle, namely, a depth image capture unit, a laser image capture unit, and an optical image capture unit, disposed on the front side of the vehicle. The on-board computer synthesizes and analyzes the images captured by the three units to determine whether a stationary object is present, the distance between the stationary object and the vehicle, and the speed of the stationary object, and displays these on a display unit to remind the driver to pay attention to stationary objects and avoid accidents.
In the following description, various embodiments of the present invention are described in detail using figures. Nonetheless, the concepts of the present invention can be embodied in various forms; those embodiments are not used to limit the scope and range of the present invention.
First, please refer to the accompanying flowchart. The method for detecting stationary objects by vehicles moving at high speed comprises the following steps:
Step S10: When the vehicle moves at a first speed, the depth image capture unit, the laser image capture unit, and the optical image capture unit independently capturing a depth image, a laser image, and an optical image of a front side of the vehicle, respectively;
Step S20: The on-board computer receiving and merging the depth image, the laser image, and the optical image according to a fusion algorithm to give a merged image;
Step S30: The on-board computer analyzing and judging a stationary object in the merged image according to an image optical flow method and generating stationary object information including a relative distance between the stationary object and the vehicle and a second speed of the stationary object; and
Step S40: When the relative distance is smaller than a distance threshold value and the second speed is smaller than a speed threshold value, the on-board computer generating an alarm message.
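For illustration only, the following is a minimal Python sketch of how steps S10 to S40 might be orchestrated. The capture, fusion, and optical-flow routines are hypothetical stand-ins passed in as callables, not the disclosed implementations; only the control flow and the disclosed threshold values mirror the method above.

    from dataclasses import dataclass

    @dataclass
    class StationaryObjectInfo:
        relative_distance_m: float   # relative distance between object and vehicle
        object_speed_kmh: float      # the "second speed" of the stationary object

    DISTANCE_THRESHOLD_M = 100.0     # disclosed example thresholds: 100, 70, 50, or 20 m
    SPEED_THRESHOLD_KMH = 5.0        # disclosed speed threshold value
    FIRST_SPEED_MIN_KMH = 60.0       # the method targets vehicles at 60 km/h or more

    def run_detection_cycle(vehicle_speed_kmh, capture_depth, capture_laser,
                            capture_optical, fuse, analyze_optical_flow, raise_alarm):
        # S10: only capture while the vehicle moves at the first (high) speed.
        if vehicle_speed_kmh < FIRST_SPEED_MIN_KMH:
            return None
        depth, laser, optical = capture_depth(), capture_laser(), capture_optical()
        # S20: merge the three images with the fusion algorithm.
        merged = fuse(depth, laser, optical)
        # S30: optical-flow analysis yields the stationary-object information.
        info = analyze_optical_flow(merged)
        # S40: generate an alarm when both thresholds are undershot.
        if (info.relative_distance_m < DISTANCE_THRESHOLD_M
                and info.object_speed_kmh < SPEED_THRESHOLD_KMH):
            raise_alarm(info)
        return info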
Next, please refer to the accompanying figures, which show the vehicle 1 to which the method is applied.
Next, the steps will be described in detail in the following.
Please refer to the accompanying figures. In step S10, while the vehicle 1 moves at the first speed, the depth image capture unit, the laser image capture unit, and the optical image capture unit independently capture the depth image, the laser image, and the optical image of the front side of the vehicle 1, respectively. Please refer again to the accompanying figures. In step S20, the on-board computer 12 receives the depth image, the laser image, and the optical image and merges them according to the fusion algorithm to give the merged image.
According to the present embodiment, the fusion algorithm as described above is performed through a characteristic function f(x, y) shown in Equation (1): when x and y satisfy a given condition, the value of the characteristic function is 1; otherwise, it is 0:

    f(x, y) = 1 if (x, y) satisfies the condition, and 0 otherwise  (1)
The hidden state corresponding to a given observation is determined by the (observation, state) context. Introducing characteristic functions makes it possible to select environmental characteristics (combinations of observations or states); in effect, characteristics (observation combinations) replace raw observations, avoiding the Naive-Bayes-style observational independence assumption of the hidden Markov model (HMM).
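As a toy illustration only, the following sketch defines two hypothetical indicator feature functions over made-up (observation, state) pairs and computes their empirical expectation values in the sense of Equation (2) below; the feature names and the sample are invented for this example.

    # Indicator ("characteristic") feature functions in the sense of Equation (1):
    # each returns 1 when the (observation, state) pair satisfies its condition.
    def f_near_stationary(x, y):
        return 1 if x == "near" and y == "stationary" else 0

    def f_far_moving(x, y):
        return 1 if x == "far" and y == "moving" else 0

    features = [f_near_stationary, f_far_moving]

    # Empirical expectation of each feature over a toy labelled sample (Equation (2)).
    sample = [("near", "stationary"), ("near", "stationary"), ("far", "moving")]
    empirical = [sum(f(x, y) for x, y in sample) / len(sample) for f in features]
    print(empirical)  # [0.666..., 0.333...]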
According to the training data D = {(x^(1), y^(1)), (x^(2), y^(2)), . . . , (x^(N), y^(N))} of size N, an empirical expectation value (as shown in Equation (2)) and a model expectation value (as shown in Equation (3)) are given:

    E_p̃(f) = Σ_{x,y} p̃(x, y) f(x, y)  (2)

    E_p(f) = Σ_{x,y} p̃(x) p(y|x) f(x, y)  (3)

where p̃ denotes the empirical distribution obtained from D. The learning of the maximum entropy model is equivalent to a constrained optimization.
Assume that the empirical expectation value is equal to the model expectation value for every characteristic function f_i. Then the set C of admissible conditional probability distributions is given by the following Equation (4):

    C = { p | E_p(f_i) = E_p̃(f_i), i = 1, 2, . . . , n }  (4)
The principle of maximum entropy holds that the only reasonable probability distribution derived from incomplete information (such as a limited amount of training data) is the one with the maximum entropy value under the constraints provided by that information. That is, the distribution of maximum entropy under the given constraints is the optimal distribution. Therefore, learning the maximum entropy model becomes a constrained convex optimization problem.
We usually use the Lagrangian duality principle to transform the original formulation into an unconstrained extreme-value problem:

    L(p, λ) = -H(p) + λ₀ (1 - Σ_y p(y|x)) + Σ_{i=1}^n λ_i (E_p̃(f_i) - E_p(f_i))
Find the partial derivative of the Lagrangian function with respect to p and set it equal to 0. By solving the equation, omitting intermediate steps, and rearranging terms, the following equations are given:

    p_λ(y|x) = (1 / Z_λ(x)) exp( Σ_{i=1}^n λ_i f_i(x, y) ),   Z_λ(x) = Σ_y exp( Σ_{i=1}^n λ_i f_i(x, y) )
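As a sketch under the same assumptions as the previous example, the exponential form just derived can be evaluated directly; the λ values below are arbitrary illustrative weights, not trained parameters.

    import math

    def p_lambda(y, x, lambdas, features, states):
        # p_λ(y|x) = exp(Σ_i λ_i f_i(x, y)) / Z_λ(x), with Z_λ(x) summed over all states.
        def score(candidate):
            return math.exp(sum(l * f(x, candidate) for l, f in zip(lambdas, features)))
        z = sum(score(candidate) for candidate in states)  # normalizer Z_λ(x)
        return score(y) / z

    # Toy usage with the features from the previous sketch:
    # p_lambda("stationary", "near", [1.0, 0.5], features, ["stationary", "moving"])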
Maximum-entropy Markov model (MEMM)
Use the distribution p(yᵢ | yᵢ₋₁, xᵢ) to replace the two conditional probability distributions in the HMM. It represents the probability of reaching the current state from the previous state given the current observation; that is, the current state is predicted based on the previous state and the current observation. Each such distribution function p_(yᵢ₋₁)(yᵢ | xᵢ) is an exponential model of the maximum-entropy form given above.
Assume a discrete probability distribution over points {p₁, p₂, . . . , pₙ}, for which the maximum information entropy is to be found. The entropy to be maximized is:

    H(p⃗) = -Σ_{i=1}^n pᵢ log pᵢ

The sum of pᵢ over all points i must be equal to 1:

    Σ_{i=1}^n pᵢ = 1
Find the point of maximum entropy by using Lagrange multipliers, where p⃗ ranges over all discrete probability distributions on {x₁, x₂, . . . , xₙ}. The following Lagrangian is formed:

    L(p⃗, λ) = -Σ_{i=1}^n pᵢ log pᵢ + λ ( Σ_{i=1}^n pᵢ - 1 )
It gives a system of n equations, k = 1, . . . , n, such that:

    ∂/∂pₖ [ -Σ_{i=1}^n pᵢ log pᵢ + λ ( Σ_{i=1}^n pᵢ - 1 ) ] = 0
Expanding these n equations gives the following equation:

    -(log pₖ + 1) + λ = 0,   i.e., pₖ = e^(λ-1)
It is apparent that all p*ₖ are equal (since they all depend on λ only). By using the following constraint:

    Σ_{k=1}^n p*ₖ = 1
It gives

    p*ₖ = e^(λ-1) = 1/n
Thereby, a uniform distribution is the maximum-entropy distribution, as stated in Equation (20):

    p*ₖ = 1/n,   k = 1, 2, . . . , n  (20)

Equation (20) gives the maximum-entropy distribution: under the normalization constraint alone, no outcome is favored over any other.
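The following minimal check, included only as an illustration, compares the entropy of the uniform distribution of Equation (20) with that of an arbitrary normalized distribution; any such distribution should score at most log n.

    import math
    import random

    def entropy(p):
        return -sum(pi * math.log(pi) for pi in p if pi > 0)

    n = 4
    uniform = [1.0 / n] * n
    random.seed(0)
    raw = [random.random() for _ in range(n)]
    other = [r / sum(raw) for r in raw]  # arbitrary normalized distribution
    print(entropy(uniform), entropy(other))  # entropy(uniform) = log(n) is the maximum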
Please refer to the accompanying figures. Furthermore, according to the present embodiment, the method for judging the second speed 260 of the stationary object 2 will now be illustrated.
The above-mentioned image optical flow method uses the Lucas-Kanade optical flow algorithm to calculate the position information of the stationary object 2.
First, use the image difference method to expand the image constraint equation with Taylor's formula, giving:

    I(x+δx, y+δy, z+δz, t+δt) = I(x, y, z, t) + (∂I/∂x)δx + (∂I/∂y)δy + (∂I/∂z)δz + (∂I/∂t)δt + H.O.T.

H.O.T. means the higher-order terms, which can be neglected for small displacements. From this equation, it is obtained that:

    (∂I/∂x)(δx/δt) + (∂I/∂y)(δy/δt) + (∂I/∂z)(δz/δt) + ∂I/∂t = 0  (24)
Vx = δx/δt, Vy = δy/δt, and Vz = δz/δt are the x, y, z components of the optical flow vector of I(x, y, z, t). ∂I/∂x, ∂I/∂y, ∂I/∂z, and ∂I/∂t are the differences in the corresponding directions at the point (x, y, z, t). Thereby, Equation (24) is converted to the following equation:

    I_x V_x + I_y V_y + I_z V_z = -I_t  (25)
Furthermore, rewrite Equation (25) in vector form as:

    ∇I · V⃗ = -I_t
Since Equation (24) contains three unknowns (Vx,Vy,Vz), the subsequent algorithm is used to calculate the unknowns.
First, assume that (Vx, Vy, Vz) is constant within a small cube of size m*m*m (m>1). Then a system of equations is given for the elements 1 . . . n, n = m³, as follows:

    I_x1 V_x + I_y1 V_y + I_z1 V_z = -I_t1
    I_x2 V_x + I_y2 V_y + I_z2 V_z = -I_t2
    . . .
    I_xn V_x + I_yn V_y + I_zn V_z = -I_tn
All of these equations contain the same three unknowns, forming an overdetermined system of equations; in other words, there is redundancy in the system. The system of equations can be expressed in matrix form as:

    A V⃗ = -b⃗,  where A is the n×3 matrix with rows (I_xi, I_yi, I_zi), V⃗ = (V_x, V_y, V_z)ᵀ, and b⃗ = (I_t1, . . . , I_tn)ᵀ
In order to resolve the redundancy of this overdetermined system, Equation (29) is obtained by using the least squares method:

    AᵀA V⃗ = Aᵀ(-b⃗)  (29)
Giving:

    V⃗ = (AᵀA)⁻¹ Aᵀ(-b⃗)  (32)
Substitute the result of Equation (32) into Equation (24) to estimate the relative distance 240 and the second speed 260 of the stationary object information 220.
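For illustration only, the least-squares solution of Equation (32) can be sketched with NumPy as follows; the gradient samples in the toy neighborhood are invented, and np.linalg.lstsq is used instead of forming (AᵀA)⁻¹ explicitly because it is numerically safer while returning the same least-squares solution.

    import numpy as np

    def lucas_kanade_flow(gradients, temporal):
        # Solve A V = -b in the least-squares sense (Equation (32)).
        # gradients: n x 3 array of (Ix, Iy, Iz) samples from the m*m*m neighborhood
        # temporal:  length-n array of It samples
        A = np.asarray(gradients, dtype=float)
        b = np.asarray(temporal, dtype=float)
        v, *_ = np.linalg.lstsq(A, -b, rcond=None)
        return v  # (Vx, Vy, Vz)

    # Toy neighborhood consistent with a pure x-translation: expect V ≈ (1, 0, 0).
    grads = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [1.1, 0.0, 0.1]]
    its = [-1.0, -0.9, -1.1]
    print(lucas_kanade_flow(grads, its))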
Finally, please refer to the accompanying figures for step S40.
For example, the alarm message 132 is displayed on the display unit 13 and further includes an alarm sound.
That is to say, when the relative distance 240 between the vehicle 1 and the stationary object 2 is 100 meters, light indicators will appear on the windshield; when the relative distance 240 between the vehicle 1 and the stationary object 2 is 70 meters, light indicators will flash on the windshield; when the relative distance 240 between the vehicle 1 and the stationary object 2 is 50 meters, light indicators will flash on the windshield along with low-frequency short alarm sound; and when the relative distance 240 between the vehicle 1 and the stationary object 2 is 20 meters, light indicators will flash on the windshield along with high-frequency short alarm sound. In this way, different distances from the stationary object 2 can be distinguished, and different methods are adopted to remind the driver of the vehicle 1. It is understandable that the aforementioned alarm message 132 and alarm sound are only examples. A person having ordinary skill in the art can adjust the alarm method according to the practical application situations.
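The tiered alarms just described can be summarized, purely as an illustrative sketch, by the following mapping from the relative distance 240 to an alarm behavior; the tier boundaries are the example distances given above.

    def alarm_level(distance_m):
        # Map the relative distance to the tiered alarm described above.
        if distance_m <= 20:
            return "flashing indicator + high-frequency short alarm sound"
        if distance_m <= 50:
            return "flashing indicator + low-frequency short alarm sound"
        if distance_m <= 70:
            return "flashing indicator on windshield"
        if distance_m <= 100:
            return "indicator on windshield"
        return "no alarm"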
Next, another embodiment will be provided. Please refer to the accompanying figures.
The steps S10 to S40 of the present embodiment have been illustrated as above and will not be repeated.
Step S42: The on-board computer controlling the brake unit to execute emergency braking.
Referring back to the accompanying figures, step S42 will now be described.
For example, when the relative distance 240 between the vehicle 1 and the stationary object 2 is 100 meters, light indicators appear on the windshield; when the relative distance 240 is 70 meters, the light indicators flash; and when the relative distance 240 is 50 meters, the light indicators flash along with a low-frequency short alarm sound. When the relative distance 240 is 20 meters, the light indicators flash along with a high-frequency short alarm sound, and the on-board computer 12 concurrently controls the braking unit 15 to execute emergency braking. By reminding the driver of the vehicle 1 in these different ways, the situation in which an inattentive driver fails to execute emergency braking is prevented.
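Continuing the earlier sketches, the automatic braking branch of this embodiment might look as follows; the brake-unit interface is hypothetical, and the names reuse the alarm_level function and SPEED_THRESHOLD_KMH constant from the previous sketches.

    def alarm_and_brake(distance_m, object_speed_kmh, brake_unit):
        # Second embodiment: at the closest tier, also execute emergency braking (S42).
        level = alarm_level(distance_m)  # tier mapping from the previous sketch
        if distance_m <= 20 and object_speed_kmh < SPEED_THRESHOLD_KMH:
            brake_unit.emergency_brake()  # hypothetical brake-unit interface
        return level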
The above-mentioned embodiments of the present invention provide a method for detecting stationary objects by vehicles moving at high speed. When the vehicle is traveling at high speed, a depth image capture unit, a laser image capture unit, and an optical image capture unit are used to sense and obtain environmental information within 200 meters in front of the vehicle. Autonomous driving accidents therefore do not occur because of undetected stationary or slowly moving objects in the environment. In addition, different alarms are provided at different distances between the vehicle and the stationary object for the driver's judgment. Furthermore, when the driver is not paying attention, the on-board computer executes emergency braking.
The present invention also uses depth images combined with lidar information to provide omnidirectional long- and short-distance depth information through information comparison, obtaining a more accurate model of the vehicle's external environment for distinguishing moving obstacles from stationary ones. This information enables better judgment by the on-board computer.
Accordingly, the present invention conforms to the legal requirements owing to its novelty, nonobviousness, and utility. However, the foregoing description presents only embodiments of the present invention and is not used to limit its scope and range. Equivalent changes or modifications made according to the shape, structure, features, or spirit described in the claims of the present invention are included in the appended claims of the present invention.
Priority application: No. 112146259, Nov. 2023, Taiwan (TW), national.