Trailer angle measurement method and device, and vehicle

Information

  • Patent Grant
  • Patent Number
    12,099,121
  • Date Filed
    Wednesday, June 9, 2021
  • Date Issued
    Tuesday, September 24, 2024
Abstract
The present disclosure provides a method and an apparatus for trailer angle measurement, as well as a vehicle, applied in a vehicle including a tractor and a trailer. At least one LiDAR is provided on each of two sides of the tractor. The method includes: obtaining an initial trailer model containing initial point cloud data; controlling the LiDARs to emit laser light; controlling each of the LiDARs to receive a corresponding laser point cloud reflected by the surface of the trailer; and calculating a trailer angle based on the received point cloud data and the initial point cloud data using a point cloud matching algorithm. With the embodiments of the present disclosure, fast and accurate measurement of a trailer angle can be achieved with a simple structure.
Description
TECHNICAL FIELD

The present disclosure relates to vehicle technology, and more particularly, to a method and an apparatus for trailer angle measurement, as well as a vehicle.


BACKGROUND

Currently, with the development of the logistics transportation industry, vehicles with tractors and trailers (hereinafter referred to as semi-trailers), such as container trucks, are becoming increasingly popular. As a heavy-duty transportation tool, a semi-trailer is more capable of improving the overall economic benefits of road transportation than an ordinary truck. With the development of autonomous driving technology, the trailer angle (e.g., in FIG. 1, which is a top view of a semi-trailer, the trailer angle refers to the angle α between the central axis of the tractor 11 and the central axis of the trailer 12), as a basis for autonomous driving planning and control, has become a focus of research.


Existing methods for trailer angle measurement can only measure small trailer angles. When the trailer angle is relatively large (e.g., beyond ±40°), it is difficult to measure the trailer angle accurately. Thus, how to implement fast and accurate measurement of a trailer angle with a simple structure has become a problem to be solved.


SUMMARY

The embodiments of the present disclosure provide a method and an apparatus for trailer angle measurement, as well as a vehicle, capable of achieving fast and accurate measurement of a trailer angle with a simple structure.


In order to achieve the above object, the following technical solutions are provided.


In an aspect, a method for trailer angle measurement is provided according to an embodiment of the present disclosure. The method is applied in a semi-trailer including a tractor and a trailer. At least one multi-line LiDAR is provided on each of two sides of the tractor. The method includes: obtaining, in a predetermined vehicle coordinate system, an initial trailer model corresponding to an initial trailer angle value, to obtain initial point cloud data in the initial trailer model; controlling the multi-line LiDAR provided on each of the two sides of the tractor to emit laser light, such that a surface of the trailer reflects the laser light emitted by the multi-line LiDAR; controlling each of the multi-line LiDARs to receive a corresponding laser point cloud reflected by the surface of the trailer; and calculating a trailer angle based on the corresponding laser point clouds received by the respective multi-line LiDARs and the initial point cloud data using a point cloud matching algorithm.


In another aspect, an apparatus for trailer angle measurement is provided according to an embodiment of the present disclosure. The apparatus is applied in a semi-trailer including a tractor and a trailer. At least one multi-line LiDAR is provided on each of two sides of the tractor. The apparatus is communicatively connected to the multi-line LiDARs. The apparatus includes a memory, a processor, and a computer program stored on the memory and executable by the processor. The processor is configured to, when executing the computer program, perform a process of trailer angle measurement. The process includes: obtaining, in a predetermined vehicle coordinate system, an initial trailer model corresponding to an initial trailer angle value, to obtain initial point cloud data in the initial trailer model; controlling the multi-line LiDAR provided on each of the two sides of the tractor to emit laser light, such that a surface of the trailer reflects the laser light emitted by the multi-line LiDAR; controlling each of the multi-line LiDARs to receive a corresponding laser point cloud reflected by the surface of the trailer; and calculating a trailer angle based on the corresponding laser point clouds received by the respective multi-line LiDARs and the initial point cloud data using a point cloud matching algorithm.


In yet another aspect, a computer-readable storage medium is provided according to an embodiment of the present disclosure. The computer-readable storage medium has a computer program stored thereon. The program, when executed by a processor, implements a process of trailer angle measurement. The process is applied in a semi-trailer including a tractor and a trailer. At least one multi-line LiDAR is provided on each of two sides of the tractor. The process includes: obtaining, in a predetermined vehicle coordinate system, an initial trailer model corresponding to an initial trailer angle value, to obtain initial point cloud data in the initial trailer model; controlling the multi-line LiDAR provided on each of the two sides of the tractor to emit laser light, such that a surface of the trailer reflects the laser light emitted by the multi-line LiDAR; controlling each of the multi-line LiDARs to receive a corresponding laser point cloud reflected by the surface of the trailer; and calculating a trailer angle based on the corresponding laser point clouds received by the respective multi-line LiDARs and the initial point cloud data using a point cloud matching algorithm.


In still another aspect, a vehicle is provided according to an embodiment of the present disclosure. The vehicle includes an apparatus for trailer angle measurement, a tractor, and a trailer. At least one multi-line LiDAR is provided on each of two sides of the tractor. The apparatus for trailer angle measurement is communicatively connected to the multi-line LiDARs. The apparatus for trailer angle measurement includes a memory, a processor, and a computer program stored on the memory and executable by the processor. The processor is configured to, when executing the computer program, perform a process of trailer angle measurement. The process includes: obtaining, in a predetermined vehicle coordinate system, an initial trailer model corresponding to an initial trailer angle value, to obtain initial point cloud data in the initial trailer model; controlling the multi-line LiDAR provided on each of the two sides of the tractor to emit laser light, such that a surface of the trailer reflects the laser light emitted by the multi-line LiDAR; controlling each of the multi-line LiDARs to receive a corresponding laser point cloud reflected by the surface of the trailer; and calculating a trailer angle based on the corresponding laser point clouds received by the respective multi-line LiDARs and the initial point cloud data using a point cloud matching algorithm.


With the method and apparatus for trailer angle measurement and the vehicle according to the embodiments of the present disclosure, at least one multi-line LiDAR is provided on each of two sides of the tractor. The multi-line LiDARs provided on two sides can radiate laser light to the surface of the trailer. Accordingly, a trailer angle can be calculated based on corresponding laser point clouds received by the respective multi-line LiDARs and the initial point cloud data using a point cloud matching algorithm. In addition, in the present disclosure, the trailer angle is calculated based on the corresponding laser point clouds received by the respective multi-line LiDARs and the initial point cloud data corresponding to the initial trailer angle value, instead of a laser point cloud of a single LiDAR, such that the accuracy of the result is greatly improved.


The other features and advantages of the present disclosure will be explained in the following description, and will become apparent partly from the description or be understood by implementing the present disclosure. The objects and other advantages of the present disclosure can be achieved and obtained from the structures specifically illustrated in the written description, claims and figures.


In the following, the solutions according to the present disclosure will be described in detail with reference to the figures and embodiments.





BRIEF DESCRIPTION OF THE DRAWINGS

In order to describe the technical solutions according to the embodiments of the present disclosure or the prior art more clearly, figures used in description of the embodiments or the prior art will be introduced briefly below. Obviously, the figures described below only illustrate some embodiments of the present disclosure, and other figures can be obtained by those of ordinary skill in the art based on these drawings without any inventive efforts.



FIG. 1 is a schematic diagram showing a trailer angle;



FIG. 2 is a first flowchart illustrating a method for trailer angle measurement according to an embodiment of the present disclosure;



FIG. 3 is a bottom view of a structure of a semi-trailer according to an embodiment of the present disclosure;



FIG. 4 is a schematic diagram showing an operation scenario in which only one LiDAR is provided at a rear part of a tractor according to an embodiment of the present disclosure;



FIG. 5 is a schematic diagram showing an operation scenario in which one multi-line LiDAR is provided on each of two sides of a tractor according to an embodiment of the present disclosure;



FIG. 6 is a first schematic diagram showing a distribution of multi-line LiDARs on two sides of a tractor according to an embodiment of the present disclosure;



FIG. 7 is a second schematic diagram showing a distribution of multi-line LiDARs on two sides of a tractor according to an embodiment of the present disclosure;



FIG. 8 is a second flowchart illustrating a method for trailer angle measurement according to an embodiment of the present disclosure;



FIG. 9 is a schematic diagram showing a collecting environment of an external multi-line LiDAR according to an embodiment of the present disclosure;



FIG. 10 is a schematic diagram showing a predetermined area range according to an embodiment of the present disclosure;



FIG. 11 is a schematic diagram of a LiDAR coordinate system established according to an embodiment of the present disclosure;



FIG. 12 is a schematic diagram showing a curve of angle data to be processed according to an embodiment of the present disclosure; and



FIG. 13 is a schematic diagram showing a structure of a vehicle according to an embodiment of the present disclosure.





DETAILED DESCRIPTION OF THE EMBODIMENTS

In the following, the solutions according to the embodiments of the present disclosure will be described clearly and completely with reference to the figures. Obviously, the embodiments described below are only some, rather than all, of the embodiments of the present disclosure. All other embodiments that can be obtained by those skilled in the art based on the embodiments described in the present disclosure without any inventive efforts are to be encompassed by the scope of the present disclosure.


In order to allow those skilled in the art to better understand the present disclosure, some of the technical terms used in the embodiments of the present disclosure will be explained as follows:


Point cloud: a set of point data on an outer surface of an object as obtained by a measuring apparatus during reverse engineering.


ICP: the Iterative Closest Point algorithm, mainly used for precise merging of depth images in computer vision by iteratively minimizing the distances between corresponding points of source data and target data. Many variants already exist, mainly focusing on how to obtain a better merging effect efficiently and robustly.


SVD: Singular Value Decomposition algorithm is a reliable method for solving translation vectors and rotation matrices.


As shown in FIG. 2, an embodiment of the present disclosure provides a method for trailer angle measurement, which is applied to a semi-trailer 20 as shown in FIG. 3 (FIG. 3 is a bottom view of the semi-trailer 20). The semi-trailer 20 includes a tractor 201 and a trailer 202. The tractor 201 and the trailer 202 are connected by a shaft 205, such that the trailer 202 can rotate with respect to the tractor 201. On each of two sides of the tractor 201 (such as the left and right sides of the front part of the tractor 201, i.e., the front face of the vehicle), at least one multi-line LiDAR 203 is provided (for example, one, two, or more multi-line LiDARs can be provided on each of the left and right sides, as shown in FIG. 3, which only shows one multi-line LiDAR on each of the left and right sides for the purpose of illustration).


The method for trailer angle measurement includes the following steps.


At step 301, in a predetermined vehicle coordinate system, an initial trailer model corresponding to an initial trailer angle value is obtained, to obtain initial point cloud data in the initial trailer model.


At step 302, the multi-line LiDAR provided on each of the two sides of the tractor is controlled to emit laser light, such that a surface of the trailer reflects the laser light emitted by the multi-line LiDAR.


At step 303, each of the multi-line LiDARs is controlled to receive a corresponding laser point cloud reflected by the surface of the trailer.


At step 304, a trailer angle is calculated based on the corresponding laser point clouds received by the respective multi-line LiDARs and the initial point cloud data using a point cloud matching algorithm.


Here, as shown in FIG. 4, a reflector 204 with a reflective surface can be fixedly provided at a front part of the trailer 202, and one LiDAR (typically a single-line LiDAR) 206 can be provided at a rear part of the tractor 201 (typically at the middle of the rear part), with the reflective surface facing the LiDAR 206. Since the single LiDAR 206 provided at the rear part of the tractor 201 can emit laser light towards the reflective surface of the reflector 204, it is generally sufficient for measuring the trailer angle. However, when the trailer angle is relatively large, as shown in FIG. 4, the reflector 204 may have moved to a side of the tractor 201 and entered a blind zone of the LiDAR 206, so that the laser light emitted by the LiDAR 206 cannot reach the reflective surface of the reflector 204, resulting in a failure in the measurement of the trailer angle. Therefore, in an embodiment of the present disclosure, the solution shown in FIG. 4 can be used to measure the trailer angle when the trailer angle is small (e.g., smaller than 40°), and the above steps 301 to 304 can be used when the trailer angle is large (e.g., larger than or equal to 40°). The present disclosure is not limited to this example; the above steps 301 to 304 may also be used to measure the trailer angle when the trailer angle is small.


In contrast, as shown in FIG. 5, in the present disclosure, at least one multi-line LiDAR 203 is provided on each of two sides of the tractor 201 (such as the left and right sides of the front part of the tractor 201, i.e., the front face of the vehicle); for example, one, two, or more multi-line LiDARs can be provided on each of the left and right sides (FIG. 5 only shows one multi-line LiDAR on each side for the purpose of illustration). When the reflector 204 moves to a side of the tractor 201, the laser light emitted by the multi-line LiDAR on at least one side can still reach the surface of the trailer and thus can be used for measurement of the trailer angle. In addition, in the present disclosure, the trailer angle is calculated based on the corresponding laser point clouds received by the respective multi-line LiDARs and the initial point cloud data corresponding to the initial trailer angle value, instead of a laser point cloud of a single LiDAR, such that the accuracy of the result is greatly improved.


Here, in order to illustrate that at least one multi-line LiDAR 203 is provided on each of two sides of the tractor 201, as shown in FIGS. 6 and 7, the distribution of the multi-line LiDARs 203 at the front part of the tractor 201 may include one multi-line LiDAR 203 (as shown in FIG. 6), or two multi-line LiDARs 203 (as shown in FIG. 7), provided on each of the left and right sides. However, the present disclosure is not limited to any of these examples. If cost is not a concern, even more multi-line LiDARs can be provided on the left and right sides.


In order to enable those skilled in the art to better understand the present disclosure, a more detailed embodiment will be given below. As shown in FIG. 8, an embodiment of the present disclosure provides a method for trailer angle measurement, which can be applied to the above semi-trailer 20 as shown in FIG. 3. The structure of the semi-trailer 20 has been described above and will not be repeated here. The method includes the following steps.


At step 401, in the predetermined vehicle coordinate system, a preconfigured external multi-line LiDAR is controlled to emit laser light to two sides of the trailer when the trailer angle is 0°, such that surfaces on the two sides of the trailer reflect the laser light emitted by the external multi-line LiDAR, respectively.


At step 402, the external multi-line LiDAR is controlled to receive laser point clouds reflected by the surfaces on the two sides of the trailer.


At step 403, the initial trailer model when the trailer angle is 0° is obtained based on the laser point clouds reflected by the surfaces on the two sides of the trailer using an ICP algorithm, to obtain the initial point cloud data in the initial trailer model.


In general, the above steps 401 to 403 can be implemented as follows. As shown in FIG. 9, a bracket 501 can be provided on one side of a lane in which the vehicle is moving, and an external multi-line LiDAR 502 can be arranged on the bracket 501 (e.g., at a height above 2 m). In this way, the vehicle can maintain the trailer angle at 0° while entering the range in which the external multi-line LiDAR 502 emits laser light, first from one side and then from the other side. Therefore, the laser light can reach both sides of the trailer, and the external multi-line LiDAR 502 can collect laser point clouds for both sides of the trailer. Then, using the ICP algorithm, the initial trailer model when the trailer angle is 0° can be obtained, and in turn the initial point cloud data in the initial trailer model can be obtained. Here, the initial point cloud data is denoted as {Pt}.


At step 404, the multi-line LiDAR provided on each of the two sides of the tractor is controlled to emit laser light, such that the surface of the trailer reflects the laser light emitted by the multi-line LiDAR.


It is to be noted here that the multi-line LiDARs and the external multi-line LiDAR used in the embodiment of the present disclosure may be e.g., 16-line, 32-line, or 64-line LiDARs, and the present disclosure is not limited to any of these examples.


At step 405, each of the multi-line LiDARs is controlled to receive a corresponding laser point cloud reflected by the surface of the trailer.


Typically, the sampling frequency of each multi-line LiDAR can be 10 Hz, but the present disclosure is not limited to this. In addition, each multi-line LiDAR can emit laser light with its own identification, so as to ensure that when the multi-line LiDAR receives the laser point cloud reflected by the surface of the trailer, it only receives its corresponding laser point cloud, without receiving the laser point clouds corresponding to other multi-line LiDARs. In addition, in order to sort the initial trailer angles corresponding to the respective multi-line LiDARs based on collecting time in a current period at step 408 later, the collecting times of the respective multi-line LiDARs are preferably different, so as to avoid two or more initial trailer angles being collected at the same time and becoming difficult to distinguish from one another. In addition, due to the structural characteristics of the semi-trailer 20, when there is a certain trailer angle, generally only the multi-line LiDAR(s) on one side can receive the laser point cloud reflected by the surface of the trailer.


At step 406, the corresponding laser point cloud received by each of the multi-line LiDARs is preprocessed to obtain an initial trailer angle corresponding to the multi-line LiDAR based on the initial point cloud data and the preprocessed laser point cloud using an ICP algorithm.


Here, step 406 can be implemented with the ICP algorithm as follows.


At Step 1, the corresponding laser point cloud received by each of the multi-line LiDARs can be Area of Interest (AOI) filtered to obtain a laser point cloud within a predetermined area range.


Here, the predetermined area range can be determined as follows. Based on the trailer angle in a previous period and the trailer size known in advance, an area range having a predetermined distance from the periphery of the trailer in the previous period can be determined as the predetermined area range.


For example, the following scheme can be used.


As shown in FIG. 10, when the trailer angle in the previous period and the trailer size known in advance are given, the current posture of the trailer can be obtained, such that an area range having a predetermined distance from the periphery of the trailer in the previous period can be determined as the predetermined area range. (The shaded part in the figure only describes the area on a plane; in fact, there may be similar areas above and below the trailer, such that a three-dimensional area range can be determined as the predetermined area range.) The reason for this is that the time between consecutive periods is relatively short (typically 0.1 s) and the trailer angle changes only slightly. Therefore, the trailer has a small change in the current period when compared with the previous period, and should be within the predetermined area range.
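For illustration only, this AOI filtering can be sketched as an oriented-box test in the vehicle coordinate system. The sketch below is not the patent's implementation; the function name, the box-shaped area, and the 0.5 m margin are assumptions:

```python
import numpy as np

def aoi_filter(points, prev_center, prev_yaw_deg, trailer_size, margin=0.5):
    """Keep only the laser points inside a box around the trailer's posture
    from the previous period, expanded by a margin (the predetermined
    distance) on every side; all quantities are in the vehicle coordinate
    system."""
    length, width, height = trailer_size   # trailer size known in advance
    yaw = np.radians(prev_yaw_deg)         # trailer angle of the previous period
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    # express the points in the trailer frame of the previous period
    local = (points - prev_center) @ rot
    half = np.array([length / 2 + margin, width / 2 + margin, height / 2 + margin])
    return points[np.all(np.abs(local) <= half, axis=1)]
```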


At Step 2, the laser point cloud within the predetermined area range can be noise filtered to obtain a noise filtered laser point cloud corresponding to each of the multi-line LiDARs to form a current trailer model corresponding to the multi-line LiDAR.


Here, the noise filtering can be used to filter out outliers, so as to obtain a more accurate laser point cloud. Here, a set of points in the current trailer model is denoted as {Pn}, where n is a positive integer. For example, {P1} represents a set of points in the first trailer model.
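One common way to realize such noise filtering is statistical outlier removal: discard points whose mean distance to their nearest neighbours is far above the cloud-wide average. A minimal numpy sketch (the neighbour count and standard-deviation ratio are assumptions, and a real implementation would use a spatial index instead of the brute-force distance matrix):

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours exceeds
    the cloud-wide mean by more than std_ratio standard deviations."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # column 0 is the self-distance
    keep = mean_knn <= mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep]
```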


At Step 3, for each point in the current trailer model, the point in the initial point cloud data set with the closest straight-line distance to it can be determined as its target point.


At Step 4, each point is moved to its corresponding target point using an SVD algorithm to obtain a next trailer model, and a model matrix at a current iteration can be generated.


Here, the model matrix at the current iteration includes a rotation matrix at the current iteration and a translation matrix at the current iteration. The rotation matrix at the current iteration includes trigonometric function relationships for current deflection angles of three coordinate axes in the vehicle coordinate system.


After Step 4, the method returns to Step 3 and iterates until the distance between each point in the current trailer model and its target point becomes smaller than a predetermined distance threshold, and then proceeds with Step 5.
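Steps 3 and 4, together with this convergence test, form the core ICP loop. The following numpy sketch illustrates them under simplifying assumptions (brute-force nearest-neighbour search, an illustrative threshold; the helper names are ours). It returns the model matrix of every iteration so that Step 5 below can multiply them together:

```python
import numpy as np

def svd_align(src, dst):
    """Step 4: the rigid transform (R, t) that best moves the points src onto
    their target points dst, solved with the SVD (Kabsch) method."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp_iterations(current, initial, dist_threshold=0.05, max_iter=50):
    """Repeat Steps 3 and 4 until every point of the current trailer model is
    within dist_threshold of its target point; collect each iteration's 4x4
    model matrix A_n."""
    pts, mats = current.copy(), []
    for _ in range(max_iter):
        # Step 3: closest point in the initial point cloud {Pt} for each point
        d = np.linalg.norm(pts[:, None, :] - initial[None, :, :], axis=2)
        if d.min(axis=1).max() < dist_threshold:
            break                   # converged: proceed to Step 5
        targets = initial[d.argmin(axis=1)]
        # Step 4: move each point to its target with the SVD algorithm
        R, t = svd_align(pts, targets)
        A_n = np.eye(4)
        A_n[:3, :3], A_n[:3, 3] = R, t
        mats.append(A_n)
        pts = pts @ R.T + t
    return mats
```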


In an embodiment of the present disclosure, after the multi-line LiDARs are installed, a LiDAR coordinate system is established. The position information of the laser point clouds of the multi-line LiDARs is based on the LiDAR coordinate system. For example, the LiDAR coordinate system shown in FIG. 11 can be established, although the present disclosure is not limited to this; another direction can alternatively be selected as the x-axis, with a direction perpendicular to the x-axis on the horizontal plane as the y-axis, and a direction perpendicular to both the x-axis and the y-axis as the z-axis (not shown, typically vertically upward). Further details are omitted here.


In this way, after each iteration, the obtained model matrix An can be denoted as







$$A_n = \begin{bmatrix} R_n & T_n \\ \mathbf{0}_{1\times 3} & 1 \end{bmatrix},$$





where Rn is the rotation matrix at the n-th iteration, which is a matrix with 3 rows and 3 columns, and Tn is the translation matrix at the n-th iteration, which is a matrix with 3 rows and 1 column. The rotation matrix Rn at the n-th iteration is typically composed of 3 rotations, i.e., Rn=Rxn·Ryn·Rzn, where Rxn is the rotation matrix for the x-axis in the vehicle coordinate system at the n-th iteration, Ryn is the rotation matrix for the y-axis in the vehicle coordinate system at the n-th iteration, and Rzn is the rotation matrix for the z-axis in the vehicle coordinate system at the n-th iteration. For example, in one embodiment:







$$
\begin{aligned}
R_n &= R_{xn} \cdot R_{yn} \cdot R_{zn} \\
&= \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_{xn} & -\sin\theta_{xn} \\ 0 & \sin\theta_{xn} & \cos\theta_{xn} \end{pmatrix}
\begin{pmatrix} \cos\theta_{yn} & 0 & \sin\theta_{yn} \\ 0 & 1 & 0 \\ -\sin\theta_{yn} & 0 & \cos\theta_{yn} \end{pmatrix}
\begin{pmatrix} \cos\theta_{zn} & -\sin\theta_{zn} & 0 \\ \sin\theta_{zn} & \cos\theta_{zn} & 0 \\ 0 & 0 & 1 \end{pmatrix} \\
&= \begin{pmatrix}
\cos\theta_{yn}\cos\theta_{zn} & -\cos\theta_{yn}\sin\theta_{zn} & \sin\theta_{yn} \\
\cos\theta_{xn}\sin\theta_{zn} + \sin\theta_{xn}\sin\theta_{yn}\cos\theta_{zn} & \cos\theta_{xn}\cos\theta_{zn} - \sin\theta_{xn}\sin\theta_{yn}\sin\theta_{zn} & -\sin\theta_{xn}\cos\theta_{yn} \\
\sin\theta_{xn}\sin\theta_{zn} - \cos\theta_{xn}\sin\theta_{yn}\cos\theta_{zn} & \sin\theta_{xn}\cos\theta_{zn} + \cos\theta_{xn}\sin\theta_{yn}\sin\theta_{zn} & \cos\theta_{xn}\cos\theta_{yn}
\end{pmatrix}
\end{aligned}
$$








where θxn is the deflection angle of the x-axis in the vehicle coordinate system at the n-th iteration, θyn is the deflection angle of the y-axis in the vehicle coordinate system at the n-th iteration, and θzn is the deflection angle of the z-axis in the vehicle coordinate system at the n-th iteration.
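For reference, the composition above is only a few lines of numpy. This sketch (with angles in radians) reproduces the expanded matrix:

```python
import numpy as np

def rotation_from_deflections(theta_x, theta_y, theta_z):
    """Compose R_n = R_xn . R_yn . R_zn from the three deflection angles."""
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz
```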


The translation matrix at the current iteration can be denoted as







$$T_n = \begin{pmatrix} a_n \\ b_n \\ c_n \end{pmatrix},$$





where an, bn, and cn are translation amounts at the n-th iteration.


At Step 5, the model matrices at respective iterations can be multiplied to obtain a result matrix.


Here, for example, if in total n iterations are performed, the model matrices at the respective iterations can be multiplied to obtain the result matrix as $A = A_n \cdot A_{n-1} \cdots A_2 \cdot A_1$. The result matrix includes a result rotation matrix and a result translation matrix. The result rotation matrix includes trigonometric function relationships for deflection angles of the three coordinate axes in the vehicle coordinate system. For example, the result rotation matrix R in the result matrix A can be represented as







$$R = \begin{bmatrix} m_{00} & m_{01} & m_{02} \\ m_{10} & m_{11} & m_{12} \\ m_{20} & m_{21} & m_{22} \end{bmatrix},$$





where m00 to m22 indicate trigonometric function relationships for deflection angles at respective positions in the result rotation matrix R. Since the trigonometric function relationships here are complicated, details thereof will be omitted here.
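In code, Step 5 is a single fold over the per-iteration matrices, multiplied in the order A = An·A(n−1)·…·A1. A minimal sketch, assuming the `icp_iterations` helper from the earlier sketch supplies the list of matrices:

```python
import numpy as np
from functools import reduce

def result_matrix(mats):
    """Multiply the model matrices A_1 ... A_n (oldest first) into the result
    matrix A = A_n * A_(n-1) * ... * A_1 and split off R and T."""
    A = reduce(lambda acc, A_i: A_i @ acc, mats, np.eye(4))
    return A[:3, :3], A[:3, 3]   # result rotation matrix, result translation
```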


At Step 6, the initial trailer angle corresponding to each of the multi-line LiDARs can be determined based on the result rotation matrix.


For example, when the result rotation matrix R is represented as







$$\begin{bmatrix} m_{00} & m_{01} & m_{02} \\ m_{10} & m_{11} & m_{12} \\ m_{20} & m_{21} & m_{22} \end{bmatrix},$$





the following applies:

$$\theta_x = \operatorname{atan2}(m_{12}, m_{22})$$
$$\cos\theta_y = \sqrt{m_{00}^2 + m_{01}^2}$$
$$\theta_y = \operatorname{atan2}(-m_{02}, \cos\theta_y)$$
$$\theta_z = \operatorname{atan2}(\sin\theta_x \cdot m_{20} - \cos\theta_x \cdot m_{10},\ \cos\theta_x \cdot m_{11} - \sin\theta_x \cdot m_{21})$$


where θx, θy, and θz are the deflection angles in the current period around the x-axis, y-axis, and z-axis, respectively. θz is equivalent to the initial trailer angle.
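Transcribed directly into numpy, the extraction reads as follows (a sketch that follows the atan2 relations above verbatim, with `np.arctan2` in the role of atan2):

```python
import numpy as np

def deflection_angles(R):
    """Recover (theta_x, theta_y, theta_z) in radians from the result
    rotation matrix R, following the relations above; theta_z corresponds
    to the initial trailer angle."""
    theta_x = np.arctan2(R[1, 2], R[2, 2])
    cos_theta_y = np.hypot(R[0, 0], R[0, 1])    # sqrt(m00^2 + m01^2)
    theta_y = np.arctan2(-R[0, 2], cos_theta_y)
    sx, cx = np.sin(theta_x), np.cos(theta_x)
    theta_z = np.arctan2(sx * R[2, 0] - cx * R[1, 0],
                         cx * R[1, 1] - sx * R[2, 1])
    return theta_x, theta_y, theta_z
```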


At step 407, the initial trailer angles corresponding to the respective multi-line LiDARs are screened in accordance with a predetermined determination condition.


Here, step 407 can be implemented using either of the following two schemes, or a combination of both.


Scheme 1


It can be determined whether a number of points in the noise filtered laser point cloud corresponding to each of the multi-line LiDARs is smaller than a predetermined number threshold.


If the number of points in the noise filtered laser point cloud corresponding to a multi-line LiDAR is smaller than the predetermined number threshold, the initial trailer angle corresponding to that multi-line LiDAR can be discarded; otherwise, it can be retained.


Here, if the number of points in the noise filtered laser point cloud corresponding to any multi-line LiDAR is smaller than the predetermined number threshold, it means that the current trailer model corresponding to that multi-line LiDAR is fitted using relatively few laser points. A model fitted from only a few points may yield a highly inaccurate initial trailer angle, which should therefore be discarded.


Scheme 2


When the current period is not the first period, it can be determined whether an angle deviation value between the initial trailer angle corresponding to each of the multi-line LiDARs in the current period and a Kalman filtered trailer angle obtained in a previous period is greater than a predetermined angle deviation threshold.


The initial trailer angle corresponding to any multi-line LiDAR in the current period can be discarded when the angle deviation value between the initial trailer angle corresponding to the multi-line LiDAR in the current period and the Kalman filtered trailer angle obtained in the previous period is greater than the predetermined angle deviation threshold.


The initial trailer angle corresponding to each multi-line LiDAR in the current period can be retained when the angle deviation value between the initial trailer angle corresponding to the multi-line LiDAR in the current period and the Kalman filtered trailer angle obtained in the previous period is smaller than or equal to the predetermined angle deviation threshold.


Here, since the time difference between two consecutive periods is small (typically only 0.1 seconds), the trailer angle will not change significantly. Therefore, if the angle deviation value between the initial trailer angle corresponding to the multi-line LiDAR in the current period and the Kalman filtered trailer angle obtained in the previous period is greater than the predetermined angle deviation threshold, it can be determined that the initial trailer angle corresponding to the multi-line LiDAR in the current period is invalid and should be discarded.
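Both schemes amount to two cheap checks per initial trailer angle. A minimal sketch (the threshold values and the candidate tuple layout are assumptions):

```python
def screen_initial_angles(candidates, prev_angle, min_points=100, max_dev=5.0):
    """Apply Scheme 1 and Scheme 2. candidates is a list of tuples
    (collecting_time, initial_angle, num_noise_filtered_points); prev_angle
    is the Kalman filtered trailer angle of the previous period, or None in
    the first period, in which case Scheme 2 is skipped."""
    kept = []
    for t, angle, n_points in candidates:
        if n_points < min_points:       # Scheme 1: fitted from too few points
            continue
        if prev_angle is not None and abs(angle - prev_angle) > max_dev:
            continue                    # Scheme 2: implausible jump between periods
        kept.append((t, angle))
    return kept
```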


At step 408, the screened initial trailer angles corresponding to the respective multi-line LiDARs are sorted based on collecting time in a current period to form angle data to be processed.


For example, suppose two multi-line LiDARs are provided on each of the left and right sides of the front face of the tractor (and assume that only the initial trailer angles corresponding to the two multi-line LiDARs on the left side are measured), the measurement period of the multi-line LiDARs is 0.1 s, and the difference between the measuring times of the two multi-line LiDARs is 0.05 s. Then, as shown in FIG. 12, the abscissa represents the time corresponding to the initial trailer angles and the ordinate represents the initial trailer angles in degrees; the entire ordinate data constitutes the angle data to be processed.


At step 409, the angle data to be processed is Kalman filtered to obtain a trailer angle of the current period.


The reason behind the Kalman filtering here is that the surface of the trailer is not flat, and the multi-line LiDARs themselves also have observation errors, resulting in some error in the calculated initial trailer angles. This phenomenon is manifested as jumps of ±1° to ±2° in the angles when the vehicle is stationary. In order to solve this problem, Kalman filtering can be used to reduce the noise in the initial trailer angles sorted based on the collecting time in the angle data to be processed, and to fuse the initial trailer angles with a simple kinematic model of angle changes to obtain a smooth output result. In this way, it can be ensured not only that the errors in the trailer angles measured in the stationary state are within ±0.5°, but also that the measured data changes accordingly in real time when the trailer angle changes rapidly, avoiding obvious delays.
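As a concrete illustration, step 409 can be realized with a textbook one-dimensional constant-velocity Kalman filter over the time-sorted angles. The sketch below is not the patent's exact filter; the process noise q and measurement noise r are assumptions to be tuned:

```python
import numpy as np

def kalman_smooth_angles(times, angles, q=4.0, r=1.0):
    """Fuse the sorted initial trailer angles with a constant-velocity
    kinematic model; returns the filtered angle at each collecting time."""
    x = np.array([angles[0], 0.0])               # state: [angle, angular rate]
    P = np.eye(2)
    H = np.array([[1.0, 0.0]])                   # we observe the angle only
    out, t_prev = [], times[0]
    for t, z in zip(times, angles):
        dt = max(t - t_prev, 1e-6)
        F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity model
        Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                          [dt**3 / 2, dt**2]])   # process noise
        x, P = F @ x, F @ P @ F.T + Q            # predict
        S = H @ P @ H.T + r                      # innovation covariance
        K = P @ H.T / S                          # Kalman gain
        x = x + (K * (z - H @ x)).ravel()        # update with measurement z
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
        t_prev = t
    return np.array(out)
```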


After the above step 409, the method can return to step 404 for the next period of trailer angle measurement.


It can be seen that the above steps 401 to 409 provide a method for fast and accurate measurement of a trailer angle with a simple structure.


In addition, an embodiment of the present disclosure also provides an apparatus for trailer angle measurement. The apparatus includes a memory, a processor, and a computer program stored on the memory and executable by the processor. The processor is configured to, when executing the computer program, implement the above method corresponding to FIG. 2 or 8.


In addition, an embodiment of the present disclosure also provides a computer-readable storage medium. The computer-readable storage medium has a computer program stored thereon. The program, when executed by a processor, implements the above method corresponding to FIG. 2 or 8.


In addition, as shown in FIG. 13, an embodiment of the present disclosure also provides a vehicle 50. The vehicle 50 includes the above apparatus 601 for trailer angle measurement, a tractor 201, and a trailer 202 (the trailer 202 in the present disclosure may carry a container). At least one multi-line LiDAR 203 is provided on each of two sides of the tractor 201 (such as the left and right sides of the front part of the tractor 201, i.e., the front face of the vehicle). The apparatus 601 for trailer angle measurement is communicatively connected to the multi-line LiDARs 203.


With the method and apparatus for trailer angle measurement and the vehicle according to the embodiments of the present disclosure, at least one multi-line LiDAR is provided on each of the two sides of the tractor. The multi-line LiDARs provided on two sides can radiate laser light to the surface of the trailer. Accordingly, a trailer angle can be calculated based on corresponding laser point clouds received by the respective multi-line LiDARs and the initial point cloud data using a point cloud matching algorithm. In addition, in the present disclosure, the trailer angle is calculated based on the corresponding laser point clouds received by the respective multi-line LiDARs and the initial point cloud data corresponding to the initial trailer angle value, instead of a laser point cloud of a single LiDAR, such that the accuracy of the result is greatly improved.


The basic principles of the present disclosure have been described above with reference to the embodiments. However, it can be appreciated by those skilled in the art that all or any of the steps or components of the method or device according to the present disclosure can be implemented in hardware, firmware, software or any combination thereof in any computing device (including a processor, a storage medium, etc.) or a network of computing devices. This can be achieved by those skilled in the art using their basic programming skills based on the description of the present disclosure.


It can be appreciated by those skilled in the art that all or part of the steps in the method according to the above embodiment can be implemented in hardware following instructions of a program. The program can be stored in a computer-readable storage medium. The program, when executed, may include one or any combination of the steps in the method according to the above embodiment.


Further, the functional units in the embodiments of the present disclosure can be integrated into one processing module or can be physically separate, or two or more units can be integrated into one module. The integrated module can be implemented in any hardware or software functional unit. When implemented in software functional units and sold or used as a standalone product, the integrated module can be stored in a computer-readable storage medium.


It can be appreciated by those skilled in the art that the embodiments of the present disclosure can be implemented as a method, a system or a computer program product. The present disclosure may include pure hardware embodiments, pure software embodiments and any combination thereof. Also, the present disclosure may include a computer program product implemented on one or more computer-readable storage media (including, but not limited to, magnetic disk storage and optical storage) containing computer-readable program codes.


The present disclosure has been described with reference to the flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present disclosure. It can be appreciated that each process and/or block in the flowcharts and/or block diagrams or any combination thereof, can be implemented by computer program instructions. Such computer program instructions can be provided to a general computer, a dedicated computer, an embedded processor or a processor of any other programmable data processing device to constitute a machine, such that the instructions executed by a processor of a computer or any other programmable data processing device can constitute means for implementing the functions specified by one or more processes in the flowcharts and/or one or more blocks in the block diagrams.


These computer program instructions can also be stored in a computer-readable memory that can direct a computer or any other programmable data processing device to operate in a particular way. Thus, the instructions stored in the computer-readable memory constitute a manufactured product including instruction means for implementing the functions specified by one or more processes in the flowcharts and/or one or more blocks in the block diagrams.


These computer program instructions can also be loaded onto a computer or any other programmable data processing device, such that the computer or the programmable data processing device can perform a series of operations/steps to achieve a computer-implemented process. In this way, the instructions executed on the computer or the programmable data processing device can provide steps for implementing the functions specified by one or more processes in the flowcharts and/or one or more blocks in the block diagrams.


While the embodiments of the present disclosure have been described above, further alternatives and modifications can be made to these embodiments by those skilled in the art in light of the basic inventive concept of the present disclosure. The claims as attached are intended to cover the above embodiments and all these alternatives and modifications that fall within the scope of the present disclosure.


Obviously, various modifications and variants can be made to the present disclosure by those skilled in the art without departing from the spirit and scope of the present disclosure. Therefore, these modifications and variants are to be encompassed by the present disclosure if they fall within the scope of the present disclosure as defined by the claims and their equivalents.

Claims
  • 1. A method for trailer angle measurement, applied in a vehicle comprising a tractor and a trailer, at least one LiDAR being provided on each of two sides of the tractor, the method comprising: obtaining an initial trailer model containing initial point cloud data corresponding to an initial trailer angle value;controlling the LiDAR provided on each of the two sides of the tractor to emit laser light, such that a surface of the trailer reflects the laser light emitted by the LiDAR;controlling the LiDAR provided on each of the two sides of the tractor to receive corresponding laser light reflected by the surface of the trailer to obtain second point cloud data based on the laser light;determining, based on a further trailer angle obtained in a previous period and a trailer size, an area range with a predetermined distance from the trailer in the previous period as a predetermined area range; andcalculating a trailer angle based on the second point cloud data and the initial point cloud data using a point cloud matching algorithm, comprising: preprocessing the second point cloud data to obtain a preprocessed point cloud data and obtaining an initial trailer angle for each of the LiDARs based on the initial point cloud data and the preprocessed point cloud data using an iterative closest point algorithm, comprising: for each of the LiDARs,performing area of interest filter on the second point cloud data to obtain point cloud data within the predetermined area range; andperforming noise filter on the point cloud data within the predetermined area range to obtain noise filtered point cloud data forming a current trailer model;screening the initial trailer angle for each of the LiDARs by a predetermined determination condition;sorting the screened initial trailer angle for each of the LiDARs based on collecting times in a current period to form angle data to be processed; andperforming kalman filter on the angle data to be processed to obtain the trailer angle.
  • 2. The method of claim 1, wherein the said obtaining the initial trailer model containing initial point cloud data corresponding to the initial trailer angle value comprises: controlling, in a predetermined vehicle coordinate system, an external LiDAR to emit laser light to two sides of the trailer when the trailer angle is 0°, such that surfaces on the two sides of the trailer reflect the laser light emitted by the external LiDAR, respectively; andcontrolling the external LiDAR to receive laser light reflected by the surfaces on the two sides of the trailer to obtain the initial point cloud data based on the laser light to obtain the initial trailer model containing the initial point cloud data.
  • 3. The method of claim 1, wherein the said preprocessing the second point cloud data to obtain the preprocessed point cloud data and obtaining the initial trailer angle for each of the LiDARs based on the initial point cloud data and the preprocessed point cloud using the iterative closest point algorithm further comprises: for each point of the current trailer model,determining a point of the initial point cloud data with a closest straight-line distance to the point as a target point;moving the point to a place of its target point using a singular value decomposition algorithm to obtain a next trailer model, and generating a model matrix of a current iteration, the model matrix comprising a rotation matrix and a translation matrix of the current iteration, the rotation matrix comprising trigonometric function relationships for current deflection angles of three coordinate axes in a predetermined vehicle coordinate system.
  • 4. The method of claim 3, wherein the said preprocessing the second point cloud data to obtain the preprocessed point cloud data and obtaining the initial trailer angle for each of the LiDARs based on the initial point cloud data and the preprocessed point cloud using the iterative closest point algorithm further comprises: for each of the LiDARs, multiplying a model matrix of each iteration to obtain a result matrix, the result matrix comprising a result rotation matrix and a result translation matrix, the result rotation matrix comprising trigonometric function relationships for deflection angles of the three coordinate axes in the predetermined vehicle coordinate system; anddetermining the initial trailer angle for each of the LiDARs based on the result rotation matrix.
  • 5. The method of claim 3, wherein the said screening initial trailer angles by the predetermined determination condition comprises: determining whether a number of points in the noise filtered point cloud corresponding to each of the LiDARs is smaller than a predetermined number threshold; anddiscarding the initial trailer angle corresponding to any of the LiDARs having the number of points smaller than the predetermined number threshold.
  • 6. The method of claim 3, wherein the said screening initial trailer angles by the predetermined determination condition comprises: determining whether an angle deviation value between the initial trailer angle of each of the LiDARs in the current period and the further trailer angle obtained in the previous period is greater than a predetermined angle deviation threshold; anddiscarding the initial trailer angle of any of the LiDARs in the current period when the angle deviation value is greater than the predetermined angle deviation threshold.
  • 7. A vehicle comprising an apparatus for trailer angle measurement, a tractor and a trailer, at least one LiDAR being provided on each of two sides of the tractor, the apparatus being communicatively connected to each of the LiDARs, and the apparatus comprising a memory, a processor, and a computer program stored on the memory and executable by the processor, wherein the processor is configured to, when executing the computer program, perform a process of trailer angle measurement, the process comprising: obtaining an initial trailer model containing initial point cloud data corresponding to an initial trailer angle value;controlling the LiDAR provided on each of the two sides of the tractor to emit laser light, such that a surface of the trailer reflects the laser light emitted by the LiDAR;controlling the LiDAR provided on each of the two sides of the tractor to receive corresponding laser light reflected by the surface of the trailer to obtain second point cloud data based on the laser light;determining, based on a further trailer angle obtained in a previous period and a trailer size, an area range with a predetermined distance from the trailer in the previous period as a predetermined area range; andcalculating a trailer angle based on the second point cloud data and the initial point cloud data using a point cloud matching algorithm, comprising: preprocessing the second point cloud data to obtain a preprocessed point cloud data and obtaining an initial trailer angle for each of the LiDARs based on the initial point cloud data and the preprocessed point cloud data using an iterative closest point algorithm, comprising: for each of the LiDARs,performing area of interest filter on the second point cloud data to obtain point cloud data within the predetermined area range; andperforming noise filter on the point cloud data within the predetermined area range to obtain noise filtered point cloud data forming a current trailer model;screening the initial trailer angle for each of the LiDARs by a predetermined determination condition;sorting the screened initial trailer angle for each of the LiDARs based on collecting times in a current period to form angle data to be processed; andperforming kalman filter on the angle data to be processed to obtain the trailer angle.
  • 8. The vehicle of claim 7, wherein the said obtaining, in a predetermined vehicle coordinate system, the initial trailer model containing initial point cloud data corresponding to the initial trailer angle value comprises: controlling, in the predetermined vehicle coordinate system, an external LiDAR to emit laser light to two sides of the trailer when the trailer angle is 0°, such that surfaces on the two sides of the trailer reflect the laser light emitted by the external LiDAR, respectively; andcontrolling the external LiDAR to receive laser light reflected by the surfaces on the two sides of the trailer to obtain the initial point cloud data based on the laser light to obtain the initial trailer model containing the initial point cloud data.
  • 9. The vehicle of claim 7, wherein the said preprocessing the second point cloud data to obtain the preprocessed point cloud data and obtaining the initial trailer angle for each of the LiDARs based on the initial point cloud data and the preprocessed point cloud using the iterative closest point algorithm further comprises: for each point of the current trailer model,determining a point of the initial point cloud data with a closest straight-line distance to the point as a target point;moving the point to a place of its target point using a singular value decomposition algorithm to obtain a next trailer model, and generating a model matrix of a current iteration, the model matrix comprising a rotation matrix and a translation matrix of the current iteration, the rotation matrix comprising trigonometric function relationships for current deflection angles of three coordinate axes in a predetermined vehicle coordinate system.
  • 10. The vehicle of claim 9, wherein the said preprocessing the second point cloud data to obtain the preprocessed point cloud data and obtaining the initial trailer angle for each of the LiDARs based on the initial point cloud data and the preprocessed point cloud using the iterative closest point algorithm further comprises: for each of the LiDARs, multiplying a model matrix of each iteration to obtain a result matrix, the result matrix comprising a result rotation matrix and a result translation matrix, the result rotation matrix comprising trigonometric function relationships for deflection angles of the three coordinate axes in the predetermined vehicle coordinate system; anddetermining the initial trailer angle for each of the LiDARs based on the result rotation matrix.
  • 11. The vehicle of claim 9, wherein the said screening initial trailer angles by the predetermined determination condition comprises: determining whether a number of points in the noise filtered point cloud corresponding to each of the LiDARs is smaller than a predetermined number threshold; anddiscarding the initial trailer angle corresponding to any of the LiDARs having the number of points smaller than the predetermined number threshold.
  • 12. The vehicle of claim 9, wherein the said screening initial trailer angles by the predetermined determination condition comprises: determining whether an angle deviation value between the initial trailer angle of each of the LiDARs in the current period and the further trailer angle obtained in the previous period is greater than a predetermined angle deviation threshold; anddiscarding the initial trailer angle of any of the LiDARs in the current period when the angle deviation value is greater than the predetermined angle deviation threshold.
  • 13. A non-transitory computer-readable storage medium, having a computer program stored thereon, wherein the program, when executed by a processor, implements a process of trailer angle measurement, the process being applied in a vehicle comprising a tractor and a trailer, at least one LiDAR being provided on each of two sides of the tractor, the process comprising: obtaining an initial trailer model containing initial point cloud data corresponding to an initial trailer angle value;controlling the LiDAR provided on each of the two sides of the tractor to emit laser light, such that a surface of the trailer reflects the laser light emitted by the LiDAR;controlling the LiDAR provided on each of the two sides of the tractor to receive corresponding laser light reflected by the surface of the trailer to obtain second point cloud data based on the laser light;determining, based on a further trailer angle obtained in a previous period and a trailer size, an area range with a predetermined distance from the trailer in the previous period as a predetermined area range; andcalculating a trailer angle based on the second point cloud data and the initial point cloud data using a point cloud matching algorithm, comprising: preprocessing the second point cloud data to obtain a preprocessed point cloud data and obtaining an initial trailer angle for each of the LiDARs based on the initial point cloud data and the preprocessed point cloud using an iterative closest point algorithm, comprising: for each of the LiDARs,performing area of interest filter on the second point cloud data to obtain point cloud data within the predetermined area range; andperforming noise filter on the point cloud data within the predetermined area range to obtain noise filtered point cloud data forming a current trailer model;screening the initial trailer angle for each of the LiDARs by a predetermined determination condition;sorting the screened initial trailer angle for each of the LiDARs based on collecting times in a current period to form angle data to be processed; andperforming kalman filter on the angle data to be processed to obtain the trailer angle.
  • 14. The non-transitory computer-readable storage medium of claim 13, wherein the said obtaining the initial trailer model containing initial point cloud data corresponding to the initial trailer angle value comprises: controlling, in a predetermined vehicle coordinate system, a external LiDAR to emit laser light to two sides of the trailer when the trailer angle is 0°, such that surfaces on the two sides of the trailer reflect the laser light emitted by the external LiDAR, respectively; andcontrolling the external LiDAR to receive laser light reflected by the surfaces on the two sides of the trailer to obtain the initial point cloud data based on the laser light to obtain the initial trailer model containing the initial point cloud data.
  • 15. The non-transitory computer-readable storage medium of claim 13, wherein the said preprocessing the second point cloud data to obtain the preprocessed point cloud data and obtaining the initial trailer angle for each of the LiDARs based on the initial point cloud data and the preprocessed point cloud using the iterative closest point algorithm further comprises: for each point of the current trailer model,determining a point of the initial point cloud data with a closest straight-line distance to the point as a target point;moving the point to a place of its target point using a singular value decomposition algorithm to obtain a next trailer model, and generating a model matrix of a current iteration, the model matrix comprising a rotation matrix and a translation matrix of the current iteration, the rotation matrix comprising trigonometric function relationships for current deflection angles of three coordinate axes in a predetermined vehicle coordinate system;multiplying a model matrix of each iteration to obtain a result matrix, the result matrix comprising a result rotation matrix and a result translation matrix, the result rotation matrix comprising trigonometric function relationships for deflection angles of the three coordinate axes in the predetermined vehicle coordinate system; anddetermining the initial trailer angle for each of the LiDARs based on the result rotation matrix.
  • 16. The non-transitory computer-readable storage medium of claim 15, wherein the said preprocessing the second point cloud data to obtain the preprocessed point cloud data and obtaining the initial trailer angle for each of the LiDARs based on the initial point cloud data and the preprocessed point cloud using the iterative closest point algorithm further comprises: for each of the LiDARs, multiplying a model matrix of each iteration to obtain a result matrix, the result matrix comprising a result rotation matrix and a result translation matrix, the result rotation matrix comprising trigonometric function relationships for deflection angles of the three coordinate axes in the predetermined vehicle coordinate system; anddetermining the initial trailer angle for each of the LiDARs based on the result rotation matrix.
  • 17. The non-transitory computer-readable storage medium of claim 15, wherein said screening the initial trailer angle for each of the LiDARs by the predetermined determination condition comprises:
    determining whether a number of points in the noise-filtered point cloud data corresponding to each of the LiDARs is smaller than a predetermined number threshold; and
    discarding the initial trailer angle corresponding to any of the LiDARs having the number of points smaller than the predetermined number threshold.
  • 18. The non-transitory computer-readable storage medium of claim 15, wherein said screening the initial trailer angle for each of the LiDARs by the predetermined determination condition comprises:
    determining whether an angle deviation value between the initial trailer angle of each of the LiDARs in the current period and the further trailer angle obtained in the previous period is greater than a predetermined angle deviation threshold; and
    discarding the initial trailer angle of any of the LiDARs in the current period whose angle deviation value is greater than the predetermined angle deviation threshold.
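To make the two determination conditions concrete, a minimal sketch combining the screening of claims 17 and 18; the threshold values `min_points` and `max_deviation_deg` are illustrative assumptions, not values from the disclosure.

```python
def screen_initial_angles(candidates, prev_angle,
                          min_points=100, max_deviation_deg=10.0):
    """Keep only per-LiDAR initial trailer angles whose noise-filtered cloud
    is dense enough (claim 17) and whose change relative to the previous
    period's angle is plausible (claim 18)."""
    kept = []
    for angle, num_points in candidates:  # (angle_deg, points in the cloud)
        if num_points < min_points:
            continue   # too few points: discard this LiDAR's angle
        if abs(angle - prev_angle) > max_deviation_deg:
            continue   # implausible jump since the previous period: discard
        kept.append(angle)
    return kept

# Example: two LiDARs in the current period, previous angle 28.5 degrees;
# the second candidate fails both conditions and is discarded.
print(screen_initial_angles([(30.1, 450), (55.0, 80)], prev_angle=28.5))
```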
Priority Claims (1)
Number Date Country Kind
201811505593.1 Dec 2018 CN national
CROSS-REFERENCE TO RELATED APPLICATIONS

The present disclosure is a continuation of, and claims priority to, International Application No. PCT/CN2019/077075, entitled “TRAILER ANGLE MEASUREMENT METHOD AND DEVICE, AND VEHICLE”, filed on Mar. 6, 2019, which claims priority to Chinese Patent Application No. 201811505593.1, titled “TRAILER ANGLE MEASUREMENT METHOD AND DEVICE, AND VEHICLE”, filed on Dec. 10, 2018, the contents of which are incorporated herein by reference in their entirety.

US Referenced Citations (234)
Number Name Date Kind
6084870 Wooten et al. Jul 2000 A
6263088 Crabtree Jul 2001 B1
6594821 Banning et al. Jul 2003 B1
6777904 Degner Aug 2004 B1
6975923 Spriggs Dec 2005 B2
7103460 Breed Sep 2006 B1
7689559 Canright Mar 2010 B2
7742841 Sakai et al. Jun 2010 B2
7783403 Breed Aug 2010 B2
7844595 Canright Nov 2010 B2
8041111 Wilensky Oct 2011 B1
8064643 Stein Nov 2011 B2
8082101 Stein Dec 2011 B2
8164628 Stein Apr 2012 B2
8175376 Marchesotti May 2012 B2
8271871 Marchesotti Sep 2012 B2
8346480 Trepagnier et al. Jan 2013 B2
8378851 Stein Feb 2013 B2
8392117 Dolgov Mar 2013 B2
8401292 Park Mar 2013 B2
8412449 Trepagnier Apr 2013 B2
8478072 Aisaka Jul 2013 B2
8532870 Hoetzer et al. Sep 2013 B2
8553088 Stein Oct 2013 B2
8706394 Trepagnier et al. Apr 2014 B2
8718861 Montemerlo et al. May 2014 B1
8788134 Litkouhi Jul 2014 B1
8908041 Stein Dec 2014 B2
8917169 Schofield Dec 2014 B2
8963913 Baek Feb 2015 B2
8965621 Urmson Feb 2015 B1
8981966 Stein Mar 2015 B2
8983708 Choe et al. Mar 2015 B2
8993951 Schofield Mar 2015 B2
9002632 Emigh Apr 2015 B1
9008369 Schofield Apr 2015 B2
9025880 Perazzi May 2015 B2
9042648 Wang May 2015 B2
9081385 Ferguson et al. Jul 2015 B1
9088744 Grauer et al. Jul 2015 B2
9111444 Kaganovich Aug 2015 B2
9117133 Barnes Aug 2015 B2
9118816 Stein Aug 2015 B2
9120485 Dolgov Sep 2015 B1
9122954 Srebnik Sep 2015 B2
9134402 Sebastian Sep 2015 B2
9145116 Clarke Sep 2015 B2
9147255 Zhang Sep 2015 B1
9156473 Clarke Oct 2015 B2
9176006 Stein Nov 2015 B2
9179072 Stein Nov 2015 B2
9183447 Gdalyahu Nov 2015 B1
9185360 Stein Nov 2015 B2
9191634 Schofield Nov 2015 B2
9214084 Grauer et al. Dec 2015 B2
9219873 Grauer et al. Dec 2015 B2
9233659 Rosenbaum Jan 2016 B2
9233688 Clarke Jan 2016 B2
9248832 Huberman Feb 2016 B2
9248835 Tanzmeister Feb 2016 B2
9251708 Rosenbaum Feb 2016 B2
9277132 Berberian Mar 2016 B2
9280711 Stein Mar 2016 B2
9282144 Tebay et al. Mar 2016 B2
9286522 Stein Mar 2016 B2
9297641 Stein Mar 2016 B2
9299004 Lin Mar 2016 B2
9315192 Zhu Apr 2016 B1
9317033 Ibanez-guzman et al. Apr 2016 B2
9317776 Honda Apr 2016 B1
9330334 Lin May 2016 B2
9342074 Dolgov May 2016 B2
9347779 Lynch May 2016 B1
9355635 Gao May 2016 B2
9365214 Ben Shalom Jun 2016 B2
9399397 Mizutani Jul 2016 B2
9418549 Kang et al. Aug 2016 B2
9428192 Schofield Aug 2016 B2
9436880 Bos Sep 2016 B2
9438878 Niebla Sep 2016 B2
9443163 Springer Sep 2016 B2
9446765 Ben Shalom Sep 2016 B2
9459515 Stein Oct 2016 B2
9466006 Duan Oct 2016 B2
9476970 Fairfield Oct 2016 B1
9483839 Kwon Nov 2016 B1
9490064 Hirosawa Nov 2016 B2
9494935 Okumura et al. Nov 2016 B2
9507346 Levinson et al. Nov 2016 B1
9513634 Pack et al. Dec 2016 B2
9531966 Stein Dec 2016 B2
9535423 Debreczeni Jan 2017 B1
9538113 Grauer et al. Jan 2017 B2
9547985 Tuukkanen Jan 2017 B2
9549158 Grauer et al. Jan 2017 B2
9555803 Pawlicki Jan 2017 B2
9568915 Berntorp Feb 2017 B1
9587952 Slusar Mar 2017 B1
9599712 Van Der Tempel et al. Mar 2017 B2
9600889 Boisson et al. Mar 2017 B2
9602807 Crane et al. Mar 2017 B2
9612123 Levinson et al. Apr 2017 B1
9620010 Grauer et al. Apr 2017 B2
9625569 Lange Apr 2017 B2
9628565 Stenneth et al. Apr 2017 B2
9649999 Amireddy et al. May 2017 B1
9652860 Maali May 2017 B1
9669827 Ferguson et al. Jun 2017 B1
9672446 Vallesi-Gonzalez Jun 2017 B1
9690290 Prokhorov Jun 2017 B2
9701023 Zhang et al. Jul 2017 B2
9712754 Grauer et al. Jul 2017 B2
9720418 Stenneth Aug 2017 B2
9723097 Harris Aug 2017 B2
9723099 Chen Aug 2017 B2
9723233 Grauer et al. Aug 2017 B2
9726754 Massanell et al. Aug 2017 B2
9729860 Cohen et al. Aug 2017 B2
9738280 Rayes Aug 2017 B2
9739609 Lewis Aug 2017 B1
9746550 Nath Aug 2017 B2
9753128 Schweizer et al. Sep 2017 B2
9753141 Grauer et al. Sep 2017 B2
9754490 Kentley et al. Sep 2017 B2
9760837 Nowozin et al. Sep 2017 B1
9766625 Boroditsky et al. Sep 2017 B2
9769456 You et al. Sep 2017 B2
9773155 Shotton et al. Sep 2017 B2
9779276 Todeschini et al. Oct 2017 B2
9785149 Wang et al. Oct 2017 B2
9805294 Liu et al. Oct 2017 B2
9810785 Grauer et al. Nov 2017 B2
9823339 Cohen Nov 2017 B2
9953236 Huang Apr 2018 B1
10147193 Huang Dec 2018 B2
10223806 Yi et al. Mar 2019 B1
10223807 Yi et al. Mar 2019 B1
10410055 Wang et al. Sep 2019 B2
11073601 Nian Jul 2021 B2
11333766 Kozak May 2022 B2
20030114980 Klausner et al. Jun 2003 A1
20030174773 Comaniciu Sep 2003 A1
20040264763 Mas et al. Dec 2004 A1
20070067077 Liu et al. Mar 2007 A1
20070183661 El-Maleh Aug 2007 A1
20070183662 Wang Aug 2007 A1
20070230792 Shashua Oct 2007 A1
20070286526 Abousleman Dec 2007 A1
20080249667 Horvitz Oct 2008 A1
20090040054 Wang Feb 2009 A1
20090087029 Coleman Apr 2009 A1
20100049397 Lin Feb 2010 A1
20100111417 Ward May 2010 A1
20100226564 Marchesotti Sep 2010 A1
20100281361 Marchesotti Nov 2010 A1
20110142283 Huang Jun 2011 A1
20110206282 Aisaka Aug 2011 A1
20110247031 Jacoby Oct 2011 A1
20110257860 Getman et al. Oct 2011 A1
20120041636 Johnson et al. Feb 2012 A1
20120105639 Stein May 2012 A1
20120114181 Borthwick et al. May 2012 A1
20120140076 Rosenbaum Jun 2012 A1
20120274629 Baek Nov 2012 A1
20120314070 Zhang et al. Dec 2012 A1
20130051613 Bobbitt et al. Feb 2013 A1
20130083959 Owechko Apr 2013 A1
20130182134 Grundmann et al. Jul 2013 A1
20130204465 Phillips et al. Aug 2013 A1
20130266187 Bulan Oct 2013 A1
20130329052 Chew Dec 2013 A1
20140072170 Zhang Mar 2014 A1
20140104051 Breed Apr 2014 A1
20140142799 Ferguson et al. May 2014 A1
20140143839 Ricci May 2014 A1
20140145516 Hirosawa May 2014 A1
20140198184 Stein Jul 2014 A1
20140321704 Partis Oct 2014 A1
20140334668 Saund Nov 2014 A1
20150062304 Stein Mar 2015 A1
20150269438 Samarsekera et al. Sep 2015 A1
20150310370 Burry Oct 2015 A1
20150353082 Lee et al. Dec 2015 A1
20160008988 Kennedy Jan 2016 A1
20160026787 Nairn et al. Jan 2016 A1
20160037064 Stein Feb 2016 A1
20160094774 Li Mar 2016 A1
20160118080 Chen Apr 2016 A1
20160129907 Kim May 2016 A1
20160165157 Stein Jun 2016 A1
20160210528 Duan Jul 2016 A1
20160275766 Venetianer et al. Sep 2016 A1
20160280261 Kyrtsos et al. Sep 2016 A1
20160321381 English Nov 2016 A1
20160334230 Ross et al. Nov 2016 A1
20160342837 Hong et al. Nov 2016 A1
20160347322 Clarke et al. Dec 2016 A1
20160368336 Kahn et al. Dec 2016 A1
20160375907 Erban Dec 2016 A1
20170053169 Cuban et al. Feb 2017 A1
20170061632 Linder et al. Mar 2017 A1
20170080928 Wasiek et al. Mar 2017 A1
20170124476 Levinson et al. May 2017 A1
20170134631 Zhao et al. May 2017 A1
20170177951 Yang et al. Jun 2017 A1
20170301104 Qian Oct 2017 A1
20170305423 Green Oct 2017 A1
20170318407 Meister Nov 2017 A1
20170334484 Koravadi Nov 2017 A1
20180057052 Dodd et al. Mar 2018 A1
20180151063 Pun May 2018 A1
20180158197 Dasgupta Jun 2018 A1
20180260956 Huang Sep 2018 A1
20180283892 Behrendt Oct 2018 A1
20180341021 Schmitt et al. Nov 2018 A1
20180373980 Huval Dec 2018 A1
20190025853 Julian Jan 2019 A1
20190065863 Luo et al. Feb 2019 A1
20190066329 Yi et al. Feb 2019 A1
20190066330 Yi et al. Feb 2019 A1
20190066344 Yi et al. Feb 2019 A1
20190084477 Gomez-Mendoza et al. Mar 2019 A1
20190108384 Wang et al. Apr 2019 A1
20190132391 Thomas May 2019 A1
20190132392 Liu May 2019 A1
20190170867 Wang et al. Jun 2019 A1
20190210564 Han Jul 2019 A1
20190210613 Sun Jul 2019 A1
20190236950 Li Aug 2019 A1
20190266420 Ge Aug 2019 A1
20210291902 Wang Sep 2021 A1
20210356261 Jin Nov 2021 A1
20220343535 Ip Oct 2022 A1
20230114328 Tan Apr 2023 A1
Foreign Referenced Citations (56)
Number Date Country
105547288 May 2016 CN
106340197 Jan 2017 CN
106781591 May 2017 CN
107728156 Feb 2018 CN
107980102 May 2018 CN
108010360 May 2018 CN
108132471 Jun 2018 CN
108278981 Jul 2018 CN
108519604 Sep 2018 CN
108717182 Oct 2018 CN
108749923 Nov 2018 CN
108761479 Nov 2018 CN
108761481 Nov 2018 CN
208059845 Nov 2018 CN
108959173 Dec 2018 CN
2608513 Sep 1977 DE
102016105259 Sep 2016 DE
102017125662 May 2018 DE
890470 Jan 1999 EP
1754179 Feb 2007 EP
2448251 May 2012 EP
2463843 Jun 2012 EP
2761249 Aug 2014 EP
2946336 Nov 2015 EP
2993654 Mar 2016 EP
3081419 Oct 2016 EP
2513392 Oct 2014 GB
2001334966 Dec 2001 JP
2005293350 Oct 2005 JP
3721911 Nov 2005 JP
2012225806 Nov 2012 JP
2022515355 Feb 2022 JP
100802511 Feb 2008 KR
1991009375 Jun 1991 WO
2005098739 Oct 2005 WO
2005098751 Oct 2005 WO
2005098782 Oct 2005 WO
2010109419 Sep 2010 WO
2013045612 Apr 2013 WO
2014111814 Jul 2014 WO
2014166245 Oct 2014 WO
2014201324 Dec 2014 WO
2015083009 Jun 2015 WO
2015103159 Jul 2015 WO
2015125022 Aug 2015 WO
2015186002 Dec 2015 WO
2016090282 Jun 2016 WO
2016135736 Sep 2016 WO
2017079349 May 2017 WO
2017079460 May 2017 WO
2017013875 May 2018 WO
2019040800 Feb 2019 WO
2019084491 May 2019 WO
2019084494 May 2019 WO
2019140277 Jul 2019 WO
2019168986 Sep 2019 WO
Non-Patent Literature Citations (72)
Entry
Japanese Patent Office, 1st Examination Report for JP 2021-533593, Mailing Date: Feb. 22, 2023, 6 pages with English translation.
Office Action in Chinese Patent Application No. 201811505593.1 dated Sep. 29, 2021.
Nyberg, Patrik, “Stabilization, Sensor Fusion and Path Following for Autonomous Reversing of a Full-Scale Truck and Trailer System,” Master of Science in Electrical Engineering, Department of Electrical Engineering, Linköping University, 2016, 50 pages, Sweden.
European Patent Office, Extended European Search Report for EP 19896541, Mailing Date: Jul. 8, 2022, 10 pages.
Chinese Patent Office, Third Office Action for CN 201811505593.1, Mailing Date: Nov. 3, 2022, 10 pages with English translation.
Chinese Patent Office, Second Office Action for CN 201811505593.1, Mailing Date: Jul. 5, 2022, 33 pages with English translation.
Carle, Patrick J.F. et al. “Global Rover Localization by Matching Lidar and Orbital 3D Maps.” IEEE, Anchorage Convention District, pp. 1-6, May 3-8, 2010. (Anchorage Alaska, US).
Caselitz, T. et al., “Monocular camera localization in 3D LiDAR maps,” European Conference on Computer Vision (2014) Computer Vision—ECCV 2014. ECCV 2014. Lecture Notes in Computer Science, vol. 8690. Springer, Cham.
Mur-Artal, R. et al., “ORB-SLAM: A Versatile and Accurate Monocular SLAM System,” IEEE Transaction on Robotics, Oct. 2015, pp. 1147-1163, vol. 31, No. 5, Spain.
Sattler, T. et al., “Are Large-Scale 3D Models Really Necessary for Accurate Visual Localization?” CVPR, IEEE, 2017, pp. 1-10.
Engel, J. et al., “LSD-SLAM: Large-Scale Direct Monocular SLAM,” pp. 1-16, Munich.
Levinson, Jesse et al., Experimental Robotics, Unsupervised Calibration for Multi-Beam Lasers, pp. 179-194, 12th Ed., Oussama Khatib, Vijay Kumar, Gaurav Sukhatme (Eds.) Springer-Verlag Berlin Heidelberg 2014.
International Application No. PCT/US2019/013322, International Search Report and Written Opinion Mailed Apr. 2, 2019.
International Application No. PCT/US19/12934, International Search Report and Written Opinion Mailed Apr. 29, 2019.
International Application No. PCT/US18/53795, International Search Report and Written Opinion Mailed Dec. 31, 2018.
International Application No. PCT/US18/57848, International Search Report and Written Opinion Mailed Jan. 7, 2019.
International Application No. PCT/US2018/057851, International Search Report and Written Opinion Mailed Feb. 1, 2019.
International Application No. PCT/US2019/019839, International Search Report and Written Opinion Mailed May 23, 2019.
International Application No. PCT/US19/25995, International Search Report and Written Opinion Mailed Jul. 9, 2019.
Geiger, Andreas et al., “Automatic Camera and Range Sensor Calibration using a single Shot”, Robotics and Automation (ICRA), pp. 1-8, 2012 IEEE International Conference.
Zhang, Z. et al. “A Flexible New Technique for Camera Calibration.” IEEE Transactions on Pattern Analysis and Machine Intelligence (vol. 22, Issue: 11, Nov. 2000).
International Application No. PCT/US2018/047830, International Search Report and Written Opinion Mailed Dec. 28, 2018.
Bar-Hillel, Aharon et al. “Recent progress in road and lane detection: a survey.” Machine Vision and Applications 25 (2011): 727-745.
Schindler, Andreas et al. “Generation of high precision digital maps using circular arc splines,” 2012 IEEE Intelligent Vehicles Symposium, Alcala de Henares, 2012, pp. 246-251. doi: 10.1109/IVS.2012.6232124.
International Application No. PCT/US2018/047608, International Search Report and Written Opinion Mailed Dec. 28, 2018.
Hou, Xiaodi and Zhang, Liqing, “Saliency Detection: A Spectral Residual Approach”, Computer Vision and Pattern Recognition, CVPR'07—IEEE Conference, pp. 1-8, 2007.
Hou, Xiaodi and Harel, Jonathan and Koch, Christof, “Image Signature: Highlighting Sparse Salient Regions”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, No. 1, pp. 194-201, 2012.
Hou, Xiaodi and Zhang, Liqing, “Dynamic Visual Attention: Searching For Coding Length Increments”, Advances in Neural Information Processing Systems, vol. 21, pp. 681-688, 2008.
Li, Yin and Hou, Xiaodi and Koch, Christof and Rehg, James M. and Yuille, Alan L., “The Secrets of Salient Object Segmentation”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 280-287, 2014.
Zhou, Bolei and Hou, Xiaodi and Zhang, Liqing, “A Phase Discrepancy Analysis of Object Motion”, Asian Conference on Computer Vision, pp. 225-238, Springer Berlin Heidelberg, 2010.
Hou, Xiaodi and Yuille, Alan and Koch, Christof, “Boundary Detection Benchmarking: Beyond F-Measures”, Computer Vision and Pattern Recognition, CVPR'13, vol. 2013, pp. 1-8, IEEE, 2013.
Hou, Xiaodi and Zhang, Liqing, “Color Conceptualization”, Proceedings of the 15th ACM International Conference on Multimedia, pp. 265-268, ACM, 2007.
Hou, Xiaodi and Zhang, Liqing, “Thumbnail Generation Based on Global Saliency”, Advances in Cognitive Neurodynamics, ICCN 2007, pp. 999-1003, Springer Netherlands, 2008.
Hou, Xiaodi and Yuille, Alan and Koch, Christof, “A Meta-Theory of Boundary Detection Benchmarks”, arXiv preprint arXiv:1302.5985, 2013.
Li, Yanghao and Wang, Naiyan and Shi, Jianping and Liu, Jiaying and Hou, Xiaodi, “Revisiting Batch Normalization for Practical Domain Adaptation”, arXiv preprint arXiv:1603.04779, 2016.
Li, Yanghao and Wang, Naiyan and Liu, Jiaying and Hou, Xiaodi, “Demystifying Neural Style Transfer”, arXiv preprint arXiv:1701.01036, 2017.
Hou, Xiaodi and Zhang, Liqing, “A Time-Dependent Model of Information Capacity of Visual Attention”, International Conference on Neural Information Processing, pp. 127-136, Springer Berlin Heidelberg, 2006.
Wang, Panqu and Chen, Pengfei and Yuan, Ye and Liu, Ding and Huang, Zehua and Hou, Xiaodi and Cottrell, Garrison, “Understanding Convolution for Semantic Segmentation”, arXiv preprint arXiv:1702.08502, 2017.
Li, Yanghao and Wang, Naiyan and Liu, Jiaying and Hou, Xiaodi, “Factorized Bilinear Models for Image Recognition”, arXiv preprint arXiv:1611.05709, 2016.
Hou, Xiaodi, “Computational Modeling and Psychophysics in Low and Mid-Level Vision”, California Institute of Technology, 2014.
Spinello, Luciano, Triebel, Rudolph, Siegwart, Roland, “Multiclass Multimodal Detection and Tracking in Urban Environments”, Sage Journals, vol. 29, Issue 12, pp. 1498-1515, Article first published online: Oct. 7, 2010; Issue published: Oct. 1, 2010.
Matthew Barth, Carrie Malcolm, Theodore Younglove, and Nicole Hill, “Recent Validation Efforts for a Comprehensive Modal Emissions Model”, Transportation Research Record 1750, Paper No. 01-0326, College of Engineering, Center for Environmental Research and Technology, University of California, Riverside, CA 92521, date unknown.
Kyoungho Ahn, Hesham Rakha, “The Effects of Route Choice Decisions on Vehicle Energy Consumption and Emissions”, Virginia Tech Transportation Institute, Blacksburg, VA 24061, date unknown.
Ramos, Sebastian, Gehrig, Stefan, Pinggera, Peter, Franke, Uwe, Rother, Carsten, “Detecting Unexpected Obstacles for Self-Driving Cars: Fusing Deep Learning and Geometric Modeling”, arXiv:1612.06573v1 [cs.CV] Dec. 20, 2016.
Schroff, Florian, Dmitry Kalenichenko, James Philbin, (Google), “FaceNet: A Unified Embedding for Face Recognition and Clustering”, CVPR 2015.
Dai, Jifeng, Kaiming He, Jian Sun, (Microsoft Research), “Instance-aware Semantic Segmentation via Multi-task Network Cascades”, CVPR 2016.
Huval, Brody, Tao Wang, Sameep Tandon, Jeff Kiske, Will Song, Joel Pazhayampallil, Mykhaylo Andriluka, Pranav Rajpurkar, Toki Migimatsu, Royce Cheng-Yue, Fernando Mujica, Adam Coates, Andrew Y. Ng, “An Empirical Evaluation of Deep Learning on Highway Driving”, arXiv:1504.01716v3 [cs.RO] Apr. 17, 2015.
Tian Li, “Proposal Free Instance Segmentation Based on Instance-aware Metric”, Department of Computer Science, Cranberry-Lemon University, Pittsburgh, PA., date unknown.
Mohammad Norouzi, David J. Fleet, Ruslan Salakhutdinov, “Hamming Distance Metric Learning”, Departments of Computer Science and Statistics, University of Toronto, date unknown.
Jain, Suyong Dutt, Grauman, Kristen, “Active Image Segmentation Propagation”, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, Jun. 2016.
MacAodha, Oisin, Campbell, Neill D.F., Kautz, Jan, Brostow, Gabriel J., “Hierarchical Subquery Evaluation for Active Learning on a Graph”, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
Kendall, Alex, Gal, Yarin, “What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision”, arXiv:1703.04977v1 [cs.CV] Mar. 15, 2017.
Wei, Junqing, John M. Dolan, Bakhtiar Litkhouhi, “A Prediction- and Cost Function-Based Algorithm for Robust Autonomous Freeway Driving”, 2010 IEEE Intelligent Vehicles Symposium, University of California, San Diego, CA, USA, Jun. 21-24, 2010.
Peter Welinder, Steve Branson, Serge Belongie, Pietro Perona, “The Multidimensional Wisdom of Crowds”; http://www.vision.caltech.edu/visipedia/papers/WelinderEtaINIPS10.pdf, 2010.
Kai Yu, Yang Zhou, Da Li, Zhang Zhang, Kaiqi Huang, “Large-scale Distributed Video Parsing and Evaluation Platform”, Center for Research on Intelligent Perception and Computing, Institute of Automation, Chinese Academy of Sciences, China, arXiv:1611.09580v1 [cs.CV] Nov. 29, 2016.
P. Guarneri, G. Rocca and M. Gobbi, “A Neural-Network-Based Model for the Dynamic Simulation of the Tire/Suspension System While Traversing Road Irregularities,” in IEEE Transactions on Neural Networks, vol. 19, No. 9, pp. 1549-1563, Sep. 2008.
C. Yang, Z. Li, R. Cui and B. Xu, “Neural Network-Based Motion Control of an Underactuated Wheeled Inverted Pendulum Model,” in IEEE Transactions on Neural Networks and Learning Systems, vol. 25, No. 11, pp. 2004-2016, Nov. 2014.
Stephan R. Richter, Vibhav Vineet, Stefan Roth, Vladlen Koltun, “Playing for Data: Ground Truth from Computer Games”, Intel Labs, European Conference on Computer Vision (ECCV), Amsterdam, the Netherlands, 2016.
Thanos Athanasiadis, Phivos Mylonas, Yannis Avrithis, and Stefanos Kollias, “Semantic Image Segmentation and Object Labeling”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, No. 3, Mar. 2007.
Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele, “The Cityscapes Dataset for Semantic Urban Scene Understanding”, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, Nevada, 2016.
Adhiraj Somani, Nan Ye, David Hsu, and Wee Sun Lee, “DESPOT: Online POMDP Planning with Regularization”, Department of Computer Science, National University of Singapore, date unknown.
Adam Paszke, Abhishek Chaurasia, Sangpil Kim, and Eugenio Culurciello. Enet: A deep neural network architecture for real-time semantic segmentation. CoRR, abs/1606.02147, 2016.
Szeliski, Richard, “Computer Vision: Algorithms and Applications” http://szeliski.org/Book/, 2010.
Office Action Mailed in Chinese Application No. 201810025516.X, Mailed Sep. 3, 2019.
Luo, Yi et al. U.S. Appl. No. 15/684,389 Notice of Allowance Mailed Oct. 9, 2019.
International Application No. PCT/US19/58863, International Search Report and Written Opinion mailed Feb. 14, 2020, pp. 1-11.
US Patent & Trademark Office, Non-Final Office Action mailed Apr. 29, 2020 in U.S. Appl. No. 16/174,980, 6 pages.
US Patent & Trademark Office, Final Office Action mailed Sep. 8, 2020 in U.S. Appl. No. 16/174,980, 8 pages.
International Application No. PCT/CN2019/077075, International Search Report and Written Opinion Mailed Sep. 10, 2019, pp. 1-12.
International Application No. PCT/CN2019/077075, International Preliminary Report on Patentability Mailed Jun. 8, 2021, pp. 1-4.
European Patent Office, Communication pursuant to Article 94(3) EPC for EP Appl. No. 19896541.0, mailed on Feb. 19, 2024, 6 pages.
Japanese Patent Office, Notice of Refusal for JP 2023-129340, Mailing Date: Apr. 19, 2024, 4 pages with English translation.
Related Publications (1)
Number Date Country
20210349217 A1 Nov 2021 US
Continuations (1)
Number Date Country
Parent PCT/CN2019/077075 Mar 2019 WO
Child 17343489 US