Detecting and Determining Relevant Variables of an Object by Means of Ultrasonic Sensors

Information

  • Patent Application
  • Publication Number
    20240036193
  • Date Filed
    October 13, 2021
  • Date Published
    February 01, 2024
  • Inventors
    • Tarasow; Alex
    • Gohlke; Daniel
Abstract
The disclosure relates to a method and a device for determining relevant variables of an object in the environment of a motor vehicle by means of at least one ultrasonic sensor, the method comprising: acquiring a measurement series of real sensor values for the at least one ultrasonic sensor, which sensor values represent the shortest distance between the object and the motor vehicle, wherein the sensor values are acquired cyclically with a specified time interval; modeling a series of sensor values for the at least one ultrasonic sensor by modeling the object movement using a state vector and a parameter vector, and modeling the distance sensing in accordance with the state vector and the parameter vector; and determining the state vector and the parameter vector by adapting the modeled sensor values to the measured real sensor values of the at least one ultrasonic sensor by means of an optimization method.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to German Patent Application No. 10 2020 214 619.5, filed on Nov. 20, 2020 with the German Patent and Trademark Office and to German Patent Application No. 10 2021 203 497.7, filed on Apr. 8, 2021 with the German Patent and Trademark Office. The contents of the aforesaid patent applications are incorporated herein for all purposes.


TECHNICAL FIELD

The invention relates to a method for detecting and tracking an object in the environment of a motor vehicle by means of ultrasound, and to a corresponding device.


BACKGROUND

This background section is provided for the purpose of generally describing the context of the disclosure. Work of the presently named inventor(s), to the extent the work is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.


In order to control a vehicle, referred to in the following as an ego vehicle, in an automated or, alternatively, semi-automated manner, objects in the environment of the ego vehicle need to be detected. First and foremost, the physical variables such as position, orientation, speed, and dimensions of these objects, which may be both static and dynamic, are relevant here. Detecting objects in the environment of the ego vehicle using ultrasonic sensors constitutes an alternative to radar- or camera-based detection. Furthermore, it can be used as a redundant source of information, for example in order to achieve a particular ASIL level.


However, the prior art does not contain precise information regarding the technical implementation of how to detect objects using ultrasonic sensors. Neither are there any explanations as to how a distance value estimated using a Kalman filter, as well as the first and second derivatives of said value, can be used to determine the relevant object variables actually of interest for objects in the environment of the ego vehicle, namely the position, orientation, speed, and dimensions of the object.


SUMMARY

A need exists to improve the detection of an object and of its relevant object variables in the environment of a motor vehicle, referred to in the following as ego vehicle, by means of ultrasound.


The need is addressed by a method for detecting an object in the environment of an ego vehicle and by a corresponding device according to the independent claims. Embodiments of the invention are described in the dependent claims, the following description, and the drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows a definition of relevant reference variables;



FIG. 2a shows a first situation of distance sensing;



FIG. 2b shows a second situation of distance sensing;



FIG. 2c shows a third situation of distance sensing;



FIG. 3 shows a modeling of the distance sensing;



FIG. 4 shows an example of distance sensing in the case of an object moving parallel to the ego vehicle;



FIG. 5 shows an example of distance sensing in the case of an object moving obliquely to the ego vehicle;



FIG. 6 shows the course of the activation function δ1(k);



FIG. 7 shows the course of the activation function δ2(k);



FIG. 8 shows the course of the activation function ƒρ1(k);



FIG. 9 shows the course of the activation function ƒρ2(k);



FIG. 10 shows the course of the activation function $\tilde{f}_{\rho 1}(k)$; and



FIG. 11 shows the course of the activation function $\tilde{f}_{\rho 2}(k)$.





DESCRIPTION

The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features will be apparent from the description, drawings, and from the claims.


In the following description of embodiments of the invention, specific details are described in order to provide a thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the instant description.


In some embodiments, a method for detecting and determining relevant variables of an object in the environment of a motor vehicle by means of at least one ultrasonic sensor disposed laterally on the motor vehicle is provided. The ultrasonic sensor may have a sensing range defined by a sensor-specific opening angle, and the method may comprise the steps of:

    • acquiring a measurement series of real sensor values for the at least one ultrasonic sensor, which sensor values represent the shortest distance between the object and the motor vehicle, wherein the sensor values are acquired cyclically with a specified time interval,
    • modeling a series of sensor values for the at least one ultrasonic sensor by
      • modeling the object movement using a state vector and a parameter vector, wherein the state vector and parameter vector are calculated from the relevant object variables,
      • modeling the distance sensing in accordance with the state vector and the parameter vector, and
    • determining the state vector and the parameter vector by adapting the modeled sensor values to the measured real sensor values of the at least one ultrasonic sensor by means of an optimization method.


In this way, the relevant object variables, such as the position, orientation, speed, and dimensions of the object, can be acquired in a model-based manner and thus in a closed, consistent form. The prediction error method, a sliding mode observer, the Kalman filter, or the Levenberg-Marquardt method can be considered as suitable optimization methods for making the deviation between the modeled and the real recorded sensor values as small as possible.
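Purely as an illustration of this adaptation step, the following minimal sketch shows how the fit could be set up with a generic least-squares optimizer. The names (residuals, model_sensor_values, fit_object_variables) and the use of scipy.optimize.least_squares are assumptions for illustration, not part of the disclosure.

```python
# Minimal sketch of the adaptation step, assuming a generic least-squares
# optimizer; all names and the optimizer choice are illustrative only.
import numpy as np
from scipy.optimize import least_squares

def residuals(params, measured, model_sensor_values):
    # Deviations between the measured series (N cycles x m sensors) and the
    # series modeled from the stacked state/parameter values in `params`.
    return (measured - model_sensor_values(params)).ravel()

def fit_object_variables(params0, measured, model_sensor_values):
    # Levenberg-Marquardt is one of the optimization methods named above.
    result = least_squares(residuals, params0, method="lm",
                           args=(measured, model_sensor_values))
    return result.x  # estimated state vector and parameter vector, stacked
```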


For example, the sum of the differences between the modeled sensor values and the real sensor values is minimized in order to determine the state vector and parameter vector.


For example, a modeled sensor value vector is formed from the modeled values of the at least one ultrasonic sensor, which modeled sensor value vector is linked to the sensor value vector of the real sensor values of the individual ultrasonic sensors for minimization.


For example, the modeled sensor values are a function of the position of the object relative to the ego vehicle and a function of the appearance of corners of the object within the sensing range of the at least one ultrasonic sensor, wherein the coordinates of the corners of the object are determined in the coordinate system of the motor vehicle.


For example, the relevant variables of the object are given at least by:

    • the longitudinal distance of the object from the motor vehicle,
    • the longitudinal speed of the object relative to the motor vehicle,
    • the lateral distance of the object from the motor vehicle,
    • the lateral speed of the object relative to the motor vehicle,
    • the angle between the longitudinal axis of the object and the longitudinal axis of the motor vehicle,
    • the length of the object, and
    • the width of the object.


For example, the modeled sensor value vector is formed from the modeled distance values of the at least one ultrasonic sensor, wherein the modeled distance value of the at least one ultrasonic sensor results from the minimum of the maximum range of the at least one ultrasonic sensor and calculated distance values between the object and the motor vehicle.


For example, the calculated distance values are ascertained as a function of the position of the object relative to the sensor normal of the at least one ultrasonic sensor of the motor vehicle, wherein the sensor normal is defined as the straight line that extends through the origin of the respective at least one ultrasonic sensor and that stands perpendicularly on the object longitudinal axis.


For example, in order to determine the calculated distance values, it is checked whether the object is located to the left or right of or on the sensor normal.


Furthermore, the check of the position of the object can be carried out by means of activation functions.


A device for detecting, tracking and determining relevant variables of an object in the environment of a motor vehicle is provided in some embodiments, wherein the device is configured and designed to carry out the previously described method, and comprises

    • at least one ultrasonic sensor disposed laterally on the motor vehicle for cyclically determining sensor values that represent distances from an object,
    • an apparatus for determining modeled sensor values as a function of relevant variables of the object, and
    • an optimization apparatus for adapting the modeled sensor values to the real sensor values for each ultrasonic sensor for acquiring the relevant object variables.


Further embodiments are discussed in greater detail with reference to the drawings.


Specific references to components, process steps, and other elements are not intended to be limiting. Further, it is understood that like parts bear the same or similar reference numerals when referring to alternate FIGS.



FIG. 1 shows the definition of the relevant object variables in the case of an object O being detected in the environment of an ego vehicle, wherein no road boundaries or markings are shown in the image for the sake of simplicity. The schematic FIG. 1 shows an ego vehicle E that is moving in the direction of the triangle drawn in the ego vehicle E. An object O that is moving at an angle α relative to the ego vehicle is located above the ego vehicle, wherein the travel direction of the object is given by the triangle drawn in the object. The angle α shown in FIG. 1 between the sides of the two vehicles, the ego vehicle E and the object, is defined by the angle enclosed between the two longitudinal axes of the vehicles (not shown).


The object O is further characterized by its length l and width b as well as by its position $\vec{r}$ relative to the ego vehicle E, wherein the following applies for the position $\vec{r}$ in the xe-ye coordinate system of the ego vehicle:






$$\vec{r} = [x, y]^T,$$

    • wherein the position vector $\vec{r}$ is applied to the geometric centers of gravity of the vehicles E and O.


Furthermore, the object O is moving at a speed $\vec{v}$ relative to the ego vehicle, wherein the following applies for the speed $\vec{v}$ relative to the ego vehicle E:






$$\vec{v} = [v_x, v_y]^T,$$

    • wherein vx and vy are defined in the xe-ye coordinate system of the ego vehicle E.


In order to be able to model the movement of the object O, state variables x1 to x6 are defined, which are combined into one state vector






$$\vec{x} = [x_1, x_2, x_3, x_4, x_5, x_6]^T.$$


The meaning of the individual state variables is as follows:

    • x1: Longitudinal object position,
    • x2: Longitudinal object speed,
    • x3: Lateral object position,
    • x4: Lateral object speed,
    • x5: Angle between the longitudinal axes of the ego vehicle and object, and
    • x6: Temporal change in the angle between the longitudinal axes of the vehicles.


The replication of the movement of the object O can be expressed as follows:






$$\vec{x}(k+1) = A \cdot \vec{x}(k) + K(k) \cdot \vec{e}(k), \quad k = 1, \dots, N, \text{ with } N = \text{number of measurements},$$

    • and with the matrix A defined as follows, wherein T represents the sampling time between two measurements:






$$A = \begin{bmatrix} 1 & T & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & T & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & T \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}.$$





Here, K(k) is the matrix, yet to be determined, for correcting the estimation via the feedback of the error vector $\vec{e}(k)$. The error vector $\vec{e}(k)$ contains the deviations between the modeled values $\hat{y}_i(k)$ and the measured values $y_i(k)$ of the various sensors i = 1 to m, wherein the components of the vector $\vec{e}(k)$ are defined as follows:






$$e_i(k) = y_i(k) - \hat{y}_i(k), \quad \text{with}$$

$$\hat{y}_i(k) = h_i(\vec{x}(k), \vec{\theta}(k)).$$


Here, $h_i$ is the so-called output function and describes the distance sensing with the sensor i as a function of $\vec{x}$ and the parameter vector $\vec{\theta}$, wherein the parameter vector $\vec{\theta}$ contains the unknown width and length of the object as well as the coefficients of the matrix K, and is estimated using a suitable optimization method.
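As a concrete reading of this motion model, the sketch below builds the matrix A for a sampling time T and performs one step of the state update with error feedback; the function names and array shapes are assumptions for illustration.

```python
import numpy as np

def transition_matrix(T):
    # State transition matrix A for the sampling time T between two measurements.
    A = np.eye(6)
    A[0, 1] = T  # longitudinal position x1 integrates longitudinal speed x2
    A[2, 3] = T  # lateral position x3 integrates lateral speed x4
    A[4, 5] = T  # angle x5 integrates its temporal change x6
    return A

def state_update(x, A, K, e):
    # One step of x(k+1) = A·x(k) + K(k)·e(k), where e(k) = y(k) - y_hat(k)
    # stacks the deviations of the m sensors and K is the 6 x m correction
    # matrix whose coefficients belong to the parameter vector.
    return A @ x + K @ e
```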



FIGS. 2a to 2c portray the three possible situations of the sensing of the distance between an ego vehicle E and an adjacent vehicle, i.e., an object O, by means of an ultrasonic sensor SR1. Here, the replication or modeling of the distance sensing with an ultrasonic sensor plays an important role, since this influences not only the complexity of the algorithm, but also the identifiability of unknown parameters such as the object length, and the observability of unmeasurable object variables, such as the position, speed and/or angle. Since an ultrasonic sensor only provides the shortest distance from the object, the information content of the distance measurement and also the modeling thereof differs depending on whether or not the object O to be detected completely covers the sensing range of the sensor SR1, as shown in FIGS. 2a to 2c. For simplification purposes, only one sensor SR1 is shown in the ego vehicle E, which sensor is laterally disposed to the left in the rear region of the ego vehicle, wherein the sensor SR1 shown has a sensing range W within which detection of an object O is possible. Furthermore, the travel direction of the two vehicles O, E is indicated by means of the triangle in the respective vehicles O, E.



FIG. 2a shows the situation in which the vehicle or rather object O is approaching the ego vehicle E from behind on the outside lane 3 of a roadway 1, the ego vehicle being located on the inside lane 2 of the roadway 1. The sensor SR1 of the ego vehicle E detects a distance s1 to the front of the object vehicle O within its sensing range W, wherein the sensor SR1 always provides the shortest distance to the object O.



FIG. 2b shows the situation in which the vehicle O on the outside lane 3 has drawn closer to the ego vehicle E than in the situation in FIG. 2a. The sensor SR1 then measures the distance s2 as the shortest distance to the object vehicle O within the sensing range W.


In FIG. 2c, the object vehicle O has moved up to the side next to the ego vehicle E, and therefore the sensor SR1 now ascertains a distance s3 as the shortest distance.


It is apparent from the sequence of the approach of the object vehicle O to the ego vehicle E that, in FIG. 2a, firstly the distance s1 to the front of the object vehicle O is ascertained, in FIG. 2b, as the object approaches further, the distance s2 to the front corner of the object vehicle is ascertained, and finally, in FIG. 2c, the lateral distance s3 to the object vehicle O is ascertained, as follows:






s1>s2>s3.


In the case of the measurements shown in FIGS. 2a to 2c, these measurements s1, s2 and s3 are therefore components of the measurement vector ySR1(k) of the sensor SR1, with k ∈ {1, …, N}, N = number of measurements.



FIG. 3 shows the modeling of the distance sensing of the sensed object O with respect to the ego vehicle E, i.e., the modeling of the circumstances shown in FIGS. 2a to 2c. Here, the object O is moving along its longitudinal axis to the right and the ego vehicle E is moving along its longitudinal axis to the right as well, wherein the two directions of movement enclose an angle α. The ego vehicle E is shown with two sensors SR1 and SR2 laterally disposed on its left-hand side, wherein only the back left sensor SR1 is considered in this discussion.


The coordinate system xe,ye of the ego vehicle E is located in the center thereof and the zero point of the coordinate system xS,yS of the sensor SR1 considered here is disposed centrally in the transmission plane of the sensor SR1. The coordinate system of the sensor SR1 is transformed into the coordinate system of the ego vehicle E by adding up the coordinates of the installation position of the sensor SR1.


The reference signs E1 to E4 denote the corners of the object O as drawn, wherein E1 forms the front left corner, E2 forms the front right corner, E3 forms the back left corner, and E4 forms the back right corner of the object. Consequently, the connection E1E2 forms the front edge and the connection E3E4 forms the back edge of the object, wherein the width of the object O is denoted by b and the length is denoted by l.


Furthermore, the sensing range W of the sensor SR1 is shown and is defined by an opening angle φ. The perpendicular a to the longitudinal axis LO of the object is drawn in the coordinate origin of the sensor SR1 and serves to define and determine the distances μ1 and μ2 and is referred to as the sensor normal. Here, the distance μ1 is defined as the distance between the front edge E1E2 and the perpendicular a and the distance μ2 is defined as the distance between the back edge E3E4 of the object O and the perpendicular a. Finally, the center point M of the object O in the xS,yS coordinate system of the sensor SR1 has the coordinates xobj and yobj.


Furthermore, the following applies for the above-defined state vector $\vec{x} = [x_1, x_2, x_3, x_4, x_5, x_6]^T$:






$$x_{obj} = x_1(k),$$

$$y_{obj} = x_3(k),$$

$$\alpha = x_5(k),$$

$$\theta_1 = 0.5 \cdot l, \quad \text{and}$$

$$\theta_2 = 0.5 \cdot b.$$


The calculation steps for determining the distance value of a sensor, in this case the sensor SR1, are thus the following:


Firstly, the distances μ1 and μ2 are determined, wherein the straight line a, referred to as the sensor normal, is expressed in the Hesse normal form, and the distances μ1 and μ2 are calculated as the orthogonal distances of the corner points E1 and E3 from this straight line. The following equations then result:





$$\mu_1(k) = \cos x_5(k) \cdot x_{E1}(k) + \sin x_5(k) \cdot y_{E1}(k), \quad \text{and}$$

$$\mu_2(k) = \cos x_5(k) \cdot x_{E3}(k) + \sin x_5(k) \cdot y_{E3}(k).$$


The coordinates of the corners E1 to E4 of the object O in the coordinate system xe,ye of the ego vehicle E are as follows:






E1:
$$x_{E1}(k) = x_1(k) + \theta_1 \cdot \cos x_5(k) - \theta_2 \cdot \sin x_5(k)$$
$$y_{E1}(k) = x_3(k) + \theta_1 \cdot \sin x_5(k) + \theta_2 \cdot \cos x_5(k)$$

E2:
$$x_{E2}(k) = x_1(k) + \theta_1 \cdot \cos x_5(k) + \theta_2 \cdot \sin x_5(k)$$
$$y_{E2}(k) = x_3(k) + \theta_1 \cdot \sin x_5(k) - \theta_2 \cdot \cos x_5(k)$$

E3:
$$x_{E3}(k) = x_1(k) - \theta_1 \cdot \cos x_5(k) - \theta_2 \cdot \sin x_5(k)$$
$$y_{E3}(k) = x_3(k) - \theta_1 \cdot \sin x_5(k) + \theta_2 \cdot \cos x_5(k)$$

E4:
$$x_{E4}(k) = x_1(k) - \theta_1 \cdot \cos x_5(k) + \theta_2 \cdot \sin x_5(k)$$
$$y_{E4}(k) = x_3(k) - \theta_1 \cdot \sin x_5(k) - \theta_2 \cdot \cos x_5(k)$$


Here, the coordinates are transformed from the object coordinate system x0,y0 into the ego coordinate system xe,ye using the following transformation matrix:






$$T = \begin{bmatrix} \cos x_5(k) & -\sin x_5(k) & x_1(k) \\ \sin x_5(k) & \cos x_5(k) & x_3(k) \end{bmatrix}.$$





The coordinates of the corner points E1, E2, E3, E4 in the object coordinate system x0,y0 are:







$$\begin{bmatrix} l/2 \\ b/2 \end{bmatrix}, \quad \begin{bmatrix} l/2 \\ -b/2 \end{bmatrix}, \quad \begin{bmatrix} -l/2 \\ b/2 \end{bmatrix}, \quad \text{and} \quad \begin{bmatrix} -l/2 \\ -b/2 \end{bmatrix}.$$






Consequently, the object corners are calculated in the ego coordinate system xe,ye as follows:








$$E_1 = T \cdot \begin{bmatrix} l/2 \\ b/2 \\ 1 \end{bmatrix}, \quad E_2 = T \cdot \begin{bmatrix} l/2 \\ -b/2 \\ 1 \end{bmatrix}, \quad E_3 = T \cdot \begin{bmatrix} -l/2 \\ b/2 \\ 1 \end{bmatrix}, \quad \text{and} \quad E_4 = T \cdot \begin{bmatrix} -l/2 \\ -b/2 \\ 1 \end{bmatrix}.$$








Performing the transformations leads to the above results for the corners E1, E2, E3 and E4.
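The corner calculation and the Hesse-normal-form distances μ1 and μ2 can be condensed into a short sketch. The 2×3 matrix T is taken from the preceding equations, while the function names and the array layout are assumptions for illustration.

```python
import numpy as np

def object_corners(x1, x3, x5, l, b):
    # Corners E1..E4 in the ego coordinate system via E_i = T·[±l/2, ±b/2, 1]^T.
    T = np.array([[np.cos(x5), -np.sin(x5), x1],
                  [np.sin(x5),  np.cos(x5), x3]])
    corners_obj = np.array([[ l/2,  b/2, 1.0],   # E1: front left
                            [ l/2, -b/2, 1.0],   # E2: front right
                            [-l/2,  b/2, 1.0],   # E3: back left
                            [-l/2, -b/2, 1.0]])  # E4: back right
    return corners_obj @ T.T  # one (x, y) row per corner

def normal_distances(corners, x5):
    # mu1, mu2: orthogonal distances of E1 and E3 from the sensor normal a,
    # with the straight line a expressed in the Hesse normal form.
    (xE1, yE1), _, (xE3, yE3), _ = corners
    mu1 = np.cos(x5) * xE1 + np.sin(x5) * yE1
    mu2 = np.cos(x5) * xE3 + np.sin(x5) * yE3
    return mu1, mu2
```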


Subsequently, the modeled or estimated distance value ŷ(k) of a sensor, in this case the sensor SR1 in the example of FIG. 3, is determined. For this purpose, the following three cases, which correspond to the situations of FIGS. 2a to 2c, are distinguished:

    • A(k): The case with μ1(k)<0 is considered. Here, the object O is located to the left of the straight line a, which extends through the origin of the coordinate system of the sensor SR1 and which is perpendicular to the longitudinal axis LO of the object, as already defined above. Here, three subcases are distinguished once more.
      • A1) If the following logic operation is satisfied:







(



cos



φ
2


·


x

E

1


(
k
)



+

sin



φ
2

·


y

E

1


(
k
)





0

)



AND







(



cos



φ
2

·

x

E

2




+

sin



φ
2

·


y

E

2


(
k
)





0

)










        • this means that only the corner E1 is located within the sensing range W of the sensor SR1, and the distance s between the object O and the ego vehicle E is calculated as follows:














$$s = \frac{\mu_1(k)}{\sin\left(-\frac{\varphi}{2} - x_5(k)\right)},$$








      • wherein s denotes the shortest distance to the object O.

      • A2) If the condition:











$$\cos\frac{\varphi}{2} \cdot x_{E2}(k) + \sin\frac{\varphi}{2} \cdot y_{E2}(k) > 0$$










        • applies, which means that the corner E2 is located within the sensing range W of the sensor SR1, then s is calculated as follows:













$$s = \sqrt{x_{E2}(k)^2 + y_{E2}(k)^2}.$$

      • A3) If neither the corner E1 nor the corner E2 is within the sensing range W of the sensor SR1 under the condition μ1(k)<0, the following applies:






$$s = s_{max},$$

        • wherein smax denotes the maximum range of the sensor SR1.
    • B(k): The case μ2(k)>0 will now be considered, which means that the object O is located to the right of the straight line a shown in FIG. 3.
      • B1) If the following condition is met:







$$\cos\left(-\frac{\varphi}{2}\right) \cdot x_{E4}(k) + \sin\left(-\frac{\varphi}{2}\right) \cdot y_{E4}(k) > 0,$$










        • which means that the corner E4 is located within the sensing range W of the sensor SR1, then the distance s of the object O is calculated as follows:













$$s = \sqrt{x_{E4}(k)^2 + y_{E4}(k)^2}.$$

      • B2) If the following AND logic operation is satisfied:








$$\left(\cos\left(-\frac{\varphi}{2}\right) \cdot x_{E4}(k) + \sin\left(-\frac{\varphi}{2}\right) \cdot y_{E4}(k) \le 0\right) \text{ AND } \left(\cos\left(-\frac{\varphi}{2}\right) \cdot x_{E3}(k) + \sin\left(-\frac{\varphi}{2}\right) \cdot y_{E3}(k) \ge 0\right),$$











        • which means that the corner E3 is located within the sensing range of the sensor SR1, then the distance s is calculated as follows:













$$s = \frac{\mu_2(k)}{\sin\left(-\frac{\varphi}{2} - x_5(k)\right)}.$$









      • B3) If neither the corner E3 nor E4 is within the sensing range of the sensor SR1 under the condition μ2(k)>0, the following applies:











$$s = s_{max}.$$

    • C(k): This relates to the case in which the object O completely covers the sensing range W of the sensor SR1, as is the case in the example of FIG. 2c. The distance s between the object and the ego vehicle E is then calculated as follows:









$$s = \cos\left(x_5(k) + \frac{\pi}{2}\right) \cdot x_1(k) + \sin\left(x_5(k) + \frac{\pi}{2}\right) \cdot x_3(k) - \theta_2.$$









    • The modeled distance value of an ultrasonic sensor, in this example the sensor SR1, then results as follows:









$$\hat{y}_{SR1}(k) = \min(s, s_{max}).$$


The distinctions described above under A(k), B(k), and C(k) are repeated for each ultrasonic sensor i = 1 to m considered in the model, m being the number of sensors involved. In the example of FIG. 3, these are the sensors SR1 and SR2.
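Written out as plain IF-THEN logic, the case distinction for one sensor could look as follows. The inequality directions follow the conditions reconstructed above, and the helper names (modeled_distance, proj) are assumptions for illustration, not a prescribed implementation.

```python
import numpy as np

def modeled_distance(corners, mu1, mu2, x5, phi, theta2, s_max, x_obj, y_obj):
    # Piecewise distance model of one ultrasonic sensor (cases A, B, C above).
    (xE1, yE1), (xE2, yE2), (xE3, yE3), (xE4, yE4) = corners
    proj = lambda x, y, ang: np.cos(ang) * x + np.sin(ang) * y

    if mu1 < 0:                                    # A: object left of normal a
        if proj(xE1, yE1, phi/2) >= 0 and proj(xE2, yE2, phi/2) <= 0:
            s = mu1 / np.sin(-phi/2 - x5)          # A1: only E1 in range W
        elif proj(xE2, yE2, phi/2) > 0:
            s = np.hypot(xE2, yE2)                 # A2: E2 in range W
        else:
            s = s_max                              # A3: no corner in range W
    elif mu2 > 0:                                  # B: object right of normal a
        if proj(xE4, yE4, -phi/2) > 0:
            s = np.hypot(xE4, yE4)                 # B1: E4 in range W
        elif proj(xE3, yE3, -phi/2) >= 0:
            s = mu2 / np.sin(-phi/2 - x5)          # B2: E3 in range W
        else:
            s = s_max                              # B3
    else:                                          # C: object covers range W
        s = (np.cos(x5 + np.pi/2) * x_obj
             + np.sin(x5 + np.pi/2) * y_obj - theta2)
    return min(s, s_max)                           # modeled value y_hat(k)
```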


The total output of the complete model is then composed of the individual estimated sensor values as follows:






$$\hat{y} = [\hat{y}_1(k), \hat{y}_2(k), \dots, \hat{y}_m(k)]^T, \quad \text{with } m = \text{number of sensors}.$$


In the example of FIG. 3, the sensors SR1 and SR2 are relevant, and therefore the modeled distance vector for the two sensors SR1, SR2 is:






$$\hat{y} = [\hat{y}_{SR1}(k), \hat{y}_{SR2}(k)]^T.$$


By applying a suitable optimization method, for example the prediction error method, a sliding mode observer, the Kalman filter, the Levenberg-Marquardt method, etc., the state vector $\vec{x}$ and the parameter vector $\vec{\theta}$ are adapted such that the measured system output y(k) and the modeled system output $\hat{y}(k)$ match as well as possible within the meaning of a defined optimization criterion.


A typical optimization criterion is the minimization of the sum of the deviations F(N) between the measured and the modeled system output, i.e.






$$F(N) = \sum_{k=1}^{N} \left(y(k) - \hat{y}(k)\right)^2, \quad \text{with } N = \text{number of measurements}.$$


This is not the only option for minimizing the deviations. Instead of squaring the difference, the absolute value of the difference between the measured and the modeled system output can be used, for example.
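As a small sketch, both variants of the criterion can be written as follows; the function name is an assumption for illustration.

```python
import numpy as np

def criterion(y, y_hat, squared=True):
    # F(N) = sum over k of (y(k) - y_hat(k))^2, or the absolute-value variant.
    dev = np.asarray(y) - np.asarray(y_hat)
    return np.sum(dev**2) if squared else np.sum(np.abs(dev))
```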



FIG. 4 shows the result of the modeling for the case of parallel passing for the sensor SR1. The ego vehicle E is therefore overtaken by an object vehicle O that passes by in parallel. The measurements s(k) of the sensor SR1 are shown as stars as a function of the time t, wherein the measurement index k can also be understood as the point in time t. There is therefore an unambiguous assignment of the number k of the measurement to the point in time t. The model M, i.e., the modeled system output $\hat{y}(k)$, is shown as a solid line.


Therefore, the state vector $\vec{x}$ described at the outset and the parameter vector $\vec{\theta}$ are determined by means of the above-mentioned minimization of the sum of the deviations.



FIG. 5 shows the result of the modeling for the case of oblique passing, i.e., the ego vehicle E is overtaken by an object vehicle that is passing by at an angle, wherein only one sensor SR1 is considered here as well. The measurements s(k) of the sensor SR1 are shown as stars as a function of the time t, wherein the measurement index k can also be understood as the point in time t. There is therefore an unambiguous assignment of the number k of the measurement to the point in time t. The model M, i.e., the modeled system output $\hat{y}(k)$, is shown as a solid line that extends obliquely in the lower region, unlike in the case of FIG. 4.


The state vector $\vec{x}$ described at the outset and the parameter vector $\vec{\theta}$ can therefore be determined from the model M.


In the case of the described modeling of the distance sensing, the situation at hand, A(k), B(k) or C(k), is checked using IF-THEN queries. Instead of this sequence of queries, in another embodiment, specially parameterized activation functions can also be used. This will be explained in the following based on the example of the check as to whether the object is located to the left of, to the right of, or on the straight line a.


The three cases A(k) to C(k) already explained above are distinguished.


A(k): μ1(k) < 0

Here, the case with μ1(k)<0 is therefore relevant and is reproduced in FIGS. 2a and 2b. The object O is located to the left of the straight line a, which extends through the origin of the coordinate system of the sensor SR1 and which is perpendicular to the longitudinal axis LO of the object, as already defined above. The straight line a forms, so to speak, a normal on the longitudinal axis of the object O, which extends through the origin of the sensor SR1.


For this first case with μ1(k)<0, in which the object O is located to the left of the straight line a, the following transition function ƒ1(k) is defined:








$$f_1(k) = -0.5 \cdot \frac{\beta \cdot \mu_1(k)}{\sqrt{(\beta \cdot \mu_1(k))^2 + 1}} + 1.$$





Here, the factor β is a constant parameter that defines the steepness of the transition from the value “One” to the value “Zero” or, alternatively, from the value “Zero” to the value “One”.


Using the above transition function ƒ1(k), an activation function δ1(k) for the case μ1(k)<0 is defined as follows:





$$\delta_1(k) = f_1(k) - 0.5.$$


This first activation function δ1(k) is shown in FIG. 6.


B(k): μ2(k) > 0


The case μ2(k)>0 will now be considered, which means that the object O is located to the right of the straight line a shown in FIG. 3. This case is almost a mirror image of the first case shown in FIGS. 2a and 2b.


For the second case with μ2(k)>0, in which the object O is located to the right of the straight line a, the following transition function ƒ2(k) is defined:








$$f_2(k) = 0.5 \cdot \frac{\beta \cdot \mu_2(k)}{\sqrt{(\beta \cdot \mu_2(k))^2 + 1}} + 1.$$





Here, the factor β is defined as above and defines the steepness of the transition from “Zero” to “One” or vice versa.


Using the above transition function ƒ2(k), a further activation function δ2(k) for the case μ2(k)>0 is defined as follows:





$$\delta_2(k) = f_2(k) - 0.5.$$


The course of this second activation function is shown in FIG. 7.


C(k): Else


Thirdly, the case is considered in which the object O completely covers the sensing range W of the sensor SR1, i.e., is located on the straight line a, as is the case in the example of FIG. 2c.


In this case, a third activation function δ3(k) is defined as follows:





$$\delta_3(k) = (f_1(k) - 1.5) \cdot (f_2(k) - 1.5).$$


This third activation function δ3(k) then only assumes the value of "One" if neither μ1(k)<0 nor μ2(k)>0 applies, i.e., if μ1(k)≥0 and μ2(k)≤0 apply at the same time.
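A compact sketch of the three activation functions follows; the formulas mirror those above, while the function names and the vectorized NumPy form are merely illustrative choices. For example, activations(-1.0, -1.0) yields δ1 ≈ 1 and δ2 ≈ δ3 ≈ 0, matching case A.

```python
import numpy as np

def f1(mu1, beta):
    # Smooth transition: approx. 1.5 for mu1 << 0, approx. 0.5 for mu1 >> 0.
    return -0.5 * beta * mu1 / np.sqrt((beta * mu1)**2 + 1) + 1.0

def f2(mu2, beta):
    # Mirror image: approx. 0.5 for mu2 << 0, approx. 1.5 for mu2 >> 0.
    return 0.5 * beta * mu2 / np.sqrt((beta * mu2)**2 + 1) + 1.0

def activations(mu1, mu2, beta=10.0):
    d1 = f1(mu1, beta) - 0.5                            # approx. 1 if mu1 < 0
    d2 = f2(mu2, beta) - 0.5                            # approx. 1 if mu2 > 0
    d3 = (f1(mu1, beta) - 1.5) * (f2(mu2, beta) - 1.5)  # approx. 1 otherwise
    return d1, d2, d3
```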


By means of the activation functions δ1(k), δ2(k) and δ3(k), the query of the cases as to whether the object O is located to the left of the straight line a, to the right of the straight line a, or on the straight line a, can be represented compactly as follows:






$$s = \delta_1(k) \cdot A(k) + \delta_2(k) \cdot B(k) + \delta_3(k) \cdot C(k).$$


In order to be able to calculate the above equation, the cases or functions A(k) to C(k) must be defined. This will be shown below based on the example of the function A(k). The functions B(k) and C(k) then result in a similar manner:


Firstly, the expressions within the “IF-THEN” queries in the case A(k) are summarized as follows:







$$\rho_1 = \cos\frac{\varphi}{2} \cdot x_{E1}(k) + \sin\frac{\varphi}{2} \cdot y_{E1}(k), \quad \text{and}$$

$$\rho_2 = \cos\frac{\varphi}{2} \cdot x_{E2}(k) + \sin\frac{\varphi}{2} \cdot y_{E2}(k).$$







Further activation functions, which are shown in FIGS. 8 to 11, are defined by means of the above-described functions ρ1 and ρ2.












$$f_{\rho 1}(k) = 0.5 \cdot \frac{\beta \cdot \rho_1(k)}{\sqrt{(\beta \cdot \rho_1(k))^2 + 1}} + 0.5, \quad \text{(FIG. 8)}$$

$$f_{\rho 2}(k) = -0.5 \cdot \frac{\beta \cdot \rho_2(k)}{\sqrt{(\beta \cdot \rho_2(k))^2 + 1}} + 0.5, \quad \text{(FIG. 9)}$$









    • as well as their reflections on the ordinate:
















$$\tilde{f}_{\rho 1}(k) = 0.5 \cdot \frac{\beta \cdot (-\rho_1(k))}{\sqrt{(\beta \cdot \rho_1(k))^2 + 1}} + 0.5, \quad \text{(FIG. 10), and}$$

$$\tilde{f}_{\rho 2}(k) = -0.5 \cdot \frac{\beta \cdot (-\rho_2(k))}{\sqrt{(\beta \cdot \rho_2(k))^2 + 1}} + 0.5. \quad \text{(FIG. 11)}$$







Furthermore, the calculation of the distances s for the subcases A1, A2 and A3 of the case A(k) is defined as follows:








$$s_{A1} = \frac{\mu_1(k)}{\sin\left(-\frac{\varphi}{2} - x_5(k)\right)},$$

$$s_{A2} = \sqrt{x_{E2}(k)^2 + y_{E2}(k)^2}, \quad \text{and}$$

$$s_{A3} = s_{max}.$$





If the distance s in the case A(k) is denoted by sA, the following results for the calculation of said distance in the case A(k):






$$s_A = f_{\rho 1}(k) \cdot f_{\rho 2}(k) \cdot s_{A1} + \tilde{f}_{\rho 2}(k) \cdot s_{A2} + \tilde{f}_{\rho 1}(k) \cdot f_{\rho 2}(k) \cdot s_{A3}.$$


Consequently, the equation for the general determination of the distance s:






$$s = \delta_1(k) \cdot A(k) + \delta_2(k) \cdot B(k) + \delta_3(k) \cdot C(k)$$





can be rewritten as:






$$s = \delta_1(k) \cdot \left(f_{\rho 1}(k) \cdot f_{\rho 2}(k) \cdot s_{A1} + \tilde{f}_{\rho 2}(k) \cdot s_{A2} + \tilde{f}_{\rho 1}(k) \cdot f_{\rho 2}(k) \cdot s_{A3}\right) + \delta_2(k) \cdot B(k) + \delta_3(k) \cdot C(k).$$


The further cases B(k) and C(k) can be represented in a similar manner. The final value of the modeled distance value of an ultrasonic sensor then results as:






$$\hat{y}(k) = \min(s, s_{max}),$$

    • as was also shown above in conjunction with the IF-THEN queries.
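Assembled in code, the smooth variant of the case-A distance could read as below; because every factor is differentiable, this form avoids the discrete IF-THEN branches, which can benefit gradient-based optimization methods. The sigmoid helper and the argument names are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    # Smooth 0..1 transition underlying all activation functions above.
    return 0.5 * z / np.sqrt(z**2 + 1) + 0.5

def smooth_distance_case_A(rho1, rho2, sA1, sA2, sA3, beta, s_max):
    f_r1 = sigmoid(beta * rho1)    # f_rho1: approx. 1 if rho1 > 0 (FIG. 8)
    f_r2 = sigmoid(-beta * rho2)   # f_rho2: approx. 1 if rho2 < 0 (FIG. 9)
    ft_r1 = sigmoid(-beta * rho1)  # reflection on the ordinate   (FIG. 10)
    ft_r2 = sigmoid(beta * rho2)   # reflection on the ordinate   (FIG. 11)
    s_A = f_r1 * f_r2 * sA1 + ft_r2 * sA2 + ft_r1 * f_r2 * sA3
    return min(s_A, s_max)         # final modeled value y_hat(k)
```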


LIST OF REFERENCE NUMERALS





    • O Object

    • E Ego vehicle

    • $\vec{r}$ Position $\vec{r} = [x, y]^T$

    • $\vec{v}$ Speed $\vec{v} = [v_x, v_y]^T$

    • l Length of the object O

    • α Angle between the longitudinal axes of the object and of the ego vehicle


    • 1 Roadway


    • 2 Lane


    • 3 Lane

    • S Sensor

    • s1 Measured distance

    • s2 Measured distance

    • s3 Measured distance

    • W Sensing range

    • xe,ye Coordinate system of the ego vehicle

    • xs,ys Coordinate system of the sensor SR1

    • x0,y0 Coordinate system of the object

    • SR1 Sensor 1 of the ego vehicle

    • SR2 Sensor 2 of the ego vehicle

    • xobj Distance between center point of object and sensor SR1 in xs direction

    • yobj Distance between center point of object and sensor SR1 in ys direction

    • E1 Front left corner of the object

    • E2 Front right corner of the object

    • E3 Back left corner of the object

    • E4 Back right corner of the object

    • E1E2 Front edge of the object

    • E3E4 Back edge of the object

    • b Width of the object

    • μ1 Distance of the front edge E1-E2 of the object from the sensor SR1

    • μ2 Distance of the back edge E3-E4 of the object from the sensor SR1

    • a Sensor normal of the sensor SR1 on the longitudinal axis of the object

    • φ Opening angle of the sensor SR1

    • LO Longitudinal axis of object

    • M Model for the sensor SR1

    • s(t) Measured distance at the point in time t

    • δ1 Activation function

    • δ2 Activation function

    • δ3 Activation function

    • ρ1 Query function

    • ρ2 Query function

    • ƒρ1 Activation function

    • ƒρ2 Activation function

    • $\tilde{f}_{\rho 1}$ Activation function

    • $\tilde{f}_{\rho 2}$ Activation function





The invention has been described in the preceding using various exemplary embodiments. Other variations to the disclosed embodiments may be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor, module or other unit or device may fulfil the functions of several items recited in the claims.


The term “exemplary” used throughout the specification means “serving as an example, instance, or exemplification” and does not mean “preferred” or “having advantages” over other embodiments. The term “in particular” and “particularly” used throughout the specification means “for example” or “for instance”.


The mere fact that certain measures are recited in mutually different dependent claims or embodiments does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.

Claims
  • 1. A method for determining relevant variables of an object in the environment of a motor vehicle using at least one ultrasonic sensor disposed laterally on the motor vehicle, wherein the ultrasonic sensor has a sensing range defined by a sensor-specific opening angle, comprising: acquiring a measurement series of real sensor values for the at least one ultrasonic sensor, which sensor values represent the shortest distance between the object and the motor vehicle, wherein the sensor values of the at least one ultrasonic sensor are acquired cyclically with a specified time interval; modeling a series of sensor values for the at least one ultrasonic sensor by: modeling the object movement using a state vector $\vec{x}$ and a parameter vector $\vec{\theta}$, wherein the state vector $\vec{x}$ and the parameter vector $\vec{\theta}$ are calculated from the relevant object variables, and modeling the distance sensing in accordance with the state vector $\vec{x}$ and the parameter vector $\vec{\theta}$; and determining the state vector $\vec{x}$ and the parameter vector $\vec{\theta}$ by adapting the modeled sensor values to the measured real sensor values of the at least one ultrasonic sensor using an optimization method.
  • 2. The method of claim 1, wherein the sum of the differences between the modeled sensor values and the real sensor values is minimized in order to determine the state vector and parameter vector.
  • 3. The method of claim 2, wherein a modeled sensor value vector is formed from the modeled distance values of the at least one ultrasonic sensor, which modeled sensor value vector is linked to the sensor value vector of the real sensor values of the individual ultrasonic sensors for minimization.
  • 4. The method of claim 1, wherein the modeled sensor values are a function of the position of the object relative to the ego vehicle and a function of the appearance of corners of the object within the sensing range of the at least one ultrasonic sensor, wherein the coordinates of the corners of the object are determined in the coordinate system of the motor vehicle.
  • 5. The method of claim 1, wherein the relevant variables of the object are given at least by one or more of: the longitudinal distance of the object from the motor vehicle; the longitudinal speed of the object relative to the motor vehicle; the lateral distance of the object from the motor vehicle; the lateral speed of the object relative to the motor vehicle; the angle between the longitudinal axis of the object and the longitudinal axis of the motor vehicle; the length of the object; and the width of the object.
  • 6. The method of claim 3, wherein the modeled sensor value vector is formed from the modeled distance values of the at least one ultrasonic sensor, wherein the modeled distance value of the at least one ultrasonic sensor results from the minimum of the maximum range of the at least one ultrasonic sensor and calculated distance values between the object and the motor vehicle.
  • 7. The method of claim 6, wherein the calculated distance values are determined as a function of the position of the object relative to the sensor normal of the at least one ultrasonic sensor of the motor vehicle, wherein the sensor normal is defined as the straight line that extends through the origin of the respective at least one ultrasonic sensor and that stands perpendicularly on the object longitudinal axis.
  • 8. The method of claim 7, wherein, in order to determine the calculated distance values, it is determined whether the object is located to the left or right of or on the sensor normal.
  • 9. The method of claim 8, wherein the determination of the position of the object is carried out using activation functions.
  • 10. A device for determining relevant variables of an object in the environment of a motor vehicle, comprising: at least one ultrasonic sensor disposed laterally on the motor vehicle for cyclically determining sensor values that represent distances from an object; an apparatus for determining modeled sensor values as a function of relevant variables of the object; and an optimization apparatus for adapting the modeled sensor values to the real sensor values for each ultrasonic sensor for acquiring the relevant object variables; wherein the device is configured to: acquire a measurement series of real sensor values for the at least one ultrasonic sensor, which sensor values represent the shortest distance between the object and the motor vehicle, wherein the sensor values of the at least one ultrasonic sensor are acquired cyclically with a specified time interval; model a series of sensor values for the at least one ultrasonic sensor by: modeling the object movement using a state vector $\vec{x}$ and a parameter vector $\vec{\theta}$, wherein the state vector $\vec{x}$ and the parameter vector $\vec{\theta}$ are calculated from the relevant object variables, and modeling the distance sensing in accordance with the state vector $\vec{x}$ and the parameter vector $\vec{\theta}$; and determine the state vector $\vec{x}$ and the parameter vector $\vec{\theta}$ by adapting the modeled sensor values to the measured real sensor values of the at least one ultrasonic sensor using an optimization method.
  • 11. The method of claim 2, wherein the modeled sensor values are a function of the position of the object relative to the ego vehicle and a function of the appearance of corners of the object within the sensing range of the at least one ultrasonic sensor, wherein the coordinates of the corners of the object are determined in the coordinate system of the motor vehicle.
  • 12. The method of claim 3, wherein the modeled sensor values are a function of the position of the object relative to the ego vehicle and a function of the appearance of corners of the object within the sensing range of the at least one ultrasonic sensor, wherein the coordinates of the corners of the object are determined in the coordinate system of the motor vehicle.
  • 13. The method of claim 2, wherein the relevant variables of the object are given at least by one or more of: the longitudinal distance of the object from the motor vehicle; the longitudinal speed of the object relative to the motor vehicle; the lateral distance of the object from the motor vehicle; the lateral speed of the object relative to the motor vehicle; the angle between the longitudinal axis of the object and the longitudinal axis of the motor vehicle; the length of the object; and the width of the object.
  • 14. The method of claim 3, wherein the relevant variables of the object are given at least by one or more of: the longitudinal distance of the object from the motor vehicle; the longitudinal speed of the object relative to the motor vehicle; the lateral distance of the object from the motor vehicle; the lateral speed of the object relative to the motor vehicle; the angle between the longitudinal axis of the object and the longitudinal axis of the motor vehicle; the length of the object; and the width of the object.
  • 15. The method of claim 4, wherein the relevant variables of the object are given at least by one or more of: the longitudinal distance of the object from the motor vehicle; the longitudinal speed of the object relative to the motor vehicle; the lateral distance of the object from the motor vehicle; the lateral speed of the object relative to the motor vehicle; the angle between the longitudinal axis of the object and the longitudinal axis of the motor vehicle; the length of the object; and the width of the object.
  • 16. The method of claim 4, wherein the modeled sensor value vector is formed from the modeled distance values of the at least one ultrasonic sensor, wherein the modeled distance value of the at least one ultrasonic sensor results from the minimum of the maximum range of the at least one ultrasonic sensor and calculated distance values between the object and the motor vehicle.
  • 17. The method of claim 5, wherein the modeled sensor value vector is formed from the modeled distance values of the at least one ultrasonic sensor, wherein the modeled distance value of the at least one ultrasonic sensor results from the minimum of the maximum range of the at least one ultrasonic sensor and calculated distance values between the object and the motor vehicle.
Priority Claims (2)
Number Date Country Kind
10 2020 214 619.5 Nov 2020 DE national
10 2021 203 497.7 Apr 2021 DE national
PCT Information
Filing Document Filing Date Country Kind
PCT/EP2021/078237 10/13/2021 WO