This application claims priority to and the benefit of Korean Patent Application No. 10-2017-0046772, filed on Apr. 11, 2017, which is incorporated herein by reference in its entirety.
Forms of the present disclosure relate to a vehicle and a method for collision avoidance assistance.
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
A vehicle is a device configured to carry a human being or an object to a destination while traveling on a road or tracks. The vehicle may move to various locations using at least one wheel installed on a body thereof. Examples of the vehicle include a three or four-wheeled vehicle, a two-wheeled vehicle such as a motorcycle, construction equipment, a bicycle, a train traveling on rails on a railroad, etc.
Research has been actively conducted on a vehicle with an advanced driver assistance system (ADAS) which actively provides information regarding a state of the vehicle, a state of a driver, and an ambient environment to decrease a burden on the driver and increase driver convenience.
One example of the ADAS installed in a vehicle is a parking collision-avoidance assistance (PCA) system. The PCA system determines a collision possibility between a vehicle and a pedestrian or an obstacle near the vehicle, and gives a warning or brakes the vehicle while the vehicle is running at a low speed.
The present disclosure is directed to accurately estimating the distance between a vehicle and a pedestrian. In particular, when the pedestrian is located behind the vehicle and on a road with a slope or a gradient, the present disclosure may reduce or prevent a collision between the vehicle and the pedestrian when the vehicle moves backward.
Additional aspects of the present disclosure will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present disclosure.
In accordance with one aspect of the present disclosure, a vehicle includes: a camera configured to obtain an image of an object behind the vehicle by photographing the object when the vehicle moves backward, and to obtain coordinates of at least one feature point spaced a predetermined distance from the object; a controller configured to establish an estimated value of a vector indicating a state of the vehicle on the basis of coordinates of the vehicle, coordinates of the object, and the coordinates of the at least one feature point, determine a predicted value of the estimated value of the vector on the basis of a result of differentiating the estimated value of the vector with respect to time, correct the predicted value on the basis of the coordinates of the object and the coordinates of the at least one feature point obtained by the camera, determine the estimated value of the vector on the basis of the corrected predicted value, calculate a distance between the camera and the object on the basis of the determined estimated value of the vector, and transmit a collision warning signal on the basis of the calculated distance; and a notification unit configured to output a collision warning on the basis of the transmitted collision warning signal.
The camera may obtain coordinates of a road on which the object is located and coordinates of at least one feature point spaced a predetermined distance from the coordinates of the road.
The controller may establish a first coordinate system whose center coincides with a center of an axle of a wheel of the vehicle at a position at which the camera starts sensing the object behind the vehicle when the stopped vehicle begins to move backward, establish a second coordinate system, and establish a third coordinate system relative to the first coordinate system and based on a position of the camera. The center of the second coordinate system is established at the center of the axle after the vehicle has moved backward to a position spaced a predetermined distance from the object.
The controller may determine the coordinates of the object and the coordinates of the at least one feature point with respect to the first coordinate system of the vehicle.
The controller may determine a roll, a pitch, and a yaw of the vehicle with respect to the first coordinate system of the vehicle.
The controller may determine a differential value of the estimated value of the vector on the basis of a backing-up speed of the vehicle, a yaw angle, a wheelbase of the vehicle, and a steering angle.
The controller may determine the predicted value of the estimated value of the vector and a predicted value of a covariance matrix indicating the state of the vehicle on the basis of the determined differential value.
The controller may determine a measured value of a position vector of the object on the basis of the coordinates of the object obtained by the camera, determine a measured value of a position vector of the at least one feature point on the basis of the coordinates of the at least one feature point obtained by the camera, and determine a measured value of a height of the road on which the object is located on the basis of an average value of height components of the coordinates of the at least one feature point obtained by the camera.
The controller may determine the measured value of the position vector of the object with respect to the third coordinate system of the vehicle.
The controller may determine the measured value of the position vector of the at least one feature point with respect to the third coordinate system of the vehicle.
The controller may determine the measured value of the height of the road on which the object is located on the basis of the average value of the height components of the coordinates of the at least one feature point with respect to the first coordinate system of the vehicle.
The controller may correct an error of the predicted value of the estimated value of the vector and an error of the predicted value of the covariance matrix indicating the state of the vehicle on the basis of the measured value of the position vector of the object, the measured value of the position vector of the at least one feature point, and the measured value of the height of the road on which the object is located.
The controller may determine the coordinates of the object with respect to the second coordinate system, on the basis of the determined estimated value of the vector.
The controller may determine the coordinates of the object with respect to the third coordinate system on the basis of the coordinates of the object determined with respect to the second coordinate system and coordinates of the camera.
The controller may calculate the distance between the camera and the object from an inner product of the coordinates of the object determined with respect to the third coordinate system.
The controller may transmit the collision warning signal when the calculated distance between the camera and the object is less than a predetermined value.
The controller may transmit a control signal for decreasing a backing up speed of the vehicle when the calculated distance between the camera and the object is less than a predetermined value.
The camera may be a rear camera, and the rear camera may obtain coordinates of at least four feature points spaced a predetermined distance apart from the object, and the controller may determine the estimated value of the vector indicating the state of the vehicle on the basis of the coordinates of the vehicle, the coordinates of the object, and the coordinates of the at least four feature points, determine the predicted value of the estimated value of the vector on the basis of a result of differentiating the estimated value of the vector with respect to time, correct the predicted value on the basis of the coordinates of the object and the coordinates of the at least four feature points obtained by the rear camera, determine the estimated value of the vector on the basis of the corrected predicted value, calculate a distance between the rear camera and the object on the basis of the determined estimated value of the vector, and transmit a collision warning signal on the basis of the calculated distance.
In accordance with another aspect of the present disclosure, a method for controlling a vehicle includes: obtaining, by a rear camera, an image of an object behind a vehicle which is backing up by photographing the object; obtaining, by the rear camera, coordinates of at least one feature point spaced a predetermined distance from the object; establishing, by a controller, an estimated value of a vector representing a state of the vehicle on the basis of coordinates of the vehicle, coordinates of the object, and the coordinates of the at least one feature point; determining, by the controller, a predicted value of the estimated value of the vector on the basis of a result of differentiating the estimated value of the vector with respect to time; correcting, by the controller, the predicted value on the basis of the coordinates of the object and the coordinates of the at least one feature point obtained by the rear camera; determining, by the controller, the estimated value of the vector on the basis of the corrected predicted value; calculating, by the controller, a distance between the rear camera and the object on the basis of the determined estimated value of the vector and transmitting a collision warning signal when the calculated distance is less than a predetermined value; and outputting, by a notification unit, a collision warning on the basis of the transmitted collision warning signal.
Further areas of applicability will become apparent from the description provided herein. It should be understood that the description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
In order that the disclosure may be well understood, there will now be described various forms thereof, given by way of example, reference being made to the accompanying drawings, in which:
The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way.
The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features.
It should be understood that not every element of the forms is described herein, and that general content in the technical field to which the present disclosure pertains and parts that overlap between forms are not described herein. As used herein, the terms “unit”, “module”, and “block” may be implemented as software or hardware, and a plurality of units, modules, or blocks may be integrated into one element, or one unit, module, or block may include a plurality of elements, in accordance with forms.
Throughout the present disclosure, when one element is referred to as being “connected to” another element, it should be understood to mean that the element may be connected directly or indirectly to the other element. When the element is indirectly connected to the other element, it should be understood that the element may be connected to the other element via a wireless communication network.
When one element is referred to as including another element, it should be understood that the presence or addition of one or more other elements is not precluded and the element may further include other elements, unless otherwise stated.
It should be understood that the terms “first,” “second,” etc., are used herein to distinguish one element from another element and the elements are not limited by these terms.
As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Reference characters assigned to identify operations of a method are merely used for convenience of explanation. Thus, it should be understood that the reference characters do not indicate an order of the operations, and the operations may be performed in an order different from an order described herein unless a specific order is clearly stated in the content.
Hereinafter, operating principles and forms of the present disclosure will be described with reference to the accompanying drawings below.
Generally, for convenience of explanation, a direction in which a vehicle 1 moves forward will be referred to as a forward direction, and a left direction and a right direction will be determined with respect to the forward direction, as illustrated in
Referring to
The body 10 may include a hood 11a for protecting various devices, e.g., an engine, needed to drive the vehicle 1, a roof panel 11b forming an inner space, a trunk lid 11c covering a storage space, front fenders 11d and quarter panels 11e provided on lateral surfaces of the vehicle 1, and a plurality of doors 15 which are provided on the lateral surfaces of the body 10, and hinge-coupled to the body 10.
A front window 19a is provided between the hood 11a and the roof panel 11b to provide a field of view in front of the vehicle 1. A rear window 19b may be provided between the roof panel 11b and the trunk lid 11c to provide a field of view behind the vehicle 1. Lateral windows 19c may be provided at upper sides of the doors 15 to provide fields of view at sides of the vehicle 1.
A headlamp 15 may be provided in the front of the vehicle 1 to emit light in a traveling direction of the vehicle 1.
Turn signal lamps 16 may be provided in the front and rear of the vehicle 1 to indicate an intended traveling direction of the vehicle 1.
The intended traveling direction of the vehicle 1 may be indicated by flickering the turn signal lamps 16. Tail lamps 17 may be provided in the rear of the vehicle 1. The tail lamps 17 may be provided in the rear of the vehicle 1 to indicate a gear change state, a brake operating state, or the like of the vehicle 1.
As illustrated in
The at least one image capturing unit 350 may capture an image of an object near the vehicle 1, sense the type of the object by determining the shape of the image of the object through image recognition, and transmit information regarding the sensed type to a controller 100.
Although
The image capturing unit 350 may include at least one camera, and may include a three-dimensional (3D) space recognition sensor, a radar sensor, an ultrasonic sensor, or the like to more accurately capture an image.
The 3D space recognition sensor may include a Kinect (an RGB-D sensor), a time-of-flight (TOF) sensor (a structured light sensor), a stereo camera, or the like, but is not limited thereto and may include other devices having functions similar to those of the 3D space recognition sensor.
Referring to
As will be described below, the rear camera 360 may obtain the coordinates of the road on which an object is located behind the vehicle 1, and obtain coordinates of feature points spaced a predetermined distance apart from the coordinates of the road.
In one form, the rear camera 360 may be installed above a rear bumper of the vehicle 1. The rear camera 360 may be any device capable of capturing an image and thus is not limited in terms of type, shape, installation location, etc. An operation and structure of the rear camera 360 will be described in detail with reference to
Referring to
The dashboard 310 refers to a panel which divides the interior and an engine compartment of the vehicle 1 and in which various components needed for driving are installed. The dashboard 310 is provided in front of the driver's seat 301 and the passenger's seat 302. The dashboard 310 may include an upper panel, a center fascia 311, a gear box 315, etc.
The display unit 303 may be installed on the upper panel of the dashboard 310. The display unit 303 may provide various types of information in the form of an image to a driver of the vehicle 1 or a fellow passenger. For example, the display unit 303 may visually provide a map, weather information, news, various moving pictures or still pictures, various types of information related to a state or operation of the vehicle 1, for example, information regarding an air conditioner device, etc. The display unit 303 may provide a driver or a fellow passenger with a warning about a degree of risk. In detail, when the vehicle 1 conducts a lane change, the display unit 303 may provide a driver or the like with different warnings according to a degree of risk. The display unit 303 may be implemented using a general navigation device.
The display unit 303 may be provided in a housing integrally formed with the dashboard 310 such that only a display panel is exposed to the outside. Alternatively, the display unit 303 may be installed on a middle or bottom part of the center fascia 311 or may be installed on an inner side surface of a windshield (not shown) or an upper surface of the dashboard 310 using an additional support (not shown). In addition, the display unit 303 may be installed at other various locations which may be considered by a designer.
Various devices such as a processor, a communication module, a global positioning system (GPS) receiving module, a storage device, etc. may be installed in the dashboard 310. The processor installed in the vehicle 1 may be provided to control various electronic devices installed in the vehicle 1, or provided to perform a function of the controller 100 as described above. These devices may be implemented using various components such as a semiconductor chip, a switch, an integrated circuit, a resistor, a volatile or nonvolatile memory, a printed circuit board, etc.
The center fascia 311 may be installed at a center part of the dashboard 310, in which input units 318a to 318c for inputting various commands related to the vehicle 1 may be provided. The input units 318a to 318c may be implemented using physical buttons, knobs, a touch pad, a touch screen, a stick type manipulation device, a trackball, or the like. A driver may control various operations of the vehicle 1 by manipulating the input units 318a to 318c.
The gear box 315 is provided at a bottom end of the center fascia 311 and between the driver's seat 301 and the passenger's seat 302. In the gear box 315, gears 316, a storage box 317, various input units 318d to 318e, and the like may be provided. The input units 318d to 318e may be implemented using physical buttons, knobs, a touch pad, a touch screen, a stick type manipulation device, a trackball, or the like. The storage box 317 and the input units 318d to 318e may be omitted in some forms.
The steering device 320 and the instrument panel 330 are provided at a part of the dashboard 310 adjacent to the driver's seat.
The steering device 320 may be provided to be rotatable in a desired direction according to a driver's manipulation. The vehicle 1 may be steered as the front wheels 12 or the rear wheels 13 of the vehicle 1 are rotated according to a rotational direction of the steering device 320. In the steering device 320, spokes 321 connected to a rotating shaft and a steering wheel 322 coupled to the spokes 321 are provided. On the spokes 321, an input means for inputting various commands may be provided. The input means may be implemented using a physical button, a knob, a touch pad, a touch screen, a stick type manipulation device, a trackball, or the like. The steering wheel 322 may have a round shape for driver convenience but is not limited thereto. A vibration unit 201 of
A turn signal lamp input unit 318f may be provided at a rear part of the steering device 320. A user may input a signal for changing a driving direction or conducting a lane change through the turn signal lamp input unit 318f while driving the vehicle 1.
The instrument panel 330 is provided to provide a driver with various types of information related to the vehicle 1, such as the speed of the vehicle 1, engine revolutions per minute (RPM), a remaining fuel amount, the temperature of engine oil, whether a turn signal lamp flickers, a distance traveled by the vehicle 1, etc. The instrument panel 330 may be implemented using a light, a scale plate, or the like, and may be implemented using a display panel in one form. When the instrument panel 330 is implemented using a display panel, the instrument panel 330 may provide a driver with not only the above-described information but also other various types of information such as fuel efficiency, whether various functions of the vehicle 1 are performed, etc. The instrument panel 330 may output different warnings and provide them to a driver according to a degree of risk of the vehicle 1. In detail, when the vehicle 1 conducts a lane change, the instrument panel 330 may provide a driver with different warnings according to a degree of risk corresponding to the lane change.
Referring to
The notification unit 60 may output a collision warning indicating a risk of collision between the vehicle 1 and an object on the basis of a warning signal transmitted from the controller 100. That is, as will be described below, when the vehicle 1 is backing up, the notification unit 60 may sense an object such as a pedestrian behind the vehicle 1, and output a warning signal indicating a collision possibility to a user when a risk of collision is sensed on the basis of the distance between the vehicle 1 and the object, under control of the controller 100.
The notification unit 60 may be included in the display unit 303 or may be provided as a separate sound outputting element in the vehicle 1. The warning signal may be output in the form of a predetermined sound or utterance indicating a collision possibility.
The speed adjustor 70 may adjust the speed of the vehicle 1 driven by the driver. The speed adjustor 70 may include an accelerator driver 71 and a brake driver 72.
The accelerator driver 71 may receive a control signal from the controller 100 and increase the speed of the vehicle 1 by operating an accelerator. The brake driver 72 may receive a control signal from the controller 100 and decrease the speed of the vehicle 1 by operating a brake.
The controller 100 may increase or decrease the driving speed of the vehicle 1, on the basis of the distance between the vehicle 1 and an object and a predetermined reference distance stored in the memory 90, so as to adjust the distance between the vehicle 1 and the object.
Furthermore, the controller 100 may calculate an estimated collision time until a collision between the vehicle 1 and an object, based on a relative distance between the vehicle 1 and the object and a relative speed between the vehicle 1 and the object. The controller 100 transmits a signal for controlling the driving speed of the vehicle 1 to the speed adjustor 70 on the basis of the calculated estimated collision time.
The speed adjustor 70 may adjust the driving speed of the vehicle 1 under control of the controller 100. The speed adjustor 70 may decrease the driving speed of the vehicle 1 when a degree of collision risk between the vehicle 1 and an object is high.
The speed adjustor 70 may adjust the driving speed of the vehicle 1 when the vehicle 1 moves forward or backward.
The speed sensor 80 may sense the speed of the vehicle 1 under control of the controller 100. That is, the speed sensor 80 may sense the speed of the vehicle 1 using the RPM of wheels of the vehicle 1 or the like. The driving speed of the vehicle 1 may be expressed in kilometers per hour (kph).
The memory 90 may store various types of data related to controlling the vehicle 1. In detail, in one form, the memory 90 may store information regarding the driving speed, traveling distance, and traveling time of the vehicle 1, and store information regarding the type and location of an object sensed by the image capturing unit 350.
Furthermore, the memory 90 may store location information and speed information of an object sensed by a sensor 200, and store information regarding coordinates of a moving object which change in real time, a relative distance between the vehicle 1 and the object, and a relative speed between the vehicle 1 and the object.
In addition, the memory 90 may store data related to a formula and a control algorithm for controlling the vehicle 1 in one form. The controller 100 may transmit a control signal for controlling the vehicle according to the formula and the control algorithm.
The memory 90 may be implemented as, but is not limited to, at least one among a nonvolatile memory device (e.g., a cache, a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory), a volatile memory device (e.g., a random access memory (RAM)), and a storage medium (e.g., a hard disk drive (HDD) or a compact disc (CD)-ROM). The memory 90 may be a memory implemented as a chip separately from the processor described above and related to the controller 100, or may be a chip integrally formed with the processor.
Referring to
Referring to
That is, in
The first coordinate system is defined with respect to a position at which the rear camera 360 of the vehicle 1 starts sensing an object behind the vehicle 1 and is thus a fixed coordinate system. As illustrated in
When the vehicle 1 is backed up to a position (b), a coordinate system representing a position of the vehicle 1 may be defined as a second coordinate system. A center Ov of the second coordinate system may be at the center of the axle of the wheel of the vehicle 1.
The second coordinate system is defined at a position to which the vehicle 1 has backed up so as to be spaced a predetermined distance apart from an object behind it, and is thus a coordinate system that changes as the vehicle 1 moves backward. As illustrated in
Furthermore, the controller 100 may set a coordinate system with respect to a position of the rear camera 360 of the vehicle 1. This coordinate system may be defined as a third coordinate system.
As illustrated in
The rear camera 360 of the vehicle 1 may obtain an image of an object behind the vehicle 1 by photographing the object. The type of the object is not limited but it will be assumed below that the object behind the vehicle 1 is a pedestrian 2.
As illustrated in
According to the vehicle 1 and a method of controlling the same, a position of the pedestrian 2 on the road with the slope or the gradient may be accurately determined and the distance (d) from the rear camera 360 to the pedestrian 2 may be precisely calculated, thereby inhibiting or preventing a collision between the vehicle 1, which is backing up, and the pedestrian 2.
As illustrated in
The rear camera 360 may obtain an image of the pedestrian 2 behind the vehicle 1 by photographing the pedestrian 2. The rear camera 360 may obtain coordinates of a center point on the road on which the pedestrian 2 is located on the basis of the image of the pedestrian 2, and obtain coordinates of at least one feature point spaced a predetermined distance from the obtained coordinates of the center point on the road. In this case, the rear camera 360 may obtain coordinates of at least four feature points.
The forms set forth herein may be performed on the assumption that at least one feature point is present, but a case in which the number of feature points is four or more will be described as an example to particularly explain a method of implementing the present disclosure. However, the number of feature points is not limited in implementing the present disclosure.
A center point of the position of the pedestrian 2 and a feature point in the vicinity of the pedestrian 2 may be extracted from the image captured by the rear camera 360 using various known techniques. That is, these points may be extracted using a texture-based coordinate extraction method, and a distance to the pedestrian 2 may be measured by assigning distance values to pixel positions in a vertical direction in an image captured by the rear camera 360 and using the distance value assigned to the position of a pixel corresponding to a foot of the pedestrian 2.
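As a rough illustration of the row-to-distance technique mentioned above, the following Python sketch assumes a hypothetical calibration table that assigns a ground-plane distance to each vertical pixel position; the image height and table values are placeholders, not values from the present disclosure. This flat-road lookup is presumably what degrades on a slope, which motivates the filter-based estimation that follows.

```python
import numpy as np

# Hypothetical calibration: each vertical pixel position (image row) is assigned
# a ground-plane distance for a flat road. In practice this table would come
# from the rear camera's calibration; the values below are placeholders.
IMAGE_HEIGHT = 480
row_to_distance_m = np.linspace(10.0, 0.3, IMAGE_HEIGHT)  # top row: far, bottom row: near

def estimate_pedestrian_distance(foot_pixel_row: int) -> float:
    """Read off the distance value assigned to the pixel row of the pedestrian's foot."""
    row = int(np.clip(foot_pixel_row, 0, IMAGE_HEIGHT - 1))
    return float(row_to_distance_m[row])
```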
The controller 100 may set an estimated value of a vector indicating a state of the vehicle 1 on the basis of the center point of the pedestrian 2 in the image captured by the rear camera 360 and at least four feature points.
That is, the controller 100 may establish 3D coordinates of the pedestrian 2 with respect to the first coordinate system of the vehicle 1 and 3D coordinates of at least four feature points in the form of vectors.
Referring to
The state of the vehicle 1 means a distance and position of the vehicle 1 relative to objects, such as the pedestrian 2, near the vehicle 1 on the basis of a position and movement of the vehicle 1.
In this case, P represents the position of the pedestrian 2, p represents the pedestrian 2, and w represents a world defined as the first coordinate system used as a reference.
$\hat{P}_p^w$ is the vector of the 3D coordinates of the pedestrian 2 and may thus be defined as $\hat{P}_p^w = [\hat{x}_p^w\;\; \hat{y}_p^w\;\; \hat{z}_p^w]^T$ with respect to an x-coordinate, a y-coordinate, and a z-coordinate.
Referring to
$\hat{P}_g^w$ is the vector of the 3D coordinates of the at least four feature points and may thus be defined as $\hat{P}_g^w = [\hat{x}_g^w\;\; \hat{y}_g^w\;\; \hat{z}_g^w]^T$ with respect to the x-coordinate, the y-coordinate, and the z-coordinate of one of the at least four feature points. Furthermore, since there may be a plurality of feature points, when n feature points are obtained, $\hat{P}_g^w$ may be expressed as the stacked vector $[\hat{P}_{g,1}^{w\,T}\ \cdots\ \hat{P}_{g,n}^{w\,T}]^T$.
The feature points are points close to the pedestrian 2 located on the slope and thus have, with respect to a flat plane, a height equal to that of the slope on which the pedestrian 2 stands. The number of feature points whose coordinates are obtained as described above is not limited.
In order to set the estimated value of the vector indicating the state of the vehicle 1, the controller 100 may establish, in the form of a vector, 3D coordinates of the vehicle 1 at a position to which the vehicle 1 is backed up with respect to the first coordinate system of the vehicle 1. Referring to
That is, the controller 100 may determine the 3D coordinates of the vehicle 1 at the position to which it has backed up, with respect to the first coordinate system, to be $\hat{P}_v^w$. $\hat{P}_v^w$ is an element for setting the estimated value of the vector indicating the state of the vehicle 1. In this case, v represents the vehicle 1.
$\hat{P}_v^w$ is the vector of the 3D coordinates of the vehicle 1 at the position to which the vehicle 1 has backed up and is thus defined as $\hat{P}_v^w = [\hat{x}_v^w\;\; \hat{y}_v^w\;\; \hat{z}_v^w]^T$ with respect to an x-coordinate, a y-coordinate, and a z-coordinate.
In order to set the estimated value of the vector indicating the state of the vehicle 1, the controller 100 may set an attitude of the vehicle 1 at the position to which the vehicle 1 has backed up with respect to the first coordinate system of the vehicle 1. In this case, the attitude of the vehicle 1 includes a roll, a pitch, and a yaw of the vehicle 1. The roll refers to rotation about the horizontal axis parallel to the moving direction of the vehicle 1. The pitch refers to rotation about the horizontal axis perpendicular to the moving direction of the vehicle 1. The yaw refers to rotation about the vertical axis of the vehicle 1.
The roll, the pitch, and the yaw refer to rotation angles. Thus, the controller 100 may determine the attitude of the vehicle 1 at the position to which the vehicle 1 has backed up, with respect to the first coordinate system, to be $\hat{\theta}_v^w$. $\hat{\theta}_v^w$ is an element for setting the estimated value of the vector indicating the state of the vehicle 1.
$\hat{\theta}_v^w$ is the vector of the attitude of the vehicle 1 at the position to which the vehicle 1 has backed up, and may be defined as $\hat{\theta}_v^w = [0\;\; 0\;\; \hat{\psi}]^T$ when it is assumed that the roll and pitch of the vehicle 1 are zero. In this case, $\psi$ denotes a yaw angle.
The controller 100 may set an estimated value of the vector indicating the state of the vehicle 1 on the basis of the 3D coordinates of the vehicle 1, the attitude of the vehicle 1, the 3D coordinates of the pedestrian 2, and the 3D coordinates of the at least four feature points, which are set as described above. When the estimated value of the vector is $\hat{x}$, the relationship between the estimated value of the vector and the elements described above may be expressed by Equation 1 below.

$\hat{x} = \big[\,\hat{P}_v^{w\,T}\;\; \hat{\theta}_v^{w\,T}\;\; \hat{P}_p^{w\,T}\;\; \hat{P}_g^{w\,T}\,\big]^T \quad [\text{Equation 1}]$
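A minimal sketch of how the state vector of Equation 1 might be assembled, assuming NumPy and the zero roll and pitch stated above; the function name and argument layout are illustrative, not part of the disclosure.

```python
import numpy as np

def build_state_vector(p_vehicle, yaw, p_pedestrian, feature_points):
    """Stack vehicle position (3), attitude (roll, pitch, yaw with roll = pitch = 0),
    pedestrian position (3), and n feature-point positions (3n) into one vector,
    mirroring Equation 1."""
    attitude = np.array([0.0, 0.0, yaw])                  # roll and pitch assumed zero
    return np.concatenate([
        np.asarray(p_vehicle, dtype=float),               # P_v^w
        attitude,                                         # theta_v^w
        np.asarray(p_pedestrian, dtype=float),            # P_p^w
        np.asarray(feature_points, dtype=float).ravel(),  # P_g^w, n x 3 -> 3n
    ])
```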
The controller 100 may determine a motion model of the vector indicating the state of the vehicle 1 by differentiating the estimated value $\hat{x}$ of the vector with respect to time.
When the motion model of the vector is $f(\hat{x}, t)$, the motion model may be expressed as a result of differentiating the estimated value $\hat{x}$ of the vector according to Equation 1 with respect to time.
The controller 100 may differentiate the vector $\hat{P}_v^w$ of the 3D coordinates of the vehicle 1 in the first coordinate system with respect to time. In this case, the motion model $f(\hat{x}, t)$ of the vector may be determined on the basis of a moving speed v of the vehicle 1, a yaw angle $\psi$ by which the vehicle 1 rotates while backing up, a wheelbase $l$ (the distance of the axle of the wheel), and a steering angle $\alpha$.
As described above, the vector $\hat{P}_v^w$ of the 3D coordinates of the vehicle 1 at the position to which it has backed up is defined as $[\hat{x}_v^w\;\; \hat{y}_v^w\;\; \hat{z}_v^w]^T$. Thus, differentiating $\hat{P}_v^w$ with respect to time gives $\dot{\hat{P}}_v^w = [\,v\cos\hat{\psi}\;\; v\sin\hat{\psi}\;\; 0\,]^T$. That is, the height component of the result is 0 on the assumption that $\hat{z}_v^w$ does not change.
Since the attitude $\hat{\theta}_v^w$ of the vehicle 1 at the position to which it has backed up with respect to the first coordinate system is defined as $[0\;\; 0\;\; \hat{\psi}]^T$ as described above, differentiating $\hat{\theta}_v^w$ with respect to time gives $\dot{\hat{\theta}}_v^w = [\,0\;\; 0\;\; (v/l)\tan\alpha\,]^T$, the yaw rate following from the speed v, the wheelbase $l$, and the steering angle $\alpha$.
Since the 3D coordinates $\hat{P}_p^w$ of the pedestrian 2 are expressed as $[\hat{x}_p^w\;\; \hat{y}_p^w\;\; \hat{z}_p^w]^T$ as described above, and the pedestrian 2 is assumed to be stationary in the model, differentiating them with respect to time gives $\dot{\hat{P}}_p^w = [\,0\;\; 0\;\; 0\,]^T$.
As the 3D coordinates $\hat{P}_g^w$ of the at least four feature points are expressed as described above, and the feature points are likewise assumed to be stationary, differentiating each of them with respect to time gives $\dot{\hat{P}}_{g,i}^w = [\,0\;\; 0\;\; 0\,]^T$.
Thus, the controller 100 may determine the motion model $f(\hat{x}, t)$ of the vector, as expressed in Equation 2 below.

$f(\hat{x},t) = \big[\,v\cos\hat{\psi}\;\; v\sin\hat{\psi}\;\; 0\;\; 0\;\; 0\;\; (v/l)\tan\alpha\;\; 0_{3\times1}^T\;\; 0_{3n\times1}^T\,\big]^T \quad [\text{Equation 2}]$

In this case, $0_{3n\times1}$ represents the case in which there are n feature points.
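Under the kinematic bicycle-model reading of Equation 2 sketched above (an assumption on our part, based on the named quantities v, ψ, l, and α), the motion model could look like the following; the state layout matches the build_state_vector sketch above.

```python
import numpy as np

def motion_model(x: np.ndarray, v: float, alpha: float, l: float) -> np.ndarray:
    """Time derivative f(x, t) of the state vector: the vehicle moves with speed v
    at yaw angle psi and steering angle alpha over wheelbase l; the pedestrian and
    the feature points are stationary, so their derivatives stay zero."""
    psi = x[5]                          # yaw angle (roll and pitch are zero)
    dx = np.zeros_like(x)
    dx[0] = v * np.cos(psi)             # x component of vehicle velocity
    dx[1] = v * np.sin(psi)             # y component of vehicle velocity
    # dx[2] stays 0: the height is assumed constant
    dx[5] = (v / l) * np.tan(alpha)     # yaw rate of the bicycle model
    # pedestrian (3) and feature-point (3n) entries remain zero
    return dx
```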
When the vehicle 1 is backing up, the controller 100 may use a Kalman filter as a tracking filter to determine a predicted value of the estimated value $\hat{x}$ of the vector set to indicate the state of the vehicle 1 as described above.
That is, the controller 100 may determine a predicted value of the estimated value $\hat{x}$ of the vector indicating the state of the vehicle 1 and a predicted value of a covariance matrix $P(\hat{x}, t)$ indicating the state of the vehicle 1 on the basis of Equations 3 and 4 below.
$\dot{\hat{x}} = f(\hat{x},t) \quad [\text{Equation 3}]$
$\dot{P}(\hat{x},t) = F(\hat{x},t)\,P(\hat{x},t) + P(\hat{x},t)\,F(\hat{x},t)^{T} + W \quad [\text{Equation 4}]$
In this case, $F(\hat{x}, t)$ may be calculated as the Jacobian of $f(\hat{x}, t)$, and $W$ represents a process covariance. The determination of the predicted value of the estimated value $\hat{x}$ of the vector and the predicted value of the covariance matrix $P(\hat{x}, t)$ indicating the state of the vehicle 1 using the Kalman filter is performed over a shorter time period than the correction operation described below.
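A sketch of the prediction step of Equations 3 and 4 using simple Euler integration over a small step dt; the discretization is our choice, as the text does not specify one. Here x_dot is f(x, t) evaluated at the current state (e.g., by the motion_model sketch above) and F is its Jacobian, both supplied by the caller.

```python
import numpy as np

def ekf_predict(x, P, x_dot, F, W, dt):
    """One Euler step: propagate the state estimate (Equation 3)
    and the covariance matrix (Equation 4)."""
    x_pred = x + x_dot * dt                  # Equation 3
    P_pred = P + (F @ P + P @ F.T + W) * dt  # Equation 4
    return x_pred, P_pred
```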
The controller 100 may determine a measured value $h_p(x,t)$ of a position vector of the pedestrian 2 on the basis of the coordinates of the pedestrian 2 obtained by the rear camera 360. The controller 100 may determine measured values $h_g(x,t)$ of position vectors of the at least four feature points on the basis of the coordinates of the at least four feature points obtained by the rear camera 360. The controller 100 may determine a measured value $z_p^w$ of the height of the road on which the pedestrian 2 is located on the basis of an average value of the height components of the coordinates of the at least four feature points obtained by the rear camera 360.
In this case, the controller 100 may determine the measured value of the position vector of the pedestrian 2 and the measured values of the position vectors of the at least four feature points with respect to the third coordinate system of the rear camera 360, and the measured value of the height of the road on which the pedestrian 2 is located with respect to the first coordinate system.
That is, the controller 100 may determine a measured value $h(x,t)$ of the position vector for estimating the distance between the vehicle 1 and the pedestrian 2, as expressed in Equation 5 below, based on the measured value of the position vector of the pedestrian 2 obtained by the rear camera 360, the measured values of the position vectors of the at least four feature points, and the measured value of the height of the road on which the pedestrian 2 is located.

$h(x,t) = \big[\,h_p(x,t)^T\;\; h_g(x,t)^T\;\; z_p^w\,\big]^T \quad [\text{Equation 5}]$
In this case, the controller 100 may determine the measured value $h_p(x,t)$ of the position vector of the pedestrian 2, as expressed in Equation 6 below.

$h_p(x,t) = \big[\,u_p/f_u\;\; v_p/f_v\,\big]^T \quad [\text{Equation 6}]$

That is, the controller 100 may determine the coordinates of the pedestrian 2 in the image captured by the rear camera 360 to be $(u_p, v_p)$ by defining a vertical coordinate and a horizontal coordinate. Equation 6 defines the measured value of the position vector of the pedestrian 2 by dividing the coordinates of the pedestrian 2 by the focal distance $(f_u, f_v)$ expressed in pixels of the rear camera 360.
Furthermore, the controller 100 may determine the measured values $h_g(x,t)$ of the position vectors of the at least four feature points, as expressed in Equation 7 below.

$h_g(x,t) = \big[\,u_{g,1}/f_u\;\; v_{g,1}/f_v\;\; \cdots\;\; u_{g,n}/f_u\;\; v_{g,n}/f_v\,\big]^T \quad [\text{Equation 7}]$

That is, the controller 100 may define the coordinates of each of the n feature points in the image captured by the rear camera 360 as vertical and horizontal coordinates, and determine the coordinates of the ith feature point among the n feature points to be $(u_{g,i}, v_{g,i})$. Equation 7 defines the measured values of the position vectors of the at least four feature points by dividing the coordinates of the ith feature point by the focal distance $(f_u, f_v)$ expressed in pixels of the rear camera 360.
The controller 100 may determine the measured value $z_p^w$ of the height of the road on which the pedestrian 2 is located, as expressed in Equation 8 below.

$z_p^w = \frac{1}{n}\sum_{i=1}^{n} z_{g,i}^w \quad [\text{Equation 8}]$
The controller 100 may determine the measured value $z_p^w$ of the height of the road on which the pedestrian 2 is located based on an average value of the height components of the coordinates of the at least four feature points obtained by the rear camera 360. In this case, the controller 100 may determine the measured value $z_p^w$ of the height of the road as the average of the height components among the 3D coordinate components of the feature points with respect to the first coordinate system.
In this case, $z_{g,i}^w$ corresponds to the height component among the 3D coordinates of the feature points, as described above.
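A sketch of how the measured vector of Equations 5 to 8 might be assembled from pixel detections; the exact stacking order is our assumption, following the layout written in Equation 5 above.

```python
import numpy as np

def measurement_vector(ped_pixel, feature_pixels, feature_heights, fu, fv):
    """Stack Equation 6 (pedestrian pixel coordinates over the focal lengths),
    Equation 7 (the same for each of the n feature points), and Equation 8
    (road height as the mean feature-point height) into h of Equation 5."""
    u_p, v_p = ped_pixel
    z = [u_p / fu, v_p / fv]                               # Equation 6
    for u_g, v_g in feature_pixels:                        # Equation 7, i = 1..n
        z.extend([u_g / fu, v_g / fv])
    z.append(sum(feature_heights) / len(feature_heights))  # Equation 8
    return np.asarray(z)
```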
The controller 100 may determine a measurement model for estimating the distance between the vehicle 1 and the pedestrian 2 on the basis of the measured value $h(x,t)$ determined according to Equation 5.
That is, the controller 100 may determine the measurement model of the position vector for estimating the distance between the vehicle 1 and the pedestrian 2 on the basis of a measurement model of the position vector of the pedestrian 2 obtained by the rear camera 360, a measurement model of the position vectors of the at least four feature points, and a measurement model of the height of the road on which the pedestrian 2 is located, as expressed in Equation 9 below.

$h(\hat{x},t) = \big[\,h_p(\hat{x},t)^T\;\; h_g(\hat{x},t)^T\;\; \hat{z}_p^w\,\big]^T \quad [\text{Equation 9}]$
In this case, the controller 100 may determine a measurement model $h_p(\hat{x},t)$ of the position vector of the pedestrian 2, as expressed in Equation 10 below.
That is, as illustrated in
The controller 100 may determine the measurement model $h_g(\hat{x},t)$ of the position vectors of the at least four feature points, as expressed in Equation 11 below.
That is, as illustrated in
Furthermore, the controller 100 may determine the measurement model of the height of the road on which the pedestrian 2 is located based on the height component of the 3D coordinates $(\hat{x}_p^w, \hat{y}_p^w, \hat{z}_p^w)$ of the pedestrian 2 with respect to the first coordinate system.
The controller 100 may correct an error of the predicted value of the estimated value $\hat{x}$ of the vector indicating the state of the vehicle 1 determined by Equations 3 and 4, and an error of the predicted value of the covariance matrix $P(\hat{x},t)$ indicating the state of the vehicle 1, based on the measurement model $h_p(\hat{x},t)$ of the position vector of the pedestrian 2, the measurement model $h_g(\hat{x},t)$ of the position vectors of the at least four feature points, and the measurement model $\hat{z}_p^w$ of the height of the road on which the pedestrian 2 is located, which are described above.
The controller 100 may calculate a Kalman gain $K(\hat{x},t)$ of the Kalman filter using Equations 12 to 14 below, and use the Kalman gain $K(\hat{x},t)$ as a weight on the difference between a measured value and a predicted value.
$K(\hat{x},t_k) = P(\hat{x},t_k)\,H(\hat{x},t_k)^{T}\,\big(H(\hat{x},t_k)\,P(\hat{x},t_k)\,H(\hat{x},t_k)^{T} + V\big)^{-1} \quad [\text{Equation 12}]$
$\hat{x}^{+} = \hat{x} + K(\hat{x},t_k)\,\big(y - h(\hat{x},t_k)\big) \quad [\text{Equation 13}]$
$P(\hat{x}^{+},t_k) = \big(I - K(\hat{x},t_k)\,H(\hat{x},t_k)\big)\,P(\hat{x},t_k) \quad [\text{Equation 14}]$
Here, $K(\hat{x},t)$ represents the Kalman gain, and $V$ represents a measurement covariance.
The controller 100 may correct the estimated value $\hat{x}$ of the vector indicating the state of the vehicle 1 to $\hat{x}^{+}$ by applying the Kalman gain $K(\hat{x},t)$ as a weight to the error between the measured value $y$ of the position vector for estimating the distance between the vehicle 1 and the pedestrian 2 and the measurement model $h(\hat{x},t)$.
Furthermore, the controller 100 may correct the covariance matrix $P(\hat{x},t)$ indicating the state of the vehicle 1 to $P(\hat{x}^{+},t_k)$. In this case, $H(\hat{x},t)$ may be calculated as the Jacobian of $h(\hat{x},t)$.
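A direct transcription of Equations 12 to 14 as a correction step; y is the measured vector, h_pred the measurement model $h(\hat{x}, t_k)$ evaluated at the predicted state, H its Jacobian, and V the measurement covariance, all supplied by the caller.

```python
import numpy as np

def ekf_update(x, P, y, h_pred, H, V):
    """Kalman gain (Equation 12), corrected state (Equation 13),
    and corrected covariance (Equation 14)."""
    S = H @ P @ H.T + V                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Equation 12
    x_plus = x + K @ (y - h_pred)          # Equation 13
    P_plus = (np.eye(len(x)) - K @ H) @ P  # Equation 14
    return x_plus, P_plus
```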
The controller 100 may accurately estimate the distance between the vehicle 1 and the pedestrian 2 by updating the predicted value of the estimated value $\hat{x}$ of the vector indicating the state of the vehicle 1 using the measured value $h(x,t)$ of the position vector for estimating the distance between the vehicle 1 and the pedestrian 2 and the measurement covariance, and by correcting the error of the predicted value of the estimated value of the vector and the error of the predicted value of the covariance matrix indicating the state of the vehicle 1.
That is, the controller 100 may determine the estimated value $\hat{x}$ of the vector indicating the state of the vehicle 1, which is set as described above, through the above-described correction operation.
The controller 100 may estimate the coordinates $\hat{P}_p^v$ of the pedestrian 2 with respect to the second coordinate system on the basis of the determined estimated value $\hat{x}$ of the vector.
Referring to
$\hat{P}_p^v = R(\hat{\theta}_v^w)^{T}\,(\hat{P}_p^w - \hat{P}_v^w) \quad [\text{Equation 15}]$
In this case, $R(\hat{\theta}_v^w)^{T}$ represents the transpose of the rotation matrix corresponding to the attitude $\hat{\theta}_v^w$ of the vehicle 1.
The controller 100 may estimate the coordinates $\hat{P}_p^c$ of the pedestrian 2 with respect to the third coordinate system on the basis of the estimated coordinates $\hat{P}_p^v$ of the pedestrian 2 determined by Equation 15.
Referring to
$\hat{P}_p^c = R(\theta_c^v)^{T}\,(\hat{P}_p^v - P_c^v) \quad [\text{Equation 16}]$
In this case, $R(\theta_c^v)^{T}$ represents the rotational transformation from coordinates in the second coordinate system to coordinates in the third coordinate system.
The controller 100 may calculate the distance $\hat{d}$ between the rear camera 360 and the pedestrian 2 from an inner product of the estimated coordinates $\hat{P}_p^c$ of the pedestrian 2 determined by Equation 16.
That is, the controller 100 calculates the distance $\hat{d}$ between the rear camera 360 and the pedestrian 2 from the vector of the estimated coordinates $\hat{P}_p^c$ of the pedestrian 2 according to Equation 17 below.
$\hat{d} = (\hat{P}_p^c \cdot \hat{P}_p^c)^{1/2} \quad [\text{Equation 17}]$
The controller 100 may accurately determine the position of the pedestrian 2 located on the road with a slope or a gradient by precisely calculating the distance between the rear camera 360 and the pedestrian 2 on the basis of the above-described method.
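Equations 15 to 17 reduce to two frame changes and a Euclidean norm. In the sketch below, R_cam and p_cam (the rear camera's orientation and position in the vehicle frame, i.e., $R(\theta_c^v)$ and $P_c^v$) are fixed extrinsic calibration values we assume are known; the yaw-only rotation follows from the zero roll and pitch assumed earlier, and the state layout matches the earlier sketches.

```python
import numpy as np

def camera_to_pedestrian_distance(x, R_cam, p_cam):
    """Transform the estimated pedestrian position into the vehicle (second)
    frame (Equation 15), then into the camera (third) frame (Equation 16),
    and return the square root of the inner product (Equation 17)."""
    p_v, psi = x[0:3], x[5]
    p_p = x[6:9]
    c, s = np.cos(psi), np.sin(psi)
    R_v = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])                # yaw-only rotation R(theta_v^w)
    p_p_vehicle = R_v.T @ (p_p - p_v)                # Equation 15
    p_p_camera = R_cam.T @ (p_p_vehicle - p_cam)     # Equation 16
    return float(np.sqrt(p_p_camera @ p_p_camera))   # Equation 17
```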
The controller 100 may determine a risk of collision between the vehicle 1 and the pedestrian 2 and transmit a collision warning signal when the calculated distance between the rear camera 360 and the pedestrian 2 is less than a predetermined value. The notification unit 60 of the vehicle 1 may inform a driver of the risk of collision with the pedestrian 2 behind the vehicle 1 by outputting a collision warning on the basis of the collision warning signal transmitted from the controller 100.
The collision warning signal may be output in the form of a sound, or may be visually output through a display unit or an audio-video-navigation (AVN) system included in the vehicle 1.
Furthermore, the controller 100 may control the speed adjustor 70 to transmit a signal for braking the vehicle 1 which is backing up when the calculated distance between the rear camera 360 and the pedestrian 2 is less than the predetermined value. The speed adjustor 70 may decrease the speed of the vehicle 1 which is backing up or stop the backing up of the vehicle 1 on the basis of the signal transmitted from the controller 100, thereby inhibiting or preventing a collision between the vehicle 1 and the pedestrian 2 behind the vehicle 1.
The outputting of the collision warning signal indicating the risk of collision between the vehicle 1 and the pedestrian 2 behind the vehicle 1, or the controlling of the braking of the vehicle 1, may follow a general collision-avoidance control or parking collision-avoidance assistance scheme used when the vehicle 1 is backing up.
Referring to
The rear camera 360 may obtain coordinates of a center point on the road on which the pedestrian 2 behind the vehicle 1 is located, and coordinates of at least one feature point spaced a predetermined distance apart from the coordinates of the center point on the road (410).
The controller 100 may set an estimated value of a vector indicating a state of the vehicle 1 on the basis of coordinates of the vehicle 1, coordinates of the pedestrian 2, and the coordinates of the at least one feature point (420).
The controller 100 may determine a predicted value of the set estimated value of the vector on the basis of a result of differentiating the estimated value of the vector with respect to time (430). The controller 100 may correct the determined predicted value on the basis of the coordinates of the pedestrian 2 and the coordinates of the at least four feature points obtained by the rear camera 360 (440). In this case, the controller 100 may use the Kalman filter.
The controller 100 may determine the estimated value of the vector on the basis of the corrected predicted value (450), and precisely calculate the distance between the rear camera 360 and the pedestrian 2 on the basis of the determined estimated value of the vector (460).
The controller 100 may determine whether the calculated distance between the rear camera 360 and the pedestrian 2 is less than a predetermined value (470), and transmit a collision warning signal when it is determined that the calculated distance between the rear camera 360 and the pedestrian 2 is less than the predetermined value (480).
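The warning decision of steps 470 and 480 is a simple threshold test; the threshold value below is a placeholder, since the text only calls it a predetermined value.

```python
WARNING_DISTANCE_M = 1.5  # placeholder for the "predetermined value"

def should_warn(distance_m: float) -> bool:
    """Steps 470-480: request a collision warning when the estimated
    camera-to-pedestrian distance falls below the predetermined value."""
    return distance_m < WARNING_DISTANCE_M
```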
The notification unit 60 may output a collision warning on the basis of the collision warning signal transmitted from the controller 100 to inform a driver of a risk of collision with the pedestrian 2 behind the vehicle 1 (490).
As is apparent from the above description, the distance between a vehicle and a pedestrian located behind the vehicle and on the road with a slope or a gradient may be accurately estimated to inhibit or prevent a collision between the vehicle and the pedestrian when the vehicle is backing up.
The forms set forth herein may be realized in a recording medium having recorded thereon instructions which may be executed in a computer. The instructions may be stored in the form of program code, and a program module may be generated to realize these forms when the instructions are executed by a processor. The recording medium may be implemented as a non-transitory computer-readable recording medium.
Examples of the non-transitory computer-readable recording medium include all recording media capable of storing data that is read by a computer, e.g., a ROM, a RAM, a magnetic tape, a magnetic disk, a flash memory, an optical data storage device, etc.
Although a few forms of the present disclosure have been described with reference to the accompanying drawings, it will be appreciated by those skilled in the art that changes may be made in these forms without departing from the principles and spirit of the disclosure. The forms should be considered in descriptive sense only and not for purposes of limitation.