This application claims the benefit of Korean Patent Application No. 10-2017-0095215, filed on Jul. 27, 2017, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference in its entirety.
The present disclosure relates to a lane change support method and apparatus, and more particularly, to a lane change support method and apparatus employed to detect lane lines and set vehicle detection regions by analyzing an image captured by an image sensor unit, detect a vehicle in a vehicle detection region, analyze the possibility of collision, and provide lane change information to a driver.
In a conventional lane change support method, various sensors such as an ultrasonic sensor attached to a vehicle sense a collision risk with an adjacent vehicle. The sensors transmit sensed information to the driver of the vehicle, and the driver prepares for the risk of an accident using the information.
Due to the distance sensing constraints of the sensors, however, the conventional lane change support method can detect a collision risk only when an adjacent vehicle is located very close to the vehicle or is travelling in an adjacent lane side by side with the vehicle. Therefore, the conventional lane change support method has substantial limitations in predicting a collision.
In a conventional lane change support method and apparatus, it is only possible to detect an object located close to a vehicle using an ultrasonic sensor or through image analysis and to inform the driver of the vehicle about the detected object. However, it is impossible to judge the possibility of collision in various lane change situations on a multi-lane road, for example, when a vehicle traveling in a first lane and another vehicle traveling in a third lane attempt to change to a second lane at the same time, or when two vehicles traveling in the same lane attempt to change to the same lane. Therefore, there is a need for a lane change support method and apparatus capable of predicting various lane change situations.
Aspects of the present disclosure provide a lane change support method and apparatus which are employed to accurately set a vehicle detection region based on lane lines detected in an image of an area on sides of and behind a driving vehicle.
Aspects of the present disclosure also provide a lane change support method and apparatus which are employed to determine the types of the detected lane lines and issue a lane change related warning to a driver in consideration of the types of the detected lane lines.
Aspects of the present disclosure also provide a lane change support method and apparatus which are employed to detect and analyze an object in the set vehicle detection region, judge the possibility of collision between the vehicle and another vehicle located in an area beyond the lane line closest to the vehicle, and inform the driver of the possibility of collision.
However, aspects of the present disclosure are not restricted to those set forth herein. The above and other aspects of the present disclosure will become more apparent to one of ordinary skill in the art to which the present disclosure pertains by referencing the detailed description of the present disclosure given below.
According to an aspect of the present disclosure, there is provided a lane change support method comprising: detecting lane lines by analyzing an image obtained by a first image sensor provided on a side of a vehicle; setting, based on the detected lane lines, a vehicle detection region for detecting a moving object in the image obtained by the first image sensor; detecting the moving object in the set vehicle detection region and judging a possibility of collision between the vehicle and the detected object; and providing lane change information indicating whether it is dangerous for the vehicle to change lanes based on a result of the judging.
These and/or other aspects will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings.
Hereinafter, preferred embodiments of the present invention will be described with reference to the attached drawings. Advantages and features of the present invention and methods of accomplishing the same may be understood more readily by reference to the following detailed description of preferred embodiments and the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the invention to those skilled in the art, and the present invention will only be defined by the appended claims. Like numbers refer to like elements throughout.
Unless otherwise defined, all terms including technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. The terms used herein are for the purpose of describing particular embodiments only and are not intended to be limiting. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The terms “comprise”, “include”, “have”, etc., when used in this specification, specify the presence of stated features, integers, steps, operations, elements, components, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or combinations thereof.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
The lane change support apparatus according to the current embodiment includes an image sensor unit 10, an image providing unit 20, a detection unit 100, a collision possibility analysis unit 80, and a warning notification unit 90. The detection unit 100 includes a lane line region setting unit 30, a lane line detection unit 40, a lane line judgment unit 50, a vehicle detection region setting unit 60, and a vehicle detection unit 70.
The image sensor unit 10 obtains an image of an area around a vehicle. The image sensor unit 10 may include at least one of a first image sensor provided on the side of the vehicle and a second image sensor provided on the rear of the vehicle. The first image sensor obtains an image used to detect lane lines located on a side of the vehicle and to set a vehicle detection region for detecting a moving object. The second image sensor obtains an image used to detect lane lines located behind the vehicle and to set a vehicle detection region for detecting a moving object.
The image providing unit 20 provides the obtained image to the detection unit 100. According to an embodiment, the image providing unit 20 may provide an image obtained by the first image sensor on the side of the vehicle to the detection unit 100.
According to an embodiment, the image providing unit 20 may correct an image obtained by the first image sensor on the side of the vehicle and an image obtained by the second image sensor on the rear of the vehicle, merge the corrected images into one image, and provide the merged image to the detection unit 100. In the merged image, one vehicle detection region may be shared for the same lane. If the images obtained by the first image sensor and the second image sensor are merged, the vehicle detection region may be extended backward.
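The following is a minimal sketch of how a side-camera frame and a rear-camera frame could be merged into one wider frame so that the vehicle detection region extends farther backward. It assumes the two frames have already been corrected and share the same height; the function and variable names are illustrative, not part of the disclosed apparatus.

```python
import numpy as np

def merge_side_and_rear(side_frame, rear_frame):
    """Concatenate a corrected side-view frame and rear-view frame side by side."""
    # Assumption: both frames are already undistorted/aligned and equally tall.
    assert side_frame.shape[0] == rear_frame.shape[0], "frames must share height"
    return np.hstack([side_frame, rear_frame])

# Example with two blank 480x640 BGR frames; the merged frame is 480x1280.
side = np.zeros((480, 640, 3), dtype=np.uint8)
rear = np.zeros((480, 640, 3), dtype=np.uint8)
print(merge_side_and_rear(side, rear).shape)  # (480, 1280, 3)
```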
When the obtained image is provided to the detection unit 100, the detection unit 100 analyzes the obtained image. The lane line region setting unit 30 detects lane lines using a lane line region candidate group set according to a standard road lane line width. By using the lane line region candidate group, it is possible to detect all lane lines in the obtained image at once, which enables quick lane line detection. The lane line detection unit 40 detects the number and positions of the lane lines, and the lane line judgment unit 50 determines whether the detected lane lines are dotted lines or solid lines. A vehicle detection region may be set based on the detected lane lines. The vehicle detection region corresponds to a region of interest (ROI) for detecting a vehicle. The vehicle detection unit 70 may detect a moving object or a fixed object within the vehicle detection region. When analyzing an image within the vehicle detection region, the vehicle detection unit 70 may utilize various widely known object recognition techniques. The detected object may be a vehicle, an obstacle, or any other object that can be located on the road.
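As a rough illustration of how a vehicle detection region (ROI) could be derived from two detected lane lines, the sketch below models each lane line as a straight line x = m·y + b in image coordinates and returns the quadrilateral bounded by the two lines. The line parameters and names are illustrative assumptions, not the disclosed algorithm.

```python
import numpy as np

def roi_between_lane_lines(line_left, line_right, y_top, y_bottom):
    """Return the four corners of the region bounded by two lane lines.

    Each lane line is given as (m, b) with x = m * y + b in image coordinates.
    """
    m1, b1 = line_left
    m2, b2 = line_right
    return np.array([
        [m1 * y_top + b1, y_top],        # top-left corner, on the left lane line
        [m2 * y_top + b2, y_top],        # top-right corner, on the right lane line
        [m2 * y_bottom + b2, y_bottom],  # bottom-right corner
        [m1 * y_bottom + b1, y_bottom],  # bottom-left corner
    ], dtype=np.float32)

# Example: two lane lines that converge toward the top (the far end) of a
# 720-pixel-high image.
roi = roi_between_lane_lines(line_left=(-0.3, 600.0), line_right=(0.3, 560.0),
                             y_top=300, y_bottom=720)
print(roi)
```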
The collision possibility analysis unit 80 judges the possibility of collision by analyzing the movement of the detected object in the vehicle detection region. If the object is another driving vehicle, a motion vector of the object indicates the relative speed between the vehicle and the other driving vehicle. Therefore, the collision possibility analysis unit 80 may judge the possibility of collision based on the motion vector of the object. In addition, the collision possibility analysis unit 80 may judge the possibility of collision by analyzing the size of the object. To judge the possibility of collision, an ultrasonic sensor provided in the vehicle may be used in addition to the obtained image.
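One simple way to turn the motion-vector idea into a collision judgment is to estimate the closing speed of the detected object between frames and derive a time to collision. The sketch below assumes the object's distance behind the vehicle is already available in meters; the threshold and names are illustrative assumptions only.

```python
def collision_possible(prev_distance_m, curr_distance_m, frame_interval_s,
                       ttc_threshold_s=3.0):
    """Judge collision possibility from how quickly the detected object closes in."""
    closing_speed = (prev_distance_m - curr_distance_m) / frame_interval_s  # m/s
    if closing_speed <= 0:
        return False          # the object is not approaching
    time_to_collision = curr_distance_m / closing_speed
    return time_to_collision < ttc_threshold_s

# Example: a vehicle 20 m behind closes 1 m between two frames taken 1/30 s apart
# (a relative speed of 30 m/s), so the time to collision is well under 3 s.
print(collision_possible(prev_distance_m=21.0, curr_distance_m=20.0,
                         frame_interval_s=1 / 30))  # True
```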
When the collision possibility analysis unit 80 has judged the possibility of collision, the warning notification unit 90 provides current lane information and lane change information to the driver of the vehicle using a notification medium provided in the vehicle. The lane change information is information indicating whether it is dangerous for the vehicle to change lanes, and may include at least one of a lane change warning notification and a lane change permission notification.
In the current embodiment, the configuration of the lane change support apparatus has been described. A lane change support method according to an embodiment will now be described. The description of the lane change support apparatus can be supplemented in more detail by the following description of the lane change support method.
An image of an area on the sides of and behind a vehicle is received (operation S10). The received image is analyzed to set a region for multi-lane line detection in the received image and to detect lane lines in the received image (operation S20). To decide lane lines for setting a vehicle detection region, the lane lines may be detected by analyzing the received image. In addition, the lane lines may be detected by matching lane lines in the received image with lane lines in a preset region for multi-lane line detection. The region for multi-lane line detection corresponds to a lane line region candidate group set according to the standard road lane line width. Next, it is determined whether the detected lane lines are dotted lines or solid lines (operation S30). If the detected lane lines are solid lines, they are lane lines across which lane changing is not permitted. After the determining of whether the detected lane lines are dotted lines or solid lines (operation S30), it is determined whether the detected lane lines correspond to a lane change prohibited region (operation S40). A prohibited region refers to a region where lane changing is prohibited, such as a road in a tunnel or on an overpass, and corresponds to a case where the detected lane lines are determined to be solid lines. If the detected lane lines are determined to be dotted lines, they correspond to a region where lane changing is possible, and therefore the detected lane lines are not determined to be a prohibited region.
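A minimal sketch of one way the dotted/solid determination of operation S30 might be implemented: the detected lane line is sampled along its length into a binary occupancy profile (1 where lane paint is present, 0 where it is not), and a line whose fill ratio exceeds a threshold is treated as solid. The 0.9 threshold and the sampling representation are illustrative assumptions.

```python
import numpy as np

def is_solid_line(occupancy, fill_threshold=0.9):
    """Treat a lane line as solid if paint covers most of its sampled length."""
    occupancy = np.asarray(occupancy, dtype=float)
    return occupancy.mean() >= fill_threshold

dotted = [1, 1, 0, 0, 1, 1, 0, 0, 1, 1]   # alternating paint segments and gaps
solid = [1] * 10                           # paint along the whole length
print(is_solid_line(dotted), is_solid_line(solid))  # False True
```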
If the detected lane lines are determined to be prohibited regions, a warning notification unit warns against changing lanes (operation S100).
If the detected lane lines are not determined to be prohibited regions, a vehicle detection region is set based on the detected lane lines (operation S50). Two or more vehicle detection regions may be set in order to analyze the possibility of collision between the vehicle and another vehicle travelling in another lane in the same direction as the vehicle. Next, an object is detected in the vehicle detection region (operation S60), and the possibility of collision between the detected object and the vehicle is analyzed to judge the possibility of collision (operation S70). Examples of judging the possibility of collision will be described in detail later.
If it is determined that there is no possibility of collision, the driver of the vehicle is informed that a lane change is possible (operation S90). If it is determined that there is a possibility of collision, the driver of the vehicle is warned against changing lanes (operation S100).
An image of an area on the sides of and behind a vehicle is captured and obtained by an image sensor unit, and the obtained image is provided to a detection unit by an image providing unit (operation S10). Then, it is determined whether the turn signal lamp of the vehicle is on (operation S11). If the turn signal lamp of the vehicle is not on, the detection unit and a collision possibility analysis unit determine that there is a possibility of collision (operation S70) and warn against changing lanes (operation S100).
An image of an area on the sides of and behind a vehicle is captured and obtained by the image sensor unit, and the obtained image is provided to the detection unit by the image providing unit (operation S10). Then, it is determined whether the turn signal lamp of the vehicle is on (operation S11). If the turn signal lamp of the vehicle is on, an image of an area in the direction in which the vehicle intends to move is obtained using an image sensor provided on the side of the vehicle in the intended movement direction (operation S70). When the image of the area in the intended movement direction is obtained, a region for multi-lane line detection is set in the obtained image, and multiple lane lines are detected in the obtained image (operation S20-1). The lane lines may be detected immediately in the obtained image or may be detected at the same time as the setting of the region for multi-lane line detection. The region for multi-lane line detection may correspond to a lane line region candidate group set according to the standard road lane line width.
When the lane lines are detected (operation S21), the types of the lane lines are judged (operation S22).
If the lane lines are dotted lines (operation S23), a lane change is possible. Therefore, a vehicle detection region is set based on the detected lane lines (operation S50). Two or more vehicle detection regions may be set in order to analyze the possibility of collision between the vehicle and another vehicle travelling in another lane. Next, a vehicle is detected in the vehicle detection region (operation S60). When a vehicle is detected in the vehicle detection region (operation S61), the possibility of collision with the detected vehicle is judged by sensing the movement of the detected vehicle in the vehicle detection region (operation S70). If there is a possibility of collision between the vehicle and the detected vehicle (operation S80), the driver of the vehicle is warned against changing lanes (operation S100). If there is no possibility of collision (operation S80), the driver of the vehicle is informed that a lane change is possible (operation S90).
If the lane lines are not dotted lines (operation S23), a lane change is not possible. In this case, the driver of the vehicle is warned against changing lanes (operation S100).
If no lane line is detected (operation S21), the background photographed in the intended movement direction, i.e., the direction in which the turn signal lamp of the vehicle is turned on, is identified (operation S31). The background in the intended movement direction is identified and analyzed to determine whether it corresponds to a prohibited region (operation S40).
If it is determined that the background in the intended movement direction is a prohibited region, the driver of the vehicle is warned against changing lanes (operation S100). A prohibited region refers to a case where there is a solid lane line in the intended movement direction or an obstacle such as a median strip.
If it is determined that the background in the intended movement direction is not a prohibited region, default lane lines are created (operation S41). The default lane lines are virtual lane lines set according to the standard road lane line width when there is no lane line in an image. A vehicle detection region is set based on the created default lane lines (operation S50). Then, a vehicle is detected in the vehicle detection region (operation S60). When a vehicle is detected in the vehicle detection region (operation S61), the possibility of collision with the detected vehicle is analyzed and judged (operation S70). If there is a possibility of collision between the vehicle and the detected vehicle (operation S80), the driver of the vehicle is warned against changing lanes (operation S100).
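The sketch below illustrates one way default (virtual) lane lines could be created from the standard lane width when no lane line is detected, using a top-down coordinate system centered on the vehicle. The 3.5 m width and the helper name are illustrative assumptions.

```python
STANDARD_LANE_WIDTH_M = 3.5  # assumed standard road lane width

def default_lane_lines(vehicle_center_x_m, num_lanes_each_side=1):
    """Place virtual lane line positions at standard-width offsets from the vehicle."""
    lines = []
    for i in range(-num_lanes_each_side, num_lanes_each_side + 1):
        lines.append(vehicle_center_x_m + (i + 0.5) * STANDARD_LANE_WIDTH_M)
        lines.append(vehicle_center_x_m + (i - 0.5) * STANDARD_LANE_WIDTH_M)
    return sorted(set(lines))

# Virtual boundaries of the vehicle's own lane and of one lane on each side.
print(default_lane_lines(0.0))  # [-5.25, -1.75, 1.75, 5.25]
```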
The lane change support method is terminated by informing the driver that a lane change is possible (operation S90) or warning the driver that a lane change is dangerous (operation S100).
The vehicle 1 includes a first image sensor provided on its side and a second image sensor provided on its rear. The first image sensor obtains an image of an area on the side of the vehicle 1, and the second image sensor obtains an image of an area behind the vehicle 1. In the current embodiment, the obtained image of the area on the side of the vehicle 1 and the obtained image of the area behind the vehicle 1 may be merged into one image.
After the lane lines are determined, information about in which lane the vehicle 1 is currently travelling may be provided in the lane change support method according to the current embodiment.
A lane information providing method according to an embodiment may include judging whether the detected lane lines are dotted lines or solid lines and providing current lane information of the vehicle 1 based on solid lane lines among the judged lane lines. For example, the number of dotted lane lines between a current lane of the vehicle 1 and each of the solid lane lines determined on both sides of the vehicle 1 may be identified to inform the driver of the vehicle 1 about the current lane. In an embodiment, the current lane of the vehicle 1 may be identified only by counting the number of dotted lane lines detected in a left image. For example, if the number of dotted lane lines located between the centerline (solid line) and the current lane of the vehicle 1 is two in the left image, it may be determined that the vehicle 1 is currently traveling in a third lane.
However, there may be cases where it is difficult to accurately determine in which lane the vehicle 1 is currently travelling based only on the number of dotted lane lines. For example, when a large vehicle such as a trailer truck is travelling on a side of or behind the vehicle 1, or when the vehicle 1 is travelling on a wide road whose lanes are not all captured in a side image, it may be difficult to accurately determine in which lane the vehicle 1 is currently travelling. Thus, a lane information providing method according to an embodiment may include receiving information about the number of lanes on the road on which the vehicle 1 is currently travelling from a navigation device provided in the vehicle 1 and determining in which lane the vehicle 1 is currently travelling by additionally using the information about the number of lanes. For example, when the navigation device provides information indicating that the current road is a one-way 8-lane road, and the dotted first lane line 7-1, the dotted second lane line 7-2, and the solid third lane line 7-3 are determined on the right side of the vehicle 1 in the obtained image, the driver of the vehicle 1 may be informed that the vehicle 1 is currently travelling in the sixth lane of the road.
As described above, it is possible to more accurately determine in which lane the vehicle 1 is currently travelling by using the number of dotted lane lines and information about the number of lanes on the current road provided by the navigation device.
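The lane-numbering rules described above reduce to simple counting, as sketched below: counting dotted lane lines from the centerline side gives the lane number directly, while counting dotted lane lines on the right side can be combined with the total number of lanes reported by the navigation device. The function names are illustrative only.

```python
def current_lane_from_left(dotted_lines_left_of_vehicle):
    """Lane number counted from the centerline (solid line) on the left."""
    return dotted_lines_left_of_vehicle + 1

def current_lane_from_right(total_lanes, dotted_lines_right_of_vehicle):
    """Lane number derived from the navigation device's lane count and the
    dotted lane lines detected on the right side of the vehicle."""
    return total_lanes - dotted_lines_right_of_vehicle

# Two dotted lines between the centerline and the vehicle -> third lane.
print(current_lane_from_left(2))      # 3
# One-way 8-lane road with two dotted lines on the right -> sixth lane.
print(current_lane_from_right(8, 2))  # 6
```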
When the default lane lines are created, the detection unit analyzes the possibility of collision with another vehicle in a lane created based on the default lane lines. The lane created based on the default lane lines is treated in the same way as a lane marked by actual lane lines.
An image processing process in a case where a plurality of vehicle detection regions are set in the first lane 9-1 will now be described.
In addition, the number of vehicle detection regions may be different in the first lane 9-1 and the second lane 9-2. Since the vehicle 1 is more likely to collide with a vehicle in the first lane 9-1 than with a vehicle in the second lane 9-2, the detection unit may set more vehicle detection regions in the first lane 9-1 than in the second lane 9-2 for efficient image processing.
The first lane-changing vehicle detection region 14-1 detects a vehicle moving through the first lane line 7-1. For example, when a vehicle behind the vehicle 1 attempts to overtake the vehicle 1 by moving to a first lane 9-1 and then moving from the first lane 9-1 to the same lane as the vehicle 1, the first lane-changing vehicle detection region 14-1 can detect the vehicle. In addition, the first lane-changing vehicle detection region 14-1 can detect a vehicle suspected of drowsy driving or drunk driving by determining whether a vehicle behind the vehicle 1 invades the first lane line 7-1 and provide danger information to the driver of the vehicle 1.
The second lane-changing vehicle detection region 14-2 detects a vehicle moving through the second lane line 7-2. For example, the second lane-changing vehicle detection region 14-2 can detect a vehicle moving from a second lane 9-2 to the first lane 9-1. In addition, the second lane-changing vehicle detection region 14-2 can detect a vehicle suspected of drowsy driving or drunk driving by determining whether a vehicle invades the first lane line 7-1 or the second lane line 7-2 and provide danger information to the driver of the vehicle 1.
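Detecting a vehicle suspected of drowsy driving or drunk driving amounts to noticing repeated lane line invasions within a short time window. The sketch below assumes a per-frame boolean that indicates whether the tracked vehicle overlaps a lane line; the window length and invasion threshold are illustrative assumptions.

```python
from collections import deque

class LaneInvasionMonitor:
    def __init__(self, window_frames=150, invasion_threshold=3):
        self.history = deque(maxlen=window_frames)   # recent on/off-line samples
        self.invasion_threshold = invasion_threshold

    def update(self, invades_lane_line: bool) -> bool:
        """Record one frame; return True if the vehicle looks suspicious."""
        self.history.append(invades_lane_line)
        samples = list(self.history)
        # Count transitions from off-line to on-line as separate invasions.
        invasions = sum(1 for prev, curr in zip(samples, samples[1:]) if curr and not prev)
        if samples and samples[0]:
            invasions += 1               # window started with the vehicle on the line
        return invasions >= self.invasion_threshold

monitor = LaneInvasionMonitor(window_frames=10, invasion_threshold=2)
pattern = [False, True, False, False, True, False]   # two separate lane invasions
print([monitor.update(p) for p in pattern])          # the last two values are True
```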
The number and area of vehicle detection regions and the area and number of lane boundary regions may influence the amount of computation required in the image processing of the detection unit. Therefore, it is necessary to minimize the number and area of vehicle detection regions and lane boundary regions.
A vehicle detection region and a lane-changing vehicle detection region may be used to analyze the possibility of collision with a vehicle that changes lanes by sensing the size and speed of that vehicle. On the other hand, the lane boundary region 15 may be used to analyze the possibility of collision simply by detecting a vehicle passing through the lane boundary region 15, without analyzing the size and speed of the vehicle. Therefore, it is possible to more accurately and quickly analyze the possibility of collision by detecting a vehicle that changes lanes using the lane boundary region 15 than by using the lane-changing vehicle detection region.
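The difference between the two checks can be sketched as follows: the lane boundary region 15 only needs a cheap presence test (does any moving object overlap the narrow strip?), whereas a vehicle detection region additionally estimates size and speed. The sketch assumes a binary foreground mask of moving objects is available; the pixel threshold is an illustrative assumption.

```python
import numpy as np

def vehicle_in_boundary_region(foreground_mask, boundary_roi, min_pixels=200):
    """Cheap presence test: does a moving object overlap the lane boundary region?"""
    x0, y0, x1, y1 = boundary_roi
    return int(foreground_mask[y0:y1, x0:x1].sum()) >= min_pixels

mask = np.zeros((480, 640), dtype=np.uint8)
mask[300:360, 200:260] = 1                 # a moving blob straddling the boundary
print(vehicle_in_boundary_region(mask, (190, 0, 210, 480)))  # True
```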
The number of vehicle detection regions, the number of lane-changing vehicle detection regions, and the number of lane boundary regions are not limited to those in the current embodiment, but can be adjusted according to the number of vehicles in an image obtained by the image sensor unit or the traffic volume on a driving road. If there are not many vehicles around the vehicle 1, the number of vehicle detection regions may be minimized in order for efficient image processing.
The number and area of ROIs may be adjusted for efficient image processing. Since the number and size of ROIs affect the image processing speed, they may be adjusted according to the traffic volume in a driving lane.
Even when the second vehicle 2 does not intend to change lanes, if it is determined that there is a possibility of collision because the speed of the second vehicle 2 is higher than that of the first vehicle 1, the warning notification unit warns of a possible collision.
A case where the first vehicle 1 and the second vehicle 2 intend to change to the first lane 9-1 at the same time will now be described. In a conventional lane change support apparatus and method, the possibility of collision can be judged only when a vehicle is running side by side with another vehicle or when there is a vehicle in a blind spot.
In the lane change support method according to the current embodiment, however, the possibility of collision can also be judged when the first vehicle 1 and the second vehicle 2 attempt to change lanes at the same time, by using a first image sensor provided on the side of the first vehicle 1 and a second image sensor provided on the rear of the first vehicle 1. The second image sensor provided on the rear of the first vehicle 1 obtains an image of the second vehicle 2, and the obtained image is analyzed. In the obtained image, the collision possibility analysis unit analyzes the speed and position of the second vehicle 2. In addition, the collision possibility analysis unit analyzes whether a vehicle is detected in the first vehicle detection region 13-1 in the obtained image. If a vehicle is detected in the first vehicle detection region 13-1, there is a possibility of collision when the first vehicle 1 attempts to change to the first lane 9-1. Therefore, the warning notification unit warns of a possible collision.
When the first vehicle 1 does not intend to change lanes and the second vehicle 2 traveling in the second lane 9-2 does not intend to change lanes, the collision possibility analysis unit does not sense a change in the movement of the second vehicle 2 in the second vehicle detection region 13-2. In this case, there is no possibility of collision between the first vehicle 1 and the second vehicle 2. However, when both the first vehicle 1 and the second vehicle 2 simultaneously attempt to change to the first lane 9-1, there is a possibility of collision. The first vehicle detection region 13-1 is set in order to detect the second vehicle 2 moving from the second lane 9-2 to the first lane 9-1, and the second vehicle detection region 13-2 is set in order to detect a vehicle moving in the second lane 9-2.
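A minimal sketch of the simultaneous-lane-change judgment: the host vehicle's change toward the first lane 9-1 is judged dangerous if the tracked vehicle in the second lane 9-2 is also drifting laterally toward the first lane. The road-coordinate tracking, thresholds, and names are illustrative assumptions.

```python
def simultaneous_change_dangerous(host_signals_into_first_lane,
                                  lateral_offsets_m, frame_interval_s=1 / 30,
                                  lateral_speed_threshold_mps=0.3):
    """Warn if the host signals into the first lane while the other vehicle drifts toward it.

    lateral_offsets_m: per-frame lateral distance of the other vehicle from the
    center of the first lane (shrinking values mean it is moving toward that lane).
    """
    if not host_signals_into_first_lane or len(lateral_offsets_m) < 2:
        return False
    elapsed = frame_interval_s * (len(lateral_offsets_m) - 1)
    lateral_speed = (lateral_offsets_m[0] - lateral_offsets_m[-1]) / elapsed
    return lateral_speed > lateral_speed_threshold_mps

# The other vehicle drifts from 3.5 m to 3.2 m off the first-lane center over 10 frames
# while the host vehicle signals into the first lane -> warn.
offsets = [3.5 - 0.03 * i for i in range(11)]
print(simultaneous_change_dangerous(True, offsets))  # True
```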
Although operations are shown in a specific order in the drawings, it should not be understood that the operations must be performed in that specific order or in sequential order, or that all of the illustrated operations must be performed, in order to obtain desired results. In certain situations, multitasking and parallel processing may be advantageous. Likewise, the separation of various configurations in the above-described embodiments should not be understood as being necessarily required, and it should be understood that the described program components and systems may generally be integrated together into a single software product or be packaged into multiple software products.
While the present invention has been particularly illustrated and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present invention as defined by the following claims. The exemplary embodiments should be considered in a descriptive sense only and not for purposes of limitation.