The present application claims priority to the Chinese patent application No. 201911147428.8, titled "Lane line recognition method, apparatus, device and storage medium", filed with the China Patent Office on Nov. 21, 2019, the entire content of which is incorporated herein by reference.
The present disclosure relates to the technical field of image recognition, in particular to a lane line recognition method, an apparatus, a device and a storage medium.
With the vigorous development of artificial intelligence technology, automatic driving has become a feasible driving mode. In automatic driving, environment images around a vehicle are usually obtained through a camera, and artificial intelligence technology is used to acquire road information from the environment images, so as to control the vehicle to drive according to the road information.
The process of using artificial intelligence technology to acquire road information from environment images usually comprises determining, from the environment images, the lane lines of the road section on which the vehicle is driving. Lane lines, as common traffic signs, come in many different types. For example, in terms of color, lane lines comprise white lines and yellow lines; in terms of purpose, lane lines are divided into dashed lines, solid lines, double solid lines and double dashed lines. When a terminal determines the lane lines from the environment images, it usually does so according to the pixel grayscale of different areas in the environment images. For example, an area whose pixel grayscale is significantly higher than that of surrounding areas is determined as a solid line area.
However, the pixel grayscale of the environment images may be affected by the white balance algorithm of the image sensor, varying light intensity and ground reflection, resulting in inaccurate lane lines determined according to the pixel grayscale of the environment images.
Based on this, it is necessary to provide a lane line recognition method, an apparatus, a device and a storage medium for the problem of inaccurate lane lines determined by traditional methods.
In the first aspect, a lane line recognition method comprises:
detecting a current frame image collected by a vehicle and determining a plurality of detection frames where the lane lines in the current frame image are located;
determining a connection area according to position information of the plurality of detection frames, where the connection area comprises the lane lines; and
performing edge detection on the connection area and determining position information of the lane lines in the connection area.
In one embodiment, detecting the current frame image collected by the vehicle and determining a plurality of detection frames where the lane lines in the current frame image are located comprises:
inputting the current frame image into a lane line classification model to obtain the plurality of detection frames where the lane lines in the current frame image are located, where the lane line classification model comprises at least two classifiers that are cascaded.
In one embodiment, inputting the current frame image into the lane line classification model to obtain the plurality of detection frames where the lane lines in the current frame image are located comprises:
performing a scaling operation on the current frame image according to an area size which is recognizable by the lane line classification model so as to obtain a scaled current frame image; and
obtaining the plurality of detection frames where the lane lines in the current frame image are located according to the scaled current frame image and the lane line classification model.
In one embodiment, obtaining the plurality of detection frames where the lane lines in the current frame image are located according to the scaled current frame image and the lane line classification model comprises:
performing a sliding window operation on the scaled current frame image according to a preset sliding window size so as to obtain a plurality of images to be recognized; and
inputting the plurality of images to be recognized into the lane line classification model successively to obtain the plurality of detection frames where the lane lines are located.
In one embodiment, determining the connection area according to the position information of the plurality of detection frames comprises:
merging the plurality of detection frames according to the position information of the plurality of detection frames and determining a merging area where the plurality of detection frames are located; and
determining the connection area corresponding to the plurality of detection frames according to the merging area.
In one embodiment, performing the edge detection on the connection area and determining the position information of the lane lines in the connection area comprises:
performing the edge detection on the connection area to obtain a target edge area; and
taking position information of the target edge area as the position information of the lane lines in the case where the target edge area meets a preset condition.
In one embodiment, the above preset condition comprises at least one selected from a group consisting of: the target edge area comprises a left edge and a right edge, a distal width of the target edge area is less than a proximal width of the target edge area, and the distal width of the target edge area is greater than a product of the proximal width and a width coefficient.
In one embodiment, the method further comprises:
performing target tracking on a next frame image of the current frame image according to the position information of the lane lines in the current frame image and obtaining position information of the lane lines in the next frame image.
In one embodiment, performing the target tracking on the next frame image of the current frame image according to the position information of the lane lines in the current frame image and obtaining the position information of the lane lines in the next frame image comprises:
dividing the next frame image into a plurality of area images;
selecting an area image in the next frame image corresponding to the position information of the lane lines in the current frame image as a target area image; and
performing the target tracking on the target area image to acquire the position information of the lane lines in the next frame image.
In one embodiment, the method further comprises:
determining an intersection of the lane lines according to the position information of lane lines in the current frame image;
determining a lane line estimation area according to the intersection of lane lines and the position information of the lane lines in the current frame image; and
selecting an area image corresponding to the lane line estimation area in the next frame image of the current frame image as the next frame image.
In one embodiment, the method further comprises:
determining a driving state of the vehicle according to the position information of the lane lines, where the driving state of the vehicle comprises line-covering driving; and
outputting warning information in the case where the driving state of the vehicle meets a warning condition that is preset.
In one embodiment, the above warning condition comprises that the vehicle is driving on a solid line, or a duration of the vehicle covering a dotted line exceeds a preset duration threshold.
In the second aspect, a lane line recognition apparatus comprises:
a detection module, configured to detect a current frame image collected by a vehicle and determine a plurality of detection frames where lane lines in the current frame image are located;
a first determination module, configured to determine a connection area according to position information of the plurality of detection frames, where the connection area comprises the lane lines; and
a second determination module, configured to perform edge detection on the connection area and determine position information of the lane lines in the connection area.
In the third aspect, a computer device comprises a memory and a processor, the memory stores computer programs, and the processor implements steps of the above lane line recognition method when executing the computer programs.
In the fourth aspect, a computer-readable storage medium stores computer programs thereon, and the computer programs implement steps of the above-mentioned lane line recognition method when executed by a processor.
In the above lane line recognition method, the apparatus, the device and the storage medium, the position information of the lane lines is determined by first detecting the current frame image collected by the vehicle, determining a plurality of detection frames where the lane lines in the current frame image are located, determining the connection area which comprises the lane lines according to the position information of the plurality of detection frames, and then performing edge detection on the connection area and determining the position information of the lane lines in the connection area. That is to say, the position information of the lane lines is obtained by first dividing the current frame image into a plurality of detection frames, then connecting the detection frames to obtain the connection area comprising the lane lines, and then performing edge detection on the connection area, so as to avoid the problem of inaccurate position information of the determined lane lines when the pixel grayscale of the environment image changes drastically, thereby improving the accuracy of the position information of the determined lane lines.
The above description is only an overview of the technical schemes of the present disclosure. In order that the technical schemes of the present disclosure can be understood more clearly and implemented according to the contents of the description, and in order to make the above and other purposes, features and advantages of the present disclosure more obvious and easy to understand, specific embodiments of the present disclosure are described below.
In order to illustrate the technical schemes of the embodiments of the present disclosure or prior art more clearly, the following will briefly introduce the attached drawings used in the description of the embodiments or the prior art. Obviously, the attached drawings in the following description are some embodiments of the present disclosure, and for those skilled in the art, other drawings can also be obtained from these drawings without any creative effort.
The lane line recognition method, the apparatus, the device, and the storage medium provided by the present application aim to solve the problem of inaccurate lane lines determined by traditional methods. The technical schemes of the present application and how they solve the above technical problems will be described in detail through the embodiments and in combination with the accompanying drawings. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. It should be understood that the specific embodiments described herein are only used to explain the present application and are not used to limit the present application. Obviously, the described embodiments are part of the embodiments of the present disclosure, not all of them. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative work belong to the protection scope of the present disclosure.
The lane line recognition method provided by this embodiment can be applied to the application environment shown in
It should be noted that the execution subject of the lane line recognition method provided by the embodiments of the present disclosure can be a lane line recognition apparatus, which can be realized as part or all of a lane line recognition terminal by means of software, hardware or a combination of software and hardware.
In order to make the purpose, technical schemes and advantages of the embodiments of the present disclosure clearer, the technical schemes in the embodiments of the present disclosure will be clearly and completely described below in combination with the attached drawings in the embodiments of the present disclosure. Obviously, the described embodiments are part of the embodiments of the present disclosure, not all of the embodiments.
S101: detecting the current frame image collected by the vehicle and determining a plurality of detection frames where the lane lines in the current frame image are located.
The current frame image can be the image collected by the image acquisition device arranged on the vehicle, and the current frame image can comprise the environment information around the vehicle when the vehicle is driving. Generally, the image acquisition device is a camera, and the data it collects is video data, that is, the current frame image can be the image corresponding to the current frame in the video data. The detection frame can be an area comprising lane lines in the current frame image and is a roughly selected area of lane lines in the current frame image. The position information of the detection frame can be used to indicate the position of the lane line area in the current frame image. It should be noted that the detection frame can be an area smaller than the area of the position of all lane lines, that is to say, one detection frame usually comprises only part of the lane lines, not all the lane lines.
When detecting the current frame image collected by the vehicle and determining the plurality of detection frames where the lane lines in the current frame image are located, this can be realized by image detection technology. For example, a plurality of detection frames where the lane lines in the current frame image are located can be determined through the lane line area recognition model.
S102: determining the connection area according to the position information of the plurality of detection frames, where the connection area comprises lane lines.
Generally, one frame of image may comprise a plurality of lane lines, so the detection frames indicating the same lane line can be connected to obtain one connection area, which comprises one lane line. That is, when a plurality of detection frames are acquired, the detection frames whose indicated lane lines coincide can be connected according to the position information of the detection frames so as to obtain the connection area.
S103: performing edge detection on the connection area and determining the position information of the lane lines in the connection area.
The position information of the lane lines can be used to indicate the area where the lane lines in the environment image are located, which can mark lane lines in the environment image by different colors. When the connection area is obtained, edge detection can be performed on the position in the current frame image indicated by the connection area, that is to say, the edge area with significantly different image pixel grayscale in the connection area can be selected to determine the position information of the lane lines.
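As an illustration only, the following minimal sketch shows how edge detection might be applied inside a connection area using OpenCV's Canny detector; the function name detect_edges_in_region, the Gaussian blur step and the thresholds are assumptions made for the example, not part of the claimed method.

```python
import cv2


def detect_edges_in_region(frame, region):
    """Run Canny edge detection inside one connection area.

    frame  : BGR image (numpy array) of the current frame
    region : (x, y, w, h) rectangle describing the connection area
    Returns an edge map (uint8, 0 or 255) of the same size as the region.
    """
    x, y, w, h = region
    roi = frame[y:y + h, x:x + w]                 # crop the connection area
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)  # grayscale for edge detection
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress noise before Canny
    edges = cv2.Canny(blurred, 50, 150)           # thresholds chosen for illustration
    return edges
```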
In the above lane line recognition method, the position information of the lane lines is determined by first detecting the current frame image collected by the vehicle, determining a plurality of detection frames where the lane lines in the current frame image are located, determining the connection area which comprises the lane lines according to the position information of the plurality of detection frames, and then performing edge detection on the connection area and determining the position information of the lane lines in the connection area. That is to say, the position information of the lane lines is obtained by first dividing the current frame image into a plurality of detection frames, then connecting the detection frames to obtain the connection area comprising the lane lines, and then performing edge detection on the connection area, so as to avoid the problem of inaccurate position information of the determined lane lines when the pixel grayscale of the environment image changes drastically, thereby improving the accuracy of the position information of the determined lane lines.
Optionally, the current frame image is input into the lane line classification model to obtain a plurality of detection frames where the lane lines in the current frame image are located; and the lane line classification model comprises at least two classifiers that are cascaded.
The lane line classification model can be a traditional neural network model. For example, the lane line classification model can be an Adaboost model, and its structure can be shown as
When the current frame image is input into the lane line classification model to obtain the plurality of detection frames where the lane lines in the current frame image are located, the current frame image of the vehicle can be directly input into the lane line classification model, and the plurality of detection frames corresponding to the current frame image are output through the mapping relationship, preset in the lane line classification model, between the current frame image and the detection frames. Alternatively, a scaling operation can be performed on the current frame image of the vehicle according to a preset scaling ratio, so that the size of the scaled current frame image matches the size of the area that can be recognized by the lane line classification model; after the scaled current frame image is input into the lane line classification model, the plurality of detection frames corresponding to the current frame image are output through the same preset mapping relationship. The embodiments of the present disclosure do not limit this aspect.
S201: performing a scaling operation on the current frame image according to the area size which is recognizable by the lane line classification model so as to obtain the scaled current frame image.
When the lane line classification model is a traditional neural network model, the area size recognizable by the traditional neural network model is a fixed size, for example, 20×20 or 30×30. When the size of the lane line area in the current frame image collected by the image acquisition device is greater than this fixed size, and the current frame image is directly input into the lane line classification model, the lane line classification model cannot recognize the current frame image and obtain the position information of the plurality of lane line areas. The current frame image can be scaled through the scaling operation to obtain the scaled current frame image, so that the size of the lane line area in the scaled current frame image matches the size of the area which can be recognized by the lane line classification model.
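Purely by way of illustration, a scaling operation of this kind could be sketched as follows; the helper name scale_for_classifier, the use of an estimated lane-line width, and the default window size of 20 are assumptions for the example rather than details taken from the disclosure.

```python
import cv2


def scale_for_classifier(frame, lane_width_px, window_size=20):
    """Scale the frame so the apparent lane-line width fits the classifier window.

    frame         : current frame image (numpy array)
    lane_width_px : estimated width of the lane-line area in the original frame (assumption)
    window_size   : side length of the area the classifier can recognize
    Returns the scaled image and the scale factor (needed to map detections back).
    """
    scale = window_size / float(lane_width_px)    # shrink so the lane area matches the window
    new_w = max(1, int(frame.shape[1] * scale))
    new_h = max(1, int(frame.shape[0] * scale))
    scaled = cv2.resize(frame, (new_w, new_h), interpolation=cv2.INTER_AREA)
    return scaled, scale
```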
S202: obtaining a plurality of detection frames where the lane lines in the current frame image are located according to the scaled current frame image and the lane line classification model.
Optionally, the specific process of obtaining the position information of the plurality of lane line areas according to the scaled current frame image and the lane line classification model can be shown in
S301: performing a sliding window operation on the scaled current frame image according to the preset sliding window size so as to obtain a plurality of images to be recognized.
The preset sliding window size can be obtained according to the area size that can be recognized by the above lane line classification model. The preset sliding window size can be the same as the area size that can be recognized by the lane line classification model, or it can be slightly smaller than the area size that can be recognized by the lane line classification model, and the embodiments of the present disclosure do not limit this. According to the preset sliding window size, the sliding window operation can be performed on the scaled current frame image to obtain a plurality of images to be recognized, and the size of the image to be recognized is obtained according to the preset sliding window size. For example, the size of the scaled current frame image is 800×600, and the preset sliding window size is 20×20, so that the image in the window determined with the coordinate (0,0) as the starting point and the coordinate (20,20) as the ending point can be regarded as the first image to be recognized according to the preset sliding window size, and then the image in the window determined with the coordinate (2,0) as the starting point and the coordinate (22,20) as the ending point is obtained by sliding 2 along the x-axis coordinate according to the preset sliding window step 2, and is regarded as the second image to be recognized. The window is slid successively until the image in the window determined with the coordinate (780,580) as the starting point and the coordinate (800,600) as the ending point is taken as the last image to be recognized, so as to obtain a plurality of images to be recognized.
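The sliding window operation described above could be sketched roughly as follows, using the 20×20 window and step of 2 from the example; the generator name sliding_windows is hypothetical.

```python
def sliding_windows(image, win=20, step=2):
    """Enumerate sliding-window sub-images over the scaled frame.

    For an 800x600 image with a 20x20 window and step 2, the first window is
    (0,0)-(20,20), the second is (2,0)-(22,20), and the last is (780,580)-(800,600).
    Yields (x, y, sub_image) tuples.
    """
    h, w = image.shape[:2]
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            yield x, y, image[y:y + win, x:x + win]
```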
S302: inputting the plurality of images to be recognized into the lane line classification model successively to obtain the plurality of detection frames where the lane lines are located.
When the plurality of images to be recognized are input into the lane line classification model successively, the lane line classification model can judge whether the image to be recognized is an image of the lane lines through the classifier. The classifier can be at least two classifiers that are cascaded. When the last-level classifier determines that the image to be recognized is an image of lane lines, the position information corresponding to the image to be recognized determined as the image of lane lines can be determined as a plurality of detection frames where the lane lines are located, that is, the plurality of detection frames where the lane lines are located can be small windows shown in
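A minimal sketch of such a cascade is given below; the stage_classifiers list of callables is a hypothetical stand-in for the cascaded classifiers (e.g. Adaboost stages), and the sketch only illustrates the early-rejection behavior described above.

```python
def run_cascade(windows, stage_classifiers):
    """Pass each window through cascaded classifiers; keep windows accepted by every stage.

    windows           : iterable of (x, y, sub_image) from the sliding-window step
    stage_classifiers : list of callables, each returning True if the sub-image may contain a lane line
    Returns a list of (x, y) positions of detection frames where lane lines are located.
    """
    detections = []
    for x, y, sub in windows:
        accepted = True
        for stage in stage_classifiers:   # later stages only see windows earlier stages accepted
            if not stage(sub):
                accepted = False
                break                      # rejected early, as in a cascade
        if accepted:
            detections.append((x, y))
    return detections
```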
In the above lane line recognition method, the terminal performs a scaling operation on the current frame image according to the area size which is recognizable by the lane line classification model so as to obtain the scaled current frame image, and obtains the position information of the plurality of lane line areas according to the scaled current frame image and the lane line classification model. This avoids the situation in which, when the lane line classification model is a traditional neural network model, a current frame image that does not match the area size recognizable by the traditional neural network model cannot be recognized. At the same time, because the traditional neural network model has a simple structure, using it as the lane line classification model to obtain the position information of the lane line area of the current frame image requires only a small amount of calculation. Therefore, there is no need to use a chip with high computing capability to acquire the position information of the lane line area of the current frame image, and thus the cost of the apparatus required for lane line recognition is reduced.
S401: merging the plurality of detection frames according to the position information of the plurality of detection frames and determining the merging area where the plurality of detection frames are located.
According to the position information of the detection frames, a plurality of detection frames with overlapping positions are determined, and the detection frames with overlapping positions are merged to obtain the merging area where the plurality of detection frames are located. Based on the description in the above embodiments, each detection frame comprises part of lane lines, and a plurality of detection frames with overlapping positions usually correspond to one complete lane line. Therefore, a plurality of detection frames with overlapping positions are merged to obtain the merging area where a plurality of detection frames are located, and the merging area usually comprises one complete lane line. For example, the merging area may be two merging areas shown in
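For illustration, merging overlapping detection frames could look roughly like the following single-pass sketch; the function names and the greedy strategy are assumptions, and a full implementation would iterate until no further merges occur.

```python
def merge_overlapping(frames):
    """Greedily merge detection frames that overlap into larger merging areas.

    frames : list of (x, y, w, h) detection frames
    Returns a list of (x, y, w, h) merging areas; each typically covers one complete lane line.
    """
    def overlaps(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def union(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        x1, y1 = min(ax, bx), min(ay, by)
        x2, y2 = max(ax + aw, bx + bw), max(ay + ah, by + bh)
        return (x1, y1, x2 - x1, y2 - y1)

    merged = []
    for f in frames:
        placed = False
        for i, m in enumerate(merged):
            if overlaps(f, m):
                merged[i] = union(f, m)   # grow the existing merging area
                placed = True
                break
        if not placed:
            merged.append(f)              # start a new merging area
    return merged                         # single pass kept short for the sketch
```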
S402: determining the connection area corresponding to the plurality of detection frames according to the merging area.
On the basis of the above S401, after the merging area is obtained, the frame detection can be carried out on the merging area to obtain the connection area corresponding to the plurality of detection frames. It should be noted that the connection area can be the largest circumscribed polygon corresponding to the merging area, the largest circumscribed circle corresponding to the merging area, or the largest circumscribed sector corresponding to the merging area, and the embodiments of the present disclosure do not limit this. For example, the connection area may be the largest circumscribed polygon of two merging areas shown in
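As one possible reading of the circumscribed-polygon case, the sketch below computes a convex hull around the corners of the detection frames in a merging area using OpenCV; interpreting the circumscribed polygon as a convex hull is an assumption made for the example.

```python
import numpy as np
import cv2


def connection_area_polygon(frames):
    """Compute a circumscribed polygon (convex hull) around a group of detection frames.

    frames : list of (x, y, w, h) detection frames belonging to one merging area
    Returns an Nx2 array of polygon vertices enclosing all of the frames.
    """
    corners = []
    for x, y, w, h in frames:
        corners.extend([(x, y), (x + w, y), (x, y + h), (x + w, y + h)])
    pts = np.array(corners, dtype=np.int32)
    hull = cv2.convexHull(pts)            # polygon circumscribing all frame corners
    return hull.reshape(-1, 2)
```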
Optionally, edge detection may be performed on the connection area through the embodiment shown in
S501: performing edge detection on the connection area to obtain the target edge area.
S502: taking the position information of the target edge area as the position information of the lane lines in the case where the target edge area meets the preset condition.
The target edge area obtained by performing the edge detection on the connection area may be inaccurate, that is, the target edge area may not be a lane line. Therefore, whether the target edge area comprises a lane line can be determined by judging whether the target edge area meets the preset condition. When the target edge area meets the preset condition, the position information of the target edge area is used as the position information of the lane lines.
Optionally, the preset condition comprises at least one selected from a group consisting of: the target edge area comprises a left edge and a right edge, the distal width of the target edge area is less than the proximal width of the target edge area, and the distal width of the target edge area is greater than the product of the proximal width and the width coefficient.
Because a lane line is usually a line of a preset width on a plane image, it can be determined that the target edge area may be a lane line when the target edge area comprises both the left edge and the right edge. When the target edge area comprises only the left edge or only the right edge, the target edge area cannot be a lane line, which is a misjudgment. At the same time, in the plane image, the lane lines meet the principle of “near thick and far thin”. Therefore, when the distal width of the target edge area is less than the proximal width, the target edge area may be a lane line. Further, the change degree of the width of the lane lines can be defined by the condition of the distal width of the target edge area being greater than the product of the proximal width and the width coefficient. For example, in the case of determining whether the distal width of the target edge area is less than the proximal width, it can be determined by the following formula:
length(i) ≥ length(i+1) and 0.7 × length(i) ≤ length(i+1), where length(i) denotes the proximal width, length(i+1) denotes the distal width of the target edge area, and 0.7 is the width coefficient.
That is, when the target edge area comprises a left edge and a right edge, or the distal width of the target edge area is less than the proximal width, the target edge area is the recognition result of the lane lines.
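A direct transcription of this width condition into code might look as follows, assuming the widths of the target edge area are sampled from proximal to distal and using the width coefficient 0.7 from the formula above; the function name passes_width_condition is hypothetical.

```python
def passes_width_condition(widths, coeff=0.7):
    """Check the 'near thick, far thin' condition on a target edge area.

    widths : list of edge-area widths ordered from proximal (near the vehicle) to distal,
             so widths[i] is nearer than widths[i + 1]
    coeff  : width coefficient limiting how quickly the line may narrow
    Returns True if every adjacent pair satisfies
        widths[i] >= widths[i + 1]  and  coeff * widths[i] <= widths[i + 1].
    """
    for i in range(len(widths) - 1):
        if not (widths[i] >= widths[i + 1] and coeff * widths[i] <= widths[i + 1]):
            return False
    return True
```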
In the above lane line recognition method, the terminal performs edge detection on the connection area to obtain the target edge area. In the case where the target edge area meets the preset condition, the position information of the target edge area is taken as the position information of the lane lines, and the preset condition is used to determine whether the target edge area comprises a lane line. That is to say, after edge detection is performed on the connection area and the target edge area is obtained, it is further judged whether the target edge area meets the preset condition, and the position information of the target edge area meeting the preset condition is taken as the position information of the lane lines. This avoids the misjudgment that can occur when the position information of the target edge area obtained by edge extraction is directly used as the position information of the lane lines, and thereby further improves the accuracy of the position information of the determined lane lines.
On the basis of the above embodiments, when recognizing the lane lines of the next frame image of the current frame image, target tracking may be performed on the next frame image according to the position information of the lane lines in the current frame image, so as to obtain the position information of the lane lines in the next frame image.
When the position information of the lane lines in the current frame image is determined, the color and brightness of the lane lines in the position information of the lane lines can be compared with the next frame image of the current frame image, and the area in the next frame image that matches the color and brightness of the lane lines in the current frame image can be tracked to obtain the position information of the lane lines in the next frame image.
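One simple way to realize this appearance-based tracking is normalized template matching, sketched below with OpenCV; the function name and the choice of cv2.matchTemplate are assumptions for the example, not the only possible tracking method.

```python
import cv2


def track_lane_in_next_frame(current_frame, next_frame, lane_box):
    """Track a lane-line region into the next frame by matching its color/brightness appearance.

    current_frame, next_frame : BGR images (numpy arrays)
    lane_box                  : (x, y, w, h) position of the lane line in the current frame
    Returns the best-matching (x, y, w, h) box in the next frame.
    """
    x, y, w, h = lane_box
    template = current_frame[y:y + h, x:x + w]           # appearance of the lane line
    result = cv2.matchTemplate(next_frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)             # location with the most similar appearance
    nx, ny = max_loc
    return (nx, ny, w, h)
```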
Optionally, target tracking may be performed on the next frame image of the current frame image through the embodiment shown in
S601: dividing the next frame image into a plurality of area images.
When target tracking is performed on the next frame image according to the position information of the lane lines, the illumination in the next frame image may change. For example, reflection caused by puddles on the road produces a ponding area in the next frame image, and the brightness of the ponding area is significantly different from that of other areas. If target tracking is performed directly on the whole next frame image, misjudgment easily occurs due to the high brightness of the ponding area. Therefore, the next frame image can be divided into a plurality of area images, so that the brightness of the lane lines in each area image is uniform, thereby avoiding misjudgment caused by the excessively high brightness of the ponding area.
S602: selecting the area image in the next frame image corresponding to the position information of the lane lines in the current frame image as the target area image.
S603: performing target tracking on the target area image to acquire the position information of the lane lines in the next frame image.
In the lane line recognition method, the next frame image is divided into a plurality of area images, the area image in the next frame image corresponding to the position information of the lane lines in the current frame image is selected as the target area image, target tracking is performed on the target area image, and the position information of the lane lines in the next frame image is acquired, which avoids the existence of abnormal brightness area caused by illumination change in the next frame image, further avoids the wrong target area image obtained by misjudging the abnormal brightness area, and improves the accuracy of the position information of the lane lines in the next frame image obtained by performing target tracking on the target area image.
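For illustration, dividing the next frame into area images and selecting the target area image could be sketched as follows; the grid division into rows and columns and the helper name select_target_area_image are assumptions made for the example.

```python
def select_target_area_image(next_frame, lane_box, rows=2, cols=2):
    """Divide the next frame into a grid of area images and pick the one containing the lane position.

    next_frame : image of the next frame (numpy array)
    lane_box   : (x, y, w, h) lane-line position carried over from the current frame
    rows, cols : how the frame is divided into area images (illustrative choice)
    Returns the area image whose grid cell contains the centre of lane_box.
    """
    h, w = next_frame.shape[:2]
    cell_h, cell_w = h // rows, w // cols
    cx = lane_box[0] + lane_box[2] // 2
    cy = lane_box[1] + lane_box[3] // 2
    r = min(cy // cell_h, rows - 1)     # grid row containing the lane-line centre
    c = min(cx // cell_w, cols - 1)     # grid column containing the lane-line centre
    return next_frame[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w]
```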
When the position information of the lane lines in the current frame image is determined and the position information of the lane lines needs to be determined for the next frame image of the current frame image, the lane line estimation area can be determined according to the position information of the lane lines in the current frame image, and the area image corresponding to the lane line estimation area in the next frame image can be used as the next frame image. As shown in
S701: determining the intersection of lane lines according to the position information of the lane lines in the current frame image.
Generally, lane lines appear in pairs, that is, an environment image usually comprises two lane lines. As shown in
S702: determining the lane line estimation area according to the intersection of the lane lines and the position information of the lane lines in the current frame image.
When the intersection of the lane lines is obtained, the current frame image can be divided into two areas according to the intersection, and the area comprising the lane lines is regarded as the lane line estimation area. Specifically, when the current frame image is divided into an upper area and a lower area, the intersection is usually located on the horizon of the image, so the upper area of the image is the sky and the lower area of the image is the ground, i.e., the area where the lane lines are located; therefore, the lower area of the image is determined as the lane line estimation area.
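A minimal sketch of determining the intersection of two lane lines and keeping the lower area as the lane line estimation area is shown below; representing each lane line as a two-point segment is an assumption for the example.

```python
def lane_line_intersection(line_a, line_b):
    """Find the intersection of two lane lines, each given as ((x1, y1), (x2, y2)).

    Returns (x, y) of the intersection, or None if the lines are parallel.
    """
    (x1, y1), (x2, y2) = line_a
    (x3, y3), (x4, y4) = line_b
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None                              # parallel lines have no intersection
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return px, py


def estimation_area(frame, intersection_y):
    """Keep only the lower part of the frame (below the lane-line intersection) as the estimation area."""
    return frame[int(intersection_y):, :]
```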
S703: selecting the area image corresponding to the lane line estimation area in the next frame image of the current frame image as the environment image of the next frame.
In the above lane line recognition method, the intersection of the lane lines is determined according to the position information of the lane lines in the current frame image, the lane line estimation area is determined according to the intersection of the lane lines and the position information of the lane lines in the current frame image, and the area image corresponding to the lane line estimation area in the next frame image of the current frame image is selected as the environment image of the next frame. That is to say, the next frame image only comprises the lane line estimation area, so that the amount of data required to be calculated when determining the position information of the lane lines in the next frame image is small, which improves the efficiency of determining the position information of the lane lines in the next frame image.
When the recognition result of lane lines is determined, it can also be determined whether to output warning information according to the recognition result and the current position information of the vehicle. The following is described in detail with reference to
S801: determining the driving state of the vehicle according to the position information of the lane lines, where the driving state of the vehicle comprises line-covering driving.
On the basis of the above embodiments, after the position information of the lane lines is determined, the driving state of the vehicle, that is, whether the vehicle is driving on a line, can be calculated according to the position information of the image acquisition device installed on the vehicle. For example, when the image acquisition device is installed on the vehicle, whether the vehicle is driving on a line can be determined according to the position where the image acquisition device is installed on the vehicle, the lane line recognition result, and the vehicle's own parameters, such as the height and width of the vehicle.
S802: in the case where the driving state of the vehicle meets the warning condition that is preset, outputting the warning information.
When the driving state of the vehicle is line-covering driving and meets the preset warning condition, the warning information is output. Optionally, the warning condition comprises that the vehicle drives on a solid line, or the duration of the vehicle on a dotted line exceeds a preset duration threshold, that is to say, when the driving state of the vehicle is line-covering driving, and the vehicle is driving on a solid line, or the vehicle is in the line-covering driving state, and the duration of the vehicle on the dotted line exceeds the preset duration threshold, the driving state of the vehicle meets the preset warning condition, and thus the warning information is output. The warning information may be a voice prompt, a beeper or a flashing light, which is not limited in the embodiments of the present disclosure.
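The warning condition could be expressed, purely as a sketch, by a small decision function like the one below; the 3-second default threshold and the function name should_warn are assumptions, since the disclosure only states that a preset duration threshold is used.

```python
def should_warn(is_covering_line, line_type, covering_duration_s, duration_threshold_s=3.0):
    """Decide whether to output warning information based on the driving state.

    is_covering_line    : True if the vehicle is currently driving on a lane line
    line_type           : 'solid' or 'dashed' for the line being covered
    covering_duration_s : how long the vehicle has been covering the line, in seconds
    Returns True if the preset warning condition is met.
    """
    if not is_covering_line:
        return False
    if line_type == 'solid':
        return True                                     # covering a solid line warns immediately
    return covering_duration_s > duration_threshold_s   # dashed line: warn only after the threshold
```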
In the above lane line recognition method, the terminal determines the driving state of the vehicle according to the position information of the lane lines and the current position information of the vehicle. The driving state of the vehicle comprises line-covering driving. In the case where the driving state of the vehicle meets the preset warning condition, the terminal outputs warning information. The warning condition comprises that the vehicle is driving on the solid line, or the duration of the vehicle on the dotted line exceeds the preset duration threshold, so that when the vehicle is driving on the solid line, or the duration on the dotted line exceeds the preset duration threshold, the warning information may be output and the driver can be prompted to ensure driving safety.
It should be understood that although the steps in the flowchart of
The detection module 10 is configured to detect the current frame image collected by the vehicle and determine a plurality of detection frames where the lane lines in the current frame image are located.
The first determination module 20 is configured to determine a connection area according to the position information of the plurality of detection frames, and the connection area comprises lane lines.
The second determination module 30 is configured to perform edge detection on the connection area and determine the position information of the lane lines in the connection area.
In an embodiment, the detection module 10 is specifically used to input the current frame image into the lane line classification model to obtain a plurality of detection frames where the lane lines in the current frame image are located. The lane line classification model comprises at least two classifiers that are cascaded.
The lane line recognition apparatus provided by the embodiments of the present disclosure can execute the above method of the above embodiments, and its implementation principle and the technical effect are similar to those of the method, which will not be repeated here.
The scaling unit 101 is configured to perform a scaling operation on the current frame image according to the area size which is recognizable by the lane line classification model so as to obtain the scaled current frame image.
The first acquisition unit 102 is configured to obtain a plurality of detection frames where the lane lines in the current frame image are located according to the scaled current frame image and the lane line classification model.
In an embodiment, the first acquisition unit 102 is specifically used to perform a sliding window operation on the scaled current frame image according to the preset sliding window size to obtain a plurality of images to be recognized, and is used to input the plurality of images to be recognized successively into the lane line classification model so as to obtain a plurality of detection frames where the lane lines are located.
The lane line recognition apparatus provided by the embodiment of the present application can execute the above method of the above embodiment, and its implementation principle and the technical effect are similar to those of the method, which will not be repeated here.
The merging unit 201 is configured to merge a plurality of detection frames according to the position information of the plurality of detection frames and determine the merging area where the plurality of detection frames are located.
The first determination unit 202 is used to determine the connection area corresponding to the plurality of detection frames according to the merging area.
It should be noted that
The lane line recognition apparatus provided by the embodiment of the present application can execute the above method of the above embodiments, and its implementation principle and the technical effect are similar to those of the method, which will not be repeated here.
The detection unit 301 is configured to perform edge detection on the connection area to obtain a target edge area.
The second determination unit 302 is configured to take the position information of the target edge area as the position information of the lane lines in the case where the target edge area meets the preset condition.
In an embodiment, the above preset condition comprises at least one selected from a group consisting of: the target edge area comprises a left edge and a right edge, the distal width of the target edge area is less than the proximal width, and the distal width of the target edge area is greater than the product of the proximal width and the width coefficient.
It should be noted that
The lane line recognition apparatus provided by the embodiment of the present application can execute the above method of the above embodiments, and its implementation principle and the technical effect are similar to those of the method, which will not be repeated here.
The tracking module 40 is configured to perform target tracking on the next frame image of the current frame image according to the position information of the lane lines in the current frame image, and acquire the position information of the lane lines in the next frame image.
In an embodiment, the tracking module 40 is specifically used to divide the next frame image into a plurality of area images, select the area image in the next frame image corresponding to the recognition result of lane lines in the current frame image as the target area image, and perform target tracking on the target area image to acquire the position information of the lane lines in the next frame image.
It should be noted that
The lane line recognition apparatus provided by the embodiment of the present application can execute the above method of the above embodiments, and its implementation principle and the technical effect are similar to those of the method, which will not be repeated here.
The selection module 50 is specifically used to determine the intersection of lane lines according to the position information of the lane lines in the current frame image, determine the lane line estimation area according to the intersection of lane lines and the position information of the lane lines in the current frame image, and select the area image corresponding to the lane line estimation area in the next frame image of the current frame image as the next frame image.
It should be noted that
The lane line recognition apparatus provided by the embodiment of the present application can execute the above method of the above embodiments, and its implementation principle and the technical effect are similar to those of the method, which will not be repeated here.
The warning module 60 is specifically used to determine the driving state of the vehicle according to the position information of the lane lines. The driving state of the vehicle comprises line-covering driving. The warning module 60 is also used to output the warning information in the case where the driving state of the vehicle meets the preset warning condition.
In an embodiment, the above warning condition comprises that the vehicle is driving on the solid line, or the duration of the vehicle covering the dotted line exceeds the preset duration threshold.
It should be noted that
The lane line recognition apparatus provided by the embodiment of the present application can execute the above method of the above embodiments, and its implementation principle and the technical effect are similar to those of the method, which will not be repeated here.
For the specific definition of a lane line recognition apparatus, reference may be made to the definition of the lane line recognition method above, which will not be repeated here. Each module in the lane line recognition apparatus can be realized in whole or in part by software, hardware and their combinations. The above modules can be embedded in or independent of the processor in the computer device in the form of hardware, or stored in the memory in the computer device in the form of software, so that the processor can call and execute the corresponding operations of the above modules.
In an embodiment, a computer device is provided, which can be a terminal device, and its internal structure diagram can be shown in
Those skilled in the art can understand that the structure shown in
In an embodiment, a terminal device is provided, which comprises a memory and a processor, the memory stores a computer program, and the processor implements the following steps when executing the computer program:
detecting the current frame image collected by the vehicle and determining a plurality of detection frames where the lane lines in the current frame image are located;
determining a connection area according to the position information of the plurality of detection frames, where the connection area comprises lane lines; and
performing edge detection on the connection area and determining the position information of the lane lines in the connection area.
In an embodiment, when the processor executes the computer program, it also realizes the following steps: inputting the current frame image into the lane line classification model to obtain a plurality of detection frames where the lane lines in the current frame image are located. The lane line classification model comprises at least two classifiers that are cascaded.
In an embodiment, when executing the computer program, the processor also implements the following steps: performing a scaling operation on the current frame image according to the area size which is recognizable by the lane line classification model so as to obtain the scaled current frame image; and obtaining the plurality of detection frames of lane lines in the current frame image according to the scaled current frame image and the lane line classification model.
In an embodiment, when the processor executes the computer program, it also implements the following steps: performing a sliding window operation on the scaled current frame image according to the preset sliding window size so as to obtain a plurality of images to be recognized; and successively inputting the plurality of images to be recognized into the lane line classification model to obtain the plurality of detection frames where the lane lines are located.
In an embodiment, when the processor executes the computer program, it also implements the following steps: merging the plurality of detection frames according to the position information of the plurality of detection frames and determining the merging area where the plurality of detection frames are located; and according to the merging area, determining the connection area corresponding to the plurality of detection frames.
In an embodiment, when the processor executes the computer program, it also implements the following steps: performing edge detection on the connection area to obtain the target edge area; and in the case where the target edge area meets the preset condition, taking the position information of the target edge area as the position information of the lane lines.
In an embodiment, the above preset condition comprises at least one selected from a group consisting of the following: the target edge area comprises a left edge and a right edge, the distal width of the target edge area is less than the proximal width of the target edge area, and the distal width of the target edge area is greater than the product of the proximal width and the width coefficient.
In an embodiment, when the processor executes the computer program, it also implements the following steps: according to the position information of the lane lines in the current frame image, performing target tracking on the next frame image of the current frame image and acquiring the position information of the lane lines in the next frame image.
In an embodiment, when the processor executes the computer program, it also implements the following steps: dividing the next frame image into a plurality of area images; selecting the area image in the next frame image corresponding to the position information of the lane lines in the current frame image as the target area image; and performing target tracking on the target area image to acquire the position information of the lane lines in the next frame image.
In an embodiment, when the processor executes the computer program, it also implements the following steps: determining the intersection of lane lines according to the position information of lane lines in the current frame image; determining the lane line estimation area according to the intersection of lane lines and the position information of lane lines in the current frame image; and selecting the area image corresponding to the lane line estimation area in the next frame image of the current frame image as the next frame image.
In an embodiment, when the processor executes the computer program, it also implements the following steps: determining the driving state of the vehicle according to the position information of the lane lines, where the driving state of the vehicle comprises line-covering driving; and in the case where the driving state of the vehicle meets the preset warning condition, outputting the warning information.
In an embodiment, the above warning condition comprises that the vehicle is driving on the solid line, or the duration of the vehicle on the dotted line exceeds the preset duration threshold.
The implementation principle and technical effect of the terminal device provided in this embodiment are similar to those of the above method embodiment, and will not be repeated here.
In an embodiment, a computer-readable storage medium is provided on which a computer program is stored. The computer program implements the following steps when executed by a processor:
detecting the current frame image collected by the vehicle, and determining a plurality of detection frames where the lane lines in the current frame image are located;
determining a connection area according to the position information of the plurality of detection frames, where the connection area comprises lane lines; and
performing edge detection on the connection area and determining the position information of the lane lines in the connection area.
In an embodiment, when the computer program is executed by the processor, the following steps are realized: inputting the current frame image into the lane line classification model to obtain a plurality of detection frames where the lane lines in the current frame image are located. The lane line classification model comprises at least two classifiers that are cascaded.
In an embodiment, when the computer program is executed by the processor, the following steps are realized: performing a scaling operation on the current frame image according to the area size which is recognizable by the lane line classification model to obtain the scaled current frame image; and according to the scaled current frame image and the lane line classification model, obtaining the plurality of detection frames of lane lines in the current frame image.
In an embodiment, when the computer program is executed by the processor, the following steps are realized: performing a sliding window operation on the scaled current frame image according to the preset sliding window size to obtain a plurality of images to be recognized; and successively inputting the plurality of images to be recognized into the lane line classification model to obtain the plurality of detection frames where the lane lines are located.
In an embodiment, when the computer program is executed by the processor, the following steps are realized: merging the plurality of detection frames according to the position information of the plurality of detection frames and determining the merging area where the plurality of detection frames are located; and according to the merging area, determining the connection area corresponding to the plurality of detection frames.
In an embodiment, when the computer program is executed by the processor, the following steps are realized: performing edge detection on the connection area to obtain the target edge area; and in the case where the target edge area meets the preset condition, taking the position information of the target edge area as the position information of the lane lines.
In an embodiment, the above preset condition comprises at least one selected from a group consisting of the following: the target edge area comprises a left edge and a right edge, the distal width of the target edge area is less than the proximal width of the target edge area, and the distal width of the target edge area is greater than the product of the proximal width and the width coefficient.
In an embodiment, when the computer program is executed by the processor, the following steps are realized: according to the position information of the lane lines in the current frame image, performing target tracking on the next frame image of the current frame image to acquire the position information of the lane lines in the next frame image.
In an embodiment, when the computer program is executed by the processor, the following steps are realized: dividing the next frame image into a plurality of area images; selecting the area image in the next frame image corresponding to the position information of the lane lines in the current frame image as the target area image; and performing the target tracking on the target area image to acquire the position information of the lane lines in the next frame image.
In an embodiment, when the computer program is executed by the processor, the following steps are realized: determining the intersection of lane lines according to the position information of the lane lines in the current frame image; determining the lane line estimation area according to the intersection of lane lines and the position information of lane lines in the current frame image; and selecting the area image corresponding to the lane line estimation area in the next frame image of the current frame image as the next frame image.
In an embodiment, when the computer program is executed by the processor, the following steps are realized: determining the driving state of the vehicle according to the position information of the lane lines, where the driving state of the vehicle comprises line-covering driving; and in the case where the driving state of the vehicle meets the preset warning condition, outputting the warning information.
In an embodiment, the above warning condition comprises that the vehicle is driving on the solid line, or the duration of the vehicle covering the dotted line exceeds the preset duration threshold.
The implementation principle and technical effect of the computer-readable storage medium provided by this embodiment are similar to those of the above embodiment of the method, and will not be repeated here.
Those of ordinary skill in the art can understand that all or part of the processes for implementing the methods of the above embodiments can be completed by instructing relevant hardware through a computer program. The computer program can be stored in a non-volatile computer-readable storage medium, and when the computer program is executed, the processes of the above embodiments can be realized. Any reference to memory, storage, database or other media used in the various embodiments provided by the present disclosure may comprise non-volatile and/or volatile memory. The non-volatile memory may comprise read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory may comprise random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), Rambus dynamic RAM (RDRAM) and so on.
Various component embodiments of the present disclosure may be implemented by hardware, or by software modules running on one or more processors, or by a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all functions of some or all components in the computing processing device according to the embodiments of the present disclosure. The present disclosure may also be implemented as device or apparatus programs (e.g., computer programs and computer program products) for performing part or all of the methods described herein. Such a program implementing the present disclosure may be stored on a computer-readable medium or may have the form of one or more signals. Such signals may be downloaded from Internet websites, or provided on carrier signals, or provided in any other form. For example,
The term “one embodiment”, “an embodiment” or “one or more embodiments” herein means that the specific features, structures or characteristics described in combination with the embodiments are comprised in at least one embodiment of the present disclosure. In addition, please note that the examples of the word “in an embodiment” here do not necessarily refer to the same embodiment.
In the specification provided herein, a large amount of specific details are set forth. However, it can be understood that the embodiments of the present disclosure may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure an understanding of this specification.
In the claims, any reference symbols between parentheses shall not be construed as a limitation of the claims. The word "comprise" does not exclude the existence of elements or steps not listed in the claims. The word "a" or "an" before an element does not exclude the existence of a plurality of such elements. The present disclosure can be implemented by means of hardware comprising several different elements and by means of a properly programmed computer. In the unit claims listing several apparatuses, several of these apparatuses may be embodied specifically by the same hardware item. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names. The technical features of the above embodiments may be combined arbitrarily. In order to make the description concise, all possible combinations of the technical features in the above embodiments are not described. However, as long as there is no contradiction in the combination of these technical features, they should be considered to be within the scope recorded in this specification. The above embodiments only express several embodiments of the present application, and the descriptions thereof are specific and detailed, but should not be construed as limiting the scope of the disclosure. It should be noted that for those skilled in the art, several modifications and improvements can be made without departing from the concept of the present disclosure, which all belong to the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be determined by the appended claims.
Number | Date | Country | Kind
--- | --- | --- | ---
201911147428.8 | Nov 2019 | CN | national

Filing Document | Filing Date | Country | Kind
--- | --- | --- | ---
PCT/CN2020/115390 | 9/15/2020 | WO |