This application claims the priority benefit of Chinese Patent Application No. 201510660793.4 filed on Oct. 14, 2015 in the Chinese State Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
1. Field
The present disclosure relates to the field of information technologies, and in particular to a method and apparatus for detecting a traffic state and electronic equipment.
2. Description of the Related Art
With the development of the economy, more and more vehicles have entered people's lives. However, with the rapid increase in the number of vehicles, the problems of traffic jams and traffic safety have become more severe. Thanks to the development of information technologies, the concept of intelligent transport was proposed, and transport problems are expected to be solved by technological means.
Traffic state detection is a part of intelligent transport and is able to provide important information for transport management. In intelligent transport technologies, an image processing technology is usually adopted, in which traffic surveillance images are analyzed so as to obtain traffic state information.
It should be noted that the above description of the background art is merely provided for clear and complete explanation of the present disclosure and for easy understanding by those skilled in the art. And it should not be understood that the above technical solution is known to those skilled in the art as it is described in the background of the present disclosure.
Additional aspects and/or advantages will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the embodiments.
It was found by the inventors of this application that an existing traffic state detection method often detects the number of vehicles on the road so as to judge a traffic state. However, the moving states of the vehicles are not taken into account in this detection method; hence, traffic state information cannot be obtained accurately, thereby affecting the efficiency of transport management.
For example, in a case where the number of vehicles on the road is relatively large, if the vehicles are static, a relatively severe traffic jam may have occurred, possibly due to a traffic accident; and if the vehicles are still able to keep a suitable driving speed, it shows that only the vehicle flow-rate is relatively large and no traffic jam has occurred. For these two traffic states, a traffic jam and a relatively large vehicle flow-rate, different transport management schemes may be adopted for management. However, if the above existing traffic state detection method is adopted, the two traffic states cannot be easily differentiated; therefore, it is hard to apply the corresponding transport management scheme, and the efficiency of transport management is affected.
Embodiments of this application provide a method and apparatus for detecting a traffic state and electronic equipment, in which a traffic state is determined according to the number of vehicles and moving states of the vehicles in a traffic surveillance image, thereby more accurately obtaining traffic state information.
According to a first aspect of the embodiments of the present disclosure, there is provided an apparatus for detecting a traffic state, including:
According to a second aspect of the embodiments of the present disclosure, there is provided electronic equipment, including the apparatus for detecting a traffic state as described in the first aspect of the above embodiment.
According to a third aspect of the embodiments of the present disclosure, there is provided a method for detecting a traffic state, including:
An advantage of the embodiments of this application exists in that a traffic state is determined according to the number of vehicles and motion states of the vehicles in a traffic surveillance image, thereby more accurately obtaining traffic state information.
With reference to the following description and drawings, the particular embodiments of the present disclosure are disclosed in detail, and the principle of the present disclosure and the manners of use are indicated. It should be understood that the scope of the embodiments of the present disclosure is not limited thereto. The embodiments of the present disclosure contain many alterations, modifications and equivalents within the spirit and scope of the terms of the appended claims.
Features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
It should be emphasized that the term “comprises/comprising/includes/including” when used in this specification is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
The drawings are included to provide further understanding of the present disclosure, which constitute a part of the specification and illustrate the preferred embodiments of the present disclosure, and are used for setting forth the principles of the present disclosure together with the description. It should be noted that the accompanying drawings in the following description are some embodiments of the present disclosure only, and a person of ordinary skill in the art may obtain other accompanying drawings according to these accompanying drawings without making an inventive effort. In the drawings:
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The embodiments are described below by referring to the figures.
These and further aspects and features of the present disclosure will be apparent with reference to the following description and attached drawings. In the description and drawings, particular embodiments of the disclosure have been disclosed in detail as being indicative of some of the ways in which the principles of the disclosure may be employed, but it is understood that the disclosure is not limited correspondingly in scope. Rather, the disclosure includes all changes, modifications and equivalents coming within the spirit and terms of the appended claims.
Embodiment 1 of this application provides a method for detecting a traffic state, in which traffic surveillance images are analyzed, so as to detect a traffic state.
In this embodiment, the traffic state is determined according to the occupation ratio reflecting the number of the vehicles and the motion state of a vehicle, thereby obtaining more accurate traffic state information.
In this embodiment, the region of interest (ROI) of the traffic surveillance image may be a region of the image that best reflects the traffic state. For example, the region of interest may be a region of the image covering a relatively large number of lanes, etc.
In this embodiment, the size and position of the region of interest may be set in advance, or may be set separately for each frame of the traffic surveillance image. For multiple consecutive frames of traffic surveillance images, the positions and sizes of the regions of interest in each frame may be identical, and subsequent image processing may be performed only on the regions of interest, thereby lowering the amount of data to be processed and improving processing efficiency. However, this embodiment is not limited thereto, and the entire frame of a traffic surveillance image may be taken as the region of interest.
In step S101 of this embodiment, the contour image corresponding to the region of interest may be obtained, and the occupation ratio may be calculated according to the contour image.
In this embodiment, when the traffic surveillance image is a color image, edge extraction processing may be performed on the color components of the region of interest respectively so as to form edge detection images of the respective color components, and the edge detection images of the color components are combined so as to generate the contour image. For example, the color image may be an RGB color image or another type of color image. The edge extraction processing may employ, for example, a Sobel edge detection algorithm or other edge detection algorithms. The edge detection images may be combined, for example, by a simple summation of pixel values, or by other methods, such as weighted summation. The edge detection images and the contour image may be binarized images. For example, the edge detection images and the contour image may consist of black pixels and white pixels, with the white pixels denoting edges in the image. Of course, the edge detection images and the contour image may also take other forms.
In this embodiment, a method for forming the contour image is not limited thereto. For example, when the traffic surveillance image is a color image, the color image may also be transformed into a grayscale image, edge extraction processing is performed in a region of interest of the grayscale image so as to form an edge detection image of the grayscale image, and the edge detection image of the grayscale image is taken as the contour image. Or, when the traffic surveillance image is a grayscale image, edge extraction processing may be performed directly on the region of interest so as to form an edge detection image, and the edge detection image is taken as the contour image. Other methods may also be used to form the contour image.
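As an illustrative, non-limiting sketch of the contour-image generation described above, the following pure-Python code applies a 3×3 Sobel operator to each color channel of the region of interest and combines the per-channel binarized edge maps; the function names and the binarization threshold are assumptions made for illustration only and are not part of the disclosed method:

```python
def sobel_edges(gray, thresh=128):
    """Binarized Sobel edge map of a 2-D grayscale grid (lists of ints)."""
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical 3x3 Sobel responses.
            gx = (gray[y-1][x+1] + 2*gray[y][x+1] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y][x-1] - gray[y+1][x-1])
            gy = (gray[y+1][x-1] + 2*gray[y+1][x] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y-1][x] - gray[y-1][x+1])
            # Binarize the gradient magnitude (L1 approximation).
            if abs(gx) + abs(gy) > thresh:
                edges[y][x] = 1
    return edges

def contour_image(channels, thresh=128):
    """Combine per-channel edge maps; pixel-wise OR stands in for the
    summation-and-binarization described in the text."""
    maps = [sobel_edges(c, thresh) for c in channels]
    h, w = len(maps[0]), len(maps[0][0])
    return [[1 if any(m[y][x] for m in maps) else 0 for x in range(w)]
            for y in range(h)]
```

A weighted summation of the per-channel maps, as also mentioned above, would simply replace the OR with a weighted sum followed by a threshold.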
In this embodiment, after the contour image is obtained, a ratio of the number of pixels of the contour in the contour image to the total number of pixels in the region of interest is calculated, and the ratio is taken as the occupation ratio (ratioOccup). For example, the calculation may be performed by employing formula (1) below:

ratioOccup=numCon/(widthROI×heightROI)  (1)

where, numCon is the number of pixels of an object contour in a contour image, widthROI is the number of pixels of a region of interest in a direction of a width X, and heightROI is the number of pixels of a region of interest in a direction of a height Y.
In this embodiment, the occupation ratio (ratioOccup) may reflect the number of vehicles in a region of interest, and the higher the occupation ratio, the larger the number of vehicles in the region of interest.
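The occupation ratio calculation in step S101 can be sketched directly from the definitions of numCon, widthROI, and heightROI; the function name below is an illustrative assumption:

```python
def occupation_ratio(contour, width_roi, height_roi):
    """ratioOccup = numCon / (widthROI * heightROI), where the contour
    image is a binarized grid with 1 denoting a contour pixel."""
    num_con = sum(sum(row) for row in contour)  # numCon
    return num_con / (width_roi * height_roi)
```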
In step S102 of this embodiment, whether there exists an object of a predetermined size moving in the region of interest of the traffic surveillance image may be detected, so as to determine the motion state of the vehicle in the region of interest.
In step S401 of this embodiment, the moving object in the image may be detected by using a movement detection method, so as to obtain the foreground image. For example, a differential operation may be performed on the traffic surveillance image and a background image, so as to detect the foreground image of the region of interest, an object in the foreground image representing the moving object. Of course, this embodiment is not limited thereto, and other methods may also be employed to obtain the foreground image.
In step S402 of this embodiment, the predetermined size may correspond to the size of a vehicle in the foreground image. Hence, if an object of the predetermined size exists in the foreground image, it may be deemed very possible that a moving vehicle exists in the region of interest of the traffic surveillance image. And if the size of the object in the foreground image is less than the predetermined size, it may be deemed that the moving object in the region of interest of the traffic surveillance image is not a vehicle, but may possibly be a pedestrian or a bicycle, etc., or may be a detection error.
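Steps S401 and S402 can be sketched as follows, assuming the differential operation against a background image mentioned above; the pixel-count size test is a deliberate simplification (the disclosure's auxiliary-line judgment, or a connected-component analysis, would be used in practice), and the function names and thresholds are illustrative:

```python
def foreground_mask(frame, background, diff_thresh=30):
    """Step S401 sketch: pixel-wise absolute difference between the
    current frame and a background image, binarized by a threshold."""
    return [[1 if abs(f - b) > diff_thresh else 0
             for f, b in zip(f_row, b_row)]
            for f_row, b_row in zip(frame, background)]

def has_vehicle_sized_object(mask, min_pixels):
    """Step S402 sketch: crude stand-in for the predetermined-size test,
    counting foreground pixels against an assumed vehicle size."""
    return sum(sum(row) for row in mask) >= min_pixels
```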
In this embodiment, auxiliary lines may be set in the foreground image, which are used to judge whether there exists an object of a predetermined size in the foreground image.
In
In this embodiment, when it is judged that there exists no object of a predetermined size in the foreground image, it may be deemed that there exists no object of a predetermined size moving in the region of interest, thereby determining that a motion state of a vehicle in the region of interest of the current traffic surveillance image is a static state.
In this embodiment, when it is judged that there exists an object of the predetermined size in the foreground image, it may be deemed that there exists an object of the predetermined size moving in the region of interest of the current traffic surveillance image. In this case, in order to avoid misjudgment, it may further be judged whether there exist objects of the predetermined size in the regions of interest of a first predetermined number of frames preceding the current frame; if yes, it is determined that the vehicle in the region of interest of the traffic surveillance image of the current frame is in a state of movement; otherwise, it is determined that the vehicle in the region of interest of the traffic surveillance image of the current frame is in a static state. For example, let the traffic surveillance image of the current frame be the N-th frame; it may further be judged whether there exist objects of the predetermined size in all the regions of interest of the (N−numfusion+1)-th to (N−1)-th frames of traffic surveillance images; if yes, it is determined that the vehicle in the region of interest of the traffic surveillance image of the current frame is in a state of movement; otherwise, it is determined that the vehicle in the region of interest of the traffic surveillance image of the current frame is in a static state; wherein, the first predetermined number is numfusion−1, and N>(numfusion−1).
In step S103 of this embodiment, the traffic state may be determined according to the occupation ratio calculated in step S101 and the motion state determined in step S102.
In step S601 of this embodiment, the threshold value may include a first threshold value (threStat) and a second threshold value (threMov), which correspond respectively to different motion states. For example, when the vehicle in the region of interest of a frame of the traffic surveillance image is in a static state, the occupation ratio to which the frame corresponds is compared with the first threshold value (threStat), and when the vehicle in the region of interest is in a state of movement, the occupation ratio to which the frame corresponds is compared with the second threshold value (threMov). In this embodiment, the first parameters may be 1 or 0. Of course, this embodiment is not limited thereto, and the first parameters may also take other values.
For example, in step S601, let the traffic surveillance image of the current frame be the N-th frame and the second predetermined number be numFusCla−1. When the vehicle in the region of interest of the n-th frame (n is an integer, and (N−numFusCla+1)≦n≦N) of the traffic surveillance image is in a static state, if the occupation ratio (ratioOccup) is greater than the first threshold value (threStat), the first parameter (labelFram(n)) to which the frame corresponds may be set to 1; otherwise, the first parameter (labelFram(n)) to which the frame corresponds may be set to 0. When the vehicle in the region of interest of the n-th frame is in a state of movement, if the occupation ratio is greater than the second threshold value (threMov), the first parameter (labelFram(n)) to which the frame corresponds may be set to 1; otherwise, the first parameter (labelFram(n)) to which the n-th frame corresponds may be set to 0.
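The per-frame first parameter labelFram(n) of step S601 can be sketched as follows; the function name and the string motion-state labels are illustrative assumptions:

```python
def frame_label(ratio_occup, motion_state, thre_stat, thre_mov):
    """labelFram(n): compare the occupation ratio with the threshold
    matching the frame's motion state (threStat when static, threMov
    when moving); return 1 if exceeded, else 0."""
    thresh = thre_stat if motion_state == "static" else thre_mov
    return 1 if ratio_occup > thresh else 0
```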
In step S602, summation may be performed on the first parameters of the traffic surveillance images of the current frame and the second predetermined number of frames preceding the current frame, the result of the operation is compared with the second parameter, and the traffic state to which the current frame corresponds is determined according to a comparison result. In this embodiment, the summation is not limited to be performed on the first parameters, and other operations, such as an operation based on a voting algorithm, etc., may also be performed. In this embodiment, the second parameter may be a parameter related to the number of the current frame and the second predetermined number of frames.
For example, in step S602, the formula for performing the summation may be:

labelFram(N−numFusCla+1)+labelFram(N−numFusCla+2)+ . . . +labelFram(N)

the number of frames consisting of the current frame and the second predetermined number (numFusCla−1) of frames preceding it is numFusCla, and the second parameter may be numFusCla·ratioFus, where ratioFus may be a value between 0 and 1. When the vehicle in the region of interest of the current frame is in a static state, if the result of the above summation is greater than the second parameter, it is determined that the traffic state to which the traffic surveillance image of the current frame corresponds is jammed; otherwise, it is determined that the traffic state to which the traffic surveillance image of the current frame corresponds is smooth flow. When the vehicle in the region of interest of the current frame is in a state of movement, if the result of the above summation is greater than the second parameter, it is determined that the traffic state to which the traffic surveillance image of the current frame corresponds is heavy flow; otherwise, it is determined that the traffic state to which the traffic surveillance image of the current frame corresponds is smooth flow.
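The final decision of step S602 can be sketched as follows, taking the first parameters of the last numFusCla frames and the current motion state; the function name and the string labels for the three traffic states are illustrative assumptions:

```python
def traffic_state(labels, motion_state_now, ratio_fus):
    """Sum labelFram over the current frame and the preceding
    numFusCla-1 frames, then compare with the second parameter
    numFusCla * ratioFus to pick the traffic state."""
    second_param = len(labels) * ratio_fus
    if sum(labels) > second_param:
        return "jammed" if motion_state_now == "static" else "heavy flow"
    return "smooth flow"
```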
With this embodiment, the traffic state is determined according to the occupation ratio reflecting the number of the vehicles and the motion state of a vehicle, thereby obtaining more accurate traffic state information.
Embodiment 2 of this application provides an apparatus for detecting a traffic state, corresponding to the method for detecting a traffic state of Embodiment 1.
As shown in
The first generating unit 901, the second generating unit 902 and the second calculating unit 903 shall be described below, respectively.
When the traffic surveillance image is a color image, the first generating unit 901 may perform edge extraction processing on color components of the region of interest respectively so as to form edge detection images of the color components respectively, and combine the edge detection images of the color components, so as to generate the contour image; or, the first generating unit 901 may transform the color image into a grayscale image, perform edge extraction processing in the region of interest so as to form an edge detection image of the grayscale image, and take the edge detection image of the grayscale image as the contour image.
When the traffic surveillance image is a grayscale image, the second generating unit 902 may perform edge extraction processing on the region of interest so as to form an edge detection image, and take the edge detection image as the contour image. And the second calculating unit 903 is configured to calculate a ratio of the number of pixels of the contour in the contour image to a total number of pixels in the region of interest, and take the ratio as the occupation ratio.
In this embodiment, the first detecting unit 102 detects a foreground image of the region of interest of the traffic surveillance image, and judges whether there exists the object of the predetermined size in the foreground image. And when there exists the object of the predetermined size moving in the region of interest of the traffic surveillance image of a current frame, the first detecting unit 102 may judge whether there exists the object of the predetermined size moving in regions of interest in a first predetermined number of frames preceding the current frame, and if yes, determines that the vehicle in the region of interest of the traffic surveillance image of the current frame is in a state of movement, otherwise, determines that the vehicle in the region of interest of the traffic surveillance image of the current frame is in a static state.
Furthermore, in this embodiment, when there exists no object of the predetermined size moving in the region of interest of the traffic surveillance image of the current frame, the first detecting unit 102 is able to determine that the vehicle in the region of interest of the traffic surveillance image of the current frame is in a static state.
In this embodiment, the first determining unit 103 calculates first parameters of the traffic surveillance images of the current frame and a second predetermined number of frames preceding the current frame, according to the motion states of vehicles in the regions of interest of the current frame and the second predetermined number of frames preceding the current frame and a relationship between the occupation ratio and a threshold value; then performs an operation on the first parameters of the current frame and the second predetermined number of frames preceding the current frame, and determines, according to a relationship between a result of the operation and a second parameter, that the traffic state to which the traffic surveillance image of the current frame corresponds is jammed, smooth flow, or heavy flow.
In this embodiment, the description of corresponding steps in Embodiment 1 may be referred to for detailed description of the units of the apparatus 800 for detecting a traffic state, which shall not be described herein any further.
With this embodiment, the traffic state is determined according to the occupation ratio reflecting the number of the vehicles and the motion state of a vehicle, thereby obtaining more accurate traffic state information.
Embodiment 3 of this application provides electronic equipment, including the units of the apparatus as described in Embodiment 2.
In an implementation, the functions of the apparatus 800 for detecting a traffic state may be integrated into the central processing unit 1001. The central processing unit 1001 may be configured to carry out the method for detecting a traffic state as described in Embodiment 1, i.e., the central processing unit 1001 may be configured to:
In another implementation, the apparatus 800 for detecting a traffic state and the central processing unit may be configured separately. For example, the apparatus 800 for detecting a traffic state may be configured as a chip connected to the central processing unit 1001, with its functions being realized under control of the central processing unit.
Furthermore, as shown in
An embodiment of the present disclosure provides a computer-readable program, wherein when the program is executed in electronic equipment, the program enables the computer to carry out the method for detecting a traffic state as described in Embodiment 1 in the electronic equipment.
An embodiment of the present disclosure further provides a non-transitory storage medium in which a computer-readable program is stored, wherein the computer-readable program enables the computer to carry out the method for detecting a traffic state as described in Embodiment 1 in electronic equipment.
The above apparatuses and methods of the present disclosure may be implemented by hardware, or by hardware in combination with software. The present disclosure relates to such a computer-readable program that when the program is executed by a logic device, the logic device is enabled to carry out the apparatus or components as described above, or to carry out the methods or steps as described above. The present disclosure also relates to a storage medium for storing the above program, such as a hard disk, a floppy disk, a CD, a DVD, and a flash memory, etc.
One or more functional blocks and/or one or more combinations of the functional blocks in the accompanying drawings may be realized as a universal processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware component or any appropriate combinations thereof. And they may also be realized as a combination of computing equipment, such as a combination of a DSP and a microprocessor, multiple processors, one or more microprocessors in communication combination with a DSP, or any other such configuration.
The present disclosure is described above with reference to particular embodiments. However, it should be understood by those skilled in the art that such a description is illustrative only, and not intended to limit the protection scope of the present disclosure. Various variants and modifications may be made by those skilled in the art according to the spirits and principle of the present disclosure, and such variants and modifications fall within the scope of the present disclosure.
For implementations of the present disclosure containing the above embodiments, following supplements are further disclosed.
Supplement 1. An apparatus for detecting a traffic state, including:
a first calculating unit configured to detect a contour of an object from a region of interest of a traffic surveillance image to generate a contour image, and calculate an occupation ratio of the contour in the region of interest based on the contour image;
Supplement 2. The apparatus for detecting a traffic state according to supplement 1, wherein the first calculating unit includes a first generating unit, when the traffic surveillance image is a color image, the first generating unit being configured to:
Supplement 3. The apparatus for detecting a traffic state according to supplement 1, wherein the first calculating unit includes:
Supplement 4. The apparatus for detecting a traffic state according to supplement 1, wherein the first calculating unit includes:
Supplement 5. The apparatus for detecting a traffic state according to supplement 1, wherein,
Supplement 6. The apparatus for detecting a traffic state according to supplement 1, wherein,
Supplement 7. The apparatus for detecting a traffic state according to supplement 1, wherein,
Supplement 8. The apparatus for detecting a traffic state according to supplement 1, wherein the first determining unit:
Supplement 9. The apparatus for detecting a traffic state according to supplement 8, wherein,
Supplement 10. A method for detecting a traffic state, including:
Supplement 11. The method for detecting a traffic state according to supplement 10, wherein when the traffic surveillance image is a color image, the detecting a contour of an object to generate a contour image includes:
Supplement 12. The method for detecting a traffic state according to supplement 10, wherein when the traffic surveillance image is a grayscale image, the detecting a contour of an object to generate a contour image includes:
Supplement 13. The method for detecting a traffic state according to supplement 10, wherein the calculating an occupation ratio based on the contour image includes:
Supplement 14. The method for detecting a traffic state according to supplement 10, wherein the detecting whether there exists an object of a predetermined size moving in the region of interest includes:
Supplement 15. The method for detecting a traffic state according to supplement 10, wherein the determining a motion state of a vehicle in the region of interest includes:
Supplement 16. The method for detecting a traffic state according to supplement 10, wherein the determining a motion state of a vehicle in the region of interest includes:
Supplement 17. The method for detecting a traffic state according to supplement 10, wherein the determining a traffic state according to the occupation ratio and the motion state includes:
Supplement 18. The method for detecting a traffic state according to supplement 17, wherein,
Supplement 19. Electronic equipment, including the apparatus for detecting a traffic state as described in any one of supplements 1-9.
Although a few embodiments have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the embodiments, the scope of which is defined in the claims and their equivalents.
Number | Date | Country | Kind |
---|---|---|---
201510660793.4 | Oct 2015 | CN | national |