The present application relates to the field of image processing technology, in particular, to a shadow detection method for monitoring video images, a shadow detection system for monitoring video images, and a shadow removal method for monitoring video images using the shadow detection method for monitoring video images.
Monitoring systems are among the most widely used systems in security applications. For monitoring technology, shadows in a monitored scene (comprising the shadow of a monitored target, the shadows of other background objects, etc.) have always been important factors that interfere with the monitoring and detection of monitored targets. Especially under lighting conditions, the shadow projected by a monitored target in motion always accompanies the monitored target itself; that is, the projected shadow has motion properties similar to those of the monitored target, and both the projected shadow and the monitored target differ from the corresponding background area to a great extent, so the projected shadow can easily be detected together with the monitored target in motion.
If the shadow is mistakenly detected as part of a monitored target, it can easily cause adhesion, fusion, and distortion of the geometric attributes of the monitored target. Therefore, how to detect a moving target in a monitoring video scene while eliminating the interference of the projected shadow and ensuring the integrity of the monitored target as much as possible is of great significance to intelligent video analysis.
In view of the deficiency in the prior art, the objective of the present application is to provide a shadow detection method for monitoring video images, a shadow detection system for monitoring video images, and a shadow removal method for monitoring video images using the shadow detection method for monitoring video images. The shadow detection method for monitoring video images, the shadow detection system, and the shadow removal method for monitoring video images can effectively detect and remove shadows, and minimize the impact of shadows on the integrity of a monitored target.
In one aspect according to the present application, a shadow detection method for monitoring video images is provided, which comprises the following steps: S10, acquiring a current frame and a background frame from source data; S20, acquiring, from the current frame, first candidate shadow areas with brightness smaller than that of corresponding areas of the background frame; S30, computing a local-ternary-pattern shadow detection value of each of the first candidate shadow areas, and selecting first candidate shadow areas with the local-ternary-pattern shadow detection value greater than a first threshold as second candidate shadow areas; S40, computing a hue detection value, a saturation detection value and a gradient detection value of each of the second candidate shadow areas; S50, estimating a corresponding local-ternary-pattern shadow threshold, a corresponding hue threshold, a corresponding saturation threshold and a corresponding gradient threshold according to the computed local-ternary-pattern shadow detection values, hue detection values, saturation detection values and gradient detection values of the second candidate shadow areas; S60, computing a local-ternary-pattern shadow detection value, a hue detection value, a saturation detection value and a gradient detection value of each of the first candidate shadow areas; and S70, selecting first candidate shadow areas whose local-ternary-pattern shadow detection value, hue detection value, saturation detection value and gradient detection value all fall within the ranges of the local-ternary-pattern shadow threshold, the hue threshold, the saturation threshold and the gradient threshold as shadow areas.
In another aspect according to the present application, a shadow removal method for monitoring video images is further provided, which at least comprises the following steps for realizing the shadow detection method for monitoring video images: S10, acquiring a current frame and a background frame from source data; S20, acquiring, from the current frame, first candidate shadow areas with brightness smaller than that of corresponding areas of the background frame; S30, computing local-ternary-pattern shadow detection values of all of the first candidate shadow areas, and selecting first candidate shadow areas with the local-ternary-pattern shadow detection values greater than a first threshold as second candidate shadow areas; S40, computing a hue detection value, a saturation detection value and a gradient detection value of each of the second candidate shadow areas; S50, estimating a corresponding local-ternary-pattern shadow threshold, a corresponding hue threshold, a corresponding saturation threshold and a corresponding gradient threshold according to the computed local-ternary-pattern shadow detection values, hue detection values, saturation detection values and gradient detection values of the second candidate shadow areas; S60, computing a local-ternary-pattern shadow detection value, a hue detection value, a saturation detection value and a gradient detection value of each of the first candidate shadow areas; and S70, selecting first candidate shadow areas whose local-ternary-pattern shadow detection value, hue detection value, saturation detection value and gradient detection value all fall within the ranges of the local-ternary-pattern shadow threshold, the hue threshold, the saturation threshold and the gradient threshold as shadow areas.
According to another aspect of the present application, a shadow detection system for monitoring video images is further provided, which comprises: an extraction module, for acquiring a current frame, a background frame or a foreground frame from source data; a first candidate shadow area acquisition module, for acquiring, from the current frame, first candidate shadow areas with brightness smaller than that of corresponding areas of the background frame; a second candidate shadow area acquisition module, for computing local-ternary-pattern shadow detection values of all of the first candidate shadow areas, and selecting first candidate shadow areas with the local-ternary-pattern shadow detection values greater than a first threshold as second candidate shadow areas; a first computation module, for computing a hue detection value, a saturation detection value and a gradient detection value of each of the second candidate shadow areas; a threshold estimation module, for estimating a corresponding local-ternary-pattern shadow threshold, a corresponding hue threshold, a corresponding saturation threshold and a corresponding gradient threshold according to the computed local-ternary-pattern shadow detection values, hue detection values, saturation detection values and gradient detection values of the second candidate shadow areas; a second computation module, for computing a local-ternary-pattern shadow detection value, a hue detection value, a saturation detection value and a gradient detection value of each of the first candidate shadow areas; and a shadow area selection module, for selecting first candidate shadow areas whose local-ternary-pattern shadow detection value, hue detection value, saturation detection value and gradient detection value all fall within the ranges of the local-ternary-pattern shadow threshold, the hue threshold, the saturation threshold and the gradient threshold as shadow areas.
Compared with the prior art, in the shadow detection method for monitoring video images, the shadow detection system for monitoring video images and the shadow removal method for monitoring video images using the same shadow detection method provided by embodiments of the present application, first candidate shadow areas (rough candidate shadow areas) are firstly acquired, and a small number of true second candidate shadow areas are extracted from the first candidate shadow areas for estimating the threshold parameters of the three subsequent shadow detectors. Based on the principle of texture consistency and chrominance constancy between a shadow area and the corresponding background area, the three shadow detectors are used in parallel to extract relatively accurate shadow areas from the first candidate shadow areas, and all of these more accurate shadow areas are then jointly screened to obtain still more accurate shadow areas. Therefore, the shadow detection method for monitoring video images of the present application achieves a significant detection effect when the shadow area of an acquired monitored target in motion is detected in most common indoor scenes, and can detect very accurate shadow areas. In addition, the algorithm embodied by the above processes can be applied as an independent module in monitoring scenes, combined with a background modelling or background difference algorithm; based on the real-time video frame (the current frame), the foreground frame and the background frame, the algorithm can be implemented and applied to reduce the impact of shadows on the integrity of the target to the maximum extent, so that the monitored target obtained after the shadow area is removed is more accurate and complete, which is more conducive to the monitoring of the monitored target.
By reading the detailed description of the non-limiting embodiments with reference to the following drawings, other features, purposes and advantages of the present application will become more apparent:
Exemplary implementations will now be described more fully with reference to the accompanying drawings. However, the exemplary implementations can be implemented in a variety of forms and should not be construed as limited to the implementations set forth herein; rather, these implementations are provided so that the present application will be comprehensive and complete and will fully convey the concept of the exemplary implementations to those skilled in the art. In the figures, the same reference numerals indicate the same or similar structures, so repeated description thereof will be omitted.
According to the main concept of the present application, the shadow detection method for monitoring video images of the present application comprises the following steps: acquiring a current frame and a background frame from source data; acquiring, from the current frame, first candidate shadow areas with brightness smaller than that of corresponding areas of the background frame; computing local-ternary-pattern shadow detection values of all of the first candidate shadow areas, and selecting first candidate shadow areas with the local-ternary-pattern shadow detection values greater than a first threshold as second candidate shadow areas; computing a hue detection value, a saturation detection value and a gradient detection value of each of the second candidate shadow areas; estimating a corresponding local-ternary-pattern shadow threshold, a corresponding hue threshold, a corresponding saturation threshold and a corresponding gradient threshold according to the computed local-ternary-pattern shadow detection values, hue detection values, saturation detection values and gradient detection values of the second candidate shadow areas; computing a local-ternary-pattern shadow detection value, a hue detection value, a saturation detection value and a gradient detection value of each of the first candidate shadow areas; and selecting first candidate shadow areas whose local-ternary-pattern shadow detection value, hue detection value, saturation detection value and gradient detection value all fall within the ranges of the local-ternary-pattern shadow threshold, the hue threshold, the saturation threshold and the gradient threshold as shadow areas.
The technical content of the present application will be described in the following in combination with the accompanying drawings and the embodiments.
Please refer to
Step S10: acquiring a current frame and a background frame from source data. Here, the source data refers to the original image or video data acquired by a monitoring device; the current frame refers to the current image collected in real time; and the background frame is a background image, without monitored targets, extracted from a monitoring screen or video by means of a background modeling or background difference algorithm, etc. Further, in a preferred embodiment of the present application, step S10 further includes acquiring a foreground frame from the source data at the same time, wherein the foreground frame refers to a monitored image recorded at a time earlier than that of the current frame during the operation of the monitoring device.
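For illustration only, a minimal Python sketch of step S10 is given below. It assumes OpenCV's MOG2 background model as one possible way to obtain the background frame and a buffered earlier frame as the foreground frame; the present application does not mandate these choices, and the file name and variable names are illustrative.

```python
# Sketch of step S10 (assumed OpenCV-based implementation; not mandated by the text).
import cv2

cap = cv2.VideoCapture("monitor.mp4")              # hypothetical source data
bg_model = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

prev_frame = None
while True:
    ok, current_frame = cap.read()                 # the current frame, collected in real time
    if not ok:
        break
    bg_model.apply(current_frame)                  # update the background model
    background_frame = bg_model.getBackgroundImage()   # background image without monitored targets
    foreground_frame = prev_frame                  # an image recorded earlier than the current frame
    prev_frame = current_frame.copy()
    if foreground_frame is None:
        continue                                   # need at least one earlier frame
    # ...steps S20-S70 would operate on current_frame, background_frame and foreground_frame here
```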
Step S20: acquiring, from the current frame, first candidate shadow areas with brightness smaller than that of corresponding areas of the background frame. Specifically, this step is mainly based on the assumption that shadow areas are darker than the corresponding background areas, which holds in most cases, so rough candidate shadow areas (that is, the above first candidate shadow areas) can be extracted under this assumption; the brightness of each acquired first candidate shadow area is therefore smaller than that of the corresponding area in the background frame. It should be noted that the background frame is an image without monitored targets, that is, the areas of the current frame other than the monitored targets and the shadow areas are the same as those in the background frame, so the first candidate shadow areas in the current frame are at essentially the same positions as the corresponding areas in the background frame.
Further, because the shadow areas may be affected by noise, the first candidate shadow areas actually acquired in step S20 include most of the real shadow areas together with monitored targets that are falsely detected as shadow areas. If only the darkness assumption is used for this judgment, the area falsely detected as shadow will be large. Furthermore, in the embodiment of the present application, when performing statistical analysis on the monitored target and the shadow area, the inventor found that the ratio of the spectral frequencies of the color channels of a shadow area in the red, green and blue (RGB) color space differs only slightly from that of the corresponding background area, whereas the ratio of the spectral frequencies of the color channels of a monitored target differs greatly from that of the corresponding background area. This feature helps distinguish most of the monitored targets that are falsely detected as shadow areas from the detected candidate shadow areas. Therefore, referring to
Step S201: computing brightness of each area in the current frame and the background frame, and selecting an area in the current frame with the brightness smaller than that of a corresponding area in the background frame as a first area.
Step S202: computing three first ratios of spectral frequency respectively in red, green and blue channels of the first area to that of a second area corresponding to the first area in the background frame, as well as three second ratios of spectral frequency respectively in red, green and blue channels of a third area corresponding to the first area in the foreground frame to that of the second area, wherein, the first area, the second area and the third area are essentially the same area in the image.
Specifically, in step S202, the three first ratios are computed as Ψr = Cr/Br, Ψg = Cg/Bg, and Ψb = Cb/Bb,
wherein, Ψr is the first ratio of the spectral frequency in the red channel, Ψg is the first ratio of the spectral frequency in the green channel, and Ψb is the first ratio of the spectral frequency in the blue channel; Cr is the spectral frequency of the red channel in the current frame, Cg is the spectral frequency of the green channel in the current frame, and Cb is the spectral frequency of the blue channel in the current frame; Br is the spectral frequency of the red channel in the background frame, Bg is the spectral frequency of the green channel in the background frame, and Bb is the spectral frequency of the blue channel in the background frame.
Correspondingly, for the foreground frame, the three second ratios of the spectral frequencies in the red, green and blue channels of the third area corresponding to the first area to those of the second area are computed in the same way as the first ratios, with the only difference that the parameters corresponding to the current frame are substituted with those of the foreground frame while the related parameters of the background frame are retained. For example, Cr is replaced with the spectral frequency of the foreground frame in the red channel. The other parameters of the current frame are replaced similarly, and this will not be repeated here.
Step S203: selecting a first area with a difference between the first ratio and the second ratio smaller than the second threshold as the first candidate shadow area, wherein, the second threshold may be set and adjusted according to actual demands.
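A minimal sketch of steps S201 to S203 follows, assuming that the mean gray level of a region stands in for its brightness and that the per-channel mean value of a region stands in for its spectral frequency; the text does not fix these estimators, and the threshold value used here is arbitrary.

```python
# Sketch of steps S201-S203 for a single region (estimators and threshold are assumptions).
import numpy as np

def channel_ratios(region_a, region_b, eps=1e-6):
    """Per-channel ratio of the 'spectral frequency' (here: channel mean) of region_a to region_b."""
    a = region_a.reshape(-1, 3).mean(axis=0)
    b = region_b.reshape(-1, 3).mean(axis=0)
    return a / (b + eps)                           # one ratio per color channel

def is_first_candidate(cur_region, bg_region, fg_region, second_threshold=0.15):
    darker = cur_region.mean() < bg_region.mean()                       # S201: darker than background
    first_ratios = channel_ratios(cur_region, bg_region)                # S202: current vs. background
    second_ratios = channel_ratios(fg_region, bg_region)                #        foreground vs. background
    stable = np.all(np.abs(first_ratios - second_ratios) < second_threshold)  # S203
    return darker and stable
```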
Step S30: computing a local-ternary-pattern shadow detection value of each of the first candidate shadow areas, and selecting first candidate shadow areas with the local-ternary-pattern shadow detection value greater than a first threshold as second candidate shadow areas. Specifically, the present application mainly uses three shadow detectors to detect the shadow areas. Each shadow detector has a corresponding parameter threshold, but because the scene in a monitoring video is changeable, the application of the algorithm would be limited if a group of parameter thresholds had to be set for each scene, so it is necessary to predict relatively accurate parameter thresholds in advance. Furthermore, on the basis of the first candidate shadow areas acquired in the above step S20, the present application uses an improved local-ternary-pattern detector (hereinafter referred to as the ILTP detector) to screen all of the selected first candidate shadow areas and select accurate shadow areas (that is, shadow areas meeting a high detection standard, the selected areas being basically the final shadow areas), and the threshold parameters of the three shadow detectors (a hue detector, a saturation detector and a gradient detector) used for the detection of the other first candidate shadow areas are estimated based on these accurate shadow areas. It should be noted that in this step the ILTP detector is chosen because it has higher accuracy and less target interference than the hue-saturation (HS) detector and the gradient detector in the detection of shadow areas.
Further, referring to
Step S301: computing a local-ternary-pattern computation value of all pixels in the first candidate shadow area or the second candidate shadow area in the current frame. Specifically, in the above step S30 of the present application, the local-ternary-pattern computation value (ILTP computation value) is computed for the pixels in the first candidate shadow areas.
Step S302: computing the local-ternary-pattern computation value of each corresponding pixel at the same position in the background frame.
Step S303: computing the number of pixels in the first candidate shadow areas or the second candidate shadow areas in the current frame that have the same local-ternary-pattern computation value as the corresponding pixels in the background frame, and using this number of pixels as the local-ternary-pattern shadow detection value. Specifically, in this step, the ILTP computation values computed in the above step S301 and step S302 are compared pixel by pixel; if the ILTP computation value of a certain pixel of the current frame in step S301 is the same as the ILTP computation value of the corresponding pixel (that is, the pixel at the same position) in step S302, then that pixel is counted as 1 pixel. All pixels in the first candidate shadow area are processed in the same way, and the pixels that meet the above condition are accumulated, so as to acquire the local-ternary-pattern shadow detection value.
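As an illustration of step S303, the following sketch counts matching improved-LTP codes inside one candidate area; the per-pixel integer code maps are assumed to have been built as sketched after step S3006 below, and all names are illustrative.

```python
# Sketch of step S303 (counting matching ILTP codes inside one candidate area).
import numpy as np

def iltp_shadow_detection_value(iltp_current, iltp_background, region_mask):
    """Number of pixels in the region whose ILTP code in the current frame equals the
    code of the pixel at the same position in the background frame."""
    same = (iltp_current == iltp_background) & region_mask
    return int(np.count_nonzero(same))
```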
Further, referring to
Step S3001: setting a noise tolerance value.
Step S3002: comparing the gray level of each adjacent pixel surrounding the pixel with the gray level of the pixel, so as to obtain one of the following three results, that is, only three possible computation values. Specifically, if the difference in gray level between an adjacent pixel and the pixel is smaller than the noise tolerance value, the value of the adjacent pixel is tagged as a first value; if the gray level of an adjacent pixel is greater than or equal to the sum of the gray level of the pixel and the noise tolerance value, the value of the adjacent pixel is tagged as a second value; and if the gray level of an adjacent pixel is smaller than or equal to the difference between the gray level of the pixel and the noise tolerance value, the value of the adjacent pixel is tagged as a third value.
Referring to
Step S3003: grouping the tagged values of all of the adjacent pixels into a first array in a first order. In the embodiment shown in
Step S3004: comparing the gray level of each of the adjacent pixels with that of the adjacent pixel furthest from it. If the difference in gray level between the two adjacent pixels is smaller than the noise tolerance value, the value formed is the first value; if the gray level of one of the adjacent pixels is greater than or equal to the sum of the gray level of the adjacent pixel furthest from it and the noise tolerance value, the value formed is the second value; and if the gray level of one of the adjacent pixels is smaller than or equal to the difference between the gray level of the adjacent pixel furthest from it and the noise tolerance value, the value formed is the third value. Specifically, in the prior art, the local-ternary-pattern computation value is computed by comparing the detected pixel only with the surrounding adjacent pixels, ignoring the correlation information between the adjacent pixels, which could otherwise enhance the expression ability of the local-ternary-pattern computation value. Therefore, in the present application, the correlation information between the adjacent pixels is also included to improve the expression ability of the existing local-ternary-pattern computation value and, further, to make the detected shadow areas more accurate. The comparison method in this step is the same as that in the above step S3002, with the difference that the pixels to be compared are different: in step S3004 the comparison is performed between pairs of adjacent pixels. In the embodiment as shown in
Step S3005: grouping all of the values formed into a second array in a second order. Specifically, in the embodiment shown in
Step S3006: adding up the first array and the second array to obtain the local-ternary-pattern computation value. In the embodiment shown in
Furthermore, the local-ternary-pattern computation values of the detected pixel in the current frame and of the corresponding pixel in the background frame are respectively computed, it is determined whether the local-ternary-pattern computation values of the above two pixels are the same, and the number of pixels for which the values are the same is computed (step S303). This number is the local-ternary-pattern shadow detection value of a first candidate shadow area finally acquired in step S30. A first candidate shadow area with a local-ternary-pattern shadow detection value greater than the first threshold will be used as a second candidate shadow area.
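One plausible reading of steps S3001 to S3006 is sketched below; the neighbour ordering, the pairing of each neighbour with the diametrically opposite one as "the furthest" adjacent pixel, and the packing of the two arrays into a single integer code are assumptions made for illustration, not details fixed by the text.

```python
# Sketch of the improved LTP code of one pixel (steps S3001-S3006, one plausible reading).

# The 8 neighbours of a 3x3 window, listed clockwise; the "first order" is any consistent order.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def ternary(a, b, tol):
    """Tagging rule of steps S3002/S3004: 0 if |a-b| < tol, 1 if a >= b + tol, 2 if a <= b - tol."""
    if abs(a - b) < tol:
        return 0
    return 1 if a >= b + tol else 2

def iltp_code(gray, y, x, tol=5):
    """Improved LTP code of the interior pixel at (y, x) of a 2-D gray-level image;
    tol is the noise tolerance value of step S3001."""
    center = int(gray[y][x])
    neigh = [int(gray[y + dy][x + dx]) for dy, dx in OFFSETS]
    first = [ternary(n, center, tol) for n in neigh]                          # S3002/S3003
    second = [ternary(neigh[i], neigh[(i + 4) % 8], tol) for i in range(8)]   # S3004/S3005
    code = 0
    for digit in first + second:                                              # S3006: combine both arrays
        code = code * 3 + digit                                               # packed as one base-3 integer
    return code
```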
It should be noted that
Step S40: computing a hue detection value, a saturation detection value and a gradient detection value of each of the second candidate shadow areas. Specifically, the hue detection value of a second candidate shadow area is the average value of the differences in hue value between all pixels in the second candidate shadow area and the corresponding pixels in the background frame; similarly, the saturation detection value of a second candidate shadow area is the average value of the differences in saturation value between all pixels in the second candidate shadow area and the corresponding pixels in the background frame.
Step S50: estimating a corresponding local-ternary-pattern shadow threshold, a corresponding hue threshold, a corresponding saturation threshold and a corresponding gradient threshold according to the computed local-ternary-pattern shadow detection values, hue detection values, saturation detection values and gradient detection values of the second candidate shadow areas. Specifically, according to the above step S30, the computation method of the present application includes the correlation information between the adjacent pixels and enhances the local-ternary-pattern expression ability, so the acquired second candidate shadow areas are very accurate and are basically the final shadow areas. Furthermore, the local-ternary-pattern shadow threshold, the hue threshold, the saturation threshold and the gradient threshold for detecting all first candidate shadow areas can be estimated according to the computed local-ternary-pattern shadow detection values, hue detection values, saturation detection values and gradient detection values of the second candidate shadow areas. The estimation can be performed by taking the average value of the local-ternary-pattern shadow detection values of all second candidate shadow areas as the local-ternary-pattern shadow threshold, taking the average values of the hue detection values and of the saturation detection values of all second candidate shadow areas as the hue threshold and the saturation threshold respectively, and taking the average value of the gradient detection values of all second candidate shadow areas as the gradient threshold. Alternatively, the above average values can be adjusted as the final thresholds according to actual demands, which will not be described in detail here.
Since the second candidate shadow areas are detected by using the improved local-ternary-pattern shadow detection value of the present application, the selected second candidate shadow areas are accurate and suffer little target interference, so the threshold parameters of each shadow detector used for judging all subsequent first candidate shadow areas will have better representativeness and accuracy.
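As an illustration of the averaging described in step S50, a minimal sketch follows; the dictionary field names are illustrative only, and the detection values are assumed to have already been computed for each second candidate shadow area in steps S30 and S40.

```python
# Sketch of step S50 (threshold estimation by averaging over the second candidate areas).
import numpy as np

def estimate_thresholds(second_candidate_areas):
    """second_candidate_areas: list of dicts holding the four detection values per area
    (field names are illustrative)."""
    return {
        "iltp":       np.mean([a["iltp"] for a in second_candidate_areas]),
        "hue":        np.mean([a["hue"] for a in second_candidate_areas]),
        "saturation": np.mean([a["saturation"] for a in second_candidate_areas]),
        "gradient":   np.mean([a["gradient"] for a in second_candidate_areas]),
    }  # the averages may be further adjusted according to actual demands
```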
Step S60: computing a local-ternary-pattern shadow detection value, a hue detection value, a saturation detection value and a gradient detection value of each of the first candidate shadow areas. In this step, the local-ternary-pattern shadow detection value, the hue detection value, the saturation detection value and the gradient detection value are computed in the same way as in the above step S30 and step S40.
Step S70: selecting first candidate shadow areas whose local-ternary-pattern shadow detection value, hue detection value, saturation detection value and gradient detection value all fall within the ranges of the local-ternary-pattern shadow threshold, the hue threshold, the saturation threshold and the gradient threshold as shadow areas. Specifically, in this step, the method in the above step S30 can be used to determine whether the local-ternary-pattern shadow detection value of a first candidate shadow area falls within the range of the local-ternary-pattern shadow threshold; it is only necessary to substitute the first threshold with the local-ternary-pattern shadow threshold estimated in step S50.
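The joint screening of step S70 can be sketched as a simple conjunction of the four detector decisions; the comparison directions follow step S30 and the descriptions of the hue/saturation and gradient detectors below, and the field names are illustrative.

```python
# Sketch of step S70 (an area is kept as a shadow area only if all four detectors agree).
def is_shadow_area(values, thresholds):
    """values: detection values of one first candidate area (step S60);
    thresholds: the thresholds estimated in step S50. Field names are illustrative."""
    return (values["iltp"] > thresholds["iltp"]               # more matching ILTP codes than the threshold
            and values["hue"] < thresholds["hue"]             # hue average difference below threshold
            and values["saturation"] < thresholds["saturation"]
            and values["gradient"] < thresholds["gradient"])
```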
Further, the hue and saturation detection method is as follows:
wherein, Cih is the hue value of pixels in the current frame, Bih is the hue value of pixels in the background frame, Cis is the saturation value of pixels in the current frame, Bis is the saturation value of pixels in the background frame, τh is the hue threshold, and τs is the saturation threshold;
the hue detection value and the saturation detection value of a first candidate shadow area have an output value of 1, that is, they fall within the ranges of the hue threshold and the saturation threshold, when the hue average value of the first candidate shadow area is smaller than the hue threshold and the saturation average value is smaller than the saturation threshold; otherwise, when the hue detection value and the saturation detection value of the first candidate shadow area exceed the ranges of the hue threshold and the saturation threshold, the output value is 0. The hue average value of a first candidate shadow area is the average value of the differences in hue value between all pixels in the first candidate shadow area and the corresponding pixels in the background frame; similarly, the saturation average value of a first candidate shadow area is the average value of the differences in saturation value between all pixels in the first candidate shadow area and the corresponding pixels in the background frame. Whether the hue and saturation detection values of a first candidate shadow area are within the ranges of the hue threshold and the saturation threshold can thus be determined according to whether the output value is 1 or 0. It should be noted that, compared with the computation and analysis on the H, S and V channels of the current frame and the background frame performed by traditional hue, saturation and value (HSV) detectors, the hue and saturation detection proposed by the present application removes the computation of the V channel, mainly uses the chrominance invariance jointly expressed by the H and S channels, and makes full use of the neighborhood information of the H and S channels (such as adjacent pixels). The hue threshold and the saturation threshold are computed according to the second candidate shadow areas, so they vary with the scene. Compared with using a single isolated pixel, the use of neighborhood information can reduce the interference caused by sudden light changes, reduce missed detections, and improve the accuracy of the detection.
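A minimal sketch of the hue/saturation detector for one first candidate area is given below; it assumes OpenCV's HSV conversion and an absolute-difference average, the exact formula of the present application is not reproduced here, and hue wrap-around is ignored for simplicity.

```python
# Sketch of the hue/saturation detector (assumed absolute-difference form; V channel not used).
import cv2
import numpy as np

def hs_detector(current_bgr, background_bgr, region_mask, tau_h, tau_s):
    cur_hsv = cv2.cvtColor(current_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    bg_hsv = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hue_avg = np.abs(cur_hsv[..., 0] - bg_hsv[..., 0])[region_mask].mean()   # hue average value
    sat_avg = np.abs(cur_hsv[..., 1] - bg_hsv[..., 1])[region_mask].mean()   # saturation average value
    return 1 if (hue_avg < tau_h and sat_avg < tau_s) else 0                 # output value 1 or 0
```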
Further, the gradient detection method is as follows:
wherein, ∇x is the horizontal gradient of the pixel, ∇y is the vertical gradient of the pixel, ∇ is the gradient of the pixel, and θ is the value of the angle; C(∇ij) is the gradient of a pixel in the current frame in a color channel, B(∇ij) is the gradient of the corresponding pixel in the background frame in the same color channel, and φm is the gradient threshold; C(θij) is the value of the angle of a pixel in the current frame in a color channel, B(θij) is the value of the angle of the corresponding pixel in the background frame in the same color channel, and φd is the angle threshold;
the gradient detection value of a first candidate shadow area has an output value of 1, that is, it is within the range of the gradient threshold, when the average value of the differences in gradient between all pixels in the current frame and the corresponding pixels in the background frame in the red, green and blue channels is smaller than the gradient threshold and the average value of the differences in angle between all pixels in the current frame and the corresponding pixels in the background frame in the red, green and blue channels is smaller than the angle threshold; otherwise, when the gradient detection value of the first candidate shadow area exceeds the range of the gradient threshold, the output value is 0. Whether the gradient detection value of a first candidate shadow area is within the range of the gradient threshold can thus be determined according to whether the output value is 1 or 0.
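A minimal sketch of the gradient detector for one first candidate area follows; Sobel derivatives are assumed for the horizontal and vertical gradients, and the magnitude and angle differences are averaged over the red, green and blue channels, the exact formula of the present application not being reproduced here.

```python
# Sketch of the gradient detector (Sobel derivatives are an assumed implementation choice).
import cv2
import numpy as np

def gradient_detector(current_bgr, background_bgr, region_mask, phi_m, phi_d):
    mag_diffs, ang_diffs = [], []
    for ch in range(3):                                                    # per color channel
        gx_c = cv2.Sobel(current_bgr[..., ch], cv2.CV_64F, 1, 0, ksize=3)
        gy_c = cv2.Sobel(current_bgr[..., ch], cv2.CV_64F, 0, 1, ksize=3)
        gx_b = cv2.Sobel(background_bgr[..., ch], cv2.CV_64F, 1, 0, ksize=3)
        gy_b = cv2.Sobel(background_bgr[..., ch], cv2.CV_64F, 0, 1, ksize=3)
        mag_c, ang_c = np.hypot(gx_c, gy_c), np.arctan2(gy_c, gx_c)        # gradient magnitude and angle
        mag_b, ang_b = np.hypot(gx_b, gy_b), np.arctan2(gy_b, gx_b)
        mag_diffs.append(np.abs(mag_c - mag_b)[region_mask].mean())
        ang_diffs.append(np.abs(ang_c - ang_b)[region_mask].mean())
    return 1 if (np.mean(mag_diffs) < phi_m and np.mean(ang_diffs) < phi_d) else 0
```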
In addition, the present application further provides a shadow removal method for monitoring video images, which at least comprises the shadow detection method for monitoring video images as shown in the above, and further comprises the following steps:
acquiring a foreground frame from the source data; and
removing the shadow area from the current frame via median filtering and void filling in combination with the foreground frame.
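A minimal sketch of the removal step is given below. It applies median filtering and a morphological closing (used here as a stand-in for void filling) to a foreground mask after discarding the detected shadow pixels; the text speaks of removing the shadow from the current frame in combination with the foreground frame, and operating on a foreground mask is only one possible reading, with the kernel sizes chosen arbitrarily.

```python
# Sketch of shadow removal (kernel sizes and the use of closing for void filling are assumptions).
import cv2

def remove_shadow(foreground_mask, shadow_mask, median_ksize=5):
    cleaned = foreground_mask.copy()
    cleaned[shadow_mask > 0] = 0                                         # discard detected shadow pixels
    cleaned = cv2.medianBlur(cleaned, median_ksize)                      # median filtering
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)         # fill small voids
    return cleaned
```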
By use of the above shadow detection method for monitoring video images as shown in
Further, the present application further provides a shadow detection system for monitoring video images, for realizing the above shadow detection method for monitoring video images. The shadow detection system for monitoring video images mainly comprises: an extraction module, a first candidate shadow area acquisition module, a second candidate shadow area acquisition module, a first computation module, a threshold estimation module, a second computation module and a shadow area selection module.
The extraction module is used for acquiring a current frame, a background frame or a foreground frame from source data.
The first candidate shadow area acquisition module is used for acquiring, from the current frame, first candidate shadow areas with brightness smaller than that of corresponding areas of the background frame.
The second candidate shadow area acquisition module is used for computing a local-ternary-pattern shadow detection value of all the first candidate shadow areas, and selecting first candidate shadow areas with the local-ternary-pattern shadow detection value greater than a first threshold as second candidate shadow areas.
The first computation module is used for computing a hue detection value, a saturation detection value and a gradient detection value of each of the second candidate shadow areas.
The threshold estimation module is used for estimating a corresponding local-ternary-pattern shadow threshold, a corresponding hue threshold, a corresponding saturation threshold and a corresponding gradient threshold according to the local-ternary-pattern shadow detection value, the hue detection value, the saturation detection value and the gradient detection value of the second candidate shadow areas computed.
The second computation module is used for computing a local-ternary-pattern shadow detection value, a hue detection value, a saturation detection value and a gradient detection value of each of the first candidate shadow areas.
The shadow area selection module is used for selecting first candidate shadow areas whose local-ternary-pattern shadow detection value, hue detection value, saturation detection value and gradient detection value all fall in a range of the local-ternary-pattern shadow threshold, the hue threshold, the saturation threshold and the gradient threshold as shadow areas.
In summary, in the shadow detection method for monitoring video images, the shadow detection system for monitoring video images and the shadow removal method for monitoring video images using the same shadow detection method provided in embodiments of the present application, first candidate shadow areas (rough candidate shadow areas) are first acquired, and a small number of true second candidate shadow areas are extracted from the first candidate shadow areas for estimating the threshold parameters of the three subsequent shadow detectors. Based on the principle of texture consistency and chrominance constancy between a shadow area and the corresponding background area, the three shadow detectors are used in parallel to extract more accurate shadow areas from the first candidate shadow areas, and all of these more accurate shadow areas are then jointly screened to obtain still more accurate shadow areas. Therefore, the shadow detection method for monitoring video images of the present application achieves a significant detection effect when detecting the shadow areas of an acquired monitored target in motion in most common indoor scenes, and can detect very accurate shadow areas. In addition, the algorithm can be applied as an independent module in a monitoring scene, combined with a background modelling or background difference algorithm; based on the real-time video frame (the current frame), the foreground frame and the background frame, the algorithm can be implemented and applied to reduce the impact of shadows on the integrity of a target to the maximum extent, so that the monitored target acquired after the shadow area is removed is more accurate and complete, which is more conducive to the monitoring of the monitored target.
Although the present application has been disclosed above with optional embodiments, they are not intended to limit the present application. Those skilled in the technical field to which the present application belongs can make various changes and modifications without departing from the spirit and scope of the present application. Therefore, the scope of protection of the present application shall be subject to the scope defined in the claims.
Number | Date | Country | Kind |
201710986529.9 | Oct 2017 | CN | national |
This application is a continuation of International Application No. PCT/CN2018/110701, filed on Oct. 17, 2018, which is based upon and claims priority to Chinese Patent Application No. 201710986529.9, filed on Oct. 20, 2017, the entire contents of which are incorporated herein by reference.
Number | Date | Country | |
Parent | PCT/CN2018/110701 | Oct 2018 | US |
Child | 16852597 | US |