This application claims priority to Chinese Patent Application No. 202210732669.4 with a filing date of Jun. 27, 2022. The content of the aforementioned application, including any intervening amendments thereto, is incorporated herein by reference.
The present disclosure belongs to the field of intelligent recognition technologies, and in particular, relates to a method for gait recognition based on visible light, infrared radiation and structured light.
Gait is a pattern of movements during walking and a complex behavioral characteristic. Because everyone's gait is different, gait can serve as a new type of biometric information for identifying individuals, and it differs greatly from other biometric information in how it is collected and processed. Current gait recognition technology typically relies on acquiring and analyzing a single visible light image stream. However, this conventional approach is restricted in situations such as poor lighting, limited sensor placement, and long distances, where individuals cannot be identified from visible light images alone. In view of these problems, there is an urgent need for a new gait recognition technology with better adaptability.
An objective of the present disclosure is to provide a method for gait recognition based on visible light, infrared radiation and structured light. By improving image processing methods and combining multiple sensors, the method substantially improves the robustness of the recognition algorithm and resolves the problem in existing recognition technology that individuals cannot be accurately identified under various extreme conditions.
The present disclosure achieves the above technical objective through the following technical solutions.
A method for gait recognition based on visible light, infrared radiation and structured light, includes the following steps:

step 1: obtaining raw data from a visible light sensor, an infrared sensor, and a structured light sensor, and preprocessing the raw data to obtain three types of image data with a consistent spatial mapping relationship;

step 2: encoding the visible light data into Y, U, and V channels, encoding the infrared data into a T channel, and encoding the structured light data into a depth channel;

step 3: using a two-dimensional Laplace transform on the depth channel to obtain a new value of each pixel;

step 4: performing convolution operations on the processed depth channel to obtain a gradient strength and a gradient direction of each pixel;

step 5: generating two mutually inverse feature convolution kernels from the depth-channel gradient, and convolving the Y, U, V, and T channels with the two kernels to obtain eight feature weight maps;

step 6: calculating a similarity between the eight values at a same pixel position in the eight feature weight maps;

step 7: setting similarity thresholds, and selecting a fusion rule for each pixel based on its similarity to obtain a fused image; and

step 8: extracting human head and skeleton information from the fused image, extracting gait features, and performing gait recognition.
Preferably, the new value of the pixel in step 3 is obtained according to the following formula:

∇²ƒ(x, y) = ∂²ƒ(x, y)/∂υ1² + ∂²ƒ(x, y)/∂υ2² + ∂²ƒ(x, y)/∂υ3² + ∂²ƒ(x, y)/∂υ4²

where ∇²ƒ(x, y) represents second-order partial derivative processing performed on the function ƒ(x, y); (x, y) represents the coordinates of the pixel, x is the abscissa, and y is the ordinate; ∂ represents the partial derivative symbol; ƒ represents ƒ(x, y); and υ1, υ2, υ3, and υ4 represent unit vectors in the four directions 0°, 90°, 180°, and 270°, respectively.
Preferably, the gradient strength and the gradient direction in step 4 are calculated according to the following formulas:

GD(x, y) = √(Gx² + Gy²)

θD(x, y) = arctan(Gy/Gx)

where Gx and Gy both represent convolution operators, whose responses at the pixel give the horizontal and vertical gradient components; GD(x, y) represents the gradient strength; and θD(x, y) represents the gradient direction.
Preferably, two mutually inverse feature convolution kernels G1(x,y) and G2(x,y) in step 5 are as follows:
where GD(x, y) represents the gradient strength and θD(x, y) represents the gradient direction of the depth channel obtained in step 4.
Preferably, the similarity in step 6 is calculated according to the following formula:
where S(x, y) represents the similarity at a pixel position with the abscissa of x and the ordinate of y; i represents a variable parameter, indicating a serial number of a feature weight map; Ci(x, y) represents a parameter of a pixel with the abscissa of x and the ordinate of y in an ith feature weight map.
Preferably, in step 7, similarity thresholds T1 and T2 are set, with T1<T2; when S(x, y)<T1, a fused image A(x, y)=Ci(x, y)max; when T1≤S(x, y)≤T2, the fused image A(x, y)=an average of the four Ci(x, y) with the greatest values; and when T2≤S(x, y), the fused image A(x, y) is calculated as follows:
where S(x, y) represents a similarity at a pixel position with the abscissa of x and the ordinate of y; Ci(x, y) represents a parameter of the pixel with the abscissa of x and the ordinate of y in an ith feature weight map; i represents a variable parameter, indicating the serial number of a feature weight map; and Ci(x, y)max represents the maximum value of Ci(x, y) over the eight feature weight maps at that pixel position.
Preferably, the preprocessing in step 1 includes intrinsic calibration, extrinsic calibration, cropping, and normalization.
The present disclosure has the following beneficial effects.
The present disclosure proposes a method for gait recognition based on visible light, infrared radiation and structured light. According to the method, image data acquired by three detection devices are fused, and gait recognition is performed based on the fused image. The method improves image processing and multi-sensor image fusion, effectively improves the robustness of the recognition algorithm, and can accurately identify individuals under various extreme conditions.
The present disclosure will be further described below in conjunction with the accompanying drawings and specific embodiments, but the protection scope of the present disclosure is not limited thereto.
A method for gait recognition based on visible light, infrared radiation and structured light according to the present disclosure is shown in the accompanying drawing and includes the following steps.
Step 1: Obtain three types of raw data from a visible light sensor, an infrared sensor, and a structured light sensor, where the three types of raw data include YUV channel data, infrared grayscale image data, and structured light image data.
Step 2: Perform intrinsic calibration, extrinsic calibration, cropping, and normalization on the raw data to obtain three types of image data with a consistent spatial mapping relationship, wherein the three types of image data comprise visible light data, infrared data and structured light data.
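The calibration stages depend on the specific sensors; the cropping and normalization stages can be sketched minimally as follows (the function name and crop-box convention are illustrative, not from the disclosure):

```python
import numpy as np

def preprocess(frame, crop_box, out_range=(0.0, 1.0)):
    """Crop a raw sensor frame to a shared region of interest and
    normalize its values; calibration is assumed done upstream."""
    x0, y0, x1, y1 = crop_box
    roi = frame[y0:y1, x0:x1].astype(np.float64)
    lo, hi = roi.min(), roi.max()
    if hi > lo:
        roi = (roi - lo) / (hi - lo)          # scale to [0, 1]
    lo_t, hi_t = out_range
    return roi * (hi_t - lo_t) + lo_t
```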
Step 3: Encode the visible light data processed in step 2 into Y, U, and V channels based on a YUV encoding space, encode the processed infrared data into a T channel, and encode the processed structured light data into a depth channel.
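Assuming the three preprocessed streams share one resolution, the resulting five-channel representation can be sketched as (the function name is illustrative):

```python
import numpy as np

def encode_channels(y, u, v, t, depth):
    """Stack five co-registered single-channel images into an
    H x W x 5 array: Y, U, V (visible light), T (infrared), D (depth)."""
    channels = [np.asarray(c, dtype=np.float64) for c in (y, u, v, t, depth)]
    if any(c.shape != channels[0].shape for c in channels):
        raise ValueError("all channels must share the same resolution")
    return np.stack(channels, axis=-1)
```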
Step 4: Use a two-dimensional Laplace transform to solve the isotropic second-order derivatives along the front, back, left, and right directions of each pixel in the depth channel, and add the second-order derivatives to obtain a new value of the pixel. Specifically:

∇²ƒ(x, y) = ∂²ƒ(x, y)/∂υ1² + ∂²ƒ(x, y)/∂υ2² + ∂²ƒ(x, y)/∂υ3² + ∂²ƒ(x, y)/∂υ4²

where ∇²ƒ(x, y) represents second-order partial derivative processing performed on the function ƒ(x, y); (x, y) represents the coordinates of the pixel, where x is the abscissa and y is the ordinate; ∂ represents the partial derivative symbol; ƒ represents ƒ(x, y); and υ1, υ2, υ3, and υ4 represent unit vectors in the four directions 0°, 90°, 180°, and 270°, respectively.
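In discrete form, the four directional second derivatives pair up into the classic 4-neighbour Laplacian; a minimal sketch over the depth channel (edge handling by replication is an assumption):

```python
import numpy as np

def laplacian_depth(d):
    """Sum of discrete second derivatives along the four directions
    0°, 90°, 180°, 270°, i.e. the 4-neighbour Laplacian kernel
    (edge pixels are replicated)."""
    d = np.asarray(d, dtype=np.float64)
    p = np.pad(d, 1, mode="edge")
    # the four directional second differences collapse into
    # (right + left + down + up) - 4 * centre
    return (p[1:-1, 2:] + p[1:-1, :-2] + p[2:, 1:-1] + p[:-2, 1:-1]
            - 4.0 * d)
```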
Step 5: For the depth channel processed by the Laplace transform in step 4, perform convolution operations using the two convolution operators Gx and Gy to determine a gradient vector for each pixel, consisting of a gradient strength GD(x, y) and a gradient direction θD(x, y):

GD(x, y) = √(Gx² + Gy²)

θD(x, y) = arctan(Gy/Gx)
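The disclosure does not specify the operators Gx and Gy explicitly; the Sobel pair is a common choice and is used here purely as an assumption:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def conv2(img, k):
    """3x3 'same' sliding-window filtering with edge replication
    (cross-correlation; convolution up to a kernel flip)."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def depth_gradient(d):
    """Gradient strength and direction of the depth channel."""
    d = np.asarray(d, dtype=np.float64)
    gx, gy = conv2(d, SOBEL_X), conv2(d, SOBEL_Y)
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```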
Step 6: Use the gradient vector obtained from the depth channel to generate two mutually inverse feature convolution kernels G1(x,y) and G2(x,y):
The two feature convolution kernels G1(x,y) and G2(x,y) are used as weights to perform convolution operations on the 3×3 pixel region centered at the corresponding pixel position (x, y) in each of the four channels Y, U, V, and T, so as to obtain eight feature weight maps.
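The explicit forms of G1 and G2 are not reproduced in this text, but the 4-channel × 2-kernel structure that yields the eight feature weight maps can still be sketched (the kernels passed in are placeholders):

```python
import numpy as np

def conv2(img, k):
    """3x3 'same' sliding-window filtering with edge replication."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def feature_weight_maps(y, u, v, t, g1, g2):
    """Filter each of the four channels with both kernels:
    4 channels x 2 kernels = 8 feature weight maps C1..C8."""
    maps = []
    for ch in (y, u, v, t):
        for k in (g1, g2):
            maps.append(conv2(np.asarray(ch, dtype=np.float64), k))
    return maps
```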
Step 7: Calculate a similarity among the eight values at a same pixel position in the eight feature weight maps as follows:
where S(x, y) represents the similarity at a pixel position with the abscissa of x and the ordinate of y; i represents a variable parameter, indicating a serial number of a feature weight map; Ci(x, y) represents a parameter of a pixel with the abscissa of x and the ordinate of y in an ith feature weight map.
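The similarity formula itself is not reproduced in this text; purely as an illustrative stand-in, per-pixel agreement of the eight maps can be scored by their spread (this specific measure is an assumption, not the disclosure's formula):

```python
import numpy as np

def similarity(maps):
    """S(x, y) in (0, 1]: 1 when all eight feature weight maps agree
    at a pixel, decreasing as their values spread apart."""
    stack = np.stack([np.asarray(m, dtype=np.float64) for m in maps])
    return 1.0 / (1.0 + stack.var(axis=0))
```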
Step 8: Set similarity thresholds T1 and T2, with T1<T2, and obtain a corresponding fused image by selecting different fusion rules based on the similarity degree of each pixel.
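A per-pixel sketch of the two-threshold rule; the branch for S ≥ T2 is truncated in the source text, so averaging all eight maps there is an assumption:

```python
import numpy as np

def fuse(maps, S, t1, t2):
    """Per-pixel fusion: maximum where similarity is low, mean of the
    four greatest values in the middle band, and (assumed) mean of all
    eight maps where similarity is high."""
    stack = np.stack([np.asarray(m, dtype=np.float64) for m in maps])  # (8,H,W)
    top4 = np.sort(stack, axis=0)[-4:]            # four greatest values
    return np.where(S < t1, stack.max(axis=0),
                    np.where(S <= t2, top4.mean(axis=0),
                             stack.mean(axis=0)))
```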
Step 9: Extract human head information in the fused image by using the YOLO algorithm, and then extract human skeleton information in the fused image based on the Alphapose method.
Step 10: Extract a gait feature based on the human skeleton information, extract a gait feature based on a normalized YUV visible light flow, and combine the two gait features for gait recognition.
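One possible way to combine the two gait features for identification; feature concatenation and cosine matching are illustrative choices, as the disclosure does not fix the matching metric:

```python
import numpy as np

def combine_and_match(skel_feat, flow_feat, gallery):
    """Concatenate the skeleton-based and visible-light-flow gait
    features into one descriptor and match it against a gallery of
    known identities by cosine similarity."""
    probe = np.concatenate([skel_feat, flow_feat]).astype(np.float64)
    probe /= np.linalg.norm(probe) + 1e-12
    best_id, best_score = None, -1.0
    for person_id, feat in gallery.items():
        g = np.asarray(feat, dtype=np.float64)
        g /= np.linalg.norm(g) + 1e-12
        score = float(probe @ g)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id, best_score
```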
The above embodiments are preferred implementations of the present disclosure, but the present disclosure is not limited to the above implementations. Any obvious improvement, substitution, or modification made by those skilled in the art without departing from the essence of the present disclosure should fall within the protection scope of the present disclosure.
Number | Date | Country | Kind |
---|---|---|---|
202210732669.4 | Jun 2022 | CN | national |