IMAGE PROCESSING METHOD, APPARATUS, DEVICE AND MEDIUM

Information

  • Patent Application
  • Publication Number
    20230106278
  • Date Filed
    February 21, 2020
  • Date Published
    April 06, 2023
Abstract
The present disclosure provides an image processing apparatus and an image processing method. The method comprises: performing a target detection on a first image to generate a first to-be-processed image, the first to-be-processed image having a size smaller than the size of the first image; performing the target detection on a second image to generate a second to-be-processed image, the second to-be-processed image having a size smaller than the size of the second image, wherein the first image and the second image constitute an image pair containing the target, and a disparity exists between the first image and the second image; and calculating, based on the first to-be-processed image and the second to-be-processed image, a disparity value of the target in the first to-be-processed image and the second to-be-processed image.
Description
FIELD OF INVENTION

The present disclosure relates to image processing, and more particularly, to an image processing method, an image processing apparatus, an image processing device and a computer-readable storage medium.


BACKGROUND ART

With the development of image processing technology, image processing is widely used in commercial fields, and the image processing process is accordingly faced with higher requirements.


Currently, when performing image processing on an image pair (such as a pair of images taken with a binocular camera) including the same target to obtain a spatial coordinate value or attribute characteristics of the target, the process usually involves calculating the disparity value of the target in the image pair. Ordinary methods would try to calculate, for each pixel in one image of the image pair, the disparity value between the pixel and the matching pixel thereof in the other image of the image pair. However, this method is not only time-consuming but also unnecessary when only specific targets in the image are of concern.


Therefore, there is a need for an image processing method that can speed up the calculation process, reduce the time consumed by disparity calculation, and significantly reduce the amount of calculation, while still achieving desirable disparity calculation accuracy.


SUMMARY OF THE INVENTION

According to one aspect of the present disclosure, an image processing method is provided. The image processing method includes: performing a target detection on a first image to generate a first to-be-processed image, the first to-be-processed image having a size smaller than the size of the first image; performing the target detection on a second image to generate a second to-be-processed image, the second to-be-processed image having a size smaller than the size of the second image, wherein the first image and the second image constitute an image pair including the target, and a disparity exists between the first image and the second image; and calculating, based on the first to-be-processed image and the second to-be-processed image, a disparity value of the target in the first to-be-processed image and the second to-be-processed image.


In one embodiment of the present disclosure, wherein calculating the disparity value of the target in the first to-be-processed image and the second to-be-processed image includes: obtaining, for each pixel of the target in the first to-be-processed image, a matching pixel in the second to-be-processed image that matches the pixel; calculating, for each pixel of the target in the first to-be-processed image, the disparity value between the pixel and the matching pixel thereof, and generating a disparity image; calculating the disparity value of the target based on the disparity image.


In another embodiment of the present disclosure, wherein a first target detection box is obtained by performing the target detection on the first image, and a second target detection box is obtained by performing the target detection on the second image, and wherein generating the first to-be-processed image and generating the second to-be-processed image includes: determining a region to be processed according to the first target detection box and the second target detection box, wherein the region to be processed includes the first target detection box and the second target detection box; and cropping the first image and the second image according to the region to be processed to obtain the first to-be-processed image and the second to-be-processed image.


In another embodiment of the present disclosure, wherein the first image is a current image frame of a first video image, and the second image is a current image frame of a second video image, when the target is not detected in the first image or the second image, an image in which the target is not detected is used as an image to be predicted, and the process of generating the target detection box of the image to be predicted includes: obtaining target motion information based on the video image corresponding to the image to be predicted; and determining the target detection box in the image to be predicted according to the target motion information.


In another embodiment of the present disclosure, wherein the target motion information is target motion speed, and wherein determining the target detection box in the image to be predicted according to the target motion information includes: determining the target detection box in the image to be predicted according to the target motion speed and the target detection box in a previous image frame corresponding to the image to be predicted.


In another embodiment of the present disclosure, wherein, obtaining, for each pixel of the target in the first to-be-processed image, the matching pixel in the second to-be-processed image that matches the pixel comprises: determining a maximum disparity threshold and a minimum disparity threshold of the pixel; determining a pixel search region in the second to-be-processed image according to the maximum disparity threshold and the minimum disparity threshold; determining, based on the pixel in the first to-be-processed image and a pixel in the pixel search region, a matching pixel in the second to-be-processed image that matches the pixel.


In another embodiment of the present disclosure, wherein the first image is a current image frame of a first video image, the second image is a current image frame of a second video image, and wherein determining the maximum disparity threshold and the minimum disparity threshold of the pixel comprises: obtaining the disparity value of the target in at least one pair of historical image frames of the first video image and the second video image; determining the maximum disparity threshold and the minimum disparity threshold of the pixel according to the disparity value of the target in the at least one pair of historical image frames.


In another embodiment of the present disclosure, wherein calculating, for each pixel of the target in the first to-be-processed image, the disparity value between the pixel and the matching pixel thereof, and generating the disparity image comprises: for each pixel of the target in the first to-be-processed image, calculating the vector-distance between the pixel in the first to-be-processed image and the matching pixel thereof in the second to-be-processed image, and using the vector-distance as the disparity value of the pixel; and setting the pixel value of the pixel as the disparity value of the pixel, and obtaining the disparity image.


In another embodiment of the present disclosure, wherein calculating the disparity value of the target based on the disparity image comprises: weighting and summing the pixel value of each pixel in the disparity image to obtain the disparity value of the target.


In another embodiment of the present disclosure, the method further comprises: calculating a spatial coordinate value of the target according to the disparity value of the target in the first to-be-processed image and the second to-be-processed image.


According to an aspect of the present disclosure, there is provided an image processing apparatus, comprising: a first to-be-processed image generation module, which is configured to perform a target detection on a first image to generate a first to-be-processed image, the first to-be-processed image having a size smaller than the size of the first image; a second to-be-processed image generation module, which is configured to perform the target detection on a second image to generate a second to-be-processed image, the second to-be-processed image having a size smaller than the size of the second image, wherein the first image and the second image constitute an image pair including the target, and a disparity exists between the first image and the second image; and a target disparity value calculation module, which is configured to calculate, based on the first to-be-processed image and the second to-be-processed image, a disparity value of the target in the first to-be-processed image and the second to-be-processed image.


In an embodiment of the present disclosure, wherein the target disparity value calculation module comprises: a pixel matching module, which is configured to obtain, for each pixel of the target in the first to-be-processed image, a matching pixel in the second to-be-processed image that matches the pixel; a disparity image generation module, which is configured to calculate, for each pixel of the target in the first to-be-processed image, the disparity value between the pixel and the matching pixel thereof, and generate a disparity image; and a target disparity value generation module, which is configured to calculate the disparity value of the target based on the disparity image.


In another embodiment of the present disclosure, wherein a first target detection box is obtained by performing the target detection on the first image, and a second target detection box is obtained by performing the target detection on the second image, and wherein the apparatus further comprises: a region to be processed determination module, which is configured to determine a region to be processed according to the first target detection box and the second target detection box, wherein the region to be processed comprises the first target detection box and the second target detection box; an image segmentation module, which is configured to crop the first image and the second image according to the region to be processed to obtain the first to-be-processed image and the second to-be-processed image.


In another embodiment of the present disclosure, wherein the first image is a current image frame of a first video image, and the second image is a current image frame of a second video image, when the target is not detected in the first image or the second image, an image in which the target is not detected is used as an image to be predicted, and the apparatus further comprises: a target motion information acquisition module, which is configured to obtain target motion information based on the video image corresponding to the image to be predicted; and a target detection box determination module, which is configured to determine the target detection box in the image to be predicted according to the target motion information.


In another embodiment of the present disclosure, wherein the target motion information is target motion speed, and wherein the target detection box determination module comprises: a motion detection box prediction module, which is configured to determine the target detection box in the image to be predicted according to the target motion speed and the target detection box in a previous image frame corresponding to the image to be predicted.


In another embodiment of the present disclosure, the pixel matching module comprises: a threshold determination module, which is configured to determine a maximum disparity threshold and a minimum disparity threshold of the pixel; a search region determination module, which is configured to determine a pixel search region in the second to-be-processed image according to the maximum disparity threshold and the minimum disparity threshold; a matching pixel determination module, which is configured to determine, based on the pixel in the first to-be-processed image and a pixel in the pixel search region, a matching pixel in the second to-be-processed image that matches the pixel.


In another embodiment of the present disclosure, wherein the first image is a current image frame of a first video image, the second image is a current image frame of a second video image, and wherein the threshold determination module comprises: a target attribute value acquisition module, which is configured to obtain the disparity value of the target in at least one pair of historical image frames of the first video image and the second video image; a disparity threshold calculation module, which is configured to determine the maximum disparity threshold and the minimum disparity threshold of the pixel according to the disparity value of the target in the at least one pair of historical image frames.


According to an aspect of the present disclosure, there is provided an image processing device, wherein the device comprises a processor and a memory, the memory including a set of instructions that, when executed by the processor, cause the image processing device to perform operations, the operations comprising: performing a target detection on a first image to generate a first to-be-processed image, the first to-be-processed image having a size smaller than the size of the first image; performing the target detection on a second image to generate a second to-be-processed image, the second to-be-processed image having a size smaller than the size of the second image, wherein the first image and the second image constitute an image pair including the target, and a disparity exists between the first image and the second image; and calculating, based on the first to-be-processed image and the second to-be-processed image, a disparity value of the target in the first to-be-processed image and the second to-be-processed image.


According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer-readable instructions, wherein the method as previously described is performed when the instructions are executed by a computer.


The significance and benefits of the present disclosure will be clear from the following description of the embodiments. However, it should be understood that those embodiments are merely examples of how the disclosure can be implemented, and the meanings of the terms used to describe the disclosure are not limited to the specific ones in which they are used in the description of the embodiments.


Other systems, methods, features and advantages of the disclosure will be, or will become, apparent to one with skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the disclosure, and be protected by the appended claims.





BRIEF DESCRIPTION OF THE DRAWINGS

The disclosure can be better understood with reference to the following drawings and description. Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views.



FIG. 1 illustrates an exemplary flowchart of an image processing method according to an embodiment of the present disclosure;



FIG. 2 illustrates an exemplary flowchart of the process of calculating the disparity value of the target in the first to-be-processed image and the second to-be-processed image according to an embodiment of the present disclosure;



FIG. 3A illustrates an exemplary flowchart of the process of generating the first to-be-processed image and generating the second to-be-processed image according to an embodiment of the present disclosure;



FIG. 3B illustrates a schematic diagram of a determined region to be processed according to an embodiment of the present disclosure;



FIG. 4 illustrates an exemplary flowchart of the process of generating the target detection box of the image to be predicted according to an embodiment of the present disclosure;



FIG. 5 illustrates a schematic diagram of determining the target detection box in the image to be predicted according to the target motion information according to an embodiment of the present disclosure;



FIG. 6A illustrates an exemplary flowchart of the process of obtaining the matching pixel in the second to-be-processed image that matches the pixel of the target in the first to-be-processed image according to an embodiment of the present disclosure;



FIG. 6B illustrates a schematic diagram of determining a pixel search region in the second to-be-processed image according to the maximum disparity threshold and the minimum disparity threshold;



FIG. 6C illustrates another schematic diagram of determining a pixel search region in the second to-be-processed image according to the maximum disparity threshold and the minimum disparity threshold;



FIG. 7 illustrates an exemplary flowchart of process of determining the maximum disparity threshold and the minimum disparity threshold of the pixel based on the first video image and the second video image according to an embodiment of the present disclosure;



FIG. 8A illustrates an exemplary flowchart of the process of calculating the disparity value between the pixel in the first to-be-processed image and the matching pixel thereof and generating the disparity image according to an embodiment of the present disclosure;



FIG. 8B illustrates a schematic diagram of the magnified disparity image according to the present disclosure;



FIG. 9 shows a structural block diagram of the image processing apparatus according to an embodiment of the present disclosure;



FIG. 10 shows a structural block diagram of an image processing device according to an embodiment of the present disclosure.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Hereinafter, the structure and concept of the present disclosure will be described in more detail with reference to the drawings. The details and examples are only provided to facilitate understanding of the present disclosure, and should not limit the present disclosure in any way.


Currently, when performing image processing on an image pair (such as a pair of images taken with a binocular camera) including the same target to obtain the spatial coordinate value or attribute characteristics of the target, the process usually involves calculating a disparity value of the target in the image pair.


When calculating the disparity value of an image pair, ordinary methods include: firstly, for each pixel in one image of the image pair, calculating the disparity value between the pixel and the matching pixel thereof in the other image of the image pair; and then, based on the calculated disparity value of each pixel, calculating the disparity value of the image pair.


However, this method is not only time-consuming but also unnecessary when only specific targets in the image are of concern. In addition, this method has a large amount of calculation, which causes a waste of computing resources.


To overcome at least some of the problems of the above image processing method, the present disclosure provides an image processing method which speeds up the calculation process while still achieving good disparity calculation. In addition, the time consumed by disparity calculation and the amount of calculation can be reduced significantly.



FIG. 1 illustrates an exemplary flowchart of an image processing method 100 according to an embodiment of the present disclosure.


As shown in FIG. 1, according to the image processing method 100, firstly, in step S101, a target detection is performed on a first image to generate a first to-be-processed image, and the first to-be-processed image has a size smaller than the size of the first image.


Then, in step S102, the target detection is performed on a second image to generate a second to-be-processed image, the second to-be-processed image has a size smaller than the size of the second image.


Wherein the first image and the second image constitute an image pair including the target, and a disparity exists between the first image and second image.


In the field of computer vision, the disparity represents the vector-distance of pixel-level correspondences/matching pairs in an image pair. Specifically, for a single target pixel in an image of the image pair, the disparity of the target pixel corresponds to the vector-distance between the target pixel in one image of the image pair and the matching pixel of the target pixel in the other image of the image pair.
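As a non-limiting illustration, this definition can be expressed in a few lines of Python (the function name and the example coordinates are assumptions of this sketch, not part of the disclosure):

```python
import numpy as np

def disparity_vector(p_first, p_second):
    """Vector-distance between a target pixel in one image of the pair
    and its matching pixel in the other image, i.e. its disparity."""
    return np.asarray(p_first, dtype=float) - np.asarray(p_second, dtype=float)

# For a rectified pair the matching pixel lies on the same scanline,
# so the disparity reduces to a horizontal offset:
print(disparity_vector((240, 128), (215, 128)))  # -> [25.  0.]
```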


In some embodiments, the image pair (including the first image and the second image) may be a pair of images captured in real time by a camera or a video apparatus, for example, the image pair may be a pair of images output by a binocular camera or images taken by a monocular camera at different times and positions, or may be a pair of images including the same target taken by different cameras at different positions. In addition, the image pair may be a pair of images obtained after being processed in advance by a computer. Still further, the image pair may also be the output of a device with an image display or a potential image display function. The embodiments of the present disclosure are not limited by the source and acquisition method of the image pair.


In some embodiments, the target detection process may be achieved by pixel threshold comparison, for example, pixel values of pixels in the first image and the second image are compared with a preset pixel threshold, and a set of pixels having pixel values greater than the preset threshold is used as the target; it may also be achieved by performing edge detection on the image to identify a target in the image; or it may be implemented by a deep learning algorithm, such as Regions with Convolutional Neural Network features (R-CNN), or a two-stage object detection algorithm such as Fast R-CNN. The embodiments of the present disclosure are not limited by the specific target detection method selected.
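For instance, the pixel-threshold variant might look like the following minimal OpenCV sketch (the function name, threshold value, and box format are illustrative assumptions); it also returns the smallest upright rectangle around the detected pixels, anticipating the target detection box discussed next:

```python
import cv2

def detect_target_box(image_gray, pixel_threshold=128):
    """Toy pixel-threshold target detection: pixels whose values exceed
    the preset threshold form the target; the enclosing rectangle with
    the smallest area is returned as the target detection box."""
    _, mask = cv2.threshold(image_gray, pixel_threshold, 255, cv2.THRESH_BINARY)
    points = cv2.findNonZero(mask)
    if points is None:
        return None                     # no target detected in this image
    x, y, w, h = cv2.boundingRect(points)
    return (x, y, x + w, y + h)         # corner coordinates of the box
```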


The target detection result may be represented by a target detection box or a coordinate vector; for example, for a target in an image, a rectangular region including the target and having the smallest area may be adopted as the target detection box, and the coordinates of the four corner points of the target detection box are used to represent the target detection box.


However, according to an embodiment of the present disclosure, the target detection result may also be a circular or irregular closed shape that surrounds the target and has the smallest area; the embodiments of the present disclosure place no limitation on the manner of representing the detection result.


The image pair may include only one target or multiple targets. The embodiments of the present disclosure place no limitation on the number of targets in the image pair or on the target types thereof.


The first to-be-processed image represents an image region of the first image used for disparity calculation, and the second to-be-processed image represents an image region of the second image used for disparity calculation. For example, the first to-be-processed image and the second to-be-processed image may have the same size. Embodiments of the present disclosure are not limited by the specific sizes of the first to-be-processed image and the second to-be-processed image.


It should be understood that operations of steps S101 and S102 may be performed in parallel, or performed sequentially, which is not intended to be limited herein.


After the first to-be-processed image and the second to-be-processed image are obtained, in step S103, a disparity value of the target in the first to-be-processed image and the second to-be-processed image is calculated.


Based on the foregoing, by performing the target detection on the first image and the second image to obtain the first to-be-processed image and the second to-be-processed image, which have sizes smaller than those of the initial first and second images, and further calculating the disparity value of the target based on the first and second to-be-processed images, the time consumed by the disparity calculation and the amount of calculation can be reduced significantly. Therefore, the calculation process of the disparity value of the target can be sped up.


In some embodiments, the above process S103 of calculating the disparity value of the target in the first to-be-processed image and the second to-be-processed image can be described more specifically. FIG. 2 illustrates an exemplary flowchart of the process S103 of calculating the disparity value of the target in the first to-be-processed image and the second to-be-processed image according to an embodiment of the present disclosure.


As shown in FIG. 2, first, in step S1031, for each pixel of the target in the first to-be-processed image, a matching pixel in the second to-be-processed image that matches the pixel is obtained. Embodiments of the present disclosure are not limited by a specific method of obtaining a matching pixel.


After the matching pixel for each pixel of the target in the first to-be-processed image is obtained, in step S1032, for each pixel of the target in the first to-be-processed image, the disparity value between the pixel and the matching pixel thereof is calculated, and a disparity image is generated.


For example, for each pixel of the target in the first to-be-processed image, the disparity value between the pixel and the matching pixel thereof may be calculated by computing the vector-distance between the pixel and the matching pixel thereof.


The disparity image is an image representing the disparity value of each pixel in the first to-be-processed image. Specifically, the disparity image and the first to-be-processed image may have the same size, and the pixel value of each pixel in the disparity image may be the disparity value of the corresponding pixel in the first to-be-processed image.


Then, in step S1033, the disparity value of the target is calculated based on the disparity image.


For example, the target disparity value can be calculated by weighting the pixel values of multiple pixels of the target in the disparity image, or by randomly selecting a target pixel from multiple pixels of the target in the disparity image and directly using the pixel value (i.e., disparity value) of that pixel as the target disparity value. In addition, the target disparity value may also be calculated by preprocessing the disparity image to obtain a target cluster, and calculating the target disparity value based on the pixels included in the target cluster. Embodiments of the present disclosure are not limited by the specific method of obtaining the target disparity value based on the disparity image.


Based on the foregoing, by obtaining a matching pixel in the second to-be-processed image for each pixel of the target in the first to-be-processed image, and further calculating the disparity value of each such pixel, the disparity image may be obtained. In addition, by processing only the pixels of the target instead of every pixel in the whole image, the time consumed by the disparity calculation and the amount of calculation can be reduced significantly.


In some embodiments, a first target detection box is obtained by performing the target detection on the first image, and a second target detection box is obtained by performing the target detection on the second image, and wherein the process of generating the first to-be-processed image and generating the second to-be-processed image can be described more specifically.



FIG. 3A illustrates an exemplary flowchart of the process 200 of generating the first to-be-processed image and generating the second to-be-processed image according to an embodiment of the present disclosure.


As shown in FIG. 3A, first, in step S201, a region to be processed is determined according to the first target detection box and the second target detection box, wherein the region to be processed comprises the first target detection box and the second target detection box.


The region to be processed represents an image region in the first image and the second image used for the disparity calculation. Embodiments of the present disclosure are not limited by the size and shape of the region to be processed.



FIG. 3B illustrates a schematic diagram of a determined region to be processed according to an embodiment of the present disclosure. Referring to FIG. 3B, the process of determining a region to be processed can be described more specifically.


In some embodiments, referring to FIG. 3B, the target is the human head, the first image is the left image taken by a binocular camera, and the second image is the right image taken by the binocular camera. Then, based on the first target detection box T1 in the left image and the second target detection box T2 in the right image, the region to be processed Rt may be determined as the rectangular region Rmin (shown by a solid line box) which includes the first target detection box T1 and the second target detection box T2 and has the smallest area.


However, under other circumstances, the region to be processed Rt may be determined as a region larger than the rectangular region Rmin for better calculation accuracy.


Based on the determined region to be processed, in step S202, the first image and the second image are cropped according to the region to be processed to obtain the first to-be-processed image and the second to-be-processed image, wherein the first to-be-processed image and the second to-be-processed image have the same size.
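Steps S201 and S202 might be sketched as follows, representing each detection box by its (x0, y0, x1, y1) corner coordinates; the function names and the optional margin are assumptions of this sketch:

```python
def region_to_be_processed(box1, box2, margin=0):
    """Rectangular region Rmin with the smallest area that includes both
    target detection boxes (step S201); a positive margin enlarges it,
    as may be done for better calculation accuracy."""
    x0 = max(0, min(box1[0], box2[0]) - margin)
    y0 = max(0, min(box1[1], box2[1]) - margin)
    x1 = max(box1[2], box2[2]) + margin
    y1 = max(box1[3], box2[3]) + margin
    return (x0, y0, x1, y1)

def crop_pair(first_image, second_image, region):
    """Crop both images with the same region (step S202), so the two
    to-be-processed images have the same size."""
    x0, y0, x1, y1 = region
    return first_image[y0:y1, x0:x1], second_image[y0:y1, x0:x1]
```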


Based on the foregoing, according to the present disclosure, by determining the region to be processed based on the first target detection box and the second target detection box, and further cropping the first image and the second image based on the region to be processed, on the one hand, the accuracy of disparity calculation is improved by accurately locating the target position; on the other hand, by cropping the image pair with the target detection results, the calculation amount is significantly reduced and the calculation speed is increased.


In some embodiments, after the first to-be-processed image and the second to-be-processed image are obtained, the first to-be-processed image and the second to-be-processed image can be further scaled proportionally. For example, the first to-be-processed image and the second to-be-processed image may be reduced to two thirds of their current size to obtain a first reduced image and a second reduced image. Then, the target disparity value of the target is calculated based on the first reduced image and the second reduced image.


The present disclosure is not limited by the scale of reducing the first image and the second image, and the specific size of the first reduced image and the second reduced image.


By further reducing the first to-be-processed image and the second to-be-processed image, under the condition that the detection accuracy is basically maintained, the amount of calculation in the process of calculating the disparity value of the target can be further reduced and the calculation speed can be improved, which makes it much easier to deploy the image processing method on devices with limited computing power. In addition, the scaling process makes it possible to adjust the first to-be-processed image and the second to-be-processed image to a fixed size, and thus control the time consumption.
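A proportional reduction of this kind could be as simple as the following sketch (the two-thirds scale merely mirrors the example above):

```python
import cv2

def shrink_pair(img1, img2, scale=2 / 3):
    """Proportionally reduce both to-be-processed images before the
    disparity calculation; INTER_AREA is a common choice for shrinking."""
    size = (int(img1.shape[1] * scale), int(img1.shape[0] * scale))
    return (cv2.resize(img1, size, interpolation=cv2.INTER_AREA),
            cv2.resize(img2, size, interpolation=cv2.INTER_AREA))
```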


In some embodiments, the first image is a current image frame of a first video image, and the second image is a current image frame of a second video image. The first video image and the second video image may be the output of two cameras located at different positions.


Under this circumstance, when the target is not detected in the first image or the second image, an image in which the target is not detected is considered as an image to be predicted, and the process of generating the target detection box of the image to be predicted can be described more specifically. FIG. 4 illustrates an exemplary flowchart of the process 400 of generating the target detection box of the image to be predicted according to an embodiment of the present disclosure.


As shown in FIG. 4, first, in step S401, target motion information is obtained based on the video image corresponding to the image to be predicted.


For example, the target motion information is information that represents the characteristics of the target motion, which may include the target motion speed, the change trend of the disparity value of the target, and so on. Embodiments of the present disclosure are not limited by the content and the type of the target motion information.


After the target motion information is obtained, in step S402, the target detection box in the image to be predicted is determined according to the target motion information.


For example, the target detection box in the image to be predicted may be calculated based on the target motion speed and the position of the target detection box of the previous frame corresponding to the image to be predicted. However, it should be understood that embodiments of the present disclosure are not limited by the specific method of determining the target detection box in the image to be predicted according to the target motion information.


Based on the foregoing, when the target is not detected in the first image or the second image, the target detection box can be generated from the target motion information of the video image corresponding to the image to be predicted, so that the disparity calculation can still be performed based on the generated target detection box, which improves the calculation accuracy and enhances the robustness of the method.


In some embodiments, the target motion information is target motion speed, and the process of determining the target detection box in the image to be predicted according to the target motion information includes: determining the target detection box in the image to be predicted according to the target motion speed and the target detection box in a previous image frame corresponding to the image to be predicted.


The previous image frame corresponding to the image to be predicted represents the image frame previous to the current image frame. For example, if the current image frame is t, the previous image frame may be t-1.


The above process of determining the target detection box can be described more specifically. FIG. 5 illustrates a schematic diagram of determining the target detection box in the image to be predicted according to the target motion information according to an embodiment of the present disclosure.


As shown in FIG. 5, suppose the first image is the left image taken by a binocular camera, the second image is the right image taken by the binocular camera, and the target is the human head. Suppose further that the target is detected in the right image but not in the left image, so that the left image is considered as the image to be predicted Tm. Firstly, the motion speed may be calculated: let mt stand for the current right image frame, and select the previous k image frames mt-1, mt-2, mt-3, ..., mt-k to estimate the target motion information; the average motion speed between the detection box centers Tmt-1 and Tmt-k can then be calculated and used as the target motion speed Vs. Secondly, the position of the target detection box in the image to be predicted may be calculated according to the following formula:








Tmt = Tmt-1 + Vs * Δt




where Tmt represents the center point position of the target detection box in the image to be predicted, Tmt-1 represents the center point position of the target detection box in the previous image frame corresponding to the image to be predicted, Vs represents the target motion speed, and Δt represents the timespan between adjacent frames.
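A minimal sketch of this prediction, assuming the detection-box center positions of the previous k frames are available as (x, y) pairs (oldest first):

```python
import numpy as np

def predict_box_center(centers, dt):
    """Apply Tmt = Tmt-1 + Vs * Δt, with Vs estimated as the average
    motion speed over the previous k frames (requires len(centers) >= 2)."""
    c = np.asarray(centers, dtype=float)
    vs = (c[-1] - c[0]) / ((len(c) - 1) * dt)   # average motion speed Vs
    return c[-1] + vs * dt                      # predicted center Tmt
```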


Based on the foregoing, when the target is not detected in the current image, the target motion speed is calculated from the video image corresponding to the image, and the target detection box of the current image is determined by the target motion speed and the position of the target detection box of the previous image frame, so that the target detection box can be determined in an easy and convenient way.


In some embodiments, the process of obtaining the matching pixel in the second to-be-processed image that matches the pixel in the first to-be-processed image can be described more specifically. FIG. 6A illustrates an exemplary flowchart of the process S1031 of obtaining the matching pixel in the second to-be-processed image that matches the pixel of the target in the first to-be-processed image according to an embodiment of the present disclosure.


As shown in FIG. 6A, firstly, in step S1031-1, a maximum disparity threshold and a minimum disparity threshold of the pixel are determined.


The maximum disparity threshold and the minimum disparity threshold are used to define the pixel search region in the second to-be-processed image. The maximum disparity threshold represents the preset maximum disparity value, and the minimum disparity threshold represents the preset minimum disparity value. The maximum disparity threshold and the minimum disparity threshold can be set based on actual needs, and embodiments of the present disclosure are not limited by the specific values of the maximum disparity threshold and the minimum disparity threshold.


The maximum disparity threshold and the minimum disparity threshold may be constant for all frames of the video image, or may be dynamically adjusted based on different image frames of the video image. The embodiments of the present disclosure are not limited by the way of setting the maximum disparity threshold and the minimum disparity threshold.


After the determination of the maximum disparity threshold and the minimum disparity threshold, in step S1031-2, a pixel search region is determined in the second to-be-processed image according to the maximum disparity threshold and the minimum disparity threshold.


The pixel search region represents the region in the second to-be-processed image used for searching the matching pixels of the pixels in the first to-be-processed image. Embodiments of the present disclosure are not limited by the manner of determining the pixel search region and the size of the pixel search region.



FIG. 6B illustrates a schematic diagram of determining a pixel search region in the second to-be-processed image according to the maximum disparity threshold and the minimum disparity threshold according to an embodiment of the present disclosure.


As shown in FIG. 6B, the first image is the left image taken by a binocular camera, the second image is the right image taken by the binocular camera, and the target is the human head. The preset maximum disparity threshold Ymax and the preset minimum disparity threshold Ymin are marked in FIG. 6B; the region extending between them can then be determined as the pixel search region.


After the determination of the pixel search region, in step S1031-3, a matching pixel in the second to-be-processed image that matches the pixel is determined based on the pixel in the first to-be-processed image and at least one pixel in the pixel search region of the pixel.


Based on the foregoing, by determining a pixel search region in the second to-be-processed image based on the maximum disparity threshold and the minimum disparity threshold, when searching for a matching pixel for a pixel in the first to-be-processed image, only the pixels in the pixel search region in the second to-be-processed image need to be searched, which significantly reduces the calculation amount when performing pixel matching and increases the speed of matching pixels. At the same time, it is possible to further adjust the pixel search region by changing the maximum disparity threshold and the minimum disparity threshold in real time according to actual needs.
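As an illustration of steps S1031-1 to S1031-3, the following sketch matches one target pixel by scanning only the columns allowed by the two disparity thresholds, scoring candidates with a sum of absolute differences over a small window (the window-based SAD cost is an assumption of this sketch, not mandated by the disclosure):

```python
import numpy as np

def match_pixel(left, right, x, y, d_min, d_max, win=3):
    """Search, for the interior pixel (x, y) of the first (left)
    to-be-processed image, a matching pixel on the same scanline of the
    second (right) image, restricted to the region defined by the
    minimum and maximum disparity thresholds."""
    h, w = left.shape
    r = win // 2
    patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
    best_d, best_cost = d_min, np.inf
    for d in range(d_min, d_max + 1):        # pixel search region
        xr = x - d                           # candidate column in the right image
        if xr - r < 0 or xr + r + 1 > w:
            continue
        cand = right[y - r:y + r + 1, xr - r:xr + r + 1].astype(np.int32)
        cost = np.abs(patch - cand).sum()    # sum of absolute differences
        if cost < best_cost:
            best_cost, best_d = cost, d
    return (x - best_d, y), best_d           # matching pixel and its disparity
```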


However, under different circumstances, the setting of the minimum disparity threshold and the maximum disparity threshold may significantly affect the calculation accuracy. When the minimum disparity threshold is too big, as shown in FIG. 6B, for the pixel PT in the left image, the method might find a wrong matching pixel PM1 rather than the desired matching pixel PM0. Conversely, when the maximum disparity threshold is too small, as shown in FIG. 6C, wrong matches would also result: the correct matching pixel PM0 for the pixel PT lies outside of the pixel search region and is never considered.


Based on the foregoing, in some embodiments, the first image is a current image frame of a first video image, the second image is a current image frame of a second video image, and the process of determining the maximum disparity threshold and the minimum disparity threshold of the pixel can be described more specifically. FIG. 7 illustrates an exemplary flowchart of process 700 of determining the maximum disparity threshold and the minimum disparity threshold of the pixel based on the first video image and the second video image according to an embodiment of the present disclosure.


As shown in FIG. 7, first, in step S701, the disparity value of the target is obtained in at least one pair of historical image frames of the first video image and the second video image.


Each pair of historical image frames in the at least one pair of historical image frames includes an image frame of the first video image and the corresponding image frame of the second video image.


For example, the disparity value of the target in at least one pair of historical image frames may be directly obtained, or the disparity value of the target in at least one pair of historical image frames can also be calculated according to the characteristics of the target in the historical image frames, such as its spatial coordinate values or depth information. The embodiments of the present disclosure are not limited by the specific acquisition manner of the disparity value of the target in at least one pair of historical image frames.


According to actual needs, different numbers of historical image frame pairs may be selected; for example, only one pair of historical image frames may be selected, or multiple pairs may be selected. Embodiments of the present disclosure are not limited by the specific number of pairs of historical image frames.


After the disparity values are obtained, in step S702, the maximum disparity threshold and the minimum disparity threshold of the pixel are determined according to the disparity value of the target in the at least one pair of historical image frames. Embodiments of the present disclosure are not limited by the specific method of determining the two thresholds from the disparity value of the target.


Based on the foregoing, by determining the maximum disparity threshold and the minimum disparity threshold of the pixel according to the disparity value of the target in the at least one pair of historical image frames, the maximum disparity threshold and the minimum disparity threshold can be set more accurately and adjusted in real time based on the actual situation, so that the method may have better detection accuracy and robustness.
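One simple way to derive the two thresholds from history is to widen the observed range by a relative margin to tolerate target motion between frames (the margin value and rounding are assumptions of this sketch):

```python
def disparity_thresholds(historical_disparities, margin=0.2):
    """Minimum and maximum disparity thresholds from the target's
    disparity values in one or more pairs of historical image frames."""
    lo, hi = min(historical_disparities), max(historical_disparities)
    span = max(hi - lo, 1.0)
    d_min = max(0, int(lo - margin * span))
    d_max = int(hi + margin * span + 0.5)
    return d_min, d_max

d_min, d_max = disparity_thresholds([24.5, 25.1, 26.0])  # -> (24, 26)
```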


In some embodiments, for each pixel of the target in the first to-be-processed image, the process of calculating the disparity value between the pixel and the matching pixel thereof and generating the disparity image can be described more specifically.



FIG. 8A illustrates an exemplary flowchart of the process S1032 of calculating the disparity value between the pixel and the matching pixel thereof and generating the disparity image according to an embodiment of the present disclosure.


As shown in FIG. 8A, first, in step S1032-1, for each pixel of the target in the first to-be-processed image, the vector-distance between the pixel in the first to-be-processed image and its matching pixel in the second to-be-processed image is calculated, and the vector-distance is used as the disparity value of the pixel.


The vector-distance between the pixel and the matching pixel of the pixel represents the spatial distance between the pixel and the matching pixel. When the first image and the second image are coplanar and their optical axes are parallel to each other, the vector-distance between the pixel and the matching pixel of the pixel may be the plane-distance between the pixel and the matching pixel.


Based on the disparity value of each pixel of the target, in step S1032-2, for each pixel of the target in the first to-be-processed image, the pixel value of the pixel is set as the disparity value of the pixel, and the disparity image is thus obtained.


Based on the foregoing, the disparity image is obtained by calculating the vector-distance between each pixel of the target in the first to-be-processed image and its matching pixel in the second to-be-processed image, using the vector-distance as the disparity value of the pixel, and setting the pixel value of the pixel to that disparity value, which facilitates subsequent calculation of the target disparity value.
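Combining the two steps, and reusing the match_pixel sketch from above, the disparity image might be built as follows (target_mask, marking the target's pixels, and the zero value for non-target pixels are assumptions of this sketch):

```python
import numpy as np

def build_disparity_image(left, right, target_mask, d_min, d_max, win=3):
    """Steps S1032-1 and S1032-2: compute the disparity of each target
    pixel and store it as that pixel's value in the disparity image."""
    disp = np.zeros(left.shape, dtype=np.float32)
    r = win // 2
    ys, xs = np.nonzero(target_mask)
    for x, y in zip(xs, ys):
        # skip pixels whose matching window would leave the image
        if r <= x < left.shape[1] - r and r <= y < left.shape[0] - r:
            _, d = match_pixel(left, right, x, y, d_min, d_max, win)
            disp[y, x] = d
    return disp
```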


In some embodiments, in order to better display the disparity differences between pixels, the pixel value (i.e., disparity value) of each pixel in the disparity image may be enlarged proportionally into the range of 0-255 to generate a magnified disparity image, so that the disparity changes of the pixels of the target can be shown in an intuitive and convenient way.
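For example, the proportional enlargement into 0-255 can be done in one line with OpenCV:

```python
import cv2
import numpy as np

def magnify_disparity(disp):
    """Scale the disparity values proportionally into the range 0-255,
    so that light pixels indicate large disparities and dark pixels
    small ones."""
    out = cv2.normalize(disp, None, 0, 255, cv2.NORM_MINMAX)
    return out.astype(np.uint8)
```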



FIG. 8B illustrates a schematic diagram of the magnified disparity image according to the present disclosure. A light-colored pixel indicates that the disparity value of the pixel is large, and a dark-colored pixel indicates that the disparity value of the pixel is small.


In some embodiments, calculating the disparity value of the target based on the disparity image comprises: weighting and summing the pixel value of each pixel in the disparity image to obtain the disparity value of the target.


By weighting and summing the pixel value of each pixel in the disparity image to obtain the disparity value of the target, it is possible to reasonably assign a weight to each pixel of the target: for example, a larger weight value may be assigned to a pixel near the center point of the target, and a smaller weight value to a pixel far from the center point, thereby further improving the calculation accuracy of the target disparity value. However, it should be understood that the embodiments of the present disclosure are not limited by the specific allocation manner of the weight values or the specific weight value of each pixel.
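A sketch of such a weighting scheme, using a Gaussian falloff from the target's center point (the falloff shape and sigma are assumptions of this sketch; the disclosure only requires that the weights be assigned reasonably):

```python
import numpy as np

def target_disparity(disp, center, sigma=10.0):
    """Weight and sum the pixel values of the disparity image: pixels
    near the target center get larger weights, distant ones smaller."""
    ys, xs = np.nonzero(disp)           # target pixels (zero elsewhere)
    if len(xs) == 0:
        return 0.0
    cx, cy = center
    w = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    w /= w.sum()                        # normalize so the weights sum to 1
    return float((disp[ys, xs] * w).sum())
```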


In some embodiments, the image processing method further includes the process of calculating a spatial coordinate value of the target according to the disparity value of the target in the first to-be-processed image and the second to-be-processed image.


By calculating the spatial coordinate value of the target according to the disparity value, the spatial feature of the target may be obtained and used for further analysis of the target.
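The text does not spell the conversion out; for a rectified pair it is the standard stereo triangulation, sketched below (fx, baseline, cx, cy come from camera calibration, and equal focal lengths fx = fy are assumed):

```python
def spatial_coordinates(u, v, d, fx, baseline, cx, cy):
    """Standard rectified-stereo triangulation: depth from disparity,
    then back-projection of the pixel (u, v) through the pinhole model."""
    Z = fx * baseline / d           # depth is inversely proportional to disparity
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fx           # assuming equal focal lengths fx == fy
    return X, Y, Z
```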


In some embodiments, before the target detection on the first image and the second image, the image processing method further includes: projecting, through a matrix transformation, the first image and the second image onto the same plane, such that the optical axis of the first image and the optical axis of the second image are parallel to each other.


By performing such image correction and preprocessing on the first image and the second image, so that the first image and the second image are projected onto the same plane with their optical axes parallel to each other, the method of the present disclosure may facilitate the subsequent calculation of the disparity value, reduce the amount of calculation, and further increase the calculation speed.
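One possible realization of this matrix transformation is OpenCV's stereo rectification, assuming the intrinsics (K1, d1, K2, d2) and extrinsics (R, T) of the two cameras are known from a prior calibration (all parameter names here are assumptions of this sketch):

```python
import cv2

def rectify_pair(first_image, second_image, K1, d1, K2, d2, R, T, image_size):
    """Project both images onto a common plane with parallel optical
    axes, so matching pixels end up on the same scanline."""
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, image_size, R, T)
    m1x, m1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, image_size, cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, image_size, cv2.CV_32FC1)
    return (cv2.remap(first_image, m1x, m1y, cv2.INTER_LINEAR),
            cv2.remap(second_image, m2x, m2y, cv2.INTER_LINEAR))
```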


According to another aspect of the present disclosure, there is provided an image processing apparatus. FIG. 9 shows a structural block diagram of the image processing apparatus 900 according to an embodiment of the present disclosure.


As shown in FIG. 9, the image processing apparatus 900 provided by the embodiment of the present disclosure includes: a first to-be-processed image generation module 910, a second to-be-processed image generation module 920 and a target disparity value calculation module 930. The image processing apparatus 900 can execute the method shown in FIG. 1.


The first to-be-processed image generation module 910 is configured to perform a target detection on a first image to generate a first to-be-processed image, the first to-be-processed image having a size smaller than the size of the first image.


The second to-be-processed image generation module 920 is configured to perform the target detection on a second image to generate a second to-be-processed image, the second to-be-processed image having a size smaller than the size of the second image, wherein the first image and the second image constitute an image pair including the target, and a disparity exists between the first image and the second image.


In the field of computer vision, the disparity represents the vector-distance of pixel-level correspondences/matching pairs in an image pair.


In some embodiments, the image pair (including the first image and the second image) may be a pair of images captured in real time by a camera or a video apparatus, for example, the image pair may be a pair of images output by a binocular camera or images taken by a monocular camera at different times and positions, or may be a pair of images including the same target taken by different cameras at different positions. In addition, the image pair may also be the output of a device with an image display or a potential image display function.


In some embodiments, the target detection process may be achieved by pixel threshold comparison; or it may also be achieved by performing edge detection on the image to identify a target in the image. The embodiment of the present disclosure is not limited by the selected specific target detection method.


The target detection result may be represented by a target detection box or a coordinate vector; for example, for a target in an image, a rectangular region including the target and having the smallest area may be adopted as the target detection box, and the coordinates of the four corner points of the target detection box are used to represent the target detection box. The embodiments of the present disclosure place no limitation on the manner of representing the detection result.


The image pair may include only one target or multiple targets. The embodiments of the present disclosure place no limitation on the number of targets in the image pair or on the target types thereof.


The target disparity value calculation module 930 is configured to calculate, based on the first to-be-processed image and the second to-be-processed image, a disparity value of the target in the first to-be-processed image and the second to-be-processed image.


The first to-be-processed image represents an image region of the first image used for disparity calculation, and the second to-be-processed image represents an image region of the second image used for disparity calculation. Embodiments of the present disclosure are not limited by the specific sizes of the first to-be-processed image and the second to-be-processed image.


Based on the foregoing, by performing the target detection on the first image and the second image to obtain the first to-be-processed image and the second to-be-processed image, which have sizes smaller than those of the initial first and second images, and further calculating the disparity value of the target based on the first and second to-be-processed images, the time consumed by the disparity calculation and the amount of calculation can be reduced significantly. Therefore, the calculation process of the disparity value of the target can be sped up.


In some embodiments, the target disparity value calculation module 930 includes a pixel matching module 931, a disparity image generation module 932 and a target disparity value generation module 933, which can perform the method described in FIG. 2.


The pixel matching module 931 is configured to perform the operation of the step S1031 in FIG. 2, obtaining, for each pixel of the target in the first to-be-processed image, a matching pixel in the second to-be-processed image that matches the pixel.


The matching pixel of a certain pixel of the target represents the pixel in the second to-be-processed image having the closest pixel feature to that pixel of the target in the first to-be-processed image. For example, the matching pixel may be obtained by the Semi-Global Block Matching (SGBM) algorithm, or by other algorithms, and the embodiments of the present disclosure are not limited by the specific manner of determining the matching pixel.
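For reference, OpenCV ships an implementation of Semi-Global Block Matching; a minimal invocation might look as follows (the parameter values are illustrative, and OpenCV returns fixed-point disparities scaled by 16):

```python
import cv2

def sgbm_disparity(left_gray, right_gray, num_disparities=64, block_size=5):
    """Dense disparity over the to-be-processed pair via SGBM;
    numDisparities must be a positive multiple of 16."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0,
                                 numDisparities=num_disparities,
                                 blockSize=block_size)
    return sgbm.compute(left_gray, right_gray).astype('float32') / 16.0
```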


The disparity image generation module 932 is configured to perform the operation of the step S1032 in FIG. 2, calculating, for each pixel of the target in the first to-be-processed image, the disparity value between the pixel and the matching pixel thereof, and generating a disparity image.


The target disparity value generation module 933 is configured to perform the operation of the step S1033 in FIG. 2, calculating the disparity value of the target based on the disparity image.


The disparity image represents an image representing the disparity value of each pixel in the first to-be-processed image. Specifically, the disparity image and the first to-be-processed image may have the same size, and the pixel value of each pixel in the disparity image may be the disparity value of the corresponding pixel in the first to-be-processed image.


Based on the foregoing, by obtaining a matching pixel in the second to-be-processed image for each pixel of the target in the first to-be-processed image, and further calculating the disparity value of each such pixel, the disparity image may be obtained. In addition, by processing only the pixels of the target instead of every pixel in the whole image, the time consumed by the disparity calculation and the amount of calculation can be reduced significantly.


In some embodiments, wherein a first target detection box is obtained by performing the target detection on the first image, and a second target detection box is obtained by performing the target detection on the second image, and wherein the apparatus further comprises a region to be processed determination module 940 and an image segmentation module 950.


The region to be processed determination module 940 is configured to perform the operation of the step S201 in FIG. 3A, determining a region to be processed according to the first target detection box and the second target detection box, wherein the region to be processed comprises the first target detection box and the second target detection box.


The region to be processed represents an image region in the first image and the second image used for the disparity calculation. Embodiments of the present disclosure are not limited by the size and shape of the region to be processed.


The image segmentation module 950 is configured to perform the operation of the step S202 in FIG. 3A, cropping the first image and the second image according to the region to be processed to obtain the first to-be-processed image and the second to-be-processed image.
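For illustration, one possible realization of modules 940 and 950 takes the region to be processed as the union bounding box of the two detections, which is one region that contains both boxes; the (x, y, w, h) box format and the example values are assumptions of the sketch.

    import numpy as np

    def union_region(box1, box2):
        # Smallest axis-aligned box containing both (x, y, w, h) detection boxes.
        x1, y1 = min(box1[0], box2[0]), min(box1[1], box2[1])
        x2 = max(box1[0] + box1[2], box2[0] + box2[2])
        y2 = max(box1[1] + box1[3], box2[1] + box2[3])
        return x1, y1, x2 - x1, y2 - y1

    def crop(image, region):
        x, y, w, h = region
        return image[y:y + h, x:x + w]

    # Example boxes and images; cropping both images with the same region
    # keeps the pixel rows of the pair aligned.
    first_box, second_box = (40, 30, 100, 80), (52, 30, 100, 80)
    first_image = np.zeros((480, 640), dtype=np.uint8)
    second_image = np.zeros((480, 640), dtype=np.uint8)
    region = union_region(first_box, second_box)
    first_crop, second_crop = crop(first_image, region), crop(second_image, region)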


Based on the foregoing, by determining the region to be processed based on the first target detection box and the second target detection box, and further cropping the first image and the second image based on the region to be processed, the accuracy of the disparity calculation is improved, on the one hand, because the target position is located accurately; on the other hand, because the image pair is cropped according to the target detection results, the amount of calculation is significantly reduced and the calculation speed is increased.


In some embodiments, the first image is a current image frame of a first video image, and the second image is a current image frame of a second video image. When the target is not detected in the first image or the second image, the image in which the target is not detected is used as an image to be predicted, and the apparatus further comprises a target motion information acquisition module 960 and a target detection box determination module 970.


The target motion information acquisition module 960 is configured to perform the operation of the step S401 in FIG. 4, obtaining target motion information based on the video image corresponding to the image to be predicted.


For example, the target motion information may be information that represents the characteristics of the target motion, such as the target motion speed or the change trend of the disparity value of the target. Embodiments of the present disclosure are not limited by the content and the type of the target motion information.


The target detection box determination module 970 is configured to perform the operation of the step S402 in FIG. 4, determining the target detection box in the image to be predicted according to the target motion information.


For example, the target detection box in the image to be predicted may be calculated based on the target motion speed and the position of the target detection box of the previous frame corresponding to the image to be predicted. However, it should be understood that embodiments of the present disclosure are not limited by the specific method of determining the target detection box in the image to be predicted according to the target motion information.
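As one hedged illustration of such a prediction, the sketch below shifts the previous frame's detection box by a constant-velocity step; the (x, y, w, h) box format, the per-frame velocity, and the example values are assumptions of the sketch.

    def predict_box(prev_box, velocity, dt=1.0):
        # prev_box is (x, y, w, h); velocity is (vx, vy) in pixels per frame;
        # dt is the number of frames elapsed since the previous detection.
        x, y, w, h = prev_box
        vx, vy = velocity
        return x + vx * dt, y + vy * dt, w, h

    # Example: a target moving 3 px right and 1 px down per frame.
    predicted = predict_box((120, 80, 64, 48), (3.0, 1.0))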


Based on the foregoing, when the target is not detected in the first image or the second image, the target detection box can be generated from the target motion information of the video image corresponding to the image to be predicted. Thus, even when the target is not detected in the first image or the second image, the image processing method can still perform the disparity calculation based on the generated target detection box, which improves the calculation accuracy and enhances the robustness of the method.


In some embodiments, the target motion information is the target motion speed, and the target detection box determination module 970 comprises a motion detection box prediction module 971. The motion detection box prediction module 971 is configured to determine the target detection box in the image to be predicted according to the target motion speed and the target detection box in a previous image frame corresponding to the image to be predicted.


Based on the foregoing, when the target is not detected in an image, the target motion speed is calculated from the video image corresponding to that image, and the target detection box of the current image is determined from the target motion speed and the position of the target detection box in the previous image frame, so that the target detection box can be determined in an easy and convenient way.


In some embodiments, the pixel matching module 931 comprises a threshold determination module 9311, a search region determination module 9312, and a matching pixel determination module 9313.


The threshold determination module 9311 is configured to perform the operation of the step S1031-1 in FIG. 6A, determining a maximum disparity threshold and a minimum disparity threshold of the pixel.


The search region determination module 9312 is configured to perform the operation of the step S1031-2 in FIG. 6A, determining a pixel search region in the second to-be-processed image according to the maximum disparity threshold and the minimum disparity threshold.


The matching pixel determination module 9313 is configured to perform the operation of the step S1031-3 in FIG. 6A, determining, based on the pixel in the first to-be-processed image and a pixel in the pixel search region, a matching pixel in the second to-be-processed image that matches the pixel.


The maximum disparity threshold and the minimum disparity threshold are used to define the pixel search region in the second to-be-processed image. The maximum disparity threshold represents the preset maximum disparity value, and the minimum disparity threshold represents the preset minimum disparity value. The maximum disparity threshold and the minimum disparity threshold can be set based on actual needs, and embodiments of the present disclosure are not limited by the specific values of the maximum disparity threshold and the minimum disparity threshold.


The maximum disparity threshold and the minimum disparity threshold may be constant for all frames of the video image, or may be dynamically adjusted based on different image frames of the video image. The embodiments of the present disclosure are not limited by the way of setting the maximum disparity threshold and the minimum disparity threshold.


The pixel search region represents the region in the second to-be-processed image used for searching for the matching pixels of the pixels in the first to-be-processed image. Embodiments of the present disclosure are not limited by the manner of determining the pixel search region or the size of the pixel search region.
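For a rectified image pair, the two thresholds translate into a one-dimensional horizontal search segment, since a pixel (x, y) in the first to-be-processed image can only match a pixel (x - d, y) in the second for a disparity d between the thresholds; the left/right epipolar convention assumed below is an assumption of the example, not something mandated by the present disclosure.

    def search_segment(x, y, d_min, d_max, width):
        # Row and inclusive column range to search in the second image;
        # near the left border the segment may be empty (lo > hi).
        lo = max(0, x - d_max)
        hi = min(width - 1, x - d_min)
        return y, lo, hi

    row, col_lo, col_hi = search_segment(x=200, y=150, d_min=8, d_max=72, width=640)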


For each pixel, its matching pixel may be determined by the Semi-Global Block Matching algorithm, or by other algorithms; the embodiments of the present disclosure are not limited by the specific manner of determining the matching pixel.


Based on the foregoing, by determining a pixel search region in the second to-be-processed image based on the maximum disparity threshold and the minimum disparity threshold, only the pixels in the pixel search region in the second to-be-processed image need to be examined when searching for a matching pixel for a pixel in the first to-be-processed image, which significantly reduces the calculation amount of pixel matching and increases the matching speed. At the same time, the pixel search region may be further adjusted by changing the maximum disparity threshold and the minimum disparity threshold in real time according to actual needs.


In some embodiments, the first image is a current image frame of a first video image, the second image is a current image frame of a second video image, and the threshold determination module 9311 comprises a target attribute value acquisition module 9311-A and a disparity threshold calculation module 9311-B.


The target attribute value acquisition module 9311-A is configured to perform the operation of the step S701 in FIG. 7, obtaining the disparity value of the target in at least one pair of historical image frames of the first video image and the second video image.


The disparity threshold calculation module 9311-B is configured to perform the operation of the step S702 in FIG. 7, determining the maximum disparity threshold and the minimum disparity threshold of the pixel according to the disparity value of the target in the at least one pair of historical image frames.


Each pair of historical image frames in the at least one pair of historical image frames of the first video image and the second video image includes an image frame of the first video image and the corresponding image frame of the second video image.


According to actual needs, different numbers of historical image frame pairs may be selected, for example, only one pair of historical image frames may be selected, or multiple pairs of historical image frames may be selected. Embodiments of the present disclosure are not limited to the specific number of the pairs of the historical image frames.
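One plausible sketch of the disparity threshold calculation module 9311-B, assuming the thresholds are taken as a margin around the recent history of target disparity values; the 20% margin and the example values are illustrative assumptions.

    def disparity_thresholds(history, margin=0.2):
        # history: target disparity values from recent historical frame pairs;
        # margin widens the band to tolerate target motion between frames.
        lo, hi = min(history), max(history)
        return lo * (1.0 - margin), hi * (1.0 + margin)

    d_min, d_max = disparity_thresholds([41.5, 42.0, 43.2])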


Based on the foregoing, by determining the maximum disparity threshold and the minimum disparity threshold of the pixel according to the disparity value of the target in the at least one pair of historical image frames, the maximum disparity threshold and the minimum disparity threshold can be set more accurately and adjusted in real time based on the actual situation, so that the method may have better detection accuracy and robustness.


In some embodiments, the image processing device may execute the method as described above, and implement the functions described above.


According to another aspect of the present disclosure, there is further provided an image processing device 980.


The image processing device 980 as shown in FIG. 10 may be implemented as one or more dedicated or general-purpose computer system modules or components, such as a personal computer, a notebook computer, a tablet computer, a mobile phone, a personal digital assistant (PDA), or any smart portable device. The image processing device 980 may include at least one processor 981 and a memory 982.


The memory 982 includes a set of instructions that, when executed by the processor, cause the image processing device to perform operations comprising: performing a target detection on a first image to generate a first to-be-processed image, the first to-be-processed image has a size smaller than the size of the first image; performing the target detection on a second image to generate a second to-be-processed image, the second to-be-processed image has a size smaller than the size of the second image, the first image and the second image constitute an image pair including the target, and a disparity exists between the first image and the second image; and calculating, based on the first to-be-processed image and the second to-be-processed image, a disparity value of the target in the first to-be-processed image and the second to-be-processed image. The at least one processor 981 is configured to execute the program instructions.


In some embodiments, the image processing device 980 can receive images from a camera external to the image processing device 980, and perform the image processing method described above on the received image data to implement the functions of the image processing device described above.


According to another aspect of the present disclosure, there is also provided a non-volatile computer-readable storage medium having stored thereon computer-readable instructions that, when executed by a computer, perform the method as described above.


Program portions of the technology may be considered to be a “product” or an “article” that exists in the form of executable codes and/or related data, which are embodied or implemented by a computer-readable medium. A tangible, permanent storage medium may include an internal memory or a storage used by any computer, processor, or similar device or associated module, for example, various semiconductor memories, tape drives, disk drives, or any similar device capable of providing storage functionality for software.


All software or parts of it may sometimes communicate over a network, such as the Internet or other communication networks. Such communication can load software from one computer device or processor to another, for example, from a server or host computer to the hardware environment of a computer environment implementing the system, or to another system having a similar function related to providing the information needed for image processing. Therefore, another medium capable of transmitting software elements, such as light waves, electric waves, or electromagnetic waves propagated through cables, optical cables, or air, can also be used as a physical connection between local devices. A physical medium used for carrying such waves, such as a cable, a wireless connection, or a fiber optic cable, can likewise be considered a medium carrying the software. As used herein, unless a tangible “storage” medium is defined, other terms referring to a computer or machine “readable medium” refer to a medium that participates in the execution of any instruction by the processor.


While various embodiments of the disclosure have been described, it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible that are within the scope of the disclosure. Accordingly, the disclosure is not to be restricted except in light of the attached claims and their equivalents.

Claims
  • 1. An image processing method, comprising: performing a target detection on a first image to generate a first to-be-processed image, the first to-be-processed image has a size smaller than the size of the first image; performing the target detection on a second image to generate a second to-be-processed image, the second to-be-processed image has a size smaller than the size of the second image, the first image and the second image constitute an image pair including a target, and a disparity exists between the first image and second image; and calculating, based on the first to-be-processed image and the second to-be-processed image, a disparity value of the target in the first to-be-processed image and the second to-be-processed image.
  • 2. The image processing method according to claim 1, wherein calculating the disparity value of the target in the first to-be-processed image and the second to-be-processed image comprises: obtaining, for each pixel of the target in the first to-be-processed image, a matching pixel in the second to-be-processed image that matches the pixel; calculating, for each pixel of the target in the first to-be-processed image, the disparity value between the pixel and the matching pixel thereof, and generating a disparity image; and calculating the disparity value of the target based on the disparity image.
  • 3. The image processing method according to claim 1, wherein a first target detection box is obtained by performing the target detection on the first image, and a second target detection box is obtained by performing the target detection on the second image, and wherein generating the first to-be-processed image and generating the second to-be-processed image comprises: determining a region to be processed according to the first target detection box and the second target detection box, wherein the region to be processed comprises the first target detection box and the second target detection box; and cropping the first image and the second image according to the region to be processed to obtain the first to-be-processed image and the second to-be-processed image.
  • 4. The image processing method according to claim 3, wherein the first image is a current image frame of a first video image, and the second image is a current image frame of a second video image, an image in which the target is not detected is used as an image to be predicted in response to the target not being detected in the first image or the second image, and the process of generating the target detection box of the image to be predicted comprises: obtaining target motion information based on the video image corresponding to the image to be predicted; and determining the target detection box in the image to be predicted according to the target motion information.
  • 5. The image processing method according to claim 4, wherein the target motion information is target motion speed, and wherein determining the target detection box in the image to be predicted according to the target motion information comprises: determining the target detection box in the image to be predicted according to the target motion speed and the target detection box in a previous image frame corresponding to the image to be predicted.
  • 6. The image processing method according to claim 2, wherein obtaining, for each pixel of the target in the first to-be-processed image, the matching pixel in the second to-be-processed image that matches the pixel comprises: determining a maximum disparity threshold and a minimum disparity threshold of the pixel; determining a pixel search region in the second to-be-processed image according to the maximum disparity threshold and the minimum disparity threshold; and determining, based on the pixel in the first to-be-processed image and a pixel in the pixel search region, the matching pixel in the second to-be-processed image that matches the pixel.
  • 7. The image processing method according to claim 6, wherein the first image is a current image frame of a first video image, the second image is a current image frame of a second video image, and wherein determining the maximum disparity threshold and the minimum disparity threshold of the pixel comprises: obtaining the disparity value of the target in at least one pair of historical image frames of the first video image and the second video image; and determining the maximum disparity threshold and the minimum disparity threshold of the pixel according to the disparity value of the target in the at least one pair of historical image frames.
  • 8. The image processing method according to claim 2, wherein calculating, for each pixel of the target in the first to-be-processed image, the disparity value between the pixel and the matching pixel thereof, and generating the disparity image comprises: for each pixel of the target in the first to-be-processed image, calculating a vector-distance between the pixel in the first to-be-processed image and the matching pixel in the second to-be-processed image of the pixel in the first to-be-processed image, and using the vector-distance as the disparity value of the pixel; setting a pixel value of the pixel as the disparity value of the pixel; and obtaining the disparity image.
  • 9. The image processing method according to claim 8, wherein calculating the disparity value of the target based on the disparity image comprises: weighting and summing the pixel value of each pixel in the disparity image to obtain the disparity value of the target.
  • 10. The image processing method according to claim 1, further comprising: calculating a spatial coordinate value of the target according to the disparity value of the target in the first to-be-processed image and the second to-be-processed image.
  • 11. An image processing apparatus, comprising: a first to-be-processed image generation module configured to perform a target detection on a first image to generate a first to-be-processed image, the first to-be-processed image has a size smaller than the size of the first image; a second to-be-processed image generation module configured to perform the target detection on a second image to generate a second to-be-processed image, the second to-be-processed image has a size smaller than the size of the second image, the first image and the second image constitute an image pair including a target, and a disparity exists between the first image and second image; and a target disparity value calculation module configured to calculate, based on the first to-be-processed image and the second to-be-processed image, a disparity value of the target in the first to-be-processed image and the second to-be-processed image.
  • 12. The image processing apparatus according to claim 11, wherein the target disparity value calculation module comprises: a pixel matching module configured to obtain, for each pixel of the target in the first to-be-processed image, a matching pixel in the second to-be-processed image that matches the pixel; a disparity image generation module configured to calculate, for each pixel of the target in the first to-be-processed image, the disparity value between the pixel and the matching pixel thereof, and to generate a disparity image; and a target disparity value generation module configured to calculate the disparity value of the target based on the disparity image.
  • 13. The image processing apparatus according to claim 11, wherein a first target detection box is obtained by performing the target detection on the first image, and a second target detection box is obtained by performing the target detection on the second image, and wherein the apparatus further comprises: a region to be processed determination module configured to determine a region to be processed according to the first target detection box and the second target detection box, wherein the region to be processed comprises the first target detection box and the second target detection box; and an image segmentation module configured to crop the first image and the second image according to the region to be processed to obtain the first to-be-processed image and the second to-be-processed image.
  • 14. The image processing apparatus according to claim 13, wherein the first image is a current image frame of a first video image, and the second image is a current image frame of a second video image, wherein an image in which the target is not detected is used as an image to be predicted in response to the target not being detected in the first image or the second image, and the apparatus further comprises: a target motion information acquisition module configured to obtain target motion information based on the video image corresponding to the image to be predicted; and a target detection box determination module configured to determine the target detection box in the image to be predicted according to the target motion information.
  • 15. The image processing apparatus according to claim 14, wherein the target motion information is target motion speed, and wherein the target detection box determination module comprises: a motion detection box prediction module configured to determine the target detection box in the image to be predicted according to the target motion speed and the target detection box in a previous image frame corresponding to the image to be predicted.
  • 16. The image processing apparatus according to claim 12, wherein the pixel matching module comprises: a threshold determination module configured to determine a maximum disparity threshold and a minimum disparity threshold of the pixel; a search region determination module configured to determine a pixel search region in the second to-be-processed image according to the maximum disparity threshold and the minimum disparity threshold; and a matching pixel determination module configured to determine, based on the pixel in the first to-be-processed image and a pixel in the pixel search region of the pixel, a matching pixel in the second to-be-processed image that matches the pixel.
  • 17. The image processing apparatus according to claim 16, wherein the first image is a current image frame of a first video image, the second image is a current image frame of a second video image, and wherein the threshold determination module comprises: a target attribute value acquisition module configured to obtain the disparity value of the target in at least one pair of historical image frames of the first video image and the second video image; and a disparity threshold calculation module configured to determine the maximum disparity threshold and the minimum disparity threshold of the pixel according to the disparity value of the target in the at least one pair of historical image frames.
  • 18. An image processing device, wherein the device comprises a processor and a memory, the memory including a set of instructions that, when executed by the processor, cause the image processing device to perform operations comprising: performing a target detection on a first image to generate a first to-be-processed image, the first to-be-processed image has a size smaller than the size of the first image; performing the target detection on a second image to generate a second to-be-processed image, the second to-be-processed image has a size smaller than the size of the second image, the first image and the second image constitute an image pair including a target, and a disparity exists between the first image and second image; and calculating, based on the first to-be-processed image and the second to-be-processed image, a disparity value of the target in the first to-be-processed image and the second to-be-processed image.
  • 19. A computer-readable storage medium, characterized in that computer-readable instructions are stored thereon, and when the instructions are executed by a computer, the method according to claim 1 is executed.
  • 20. The image processing device according to claim 18, wherein the set of instructions, when executed by the processor, further causes the image processing device to perform operations comprising: obtaining, for each pixel of the target in the first to-be-processed image, a matching pixel in the second to-be-processed image that matches the pixel; calculating, for each pixel of the target in the first to-be-processed image, the disparity value between the pixel and the matching pixel thereof, and generating a disparity image; and calculating the disparity value of the target based on the disparity image.
CROSS-REFERENCE TO RELATED APPLICATION

This application is the U.S. national phase of PCT Application No. PCT/CN2020/076191 filed on Feb. 21, 2020, the disclosure of which is incorporated in its entirety by reference herein.

PCT Information
Filing Document: PCT/CN2020/076191
Filing Date: 2/21/2020
Country: WO