This application claims priority to Chinese Patent Application No. 202010878201.7, filed on Aug. 27, 2020, the contents of which are hereby incorporated by reference in their entirety for all purposes.
The present disclosure relates to the fields of computer vision and image processing, and more specifically, to an object detection method, a computer system, and a readable storage medium.
In recent years, computer vision technologies, as represented by object detection, have made remarkable progress. Applications of object detection technology bring better experience and higher efficiency to many industries while also reducing costs. For example, in the field of automated driving of automobiles, object detection technology can be employed to detect pedestrians, vehicles, and obstacles, thereby improving the safety and convenience of automobile driving; in the security monitoring field, it can be employed to monitor information such as the appearance and movement of particular persons or items; and in the medical diagnosis field, it can be employed to discover lesion areas and count the number of cells. However, the detection of extremely small objects is often ineffective.
According to a first aspect of the present disclosure, an embodiment of the present disclosure discloses an object detection method, comprising: determining at least one typical object ratio from a first training data set by counting ratios of objects in training pictures of the first training data set; determining at least one picture scaling size based at least on the at least one typical object ratio; scaling the training pictures of the first training data set according to the at least one picture scaling size; obtaining a second training data set by slicing the scaled training pictures; training an object detection model using the second training data set; and performing object detection on a to-be-detected picture using the trained object detection model.
According to a second aspect of the present disclosure, an embodiment of the present disclosure discloses a computer system, comprising: one or more processors; and a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: determining at least one typical object ratio from a first training data set by counting ratios of objects in training pictures of the first training data set; determining at least one picture scaling size based at least on the at least one typical object ratio; scaling the training pictures of the first training data set according to the at least one picture scaling size; obtaining a second training data set by slicing the scaled training pictures; training an object detection model using the second training data set; and performing object detection on a to-be-detected picture using the trained object detection model.
According to a third aspect of the present disclosure, an embodiment of the present disclosure discloses a non-transitory computer-readable storage medium that stores one or more computer programs comprising instructions that, when executed by one or more processors of a computer system, cause the computer system to perform operations comprising: determining at least one typical object ratio from a first training data set by counting ratios of objects in training pictures of the first training data set; determining at least one picture scaling size based at least on the at least one typical object ratio; scaling the training pictures of the first training data set according to the at least one picture scaling size; obtaining a second training data set by slicing the scaled training pictures; training an object detection model using the second training data set; and performing object detection on a to-be-detected picture using the trained object detection model.
The drawings exemplarily show embodiments and form a part of the specification, and are used to illustrate example implementations of the embodiments together with a written description of the specification. The embodiments shown are merely for illustrative purposes and do not limit the scope of the claims. Throughout the drawings, like reference signs denote like but not necessarily identical elements.
The present disclosure will be further described in detail below with reference to the drawings and embodiments. It can be understood that embodiments described herein are used merely to explain a related disclosure, rather than limit the disclosure. It should be additionally noted that, for ease of description, only parts related to the related disclosure are shown in the drawings.
It should be noted that the embodiments in the present disclosure and features in the embodiments can be combined with each other without conflict. If the number of elements is not specifically defined, there may be one or more elements, unless otherwise expressly indicated in the context. In addition, numbers of steps or functional modules used in the present disclosure are used merely to identify the steps or functional modules, rather than limit either a sequence of performing the steps or a connection relationship between the functional modules.
In some industries or fields, an object is extremely small relative to an image acquisition area, with a ratio being usually in the range of 1:100 to 1:1000. As a result, it is very difficult or even impossible to employ current object detection technologies to implement detection of such an extremely small object in a picture shot for an image acquisition area. For example, in the industrial field, when a pseudo solder is to be detected in an X-ray scanned image of a welded steel plate or a flaw is to be detected in a scanned image of a glass cover of a mobile phone, because a proportion of the pseudo solder or flaw in the entire picture is very small, detection of such extremely small objects cannot be implemented directly using the current object detection technologies.
Currently, there are the following several solutions for small-object detection: (1) Using a feature pyramid network (FPN) layer to perform multi-scale fusion of features in an input picture, to improve the effect of small-object detection. (2) Enlarging an input picture by different scales, performing object detection on the input pictures at the different enlarged scales, and then merging the results of the object detection on the input pictures at the different enlarged scales. (3) Slicing a training picture and modifying annotation information associated with the training picture, to obtain training image slices and their associated annotation information; using the training image slices and their associated annotation information to train an object detection model; and using the trained object detection model to perform object detection.
The solutions above have the following disadvantages: Solution (1) can only improve the detection effect of small objects with an object ratio of about 1:10, and is not suitable, for example, for detection of extremely small objects with an object ratio of 1:100. Solution (2) can correspondingly increase the size of an object. However, due to limitations of the video memory of a graphics processing unit (GPU), the size of an input picture of an object detection model usually can only be about 2,000 pixels, and therefore Solution (2) is clearly not suitable for detection of extremely small objects, for which an input image needs to be scaled up to 5,000 or even 10,000 pixels. In Solution (3), different training image slice sizes need to be manually selected for different training data sets, and the trained object detection model is used to perform object detection on to-be-detected pictures as a whole. Therefore, Solution (3) is also not suitable for detection of extremely small objects.
The current small-object detection solutions thus have very poor detection effects for extremely small objects with a very small object ratio, and none of them can train, with high quality and without manual intervention, an object detection model to complete the task of detecting such extremely small objects.
In view of the above problems that exist in the current small object detection solutions, the present disclosure provides an object detection method and apparatus, to complete, with high quality and without manual intervention, a task of detecting an extremely small object. The object detection method and apparatus according to the embodiments of the present disclosure can be applied in scenarios such as industrial quality inspection and farm aerial photography. The object detection method and apparatus according to the embodiments of the present disclosure are described in detail below in conjunction with the accompanying drawings.
In the object detection method according to this example of the present application, the at least one picture scaling size is adaptively determined based on the typical object ratio of the first training data set; the training pictures in the first training data set are scaled according to the at least one picture scaling size; the scaled training pictures are sliced, to obtain the second training data set; and the object detection model is trained using the second training data set. Therefore, even when an object is very small relative to the to-be-detected picture, the trained object detection model can still accurately detect the object in the to-be-detected picture, and can thus complete, with high quality and without manual intervention, a task of detecting an extremely small object.
In some examples, the first training data set comprises a plurality of training pictures and annotation information associated with the plurality of training pictures. Any one of the training pictures may contain one or more objects. An object ratio of any one of the objects refers to a proportion of a size of an object detection box of the object to an overall size of the training picture. Annotation information associated with the training picture comprises coordinate information associated with object detection boxes on the training picture.
In some embodiments, the ratios of all the objects in the training pictures of the first training data set may be clustered, to obtain the at least one typical object ratio of the first training data set. For example, ratios of all objects in training pictures in any training data set A may be clustered, to obtain three typical object ratios R1, R2, and R3 of the training data set A.
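For illustration only, the following Python sketch shows one way the ratio statistics and clustering could be performed. The annotation format, the max-side ratio measure, the number of clusters, and the use of scikit-learn's KMeans are assumptions of this sketch, not requirements of the disclosure.

```python
import numpy as np
from sklearn.cluster import KMeans

def typical_object_ratios(annotations, picture_sizes, n_clusters=3):
    """Cluster per-object ratios (box size / picture size) into typical ratios.

    `annotations` maps a picture id to a list of (x, y, w, h) detection boxes;
    `picture_sizes` maps a picture id to its (width, height). Both structures
    are hypothetical formats chosen for this sketch.
    """
    ratios = []
    for pic_id, boxes in annotations.items():
        pic_w, pic_h = picture_sizes[pic_id]
        for (_, _, w, h) in boxes:
            # Use the longer box side over the longer picture side as the
            # object ratio; other measures (area, diagonal) would also work.
            ratios.append(max(w, h) / max(pic_w, pic_h))
    ratios = np.array(ratios).reshape(-1, 1)
    kmeans = KMeans(n_clusters=n_clusters, n_init=10).fit(ratios)
    return sorted(float(c) for c in kmeans.cluster_centers_.ravel())
```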
In some embodiments, to facilitate training of the object detection model, sizes of most of the object detection boxes on the training pictures of the first training data set may be scaled to near a fixed size. Therefore, the at least one picture scaling size may be determined based on the at least one typical object ratio of the first training data set and the fixed size. For example, assuming that the sizes of most of the object detection boxes on the training pictures of the training data set A need to be scaled to a fixed size T0, the fixed size T0 may be divided by each of the three typical object ratios R1, R2, and R3 of the training data set A, to determine three picture scaling sizes T0/R1, T0/R2, and T0/R3.
In some embodiments, to improve the training effect of the object detection model, the at least one picture scaling size may be further determined based on an optimal detection size for the object detection model. In other words, the at least one picture scaling size may be determined based on the at least one typical object ratio of the first training data set and the optimal detection size for the object detection model, such that the sizes of most of the object detection boxes on the training pictures of the first training data set may be scaled to near the optimal detection size for the object detection model. For example, for the training data set A, assuming that the optimal detection size for the object detection model is T, the optimal detection size T may be divided by the three typical object ratios R1, R2, and R3 of the training data set A, to determine three picture scaling sizes T/R1, T/R2, and T/R3.
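As a minimal sketch of this division, assuming an optimal detection size T of 64 pixels and illustrative ratios (all concrete values are hypothetical):

```python
def picture_scaling_sizes(typical_ratios, optimal_detection_size):
    """Divide the optimal detection size T by each typical ratio R to obtain
    the side length each training picture should be scaled to (sketch only)."""
    return [optimal_detection_size / r for r in typical_ratios]

# e.g. T = 64 pixels and ratios of 1:100, 1:250, 1:500 give scaled picture
# sides of 6400, 16000, and 32000 pixels, hence the need for slicing below.
scales = picture_scaling_sizes([1 / 100, 1 / 250, 1 / 500], 64)
```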
In some embodiments, the scaling the training pictures of the first training data set according to the at least one picture scaling size may comprise: for each training picture of the training pictures of the first training data set, scaling the training picture to each of the at least one picture scaling size. For example, each training picture in the training data set A may be scaled three times according to the picture scaling sizes T/R1, T/R2, and T/R3, such that most of the object detection boxes on the training pictures in the training data set A can be scaled to near the optimal detection size T of the object detection model.
Alternatively, in some embodiments, the scaling the training pictures of the first training data set according to the at least one picture scaling size may comprise: dividing, based on the at least one typical object ratio of the first training data set, the training pictures of the first training data set into at least one training picture group, and scaling a training picture in each training picture group to a corresponding picture scaling size. For example, for the training data set A, the training pictures in the training data set A may be divided into three training picture groups A1, A2, and A3 based on the three typical object ratios R1, R2, and R3 of the training data set A, and training pictures in the three training picture groups A1, A2, and A3 are scaled to the three picture scaling sizes T/R1, T/R2, and T/R3, respectively. Compared with scaling each training picture of the training data set A three times according to the picture scaling sizes T/R1, T/R2, and T/R3, this embodiment has higher processing efficiency but a poorer training effect.
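A hedged sketch of this grouping variant follows. The use of a picture's median object ratio to pick its nearest typical ratio, and the use of OpenCV's resize, are assumptions of this example; the disclosure only specifies that pictures are grouped by typical object ratio.

```python
import cv2
import numpy as np

def scale_by_nearest_ratio(picture, object_ratios, typical_ratios, optimal_size):
    """Scale one training picture to the single picture scaling size whose
    typical ratio is nearest to the picture's own (median) object ratio.

    `picture` is a NumPy image array; `object_ratios` holds the ratios of the
    objects on this picture. Both assumptions are specific to this sketch.
    """
    median_ratio = float(np.median(object_ratios))
    nearest = min(typical_ratios, key=lambda r: abs(r - median_ratio))
    target_side = int(round(optimal_size / nearest))
    h, w = picture.shape[:2]
    scale = target_side / max(h, w)  # preserve the aspect ratio
    return cv2.resize(picture, (int(w * scale), int(h * scale)))
```

Scaling each picture once (instead of three times) is what yields the higher processing efficiency noted above, at the cost of less scale diversity during training.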
In an application scenario that requires detection of an extremely small object, the typical object ratio of the first training data set ranges, for example, from 1:100 to 1:1000. Each scaled training picture is therefore very large, which would cause the video memory of the graphics processing unit to be insufficient; accordingly, the scaled training pictures need to be sliced. In some embodiments, the obtaining the second training data set by slicing the scaled training pictures comprises: slicing the scaled training pictures, to obtain a set of training image slices; transforming annotation information, associated with the training pictures, of the first training data set to obtain annotation information associated with training image slices of the set of training image slices; and forming the second training data set with the set of training image slices and the annotation information associated with the training image slices of the set of training image slices. Training the object detection model based on the second training data set can improve a capability of the object detection model for detection of the extremely small object, while avoiding the insufficient video memory of the graphics processing unit.
Here, the transforming annotation information, associated with the training pictures, of the first training data set refers to transforming coordinate information, associated with the object detection boxes on the training pictures, of the first training data set. In other words, for any object detection box on any training picture of the first training data set, coordinate information associated with the object detection box is transformed from coordinate information that is based on the training picture to coordinate information that is based on a training image slice containing the object detection box, wherein the training image slice is obtained by slicing the training picture.
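A minimal sketch of this coordinate transform, assuming boxes are stored as (x, y, w, h) tuples with the origin at the top-left of the scaled training picture:

```python
def transform_box_to_slice(box, slice_x, slice_y):
    """Re-express a detection box (x, y, w, h), given in scaled-picture
    coordinates, in the coordinates of the training image slice whose
    top-left corner lies at (slice_x, slice_y) in the scaled picture."""
    x, y, w, h = box
    return (x - slice_x, y - slice_y, w, h)

# e.g. a box at (4100, 980) on the scaled picture becomes (4, 980) on the
# slice whose top-left corner is at (4096, 0).
```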
In some embodiments, an input picture size of the object detection model may be used as a training image slice size, to slice the scaled training pictures. In other words, the training image slice size does not need to be set manually, and the input picture size of the object detection model may be directly used to slice the scaled training pictures.
In some embodiments, in the case that the input picture size of the object detection model is used as the training image slice size, a movement step that is less than a difference between the input picture size of the object detection model and the optimal detection size may be used, to slice the scaled training pictures. This can ensure that each of the object detection boxes on the scaled training pictures can completely appear in the at least one training image slice.
For example, assuming that the input picture size of the object detection model is I and the optimal detection size is T, the training image slice size may be set to I, and the movement step S may be set to be less than I−T (that is, S<I−T, for example, S=I−2T).
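The sliding-window offsets implied by this movement step could be computed as in the following sketch; the final window placed flush with the picture border is an assumption added so that the edges are always covered.

```python
def slice_positions(picture_side, slice_size, step):
    """Top-left offsets of a sliding window of `slice_size` moved by `step`.

    With step S < I - T, any box no larger than the optimal detection size T
    is guaranteed to fall completely inside at least one slice.
    """
    positions = list(range(0, max(picture_side - slice_size, 0) + 1, step))
    # Add a final window flush with the border so the edge is always covered.
    if picture_side > slice_size and positions[-1] != picture_side - slice_size:
        positions.append(picture_side - slice_size)
    return positions

# e.g. input size I = 1024, optimal size T = 64: step S = I - 2*T = 896 < I - T.
offsets = slice_positions(6400, 1024, 1024 - 2 * 64)
```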
In some embodiments, in the case that the input picture size of the object detection model is used as the training image slice size and the movement step that is less than the difference between the input picture size of the object detection model and the optimal detection size is used to slice the scaled training pictures, each of the object detection boxes on the scaled training pictures can completely appear in the at least one training image slice. To reduce repeated detections of an overlapping area between the training image slices, for any training image slice of the second training data set, coordinate information associated with an incomplete object detection box on the training image slice may be removed from the annotation information associated with the training image slice. For example, as shown in
In some embodiments, coordinate information associated with object detection boxes with sizes significantly different from the optimal detection size of the object detection model may be removed from the annotation information associated with the training image slices of the second training data set, such that these object detection boxes with sizes significantly different from the optimal detection size of the object detection model do not participate in training of the object detection model. This can improve the training effect of the object detection model, while improving the training efficiency of the object detection model.
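The two pruning rules above (dropping incomplete boxes and dropping boxes whose sizes differ significantly from the optimal detection size) might look like the following sketch, where the 2x size tolerance is an assumed, hypothetical threshold:

```python
def keep_box(box, slice_size, optimal_size, tolerance=2.0):
    """Decide whether a slice-local box (x, y, w, h) should remain in the
    training annotations. Drops boxes cut by the slice border and boxes whose
    size differs from the optimal detection size by more than `tolerance`x."""
    x, y, w, h = box
    inside = x >= 0 and y >= 0 and x + w <= slice_size and y + h <= slice_size
    size_ok = optimal_size / tolerance <= max(w, h) <= optimal_size * tolerance
    return inside and size_ok
```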
In some embodiments, in the application scenario that requires detection of an extremely small object, due to very small ratios of objects in the training pictures of the first training data set, most areas of each training picture are background areas that do not contain an object detection box. If only a training image slice containing an object detection box is used to train the object detection model, it may cause many false detections when the trained object detection model is subsequently used to detect a background area of a to-be-detected picture. To avoid such a case, a training image slice that contains an object detection box, a training image slice that does not contain an object detection box, and annotation information associated with the training image slices in the second training data set may be used to train the object detection model. This can strengthen the object detection model in learning the background areas that do not contain an object detection box, and can reduce false detections of the background areas that do not contain an object detection box during implementation of the detection of an extremely small object.
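As a sketch of how background slices might be mixed into training, assuming a simple random sample and a hypothetical 1:1 mix of object-bearing and background-only slices:

```python
import random

def sample_training_slices(slices_with_boxes, background_slices, bg_fraction=0.5):
    """Mix object-bearing slices with background-only slices so the model also
    learns the background areas. The 1:1 mix (bg_fraction=0.5) is an assumed
    default; the disclosure does not specify a mixing proportion."""
    n_bg = int(len(slices_with_boxes) * bg_fraction / (1 - bg_fraction))
    n_bg = min(n_bg, len(background_slices))
    return slices_with_boxes + random.sample(background_slices, n_bg)
```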
In some embodiments, as shown in
In some embodiments, an input picture size of the object detection model may be used as a to-be-detected image slice size, to slice the scaled to-be-detected picture. This can avoid the insufficient video memory of the graphics processing unit. In other words, the to-be-detected image slice size may be set to be the same as the training image slice size, that is, equal to the input picture size of the object detection model. It should be understood that the to-be-detected image slice size may also be appropriately increased to be greater than the input picture size of the object detection model, thereby improving the slicing efficiency of the to-be-detected picture.
In some embodiments, a movement step that is less than a difference between the input picture size of the object detection model and an optimal detection size may be used, to slice the scaled to-be-detected picture. For example, the movement step for slicing the scaled to-be-detected picture may be set to be equal to the movement step for slicing the scaled training picture. This can ensure that each object detection box on the scaled to-be-detected picture can completely appear in at least one to-be-detected image slice.
In some embodiments, for any to-be-detected image slice in the to-be-detected image slice set, if an object detection box overlapping an edge of the to-be-detected image slice is detected on the to-be-detected image slice, the object detection box is discarded. For example, when the trained object detection model is used to perform object detection on a to-be-detected image slice, if an object detection box on the to-be-detected image slice is found to be incomplete, the object detection box may be discarded (that is, the object detection box is not considered as detected). This can reduce repeated detections of an overlapping area between to-be-detected image slices.
In some embodiments, the inputting the set of to-be-detected image slices to the trained object detection model to perform the object detection may comprise: obtaining, using the trained object detection model, respective coordinate information associated with respective object detection boxes on to-be-detected image slices in the set of to-be-detected image slices; and transforming the respective coordinate information associated with the respective object detection boxes on the to-be-detected image slices in the set of to-be-detected image slices into respective coordinate information that is based on the to-be-detected picture. For example, for any object detection box on any to-be-detected image slice, coordinate information associated with the object detection box may be transformed from coordinate information that is based on the to-be-detected image slice to coordinate information that is based on the to-be-detected picture. In this way, a relatively intuitive object detection result for the to-be-detected picture can be obtained.
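A minimal sketch combining the two inference-time steps above, discarding boxes that touch a slice border and mapping the remaining boxes back to picture coordinates, under the assumption that detections are stored per slice together with the slice's top-left offset:

```python
def merge_slice_detections(detections_per_slice, slice_size):
    """Map slice-local detections back to the to-be-detected picture and drop
    boxes touching a slice border (likely truncated objects that are detected
    completely in a neighbouring slice). `detections_per_slice` is assumed to
    be a list of ((slice_x, slice_y), [(x, y, w, h, score), ...]) entries."""
    merged = []
    for (sx, sy), boxes in detections_per_slice:
        for (x, y, w, h, score) in boxes:
            touches_edge = (x <= 0 or y <= 0
                            or x + w >= slice_size or y + h >= slice_size)
            if touches_edge:
                continue  # discard incomplete detections
            merged.append((x + sx, y + sy, w, h, score))
    return merged
```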
In conclusion, the object detection method according to the one or more examples of the present application can be used to complete, with high quality and without manual intervention, a task of detecting an extremely small object, and is applicable to scenarios such as industrial quality inspection and farm aerial photography.
In this embodiment, for exemplary implementations and technical effects of the object detection apparatus 400 and its corresponding functional modules, refer to the relevant description in the embodiment described in
As shown in
Generally, the following apparatuses may be connected to the I/O interface 505: an input apparatus 506, for example, including a touchscreen, a touch panel, a camera, an accelerometer, a gyroscope, etc.; an output apparatus 507, for example, including a liquid crystal display (LCD), a speaker, a vibrator, etc.; the storage apparatus 508, for example, including a flash memory (Flash Card), etc.; and a communication apparatus 509. The communication apparatus 509 may enable the computer system 500 to perform wireless or wired communication with other devices to exchange data. Although
In particular, according to an example of the present application, the process described above with reference to the flowcharts may be implemented as a computer software program. For example, an example of the present application provides a computer-readable storage medium that stores a computer program, the computer program containing program code for performing the method 100 shown in
It should be noted that a computer-readable medium described in the example of the present application may be a computer-readable signal medium, or a computer-readable storage medium, or any combination thereof. The computer-readable storage medium may be, for example but not limited to, electric, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination thereof. A more specific example of the computer-readable storage medium may include, but is not limited to: an electrical connection having one or more wires, a portable computer magnetic disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof. In the one or more examples of the present application, the computer-readable storage medium may be any tangible medium containing or storing a program which may be used by or in combination with an instruction execution system, apparatus, or device. In the one or more examples of the present application, the computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier, the data signal carrying computer-readable program code. The propagated data signal may be in various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium. The computer-readable signal medium can send, propagate, or transmit a program used by or in combination with an instruction execution system, apparatus, or device. The program code contained in the computer-readable medium may be transmitted by any suitable medium, including but not limited to: electric wires, optical cables, radio frequency (RF), etc., or any suitable combination thereof.
The foregoing computer-readable medium may be contained in the foregoing computer system 500. Alternatively, the computer-readable medium may exist independently, without being assembled into the computer system 500. The foregoing computer-readable medium carries one or more programs, and the one or more programs, when executed by the computer system, cause the computer system to perform the following: determining at least one typical object ratio from a first training data set by counting ratios of objects in training pictures of the first training data set; determining at least one picture scaling size based at least on the at least one typical object ratio; scaling the training pictures of the first training data set according to the at least one picture scaling size; obtaining a second training data set by slicing the scaled training pictures; training an object detection model using the second training data set; and performing object detection on a to-be-detected picture using the trained object detection model.
Computer program code for performing operations of the embodiments of the present disclosure can be written in one or more programming languages or a combination thereof, wherein the programming languages comprise object-oriented programming languages, such as Java, Smalltalk, and C++, and further comprise conventional procedural programming languages, such as “C” language or similar programming languages. The program code may be completely executed on a computer of a user, partially executed on a computer of a user, executed as an independent software package, partially executed on a computer of a user and partially executed on a remote computer, or completely executed on a remote computer or server. In the circumstance involving a remote computer, the remote computer may be connected to a computer of a user over any type of network, comprising a local area network (LAN) or wide area network (WAN), or may be connected to an external computer (for example, connected over the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of the system, method, and computer program product according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagram may represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the blocks may also occur in an order different from that marked in the accompanying drawings. For example, two blocks shown in succession may actually be performed substantially in parallel, or they may sometimes be performed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagram and/or the flowchart, and a combination of the blocks in the block diagram and/or the flowchart, may be implemented by a dedicated hardware-based system that executes the specified functions or operations, or may be implemented by a combination of dedicated hardware and computer instructions.
The related modules described in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described modules may also be arranged in a processor, which, for example, may be described as: a processor comprising a picture slicing configuration module, a model training module, and an object detection module. The names of these modules do not, under certain circumstances, constitute a limitation on the modules themselves.
The foregoing descriptions are merely preferred embodiments of the present disclosure and explanations of the applied technical principles. Those skilled in the art should understand that the scope of the present application involved in the embodiments of the present disclosure is not limited to the technical solutions formed by specific combinations of the foregoing technical features, and shall also cover other technical solutions formed by any combination of the foregoing technical features or equivalent features thereof without departing from the foregoing inventive concept. For example, a technical solution formed by a replacement of the foregoing features with technical features with similar functions in the technical features disclosed in the embodiments of the present disclosure (but not limited thereto) also falls within the scope of the present application.