OBJECT DETECTION

Information

  • Patent Application
  • Publication Number
    20220067375
  • Date Filed
    March 12, 2021
  • Date Published
    March 03, 2022
Abstract
A method includes: determining at least one typical object ratio from a first training data set by counting ratios of objects in training pictures of the first training data set; determining at least one picture scaling size based at least on the at least one typical object ratio; scaling the training pictures of the first training data set according to the at least one picture scaling size; obtaining a second training data set by slicing the scaled training pictures; training an object detection model using the second training data set; and performing object detection on a to-be-detected picture using the trained object detection model. The object detection method according to the embodiments of the present disclosure can be used to complete, without manual intervention, a task of detecting an extremely small object.
Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims priority to Chinese Patent Application No. 202010878201.7, filed on Aug. 27, 2020, the contents of which are hereby incorporated by reference in their entirety for all purposes.


TECHNICAL FIELD

The present disclosure relates to the fields of computer vision and image processing, and more specifically, to an object detection method, a computer system, and a readable storage medium.


BACKGROUND

In recent years, computer vision technologies, represented by object detection, have made remarkable progress. Applications of object detection technology bring better experience and higher efficiency to many industries while also reducing costs. For example, in the field of automated driving of automobiles, object detection technology can be employed to detect pedestrians, vehicles, and obstacles, thereby improving the safety and convenience of automobile driving; in the security monitoring field, it can be employed to monitor information such as the appearance and movement of particular persons or items; and in the medical diagnosis field, it can be employed to discover lesion areas and count the number of cells. However, the detection of extremely small objects often remains ineffective.


SUMMARY

According to a first aspect of the present disclosure, an embodiment of the present disclosure discloses an object detection method, comprising: determining at least one typical object ratio from a first training data set by counting ratios of objects in training pictures of the first training data set; determining at least one picture scaling size based at least on the at least one typical object ratio; scaling the training pictures of the first training data set according to the at least one picture scaling size; obtaining a second training data set by slicing the scaled training pictures; training an object detection model using the second training data set; and performing object detection on a to-be-detected picture using the trained object detection model.


According to a second aspect of the present disclosure, an embodiment of the present disclosure discloses a computer system, comprising: one or more processors; and a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: determining at least one typical object ratio from a first training data set by counting ratios of objects in training pictures of the first training data set; determining at least one picture scaling size based at least on the at least one typical object ratio; scaling the training pictures of the first training data set according to the at least one picture scaling size; obtaining a second training data set by slicing the scaled training pictures; training an object detection model using the second training data set; and performing object detection on a to-be-detected picture using the trained object detection model.


According to a third aspect of the present disclosure, an embodiment of the present disclosure discloses a non-transitory computer-readable storage medium that stores one or more computer programs comprising instructions that, when executed by one or more processors of a computer system, cause the computer system to perform operations comprising: determining at least one typical object ratio from a first training data set by counting ratios of objects in training pictures of the first training data set; determining at least one picture scaling size based at least on the at least one typical object ratio; scaling the training pictures of the first training data set according to the at least one picture scaling size; obtaining a second training data set by slicing the scaled training pictures; training an object detection model using the second training data set; and performing object detection on a to-be-detected picture using the trained object detection model.





BRIEF DESCRIPTION OF THE DRAWINGS

The drawings exemplarily show embodiments and form a part of the specification, and are used to illustrate example implementations of the embodiments together with a written description of the specification. The embodiments shown are merely for illustrative purposes and do not limit the scope of the claims. Throughout the drawings, like reference signs denote like but not necessarily identical elements.



FIG. 1 is a flowchart showing an object detection method according to one or more examples of the present application;



FIG. 2a is a schematic diagram showing an example of a scaled training picture;



FIG. 2b is a schematic diagram showing slicing the scaled training picture shown in FIG. 2a;



FIG. 3 is a flowchart showing step S105 in the object detection method shown in FIG. 1;



FIG. 4 is a structural block diagram showing an object detection apparatus according to one or more examples of the present application; and



FIG. 5 is a structural block diagram showing an exemplary computer system that can be used to implement one or more examples of the present application.





DETAILED DESCRIPTION OF THE EMBODIMENTS

The present disclosure will be further described in detail below with reference to the drawings and embodiments. It can be understood that embodiments described herein are used merely to explain a related disclosure, rather than limit the disclosure. It should be additionally noted that, for ease of description, only parts related to the related disclosure are shown in the drawings.


It should be noted that the embodiments in the present disclosure and features in the embodiments can be combined with each other without conflict. If the number of elements is not specifically defined, there may be one or more elements, unless otherwise expressly indicated in the context. In addition, numbers of steps or functional modules used in the present disclosure are used merely to identify the steps or functional modules, rather than limit either a sequence of performing the steps or a connection relationship between the functional modules.


In some industries or fields, an object is extremely small relative to an image acquisition area, with a ratio being usually in the range of 1:100 to 1:1000. As a result, it is very difficult or even impossible to employ current object detection technologies to implement detection of such an extremely small object in a picture shot for an image acquisition area. For example, in the industrial field, when a pseudo solder is to be detected in an X-ray scanned image of a welded steel plate or a flaw is to be detected in a scanned image of a glass cover of a mobile phone, because a proportion of the pseudo solder or flaw in the entire picture is very small, detection of such extremely small objects cannot be implemented directly using the current object detection technologies.


Currently, there are several solutions for small-object detection: (1) Using a feature pyramid network (FPN) layer to perform multi-scale fusion of features in an input picture, to improve the effect of small-object detection. (2) Enlarging an input picture by different scales, performing object detection on the input pictures at the different enlarged scales, and then merging the results of the object detection at the different enlarged scales. (3) Slicing a training picture and modifying annotation information associated with the training picture, to obtain training image slices and their associated annotation information; using the training image slices and their associated annotation information to train an object detection model; and using the trained object detection model to perform object detection.


The solutions above have the following disadvantages: Solution (1) can only improve the detection effect for small objects with an object ratio of about 1:10, and is not suitable, for example, for detection of extremely small objects with an object ratio of 1:100. Solution (2) can correspondingly increase the size of an object. However, due to limitations of the video memory of a graphics processing unit (GPU), the size of an input picture of an object detection model can usually be only about 2,000 pixels, and therefore Solution (2) is clearly not suitable for detection of extremely small objects, for which an input image would need to be scaled up to 5,000 or even 10,000 pixels. In Solution (3), different training image slice sizes need to be manually selected for different training data sets, and the trained object detection model is used to perform object detection on to-be-detected pictures as a whole. Therefore, Solution (3) is not suitable for detection of extremely small objects.


The current small object detection solutions have very poor detection effects for extremely small objects with a very small object ratio, and it is impossible to train, with high quality and without manual intervention, an object detection model to complete a task of detecting the extremely small objects.


In view of the above problems that exist in the current small object detection solutions, the present disclosure provides an object detection method and apparatus, to complete, with high quality and without manual intervention, a task of detecting an extremely small object. The object detection method and apparatus according to the embodiments of the present disclosure can be applied in scenarios such as industrial quality inspection and farm aerial photography. The object detection method and apparatus according to the embodiments of the present disclosure are described in detail below in conjunction with the accompanying drawings.



FIG. 1 is a flowchart showing an object detection method 100 according to one or more examples of the present application. As shown in FIG. 1, the object detection method 100 may comprise: step S101: determining at least one typical object ratio from a first training data set by counting ratios of objects in training pictures of the first training data set; step S102: determining at least one picture scaling size based at least on the at least one typical object ratio; step S103: scaling the training pictures of the first training data set according to the at least one picture scaling size; step S104: obtaining a second training data set by slicing the scaled training pictures; step S105: training an object detection model using the second training data set; and step S106: performing object detection on a to-be-detected picture using the trained object detection model.


In the object detection method according to this example of the present application, the at least one picture scaling size is adaptively determined based on the typical object ratio of the first training data set; the training pictures in the first training data set are scaled according to the at least one picture scaling size; the scaled training pictures are sliced, to obtain the second training data set; and the object detection model is trained by using the second training data set. Therefore, in the case of a very small object ratio relative to the to-be-detected picture, the trained object detection model can still accurately detect an object in the to-be-detected picture, and then can complete, with high quality and without manual intervention, a task of detecting an extremely small object.


In some examples, the first training data set comprises a plurality of training pictures and annotation information associated with the plurality of training pictures. Any one of the training pictures may contain one or more objects. An object ratio of any one of the objects refers to a proportion of a size of an object detection box of the object to an overall size of the training picture. Annotation information associated with the training picture comprises coordinate information associated with object detection boxes on the training picture.


In some embodiments, the ratios of all the objects in the training pictures of the first training data set may be clustered, to obtain the at least one typical object ratio of the first training data set. For example, ratios of all objects in training pictures in any training data set A may be clustered, to obtain three typical object ratios R1, R2, and R3 of the training data set A.
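By way of illustration only, the following Python sketch shows one way such clustering might be implemented. The box format, the use of the longer box side divided by the longer picture side as the ratio, and the choice of k-means with three clusters are assumptions made for the example, not requirements of the present disclosure.

```python
# Illustrative sketch only: derive typical object ratios by clustering.
# Assumptions (not specified by the disclosure): boxes are given as
# (x_min, y_min, x_max, y_max), an object's ratio is its longer box side
# divided by the longer picture side, and k-means with k=3 is used.
import numpy as np
from sklearn.cluster import KMeans

def typical_object_ratios(dataset, k=3):
    """dataset: iterable of (picture_width, picture_height, boxes) tuples."""
    ratios = []
    for pic_w, pic_h, boxes in dataset:
        for x_min, y_min, x_max, y_max in boxes:
            box_side = max(x_max - x_min, y_max - y_min)
            ratios.append(box_side / max(pic_w, pic_h))
    km = KMeans(n_clusters=k, n_init=10).fit(np.asarray(ratios).reshape(-1, 1))
    return sorted(float(c) for c in km.cluster_centers_.ravel())
```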


In some embodiments, to facilitate training of the object detection model, the sizes of most of the object detection boxes on the training pictures of the first training data set may be scaled to near a fixed size. Therefore, the at least one picture scaling size may be determined based on the at least one typical object ratio of the first training data set and the fixed size. For example, assuming that the sizes of most of the object detection boxes on the training pictures of the training data set A need to be scaled to a fixed size T0, the fixed size T0 may be divided by the three typical object ratios R1, R2, and R3 of the training data set A, to determine three picture scaling sizes T0/R1, T0/R2, and T0/R3.






In some embodiments, to improve the training effect of the object detection model, the at least one picture scaling size may be further determined based on an optimal detection size for the object detection model. In other words, the at least one picture scaling size may be determined based on the at least one typical object ratio of the first training data set and the optimal detection size for the object detection model, such that the sizes of most of the object detection boxes on the training pictures of the first training data set may be scaled to near the optimal detection size for the object detection model. For example, for the training data set A, assuming that the optimal detection size for the object detection model is T, the optimal detection size T may be divided by the three typical object ratios R1, R2, and R3 of the training data set A, to determine three picture scaling sizes T/R1, T/R2, and T/R3.
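As a purely illustrative sketch of this division, the following snippet computes picture scaling sizes from assumed values of the optimal detection size T and the typical object ratios; all numbers below are made up for the example, and they also show why the scaled pictures become too large to process whole.

```python
# Picture scaling size = optimal detection size / typical object ratio.
# T and the ratios below are illustrative values only.
T = 64                                   # assumed optimal detection size (pixels)
typical_ratios = [1 / 200, 1 / 400, 1 / 800]
scaling_sizes = [round(T / r) for r in typical_ratios]
print(scaling_sizes)                     # [12800, 25600, 51200]
```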






In some embodiments, the scaling the training pictures of the first training data set according to the at least one picture scaling size may comprise: for each training picture of the training pictures of the first training data set, scaling the training picture to each of the at least one picture scaling size. For example, each training picture in the training data set A may be scaled three times according to the picture scaling sizes T/R1, T/R2, and T/R3, such that most of the object detection boxes on the training pictures in the training data set A can be scaled to near the optimal detection size T of the object detection model.
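A minimal sketch of such scaling is given below, assuming Pillow is used for resizing, that the picture scaling size is interpreted as the target length of the longer picture side, and that box coordinates are scaled by the same factor; these choices are assumptions for the example.

```python
# Sketch: scale one training picture (and its boxes) to a target size.
# Assumption: the picture scaling size is the target length of the longer
# side; box coordinates are scaled by the same factor as the pixels.
from PIL import Image

def scale_picture(picture, boxes, target_long_side):
    factor = target_long_side / max(picture.size)
    new_size = (round(picture.width * factor), round(picture.height * factor))
    scaled = picture.resize(new_size)
    scaled_boxes = [tuple(c * factor for c in box) for box in boxes]
    return scaled, scaled_boxes, factor

# Each training picture is scaled once per picture scaling size:
# versions = [scale_picture(pic, boxes, s) for s in scaling_sizes]
```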


Alternatively, in some embodiments, the scaling the training pictures of the first training data set according to the at least one picture scaling size may comprise: dividing, based on the at least one typical object ratio of the first training data set, the training pictures of the first training data set into at least one training picture group, and scaling the training pictures in each training picture group to a corresponding picture scaling size. For example, for the training data set A, the training pictures in the training data set A may be divided into three training picture groups A1, A2, and A3 based on the three typical object ratios R1, R2, and R3, and the training pictures in the three training picture groups A1, A2, and A3 are scaled to the three picture scaling sizes T/R1, T/R2, and T/R3, respectively. Compared with scaling each training picture of the training data set A three times according to the picture scaling sizes T/R1, T/R2, and T/R3, this embodiment has higher processing efficiency but a poorer training effect.
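A sketch of the grouping step follows, assuming, purely for illustration, that a picture is assigned to the typical ratio nearest the median of its object ratios; the disclosure does not specify the per-picture statistic.

```python
# Sketch of the grouping alternative: each training picture is scaled only
# once, to the scaling size of its nearest typical ratio. Using the median
# object ratio as the per-picture statistic is an assumption.
import statistics

def assign_group(picture_object_ratios, typical_ratios):
    med = statistics.median(picture_object_ratios)
    return min(range(len(typical_ratios)),
               key=lambda i: abs(typical_ratios[i] - med))
```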


In an application scenario that requires detection of an extremely small object, the typical object ratio of the first training data set, for example, ranges from 1:100 to 1:1000. The size of each scaled training picture is then very large, which would make the video memory of the graphics processing unit insufficient. Therefore, the scaled training pictures need to be sliced. In some embodiments, the obtaining the second training data set by slicing the scaled training pictures comprises: slicing the scaled training pictures, to obtain a set of training image slices; transforming annotation information, associated with the training pictures, of the first training data set to obtain annotation information associated with training image slices of the set of training image slices; and forming the second training data set with the set of training image slices and the annotation information associated with the training image slices of the set of training image slices. Training the object detection model based on the second training data set can improve the capability of the object detection model for detection of the extremely small object, while avoiding the insufficient video memory of the graphics processing unit.


Here, the transforming annotation information, associated with the training pictures, of the first training data set refers to transforming coordinate information, associated with the object detection boxes on the training pictures, of the first training data set. In other words, for any object detection box on any training picture of the first training data set, coordinate information associated with the object detection box is transformed from coordinate information that is based on the training picture to coordinate information that is based on a training image slice containing the object detection box, wherein the training image slice is obtained by slicing the training picture.
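In code, this transformation is a subtraction of the slice origin, as in the following sketch (box format assumed as before):

```python
# A box moves from picture coordinates to slice coordinates by subtracting
# the slice origin (the slice's top-left corner within the scaled picture).
def to_slice_coords(box, slice_x0, slice_y0):
    x_min, y_min, x_max, y_max = box
    return (x_min - slice_x0, y_min - slice_y0,
            x_max - slice_x0, y_max - slice_y0)
```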


In some embodiments, an input picture size of the object detection model may be used as a training image slice size, to slice the scaled training pictures. In other words, the training image slice size does not need to be set manually, and the input picture size of the object detection model may be directly used to slice the scaled training pictures.


In some embodiments, in the case that the input picture size of the object detection model is used as the training image slice size, a movement step that is less than a difference between the input picture size of the object detection model and the optimal detection size may be used, to slice the scaled training pictures. This can ensure that each of the object detection boxes on the scaled training pictures can completely appear in the at least one training image slice.


For example, assuming that the input picture size of the object detection model is I and the optimal detection size is T, the training image slice size may be set to I, and the movement step S may be set to be less than I−T (that is, S<I−T, for example, S=I−2T). FIG. 2a is a schematic diagram showing an example of a scaled training picture. FIG. 2b is a schematic diagram showing slicing the scaled training picture shown in FIG. 2a. As shown in FIGS. 2a and 2b, in the case that the training image slice size is I and the movement step is S, a sliding window of size I×I slides in directions of the horizontal axis and the vertical axis from the top-left vertex of the scaled training picture, to slice the scaled training picture. A distance, that is, the movement step, for which the sliding window slides each time is S, and each time the sliding window slides, a training image slice can be obtained, for example, training image slices Q and Q1. In some cases, to obtain more training image slices, the movement step S may be appropriately reduced.
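The following sketch illustrates such sliding-window slicing; the clamping of edge windows so the right and bottom borders of the picture are still covered is an implementation assumption.

```python
# Sliding-window slicing: window of size I x I moved by step S, with
# S < I - T so that every box of size up to T appears complete in at
# least one slice. Edge windows are clamped so the right and bottom
# borders of the picture are still covered (an implementation choice).
def slice_origins(pic_w, pic_h, window, step):
    xs = list(range(0, max(pic_w - window, 0) + 1, step))
    ys = list(range(0, max(pic_h - window, 0) + 1, step))
    if xs[-1] != max(pic_w - window, 0):
        xs.append(max(pic_w - window, 0))
    if ys[-1] != max(pic_h - window, 0):
        ys.append(max(pic_h - window, 0))
    return [(x, y) for y in ys for x in xs]

# With the symbols of the text, e.g. I = 1024, T = 64, S = I - 2 * T:
# origins = slice_origins(12800, 9600, window=1024, step=1024 - 128)
```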


In some embodiments, in the case that the input picture size of the object detection model is used as the training image slice size and the movement step that is less than the difference between the input picture size of the object detection model and the optimal detection size is used to slice the scaled training pictures, each of the object detection boxes on the scaled training pictures can completely appear in the at least one training image slice. To reduce repeated detections of an overlapping area between the training image slices, for any training image slice of the second training data set, coordinate information associated with an incomplete object detection box on the training image slice may be removed from the annotation information associated with the training image slice. For example, as shown in FIG. 2b, an object detection box a1 is incomplete in the training image slice Q, and therefore coordinate information associated with the object detection box a1 may be removed from the annotation information associated with the training image slice Q. Conversely, the object detection box a1 completely appears in the training image slice Q1, and therefore the coordinate information associated with the object detection box a1 is retained in the annotation information associated with the training image slice Q1.
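A sketch of this filtering is shown below; it keeps only boxes that lie completely within a slice and expresses the kept boxes in slice coordinates.

```python
# Keep only boxes lying completely inside the slice; incomplete boxes are
# dropped from this slice's annotation (each appears whole in some other
# slice when the movement step satisfies S < I - T).
def boxes_for_slice(boxes, x0, y0, window):
    kept = []
    for x_min, y_min, x_max, y_max in boxes:
        if (x_min >= x0 and y_min >= y0
                and x_max <= x0 + window and y_max <= y0 + window):
            kept.append((x_min - x0, y_min - y0, x_max - x0, y_max - y0))
    return kept
```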


In some embodiments, coordinate information associated with object detection boxes with sizes significantly different from the optimal detection size of the object detection model may be removed from the annotation information associated with the training image slices of the second training data set, such that these object detection boxes with sizes significantly different from the optimal detection size of the object detection model do not participate in training of the object detection model. This can improve the training effect of the object detection model, while improving the training efficiency of the object detection model.
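For illustration, such a size filter might look as follows; the factor-of-two tolerance is an assumed threshold, not one specified by the present disclosure.

```python
# Drop object detection boxes whose size is far from the optimal detection
# size T; the factor-of-two tolerance is an assumed threshold.
def near_optimal_size(box, T, tolerance=2.0):
    x_min, y_min, x_max, y_max = box
    side = max(x_max - x_min, y_max - y_min)
    return T / tolerance <= side <= T * tolerance
```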


In some embodiments, in the application scenario that requires detection of an extremely small object, due to very small ratios of objects in the training pictures of the first training data set, most areas of each training picture are background areas that do not contain an object detection box. If only a training image slice containing an object detection box is used to train the object detection model, it may cause many false detections when the trained object detection model is subsequently used to detect a background area of a to-be-detected picture. To avoid such a case, a training image slice that contains an object detection box, a training image slice that does not contain an object detection box, and annotation information associated with the training image slices in the second training data set may be used to train the object detection model. This can strengthen the object detection model in learning the background areas that do not contain an object detection box, and can reduce false detections of the background areas that do not contain an object detection box during implementation of the detection of an extremely small object.


In some embodiments, as shown in FIG. 3, the performing the object detection on a to-be-detected picture using the trained object detection model may comprise: step S1061: scaling the to-be-detected picture according to the at least one picture scaling size; step S1062: slicing the scaled to-be-detected picture, to obtain a set of to-be-detected image slices; and step S1063: inputting the set of to-be-detected image slices to the trained object detection model to perform the object detection. Scaling and slicing the to-be-detected picture can not only avoid the insufficient video memory of the graphics processing unit, but can also implement detection of an extremely small object for a to-be-detected image slice, thereby implementing the detection of an extremely small object for the to-be-detected picture as a whole.
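The following sketch ties steps S1061 to S1063 together, reusing the scale_picture and slice_origins helpers sketched above; the model object and its detect method are hypothetical stand-ins for the trained object detection model, not a real API.

```python
# End-to-end inference sketch (steps S1061-S1063), reusing scale_picture
# and slice_origins from the sketches above. `model.detect` is a
# hypothetical stand-in for the trained object detection model.
def detect_small_objects(picture, scaling_sizes, model, window, step):
    detections = []
    for size in scaling_sizes:
        scaled, _, factor = scale_picture(picture, [], size)
        for x0, y0 in slice_origins(*scaled.size, window, step):
            patch = scaled.crop((x0, y0, x0 + window, y0 + window))
            for box in model.detect(patch):        # hypothetical call
                detections.append((factor, x0, y0, box))
    return detections
```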


In some embodiments, an input picture size of the object detection model may be used as a to-be-detected image slice size, to slice the scaled to-be-detected picture. This can avoid the insufficient video memory of the graphics processing unit. In other words, the to-be-detected image slice size may be set to be the same as the training image slice size, that is, equal to the input picture size of the object detection model. It should be understood that the to-be-detected image slice size may also be appropriately increased to be greater than the input picture size of the object detection model, thereby improving the slicing efficiency of the to-be-detected picture.


In some embodiments, a movement step that is less than a difference between the input picture size of the object detection model and an optimal detection size may be used, to slice the scaled to-be-detected picture. For example, the movement step for slicing the scaled to-be-detected picture may be set to be equal to the movement step for slicing the scaled training picture. This can ensure that each object detection box on the scaled to-be-detected picture can completely appear in at least one to-be-detected image slice.


In some embodiments, for any to-be-detected image slice in the set of to-be-detected image slices, if an object detection box overlapping an edge of the to-be-detected image slice is detected on the to-be-detected image slice, the object detection box is discarded. For example, when the trained object detection model is used to perform object detection on a to-be-detected image slice, if an object detection box on the to-be-detected image slice is found to be incomplete, the object detection box may be discarded (that is, the object detection box is not considered as detected). This can reduce repeated detections of an overlapping area between to-be-detected image slices.
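A sketch of this edge check follows; the one-pixel margin is an assumption.

```python
# Discard a detected box that overlaps the slice boundary; the same object
# is detected whole in an overlapping neighboring slice. The one-pixel
# margin is an assumption.
def touches_edge(box, window, margin=1):
    x_min, y_min, x_max, y_max = box
    return (x_min < margin or y_min < margin
            or x_max > window - margin or y_max > window - margin)
```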


In some embodiments, the inputting the set of to-be-detected image slices to the trained object detection model to perform the object detection may comprise: obtaining, using the trained object detection model, respective coordinate information associated with respective object detection boxes on to-be-detected image slices in the set of to-be-detected image slices; and transforming the respective coordinate information associated with the respective object detection boxes on the to-be-detected image slices in the set of to-be-detected image slices into respective coordinate information that is based on the to-be-detected picture. For example, for any object detection box on any to-be-detected image slice, coordinate information associated with the object detection box may be transformed from coordinate information that is based on the to-be-detected image slice to coordinate information that is based on the to-be-detected picture. In this way, a relatively intuitive object detection result for the to-be-detected picture can be obtained.
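Sketched in code, the back-transformation adds the slice origin and undoes the scaling factor used for the corresponding picture scaling size (the factor returned by the scale_picture sketch above):

```python
# Map a kept detection from slice coordinates back to the original
# to-be-detected picture: add the slice origin (in scaled-picture
# coordinates), then undo the scaling factor of this picture scaling size.
def to_picture_coords(box, x0, y0, factor):
    x_min, y_min, x_max, y_max = box
    return ((x_min + x0) / factor, (y_min + y0) / factor,
            (x_max + x0) / factor, (y_max + y0) / factor)
```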


In conclusion, the object detection method according to the one or more examples of the present application can be used to complete, with high quality and without manual intervention, a task of detecting an extremely small object, and is applicable to scenarios such as industrial quality inspection and farm aerial photography.



FIG. 4 is a structural block diagram showing an object detection apparatus 400 according to one or more examples of the present application. As shown in FIG. 4, the object detection apparatus 400 may comprise a picture slicing configuration module 401, a model training module 402, and an object detection module 403. The picture slicing configuration module 401 is configured to: determine at least one typical object ratio from a first training data set by counting ratios of objects in training pictures of the first training data set; determine at least one picture scaling size based on the at least one typical object ratio; and scale the training pictures of the first training data set according to the at least one picture scaling size. The model training module 402 is configured to: obtain a second training data set by slicing the scaled training pictures; and train an object detection model using the second training data set. The object detection module 403 is configured to: perform object detection on a to-be-detected picture using the trained object detection model.


In this embodiment, for exemplary implementations and technical effects of the object detection apparatus 400 and its corresponding functional modules, refer to the relevant description in the embodiment described in FIG. 1, and details are not repeated herein.



FIG. 5 is a structural block diagram showing an exemplary computer system that can be used to implement one or more examples of the present application. The following describes, in conjunction with FIG. 5, the computer system 500 that is suitable for implementation of the one or more examples of the present application. It should be understood that the computer system 500 shown in FIG. 5 is merely an example, and shall not impose any limitation on the function and scope of use of the one or more examples of the present application.


As shown in FIG. 5, the computer system 500 may comprise a processing apparatus (for example, a central processing unit, a graphics processing unit, etc.) 501, which may perform appropriate actions and processing according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage apparatus 508 to a random access memory (RAM) 503. The RAM 503 additionally stores various programs and data for the operation of the computer system 500. The processing apparatus 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.


Generally, the following apparatuses may be connected to the I/O interface 505: an input apparatus 506, for example, including a touchscreen, a touch panel, a camera, an accelerometer, a gyroscope, etc.; an output apparatus 507, for example, including a liquid crystal display (LCD), a speaker, a vibrator, etc.; the storage apparatus 508, for example, including a flash memory (Flash Card), etc.; and a communication apparatus 509. The communication apparatus 509 may enable the computer system 500 to perform wireless or wired communication with other devices to exchange data. Although FIG. 5 shows the computer system 500 having various apparatuses, it should be understood that it is not required to implement or have all of the shown apparatuses. It may be an alternative to implement or have more or fewer apparatuses. Each block shown in FIG. 5 may represent one apparatus, or may represent a plurality of apparatuses in different circumstances.


In particular, according to an example of the present application, the process described above with reference to the flowcharts may be implemented as a computer software program. For example, an example of the present application provides a computer-readable storage medium that stores a computer program, the computer program containing program code for performing the method 100 shown in FIG. 1. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 509, or installed from the storage apparatus 508, or installed from the ROM 502. When the computer program is executed by the processing apparatus 501, the above-mentioned functions defined in the apparatus of the example of the present application are implemented.


It should be noted that a computer-readable medium described in the example of the present application may be a computer-readable signal medium, or a computer-readable storage medium, or any combination thereof. The computer-readable storage medium may be, for example but not limited to, electric, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination thereof. A more specific example of the computer-readable storage medium may include, but is not limited to: an electrical connection having one or more wires, a portable computer magnetic disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof. In the one or more examples of the present application, the computer-readable storage medium may be any tangible medium containing or storing a program which may be used by or in combination with an instruction execution system, apparatus, or device. In the one or more examples of the present application, the computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier, the data signal carrying computer-readable program code. The propagated data signal may be in various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium. The computer-readable signal medium can send, propagate, or transmit a program used by or in combination with an instruction execution system, apparatus, or device. The program code contained in the computer-readable medium may be transmitted by any suitable medium, including but not limited to: electric wires, optical cables, radio frequency (RF), etc., or any suitable combination thereof.


The foregoing computer-readable medium may be contained in the foregoing computer system 500. Alternatively, the computer-readable medium may exist independently, without being assembled into the computer system 500. The foregoing computer-readable medium carries one or more programs, and the one or more programs, when executed by the computer system, cause the computer system to perform the following: determining at least one typical object ratio from a first training data set by counting ratios of objects in training pictures of the first training data set; determining at least one picture scaling size based on the at least one typical object ratio; scaling the training pictures of the first training data set according to the at least one picture scaling size; obtaining a second training data set by slicing the scaled training pictures; training an object detection model using the second training data set; and performing object detection on a to-be-detected picture using the trained object detection model.


Computer program code for performing operations of the embodiments of the present disclosure can be written in one or more programming languages or a combination thereof, wherein the programming languages comprise object-oriented programming languages, such as Java, Smalltalk, and C++, and further comprise conventional procedural programming languages, such as “C” language or similar programming languages. The program code may be completely executed on a computer of a user, partially executed on a computer of a user, executed as an independent software package, partially executed on a computer of a user and partially executed on a remote computer, or completely executed on a remote computer or server. In the circumstance involving a remote computer, the remote computer may be connected to a computer of a user over any type of network, comprising a local area network (LAN) or wide area network (WAN), or may be connected to an external computer (for example, connected over the Internet using an Internet service provider).


The flowcharts and block diagrams in the accompanying drawings illustrate the possibly implemented architecture, functions, and operations of the system, method, and computer program product according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagram may represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for implementing the logical functions. It should also be noted that, in some alternative implementations, the functions marked in the blocks may also occur in an order different from that marked in the accompanying drawings. For example, two blocks shown in succession can actually be performed substantially in parallel, or they can sometimes be performed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagram and/or the flowchart, and a combination of the blocks in the block diagram and/or the flowchart may be implemented by a dedicated hardware-based system that executes functions or operations, or may be implemented by a combination of dedicated hardware and computer instructions.


The related modules described in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described modules may also be arranged in the processor, which for example may be described as: a processor, comprising a picture slicing configuration module, a model training module, and an object detection module. Names of these modules do not constitute a limitation on the modules themselves under certain circumstances.


The foregoing descriptions are merely preferred embodiments of the present disclosure and explanations of the applied technical principles. Those skilled in the art should understand that the scope of the present application involved in the embodiments of the present disclosure is not limited to the technical solutions formed by specific combinations of the foregoing technical features, and shall also cover other technical solutions formed by any combination of the foregoing technical features or equivalent features thereof without departing from the foregoing inventive concept. For example, a technical solution formed by a replacement of the foregoing features with technical features with similar functions in the technical features disclosed in the embodiments of the present disclosure (but not limited thereto) also falls within the scope of the present application.

Claims
  • 1. A method, comprising: determining at least one typical object ratio from a first training data set by counting ratios of objects in training pictures of the first training data set;determining at least one picture scaling size based at least on the at least one typical object ratio;scaling the training pictures of the first training data set according to the at least one picture scaling size;obtaining a second training data set by slicing the scaled training pictures;training an object detection model using the second training data set; andperforming object detection on a to-be-detected picture using the trained object detection model.
  • 2. The method according to claim 1, wherein the determining the at least one picture scaling size comprises: determining the at least one picture scaling size based on the at least one typical object ratio and an optimal detection size for the object detection model.
  • 3. The method according to claim 1, wherein the scaling the training pictures of the first training data set comprises: for each training picture of the training pictures in the first training data set, scaling the training picture to each of the at least one picture scaling size.
  • 4. The method according to claim 1, wherein the obtaining the second training data set comprises: slicing the scaled training pictures to obtain a set of training image slices;transforming annotation information, associated with the training pictures, of the first training data set to obtain annotation information associated with training image slices of the set of training image slices; andgenerating the second training data set with the set of training image slices and the annotation information associated with the training image slices of the set of training image slices.
  • 5. The method according to claim 4, wherein the slicing the scaled training pictures comprises using an input picture size for the object detection model as a training image slice size to slice the scaled training pictures.
  • 6. The method according to claim 5, wherein the slicing the scaled training pictures further comprises using a movement step less than a difference between the input picture size for the object detection model and an optimal detection size to slice the scaled training pictures.
  • 7. The method according to claim 6, further comprising: for each training image slice of the second training data set, removing coordinate information associated with an incomplete object detection box on the training image slice from the annotation information associated with the training image slice.
  • 8. The method according to claim 1, wherein the second training data set comprises training image slices that include an object detection box, training image slices that do not include an object detection box, and annotation information associated with the training image slices of the second training data set.
  • 9. The method according to claim 1, wherein the performing the object detection comprises: scaling the to-be-detected picture according to the at least one picture scaling size;slicing the scaled to-be-detected picture to obtain a set of to-be-detected image slices; andinputting the set of to-be-detected image slices to the trained object detection model to perform the object detection.
  • 10. The method according to claim 9, wherein the slicing the scaled to-be-detected picture comprises using an input picture size for the object detection model as a to-be-detected image slice size to slice the scaled to-be-detected picture.
  • 11. The method according to claim 10, wherein the slicing the scaled to-be-detected picture further comprises using a movement step less than a difference between the input picture size for the object detection model and an optimal detection size to slice the scaled to-be-detected picture.
  • 12. The method according to claim 11, wherein the performing the object detection comprises: for each to-be-detected image slice of the set of to-be-detected image slices, discarding an object detection box on the to-be-detected image slice in response to detecting that the object detection box overlaps an edge of the to-be-detected image slice.
  • 13. The method according to claim 9, wherein the inputting the set of to-be-detected image slices to the trained object detection model to perform the object detection comprises: obtaining, using the trained object detection model, coordinate information associated with respective object detection boxes on to-be-detected image slices of the set of to-be-detected image slices; andtransforming the coordinate information associated with the respective object detection boxes on the to-be-detected image slices of the set of to-be-detected image slices into coordinate information that is based on the to-be-detected picture.
  • 14. A computer system, comprising: a non-transitory memory storing one or more programs configured to be executed by one or more processors, the one or more programs including instructions for:determining at least one typical object ratio from a first training data set by counting ratios of objects in training pictures of the first training data set;determining at least one picture scaling size based at least on the at least one typical object ratio;scaling the training pictures of the first training data set according to the at least one picture scaling size;obtaining a second training data set by slicing the scaled training pictures;training an object detection model using the second training data set; andperforming object detection on a to-be-detected picture using the trained object detection model.
  • 15. The computer system according to claim 14, wherein the determining the at least one picture scaling size comprises: determining the at least one picture scaling size based on the at least one typical object ratio and an optimal detection size for the object detection model.
  • 16. The computer system according to claim 14, wherein the scaling the training pictures of the first training data set comprises: for each training picture of the training pictures in the first training data set, scaling the training picture to each of the at least one picture scaling size.
  • 17. The computer system according to claim 14, wherein the obtaining the second training data set comprises: slicing the scaled training pictures to obtain a set of training image slices;transforming annotation information, associated with the training pictures, of the first training data set to obtain annotation information associated with training image slices of the set of training image slices; andgenerating the second training data set with the set of training image slices and the annotation information associated with the training image slices of the set of training image slices.
  • 18. The computer system according to claim 17, wherein the slicing the scaled training pictures comprises using an input picture size for the object detection model as a training image slice size to slice the scaled training pictures.
  • 19. The computer system according to claim 18, wherein the slicing the scaled training pictures further comprises using a movement step less than a difference between the input picture size for the object detection model and an optimal detection size to slice the scaled training pictures.
  • 20. A non-transitory computer-readable storage medium that stores one or more computer programs comprising instructions that, when executed by one or more processors of a computer system, cause the computer system to perform operations comprising: determining at least one typical object ratio from a first training data set by counting ratios of objects in training pictures of the first training data set;determining at least one picture scaling size based at least on the at least one typical object ratio;scaling the training pictures of the first training data set according to the at least one picture scaling size;obtaining a second training data set by slicing the scaled training pictures;training an object detection model using the second training data set; andperforming object detection on a to-be-detected picture using the trained object detection model.
Priority Claims (1)
Number           Date           Country   Kind
202010878201.7   Aug. 27, 2020  CN        national