This application claims the benefit of priority to Taiwan Patent Application No. 112143572, filed on Nov. 13, 2023. The entire content of the above identified application is incorporated herein by reference.
Some references, which may include patents, patent applications and various publications, may be cited and discussed in the description of this disclosure. The citation and/or discussion of such references is provided merely to clarify the description of the present disclosure and is not an admission that any such reference is “prior art” to the disclosure described herein. All references cited and discussed in this specification are incorporated herein by reference in their entireties and to the same extent as if each reference was individually incorporated by reference.
The present disclosure relates to a method and a system, and more particularly to an identification method and an identification system for a license plate.
Existing license plate identification systems involve numerous image processing and identification procedures, and executing these procedures sequentially can result in extended processing times that may not meet user requirements. Therefore, a parallelization architecture can be adopted to maximize the use of all computing resources.
However, it is important to note that not all image processing and identification tasks can be executed in parallel. Certain tasks require a specific sequence for effective processing and recognition. Consequently, the advantages of parallelizing various identification tasks can be restricted.
The present disclosure provides an identification method and an identification system for a license plate capable of greatly improving processing efficiency of license plate identification.
One of the technical aspects adopted by the present disclosure is to provide an identification method for a license plate, and the identification method includes: configuring at least one processor to perform the following steps: sequentially obtaining a plurality of images, wherein each of the plurality of images includes one or more vehicles and at least one license plate; decomposing each of the plurality of images into a vehicle image and at least one license plate image, and inputting the vehicle image into a vehicle detection model to detect at least one vehicle through a plurality of first processing stages; inputting the at least one vehicle into a vehicle metadata identification model to obtain vehicle metadata through a plurality of second processing stages; inputting the at least one license plate image into a license plate identification model to identify at least one piece of license plate information through a plurality of third processing stages; and merging the at least one piece of license plate information and the vehicle metadata to generate a license plate identification result. The plurality of first processing stages and the plurality of second processing stages form a first pipeline architecture, the plurality of third processing stages form a second pipeline architecture, and the first pipeline architecture and the second pipeline architecture are executed simultaneously.
Another one of the technical aspects adopted by the present disclosure is to provide an identification system for a license plate, and the identification system includes at least one processor and a memory. The at least one processor is configured to perform the following steps: sequentially obtaining a plurality of images, wherein each of the plurality of images includes one or more vehicles and at least one license plate; decomposing each of the plurality of images into a vehicle image and at least one license plate image, and inputting the vehicle image into a vehicle detection model to detect at least one vehicle through a plurality of first processing stages; inputting the at least one vehicle into a vehicle metadata identification model to obtain vehicle metadata through a plurality of second processing stages; inputting the at least one license plate image into a license plate identification model to identify at least one piece of license plate information through a plurality of third processing stages; and merging the at least one piece of license plate information and the vehicle metadata to generate a license plate identification result. The plurality of first processing stages and the plurality of second processing stages form a first pipeline architecture, the plurality of third processing stages form a second pipeline architecture, and the first pipeline architecture and the second pipeline architecture are executed simultaneously.
These and other aspects of the present disclosure will become apparent from the following description of the embodiment taken in conjunction with the following drawings and their captions, although variations and modifications therein may be effected without departing from the spirit and scope of the novel concepts of the disclosure.
The described embodiments may be better understood by reference to the following description and the accompanying drawings, in which:
The present disclosure is more particularly described in the following examples that are intended as illustrative only since numerous modifications and variations therein will be apparent to those skilled in the art. Like numbers in the drawings indicate like components throughout the views. As used in the description herein and throughout the claims that follow, unless the context clearly dictates otherwise, the meaning of “a,” “an” and “the” includes plural reference, and the meaning of “in” includes “in” and “on.” Titles or subtitles can be used herein for the convenience of a reader, which shall have no influence on the scope of the present disclosure.
The terms used herein generally have their ordinary meanings in the art. In the case of conflict, the present document, including any definitions given herein, will prevail. The same thing can be expressed in more than one way. Alternative language and synonyms can be used for any term(s) discussed herein, and no special significance is to be placed upon whether a term is elaborated upon or discussed herein. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification including examples of any terms is illustrative only, and in no way limits the scope and meaning of the present disclosure or of any exemplified term. Likewise, the present disclosure is not limited to various embodiments given herein. Numbering terms such as “first,” “second” or “third” can be used to describe various components, signals or the like, which are for distinguishing one component/signal from another one only, and are not intended to, nor should be construed to impose any substantive limitations on the components, signals or the like.
The processor 100 can be configured to execute a plurality of computer-readable instructions to implement functions corresponding to a vehicle detection model M1, a vehicle metadata identification model M2, and a license plate identification model M3 mentioned hereinafter.
Step S10: sequentially obtaining a plurality of images. In step S10, each of the images can include at least one vehicle and at least one license plate.
Specifically, after the image capturing device 14 obtains multiple continuous images (i.e., a captured video), the image computing device 16 can be further configured to track license plates and vehicles in these continuous images through an object tracking algorithm for dynamic scenes, use an object detection model to capture the license plates in the continuous images frame by frame, and then calculate a confidence score for each captured license plate. The to-be-identified images in step S10 are the stored images having relatively high confidence scores, and each of the to-be-identified images can include one or more license plates and a corresponding quantity of vehicles.
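For illustrative purposes only, the following sketch shows one possible way to implement the frame selection described above, in which only the highest-confidence capture of each tracked license plate is retained; the Detection structure, its fields, and the numeric scores are hypothetical and do not limit the present disclosure.

```python
# A minimal sketch of the frame-selection idea in step S10: for each tracked
# license plate, keep only the captured frame with the highest confidence score.
from dataclasses import dataclass

@dataclass
class Detection:
    track_id: int      # identifier assigned by the object tracking algorithm
    frame_index: int   # index of the frame in the captured video
    confidence: float  # confidence score of the captured license plate

def select_best_frames(detections):
    """Return, per tracked plate, the detection with the highest confidence."""
    best = {}
    for det in detections:
        current = best.get(det.track_id)
        if current is None or det.confidence > current.confidence:
            best[det.track_id] = det
    return list(best.values())

# Example: two tracked plates observed across several frames.
detections = [
    Detection(track_id=1, frame_index=10, confidence=0.72),
    Detection(track_id=1, frame_index=11, confidence=0.91),
    Detection(track_id=2, frame_index=10, confidence=0.64),
]
print(select_best_frames(detections))  # keeps frame 11 for plate 1, frame 10 for plate 2
```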
Step S11: decomposing each of the images into a vehicle image and a license plate image, and inputting the vehicle image into a vehicle detection model to detect at least one vehicle through a plurality of first processing stages.
In step S11, each to-be-identified image is decomposed into the vehicle image and the license plate image, which can be carried out based on results generated during an execution of the object tracking algorithm and/or the object detection model by the image computing device 16. For example, based on positions of vehicles and license plates in the to-be-identified image obtained by the object tracking algorithm, a vehicle screenshot and a corresponding license plate screenshot are extracted from each to-be-identified image as the vehicle image and the license plate image, respectively; alternatively, the entire to-be-identified image can be used as the vehicle image, and the license plate screenshot captured by the object detection model can be extracted from the to-be-identified image. It should be noted that the present disclosure does not limit the method of decomposing each to-be-identified image into the vehicle image and the license plate image.
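As a non-limiting illustration of the decomposition described above, the following sketch assumes that bounding boxes in (x, y, width, height) form are available from the object tracking algorithm and/or the object detection model, and that images are NumPy arrays; the function names and box values are hypothetical.

```python
# A minimal sketch of step S11's decomposition into a vehicle image and a
# license plate image; images are (height, width, channels) NumPy arrays.
import numpy as np

def crop(image, box):
    x, y, w, h = box
    return image[y:y + h, x:x + w]

def decompose(image, vehicle_box=None, plate_box=(0, 0, 1, 1)):
    # Option 1: crop both the vehicle and the plate from their detected boxes.
    # Option 2 (vehicle_box is None): use the whole image as the vehicle image.
    vehicle_image = crop(image, vehicle_box) if vehicle_box is not None else image
    plate_image = crop(image, plate_box)
    return vehicle_image, plate_image

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)          # placeholder frame
vehicle_img, plate_img = decompose(frame, vehicle_box=(400, 300, 600, 450),
                                   plate_box=(620, 610, 160, 80))
print(vehicle_img.shape, plate_img.shape)
```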
In addition, the vehicle detection model M1 can be stored in the memory 102 and has been trained to perform the plurality of first processing stages according to vehicle-related features, so as to detect one or more vehicles in an input image. The first processing stages include a frame pre-processing stage, a vehicle detection inference stage and a first post-processing stage. It should be noted that the vehicle detection model M1 can be, for example, an artificial intelligence detection model. The frame pre-processing stage mainly involves pre-processing the images (i.e., the vehicle images) to be input into the artificial intelligence detection model. For example, image processing procedures that are capable of highlighting vehicle features, such as noise reduction, edge sharpening and/or binarization, can be performed.
On the other hand, in the vehicle detection inference stage, the preprocessed image is input into an artificial intelligence core for computation. The artificial intelligence core can, for example, adopt a neural network that includes an input layer, multiple hidden layers, and an output layer. The features of the image are first extracted through the input layer, and positions and similarities of the features with respect to vehicle features are then identified through the hidden layers. Finally, detection results are output through the output layer. However, the above details of the vehicle detection inference stage are merely examples, and the present disclosure does not limit the detection algorithm used.
In the first post-processing stage, the detection results generated in the vehicle detection inference stage are integrated and statistically collected to detect a vehicle from the vehicle image. For example, the detection of the vehicle can involve generating a coordinate range relative to the vehicle image, or directly extracting a precise image of the vehicle from the vehicle image.
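For exemplary purposes only, the following sketch strings the three first processing stages together in their described order; the pre-processing, inference, and post-processing bodies are placeholders, since the present disclosure does not limit the detection algorithm used.

```python
# A minimal sketch of the first processing stages: frame pre-processing,
# vehicle detection inference, and first post-processing.
import numpy as np

def preprocess(frame):
    # e.g., normalization or noise reduction that highlights vehicle features
    return frame.astype(np.float32) / 255.0

def detect_vehicles(preprocessed):
    # placeholder for the artificial intelligence core (input / hidden / output layers);
    # returns candidate boxes with hypothetical scores
    return [((400, 300, 600, 450), 0.88), ((50, 60, 200, 150), 0.31)]

def postprocess(candidates, threshold=0.5):
    # integrate and filter the raw detections into final vehicle coordinate ranges
    return [box for box, score in candidates if score >= threshold]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
vehicles = postprocess(detect_vehicles(preprocess(frame)))
print(vehicles)  # [(400, 300, 600, 450)]
```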
Step S12: inputting the vehicle into a vehicle metadata identification model to obtain vehicle metadata through a plurality of second processing stages.
Similarly, the vehicle metadata identification model M2 can be stored in the memory 102 and has been trained to perform the plurality of second processing stages, so as to detect vehicle metadata related to the vehicle in an input image. In this embodiment, the second processing stages include a first vehicle image pre-processing stage, a color classifier inference stage, and a second post-processing stage. The vehicle metadata identification model M2 can be, for example, another artificial intelligence detection model. The first vehicle image pre-processing stage mainly involves pre-processing the images (i.e., an image of the detected vehicle) to be input into this other artificial intelligence detection model. For example, image processing procedures that are capable of highlighting vehicle features, such as noise reduction, edge sharpening and/or binarization, can similarly be performed.
On the other hand, in the color classifier inference stage, the preprocessed image is input into an artificial intelligence core for computation. The artificial intelligence core can, for example, adopt a neural network that includes an input layer, multiple hidden layers, and an output layer. Color features of the vehicle image are first extracted through the input layer, and similarities of the color features with respect to specific colors are then identified through the hidden layers. Finally, vehicle color detection results are output through the output layer. However, the above details of the color classifier inference stage are merely examples, and the present disclosure does not limit the detection algorithm used.
In the second post-processing stage, the results generated in the color classifier inference stage are integrated and statistically collected to detect a vehicle color from the vehicle image. For example, the detection of the vehicle color can involve generating color coordinates (such as R, G, B) of the vehicle color, or directly generating text used to describe the vehicle color, such as red, green, or blue.
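As a non-limiting illustration of the second post-processing stage, the following sketch maps color coordinates to descriptive text; the palette and the nearest-color rule are hypothetical.

```python
# A minimal sketch of turning the classifier's color output into either
# (R, G, B) coordinates or a descriptive text label.
PALETTE = {"red": (255, 0, 0), "green": (0, 128, 0), "blue": (0, 0, 255)}

def describe_color(rgb):
    """Map an (R, G, B) triple to the nearest named color in the palette."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PALETTE, key=lambda name: distance(PALETTE[name], rgb))

print(describe_color((230, 20, 35)))  # "red"
```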
It should be emphasized that, in order to achieve parallel processing of multiple tasks, the present disclosure constructs the aforementioned artificial intelligence computing processes into pipeline architectures. In particular, the first processing stages and the second processing stages are designed as a first pipeline architecture. The concept of pipelines is to divide the same task into different stages and execute each stage asynchronously. In a scenario where a pipeline has a first stage and a second stage, and a first task and a second task are to be processed, once the first task has been processed by the first stage and proceeds to the second stage of the pipeline, the first stage of the pipeline can be used to process the second task at the same time. That is to say, while the processing time required for each task remains unchanged, one task can be initiated without waiting for the previous one to be fully completed.
For example, suppose that multiple vehicle images are decomposed from a single to-be-identified image. After a first vehicle image is input into the vehicle detection model M1, when the first vehicle image proceeds to the vehicle detection inference stage after the frame pre-processing stage, a second vehicle image can be input and proceed to the frame pre-processing stage without affecting the continuous processing of the first vehicle image in the vehicle detection inference stage. Similarly, when the first vehicle image proceeds to the first post-processing stage and the second vehicle image proceeds to the vehicle detection inference stage, a third vehicle image can be input into the frame pre-processing stage. Therefore, three vehicle images can be processed by the first pipeline architecture at a time.
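For illustrative purposes only, the following sketch shows one possible realization of such a pipeline, in which each stage runs in its own worker thread and passes its output to the next stage through a queue, so that a later vehicle image can enter the frame pre-processing stage while an earlier one is still in the vehicle detection inference stage; the stage bodies are placeholders and do not limit the present disclosure.

```python
# A minimal sketch of a three-stage pipeline driven by queues and worker threads.
import queue
import threading

def stage(name, work, source, sink):
    """Consume items from `source`, process them, and forward results to `sink`."""
    def run():
        while True:
            item = source.get()
            if item is None:               # sentinel: shut this stage down
                if sink is not None:
                    sink.put(None)
                break
            print(f"{name}: {item}")
            if sink is not None:
                sink.put(work(item))
    t = threading.Thread(target=run)
    t.start()
    return t

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    stage("frame pre-processing",        lambda x: x, q1, q2),
    stage("vehicle detection inference", lambda x: x, q2, q3),
    stage("first post-processing",       lambda x: x, q3, None),
]

# Three vehicle images enter the pipeline without waiting for one another.
for image_id in ("vehicle-image-1", "vehicle-image-2", "vehicle-image-3"):
    q1.put(image_id)
q1.put(None)                               # signal that no more images will arrive
for t in threads:
    t.join()
```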
The first pipeline architecture has been illustrated above. Next, the second pipeline architecture will be described.
Step S13: inputting the license plate image into a license plate identification model to identify license plate information through a plurality of third processing stages.
Similarly, the license plate identification model M3 can be stored in the memory 102 and has been trained to perform the plurality of third processing stages, so as to detect license plate information related to the license plate in an input image. The plurality of third processing stages include a license plate image pre-processing stage, a license plate classifier inference stage, and a fourth post-processing stage. The license plate image pre-processing stage and the fourth post-processing stage are similar to the pre-processing and post-processing stages mentioned above, and descriptions thereof will not be repeated herein.
The license plate identification model M3 can be, for example, another artificial intelligence identification model. In the license plate classifier inference stage, the pre-processed images are input to the license plate identification model M3 for computation. The license plate identification model M3, for example, can be trained through machine/deep learning technology using a large and diverse set of data, which allows the license plate identification model M3 to learn to identify text features of license plates, compare them with a database, and then extract relevant license plate information. However, the above details of the license plate classifier inference stage are for exemplary purposes only, and the present disclosure does not limit the identification algorithm used.
In this embodiment, the license plate identification model M3 can include a location identification model M31 and a license plate number recognition model M32. Therefore, the license plate information obtained in step S13 further includes location information and license plate number information.
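As a non-limiting illustration, the following sketch composes the license plate information from the outputs of the location identification model M31 and the license plate number recognition model M32; the model calls and the returned values are placeholders.

```python
# A minimal sketch of assembling license plate information from two sub-models.
def identify_location(plate_image):       # stands in for model M31
    return "location-info"                # placeholder location information

def recognize_plate_number(plate_image):  # stands in for model M32
    return "ABC-1234"                     # placeholder license plate number

def license_plate_information(plate_image):
    return {
        "location": identify_location(plate_image),
        "plate_number": recognize_plate_number(plate_image),
    }

print(license_plate_information("plate-img-1"))
```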
It should be noted that the above-mentioned plurality of third processing stages is further designed as a second pipeline architecture in the present disclosure.
Given that the first pipeline architecture includes at least six stages (three first stages and three second stages), and the second pipeline architecture includes at least three stages (three third stages), when multiple images are to be identified, one of the first processing stages, one of the second processing stages, and one of the third processing stages can each be executed simultaneously on different to-be-identified images.
For example, when there are three to-be-identified images, each containing only one license plate and one corresponding vehicle, a first vehicle image and a first license plate image can be simultaneously input into the vehicle detection model M1 and the license plate identification model M3. When the first vehicle image and the first license plate image respectively proceed to the vehicle detection inference stage and the license plate classifier inference stage, a second vehicle image and a second license plate image can be simultaneously input into the vehicle detection model M1 and the license plate identification model M3. When the first vehicle image and the first license plate image respectively proceed to the first post-processing stage and the fourth post-processing stage, and the second vehicle image and the second license plate image respectively proceed to the vehicle detection inference stage and the license plate classifier inference stage, a third vehicle image and a third license plate image can be simultaneously input into the vehicle detection model M1 and the license plate identification model M3, respectively.
When the first vehicle identified from the first vehicle image is input into the vehicle metadata identification model M2, a situation is formed where one of the first processing stages, one of the second processing stages, and one of the third processing stages can be respectively executed on different ones of the to-be-identified images at the same time. By utilizing a highly parallelized architecture, the identification system and the identification method for license plates provided by the present disclosure can process multiple to-be-identified images concurrently and accelerate the generation of license plate-related data, including vehicle color and license plate information. This helps with the subsequent processing of license plate-related data, such as merging such information into a database and allowing users to view the identification results through webpages.
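For exemplary purposes only, the following sketch submits the vehicle-side processing (the first pipeline architecture) and the license-plate-side processing (the second pipeline architecture) of each to-be-identified image concurrently and then pairs their results; the pipeline functions and returned values are placeholders.

```python
# A minimal sketch of executing the first and second pipeline architectures
# simultaneously on decomposed vehicle and license plate images.
from concurrent.futures import ThreadPoolExecutor

def first_pipeline(vehicle_image):
    # first processing stages (vehicle detection) + second processing stages (metadata)
    return {"color": "red"}

def second_pipeline(plate_image):
    # third processing stages (license plate identification)
    return {"plate": "ABC-1234"}

with ThreadPoolExecutor() as pool:
    images = [("vehicle-img-1", "plate-img-1"), ("vehicle-img-2", "plate-img-2")]
    futures = [(pool.submit(first_pipeline, v), pool.submit(second_pipeline, p))
               for v, p in images]
    for vehicle_future, plate_future in futures:
        print(vehicle_future.result(), plate_future.result())
```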
Step S14: merging the license plate information and the vehicle metadata to generate a license plate identification result.
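As a non-limiting illustration of step S14, the following sketch merges the license plate information and the vehicle metadata into a single license plate identification result; the field names and values are hypothetical.

```python
# A minimal sketch of merging the outputs of the two pipeline architectures.
def merge(plate_info, vehicle_metadata):
    result = {}
    result.update(plate_info)          # e.g., plate number and location information
    result.update(vehicle_metadata)    # e.g., vehicle color and manufacturer
    return result

plate_info = {"plate_number": "ABC-1234", "location": "location-info"}
vehicle_metadata = {"color": "red", "manufacturer": "ExampleMotors"}
print(merge(plate_info, vehicle_metadata))  # the license plate identification result
```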
Similarly, the vehicle metadata identification model M2 can be trained to detect vehicle metadata related to the vehicle in an input image. In this embodiment, the vehicle metadata can include a manufacturer. Therefore, in the manufacturer classifier inference stage, the pre-processed image is input into an artificial intelligence core for computation. The artificial intelligence core can, for example, adopt a neural network that includes an input layer, multiple hidden layers, and an output layer. First, the manufacturer-related features of the vehicle image (for example, a vehicle logo and a vehicle appearance) are extracted through the input layer. The hidden layers then identify the similarity of these manufacturer-related features to specific manufacturer-related features. Finally, manufacturer identification results are output through the output layer. However, the above details of the inference stage of the manufacturer classifier are for exemplary purposes only, and the present disclosure does not limit the identification algorithm used.
In this case, when a quantity of the vehicles detected by the vehicle detection model M1 is plural, different vehicles (i.e., images of the vehicles) can be input into the third pipeline architecture and the fourth pipeline architecture, respectively. Since the data to be processed by the third pipeline architecture and the fourth pipeline architecture have multiple records (i.e., corresponding to multiple vehicles), the third pipeline architecture and the fourth pipeline architecture can be executed synchronously.
That is to say, not only can the first pipeline architecture and the second pipeline architecture be executed synchronously, but the third pipeline architecture and the fourth pipeline architecture that can be executed synchronously are also designed in the first pipeline architecture. Therefore, although an addition of features to be identified, such as the make/manufacturer of a vehicle, may lead to an increase in a quantity or scale of artificial intelligence models, it will not proportionally increase the required processing time.
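For illustrative purposes only, the following sketch feeds the images of multiple detected vehicles to a color pipeline and a manufacturer pipeline concurrently, corresponding to the synchronous execution of the third pipeline architecture and the fourth pipeline architecture described above; the pipeline functions and returned values are placeholders.

```python
# A minimal sketch of dispatching multiple detected vehicles to two metadata
# pipelines that run at the same time.
from concurrent.futures import ThreadPoolExecutor

def color_pipeline(vehicle_image):        # e.g., the third pipeline architecture
    return "red"

def manufacturer_pipeline(vehicle_image): # e.g., the fourth pipeline architecture
    return "ExampleMotors"

vehicle_images = ["vehicle-1", "vehicle-2"]
with ThreadPoolExecutor() as pool:
    colors = pool.map(color_pipeline, vehicle_images)
    makers = pool.map(manufacturer_pipeline, vehicle_images)
    for image, color, maker in zip(vehicle_images, colors, makers):
        print(image, color, maker)
```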
In conclusion, in the identification system and the identification method for license plates provided in the present disclosure, by utilizing a highly parallelized architecture, multiple to-be-identified images can be processed concurrently and the generation of license plate-related data can be accelerated, which helps with subsequent processing of license plate-related data.
Furthermore, in the identification system and the identification method for license plates provided in the present disclosure, not only can the first pipeline architecture and the second pipeline architecture be executed synchronously, but the third pipeline architecture and the fourth pipeline architecture that can be executed synchronously are also designed in the first pipeline architecture. Therefore, although an addition of features to be identified may lead to an increase in a quantity or scale of artificial intelligence models, it will not proportionally increase the required processing time. In this way, by using pipelines in the parallelization architecture and asynchronously executing each stage of the otherwise serial processes, the use of all computing resources can be maximized.
The foregoing description of the exemplary embodiments of the disclosure has been presented only for the purposes of illustration and description and is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching.
The embodiments were chosen and described in order to explain the principles of the disclosure and their practical application so as to enable others skilled in the art to utilize the disclosure and various embodiments and with various modifications as are suited to the particular use contemplated. Alternative embodiments will become apparent to those skilled in the art to which the present disclosure pertains without departing from its spirit and scope.