This application claims priority to Taiwan Application Serial Number 110141384, filed Nov. 5, 2021, which is herein incorporated by reference in its entirety.
The present disclosure relates to a recognition system and a recognition method. More particularly, the present disclosure relates to an image recognition system and an image recognition method.
Nowadays, enterprises operating in the catering industry or fast food industry pay attention to the speed of customer turnover on site. Generally, however, they rely on staff to visually assess the headcount on site, which makes the assessment inaccurate, and quantifying it requires additional manpower and time for statistics and record-keeping.
The present disclosure provides an image recognition system. The image recognition system comprises at least one sensor, a memory, and a processor. The at least one sensor is configured to capture a plurality of images. The memory is configured to store a plurality of commands. The processor is configured to obtain the plurality of commands from the memory to perform the following steps: capturing at least two images in a building by the at least one sensor; performing a person detection on the at least two images at a first time point to obtain a first feature frame; obtaining a customer candidate from the at least two images according to the first feature frame; giving a first customer number to a first target of the at least two images according to the customer candidate; giving a second customer number to the first target when the first target leaves an outdoor entrance in a first period and enters the outdoor entrance in a second period; and showing the first customer number and the second customer number of the first target in a statistics interface.
The present disclosure provides an image recognition method. The image recognition method comprises the following steps: capturing at least two images in a building; performing a person detection on the at least two images at a first time point to obtain a first feature frame; obtaining a customer candidate from the at least two images according to the first feature frame; giving a first customer number to a first target of the at least two images according to the customer candidate; giving a second customer number to the first target when the first target leaves an outdoor entrance in a first period and enters the outdoor entrance in a second period; and showing the first customer number and the second customer number of the first target in a statistics interface.
Therefore, based on the technical content of the present disclosure, the image recognition system and the image recognition method shown in the embodiment of the present disclosure can automatically quantify and record the number of customers visiting the store.
It is to be understood that both the foregoing general description and the following detailed description are by examples, and are intended to provide further explanation of the present disclosure as claimed.
The present disclosure can be more fully understood by reading the following detailed description of the embodiment, with reference made to the accompanying drawings as follows:
Reference will now be made in detail to the present embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the description to refer to the same or like parts.
For automatically quantifying and recording the number of customers visiting the store, the present disclosure provides the image recognition system 100 as shown in
In one embodiment, the at least one sensor 110 is configured to capture a plurality of images. The memory 121 is configured to store a plurality of commands. The processor 123 is configured to obtain the plurality of commands from the memory 121 to perform the following steps: capturing the at least two images in a building by the at least one sensor 110; performing a person detection on the at least two images at a first time point to obtain a first feature frame; obtaining a customer candidate from the at least two images according to the first feature frame; giving a first customer number to a first target of the at least two images according to the customer candidate; giving a second customer number to the first target when the first target leaves an outdoor entrance in a first period and enters the outdoor entrance in a second period; and showing the first customer number and the second customer number of the first target in a statistics interface.
In order to make the above operations of the image recognition system 100 easy to understand, please refer to
Please refer to
Subsequently, the processor 123 performs a person detection on at least two images (such as the images 310 and 320) at a first time point to obtain a first feature frame 210. For example, the person detection can differentiate persons through their clothing and apparel.
Then, the processor 123 obtains a customer candidate from the at least two images (e.g. the images 310 and 320) according to the first feature frame 210. For example, the customer candidate may be a person distinguished according to the characteristics of different clothes.
Afterward, the processor 123 gives a first customer number to a first target C1 of the at least two images (e.g. the images 310 and 320) according to the customer candidate. For example, the first target C1 can be a customer, the first customer number can be given to the customer C1, and the first customer number can be a positive integer, but the present disclosure is not limited to this.
Subsequently, when the first target C1 leaves an outdoor entrance T1 in a first period and enters the outdoor entrance T1 in a second period, the processor 123 gives the second customer number to the first target C1. For example, the first target C1 can be a customer. When the customer C1 leaves the outdoor entrance T1 at 9:00 a.m. and enters through the outdoor entrance T1 again at 9:05 a.m., the second customer number is given to the customer C1. The second customer number can be a positive integer, but the present disclosure is not limited to this.
Then, the processor 123 shows the first customer number and the second customer number of the first target C1 in a statistics interface 400.
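The customer-numbering behavior described above can be illustrated with a minimal sketch. This is not the claimed implementation; the class and method names (`CustomerCounter`, `assign_number`, `handle_reentry`) and the data structures are assumptions introduced purely for illustration.

```python
# Illustrative sketch of the customer-numbering logic: a target gets a
# first customer number on detection, and a second customer number when
# it leaves and re-enters via the outdoor entrance; indoor movement
# keeps the existing number unchanged.

class CustomerCounter:
    """Assigns incrementing customer numbers to detected targets."""

    def __init__(self):
        self._next_number = 1
        self.numbers = {}  # target id -> list of customer numbers given

    def assign_number(self, target_id):
        """Give the target a new customer number (e.g. on first detection)."""
        self.numbers.setdefault(target_id, []).append(self._next_number)
        self._next_number += 1
        return self.numbers[target_id][-1]

    def handle_reentry(self, target_id, left_via_outdoor, entered_via_outdoor):
        """Give a second customer number only on outdoor re-entry;
        otherwise keep the current customer number unchanged."""
        if left_via_outdoor and entered_via_outdoor:
            return self.assign_number(target_id)
        return self.numbers[target_id][-1]

counter = CustomerCounter()
first = counter.assign_number("C1")                  # first customer number
second = counter.handle_reentry("C1", True, True)    # outdoor re-entry: new number
same = counter.handle_reentry("C1", False, False)    # indoor movement: unchanged
```

Both numbers remain associated with the same target, matching the statistics interface that shows the first and second customer numbers of the first target.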
Please refer to
Please refer to
In one embodiment, the at least one sensor 110 includes at least one of a camera and a camcorder. For example, the at least one sensor 110 can be the camera or the camcorder.
Please refer to
Step 610: capturing at least two images (e.g. the images 310 and 320) in a building;
Step 620: performing a person detection on the at least two images (e.g. the images 310 and 320) at a first time point to obtain a first feature frame 210;
Step 630: obtaining a customer candidate from the at least two images (e.g. the images 310 and 320) according to the first feature frame 210;
Step 640: giving a first customer number to a first target C1 of the at least two images (e.g. the images 310 and 320) according to the customer candidate;
Step 650: giving a second customer number to the first target when the first target C1 leaves an outdoor entrance T1 in a first period, and the first target C1 enters the outdoor entrance T1 in a second period;
Step 660: showing the first customer number and the second customer number of the first target in a statistics interface 400.
Step 710: importing the at least two images (e.g. the images 310 and 320) at the first time point with an annotation tool;
Step 720: performing the person detection on at least two images (e.g. the images 310 and 320) at a first time point to obtain the first feature frame 210;
Step 730: automatically matching the at least two images (e.g. the images 310 and 320) at the first time point according to the first feature frame to obtain headcount information at the first time point;
Step 740: averaging the headcount information of the at least two images (e.g. the images 310 and 320) at the first time point to obtain an average headcount information, and determining whether the average headcount information at the first time point is greater than 10 persons;
Step 750: automatically matching a first target C1 and a second target C1A in the at least two images (e.g. the images 310 and 320) at the first time point and a second time point according to the first feature frame 210 to determine that the first target C1 and the second target C1A in the at least two images (e.g. the images 310 and 320) are the same;
Step 760: checking the first feature frame 210 and the second feature frame (e.g. second feature frames 210A, 220, 230) of the first target C1 and the second target (e.g. second target C1A, C2, or C3) in the at least two images (e.g. the images 310 and 320);
Step 770: outputting the at least two images (e.g. the images 310 and 320) at the first time point, and the first target C1 and the second target (e.g. second targets C1A, C2, or C3) in the at least two images (e.g. the images 310 and 320) include at least one of the first feature frame 210 and the second feature frame (e.g. second feature frame 210A, 220, 230).
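The headcount check of steps 730 through 750 can be sketched as follows: average the per-image headcounts at one time point, and only proceed to automatic cross-image matching when the average exceeds 10 persons. The function names (`average_headcount`, `should_match`) and the threshold parameter are illustrative assumptions, not part of the disclosed system.

```python
# Sketch of the averaging and threshold decision described in step 740.

def average_headcount(headcounts):
    """Average the headcount information across the images at one time point."""
    return sum(headcounts) / len(headcounts)

def should_match(headcounts, threshold=10):
    """Proceed to automatic matching (step 750) only when the average
    headcount is greater than the threshold; otherwise images at
    another time point would be imported instead."""
    return average_headcount(headcounts) > threshold
```

For example, headcounts of 12 and 10 persons average to 11 and pass the check, while headcounts of 8 and 9 persons average to 8.5 and do not.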
In one embodiment, please refer to the step 740, importing the at least two images (e.g. the images 310 and 320) at another time (e.g. a third time point) by the annotation tool when the average headcount information is less than 10.
In one embodiment, please refer to the step 740, when the average headcount information is greater than 10, the step 750 is executed to automatically match the first target C1 and the second target C1A in the at least two images (e.g. the images 310 and 320) at the first time point and the second time point according to the first feature frame 210 to determine that the first target C1 and the second target C1A in the at least two images (e.g. the images 310 and 320) are the same.
In one embodiment, please refer to the step 760, it can further be checked whether the first feature frame 210 and the second feature frame of the first target C1 and the second target (e.g. the second targets C1A and C3) in the at least two images (e.g. the images 310 and 320) are different.
In one embodiment, please refer to the step 760, when the first target C1 of the first feature frame 210 and the second target C3 of the second feature frame 230 are different, then the image recognition method 700 can amend the first feature frame 210 or the second feature frame 230.
In one embodiment, please refer to the step 760, when it is checked that the first target C1 and the second target C1A do not have the first feature frame, the image recognition method 700 can mark the first feature frame 210 by the annotation tool for the first target C1 or the second target C1A.
In one embodiment, the image recognition method 700 is a process of learning and training using the annotation tool. For example, the image recognition method 700 can be a learning process of algorithm training using the annotation tool.
Step 810: importing the at least two images (e.g. the images 310 and 320) at the first time point;
Step 820: performing the person detection on the at least two images (e.g. the images 310 and 320) to obtain the first feature frame 210;
Step 830: obtaining the customer candidate from the at least two images (e.g. the images 310 and 320) according to the first feature frame 210;
Step 840: determining whether the customer candidate is a staff member W;
Step 841: deleting the customer candidate;
Step 850: giving the first customer number to the first target C1 according to the customer candidate;
Step 860: determining whether the first target C1 left the outdoor entrance T1;
Step 861: keeping the first customer number of the first target C1 unchanged when the first target C1 leaves an indoor entrance in the building and then enters the indoor entrance again;
Step 870: giving the second customer number to the first target C1 when the first target C1 leaves the outdoor entrance and then enters the outdoor entrance again;
Step 880: counting customer count information and customer stay time information in the at least two images (e.g. the images 310 and 320) at the first time point, and showing the customer count information and the customer stay time information in the statistics interface.
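The flow of steps 840 through 880 can be illustrated with a short sketch: candidates identified as staff are deleted, and the remaining customers are counted along with their stay times. The candidate dictionaries, the `is_staff` flag, and the minute-based timestamps are illustrative assumptions for this sketch only.

```python
# Sketch of staff exclusion (steps 840-841) and the statistics of
# step 880 (customer count and stay time per customer).

def filter_customers(candidates):
    """Delete candidates identified as staff members (steps 840-841)."""
    return [c for c in candidates if not c.get("is_staff")]

def count_and_stay_times(candidates):
    """Count the customers and compute each stay time in minutes (step 880)."""
    customers = filter_customers(candidates)
    return {
        "customer_count": len(customers),
        "stay_minutes": {c["id"]: c["leave"] - c["enter"] for c in customers},
    }

candidates = [
    {"id": "C1", "is_staff": False, "enter": 540, "leave": 575},  # customer, 9:00-9:35
    {"id": "W",  "is_staff": True,  "enter": 480, "leave": 600},  # uniformed staff, excluded
]
stats = count_and_stay_times(candidates)
```

Here the staff member W is excluded before counting, so only the customer C1 and a 35-minute stay reach the statistics interface.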
In one embodiment, please refer to the step 840, when the customer candidate is the staff member W, the step 841 is executed to delete the customer candidate. For example, since the first feature frame 210 is identified through clothing, and customers generally wear casual clothes while the staff member W wears a shop uniform, the staff member W is excluded from the customer candidates.
In one embodiment, please refer to the step 840, when the customer candidate is not the staff member W, the step 850 is executed to give the first customer number to the first target C1 according to the customer candidate.
In one embodiment, please refer to the step 860, when the first target C1 leaves an indoor entrance in the building and then enters the indoor entrance again, the step 861 is executed, and the first customer number of the first target C1 remains unchanged.
It can be seen from the above implementation of the present disclosure that the application of the present disclosure has the following advantages. The image recognition system and the image recognition method shown in the embodiment of the present disclosure can automatically quantify and record the number of customers visiting the store.
Although the present disclosure has been described in considerable detail with reference to certain embodiments thereof, other embodiments are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the embodiments contained herein.
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present disclosure without departing from the scope or spirit of the present disclosure. In view of the foregoing, it is intended that the present disclosure cover modifications and variations of the present disclosure provided they fall within the scope of the following claims.
Number | Date | Country | Kind |
---|---|---|---|
110141384 | Nov 2021 | TW | national |