This application claims the priority benefit of China application serial no. 201710680021.6, filed on Aug. 10, 2017. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.
The invention relates to a face recognition technique, and particularly relates to a face similarity evaluation method based on face recognition and an electronic device.
The current face recognition technique may identify multiple feature points in a face image, and users may learn their own face information based on such techniques. However, with the current technique and related products in the market, users cannot learn the similarities between their looks and those of other people or celebrities. Therefore, how to determine such similarities in order to develop more practical and interesting products is a subject to be developed by related technical staff of the field.
The invention is directed to a face similarity evaluation method and an electronic device, which are capable of evaluating the similarity of two face images by obtaining feature factors of each area of the faces, such that a user may learn the similarity between his own look and those of other people or celebrities.
An embodiment of the invention provides a face similarity evaluation method including: obtaining a first image; obtaining a plurality of feature factors respectively corresponding to the first image and at least one second image; obtaining an overall similarity score corresponding to the at least one second image based on the feature factors respectively corresponding to the first image and the at least one second image, and generating an evaluation result based on the overall similarity score corresponding to the at least one second image; and outputting an inform message based on the evaluation result.
An embodiment of the invention provides an electronic device including a storage unit and a processor. The processor is coupled to the storage unit, and accesses and executes a plurality of modules stored in the storage unit. The modules include an image obtaining module, a feature factor obtaining module, a comparison module and an output module. The image obtaining module obtains a first image. The feature factor obtaining module obtains a plurality of feature factors respectively corresponding to the first image and at least one second image. The comparison module obtains an overall similarity score corresponding to the at least one second image based on the feature factors respectively corresponding to the first image and the at least one second image, and generates an evaluation result based on the overall similarity score corresponding to the at least one second image. The output module outputs an inform message based on the evaluation result.
According to the above description, in the invention, a difference of each of the feature factors is obtained according to the feature factors respectively corresponding to two images, and an area similarity score corresponding to each area of the face is obtained according to the difference of each of the feature factors, so as to obtain the overall similarity score corresponding to the face image. In this way, the user learns the similarity between his own look and those of other people or celebrities.
In order to make the aforementioned and other features and advantages of the invention comprehensible, several exemplary embodiments accompanied with figures are described in detail below.
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
Referring to the block diagram of the present embodiment, an electronic device 10 includes a processor 110, a storage unit 120 and an image capturing unit 130, and may be used to perform the face similarity evaluation method of the invention.
The processor 110 may be a central processing unit (CPU), a microprocessor, a digital signal processor, a programmable controller, an application specific integrated circuit (ASIC), a programmable logic device (PLD) or other device having a data computation function.
The storage unit 120 may be any type of fixed or removable random access memory (RAM), a read-only memory (ROM), a flash memory, a similar device, or a combination of the above devices. In the present embodiment, the storage unit 120 is used for recording an image obtaining module 121, a feature factor obtaining module 122, a comparison module 123 and an output module 124. In other embodiments, the storage unit 120 may also be used for storing a database, and the electronic device 10 may obtain a stored image and a feature factor corresponding to the image from the database. The modules are, for example, computer programs stored in the storage unit 120; the computer programs may be loaded into the processor 110, and the processor 110 accordingly performs the face similarity evaluation method of the invention.
The image capturing unit 130 may be a camera equipped with a charge coupled device (CCD), a complementary metal-oxide semiconductor (CMOS) device or another type of photo-sensing element, and may be used for capturing a current face image of the user. Detailed steps of the face similarity evaluation method are described below with reference to an embodiment.
Referring to the flowchart of the face similarity evaluation method of the present embodiment, first, in step S201, the processor 110 executes the image obtaining module 121 to obtain a first image. For example, the image obtaining module 121 may capture a current face image of the user through the image capturing unit 130 to serve as the first image.
Then, in step S203, the processor 110 executes the feature factor obtaining module 122 to perform an analyzing operation on the first image to obtain a first feature factor corresponding to the first image. In the present embodiment, the analyzing operation performed by the feature factor obtaining module 122 includes calculating the first feature factor corresponding to the first image according to a plurality of feature points of the first image. However, in other embodiments, the feature factor obtaining module 122 may directly obtain a pre-stored first feature factor corresponding to the first image from the database stored in the storage unit 120 or from another electronic device.
Moreover, in step S205, the processor 110 executes the feature factor obtaining module 122 to obtain a second feature factor corresponding to each one of a plurality of second images. In the present embodiment, the feature factor obtaining module 122 may obtain the second feature factor corresponding to each of the second images from the database stored in the storage unit 120. The second feature factor corresponding to each of the second images may be pre-recorded in the database according to the steps of the analyzing operation described in the following embodiments.
In the present embodiment, the face image (for example, the first image and the second image) may include a plurality of areas. To be specific, the processor 110 may execute a face detection system using the Dlib library (Dlib face landmark detection) to detect and analyze 194 feature points of the face image. In other embodiments, only 119 face feature points may be analyzed, or the feature points in the face image may be obtained by using other algorithms for detecting face feature points. In this way, a plurality of areas of the face image may be identified based on the obtained feature points. Moreover, the processor 110 may further define a coordinate system, and assign each of the feature points coordinates, for example, (x, y). In the following embodiments, a horizontal line refers to a straight line parallel to an x-axis, and a vertical line refers to a straight line parallel to a y-axis. Then, the feature factor obtaining module 122 may perform the analyzing operation on the face image to obtain the feature factor of each area according to the feature points of each area. Alternatively, the feature factor obtaining module 122 may directly obtain the feature factor corresponding to each of the areas of a certain face image from a database. An embodiment is provided below to describe the feature factor of each area.
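By way of illustration, a minimal sketch of such a landmark-detection step is provided below, assuming the Python binding of the Dlib library; the 194-point model file name is hypothetical, and any available shape-predictor model may be substituted (the commonly distributed Dlib model uses 68 points).

```python
# A minimal sketch of the landmark-detection step with the Dlib library.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("face_landmarks_194.dat")  # hypothetical model file

def get_feature_points(image):
    """Return (x, y) coordinates of the feature points of the first detected face."""
    faces = detector(image)
    if not faces:
        return []
    shape = predictor(image, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```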
Referring to a schematic diagram of a first image 300 of the present embodiment, the first image 300 may include a plurality of areas, for example, an eyebrow area 400, an eye area 500, a nose area 600, a lip area 700 and a face area 800, and the feature factor obtaining module 122 may obtain the feature factors corresponding to each of the areas.
Referring to the first image 300, the feature factor obtaining module 122 may first obtain a face width Fw and a face height Fh. For example, the face width Fw may be a distance between two endpoints at both sides of the cheek, and the face height Fh may be a vertical distance between a horizontal line L1 located at the eyebrows and a horizontal line L2 located at the mouth.
In an embodiment, the horizontal line L1 used for calculating the face height Fh may also be located at a height average of two endpoints of two eyebrow tails, and the horizontal line L2 is, for example, located at a height average of two endpoints 310a and 310b of two mouth corners, and a vertical distance between the horizontal line L1 and the horizontal line L2 is taken as the face height Fh.
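As an illustrative sketch (not the patented implementation), the face width Fw and the face height Fh may be computed from feature-point coordinates as follows; the endpoint arguments are hypothetical names for the points described above.

```python
# Computing Fw and Fh from (x, y) feature-point coordinates.
import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def face_width(cheek_left, cheek_right):
    # Fw: distance between the endpoints at both sides of the cheek.
    return distance(cheek_left, cheek_right)

def face_height(brow_tail_a, brow_tail_b, mouth_corner_a, mouth_corner_b):
    # Fh: vertical distance between horizontal line L1 (height average of the
    # two eyebrow tails) and horizontal line L2 (height average of the two
    # mouth corners 310a and 310b).
    l1 = (brow_tail_a[1] + brow_tail_b[1]) / 2
    l2 = (mouth_corner_a[1] + mouth_corner_b[1]) / 2
    return abs(l2 - l1)
```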
Referring to the eyebrow area 400 of the first image 300, the feature factor obtaining module 122 may obtain an eyebrow width EBw and an eyebrow height EBh according to the feature points of the eyebrow area 400. For example, the eyebrow width EBw may be a distance between an endpoint 410 and an endpoint 420 located at two ends of one eyebrow, and the eyebrow height EBh may be a vertical distance between a highest point and a lowest point of the eyebrow.
Moreover, the feature factor obtaining module 122 may further obtain an eyebrow angle. The eyebrow angle may refer to an included angle θ1 between a reference line L43 and a horizontal line L44. The reference line L43 refers to a straight line simultaneously passing through the endpoint 410 and the endpoint 420, and the horizontal line L44 refers to a horizontal line passing through the endpoint 410. Although, in the present embodiment, the eyebrow angle is obtained according to the feature points of one eyebrow, in other embodiments, the eyebrow angle may also be obtained according to the feature points of the two eyebrows. For example, the feature factor obtaining module 122 may obtain two eyebrow angles of the two eyebrows in the first image 300 according to the aforementioned method, and takes an average of the two obtained eyebrow angles as the eyebrow angle of the first image 300.
Then, the feature factor obtaining module 122 may obtain a plurality of feature factors corresponding to the eyebrow area 400 according to the face width Fw, the face height Fh, the eyebrow width EBw, the eyebrow height EBh and the eyebrow angle (for example, the angle θ1). For example, the feature factor obtaining module 122 calculates a plurality of values such as a ratio between the eyebrow width EBw and the eyebrow height EBh, a tangent value of the eyebrow angle, a ratio between the eyebrow width EBw and a half of the face width Fw, a ratio between the eyebrow height EBh and the face height Fh, etc. to serve as the feature factors corresponding to the eyebrow area 400.
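By way of illustration, a minimal sketch of assembling the eyebrow-area feature factors from these measurements follows; the factor labels and the function signature are illustrative, and the same pattern applies to the other face areas described below.

```python
# Assembling the eyebrow-area feature factors named above.
import math

def eyebrow_factors(EBw, EBh, eyebrow_angle_deg, Fw, Fh):
    return {
        "Eyebrow W/H": EBw / EBh,                  # width-to-height ratio
        "Eyebrow tangent": math.tan(math.radians(eyebrow_angle_deg)),
        "Eyebrow-Face W": EBw / (Fw / 2),          # vs. half the face width
        "Eyebrow-Face H": EBh / Fh,                # vs. the face height
    }
```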
Referring to the eye area 500 of the first image 300, the feature factor obtaining module 122 may obtain an eye width Ew, an eye height Eh and an eye distance Ed according to the feature points of the eye area 500. For example, the eye width Ew may be a distance between two endpoints of an inner corner and an outer corner of one eye, the eye height Eh may be a vertical distance between a horizontal line L52 located at a highest point of an upper edge of the eye and a horizontal line L53 located at a lowest point of a lower edge of the eye, and the eye distance Ed may be a distance between the two eyes.
Similarly, in an embodiment, the horizontal line L52 used for calculating the eye height Eh may also be located at a height average of the highest points of the upper edges of the two eyes, and the horizontal line L53 may also be located at a height average of the lowest points of the lower edges of the two eyes, and the vertical distance between the horizontal line L52 and the horizontal line L53 is taken as the eye height Eh.
Then, the feature factor obtaining module 122 obtains a plurality of feature factors corresponding to the eye area 500 according to the face width Fw, the face height Fh, the eye width Ew, the eye height Eh and the eye distance Ed. For example, the feature factor obtaining module 122 calculates a plurality of values such as a ratio between the eye width Ew and the eye height Eh, a ratio between the eye width Ew and a half of the face width Fw, a ratio between the eye height Eh and the face height Fh, a ratio between the eye distance Ed and the face width Fw, etc. to serve as the feature factors corresponding to the eye area 500.
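A corresponding sketch for the eye-area factors is shown below; the dictionary keys follow the factor labels later used for Table 1.

```python
# Assembling the eye-area feature factors named above.
def eye_factors(Ew, Eh, Ed, Fw, Fh):
    return {
        "Eye W/H": Ew / Eh,
        "Eye-Face W": Ew / (Fw / 2),
        "Eye-Face H": Eh / Fh,
        "Eye distance": Ed / Fw,
    }
```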
Referring to the nose area 600 of the first image 300, the feature factor obtaining module 122 may obtain a nose width Nw and a nose height Nh according to the feature points of the nose area 600. For example, the nose width Nw may be a distance between an endpoint 610a and an endpoint 610b located at both sides of the nose, and the nose height Nh may be a vertical distance between an upper edge and a lower edge of the nose.
Moreover, the feature factor obtaining module 122 may obtain a nose angle. The nose angle refers to an angle θ2 included between a reference line L61 and a horizontal line L62. The reference line L61 refers to a straight line passing through both of an endpoint 630 and the endpoint 610a, and the horizontal line L62 refers to a horizontal line passing through the endpoint 630. However, in an embodiment, the feature factor obtaining module 122 may also obtain an angle θ2′ included between the horizontal line L62 and a straight line passing through both of the endpoint 630 and the endpoint 610b, and take an average of the angle θ2 and the angle θ2′ as the nose angle.
Then, the feature factor obtaining module 122 obtains a plurality of feature factors corresponding to the nose area 600 according to the face width Fw, the face height Fh, the nose width Nw, the nose height Nh and the nose angle (for example, the angle θ2). For example, the feature factor obtaining module 122 calculates a plurality of values such as a ratio between the nose width Nw and the nose height Nh, a ratio between the nose height Nh and the face height Fh, a ratio between the nose width Nw and the face width Fw, a tangent value of the nose angle, etc. to serve as the feature factors corresponding to the nose area 600.
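A similar sketch for the nose-area factors follows, including the averaging of the angles θ2 and θ2′; whether the endpoint 630 is the nose tip is not stated in this excerpt, so the parameter names are illustrative.

```python
# Nose angle (averaged over both sides) and nose-area feature factors.
import math

def nose_angle(endpoint_630, wing_610a, wing_610b):
    def included_angle(p, q):  # angle of the line p -> q against the horizontal
        return abs(math.degrees(math.atan2(q[1] - p[1], q[0] - p[0])))
    return (included_angle(endpoint_630, wing_610a) +
            included_angle(endpoint_630, wing_610b)) / 2

def nose_factors(Nw, Nh, nose_angle_deg, Fw, Fh):
    return {
        "Nose W/H": Nw / Nh,
        "Nose-Face H": Nh / Fh,
        "Nose-Face W": Nw / Fw,
        "Nose tangent": math.tan(math.radians(nose_angle_deg)),
    }
```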
Referring to the lip area 700 of the first image 300, the feature factor obtaining module 122 may obtain a lip width Lw, a lip height Lh, a top lip height TLh and a bottom lip height BLh according to the feature points of the lip area 700. For example, the lip width Lw may be a distance between two endpoints of the two mouth corners, the lip height Lh may be a vertical distance between an upper edge and a lower edge of the lips, and the top lip height TLh and the bottom lip height BLh may respectively be heights of the top lip and the bottom lip.
Moreover, the feature factor obtaining module 122 may obtain a lip angle. The lip angle refers to an angle θ3 included between a reference line L71 and a horizontal line L72. The reference line L71 refers to a straight line passing through both of the endpoint 710 and an endpoint 740a representing a lip peak, and the horizontal line L72 refers to a horizontal line passing through the endpoint 730. However, in an embodiment, the feature factor obtaining module 122 may also obtain an angle θ3′ included between the horizontal line L72 and a straight line L73 passing through both of the endpoint 710 and an endpoint 740b representing a lip peak, and take an average of the angle θ3 and the angle θ3′ as the lip angle.
Then, the feature factor obtaining module 122 obtains a plurality of feature factors corresponding to the lip area 700 according to the face width Fw, the lip width Lw, the lip height Lh, the top lip height TLh, the bottom lip height BLh and the lip angle (for example, the angle θ3). For example, the feature factor obtaining module 122 calculates a plurality of values such as a ratio between the lip width Lw and the lip height Lh, a ratio between the lip width Lw and the face width Fw, a ratio between the top lip height TLh and the bottom lip height BLh, a tangent value of the lip angle, etc. to serve as the feature factors corresponding to the lip area 700.
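A corresponding sketch for the lip-area factors; the labels are illustrative.

```python
# Assembling the lip-area feature factors named above.
import math

def lip_factors(Lw, Lh, TLh, BLh, lip_angle_deg, Fw):
    return {
        "Lip W/H": Lw / Lh,
        "Lip-Face W": Lw / Fw,
        "Top/Bottom lip": TLh / BLh,
        "Lip tangent": math.tan(math.radians(lip_angle_deg)),
    }
```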
Referring to the face area 800 of the first image 300, the feature factor obtaining module 122 may obtain a forehead width FHw and a forehead height FHh according to the feature points of the face area 800. For example, the forehead width FHw may be a width of the forehead, and the forehead height FHh may be a vertical distance between an upper edge of the forehead and the eyebrows.
Moreover, the feature factor obtaining module 122 may further obtain a jaw width Jw and a jaw height Jh. To be specific, the feature factor obtaining module 122 takes a distance between an endpoint 810a and an endpoint 810b as the jaw width Jw. The endpoint 810a and the endpoint 810b refer to endpoints at junctions between a horizontal line L84 passing through a lower edge of the lower lip and both sides of the cheek. The feature factor obtaining module 122 takes a vertical distance between the horizontal line L84 and a lower edge of the jaw as the jaw height Jh.
Moreover, the feature factor obtaining module 122 may further obtain a jaw angle. To be specific, the feature factor obtaining module 122 takes an angle θ4 included between the horizontal line L84 and a reference line L85 passing through both of the endpoint 810a and an endpoint 820a as the jaw angle. However, in an embodiment, the feature factor obtaining module 122 may also obtain an angle θ4′ included between the horizontal line L84 and a reference line L86 passing through both of the endpoint 810b and an endpoint 820b, and take an average of the angle θ4 and the angle θ4′ as the jaw angle.
Then, the feature factor obtaining module 122 obtains a plurality of feature factors corresponding to the face area 800 according to the face width Fw, the face height Fh, the forehead width FHw, the forehead height FHh, the jaw width Jw, the jaw height Jh and the jaw angle (for example, the angle θ4). For example, the feature factor obtaining module 122 calculates a sum of the face height Fh, the forehead height FHh and the jaw height Jh to obtain a height of a face profile. Further, the feature factor obtaining module 122 may calculate a plurality of values such as a ratio between the face width Fw and the height of the face profile, a ratio between the forehead width FHw and the face width Fw, a ratio between the forehead height FHh and the face height Fh, a ratio between the jaw width Jw and the face width Fw, a ratio between the jaw height Jh and the face height Fh, a tangent value of the jaw angle, etc. to serve as the feature factors corresponding to the face area 800.
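A corresponding sketch for the face-area factors, including the face-profile height obtained by summing the face height, the forehead height and the jaw height as described above; the labels are illustrative.

```python
# Assembling the face-area feature factors named above.
import math

def face_area_factors(Fw, Fh, FHw, FHh, Jw, Jh, jaw_angle_deg):
    profile_height = Fh + FHh + Jh  # height of the face profile
    return {
        "Face W/Profile H": Fw / profile_height,
        "Forehead-Face W": FHw / Fw,
        "Forehead-Face H": FHh / Fh,
        "Jaw-Face W": Jw / Fw,
        "Jaw-Face H": Jh / Fh,
        "Jaw tangent": math.tan(math.radians(jaw_angle_deg)),
    }
```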
Moreover, the feature factors of the second image may also be obtained according to the method described in the aforementioned embodiments, and the details thereof are not repeated.
Referring to the flowchart of the present embodiment, after the feature factors are obtained, in step S207, the processor 110 executes the comparison module 123 to compare the first feature factors corresponding to the first image with the second feature factors corresponding to each of the second images, so as to obtain the overall similarity score corresponding to each of the second images.
To be specific, the comparison module 123 obtains a feature difference parameter sim(f,i) of each set of the feature factors of the first image and the second image according to a following equation (1). Each set of the feature factors includes one first feature factor and one second feature factor obtained based on the same definition.
In the above equation (1), user(f) refers to one first feature factor of the first image, and celeb_i(f) refers to the corresponding second feature factor of each of the second images. Namely, the comparison module 123 may calculate the feature difference parameter corresponding to each set of the feature factors.
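Since equation (1) itself is not reproduced in this excerpt, the following sketch assumes one common normalized-difference form, in which identical factors score 1 and the score decreases with the relative difference; this is an assumption rather than the claimed formula.

```python
# Assumed normalized form of the feature difference parameter sim(f, i).
def feature_difference(user_f, celeb_f):
    # sim(f, i): 1.0 when user(f) equals celeb_i(f), smaller as they diverge.
    return 1.0 - abs(user_f - celeb_f) / max(user_f, celeb_f)
```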
Then, the comparison module 123 obtains an area similarity score AreaSim(i) corresponding to each area of each of the second images according to a following equation (2):

AreaSim(i) = (Σ_{f∈AreaFactor} w_f × sim(f, i) / Σ_{f∈AreaFactor} w_f) × 100%  (2)
In the above equation (2), w_f represents a weight value corresponding to each of the feature difference parameters. To be specific, each of the feature difference parameters sim(f, i) belonging to each area of the face image may have a corresponding weight value, and a sum of the weight values of all of the feature difference parameters sim(f, i) of each area (i.e., Σ_{f∈AreaFactor} w_f in the equation (2)) equals a predetermined value. Each of the weight values and the predetermined value of the sum of the weight values may be adjusted according to an actual application. According to the equation (2), the comparison module 123 obtains a product of each of the feature difference parameters sim(f, i) of each area and the corresponding weight value w_f, obtains a sum of these products Σ_{f∈AreaFactor} w_f × sim(f, i), and calculates a percentage of a ratio between the sum of the products and the weight summation Σ_{f∈AreaFactor} w_f to obtain the area similarity score AreaSim(i). The area similarity score may represent a similarity degree of a certain area of the faces in two images.
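A minimal sketch of equation (2) follows; the weight values are application-dependent, as noted above.

```python
# Equation (2): weighted average of one area's feature difference parameters,
# expressed as a percentage.
def area_similarity(sims, weights):
    """sims and weights are equal-length sequences for one face area."""
    weighted_sum = sum(w * s for w, s in zip(weights, sims))
    return weighted_sum / sum(weights) * 100.0  # AreaSim(i), in percent
```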
Then, the comparison module 123 obtains an overall similarity score similarity(Celeb_i) corresponding to each of the second images according to a following equation (3):

similarity(Celeb_i) = (Σ_{Area} AreaSim(i)) / N(Area)  (3)
According to the equation (3), the comparison module 123 obtains a sum Σ_{Area} AreaSim(i) of the area similarity scores corresponding to all of the areas, and divides this sum by the number of areas N(Area) to obtain the overall similarity score similarity(Celeb_i) corresponding to each of the second images. In other words, the comparison module 123 takes an average of all of the area similarity scores corresponding to each of the second images as the overall similarity score corresponding to that second image. The overall similarity score may represent a full-face similarity degree of two images.
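A minimal sketch of equation (3):

```python
# Equation (3): plain average of the area similarity scores over all N(Area) areas.
def overall_similarity(area_scores):
    return sum(area_scores) / len(area_scores)  # similarity(Celeb_i)
```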
After obtaining the area similarity score and the overall similarity score corresponding to each of the second images through the aforementioned equations (1), (2), and (3), the comparison module 123 determines the second image that is the most similar to the first image as an evaluation result according to the overall similarity score corresponding to each of the second images. In the present embodiment, each area of one first image corresponds to one highest area similarity score, and one first image may correspond to one highest overall similarity score.
Taking the eye area 500 as an example, referring to the following Table 1, it is assumed that the first image represents a current user image, and a second image (a) represents an image of a celebrity. "Eye W/H", "Eye-Face W", "Eye-Face H" and "Eye distance" respectively represent four feature factors corresponding to the eye area 500: the ratio between the eye width Ew and the eye height Eh, the ratio between the eye width Ew and a half of the face width Fw, the ratio between the eye height Eh and the face height Fh, and the ratio between the eye distance Ed and the face width Fw.
As shown in Table 1, the comparison module 123 respectively calculates the feature difference parameters sim(f, i) of the four feature factors corresponding to the eye area between the first image and the second image (a) to be 0.7, 0.93, 0.89 and 0.96 according to the above equation (1). Then, the comparison module 123 obtains the area similarity score corresponding to the eye area of the second image (a) to be 85% according to the above equation (2).
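As a worked check, the following sketch applies equation (2) to the four feature difference parameters quoted above. The actual weight values are not disclosed; the equal weights assumed here yield about 87% rather than the quoted 85%, which implies a different, undisclosed weighting.

```python
# Worked eye-area example with the values from Table 1.
sims = [0.7, 0.93, 0.89, 0.96]   # Eye W/H, Eye-Face W, Eye-Face H, Eye distance
weights = [1.0, 1.0, 1.0, 1.0]   # hypothetical equal weights
score = sum(w * s for w, s in zip(weights, sims)) / sum(weights) * 100
print(f"AreaSim(a) = {score:.0f}%")  # -> 87% under equal weights
```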
Similarly, the comparison module 123 further compares the first image with other second images to obtain the area similarity scores corresponding to the other second images. For example, the comparison module 123 obtains the area similarity score of the eye area of another second image (b) to be 93%, and the area similarity score of the eye area of the other second image (c) to be 89%. Therefore, regarding the eye area, the comparison module 123 determines that the second image (b) corresponds to the highest area similarity score, and generates an evaluation result representing that the eye area of the first image is the most similar to the eye area of the second image (b). In an embodiment, the evaluation result may include information of the second image (b) corresponding to the highest area similarity score.
Besides the eye area, the comparison module 123 may respectively determine the highest area similarity scores corresponding to the other areas according to the aforementioned method, so as to generate the corresponding evaluation results. Moreover, the comparison module 123 may also calculate the overall similarity score corresponding to each of the second images according to the equation (3), and determines the highest overall similarity score to generate the evaluation result representing that the first image is the most similar to the second image with the highest overall similarity score. In an embodiment, the evaluation result may include information of the second image corresponding to the highest overall similarity score.
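A minimal sketch of this selection step is shown below, reusing the eye-area scores quoted above; the identifiers are illustrative.

```python
# Generating the evaluation result: report the second image with the highest score.
eye_area_scores = {"a": 85.0, "b": 93.0, "c": 89.0}
best = max(eye_area_scores, key=eye_area_scores.get)
print(f"Eye area is most similar to second image ({best})")  # -> (b)
```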
Referring to the flowchart of the present embodiment, finally, in step S209, the processor 110 executes the output module 124 to output an inform message based on the evaluation result. For example, the inform message may include the second image corresponding to the highest overall similarity score and the overall similarity score thereof, so as to inform the user of the celebrity that the user looks the most similar to.

In an embodiment, the inform message may also include, for each of the areas, the second image corresponding to the highest area similarity score and the area similarity score thereof, such that the user learns which celebrity each area of his face is the most similar to.
In summary, in the invention, the feature factors corresponding to each of the images are obtained based on the feature points of each of the face images, a difference of each of the feature factors is obtained according to the feature factors respectively corresponding to the two images, and an area similarity score corresponding to each area of the face is obtained according to the difference of each of the feature factors, so as to obtain the overall similarity score corresponding to the face image. In this way, the user learns a similarity degree between his own look and those of other people or celebrities.
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the invention without departing from the scope or spirit of the invention. In view of the foregoing, it is intended that the invention cover modifications and variations of this invention provided they fall within the scope of the following claims and their equivalents.