This application claims priority to Taiwan Application Serial Number 110118349, filed May 20, 2021, which is herein incorporated by reference.
The present disclosure relates to a localization method and a localization system. More particularly, the present disclosure relates to an eye center localization method and a localization system thereof.
An eye center localization method calculates an eye center coordinate from an image containing a human face. However, conventional eye center localization methods are only applicable to an image of a frontal face or an image of a head posture within a specific rotating angle. If the rotating angle of the head in the image is too large, the conventional eye center localization methods cannot locate the eye center in the image correctly.
Thus, a method and a system for locating the eye center which are not restricted by the rotating angle of the head in the image are commercially desirable.
According to one aspect of the present disclosure, an eye center localization method is configured to locate an eye center position information from an image. The eye center localization method includes performing an image sketching step, a frontal face generating step, an eye center marking step and a geometric transforming step. The image sketching step is performed to drive a processing unit to sketch a face image from the image of a database. The frontal face generating step is performed to drive the processing unit to transform the face image into a frontal face image according to a frontal face generating model. The eye center marking step is performed to drive the processing unit to mark a frontal eye center position information on the frontal face image according to a gradient method. The geometric transforming step is performed to drive the processing unit to calculate two rotating variables between the face image and the frontal face image, and calculate the eye center position information according to the two rotating variables and the frontal eye center position information.
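For purposes of illustration only, the following is a non-limiting Python sketch of the four steps summarized above. The callables passed in (sketch_face, generate_frontal, mark_centers, estimate_rotation, rotate_back) are hypothetical placeholders for the image sketching step, the frontal face generating step, the eye center marking step and the geometric transforming step, since this summary does not fix any particular implementation.

```python
def locate_eye_centers(image, sketch_face, generate_frontal, mark_centers,
                       estimate_rotation, rotate_back):
    # Image sketching step: sketch (crop) the face image from the input image.
    face_image = sketch_face(image)
    # Frontal face generating step: transform the face image into a frontal
    # face image according to a frontal face generating model.
    frontal_image = generate_frontal(face_image)
    # Eye center marking step: mark the frontal eye center position
    # information on the frontal face image according to a gradient method.
    frontal_centers = mark_centers(frontal_image)
    # Geometric transforming step: calculate the two rotating variables and
    # map the frontal eye centers back to the original face image.
    theta1, theta2 = estimate_rotation(face_image, frontal_image)
    return rotate_back(frontal_centers, theta1, theta2)
```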
According to another aspect of the present disclosure, an eye center localization system is configured to locate an eye center position information from an image. The eye center localization system includes a database and a processing unit. The database is configured to access the image, a frontal face generating model and a gradient method. The processing unit is electrically connected to the database. The processing unit receives the image, the frontal face generating model and the gradient method, and is configured to implement an eye center localization method that includes performing an image sketching step, a frontal face generating step, an eye center marking step and a geometric transforming step. The image sketching step is performed to sketch a face image from the image. The frontal face generating step is performed to transform the face image into a frontal face image according to the frontal face generating model. The eye center marking step is performed to mark a frontal eye center position information on the frontal face image according to the gradient method. The geometric transforming step is performed to calculate two rotating variables between the face image and the frontal face image, and calculate the eye center position information according to the two rotating variables and the frontal eye center position information.
The present disclosure can be more fully understood by reading the following detailed description of the embodiment, with reference made to the accompanying drawings as follows:
The embodiments will be described with the drawings. For clarity, some practical details will be described below. However, it should be noted that the present disclosure should not be limited by the practical details; that is, in some embodiments, the practical details are unnecessary. In addition, for simplifying the drawings, some conventional structures and elements will be simply illustrated, and repeated elements may be represented by the same labels.
It will be understood that when an element (or device) is referred to as being “connected to” another element, it can be directly connected to the other element, or it can be indirectly connected to the other element, that is, intervening elements may be present. In contrast, when an element is referred to as being “directly connected to” another element, there are no intervening elements present. In addition, although the terms first, second, third, etc. are used herein to describe various elements or components, these elements or components should not be limited by these terms. Consequently, a first element or component discussed below could be termed a second element or component.
Please refer to
Please refer to
Please refer to
D* is a maximum value of a Euclidean distance from the estimate right eye center coordinate (AECr_x, AECr_y) and the estimate left eye center coordinate (AECl_x, AECl_y) to the chin feature point p8. α1 and α2 are adjustable coefficients. (ULC_x, ULC_y) is a coordinate of a starting point of sketching the facial area.
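For purposes of illustration only, the following non-limiting Python sketch shows one plausible reading of the facial area sketching described above. Because the corresponding formulas are not reproduced here, the use of the eye-center midpoint as the reference point and the square crop of side 2·D* are assumptions, not the actual formulas of the present disclosure.

```python
import numpy as np

def sketch_facial_area(aec_r, aec_l, p8, alpha1=1.0, alpha2=1.0):
    # Estimated right/left eye center coordinates and chin feature point p8.
    aec_r, aec_l, p8 = (np.asarray(p, dtype=float) for p in (aec_r, aec_l, p8))
    # D*: maximum Euclidean distance from the two estimated eye centers to p8.
    d_star = max(np.linalg.norm(aec_r - p8), np.linalg.norm(aec_l - p8))
    # Begin point (ULC_x, ULC_y) of the sketched facial area, here assumed to
    # be offset from the eye-center midpoint by alpha1*D* and alpha2*D*.
    mid = (aec_r + aec_l) / 2.0
    ulc = mid - np.array([alpha1 * d_star, alpha2 * d_star])
    side = 2.0 * d_star  # assumed square facial area with side proportional to D*
    return ulc, side
```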
Please refer to
The eye center marking step S13 is performed to drive the processing unit to mark a frontal eye center position information C on the frontal face image IFf according to a gradient method. The eye center marking step S13 includes a weight adjusting step S132. The weight adjusting step S132 is performed to adjust a weight value of the frontal face image IFf according to an Iris-Ripple filter method. More particularly, the frontal eye center position information C includes a frontal right eye center coordinate (Cr_x, Cr_y) and a frontal left eye center coordinate (Cl_x, Cl_y). During the marking of the frontal eye center position information C, the shadow of specific areas (such as an eyelid area, a canthus area and an eyebrow area) of the frontal face image IFf interferes with the gradient of the frontal face image IFf and reduces the accuracy of marking the frontal eye center position information C by the gradient method. Thus, adjusting the weight value by the Iris-Ripple filter method can increase the locating accuracy. The Iris-Ripple filter method is satisfied by a formula (6) and a formula (7), and the Iris-Ripple filter method combined with the gradient method is satisfied by a formula (8).
Rr* represents the eye area, IR(x, y) represents the coordinate of the current adjusting pixel, Eyem represents a column number of the pixel of the eye area, Eyen represents a row number of the pixel of the eye area, r represents a radius of the eye area, τ=2π, {Lx, Ly} is a coordinate of a pixel which is calculated by a radius perimeter taking the estimate right eye center coordinate (AECr_x, AECr_y) and the estimate left eye center coordinate (AECl_x, AECl_y) as centers, w(⋅) is a weight value before calculating, C′ represents a current eye center coordinate, N is a pixel number of the eye area, IFe(AEC(x, y)) is a strength of predicting the center of the eye area, d(x, y) is a displacement vector between c and p(x, y), g(x, y) is a gradient vector, and α3 is a maximum grayscale.
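For purposes of illustration only, the following non-limiting Python sketch implements a means-of-gradients style objective consistent with the variables defined above (candidate centers, displacement vectors d, gradient vectors g, a weight w(⋅) and N pixels). Since formulas (6) to (8) are not reproduced here, the Iris-Ripple weight is taken as a precomputed per-pixel map and the exact objective is an assumption.

```python
import numpy as np

def gradient_eye_center(eye_gray, weight=None):
    """Return a current eye center coordinate C' for an eye-area grayscale patch."""
    eye_gray = eye_gray.astype(float)
    gy, gx = np.gradient(eye_gray)                 # gradient vectors g(x, y)
    mag = np.hypot(gx, gy)
    mask = mag > mag.mean()                        # keep only strong gradients
    ux, uy = gx[mask] / mag[mask], gy[mask] / mag[mask]
    ys, xs = np.nonzero(mask)
    n = xs.size                                    # N: pixel number of the eye area
    if weight is None:
        weight = np.ones_like(eye_gray)            # w(.): weight before adjustment

    best_score, center = -np.inf, (0, 0)
    h, w = eye_gray.shape
    for cy in range(h):                            # candidate center c
        for cx in range(w):
            dx, dy = xs - cx, ys - cy              # displacement vectors d(x, y)
            norm = np.hypot(dx, dy) + 1e-9
            dot = (dx / norm) * ux + (dy / norm) * uy
            score = weight[cy, cx] * np.sum(np.maximum(dot, 0.0) ** 2) / n
            if score > best_score:
                best_score, center = score, (cx, cy)
    return center
```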
Please refer to
The rotating variable faceθ1 is a rotating variable between the face image If and the frontal face image IFf, corresponding to a rotation about the x axis (i.e., a yaw rotation); the rotating variable faceθ2 is a rotating variable between the face image If and the face transforming image If′, corresponding to a rotation about the z axis (i.e., a roll rotation). L1 is a linear relation equation between the estimate right eye center coordinate (AECr_x, AECr_y) and the estimate left eye center coordinate (AECl_x, AECl_y), L2 is a linear relation equation between the frontal right eye center coordinate (Cr_x, Cr_y) and the frontal left eye center coordinate (Cl_x, Cl_y), and L3 is a linear relation equation between the estimate right eye center coordinate (AECr_x, AECr_y) and the estimate left eye center coordinate (AECl_x, AECl_y) after transforming into the three-dimensional coordinate. m1 is a slope of the linear relation equation L1, m2 is a slope of the linear relation equation L2, and m3 is a slope of the linear relation equation L3.
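For purposes of illustration only, the following non-limiting Python sketch derives the two rotating variables from the eye-center lines described above. Because the corresponding formula is not reproduced here, taking the roll variable faceθ2 as the difference of the in-plane angles of L1 and L2, and the yaw variable faceθ1 from the foreshortening of the inter-ocular distance, are assumptions.

```python
import numpy as np

def rotation_variables(aec_r, aec_l, c_r, c_l):
    aec_r, aec_l, c_r, c_l = (np.asarray(p, dtype=float)
                              for p in (aec_r, aec_l, c_r, c_l))
    # m1: slope of L1 (estimate eye centers in the face image If).
    m1 = (aec_l[1] - aec_r[1]) / (aec_l[0] - aec_r[0])
    # m2: slope of L2 (frontal eye centers in the frontal face image IFf).
    m2 = (c_l[1] - c_r[1]) / (c_l[0] - c_r[0])
    # faceθ2: assumed roll rotating variable from the in-plane angle difference.
    face_theta2 = np.arctan(m1) - np.arctan(m2)
    # faceθ1: assumed yaw rotating variable from inter-ocular foreshortening.
    ratio = np.linalg.norm(aec_l - aec_r) / np.linalg.norm(c_l - c_r)
    face_theta1 = np.arccos(np.clip(ratio, -1.0, 1.0))
    return face_theta1, face_theta2
```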
The eye center transforming step S144 is performed to predict a depth transforming coordinate (Ierc1_x, Ierc1_y) of the face image If with respect to the frontal face image IFf according to the two rotating variables faceθ1 and faceθ2, and to calculate the eye center position information IeC according to the depth transforming coordinate (Ierc1_x, Ierc1_y). The eye center transforming step S144 predicts the depth transforming coordinate (Ierc1_x, Ierc1_y) by a formula (10):
The eye center position information IeC includes a right eye center coordinate (IerC_x, IerC_y) and a left eye center coordinate (IelC_x, IelC_y), and (IFAECr_x, IFAECr_y) is a frontal face estimate right eye center coordinate.
In detail, after the depth transforming coordinate (Ierc1_x, Ierc1_y) is obtained from the formula (10), in order to avoid a difference between the actual value and the frontal eye center position information C calculated from the frontal face image IFf generated by the frontal face generating model, the eye center transforming step S144 can adjust the depth transforming coordinate (Ierc1_x, Ierc1_y) by a formula (11):
(Ierc2_x, Ierc2_y) is a frontal right eye center coordinate which has a large difference from the actual value, and α4 is a correction coefficient. Thus, the eye center localization method 100a of the present disclosure adjusts the eye center position information IeC by the correction coefficient α4 to avoid the difference caused by the frontal face image IFf, thereby increasing the accuracy of the eye center position information IeC.
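For purposes of illustration only, the following non-limiting Python sketch stands in for formulas (10) and (11), which are not reproduced in this description. The yaw foreshortening, the in-plane roll rotation about the frontal face estimate eye center, and the blending by the correction coefficient α4 are all assumptions about their general form, not the actual formulas.

```python
import numpy as np

def transform_eye_center(c_frontal, ifaec_r, face_theta1, face_theta2, alpha4=1.0):
    c = np.asarray(c_frontal, dtype=float)     # frontal eye center coordinate
    anchor = np.asarray(ifaec_r, dtype=float)  # frontal face estimate eye center
    # Assumed stand-in for formula (10): scale the offset by cos(faceθ1) for
    # yaw, then rotate it by faceθ2 for roll, giving the depth transforming
    # coordinate (Ierc1_x, Ierc1_y).
    offset = (c - anchor) * np.array([np.cos(face_theta1), 1.0])
    rot = np.array([[np.cos(face_theta2), -np.sin(face_theta2)],
                    [np.sin(face_theta2),  np.cos(face_theta2)]])
    ierc1 = anchor + rot @ offset
    # Assumed stand-in for formula (11): pull the result toward the anchor by
    # the correction coefficient alpha4 to compensate for frontalization error.
    return anchor + alpha4 * (ierc1 - anchor)
```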
Please refer to
Please refer to
The database 210 is configured to access the image I, a frontal face generating model 20 and a gradient method 30. In detail, the database 210 can be a memory or other data accessing element.
The processing unit 220 is electrically connected to the database 210. The processing unit 220 receives the image I, the frontal face generating model 20 and the gradient method 30, and the processing unit 220 is configured to implement the eye center localization methods 100, 100a. In detail, the processing unit 220 can be a microprocessor, a central processing unit (CPU) or other electronic processing unit, but the present disclosure is not limited thereto. Thus, the eye center localization system 200 can locate the eye center position information IeC from an image I with a non-frontal face.
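For purposes of illustration only, the following non-limiting Python sketch models the system of this embodiment as a database object holding the image, the frontal face generating model and the gradient method, and a processing unit that implements the localization method. The class and attribute names are hypothetical and are not part of the present disclosure.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Database:
    image: Any                                   # the image I
    frontal_face_generating_model: Callable      # frontal face generating model 20
    gradient_method: Callable                    # gradient method 30

class ProcessingUnit:
    def __init__(self, database: Database):
        self.database = database                 # electrically connected database 210

    def run(self, localization_method: Callable):
        # The processing unit receives the image, the frontal face generating
        # model and the gradient method, and implements the localization method.
        db = self.database
        return localization_method(db.image,
                                   db.frontal_face_generating_model,
                                   db.gradient_method)
```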
According to the aforementioned embodiments and examples, the advantages of the present disclosure are described as follows.
1. The eye center localization method and the localization system thereof of the present disclosure can locate the eye center position information from an image with a non-frontal face.
2. The eye center localization method of the present disclosure adjusts the eye center position information by the correction coefficient to avoid the difference caused by the frontal face image, thereby increasing the accuracy of the eye center position information.
3. The eye center localization method of the present disclosure can predict the eye center position information from the image directly by the eye center locating model.
Although the present disclosure has been described in considerable detail with reference to certain embodiments thereof, other embodiments are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the embodiments contained herein.
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present disclosure without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the present disclosure cover modifications and variations of this disclosure provided they fall within the scope of the following claims.
Number | Date | Country | Kind |
---|---|---|---
110118349 | May 2021 | TW | national |