The present disclosure relates to the technical field of image processing, and particularly relates to a method and apparatus for determining a human-face size.
A method of non-invasive positive-pressure ventilation has been extensively used in Obstructive Sleep Apnea Syndrome (referred to for short as OSA), Chronic Obstructive Pulmonary Disease (referred to for short as COPD) and so on. No pipe is required to be inserted into the airway of the patient by surgery; instead, by using a blower, a pipeline and a patient interface device, a continuous positive airway pressure (CPAP) ventilation or a varying-pressure ventilation is delivered to the airway of the patient, for example, a bi-level pressure varying with the respiratory cycle of the patient, or an automatically regulated pressure ventilation varying with the monitored condition of the patient. Such a pressure-support therapy is usually also used in obstructive hypopnea, Upper Airway Resistance Syndrome (referred to for short as UARS), congestive heart failure and so on.
A non-invasive ventilation treatment device includes an interface device worn on the face of the patient. Such an interface device generally refers to a face mask that surrounds and seals the nose and the mouth of the patient. In the treatment, an external blower, for example a breathing machine, serves as the pressure-supporting device, and the patient interface device connects the gas pressure supplied by the breathing machine to the airway of the patient, to send the respiratory air flow into the airway of the patient.
In order to adapt to different face sizes, in many cases the face masks are provided in different models, such as a large size, a medium size and a small size. In order to provide a more effective treatment, it is required to select the face mask that is suitable for the size of the face of the patient. When a face mask that does not match the face size is selected, air leakage or other problems might happen, which affects the wearing comfort and deteriorates the effect of the treatment. Therefore, it is very important to conveniently and quickly measure the face size of the patient, so as to select the face mask that is suitable for the face size of the patient. Regarding a patient having a malformed face, the face masks of the standard models cannot be used, and it is required to customize a dedicated face mask according to the contour of the face of the patient. In this case, it is required to conveniently and quickly acquire the 3D contour information of the face of the patient.
In the selection or personalized designing of medical face masks, a nose measuring card shown in
In the related art, in the selection or personalized designing of the medical face masks, a measuring tool such as a scale ruler as shown in
The method of measuring the nose width by using the nose measuring card in the related art, on the one hand, can merely obtain an approximate range of the nose width of the patient, and cannot be used for the personalized customization of the face mask. On the other hand, when the model selection of the face mask is performed merely by using the nose width, only a single factor is taken into consideration, and a poor fit at other positions, such as the bridge of the nose and the chin, might result.
The method of directly measuring the face size of the patient by using a scale ruler in the related art involves tedious data recording and cumbersome operation, and the manual measurement is prone to large errors.
In view of the above problems, no effective solutions have been proposed at present.
The embodiments of the present disclosure provide a method and apparatus for determining a human-face size, which may accurately measure the human-face size of a to-be-measured target, to solve the technical problem in the related art of low precision in the measurement of human-face sizes.
In the first aspect, an embodiment of the present disclosure provides a method for determining a human-face size, where the method includes:
In the second aspect, an embodiment of the present disclosure further provides a method for determining a human-face size, where the method includes:
In the third aspect, an embodiment of the present disclosure further provides a method for determining a human-face size, where the method includes:
In the fourth aspect, an embodiment of the present disclosure further provides a method for determining a face-mask model, where the method includes:
In the fifth aspect, an embodiment of the present disclosure further provides an apparatus for determining a human-face size, where the apparatus includes:
In the sixth aspect, an embodiment of the present disclosure further provides an apparatus for determining a human-face size, where the apparatus includes:
In the seventh aspect, an embodiment of the present disclosure further provides an apparatus for determining a human-face size, where the apparatus includes:
The embodiments of the present disclosure have the following advantages:
An embodiment of the present disclosure includes collecting a first to-be-measured human-face image of a to-be-measured target, where the first to-be-measured human-face image includes a first to-be-measured human face of the to-be-measured target and a circular reference object; acquiring a first pixel value of the first to-be-measured human face, and a second pixel value of the circular reference object; in response to a circularity of the circular reference object being greater than a preset circularity threshold, acquiring an angle of inclination of the circular reference object; and according to the first pixel value, the second pixel value, the angle of inclination and an actual size of the circular reference object, determining a first size of the first to-be-measured human face. By using the proportional relation between the actual size of the circular reference object and its pixel value in the first to-be-measured human-face image, and according to the angle of inclination of the circular reference object, the first size of the first to-be-measured human face is determined, which simplifies the steps of the measuring operation in the related art, and increases the accuracy of the measurement of the human-face size.
Furthermore, another embodiment of the present disclosure includes, by using a second image collecting device, collecting a second to-be-measured human-face image and a third to-be-measured human-face image of a to-be-measured target, where an image-collection distance of the second to-be-measured human-face image and an image-collection distance of the third to-be-measured human-face image are unequal; acquiring a second pixel value of the second to-be-measured human-face image, and a third pixel value of the third to-be-measured human-face image; and according to an image-collection-distance difference between the second to-be-measured human-face image and the third to-be-measured human-face image, a focal length of the second image collecting device, the second pixel value and the third pixel value, determining a second size of a second to-be-measured human face of the to-be-measured target. By photographing the plurality of human-face images of the to-be-measured target at unequal image-collection distances, and subsequently using the image-collection-distance difference and the pixel values of the different human-face images, the second size of the second to-be-measured human face is determined, which further increases the accuracy of the measurement of the human-face size.
Furthermore, another embodiment of the present disclosure includes, by using a third image collecting device, collecting a fourth to-be-measured human-face image of a to-be-measured target, where the third image collecting device includes at least two cameras; acquiring at least two incident angles of inclination corresponding to information collecting points in the fourth to-be-measured human-face image, where the incident angles of inclination are obtained based on the third image collecting device when diffuse reflection from the information collecting points irradiates into the at least two cameras; according to the at least two incident angles of inclination and a camera distance between the at least two cameras, determining positions of the information collecting points; and according to the positions of the information collecting points, determining a third size of a third to-be-measured human face in the fourth to-be-measured human-face image. This avoids manually measuring the human face by using a scale ruler, which simplifies the measuring operation. Moreover, by photographing the fourth to-be-measured human-face image by using a plurality of lenses, the third size of the third to-be-measured human face may be determined based on the camera distance between the cameras, which ensures the accuracy of the measurement of the human-face size.
The above description is merely a summary of the technical solutions of the present disclosure. In order to enable a clearer understanding of the elements of the present disclosure, so that they may be implemented according to the contents of the description, and in order to make the above and other purposes, features and advantages of the present disclosure more apparent and understandable, the particular embodiments of the present disclosure are provided below.
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the figures that are required to describe the embodiments of the present disclosure will be briefly described below. Apparently, the figures that are described below are merely some embodiments of the present disclosure, and a person skilled in the art can obtain other figures according to these figures without creative effort.
The technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings of the embodiments of the present disclosure. Apparently, the described embodiments are merely certain embodiments of the present disclosure, rather than all of the embodiments. All of the other embodiments that a person skilled in the art obtains on the basis of the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
The First Method Embodiment
Referring to
In the present embodiment, a first to-be-measured human-face image of a to-be-measured target is collected, where the first to-be-measured human-face image may be a photograph of the whole or a local part of a human body. The first to-be-measured human-face image includes the first to-be-measured human face that is required to be measured and a preset reference object, for example, a photograph of the full body, a photograph of the upper half body, a photograph of the head or the face, and so on, of a human body, which is not limited in any form in the present embodiment. The first image collecting device according to the present embodiment generally collects a planar image of the to-be-measured target, and the first image collecting device includes but is not limited to a photo camera, a video camera and other devices that have the function of image collection.
In the present embodiment, the circular reference object includes but is not limited to a coin, a circular cup cover, a circular disk and so on. Furthermore, the preset reference object further includes but is not limited to a circular paper piece of a known size. In the present embodiment, the particular type and shape of the circular reference object are not limited in any form. The circularity according to the present embodiment refers to the minimum difference between the radii of two concentric circles that contain the actual cross-sectional contour between them. A lower value of the difference indicates that the shape is closer to a perfect circle, and a higher value of the difference indicates that the shape deviates more from a perfect circle.
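As an illustration only, the circularity described above may be approximated from a detected contour. The following Python sketch uses the contour centroid as the common centre of the two concentric circles; this is a simplification of the true minimum-zone criterion (which searches over all centres), and the function and variable names are assumptions, not part of the disclosure:

```python
import math

def circularity(points):
    """Approximate circularity (roundness) of a closed contour.

    Per the description above, circularity is the radial gap between two
    concentric circles that contain the contour between them: the smaller
    the gap, the closer the shape is to a perfect circle.  The centroid is
    used as the common centre, a simplification of the minimum-zone rule.
    """
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    return max(radii) - min(radii)
```

For a near-perfect circle the returned gap is close to zero; for a foreshortened (elliptical) projection it grows towards the difference between the semi-major and semi-minor axes, which is what the preset circularity threshold tests.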
Furthermore, when the circularity of the circular reference object is less than or equal to the preset circularity threshold, it is deemed that, currently, the plane of the circular reference object is parallel to the plane where the image collecting device is located. Referring to
However, in the case that the image-collection angle at which the collecting device collects the first to-be-measured human-face image of the to-be-measured target is a non-standard angle, the circularity of the circular reference object is greater than the preset circularity threshold, and therefore it is required to acquire the angle of inclination of the circular reference object. Referring to
When the circularity of the circular reference object is greater than the preset circularity threshold, it is determined that the first to-be-measured human face including the circular reference object has been photographed at a non-standard angle, and accordingly the first to-be-measured human-face image is deemed to be a projection on the photograph or negative film. In that case, it is required to determine the angle of inclination of the circular reference object, to rectify the deviation caused by the inclination. Therefore, when the circularity of the circular reference object in the first to-be-measured human-face image is greater than the preset circularity threshold, the angle of inclination of the circular reference object is determined, and, after the angle of inclination has been determined, the first size of the first to-be-measured human face is subsequently determined according to the angle of inclination.
In other examples of the present embodiment, on the above basis, the coin adhered to the forehead of the to-be-measured target is replaced by a credit card, which, certainly, may also be a non-standard item whose size may be easily measured (including but not limited to a circular paper piece). The implementations and the calculation processes of the other examples of the present embodiment follow the same principle as when a coin is used as the preset reference object, and are not discussed further herein.
It should be noted that the present embodiment includes collecting a first to-be-measured human-face image of a to-be-measured target, where the first to-be-measured human-face image includes a first to-be-measured human face of the to-be-measured target and a circular reference object; acquiring a first pixel value of the first to-be-measured human face, and a second pixel value of the circular reference object; if a circularity of the circular reference object is greater than a preset circularity threshold, acquiring an angle of inclination of the circular reference object; and according to the first pixel value, the second pixel value, the angle of inclination and an actual size of the circular reference object, determining a first size of the first to-be-measured human face. By using the proportion relation between the actual size and the pixel value in the first to-be-measured human-face image of the circular reference object, and according to the angle of inclination of the circular reference object, the first size of the first to-be-measured human face is determined, which simplifies the steps of the measuring operation in the related art, and increases the accuracy of the measurement on the human-face size.
Optionally, the present embodiment further includes but is not limited to: if the circularity is less than or equal to the preset circularity threshold, according to the first pixel value, the second pixel value and the actual size, determining the first size.
Particularly, in an example, referring to
By using the above formula, the actual sizes of the particular positions in the first to-be-measured human face may be obtained as follows:
By using the above example, in response to the circularity of the circular reference object being less than or equal to the preset circularity threshold, the first size of the first to-be-measured human face may be determined according to the first pixel value, the second pixel value and the actual size of the circular reference object, which increases the precision of the acquired human-face size.
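The proportional calculation above can be sketched as follows. This is a minimal illustration, not the disclosure's own code; the function name, variable names and pixel values are assumptions, and the 22.25 mm diameter is taken from the later description of the 1 yuan coin:

```python
def face_size_mm(face_pixels, ref_pixels, ref_diameter_mm):
    """Estimate an actual face dimension from its pixel length, using the
    proportional relation  actual / pixels = ref_actual / ref_pixels,
    valid when the reference object's plane is parallel to the image plane."""
    return face_pixels * ref_diameter_mm / ref_pixels

# Illustrative numbers: a 1 yuan coin (22.25 mm) spans 89 pixels in the image,
# and the distance between the centers of the two eyes spans 250 pixels.
eye_distance_mm = face_size_mm(250, 89, 22.25)
```

The same scale factor (millimetres per pixel) can then be applied to any other pixel distance in the image, such as the nose width or the distance from the nasal tip to the chin.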
Optionally, in the present embodiment, in response to the circularity being greater than the preset circularity threshold, the step of determining the angle of inclination of the circular reference object includes but is not limited to: acquiring a standard major axis of the circular reference object, where the standard major axis refers to the longest diameter of the circular reference object in the image when the circularity is greater than the preset circularity threshold; acquiring a first major axis in a first direction and a second major axis in a second direction in the circular reference object, where the first direction refers to the straight line where the connecting line between the centers of the two eyes in the first to-be-measured human-face image is located, and the second direction refers to the straight line where the connecting line between the eyebrow center and the nasal tip in the first to-be-measured human-face image is located; according to the standard major axis and the first major axis, determining a horizontal angle of inclination of the circular reference object; and according to the standard major axis and the second major axis, determining a vertical angle of inclination of the circular reference object.
Particularly, referring to
Referring to
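Under the foreshortening relation implied above, a diameter measured along a tilted direction appears shortened by the cosine of the angle of inclination relative to the standard (longest) major axis. The following sketch recovers the horizontal and vertical angles on that assumption; the function name and the pixel values are illustrative, not from the disclosure:

```python
import math

def tilt_angle_deg(axis_px, standard_major_px):
    """Angle of inclination from foreshortening: a diameter along the tilted
    direction shrinks to standard_major * cos(theta) in the projection, so
    theta = arccos(axis / standard_major)."""
    ratio = min(axis_px / standard_major_px, 1.0)  # guard against rounding noise
    return math.degrees(math.acos(ratio))

# Illustrative numbers: the first major axis (along the eye line) measures
# 77 px, the second (eyebrow center to nasal tip) 89 px, against a standard
# major axis of 89 px.
horizontal_tilt = tilt_angle_deg(77.0, 89.0)
vertical_tilt = tilt_angle_deg(89.0, 89.0)  # no vertical foreshortening -> 0 degrees
```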
Optionally, in the present embodiment, the step of, according to the first pixel value, the second pixel value and the actual size of the preset reference object, determining the first size of the first to-be-measured human face includes but is not limited to: according to the first pixel value, the second pixel value, the horizontal angle of inclination, the vertical angle of inclination and the actual size, determining the first size.
Particularly, when the circular reference object is a 1 yuan coin, the diameter of the 1 yuan coin is 22.25 mm. In
and accordingly the true spatial distance between the centers of the two eyes is:
By substituting the expression in formula (5) into formula (7), it is obtained that:
Likewise, based on
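Since formulas (5)-(7) are not reproduced here, the following is only a plausible sketch of the corrected proportional calculation under the foreshortening assumption: the standard major axis supplies the millimetre-per-pixel scale along the unforeshortened direction, and a pixel length measured along a tilted direction is divided by the cosine of the corresponding angle of inclination. Names and numbers are illustrative:

```python
import math

def corrected_size_mm(length_px, standard_major_px, ref_diameter_mm, tilt_deg):
    """Tilt-corrected size estimate: derive mm-per-pixel from the standard
    major axis of the reference object (22.25 mm for a 1 yuan coin), then
    undo the cos(tilt) projection of a length measured along the tilted
    direction."""
    scale = ref_diameter_mm / standard_major_px          # mm per pixel
    return length_px * scale / math.cos(math.radians(tilt_deg))

# Illustrative: eye-center distance of 230 px, corrected by a 20 degree
# horizontal angle of inclination.
eye_distance_mm = corrected_size_mm(230.0, 89.0, 22.25, 20.0)
```

A distance along the eyebrow-center-to-nasal-tip line would likewise be corrected by the vertical angle of inclination.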
Optionally, in the present embodiment, before the step of collecting the first to-be-measured human-face image of the to-be-measured target, the method further includes but is not limited to: on an image previewing interface of the first image collecting device for collecting the first to-be-measured human-face image, exhibiting an auxiliary line, where the auxiliary line is used to indicate the spatial position and the image-collection angle of the first image collecting device.
Particularly, in order to effectively reduce the error generated when the human-face image of the to-be-measured target is collected at a non-standard image-collection angle, a photographing assisting means may be provided in the first image collecting device. For example, when the human-face image of the to-be-measured target is being collected, an auxiliary line is exhibited in a preview frame of the lens image, to guide the user to adjust the position of the camera, thereby reducing the horizontal angle of inclination and the vertical angle of inclination. The auxiliary line may be a horizontal line, to guide the user, during photographing, to align the eyes with the horizontal line, and may also be a rectangular auxiliary line or a circular auxiliary line having a similar function, so that the preset reference object is placed within the area enclosed by the auxiliary line, and so on.
Optionally, in the present embodiment, the circular reference object includes but is not limited to an iris in the first to-be-measured human-face image.
The present embodiment includes collecting a first to-be-measured human-face image of a to-be-measured target, where the first to-be-measured human-face image includes a first to-be-measured human face of the to-be-measured target and a circular reference object; acquiring a first pixel value of the first to-be-measured human face, and a second pixel value of the circular reference object; in response to a circularity of the circular reference object being greater than a preset circularity threshold, acquiring an angle of inclination of the circular reference object; and according to the first pixel value, the second pixel value, the angle of inclination and an actual size of the circular reference object, determining a first size of the first to-be-measured human face. By using the proportional relation between the actual size of the circular reference object and its pixel value in the first to-be-measured human-face image, and according to the angle of inclination of the circular reference object, the first size of the first to-be-measured human face is determined, which simplifies the steps of the measuring operation in the related art, and increases the accuracy of the measurement of the human-face size.
The Second Method Embodiment
Referring to
Step 801: by using a second image collecting device, collecting a second to-be-measured human-face image and a third to-be-measured human-face image of a to-be-measured target, where an image-collection distance of the second to-be-measured human-face image and an image-collection distance of the third to-be-measured human-face image are unequal;
Step 802: acquiring a second pixel value of the second to-be-measured human-face image, and a third pixel value of the third to-be-measured human-face image; and
Step 803: according to an image-collection-distance difference between the second to-be-measured human-face image and the third to-be-measured human-face image, a focal length of the second image collecting device, the second pixel value and the third pixel value, determining a second size of a second to-be-measured human face of the to-be-measured target.
In the present embodiment, the collected second to-be-measured human-face image and third to-be-measured human-face image of the to-be-measured target include but are not limited to a photograph of the whole or a local part of a human body. The second to-be-measured human-face image or the third to-be-measured human-face image includes the second to-be-measured human face that is required to be measured, for example, a photograph of the full body, a photograph of the upper half body, a photograph of the head or the face, and so on, of a human body. In the present embodiment, the particular photograph type is not limited in any form.
The second image collecting device according to the present embodiment generally collects a planar image of the to-be-measured target, and the second image collecting device includes but is not limited to a photo camera, a video camera and other devices that have the function of image collection. It should be noted that, when the second image collecting device collects the second to-be-measured human-face image and the third to-be-measured human-face image of the to-be-measured target, the image-collection distances of the two collections are unequal, but the image-collection angles are equal. In order to maintain the accuracy of the acquired second human-face size, it is required to keep the camera parallel to the human-face plane of the second to-be-measured human face.
Furthermore, optionally, the present embodiment includes but is not limited to the second to-be-measured human-face image and the third to-be-measured human-face image that are obtained by the second image collecting device performing collection twice at unequal distances, and may also include at least two human-face images that are obtained by the second image collecting device performing collection at least twice at unequal distances, for example, the second to-be-measured human-face image, the third to-be-measured human-face image and another to-be-measured human-face image that are obtained by collection at three unequal distances. By collecting the plurality of to-be-measured human-face images, the accuracy of the actual measurement of the second to-be-measured human face is increased, thereby reducing the error.
Referring to
In the example in
In
In most cases, the measurement of the image-collection distances μ1 and μ2 is difficult. If the difference between the two image-collection distances is defined as:
ΔX=μ1−μ2 (14)
Then, by simultaneously solving the formulas (10)-(14), the object size may be obtained:
It can be known from the formula (15) that it is merely required to measure the difference ΔX between the image-collection distances, and the sizes n and n′ of the same object in the two photographs collected at different distances, and then the true size of the specified object in the second to-be-measured human face may be estimated. It can be understood that, when the object size h refers to the sizes of multiple features on the face of the patient, the true sizes of those features may be individually estimated.
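Because formulas (10)-(15) are not reproduced here, the following is only a plausible reconstruction under the thin-lens approximation with image distance ≈ focal length; the sensor pixel pitch p is an assumed parameter not named in the text, and all names and numbers are illustrative:

```python
def object_size(delta_x_mm, focal_mm, n_far_px, n_near_px, pixel_pitch_mm):
    """Plausible reconstruction of the object-size estimate:
        u ~ h * f / (n * p)                    for each shot, so
        delta_x = u_far - u_near = (h * f / p) * (1/n_far - 1/n_near),
    giving
        h = delta_x * p * n_far * n_near / (f * (n_near - n_far)),
    where n_far and n_near are the pixel sizes of the same feature in the
    farther and nearer photographs."""
    return (delta_x_mm * pixel_pitch_mm * n_far_px * n_near_px
            / (focal_mm * (n_near_px - n_far_px)))
```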
In practical application scenes, when a high size precision is not required, the image distance v2 is approximately equal to the focal length f (v2 ≈ f). The image-collection distance from the specified object to the camera may then be estimated as follows:
If the specified object is a certain tiny feature of the to-be-measured target, the distance from the tiny feature to the photo camera may be estimated.
Likewise, the image-collection distance from each of the tiny features (for example, point features and line features) of the face of the to-be-measured target to the photo camera may be estimated. By performing data processing on the image-collection distances from each of the point features and the line features of the face of the to-be-measured target to the camera, a 3D contour diagram of the face of the to-be-measured target may be obtained, so as to obtain the second size of the second to-be-measured human face of the to-be-measured target.
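The per-feature distance estimation described above (u ≈ h·f/(n·p) when the image distance is approximately the focal length) may be sketched as follows; the feature names, sizes and camera parameters are purely illustrative assumptions:

```python
def feature_distance(size_mm, focal_mm, n_px, pixel_pitch_mm):
    """Estimated camera-to-feature distance under v ~ f: the feature of true
    size h images to n pixels of pitch p, so u ~ h * f / (n * p)."""
    return size_mm * focal_mm / (n_px * pixel_pitch_mm)

# Building a rough per-feature depth map (a step towards the 3D contour).
# The feature names and all numbers below are hypothetical.
features = {"nasal_tip": (3.0, 12.0), "eye_corner": (3.0, 10.0)}  # (size mm, pixels)
contour = {name: feature_distance(h, 4.0, n, 0.002)
           for name, (h, n) in features.items()}
```

Collecting such distances for many point and line features yields the relative depths from which the 3D contour diagram of the face is assembled.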
Optionally, in the present embodiment, the step of, by using the second image collecting device, collecting the second to-be-measured human-face image and the third to-be-measured human-face image of the to-be-measured target includes but is not limited to: according to a first position sensor in the second image collecting device, and a second position sensor on the to-be-measured target, determining the image-collection-distance difference.
Particularly, the first position sensor and the second position sensor are provided in the second image collecting device and the to-be-measured target respectively, a first image-collection distance corresponding to the second to-be-measured human-face image and a second image-collection distance corresponding to the third to-be-measured human-face image are determined by using the first position sensor and the second position sensor, and subsequently the difference between the first image-collection distance and the second image-collection distance is acquired.
It should be noted that the first position sensor and the second position sensor according to the present embodiment include but are not limited to an alignment sensor, an infrared sensor and so on. When the first position sensor and the second position sensor are alignment sensors, by using the first position sensor and the second position sensor, the real-time positions of the image collecting device and the to-be-measured target may be determined respectively, thereby determining the image-collection-distance difference. Moreover, when the first position sensor and the second position sensor are infrared sensors, the distance between the first position sensor and the second position sensor may be acquired, thereby determining the image-collection-distance difference. In the present embodiment, the particular types of the first position sensor and the second position sensor are not limited in any form.
Optionally, in the present embodiment, the step of, by using the second image collecting device, collecting the second to-be-measured human-face image and the third to-be-measured human-face image of the to-be-measured target includes but is not limited to: after collecting the second to-be-measured human-face image by using the second image collecting device, controlling the second image collecting device to move by a first distance, where the first distance is equal to the image-collection-distance difference; or after collecting the second to-be-measured human-face image by using the second image collecting device, controlling the to-be-measured target to move by the first distance.
Particularly, in the present embodiment, the image-collection distance may be changed in two modes. The first mode is to maintain the position of the to-be-measured target unchanged, and move the second image collecting device relative to the to-be-measured target by the first distance. The second mode is to maintain the position of the second image collecting device unchanged, and move the to-be-measured target relative to the second image collecting device by the first distance. It should be noted that, after the second image collecting device or the to-be-measured target is moved by the first distance, it is required to keep the camera of the second image collecting device parallel to the human-face plane of the to-be-measured target.
The present embodiment includes, by using a second image collecting device, collecting a second to-be-measured human-face image and a third to-be-measured human-face image of a to-be-measured target, where an image-collection distance of the second to-be-measured human-face image and an image-collection distance of the third to-be-measured human-face image are unequal; acquiring a second pixel value of the second to-be-measured human-face image, and a third pixel value of the third to-be-measured human-face image; and according to an image-collection-distance difference between the second to-be-measured human-face image and the third to-be-measured human-face image, a focal length of the second image collecting device, the second pixel value and the third pixel value, determining a second size of a second to-be-measured human face of the to-be-measured target. By photographing the plurality of human-face images of the to-be-measured target at unequal image-collection distances, and subsequently using the image-collection-distance difference and the pixel values of the different human-face images, the second size of the second to-be-measured human face is determined, which further increases the accuracy of the measurement of the human-face size.
The Third Method Embodiment
Referring to
In the present embodiment, the fourth to-be-measured human-face image includes but is not limited to a photograph of the whole or a local part of a human body. The fourth to-be-measured human-face image includes the third to-be-measured human face that is required to be measured, for example, a photograph of the full body, a photograph of the upper half body, a photograph of the head or the face, and so on, of a human body. In the present embodiment, the particular photograph type is not limited in any form.
The third image collecting device according to the present embodiment includes but is not limited to a photo camera, a video camera and other devices that have the function of image collection. The third image collecting device includes at least two cameras, and the cameras have the function of data processing, and may acquire the incident angles of inclination of light rays. Preferably, the at least two cameras in the third image collecting device are arranged according to a predetermined layout. For example, when the image collecting device is a double-camera image collecting device, the straight line where the connecting line between the two cameras is located is a vertical or horizontal straight line, and is parallel to the plane where the human face of the to-be-measured target is located. Moreover, when the image collecting device has three or more cameras, all of the cameras are provided in the same plane, and are arranged at the vertices of a polygon whose quantity of sides is equal to the quantity of the cameras. For example, in an image collecting device having three cameras, the three cameras are arranged at the vertices of a triangle, and four cameras are arranged at the vertices of a square.
The cameras in the third image collecting device according to the present embodiment have the function of data processing, and may acquire the incident angles of inclination of light rays. In practice, by using the position layout of the cameras in the third image collecting device, the distance between the plurality of cameras may be acquired, and, subsequently, by using the distances between the plurality of cameras and the information collecting points in the human face of the to-be-measured target, the positions of the information collecting points may be determined.
In particular application scenarios, a plurality of information collecting points are provided in advance in the human face of the to-be-measured target, and, by acquiring the positions of the plurality of information collecting points, a three-dimensional (3D) model of the human face of the to-be-measured target is established. The quantity and the particular positions of the information collecting points may be set according to practical experience. For example, the information collecting points are configured according to the requirements of breathing masks in practical applications, for example, the two-eye distance, the distance from the nasal tip to the human eyes, the nasal-bridge height, and so on, in the human face, which is not limited in any form in the present embodiment.
Optionally, in the present embodiment, the third image collecting device includes a first camera and a second camera, and the information collecting points include a first end point and a second end point of a preset connecting line of the fourth to-be-measured human-face image; and the step of acquiring the at least two incident angles of inclination corresponding to the information collecting points in the fourth to-be-measured human-face image includes but is not limited to: acquiring an incident angle of inclination corresponding to the first end point and an incident angle of inclination corresponding to the second end point, where the preset connecting line is parallel to a connecting line between the first camera and the second camera.
Particularly, when the third image collecting device has only two cameras, the acquired distances from the information collecting points to the first camera and to the second camera are actually relative distances. The connecting line formed by the first camera and the second camera is parallel to the preset connecting line, and the information collecting points in the preset connecting line are located in the plane formed by the first camera, the second camera, the first end point and the second end point. By using the double-camera third image collecting device to acquire the incident angles of inclination corresponding to the at least two information collecting points in the preset connecting line, i.e., the first end point and the second end point, the relative positions of the first end point and the second end point are determined.
Optionally, in the present embodiment, the step of acquiring the incident angle of inclination corresponding to the first end point and the incident angle of inclination corresponding to the second end point includes but is not limited to: acquiring a first incident angle of inclination of irradiation from the first end point into the first camera, and a second incident angle of inclination of irradiation from the first end point into the second camera; and acquiring a third incident angle of inclination of irradiation from the second end point into the first camera, and a fourth incident angle of inclination of irradiation from the second end point into the second camera.
Particularly, referring to
Optionally, in the present embodiment, the step of, according to the at least two incident angles of inclination and the camera distance between the at least two cameras, determining the positions of the information collecting points includes but is not limited to: according to the first incident angle of inclination, determining a first distance between the first end point and the first camera, and according to the second incident angle of inclination, determining a second distance between the first end point and the second camera; according to the third incident angle of inclination, determining a third distance between the second end point and the first camera, and according to the fourth incident angle of inclination, determining a fourth distance between the second end point and the second camera; based on a straight line where the first camera and the second camera are located, establishing a plane coordinate system; based on the plane coordinate system, according to the first distance, the second distance, the first incident angle of inclination, the second incident angle of inclination and the camera distance, determining a plane-coordinate position of the first end point; and based on the plane coordinate system, according to the third distance, the fourth distance, the third incident angle of inclination, the fourth incident angle of inclination and the camera distance, determining a plane-coordinate position of the second end point.
Particularly, referring to
In order to further determine the position of the first end point M, the present disclosure provides the following preferable logical-calculation method. By using the camera B as the origin of coordinates, using the direction of the connecting line between the camera B and the camera A as the Y-axis direction, and using the normal direction of the connecting line between the camera B and the camera A as the X-axis direction, a rectangular plane coordinate system XOY is established (
By simultaneously solving the formulas (20)-(21), it may be obtained that:
In other words, the position coordinate of the first end point M in the coordinate system XOY is
By using the same logical-calculation method, the position coordinate of the second end point N of the face of the patient can be obtained, which is not discussed further in the present embodiment.
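The two-camera plane triangulation described above can be sketched as follows. The sketch adopts the coordinate system of the embodiment (camera B at the origin, camera A on the Y-axis at the camera distance), and assumes that each incident angle of inclination is a signed angle measured from the X-axis, i.e. from the normal of the camera connecting line; the function name and this angle convention are illustrative assumptions, not the embodiment's exact formulas.

```python
import math

def triangulate_point(baseline, angle_b, angle_a):
    """Plane coordinates (x, y) of a point M observed by camera B at the
    origin and camera A at (0, baseline) on the Y-axis.

    angle_b and angle_a are the signed inclinations (radians) of the rays
    arriving at cameras B and A, measured from the X-axis (the normal of
    the camera connecting line).  The two rays
        y = x * tan(angle_b)              (through B)
        y = baseline + x * tan(angle_a)   (through A)
    intersect at M."""
    denom = math.tan(angle_b) - math.tan(angle_a)
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; the point cannot be triangulated")
    x = baseline / denom
    y = x * math.tan(angle_b)
    return x, y
```

For instance, with a 60 mm camera distance, a point located at (100, 30) produces ray inclinations atan2(30, 100) at camera B and atan2(30 − 60, 100) at camera A, and the function recovers (100, 30) exactly.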
Optionally, in the present embodiment, the step of, according to the positions of the information collecting points, determining the third size of the third to-be-measured human face in the fourth to-be-measured human-face image includes but is not limited to: acquiring a pixel value of the preset connecting line and a pixel value of the third to-be-measured human face in the fourth to-be-measured human-face image; according to the plane-coordinate position of the first end point and the plane-coordinate position of the second end point, determining a size of the preset connecting line; and according to the size of the preset connecting line, the pixel value of the preset connecting line and the pixel value of the third to-be-measured human face, determining the third size.
Particularly, in an example, if the position coordinate of the first end point M is denoted as (x,y) and, by using the same logical-calculation method, the position coordinate of the second end point N in the coordinate system XOY is obtained as (m,n), then the distance between the first end point M and the second end point N of the preset connecting line in the face of the to-be-measured target is:
|MN|=√((x−m)²+(y−n)²) (24)
In other words, by using the above-described double-lens processing system, the distance between any two information collecting points in the face of the to-be-measured target may be obtained, to determine the third size of the third to-be-measured human face, which is used to instruct the to-be-measured target to select a suitable face mask, or used to manufacture a suitable face mask.
Optionally, in the present embodiment, the third image collecting device includes at least three cameras, and the at least three cameras are located in a same plane; the fourth to-be-measured human-face image includes a plurality of information collecting points, and the information collecting points are distributed over the fourth to-be-measured human-face image according to a preset density threshold; and the step of acquiring the at least two incident angles of inclination corresponding to the information collecting points in the fourth to-be-measured human-face image when diffusely reflected light irradiates into the at least two cameras includes but is not limited to: acquiring at least three incident angles of inclination corresponding to the information collecting points.
Particularly, referring to
Optionally, in the present embodiment, the step of, according to the at least two incident angles of inclination and the camera distance between the at least two cameras, determining the positions of the information collecting points includes but is not limited to: acquiring at least three distances between each of the information collecting points and the at least three cameras; according to a plane where the at least three cameras are located, establishing a space coordinate system; and based on the space coordinate system, according to camera distances between the at least three cameras and the at least three incident angles of inclination, determining space coordinate positions of the information collecting points.
Particularly, still taking the example in
By simultaneously solving the formulas (28)-(30), the values of x, y and z can be obtained, or, in other words, the space coordinate position of the information collecting point M is obtained.
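With three cameras, the space coordinate position of an information collecting point can be recovered from the three point-to-camera distances alone (trilateration), which is one concrete way the simultaneous equations referenced above can be solved. The sketch below places camera 1 at the origin of the space coordinate system, camera 2 on the X-axis and camera 3 elsewhere in the camera plane (z = 0), and takes the positive root for z so that the point lies in front of the cameras; these placements, like the function name, are illustrative assumptions.

```python
import math

def trilaterate(d12, cam3_xy, r1, r2, r3):
    """Space coordinates (x, y, z) of an information collecting point M.

    Camera 1 is at the origin, camera 2 at (d12, 0, 0), camera 3 at
    (cam3_xy[0], cam3_xy[1], 0); r1, r2, r3 are the distances from M to
    cameras 1, 2 and 3.  Subtracting the three sphere equations
    |M - Pi|^2 = ri^2 pairwise makes the system linear in x and y."""
    x3, y3 = cam3_xy
    x = (r1**2 - r2**2 + d12**2) / (2 * d12)
    y = (r1**2 - r3**2 + x3**2 + y3**2 - 2 * x3 * x) / (2 * y3)
    z_sq = r1**2 - x**2 - y**2
    if z_sq < 0:
        raise ValueError("inconsistent distances: no real intersection")
    # Positive root: the point lies in front of the camera plane.
    return x, y, math.sqrt(z_sq)
```

For instance, with cameras at (0, 0, 0), (40, 0, 0) and (20, 30, 0), a point at (10, 20, 50) yields distances √3000, √3800 and √2700, from which the function recovers (10, 20, 50).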
Optionally, in the present embodiment, the step of, according to the positions of the information collecting points, determining the third size of the third to-be-measured human face in the fourth to-be-measured human-face image includes but is not limited to: based on the space coordinate positions corresponding to the plurality of information collecting points, establishing a human-face model of the third to-be-measured human face, to determine the third size.
Particularly, referring to
It should be noted that, in particular application scenarios, especially regarding a to-be-measured target having a malformed face, by designing a matching face mask according to the face information of the to-be-measured target, the sealing property and the wearing comfortableness are effectively improved.
The present embodiment includes: collecting, by using a third image collecting device, a fourth to-be-measured human-face image of a to-be-measured target, where the image collecting device includes at least two cameras; acquiring at least two incident angles of inclination corresponding to information collecting points in the fourth to-be-measured human-face image; according to the at least two incident angles of inclination and a camera distance between the at least two cameras, determining positions of the information collecting points; and according to the positions of the information collecting points, determining a third size of a third to-be-measured human face in the fourth to-be-measured human-face image. This avoids manually measuring the human face by using a scale ruler, which simplifies the measuring operation. Moreover, by photographing the fourth to-be-measured human-face image by using a plurality of lenses, the third size of the third to-be-measured human face can be determined based on the camera distance between the cameras, which ensures the accuracy of the measurement on the human-face size.
The Fourth Method Embodiment
The present embodiment further provides a method for determining a face-mask size. The method may particularly include the following steps:
The target human-face size is obtained by using the methods in the first method embodiment, the second method embodiment or the third method embodiment, and the particular process of acquiring the target human-face size is not discussed further in the present embodiment.
In particular application scenarios, the face masks of different models correspond to different human-face-size ranges, and each of the human-face-size ranges corresponds to one group of preset face-mask reference values. If the relevant parameters of the target human-face size fall within the human-face-size range corresponding to a group of preset face-mask reference values, then it is determined that the face-mask model corresponding to the current preset face-mask reference values is the face-mask model matching with the to-be-measured target.
Particularly, in the present embodiment, the face-mask reference values include but are not limited to parameters such as the nose width, the first distance between the two inner canthi in the face, and the second distance between the eyebrow center and the philtrum. The particular types of the parameters of the face sites may be set according to practical experience, and are not limited in any form in the present embodiment.
Optionally, in the present embodiment, each of the groups of the face-mask reference values includes a first reference value corresponding to a first face site and a second reference value corresponding to a second face site, and the target human-face size includes a plurality of site sizes corresponding to the first face site and the second face site; and the step of, according to the plurality of groups of preset face-mask reference values and the target human-face size, determining the face-mask model of the to-be-measured target includes but is not limited to: in response to the site size corresponding to the first face site matching with the first reference value, determining the face-mask model corresponding to the first reference value.
In particular application scenarios, the face sites of the human face are classified according to the gas-tightness priorities, where a face site that has a higher gas-tightness priority and/or a higher comfortableness weight value is classified as the first face site, and a face site that has a lower gas-tightness priority and/or a lower comfortableness weight value is classified as the second face site. A higher gas-tightness priority corresponds to a better gas tightness of the face mask, and a higher comfortableness weight value corresponds to a higher comfortableness of the user.
In the present embodiment, because the reference values corresponding to the different sites are all different, the face sites in the target human face might not completely match with the reference values of any single face-mask model. In this case, the corresponding face-mask model is determined according to the first reference value matching with the first face site, to ensure the gas tightness and the comfortableness of the face mask.
Optionally, in the present embodiment, the step of, according to the plurality of groups of preset face-mask reference values and the target human-face size, determining the face-mask model of the to-be-measured target includes but is not limited to: in response to the plurality of site sizes in the target human-face size completely matching with the plurality of reference values in the face-mask reference values, selecting the face-mask model corresponding to the face-mask reference values; or in response to the plurality of site sizes in the target human-face size partially matching with the plurality of reference values in the face-mask reference values, when a quantity of the matched reference values is greater than or equal to a preset quantity threshold, selecting the face-mask model corresponding to the face-mask reference values.
Particularly, in the present embodiment, the step of, according to the plurality of groups of preset face-mask reference values and the target human-face size, determining the face-mask model of the to-be-measured target includes matching a plurality of site sizes in the target human-face size and a plurality of site reference values in the face-mask reference values; in response to the plurality of site sizes in the target human-face size completely matching with the plurality of reference values in the face-mask reference values, selecting the face-mask model corresponding to the face-mask reference values; and in response to the plurality of site sizes in the target human-face size partially matching with the plurality of reference values in the face-mask reference values, when a quantity of the matched reference values is greater than or equal to a preset quantity threshold, selecting the face-mask model corresponding to the face-mask reference values.
In an example, the nose width, the first distance between the two inner canthi in the face, and the second distance between the eyebrow center and the philtrum in the target human-face size are matched with the face-mask reference values. If all of the nose width, the first distance and the second distance match with the site reference values of a face-mask model A, then it is determined that the face-mask model matching with the target person is the model A. If the nose width matches with the site reference value corresponding to the face-mask model A, and both of the first distance and the second distance match with the site reference values of a face-mask model B, then it is determined that the face-mask model matching with the target person is the model B.
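The selection logic of the present embodiment, i.e., prefer a complete match and otherwise accept a partial match once the quantity of matched reference values reaches the preset quantity threshold, can be sketched as follows. Representing each face-mask reference value as a (low, high) range is an assumption made for illustration (consistent with the human-face-size ranges mentioned above), as are all of the names in the sketch.

```python
def select_mask_model(face_sizes, mask_models, match_threshold=2):
    """face_sizes: dict mapping a face site to its measured size.
    mask_models: dict mapping a model name to a dict of site -> (low, high)
    reference ranges.  Returns the first model whose reference ranges all
    match, otherwise the first model with at least match_threshold matched
    sites, otherwise None."""
    def matched_count(refs):
        return sum(1 for site, (lo, hi) in refs.items()
                   if site in face_sizes and lo <= face_sizes[site] <= hi)

    # A complete match takes precedence over any partial match.
    for name, refs in mask_models.items():
        if matched_count(refs) == len(refs):
            return name
    # Otherwise accept a partial match meeting the quantity threshold.
    for name, refs in mask_models.items():
        if matched_count(refs) >= match_threshold:
            return name
    return None
```

With two hypothetical models A and B, a face matching all three of A's ranges selects A, while a face matching only the canthi and eyebrow-philtrum ranges of B (two sites, meeting the threshold) selects B, mirroring the example above.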
In the above embodiment, by, according to a plurality of groups of preset face-mask reference values and the target human-face size, determining a face-mask model of the to-be-measured target, the gas tightness and the comfortableness of the face mask are ensured.
The First Device Embodiment
Referring to
Optionally, the particular examples of the present embodiment may refer to the examples described above with respect to the first method embodiment, and are not discussed further in the present embodiment.
The Second Device Embodiment
Referring to
Optionally, the particular examples of the present embodiment may refer to the examples described above with respect to the second method embodiment, and are not discussed further in the present embodiment.
The Third Device Embodiment
Referring to
Optionally, the particular examples of the present embodiment may refer to the examples described above with respect to the third method embodiment, and are not discussed further in the present embodiment.
The Fourth Device Embodiment
The present embodiment further provides an apparatus for determining a face-mask model, where the apparatus includes:
Optionally, the particular examples of the present embodiment may refer to the examples described above with respect to the fourth method embodiment, and are not discussed further in the present embodiment.
Regarding the device embodiments, because they are substantially similar to the method embodiments, they are described simply, and the related parts may refer to the description on the method embodiments.
The embodiments of the description are described in the mode of progression, each of the embodiments emphatically describes the differences from the other embodiments, and the same or similar parts of the embodiments may refer to each other.
The particular modes of the operations performed by the modules of the apparatus according to the above embodiments have already been described in detail in the embodiments of the method, and will not be explained and described in detail herein.
An embodiment of the present application further provides an electronic device. Referring to
An embodiment of the present application further provides a readable storage medium, where when an instruction in the storage medium is executed by a processor of an electronic device, the electronic device is able to implement the method for determining a human-face size according to the above embodiments.
The algorithms and displaying provided herein are not inherently related to any particular computer, virtual system or other devices. Various general-purpose systems may be used with the teachings disclosed herein. On the basis of the above description, the structure required to construct such systems is apparent. Furthermore, the embodiments of the present application are not limited to any specific programming language. It should be understood that the contents of the embodiments of the present application described herein may be implemented by using various programming languages, and the description above of a specific language is intended to disclose the preferable embodiments of the present application.
The description provided herein describes many concrete details. However, it can be understood that the embodiments of the present application may be implemented without those concrete details. In some of the embodiments, well-known processes, structures and techniques are not described in detail, so as not to affect the understanding of the description.
Similarly, it should be understood that, in order to simplify the present application and facilitate the comprehension of one or more of the aspects of the present application, in the above description on the exemplary embodiments of the present application, the features of the embodiments of the present application are sometimes grouped into individual embodiments, figures or the descriptions thereof. However, the disclosed method should not be interpreted as reflecting the intention that the claimed embodiments of the present application require more features than those that are explicitly set forth in each of the claims. More precisely, as reflected by the following claims, the inventive aspects lie in less than all of the features of a single embodiment disclosed above. Therefore, the claims following a particular embodiment are hereby explicitly incorporated into that particular embodiment, and each of the claims itself serves as a separate embodiment of the present application.
A person skilled in the art can understand that the modules in the device according to an embodiment may be self-adaptively modified and be provided in one or more devices that are different from that embodiment. The modules or units or components in the embodiments may be combined into one module or unit or component, and may also be divided into multiple submodules or subunits or subcomponents. Unless at least some of such features and/or processes or units are mutually exclusive, all of the features that are disclosed by the description (including the accompanying claims, the abstract and the drawings) and all of the processes or units of any method or device that is disclosed herein may be combined in any combination. Unless explicitly stated otherwise, each of the features that are disclosed by the description (including the accompanying claims, the abstract and the drawings) may be replaced by alternative features that serve the same, equivalent or similar purposes.
Each component embodiment of the embodiments of the present application may be implemented by hardware, or by software modules that are operated in one or more processors, or by a combination thereof. A person skilled in the art should understand that some or all of the functions of some or all of the components of the device according to the embodiments of the present application may be implemented by using a microprocessor or a digital signal processor (DSP) in practice. The embodiments of the present application may also be implemented as apparatus or device programs for implementing part of or the whole of the method described herein. Such programs for implementing the embodiments of the present application may be stored in a computer-readable medium, or may be in the form of one or more signals. Such signals may be downloaded from an Internet website, or provided on a carrier signal, or provided in any other forms. It should be noted that the above embodiments are for describing the embodiments of the present application, rather than limiting the embodiments of the present application, and a person skilled in the art may design alternative embodiments without departing from the scope of the appended claims.
In the claims, any reference signs between parentheses should not be construed as limiting the claims. The word “include” does not exclude elements or steps that are not listed in the claims. The word “a” or “an” preceding an element does not exclude the existence of a plurality of such elements. The embodiments of the present application may be implemented by means of hardware including several different elements and by means of a properly programmed computer. In unit claims that list several devices, some of those devices may be embodied by the same item of hardware. The words first, second, third and so on do not denote any order. Those words may be interpreted as names.
A person skilled in the art can clearly understand that, in order for the convenience and concision of the description, the particular working processes of the above-described systems, devices and units may refer to the corresponding processes according to the above-described method embodiments, and are not discussed herein further.
The above description is merely preferable embodiments of the present application, and is not intended to limit the embodiments of the present application. Any modifications, equivalent substitutions and improvements that are made within the spirit and the principle of the embodiments of the present application should fall within the protection scope of the embodiments of the present application.
The above are merely particular embodiments of the embodiments of the present application, and the protection scope of the embodiments of the present application is not limited thereto. All of the variations or substitutions that a person skilled in the art can easily envisage within the technical scope disclosed by the embodiments of the present application should fall within the protection scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application should be subject to the protection scope of the claims.
Number | Date | Country | Kind |
---|---|---|---|
202110100112.4 | Jan 2021 | CN | national |
This application is the national phase entry of International Application No. PCT/CN2022/073708, filed on Jan. 25, 2022, which is based upon and claims priority to Chinese Patent Application No. 202110100112.4, filed on Jan. 25, 2021, the entire contents of which are incorporated herein by reference.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/CN2022/073708 | 1/25/2022 | WO |