The present disclosure relates to facial feature recognition and, in particular, to controlling an electronic device by scanning a face with ultrasonic signals and recognizing facial features based on the ultrasonic signals.
Electronic devices typically use buttons, switches, and other moving parts to receive user selections. Buttons include mechanical buttons and graphical buttons displayed on a touch-screen display. When mechanical buttons are used, multiple manufacturing steps may be employed to make the buttons, and over time the buttons wear down. In addition, foreign objects or particles may enter spaces between the buttons and a housing of the device, which may result in device damage.
In some circumstances, the use of physical buttons or interfaces, where a user has to touch a device to cause the device to perform a function, may be inconvenient, such as when a user is holding a personal electronic device in one hand and has to hold on to something else with the other hand. Hands-free systems may be used to permit users to control functions of devices without pressing buttons or touch-screens with their hands. Cameras have been used to identify movements of a user, but cameras may be relatively expensive, especially high-resolution models, while low-resolution cameras may not capture facial features or movements accurately. In addition, cameras are prone to false positives and consume a relatively large amount of power.
According to an aspect of the disclosure, a method of controlling an electronic device includes scanning a face at an ultrasonic frequency by at least one audio speaker, and mapping facial features based on the scanning. The method further includes detecting repetitive movements of the facial features by mapping the facial features over time and predicting future locations of the facial features based on the detecting of the repetitive movements. In addition, the method includes detecting control movements of the facial features based on the mapping of the facial features and the predicting of the future locations of the facial features and controlling the electronic device based on detecting the control movements.
According to another aspect of the disclosure, an electronic device includes an ultrasonic audio speaker, a microphone, and a processing circuit. The processing circuit is configured to: receive signals from the microphone based on ultrasonic signals emitted by the ultrasonic audio speaker and reflected from a face, map the face based on the signals from the microphone, detect repetitive movements of facial features based on mapping the face over time, predict future positions of the facial features based on the detecting of the repetitive movements, detect a control motion of at least one of the facial features based on the predicting of the future positions of the facial features, and perform a predetermined function of the electronic device based on the detecting of the control motion.
According to yet another aspect of the disclosure, a method of controlling an electronic device includes scanning a face with a spectrum of ultrasonic signals, mapping the face based on the scanning of the face, detecting, by ultrasonic signals, a control movement of at least one facial feature, and controlling the electronic device based on the detecting of the control movement.
For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts.
It should be understood at the outset that although illustrative implementations of one or more embodiments of the present disclosure are provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
The processing circuit 109 further includes a face mapper 112, which maps facial features based on the received ultrasonic signals. The face mapper 112 may also predict regular movements of the face, such as positions of facial features due to breathing. A control movement identifier 113 analyzes the detected ultrasonic signals to determine whether a control movement occurs, where a control movement is a pre-determined movement of one or more facial features that has been pre-designated to control a function of the electronic device 100. A function selection circuit 114 receives data regarding the control movement and initiates a function of the electronic device 100 based on the control movement data. For example, selected functions may include adjusting a volume of the electronic device 100, scrolling up or down a document page or web page, turning from one page to the next, turning the device 100 on or off, zooming in or out, or performing any other desired function.
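By way of illustration only, the following Python sketch shows how the dataflow among the face mapper 112, the control movement identifier 113, and the function selection circuit 114 might be organized in software. The disclosure does not specify an API, so all identifiers, types, and the stand-in implementations below are hypothetical.

```python
from typing import Callable, Optional, Sequence

# Illustrative dataflow only; names and types are assumptions.
FaceMap = dict[str, float]  # feature name -> estimated distance (m)

def process_frame(
    echo: Sequence[float],
    map_face: Callable[[Sequence[float]], FaceMap],          # face mapper 112
    identify_movement: Callable[[FaceMap], Optional[str]],   # identifier 113
    select_function: Callable[[str], None],                  # selector 114
) -> None:
    """One pass through the processing circuit: map facial features from
    the reflected ultrasound, test for a pre-designated control movement,
    and trigger the associated device function if one was made."""
    features = map_face(echo)
    movement = identify_movement(features)
    if movement is not None:
        select_function(movement)

# Minimal stand-ins to show the flow end to end:
process_frame(
    echo=[0.0, 0.1, 0.0],
    map_face=lambda e: {"mouth": 0.34},
    identify_movement=lambda f: "mouth_open" if f["mouth"] < 0.35 else None,
    select_function=lambda m: print(f"function for {m}"),  # e.g. volume up
)
```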
In embodiments of the invention, the signal generator 110, the frequency processor 111, the face mapper 112, the control movement identifier 113, and the function selection circuit 114 include hardware elements and software elements (e.g., instructions stored in memory and executed by the processing circuit 109) to generate, process, and analyze signals. The hardware elements include logic circuits, memory, filters, amplifiers, registers, arithmetic logic units, and any other elements necessary to perform the above functions.
In block 202, reflected signals are detected. For example, one or more microphones may detect the ultrasonic signals that are reflected from the face being analyzed.
In block 203, the face being analyzed is mapped. In particular, the reflected frequencies are analyzed to map facial features.
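As a non-limiting sketch of blocks 202 and 203, the round-trip delay of a reflected ultrasonic burst can be estimated by cross-correlating the emitted signal with the microphone signal, and the delay can then be converted to a feature distance. The sample rate, burst parameters, and function names below are illustrative assumptions, not taken from the disclosure.

```python
import numpy as np

FS = 192_000            # assumed sample rate capable of ultrasonic capture
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def echo_distance(emitted: np.ndarray, received: np.ndarray) -> float:
    """Estimate the distance to a reflecting facial feature by
    cross-correlating the emitted ultrasonic burst with the microphone
    signal and locating the strongest echo."""
    corr = np.correlate(received, emitted, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(emitted) - 1)
    delay_s = max(lag, 0) / FS
    return SPEED_OF_SOUND * delay_s / 2.0  # one-way distance

# Synthetic check: a 40 kHz burst whose echo arrives 0.5 ms later.
t = np.arange(0, 0.001, 1 / FS)
burst = np.sin(2 * np.pi * 40_000 * t)
delay = int(0.0005 * FS)
rx = np.concatenate([np.zeros(delay), 0.3 * burst, np.zeros(100)])
print(round(echo_distance(burst, rx), 3))  # ~0.086 m
```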
In addition to determining distances between facial features and dividing the face 120 into zones, the facial features are analyzed over time in block 204 to detect regular movements of the facial features, including regular movement of the cheeks, lips, and nostrils corresponding to breathing and regular movements of the eyes corresponding to blinking. The map of the face based on the geometric distances is then synchronized with the measurements of the movement of the facial features to detect regular patterns such as breathing and blinking.
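One way such regular patterns could be extracted, offered here only as an illustrative assumption, is to take the spectrum of a feature's distance track over time and read off the dominant repetition period, as in the following sketch.

```python
import numpy as np

def dominant_period(feature_track: np.ndarray, frame_rate: float) -> float:
    """Dominant repetition period (seconds) in a facial-feature distance
    track, e.g. cheek or nostril motion caused by breathing.
    Sketch only; assumes a uniformly sampled track."""
    centered = feature_track - feature_track.mean()
    spectrum = np.abs(np.fft.rfft(centered))
    freqs = np.fft.rfftfreq(len(centered), d=1.0 / frame_rate)
    peak = freqs[1:][np.argmax(spectrum[1:])]  # skip the DC bin
    return 1.0 / peak

# Synthetic breathing-like track: 0.25 Hz oscillation sampled at 20 Hz.
t = np.arange(0, 60, 1 / 20)
track = 0.002 * np.sin(2 * np.pi * 0.25 * t) + 0.35
print(round(dominant_period(track, 20.0), 1))  # ~4.0 s per breath
```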
In block 205, the locations of facial features are predicted based on the synchronized map and the detected movements of the facial features. In other words, the regular pattern of movement of the facial features is stored and updated as the face is scanned over time by the ultrasonic signals. For example, as regular breathing patterns of the nose and mouth are measured, the position of the eyes with respect to the nose and mouth is regularly re-calibrated.
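A minimal sketch of such a prediction follows, assuming for illustration that the stored regular pattern is well modeled by a sinusoid; the disclosure does not fix a particular model.

```python
import numpy as np

def predict_position(baseline_m: float, amplitude_m: float,
                     period_s: float, phase_rad: float, t_s: float) -> float:
    """Predicted feature position at time t_s under the learned regular
    pattern (a sinusoid is assumed here purely for illustration)."""
    return baseline_m + amplitude_m * np.sin(
        2.0 * np.pi * t_s / period_s + phase_rad)

# A nostril at a 0.35 m baseline, moving +/-2 mm over a 4 s breath cycle:
print(round(predict_position(0.35, 0.002, 4.0, 0.0, 1.0), 4))  # -> 0.352
```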
In block 206, as the face is scanned over time by the ultrasonic signals, the detected facial features are compared with the predicted locations of the facial features, which are based on the face map and the measured regular movements, and facial feature control movements are detected based on the comparison. For example, if a mouth is predicted to be at a first position based on regular breathing, and the mouth is instead detected at a second position, the movement to the second position may be determined to be a control movement to control a function of the electronic device (provided the mouth movement is among the movements that have been pre-designated to control the device).
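A minimal sketch of this comparison step is given below; the 5 mm tolerance is an arbitrary illustrative value, not taken from the disclosure.

```python
def is_control_movement(measured_m: float, predicted_m: float,
                        tolerance_m: float = 0.005) -> bool:
    """Flag a control movement when a scanned feature deviates from its
    predicted position by more than a tolerance (5 mm assumed here)."""
    return abs(measured_m - predicted_m) > tolerance_m

# Mouth predicted at 0.352 m by the breathing model but measured at
# 0.340 m: a 12 mm deviation is treated as a candidate control movement.
print(is_control_movement(0.340, 0.352))  # -> True
```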
Finally, in block 207, the electronic device is controlled based on the detected control movement. The functions controlled according to embodiments of the invention include any function that can reasonably be associated with a detected movement of a facial feature, including any operation that requires only a single or double click or selection, such as a page change function, a scrolling function, a volume function, a power function, or any other similar function. The functions associated with the movement of facial features can be configured by a user through software executing on the electronic device; in this manner, a user can assign particular facial movements to control specific functions.
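Such a user-configurable association could be as simple as a movement-to-function table, sketched below with hypothetical movement and function names.

```python
# Hypothetical user-configurable bindings of control movements to
# device functions; all names here are illustrative.
movement_bindings: dict[str, str] = {
    "eyes_down":    "scroll_down",
    "head_turn":    "page_turn",
    "mouth_open":   "volume_up",
    "double_blink": "power_toggle",
}

def assign(movement: str, function_name: str) -> None:
    """Let the user re-bind a facial movement to a different function."""
    movement_bindings[movement] = function_name

assign("double_blink", "zoom_in")          # user reconfigures a binding
print(movement_bindings["double_blink"])   # -> zoom_in
```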
In an example in which the directional facing A of the eye 600 controls the scrolling of a page, such as a word processing page or a web page displayed on an electronic device, determining that the eye is directed downward, as in the corresponding figure, may cause the electronic device to scroll the displayed page downward.
While detecting a directional facing of the eye has been provided by way of example, other aspects of the disclosure encompass controlling an electronic device based on any detected facial feature movement or head movement, including performing a function, such as turning a page, when a user turns their head; detecting a circular motion of the head to perform a function, such as opening or closing a selected application; and performing a function, such as zooming in or out, based on detecting the tilt of a user's head. In another implementation, a function can be performed when a user moves their mouth. For example, if a smile is detected while a user is viewing material, such as text or video, the electronic device may prompt the user to add a tag or metadata indicating that the user likes the material, such as tagging the material as "liked" on a social media service. However, as discussed previously, these are provided only as examples, and the disclosure is not limited to the listed control movements of the face and head or the exemplary functions of an electronic device to be controlled using this methodology.
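As a final illustrative sketch, a detected eye direction could be mapped to a scroll action as follows; the direction labels and the scroll step are assumptions, not taken from the disclosure.

```python
def scroll_for_gaze(direction: str, page_offset: int, step: int = 40) -> int:
    """Map a detected eye direction to a new page offset (sketch; the
    step size in pixels is an arbitrary assumption)."""
    if direction == "down":
        return page_offset + step
    if direction == "up":
        return max(0, page_offset - step)
    return page_offset  # no scroll for other directions

print(scroll_for_gaze("down", 0))  # -> 40
```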
Aspects of the disclosure encompass any type of electronic device. In one embodiment, the electronic device is a cellular telephone having a touch-screen facing a user, and an ultrasonic transmitter (audio speaker) and microphone directed toward the user, i.e., facing in the same direction as the touch-screen. In such an embodiment, the speaker and microphone are controlled by a processor in the cellular telephone to scan a user's face with the ultrasonic transmitter and microphone and to detect control movements of the face. In other embodiments, the electronic device is a tablet computer, a laptop computer, or a desktop computer.
While aspects of the disclosure have been described with respect to handheld electronic devices, other embodiments may include electronic devices that are worn, such as eyeglasses or eye-pieces having ultrasonic transmitters and receivers, and a display.
While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods may be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented.
Also, techniques, systems, subsystems and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component, whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein.