This disclosure is generally directed to user interfaces. More specifically, this disclosure is directed to multipath reflection processing in ultrasonic gesture recognition systems.
Many electronic devices support the use of gesture recognition. For example, some electronic devices use pulse-echo ultrasonic gesture sensing technology. In these types of systems, ultrasonic signals are transmitted from an ultrasonic transmitter, and reflected signals are received by an ultrasonic receiver. One challenge for these types of systems is to distinguish between direct reflections from a primary reflector (a primary target of interest) and multipath reflections from other reflectors in the field of view. These multipath reflections are unwanted signals that interfere with gesture sensing.
A typical ultrasonic system removes these unwanted multipath reflections using additional information, such as information from other ultrasonic transmitter-receiver pairs, to determine which reflections are not from the primary target. The system can then discard the undesired multipath reflections. This approach works well for primary targets that provide very strong reflections to an ultrasonic receiver. However, some primary targets provide weak reflections due to their ultrasonic reflective cross-sections or due to decreases in signal strength from dispersion or attenuation of the ultrasound signal with increased distance to an ultrasonic receiver, making it difficult to detect those targets.
For a more complete understanding of this disclosure and its features, reference is now made to the following description, taken in conjunction with the accompanying drawings, in which:
As shown in
An ultrasonic transducer 104 is mounted on or near the display screen 102. The ultrasonic transducer 104 transmits and receives ultrasonic signals in order to help identify a location of at least one target 106 near the display screen 102. In this example, the target 106 represents a user's finger, although any other suitable target(s) could be detected.
Here, the ultrasonic transducer 104 emits and receives ultrasonic signals 108-110. Some ultrasonic signals travel directly towards the target 106 and are received directly from the target. Other ultrasonic signals travel directly towards the target 106 and are received indirectly from the target via some other reflector (namely the display screen 102). Yet other ultrasonic signals travel indirectly towards the target 106 via the other reflector and are received directly from the target. Still other ultrasonic signals travel indirectly towards the target 106 and are received indirectly from the target. As a result, the ultrasonic transducer 104 receives multiple sets of ultrasonic reflections, some of which represent multipath reflections. The ultrasonic transducer 104 includes any suitable structure for emitting and receiving ultrasonic signals. While shown as a single integrated unit, an ultrasonic transducer could include an ultrasonic transmitter and a separate ultrasonic receiver. Also, an array of multiple ultrasonic transducers could be used.
A gesture processor 112 analyzes the ultrasonic signals and attempts to identify a location of the target 106. For example, the gesture processor 112 could use time-of-arrival calculations (based on a time between transmitting signals and receiving reflections) to identify the target's location. By tracking the target's location over time, the gesture processor 112 can identify different movements of the target 106 and therefore identify different gestures being made by the target 106 on or near the display screen 102. In addition, as described below, the gesture processor 112 can use information about known reflectors (such as the display screen 102) to help enhance the identification of the target's location(s).
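The time-of-arrival calculation described above can be sketched as follows. This is a minimal illustration only; the function name, constant, and example values are assumptions, not part of the disclosure.

```python
# Sketch of a time-of-arrival distance estimate (illustrative only; the
# names and constants here are assumptions, not from the disclosure).
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at room temperature

def echo_distance(time_of_flight_s: float) -> float:
    """Estimate the transducer-to-target distance from a round-trip echo time.

    The ultrasonic pulse travels to the target and back, so the one-way
    distance is half of the total path length.
    """
    return SPEED_OF_SOUND_M_S * time_of_flight_s / 2.0

# Example: an echo received 2 ms after transmission implies a target
# roughly 0.343 m away.
print(echo_distance(0.002))
```

Tracking this distance estimate over successive pulses is what allows the gesture processor to follow the target's movement over time.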
The gesture processor 112 includes any suitable structure for identifying locations of one or more targets and detecting gestures made by the target(s). The gesture processor 112 could, for instance, include a microprocessor, microcontroller, digital signal processor, field programmable gate array, or application specific integrated circuit. In some embodiments, the gesture processor 112 executes software or firmware instructions to provide the desired functionality.
The gesture processor 112 outputs any identified target location(s) and gesture(s) to an application processor 114. The application processor 114 executes one or more applications to provide any desired functionality. For example, the application processor 114 in a mobile smartphone or other device could use the identified location(s) and gesture(s) to initiate or accept telephone calls, send or view instant messages, allow a user to surf the Internet or play games, or perform any of a wide variety of other functions. As another example, the application processor 114 in a large display screen presenting information (such as in an office building or airport) could use the identified location(s) and gesture(s) to display certain information or invoke certain functions.
The application processor 114 includes any suitable structure for executing one or more applications. The application processor 114 could, for instance, include a microprocessor, microcontroller, digital signal processor, field programmable gate array, or application specific integrated circuit.
In the system 100 of
Various techniques have been developed for handling multipath reflections, but they all suffer from various drawbacks. For example, a leading reflection approach considers only the first or leading reflection that is received by an ultrasonic receiver, but this does not work well for multi-reflector systems. In a strongest reflection approach, secondary reflections are eliminated by ignoring smaller signals, but this can be unreliable when there are large and small reflections in the field of view at the same time. In a hardware redundancy approach, redundant ultrasonic transmitters or receivers are used to determine a target's location using multi-lateration, although this increases the cost of the overall system. In a beam forming approach, transmit or receive beam-forming uses multiple elements to dynamically position beam nulls to eliminate reflections, although this again increases the cost of the overall system.
In accordance with this disclosure, the gesture processor 112 uses information about known fixed reflectors (also known as static reflectors) to help reduce or eliminate multipath interference problems. This can be done by using the information contained in multipath reflections rather than discarding them. In particular, the gesture processor 112 uses multipath reflections to improve the analysis of the direct reflections received directly from a target. In some embodiments, the gesture processor 112 detects any nearby fixed reflectors during a calibration process. The calibration process could involve the ultrasonic transducer 104 emitting ultrasonic signals and receiving reflections from the nearby reflectors. The calibration process could also involve placing at least one calibration target in at least one known position over the screen 102, and the gesture processor 112 could use ultrasonic reflections from the calibration targets to identify the positions of any nearby reflectors. The position(s) of the fixed reflector(s) can be stored by the gesture processor 112 (such as in a local memory of the gesture processor 112) and used later to identify the location of the target 106. For example, the gesture processor 112 could ignore the multipath reflections from known fixed reflectors or combine the multipath reflections with the direct reflections from the target 106. Additional details regarding this functionality are provided below.
Note that one or more of the fixed reflectors detected by the system 100 could be intentionally placed near the system. That is, one or more fixed reflectors could be intentionally incorporated into the design of a device or system. This could be done for any number of reasons, such as to intentionally create multipath reflections that can be used by the system 100 to improve target identification. When a fixed reflector is incorporated into a design, a calibration process may or may not be used to identify the location of that reflector. Rather, prior knowledge of the reflector's location in the design could be used.
As shown in
In this example, the ultrasonic transducer 204a emits and receives ultrasonic signals 208a, where the received signals are reflected directly from the target 206. Similarly, the ultrasonic transducer 204b emits and receives ultrasonic signals 210a, where the received signals are reflected directly from the target 206. Other ultrasonic signals 208b are reflected off the target 206 and a fixed reflector 207 (such as a wall, table, or other object) before being received by the ultrasonic transducer 204a. Also, other ultrasonic signals 210b are reflected off the target 206 and the fixed reflector 207 before being received by the ultrasonic transducer 204b.
A gesture processor 212 analyzes the ultrasonic signals and attempts to identify a location of the target 206, such as by using time-of-arrival or angle-of-arrival calculations that identify distances or angles of a target from the ultrasonic transducers 204a-204b. Note that various calculations often require the use of multiple ultrasonic transmitters or receivers. The gesture processor 212 outputs any identified location(s) and gesture(s) to an application processor 214, which uses this information to provide any desired functionality.
In this example, the direct reflections (signals 208a and 210a) can be used by the gesture processor 212 to triangulate the location of the target 206. However, without additional information, the gesture processor 212 could use the two multipath reflections (signals 208b and 210b) to triangulate the location of a ghost target 216.
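The ghost-target effect can be illustrated with a small triangulation sketch. The geometry and numbers below are hypothetical, not taken from the disclosure: each transducer's range estimate constrains the target to a circle, and the circles' intersection gives the computed location.

```python
import math

# Illustrative two-receiver triangulation (a sketch; positions and ranges
# are hypothetical). Transducers sit at (0, 0) and (d, 0); each range
# estimate constrains the target to a circle around its transducer.

def triangulate(d: float, r1: float, r2: float) -> tuple[float, float]:
    """Locate a target from two ranges, returning the intersection point
    in front of the transducer baseline."""
    x = (r1 ** 2 - r2 ** 2 + d ** 2) / (2.0 * d)
    y = math.sqrt(max(r1 ** 2 - x ** 2, 0.0))
    return (x, y)

# Direct reflections place the target correctly, approximately (1.0, 2.0):
print(triangulate(2.0, math.sqrt(5.0), math.sqrt(5.0)))

# But naively triangulating the longer multipath ranges yields a
# different, farther "ghost" location:
print(triangulate(2.0, 3.0, 3.0))
```

Because the multipath ranges are longer than the direct ranges, the same triangulation arithmetic applied to them lands on a point where no physical target exists, which is exactly the ghost target 216 described above.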
To help avoid this, the ultrasonic transducers 204a-204b can emit ultrasonic signals during a calibration process (with or without one or more calibration targets at one or more known locations). During this process, the gesture processor 212 can analyze ultrasonic reflections to identify the location(s) of any fixed reflector(s) around the ultrasonic transducers 204a-204b. In this case, the gesture processor 212 could identify the location of the fixed reflector 207. As noted above, prior knowledge of the fixed reflector's location could also be used, and the calibration process could be omitted. During normal operation, the gesture processor 212 could determine that the ultrasonic signals 208a and 210a are ultrasonic reflections received directly from the target 206. The gesture processor 212 could also use the known location of the reflector 207 to identify the ultrasonic signals 208b, 210b as multipath reflections. The multipath reflections can then be ignored or combined with the ultrasonic signals 208a, 210a for use in identifying the target's location. In this way, the gesture processor 212 can ignore multipath reflections or actually use the multipath reflections to help in the identification of the target's location.
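One way to sketch this classification step is shown below. The mirror-image model, function names, and tolerance are assumptions for illustration: a planar fixed reflector can be modeled by mirroring the transducer across the reflector plane, so the expected multipath path length is the distance transducer → target → mirrored transducer.

```python
import math

# Sketch of classifying received echoes using a known (calibrated) fixed
# reflector position. Geometry, names, and tolerance are illustrative
# assumptions, not from the disclosure.

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def classify_echo(path_len, transducer, mirrored_transducer, target, tol=0.05):
    """Label an echo's round-trip path length (m) as direct, multipath, or unknown.

    mirrored_transducer is the transducer position reflected across the
    known fixed reflector's plane, so the multipath route target ->
    reflector -> transducer has the same length as target -> mirror image.
    """
    direct = 2.0 * dist(transducer, target)
    multipath = dist(transducer, target) + dist(target, mirrored_transducer)
    if abs(path_len - direct) <= tol:
        return "direct"
    if abs(path_len - multipath) <= tol:
        return "multipath"
    return "unknown"
```

An echo matching the predicted multipath length can then be discarded or combined with the direct reflection, rather than being triangulated into a ghost target.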
As shown in
In this example, the gesture processor 312 can operate in a similar manner as in
As shown in
When an array 404 of ultrasonic transmitters and/or receivers is used, it is possible to track the target 406 using beam-steering. For example, using prior knowledge of the location of the fixed reflector 407, it is possible to track the multipath reflections from the target 406 off the fixed reflector 407. The multipath reflections can either be discarded (such as by creating a null in the beam pattern of the ultrasonic receiver array), or they can be enhanced (such as by using receive beam steering to create a secondary receiver beam pattern towards the fixed reflector 407 to detect the multipath signals in addition to the direct target signals).
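The beam-steering described above can be sketched in terms of per-element delays for a uniform linear array. The array geometry, operating frequency, and names here are illustrative assumptions; steering a main lobe toward the direct path, or a secondary beam toward the known fixed reflector, amounts to choosing these delays.

```python
import math

# Sketch of delay-and-sum receive steering for a uniform linear array
# (geometry and parameters are illustrative assumptions).

SPEED_OF_SOUND_M_S = 343.0

def steering_delays(num_elements, spacing_m, angle_rad):
    """Per-element time delays (s) that align wavefronts arriving from angle_rad.

    angle_rad is measured from broadside (perpendicular to the array).
    Summing the delayed element signals reinforces arrivals from that
    direction and attenuates others.
    """
    return [n * spacing_m * math.sin(angle_rad) / SPEED_OF_SOUND_M_S
            for n in range(num_elements)]

# Example: an 8-element array with half-wavelength spacing at 40 kHz,
# steered 30 degrees off broadside.
wavelength = SPEED_OF_SOUND_M_S / 40_000.0
print(steering_delays(8, wavelength / 2.0, math.radians(30.0)))
```

Forming a second set of delays aimed at the fixed reflector 407 gives the secondary receiver beam mentioned above, while placing a null in that direction instead would suppress the multipath signals.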
This approach can be useful in various situations, such as with the detection of non-symmetric targets. Non-symmetric targets typically reflect energy in a non-uniform pattern. For some targets, multipath reflections off the fixed reflector 407 could be stronger than direct reflections. By tracking both the direct reflections (signals 408) and the multipath reflections (signals 410), the array 404 can receive reflections off the target 406 from two different angles, improving the average uniformity of the reflection cross-section of non-uniform targets.
As shown in
The gesture processor 312 can use prior knowledge of the location of the fixed reflector 307 to improve gesture recognition by allowing the gesture processor 312 to use the multipath reflections. For example, the gesture processor 312 can improve the power of reflections received directly from the target 306 by adding the received signal 500 to a time-shifted version of itself, generating a signal 500′ as shown in
The amount of time by which the time-shifted version of the signal 500 is shifted can be calculated in any suitable manner. For instance, the gesture processor 312 could create the time-shifted version of the signal 500 by shifting the signal 500 in an amount equal to the distance between two consecutive pulses in the signal 500. More specifically, given predefined knowledge of the reflector's location, the amount of time by which the time-shifted version of the signal 500 is shifted can be calculated as [(B+C)−A]/V, where A through C are the distances shown in
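The shift-and-add operation can be sketched as follows. The signal values are hypothetical, and only the [(B+C)−A]/V expression comes from the disclosure; because the multipath pulse arrives later than the direct pulse, adding an advanced copy of the signal to itself superimposes the multipath pulse onto the direct pulse.

```python
# Sketch of combining a direct reflection with its multipath copy
# (signal values are hypothetical; the shift formula is from the text).

def multipath_shift_seconds(a_m, b_m, c_m, speed_m_s=343.0):
    """Time offset between the direct pulse and its multipath copy,
    computed as [(B + C) - A] / V from the path-length distances."""
    return ((b_m + c_m) - a_m) / speed_m_s

def combine_with_shifted(samples, shift_samples):
    """Add the signal to a copy of itself advanced by shift_samples,
    so the later multipath pulse lands on top of the direct pulse."""
    out = []
    for i in range(len(samples)):
        echo = samples[i + shift_samples] if i + shift_samples < len(samples) else 0.0
        out.append(samples[i] + echo)
    return out

# Example: a direct pulse at sample 10 and a weaker multipath pulse at
# sample 15 combine into a single stronger pulse at sample 10.
signal = [0.0] * 20
signal[10], signal[15] = 1.0, 0.5
combined = combine_with_shifted(signal, 5)
```

The combined pulse has greater amplitude than the direct pulse alone, which is the signal-gain improvement described below.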
By using knowledge of one or more reflector locations, target detection is improved since multipath reflections can be ignored or used to obtain improved signal gain. The improved signal gain is obtained since multipath reflections can be combined with direct reflections. As a result, extended range can be obtained by using the environment to actually maximize reflected signals received from a target, improving the robustness of the overall system. This increases the probability of receiving a return from a smaller target or from a target at a greater distance.
Note that while
Although
After calibration, the gesture recognition system is placed into operation at step 604. During this time, ultrasonic signals are transmitted from the ultrasonic transmitter(s) at step 606. Multiple sets of ultrasonic reflections are received by at least one ultrasonic receiver at step 608. This could include, for example, the ultrasonic receiver(s) receiving ultrasonic reflections directly from a target. This could also include the ultrasonic receiver(s) receiving ultrasonic reflections reflected directly from any other reflectors. This could further include the ultrasonic receiver(s) receiving multipath reflections reflected indirectly from the target off other reflectors.
The ultrasonic reflections are used to localize a primary target at step 610. This could include, for example, a gesture processor using time-of-arrival calculations to estimate distances to actual or ghost targets. More exact target locations could be identified, if multiple receivers are present, using triangulation or another technique. During this time, the location and configuration of reflectors may also be mathematically inferred using the multiple receivers.
Pulses from the direct and multipath reflections are combined at step 612. This could include, for example, the gesture processor receiving a signal 500 and combining it with a time-delayed version of itself to create a signal 500′. The signal 500′ can contain combined pulses representing combinations of direct reflection pulses and multipath reflection pulses.
The actual location of the primary target is identified at step 614. This could include, for example, the gesture processor using the combined pulses to identify the location of the target. The combination of pulses helps to provide improved signal gain, which can help to improve target identification. The time-of-arrival of a multipath signal, given knowledge of the location of a reflector, can be used to refine the location of the target through geometric calculations. In a system with multiple transducers, an initial estimate of the position of a target can be confirmed by the existence of expected multipath signals, allowing classification of system noise and false reflections.
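The confirmation described for step 614 can be sketched as a check that the expected multipath arrival actually appears among the observed echoes. The mirror-image geometry, names, and tolerance below are illustrative assumptions; when the predicted multipath time of arrival matches an observed echo, the initial estimate gains confidence, and unmatched echoes can be classified as noise or false reflections.

```python
import math

# Sketch of confirming an initial target estimate with the expected
# multipath arrival (names and tolerances are assumptions).

def confirm_estimate(observed_toas_s, transducer, mirrored_transducer,
                     target_estimate, speed_m_s=343.0, tol_s=1e-4):
    """Check whether the expected multipath time of arrival appears
    among the observed echo arrival times.

    mirrored_transducer is the transducer position reflected across the
    known fixed reflector's plane, so the expected multipath path length
    is transducer -> target -> mirrored transducer.
    """
    d_direct = math.hypot(target_estimate[0] - transducer[0],
                          target_estimate[1] - transducer[1])
    d_mirror = math.hypot(target_estimate[0] - mirrored_transducer[0],
                          target_estimate[1] - mirrored_transducer[1])
    expected_toa = (d_direct + d_mirror) / speed_m_s
    return any(abs(t - expected_toa) <= tol_s for t in observed_toas_s)
```

In a system with multiple transducers, running this check per transducer would let the gesture processor accept an estimate only when the expected multipath signatures are consistently present.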
Although
In some embodiments, various functions described above are implemented or supported by a computer program that is formed from computer readable program code and that is embodied in a computer readable medium. The phrase “computer readable program code” includes any type of computer code, including source code, object code, and executable code. The phrase “computer readable medium” includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory.
It may be advantageous to set forth definitions of certain words and phrases used throughout this patent document. The term “couple” and its derivatives refer to any direct or indirect communication between two or more elements, whether or not those elements are in physical contact with one another. The terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation. The term “or” is inclusive, meaning and/or. The phrase “associated with,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, have a relationship to or with, or the like.
While this disclosure has described certain embodiments and generally associated methods, alterations and permutations of these embodiments and methods will be apparent to those skilled in the art. Accordingly, the above description of example embodiments does not define or constrain this disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure, as defined by the following claims.