The present invention relates to a system and method for a force sensing device.
In general such a system and method may be implemented in a controller of the device, the device comprising one or more force sensors. The present disclosure relates to the controller and to the device comprising the controller, as well as to corresponding methods and computer programs. Such a device may be a portable electrical or electronic device.
Thus, the present disclosure relates in general to a sensor system which may form part or all of electronic devices with user interfaces (e.g., mobile devices, game controllers, instrument panels, etc.).
Force sensors are known as possible input devices for electronic systems, and can be used as an alternative to traditional mechanical switches.
Many traditional mobile devices (e.g., mobile phones, personal digital assistants, video game controllers, etc.) include mechanical buttons to allow for interaction between a user of a mobile device and the mobile device itself. However, such mechanical buttons are susceptible to aging, wear, and tear that may reduce the useful life of a mobile device and/or may require significant repair if malfunction occurs. Also, the presence of mechanical buttons may render it difficult to manufacture mobile devices to be waterproof. Accordingly, mobile device manufacturers are increasingly looking to equip mobile devices with virtual buttons that act as a human-machine interface allowing for interaction between a user of a mobile device and the mobile device itself. Similarly, mobile device manufacturers are increasingly looking to equip mobile devices with other virtual interface areas (e.g., a virtual slider, interface areas of a body of the mobile device other than a touch screen, etc.). Ideally, for best user experience, such virtual interface areas should look and feel to a user as if a mechanical button or other mechanical interface were present instead of a virtual button or virtual interface area.
Presently, linear resonant actuators (LRAs) and other vibrational actuators (e.g., rotational actuators, vibrating motors, etc.) are increasingly being used in mobile devices to generate vibrational feedback in response to user interaction with human-machine interfaces of such devices. Typically, a sensor (traditionally a force or pressure sensor) detects user interaction with the device (e.g., a finger press on a virtual button of the device) and in response thereto, the linear resonant actuator may vibrate to provide feedback to the user. For example, a linear resonant actuator may vibrate in response to user interaction with the human-machine interface to mimic to the user the feel of a mechanical button click.
Force sensors thus detect forces on the device to determine user interaction, e.g. touches, presses, or squeezes of the device. There is a need to provide systems to process the output of such sensors which balance low power consumption with responsive performance. There is also a need in the industry for sensors to detect user interaction with a human-machine interface, wherein such sensors and related sensor systems provide acceptable levels of sensor sensitivity, power consumption, and size.
Accordingly, in an aspect of the invention and with reference to the attached
Such a force sensing system can be provided as a relatively low-power, always-on module which can provide an input to a relatively high-power central processing unit or applications processor. The event detection stage may be power-gated by the output of the activity detection stage. By utilising an activity detection stage in combination with an event detection stage, the power consumption of the system can accordingly be minimised for always-on operation.
Force Sensors
Preferably, the system further comprises:
Preferably, the at least one force sensor comprises one or more of the following:
Input Channel
Preferably, the input channel is arranged to provide at least one of the following:
Sensor Conditioning
Preferably, the system further comprises:
Preferably, the sensor conditioning stage is configured to perform a normalisation of the input received from the input channel. The normalisation may be based on predefined calibration parameters for the system, and/or based on dynamically-updated calibration parameters.
Preferably, the sensor conditioning stage is configured to perform a filtering of the input received from the input channel, for noise rejection. Preferably, the filtering comprises applying a low-pass filter to the input received from the input channel. Preferably, the low pass filter is configured to have a low latency.
Preferably, the sensor conditioning stage is configured to perform a baseline tracking of the input received from the input channel. Preferably, the sensor conditioning is configured to allow for adjustable time constants for the baseline tracking.
Where the system is configured to receive inputs from multiple force sensors, preferably the sensor conditioning stage is configured to determine an inverse correlation matrix of the inputs received from the input channel, in order to provide a sensor diagonalization of those inputs.
It will be understood that the sensor conditioning stage may comprise some or all of the above-described elements, to provide the conditioned force sense signal.
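Purely by way of non-limiting illustration, a sensor conditioning chain of the kind described above might be sketched in Python as follows; the class structure, parameter names and numerical values are assumptions made for the purpose of this example rather than features of any particular arrangement.

class SensorConditioner:
    """Illustrative conditioning of one force-sensor channel: normalisation,
    low-latency low-pass filtering and baseline tracking."""

    def __init__(self, gain=1.0, offset=0.0, lpf_alpha=0.3, baseline_tc=0.001):
        self.gain = gain                # calibration gain (predefined or dynamically updated)
        self.offset = offset            # calibration offset
        self.lpf_alpha = lpf_alpha      # smoothing factor of the single-pole low-pass filter
        self.baseline_tc = baseline_tc  # adjustable time constant for baseline tracking
        self.lpf_state = 0.0
        self.baseline = 0.0

    def process(self, raw_sample):
        # Normalisation based on calibration parameters
        x = (raw_sample - self.offset) * self.gain
        # Low-pass filtering for noise rejection (single pole, low latency)
        self.lpf_state += self.lpf_alpha * (x - self.lpf_state)
        # Slow baseline tracking; the residual is the conditioned force sense signal
        self.baseline += self.baseline_tc * (self.lpf_state - self.baseline)
        return self.lpf_state - self.baseline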
Activity Detection
Preferably, the activity detection stage comprises a thresholding module, the thresholding module configured to determine if the activity level of the input from the at least one force sensor or of the conditioned force sense signal exceeds at least one threshold, wherein if the activity level exceeds the at least one threshold the activity detection stage determines that an activity has occurred at the force sensor.
Preferably, the thresholding module is configured to monitor a signal received by the activity detection stage, and to compare the power level of the monitored signal against a power level threshold indicative of an activity occurring at the sensor.
Preferably, the system further comprises a feature extraction module configured to extract features from a monitored signal, the monitored signal comprising the input from the at least one force sensor or the conditioned force sense signal.
Preferably, the feature extraction module is configured to extract at least one of the following from the monitored signal:
Preferably, the thresholding module is further configured to perform at least one of the following:
Preferably, the feature extraction module is power gated by the thresholding module, such that the feature extraction module is enabled if the power level of the monitored signal exceeds a power level threshold indicative of an activity occurring at the sensor.
Preferably, the indication generated by the activity detection stage and received by the event detection stage comprises:
Event Detection
Preferably, the event detection stage comprises a comparison stage arranged to compare the received indication against a stored touch event model, and to determine if a user input has occurred based on the comparison.
Preferably, the event detection stage is arranged to output an indication that a user input has been detected by the event detection stage.
Preferably, the event detection stage is configured to detect the type of user input that has occurred, by comparison of the received indication with a stored touch event model, to distinguish between:
wherein the event detection stage is arranged to output an indication of the type of user input that has been detected.
Calibration/Diagnostics
Preferably, the system further comprises a calibration and diagnostics stage which is configured to receive the output of the input channel and/or the output of the sensor conditioning stage and to determine if a recalibration of the system is required.
Preferably, the calibration and diagnostics stage is configured to generate a system output if a recalibration of the system is required. Preferably, the system output is provided as a system interrupt for transmission to a central controller or processing unit which may be coupled with the system.
Device
There is further provided a device comprising the force sensing system as described above. The device may comprise any suitable electronic device, e.g. a mobile phone or tablet device. The force sensing system is preferably provided coupled with an applications processor or central processing unit of any such device.
Input Path
In a further aspect of the invention, it will be understood that the force sensing system is provided with a configurable input path from the force sensors to the components of the force sensing system. In a preferred aspect, the mode of operation of the input path may be different depending on whether the system or module is performing activity detection or event detection. The signal path between the force sensors and the appropriate modules may be switched to allow for considerations of power consumption, latency, etc. For example, when performing event detection the input path can be configured in such a way as to conserve power, although this may result in a degradation of the signal-to-noise ratio.
Method
There is further provided a force sensing method, the method comprising the following steps:
Preferably, the method further comprises the step of:
It will be understood that the above-described features of the force sensing system may be provided as further method steps.
Further statements defining aspects of the present invention and related optional features are provided at the end of the description.
Reference will now be made, by way of example only, to the accompanying drawings, of which:
Device
As shown in
The enclosure 101 may comprise any suitable housing, casing, frame or other enclosure for housing the various components of device 100. Enclosure 101 may be constructed from plastic, metal, and/or any other suitable materials. In addition, enclosure 101 may be adapted (e.g., sized and shaped) such that device 100 is readily transported by a user (i.e. a person).
Controller 110 may be housed within enclosure 101 and may include any system, device, or apparatus configured to control functionality of the device 100, including any or all of the memory 120, the force sensors 130, and the I/O unit 140. Controller 110 may be implemented as digital or analogue circuitry, in hardware or in software running on a processor, or in any combination of these.
Thus controller 110 may include any system, device, or apparatus configured to interpret and/or execute program instructions or code and/or process data, and may include, without limitation a processor, microprocessor, microcontroller, digital signal processor (DSP), application specific integrated circuit (ASIC), FPGA (Field Programmable Gate Array) or any other digital or analogue circuitry configured to interpret and/or execute program instructions and/or process data. Thus the code may comprise program code or microcode or, for example, code for setting up or controlling an ASIC or FPGA. The code may also comprise code for dynamically configuring re-configurable apparatus such as re-programmable logic gate arrays. Similarly, the code may comprise code for a hardware description language such as Verilog TM or VHDL. As the skilled person will appreciate, the code may be distributed between a plurality of coupled components in communication with one another. Where appropriate, such aspects may also be implemented using code running on a field-(re)programmable analogue array or similar device in order to configure analogue hardware. Processor control code for execution by the controller 110, may be provided on a non-volatile carrier medium such as a disk, CD- or DVD-ROM, programmed memory such as read only memory (Firmware), or on a data carrier such as an optical or electrical signal carrier. The controller 110 may be referred to as control circuitry and may be provided as, or as part of, an integrated circuit such as an IC chip.
Memory 120 may be housed within enclosure 101, may be communicatively coupled to controller 110, and may include any system, device, or apparatus configured to retain program instructions and/or data for a period of time (e.g., computer-readable media). In some embodiments, controller 110 interprets and/or executes program instructions and/or processes data stored in memory 120 and/or other computer-readable media accessible to controller 110.
The force sensors 130 may be housed within, be located on or form part of the enclosure 101, and may be communicatively coupled to the controller 110. Each force sensor 130 may include any suitable system, device, or apparatus for sensing a force, a pressure, or a touch (e.g., an interaction with a human finger) and for generating an electrical or electronic signal in response to such force, pressure, or touch. Example force sensors 130 include or comprise capacitive displacement sensors, inductive force sensors, strain gauges, piezoelectric force sensors, force sensing resistors, piezoresistive force sensors, thin film force sensors and quantum tunnelling composite-based force sensors. In some arrangements, other types of sensor may be employed.
In some arrangements, the electrical or electronic signal generated by a force sensor 130 may be a function of a magnitude of the force, pressure, or touch applied to the force sensor. Such electronic or electrical signal may comprise a general purpose input/output (GPIO) signal associated with an input signal in response to which the controller 110 controls some functionality of the device 100. The term “force” as used herein may refer not only to force, but to physical quantities indicative of force or analogous to force such as, but not limited to, pressure and touch.
The I/O unit 140 may be housed within enclosure 101, may be distributed across the device 100 (i.e. it may represent a plurality of units) and may be communicatively coupled to the controller 110. Although not specifically shown in
As a convenient example, the device 100 may be a haptic-enabled device. As is well known, haptic technology recreates the sense of touch by applying forces, vibrations, or motions to a user. The device 100 for example may be considered a haptic-enabled device (a device enabled with haptic technology) where its force sensors 130 (input transducers) measure forces exerted by the user on a user interface (such as a button or touchscreen on a mobile telephone or tablet computer), and an LRA or other output transducer of the I/O unit 140 applies forces directly or indirectly (e.g. via a touchscreen) to the user, e.g. to give haptic feedback. Some aspects of the present disclosure, for example the controller 110 and/or the force sensors 130, may be arranged as part of a haptic circuit, for instance a haptic circuit which may be provided in the device 100. A circuit or circuitry embodying aspects of the present disclosure (such as the controller 110) may be implemented (at least in part) as an integrated circuit (IC), for example on an IC chip. One or more input or output transducers (such as the force sensors 130 or an LRA) may be connected to the integrated circuit in use.
Of course, this application to haptic technology is just one example application of the device 100 comprising the plurality of force sensors 130. The force sensors 130 may simply serve as generic input transducers to provide input (sensor) signals to control other aspects of the device 100, such as a GUI (graphical user interface) displayed on a touchscreen of the I/O unit 140 or an operational state of the device 100 (such as waking components from a low-power “sleep” state).
The device 100 is shown comprising four force sensors 130, labelled s1, s2, s3 and s4, with their signals labelled S1, S2, S3 and S4, respectively. However, for some of the functionality disclosed herein the device 100 need only comprise one or a pair of (i.e. at least two) force sensors 130. Example pairs comprise s1 and s2 (on different or opposite sides of the device 100) and s1 and s3 (on the same side of the device 100). Of course, the device 100 may comprise more than four force sensors 130, such as additional sensors s5 to s8 arranged in a similar way to sensors s1 to s4 but in another area of the device 100. As another example, the device 100 may comprise three or more force sensors 130 arranged on the same side of the device (like s1 and s3), for example in a linear array so that some of those force sensors 130 are adjacent to one another and some are separated apart from one another by at least another force sensor 130.
Although
Thus, the force sensors s1 to s4 may be located on the device according to anthropometric measurements of a human hand. For example, where there is only a pair of force sensors 130, they may be provided on the same side (e.g. s1 and s3) or on opposite/different sides (e.g. s1 and s2) of the device 100 as mentioned above. The force sensors 130 are provided at different locations on the device, but may be in close proximity to one another.
In one arrangement (not shown in
A given user force input (from a touch of the device 100 in the area of the force sensors 130) may result in forces being picked up at a plurality of force sensors 130, in part due to mechanical interaction (e.g. mechanical connections, such as via the enclosure 101) between the force sensors 130.
To generalise the arrangement of
Examples of such force sensing systems will be better understood in connection with
Force Sensing System
The input sensor signals derived from the sensors 1 to N are received by the controller 110A at a sensor input channel of the controller as indicated, e.g. at a physical hardware input of the controller 110A. The functionality of the controller 110A is organised into three general stages, in particular a sensor conditioning stage 300A, a sensor activity detection stage 400A and a sensor event detection stage 500A.
The division into stages may be considered a division into sections, units or modules, and may be considered schematic. The functionality of the controller 110A may be implemented in software, executed on a processor of the controller 110A, or in hardware, or in a combination of software and hardware.
The sensor conditioning stage 300A itself comprises stages corresponding to respective sensor (or signal) conditioning activities or processes. As in
The sensor activity detection stage 400A itself comprises stages corresponding to respective detection activities or processes. As in
The sensor event detection stage 500A comprises a press/tap model stage 510, configured to classify the user force input based on the features extracted by the feature extraction stage 420. The press/tap model stage 510 may in some arrangements be configured to act on the sensor signals as conditioned by the sensor conditioning stage 300A and/or on the sensor signals as received from the sensors 130, along with signal-level thresholds from the thresholding stage 410.
Further Force Sensing System
The system 200B is depicted as comprising sensors 130 (sensors 1 to N) and a controller 110B (as an example detailed implementation of the controller 110). As before, the system 200B may be taken as the controller 110B alone.
In line with the system 200A, the functionality of the controller 110B is organised into three general stages, in particular a preprocessing stage 300B, an edge detection stage 400B and an event classification stage 500B.
Stages 300B, 400B and 500B may be compared to stages 300A, 400A and 500A, respectively. To ease understanding, like elements have been denoted with like reference signs where possible. As before, the division into stages may be considered a division into sections, units or modules, and may be considered schematic. The functionality of the controller 110B may be implemented in software, executed on a processor of the controller 110B, or in hardware, or in a combination of software and hardware.
In the preprocessing stage 300B as compared to the sensor conditioning stage 300A, the inverse correlation matrix stage 320 has been replaced with an anomaly detection stage 325, a re-calibration request stage 330, a sensor to button mapping stage 335 and an adjacent sensor suppression stage 340. The adjacent sensor suppression stage 340 may be compared with the inverse correlation matrix stage 320.
The anomaly detection stage 325 is configured to detect user force inputs which do not satisfy, or which deviate from, a model defining one or more “normal” or “expected” user force inputs. Put another way, the anomaly detection stage 325 may be configured to detect user force inputs which satisfy a model defining one or more “abnormal” user force inputs. In this way, the anomaly detection stage 325 may detect an abnormal use (or misuse) of the device 100 and use this detection to raise an anomaly flag (e.g. set the value of an anomaly-flag variable to 1 from 0). The anomaly flag may be used to disable or affect the operation of other stages of the controller 110B (for example to save power or processing overhead) as indicated. Example functionality of the anomaly detection stage 325 is considered in more detail later herein.
The re-calibration request stage 330 is configured to detect properties of the sensor signals (as conditioned by the preceding stages) which indicate that a re-calibration process is needed. The re-calibration request stage 330 may be configured to initiate or trigger the re-calibration process when it is detected that the sensor signals (as conditioned by the preceding stages) no longer meet a defined specification.
The sensor to button mapping stage 335 is configured to combine (e.g. by weighted sum or average) sensor signals which originate from different sensors 130 (with appropriate weighting) so that each combined sensor signal corresponds to a virtual button whose location is defined relative to locations of the sensors 130. Sensor signals prior to the sensor to button mapping stage 335 may be considered sensor signals in the sensor domain (i.e. the signals are per sensor) and sensor signals output by the sensor to button mapping stage 335 may be considered sensor signals in the button domain (i.e. the signals are per “virtual” button).
The adjacent sensor suppression stage 340 is configured to subtract a proportion of one sensor signal from another to reduce or suppress the effect of mechanical crosstalk between the sensors 130 on the sensor signals. The adjacent sensor suppression stage 340 is shown operating in the button domain, i.e. on combined sensor signals, but it could operate in the sensor domain.
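Purely by way of non-limiting illustration, the sensor to button mapping and adjacent sensor suppression described above might be sketched in Python as follows; the matrix values are assumptions chosen only to show the form of the computation.

import numpy as np

def sensors_to_buttons(sensor_samples, mapping_weights):
    # Weighted combination of sensor-domain signals into virtual-button signals;
    # mapping_weights has one row per virtual button and one column per sensor.
    return mapping_weights @ sensor_samples

def suppress_adjacent(button_samples, crosstalk):
    # Subtract a proportion of each neighbouring signal to reduce mechanical
    # crosstalk; crosstalk is a square matrix with zeros on its diagonal.
    return button_samples - crosstalk @ button_samples

# Example: four sensors combined 2-to-1 into two virtual buttons
sensor_samples = np.array([0.80, 0.70, 0.10, 0.05])
mapping_weights = np.array([[0.5, 0.5, 0.0, 0.0],
                            [0.0, 0.0, 0.5, 0.5]])
crosstalk = np.array([[0.0, 0.1],
                      [0.1, 0.0]])
buttons = suppress_adjacent(sensors_to_buttons(sensor_samples, mapping_weights), crosstalk)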
Although not shown in
In the edge detection stage 400B as compared to the sensor activity detection stage 400A, the thresholding stage 410 has been replaced by a noise power estimation stage 412 and an adaptive thresholding stage 414. In the running example, these stages are configured to determine signal-level thresholds in an adaptive manner, in particular a noise threshold below which signal energy is dominated by noise energy. Example functionality of the noise power estimation stage 412 and the adaptive thresholding stage 414 is considered in more detail later herein. The noise power estimation stage 412 and adaptive thresholding stage 414 are shown operating in the button domain, i.e. on combined sensor signals, but could operate in the sensor domain.
The feature extraction stage 420 in
Also provided in the system 200B is a squeeze detector stage 430. The squeeze detector stage 430 is configured to detect a user squeeze input as an example user force input in which the user has applied a compressive force to the device 100. The squeeze detector stage 430 is shown operating in the sensor domain but it could operate in the button domain, i.e. on combined sensor signals. Such a user squeeze input may be picked up by corresponding sensors 130 (or corresponding virtual buttons), for example located on different or opposing sides or edges of the device 100, and as such the squeeze detector stage 430 may be configured to operate on particular pairs or groups of sensor signals accordingly.
Also provided in the system 200B (e.g. in the edge detection stage 400B) is a fix (or fixed) threshold detection stage 440, which may be optional. The fix threshold detection stage 440 is shown operating in the button domain, i.e. on combined sensor signals, but it could operate in the sensor domain. The fix threshold detection stage 440 is configured to determine whether one or more sensor signals (e.g. combined sensor signals) exceed a fixed threshold value, for example to indicate that a base level of activity has been detected. Such a level of activity may for example be sufficient to cause the controller 110B to transition between modes of operation.
In this respect, the fix threshold detection stage 440 (e.g. in the edge detection stage 400B) is indicated in
For example, the noise power estimation stage 412 and adaptive thresholding stage 414 may be disabled unless the controller 110B is in the standby or active mode. The transition from the deep standby mode to the standby mode may occur when one or more sensor signals (or combined sensor signals) exceed the fixed threshold value mentioned above. As another example, the feature extraction stage 420 and the squeeze detector stage 430 may be disabled unless the controller 110B is in the active mode. The transition from the standby mode to the active mode may occur when one or more sensor signals (or combined sensor signals) exceed a threshold value such as a noise threshold determined by the noise power estimation and adaptive thresholding stages 412, 414.
As another example, the sample frequency, data acquisition rate, scan rate or sample rate at which the sensors 130 are sampled (and at which the consequential sensor signals are processed) may be dependent on the mode of operation. For example, one (very low) sample rate may be used in the deep standby mode, another (low) sample rate may be used in the standby mode, and a further (high) sample rate may be used in the active mode. In the running example, a low sample rate of e.g. 30 Hz is used in the standby mode and a high sample rate of e.g. 1000 Hz is used in the active mode.
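As a non-limiting sketch of how such mode-dependent sample rates might be handled, the following Python fragment selects a sample rate per mode; the deep standby rate and the transition structure are assumptions made for illustration, with the 30 Hz and 1000 Hz values taken from the running example.

DEEP_STANDBY, STANDBY, ACTIVE = "deep standby", "standby", "active"

SAMPLE_RATE_HZ = {DEEP_STANDBY: 5, STANDBY: 30, ACTIVE: 1000}  # deep-standby rate assumed

def next_mode(mode, signal_level, fixed_threshold, noise_threshold):
    # Deep standby -> standby when the fixed threshold is exceeded;
    # standby <-> active around the (adaptive) noise threshold.
    if mode == DEEP_STANDBY and signal_level > fixed_threshold:
        return STANDBY
    if mode == STANDBY and signal_level > noise_threshold:
        return ACTIVE
    if mode == ACTIVE and signal_level < noise_threshold:
        return STANDBY
    return mode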
The event classification stage 500B comprises a press/tap model stage (module) 510 corresponding to that in the sensor event detection stage 500A, operating in the button domain and based on the feature stream output by the feature extraction stage 420. The press/tap model stage 510 may in some arrangements (although not indicated in
Although not shown in
Thresholds
To aid in an understanding of the functionality of various units of the controller 110B,
In each of
In both cases, four thresholds are shown, namely a noise threshold (TH noise), a fall threshold (TH fall), a tap threshold (TH tap) and a rise threshold (TH rise), having successively larger amplitude values in that order.
In
Looking at
The TH noise threshold is (in the running example) computed dynamically or adaptively as explained later. Also, if the current signal amplitude (force reading) is above the TH noise threshold, the system is configured to operate using the high sample rate mentioned earlier (active mode); however, if the current amplitude (force reading) is below the TH noise threshold, the system is configured to operate using the low sample rate mentioned earlier (standby mode). Therefore, the TH noise threshold may control the sample rate used for each new incoming signal sequence (or signal sample) and hence power consumption.
If the sensor signal hits the TH rise threshold (see
When the sensor signal rises above the tap threshold TH tap, the system (in particular, the feature extraction stage 420) starts to populate a feature vector. In the running example, this vector contains the length of the pulse, gradient to the maximum peak, time from the point it crosses the tap threshold TH tap to the maximum peak and also the maximum value of the peak. Effectively, the feature vector contains features which define the profile or shape of the waveform between rising through the tap threshold TH tap and then falling through the TH fall threshold.
If the sensor signal hits the TH fall threshold (once it was already above the TH tap or TH rise threshold) the system sends the features of the current signal sequence (the feature vector) as obtained by the feature extraction stage 420 to the press/tap model stage 510 and triggers a fall flag. In a case where the signal only reached the tap threshold TH tap but not the rise threshold TH rise (as shown in
This feature vector is updated for each sample (i.e. at the sample rate) until it is finally sent to the model when the signal is below the fall threshold TH fall. This avoids continually sending information to the model, i.e. the press/tap model stage 510. After the feature vector is sent to the press/tap model stage 510 for a given signal sequence, it is removed or cleared from the feature extraction stage 420 so that it can be repopulated for a subsequent signal sequence.
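By way of non-limiting illustration, the per-sample population of such a feature vector between the tap and fall thresholds might be sketched in Python as follows; the variable names are assumptions, and the features follow those listed above (pulse length, gradient to the maximum peak, time to the maximum peak and maximum peak value).

class FeatureExtractor:
    # Illustrative feature-vector population between TH tap and TH fall.

    def __init__(self, th_tap, th_fall, sample_rate_hz):
        self.th_tap = th_tap
        self.th_fall = th_fall
        self.dt = 1.0 / sample_rate_hz
        self._clear()

    def _clear(self):
        self.active = False
        self.n_samples = 0
        self.peak = 0.0
        self.time_to_peak = 0.0
        self.start_level = 0.0

    def process(self, x):
        # Returns the feature vector once the signal falls through TH fall, else None.
        if not self.active:
            if x > self.th_tap:                   # rising through the tap threshold
                self.active = True
                self.n_samples = 1
                self.peak = x
                self.time_to_peak = self.dt
                self.start_level = x
            return None
        self.n_samples += 1
        if x > self.peak:
            self.peak = x
            self.time_to_peak = self.n_samples * self.dt
        if x < self.th_fall:                      # falling through the fall threshold
            pulse_length = self.n_samples * self.dt
            gradient = (self.peak - self.start_level) / max(self.time_to_peak, self.dt)
            features = [pulse_length, gradient, self.time_to_peak, self.peak]
            self._clear()                         # cleared for the next signal sequence
            return features
        return None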
Anomaly Detection
It will later be explained how the press/tap model stage 510 uses the features, in particular the feature vector, to determine whether a tap event or a press event (or some other defined event, i.e. a “wanted” event) has occurred. However, before the detection of such events (desired or recognised “normal” user inputs) is explained, the detection of anomalies (undesired user inputs) will be considered.
The sensors 130, placed on any location of the device 100, are intended to measure displacements of the enclosure 101 which are then mapped to a force given the sensitivity of the sensor 130. These sensors 130 can be used for example to measure the force in a particular location of the device 100 and trigger a ‘button pressed’ internal state or event when the force is above a certain level. However, there are other actions, such as twisting or bending the device 100, that can cause a displacement of the enclosure 101 and therefore create a force on the sensor 130 that is similar (in terms of force magnitude and/or duration) to a button press. This kind of action might potentially falsely trigger internal events or states and thus reduce the overall performance of the force sense system 200B.
The anomaly detection stage 325 is intended to avoid such false triggers caused by a misuse (non-intended use) of the device 100. Any unexpected behaviour of a candidate (input) sensor signal, i.e. the signal to be processed, is flagged in time.
In the force sense domain, the signals acquired when an intended or “normal” event is present can be well characterised as a signal rising from approximately 0 amplitude and then, after a limited period of time, falling back to approximately the same initial value (see
The anomaly flag may be used to disable making any decision (e.g. on a type of user force input occurring, such as a push or tap event) at that point. For example, the anomaly flag may be used to disable the feature extraction stage 420 and/or the squeeze detector stage 430. The anomaly flag may also be used to disable bias tracking (e.g. by baseline tracking stage 315) in order to avoid using any of this data to update the force sense system.
In step 602, a candidate sensor signal is monitored. This signal may be a sensor signal derived from an individual sensor 130, or for example a combination (such as an average) of sensor signals derived from individual sensors 130, i.e. in the sensor domain. In another arrangement the candidate sensor signal may be a combination sensor signal representing a virtual button as output by the sensor to button mapping stage 335, or an average of such combination sensor signals, i.e. in the button domain.
In step 604 it is determined whether the candidate sensor signal has a given characteristic which identifies that signal as representing an anomalous user force input (an anomaly). If it does not (NO, step 604), the method 600 returns to step 602 to continue monitoring the candidate sensor signal. If it does (YES, step 604), the method proceeds to step 606 where an anomaly is registered (corresponding to the anomaly flag being raised). The method 600 then returns to step 602. The method 600 may be terminated at any point.
Multiple approaches can be used in step 604 to detect a deviation of the candidate sensor signal from the expected pattern, i.e. to detect the or at least one given characteristic. One approach is to calculate the sum of the amplitude signals coming from some or all of the sensors 130 and activate the anomaly flag if this summation is less than a predetermined negative value. This approach is explored further below. Alternatively, the candidate sensor signal can be modelled by its statistics and the anomaly flag triggered when the statistics of the incoming signal deviate from the model. Different features could potentially be used to measure this pattern inconsistency. Another possibility is to use a classification algorithm and this possibility is mentioned later in connection with event detection.
As above, one possible implementation of step 604 is to detect negative forces on the N channels. In an example, the parameter used to quantify this behaviour is the average

x̄(n) = (1/N)·[x1(n) + x2(n) + … + xN(n)]
where the sensor signals are digital signals comprising a series of numbered samples, n is the sample number, i is the channel number, N is the total number of channels (which may have a 1-to-1 relationship with the force sensors 130 in the sensor domain, or a 1-to-many mapping in the case of virtual buttons), and xi(n) represents the sensor signal for channel i.
The parameter
In the test, first the six sensors 130 (buttons) were pressed sequentially and then two types of twist were made to the device 100. In the upper-most graph the individual sensor signals for the six sensors 130 are shown, and the tested sequence of button pressing and twisting can be seen. In the upper-central graph combined sensor signals (corresponding to virtual buttons) are shown using a 2-to-1 mapping of sensors 130 to virtual buttons, to help give a clearer indication of the forces applied to the device 100. In the lower-central graph the average (mean) of the combined sensor signals from the upper-central graph (effectively equivalent to an average of the sensor signals from the upper-most graph) is shown, i.e.
Another possibility, in addition to or instead of looking for a negative average (mean) value, would be to look for a negative cross-correlation between specific pairs of sensor signals (where such a negative cross-correlation would be expected to be indicative of an anomaly). In such a case the lower-central graph could plot the cross-correlation for specific pairs of sensor signals and a mapping function such as a sigmoid function could be used to translate each plot to an anomaly detection signal comparable to that in the lower-most graph.
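A minimal Python sketch of the negative-average check of step 604 is given below; the negative threshold value and the optional sigmoid mapping parameters are illustrative assumptions only.

import math

def channel_average(samples):
    # Mean of the per-channel samples xi(n) for the current sample number n
    return sum(samples) / len(samples)

def anomaly_detect(samples, negative_threshold=-0.2, use_sigmoid=False, steepness=20.0):
    # Raise the anomaly flag when the average force is below a predetermined
    # negative value; optionally map the average through a sigmoid to give a
    # soft anomaly-detection signal between 0 and 1.
    avg = channel_average(samples)
    if use_sigmoid:
        return 1.0 / (1.0 + math.exp(steepness * (avg - negative_threshold)))
    return avg < negative_threshold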
As before, when an anomaly is detected the system 200B (e.g. the anomaly detection stage 325 itself) may be configured to disable one or more functions of the system (or indeed of the host device 100), in practice via raising an anomaly flag. For example, as in
Thus, in general, the system is configured to recognise anomalous (unwanted) user inputs and to control operation of the system in response. This control may comprise changing a mode of operation, changing a sample rate, or disabling (or transitioning to a low-power mode) one or more functions/units, e.g. selectively. This may enable improved power or processing overhead efficiency, or reduce the number of false positives (falsely recognised user inputs).
Event Detection
The press/tap model stage 510 will now be considered further, in particular its use of signal features (e.g. feature vectors) to determine whether a tap event or a press event (or some other defined event, associated with a corresponding “wanted”, “accepted”, “supported” or “intended” user force input) has occurred. Basic supported events such as tap and press events which correspond to such supported user force inputs may be referred to as core events.
In overview, the event detection functionality of the press/tap model stage 510 reduces the complexity of detection of core events from force sensor readings (i.e. sensor signals, or combined sensors signals). These core events may be tap, push/press, and long push/press events in the context of a device 100 such as a mobile telephone or tablet computer, although these are simply examples. The event detection functionality is configured to recognise not only defined core events but also combinations of them which may be referred to as gestures. Example gestures include sliding up/down, double tap and double press events.
The detection of core events from one or more sensor signals may involve considering multiple signal features (signal characteristics) such as duration, maximum force applied or the gradient of a rising pulse. This dependency could lead to the need for a complex and large state-machine implementation simply to distinguish between a few different core events.
To avoid such complexity the event detection functionality employs a classification algorithm which operates based on the input sensor signals (or, in the case of
A model may be generated from recorded training data and implemented by way of a corresponding (classification) algorithm to classify a given signal sequence (extracted from a sensor signal), with the added advantage that real data guides the discrimination problem. Such model-based core event detection may enable ready adaptation of the algorithm to new data, e.g. for a new setup of the sensors 130 on the device 100, whilst maintaining good performance on the given training data.
In step 802, a signal sequence is extracted from a candidate sensor signal as a sequence which may (or may not) represent a core event. This signal may be a sensor signal derived from an individual sensor 130, or for example a combination of sensor signals derived from individual sensors 130. In another arrangement the candidate sensor signal may be a combination sensor signal representing a virtual button, i.e. in the button domain, as output by the sensor to button mapping stage 335.
Looking back to
In step 804, defined signal features are extracted from the signal sequence. In the present example, when the sensor signal rises above the tap threshold TH tap the system (in particular, the feature extraction stage 420) starts to populate the feature vector. This vector contains the length of the pulse, gradient to the maximum peak, time from the point it crosses the tap threshold TH tap to the maximum peak and also the maximum value of the peak. This feature vector is updated for each sequence, sample-by-sample (i.e. at the given sample rate) until the samples which make up the sequence have been processed.
The feature vector once complete for the signal sequence concerned is sent to the press/tap model stage 510, after which the feature vector is removed or cleared from the feature extraction stage 420 so that it can be repopulated for a subsequent signal sequence.
In step 806, the signal sequence concerned is classified on the basis of its feature vector. This involves applying the feature vector to the model generated from recorded training data, to determine if the signal sequence can be classified as corresponding to any of the defined core events supported by the model. This may lead to a sequence classification corresponding to the signal sequence concerned, the classification indicating to which if any of the core events (categories) defined by the model it has been determined that the signal sequence belongs. In some cases thus the classification may indicate that the sequence concerned does not belong to (or represent) any of the core events.
In step 808 it is determined whether multiple signal sequences have been classified, so that it can be determined whether combinations of classifications (core events) can be recognised as corresponding to defined gestures as mentioned above. If so (YES, step 808), the method proceeds to step 810 where such combinations of classifications are classified as corresponding to gesture events. Otherwise (NO, step 808), the method returns to step 802. The method 800 may be terminated at any time.
Multiple approaches can be used in step 806 to classify the signal sequence on the basis of its feature vector, as indicated earlier. One approach is based on support vector machines (SVM) due to their generalization capabilities given a small dataset (and thus advantageous when implemented in “small” devices 100 such as mobile telephones). As above, the set of features extracted from the signal sequence (force sensing data when the signal is above one or more thresholds) for use by an SVM classifier may be (in the context of a push input): number of samples; area underneath the push waveform; gradient estimation and maximum value of the push.
The use of an SVM classifier or model by the press/tap model stage 510 will now be considered, using the running example where the feature extraction stage 420 extracts features based on a signal sequence which starts when it exceeds the tap threshold TH tap and ends when the signal falls below the fall threshold TH fall. As above, in the running example the sensors 130 are sampled at a low sample rate (e.g. 30 Hz) when the signal is below the noise threshold TH noise and at a high sample rate (e.g. 1000 Hz) when the signal is above the noise threshold TH noise.
As before, an SVM classifier is useful when there will not be a large amount of data for any one device 100; it can generalise patterns from few examples (training signal sequences or training feature vectors).
The classifier inputs a feature vector created by the feature extraction stage 420. Assuming a linear SVM kernel for the sake of example (a non-linear kernel would also be possible):
di(n)=βi·x(n)+bi
where di(n) is the distance of the normalized input feature vector x(n) to a hyperplane defined by the slope βi and bias bi. The variable i here represents an index of the model. The present arrangement employs a 1 vs 1 approach in the multiclass classification problem, and thus the total number of models created is given by P·(P−1)/2 where P is the number of classes (core events). The distances of each individual model are combined to provide an estimation of the class given the input feature vector x(n) using an Error Correcting Output Codes (ECOC) approach as follows:
where mip is the element in the ith row and pth column of the coding matrix M. This matrix M only contains three different elements {−1, 0, 1} where 0 indicates that signal sequences of the given class were not included in the training phase, and −1 and 1 indicate the label used in the training for the given class.
See for example “Error Correcting Output Codes for multiclass classification: Application to two image vision problems”, IEEE 16th CSI International Symposium on Artificial Intelligence and Signal Processing.
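By way of non-limiting illustration, the pairwise linear SVM distances and an ECOC-style combination might be sketched in Python as follows. The hinge-loss decoding rule, the coding matrix and the numerical values are assumptions made for this example; the trained slopes βi and biases bi would in practice be obtained from the recorded training data.

import numpy as np

def svm_distances(x, betas, biases):
    # d_i(n) = beta_i . x(n) + b_i for each of the P(P-1)/2 pairwise linear models
    return betas @ x + biases

def ecoc_classify(distances, coding_matrix):
    # coding_matrix M: one row per model, one column per class, entries in {-1, 0, 1}.
    # A hinge-loss decoding is assumed here; the class with the smallest loss wins.
    losses = []
    for p in range(coding_matrix.shape[1]):
        m = coding_matrix[:, p]
        used = m != 0
        loss = np.maximum(0.0, 1.0 - m[used] * distances[used]).sum()
        losses.append(loss / max(int(used.sum()), 1))
    return int(np.argmin(losses))

# Example with P = 3 classes (tap, press, long press) -> 3 pairwise models
betas = np.array([[ 0.8, -0.1,  0.3,  0.5],     # illustrative trained slopes (4 features)
                  [ 0.2,  0.4, -0.6,  0.1],
                  [-0.3,  0.7,  0.2, -0.4]])
biases = np.array([0.1, -0.2, 0.0])             # illustrative trained biases
M = np.array([[ 1, -1,  0],                     # model 0: class 0 vs class 1
              [ 1,  0, -1],                     # model 1: class 0 vs class 2
              [ 0,  1, -1]])                    # model 2: class 1 vs class 2
feature_vector = np.array([0.12, 3.4, 0.05, 0.9])   # normalised feature vector x(n)
event_class = ecoc_classify(svm_distances(feature_vector, betas, biases), M)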
The SVM classifier may be configured for incremental learning. In particular, the model defined in the classifier may be updated while it is being used, adapting to each user from a generic model to improve the final performance. This learning may be semi-supervised so it will learn from new estimations output by the SVM classifier (i.e. the model), given additional information indicating whether the estimations are adequate for model adaptation. The learning may also be (fully) supervised so that it will learn from new estimations output by the SVM classifier (i.e. the model) given known user inputs, or unsupervised so that it will learn from its estimations without using additional information.
In relation to supervised or semi-supervised learning, the controller 110B (or a separate applications processor) may be able to provide the additional information. For example, if a tap is triggered but in the given status of the device 100 that functionality is not supported, then it may be assumed that the estimation is wrong and not adequate for model adaptation. As another example, the device 100 may operate in a training mode when particular user inputs (core events) are invited.
In line with step 810 of method 800, the press/tap model stage 510 may be configured to detect gestures based on multiple classifications, for example occurring within a defined period of time. The SVM classifier may return core events which can be, as examples, a push/press, a long push/press or a tap. The event classifier operating in line with step 810 may then find gesture events such as a double tap or a double press. This occurs when two taps or two presses are detected in a given period of time.
Incidentally, the classification algorithm may be adapted when detecting gestures to make it more likely to detect the second and any subsequent core events of a gesture after detecting the first one. This may involve shifting (e.g. translating) the hyperplane concerned in the context of an SVM classifier. For example, after detecting a tap (as the first core event) the classification algorithm may be adapted to make it more likely (than before) that a second tap would be detected within a defined period of time given that it may be considered (highly) likely within that period of time that a user input would be a tap as part of a double-tap gesture. In the case of gestures involving more than two core events, the classification algorithm may be adapted after each subsequent core event is detected, given the increasing likelihood of such a gesture being intended. Similarly, after such a gesture has been detected, the classification algorithm may be adapted to make it less likely (than before) that a core event would be detected within a defined period of time given that it may be considered (highly) likely within that period of time that the user would be pausing before making further user inputs.
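A minimal sketch of the gesture step (step 810), again with assumed names and an assumed time window, might be as follows in Python.

def detect_double_events(core_events, window_s=0.5):
    # core_events: list of (timestamp in seconds, label) pairs in time order.
    # Returns "double tap" / "double press" when two like core events fall
    # within the assumed window, else None.
    for (t1, e1), (t2, e2) in zip(core_events, core_events[1:]):
        if e1 == e2 and e1 in ("tap", "press") and (t2 - t1) <= window_s:
            return "double " + e1
    return None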
It was mentioned earlier that a classification algorithm, as well as recognising core events, could be used to recognise anomalies (i.e. unwanted user inputs). It will be understood that a classification algorithm could be trained and/or preconfigured to recognise both core events and anomalous events, or only one of them, and respond accordingly. In the case of core events, suitable indications could be provided to e.g. an applications processor for further processing (e.g. to control the device 100). In the case of anomalous events, the control may comprise changing a mode of operation, changing a sample rate, or disabling (or transitioning to a low-power mode) one or more functions/units as mentioned earlier.
To avoid complexity in the classification algorithm, it may be advantageous to provide one classification algorithm (implemented in the press/tap model stage 510) for detecting core events, and another classification algorithm (implemented in the anomaly detection stage 325) for detecting anomalous events. The classification algorithm for core events could operate in the button domain, and the classification algorithm for anomalies could operate in the sensor domain, in line with
The classification algorithm (e.g. an SVM classifier) for anomalies could take sensor signals as its inputs (or extracted feature vectors), or for example the outputs of one or more preceding blocks configured to look for e.g. a negative average and a negative cross-correlation as mentioned earlier. Supervised, semi-supervised and unsupervised learning may be applicable as for the classification algorithm for core events. The present disclosure will be understood accordingly.
Adaptive Thresholding
As mentioned earlier, in some arrangements one or more threshold values are dynamically or adaptively set, for example any of the noise threshold TH noise, fall threshold TH fall, tap threshold TH tap and rise threshold TH rise.
Considering the noise threshold TH noise in particular in the running example, it may be possible in this way to simultaneously reduce the chances of false detection or non-detection of events and the power consumption of the force sense system.
Considering firstly false detection or non-detection, in order to detect impulsive or brief pushes such as taps, the detection threshold (e.g. TH noise) needs to be as low as possible, otherwise there is the risk of not detecting such short events in a low sampling frequency mode (standby mode). Conversely, if this threshold is too low, the likelihood of false triggering events increases substantially. Considering secondly power consumption, it has been described above that in the running example a low sampling rate is adopted below the detection threshold (TH noise) and a high sampling rate is adopted above it. With power consumption in mind it is advantageous to reduce the time spent in the high frequency mode, which implies increasing the detection threshold.
A desirable detection threshold (TH noise) might be one that minimises the risk of false negatives whilst avoiding false positives. Such a threshold should not be lower than the noise present in the sensor signal, otherwise the noise itself will falsely trigger events. With this in mind, in one arrangement the noise power estimation stage 412 and adaptive thresholding stage 414 are configured to adaptively set this minimum threshold (TH noise) to a level derived from an estimate of the noise level in the sensor signal. This adaptive threshold also has a direct impact on the power consumption by reducing the likelihood of the system moving to the high sampling rate (active) mode when no event is present, as well as reducing the chances of false positives created by signal noise.
In step 902, a sensor input signal is received. This signal may be a sensor signal derived from an individual sensor 130, or for example a combination of sensor signals derived from individual sensors 130. In another arrangement the sensor signal may be (in line with
In step 904 the noise level of the system is estimated by the noise power estimation stage 412, and in step 906 the noise threshold TH noise is set by the adaptive thresholding stage 414 based on this estimation. As indicated by the dashed arrow returning from step 906 to step 904, steps 904 and 906 may be carried out on an ongoing basis, i.e. adaptively setting the noise threshold TH noise based on the received sensor input signal.
Two example methods for adaptively setting the noise threshold TH noise in steps 904 and 906 will now be considered.
A first method comprises a recursive averaging (or recursive filtering) algorithm, to be carried out by the noise power estimation stage 412.
The recursive averaging algorithm effectively constitutes an IIR (Infinite Impulse Response) filter with two values for α that correspond to a fast rise and a slow fall to track the peak envelope of the noise λ(n), as follows:
λ(n) = [λ(n−1)·p] + [{α·λ(n−1) + (1−α)·|x(n)|}·{1−p}]
where p is a presence probability (the probability that a user input or “event” is present), x(n) is the current input and α is the forgetting factor.
To track the envelope, the two values of α used are as follows:
α = αfall when λ(n) > |x(n)|
α = αrise when λ(n) < |x(n)|
Using the two values of α leads to an asymmetric recursive averaging algorithm. Of course, these values could be set the same as one another, to allow for a symmetric recursive averaging algorithm.
The absolute value of the input is used to account for fast changes from negative to positive seen in corner cases, such as when a long press lasts longer than the sensor timeout or when force is applied next to a button and then released.
The adaptive noise threshold THnoise (TH noise) is calculated by the adaptive thresholding stage 414 by adding a bias THbias to λ(n) and constraining the maximum value not to exceed THfall (TH fall):
THnoise(n) = min(λ(n) + THbias, THfall)
The minimum value is also constrained:
THnoise(n) = max(λ(n) + THbias, THnoiseMin)
In the first method, the presence probability is set to be either 0 or 1 as follows:
Thus, when p=1, the noise threshold TH noise can vary dynamically (on a sample-by-sample basis) based on the current value (and historical values) of the sensor signal x(n). Otherwise, when p=0, the noise threshold TH noise is maintained.
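By way of non-limiting illustration, the first method may be sketched in Python as follows, directly implementing the equations above; the numerical values of αrise, αfall, THbias, the minimum threshold and THfall are assumptions chosen only for the example.

class AdaptiveNoiseThreshold:
    # Asymmetric recursive averaging of the noise envelope lambda(n) and
    # derivation of TH noise, per the equations above.

    def __init__(self, alpha_rise=0.6, alpha_fall=0.999,
                 th_bias=0.05, th_noise_min=0.02, th_fall=0.30):
        self.alpha_rise = alpha_rise
        self.alpha_fall = alpha_fall
        self.th_bias = th_bias
        self.th_noise_min = th_noise_min
        self.th_fall = th_fall
        self.lam = 0.0                      # noise peak envelope lambda(n)

    def update(self, x, p=0.0):
        # x is the current input x(n); p is the presence probability.
        mag = abs(x)                        # absolute value handles the corner cases noted above
        alpha = self.alpha_fall if self.lam > mag else self.alpha_rise
        tracked = alpha * self.lam + (1.0 - alpha) * mag
        self.lam = self.lam * p + tracked * (1.0 - p)
        # TH noise = lambda(n) + THbias, constrained between THnoiseMin and THfall
        th_noise = min(self.lam + self.th_bias, self.th_fall)
        return max(th_noise, self.th_noise_min)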
A second method is based on the first method but employs a mapping between a signal property and values of the presence probability p from 0 to 1, so that there is the potential for a (dynamically changing) combination between varying the noise threshold TH noise dynamically (which happens fully when p=1) and maintaining the noise threshold TH noise (which happens fully when p=0).
This can be appreciated by reconsidering the equation:
λ(n)=[λ(n−1)·p]+[{α·λ(n−1)+(1−α)·|x(n)|}·{1−p}]
For example, when p=0.5 it could be considered that there is a 50:50 mix or contribution between varying the noise threshold TH noise dynamically and maintaining the noise threshold TH noise.
The second method may for example involve calculating (on an ongoing basis) a signal property such as SNR (signal-to-noise ratio), and employing a mapping between SNR and the presence probability p so that the value of p varies with the SNR. Other signal properties could be determined instead, with corresponding mappings to the presence probability p.
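One possible form of such a mapping, sketched with assumed parameter values, is a sigmoid from an SNR estimate (in dB) to the presence probability p.

import math

def presence_probability(snr_db, midpoint_db=6.0, steepness=1.0):
    # Soft presence probability p in [0, 1]; midpoint and steepness are assumptions.
    return 1.0 / (1.0 + math.exp(-steepness * (snr_db - midpoint_db)))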
The second method may be enhanced using a technique referred to as Improved Minima Controlled Recursive Averaging (IMCRA). In this respect, reference may be made to IEEE Transactions on Speech and Audio Processing, Vol. 11, No. 5, pages 466 to 475. The enhanced method may involve adding more than one iteration to search for the minimum in a buffer.
It can be seen that using the first method (recursive averaging) leads to more false accepts in the high noise section because the noise tracking is less accurate than for the second method.
By investigating the group delay associated with the rise and fall IIR filter coefficients, the response time can be inferred as shown in
Inductive Sensing, Resistive-Inductive-Capacitive Sensing
It will be apparent that the above techniques are applicable for force sensing in general. Example types of force sensor mentioned above include capacitive displacement sensors, inductive force sensors, strain gauges, piezoelectric force sensors, force sensing resistors, piezoresistive force sensors, thin film force sensors and quantum tunnelling composite-based force sensors.
An example involving inductive sensing will now be considered, by way of example. The above techniques are applicable for example to enable dynamic accuracy in inductive sense systems. Systems and methods may reduce power consumption in inductive button sense systems by dynamically changing the measurement settings based on conditions in the system. Although the following considers inductive sense systems (e.g. employing a resistive-inductive-capacitive sensor), it will be appreciated that the considerations apply equally to other types of sensor system. The present disclosure will be understood accordingly.
An example inductive sense system 1000 is shown in
With reference to
In such an inductive sense system 1000, a force or mechanical movement in the metal plate 1002 will result in a change in inductance. This can be used to implement a button as shown in
An example system 1100, which measures a phase shift proportional to the coil inductance and is similar to those described earlier, is shown in
With reference to
The DCO 1110 outputs a clock at a carrier frequency (Fc), referred to as the 0 degree output, and a second square-wave clock that is notionally 90 degrees shifted relative to the primary output, referred to as the 90 degree output.
The output of the VCO (DCO) 1110 is coupled to the input of the driver 1120. The drive circuit 1120 drives a fixed-amplitude, pseudo-sinusoidal current at the frequency and phase alignment of the 0 degree clock input.
The sensor (Sensor) 1130 in this example comprises an R-L-C circuit (corresponding to the sensor shown in
The Q-I receive path 1140 receives the voltage across the sensor 1130 and comprises a low noise input amplifier (Amplifier) 1141, an I path and a Q path. The Q path is coupled to the output of the amplifier 1141 and comprises an analog multiplier 1142 with inputs coupled to the VCO (DCO) output that is 90 degrees phase shifted to the current transmitted by the driver circuit 1120 and the output of the amplifier 1141, a low-pass filter 1143 coupled to the output of the analog multiplier 1142, and an ADC 1144 coupled to the output of the low pass filter 1143 to digitize the Q path voltage signal. The I path is coupled to the output of the amplifier 1141 and comprises an analog multiplier 1145 with inputs coupled to the VCO (DCO) output that is phase aligned to the current transmitted by the driver circuit 1120 and the output of the amplifier 1141, a low-pass filter 1146 coupled to the output of the analog multiplier 1145, and an ADC 1147 coupled to the output of the low pass filter 1146 to digitize the I path voltage signal.
The processing block (POST PROCESSING) 1150 generates amplitude and phase information from the Q-I paths, wherein the I path ADC output and the Q path ADC output are each coupled as inputs into the processing block 1150.
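By way of a brief illustration only (the conversion itself is the standard rectangular-to-polar relationship; the function name is an assumption), the processing block could derive amplitude and phase from one pair of digitized I and Q values as follows:

    import math

    def iq_to_amplitude_phase(i_value, q_value):
        """Convert one pair of I/Q ADC readings into amplitude and phase:
        amplitude = sqrt(I^2 + Q^2), phase = atan2(Q, I) in radians."""
        return math.hypot(i_value, q_value), math.atan2(q_value, i_value)

    # Example: a small phase shift of the sensor voltage relative to the drive.
    amp, phase = iq_to_amplitude_phase(0.98, 0.05)
    print(amp, math.degrees(phase))   # roughly 0.98 and 2.9 degrees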
The button press detection block (input determination block) 1160 observes the phase information to determine if the shift in phase recorded by the I-Q detection path 1140 is interpreted as a button press.
In such a system, to perform one scan of the R-L-C sensor 1130, the VCO (DCO) 1110 and drive circuit 1120 are activated. After the low-pass filters 1143, 1146 have settled, the ADCs 1144, 1147 are activated and one or multiple ADC samples are captured, nominally at 500 kHz (as an example). The duration over which the ADC samples are captured is referred to as the conversion time. Each ADC sample contains a certain amount of noise due to analog and digital factors including, but not limited to, circuit thermal noise, circuit flicker noise and digital quantization noise.
One or multiple ADC samples are filtered to attenuate noise, and processing converts the I and Q signals into phase and amplitude information.
Those skilled in the art will recognize that, while the filtering was described as occurring on the ADC inputs, it can occur at multiple places in the processing path such as on the ADC outputs.
The power in the system can vary based on a number of factors, such as scan rate, conversion time and drive current. If more scans are performed within a given period of time, the power consumption increases compared to performing fewer scans. Longer conversion times require the circuits to be active for a longer time, increasing power consumption. A higher drive current generated by the driver 1120 will provide a larger signal that increases the signal-to-noise ratio of the system, improving performance but increasing power consumption.
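As a rough illustration of these trade-offs only (a toy model; the duty-cycle approximation and the numeric values are assumptions, not a characterisation of any particular implementation):

    def approx_average_power(scan_rate_hz, conversion_time_s, active_power_w, idle_power_w):
        """Toy model: the sense path draws active_power_w while converting and
        idle_power_w otherwise; the duty cycle is scan_rate * conversion_time.
        A higher drive current would show up here as a larger active_power_w."""
        duty = min(1.0, scan_rate_hz * conversion_time_s)
        return duty * active_power_w + (1.0 - duty) * idle_power_w

    # With these assumed numbers, doubling the scan rate or the conversion time
    # roughly doubles the power spent above the idle floor.
    print(approx_average_power(100, 100e-6, 5e-3, 50e-6))   # about 0.1 mW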
One possibility is to employ a fixed scan rate, conversion time, and drive current for a given setup. Against this backdrop, an improved system will now be considered, in which the system dynamically adjusts the scan rate, conversion time, and/or drive current (or one or more other system parameters) based on system conditions to minimize power consumption (or to meet a given performance target).
An example improved system 1200 is shown in
Beyond such elements the system 1200 comprises a dynamic monitoring block 1270. The drive circuit (Driver) 1220 corresponds to the drive circuit (Driver) 1120, the processing block 1250 corresponds to the processing block 1150, and the button press detection block (input determination block) 1260 corresponds to the button press detection block (input determination block) 1160.
The dynamic monitoring block (parameter control block) 1270 observes the state of the system (e.g. based on outputs of the processing block 1250 and/or the button press detection block 1260) and varies system parameters in response to the state of the system. Monitoring can comprise, for example: the Q/I values (not shown in
The system parameters (e.g. drive current of driver 1220, or conversion time or scan rate of processing block 1250) may be varied so that lower power modes are engaged when the phase signal is far away from triggering a system event and higher power modes are engaged when the system is closer to triggering a system event.
It will be understood that other arrangements may differ in terms of implementation from the above-described system as depicted in
In one arrangement, the dynamic monitoring block 1270 observes the distance between the measured phase value and the button press threshold. When the measured phase value is far away from the button press threshold the system may operate in a low-performance mode and when the measured phase value is close to the button press threshold the system may operate in a high-performance mode. Thus, the system may operate in different modes based on comparison with a threshold, in line with techniques described earlier herein.
An example of such implementation of the
In the
As shown in
The dynamic monitoring block 1270 comprises (e.g. in storage) a set of high performance mode settings (system parameters), a set of low performance mode settings (system parameters), and a high performance mode threshold. Thus, there are different modes of operation with corresponding different system parameter settings.
A comparison is performed by unit 1271 between the high performance mode threshold and the distance between the measured phase value and the button press threshold. The dynamic monitoring block 1270 outputs either the high or low performance mode settings based on the result of the comparison, for example using selector 1272. The settings may include scan rate, conversion time, or drive current. This operation can be better understood with reference to the example waveforms shown in
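Purely as an illustrative sketch of this comparison and selection (the structure mirrors units 1271 and 1272, but the field names and numeric settings are assumptions):

    from dataclasses import dataclass

    @dataclass
    class PerformanceSettings:
        scan_rate_hz: float
        conversion_time_s: float
        drive_current_ma: float

    # Assumed example settings for the two modes of operation.
    HIGH_PERF = PerformanceSettings(scan_rate_hz=1000.0, conversion_time_s=200e-6, drive_current_ma=2.0)
    LOW_PERF = PerformanceSettings(scan_rate_hz=100.0, conversion_time_s=50e-6, drive_current_ma=0.5)

    def select_settings(measured_phase, button_press_threshold, high_perf_threshold):
        """Comparison (cf. unit 1271) and selection (cf. selector 1272): engage the
        high performance settings when the measured phase is within
        high_perf_threshold of the button press threshold, otherwise stay low power."""
        distance = abs(button_press_threshold - measured_phase)
        return HIGH_PERF if distance <= high_perf_threshold else LOW_PERF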
With reference to
It is thus possible to drive a (programmable) current into the sensor. Dynamically varying this current drive has implications for noise, SNR and EMI. This can be generalized as varying the electrical signal that is driven into the sensor. It is also possible to have a programmable conversion time based on digital filtering. Dynamically varying the conversion time directly varies the phase accuracy. It is also possible to have a programmable scan rate, and to dynamically vary this too.
While an example has been shown, several system variations can of course exist based on this general teaching.
For example, while a single high performance threshold has been shown, this is one example. Multiple thresholds with different performance settings may be used. As another example, the performance settings may be a mathematical function of the distance between the phase signal and the button press threshold.
As another example, while the closeness to triggering a button press was used to decide when to engage the high performance mode, this is one example. High performance mode may be selectively engaged or disengaged based on the amplitude signal instead of the phase signal. High performance mode may be engaged if the input signal has deviated from a stable operating condition. This may be implemented by taking the derivative of the signal and comparing the derivative against a window or range.
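A minimal sketch of that derivative-based alternative, assuming a simple first-difference and an arbitrary window size and limit:

    def deviates_from_stable(samples, window=8, derivative_limit=0.02):
        """Return True if any first-difference of the last `window` samples falls
        outside +/- derivative_limit, i.e. the input signal has deviated from a
        stable operating condition (high performance mode could then be engaged)."""
        recent = list(samples)[-window:]
        diffs = [b - a for a, b in zip(recent, recent[1:])]
        return any(abs(d) > derivative_limit for d in diffs)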
As another example, while the system has been described as an inductive sense system, this is just an example. A system with a variable capacitor would result in similar electrical characteristic shifts in the sensor. Possible system implementations are not required to assume that inductance shifts caused the change in sensor characteristics.
As another example, while the inductive sense system has been described as a fixed frequency driver and an I-Q receive path that measures phase and amplitude, this is an example. Changes in the R-L-C circuit may be measured by operating the R-L-C circuit as an oscillator and measuring the frequency of oscillation.
In overview, a system generally in line with the above may be considered to comprise: an inductance to digital system that digitally converts an inductance shift into a digital value; a processing block that interprets the digital value into a system event; and a dynamic monitoring block that monitors system state relative to a system event; wherein system configuration parameters are adjusted based on the digital value's distance from triggering a system event. The processing block that interprets the digital value into a system event may be a button detection block. The system event may be a button press. The system configuration parameter that is adjusted may be the scan rate, or the conversion time or the drive current, or any combination of these.
It will be apparent that such systems with inductive sensors are one example which demonstrates that thresholds as described earlier can be used to control how the system operates, including system parameters (which affect performance), sampling rates, and which units or stages are active or inactive (or in a low power state). The teaching of
Force Sense Direct Access Mode
In a further aspect of the invention, and with reference to the attached
The force sensor system module may be arranged in communication with a central processing unit (CPU) or applications processor (AP) of the device 100. The force sensor system module may further be provided in communication with a sensor hub, which may further be in communication with a CPU or AP. The force sensor system module may provide further functionality, for example a haptics module for the control and operation of haptic outputs in the device 100.
The combination of the central force sensor system module, the applications processor (AP) and the sensor hub may be referred to as a controller 110C, and may be considered a practical implementation of the controller 110, 110A or 110B.
A Force Sense Algorithm is used to determine the occurrence of a user input based on the output of the force sensors. The Force Sense Algorithm may be provided in the sensor hub, and/or may be provided in the force sensor system module, e.g. in the HALO DSP core.
The following interfaces may provide the communications to and from the force sensor system module:
Primary I2C Slave
Used for configuring the force sensor system module. Configured to have access to the register space, and used by the Driver Software to configure various aspects of the Haptics functions and the Force Sense Input Path. For example:
Secondary I2C Slave
Used for obtaining direct information from the Force Sense Input Path, as per the configuration of the force sensor system module.
IRQ1
Used by the force sensor system module to indicate any interrupt requiring attention of the force sensor system module Driver SW.
IRQ2
Used to indicate that a new set of Force Sense Samples is available.
Operation
When operating the force sensor system module in Direct Access Mode the following steps may be performed:
Modes
The force sensor system module may be configured to operate in 3 modes, as part of the Direct Access mode.
Recalibration
The force sensor system module may perform recalibration. Recalibration may be required under the rare condition that the input signal after DC offset removal exceeds the maximum input range of the AFE; in such a condition the offset calibration needs to be performed again. Such a condition may arise when the accumulated contribution from aging, temperature drift and strong static forces exceeds a threshold limit, e.g. 50 N.
Recalibration can either be performed autonomously by the force sensor system module or can be initiated by the host through the primary I2C port.
Autonomous recalibration in combination with the Baseline Removal Filter is preferred, but may be provided as an option that can be turned on or off.
The Sensor hub may be informed via the 2nd I2C interface when recalibration has been performed.
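For illustration only (the function names, the notification mechanism and the conversion of the 50 N example limit into an AFE input range are assumptions), the autonomous recalibration decision might be sketched as:

    def maybe_recalibrate(residual_after_offset_removal, afe_max_input,
                          run_offset_calibration, notify_sensor_hub):
        """If the signal remaining after DC offset removal exceeds the maximum AFE
        input range (e.g. because accumulated aging, temperature drift and static
        force exceed the threshold limit), re-run the offset calibration and inform
        the sensor hub, e.g. via a message placed in the FIFO on the 2nd I2C port."""
        if abs(residual_after_offset_removal) > afe_max_input:
            run_offset_calibration()
            notify_sensor_hub()
            return True
        return False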
Configurability of the FIFO Messages
The format of the messages in the FIFO will be defined by the firmware running in the Halo Core. The messages in the FIFO may thus be defined in any way that suits the needs of the sensor hub best.
The above-described architecture allows the force sensor system module to power down after Force Sense Samples have been acquired. Only a small set of data bits (i.e. the FIFO) may be required for always-on operation, which can be retrieved through the 2nd I2C interface while keeping this FIFO safely separated from any other memory space in the device.
As apparent from
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. The word “comprising” does not exclude the presence of elements or steps other than those listed in the claim, “a” or “an” does not exclude a plurality, and a single feature or other unit may fulfil the functions of several units recited in the claims. Any reference numerals or labels in the claims shall not be construed so as to limit their scope.
The invention is not limited to the embodiments described herein, and may be modified or adapted without departing from the scope of the present invention.
The present disclosure extends to the following statements:
Event and Anomaly Detection
A1. A device, comprising:
Event Detection
A2. The device according to statement A1, wherein the classification operation comprises employing a classification model to determine whether a signal sequence extracted from the candidate input sensor signal belongs to one or more categories defined by the classification model.
A3. The device according to statement A2, wherein the controller is operable to:
A4. The device according to statement A3, wherein the feature-based definition comprises each feature extracted from the signal sequence, and optionally is a feature vector.
A5. The device according to any of statements A2 to A4, wherein the classification model is a support-vector machine model defined by a support-vector machine classification algorithm.
A6. The device according to statement A5, wherein the controller is configured to employ a supervised learning algorithm to define the classification model based on training data, the training data comprising a set of training signal sequences.
A7. The device according to statement A6, wherein the training data comprises the set of training signal sequences and information of the categories to which they belong.
A8. The device according to statement A6 or A7, wherein the supervised learning algorithm is a support-vector machine learning algorithm.
A9. The device according to any of statements A5 to A8, wherein the controller is operable to update the classification model based on a said signal sequence extracted from the candidate input sensor signal, optionally based on the determination made by the classification operation concerned, and optionally based on context information defining a context in which that signal sequence was obtained by the at least one force sensor.
A10. The device according to any of statements A5 to A9, wherein the classification model is a linear classification model or a non-linear classification model.
A11. The device according to any of statements A2 to A10, wherein the controller is configured to employ a support-vector clustering algorithm to define the classification model based on training data, the training data comprising a set of training signal sequences.
A12. The device according to any of statements A2 to A11, wherein the controller is operable to determine whether a defined core event or a defined anomalous event has occurred based on a said category to which the signal sequence is determined to belong.
A13. The device according to statement A12, wherein the controller is operable, if it is determined that a defined anomalous event has occurred, to:
operate in an anomaly mode; and/or disable one or more functions; and/or sample the force sensors at a given anomaly-mode sample rate; and/or operate using anomaly mode system parameters.
A14. The device according to any of statements A2 to A13, wherein the controller is operable to carry out the classification operation for a set of said signal sequences extracted from the candidate input sensor signal in a given order.
A15. The device according to statement A14, wherein the controller is operable to determine whether a defined gesture event has occurred based on a combination of categories to which the plurality of said signal sequences are determined to belong,
A16. The device according to statement A14 or A15, wherein the controller is operable, dependent on a category to which a first one of said extracted signal sequences in said order is determined to belong, to adapt the classification model for a second one of said extracted signal sequences in said order,
Anomaly Detection
A17. The device according to statement A1, wherein the device comprises at least two force sensors, and wherein the candidate input sensor signal is generated by combining a plurality of input sensor signals derived from the force sensors.
A18. The device according to statement A17, wherein the candidate input sensor signal is generated by averaging the plurality of input sensor signals, optionally by averaging amplitudes of the plurality of input sensor signals on a sample-by-sample basis.
A19. The device according to statement A1, A17 or A18, wherein the classification operation comprises determining whether said characteristic is present on a sample-by-sample basis.
A20. The device according to any of statements A1 and A17 to A19, wherein the characteristic comprises the amplitude of the candidate input sensor signal being negative.
A21. The device according to any of statements A1 and A17 to A20, wherein the characteristic comprises the magnitude of the candidate input sensor signal being greater than a threshold value.
A22. The device according to any of statements A1 and A17 to A21, wherein the classification operation comprises employing a mapping function to map an amplitude of the candidate input sensor signal to a value within a predefined range of values.
A23. The device according to statement A22, wherein the mapping function comprises a sigmoid function.
A24. The device according to statement A1, wherein the device comprises at least two force sensors, and wherein the characteristic comprises a cross-correlation of a pair of input sensor signals derived from the force sensors being negative.
A25. The device according to any of statements A1 and A17 to A24, wherein the characteristic is that one or more signal features or signal statistics of the candidate input sensor signal deviate from a defined model.
A26. The device according to any of statements A1 and A17 to A25, wherein the classification operation is an anomaly detection operation, and wherein the controller is operable to determine that an anomaly is occurring if it is determined that the candidate input sensor signal has the given characteristic.
A27. The device according to any of statements A1 and A17 to A26, wherein the controller is operable to enter an anomaly mode of operation if it is determined that the candidate input sensor signal starts to have the given characteristic and to exit the anomaly mode of operation if it is determined that the candidate input sensor signal ceases to have the given characteristic.
A28. The device according to any of statements A1 and A17 to A27, wherein the controller is operable, if it is determined that the candidate input sensor signal has the given characteristic, to:
A29. The device according to any of the preceding statements, wherein the force sensors are located on the device according to anthropometric measurements of a human hand.
A30. The device according to any of the preceding statements, wherein each of the force sensors comprises one or more of:
A31. The device according to any of the preceding statements, wherein the controller is configured to control operation of the device based on an output of the classification operation.
A32. The device according to any of the preceding statements, comprising one or more input/output components, wherein the controller is configured to control operation of at least one of the input/output components based on an output of the classification operation.
A33. The device according to any of the preceding statements, being a portable electrical or electronic device such as a portable telephone or computer.
A34. A controller for use in a device comprising at least one force sensor, the controller operable, in a classification operation, to determine whether a candidate input sensor signal derived from the at least one force sensor has a given characteristic.
A35. A method of controlling a device comprising at least one force sensor, the method comprising, in a classification operation, determining whether a candidate input sensor signal derived from the at least one force sensor has a given characteristic.
A36. A computer program which, when executed on a device comprising at least one force sensor, causes the device, in a classification operation, to determine whether a candidate input sensor signal derived from the at least one force sensor has a given characteristic.
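Purely as an illustrative sketch of the classification approach of statements A2 to A8 (the example features, training data, labels and the use of scikit-learn are assumptions, not requirements of the statements):

    # Illustrative only: feature extraction plus an SVM classifier (assumed choices).
    import numpy as np
    from sklearn.svm import SVC

    def extract_features(signal_sequence):
        """Build a feature-based definition (a feature vector) of a signal sequence,
        e.g. peak amplitude, mean level and duration (assumed example features)."""
        seq = np.asarray(signal_sequence, dtype=float)
        return np.array([seq.max(), seq.mean(), len(seq)])

    # Training data: a set of training signal sequences and the categories they
    # belong to (statement A7); label 1 = defined core event, 0 = other.
    training_sequences = [[0, 2, 8, 9, 3, 0], [0, 1, 1, 1, 0, 0], [0, 3, 9, 10, 4, 1]]
    training_labels = [1, 0, 1]

    X_train = np.array([extract_features(s) for s in training_sequences])
    model = SVC(kernel="linear")          # linear classification model (statement A10)
    model.fit(X_train, training_labels)   # supervised learning (statement A6)

    # Classification operation on a candidate signal sequence (statement A2).
    candidate = [0, 2, 7, 9, 2, 0]
    category = model.predict([extract_features(candidate)])[0]
    print("core event" if category == 1 else "not a core event")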
Adaptive Noise
B1. A device, comprising:
B2. A method of adaptively deriving a noise threshold for use with a given signal, the method comprising calculating the threshold based on a running estimate of the noise in the signal.
B3. The method according to statement B2, comprising recursively calculating the noise threshold based on current and previous values of the signal.
B4. The method according to statement B2 or B3, comprising calculating a current value of the noise threshold based on a combination of a previous value of the noise threshold and a current value of the signal.
B5. The method according to statement B4, wherein said combination is a sum such as a weighted sum.
B6. The method according to statement B5, wherein the weighted sum is defined by a first weighting when the values of the signal are falling and a second weighting different from the first weighting when the values of the signal are rising.
B7. The method according to any of statements B2 to B6, comprising constraining the threshold value within upper and lower limit values.
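Purely as an illustrative sketch of statements B2 to B7 (the weightings and limit values are assumed, not taken from the statements):

    def update_noise_threshold(previous_threshold, current_value,
                               rising_weight=0.05, falling_weight=0.5,
                               lower_limit=0.0, upper_limit=1.0):
        """Recursively update a noise threshold as a weighted sum of the previous
        threshold and the current signal value (statements B3 to B5), with a
        different weighting when the signal is rising than when it is falling
        (statement B6), constrained within upper and lower limits (statement B7)."""
        weight = rising_weight if current_value > previous_threshold else falling_weight
        new_threshold = (1.0 - weight) * previous_threshold + weight * current_value
        return min(upper_limit, max(lower_limit, new_threshold))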
System Parameter Control
C1. A sensor system comprising:
C2. The sensor system according to statement C1, wherein:
C3. The sensor system according to statement C2, wherein the user-input definition comprises one or more of:
C4. The sensor system according to statement C2 or C3, wherein the parameter-control definition comprises one or more of:
C5. The sensor system according to any of statements C2 to C4, wherein the parameter-control definition defines when the system parameter should be controlled to cause the sensor system to operate in:
C6. The sensor system according to any of the preceding statements, wherein:
C7. The sensor system according to statement C6, wherein the signal value comprises one or more of:
C8. The sensor system according to statement C6 or C7, wherein the parameter control block is configured to determine how close the signal value is to a value at which the input determination block would determine that a defined user input has occurred, and to control the system parameter based on the determined closeness.
C9. The sensor system according to any of statements C6 to C8, wherein:
C10. The sensor system according to any of the preceding statements, wherein:
C11. The sensor system according to any of the preceding statements, wherein the system parameter controls one or more of:
C12. The sensor system according to any of the preceding statements, wherein the parameter control block is configured to control the system parameter based on the sensor signal on-the-fly or dynamically or on an ongoing basis or periodically or from time to time.
C13. The sensor system according to any of the preceding statements, wherein the sensor comprises one or more of:
C14. The sensor system according to any of the preceding statements, comprising a plurality of said sensors, wherein:
C15. A host device comprising the sensor system of any of the preceding statements, optionally being an electrical or electronic device or a mobile device.
C16. A method of controlling a sensor system, the sensor system comprising a sensor operable to generate a sensor signal indicative of a user input, the method comprising:
C17. A computer program which, when executed on a sensor system comprising a sensor operable to generate a sensor signal indicative of a user input, causes the sensor system to carry out a method comprising:
C18. A sensor system comprising:
C19. A sensor system comprising:
This application is a continuation of U.S. patent application Ser. No. 17/471,529, filed Sep. 10, 2021, which is a continuation of U.S. patent application Ser. No. 17/076,489, filed Oct. 21, 2020, which is a continuation of U.S. patent application Ser. No. 16/422,543, filed May 24, 2019, issued as U.S. patent Ser. No. 10/860,202 on Dec. 8, 2020, which claims priority to U.S. Provisional Patent Application Ser. No. 62/842,821, filed May 3, 2019, and United Kingdom Patent Application No. 1817495.3, filed Oct. 26, 2018, each of which is incorporated by reference herein in its entirety.