The present disclosure is directed to motion compensation for an optical heart rate sensor. For example, motion compensation may be applied to an optical signal when the minimum amount of motion of the optical heart rate sensor exceeds a threshold over a testing duration. While described below in the context of a wearable computing device, it is to be understood that applying motion compensation to an optical signal for an optical heart rate sensor, as described herein, may be used in numerous different applications, and with various different types of sensory-and-logic systems.
Incorporating an optical heart rate sensor into a wearable computing device allows a user to monitor health factors, such as heart rate, calories burned, response to exertion, exercise, heart rate variability, etc. However, the signal from the optical sensor may degrade in quality with increased motion, as user motion may change the optical properties of the skin, tissues, and blood vessels beneath the optical sensor. As such, motion compensation may be applied to an optical signal prior to discerning a heart rate from the optical signal. However, motion compensation must be applied selectively. If applied too conservatively, motion frequencies in the optical signal may be mistaken for heartbeats. If applied too aggressively, the heart rate frequencies in the optical signal may be filtered out, leaving only random noise.
The application of motion compensation is particularly challenging during so-called “hand-hold exercises”, such as when a user is riding a stationary bicycle. If the wearable computing device has a wrist-worn form factor, the computing device may experience relatively low amounts of movement while the user's hand is anchored to a handlebar of the stationary bicycle. However, the continuous movement of the user's lower body, in particular the user's footfalls, affects the signal output by the optical sensor, independent of the magnitude of motion indicated by a motion sensor coupled to the wearable computing device. Methods that determine whether to apply motion compensation based on a magnitude of movement may fail to apply motion compensation during hand-hold workouts, and thus indicate inaccurate heart rates. If the movement threshold is instead set relatively low, sporadic movement while the user is at rest may trigger motion compensation when it is not warranted.
According to this disclosure, a minimum amount of motion over a testing duration may be used to determine whether to apply motion compensation. Thus, in the case of a hand-hold exercise, the constant, low-intensity movement of the user's hand will be enough to trigger motion compensation. However, while the user is at rest, sporadic movements will not raise the minimum amount of motion, and thus motion compensation will not be applied.
Wearable electronic device 10 includes various functional components integrated into regions 14. In particular, the electronic device includes a compute system 18, display 20, loudspeaker 22, communication suite 24, and various sensors. These components draw power from one or more energy-storage cells 26. A battery—e.g., a lithium ion battery—is one type of energy-storage cell suitable for this purpose. Examples of alternative energy-storage cells include super- and ultra-capacitors. In devices worn on the user's wrist, the energy-storage cells may be curved to fit the wrist, as shown in the drawings.
In general, energy-storage cells 26 may be replaceable and/or rechargeable. In some examples, recharge power may be provided through a universal serial bus (USB) port 30, which includes a magnetic latch to releasably secure a complementary USB connector. In other examples, the energy storage cells may be recharged by wireless inductive or ambient-light charging. In still other examples, the wearable electronic device may include electro-mechanical componentry to recharge the energy storage cells from the user's adventitious or purposeful body motion. For example, batteries or capacitors may be charged via an electromechanical generator integrated into device 10. The generator may be turned by a mechanical armature that turns while the user is moving and wearing device 10.
In wearable electronic device 10, compute system 18 is situated below display 20 and operatively coupled to the display, along with loudspeaker 22, communication suite 24, and the various sensors. The compute system includes a data-storage machine 27 to hold data and instructions, and a logic machine 28 to execute the instructions. Aspects of the compute system are described in further detail with reference to
Display 20 may be any suitable type of display. In some configurations, a thin, low-power light emitting diode (LED) array or a liquid-crystal display (LCD) array may be used. An LCD array may be backlit in some implementations. In other implementations, a reflective LCD array (e.g., a liquid crystal on silicon, LCOS array) may be frontlit via ambient light. A curved display may also be used. Further, AMOLED displays or quantum dot displays may be used.
Communication suite 24 may include any appropriate wired or wireless communications componentry. In
In wearable electronic device 10, touch-screen sensor 32 is coupled to display 20 and configured to receive touch input from the user. The touch sensor may be resistive, capacitive, or optically based. Pushbutton sensors may be used to detect the state of push buttons 34, which may include rockers. Input from the pushbutton sensors may be used to enact a home-key or on-off feature, control audio volume, turn the microphone on or off, etc.
Wearable electronic device 10 may also include motion sensing componentry, such as an accelerometer 48, gyroscope 50, and magnetometer 51. The accelerometer and gyroscope may furnish inertial and/or rotation rate data along three orthogonal axes as well as rotational data about the three axes, for a combined six degrees of freedom. This sensory data can be used to provide a pedometer/calorie-counting function, for example. Data from the accelerometer and gyroscope may be combined with geomagnetic data from the magnetometer to further define the inertial and rotational data in terms of geographic orientation. The wearable electronic device may also include a global positioning system (GPS) receiver 52 for determining the wearer's geographic location and/or velocity. In some configurations, the antenna of the GPS receiver may be relatively flexible and extend into flexion regions 12.
Compute system 18, via the sensory functions described herein, is configured to acquire various forms of information about the wearer of wearable electronic device 10. Such information must be acquired and used with utmost respect for the wearer's privacy. Accordingly, the sensory functions may be enacted subject to opt-in participation of the wearer. In implementations where personal data is collected on the device and transmitted to a remote system for processing, that data may be anonymized. In other examples, personal data may be confined to the wearable electronic device, and only non-personal, summary data transmitted to the remote system.
Compute system 110 may comprise optical heart rate sensor control subsystem 111. Optical heart rate sensor control subsystem 111 may provide control signals to optical source 104 and optical sensor 105. Optical heart rate sensor control subsystem 111 may receive raw signals from optical sensor 105, and may further process the raw signals to determine heart rate, caloric expenditures, etc. Processed signals may be stored and output via compute system 110. Control signals sent to optical source 104 and optical sensor 105 may be based on signals received from optical sensor 105, one or more motion sensors, ambient light sensors, information stored in compute system 110, input signals, etc.
The signal from the optical sensor may degrade in quality with increased motion, as user motion may change the optical properties of the skin, tissues, and blood vessels beneath the optical sensor. Further, user motion may impact the movement of blood and other fluids through the user's tissue. As such, the signal output by the optical sensor may need to be filtered or otherwise adjusted based on user movement prior to determining a heart rate of the user. Sensory-and-logic system 100 may include a motion sensor suite 120 communicatively coupled to compute system 110. Signals from motion sensor suite 120 may be provided to optical heart rate control subsystem 111. Motion sensor suite 120 may include gyroscope 125 and accelerometer 130. Gyroscope 125 and accelerometer 130 may be three-axis motion sensors. Accordingly, gyroscope 125 and accelerometer 130 may record and transmit signal channels for each axis.
As described with regards to
However, plot 321 has a relatively low signal-to-noise ratio, due to the movement of user 301. Numerous peaks and zero-crossing events shown in plot 321 do not result from the pulse of user 301, and may not be representative of a heartbeat. For example, the high frequency peaks indicated at 329 may result from leaked light or other adverse conditions. Prior to determining a heart rate, the raw optical signal may first be processed and smoothed, in order to compensate for the detected motion. The raw optical signal may be filtered based on the signal received from the motion sensor in order to remove the motion component from the optical signal, thus improving the accuracy of subsequently derived heart rates.
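The disclosure does not name a specific filtering technique. As a minimal, hedged sketch, the motion-correlated component of the raw optical signal could be estimated from an accelerometer channel and subtracted using a normalized least-mean-squares (NLMS) adaptive filter; the function name, tap count, and step size below are illustrative assumptions rather than the disclosed implementation.

import numpy as np

def nlms_motion_cancel(optical, motion, taps=16, mu=0.5, eps=1e-6):
    # Remove the motion-correlated component of `optical`, using `motion`
    # (one accelerometer channel) as the noise reference.
    weights = np.zeros(taps)
    cleaned = np.zeros_like(optical, dtype=float)
    for n in range(len(optical)):
        ref = motion[max(0, n - taps + 1):n + 1][::-1]      # most recent motion samples
        ref = np.pad(ref, (0, taps - len(ref)))             # zero-pad during start-up
        artifact = weights @ ref                             # motion artifact estimate
        residual = optical[n] - artifact                     # motion-compensated sample
        weights += mu * residual * ref / (ref @ ref + eps)   # NLMS weight update
        cleaned[n] = residual
    return cleaned

# Example: a 1.2 Hz pulse component corrupted by 2.5 Hz motion coupled into the optical signal.
fs = 100
t = np.arange(0, 8, 1 / fs)
motion = np.sin(2 * np.pi * 2.5 * t)
optical = np.sin(2 * np.pi * 1.2 * t) + 0.8 * motion
cleaned = nlms_motion_cancel(optical, motion)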
Although the wrist of user 301 has relatively low movement intensity in this example, the continuous motion of the user's lower body, including footfalls, affects the signal output by the optical sensor, independent of the magnitude of motion indicated by the motion sensor. This movement profile may be observed during “hand-hold workouts”, such as stationary bicycling, stair stepping, or other physical activities where the user's lower body is engaged in continuous, high-intensity motion while the user's hands are braced against a stationary surface.
As the wrist of user 301 is predominantly engaged in a state of very low movement intensity in this example, it may not be advantageous to apply motion compensation to the optical signal prior to determining the user's heart rate.
A decision to apply or not apply motion cancellation can be made prior to heart rate determination. According to simplistic strategies, a magnitude of a signal from the motion sensor is compared to a threshold. The magnitude may be based on the maximum magnitude of the motion signal over a period of time, or may be based on a mean magnitude of the motion signal over a period of time. However, it is difficult to set the threshold for these approaches in a fashion which results in motion compensation being applied (or not applied) properly. In
A motion signal representative of a user at rest may be distinguished from a motion signal representative of a user performing a hand-hold workout by determining a minimum amount of movement over a period of time. For example, the minimum magnitude of plot 332 is greater than the minimum magnitude of plot 352, as the user in
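As a hedged numeric illustration (the per-second magnitudes and the threshold value below are hypothetical), a maximum- or mean-based test can be triggered by a single sporadic spike while the user is at rest, whereas the minimum over the testing duration separates rest from a hand-hold workout:

rest      = [0.02, 0.01, 0.03, 0.90, 0.02, 0.01, 0.02, 0.03]   # resting user, one sporadic spike
hand_hold = [0.20, 0.25, 0.22, 0.18, 0.24, 0.21, 0.19, 0.23]   # constant low-intensity hand motion

FIRST_THRESHOLD = 0.10   # minimum-motion threshold; value is an assumption

for label, window in (("rest", rest), ("hand-hold", hand_hold)):
    # A max- or mean-based decision is skewed by the single resting spike...
    print(label, "max:", max(window), "mean:", round(sum(window) / len(window), 2))
    # ...while the minimum over the testing duration is not.
    print(label, "min:", min(window), "-> compensate:", min(window) > FIRST_THRESHOLD)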
Continuing at 415, method 400 includes recognizing a minimum amount of motion of the optical heart rate sensor during a testing duration. This may include recognizing a minimum value of the received motion signal during the testing duration. For a motion sensor with multiple signal channels, this may include recognizing a minimum value for each signal channel, and then selecting an overall motion minimum from among those per-channel minimum values. The testing duration may be any suitable duration that comprises two or more heartbeats, for example, eight seconds. The testing duration may be a rolling or moving window, for example, comprising the most recent eight seconds. The testing duration may comprise a plurality of time periods of equal length. For example, an eight second testing duration may comprise eight one-second time periods.
Continuing at 420, method 400 may include recognizing an average amount of motion of the optical heart rate sensor during the testing duration. At 425, method 400 may include recognizing a maximum magnitude of motion of the optical heart rate sensor during the testing duration. The average amount and maximum magnitude of motion may be derived from the received motion signal.
At 430, method 400 includes determining whether the minimum amount of motion during the testing duration, as recognized at 415, is greater than a first predetermined threshold. If the minimum amount of motion is not greater than the first predetermined threshold, method 400 may proceed to 435. At 435, method 400 includes not compensating for motion of the optical heart rate sensor. This may include not compensating for motion of the optical heart rate sensor even if the average amount of motion during the testing duration is greater than a second threshold that is itself greater than the first threshold. This may further include not compensating for motion of the optical heart rate sensor even if the maximum magnitude of the motion signal during the testing duration is greater than a third threshold that is also greater than the first threshold. Continuing at 440, method 400 includes indicating a heart rate of the user based on the uncompensated optical signal.
Returning to 430, if the minimum amount of motion during the testing duration is greater than the first predetermined threshold, method 400 may proceed to 445. At 445, method 400 includes compensating for the motion of the optical heart rate sensor. This may include compensating for motion of the optical heart rate sensor even if the average amount of motion during the testing duration is less than the second threshold, and may further include compensating for the motion of the optical heart rate sensor even if the maximum magnitude of the motion signal during the testing duration is less than the third threshold.
Continuing at 450, method 400 includes determining whether the maximum magnitude of the motion signal during the testing duration is greater than the third threshold. If the maximum magnitude of the motion signal is not greater than the third threshold, method 400 may proceed to 455. At 455, method 400 includes applying a first motion filter to the optical signal based on the motion signal. Continuing at 460, method 400 includes indicating a heart rate of the user based on the filtered optical signal.
Returning to 450, if the maximum magnitude of the motion signal is greater than the third threshold, method 400 may proceed to 465. At 465, method 400 includes applying a second motion filter to the optical signal based on the motion signal, the second motion filter altering the optical signal more than the first motion filter. Continuing at 460, method 400 includes indicating a heart rate of the user based on the filtered optical signal.
Additionally or alternatively, if the minimum amount of motion during the testing duration exceeds the first predetermined threshold, the second motion filter may be applied to the optical signal if the average amount of motion during the testing duration exceeds the second threshold, while the first motion filter may be applied to the optical signal if the average amount of motion during the testing duration is less than the second threshold.
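Taken together, steps 430 through 465 amount to a three-threshold decision. The sketch below is one possible reading of that flow, in which either an elevated maximum magnitude or an elevated average selects the stronger filter; the function and its return labels are illustrative assumptions, not part of the disclosure.

def choose_processing(min_motion, avg_motion, max_motion,
                      first_thresh, second_thresh, third_thresh):
    # Decision flow of method 400; first_thresh is less than both
    # second_thresh and third_thresh.
    if min_motion <= first_thresh:
        # 430 -> 435/440: no compensation, even if avg_motion or max_motion
        # exceed their thresholds.
        return "uncompensated"
    # 430 -> 445: compensate, even if avg_motion or max_motion fall below
    # their thresholds.
    if max_motion > third_thresh or avg_motion > second_thresh:
        return "second_filter"   # stronger filter (465)
    return "first_filter"        # lighter filter (455)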
As described with regard to
For a testing duration (T) comprising a plurality of time periods (t), the difference for each signal channel may be determined for each time period based on the following equations:
X_diff(t)=Max(X(t))−Min(X(t))
Y_diff(t)=Max(Y(t))−Min(Y(t))
Z_diff(t)=Max(Z(t))−Min(Z(t))
The minimum difference for each signal channel during the testing duration may then be determined based on the following equations:
X_M(T,t)=Min(X_diff(t), X_diff(t−1), . . . X_diff(t−T+1))
Y_M(T,t)=Min(Y_diff(t), Y_diff(t−1), . . . Y_diff(t−T+1))
Z_M(T,t)=Min(Z_diff(t), Z_diff(t−1), . . . Z_diff(t−T+1))
A minimum motion value may then be determined based on the minimum differences for each signal channel.
V(t)=Min(X_M(T,t), Y_M(T,t), Z_M(T,t))
V(t) may then be compared to a threshold to determine whether or not to apply motion cancellation to the signal prior to determining a heart rate.
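A minimal sketch of the computation above, assuming each signal channel arrives as a one-dimensional sample array and the testing duration T is split into equal time periods t (the function name and sample rate below are illustrative):

import numpy as np

def min_motion_value(x, y, z, period_samples, num_periods):
    # Compute V(t) over the most recent testing duration for three-axis data.
    def per_period_diff(channel):
        # X_diff(t) = Max(X(t)) - Min(X(t)) for each time period t.
        needed = period_samples * num_periods
        recent = np.asarray(channel[-needed:]).reshape(num_periods, period_samples)
        return recent.max(axis=1) - recent.min(axis=1)

    # X_M(T,t), Y_M(T,t), Z_M(T,t): minimum per-period difference for each channel.
    channel_mins = [per_period_diff(c).min() for c in (x, y, z)]
    # V(t): minimum across the three signal channels.
    return min(channel_mins)

# For example, with 50 Hz accelerometer data and an eight second testing
# duration of one-second periods:
# v = min_motion_value(ax, ay, az, period_samples=50, num_periods=8)
# compensate = v > first_threshold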
As evident from the foregoing description, the methods and processes described herein may be tied to a sensory-and-logic system of one or more machines. Such methods and processes may be implemented as a computer-application program or service, an application-programming interface (API), a library, firmware, and/or other computer-program product.
Logic machine 616 includes one or more physical devices configured to execute instructions. The logic machine may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.
Logic machine 616 may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic machine may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. Processors of the logic machine may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of a logic machine optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of a logic machine may be virtualized and executed by remotely accessible, networked computing devices in a cloud-computing configuration.
Data-storage machine 618 includes one or more physical devices configured to hold instructions executable by logic machine 616 to implement the methods and processes described herein. When such methods and processes are implemented, the state of the data-storage machine may be transformed—e.g., to hold different data. The data-storage machine may include removable and/or built-in devices; it may include optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. The data-storage machine may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
Data-storage machine 618 includes one or more physical devices. However, aspects of the instructions described herein alternatively may be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for a finite duration.
Aspects of logic machine 616 and data-storage machine 618 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
Display subsystem 620 may be used to present a visual representation of data held by data-storage machine 618. This visual representation may take the form of a graphical user interface (GUI). As the herein described methods and processes change the data held by the storage machine, and thus transform the state of the storage machine, the state of display subsystem 620 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 620 may include one or more display subsystem devices utilizing virtually any type of technology. Such display subsystem devices may be combined with logic machine 616 and/or data-storage machine 618 in a shared enclosure, or such display subsystem devices may be peripheral display subsystem devices. Display 20 of
Communication subsystem 622 may be configured to communicatively couple compute system 614 to one or more other computing devices. The communication subsystem may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, a local- or wide-area network, and/or the Internet. Communication suite 24 of
Input subsystem 624 may comprise or interface with one or more user-input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity. Touch-screen sensor 32 and push buttons 34 of
Sensor suite 612 may include one or more different sensors—e.g., a touch-screen sensor, push-button sensor, microphone, visible-light sensor, ultraviolet sensor, ambient-temperature sensor, contact sensors, and/or GPS receiver—as described above with reference to
Optical heart rate control subsystem 634 may receive raw signals from optical sensor 632, and may further process the raw signals to determine heart rate, caloric expenditures, etc. Processed signals may be stored and output via compute system 614. Control signals sent to optical source 630 and optical sensor 632 may be based on signals received from optical sensor 632, signals derived from sensor suite 612, information stored in data-storage machine 618, input received from communication subsystem 622, input received from input subsystem 624, etc.
The configurations and approaches described herein are exemplary in nature, and these specific implementations or examples are not to be taken in a limiting sense, because numerous variations are feasible. The specific routines or methods described herein may represent one or more processing strategies. As such, various acts shown or described may be performed in the sequence shown or described, in other sequences, in parallel, or omitted.
The subject matter of this disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.