The present application relates generally to the field of personal health and, more specifically, to new and useful systems and methods for monitoring the health of a user in the fields of healthcare and personal health.
With many aspects of stress, diet, sleep, and exercise correlated with various health and wellness effects, the number of individuals using personal sensors to monitor their health continues to increase. For example, health-related applications for smartphones and specialized wristbands for monitoring user health or sleep characteristics are becoming ubiquitous. However, these personal sensors, systems, and applications fail to monitor user health in a substantially holistic fashion and to make relevant short-term and long-term recommendations to users. The heart rate of an individual may be associated with a wide variety of characteristics of the individual, such as health, fitness, interests, activity level, awareness, mood, engagement, etc. Methods for measuring heart rate currently range from the simple to the highly sophisticated, from finding a pulse and counting beats over a period of time to coupling a subject to an EKG machine. However, each of these methods requires contact with the individual, the former providing a significant distraction to the individual and the latter requiring expensive equipment.
Thus, there is a need in the fields of healthcare and personal health for new and useful methods, systems, and apparatus for monitoring the health of a user, including non-obtrusively detecting physiological characteristics of the user, such as the user's heart rate.
Various embodiments or examples (“examples”) of the present application are disclosed in the following detailed description and the accompanying drawings. The drawings are not necessarily to scale.
Various embodiments or examples may be implemented in numerous ways, including as a system, a process, an apparatus, a user interface, or a series of program instructions on a non-transitory computer readable medium such as a computer readable storage medium or a computer network where the program instructions are sent over optical, electronic, or wireless communication links. In general, operations of disclosed processes may be performed in an arbitrary order, unless otherwise provided in the claims.
A detailed description of one or more examples is provided below along with accompanying drawing FIGS. The detailed description is provided in connection with such examples, but is not limited to any particular example. The scope is limited only by the claims and numerous alternatives, modifications, and equivalents are encompassed. Numerous specific details are set forth in the following description in order to provide a thorough understanding. These details are provided for the purpose of example and the described techniques may be practiced according to the claims without some or all of these specific details. For clarity, technical material that is known in the technical fields related to the examples has not been described in detail to avoid unnecessarily obscuring the description.
As depicted in
The system 100 preferably functions to deliver short-term recommendations to the user 114 based upon facial features extracted from an image of the user 114. The system 100 may further function to deliver long-term recommendations to the user 114 based upon facial features extracted from the image 112i of the user 114 and the weight of the user 114. The first current health indicator may be user heart rate, mood, stress level, exhaustion or sleep level, activity, or any other suitable health indicator. The first current health indicator is preferably based upon any one or more of user heart rate, respiratory rate, temperature, posture, facial feature, facial muscle position, facial swelling, or other health-related metric or feature that is identifiable in the image 112i of the user 114 (e.g., image 112i of face 112f). The first current health indicator is preferably determined from analysis of the present or most-recent image of the user 114 taken by the optical sensor 120, and the first, short-term recommendation is preferably generated through manipulation of the first current health indicator. The first, short-term recommendation is preferably immediately relevant to the user 114 and includes a suggestion that the user 114 may implement substantially immediately. Historic user health-related metrics, features, and indicators are preferably aggregated with the first current health indicator and the second current health indicator, which is related to user weight, to generate the second, long-term recommendation. The second, long-term recommendation is preferably relevant to the user 114 at a later time or over a period of time, such as later in the day, the next day, or over the following week, month, etc., though the first and second recommendations may be subject to any other timing.
The system 100 is preferably configured for arrangement within a bathroom such that user biometric data (e.g., user facial features, heart rate, mood, weight, etc.) may be collected at regular times or during intended actions of the user 114, such as every morning when the user 114 wakes and every evening when the user 114 brushes his teeth before bed. The system 100 may therefore be configured to mount to a wall adjacent a mirror or to replace a bathroom mirror or vanity (e.g., on wall 179 and above sink 180 of
The system 100 preferably collects and analyzes the image 112i of the user 114 passively (i.e. without direct user prompt or intended input) such that a daily routine or other action of the user 114 is substantially uninterrupted while user biometric data is collected and manipulated to generate the recommendations. However, the system 100 may function in any other way and be arranged in any other suitable location.
The system 100 preferably includes a tablet computer or comparable electronic device including the display 110, a processor 175, the optical sensor 120 that is a camera 170, and a wireless communication module 177, all of which are contained within the housing 140 of the tablet or comparable device. Alternatively, the system 100 may be implemented as a smartphone, gaming console, television, laptop or desktop computer, or other suitable electronic device. In one variation of the system 100, the processor 175 analyzes the image 112i captured by the camera 170 and generates the recommendations. In another variation of the system 100, the processor 175 collaborates with a remote server to analyze the image 112i and generate the recommendations. In yet another variation of the system 100, the processor 175 handles transmission of the image 112i and/or user weight data, through the wireless communication module 177, to the remote server, wherein the remote server extracts the user biometric data from the image 112i, generates the recommendations, and transmits the recommendations back to the system 100. Furthermore, one or more components of the system 100 may be disparate and arranged external the housing 140. In one example, the system 100 includes the optical sensor 120, wireless communication module 177, and processor 175 that are arranged within the housing 140, wherein the optical sensor 120 captures the image 112i, the processor 175 analyzes the image 112i, and the wireless communication module 177 transmits (e.g., using a wireless protocol such as Bluetooth (BT) or any of 802.11 (WiFi)) the recommendation to a separate device located elsewhere within the home of the user 114, such as to a smartphone carried by the user 114 or a television located in a sitting room, and wherein the separate device includes the display 110 and renders the recommendations for the user 114. However, the system 100 may include any number of components arranged within or external the housing 140. As used herein, the terms optical sensor 120 and camera 170 may be used interchangeably to denote an image capture system and/or device for capturing the image 112i and outputting one or more signals representative of the captured image 112i. Image 112i may be captured in still format or video (e.g., moving image) format.
As depicted in
As depicted in
The optical sensor 120 preferably records the image 112i of the user 114 that is a video feed including consecutive still images 102 with red 101, green 103, and blue 105 color signal components. However, the image 112i may be a still image 102 including any other additional or alternative color signal component (e.g., 101, 103, 105), or be of any other form or composition. The image 112i preferably includes and is focused on the face 112f of the user 114, though the image may be of any other portion of the user 114.
The optical sensor 120 preferably records the image 112i of the user 114 automatically, i.e. without a prompt or input from the user 114 directed specifically at the system 100. In one variation of the system 100, the optical sensor 120 interfaces with a microphone or other audio sensor incorporated into the system 100, wherein an audible sound above a threshold sound level may activate the optical sensor 120. For example, the sound of a closing door, running water, or a footstep may activate the optical sensor 120. In another variation of the system 100, the optical sensor 120 interfaces with an external sensor that detects a motion or action external the system. For example, a position sensor coupled to a bathroom faucet 181 and the system 100 may activate the optical sensor 120 when the faucet 181 is opened. In another example, a pressure sensor arranged on the floor proximal a bathroom sink 180, such as in a bathmat or a bath scale (e.g., a wirelessly-enabled scale 190, such as a bathmat scale), activates the optical sensor 120 when the user 114 stands on or trips the pressure sensor. In a further variation of the system 100, the optical sensor 120 interfaces with a light sensor that detects when a light has been turned on in a room, thus activating the optical sensor. In this variation, the optical sensor 120 may perform the function of the light sensor, wherein the optical sensor 120 operates in a low-power mode (e.g., does not focus, does not use a flash, operates at a minimum viable frame rate) until the room is lit, at which point the optical sensor 120 switches from the low-power setting to a setting enabling capture of a suitable image 112i of the user 114. In yet another variation of the system 100, the optical sensor 120 interfaces with a clock, timer, schedule, or calendar of the user 114. For example, for a user 114 who consistently wakes and enters the bathroom within a particular time window, the optical sensor 120 may be activated within the particular time window and deactivated outside of the particular time window. In this example, the system 100 may also learn habits of the user 114 and activate and deactivate the optical sensor 120 (e.g., to reduce power consumption) accordingly. In another example, the optical sensor 120 may interface with an alarm clock of the user 114, wherein, when the user 114 deactivates an alarm, the optical sensor 120 is activated and remains so for a predefined period of time. In a further variation of the system 100, the optical sensor 120 interfaces (e.g., via wireless module 177) with a mobile device (e.g., cellular phone) carried by the user 114, wherein the optical sensor 120 is activated when the mobile device is determined to be substantially proximal the system 100, such as via GPS, a cellular, Wi-Fi, or Bluetooth connection, near-field communications, or an RFID chip or tag indicating relative location or enabling distance- or location-related communications between the system 100 and the mobile device. However, the optical sensor 120 may interface with any other component, system, or service and may be activated or deactivated in any other way. Furthermore, the processor 175, remote server, or other component or service controlling the optical sensor 120 may implement facial recognition such that the optical sensor 120 only captures the image 112i of the user 114 (or the processor 175 or remote server only analyzes the image 112i) when the user 114 is identified in the field of view of the optical sensor 120 (or within the image).
The optical sensor 120 preferably operates in any number of modes, including an ‘off’ mode, a low-power mode, an ‘activated’ mode, and a ‘record’ mode. The optical sensor 120 is preferably off or in the low-power mode when the user 114 is not proximal or not detected as being proximal the system 100. As described above, the optical sensor 120 preferably does not focus, does not use a flash, and/or operates at a minimum viable frame rate in the low-power mode. In the activated mode, the optical sensor 120 may be recording the image 112i or simply be armed for recordation and not recording. However, the optical sensor 120 may function in any other way.
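The mode transitions described above lend themselves to a simple state machine. The following is a minimal Python sketch, not the actual implementation of the system 100; the controller class, trigger names, and transition rules are hypothetical stand-ins for the activation cues (light, sound, pressure, schedule, or device proximity) and for the ‘off’, low-power, ‘activated’, and ‘record’ modes discussed above.

```python
from enum import Enum, auto

class SensorMode(Enum):
    OFF = auto()
    LOW_POWER = auto()   # no focus, no flash, minimum viable frame rate
    ACTIVATED = auto()   # armed for recordation, not yet recording
    RECORD = auto()      # capturing the image 112i

class OpticalSensorController:
    """Hypothetical mode controller; trigger names are illustrative only."""

    def __init__(self) -> None:
        self.mode = SensorMode.LOW_POWER

    def update(self, room_lit: bool, user_proximal: bool,
               face_recognized: bool) -> SensorMode:
        # Remain in the low-power mode until an activation cue suggests
        # the user may be present.
        if not room_lit and not user_proximal:
            self.mode = SensorMode.LOW_POWER
        elif not face_recognized:
            self.mode = SensorMode.ACTIVATED   # armed, awaiting the user 114
        else:
            self.mode = SensorMode.RECORD      # user identified in field of view
        return self.mode
```

A real controller would likely also debounce its triggers and time out of the record mode, but those details are omitted here.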
As depicted in
The processor 175 and/or remote server preferably implements machine vision to extract at least one of the heart rate, the respiratory rate, the temperature, the posture, a facial feature, a facial muscle position, and/or facial swelling of the user from the image 112i thereof.
In one variation, the system 100 extracts the heart rate and/or the respiratory rate of the user 114 from the image 112i that is a video feed, as described in U.S. Provisional Application Ser. No. 61/641,672, filed on 2 May 2012, and titled “Method For Determining The Heart Rate Of A Subject”, already incorporated by reference herein in its entirety for all purposes.
In another variation, the system 100 implements thresholding, segmentation, blob extraction, pattern recognition, gauging, edge detection, color analysis, filtering, template matching, or any other suitable machine vision technique to identify a particular facial feature, facial muscle position, or posture of the user 114, or to estimate the magnitude of facial swelling or facial changes.
The processor 175 and/or remote server may further implement machine learning to identify any health-related metric or feature of the user 114 in the image 112i. In one variation of the system 100, the processor 175 and/or remote server implements supervised machine learning in which a set of training data of facial features, facial muscle positions, postures, and/or facial swelling is labeled with relevant health-related metrics or features. A learning procedure then preferably transforms the training data into generalized patterns to create a model that may subsequently be used to extract the health-related metric or feature from the image 112i. In another variation of the system 100, the processor 175 and/or remote server implements unsupervised machine learning (e.g., clustering) or semi-supervised machine learning, in which all or at least some of the training data, respectively, is not labeled. In this variation, the processor 175 and/or remote server may further implement feature extraction, principal component analysis (PCA), feature selection, or any other suitable technique to identify relevant features or metrics in and/or to prune redundant or irrelevant features from the image 112i of the user 114.
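As one hedged illustration of the supervised variation described above, the following Python sketch trains a classifier over facial-feature vectors, using principal component analysis to prune redundant features. The feature dimensionality, the labels, and the random stand-in data are hypothetical; a real system would train on labeled facial metrics extracted from images 112i.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical training data: rows are feature vectors extracted from images
# (e.g., facial-muscle positions, swelling magnitudes); labels are
# health-related annotations (e.g., 0 = rested, 1 = exhausted).
X_train = np.random.rand(200, 64)          # stand-in for real labeled data
y_train = np.random.randint(0, 2, 200)

# PCA prunes redundant or irrelevant features before classification,
# mirroring the feature-pruning step described above.
model = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
model.fit(X_train, y_train)

x_new = np.random.rand(1, 64)              # features from a new image 112i
predicted_label = model.predict(x_new)
```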
In the short-term, the processor 175 and/or remote server may associate any one or more of the health-related metrics or features with user stress. In an example implementation, any one or more of elevated user heart rate, elevated user respiratory rate, rapid body motions or head jerks, and facial wrinkles may indicate that the user 114 is currently experiencing an elevated stress level. For example, an elevated user heart rate accompanied by a furrowed brow may suggest stress, which may be distinguished from an elevated user heart rate and lowered eyelids that suggest exhaustion after exercise. Furthermore, any of the foregoing user metrics or features may be compared against threshold values or template features of other users, such as based upon the age, gender, ethnicity, demographic, location, or other characteristic of the user, to identify the elevated user stress level. Additionally or alternatively, any of the foregoing user metrics or features may be compared against historic user data to identify changes or fluctuations indicative of stress. For example, a respiratory rate soon after waking that is significantly more rapid than normal may suggest that the user is anxious or nervous about an upcoming event. In the short-term, the estimated elevated stress level of the user 114 may inform the first recommendation that is a suggestion to cope with a current stressor. For example, the display 110 may render the first recommendation that is a suggestion for the user 114 to count to ten or to sit down and breathe deeply, which may reduce the heart rate and/or respiratory rate of the user 114. By sourcing additional user data, such as time, recent user location (e.g., a gym or work), a post or status on a social network, credit card or expenditure data, or a calendar, elevated user heart rate and/or respiratory rate related to stress may be distinguished from that of other factors, such as physical exertion, elation, or other positive factors.
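A rule-based sketch of the short-term stress inference described above might look as follows; the thresholds, baselines, and feature flags are hypothetical and would in practice be derived from the user's historic data or from demographic templates.

```python
def estimate_state(heart_rate_bpm: float, respiratory_rate_bpm: float,
                   brow_furrowed: bool, eyelids_lowered: bool,
                   baseline_hr: float = 65.0, baseline_rr: float = 14.0) -> str:
    """Hypothetical heuristic combining short-term metrics with facial
    features, which act as disambiguators between stress and exertion."""
    hr_elevated = heart_rate_bpm > 1.2 * baseline_hr
    rr_elevated = respiratory_rate_bpm > 1.3 * baseline_rr
    # An elevated heart rate with a furrowed brow suggests stress; the same
    # heart rate with lowered eyelids may instead suggest post-exercise
    # exhaustion, per the example in the description above.
    if hr_elevated and brow_furrowed:
        return "stress"
    if hr_elevated and eyelids_lowered:
        return "post-exercise exhaustion"
    if hr_elevated and rr_elevated:
        return "exertion or stress"
    return "baseline"
```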
Over the long-term, user stress trends may be generated by correlating user stress with particular identified stressors. User stress trends may then inform the second recommendation that includes a suggestion to avoid, combat, or cope with sources of stress. Additionally or alternatively, user stress may be correlated with the weight of the user 114 over time. For example, increasing incidence of identified user stress over time that occurs simultaneously with user weight gain may result in a second, long-term recommendation that illustrates a correlation between stress and weight gain for the user 114 and includes preventative suggestions to mitigate the negative effects of stress or stressors on the user 114. In this example, the second recommendation may be a short checklist of particular, simple actions shown to aid the user 114 in coping with external factors or stressors, such as a reminder to bring a poop bag when walking the dog in the morning, to pack the next day's lunch the night before, to pack a computer power cord before leaving work, and to wash and fold laundry each Sunday. The system 100 may therefore reduce user stress by providing timely reminders of particular tasks, particularly when the user is occupied with other obligations, responsibilities, family, or work.
Current elevated user heart rate and/or respiratory rate may alternatively indicate recent user activity, such as exercise, which may be documented in a user activity journal. Over the long-term, changes to weight, stress, sleep or exhaustion level, or any other health indicator of the user 114 may be correlated with one or more user activities, as recorded in the user activity journal. Activities correlating with positive changes to user health may then be reinforced by the second recommendation. Additionally or alternatively, the user 114 may be guided away from activities correlating with negative user health changes in the second recommendation. For example, consistent exercise may be correlated with a reduced resting heart rate of the user 114 and user weight loss, and the second recommendation presented to the user 114 every morning on the display 110 may depict this correlation (e.g., in graphical form) and suggest that the user 114 continue the current regimen. In another example, forgetting to take allergy medication at night before bed during the spring may be correlated with decreased user productivity and energy level on the following day, and the second recommendation presented to the user 114 each night during the spring may therefore include a suggestion to take an allergy medication at an appropriate time.
In the short-term, the processor 175 and/or remote server may also or alternatively associate any one or more of the health-related metrics or features with user mood. In general, user posture, facial wrinkles, and/or facial muscle position, identified in the image 112i of the user 114, may indicate a current mood or emotion of the user 114. For example, sagging eyelids and stretched skin around the lips and cheeks may correlate with amusement, a drooping jaw line and upturned eyebrows may correlate with interest, and heavy forehead wrinkles and squinting eyelids may correlate with anger. As described above, additional user data may be accessed and associated with the mood of the user 114. In the short-term, the first recommendation may include a suggestion to prolong or harness a positive mood or a suggestion to overcome a negative mood. Over the long-term, estimated user moods may be correlated with user experiences and/or external factors, and estimated user moods may thus be added to a catalogue of positive and negative user experiences and factors. This mood catalogue may then inform second recommendations that include suggestions to avoid and/or to prepare in advance for negative experiences and factors.
The processor 175 and/or remote server may also or alternatively associate any one or more of the health-related metrics or features with user sleep or exhaustion. In one variation, periorbital swelling (i.e. bags under the eyes) identified in the face 112f of the user 114 in the image 112i is associated with user exhaustion or lack of sleep. Facial swelling identified in the image 112i may be analyzed independently or in comparison with past facial swelling of the user 114 to generate an estimation of user exhaustion, sleep quality, or sleep quantity. In the long-term, user activities, responsibilities, expectations, and sleep may be prioritized and/or optimized to best ensure that the user 114 fulfills the most pressing responsibilities and obligations and completes desired activities and expectations with appropriate sleep quantity and/or quality. This optimization may then preferably be presented to the user 114 on the display 110. For example, for the user 114 who loves to cook but typically spends three hours cooking each night at the expense of eating late and sleeping less, the second recommendation may be for a recipe with less prep time such that the user 114 may eat earlier and sleep longer while still fulfilling a desire to cook. In another example, for the user 114 who typically awakes to an alarm in the middle of a REM cycle, the second recommendation may be to set an alarm earlier to avoid waking in the middle of REM sleep. In this example, all or a portion of the system 100 may be arranged adjacent a bed of the user 114 or in communication with a device adjacent the bed of the user 114, wherein the system 100 or other device measures the heart rate and/or respiratory rate of the user 114 through non-contact means while the user sleeps, such as described in U.S. Provisional Application Ser. No. 61/641,672, filed on 2 May 2012, and titled “Method For Determining The Heart Rate Of A Subject”, already incorporated by reference herein in its entirety for all purposes.
Alternatively, the system 100 may interface with a variety of devices, such as a biometric or motion sensor worn by the user 114 while sleeping or during other activities, such as a heart rate sensor or accelerometer, or any other device or sensor configured to capture user sleep data or other data for use in the methods (e.g., flow charts) described in
In the long term, the processor 175 and/or remote server may also or alternatively access user dietary data, such as from a user dietary profile maintained on a local device, mobile device, or remote network and consistently updated by the user 114. For example, the system 100 may access ‘The Eatery,’ a mobile dietary application accessible on a smartphone or other mobile device carried by the user 114. Dietary trends may be associated with trends in user weight, stress, and/or exercise, to generate the second recommendation that suggests changes, improvements, and/or maintenance of user diet, user stress coping mechanisms, and user exercise plan. For example, periods of high estimated user stress may be correlated with a shift in user diet toward heavily-processed foods and subsequent weight gain, and the second recommendation may therefore include suggestions to cope with or overcome stress as well as suggestions for different, healthier snacks. However, the system 100 may account for user diet in any other way in generating the first and/or second recommendations.
The processor 175 and/or remote server may also or alternatively estimate if the user 114 is or is becoming ill. For example, facial analyses of the user 114 in consecutive images 112i may show that the cheeks on face 112f of the user 114 are slowly sinking, which is correlated with user illness. The system 100 may subsequently generate a recommendation to see a doctor, to eat certain foods to boost the user's immune system, or to stay home from work or school to recover, or may reference local sickness trends to suggest a particular illness and a correlated risk or severity level. However, other user biometric data, such as heart rate or respiratory rate, may also or alternatively indicate if the user 114 is or is becoming sick, and the system 100 may generate any other suitable illness-related recommendation for the user 114.
According to some examples, computer system 200 performs specific operations by processor 204 executing one or more sequences of one or more instructions stored in system memory 206. Such instructions may be read into system memory 206 from another non-transitory computer readable medium, such as storage device 208 or disk drive 210 (e.g., a HD or SSD). In some examples, circuitry may be used in place of or in combination with software instructions for implementation. The term “non-transitory computer readable medium” refers to any tangible medium that participates in providing instructions to processor 204 for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, optical, magnetic, or solid state disks, such as disk drive 210. Volatile media includes dynamic memory, such as system memory 206. Common forms of non-transitory computer readable media include, for example, floppy disk, flexible disk, hard disk, SSD, magnetic tape, any other magnetic medium, CD-ROM, DVD-ROM, Blu-Ray ROM, USB thumb drive, SD Card, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer may read.
Instructions may further be transmitted or received using a transmission medium. The term “transmission medium” may include any tangible or intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such instructions. Transmission media includes coaxial cables, copper wire, and fiber optics, including wires that comprise bus 202 for transmitting a computer data signal. In some examples, execution of the sequences of instructions may be performed by a single computer system 200. According to some examples, two or more computer systems 200 coupled by communication link 220 (e.g., LAN, Ethernet, PSTN, or wireless network) may perform the sequence of instructions in coordination with one another. Computer system 200 may transmit and receive messages, data, and instructions, including programs (i.e., application code), through communication link 220 and communication interface 212. Received program code may be executed by processor 204 as it is received, and/or stored in disk drive 210, or other non-volatile storage for later execution. Computer system 200 may optionally include a wireless transceiver 213 in communication with the communication interface 212 and coupled 215 with an antenna 217 for receiving and generating RF signals 221, such as from a WiFi network, BT radio, or other wireless network and/or wireless devices, for example. Examples of wireless devices include but are not limited to: a data capable strap band, wristband, wristwatch, digital watch, or wireless activity monitoring and reporting device; a smartphone; cellular phone; tablet; tablet computer; pad device (e.g., an iPad); touch screen device; touch screen computer; laptop computer; personal computer; server; personal digital assistant (PDA); portable gaming device; a mobile electronic device; and a wireless media device, just to name a few. Computer system 200 in part or whole may be used to implement one or more components of system 100 of
The system 100 may additionally or alternatively provide a recommendation that is an answer or probable solution to an automatically- or user-selected question, as depicted in
As depicted in
Attention is now directed to
Orientation monitor 152 is configured to monitor an orientation 112 of the face (e.g., face 112f) of the organism (e.g., user 114), and to detect a change in orientation in which at least one face portion is absent. For example, the organism may turn its head away, thereby removing a cheek portion 111b from the view of image capture device 104. For example, in
Physiological signal extractor 158 is configured to extract one or more signals including physiological information from subsets of light components captured by light capture device 104. For example, each subset of light components may be associated with one or more frequencies and/or wavelengths of light. According to some embodiments, physiological signal extractor 158 identifies a first subset of frequencies (e.g., a range of frequencies, including a single frequency) constituting green visible light, a second subset of frequencies constituting red visible light, and a third subset of frequencies constituting blue visible light. According to other embodiments, physiological signal extractor 158 identifies a first subset of wavelengths (e.g., a range of wavelengths, including a single wavelength) constituting green visible light, a second subset of wavelengths constituting red visible light, and a third subset of wavelengths constituting blue visible light. Other frequencies and wavelengths are possible, including those outside the visible spectrum. As shown, a signal analyzer 159 of physiological signal extractor 158 is configured to analyze the pixel values or other color-related signal values 117a (e.g., green light), 117b (e.g., red light), and 117c (e.g., blue light). For example, signal analyzer 159 may identify a time-domain component associated with a change in blood volume associated with the one or more surfaces of the organism. In some embodiments, physiological signal extractor 158 is configured to aggregate or average one or more AC signals from one or more pixels over one or more sets of pixels. Signal analyzer 159 may be configured to extract a physiological characteristic based on, for example, a time-domain component, using, for example, Independent Component Analysis (“ICA”) and/or a Fourier Transform (e.g., an FFT).
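A minimal Python sketch of the extraction and analysis performed by physiological signal extractor 158 and signal analyzer 159 might proceed as follows, assuming NumPy, SciPy, and scikit-learn are available; the array shapes and function names are illustrative only.

```python
import numpy as np
from scipy.signal import detrend
from sklearn.decomposition import FastICA

# frames: hypothetical array of shape (n_frames, height, width, 3) holding
# RGB video of the face region, captured at a known frame rate.
def extract_channel_traces(frames: np.ndarray) -> np.ndarray:
    # Spatially average each color plane per frame, as the extractor 158
    # aggregates or averages AC signals over a set of pixels.
    traces = frames.reshape(frames.shape[0], -1, 3).mean(axis=1)  # (n_frames, 3)
    # Detrending keeps the AC (pulsatile) component of each color trace.
    return detrend(traces, axis=0)

def separate_pulse_component(traces: np.ndarray) -> np.ndarray:
    # ICA un-mixes the red/green/blue traces into independent sources;
    # one source typically carries the blood-volume (plethysmographic) signal.
    ica = FastICA(n_components=3, random_state=0)
    return ica.fit_transform(traces)  # (n_frames, 3) independent sources
```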
Physiological data signal generator 160 may be configured to generate a physiological data signal 115 representing one or more physiological characteristics. Examples of such physiological characteristics include a heart rate, a pulse wave rate, a heart rate variability (“HRV”), and a respiration rate, among others, each determined in a non-invasive manner.
According to some embodiments, physiological characteristic determinator 150 may be coupled to a motion sensor, such as an accelerometer or any other like device, to use motion data from the motion sensor to determine a subset of pixels in a set of pixels based on a predicted distance calculated from the motion data. For example, consider that a pixel or group of pixels 171 is being analyzed in association with a face portion, and that a motion (of either the organism or the image capture device, or both) is detected that will move the face portion out of the pixel or group of pixels 171. Surface detector 154 may be configured to, for example, detect such motion of a portion of the face in a set of pixels 117c, which affects a subset of pixels 171 including a face portion from the one or more portions of the face. Surface detector 154 predicts a distance in which the face portion moves from the subset of pixels 171 and determines a next subset of pixels 173 in the set of pixels 117c based on the predicted distance. Then, reflected light associated with the next subset of pixels 173 may be used for analysis.
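The predicted-distance step may be illustrated with a short, hypothetical sketch: given a tracked pixel subset and motion-sensor data, the next subset is found by shifting the region by the predicted displacement and clamping it to the frame bounds. The parameter names and units are assumptions, not part of the description above.

```python
import numpy as np

def predict_next_subset(roi_center_xy, velocity_xy_px_per_s, dt_s, frame_shape):
    """Hypothetical sketch: shift a tracked face-portion pixel subset (e.g., 171)
    by the displacement predicted from motion-sensor data to locate the next
    subset (e.g., 173), clamped to the frame bounds."""
    displacement = np.asarray(velocity_xy_px_per_s) * dt_s
    x, y = np.asarray(roi_center_xy) + displacement
    height, width = frame_shape[:2]
    return (float(np.clip(x, 0, width - 1)),
            float(np.clip(y, 0, height - 1)))
```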
In some embodiments, physiological characteristic determinator 150 may be coupled to a light sensor 107 (e.g., 104, 120, 170). Signal analyzer 159 may be configured to compensate for a value of light received from the light sensor 107 that indicates a non-conforming amount of light. For example, consider that the light source generating the light is a fluorescent light source that provides a less-than-desirable amount of, for example, green light. Signal analyzer 159 may compensate, for example, by weighting values associated with the green light higher, or by weighting values associated with other subsets of light components, such as red and blue light, lower to decrease the influence of the red and blue light. Other compensation techniques are possible.
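One hypothetical form of such compensation is a simple per-channel re-weighting of the extracted color traces; the weighting scheme below is illustrative only and is not prescribed by the description above.

```python
import numpy as np

def compensate_illumination(traces: np.ndarray, green_deficit: float) -> np.ndarray:
    """Hypothetical compensation: when the light sensor 107 reports a
    non-conforming (deficient) amount of green light, up-weight the green
    trace and down-weight red and blue. traces: (n_frames, 3), order R, G, B;
    green_deficit: assumed fraction in [0, 1]."""
    weights = np.array([1.0 - 0.5 * green_deficit,   # red, reduced influence
                        1.0 + green_deficit,         # green, boosted
                        1.0 - 0.5 * green_deficit])  # blue, reduced influence
    return traces * weights
```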
In some embodiments, physiological characteristic determinator 150, and a device in which it is disposed, may be in communication (e.g., wired or wirelessly) with a mobile device, such as a mobile phone or computing device. In some cases, such a mobile device, or any networked computing device (not shown) in communication with physiological characteristic determinator 150, may provide at least some of the structures and/or functions of any of the features described herein. As depicted in
For example, physiological characteristic determinator 150 and any of its one or more components, such as an orientation monitor 152, a surface detector 154, a feature filter 156, a physiological signal extractor 158, and a physiological signal generator 160, may be implemented in one or more computing devices (i.e., any video-producing device, such as a mobile phone, or a wearable computing device, such as UP® or a variant thereof), or any other mobile computing device, such as a wearable device or mobile phone (whether worn or carried), that includes one or more processors configured to execute one or more algorithms in memory. Thus, at least some of the elements in
As hardware and/or firmware, the above-described structures and techniques may be implemented using various types of programming or integrated circuit design languages, including hardware description languages, such as any register transfer language (“RTL”) configured to design field-programmable gate arrays (“FPGAs”), application-specific integrated circuits (“ASICs”), multi-chip modules, or any other type of integrated circuit. For example, physiological characteristic determinator 150 and any of its one or more components, such as an orientation monitor 152, a surface detector 154, a feature filter 156, a physiological signal extractor 158, and a physiological signal generator 160, may be implemented in one or more circuits. Thus, at least one of the elements in
According to some embodiments, the term “circuit” may refer, for example, to any system including a number of components through which current flows to perform one or more functions, the components including discrete and complex components. Examples of discrete components include transistors, resistors, capacitors, inductors, diodes, and the like, and examples of complex components include memory, processors, analog circuits, and digital circuits, including field-programmable gate arrays (“FPGAs”) and application-specific integrated circuits (“ASICs”). Therefore, a circuit may include a system of electronic components and logic components (e.g., logic configured to execute instructions such that a group of executable instructions of an algorithm, for example, is thus a component of a circuit). According to some embodiments, the term “module” may refer, for example, to an algorithm or a portion thereof, and/or logic implemented in either hardware circuitry or software, or a combination thereof (i.e., a module may be implemented as a circuit). In some embodiments, algorithms and/or the memory in which the algorithms are stored are “components” of a circuit. Thus, the term “circuit” may also refer, for example, to a system of components, including algorithms. These may be varied and are not limited to the examples or descriptions provided.
As depicted in
As depicted in
As depicted in
In one variation, the system 100 may further function as a communication portal between the user 114 and a second user (not shown). Through the system 100, the user 114 may access the second user to discuss health-related matters, such as stress, a dietary or exercise plan, or sleep patterns. Additionally or alternatively, the user 114 may access the system 100 to prepare for a party or outing remotely with the second user, wherein the system 100 transmits audio and/or visual signals of the user 114 and second user between the second user and the user 114. However, the system 100 may operate in any other way and perform any other function.
Moving now to
As depicted in
Turning now to
The stage 410 may comprise one or more adjunct stages denoted as stages 413-419. The stage 410 may include determining a respiratory rate of the user 114 by performing image analysis of the image 112i as depicted at a stage 413. The stage 410 may include determining a heart rate of the user 114 by performing image analysis of the image 112i as depicted at a stage 415. The stage 410 may include determining a mood of the user 114 by performing image analysis of the image 112i as depicted at a stage 417. The stage 410 may include estimating user exhaustion and/or user sleep of the user 114 by performing image analysis of the image 112i as depicted at a stage 419.
The stages 430 and/or 440 may comprise one or more adjunct stages denoted as stages 432 and 442, respectively. Stage 430 may comprise recommending, to the user 114, an action related to stress of the user 114, as denoted by a stage 432. Analysis of the image 112i may be used to determine that the user 114 is under stress. Stage 440 may comprise recommending, to the user 114, an action related to diet, sleep, or exercise, as denoted by a stage 442. Analysis of the image 112i may be used to determine which recommendations related to diet, sleep, or exercise to make to the user 114.
Attention is now directed to
Method 400c may function to determine the HR of the subject through non-contact means, specifically by identifying fluctuations in the amount of blood in a portion of the body of the subject (e.g., face 112f), as captured in a video signal (e.g., from 120, 170, 104), through component analysis of the video signal and isolation of a frequency peak in a Fourier transform of the video signal. Method 400c may be implemented as an application or applet executing on an electronic device incorporating a camera, such as a cellular phone, smartphone, tablet, laptop computer, or desktop computer, wherein stages of the method 400c are completed in part or in whole by the electronic device. Stages of method 400c may additionally or alternatively be implemented by a remote server or network in communication with the electronic device. Alternatively, the method 400c may be implemented as a service that is remotely accessible and that serves to determine the HR of a subject in an uploaded, linked, or live-feed video signal, though the method 400c may be implemented in any other way. In the foregoing or any other variation, the video signal and pixel data and values generated therefrom are preferably a live feed from the camera in the electronic device, though the video signal may be preexisting, such as a video signal recorded previously with the camera, a video signal sent to the electronic device, or a video signal downloaded from a remote server, network, or website. Furthermore, method 400c may also include calculating the heart rate variability (HRV) of the subject and/or calculating the respiratory rate (RR) of the subject, or any other physiological characteristic, such as a pulse wave rate, a Meyer wave, etc.
In the example depicted in
The video camera preferably operates at a known frame rate, such as fifteen or thirty frames per second, or other suitable frame rate, such that a time-domain component is associated with the video signal. The video camera also preferably incorporates a plurality of color sensors, including distinct red, blue, and green color sensors, each of which generates a distinct red, blue, and green source signal, respectively. The color source signal from each color sensor is preferably in the form of an image for each frame recorded by the video camera. Each color source signal from each frame may thus be fed into a postprocessor implementing other Blocks of the method 400c and/or 500 to determine the HR, HRV, and/or RR of the subject. In some embodiments, a light capture device may be other than a camera or video camera, but may include any type of light (of any wavelength) receiving and/or detecting sensor.
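As a hedged sketch of this capture stage, the following Python example (assuming OpenCV is available) reads frames from a live camera feed at the reported frame rate and splits each frame into its color source signals before handing them to a hypothetical downstream postprocessor.

```python
import cv2

cap = cv2.VideoCapture(0)                  # live feed from the device camera
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0    # fall back if the driver reports 0

for _ in range(int(fps * 10)):             # roughly ten seconds of frames
    ok, frame = cap.read()
    if not ok:
        break
    blue, green, red = cv2.split(frame)    # OpenCV orders channels B, G, R
    # postprocess(red, green, blue, fps)   # hypothetical downstream stage
cap.release()
```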
As depicted in
Stage 450 may preferably implement machine vision to identify the face in the video signal. In one variation, stage 450 may use edge detection and template matching to isolate the face in the video signal. In another variation, stage 450 may implement pattern recognition and machine learning to determine the presence and position of the face 112f in the video signal. This variation may preferably incorporate supervised machine learning, wherein stage 450 accesses a set of training data that includes template images properly labeled as including or not including a face. A learning procedure may then transform the training data into generalized patterns to create a model that may subsequently be used to identify a face in video signals. However, in this variation, stage 450 may alternatively implement unsupervised learning (e.g., clustering) or semi-supervised learning in which at least some of the training data has not been labeled. In this variation, stage 450 may further implement feature extraction, principal component analysis (PCA), feature selection, or any other suitable technique to prune redundant or irrelevant features from the video signal. However, stage 450 may implement edge detection, gauging, clustering, pattern recognition, template matching, feature extraction, principal component analysis (PCA), feature selection, thresholding, positioning, or color analysis in any other way, or use any other type of machine learning or machine vision to identify the face 112f of the subject (e.g., user 114) in the video signal.
In stage 450, each frame of the video feed, and preferably each frame of each color source signal of the video feed, may be cropped of all image data excluding the face 112f or a specific portion of the face 112f of the subject (e.g., user 114). By removing all information in the video signal that is irrelevant to the plethysmographic signal, the amount of time required to calculate subject HR may be reduced.
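As one concrete, hypothetical stand-in for the machine-vision face identification and cropping of stage 450, the following Python sketch uses OpenCV's bundled Haar cascade; template matching or a learned model could serve the same role.

```python
import cv2

# OpenCV ships a pretrained frontal-face Haar cascade with the library.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(frame):
    """Return the face region of a frame, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                  # no face: skip this frame
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detection
    # Discard image data irrelevant to the plethysmographic signal.
    return frame[y:y + h, x:x + w]
```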
As depicted in
The plethysmographic signal that is extracted from the video signal in stage 455 may preferably be an aggregate or averaged AC signal from a plurality of pixels associated with a portion of the face 112f of the subject identified in the video signal, such as either or both cheeks 111b or the forehead 111a of the subject. By aggregating or averaging an AC signal from a plurality of pixels, errors and outliers in the plethysmographic signal may be minimized. Furthermore, multiple plethysmographic signals may be extracted in stage 455 for each of various regions of the face 112f, such as each cheek 111b and the forehead 111a of the subject, as shown in
As depicted in
As depicted in
In one variation of the method 400c as depicted in method 500 of
In the variation of the method 400c as depicted in method 500 of
Alternatively, in the variation of the method 400c in which multiple plethysmographic signals are transformed in the stage 460 and/or stage 464, stage 465 may include combining the multiple transformed plethysmographic signals into a composite transformed plethysmographic signal, wherein a peak frequency is isolated in the composite transformed plethysmographic signal to estimate the HR of the subject. However, stage 465 may function in any other way and implement any other mechanisms.
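The peak-isolation step may be sketched as follows in Python: the transformed plethysmographic signal (or a composite of several) is searched for its dominant frequency within a physiologically plausible band, here assumed to be 0.75 to 4 Hz (45 to 240 bpm); the band limits are an assumption, not a value prescribed by the description above.

```python
import numpy as np

def estimate_hr_bpm(pleth: np.ndarray, fps: float) -> float:
    """Isolate the dominant frequency of a plethysmographic signal within an
    assumed physiologically plausible band and convert it to beats per minute."""
    spectrum = np.abs(np.fft.rfft(pleth - pleth.mean()))
    freqs = np.fft.rfftfreq(len(pleth), d=1.0 / fps)
    band = (freqs >= 0.75) & (freqs <= 4.0)          # 45-240 bpm
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq

# For multiple face regions (e.g., the forehead and both cheeks), the
# individual spectra may simply be summed into a composite spectrum before
# isolating the peak, mirroring the composite variation described above.
```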
In a variation of the method 400c as depicted in method 500 in
In a variation of the method 400c as depicted in method 500 in
As depicted in
By enabling a mobile device, such as a smartphone or tablet, to implement one or more of the methods 400c, 500, or 600, the subject may access any of the aforementioned calculations and generate other fitness-based metrics substantially on the fly and without sophisticated equipment. The methods 400c, 500, or 600, as applied to exercise, are preferably provided through a fitness application (“fitness app”) executing on the mobile device, wherein the app stores subject fitness metrics, plots subject progress, recommends activities or exercise routines, and/or provides encouragement to the subject, such as through a digital fitness coach. The fitness app may also incorporate other functions, such as monitoring or receiving inputs pertaining to food consumption or determining subject activity based upon GPS or accelerometer data.
Referring back to
HR, HRV, and RR, which may correlate with the health, wellness, and/or fitness of the subject, may thus be tracked over time at the stage 606 and substantially in the background, thus increasing the amount of health-related data captured for a particular subject while decreasing the amount of positive action necessary to capture health-related data on the part of the subject, a medical professional, or other individual. Through the method 600, or methods 400c or 500, health-related information may be recorded substantially automatically during normal, everyday actions already performed by a large subset of the population.
With such large amounts of HR, HRV, and/or RR data for the subject, health risks for the subject may be estimated at the stage 622. In particular, trends in HR, HRV, and/or RR, such as at various times or during or after certain activities, may be determined at the stage 612. In this variation, additional data falling outside of an expected value or trend may trigger warnings or recommendations for the subject. In a first example, if the subject is middle-aged and has a HR that remains substantially low and at the same rate throughout the week, but the subject engages occasionally in strenuous physical activity, the subject may be warned of increased risk of heart attack and encouraged to engage in light physical activity more frequently at the stage 624. In a second example, if the HR of the subject is typically 65 bpm within five minutes of getting out of bed, but on a particular morning the HR of the subject does not reach 65 bpm until thirty minutes after rising, the subject may be warned of the likelihood of pending illness, which may automatically trigger confirmation of a doctor visit at the stage 626 or generation of a list of foods that may boost the immune system of the subject. Trends may also show progress of the subject, such as improved HR recovery throughout the course of a training or exercise regimen.
In this variation, method 600 may also be used to correlate the effect of various inputs on the health, mood, emotion, and/or focus of the subject. In a first example, the subject may engage an app on his smartphone (e.g., The Eatery by Massive Health) to record a meal, snack, or drink. While inputting such data, a camera on the smartphone may capture the HR, HRV, and/or RR of the subject such that the meal, snack, or drink may be associated with measured physiological data. Over time, this data may correlate certain foods with certain feelings, mental or physical states, energy levels, or workflow at the stage 620. In a second example, the subject may input an activity, such as by “checking in” (e.g., through a Foursquare app on a smartphone) to a location associated with a particular product or service. When shopping, watching a sporting event, drinking at a pub with friends, seeing a movie, or engaging in any other activity, the subject may engage his smartphone for any number of tasks, such as making a phone call or reading an email. When engaged by the user, the smartphone may also capture subject HR and then tag the activity, location, and/or individuals proximal the user with measured physiological data. Trend data at the stage 606 may then be used to make recommendations to the subject, such as a recommendation to avoid a bar or certain individuals because physiological data indicates greater anxiety or stress when proximal the bar or the certain individuals. Alternatively, an elevated HR of the subject while performing a certain activity may indicate engagement in and/or enjoyment of the activity, and the subject may subsequently be encouraged to join friends who are currently performing the activity. Generally, at the stage 610, social alerts may be presented to the subject and may be controlled (and scheduled), at least in part, by the health effect of the activity on the subject.
In another example implementation, the method 600 may measure the HR of the subject who is a fetus. For example, the microphone integral with a smartphone may be held over a woman's abdomen to record the heart beats of the mother and the child. Simultaneously, the camera of the smartphone may be used to determine the HR of the mother via the method 600, wherein the HR of the woman may then be removed from the combined mother-fetus heart beats to isolate the heart beats and the HR of the fetus alone. This functionality may be provided through software (e.g., a “baby heart beat app”) operating on a standard smartphone rather than through specialized equipment. Furthermore, a mother may use such an application at any time to capture the heartbeat of the fetus, rather than waiting to visit a hospital. This functionality may be useful in monitoring the health of the fetus, wherein quantitative data pertaining to the fetus may be obtained at any time, thus permitting potential complications to be caught early and reducing risk to the fetus and/or the mother. Fetus HR data may also be cumulative and assembled into trends, such as described above.
Generally, the method 600 may be used to test for certain heart or health conditions without substantial or specialized equipment. For example, a victim of a recent heart attack may use nothing more than a smartphone with integral camera to check for heart arrhythmia. In another example, the subject may test for risk of cardiac arrest based upon HRV. Recommendations may also be made to the subject, such as based upon trend data, to reduce subject risk of heart attack. However, the method 600 may be used in any other way to achieve any other desired function.
Further, method 600 may be applied as a daily routine assistant. Block S450 may be configured to include generating a suggestion to improve the physical, mental, or emotional health of the subject substantially in real time. In one example implementation, the method 600 is applied to food, exercise, and/or caffeine reminders. For example, if the subject HR has fallen below a threshold, the subject may be encouraged to eat. Based upon trends, past subject data, subject location, subject diet, or subject likes and dislikes, the type or content of a meal may also be suggested to the subject. Also, if the subject HR is trending downward, such as following a meal, a recommendation for coffee may be provided to the subject. A coffee shop may also be suggested, such as based upon proximity to the subject or if a friend is currently at the coffee shop. Furthermore, a certain coffee or other consumable may also be suggested, such as based upon subject diet, subject preferences, or third-party recommendations, such as sourced from Yelp. The method 600 may thus function to provide suggestions to maintain an energy level and/or a caffeine level of the subject. The method 600 may also provide “deep breath” reminders. For example, if the subject is composing an email during a period of elevated HR, the subject may be reminded to calm down and return to the email after a period of reflection. For example, strong language in an email may corroborate an estimated need for the subject to break from a task. Any of these recommendations may be provided through pop-up notifications on a smartphone, tablet, computer, or other electronic device, through an alarm, by adjusting a digital calendar, or by any other communication means or through any other device.
In another example implementation, the method 600 may be used to track sleep patterns. For example, a smartphone or tablet placed on a nightstand and pointed at the subject may capture subject HR and RR throughout the night. This data may be used to determine sleep state, such as to wake up the subject at an ideal time (e.g., outside of REM sleep). This data may alternatively be used to diagnose sleep apnea or other sleep disorders. Sleep patterns may also be correlated with other factors, such as HR before bed, stress level throughout the day (as indicated by elevated HR over a long period of time), dietary habits (as indicated through a food app or changes in subject HR or RR at key times throughout the day), subject weight or weight loss, daily activities, or any other factor or physiological metric. Recommendations for the subject may thus be made to improve the health, wellness, and fitness of the subject. For example, if the method 600 determines that the subject sleeps better, such as with fewer interruptions or less snoring, on days in which the subject engages in light to moderate exercise, the method 600 may include a suggestion that the subject forego an extended bike ride on the weekend (as noted in a calendar) in exchange for shorter rides during the week. However, any other sleep-associated recommendation may be presented to the subject.
The method 600 may also be implemented through an electronic device configured to communicate with external sensors to provide daily routine assistance. For example, the electronic device may include a camera and a processor integrated into a bathroom vanity, wherein the HR, HRV, and RR of the subject are captured while the subject brushes his teeth, combs his hair, etc. A bathmat (e.g., 190) in the bathroom may include a pressure sensor configured to capture at the stage 608 the weight of the subject, which may be transmitted to the electronic device. The weight, hygiene, and other activity and physiological factors may thus all be captured in the background while a subject prepares for and/or ends a typical day. However, the method 600 may function independently or in conjunction with any other method, device, or sensor to assist the subject in a daily routine.
Other applications of the stage 470 of
In another example, the method 600 may be used to determine mood, interest, chemistry, etc. of one or more actors in a movie or television show. A user may point an electronic device implementing the method 600 at a television to obtain an estimate of the HR of the actor(s) displayed therein. This may provide further insight into the character of the actor(s) and allow the user to understand the actor on a new, more personal level. However, the method 600 may be used in any other way to provide any other functionality.
According to some examples, computing platform 700 performs specific operations by processor 704 executing one or more sequences of one or more instructions stored in system memory 706 (e.g., executable instructions embodied in a non-transitory computer readable medium), and computing platform 700 may be implemented in a client-server arrangement, peer-to-peer arrangement, or as any mobile computing device, including smart phones and the like. Such instructions or data may be read into system memory 706 from another computer readable medium, such as storage device 708. In some examples, hard-wired circuitry may be used in place of or in combination with software instructions for implementation. Instructions may be embedded in software or firmware. The term “non-transitory computer readable medium” refers to any tangible medium that participates in providing instructions to processor 704 for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks and the like. Volatile media includes dynamic memory, such as system memory 706.
Common forms of non-transitory computer readable media include, for example, floppy disks, flexible disks, hard disks, magnetic tape, any other magnetic medium, CD-ROMs, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer may read. Instructions may further be transmitted or received using a transmission medium. The term “transmission medium” may include any tangible or intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media that facilitate communication of such instructions. Transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise bus 702 for transmitting a computer data signal.
In some examples, execution of the sequences of instructions may be performed by computing platform 700. According to some examples, computing platform 700 may be coupled by communication link 721 (e.g., a wired network, such as LAN, PSTN, or any wireless network) to any other processor to perform the sequence of instructions in coordination with (or asynchronous to) one another. Computing platform 700 may transmit and receive messages, data, and instructions, including program code (e.g., application code) through communication link 721 and communication interface 713. Received program code may be executed by processor 704 as it is received, and/or stored in memory 706 or other non-volatile storage for later execution.
In the example depicted in FIG. 7, computing platform 700 thus includes at least bus 702, processor 704, system memory 706, storage device 708, communication interface 713, and communication link 721.
Referring now to FIG. 8, wireless resources 100, 190, 810, 820, and 850 may wirelessly communicate with one another to share data and processing tasks.
One or more of data 813, 823, 853, 873, and 893 may comprise data for determining the health of a user, including but not limited to: biometric data; weight data; activity data; recommended action data; first and/or second current health indicator data; historic health indicator data; short term data; long term data; user weight data; image capture data from face 112f; user sleep data; user exhaustion data; user mood data; user heart rate data; heart rate variability data; user respiratory rate data; Fourier method data; data related to the plethysmographic signal; red, green, and blue image data; user meal data; trend data; user calendar data; user activity data; user diet data; user exercise data; user health data; data for transforms; and data for filters, just to name a few. Data 813, 823, 853, 873, and 893 may reside in whole or in part in one or more of the wireless resources 100, 190, 810, 820, and 850.
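Because the list above mentions red, green, and blue image data, the plethysmographic signal, and Fourier method data, a brief sketch of how those pieces conventionally fit together may be helpful. The choice of the green channel and the 0.75-4 Hz passband are common remote-plethysmography assumptions and are not necessarily the exact transforms and filters referenced above.

    # Illustrative sketch: estimating heart rate from a plethysmographic
    # signal recovered from per-frame color averages, using a Fourier method.
    import numpy as np

    def estimate_hr_bpm(green_means, fps):
        # green_means: mean green-channel value of the face region per frame.
        x = np.asarray(green_means, dtype=float)
        x = x - x.mean()                         # remove the DC component
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
        band = (freqs >= 0.75) & (freqs <= 4.0)  # ~45-240 beats per minute
        peak = freqs[band][np.argmax(spectrum[band])]
        return 60.0 * peak

    # Synthetic 30 fps signal with a 1.2 Hz (72 bpm) pulse component.
    fps, seconds = 30, 20
    t = np.arange(fps * seconds) / fps
    signal = 0.05 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.01, t.size)
    print(round(estimate_hr_bpm(signal, fps)))   # prints 72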
Data and/or flows used by system 100 may reside in a single wireless resource or in multiple wireless resources. The following are non-limiting examples of interaction scenarios between the wireless resources depicted in FIG. 8.
Some or all of the data from wireless resources (100, 190, 810, 820) may be wirelessly transmitted 855 to resource 850, which may serve as a central access point for data. System 100 may wirelessly access the data it requires from resource 850. Data 853 from resource 850 may be wirelessly transmitted 855 to any of the other wireless resources as needed. In some examples, data 853, or a portion thereof, comprises one or more of the data 813, 823, 873, or 893. Although not depicted, a wireless network such as a WiFi network, wireless router, cellular network, or WiMAX network may be used to wirelessly connect one or more of the wireless resources with one another.
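One way to picture resource 850 in this role is the following minimal sketch; the in-memory hub and key names are assumptions made for illustration, and an actual deployment would use a real wireless transport such as the WiFi, cellular, or WiMAX networks noted above.

    # Minimal sketch: resource 850 as a central access point to which other
    # resources publish data and from which system 100 fetches what it needs.
    class CentralAccessPoint:
        def __init__(self):
            self._store = {}                    # e.g., data 813, 823, 873, 893

        def publish(self, resource_id, data):
            self._store.setdefault(resource_id, {}).update(data)

        def fetch(self, resource_id, key, default=None):
            return self._store.get(resource_id, {}).get(key, default)

    hub = CentralAccessPoint()                  # stands in for resource 850
    hub.publish("190", {"weight_kg": 72.5})     # bathmat publishes its data
    hub.publish("810", {"hr_bpm": 62.0})        # another resource publishes
    print(hub.fetch("190", "weight_kg"))        # system 100 pulls what it needs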
One or more of the wireless resources depicted in FIG. 8 may process some or all of the data described above using flows 890.
As one example, resource 810 may process data 813 using flows 890 and wirelessly communicate 815 results, recommendations, actions, and the like to resource 100 for presentation on display 110. As another example, resource 850 may include processing hardware (e.g., a server) to process data 853 using flows 890 and wirelessly communicate 855 results, recommendations, actions, and the like to resource 100 for presentation on display 110. System 100 may image 112i the face 112f of user 114, and some or all of the image data (e.g., red 101, green 103, and blue 105 components) may be wirelessly transmitted 178 to another resource, such as 810 or 850, for processing. The results of that processing may be wirelessly transmitted back to system 100, where additional processing may occur and results may be presented on display 110 or on another resource, such as a display of resource 810.
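The round trip described above can be sketched as follows; the frame reduction, serialization format, and function names are assumptions for illustration rather than the document's actual protocol.

    # Sketch: system 100 reduces each image 112i to per-channel means (red
    # 101, green 103, blue 105) and transmits the compact series for remote
    # processing, rather than sending raw frames.
    import json
    import numpy as np

    def reduce_frame(rgb_frame):
        # HxWx3 frame -> [r_mean, g_mean, b_mean], a far smaller payload.
        means = np.asarray(rgb_frame, dtype=float).reshape(-1, 3).mean(axis=0)
        return means.tolist()

    def make_payload(frame_means, fps):
        return json.dumps({"fps": fps, "channel_means": frame_means})

    frames = [np.random.rand(4, 4, 3) for _ in range(3)]  # stand-in images 112i
    payload = make_payload([reduce_frame(f) for f in frames], fps=30)
    print(len(payload), "characters to transmit instead of raw frames")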
The systems, apparatus, and methods of the foregoing examples may be embodied and/or implemented at least in part as a machine configured to receive a non-transitory computer-readable medium storing computer-readable instructions. The instructions may be executed by computer-executable components preferably integrated with the application, server, network, website, web browser, hardware/firmware/software elements of a user computer or electronic device, apparatuses and networks of the type described above, or any suitable combination thereof. The computer-readable instructions may be stored on any suitable computer-readable media such as RAMs, ROMs, Flash memory, EEPROMs, optical devices (CD, DVD, or Blu-Ray), hard drives (HD), solid state drives (SSD), floppy drives, or any other suitable device. The computer-executable component is preferably a processor, but any suitable dedicated hardware device may (alternatively or additionally) execute the instructions.
As a person skilled in the art will recognize from the previous detailed description and from the drawing FIGS. and claims set forth below, modifications and changes may be made to the embodiments of the present application without departing from the scope of this present application as defined in the following claims.
Although the foregoing examples have been described in some detail for purposes of clarity of understanding, the above-described inventive techniques are not limited to the details provided. There are many alternative ways of implementing the above-described techniques or the present application. The disclosed examples are illustrative and not restrictive.
This application claims the benefit of and priority to U.S. Provisional Patent Application Ser. No. 61/644,917, filed on May 9, 2012, having attorney docket number MSSV-P06-PRV, and titled “SYSTEM AND METHOD FOR MONITORING THE HEALTH OF A USER,” which is hereby incorporated by reference in its entirety for all purposes. This application is related to U.S. Provisional Application Ser. No. 61/641,672, filed on May 2, 2012, having attorney docket number MSSV-P04-PRV, and titled “METHOD FOR DETERMINING THE HEART RATE OF A SUBJECT,” which is hereby incorporated by reference in its entirety for all purposes.