The present invention relates generally to electrical and electronic hardware, electromechanical and computing devices. More specifically, techniques related to a combination speaker and light source responsive to states of an organism based on sensor data are described.
Conventional devices for lighting typically do not provide audio playback capabilities, and conventional devices for audio playback (i.e., speakers) typically do not provide light. Although there are conventional speakers equipped with light features for decoration or as part of a user interface, such conventional speakers are typically not configured to provide ambient lighting or to light an environment. Also, conventional speakers typically are not configured to be installed into or powered using a light socket.
Conventional devices for lighting and playing audio also typically lack capabilities for responding automatically to a person's state and environment, particularly in a contextually-meaningful manner.
Thus, what is needed is a solution for a combination speaker and light source responsive to states of an organism based on sensor data without the limitations of conventional techniques.
Various embodiments or examples (“examples”) are disclosed in the following detailed description and the accompanying drawings:
Although the above-described drawings depict various examples of the invention, the invention is not limited by the depicted examples. It is to be understood that, in the drawings, like reference numerals designate like structural elements. Also, it is understood that the drawings are not necessarily to scale.
Various embodiments or examples may be implemented in numerous ways, including as a system, a process, an apparatus, a device, and a method associated with a wearable device structure with enhanced detection by motion sensor. In some embodiments, motion may be detected using an accelerometer that responds to an applied force and produces an output signal representative of the acceleration (and hence in some cases a velocity or displacement) produced by the force. Embodiments may be used to couple or secure a wearable device onto a body part. Techniques described are directed to systems, apparatuses, devices, and methods for using accelerometers, or other devices capable of detecting motion, to detect the motion of an element or part of an overall system. In some examples, the described techniques may be used to accurately and reliably detect the motion of a part of the human body or an element of another complex system. In general, operations of disclosed processes may be performed in an arbitrary order, unless otherwise provided in the claims.
A detailed description of one or more examples is provided below along with accompanying figures. The detailed description is provided in connection with such examples, but is not limited to any particular example. The scope is limited only by the claims and numerous alternatives, modifications, and equivalents are encompassed. Numerous specific details are set forth in the following description in order to provide a thorough understanding. These details are provided for the purpose of example and the described techniques may be practiced according to the claims without some or all of these specific details. For clarity, technical material that is known in the technical fields related to the examples has not been described in detail to avoid unnecessarily obscuring the description.
Physiological information generator 120 is shown to include a sensor selector 122, a motion artifact reduction unit 124, and a physiological characteristic determinator 126. Sensor selector 122 is configured to select a subset of electrodes, and is further configured to use the selected subset of electrodes to acquire physiological characteristics, according to some embodiments. Examples of a subset of electrodes include subset 107, which is composed of electrodes 110d and 110e, and subset 105, which is composed of electrodes 110c, 110d and 110e. More or fewer electrodes can be used. Sensor selector 122 is configured to determine which one or more subsets of electrodes 110 (out of a number of subsets of electrodes 110) are adjacent to a target location. As used herein, the term “target location” can, for example, refer to a region in space from which a physiological characteristic can be determined. A target region can be adjacent to a source of the physiological characteristic, such as blood vessel 102, with which an impedance signal can be captured and analyzed to identify one or more physiological characteristics. The target region can reside in two-dimensional space, such as an area on the skin of a user adjacent to the source of the physiological characteristic, or in three-dimensional space, such as a volume that includes the source of the physiological characteristic. Sensor selector 122 operates to either drive a first signal via a selected subset to a target location, or receive a second signal from the target location, or both. The second signal includes data representing one or more physiological characteristics. 
For example, sensor selector 122 can configure electrode (“D”) 110b to operate as a drive electrode that drives a signal (e.g., an AC signal) into the target location, such as into the skin of a user, and can configure electrode (“S”) 110a to operate as a sink electrode (i.e., a receiver electrode) to receive a second signal from the target location, such as from the skin of the user. In this configuration, sensor selector 122 can drive a current signal via electrode (“D”) 110b into a target location to cause a current to pass through the target location to another electrode (“S”) 110a. In various examples, the target location can be adjacent to or can include blood vessel 102. Examples of blood vessel 102 include a radial artery, an ulnar artery, or any other blood vessel. Array 101 is not limited to being disposed adjacent blood vessel 102 in an arm, but can be disposed on any portion of a user's person (e.g., on an ankle, ear lobe, around a finger or on a fingertip, etc.). Note that each electrode 110 can be configured as either a driver or a sink electrode. Thus, electrode 110b is not limited to being a driver electrode and can be configured as a sink electrode in some implementations. As used herein, the term “sensor” can refer, for example, to a combination of one or more driver electrodes and one or more sink electrodes for determining one or more bioimpedance-related values and/or signals, according to some embodiments.
In some embodiments, sensor selector 122 can be configured to determine (periodically or aperiodically) whether the subset of electrodes 110a and 110b are optimal electrodes 110 for acquiring a sufficient representation of the one or more physiological characteristics from the second signal. To illustrate, consider that electrodes 110a and 110b may be displaced from the target location when, for instance, wearable device 170 is subject to a displacement in a plane substantially perpendicular to blood vessel 102. The displacement of electrodes 110a and 110b may increase the impedance (and/or reactance) of a current path between the electrodes 110a and 110b, or otherwise move those electrodes away from the target location far enough to degrade or attenuate the second signals retrieved therefrom. While electrodes 110a and 110b may be displaced from the target location, other electrodes are displaced to a position previously occupied by electrodes 110a and 110b (i.e., adjacent to the target location). For example, electrodes 110c and 110d may be displaced to a position adjacent to blood vessel 102. In this case, sensor selector 122 operates to determine an optimal subset of electrodes 110, such as electrodes 110c and 110d, to acquire the one or more physiological characteristics. Therefore, regardless of the displacement of wearable device 170 about blood vessel 102, sensor selector 122 can repeatedly determine an optimal subset of electrodes for extracting physiological characteristic information from adjacent a blood vessel. For example, sensor selector 122 can repeatedly test subsets in sequence (or in any other manner) to determine which one is disposed adjacent to a target location. For example, sensor selector 122 can select at least one of subset 109a, subset 109b, subset 109c, and other like subsets, as the subset from which to acquire physiological data.
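The repeated subset-testing behavior described above can be sketched as follows. This is an illustrative sketch only: the function names (`score_subset`, `select_optimal_subset`) and the peak-to-peak scoring rule are assumptions for illustration, not part of the disclosure, which does not prescribe a particular selection algorithm.

```python
def score_subset(samples):
    """Score a candidate electrode subset by signal strength.

    Here the score is simply the peak-to-peak amplitude of the samples
    acquired through the subset; a subset adjacent to the target location
    is assumed to yield a stronger physiological-related signal.
    """
    return max(samples) - min(samples)


def select_optimal_subset(subsets, acquire):
    """Test each candidate subset in sequence and keep the best scorer.

    `subsets` maps a subset name (e.g., "109a") to its (driver, sink)
    electrode pair; `acquire` is a callable that drives the pair and
    returns the sampled second signal.
    """
    best_name, best_score = None, float("-inf")
    for name, pair in subsets.items():
        score = score_subset(acquire(pair))
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

In this sketch the selector simply keeps the subset with the strongest response; a real implementation could equally weight signal-to-noise ratio or agreement with an expected pulse waveform.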
According to some embodiments, array 101 of electrodes can be configured to acquire one or more physiological characteristics from multiple sources, such as multiple blood vessels. To illustrate, consider that, for example, blood vessel 102 is an ulnar artery adjacent electrodes 110a and 110b and a radial artery (not shown) is adjacent electrodes 110c and 110d. With multiple sources of physiological characteristic information being available, there are thus multiple target locations. Therefore, sensor selector 122 can select multiple subsets of electrodes 110, each of which is adjacent to one of a multiple number of target locations. Physiological information generator 120 then can use signal data from each of the multiple sources to confirm accuracy of data acquired, or to use one subset of electrodes (e.g., associated with a radial artery) when one or more other subsets of electrodes (e.g., associated with an ulnar artery) are unavailable.
Note that the second signal received into electrode 110a can be composed of a physiological-related signal component and a motion-related signal component, if array 101 is subject to motion. The motion-related component includes motion artifacts or noise induced into an electrode 110a. Motion artifact reduction unit 124 is configured to receive motion-related signals generated at one or more motion sensors 160, and is further configured to receive at least the motion-related signal component of the second signal. Motion artifact reduction unit 124 operates to eliminate the motion-related signal component, or to reduce the magnitude of the motion-related signal component relative to the magnitude of the physiological-related signal component, thereby yielding as an output the physiological-related signal component (or an approximation thereto). Thus, motion artifact reduction unit 124 can reduce the magnitude of the motion-related signal component (i.e., the motion artifact) by an amount associated with the motion-related signal generated by one or more accelerometers to yield the physiological-related signal component.
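One way the artifact reduction described above might be realized is to subtract a scaled copy of the accelerometer reference from the received signal. The least-squares gain estimate below is an illustrative assumption; the disclosure does not prescribe a particular cancellation method.

```python
def reduce_motion_artifact(sensor, accel):
    """Subtract the best least-squares fit of `accel` from `sensor`.

    Fits sensor ~ g * accel and returns sensor - g * accel, which
    approximates the physiological-related signal component when the
    motion artifact is proportional to the accelerometer signal.
    """
    num = sum(s * a for s, a in zip(sensor, accel))
    den = sum(a * a for a in accel) or 1.0  # guard: no motion reference
    gain = num / den
    return [s - gain * a for s, a in zip(sensor, accel)]
```

When the physiological component is uncorrelated with the motion reference, the fitted gain matches the artifact coupling and the residual is (approximately) the physiological-related signal component alone.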
Physiological characteristic determinator 126 is configured to receive the physiological-related signal component of the second signal and is further configured to process (e.g., digitally) the signal data including one or more physiological characteristics to derive physiological signals, such as either a heart rate (“HR”) signal or a respiration signal, or both. For example, physiological characteristic determinator 126 is configured to amplify and/or filter the physiological-related component signals (e.g., at different frequency ranges) to extract certain physiological signals. According to various embodiments, a heart rate signal can include (or can be based on) a pulse wave. A pulse wave includes systolic components based on an initial pulse wave portion generated by a contracting heart, and diastolic components based on a reflected wave portion generated by the reflection of the initial pulse wave portion from other limbs. In some examples, an HR signal can include or otherwise relate to an electrocardiogram (“ECG”) signal. Physiological characteristic determinator 126 is further configured to calculate other physiological characteristics based on the acquired one or more physiological characteristics. Optionally, physiological characteristic determinator 126 can use other information to calculate or derive physiological characteristics. Examples of the other information include motion-related data, including the type of activity in which the user is engaged, such as running or sleeping; location-related data; environment-related data, such as temperature, atmospheric pressure, and noise levels; and any other type of sensor data, including stress-related levels and activity levels of the wearer.
In some cases, a motion sensor 160 can be disposed adjacent to the target location (not shown) to determine a physiological characteristic via motion data indicative of movement of blood vessel 102 through which blood pulses to identify a heart rate-related physiological characteristic. Motion data, therefore, can be used to supplement impedance determinations to obtain the physiological characteristic. Further, one or more motion sensors 160 can also be used to determine the orientation of wearable device 170, and relative movement of the same, to determine or predict a target location. By predicting a target location, sensor selector 122 can use the predicted target location to begin the selection of optimal subsets of electrodes 110 in a manner that reduces the time to identify a target location.
In view of the foregoing, the functions and/or structures of array 101 of electrodes and physiological information generator 120, as well as their components, can facilitate the acquisition and derivation of physiological characteristics in situ—during which a user is engaged in physical activity that imparts motion on a wearable device, thereby exposing the array of electrodes to motion-related artifacts. Physiological information generator 120 is configured to dampen or otherwise negate the motion-related artifacts from the signals received from the target location, thereby facilitating the provision of heart-related activity and respiration activity to the wearer of wearable device 170 in real-time (or near real-time). As such, the wearer of wearable device 170 need not be stationary or otherwise interrupt an activity in which the wearer is engaged to acquire health-related information. Also, array 101 of electrodes 110 and physiological information generator 120 are configured to accommodate displacement or movement of wearable device 170 about, or relative to, one or more target locations. For example, if the wearer intentionally rotates wearable device 170 about, for example, the wrist of the user, then initial subsets of electrodes 110 adjacent to the target locations (i.e., before the rotation) are moved further away from the target location. As another example, the motion of the wearer (e.g., impact forces experienced during running) may cause wearable device 170 to travel about the wrist. As such, physiological information generator 120 is configured to determine repeatedly whether to select other subsets of electrodes 110 as optimal subsets of electrodes 110 for acquiring physiological characteristics. For example, physiological information generator 120 can be configured to cycle through multiple combinations of driver electrodes and sink electrodes (e.g., subsets 109a, 109b, 109c, etc.) to determine optimal subsets of electrodes. 
In some embodiments, electrodes 110 in array 101 facilitate physiological data capture irrespective of the gender of the wearer. For example, electrodes 110 can be disposed in array 101 to accommodate data collection of a male or female wearer irrespective of gender-specific physiological dimensions. In at least one embodiment, data representing the gender of the wearer can be accessible to assist physiological information generator 120 in selecting the optimal subsets of electrodes 110. While electrodes 110 are depicted as being equally-spaced, array 101 is not so limited. In some embodiments, electrodes 110 can be clustered more densely along portions of array 101 at which blood vessels 102 are more likely to be adjacent. For example, electrodes 110 may be clustered more densely at approximate portions 172 of wearable device 170, whereby approximate portions 172 are more likely to be adjacent a radial or ulnar artery than other portions. While wearable device 170 is shown to have an elliptical-like shape, it is not limited to such a shape and can have any shape.
In some instances, a wearable device 170 can select multiple subsets of electrodes to enable data capture using a second subset adjacent to a second target location when a first subset adjacent a first target location is unavailable to capture data. For example, a portion of wearable device 170 including the first subset of electrodes 110 (initially adjacent to a first target location) may be displaced to a position farther away in a radial direction away from a blood vessel, such as depicted by a radial distance 392 of
In addition, accelerometers 160 can be used to replace the implementation of subsets of electrodes to detect motion associated with pulsing blood flow, which, in turn, can be indicative of whether oxygen-rich blood is present or not present. Or, accelerometers 160 can be used to supplement the data generated by one or more bioimpedance signals acquired by array 101. Accelerometers 160 can also be used to determine the orientation of wearable device 170 and relative movement of the same to determine or predict a target location. Sensor selector 122 can use the predicted target location to begin the selection of the optimal subsets of electrodes 110, which likely decreases the time to identify a target location. Electrodes 110 of array 101 can be disposed within a material constituting, for example, a housing, according to some embodiments. Therefore, electrodes 110 can be protected from the environment and, thus, need not be subject to corrosive elements. In some examples, one or more electrodes 110 can have at least a portion of a surface exposed. As electrodes 110 of array 101 are configured to couple capacitively to a target location, electrodes 110 thereby facilitate high impedance signal coupling so that the first and second signals can pass through fabric and hair. As such, electrodes 110 need not be limited to direct contact with the skin of a wearer. Further, array 101 of electrodes 110 need not circumscribe a limb or source of physiological characteristics. An array 101 can be linear in nature, or can be configurable to include linear and curvilinear portions.
In some embodiments, wearable device 170 can be in communication (e.g., wired or wirelessly) with a mobile device 180, such as a mobile phone or computing device. In some cases, mobile device 180, or any networked computing device (not shown) in communication with wearable device 170 or mobile device 180, can provide at least some of the structures and/or functions of any of the features described herein. As depicted in
For example, physiological information generator 120 and any of its one or more components, such as sensor selector 122, motion artifact reduction unit 124, and physiological characteristic determinator 126, can be implemented in one or more computing devices (i.e., any mobile computing device, such as a wearable device or mobile phone, whether worn or carried) that include one or more processors configured to execute one or more algorithms in memory. Thus, at least some of the elements in
As hardware and/or firmware, the above-described structures and techniques can be implemented using various types of programming or integrated circuit design languages, including hardware description languages, such as any register transfer language (“RTL”) configured to design field-programmable gate arrays (“FPGAs”), application-specific integrated circuits (“ASICs”), multi-chip modules, or any other type of integrated circuit. For example, physiological information generator 120, including one or more components, such as sensor selector 122, motion artifact reduction unit 124, and physiological characteristic determinator 126, can be implemented in one or more computing devices that include one or more circuits. Thus, at least one of the elements in
According to some embodiments, the term “circuit” can refer, for example, to any system including a number of components through which current flows to perform one or more functions, the components including discrete and complex components. Examples of discrete components include transistors, resistors, capacitors, inductors, diodes, and the like, and examples of complex components include memory, processors, analog circuits, digital circuits, and the like, including field-programmable gate arrays (“FPGAs”) and application-specific integrated circuits (“ASICs”). Therefore, a circuit can include a system of electronic components and logic components (e.g., logic configured to execute instructions, such as a group of executable instructions of an algorithm, which, thus, is a component of a circuit). According to some embodiments, the term “module” can refer, for example, to an algorithm or a portion thereof, and/or logic implemented in either hardware circuitry or software, or a combination thereof (i.e., a module can be implemented as a circuit). In some embodiments, algorithms and/or the memory in which the algorithms are stored are “components” of a circuit. Thus, the term “circuit” can also refer, for example, to a system of components, including algorithms. These can be varied and are not limited to the examples or descriptions provided.
Referring to
Physiological characteristic determinator 226 can derive other physiological characteristics using other data generated or accessible by wearable device 209, such as the type of activity in which the wearer is engaged; environmental factors, such as temperature, location, etc.; whether the wearer is subject to any chronic illnesses or conditions; and any other health or wellness-related information. For example, if the wearer is diabetic or has Parkinson's disease, motion sensor 221 can be used to detect tremors related to the wearer's ailment. With the detection of small, but rapid movements of a wearable device that coincide with a change in heart rate (e.g., a change in an HR signal) and/or breathing, physiological information generator 220 may generate data (e.g., an alarm) indicating that the wearer is experiencing tremors. For a diabetic, the wearer may experience shakiness because the blood-sugar level is extremely low (e.g., it drops below a range of 38 to 42 mg/dl). Below these levels, the brain may become unable to control the body. Moreover, if the arms of a wearer shake with sufficient motion to displace a subset of electrodes from being adjacent a target location, the array of electrodes, as described herein, facilitates continued monitoring of a heart rate by repeatedly selecting subsets of electrodes that are positioned optimally (e.g., adjacent a target location) for receiving robust and accurate physiological-related signals.
To illustrate the resiliency of a wearable device to maintain an ability to monitor physiological characteristics over one or more displacements of the wearable device (e.g., around or along wrist 303), consider that a sensor selector initially configures electrodes 310b, 310d, 310f, 310h, and 310j as driver electrodes and electrodes 310a, 310c, 310e, 310g, 310i, and 310k as sink electrodes. Further consider that the sensor selector identifies a first subset of electrodes that includes electrodes 310b and 310c as a first optimal subset, and also identifies a second subset of electrodes that includes electrodes 310f and 310g as a second optimal subset. Note that electrodes 310b and 310c are adjacent target location 304a and electrodes 310f and 310g are adjacent to target location 304b. These subsets are used to periodically (or aperiodically) monitor the signals from electrodes 310c and 310g, until the first and second subsets are no longer optimal (e.g., when movement of the wearable device displaces the subsets relative to the target locations). Note that the functionality of driver and sink electrodes for electrodes 310b, 310c, 310f, and 310g can be reversed (e.g., electrodes 310c and 310g can be configured as drive electrodes).
Next, consider that sensor selector 322 of
In some embodiments, a target location determinator 538 is configured to initiate the above-described sensor selection mode to determine a subset of electrodes 510 adjacent a target location. Further, target location determinator 538 can also track displacements of a wearable device in which array 501 resides based on motion data from accelerometer 540. For example, target location determinator 538 can be configured to determine an optimal subset if the initially-selected electrodes are displaced farther away from the target location. In sensor selecting mode, target location determinator 538 can be configured to select another subset, if necessary, by beginning the capture of data samples at electrodes for the last known subset adjacent to the target location, and progressing to other nearby subsets to either confirm the initial selection of electrodes or to select another subset. In some examples, orientation of the wearable device, based on accelerometer data (e.g., a direction of gravity), also can be used to select a subset of electrodes 510 for evaluation as an optimal subset. Motion determinator 536 is configured to detect whether there is an amount of motion associated with a displacement of the wearable device. As such, motion determinator 536 can detect motion and generate a signal to indicate that the wearable device has been displaced, after which signal controller 530 can determine the selection of a new subset that is more closely situated near a blood vessel than other subsets, for example. Also, motion determinator 536 can cause signal controller 530 to disable data capturing during periods of extreme motion (e.g., during which relatively large amounts of motion artifacts may be present) and to enable data capturing during moments when there is less than an extreme amount of motion (e.g., when a tennis player pauses before serving).
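The strategy of resuming selection from the last known good subset and progressing outward to nearby subsets can be sketched as below. The ring-ordered outward search, the `quality` callable, and the threshold are illustrative assumptions, not details taken from the disclosure.

```python
def reselect_subset(last_index, subsets, quality, threshold):
    """Search outward from `last_index` over `subsets` in ring order.

    Samples the last known subset first, then its neighbors at
    increasing distance in both directions. The first subset whose
    quality score meets `threshold` is selected, which either confirms
    the initial selection or picks a nearby replacement.
    """
    n = len(subsets)
    for offset in range(n):
        for idx in ((last_index + offset) % n, (last_index - offset) % n):
            if quality(subsets[idx]) >= threshold:
                return idx
    return None  # no subset currently meets the threshold
```

Starting at the last known position mirrors the described behavior of confirming the prior selection before testing other subsets, and it shortens the search when the device has shifted only slightly.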
Data repository 542 can include data representing the gender of the wearer, which is accessible by signal controller 530 in determining the electrodes in a subset.
In some embodiments, signal driver 532 may be a constant current source including an operational amplifier configured as an amplifier to generate, for example, 100 μA of alternating current (“AC”) at various frequencies, such as 50 kHz. Note that signal driver 532 can deliver any magnitude of AC at any frequency or combination of frequencies (e.g., a signal composed of multiple frequencies). For example, signal driver 532 can generate magnitudes (or amplitudes) between 50 μA and 200 μA. Also, signal driver 532 can generate AC signals at frequencies from below 10 kHz to 550 kHz, or greater. According to some embodiments, multiple frequencies may be used as drive signals either individually or combined into a signal composed of the multiple frequencies. In some embodiments, signal receiver 534 may include a differential amplifier and a gain amplifier, both of which can include operational amplifiers.
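A drive signal composed of multiple frequencies, as permitted above, might be constructed digitally as follows. The sample rate, equal per-component scaling, and function name are assumptions for illustration; the disclosure describes an analog constant current source, not this synthesis.

```python
import math


def drive_signal(frequencies_hz, amplitude_amps, sample_rate_hz, n_samples):
    """Sum equal-amplitude sinusoids at the given frequencies.

    Each component is scaled to amplitude_amps / len(frequencies_hz) so
    the composite drive current stays within the intended budget
    (e.g., 100 uA total at 50 kHz and 100 kHz).
    """
    per = amplitude_amps / len(frequencies_hz)
    return [
        sum(per * math.sin(2 * math.pi * f * k / sample_rate_hz)
            for f in frequencies_hz)
        for k in range(n_samples)
    ]
```

Driving at several frequencies at once would let a receiver separate the responses by frequency, probing tissue impedance at multiple depths in a single acquisition.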
Motion artifact reduction unit 524 is configured to subtract motion artifacts from a raw sensor signal received into signal receiver 534 to yield the physiological-related signal components for input into physiological characteristic determinator 526. Physiological characteristic determinator 526 can include one or more filters to extract one or more physiological signals from the raw physiological signal that is output from motion artifact reduction unit 524. A first filter can be configured for filtering frequencies, for example, between 0.8 Hz and 3 Hz to extract an HR signal, and a second filter can be configured for filtering frequencies between 0 Hz and 0.5 Hz to extract a respiration signal from the physiological-related signal component. Physiological characteristic determinator 526 includes a biocharacteristic calculator that is configured to calculate physiological characteristics 550, such as VO2 max, based on extracted signals from array 501.
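The two filter bands described above (roughly 0.8-3 Hz for heart rate, 0-0.5 Hz for respiration) can be sketched with a simple FFT-domain mask. The FFT-mask approach and function names are illustrative choices; the disclosure does not specify a filter topology.

```python
import numpy as np


def bandpass(signal, sample_rate_hz, low_hz, high_hz):
    """Zero out FFT bins outside [low_hz, high_hz] and transform back."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))


def extract_hr_and_respiration(raw, sample_rate_hz):
    """Split the raw physiological signal into the two bands above."""
    hr = bandpass(raw, sample_rate_hz, 0.8, 3.0)     # ~48-180 bpm
    resp = bandpass(raw, sample_rate_hz, 0.0, 0.5)   # ~0-30 breaths/min
    return hr, resp
```

In a streaming wearable implementation, causal IIR filters (e.g., Butterworth sections) over the same bands would be the more typical choice than block FFT processing.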
According to some examples, computing platform 800 performs specific operations by processor 804 executing one or more sequences of one or more instructions stored in system memory 806, and computing platform 800 can be implemented in a client-server arrangement, peer-to-peer arrangement, or as any mobile computing device, including smart phones and the like. Such instructions or data may be read into system memory 806 from another non-transitory computer readable medium, such as storage device 808. In some examples, hard-wired circuitry may be used in place of or in combination with software instructions for implementation. Instructions may be embedded in software or firmware. The term “non-transitory computer readable medium” refers to any tangible medium that participates in providing instructions to processor 804 for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks and the like. Volatile media includes dynamic memory, such as system memory 806.
Common forms of non-transitory computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read. Instructions may further be transmitted or received using a transmission medium. The term “transmission medium” may include any tangible or intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such instructions. Transmission media includes coaxial cables, copper wire, and fiber optics, including wires that comprise bus 802 for transmitting a computer data signal.
In some examples, execution of the sequences of instructions may be performed by computing platform 800. According to some examples, computing platform 800 can be coupled by communication link 821 (e.g., a wired network, such as LAN, PSTN, or any wireless network) to any other processor to perform the sequence of instructions in coordination with (or asynchronous to) one another. Computing platform 800 may transmit and receive messages, data, and instructions, including program code (e.g., application code) through communication link 821 and communication interface 813. Received program code may be executed by processor 804 as it is received, and/or stored in memory 806 or other non-volatile storage for later execution.
In the example shown, system memory 806 can include various modules that include executable instructions to implement functionalities described herein. As shown, system memory 806 includes a physiological information generator module 854 configured to determine physiological information relating to a user who is wearing a wearable device. Physiological information generator module 854 can include a sensor selector module 856, a motion artifact reduction unit module 858, and a physiological characteristic determinator 859, any of which can be configured to provide one or more functions described herein.
In some embodiments, signal receiver 934 is configured to receive electrical signals representing acoustic-related information from a microphone 911. An example of the acoustic-related information includes data representing a heartbeat or a heart rate as sensed by microphone 911, such that sensor signal 925 can be an electrical signal derived from acoustic energy associated with a sensed physiological signal, such as a pulse wave or heartbeat. Wearable device 909 can include microphone 911 configured to contact (or to be positioned adjacent to) the skin of the wearer, whereby microphone 911 is adapted to receive sound and acoustic energy generated by the wearer (e.g., the source of sounds associated with physiological information). Microphone 911 can also be disposed in wearable device 909. According to some embodiments, microphone 911 can be implemented as a skin surface microphone (“SSM”), or a portion thereof. An SSM can be an acoustic microphone configured to respond to acoustic energy originating from human tissue rather than airborne acoustic sources. As such, an SSM facilitates relatively accurate detection of physiological signals through a medium for which the SSM can be adapted (e.g., relative to the acoustic impedance of human tissue). Examples of SSM structures in which piezoelectric sensors can be implemented (e.g., rather than a diaphragm) are described in U.S. patent application Ser. No. 11/199,856, filed on Aug. 8, 2005, and U.S. patent application Ser. No. 13/672,398, filed on Nov. 8, 2012, both of which are incorporated by reference. As used herein, the term human tissue can refer, at least in some examples, to skin, muscle, blood, or other tissue. In some embodiments, a piezoelectric sensor can constitute an SSM. Data representing sensor signal 925 can include acoustic signal information received from an SSM or other microphone, according to some examples.
According to some embodiments, physiological signal extractor 936 is configured to receive sensor signal 925 and data representing sensing information 915 from another, secondary sensor 913. In some examples, sensor 913 is a motion sensor (e.g., an accelerometer) configured to sense accelerations in one or more axes and to generate motion signals indicating an amount of motion and/or acceleration. Note, however, that sensor 913 need not be so limited and can be any other sensor. Examples of suitable sensors are disclosed in U.S. Non-Provisional patent application Ser. No. 13/492,857, filed on Jun. 9, 2012, which is incorporated by reference. Further, physiological signal extractor 936 is configured to identify a pattern (e.g., a motion “signature”), based on motion signal data generated by sensor 913, that can be used to decompose sensor signal 925 into motion signal components 937a and physiological signal components 937b. As shown, motion signal components 937a and physiological signal components 937b can correspondingly be used by motion artifact reduction unit 924, or any other structure and/or function described herein, to form motion data 930 and one or more physiological data signals, such as physiological characteristic signals 940, 942, and 944. Physiological characteristic determinator 926 is configured to receive physiological signal components 937b of a raw physiological signal, and to filter different physiological signal components to form physiological characteristic signal(s). For example, physiological characteristic determinator 926 can be configured to analyze the physiological signal components to determine a physiological characteristic, such as a heartbeat, a heart rate, a pulse wave, a respiration rate, a Mayer wave, or other like physiological characteristics.
Physiological characteristic determinator 926 is also configured to generate a physiological characteristic signal that includes data representing the physiological characteristic during one or more portions of a time interval during which motion is present. Examples of physiological characteristic signals include data representing one or more of a heart rate 940, a respiration rate 942, Mayer wave frequencies 944, and any other sensed characteristic, such as a galvanic skin response (“GSR”) or skin conductance. Note that the term “heart rate” can refer, at least in some embodiments, to any heart-related physiological signal, including, but not limited to, heart beats, heart beats per minute (“bpm”), pulse, and the like. In some examples, the term “heart rate” can also refer to heart rate variability (“HRV”), which describes the variation of the time interval between heartbeats. HRV describes a variation in the beat-to-beat interval and can be expressed in terms of frequency components (e.g., low frequency and high frequency components), at least in some cases.
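The beat-to-beat arithmetic above can be illustrated with a brief sketch. The interval values and function names are hypothetical, and RMSSD is a standard time-domain HRV statistic used here for illustration, not a term taken from the description:

```python
# Sketch: deriving a heart rate and a simple HRV statistic from
# beat-to-beat ("RR") intervals. All values are illustrative.
def heart_rate_bpm(rr_intervals_s):
    """Mean heart rate in beats per minute from RR intervals (seconds)."""
    mean_rr = sum(rr_intervals_s) / len(rr_intervals_s)
    return 60.0 / mean_rr

def rmssd_ms(rr_intervals_s):
    """Root mean square of successive RR-interval differences, in ms;
    a common time-domain heart rate variability (HRV) measure."""
    diffs = [(b - a) * 1000.0 for a, b in zip(rr_intervals_s, rr_intervals_s[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

rr = [0.80, 0.82, 0.78, 0.85, 0.79]   # seconds between successive heartbeats
hr = heart_rate_bpm(rr)               # roughly 74 bpm for these intervals
hrv = rmssd_ms(rr)                    # larger values indicate more variability
```

The frequency-domain description mentioned above (low- and high-frequency components) would instead analyze a spectrum of the interval series; this sketch covers only the time-domain view.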
In view of the foregoing, the functions and/or structures of motion artifact reduction unit 924, as well as its components and/or neighboring components, can facilitate the extraction and derivation of physiological characteristics in situ, that is, while a user is engaged in physical activity that imparts motion on a wearable device, whereby biometric sensors, such as electrodes, may receive bioimpedance sensor signals that are exposed to, or include, motion-related artifacts. For example, physiological signal extractor 936 can be configured to receive the sensor signal that includes data representing physiological characteristics during one or more portions of the time interval in which the wearable device is in motion. User 903 need not remain immobile for physiological characteristic signals to be determined. Therefore, user 903 can receive heart rate information, respiration information, and other physiological information during physical activity or during periods of time in which user 903 is substantially or relatively active. Further, according to various embodiments, physiological signal extractor 936 facilitates the sensing of physiological characteristic signals at a distal end of a limb or appendage, such as at a wrist, of user 903. Therefore, various implementations of motion artifact reduction unit 924 can enable the detection of physiological signals at the extremities of user 903, with minimal or reduced effects of motion-related artifacts and their influence on the desired measured physiological signal. By facilitating the detection of physiological signals at the extremities, wearable device 909 can assist user 903 in detecting oncoming ailments or conditions of the person's body (e.g., oncoming tremors, states of sleep, etc.) more readily than at other portions of the person's body, such as proximal portions of a limb or appendage.
In accordance with some embodiments, physiological signal extractor 936 can include an offset generator (not shown). An offset generator can be configured to determine an amount of motion that is associated with the motion sensor signal, such as an accelerometer signal, and to adjust the dynamic range of operation of an amplifier, where the amplifier is configured to receive a sensor signal responsive to the amount of motion. An example of such an amplifier is an operational amplifier configured as a front-end amplifier to enhance, for example, the signal-to-noise ratio. In situations in which the motion-related artifacts induce a rapidly-increasing amplitude onto the sensor signal, the amplifier may be driven into saturation, which, in turn, causes clipping of the output of the amplifier. The offset generator also is configured to apply an offset value to an amplifier to modify the dynamic range of the amplifier so as to reduce or negate large magnitudes of motion artifacts that may otherwise influence the amplitude of the sensor signal. Examples of an offset generator are described in relation to
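The offset idea can be sketched numerically as follows. The function name, gain, and rail values are illustrative assumptions; an actual offset generator would operate in analog or mixed-signal hardware as described above:

```python
# Sketch: estimate the motion-induced baseline of a sensor signal and
# subtract it before amplification so the amplified output stays inside
# the amplifier rails instead of clipping. All names and values are
# illustrative, not from the source.
def amplify_with_offset(samples, gain, rail=1.0):
    offset = sum(samples) / len(samples)      # motion-related baseline estimate
    out = []
    for s in samples:
        v = gain * (s - offset)               # recenter, then amplify
        out.append(max(-rail, min(rail, v)))  # hard limits model saturation
    return out

# A small signal riding on a 0.5 baseline: amplifying without the
# offset (gain 10) would pin every sample at the +1.0 rail.
centered = amplify_with_offset([0.51, 0.49, 0.52, 0.48], gain=10.0)
```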
Data correlator 1142 is configured to receive the raw sensor signal and the selected stream of accelerometer data. Data correlator 1142 operates to correlate the sensor signal and the selected motion sensor signal. For example, data correlator 1142 can scale the magnitudes of the selected motion sensor signal to an equivalent range for the sensor signal. In some embodiments, data correlator 1142 can provide for the transformation of the signal data between the bioimpedance sensor signal space and the acceleration data space. Such a transformation can be optionally performed to make the motion sensor signals, especially the selected motion sensor signal, equivalent to the bioimpedance sensor signal. In some examples, a cross-correlation function or an autocorrelation function can be implemented to correlate the sets of data representing the motion sensor signal and the sensor signal.
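The scaling and correlation steps can be sketched with a zero-lag normalized cross-correlation over two equal-length sample streams. This is a simplified stand-in for the transformations data correlator 1142 is described as performing; the function names are hypothetical:

```python
# Illustrative sketch of scaling two signals from different sensor
# spaces to an equivalent range and correlating them.
def normalize(x):
    """Scale a signal to zero mean and unit peak magnitude so signals
    from different sensor spaces become comparable."""
    m = sum(x) / len(x)
    centered = [v - m for v in x]
    peak = max(abs(v) for v in centered) or 1.0
    return [v / peak for v in centered]

def cross_correlation(a, b):
    """Zero-lag normalized cross-correlation of two equal-length signals;
    returns a value in [-1, 1]."""
    a, b = normalize(a), normalize(b)
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5 or 1.0
    return num / den
```

A full implementation would evaluate the correlation over a range of lags; the zero-lag case is shown only to make the normalization step concrete.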
Parameter estimator 1144 is configured to receive the selected motion sensor signal from stream selector 1140 and the correlated data signal from data correlator 1142. In some examples, parameter estimator 1144 is configured to estimate parameters, such as coefficients, for filtering out physiological characteristic signals from motion-related artifact signals. For example, the selected motion sensor signal, such as an accelerometer signal, generally does not include biologically derived signal data, and, as such, one or more coefficients for physiological signal components can be reduced or effectively determined to be zero. Separation filter 1146 is configured to receive the coefficients as well as data correlated by data correlator 1142 and the selected motion sensor signal from stream selector 1140. In operation, separation filter 1146 is configured to recover the sources of the signals. For example, separation filter 1146 can generate a recovered physiological characteristic signal (“P”) 1160 and a recovered motion signal (“M”) 1162. Separation filter 1146, therefore, operates to separate a sensor signal including both biological signals and motion-related artifact signals into additive or subtractable components. Recovered signals 1160 and 1162 can be used to further determine one or more physiological characteristic signals, such as a heart rate, a respiration rate, and a Mayer wave.
Window validator 1143 is optional, according to some embodiments. Window validator 1143 is configured to receive motion sensor signal data to determine a duration time (i.e., a valid window of time) in which sensor signal data can be predicted to be valid (i.e., durations in which the magnitude of motion-related artifacts signals likely do not affect the physiological signals). In some cases, window validator 1143 is configured to predict a saturation condition for a front-end amplifier (or any other condition, such as a motion-induced condition), whereby the sensor signal data is deemed invalid.
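The window-validation idea can be sketched as a threshold test over motion samples. The window length and motion limit below are hypothetical; an actual window validator 1143 might instead predict amplifier saturation as described above:

```python
# Illustrative sketch: find spans of sensor data predicted to be valid,
# i.e., windows in which motion stays below a limit so motion-related
# artifacts likely do not corrupt the physiological signal.
# Threshold and minimum window length are assumed values.
def valid_windows(motion, window=4, limit=0.5):
    """Return (start, end) index pairs of runs of at least `window`
    samples whose motion magnitude stays below `limit`."""
    spans, start = [], None
    for i, m in enumerate(motion):
        if abs(m) < limit:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= window:
                spans.append((start, i))
            start = None
    if start is not None and len(motion) - start >= window:
        spans.append((start, len(motion)))
    return spans
```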
Further to flow 1300, consider two statistically independent non-Gaussian source signals S1 and S2, and two observation points O1 and O2. In some examples, observation points O1(t) and O2(t) are time-indexed samples associated with observed samples from the same sensor, at different locations. For example, O1(t) and O2(t) can represent observed samples from a first bioimpedance sensor (or electrode) and from a second bioimpedance sensor (or electrode), respectively. In other examples, O1(t) and O2(t) can represent observed samples from a first sensor, such as a bioimpedance sensor, and a second sensor, such as an accelerometer, respectively. At 1308, data associated with one or more of the two observation points O1 and O2 are preprocessed. For example, the data for the observation points can be centered, whitened, and/or reduced in dimensions, wherein preprocessing may reduce the complexity of determining the source signals and/or reduce the number of parameters or coefficients to be estimated. An example of a centering process includes subtracting the mean of the data from each sample to translate the samples about a center. An example of a whitening process is eigenvalue decomposition. In some embodiments, preprocessing at 1308 can be different from, or similar to, the correlation of data as described herein, at least in some cases.
Observation points O1(t) and O2(t) can be expressed as follows:
O1(t)=a11S1+a12S2 (Eqn. 1)
O2(t)=a21S1+a22S2 (Eqn. 2)
where O=A×S, which represent matrices, and a11, a12, a21, and a22 represent parameters (or coefficients) that can be estimated. At 1310, the above Equations 1 and 2 can be used to determine components for generating two (2) statistically-independent source signals, whereby A and S can be extracted from O. In some examples, A and S can be extracted iteratively, based on a user-specified error rate and/or maximum number of iterations, among other things. Further, coefficients a11, a12, a21, and a22 can be modified such that one or more coefficients for the physiological and biological components are set to or near zero, as the accelerometer signal generally does not include physiological signals. In at least one embodiment, parameter estimator 1144 of
In some examples, a matrix can be formed based on the estimated coefficients, at 1310. At least some of the coefficients are configured to attenuate values of the physiological signal components for the motion sensor signal. An example of the matrix is a mixing matrix. Further, the matrix of coefficients can be inverted to form an inverted mixing matrix (e.g., to form an “unmixing” matrix). The inverted mixing matrix of coefficients can be applied (e.g., iteratively) to the samples of observation points O1(t) and O2(t) to recover the source signals, such as a recovered physiological characteristic signal and a recovered motion signal (e.g., a recovered motion-related artifact signal). In at least one embodiment, separation filter 1146 of
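Equations 1 and 2 and the inverted mixing matrix can be illustrated with a two-source sketch. A known 2×2 mixing matrix and toy signals are assumed here; in the described system, the coefficients would be estimated (e.g., iteratively) rather than given:

```python
# Sketch: recovering two source signals from two observations by
# inverting a 2x2 mixing matrix A, per O = A x S (Eqns. 1 and 2).
# Matrix values and signals are illustrative assumptions.
def invert_2x2(a11, a12, a21, a22):
    """Inverse of a 2x2 matrix; the 'unmixing' matrix."""
    det = a11 * a22 - a12 * a21
    return (a22 / det, -a12 / det, -a21 / det, a11 / det)

def recover_sources(o1, o2, A):
    """Apply the inverted mixing matrix to observation samples."""
    w11, w12, w21, w22 = invert_2x2(*A)
    s1 = [w11 * x + w12 * y for x, y in zip(o1, o2)]
    s2 = [w21 * x + w22 * y for x, y in zip(o1, o2)]
    return s1, s2

# Mix two toy sources, then unmix them.
S1 = [0.0, 1.0, 0.0, -1.0]           # "physiological" source
S2 = [0.5, 0.5, -0.5, -0.5]          # "motion artifact" source
A = (1.0, 0.8, 0.2, 1.0)             # mixing coefficients a11, a12, a21, a22
O1 = [A[0] * x + A[1] * y for x, y in zip(S1, S2)]
O2 = [A[2] * x + A[3] * y for x, y in zip(S1, S2)]
P, M = recover_sources(O1, O2, A)    # P recovers S1, M recovers S2
```

With estimated rather than exact coefficients, the recovered signals P and M would approximate, rather than equal, the sources.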
As shown, physiological state determinator 1812 includes a sleep manager 1814, an anomalous state manager 1816, and an affective state manager 1818. Physiological state determinator 1812 is configured to receive various physiological characteristic signals and to determine a physiological state of a user, such as user 1802. Physiological states include, but are not limited to, states of sleep, wakefulness, a deviation from a normative physiological state (i.e., an anomalous state), and an affective state (i.e., mood, feeling, emotion, etc.). Sleep manager 1814 is configured to detect a stage of sleep as a physiological state, the stages of sleep including REM sleep and non-REM sleep, such as light sleep and deep sleep. Sleep manager 1814 is also configured to predict the onset of, or change into or between, different stages of sleep, even if such changes are imperceptible to user 1802. Sleep manager 1814 can detect that user 1802 is transitioning from a wakefulness state to a sleep state and, for example, can generate a vibratory response (i.e., generated by vibration) or any other alert to user 1802. Sleep manager 1814 also can predict a sleep stage transition to either alert user 1802 or to disable such an alert if, for example, the alert is an alarm (i.e., a wake-up time alarm) that coincides with a state of REM sleep. By delaying generation of an alarm, user 1802 is permitted to complete a state of REM sleep to ensure or enhance the quality of sleep. Such an alert can also assist user 1802 in avoiding entering a sleep state from a wakefulness state during critical activities, such as driving.
Anomalous state manager 1816 is configured to detect a deviation from the normative general physiological state in reaction, for example, to various stimuli, such as stressful situations, injuries, ailments, conditions, maladies, manifestations of an illness, and the like. Anomalous state manager 1816 can be configured to determine the presence of a tremor that, for example, can be a manifestation of an ailment or malady. Such a tremor can be indicative of a diabetic tremor, an epileptic tremor, a tremor due to Parkinson's disease, or the like. In some embodiments, anomalous state manager 1816 is configured to detect the onset of a tremor related to a malady or condition prior to user 1802 perceiving or otherwise being aware of such a tremor. Therefore, anomalous state manager 1816 can predict the onset of a condition that may be remedied by, for example, medication and can alert user 1802 to the impending tremor. User 1802 then can take the medication before the intensity of the tremor increases (e.g., to an intensity that might impair or otherwise incapacitate user 1802). Further, anomalous state manager 1816 can be configured to determine if the physiological state of user 1802 is a pain state, in which user 1802 is experiencing pain. Upon determining a pain state, a wearable device (not shown) can be configured to transmit an indication of the presence of pain to a third party via a wireless communication path to alert others of the pain state for resolution.
Affective state manager 1818 is configured to use at least physiological sensor data to form affective state data representing an approximate affective state of user 1802. As used herein, the term “affective state” can refer, at least in some embodiments, to a feeling, a mood, and/or an emotional state of a user. In some cases, affective state data can include data that predicts an emotion of user 1802 or an estimated or approximated emotion or feeling of user 1802 concurrent with and/or in response to an interaction with another person, environmental factors, situational factors, and the like. In some embodiments, affective state manager 1818 is configured to determine a level of intensity based on sensor-derived values and to determine whether the level of intensity is associated with a negative affectivity (e.g., a bad mood) or a positive affectivity (e.g., a good mood). An example of an affective state manager 1818 is an affective state prediction unit as described in U.S. Provisional Patent Application No. 61/705,598, filed on Sep. 25, 2012, which is incorporated by reference herein for all purposes. While affective state manager 1818 is configured to receive any number of physiological characteristic signals with which to determine an affective state of user 1802, affective state manager 1818 can use sensed and/or derived Mayer waves based on raw sensor signal 1842. In some examples, the detected Mayer waves can be used to determine heart rate variability (“HRV”), as heart rate variability can be correlated to Mayer waves. Further, affective state manager 1818 can use, at least in some embodiments, HRV to determine an affective state or emotional state of user 1802, as HRV may correlate with an emotional state of user 1802.
Note that, while physiological information generator 1810 and physiological state determinator 1812 are described above in reference to distal portion 1804, one or more of these elements can be disposed at, or receive signals from, proximal portion 1806, according to some embodiments.
According to some embodiments, sleep manager 1912 is configured to determine a stage of sleep based on at least the heart rate and respiration rate. For example, sleep manager 1912 can determine the regularity of the heart rate and respiration rate to determine that the person is in a non-REM sleep state, and, thereby, can generate a signal indicating the stage of sleep is a non-REM sleep state, such as a light sleep or deep sleep state. During light sleep and deep sleep, the heart rate and/or the respiration rate of the user can be described as regular, or without significant variability. Thus, the regularity of the heart rate and/or respiration rate can be used to determine the physiological sleep state of the user. In some examples, the regularity of the heart rate and/or the respiration rate can include any heart rate or respiration rate that varies by no more than 5%. In some other cases, the regularity of the heart rate and/or the respiration rate can vary by any amount up to 15%. These percentages are merely examples and are not intended to be limiting, and the ordinarily skilled artisan will appreciate that the tolerances for regular heart rates and respiration rates may be based on user characteristics, such as age, level of fitness, gender, and the like. Sleep manager 1912 can use motion data 1905 to confirm whether a user is in a light sleep state or a deep sleep state by detecting indicative amounts of motion, such as a portion of motion that is indicative of involuntary muscle twitching.
As another example, sleep manager 1912 can determine the irregularity (or variability) of the heart rate and respiration rate to determine that the person is in an REM sleep state, and, thereby, can generate a signal indicating the stage of sleep is an REM sleep state. During REM sleep, the heart rate and/or the respiration rate of the user can be described as irregular, or with sufficient variability to identify that a user is in REM sleep. Thus, the variability of the heart rate and/or respiration rate can be used to determine the physiological sleep state of the user. In some examples, the irregularity of the heart rate and/or the respiration rate can include any heart rate or respiration rate that varies by more than 5%. In some other cases, the variability of the heart rate and/or the respiration rate can vary by any amount from 10% to 15%. These percentages are merely examples and are not intended to be limiting, and the ordinarily skilled artisan will appreciate that the tolerances for variable heart rates and respiration rates may be based on user characteristics, such as age, level of fitness, gender, and the like. Sleep manager 1912 can use motion data 1905 to confirm whether a user is in an REM sleep state by detecting indicative amounts of motion, such as a portion of motion that includes negligible to no motion.
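The regularity tests in the two preceding paragraphs can be sketched with a simple percent-spread check. The 5% figure follows the example thresholds above; the motion threshold and all function names are hypothetical:

```python
# Sketch: rough non-REM vs. REM classification from heart rate and
# respiration rate regularity plus a motion check, per the example
# thresholds in the text. Names and the motion limit are assumptions.
def variability_pct(rates):
    """Percent spread of a rate series about its mean."""
    mean = sum(rates) / len(rates)
    return 100.0 * (max(rates) - min(rates)) / mean

def classify_sleep(heart_rates, resp_rates, motion_level):
    hr_var = variability_pct(heart_rates)
    rr_var = variability_pct(resp_rates)
    if hr_var <= 5.0 and rr_var <= 5.0:
        # regular rates suggest light or deep (non-REM) sleep
        return "non-REM"
    if motion_level < 0.05:
        # irregular rates with negligible motion suggest REM sleep
        return "REM"
    return "indeterminate"
```

As the text notes, the tolerances would in practice be tuned per user (age, fitness level, and so on) rather than fixed as here.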
Sleep manager 1912 is shown to include sleep predictor 1914, which is configured to predict the onset of, or change into or between, different stages of sleep. The user may not perceive such changes between sleep states, such as transitioning from a wakefulness state to a sleep state. Sleep predictor 1914 can detect this transition from a wakefulness state to a sleep state, depicted as transition 1930. Transition 1930 may be determined by sleep predictor 1914 based on the transition from irregular heart rates and respiration rates during wakefulness to more regular heart rates and respiration rates during early sleep stages. Also, lowered amounts of motion can indicate transition 1930. In some embodiments, motion data 1905 includes a velocity or rate of speed at which a user is traveling, such as in an automobile. Upon detecting an impending transition from a wakefulness state into a sleep state, sleep predictor 1914 generates an alert signal, such as a vibratory initiation signal, configured to generate a vibration (or any other response) to convey to a user that he or she is about to fall asleep. Thus, if the user is driving, sleep predictor 1914 assists in maintaining a wakefulness state, helping the user avoid falling asleep behind the wheel. Sleep predictor 1914 can also be configured to detect transition 1932 from a light sleep state to a deep sleep state and a transition 1934 from a deep sleep state to an REM sleep state. In some embodiments, transitions 1932 and 1934 can be determined by detecting changes from regular to variable heart rates or respiration rates, in the case of transition 1934. Also, transition 1934 can be characterized by a decreased level of motion, to about zero, during the REM sleep state. Further, sleep predictor 1914 can be configured to predict a sleep stage transition to disable an alert, such as a wake-up time alarm, that coincides with a state of REM sleep.
By delaying generation of an alarm, the user is permitted to complete a state of REM sleep to enhance the quality of sleep.
Examples of materials having acoustic impedances matching or substantially matching the impedance of human tissue can have acoustic impedance values in a range that includes 1.5×10⁶ Pa·s/m (e.g., an approximate acoustic impedance of skin). In some examples, materials having acoustic impedances matching or substantially matching the impedance of human tissue can provide for a range between 1.0×10⁶ Pa·s/m and 1.0×10⁷ Pa·s/m. Note that other values of acoustic impedance can be implemented to form one or more portions of housing 2003. In some examples, the material and/or encapsulant can be formed to include at least one of silicone gel, dielectric gel, thermoplastic elastomers (TPE), and rubber compounds, but is not so limited. As an example, the housing can be formed using Kraiburg TPE products. As another example, the housing can be formed using Sylgard® Silicone products. Other materials can also be used. In some embodiments, sleep manager 1912 detects increased perspiration via skin conductance during an REM sleep state and determines that the user is dreaming, whereby it generates a signal to store such an event or to trigger another action.
Further to
Tremor determinator 2110 is configured to determine the presence of a tremor that, for example, can be a manifestation of an ailment or malady. As discussed, such a tremor can be indicative of a diabetic tremor, an epileptic tremor, a tremor due to Parkinson's disease, or the like. In some embodiments, tremor determinator 2110 is configured to detect the onset of a tremor related to a malady or condition prior to a user perceiving or otherwise being aware of such a tremor. In particular, wearable devices disposed at a distal portion of a limb may, at least in some cases, detect tremors more readily than when disposed at a proximal portion.
Therefore, anomalous state manager 2102 can predict the onset of a condition that may be remedied by, for example, medication and can alert a user to the impending tremor. In some cases, malady determinator 2112 is configured to receive data representing a tremor and data 2142 representing user characteristics, and is further configured to determine the malady afflicting the user. For example, if data 2142 indicates the user is a diabetic, the tremor data received from tremor determinator 2110 is likely to indicate a diabetic-related tremor. Therefore, malady determinator 2112 can be configured to generate an alert that, for example, the user's blood glucose is decreasing to levels that cause such diabetic tremors. The alert can be configured to prompt the user to obtain medication to treat the impending anomalous physiological state of the user. In another example, tremor determinator 2110 and malady determinator 2112 cooperate to determine that the user is experiencing an epileptic tremor, and generate an alert to enable the user to either take medication or stop engaging in a critical activity, such as driving, before the tremors become worse (i.e., reach an intensity that might impair or otherwise incapacitate the user). Upon detection of a tremor and the corresponding malady, anomalous state manager 2102 transmits data indicating the presence of such tremors via communication module 2118 to wearable device 2170 or mobile computing device 2180, which, in turn, transmits the data via networks 2182 to a third party or any other entity. In some examples, anomalous state manager 2102 is configured to distinguish malady-related tremors from movements and/or shaking due to nervousness and/or injury.
Affective state manager 2220 is shown to include a physiological state analyzer 2222, a stressor analyzer 2224, and an emotion formation module 2223. According to some embodiments, physiological state analyzer 2222 is configured to receive and analyze sensor data, such as bioimpedance-based sensor data 2211, to compute a sensor-derived value representative of an intensity of an affective state of user 2202. In some embodiments, the sensor-derived value can represent an aggregated value of sensor data (e.g., an aggregated sensor data value). In some examples, an aggregated value of sensor data can be derived by, first, assigning a weighting to each of the values (e.g., parametric values) sensed by the sensors associated with one or more physiological characteristics, such as those shown in
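The weighting-then-aggregating step just described can be sketched as a weighted sum of normalized sensor values. The weights, normalization ranges, and characteristic names below are hypothetical:

```python
# Sketch: combine several physiological readings into one sensor-derived
# intensity value in [0, 1]. Weights and ranges are illustrative.
def aggregate_intensity(readings, weights, ranges):
    """`ranges` maps each characteristic to the (low, high) span used
    to normalize it before the weighted aggregation."""
    total, weight_sum = 0.0, 0.0
    for name, value in readings.items():
        lo, hi = ranges[name]
        norm = min(1.0, max(0.0, (value - lo) / (hi - lo)))  # clamp to [0, 1]
        total += weights[name] * norm
        weight_sum += weights[name]
    return total / weight_sum

readings = {"heart_rate": 95.0, "gsr": 6.0}        # bpm, microsiemens
weights = {"heart_rate": 2.0, "gsr": 1.0}          # heart rate weighted higher
ranges = {"heart_rate": (50.0, 150.0), "gsr": (1.0, 11.0)}
intensity = aggregate_intensity(readings, weights, ranges)
```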
According to some examples, the activity-related managers can include a nutrition manager, a sleep manager, an activity manager, a sedentary activity manager, and the like, examples of which can be found in U.S. patent application Ser. No. 13/433,204, filed on Mar. 28, 2012 having Attorney Docket No. ALI-013CIP1; U.S. patent application Ser. No. 13/433,208, filed Mar. 28, 2012 having Attorney Docket No. ALI-013CIP2; U.S. patent application Ser. No. 13/433,208, filed Mar. 28, 2012 having Attorney Docket No. ALI-013CIP3; U.S. patent application Ser. No. 13/454,040, filed Apr. 23, 2012 having Attorney Docket No. ALI-013CIP1CIP1; U.S. patent application Ser. No. 13/627,997, filed Sep. 26, 2012 having Attorney Docket No. ALI-100; all of which are incorporated herein by reference for all purposes.
In some embodiments, stressor analyzer 2224 is configured to receive activity-related data 2114 to determine stress scores that weigh against a positive affective state and in favor of a negative affective state. For example, if activity-related data 2114 indicates user 2202 has had little sleep, is hungry, and has just traveled a great distance, then user 2202 is predisposed to being irritable or in a negative frame of mind (and thus in a relatively “bad” mood). Also, user 2202 may be predisposed to react negatively to stimuli, especially unwanted or undesired stimuli that can be perceived as stress. Therefore, such activity-related data 2114 can be used to determine whether an intensity derived from physiological state analyzer 2222 is either negative or positive, as shown.
Emotion formation module 2223 is configured to receive data from physiological state analyzer 2222 and stressor analyzer 2224 to predict an emotion that user 2202 is experiencing (e.g., as a positive or negative affective state). Affective state manager 2220 can transmit affective state data 2230 via network(s) to a third party, another person (or a computing device thereof), or any other entity, as emotive feedback. Note that in some embodiments, physiological state analyzer 2222 is sufficient to determine affective state data 2230. In other embodiments, stressor analyzer 2224 is sufficient to determine affective state data 2230. In various embodiments, physiological state analyzer 2222 and stressor analyzer 2224 can be used in combination, or with other data or functionalities, to determine affective state data 2230.
As shown, aggregated sensor-derived values 2290 can be generated by physiological state analyzer 2222, indicating a level of intensity. Stressor analyzer 2224 is configured to determine whether the level of intensity is within a range of negative affectivity or is within a range of positive affectivity. For example, an intensity 2240 in a range of negative affectivity can represent an emotional state similar to, or approximating, distress, whereas an intensity 2242 in a range of positive affectivity can represent an emotional state similar to, or approximating, happiness. As another example, an intensity 2244 in a range of negative affectivity can represent an emotional state similar to, or approximating, depression/sadness, whereas an intensity 2246 in a range of positive affectivity can represent an emotional state similar to, or approximating, relaxation. As shown, intensities 2240 and 2242 are greater than intensities 2244 and 2246. Emotion formation module 2223 is configured to transmit this information as affective state data 2230, describing a predicted emotion of a user. An example of affective state manager 2220 is described as an affective state prediction unit in U.S. Provisional Patent Application No. 61/705,598, filed on Sep. 25, 2012, which is incorporated by reference herein for all purposes.
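The four example states above resemble a valence-by-intensity grid, and that mapping can be sketched directly. The labels follow the examples in the passage; the 0.5 threshold and function name are hypothetical:

```python
# Sketch: map an intensity level and a valence flag (positive vs.
# negative affectivity) onto the four example affective states named
# in the text. The threshold is an illustrative assumption.
def approximate_emotion(intensity, positive):
    high = intensity >= 0.5
    if positive:
        return "happiness" if high else "relaxation"
    return "distress" if high else "depression/sadness"
```

For instance, a high intensity with negative affectivity maps to distress, while the same intensity with positive affectivity maps to happiness, mirroring intensities 2240 and 2242.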
In at least some examples, the structures and/or functions of any of the above-described features can be implemented in software, hardware, firmware, circuitry, or a combination thereof. Note that the structures and constituent elements above, as well as their functionality, may be aggregated with one or more other structures or elements. Alternatively, the elements and their functionality may be subdivided into constituent sub-elements, if any. As software, the above-described techniques may be implemented using various types of programming or formatting languages, frameworks, syntax, applications, protocols, objects, or techniques. As hardware and/or firmware, the above-described techniques may be implemented using various types of programming or integrated circuit design languages, including hardware description languages, such as any register transfer language (“RTL”) configured to design field-programmable gate arrays (“FPGAs”), application-specific integrated circuits (“ASICs”), or any other type of integrated circuit. According to some embodiments, the term “module” can refer, for example, to an algorithm or a portion thereof, and/or logic implemented in either hardware circuitry or software, or a combination thereof. These can be varied and are not limited to the examples or descriptions provided.
In some examples, light socket connector 2408 may be configured to be coupled with a light socket (e.g., standard Edison screw base, as shown, bayonet mount, bi-post, bi-pin, or the like) for powering (i.e., electrically) device 2400. In some examples, light socket connector 2408 may be coupled to housing 2402 on a side opposite to optical diffuser 2424 and/or speaker 2418. In some examples, housing 2402 may be configured to house one or more of parabolic reflector 2404, positioning mechanism 2406, passive radiators 2410-2412, light source 2414, PCB 2416, speaker 2418 and frontplate 2420. Electronics (not shown) configured to support control, audio playback, light output, and other aspects of device 2400, may be mounted anywhere inside or outside of housing 2402, for example on a plate (e.g., plate 2704 in
In some examples, speaker 2418 may be suspended in the center of frontplate 2420, which may be sealed. In some examples, frontplate 2420 may be transparent and mounted or otherwise coupled with one or more passive radiators. In some examples, speaker 2418 may be configured to be controlled (e.g., to play audio, to tune volume, or the like) remotely using a controller (not shown) in data communication with speaker 2418 using a wired or wireless network. In some examples, housing 2402 may be acoustically sealed to provide a resonant cavity when combined with passive radiators 2410-2412 (or other passive radiators, for example, disposed on frontplate 2420 (not shown)). In other examples, radiators 2410-2412 may be disposed on a different internal surface of housing 2402 than shown. The combination of an acoustically sealed housing 2402 with one or more passive radiators (e.g., passive radiators 2410-2412) may improve low frequency audio signal reproduction. Because optical diffuser 2424 may be acoustically transparent, sound from speaker 2418 may be projected out of a front end of housing 2402 through optical diffuser 2424. In some examples, optical diffuser 2424 may be configured to be waterproof (e.g., using a seal, chemical waterproofing material, and the like). In some examples, optical diffuser 2424 may be configured to spread light (i.e., reflected using parabolic reflector 2404) evenly as the light exits housing 2402 through a transparent frontplate 2420. In some examples, optical diffuser 2424 may be configured to be acoustically transparent in a frequency-selective manner (i.e., acoustically transparent, or designed to not impede sound waves, at certain selected frequencies), functioning as an additional acoustic chamber volume (i.e., forming an acoustic chamber volume with a front end of housing 2402, as defined by frontplate 2420, as part of a passive radiator system including housing 2402, radiators 2410-2412, and other components of device 2400).
In other examples, the quantity, type, function, structure, and configuration of the elements shown may be varied and are not limited to the examples provided.
In
In some examples, mobile device 2504 may be configured to run application 2510, which may be configured to receive and process state data 2520 to generate data 2516. In some examples, data 2516 may include light data (i.e., light characteristic data, as described herein) associated with light patterns congruent with state data provided by wearable device 2502 (e.g., state data 2520 and the like). For example, where state data 2520 indicates a predetermined or designated wake up time, application 2510 may generate light data associated with a gradual brightening of a light source implemented in speaker-light 2506. In another example, where state data 2520 indicates a sleep or resting state, application 2510 may generate light data associated with a dimming of a light source implemented in speaker-light 2506. In still other examples, light data generated by application 2510 may be associated with a light pattern, a level of light, or the like, for example, depending on an activity (e.g., dancing, meditating, exercising, walking, sleeping, or the like) indicated by state data 2520. In some examples, data 2516 may include audio data (i.e., audio characteristic data, as described herein) associated with audio output congruent with state data provided by wearable device 2502 (e.g., state data 2520 and the like). For example, application 2510 may be configured to generate audio data associated with playing audio content (e.g., a playlist, an audio file including animal noises, an audio file including a voice recording, or the like) associated with an activity (e.g., dancing, meditating, exercising, walking, sleeping, or the like) using a speaker implemented in speaker-light 2506 when state data 2520 indicates said activity is beginning or ongoing. 
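The mapping described above, from state data to light and audio characteristic data, can be sketched as follows. This is an illustrative assumption of how an application like application 2510 might be structured; the state keys, pattern names, and playlist labels are hypothetical and not part of the described device.

```python
# Hypothetical sketch: map state data (e.g., state data 2520) from a wearable
# device to light and audio characteristic data for a speaker-light device.
# All field names and values are illustrative assumptions.

def generate_light_data(state: dict) -> dict:
    """Map a physiological or activity state to light characteristic data."""
    if state.get("event") == "wake_up":
        # Gradual brightening toward a predetermined wake-up time.
        return {"pattern": "ramp", "start_level": 0.0, "end_level": 1.0,
                "duration_s": 600}
    if state.get("activity") in ("sleeping", "resting"):
        # Dim the light source for a sleep or resting state.
        return {"pattern": "dim", "level": 0.05}
    if state.get("activity") == "meditating":
        return {"pattern": "steady", "level": 0.3, "color": "warm"}
    # Default: moderate ambient light.
    return {"pattern": "steady", "level": 0.6}

def generate_audio_data(state: dict) -> dict:
    """Map an activity to audio characteristic data (e.g., content to play)."""
    playlists = {"dancing": "upbeat", "exercising": "energetic",
                 "sleeping": "white_noise", "meditating": "ambient"}
    activity = state.get("activity", "idle")
    return {"content": playlists.get(activity, "none"), "volume": 0.4}
```

In this sketch, the application simply dispatches on fields of the state data; a richer implementation could weight multiple state inputs together.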
In another example, application 2510 may be configured to generate audio data associated with adjusting white noise or other ambient noise (e.g., to improve sleep quality, to ease a waking up process, to match a mood or activity, or the like) output by a speaker implemented in speaker-light 2506 when state data 2520 indicates an analogous physiological state. In other examples, application 2510 may be implemented directly in controller 2508, for example, using state data 2522, which may include the same or similar kinds of data associated with physiological states as described herein in relation to state data 2520. In some examples, controller 2508 may be configured to generate one or more control signals, for example, using API 2512, and to send said one or more control signals to speaker-light 2506 to adjust a light source and/or speaker. For example, the one or more control signals may be configured to cause a light source to dim or brighten. In another example, the one or more control signals may be configured to cause the light source to display a light pattern. In still another example, the one or more control signals may be configured to cause a speaker to play audio content. In yet another example, the one or more control signals may be configured to cause a speaker to play ambient noise. In other examples, the quantity, type, function, structure, and configuration of the elements shown may be varied and are not limited to the examples provided.
In some examples, profile data 2608a may comprise activity-related profiles indicating optimal lighting and acoustic output (i.e., light and audio characteristics) for an activity (e.g., warm, yellow light and/or soft background music for an evening social setting; low, yellow light and/or white noise for resting or sleeping; bright, blue-white light with no music or sounds for working or studying during the day). In some examples, profile data 2608a also may comprise identity-related profiles for one or more users, the identity-related profiles including preference data indicating a user's preferences for light characteristics and audio characteristics in a room or other environment surrounding speaker-light device 2600. Such preference data may be uploaded or saved to speaker-light device 2600, for example, from a personal device (e.g., wearable device, mobile device, portable device, or other device attributable to a user or owner) using communication facility 2618, or it may be learned by speaker-light device 2600 over a period of time through manual manipulation by a user identified using motion analysis module 2620 (e.g., gesture command, motion fingerprint, or the like), communication facility 2618 (i.e., identity data received from a personal device), or the like. In other examples, profile data 2608a may include data correlating light and audio characteristics with other types of sensor data and derived data (e.g., a visual or audio alarm for toxic chemical levels or smoke, light and audio characteristics associated with one or more hand gestures or speech commands, and the like). In some examples, a personal device may be configured to implement an application configured to provide an interface for inputting, uploading, or otherwise indicating, a user's or owner's lighting and audio preferences.
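One possible shape for such activity-related and identity-related profiles is sketched below. The structure, keys, and values are assumptions for illustration only; the example shows how an activity profile could be looked up and then overridden by per-user preference data.

```python
# Illustrative (assumed) structure for activity-related profiles such as
# those profile data 2608a might contain, correlating an activity with
# optimal light and audio characteristics.

ACTIVITY_PROFILES = {
    "evening_social": {"light": {"color_temp_k": 2700, "level": 0.5},
                       "audio": {"content": "soft_background", "volume": 0.3}},
    "sleep":          {"light": {"color_temp_k": 2200, "level": 0.05},
                       "audio": {"content": "white_noise", "volume": 0.2}},
    "work_daytime":   {"light": {"color_temp_k": 5500, "level": 1.0},
                       "audio": {"content": None, "volume": 0.0}},
}

def lookup_profile(activity, user_prefs=None):
    """Return the output profile for an activity, with any identity-related
    preference data for the current user overriding the defaults."""
    profile = dict(ACTIVITY_PROFILES.get(activity,
                                         ACTIVITY_PROFILES["work_daytime"]))
    if user_prefs and activity in user_prefs:
        # Per-user preferences (e.g., uploaded from a personal device)
        # take precedence over the generic activity profile.
        profile = {**profile, **user_prefs[activity]}
    return profile
```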
In some examples, communication facility 2618 may include antenna 2618a and communication controller 2618b, and may be implemented as an intelligent communication facility, techniques associated with which are described in co-pending U.S. patent application Ser. No. 13/831,698 (Attorney Docket No. ALI-191CIP1), filed Mar. 15, 2013, which is incorporated by reference herein in its entirety for all purposes. As used herein, “facility” refers to any, some, or all of the features and structures that are used to implement a given set of functions. In some examples, communication controller 2618b may include one or both of a short-range communication controller (e.g., Bluetooth, NFC, ultra wideband, and the like) and a longer-range communication controller (e.g., satellite, mobile broadband, GPS, WiFi, and the like). In some examples, communication facility 2618 may be configured to ping, or otherwise send a message or query to, a network or personal device detected using antenna 2618a, for example, to obtain preference data or other data associated with a light characteristic or audio characteristic, as described herein. In some examples, antenna 2618a may be implemented as a receiver, transmitter, or transceiver, configured to detect and generate radio waves, for example, to and from electrical signals. In some examples, antenna 2618a may be configured to detect radio signals across a broad spectrum, including licensed and unlicensed bands. In some examples, communication facility 2618 may include other integrated circuitry (not shown) for enabling advanced communication capabilities (e.g., a Bluetooth® Low Energy system on chip (SoC), and the like).
In some examples, logic 2610 may be implemented as firmware or application software that is installed in a memory (e.g., memory 2608, memory 2806 in
In some examples, enclosure 2702 may be hemispherical or substantially hemispherical in shape. In some examples, enclosure 2702 may be partially opaque, thus allowing light from light source 2714 to be directed out of enclosure 2702 through a portion that is not opaque (e.g., translucent or transparent). In other examples, enclosure 2702 may be partially or wholly translucent and/or transparent.
In some examples, platform 2710 and electronic components 2712a-2712b may be coupled to plate 2704. In some examples, platform 2710 also may be coupled to light source 2714, and may include a heatsink for light source 2714. In some examples, extension structure 320 may be included to couple plate 2704 to light socket connector 2722, where speaker-light device 2700 is configured to be plugged, inserted, or otherwise coupled to a recessed light or power connector socket. In some examples, electronics 2712a-2712b may include a motion analysis system, a power system, a speaker amplifier, a noise removal system, a PCB, and the like, as described herein in
In some examples, one or more passive radiators (not shown) may be implemented within enclosure 2702, either within an acoustically opaque speaker enclosure 2708 or to both sides of an acoustically transparent speaker enclosure 2708, to form a passive radiation system for speaker 2706. In other examples, the quantity, type, function, structure, and configuration of the elements shown may be varied and are not limited to the examples provided.
In at least some examples, the structures and/or functions of any of the above-described features can be implemented in software, hardware, firmware, circuitry, or a combination thereof. Note that the structures and constituent elements above, as well as their functionality, may be aggregated with one or more other structures or elements. Alternatively, the elements and their functionality may be subdivided into constituent sub-elements, if any. As software, the above-described techniques may be implemented using various types of programming or formatting languages, frameworks, syntax, applications, protocols, objects, or techniques. As hardware and/or firmware, the structures and techniques described herein can be implemented using various types of programming or integrated circuit design languages, including hardware description languages, such as any register transfer language (“RTL”) configured to design field-programmable gate arrays (“FPGAs”), application-specific integrated circuits (“ASICs”), multi-chip modules, or any other type of integrated circuit. For example, speaker-light devices 2400, 2450, 2600, 2700, and 2750, including one or more components, can be implemented in one or more computing devices that include one or more circuits. Thus, at least one of the elements in
In
In some examples, speaker-light devices 3006 and 3008 also may be configured to derive acoustic or audio data using noise removal modules 3006f and 3008f, respectively. For example, noise removal module 3006f may derive audio data comprising ambient acoustic sound by subtracting or removing audio output (i.e., “noise”) produced by a speaker implemented in speaker-light 3006 from the total acoustic input captured by acoustic sensor 3006b. As used herein, “noise” refers to any sound or acoustic energy not desired to be included in audio data being derived for a purpose, which may include ambient noise in some examples, speaker output in other examples, and the like. In other examples, noise removal modules 3006f and 3008f may be configured to derive audio data comprising speech or a speech command by removing ambient acoustic sound and audio output from a speaker. In some examples, motion analysis modules 3006e and 3008e also may receive sensor data from acoustic sensors 3006b and 3008b, respectively, temperature data from temperature sensors 3006c and 3008c, respectively, image/video data from cameras 3006d and 3008d, respectively, and/or derived audio data from noise removal modules 3006f and 3008f, respectively. In some examples, motion analysis modules 3006e and 3008e also may cross-reference said sensor data with profiles (e.g., activity or preference profiles, or the like) stored in a memory (e.g., memory 2608 in
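The subtraction performed by a noise removal module can be illustrated with a deliberately simplified sketch. A real noise removal module would need delay and gain estimation (i.e., acoustic echo cancellation); the time-aligned, sample-wise subtraction below only illustrates the idea of removing the speaker's own known output from the captured signal, and the function name is a hypothetical one.

```python
# Simplified, hypothetical sketch of what a noise removal module (e.g.,
# 3006f) might do: subtract the speaker's known output from the total
# captured input to estimate the ambient acoustic sound. Assumes the two
# signals are already time-aligned, which real systems must estimate.

def remove_speaker_output(captured, speaker_output, gain=1.0):
    """Return estimated ambient samples: captured minus scaled speaker output."""
    return [c - gain * s for c, s in zip(captured, speaker_output)]
```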
In some examples, speaker-light devices 3006 and 3008 also may include a speaker and a light source (e.g., speaker 2606 and light source 2616 in
Chemicals that may be sensed by one or more chemical sensors in a speaker-light device (e.g., by chemical sensors 3206h and/or 3208h) may include, but are not limited to, carbon monoxide (CO), carbon dioxide (CO2), oxides of nitrogen (e.g., nitrogen dioxide (NO2), NOx), sulfur dioxide (SO2), sulfates (SO4), volatile organic compounds (VOC), ozone (e.g., ground-level ozone, O3), lead (Pb), mercury (Hg), hydrogen fluoride (HF), hydrogen sulfide (H2S), solid or liquid matter suspended in air (e.g., sub-millimeter matter and/or liquid, aerosols), air pollution (e.g., man-made or naturally occurring), asbestos, chlorofluorocarbons (CFCs), chlorine (Cl, Cl2) gas, hydrochloric acid/hydrogen chloride (HCl), hydrochlorofluorocarbons (HCFCs), toxic air pollutants (e.g., from pesticides, power plants, industrial chemicals, etc.), methane (CH4) (e.g., from cattle, livestock, etc.), radon (Rn), secondhand smoke from tobacco, off-gassing from plastics and other materials, and greenhouse gases, just to name a few. The one or more chemical sensors included in a speaker-light device may be selected to sense one or more types of atmosphere-borne compounds, such as gases, particles, or aerosols, for example. The one or more chemical sensors may be selected to sense atmospheric compounds that are of most concern to a user and/or are most likely to be present at a location (e.g., a house, apartment, or workplace) at which the speaker-light device is installed. For example, a user who lives close to a cattle ranch may select for installation in his/her speaker-light device a chemical sensor operative to sense methane (CH4) generated by manure. As another example, a user living in the vicinity of a coal-fired electrical power generation plant may select a chemical sensor operative to sense carbon dioxide (CO2) and/or sulfur dioxide (SO2).
One or more of the chemical sensors may be operative to sense chemicals and/or compounds (e.g., VOC) in the atmosphere that may affect sleep in a user (e.g., REM sleep and Non-REM sleep).
In some examples, the chemical sensor(s) may be designed to be removably interchangeable in the speaker-light device, such that the chemicals to be sensed may be captured by specific suites of chemical sensors that may be inserted into and removed from the speaker-light device. As one example, a speaker-light device may include locations for one or more chemical sensors that may be inserted into and removed from the speaker-light device as the sensing needs of the user change. For example, the speaker-light device may include slots, ports, openings, docks, etc. for a plurality of chemical sensors, and a user may select two chemical sensors, one for sensing greenhouse gases carbon dioxide (CO2) and sulfur dioxide (SO2) and another for sensing radon (Rn) gas. Later, the user may become concerned about ozone (O3) and insert an ozone chemical sensor into one of the available slots in the speaker-light device. Other examples of modules and/or suites of chemical sensors that may be installed and optionally later removed from the speaker-light device include chemical sensors operative to detect smoke and/or other atmospheric particulates associated with fire, and chemical sensors operative to detect carbon monoxide (CO), which may be generated by a furnace, water heater, lawn tool, or automobile. In yet other examples, some or all of the chemical sensor(s) may be non-removable from the speaker-light device.
Attention is now directed to
The combination speaker and light source device 3300 may include a scent generator 3377 operative to generate (e.g., disperse as an aerosol, gas, liquid, mist, droplets, or the like) one or more chemicals 3379 that affect a user, such as reducing stress, relaxing the user, increasing focus, concentration, attention, helping the user to fall asleep, or helping the user to awaken from sleep, for example. The aforementioned air mover (internal and/or external) may be used in conjunction with scent generator 3377 to disperse/circulate the one or more chemicals 3379. As will be described below, device 3300 may be in communication with an external scent generator that may be used for the same purposes as scent generator 3377.
Moving now to
In some examples, one or more of the chemical sensor(s) 3320 may be removable from the device 3400. As one example, a chemical sensor operative for sensing carbon monoxide (CO) gas may be removed and replaced with an upgraded version of the CO sensor or replaced with a different type of sensor, such as one operative to sense oxides of nitrogen (NOx) or carbon dioxide (CO2). Removable sensors 3320 may also allow for servicing and/or replacing defective sensors or expired sensors (e.g., a smoke and/or fire sensor may need replacement approximately every 5 years). In
Similarly, device 3400 may include one or more scent generators 3377 operative to emit or generate a scent 3379, and the scent generator 3377 may be removable from a slot, docking port, or the like, denoted as 3378. Scent generator 3377 may be inserted or removed in the slot 3378 as depicted by dashed line 3376. Removable/replaceable scent generators 3377 may be in a form of a cartridge or other structure that may include electrical nodes that connect with a connection 3374 when the scent generator 3377 is inserted 3376 into slot 3378. A connector including but not limited to Universal Serial Bus (USB), Lightning, RS-232, XLR, RCA, TRS, TRRS, DIN, 3.5 mm plug, or other may be used to establish an electrical connection between the removable/replaceable scent generators 3377 and systems (e.g., PCB 2760) of the device 3400.
The air mover described above may comprise a fan or other device operative to generate an air flow. The chemical sensor may be positioned within the housing (e.g., 2402, 2704) at a location operative to couple air flow 3211 with the chemical sensor (e.g., sensors 3206h, 3208h, 3320), which may be positioned to be in fluid communication with air flow 3211 from ambient 3201, 3203. The air mover (e.g., 3206i, 3208i, 3340) may be operated continuously or periodically, and operation may be controlled by logic 2610 (e.g., a processor, a controller, FPGA, μC, μP, etc.) or other circuitry and/or software in the speaker-light device. In other examples, the air mover may be positioned external to the speaker-light device and may be controlled by the speaker-light device or by some other device (e.g., a switch for a fan or ceiling fan). Examples of external air movers include a fan, an HVAC system, and a ceiling fan. Housing 2402 may include slots, openings, vents, etc. that allow for air flow over the chemical sensors driven by an internal air mover, an external air mover, or both. An external air mover may be controlled (e.g., turned on, turned off, have its speed/flow rate varied) by the speaker-light device. For example, the speaker-light device may be mounted to or in proximity of a ceiling fan, and a signal(s) from the speaker-light device may be used to activate/deactivate the ceiling fan and may also control fan speed and/or direction of rotation of the ceiling fan (e.g., to move air up or down).
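The periodic operation of an air mover for chemical sensing can be sketched as a simple control routine. The function and callback names below are illustrative assumptions about how logic such as logic 2610 might sequence a fan and a sensor reading; they do not describe an actual device API.

```python
# Hypothetical control sketch: run the air mover so ambient air is drawn
# over the chemical sensor(s), allow the flow to settle, take a reading,
# then stop the air mover. All names are assumptions for illustration.

import time

def sample_with_air_mover(read_sensor, fan_on, fan_off, settle_s=0.0):
    """Activate the air mover, wait for flow to reach the sensor, read,
    and deactivate the air mover even if the read fails."""
    fan_on()
    try:
        time.sleep(settle_s)   # allow air flow 3211 to reach the sensor
        return read_sensor()
    finally:
        fan_off()              # always turn the air mover back off
```

Wrapping the fan shutoff in `finally` is a deliberate choice so a failed sensor read does not leave the air mover running.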
In
Turning now to
Referring now to
Alternatively or in addition to device 3700a, device 3700b may be disposed in proximity of an air flow 3501 of an external air mover, such as air mover 3720. For example, device 3700b may be mounted on a ceiling or a wall that is in close enough proximity to air mover 3720 to receive at least a portion of flow 3501. Here, flow 3501 generated by movement of blades 3722 of air mover 3720 may flow over device 3700b and/or flow through apertures 3511 in device 3700b and couple with a chemical sensor disposed internally and/or externally in device 3700b. Commands, control, data, and other signals may be communicated between air mover 3720 and device 3700a and/or device 3700b using one or more of a wired link 3771, wireless link 3321, or wireless link 3343. Examples of commands, data, control, and other signals for device (3700a, 3700b) include but are not limited to turning air mover 3720 on or off, controlling fan speed, controlling direction of rotation (e.g., CW or CCW to set direction of flow 3501) of blades 3722, controlling fan speed as a function of ambient temperature, controlling fan speed as a function of device (3700a, 3700b) temperature, and activating (e.g., turning air mover “ON”) or deactivating (e.g., turning air mover “OFF”) the air mover 3720 for chemical sensing by specific chemical sensor(s) in device (3700a, 3700b), just to name a few.
Turning attention now to
From a bottom to a top of the drawing sheet for
Each combination speaker and light source device, as represented by its respective chemical sensor 3320, may be aware of locations of other combination speaker and light source devices via information communicated to those devices over the wireless links 3321, information included in data storage in those devices, or information communicated to those devices from another device, such as client device 3803, media device 3805, wearable device 3801, and resource 3899, for example. Each combination speaker and light source device, as represented by its respective chemical sensor 3320, may be aware of locations of one or more users 3802 and/or devices associated with the one or more users, such as client device 3803 (e.g., a smartphone) and/or wearable device 3801 (e.g., a data capable strap band) donned by a user (e.g., on a wrist or other portion of the user's body). User data including but not limited to sleep activity, sleep behavior, quality of sleep, time of sleep, REM sleep, non-REM sleep, accelerometry (e.g., from motion sensor signals), biometric data, arousal of the sympathetic nervous system (SNS), number of steps (e.g., from walking/running), exercise, calorie intake, calorie expenditure, diet, hydration, almanac data, and other user specific data, may be captured and/or be otherwise accessible by the wireless devices depicted in
A map of structure 3800 and locations of combination speaker and light source devices (e.g., 3320) may comprise data included in an application (APP), an application programming interface (API), data structure, algorithm, or other form in a device or devices including but not limited to one or more of the combination speaker and light source devices (e.g., 3320), client device 3803, wearable device 3801, media device 3805, and resource 3899, for example. As one example, an APP executing on a processor of client device 3803 may include the map of structure 3800 and may also include locations of the chemical sensors 3320. The APP may be embodied in a non-transitory computer readable medium residing in client device 3803 or accessible to client device 3803 (e.g., via wireless link 3321). One or more of the combination speaker and light source devices 3320 may also include (e.g., in non-volatile memory) or have access to (e.g., via client device 3803, resource 3899) the map of structure 3800 and may use data in the map to route the user to different rooms, spaces, or other environments internal to and/or external to structure 3800 based on signals sensed from chemical sensors 3320 and/or other systems in the combination speaker and light source devices.
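One assumed shape for such map data is sketched below: rooms of a structure as entries keyed by name, each recording its floor and the device located there, with passable connections (doors, a stairwell) as edges. The room names, device identifiers, and layout are hypothetical and do not correspond to any figure.

```python
# Illustrative (assumed) map data an APP might hold for a structure:
# rooms keyed by name, the device located in each room, and passable
# connections between rooms. Names are hypothetical placeholders.

STRUCTURE_MAP = {
    "rooms": {
        "bedroom_upper": {"floor": "upper",  "device": "sensor_A"},
        "study_upper":   {"floor": "upper",  "device": "sensor_B"},
        "hall_middle":   {"floor": "middle", "device": "sensor_C"},
        "kitchen_lower": {"floor": "lower",  "device": "sensor_D"},
    },
    "edges": [
        ("bedroom_upper", "study_upper"),   # door between rooms
        ("study_upper", "hall_middle"),     # stairwell
        ("hall_middle", "kitchen_lower"),   # stairwell
    ],
}

def device_in_room(room):
    """Look up which combination speaker and light source device serves a room."""
    return STRUCTURE_MAP["rooms"][room]["device"]
```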
The following are non-limiting examples of how the chemical sensors 3320 in one or more of the combination speaker and light source devices may operate to monitor the ambient (3201, 3501) in their respective environments and take one or more actions, if any, upon sensing chemicals of concern to a user or that may be harmful to the user. As a first example, if user 3802 is sleeping in a bedroom on the upper floor and ENV 3850a of the bedroom includes an above-nominal concentration of carbon dioxide (CO2) as detected by chemical sensor 3320, then the combination speaker and light source device associated with sensor 3320 may notify the user 3802, via sound, audio, light, colors of light, varying intensity and/or color of light, or an electronic message (e.g., email, text message, voice mail, etc.), to open one or more of the windows 3831 in the bedroom to increase air circulation in ENV 3850a. Other chemical sensors 3320 may be queried to determine whether the proposed action (e.g., opening windows 3831) may be effective or may make matters worse. For example, chemical sensor 3320 positioned in external environment ENV 3891 may be queried to determine whether a source of the CO2 gas is outside of structure 3800, such that opening the windows 3831 would make matters worse by potentially further increasing the concentration of the CO2 gas in ENV 3850a. In some examples, the notification may be designed to awaken the user 3802 so that the suggested action may be taken immediately; whereas, in other examples, the notification may occur after the user 3802 has awakened, and the user 3802 may take future actions based on the notification, such as opening one or more of the windows 3831 prior to going to sleep or taking a nap.
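The decision described in this first example, check an outdoor sensor before advising the user to open windows so the suggested action does not make matters worse, can be sketched as follows. The threshold value and function name are assumptions for illustration, not values from the disclosure.

```python
# Hedged sketch of the ventilation decision above: before advising the
# user to open windows, compare the indoor CO2 reading against a reading
# from an outdoor sensor (e.g., in an external environment such as ENV
# 3891). The 1000 ppm threshold is an illustrative assumption.

CO2_INDOOR_LIMIT_PPM = 1000

def advise_ventilation(indoor_co2_ppm, outdoor_co2_ppm):
    """Return a suggested action based on indoor vs. outdoor CO2 levels."""
    if indoor_co2_ppm <= CO2_INDOOR_LIMIT_PPM:
        return "no_action"
    if outdoor_co2_ppm < indoor_co2_ppm:
        return "open_windows"      # outside air would dilute indoor CO2
    return "keep_windows_closed"   # outdoor source; opening could worsen it
```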
An icon (e.g., of a fan) may be presented to the user 3802 on a device, such as client device 3803, and the icon may be used to inform the user that the fan was turned ON to decrease a concentration of some chemical(s), such as the above described CO2 gas.
As a second example, the user 3802 may be studying in another room on the upper floor where a chemical sensor 3320 in ENV 3850b detects chemicals in the ambient that are typically associated with outgassing (e.g., from newly laid carpet or plastics). Here, accelerometry from motion signals generated by motion sensors in wearable device 3801 may indicate the user 3802 is becoming sluggish, and that sensor data may be synthesized along with other data, such as the detection of the outgas chemicals, to determine that a possible cause of the sluggishness may be a physiological change in the user 3802 from breathing the outgas chemicals. The determining and/or data synthesis may be performed on a processor in one or more of the combination speaker and light source devices, the client device 3803, media device 3805, wearable device 3801, an external resource (e.g., resource 3899), or some combination of the foregoing. An action taken by one or more of the combination speaker and light source devices (e.g., associated with chemical sensor 3320 in ENV 3850b) may comprise sending a text message to user 3802's client device 3803 instructing the user 3802 to open one or more windows 3831, to leave the room for another room, to turn on ceiling fan 3832, or to turn on fan 3832 and open windows 3831, for example. In another example, the action taken may comprise one or more of the combination speaker and light source devices notifying the user and/or, automatically and without user 3802 intervention, causing the fan 3832 to turn “ON” to circulate air throughout ENV 3850b. In yet other examples, opening and closing of one or more of the windows 3831 may be controlled by a control system 3851, and one or more of the combination speaker and light source devices may communicate (e.g., via wireless link 3321) a signal to control system 3851 that commands the control system 3851 to open windows 3831 in the room for ENV 3850b. Another signal may command the control system to turn ON/OFF the fan 3832.
Opening the windows 3831 and turning ON fan 3832 may be actions taken without intervention on part of user 3802, and may be operative to increase air circulation in ENV 3850b and may allow for removal or reduction of the chemicals from the outgassing. Fan 3832 may comprise fan 3700a or 3700b described above in reference to
As a third example, chemicals and gasses associated with smoke and/or fire that may be detected by chemical sensors 3320 positioned in ENV 3850c and/or ENV 3850d may trigger an alarm or other warning signal on one or more devices such as the combination speaker and light source devices, media device 3805, client device 3803, wearable device 3801, control system 3851 or others. One or more of the combination speaker and light source devices may also communicate to the user 3802 a safe escape route away from the smoke/fire and/or out of structure 3800. The combination speaker and light source devices may query one another and may operate to generate an escape route based on combination speaker and light source devices that are not detecting the smoke/fire.
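The route generation described above, using devices that are not detecting smoke or fire to define safe rooms, can be sketched as a shortest-path search. This is an illustrative assumption of one way such routing could work (a breadth-first search over a room graph that excludes rooms whose sensors report smoke), not the patented algorithm; the graph and room names are hypothetical.

```python
# Illustrative sketch: generate an escape route as the shortest path over
# a room graph, excluding rooms whose chemical sensors detect smoke/fire.
# The graph is an adjacency mapping of room names; all names are
# hypothetical placeholders, not figure references.

from collections import deque

def escape_route(graph, start, exits, smoke_rooms):
    """Return the shortest room-to-room path from `start` to any room in
    `exits`, avoiding `smoke_rooms`; None if no safe route exists."""
    if start in smoke_rooms:
        return None  # no safe route begins in a smoke-filled room
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        room = path[-1]
        if room in exits:
            return path
        for nxt in graph.get(room, []):
            if nxt not in seen and nxt not in smoke_rooms:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Breadth-first search is used here because all room-to-room transitions are treated as equal cost; weighted hazards would call for Dijkstra-style search instead.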
To further illustrate examples of possible routes that may be generated and communicated to the user 3802 and/or a device accessible by the user 3802, attention is now directed to
As one example, path a may be calculated for user 3802 to follow from X to a point Y1 on a balcony of the middle floor where rescue may be possible by first responders. As another example, path b may be calculated for user 3802 to follow from X to a point Y2 on the first level using stair well 3811 as an escape route to a front door 3831 of structure 3800. As yet another example, path c may be calculated for user 3802 to follow from X to a point Y3 on the first level using stair well 3811 as an escape route to a patio door 3831 of structure 3800.
Routing may occur for other than emergency situations, for example, consider route e from a point Z on the middle floor, up stairwell 3811 to a point Y4 on the upper floor. Here, route e may be calculated based on chemical sensor 3320 in ENV 3850d detecting chemicals/gasses associated with tobacco smoke in ENV 3850d. Route e may be selected based on chemical sensor 3320 in ENV 3850b detecting an ambient that is free of the tobacco related chemicals/gasses.
The routes described above may be presented to user 3802 visually on a display of a device, such as a display 3804 of client device 3803, for example. The routes described above may be presented to user 3802 in one or more other forms including but not limited to verbal instructions, sound, light, and vibration. Another device, such as a mobile device, may be used to map locations of one or more of the combination speaker and light source devices at the positions at which they are disposed in the ENVs that they monitor and serve. For example, client device 3803 may execute the APP, and a GUI on display 3804 may guide user 3802 to position the client device 3803 next to or into contact with each combination speaker and light source device on each floor of structure 3800. GPS or other location-based systems on client device 3803, or accessible to it (e.g., via link 3321), may be used to determine locations of each combination speaker and light source device in structure 3800. Client device 3803 may also be moved along a perimeter of major walls or other structures within structure 3800 to map locations of walls, doors, windows, stairwells, and other relevant areas of structure 3800.
Detection of other chemicals, such as carbon monoxide (CO), may also trigger an emergency alarm and may result in one or more possible safe routes being calculated and presented to the user 3802. For example, detection of CO in ENV 3850e, in a garage on the lower floor, by a chemical sensor 3320 may result in automatic raising of a garage door to reduce a concentration of the CO, and may also result in calculation of safe routes away from CO-contaminated areas of structure 3800 to other safe areas, as described above. Because some chemicals, such as CO, may be denser than the ambient air, a fan such as described above may be activated to draw and/or mix up (e.g., circulate) ambient air in order to determine whether that air includes a lower layer of CO or other denser chemical(s) that may not rise up to the level of the chemical sensor 3320. Chemicals other than CO may be present, and the above description using CO is a non-limiting example of one type of chemical that may be detected using one or more chemical sensors 3320 in one or more combination speaker and light source devices. Not all of the chemicals that may be detected using the one or more chemical sensors 3320 need be harmful or toxic to the user 3802. Chemicals that may be detected by the one or more chemical sensors 3320 may include but are not limited to chemicals that may affect physical health, mental health, one or more parameters related to sleep, ability to stay awake, awareness, attention span, stress, relaxation, mood, or one or more biometric parameters of the user's 3802 body (e.g., respiration, heart rate, blood pressure, arousal of the SNS, etc.), for example.
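The circulate-and-resample step described above can be illustrated with a short sketch: take a baseline reading, run the fan to mix the ambient air, and compare a second reading against the baseline. The sensor and fan interfaces here are hypothetical stand-ins, not the actual hardware interfaces of sensor 3320.

```python
# Illustrative sketch: a dense gas may stratify below the level of the
# chemical sensor, so circulate the air and re-sample. The read_sensor and
# run_fan callables are hypothetical placeholders for real hardware.

def check_for_stratified_gas(read_sensor, run_fan, threshold):
    """Return True if circulating the air reveals a concentration rise
    that the initial (pre-mixing) reading missed."""
    baseline = read_sensor()   # reading before circulation
    run_fan()                  # mix the ambient air in the environment
    mixed = read_sensor()      # reading after circulation
    return mixed - baseline > threshold

# Example with stubbed hardware: the reading rises after mixing,
# suggesting a denser gas layer was present below the sensor.
samples = iter([2.0, 9.5])     # readings (e.g., ppm) before and after mixing
result = check_for_stratified_gas(
    read_sensor=lambda: next(samples),
    run_fan=lambda: None,      # stub; a real fan would circulate the air here
    threshold=5.0,
)
print(result)  # -> True
```

A real implementation would need sensor-specific calibration and units; the threshold here is an arbitrary assumed value.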
The foregoing example described a scenario where an emergency condition that may affect the user's 3802 health, safety, or welfare may result in one or more combination speaker and light source devices taking an action(s), such as presenting an escape route from an area of danger to an area of safety. However, a non-emergency situation in the environment of the user 3802 may also result in one or more combination speaker and light source devices using their various systems, such as chemical sensors and others, to analyze the environment and detect a chemical(s) (e.g., CO2) known to make the user 3802 drowsy and less productive. Drowsiness may be detected using other sensors in one or more other devices, such as wearable device 3801 (e.g., using accelerometry from motion sensors) and/or media device 3805 (e.g., using proximity sensors and/or passive motion sensors). Historical data (e.g., from an almanac of user data about user 3802) may be used to correlate the drowsiness with the presence of the chemical(s) (e.g., CO2), and analysis by the one or more combination speaker and light source devices and/or an external device (e.g., a server or Cloud resource) may result in an action to be taken by the one or more combination speaker and light source devices, such as presenting a suggestion to the user 3802 to move from the environment (e.g., ENV 3850a) where the chemical(s) causing the drowsiness are present to another environment (e.g., ENV 3850d) where no such chemical(s) are detected as being present. The information presented may also include a route (e.g., via stairwell 3811) from the unfavorable environment (e.g., ENV 3850a) to the more favorable environment (e.g., ENV 3850d).
In other examples, detection of one or more chemicals may result in the one or more combination speaker and light source devices accessing (e.g., using wired and/or wireless communications) one or more other systems to take actions such as activating a fan (e.g., ceiling fan 3832) or HVAC system 3881, opening/closing a window 3831, opening/closing a door (e.g., garage door 3837), activating a security system (e.g., an alarm), transmitting a message to a security service and/or first responder (e.g., ADT®, Fire, Police, Paramedics, 911, etc.), or transmitting a message to a client device (e.g., device 3803), just to name a few.
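A dispatch from a detected chemical to the responses listed above can be sketched as a lookup table. This is an assumed, simplified mapping for illustration; the chemical names, action strings, and device identifiers are placeholders, and a real system would invoke actual device interfaces rather than return strings.

```python
# Hypothetical mapping from a detected chemical to actions on other systems
# (e.g., garage door 3837, ceiling fan 3832, client device 3803). The keys
# and action descriptions are illustrative assumptions only.

ACTIONS = {
    "CO":          ["open garage door 3837", "notify client device 3803",
                    "sound alarm"],
    "smoke":       ["message first responders", "present escape route"],
    "paint fumes": ["open window 3831", "activate ceiling fan 3832"],
}

def actions_for(detected):
    """Collect the actions for every detected chemical, preserving order.
    Unknown chemicals produce no actions."""
    out = []
    for chem in detected:
        out.extend(ACTIONS.get(chem, []))
    return out

print(actions_for(["CO"]))
# -> ['open garage door 3837', 'notify client device 3803', 'sound alarm']
```

In practice the mapping could also depend on concentration, user almanac data, and which devices are reachable over wired/wireless links.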
One or more combination speaker and light source devices may learn over time, using any number of data inputs and one or more of their respective systems, how various chemicals detected by their chemical sensors 3320 affect a user. Initially, a correlation between a detected chemical and its effect on the user may not be immediately determinable; however, over time the correlation may be determined by analyzing user almanac data (e.g., sleep data) for a pattern or other form of data signature that manifests at or around the time the chemical is detected. For example, if chemicals from tobacco smoke are detected, and over time the user's sleep patterns indicate the user is restless during sleep (e.g., from captured motion data) or does not sleep for as long (e.g., temporal data, motion data, biometric data), then the tobacco smoke and the user data may be correlated to determine that if tobacco smoke is present the user will not sleep well, and a suggested course of action may be presented to the user (e.g., via a smartphone, a media device, a speaker in the combination speaker and light source device, a vibration in a wearable device, etc.).
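The learning step above amounts to comparing a sleep metric on nights when the chemical was detected against nights when it was not. The sketch below illustrates that comparison under assumed data; the "restlessness" metric, its values, and the decision threshold are all placeholders, and a real system would draw on richer almanac data (temporal, motion, and biometric).

```python
# Illustrative correlation of almanac sleep data with chemical detections.
# The restlessness values and threshold are assumptions for the sketch.

def correlate(nights):
    """Return the difference in average restlessness between nights when the
    chemical was detected and nights when it was not, or None if there is
    not yet enough data in either group to compare.

    nights -- list of (chemical_detected: bool, restlessness: float)
    """
    with_chem = [r for detected, r in nights if detected]
    without = [r for detected, r in nights if not detected]
    if not with_chem or not without:
        return None  # correlation not immediately determinable
    avg = lambda xs: sum(xs) / len(xs)
    return avg(with_chem) - avg(without)

nights = [(True, 0.8), (True, 0.7), (False, 0.2), (False, 0.3)]
delta = correlate(nights)
# A large positive delta suggests the chemical disturbs the user's sleep,
# which could trigger a suggested course of action to the user.
print(delta > 0.4)  # -> True
```

As more nights accumulate in the almanac data, the comparison becomes more reliable; with data in only one group, the sketch returns None, mirroring the "not immediately determinable" case above.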
Almanac data about a user may include relevant medical history, medical data, and other health-related data that may be used to determine what impact (positive or negative) a presence of one or more detected chemicals may have on the user. A data store, such as the Cloud, NAS, the Internet, or memory internal to the combination speaker and light source device, may serve as a source of the almanac data. As one example, if the almanac data indicates the user has a history of respiratory illness or asthma, then detection of chemicals known to be detrimental to the user may result in an appropriate action, such as suggesting the user keep his/her inhaler in close proximity in case an asthma attack is caused by the detected chemicals, or activating an air filtration system to scrub or remove the harmful chemicals from the environment the user is in. In some examples, a chemical detected by chemical sensor(s) 3320 may be perceived as a foul or irritating odor by a user, such as in the case of industrial pollution from farming or raising livestock/cattle. The almanac data may include information on the user's sensitivity to those odors, and an action taken may include the combination speaker and light source device(s) activating an air filtration unit, a fan, or a scent generator (3377, 3980) (e.g., an air freshener) to emit chemicals (3379, 3981) that may counter the odor caused by the detected chemicals (see scent generator 3990 in ENV 3850b).
In other examples, a chemical detected by chemical sensor(s) 3320 may be of known danger to the user (e.g., fire, smoke, industrial chemicals, toxic chemicals), and immediate action to notify the user and/or others may be taken by one or more combination speaker and light source devices and/or other devices in communication with the one or more combination speaker and light source devices. A structure that includes the one or more combination speaker and light source devices and their associated chemical sensors 3320 need not be a building or other type of terrestrial structure, but may also include, without limitation, vehicles and open spaces such as parking lots, parks, fields, stadiums, and plazas, just to name a few.
Attention is now directed to
At a stage 4004, a determination may be made as to whether or not signals have been detected on one or more of the chemical sensors (e.g., a signal detected on an output of a chemical sensor(s)). If no signals have been detected, then a NO branch may be taken to another stage of flow 4000, such as the stage 4002, where outputs from the one or more chemical sensors may continue to be read. If signals have been detected on an output(s) of one or more of the chemical sensors, then a YES branch may be taken to a stage 4006.
At the stage 4006, chemical sensors for which an output signal has been detected have their respective signals processed to determine which action(s), if any, are required to be taken by one or more combination speaker and light source devices and/or by other devices in communication with the one or more combination speaker and light source devices. Processing of the signals may occur internally to the one or more combination speaker and light source devices (e.g., using logic 2610 and/or μP 3360), externally (e.g., using resource 3899), or both, and may comprise hardware, software, or both. Data used for the processing and/or for determining whether or not an action is to be taken may include data from internal and/or external sources, including but not limited to resource 3899, memory 2608, the Cloud, the Internet, user almanac data, a data store, a database, NAS, RAID, a server, a client device, etc. Algorithms and/or data embodied in a non-transitory computer readable medium may be applied to the processing and/or the determining, and different algorithms and/or data may be applied to the signals from different types of chemical sensors (e.g., a CO sensor vs. a NO sensor).
At a stage 4008, if action is required, then a YES branch may be taken to a stage 4010. If no action is required, then a NO branch may be taken to another stage in flow 4000, such as a stage 4012. At the stage 4010, an appropriate action for the type(s) of chemicals detected by each chemical sensor may be taken by one or more combination speaker and light source devices and/or by other devices in communication with the one or more combination speaker and light source devices. For example, if a combination speaker and light source device includes four (4) chemical sensors and two (2) of those sensors had signals on their outputs that were processed and determined to require action, then at the stage 4010 the appropriate action by the appropriate devices may be taken. In this example, different appropriate actions may be taken for the two sensors, and those actions may be taken by the same or different devices. Further to the example, if a first of the two sensors detected chemicals associated with paint fumes, then the appropriate action may comprise communicating a message to a user to leave the environment (e.g., a room) where the fumes are present, opening windows in that room, and/or turning on a fan in that room to increase air circulation. If a second of the two sensors detects high concentrations of CO gas in a room the user is in, the appropriate action may comprise presenting an emergency exit route to the user (e.g., on display 3804 of client device 3803) and visually and/or audibly guiding the user along the route to a point of safety (e.g., outdoors), opening windows in the room, or sounding an alarm using one of its speakers, a vibration in a wearable device (e.g., wearable device 3801), or a sound system in a media device (e.g., media device 3805).
At a stage 4012 if processing of signals is completed, then a YES branch may be taken and flow 4000 may terminate or may transition to a stage in another flow as described above. If processing of signals is not completed, then a NO branch may be taken and flow 4000 may transition to another stage, such as the stage 4006 where processing may continue.
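The stages of flow 4000 described above can be sketched as a simple control loop. This is a minimal, hypothetical rendering of the flow's branch structure, not the claimed implementation; the sensor, processing, and action callables are placeholders, and error handling, timing, and the parallel/serial interaction with other flows are omitted.

```python
# A minimal sketch of flow 4000: read chemical-sensor outputs (stage 4002),
# branch on whether signals were detected (stage 4004), process each signal
# (stage 4006), decide whether action is required (stage 4008), take the
# action (stage 4010), and finish when processing is complete (stage 4012).
# All callables are hypothetical stand-ins for real device interfaces.

def flow_4000(read_sensors, process, take_action, max_cycles=10):
    for _ in range(max_cycles):
        signals = read_sensors()        # stage 4002: read sensor outputs
        if not signals:                 # stage 4004: NO branch, read again
            continue
        for signal in signals:          # stage 4006: process detected signals
            action = process(signal)
            if action is not None:      # stage 4008: YES branch
                take_action(action)     # stage 4010: take appropriate action
        return "done"                   # stage 4012: processing complete
    return "timeout"                    # no signals detected within the budget

log = []
print(flow_4000(
    read_sensors=lambda: ["CO:high"],
    process=lambda s: "sound alarm" if s.endswith("high") else None,
    take_action=log.append,
))  # -> done
print(log)  # -> ['sound alarm']
```

Per-sensor processing at stage 4006 could plug in different algorithms for different sensor types (e.g., a CO sensor vs. a NO sensor) by selecting the `process` callable accordingly.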
Flow 4000 may occur in parallel or in series with other flows described above and may use data and signals from other systems internal to and/or external to the combination speaker and light source devices. For example, one or more of data, signals, and processing related to sleep of a user as described above in reference to
Although the foregoing examples have been described in some detail for purposes of clarity of understanding, the above-described inventive techniques are not limited to the details provided. There are many alternative ways of implementing the above-described inventive techniques. The disclosed examples are illustrative and not restrictive.
This application claims the benefit of U.S. Provisional Patent Application No. 61/825,509 (Attorney Docket No. ALI-274P), filed on May 20, 2013; this application is a continuation-in-part of U.S. patent application Ser. No. 14/209,329 (Attorney Docket No. ALI-421), filed on Mar. 13, 2014, which claims the benefit of U.S. Provisional Patent Application No. 61/825,509 (Attorney Docket No. ALI-274P), filed on May 20, 2013, U.S. Provisional Patent Application No. 61/786,179 (Attorney Docket No. ALI-270P), filed on Mar. 14, 2013, and U.S. Provisional Patent Application No. 61/786,473 (Attorney Docket No. ALI-271P), filed on Mar. 15, 2013; this application is a continuation-in-part of U.S. patent application Ser. No. 14/212,832 (Attorney Docket No. ALI-418), filed on Mar. 14, 2014, which claims the benefit of U.S. Provisional Patent Application No. 61/786,473 (Attorney Docket No. ALI-271P), filed on Mar. 15, 2013; and this application is a continuation-in-part of U.S. patent application Ser. No. 14/207,420 (Attorney Docket No. ALI-271), filed on Mar. 12, 2014, which claims the benefit of U.S. Provisional Patent Application No. 61/786,473 (Attorney Docket No. ALI-271P), filed on Mar. 15, 2013, all of which are incorporated by reference herein in their entirety for all purposes.
Number | Date | Country
---|---|---
61825509 | May 2013 | US
61825509 | May 2013 | US
61786179 | Mar 2013 | US
61786473 | Mar 2013 | US
61786473 | Mar 2013 | US
61786473 | Mar 2013 | US
 | Number | Date | Country
---|---|---|---
Parent | 14209329 | Mar 2014 | US
Child | 14281856 | | US
Parent | 14212832 | Mar 2014 | US
Child | 14209329 | | US
Parent | 14207420 | Mar 2014 | US
Child | 14212832 | | US