The present invention relates to sleep, and more particularly to improving the sleep experience.
Numerous white noise, nature sound, and music devices are sold to help users sleep better. In general, these devices play music or other sounds on a timer, helping the user fall asleep with the sounds. The sound can also be used to mask background noise. In some systems, the selected sounds are calming, such as rain or waves.
The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements.
A method and apparatus to provide conditions that guide a user into an optimal sleep pattern. In one embodiment, the system includes a physiological monitor that may be used to detect a user's current sleep state. The system is further able to determine environmental factors, which may include the current time, alarm settings, temperature, barometric pressure, weather, and other factors that may impact the user's optimal sleep experience. Furthermore, in one embodiment, the system may utilize a known waking time to ensure that the user sleeps well and wakes up refreshed. In one embodiment, the system may control conditions such as sounds, odors (olfactory), tactile outputs (haptics), environmental conditions (temperature, humidity, light), mattress firmness, and other conditions that would improve the user's sleep quality. In one embodiment, the system may interface with a house automation system to control some of these conditions.
For example, the system may use music or other sound patterns to guide the user into a deep sleep. In one embodiment, other conditions may also be adjusted to guide the user to the appropriate sleep state. In one embodiment, the system may guide the user back into deep sleep as needed throughout the sleep period. The system may further adjust conditions to gradually wake the person at the ideal time, to complete the optimal sleep pattern. In one embodiment, the waking conditions may be designed to refresh and energize the user. In one embodiment, the time to wake up a user may be selected based on a number of factors, including sleep phase, calendar data, and weather and traffic data. For example, a user may set an alarm to “before 7:30 a.m.” to make an appointment at 9 a.m. However, if the system has information indicating that it is snowing, the system may wake up the user prior to that time.
The following detailed description of embodiments of the invention makes reference to the accompanying drawings, in which like references indicate similar elements, showing by way of illustration specific embodiments of practicing the invention. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. One skilled in the art understands that other embodiments may be utilized, and that logical, mechanical, electrical, functional, and other changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
The sleep sensing system 100 further includes a sound output system 150. The sound output system 150 may be implemented in the same device as the sensor system 105, such as a speaker integrated into the wristband 110, smart phone 120, pillow 135, or other sensing system 140. Alternatively, the sound output system 150 may be a separate device, such as a speaker system, a pillow 135 that includes a speaker, etc. The sound output system 150 is used to guide the user to an improved sleep pattern, as will be described in more detail below. The sound output system 150 communicates with the sensor system 105, which controls the sound output system 150 to generate sounds based on the real-time data obtained by the sensors.
In one embodiment, in addition to the sound output system 150, the system may include other output mechanisms, such as tactile output, that are useful to transition a user to a different sleep phase. For example, in one embodiment, the sleep sensing system 100 may include a condition control system 155, to control conditions other than sound in the environment.
In one embodiment, the condition control system 155 enables the sleep sensing system 100 to access home environmental controls 175. Home environmental controls 175 may interface with a home automation system 185. In one embodiment, one or more environmental conditions (temperature, light, humidity, mattress firmness) may be controlled. In one embodiment, the system interfaces with the home automation system 185, so it is able to open and close windows and adjust thermostats, humidity, mattress controls, lighting, olfactory stimulants (odors), etc.
In one embodiment, the sensor system 105 is located in a wristband 110 or armband 130, and the sound output system 150 is a smart phone 120 or pillow 135, while the condition control system 155 is a communication logic that enables the sleep sensing system 100 to communicate with home environmental controls 175 and/or home automation system 185. In one embodiment, the interface between the wristband 110 and the smart phone 120 or other computer system 170 may be through an audio jack interface. In one embodiment, the interface may be through a BLUETOOTH™ or other personal area network interface.
In one embodiment, the sensing system 100 may further communicate with a network 160. Network 160 may include one or more of a wireless network (WiFi), a cellular network, a local area network, a personal area network (PAN) such as BLUETOOTH™, or another type of wired or wireless connection. The network 160 may provide access to a computer system 170, remote server 180, home environmental controls 175, home automation system 185, and/or data on various third party systems 190. The computer system 170 may provide additional processing, and may, in one embodiment, provide the user with additional user interface features. In one embodiment, the computer system 170 may be coupled directly to the sleep sensing system 100, continuously or periodically, via direct connection 175. This could be implemented, in one embodiment, via a Universal Serial Bus (USB), micro-USB, audio input based interface, or other type of wired connection. Alternatively, the sensor system and the computer system 170 may be coupled via network 160.
In one embodiment, server 180 may provide data to the sleep sensing system 100. In one embodiment, the sleep sensing system 100 may also provide information to the server 180. In one embodiment, the server 180 may collect data from multiple users, and use cumulative information to provide predictive recommendations, set default options, and suggest alterations to settings.
In one embodiment, the server 180 may also provide sound files to the sleep sensing system 100, enabling updates to the music or other sounds being used. This may be done in response to a user request, in response to additional information being learned, in response to new research, or for other reasons. In one embodiment, the server 180 may be used to stream the sounds played by the sleep sensing system 100.
In one embodiment, the sleep sensing system 100 may also couple via network 160 to third party systems 190 and/or home environmental controls 175. Third party systems 190 and/or home environmental controls 175 may provide environmental data to the sleep sensing system 100. Third party systems 190 may provide external environmental data, while home environmental controls 175 provide data about, and in one embodiment enable control of, the internal environment. Third party systems 190 may also provide sounds to the sleep sensing system 100. In one embodiment, the system may be set up to enable a user to download music or other appropriate sound sets from a store, such as the ITUNES MUSIC STORE™ by APPLE COMPUTERS™. In one embodiment, the sleep sensing system 100 couples to computer 170 or server 180, which in turn is coupled to third party systems. In this way, the sleep sensing system 100 can obtain data from third parties without having to manage third party connections.
The sleep sensing system 100 includes one or more sensors 205. The sensors 205 may include an accelerometer 210, a temperature sensor 212, and a heart rate sensor 214. In one embodiment, temperature sensor 212 may include two sensors, one for the user's body temperature and the other for ambient temperature sensing. In one embodiment, sensors that detect brain waves 216 may also be used. In one embodiment, cameras to observe eye movement or other sensors 218 may also monitor the user's state. Additional sensors to monitor the user's state 218 and/or the user's environment 219 may also be part of the sleep sensing system 100.
Sleep sensing system 100 further includes sleep state logic 220. Sleep state logic 220 utilizes data from one or more sensors 205 to determine the user's current sleep state. In one embodiment, the sleep states may be N1, N2, N3, and REM. In another embodiment, the sleep states may be light sleep, deep sleep, and REM sleep. In one embodiment, sleep is grouped into light sleep, deep sleep, and awake. Other groupings of sleep states, including further divisions or subdivisions, may also be used. In one embodiment, sleep state can be determined by sleep state logic 220 based on the user's micro-movements, body temperature, heart rate, and/or brain waves. Additional sensor data may be used to correctly identify the current sleep state of the user.
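As a concrete illustration, the following sketch shows one way sleep state logic 220 might map sensor readings to a coarse sleep state. The thresholds, feature names, and function are illustrative assumptions; the specification does not prescribe a particular classification algorithm.

```python
# Illustrative sketch only: thresholds and features are assumptions,
# not values from the specification.

def classify_sleep_state(movement_per_min: float,
                         heart_rate_bpm: float,
                         body_temp_delta_c: float) -> str:
    """Map micro-movement, heart rate, and body temperature trend
    to a coarse sleep state (awake / light / rem / deep)."""
    if movement_per_min > 10.0:
        return "awake"
    # REM: near-waking heart rate with very little muscle movement.
    if movement_per_min < 0.5 and heart_rate_bpm > 60.0:
        return "rem"
    # Deep sleep: lowered heart rate and a drop in core temperature.
    if heart_rate_bpm < 55.0 and body_temp_delta_c < -0.3:
        return "deep"
    return "light"
```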
Sleep tracker 225 receives the current sleep state information from sleep state logic 220 and tracks the overall sleep for the present sleep period. Sleep tracker 225 in one embodiment adds the data for the current sleep period to a sleep database 230. The sleep database 230 tracks cumulative sleep information. In one embodiment, the information in the sleep database 230 may be used for predictive modeling for the user. The cumulative sleep data 290 collected on a computer system 280 from multiple users may also be used for predictive modeling, based on statistical analysis 292, across all users. In one embodiment, the data from sleep database 230 may be passed to computer system 280 using communication logic 235, to be included in the collected sleep data 290.
The data from sleep tracker 225 and sleep state logic 220 is used by the sleep stage selector 240, to select the optimal next sleep state for the user. In one embodiment, the sleep stage selector 240 also utilizes data from clock 245 and alarm logic 274.
In one embodiment, data from environment sensor 219 may also be used by sleep stage selector 240. In one embodiment, instead of coming from an environment sensor 219 in sensors 205, the environment data may be received from a remote device, such as home environment controls 288 or home automation system 289. In one embodiment, data from computing system(s) 280 may also be used by the sleep stage selector 240. The external data may include traffic data 282, weather data 284, data from home environment controls 288 and home automation system 289, and/or other data 286 that may influence the user's sleep quality and the time the user needs to awaken.
The optimal next sleep state, as selected by sleep stage selector 240, is passed to the condition adjuster 250. In one embodiment, the optimal next sleep state may be the same state as the current sleep state. Condition adjuster 250 optionally selects one or more conditions to adjust. For example, sound selector 252 may select a sound from sound database 254, or use sound generator 270 to generate the appropriate sound(s), to either maintain the user in the current sleep state or transition the user to the optimal next sleep stage. In one embodiment, sound selector 252 may access a remote sound source 294 through computer system 280. In one embodiment, sound selector 252 may receive sounds from sound generator 296.
Condition adjuster 250 may select, additionally or alternatively, one or more environmental variables to adjust using environment selector 256. The environmental variables may include temperature, light, humidity, mattress firmness, scents/smells, tactile feedback, etc. In one embodiment, one or more of these environmental variables may be adjusted via home environment controls 288 and/or home automation system 289.
In one embodiment, the condition adjuster 250 is only used when the user is not naturally transitioning to the optimal next sleep state. Thus, the system allows the user to move naturally through the sleep states, unless the system determines that the state to which the user would be moving is not optimal. In that case, the condition adjuster 250 may be used to guide the user to the pre-identified optimal sleep state.
In one embodiment, the user may enter condition preferences through user interface 265, or via a user interface 298 on computing system 280. The computing system 280 may be a local computer, a remote server, or a remote computer system. In one embodiment, the user interface 298 may be presented via an application running on the sleep sensing system 100 or computer system 280, via a web-based interface, or via another type of interface. In one embodiment, the system includes one or more default sets of conditions for each sleep state and sleep state transition. In one embodiment, these default sets may be altered by the user, within certain parameters. For example, in one embodiment, the transition to the awake state may by default be energizing music, the scent of coffee, and an increase in ambient light. If a user prefers something less energetic, or prefers not to have coffee, he or she may alter this setting. Similarly, the user may prefer the sounds of the rainforest rather than the sounds of the ocean for transitioning into sleep.
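Such default condition sets might be represented as a mapping from sleep state transitions to conditions, with user preferences overriding the defaults. In the sketch below, only the wake-up defaults (energizing music, coffee scent, increased light) come from the example above; the other entries and all names are illustrative assumptions.

```python
# Hypothetical default condition sets, keyed by (from_state, to_state).
DEFAULT_CONDITIONS = {
    ("light", "awake"): {"sound": "energizing_music",
                         "scent": "coffee",
                         "light": "increase"},
    ("awake", "light"): {"sound": "ocean_waves",
                         "light": "dim"},
    ("light", "deep"):  {"sound": "slow_cadence_tones",
                         "volume": "low"},
}

def conditions_for(transition, user_prefs=None):
    """Return the conditions for a transition, with user preferences
    (e.g. rainforest sounds instead of ocean) overriding the defaults."""
    conditions = dict(DEFAULT_CONDITIONS.get(transition, {}))
    conditions.update(user_prefs or {})
    return conditions

# e.g. a user who prefers rainforest sounds for falling asleep:
print(conditions_for(("awake", "light"), {"sound": "rainforest"}))
```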
When a condition adjustment is selected by condition adjuster 250, it is sent to output system 260. Output system 260 may include one or more of a speaker/transducer 262, implemented as a separate device, in the same device, or as a headset. Output system 260 may also include automation output 264, to send information to home automation system 289, and/or control output 268 to adjust home environmental controls 288, or other output mechanisms to enable one or more conditions to be altered. In one embodiment, a sound may be conveyed via bone conduction or other means that do not involve traditional sound output. In one embodiment, the sound selector 252 may adjust the selected sounds based on the type of speaker/transducer 262 used. For example, bone conduction makes tones sound lower and fuller, so a different selection may be preferred, or the selection's tone may be altered.
The other control output 268 may be haptic output, such as vibrations, for example. In one embodiment, the other control output 268 may include scents. Other mechanisms, now known or later developed, to move the user between sleep phases may be utilized as part of output system 260.
Alarm logic 274, in one embodiment, optionally receives an alarm time from a user, via a user interface 265, 298. In one embodiment, the alarm logic 274 and the sleep stage selector 240 ensure that the user is woken up from an optimal sleep state. Waking up from a light sleep, rather than a deeper sleep, leaves the person feeling more awake and less tired.
In one embodiment, the alarm logic 274 may interface with a calendar 275, to select a waking time based on calendared events. For example, if the user has a telephone conference scheduled at 7 a.m., the alarm logic 274 may set the alarm to 6:30 a.m. to ensure that the user can attend the scheduled telephone conference. In one embodiment, the system lets the user set a “nearest light sleep phase to Time” type of alarm. In one embodiment, the system lets the user set a particular time for the alarm, and ensures that the sleep states prior to that time enable the user to be in the optimal sleep state when the alarm is scheduled to go off.
Alarm logic 274 and user interface 265, 298, in one embodiment, enable a user to select the alarm tones used. In one embodiment, the default alarm includes a sleep report. In one embodiment, the alarm initially is a set of tones designed to wake up and energize the user, followed by the sleep report. The sleep report, in one embodiment, is an announcement of the user's completed sleep period data. For example, the announcement may be “Your optimal wake-up time is now. You slept 6.25 hours, with 4 hours of deep sleep, 2.1 hours of REM sleep, and it took you 17 minutes to fall asleep. Your sleep efficiency is 97%, and you are getting 95% of your optimal sleep.” Additional data, including week-to-date data, or less data, may be provided.
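The sleep report announcement can be assembled directly from the tracked sleep data. The sketch below reproduces the example announcement above; the parameter names are assumptions.

```python
# Sketch of assembling the spoken sleep report; the wording follows the
# example announcement given in the text.

def sleep_report(total_h, deep_h, rem_h, minutes_to_fall_asleep,
                 efficiency_pct, optimal_pct):
    return (f"Your optimal wake-up time is now. You slept {total_h} hours, "
            f"with {deep_h} hours of deep sleep, {rem_h} hours of REM sleep, "
            f"and it took you {minutes_to_fall_asleep} minutes to fall asleep. "
            f"Your sleep efficiency is {efficiency_pct}%, and you are getting "
            f"{optimal_pct}% of your optimal sleep.")

print(sleep_report(6.25, 4, 2.1, 17, 97, 95))
```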
In one embodiment, the system may further make a control announcement regarding adjusting the preferences via the user interface. For example, the message may continue “Press once to snooze, twice to dismiss, 3 times to push your wake-up time forward by [26] minutes.” In one embodiment, the length that the wake-up time is pushed forward is set based on sleep data. In one embodiment, the timing may be determined by the system to provide an optimal amount of additional rest. For example, the additional sleep segment may be 26 minutes as a default. This is the approximate time for a power nap: sufficient time to experience REM sleep, but not enough to go into a deep sleep. In one embodiment, the actual timing of the fraction of the sleep cycle selected is based on the user's own statistically analyzed sleep patterns.
In one embodiment, the sound of the alarm may be selected from available alarm tones. In one embodiment, this may be implemented as a ring tone. In one embodiment, the alarm may alternatively be a non-auditory alarm, such as a vibration, scent, or other mechanism. Such a silent alarm works because it is triggered when the user is already in a light sleep phase.
At block 320, the process determines the current sleep state of the user. In one embodiment, this is done by utilizing the data from one or more sensors. As noted above, sensors may include accelerometers, thermometers, blood pressure meters, heart rate sensors, or other sensors.
At block 330, the process determines the optimal next stage of sleep for the user. In one embodiment, this depends on the current time (e.g. 2 a.m.), the expected waking time of the user (e.g., 7 a.m.), and the sleep pattern so far in the current sleep period. In general, a user needs a certain amount of REM sleep for dreaming. However, deeper sleep is more restful. Furthermore, waking up from a light sleep leaves one more rested than waking up from a deeper sleep. Therefore, the current optimal stage of sleep may vary based on numerous factors. Other factors may also be taken into account, as will be described below.
Generally, sleep is cyclical: a sleeper cycles through light sleep, REM sleep, and deep sleep multiple times in a sleep period. For the purposes of this description, a “sleep period” is the entire night or other sleep unit, while a “sleep cycle” is a full cycle of light sleep, REM sleep, and deep sleep, which usually takes about 90 minutes for the average sleeper.
At block 340, the process determines whether the current sleep stage is different from the optimal next sleep stage. If so, at block 350, the process determines which conditions, if any, should be applied in order to move the user to the optimal next stage of sleep.
If the current sleep stage is the optimal next sleep stage, the process at block 360 determines which conditions, if any, should be selected to maintain the user in the optimal stage of sleep. In one embodiment, conditions are selected to maintain the user in the current stage of sleep if the data indicates that the user may otherwise leave this stage of sleep.
At block 370, the selected conditions are implemented. These conditions may include sounds, scents, vibrations, light/visual effects, temperature, humidity, mattress condition, and/or other types of output that may impact the user's sleep. The term “sound” is used to indicate the sensation produced by stimulation of the organs of hearing by vibrations transmitted through the air or other medium. These sounds may be music, tones, soundscapes of nature or water, sounds below the threshold of hearing, vibrations, binaural beats, etc.
The process then returns to block 320, to determine the user's current state of sleep. In one embodiment, the system periodically tests the user's sleep state, e.g. every few minutes. In another embodiment, the system continuously tests the user's sleep state. In another embodiment, the system utilizes one sensor to trigger testing of the sleep state. For example, the system may continuously measure body temperature, and when a change in body temperature indicates a change in sleep state, the other sensors may be used. In another example, the system may continuously or frequently test movement, and turn on other sensors to verify sleep state when needed. In one embodiment, the process periodically evaluates the optimal next stage of sleep, e.g. every few minutes, or every 30 minutes.
In one embodiment, when determining the next optimal phase of sleep, the system may also evaluate an estimated length of that sleep phase. If that determination is made, the system in one embodiment reduces the frequency of testing the current sleep state and the next optimal sleep state, during most of the estimated length of the sleep phase. For example, in one embodiment, if the next optimal sleep phase is determined to be deep sleep for the next 50 minutes, the system tests frequently until the user is determined to be in the deep sleep phase, which may take 2-5 minutes. Then, the system tests only every 10 minutes for the next 40 minutes, as long as the user remains in the deep sleep phase during that time. When the estimated end of the deep sleep phase is approaching, the system starts testing more frequently again.
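The adaptive testing schedule described above might look like the following sketch: poll every minute until the target phase is reached, back off to sparse checks mid-phase, and resume frequent checks near the phase's estimated end. The intervals and the read_sleep_state() callback are illustrative assumptions.

```python
import time

# Sketch of the adaptive sleep-state testing schedule described above.

def monitor_phase(read_sleep_state, target_state, expected_minutes):
    elapsed = 0.0
    # Frequent checks until the target phase is reached (typically 2-5 min).
    while read_sleep_state() != target_state:
        time.sleep(60)
        elapsed += 1.0
    # Sparse checks during the bulk of the estimated phase length.
    while elapsed < expected_minutes - 10.0:
        time.sleep(600)
        elapsed += 10.0
        if read_sleep_state() != target_state:
            return elapsed  # unexpected transition: re-evaluate the plan
    # Frequent checks again as the estimated end of the phase approaches.
    while read_sleep_state() == target_state:
        time.sleep(60)
        elapsed += 1.0
    return elapsed
```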
At block 420, the process receives the current sleep state and the target sleep stage, where the target sleep stage is the optimal next sleep stage.
At block 430, the process determines additional available information about the user, and the environment when appropriate. The information about the user may include data from one or more sensors, as well as user personal data, e.g. age, sex, etc. Environmental data may include ambient temperature data, humidity, noise level, and other information that may impact the user's quality of sleep or sound effectiveness.
At block 440, the appropriate sounds are selected. The appropriate sound has a cadence and tone that is designed to guide the sleeper into the optimal sleep phase. For example, it is known that music with a slower cadence slows down the heart rate.
In one embodiment, brainwave entrainment can be used to guide a user to various sleep states. The sounds selected may use binaural beats, monaural beats, isochronic tones, or other modes of modulation that can guide the brain into the optimal sleep state.
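As an illustration of one such modulation mode, the sketch below generates a stereo binaural beat: each ear receives a pure tone, and the small frequency difference between them is perceived as a beat. The 200 Hz carrier and 2 Hz (delta-range) offset are illustrative choices, not values from the specification.

```python
import numpy as np

# Sketch of binaural beat synthesis; parameters are assumptions.

def binaural_beat(carrier_hz=200.0, beat_hz=2.0,
                  seconds=30.0, sample_rate=44100):
    t = np.arange(int(seconds * sample_rate)) / sample_rate
    left = np.sin(2 * np.pi * carrier_hz * t)                # left-ear tone
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)   # offset tone
    return np.stack([left, right], axis=1)  # stereo buffer, shape (N, 2)
```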
In one embodiment, the sound type may be based on user preferences, entered through a user interface. For example, the user may indicate that he or she has a preference for nature sounds. In that case, the appropriate nature sounds would be selected. In one embodiment, jazz, pop, classical, or other musical genres may be set as a preference. In one embodiment, the system uses generated sounds in order to have precise control of the tones and beats used. In one embodiment, the system may utilize existing musical compositions, either as is or with modifications to assist in guiding a user into the optimal sleep state.
At block 450, the system in one embodiment selects an appropriate sound volume for the selected sound. The selected sound volume is based on a combination of the user's current sleep state and environmental factors. In a louder environment, the sound is played more loudly.
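A minimal sketch of such volume selection, assuming the output level is scaled a few decibels above the ambient noise floor and reduced in deeper sleep states; the specific offsets and the 20 dB floor are illustrative assumptions.

```python
# Illustrative volume selection: louder environments get louder output,
# deeper sleep states get quieter output.

def select_volume_db(ambient_noise_db, sleep_state):
    state_offset = {"awake": 0.0, "light": -6.0, "rem": -9.0, "deep": -12.0}
    volume = ambient_noise_db + 3.0 + state_offset.get(sleep_state, 0.0)
    return max(20.0, volume)  # never drop below the audibility floor
```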
At block 460, the process determines whether the system is currently playing sounds. If so, at block 470, a transition is created, to gently transition from the current sounds to the appropriate selected sounds.
The process then ends at block 480. A similar flow, in one embodiment, can be used for selecting a sound to maintain the user in a particular sleep state.
At block 520, the process determines the expected time the user must wake up. In one embodiment, this calculation is done once per sleep period. Subsequent calculations of the appropriate sleep phase use the previously calculated waking time.
In one embodiment, if there is an alarm function available in the system, and the alarm is set, the alarm time is used as the waking time. In one embodiment, if no alarm is set, the system may interface with a user's calendar, and appointments in the calendar may be used as the basis for setting the waking time. In one embodiment, if the determination is based on an appointment in the calendar, the system may perform further calculations to provide sufficient time for a user to reach the destination. For example, if the user has on their calendar “Meeting at FullPower, Inc.” the system may wake the user in time to allow the user to get ready, and go to the scheduled meeting. In one embodiment, such calculations may take into account the weather. In one embodiment, such calculations may also take into account the user's general modes of transportation, if this information is available.
In one embodiment, if there is no other setting, the user's optimal sleep length is used to set the waking time. For example, the average 15-year-old should sleep for 9 hours. Thus, in one embodiment, the system is set to wake such a user after 9 hours of sleep. In one embodiment, the optimal sleep length may be adjusted based on the user's personal characteristics, or additional data about the user. For example, if the user has indicated that he or she feels most rested after an 8-hour sleep, that may be used.
In one embodiment, if the user did not set an alarm, either directly or via the calendar, the system may not set an alarm and may let the user sleep as long as he or she can.
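The fallback chain described in the preceding paragraphs (explicit alarm, then calendar-derived time, then optimal sleep length, then no alarm at all) might be sketched as follows; the 45-minute preparation pad and all parameter names are illustrative assumptions.

```python
from datetime import datetime, timedelta

# Sketch of the waking-time fallback chain described above.

def waking_time(alarm=None, first_appointment=None, travel=timedelta(0),
                sleep_start=None, optimal_sleep=timedelta(hours=8)):
    if alarm is not None:
        return alarm
    if first_appointment is not None:
        prep = timedelta(minutes=45)  # assumed get-ready time
        return first_appointment - travel - prep
    if sleep_start is not None:
        # No alarm and no appointment: wake after the optimal sleep length.
        return sleep_start + optimal_sleep
    return None  # let the user sleep as long as he or she can

# e.g. a 9 a.m. meeting with 40 minutes of travel in snowy weather:
print(waking_time(first_appointment=datetime(2013, 3, 6, 9, 0),
                  travel=timedelta(minutes=40)))
```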
At block 525, the sleep data about the current sleep period so far is collected. The sleep data includes the number and length of each sleep phase in this sleep period.
There are three stages of non-REM sleep: Stage N1 (transition to sleep), Stage N2 (light sleep), and Stage N3 (deep sleep). The N1 stage is short, lasting only a few minutes, while stage N2 is the first stage of true sleep, lasting about half an hour. The N3 stage is deep sleep, from which it is hard to wake. REM sleep is where dreaming occurs.
The average adult spends approximately fifty percent of total sleep time in stage N2, twenty percent in REM sleep, and thirty percent in the remaining stages, including deep sleep. Deep sleep renews the body, and is important for health, and REM sleep is necessary for the mind. While sleep needs vary from individual to individual, everyone needs an adequate amount of REM sleep and deep sleep.
Sleep generally occurs in cycles, which take on average 90 minutes. The precise length of the cycle depends on the individual as well as environmental factors. During each cycle, the user should get some deep sleep and some REM sleep. Optimally most of the sleep in any sleep period should be either REM or deep sleep. The current sleep phase and the cumulative sleep phases so far are used to calculate the optimal next sleep phase.
At block 530, the process determines whether the user should be waking up in less than a half sleep cycle. As noted above, a full sleep cycle is generally 90 minutes. However, the sleep cycle length varies by person, and in one embodiment, the system may adjust the default sleep cycle length by observed sleep cycle length for the particular user.
If the user should be waking up within half a sleep cycle, the system continues to block 535, where the process determines whether the user is in deep sleep. If the user is in deep sleep, at block 540, the process sets the optimal sleep phase to maintaining the user in deep sleep until approximately 15 minutes prior to the scheduled waking period. Approximately 15 minutes prior to the waking period, the system should shift the user into REM sleep. By waking up from the REM sleep, rather than deep sleep, the user will wake up more refreshed and rested, and have more energy. In one embodiment, the length of time needed to shift the user from deep sleep into REM sleep is user dependent, therefore this time frame may be adjusted, as needed. The process then ends at block 545.
If the user is not in deep sleep, the system at block 550 sets REM sleep as the optimal sleep state. In general, 45 minutes is too short to transition into deep sleep, have a sufficiently long deep sleep cycle, and transition back into REM sleep prior to waking. Therefore, the system sets the REM state as the optimal sleep state. The user can be maintained in the REM sleep state, which will enable the user to awaken well rested. The process then ends at block 545.
If, at block 530, the process determines that the user does not need to wake in less than a half sleep cycle, the process continues to block 555.
At block 555, the process determines whether the user is in deep sleep. If so, at block 560, the process determines whether it is the end of a natural sleep cycle. In general, the system is designed not to disrupt the cyclic nature of sleep, but rather to optimize sleep quality by adjusting the sleep phases. Therefore, if it is the end of the cycle, at block 565 the optimal next sleep phase is set to a transition to REM sleep. The process then ends at block 545.
If it is not the end of a sleep cycle, the optimal sleep stage is set to maintaining the deep sleep, at block 570. The process then ends at block 545.
If the user is not in deep sleep, at block 575 the process determines whether the user has had enough REM sleep. In general, the REM stage of sleep can be shortened without impacting the user, as long as the user gets more than a minimum required amount of REM sleep. In one embodiment, the minimum amount of REM sleep that allows a user to stay healthy and refreshed is 30 minutes per cycle. In one embodiment, this is user dependent, and the system may use data obtained during unmodified natural sleep cycles to set the REM sleep minimum for the user. If the user has not yet had enough REM sleep, the optimal next sleep stage at block 585 is set to maintain the user in the REM state. If the user has had enough REM sleep, at block 580 the process sets the optimal stage as a transition to deep sleep. The process then ends at block 545.
In this way, the system attempts to enable the user to have a more refreshing sleep period, by ensuring that the user gets as much deep sleep as possible while getting sufficient REM sleep.
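The decision flow of blocks 530 through 585 can be transcribed directly into code, as in the sketch below. The 45-minute half cycle and 30-minute REM minimum reflect the defaults given above; both are described as adjustable per user.

```python
# Transcription of the decision flow of blocks 530-585.

HALF_CYCLE_MIN = 45.0          # half of the default 90-minute cycle
MIN_REM_PER_CYCLE_MIN = 30.0   # default REM minimum per cycle

def optimal_next_stage(current_stage, minutes_to_wake,
                       rem_this_cycle_min, at_cycle_end):
    if minutes_to_wake < HALF_CYCLE_MIN:                    # block 530
        if current_stage == "deep":                         # block 535
            # Block 540: hold deep sleep, shifting to REM ~15 minutes
            # before the scheduled waking time.
            return "deep" if minutes_to_wake > 15.0 else "rem"
        return "rem"                                        # block 550
    if current_stage == "deep":                             # block 555
        # Blocks 560-570: respect the natural cycle boundary.
        return "rem" if at_cycle_end else "deep"
    if rem_this_cycle_min < MIN_REM_PER_CYCLE_MIN:          # block 575
        return "rem"                                        # block 585
    return "deep"                                           # block 580
```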
The timings described in FIG. 5 are merely exemplary, and may be adjusted based on the user's own observed sleep patterns.
At block 720, the system prompts the user to select alarm timing. In one embodiment, the alarm timing may be an exact time, e.g. 7:25 a.m., or a “nearest optimal time,” e.g. prior to 7:30 a.m.
At block 725, the system prompts the user to select the alarm conditions. Alarm conditions that may be adjusted depend on the controls available to the system. For example, if the user has a coffee machine that is connected to the alarm system, the coffee machine may be initiated to add the scent of coffee. A scent generation system may alternatively be used to generate a scent. Additional controls may include temperature, mattress firmness, light levels, humidity, and other aspects of the user's personal environment.
At block 730, the user can select alarm sounds. In one embodiment, in addition to the “standard” alarm sounds available, the user may be presented with additional options. Additional options, in one embodiment, include “sleep reporting” and “silent.” A silent alarm would use other conditions, such as a vibration or another non-sound-based alert. In one embodiment, a “silent” alarm may also indicate that the alarm should be personal. A personal alarm is designed not to wake any other sleepers, thus excluding alarm options such as increasing light levels in the room or significantly altering the room temperature. The sleep reporting alarm is designed to announce information about the user's just-completed sleep period. In one embodiment, this information is available throughout the day as well.
At block 740, the process permits the user to select whether control announcement should be provided along with the alarm. In one embodiment, for any non-silent alarm a control announcement may be added to the alarm. The control announcement informs the user how to alter the alarm settings simply. In one embodiment, the control announcement may be, “Press once to snooze, press twice to dismiss, press three times for additional sleep time of 26 minutes.” In one embodiment, this announcement is included by default with the sleep reporting alarm, and may also be added to the other alarm formats that are non-silent.
At block 750, the process sets the alarm logic to use the selected alarm format. In one embodiment, regardless of whether the sleep reporting alarm format is used, the sleep reporting is available to the user via the user interface. In one embodiment, for example, the user may press a button, or otherwise issue a command to provide the sleep report.
The process ends at block 760.
The data processing system illustrated in FIG. 8 includes a bus or other internal communication means 840 for communicating information, and a processing unit 810 coupled to the bus 840 for processing information.
The system further includes, in one embodiment, a random access memory (RAM) or other volatile storage device 820 (referred to as memory), coupled to bus 840 for storing information and instructions to be executed by processor 810. Main memory 820 may also be used for storing temporary variables or other intermediate information during execution of instructions by processing unit 810.
The system also comprises in one embodiment a read only memory (ROM) 850 and/or static storage device 850 coupled to bus 840 for storing static information and instructions for processor 810. In one embodiment the system also includes a data storage device 830 such as a magnetic disk or optical disk and its corresponding disk drive, or Flash memory or other storage which is capable of storing data when no power is supplied to the system. Data storage device 830 in one embodiment is coupled to bus 840 for storing information and instructions.
The system may further be coupled to an output device 870, such as a cathode ray tube (CRT) or a liquid crystal display (LCD) coupled to bus 840 through bus 860 for outputting information. The output device 870 may be a visual output device, an audio output device, and/or tactile output device (e.g. vibrations, etc.)
An input device 875 may be coupled to the bus 860. The input device 875 may be an alphanumeric input device, such as a keyboard including alphanumeric and other keys, for enabling a user to communicate information and command selections to processing unit 810. An additional user input device 880 may further be included. One such user input device 880 is a cursor control device, such as a mouse, a trackball, a stylus, cursor direction keys, or a touch screen, which may be coupled to bus 840 through bus 860 for communicating direction information and command selections to processing unit 810, and for controlling movement on display device 870.
Another device, which may optionally be coupled to computer system 800, is a network device 885 for accessing other nodes of a distributed system via a network. The communication device 885 may include any of a number of commercially available networking peripheral devices such as those used for coupling to an Ethernet, token ring, Internet, or wide area network, personal area network, wireless network or other method of accessing other devices. The communication device 885 may further be a null-modem connection, or any other mechanism that provides connectivity between the computer system 800 and the outside world.
Note that any or all of the components of this system illustrated in FIG. 8 and associated hardware may be used in various embodiments of the present invention.
It will be appreciated by those of ordinary skill in the art that the particular machine that embodies the present invention may be configured in various ways according to the particular implementation. The control logic or software implementing the present invention can be stored in main memory 820, mass storage device 830, or other storage medium locally or remotely accessible to processor 810.
It will be apparent to those of ordinary skill in the art that the system, method, and process described herein can be implemented as software stored in main memory 820 or read only memory 850 and executed by processor 810. This control logic or software may also be resident on an article of manufacture comprising a computer readable medium having computer readable program code embodied therein, readable by the mass storage device 830, and causing the processor 810 to operate in accordance with the methods and teachings herein.
The present invention may also be embodied in a handheld or portable device containing a subset of the computer hardware components described above. For example, the handheld device may be configured to contain only the bus 840, the processor 810, and memory 850 and/or 820.
The handheld device may be configured to include a set of buttons or input signaling components with which a user may select from a set of available options. These could be considered input device #1 875 or input device #2 880. The handheld device may also be configured to include an output device 870 such as a liquid crystal display (LCD) or display element matrix for displaying information to a user of the handheld device. Conventional methods may be used to implement such a handheld device. The implementation of the present invention for such a device would be apparent to one of ordinary skill in the art given the disclosure of the present invention as provided herein.
The present invention may also be embodied in a special purpose appliance including a subset of the computer hardware components described above, such as a kiosk or a vehicle. For example, the appliance may include a processing unit 810, a data storage device 830, a bus 840, and memory 820, and no input/output mechanisms, or only rudimentary communications mechanisms, such as a small touch-screen that permits the user to communicate in a basic manner with the device. In general, the more special-purpose the device is, the fewer of the elements need be present for the device to function. In some devices, communications with the user may be through a touch-based screen, or similar mechanism. In one embodiment, the device may not provide any direct input/output signals, but may be configured and accessed through a website or other network-based connection through network device 885.
It will be appreciated by those of ordinary skill in the art that any configuration of the particular machine implemented as the computer system may be used according to the particular implementation. The control logic or software implementing the present invention can be stored on any machine-readable medium locally or remotely accessible to processor 810. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g. a computer). For example, a machine readable medium includes read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, or other storage media which may be used for temporary or permanent data storage. In one embodiment, the control logic may be implemented as transmittable data, such as electrical, optical, acoustical or other forms of propagated signals (e.g. carrier waves, infrared signals, digital signals, etc.).
In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
The present invention claims priority to U.S. Utility patent application Ser. No. 13/781,742, filed on Feb. 28, 2013, which issued on Oct. 4, 2016 as U.S. Pat. No. 9,459,597, and which claimed priority to U.S. Provisional Patent Application No. 61/607,530, filed on Mar. 6, 2012. Both applications are incorporated herein by reference.