This disclosure concerns wearable technology. More particularly, but not exclusively, the present disclosure concerns wearable devices that over time learn to identify the contexts in which they are being used.
Wearable technology may include any type of mobile electronic device that can be worn on the body, or attached to or embedded in the clothes and accessories of an individual, and that is currently available in the consumer marketplace. Processors and sensors associated with the wearable technology can display, process, or gather information. Such wearable technology has been used in a variety of areas, including monitoring health data of the user as well as other types of data and statistics. These types of devices may be readily available to the public and may be easily purchased by consumers. Examples of some wearable technology in the health arena include the FitBit Flex™, the Nike Fuel Band™, the Jawbone Up™, and the Apple Watch™ devices.
Wearable devices that predict the context in which they are used based on previously tracked context data, and methods associated with the same, are provided.
Various embodiments described herein are directed to a wearable device comprising: a sensor configured to obtain sensor data descriptive of a physiological parameter of a user; a memory configured to store a plurality of records correlating context data to physiological parameters obtained from sensor data; and a processor configured to: establish a baseline value based on a first set of sensor data obtained from the sensor, compare the baseline value to a second set of sensor data obtained from the sensor to determine whether the second set of sensor data is a change from the first set of sensor data, request user input from the user descriptive of a current context in response to determining that the second set of sensor data is a change from the first set of sensor data, create a record based on the second set of sensor data and the current context, and store the record in the memory as a member of the plurality of records.
Various embodiments described herein are directed to a method for training a wearable device to predict a context in which a wearable device is used, the method comprising: establishing a baseline value based on a first set of sensor data obtained from a sensor configured to obtain sensor data descriptive of a physiological parameter of a user, comparing the baseline value to a second set of sensor data obtained from the sensor to determine whether the second set of sensor data is a change from the first set of sensor data, requesting user input from the user descriptive of a current context in response to determining that the second set of sensor data is a change from the first set of sensor data, creating a record based on the second set of sensor data and the current context, and storing the record in a memory as a member of a plurality of records.
Various embodiments described herein are directed to a non-transitory computer-readable medium having a computer program stored thereon, the computer program executable by a processor to perform a method for predicting a context in which a wearable health device is used, the computer program comprising: instructions for establishing a baseline value based on a first set of sensor data obtained from a sensor configured to obtain sensor data descriptive of a physiological parameter of a user; instructions for comparing the baseline value to a second set of sensor data obtained from the sensor to determine whether the second set of sensor data is a change from the first set of sensor data; instructions for requesting user input from the user descriptive of a current context in response to determining that the second set of sensor data is a change from the first set of sensor data; instructions for creating a record based on the second set of sensor data and the current context; and instructions for storing the record in a memory as a member of a plurality of records.
The method, device, and non-transitory machine-readable medium described above provide an improved method of constructing a labeled set of data for use in future determinations of user context (e.g., activity or emotional state). By identifying deviations from baseline physiological parameters, opportunities for requesting user input to label training examples for inclusion in a training set can be easily identified. The training set can then be used to train a machine learning algorithm (or otherwise used) for determining user context in the future from similar readings without user input. This approach is an improvement over relying on the user to identify such opportunities, which may be unreliable, inconsistent, and overly burdensome on the user. The approach is particularly lightweight when compared to training and employing additional trained models (e.g., logistic regression) for the additional purpose of identifying such key moments, which may not be practicable in situations where processing power is limited (e.g., as is the case in many wearable devices).
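The baseline-and-deviation flow recited above can be sketched in a few lines. This is a minimal illustration rather than the claimed implementation: the function names, the relative-change threshold, and the use of a mean-valued baseline are all assumptions made for the example.

```python
import statistics

def detect_deviation(baseline, sample, threshold=0.15):
    """Return True when `sample` deviates from `baseline` by more than
    a relative `threshold`, signalling an opportunity to request a
    context label from the user. (Threshold value is illustrative.)"""
    if baseline == 0:
        return sample != 0
    return abs(sample - baseline) / abs(baseline) > threshold

def collect_training_record(first_window, second_sample, prompt_user):
    """Establish a baseline from a first set of sensor readings, compare
    a later sample against it, and create a labeled record only when a
    change is detected; otherwise return None (no prompt issued)."""
    baseline = statistics.fmean(first_window)   # baseline from first set
    if detect_deviation(baseline, second_sample):
        context = prompt_user()                 # e.g. "running", "nervous"
        return {"sensor_value": second_sample, "context": context}
    return None
```

For instance, a heart-rate window of `[70, 72, 71]` followed by a reading of `95` would trigger the prompt, while a follow-up reading of `71` would not.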
Various embodiments additionally include a communication interface configured to communicate over a communication network, wherein the communication interface is configured to detect the presence of one or more other wearable devices communicatively coupled to the network.
Various embodiments are described wherein the communication interface is further configured to receive additional context data from the one or more other wearable devices communicatively coupled to the network, and wherein the record includes the additional context data.
Various embodiments are described wherein the processor is further configured to: train a machine-learning model using the plurality of records; and apply the machine-learning model to a third set of sensor data obtained from the sensor at a later time to estimate a context at the later time.
Various embodiments are described wherein establishing the baseline value comprises obtaining a statistical mode from the first set of sensor data.
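Under one possible reading of this embodiment, the baseline is simply the most frequent value in the first set of sensor data; Python's standard library exposes this directly. The wrapper name below is hypothetical.

```python
import statistics

def baseline_from_mode(samples):
    """Establish a baseline as the statistical mode (most frequent
    value) of the first set of sensor data, per one embodiment."""
    return statistics.mode(samples)
```

A heart-rate window such as `[72, 72, 73, 72, 74]` would yield a baseline of `72`. In practice, continuous readings may need to be quantized (e.g., rounded) before a mode is meaningful.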
The description and drawings presented herein illustrate various principles. It will be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody these principles and are included within the scope of this disclosure. As used herein, the term "or" refers to a non-exclusive or (i.e., and/or), unless otherwise indicated (e.g., "or else" or "or in the alternative"). Additionally, the various embodiments described herein are not necessarily mutually exclusive and may be combined to produce additional embodiments that incorporate the principles described herein.
Although existing wearable devices are useful in some regards, their usefulness may be limited by their inability to predict the context in which they are being used. As a result, wearable devices miss out on opportunities to enhance wearable sensor data based on contextual information. Current wearable devices do not have the ability to track contextual input over time and use the collected data to later predict contexts without requiring user input. Given those shortcomings, it would be desirable for a smart wearable device to, over time, learn to identify the contexts in which it is used.
In view of the foregoing, wearable devices that predict the context in which they are used based on previously tracked user input, and methods associated with the same, are provided. The wearable device may be any kind of wearable device, such as one primarily intended to be worn around a user's neck (e.g., a necklace), arm (e.g., an armband), head (e.g., hat, helmet, headband, or headlamp), leg, chest, or waist (e.g., a belt or body band), foot (e.g., a shoe or sock), ankle or knee (e.g., a knee brace), or any other area of a user's body. The wearable device may also be a device that is primarily intended to be held in a user's hand or stored in a user's pocket. The wearable device may also be a device meant to be worn over the skin (e.g., a patch or garment) or under the skin (e.g., an implant).
The wearable devices (e.g., an Apple Watch™ device) may include, as implemented in various systems, methods, and non-transitory computer-readable storage media, a context learn mode that a user of the wearable device may activate. In some embodiments, the context learn mode may be active by default. The context learn mode may identify a context in which the wearable device records data associated with a user of the wearable device (e.g., health-related data). When in context learn mode, the wearable device may detect changes in wearable sensor data and prompt the user to input contextual information (e.g., an emotion felt by the user when the wearable device is being used). Using the context learn mode, the wearable device may track user input over time and then use the tracked input to later predict—without any additional then-current contextual information provided by the user—the context in which the wearable device is then being used. The wearable device may track user input and correlate input with sensor data.
In one exemplary scenario, for instance, a wearable device user may produce a change in health-related sensor data (e.g., a change in motion, heart rate, blood pressure, temperature, geolocation, or the like being tracked by the wearable device by way of one or more sensors). The change in sensor data may occur as the user experiences a particular emotion, participates in a particular activity, a combination of both events, or as a result of other environmental influences. Using the context learn mode, the user may provide contextual input that correlates the change in sensor data with the activity, emotion, or other influence. The wearable device may receive the contextual input from the user in a variety of ways. The wearable device may, for instance, receive a “tag” by which a user tags a particular change in sensor data with an activity, emotion, or other influence. The next time the wearable device detects a similar change in sensor data, the wearable device may predict that the user is experiencing the same activity, emotion, or other influence that the user experienced when the user previously inputted the contextual information (e.g., tagged the sensor data with the activity, emotion, or other influence). Over time, the more contextual information the wearable device receives from the user, the better the wearable device can predict contexts without the need for concurrent contextual information supplied by the user. By learning to independently and automatically identify the context in which it is being used, the wearable device may provide more contextually relevant information to the user. As a result, the user's perspective on the data may be enhanced. The user may, in effect, be equipped with a greater ability to understand the impact of certain activities, emotions, or other influences on his or her body.
The context learn mode may increase the usefulness of wearable device data by expanding the number of ways in which a user might use wearable device sensor data. As discussed above, the context learn mode may allow a user to infuse the data with relevant context. Equipped with the context learn mode, a wearable device may show a user's body output based on the user's activities or emotions. Over time, the context learn mode may also improve the wearable device experience by demanding less and less input from the user while at the same time providing more and more useful data based on predicted contexts. The predicted contexts may be based on input the user has already supplied in the past. For example, the user input may be used as a label for a training example such as, for example, a record including the present or recent sensor data (e.g., raw sensor data or features extracted from raw sensor data such as, for example, a heart rate extracted from raw optic data from an optical sensor or a location tag associated with GPS location data) along with the user-input label(s). Thereafter, a collection of such training examples (“training set”) may be used to train one or more machine-learning models (e.g., logistic regression or neural networks using gradient descent) to identify the context tags from present or recent sensor data at a time in the future.
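To illustrate how such a training set might be consumed, the sketch below builds a trivial nearest-neighbour predictor over labeled records. The disclosure contemplates models such as logistic regression or neural networks trained by gradient descent; nearest-neighbour is used here only as a compact stand-in, and the record layout, feature choices, and names are assumptions.

```python
import math

def train_nearest_context(records):
    """Build a trivial 1-nearest-neighbour 'model' from labeled training
    examples. Each record pairs a feature vector (extracted from sensor
    data) with a user-supplied context label."""
    def predict(features):
        best = min(records, key=lambda r: math.dist(r["features"], features))
        return best["context"]
    return predict

# Hypothetical training set: [heart rate, motion level] -> context label.
training_set = [
    {"features": [150.0, 9.5], "context": "running"},
    {"features": [88.0, 1.2], "context": "working"},
]
predict = train_nearest_context(training_set)
```

A later reading of `[145.0, 8.0]` would then be classified as "running" without any concurrent user input.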
While an example set of sensors 130-150 are illustrated, it will be apparent that various embodiments may utilize sets that include fewer, additional, or alternative sensors. The sensor devices 110 may be virtually any sensor capable of sensing data about a user, the user's environment, the user's context, the state of various electronics associated with the user, etc. In some embodiments, the sensor devices 110 may sense physiological parameters about the user. For example, the sensor devices 110 may include accelerometers, conductance sensors, optical sensors, temperature sensors, microphones, cameras, etc. These or other sensors may be useful for sensing, computing, estimating, or otherwise acquiring physiological parameters descriptive of the wearer such as, for example, steps taken, walking/running distance, standing hours, heart rate, respiratory rate, blood pressure, stress level, body temperature, calories burned, resting energy expenditure, active energy expenditure, height, weight, sleep metrics, etc.
Wearable device 100 may be operable to store in memory 115, and processor 110 of wearable device 100 may be operable to execute, a wearable device operating system (“OS”) 165 and a plurality of executable software modules. As used herein, the term “processor” will be understood to encompass various hardware devices such as, for example, microprocessors, field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and other hardware devices capable of performing the various functions described herein as being performed by the wearable 100 or other device. Further, the memory 115 may include various devices such as L1/L2/L3 cache, system memory, or storage devices and, while not shown, some of the components of the server (e.g., components 170-195) will also be understood to be stored among one or more similar memory devices. As used herein, the term “non-transitory machine-readable storage medium” will be understood to refer to both volatile memory (e.g., SRAM and DRAM) and non-volatile memory (e.g., flash, magnetic, and optical memory) devices, but to exclude mere transitory signals. While various embodiments may be described herein with respect to software or instructions “performing” various functions, it will be understood that such functions will actually be performed by hardware devices such as a processor executing the software or instructions in question. In some embodiments, such as embodiments utilizing one or more ASICs, various functions described herein may be hardwired into the hardware operation; in such embodiments, the software or instructions corresponding to such functionality may be omitted.
The OS may be, for example, a Microsoft Windows™, Google Android™, or Apple™ OS. The executable software modules may include a base software module 170, a context learn software module 175, an analysis software module 180, a predict context software module 185, and one or more graphical user interface (“GUI”) software modules 190 that, when executed, render and display one or more GUIs on a display of the wearable device (e.g., a settings GUI module, a learn mode GUI module, a context GUI module, a geo GUI module, an analysis GUI module, etc.). The wearable device may further be operable to store one or more databases 195 in memory (e.g., a settings database, a wearable database, etc.).
Learn mode GUI 300 may further include selectable elements 360 (e.g., a grid of selectable buttons or a free fillable form) directed to other types of context data, such as an activity. As shown in
Referring from left to right as shown in the example of
The second column displays an alternate context in which the user experienced an emotion 450 labeled "nervous" while participating in an activity 460 labeled "socializing." Graph 420 displays that, during the combination of the foregoing contextual data (i.e., "nervous" and "socializing"), the motion sensor data detected by the wearable device dropped significantly compared to when the context was "happy" and "running." Graph 420 further displays that the heart rate sensor data decreased and that the data detected by the blood pressure and temperature sensors increased. Using context learn mode, the wearable device may interpret the significant increase in blood pressure to define a prediction rule (e.g., when the user is nervous, the user's blood pressure will increase). The wearable device may further interpret the data to define a prediction rule whereby an increase in temperature data indicates that the user's body temperature will increase in the context of socializing while feeling nervous. When selectable element 470 is selected by the user, the wearable device may identify the geolocation at which the context (i.e., "nervous" and "socializing") occurred.
The third column displays a further example of possible contexts recorded and predicted by the wearable device. The column displays an emotion 450 labeled “excited” and an activity 460 labeled “walking.” Graph 420 indicates that, in the context of being “excited” while “walking,” the user's motion data increased compared to the context in which the user was socializing. Graph 420 further indicates that the user's motion data did not increase as much as when the user context was “running.” Graph 420 further displays that, in the exemplary set of data shown in
The fourth column displays yet another example of possible contexts recorded and predicted by the wearable device. The column displays an emotion 450 labeled "stressed" that corresponds to an activity 460 labeled "working." Graph 420 indicates that the user's motion data decreased compared to the contexts in which the user was running, walking, or even socializing. Graph 420 further displays that the user's heart rate data has not changed significantly, but that the user's blood pressure has increased significantly in the context of being "stressed" while "working" compared to other contexts (e.g., "happy" and "running"). When selectable element 470 is selected by the user, the wearable device may identify the geolocation at which the context (i.e., "stressed" and "working") occurred.
The fifth column displays an exemplary context in which a user may be "motivated" and "working." Graph 420 displays changes in sensor data during that particular context. In the example shown, graph 420 reveals that the user's motion (as determined by detected motion sensor data) is lower than when the user is running or walking. The graph reveals other trends and contextual information as well (e.g., that heart rate is normal, that blood pressure has notably decreased compared to when "working" and "stressed," etc.). When selectable element 470 is selected by the user, the wearable device may identify the geolocation at which the context (i.e., "motivated" and "working") occurred.
The sixth and final column displays an exemplary context in which the user is "relaxed" and "walking." In context learn mode, the wearable device may analyze the foregoing data and other relationships between data to determine one or more context prediction rules. The wearable device may then, in the future, apply the rules based on wearable device sensor data to predict the context in which the wearable device is being used. The context shown in the final column, for example, is displayed with an asterisk to indicate that the context is a predicted context as opposed to a learned context. By indicating that the context is a predicted context, the wearable device may inform the user that the context was generated by, for example, generating and applying a prediction rule by matching the user's current sensor data to a context previously input by the user when the wearable device was detecting the same or similar sensor data. The graphical representation of the sensor data may include a legend correlating the displayed line styles with different types of sensor data.
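One way to read the prediction rules discussed above is as sign patterns of sensor changes relative to baseline, keyed by the user-supplied context label (e.g., blood pressure up while "nervous"). The sketch below is illustrative only; the rule representation and all names are assumptions.

```python
def build_rule(record, baseline):
    """Derive a simple prediction rule from a labeled record: the sign
    (+1, 0, -1) of each sensor's change from its baseline, keyed by the
    user-supplied context label."""
    deltas = {k: (v > baseline[k]) - (v < baseline[k])
              for k, v in record["sensors"].items()}
    return {"pattern": deltas, "context": record["context"]}

def predict_context(rules, sensors, baseline):
    """Return the context of the first rule whose change pattern matches
    the current readings, or None (which could trigger learn mode)."""
    current = {k: (v > baseline[k]) - (v < baseline[k])
               for k, v in sensors.items()}
    for rule in rules:
        if rule["pattern"] == current:
            return rule["context"]
    return None
```

For example, a rule learned from a "nervous" record in which blood pressure and temperature both rose would later match any reading showing the same pair of increases.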
In some embodiments, context GUI 400 may include one or more selectable elements 470 and 480 that, when selected by a user of the wearable device, may cause the wearable device to render and display a different GUI (e.g., a geo GUI, an analysis GUI, etc.). Selectable elements 470 and 480 may allow a user to navigate between the various available GUIs. The graphical representation shown in
Processors 604, as illustrated in
Other sensors could be coupled to peripherals interface 606, such as a temperature sensor, a biometric sensor, or other sensing device to facilitate corresponding functionalities. Location processor 615 (e.g., a global positioning transceiver) can be coupled to peripherals interface 606 to allow for generation of geo-location data, thereby facilitating geo-positioning. An electronic magnetometer 616, such as an integrated circuit chip, could in turn be connected to peripherals interface 606 to provide data related to the direction of magnetic North, whereby the mobile device could enjoy compass or directional functionality. Camera subsystem 620 and an optical sensor 622, such as a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, can facilitate camera functions such as recording photographs and video clips.
Communication functionality can be facilitated through one or more communication subsystems 624, which may include one or more wireless communication subsystems. Wireless communication subsystems 624 can include 802.x or Bluetooth transceivers as well as optical transceivers such as infrared. A wired communication subsystem can include a port device such as a Universal Serial Bus (USB) port or some other wired port connection that can be used to establish a wired coupling to other computing devices such as network access devices, personal computers, printers, displays, or other processing devices capable of receiving or transmitting data. The specific design and implementation of communication subsystem 624 may depend on the communication network or medium over which the device is intended to operate. For example, a device may include a wireless communication subsystem designed to operate over a global system for mobile communications (GSM) network, a GPRS network, an enhanced data GSM environment (EDGE) network, 802.x communication networks, code division multiple access (CDMA) networks, or Bluetooth networks. Communication subsystem 624 may include hosting protocols such that the device may be configured as a base station for other wireless devices. Communication subsystems can also allow the device to synchronize with a host device using one or more protocols such as TCP/IP, HTTP, or UDP.
Audio subsystem 626 can be coupled to a speaker 628 and one or more microphones 630 to facilitate voice-enabled functions. These functions might include voice recognition, voice replication, or digital recording. Audio subsystem 626 may also support traditional telephony functions.
I/O subsystem 640 may include touch controller 642 or other input controller(s) 644. Touch controller 642 can be coupled to a touch surface 646. Touch surface 646 and touch controller 642 may detect contact and movement or break thereof using any of a number of touch sensitivity technologies, including but not limited to capacitive, resistive, infrared, or surface acoustic wave technologies. Other proximity sensor arrays or elements for determining one or more points of contact with touch surface 646 may likewise be utilized. In one implementation, touch surface 646 can display virtual or soft buttons and a virtual keyboard, which can be used as an input/output device by the user.
Other input controllers 644 can be coupled to other input/control devices 648 such as one or more buttons, rocker switches, thumb-wheels, infrared ports, USB ports, or a pointer device such as a stylus. The one or more buttons (not shown) can include an up/down button for volume control of speaker 628 or microphone 630. In some implementations, device 600 can include the functionality of an audio or video playback or recording device and may include a pin connector for tethering to other devices.
Memory interface 602 can be coupled to memory 650. Memory 650 can include high-speed random access memory or non-volatile memory such as magnetic disk storage devices, optical storage devices, or flash memory. Memory 650 can store operating system 652, such as Darwin, RTXC, LINUX, UNIX, OS X, ANDROID, WINDOWS, or an embedded operating system such as VxWorks. Operating system 652 may include instructions for handling basic system services and for performing hardware dependent tasks. In some implementations, operating system 652 can include a kernel.
Memory 650 may also store communication instructions 654 to facilitate communicating with other mobile computing devices or servers. Communication instructions 654 can also be used to select an operational mode or communication medium for use by the device based on a geographic location, which could be obtained by the GPS/Navigation instructions 668. Memory 650 may include graphical user interface instructions 656 to facilitate graphic user interface processing such as the generation of an interface; sensor processing instructions 658 to facilitate sensor-related processing and functions; phone instructions 660 to facilitate phone-related processes and functions; electronic messaging instructions 662 to facilitate electronic-messaging related processes and functions; web browsing instructions 664 to facilitate web browsing-related processes and functions; media processing instructions 666 to facilitate media processing-related processes and functions; GPS/Navigation instructions 668 to facilitate GPS and navigation-related processes and functions; camera instructions 670 to facilitate camera-related processes and functions; and instructions 672 for any other application that may be operating on or in conjunction with the mobile computing device. Memory 650 may also store other software instructions for facilitating other processes, features and applications, such as applications related to navigation, social networking, location-based services or map displays.
Each of the above identified instructions and applications can correspond to a set of instructions for performing one or more functions described above. These instructions need not be implemented as separate software programs, procedures, or modules. Memory 650 can include additional or fewer instructions. Furthermore, various functions of the mobile device may be implemented in hardware or in software, including in one or more signal processing or application specific integrated circuits.
Certain features may be implemented in a computer system that includes a back-end component, such as a data server, that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of the foregoing. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Some examples of communication networks include a LAN, a WAN, and the computers and networks forming the Internet. The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
One or more features or steps of the disclosed embodiments may be implemented using an API that can define one or more parameters that are passed between a calling application and other software code such as an operating system, library routine, or function that provides a service, provides data, or performs an operation or a computation. The API can be implemented as one or more calls in program code that send or receive one or more parameters through a parameter list or other structure based on a call convention defined in an API specification document. A parameter can be a constant, a key, a data structure, an object, an object class, a variable, a data type, a pointer, an array, a list, or another call. API calls and parameters can be implemented in any programming language. The programming language can define the vocabulary and calling convention that a programmer will employ to access functions supporting the API. In some implementations, an API call can report to an application the capabilities of a device running the application, such as input capability, output capability, processing capability, power capability, and communications capability.
Referring back to
At step 940, when the context learn software module determines that sensor data has changed (to any degree or enough to satisfy a predetermined threshold or range, depending on the embodiment), the context learn software module may request context input from the user by way of a learn mode GUI. The context learn software module may cause the processor of the wearable device to execute a learn mode GUI module that, when executed, may render and display the learn mode GUI on a display of the wearable device. In some alternative embodiments, such as those embodiments wherein the wearable device does not include a user interface for receiving such input, the wearable device may communicate with another device (e.g., an app on a user's mobile phone, tablet, or PC) to obtain the context input.
At step 945, the context learn software module may store any input received by way of the learn mode GUI with sensor data stored in the wearable database. The context learn software module may then continue polling the one or more sensors for further sensor data as described in the context of step 910. When the context learn software module determines that sensor data has not changed (either to any degree or not enough to satisfy a predetermined threshold or range, depending on the embodiment), the context learn software module may continue polling the one or more sensors for further sensor data. The processor may then proceed in a continuous monitoring loop in which the one or more sensors are polled for new sensor data, new sensor data is received, stored, and compared to baseline sensor data, and the data is evaluated for changes. When the context learn software module is executed in a loop as shown in
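The continuous monitoring loop of steps 910 through 945 might be approximated as follows, with an iterable of readings standing in for polled sensor data and a callback standing in for the learn mode GUI prompt. These names, and the threshold test, are assumptions made for illustration.

```python
def monitor(readings, baseline, prompt_user, threshold=0.15):
    """Monitoring-loop sketch: each new reading is compared against the
    baseline; a labeled record is stored when a change exceeds the
    relative threshold, otherwise polling simply continues."""
    records = []
    for value in readings:                      # poll sensor (step 910)
        changed = abs(value - baseline) > threshold * baseline
        if changed:                             # change detected (step 940)
            records.append({"value": value,
                            "context": prompt_user(value)})  # step 945
        # no change: continue polling
    return records
```

With a heart-rate baseline of 71, a stream of `[71, 72, 95, 70]` would produce a single prompt (at the reading of 95) and one stored record.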
When the predict context software module determines that received sensor data does not match a previous context, as shown at step 1140, the predict context software module may determine whether context learn mode is “on” (i.e., activated or enabled) as dictated by the settings stored in a settings database. At step 1145, when the predict context software module determines that context learn mode is not “on” (i.e., is “off”), the predict context software module may take no further action other than causing the processor to return to the operations of the executing base software module at step 1150. At step 1155, when the predict context software module determines that context learn mode is “on,” the predict context software module may cause the processor of the wearable device to execute the context learn software module stored in memory of the wearable device.
In embodiments in which the predict context software module determines, at step 1120, whether geolocation data associated with the received sensor data matches a previous context, the predict context software module may return to step 1140 and determine whether learn mode is “on” when the geolocation data associated with the received sensor data does not match a previous context. As noted above, when the predict context software module determines that context learn mode is not “on” (i.e., is “off”), the predict context software module may, at step 1145, take no further action other than causing the processor to return to the operations of the executing base software module. When the predict context software module determines that context learn mode is “on,” the predict context software module may, at step 1155, cause the processor of the wearable device to execute the context learn software module stored in memory of the wearable device.
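The branch logic of steps 1140 through 1155 (match a learned context when possible, otherwise fall back to learn mode only when it is enabled) can be sketched as follows; the data shapes, settings key, and function names are hypothetical.

```python
def handle_sensor_data(sensors, known_contexts, settings, run_learn_mode):
    """Dispatch sketch: return a matched learned context, invoke learn
    mode when no match is found and learn mode is enabled, or take no
    further action when learn mode is off."""
    for ctx in known_contexts:
        if ctx["sensors"] == sensors:
            return ctx["context"]               # matched a previous context
    if settings.get("context_learn_mode", False):
        return run_learn_mode(sensors)          # step 1155: learn mode on
    return None                                 # step 1145: learn mode off
```

In practice the equality test would be replaced by a tolerance or similarity comparison against stored sensor data, as the surrounding description suggests.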
Method 1300 may include, at step 1310, allowing a user to turn on context learn mode and to select one or more sensors for context learn mode. Allowing the user to do so may include receiving one or more setting selections by way of a settings GUI rendered and displayed on a display of the wearable device. At step 1315, method 1300 may include storing settings received by way of the settings GUI in a settings database stored in memory of the wearable device. Method 1300 may further include executing a base software module stored in memory of the wearable device at step 1320. Method 1300 may also include, at step 1325, executing a context learn software module stored in memory of the wearable device. Upon being executed by a processor of the wearable device, the context learn software module may detect sensor data that may indicate a new context associated with received sensor data.
Method 1300 may further include allowing the user to input context information to be associated with sensor data at step 1330. The context information and sensor data may be received by the wearable device and, at step 1335, may be stored in memory (e.g., in a wearable database). Allowing the user to input context information may include receiving context information from the user by way of a learn mode GUI rendered and displayed on a display of the wearable device. The learn mode GUI may be rendered and displayed during execution of a learn mode GUI module. The method may include storing received context information with sensor data in the wearable database.
At step 1340, method 1300 may further include executing a predict context software module stored in memory of the wearable device. Executing the predict context software module may include executing the module in a continuous loop so as to match received sensor data with learned contexts stored in the wearable database and generate predicted contexts. At step 1345, method 1300 may include storing predicted contexts in memory (e.g., in the wearable database).
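The matching performed by the predict context software module at step 1340 may be sketched as a simple nearest-match lookup over the learned records. The flat (reading, context) record format and the tolerance value are hypothetical simplifications of the wearable database:

```python
def predict_context(sample, learned, tolerance=5.0):
    """Match a received sensor reading against learned (reading, context)
    records and return the context whose stored reading is closest, or None
    when no record falls within the tolerance."""
    best = None
    best_dist = tolerance
    for reading, context in learned:
        dist = abs(sample - reading)
        if dist <= best_dist:
            best, best_dist = context, dist
    return best
```

Executed in a continuous loop, each non-None result would be stored in the wearable database as a predicted context per step 1345.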
Method 1300 may also include executing an analysis software module stored in memory of the wearable device at step 1350. Execution of the analysis software module may result in the rendering and display of one or more GUIs, such as a context GUI, a geo GUI, and an analysis GUI. Method 1300 may include, at step 1355, allowing a user to view various data by way of displaying the data through the various GUIs displayed on a display of the wearable device. Method 1300 may include displaying sensor data with overlaid context data in a context GUI. At step 1360, method 1300 may also include displaying geolocation data associated with one or more contexts represented by context data by way of a geo GUI displayed on the wearable device. Method 1300 may further include displaying one or more statistical elements at step 1365, such as a statistically significant correlation between sensor data and context. The statistical elements may be displayed by way of an analysis GUI rendered and displayed at a display of the wearable device.
The foregoing method steps have been described in one of many possible ordered sequences for illustrative purposes. Persons of ordinary skill in the art will readily appreciate that certain steps may be omitted or performed in a different order depending on the overall system architecture.
In various embodiments, the wearable device disclosed herein (e.g., wearable device 100 of
In various embodiments, the wearable device may provide access to learned contexts to a third-party system or network. The wearable device may analyze data and determine when a user is most effective at performing a particular activity (e.g., an exercise). The wearable device may also analyze data and determine when a user experiences the most positive emotions or feelings based on geolocation. The wearable device may analyze sensor data and contexts and display a map of a user's preferred or “happy” places. The displayed information may include times or locations at which sensor data indicates the user was exercising harder, running further or faster, experiencing happiness for longer periods of time, etc. As a result, the device may provide the user with an improved understanding of his or her own emotional and activity context data. In some embodiments, the wearable device may provide functionality by which a user may “pin” or otherwise designate learned contexts or set reminders for contexts to be achieved at specified time intervals (e.g., on a daily basis).
In various embodiments, the wearable device may provide functionality by which a user may enhance context data with additional data (e.g., hydration, caloric intake, injuries, or health condition information). Not only may the user input emotion and activity data, but the user may submit surveys or questionnaires that provide further detail (e.g., how much water the user drank in a given day, etc.).
In one or more embodiments, the wearable device may include a software module that automatically executes when a context is predicted. The software module may compare received data in real time to a user's previous context. For example, where a predicted context is “happy and walking,” the wearable device may show the current data and the data received the last time the user was “happy and walking.” As a result, the user may compare sensor data for two events in real time. In an example in which a user is exercising, the user may see the last time he or she was walking or running and have a sort of “ghost” of themselves to which they can compare their current activity. The user may determine whether they are walking faster or farther, or slower and a shorter distance, compared to a previous context.
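The “ghost” comparison described above may be sketched as pairing the current reading with the most recent stored reading for the same predicted context. The (context, reading) history format is a hypothetical stand-in for the wearable database:

```python
def ghost_comparison(current, history, context):
    """Return the current reading alongside the most recent stored reading
    for the same context (e.g., 'happy and walking'), so both can be shown
    side by side in real time; 'previous' is None when no prior record
    exists for that context."""
    previous = [reading for c, reading in history if c == context]
    last = previous[-1] if previous else None
    return {"current": current, "previous": last}
```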
In various embodiments, the wearable device may include functionality by which learned contexts automatically trigger third-party application functionality (e.g., updating a calendar, note, or journal entry). The functionality, which may be carried out by a software module executed by a processor of the wearable device, may permit a user to set up custom triggers that, when triggered by an event at the wearable device, automatically execute an application of a wearable device or a smartphone based on learned and predicted contexts.
The foregoing detailed description has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the technology to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The described embodiments were chosen in order to best explain the principles of the technology and its practical application to enable others skilled in the art to best utilize it in various embodiments and with various modifications as suited to the particular design considerations at issue (e.g., cost, availability, preference, etc.). The scope of the technology should be defined only by the claims appended to this description.
Number | Date | Country | Kind |
---|---|---|---|
15178057.4 | Jul 2015 | WO | international |
This application is a continuation of U.S. patent application Ser. No. 15/560,532, filed Sep. 22, 2017, which is the U.S. National Phase application under 35 U.S.C. § 371 of International Application No. PCT/EP2016/056614, filed on Mar. 24, 2016, which claims the benefit of U.S. Provisional Application 62/137,712, filed Mar. 24, 2015, and EP Application No. 15178057.4, filed on Jul. 23, 2015. These applications are hereby incorporated by reference herein.
Number | Date | Country | |
---|---|---|---|
62137712 | Mar 2015 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15560532 | Sep 2017 | US |
Child | 18089028 | US |