The present invention generally relates to interpretations of data, and in particular to learning algorithms associated with user data.
Machine interpretations of data differ from how a human may perceive that data. A machine learning algorithm may identify situations or places based on data such as SPS, accelerometer, WiFi, or other signals, but the labels that a machine assigns to the location or situation associated with these signals need to be modified to match labels meaningful to the user.
Embodiments of the present disclosure provide systems and methods for Dynamic Subsumption Inference. For example, in one embodiment, a method for Dynamic Subsumption Inference comprises: receiving a time signal associated with the current time; receiving a first input signal comprising data associated with a user at the current time; determining a first context based on the first input signal and the current time; comparing the first context to a database of contexts associated with the user; and determining a second context based in part on the comparison.
These illustrative embodiments are mentioned not to limit or define the disclosure, but to provide examples to aid understanding thereof. Illustrative embodiments are discussed in the Detailed Description, and further description of the disclosure is provided there. Advantages offered by various embodiments of this disclosure may be further understood by examining this specification.
FIG. 1 is a block diagram of a device for Dynamic Subsumption Inference according to one embodiment;
FIG. 2 is a block diagram of a system for Dynamic Subsumption Inference according to one embodiment;
FIG. 3a is a diagram of the difference between a user's perspective and a machine perspective of a location;
FIG. 3b is another diagram of the difference between a user's perspective and a machine perspective of a location;
FIG. 4a is a diagram of tags applied to signals based on location data;
FIG. 4b is a diagram of a subsumption determination based on tags;
FIG. 5a is a diagram of signals available at a specific location;
FIG. 5b is a diagram of a dynamic subsumption inference according to one embodiment;
FIG. 6a is another diagram of a dynamic subsumption inference according to one embodiment;
FIG. 6b is another diagram of a dynamic subsumption inference according to one embodiment; and
FIG. 7 is a flow chart of steps in a method for Dynamic Subsumption Inference according to one embodiment.
Embodiments of the present disclosure provide systems and methods for implementing a dynamically evolving model that can be refined when new information becomes available. This information may come in the form of data received in response to a user prompt or data received from a sensor, for example, a Satellite Positioning System (“SPS”) signal, a Wi-Fi signal, or a signal received from a motion sensor or other sensor.
When a device according to the present disclosure receives new information, it combines this new information with other available information about the user, for example, past sensor data or responses to past prompts. This enables a device according to the present disclosure to develop a model that grows and adapts as new data is received.
As described herein, data about the user is referred to as context. Context is any information that can be used to characterize the situation of an entity. In some embodiments, context may be associated with variables relevant to a user and a task the user is accomplishing. For example, in some embodiments, context may be associated with the user's location, features associated with the present location (e.g. environmental factors), an action the user is currently taking, a task the user is attempting to complete, the user's current status, or any other available information. In other embodiments, context may be associated with a specific mobile device or a specific application, for example, a social networking application, a map application, or a messaging application. Further examples of contexts include information associated with locations, times, activities, current sounds, and other environmental factors, such as temperature, humidity, or other relevant information.
In some embodiments, context or contexts may be determined from sensors, for example, physical sensors such as accelerometers, SPS and Wi-Fi receivers, light sensors, audio sensors, biometric sensors, or other available physical sensors known in the art. In other embodiments, context may be determined from information received from virtual sensors, for example, one or more sensors and logic associated with those sensors. In one embodiment, a virtual sensor may comprise a sensor that uses SPS, Wi-Fi, and time sensors in combination with programming to determine that the user is at home. In some embodiments, these sensors and the associated programming and processor may be part of a single module, referred to as a sensor.
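By way of non-limiting illustration, the following Python sketch shows one way such a virtual "home" sensor might combine SPS, Wi-Fi, and time readings. The class names, evidence weights, and threshold below are assumptions chosen for illustration only and are not a disclosed implementation.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class SensorReadings:
    sps_coords: tuple   # (latitude, longitude) reported by the SPS sensor
    wifi_ssid: str      # SSID reported by the Wi-Fi sensor
    local_time: time    # reading from the time sensor

class HomeVirtualSensor:
    """A virtual sensor: physical sensors plus logic packaged as one module."""

    def __init__(self, home_coords, home_ssid, radius_deg=0.001):
        self.home_coords = home_coords
        self.home_ssid = home_ssid
        self.radius_deg = radius_deg  # rough lat/long tolerance around home

    def is_home(self, r: SensorReadings) -> bool:
        near_home = (abs(r.sps_coords[0] - self.home_coords[0]) < self.radius_deg
                     and abs(r.sps_coords[1] - self.home_coords[1]) < self.radius_deg)
        on_home_wifi = r.wifi_ssid == self.home_ssid
        late_night = r.local_time >= time(22, 0) or r.local_time <= time(6, 0)
        # Weighted combination of SPS, Wi-Fi, and time evidence; the weights
        # and threshold are illustrative, not part of the disclosure.
        score = 0.5 * near_home + 0.4 * on_home_wifi + 0.2 * late_night
        return score >= 0.5

sensor = HomeVirtualSensor(home_coords=(37.33, -121.89), home_ssid="home-network")
reading = SensorReadings((37.3301, -121.8899), "home-network", time(23, 15))
print(sensor.is_home(reading))  # -> True
```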
Context information gathered from sensors may be used for a variety of purposes, for example, to modify operation of the mobile device or to provide relevant information to the user. But a machine, for example, a mobile device, does not perceive information in the same way that a human does. The present disclosure describes systems and methods for associating human-level annotations with machine-readable sensor data. For example, some embodiments of the present disclosure contemplate receiving sensor data, comparing that sensor data to a database of data associated with the user, and making determinations about the user based on the comparison. These determinations may then be used to modify operation of a device or direct specific useful information to the user.
In one embodiment of the present disclosure, sensor data associated with a user may be received by a mobile device (or context engine) and used to determine information about the user's current context. In some embodiments, this determination may be made based at least in part on data associated with the user's past context or contexts. For example, in one embodiment a sensor may detect data associated with the user's present location and transmit that data to a context engine. In such an embodiment, based on that sensor data, the context engine may determine that the user is at home. Similarly, in another embodiment, the context engine may receive additional data, for example, data associated with the user's current physical activity, and based on this data make additional determinations. In still other embodiments, the context engine may receive multiple sensor signals. In one such embodiment, one sensor signal may indicate that the user is currently at home and another may indicate that the user is sleeping. In such an embodiment, the context engine may compare this information to past user contexts and determine that the user is in bed. Or, in another embodiment, the context engine may receive further sensor data, for example, from a time sensor, indicating that the current time is 4 PM. In such an embodiment, the system may then compare this additional data to a database and determine that, based on past contexts, the user is not in bed, but rather is sleeping on a couch in the user's living room. Such a determination introduces the concept of subsumption, in which one larger context, e.g. home, comprises multiple smaller contexts, e.g. bedroom, kitchen, and living room.
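A minimal sketch of the subsumption refinement described above follows, assuming a simple table of past contexts. The schema, hour ranges, and room labels are hypothetical; a real context engine could use any learned model in their place.

```python
# Hypothetical database of past contexts: each entry maps an observed
# combination (place, activity, hour range) to a finer sub-context.
PAST_CONTEXTS = [
    {"place": "home", "activity": "sleeping", "hours": (21, 8),  "sub_context": "bedroom (in bed)"},
    {"place": "home", "activity": "sleeping", "hours": (12, 18), "sub_context": "living room (couch)"},
    {"place": "home", "activity": "cooking",  "hours": (0, 24),  "sub_context": "kitchen"},
]

def hour_in_range(hour, rng):
    lo, hi = rng
    # Ranges such as (21, 8) wrap around midnight.
    return lo <= hour < hi if lo < hi else hour >= lo or hour < hi

def refine_context(place, activity, hour):
    """Subsumption: a larger context (home) comprises smaller ones (rooms)."""
    for entry in PAST_CONTEXTS:
        if (entry["place"] == place and entry["activity"] == activity
                and hour_in_range(hour, entry["hours"])):
            return entry["sub_context"]
    return place  # no finer match; fall back to the larger context

# At 4 PM, "sleeping at home" resolves to the couch rather than the bed.
print(refine_context("home", "sleeping", 16))  # -> living room (couch)
print(refine_context("home", "sleeping", 23))  # -> bedroom (in bed)
```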
In further embodiments of the present disclosure, the context engine may determine the context based at least in part on user input. For example, a context engine may be configured to generate a user interface to receive user input related to context, and in response to signals generated by user input may determine additional information associated with a user's current context. For example, in one embodiment, the context engine may display prompts to which the user responds with answers regarding the user's current situation. In some embodiments, these prompts may comprise questions such as “are you at work,” “are you eating,” or “are you currently late,” the answers to which may be used to determine additional information about the user's current context.
In some embodiments of the present disclosure, the context engine may receive user input from other interfaces or applications, for example, social networking pages or posts, text messages, emails, calendar applications, document preparation software, or other applications configured to receive user input. For example, a context engine may be configured to access a user's stored calendar information. In one such embodiment, a user may have stored data associated with a dentist appointment at 8 AM. Based on sensor data, the context engine may determine that the current time is 8:10 AM and that the user is currently travelling 50 miles per hour. Based on this information, the context engine may determine that the user is currently late to the dentist appointment. The system may further reference additional sensor data, for example, SPS data showing the user's current location, to determine that the user is en route to the dentist appointment. In a further embodiment, the system may receive additional sensor data indicating that the user has arrived at the dentist.
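The calendar example above can be sketched as follows. This is a hedged illustration: the function names, the speed threshold, and the appointment structure are assumptions, not a disclosed API.

```python
from datetime import datetime

def check_appointment_status(appointment, now, speed_mph, heading_to_destination):
    """Combine calendar data with motion sensing to infer 'late but en route'.

    `appointment` is a dict with 'title', 'start' (datetime), and 'location';
    all names and the 25 mph threshold are illustrative assumptions.
    """
    if now <= appointment["start"]:
        return "on time"
    if speed_mph > 25 and heading_to_destination:
        # Past the start time but moving toward the location: late, en route.
        return f"late to {appointment['title']}, en route"
    return f"late to {appointment['title']}"

dentist = {"title": "dentist appointment",
           "start": datetime(2011, 7, 14, 8, 0),
           "location": (37.33, -121.89)}
now = datetime(2011, 7, 14, 8, 10)
print(check_appointment_status(dentist, now, speed_mph=50, heading_to_destination=True))
# -> late to dentist appointment, en route
```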
Further, in some embodiments of the present disclosure, a context engine may apply context information to a database associated with the user. In some embodiments, the context engine may be configured to access this database to make future determinations about the user's context. For example, in one embodiment, a database may store a context associated with walking to work at a specific time on each weekday. Based on this data, the context engine may determine that at that specific time the user is walking to work, or should be walking to work.
In an embodiment of the disclosure, a system may use context data for a multitude of purposes. For example, in one embodiment, a context engine may use context data to selectively apply reminders, change the operation of a mobile device, direct specific marketing, or perform some other function. For example, in the embodiment described above with regard to the user who is late to a dentist appointment, based on context information the context engine may determine that the user is late and generate a reminder to output to the user. Further, in one embodiment, the context engine may identify the user's current location and generate and output a display showing the user the shortest route to the dentist. Or, in another embodiment, the device may generate a prompt showing the user the dentist's phone number so the user can call to reschedule the appointment. Similarly, in some embodiments, if the user arrives at the dentist on time, the context engine may use context data to determine that the calendar reminder should be deactivated, so the user is not bothered.
In other embodiments, context information may be used for other purposes. In one embodiment, a context engine may receive sensor data indicating there is a high probability that a user is in a meeting, for example, based on SPS data showing that the user is in the office, the current time of day, and an entry in the user's calendar associated with “meeting.” Thus, a system according to the present disclosure may adjust the settings of the user's mobile device to set the ringer to silent, so the user is not disturbed during the meeting. Further, in some embodiments, this information may be used for direct marketing. For example, in some embodiments, mobile advertising may be directed to the user based on the user's current location and activity. In one such embodiment, a system of the present disclosure may determine that the user is likely hungry. In such an embodiment, the context engine may make this determination based on input data associated with the current time of day and past input regarding when the user normally eats. Based on this context, a context engine of the present disclosure may output web pages associated with restaurants to the user. In a further embodiment, the context engine may determine a context associated with the user's current location and output marketing related to nearby restaurants. In a further embodiment, the system may determine a context associated with restaurants for which the user has previously indicated a preference and provide the user with information associated with only those restaurants.
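One way such context-driven device actions might look in code is sketched below; the evidence weights, the 0.7 threshold, and the device dictionary are illustrative assumptions only, not a prescribed mechanism.

```python
def meeting_probability(at_office, work_hours, calendar_says_meeting):
    """Toy scoring of evidence; the weights are illustrative assumptions."""
    score = 0.0
    score += 0.4 if at_office else 0.0
    score += 0.2 if work_hours else 0.0
    score += 0.4 if calendar_says_meeting else 0.0
    return score

def apply_context_actions(device, context):
    # Change device operation based on the inferred context.
    if context.get("meeting_probability", 0) >= 0.7:
        device["ringer"] = "silent"  # do not disturb the user in a meeting
    if context.get("likely_hungry"):
        device["suggestions"] = ["nearby restaurants the user prefers"]
    return device

device = {"ringer": "normal"}
ctx = {"meeting_probability": meeting_probability(True, True, True)}
print(apply_context_actions(device, ctx))  # -> {'ringer': 'silent'}
```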
Referring now to the drawings, in which like numerals indicate like elements throughout the several figures,
The processor 120 is an intelligent hardware device, e.g., a central processing unit (CPU) such as those made by Intel® Corporation or AMD®, a microcontroller, an application specific integrated circuit (ASIC), etc. The memory 122 includes non-transitory storage media such as random access memory (RAM) and read-only memory (ROM). The memory 122 stores the software 124, which is computer-readable, computer-executable software code containing instructions that are configured to, when executed, cause the processor 120 to perform various functions described herein. Alternatively, the software 124 may not be directly executable by the processor 120 but is configured to cause the computer, e.g., when compiled and executed, to perform the functions.
The sensor(s) 130 may comprise any type of sensor known in the art, for example, SPS sensors, speed sensors, biometric sensors, temperature sensors, clocks, light sensors, volume sensors, Wi-Fi sensors, or wireless network sensors. In some embodiments, sensor(s) 130 may comprise virtual sensors, for example, one or more sensors and logic associated with those sensors. In some embodiments, these multiple sensors, e.g. a Wi-Fi sensor, an SPS sensor, and a motion sensor, and the logic associated with them may be packaged together as a single sensor 130.
The I/O devices 126 comprise any type of input/output device known in the art, for example, a display, a speaker, a keypad, a touch screen, or a touchpad. I/O devices 126 are configured to enable a user to interact with software 124 executed by processor 120. For example, I/O devices 126 may comprise a touch screen, which the user may use to update a calendar program running on processor 120.
Referring now to
The server 210 may include a processor 211 and a memory 212 coupled to the processor 211. In a particular embodiment, the memory 212 may store instructions 214 executable by the processor 211, where the instructions represent various logical modules, components, and applications. For example, the memory 212 may store computer-readable, computer-executable software code containing instructions that are configured to, when executed, cause the processor 211 to perform various functions described herein. The memory 212 may also store one or more security credentials of the server 210.
The mobile device 220 may include a processor 221 and a memory 222 coupled to the processor 221. In a particular embodiment, the memory 222 stores instructions 224 executable by the processor 221, where the instructions may represent various logical modules, components, and applications. For example, the memory 222 may store computer-readable, computer-executable software code containing instructions that are configured to, when executed, cause the processor 221 to perform various functions described herein. The memory 222 may also store one or more security credentials of the mobile device 220.
Turning now to
Turning now to
In the embodiment shown in
The embodiment shown in
Turning now to
Similarly, as shown in
Turning now to
Turning now to
Turning now to
Further, as shown in
Turning now to
Turning now to
Further, as shown in
The examples above are described with regard to locations and signals associated with location determination. But in other embodiments, a context engine may use subsumption to make other determinations based on other signals, for example, signals from I/O devices 126, sensor(s) 130, or data stored in memory 122. Thus, in other embodiments, a context engine may build a subsumption model for composites of any type of context and corresponding labels and models. For example, in some embodiments, a user-provided label may correspond to multiple machine-produced contexts and corresponding models. For example, in some embodiments, labels may be associated with states of mind (e.g. happy, sad, focused, etc.), activities (work, play, exercise, vacation, etc.), or needs of the user (e.g. thirsty, hungry, or searching for something at a store). For example, in some embodiments, a context may be associated with movement in a user's car. In such an embodiment, sensor signals associated with factors such as the user's speed or location, the time of day, entries stored in the user's calendar application, posts to social networks, or any other available data may be used by a context engine to make inferences regarding the user's context. For example, in one embodiment, if the context engine receives location signals indicating that the user is near several restaurants at the time of day the user normally eats, then the context engine may determine a context associated with the user searching for a restaurant. Similarly, in another embodiment, if the user instead remains in the office well past the time the user normally eats, then the device may determine a context associated with the user being hungry. In either of these embodiments, the device may further provide the user with menus from nearby restaurants.
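The label-to-model relationship described above might be represented as follows; the labels and context tuples are hypothetical stand-ins for whatever machine-produced models an embodiment actually learns.

```python
# A single user-provided label may subsume several machine-produced contexts.
LABEL_MODEL = {
    "work": [
        {"location": "office", "activity": "sitting"},
        {"location": "office", "activity": "meeting"},
        {"location": "home",   "activity": "typing"},  # working from home
    ],
}

def contexts_for_label(label):
    """Return every machine context subsumed by a user label."""
    return LABEL_MODEL.get(label, [])

def label_for_context(machine_context):
    """Inverse lookup: which user label subsumes this machine context?"""
    for label, contexts in LABEL_MODEL.items():
        if machine_context in contexts:
            return label
    return None

print(label_for_context({"location": "home", "activity": "typing"}))  # -> work
```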
In further embodiments, still other factors may be considered. For example, in some embodiments, the context engine may make determinations based on sensor signals associated with the user's activity. In some embodiments, the context engine may associate different activities with different locations within the same larger location. For example, in one embodiment, the context engine may determine a context associated with sitting in the living room, for example, while the user is watching TV. In such an embodiment, the context engine may determine another context associated with sitting while in the kitchen, and still another context associated with sleeping in the bedroom. In such an embodiment, even if the context engine cannot determine the user's precise location based on location signals, it may be able to narrow the location based on activity. For example, in the embodiment described above, the context engine may determine that if the user is sitting, the user is likely in one of two rooms.
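A short sketch of narrowing location by activity follows, under the assumption that the activity-to-room table below has been learned from the user's past contexts.

```python
# Hypothetical mapping, learned from past contexts, of which activities the
# user performs in which rooms of the larger "home" context.
ROOMS_BY_ACTIVITY = {
    "sitting":  {"living room", "kitchen"},
    "sleeping": {"bedroom"},
    "cooking":  {"kitchen"},
}

def narrow_location(candidate_rooms, activity):
    """Intersect location candidates with rooms consistent with the activity."""
    consistent = ROOMS_BY_ACTIVITY.get(activity, set(candidate_rooms))
    narrowed = set(candidate_rooms) & consistent
    return narrowed or set(candidate_rooms)  # keep all candidates if none match

# Location signals alone cannot resolve the room, but the activity can narrow it.
all_rooms = {"bedroom", "kitchen", "living room", "bathroom"}
print(narrow_location(all_rooms, "sitting"))  # -> kitchen and living room
```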
Turning now to FIG. 7, which shows a flow chart of steps in a method 700 for Dynamic Subsumption Inference according to one embodiment.
As shown in FIG. 7, the method 700 begins at stage 702, when a time signal associated with the current time is received.
The method 700 continues to stage 704, when a first input signal is received. In some embodiments, the first input signal may comprise data associated with a user at the current time. In some embodiments, the first input signal may be received from one of I/O devices 126, sensor(s) 130, or antenna(s) 128 shown in FIG. 1.
Next, at stage 706, a first context is determined. In some embodiments, the first context may be determined based on the first input signal and the current time. For example, in some embodiments, the first context may comprise a context associated with the user's current location, e.g., in a specific room. In one embodiment, this specific room may comprise a kitchen. Such a determination may be made based on the first input signal. For example, if the first input signal comprises a location signal, it may indicate the user is in the kitchen. In other embodiments, such a determination may be made based on a different type of input signal. For example, in some embodiments, the input signal may comprise an activity signal, which indicates the user is cooking. Further, in some embodiments, a microphone may detect sounds associated with the first room. For example, if the first room is a kitchen, these sounds may be associated with eating, a microwave running, coffee brewing, or some other sound associated with a kitchen. In still other embodiments, the context determination may be based on a light sensor. For example, in one embodiment, a user may be in a dark room. In such an embodiment, a light sensor may detect the low level of ambient light, and the context engine may determine a context associated with sleep or the bedroom.
The method continues to stage 708 when the first context is compared to a database of contexts associated with the user. In some embodiments, the database may be a database stored in memory 122 in FIG. 1.
The method continues to stage 710 when a second context is determined based in part on the comparison discussed in stage 708. In some embodiments, the second context comprises a subset of the first context. For example, in some embodiments, the first context may be based on a location signal, for example, a location signal associated with the user's house. In such an embodiment, the database may comprise data indicating that the user normally eats at the current time. In such an embodiment, the device may determine a second context associated with the kitchen.
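Stages 702 through 710 might be sketched as follows. The record schema and matching rule are assumptions made for illustration, and the same receive-determine-compare pattern repeats for stages 712 through 718.

```python
from datetime import datetime

def method_700_stages_702_710(time_signal, first_input_signal, context_db):
    """Sketch of stages 702-710 under an assumed, illustrative data schema."""
    # Stage 702/704: receive the time signal and a first input signal.
    now = time_signal
    # Stage 706: determine a first context from the input signal and the time.
    first_context = {"location": first_input_signal["location"], "hour": now.hour}
    # Stage 708: compare the first context to the user's context database.
    matches = [c for c in context_db
               if c["location"] == first_context["location"]
               and c["hours"][0] <= now.hour < c["hours"][1]]
    # Stage 710: determine a second context, a subset of the first, from the match.
    second_context = matches[0]["sub_context"] if matches else first_context["location"]
    return first_context, second_context

db = [{"location": "house", "hours": (11, 13), "sub_context": "kitchen (eating)"}]
signal = {"location": "house"}
print(method_700_stages_702_710(datetime(2011, 7, 14, 12, 15), signal, db))
# -> ({'location': 'house', 'hour': 12}, 'kitchen (eating)')
```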
The method continues at stage 712 when a second input signal is received. As with the first input signal, in some embodiments, the second input signal may comprise data associated with the user at the current time. In some embodiments, the second input signal may be received from one of I/O devices 126, sensor(s) 130, or antenna(s) 128 shown in FIG. 1.
Next, at stage 714, a third context is determined. In some embodiments, the third context may be based on the second input signal and the current time. For example, in some embodiments, the first context may be associated with the user's current location, e.g., at work. In such an embodiment, the database may indicate that the user normally has a meeting at the current time; thus, the second context may be associated with a meeting. Further, the second input signal may be associated with data input on the user's calendar application. In such an embodiment, the calendar application may indicate that the user has a conference call scheduled at the current time. Thus, the third context may be associated with a conference call at the office.
The method continues to stage 716 when the third context is compared to a database of contexts associated with the user. In some embodiments, the database may be a database stored in memory 122 in FIG. 1.
The method continues to stage 718 when a fourth context is determined based in part on the comparison discussed with regard to stage 716. For example, in one embodiment, the first context may be based on a location signal, for example, a location signal associated with the user's house. In such an embodiment, the database may comprise data indicating that the user normally eats at the current time. In such an embodiment, the device may determine a second context associated with the kitchen. In such an embodiment, the third input signal may be associated with a post on a social networking site that the user is hungry. Based on this, the fourth context may be associated with the user being hungry. Further, in such an embodiment, the database may comprise data associated with types of food the user likes. Thus, in such an embodiment, the device may provide the user with menus for nearby restaurants that serve the types of food the user normally likes.
The method continues to stage 720, when one or more of the contexts is stored in a database. In some embodiments, the database may be the same database discussed above with regard to stages 708 and 716. In other embodiments, the database may comprise a different database. In some embodiments, the database may be stored in memory 122 in FIG. 1.
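Stage 720 might be sketched as a simple append to the same store; the in-memory list below stands in for whatever storage (memory 122, a remote server, etc.) an embodiment actually uses, and that detail is assumed.

```python
def store_contexts(context_db, *contexts):
    """Stage 720 sketch: persist determined contexts for future comparisons."""
    for c in contexts:
        if c not in context_db:  # avoid duplicate entries
            context_db.append(c)
    return context_db

db = []
store_contexts(db, {"location": "house", "hour": 12, "label": "kitchen (eating)"})
print(db)  # the stored context is available for the next comparison at stage 708
```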
Embodiments of the present disclosure provide numerous advantages. For example, there is often no direct mapping between user input data and raw device data (e.g. data from sensors). Thus, embodiments of the present disclosure provide systems and methods for bridging the gap between device and human interpretations of data. Further embodiments provide additional benefits, such as more useful devices that can modify operations based on determinations about the user's activity. For example, embodiments of the present disclosure provide for devices that can perform tasks, such as searching for data or deactivating the ringer, before the user thinks to use the mobile device. Such embodiments could lead to wider adoption of mobile devices and greater user satisfaction.
The methods, systems, and devices discussed above are examples. Various configurations may omit, substitute, or add various procedures or components as appropriate. For instance, in alternative configurations, the methods may be performed in an order different from that described, and/or various stages may be added, omitted, and/or combined. Also, features described with respect to certain configurations may be combined in various other configurations. Different aspects and elements of the configurations may be combined in a similar manner. Also, technology evolves and, thus, many of the elements are examples and do not limit the scope of the disclosure or claims.
Specific details are given in the description to provide a thorough understanding of example configurations (including implementations). However, configurations may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the configurations. This description provides example configurations only, and does not limit the scope, applicability, or configurations of the claims. Rather, the preceding description of the configurations will provide those skilled in the art with an enabling description for implementing described techniques. Various changes may be made in the function and arrangement of elements without departing from the spirit or scope of the disclosure.
Also, configurations may be described as a process that is depicted as a flow diagram or block diagram. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure. Furthermore, examples of the methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a non-transitory computer-readable medium such as a storage medium. Processors may perform the described tasks.
Having described several example configurations, various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the disclosure. For example, the above elements may be components of a larger system, wherein other rules may take precedence over or otherwise modify the application of the disclosure. Also, a number of steps may be undertaken before, during, or after the above elements are considered. Accordingly, the above description does not bound the scope of the claims.
The use of “adapted to” or “configured to” herein is meant as open and inclusive language that does not foreclose devices adapted to or configured to perform additional tasks or steps. Additionally, the use of “based on” is meant to be open and inclusive, in that a process, step, calculation, or other action “based on” one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited. Headings, lists, and numbering included herein are for ease of explanation only and are not meant to be limiting.
Embodiments in accordance with aspects of the present subject matter can be implemented in digital electronic circuitry, in computer hardware, firmware, software, or in combinations of the preceding. In one embodiment, a computer may comprise a processor or processors. The processor comprises or has access to a computer-readable medium, such as a random access memory (RAM) coupled to the processor. The processor executes computer-executable program instructions stored in memory, such as executing one or more computer programs including a sensor sampling routine, selection routines, and other routines to perform the methods described above.
Such processors may comprise a microprocessor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), field programmable gate arrays (FPGAs), and state machines. Such processors may further comprise programmable electronic devices such as PLCs, programmable interrupt controllers (PICs), programmable logic devices (PLDs), programmable read-only memories (PROMs), electronically programmable read-only memories (EPROMs or EEPROMs), or other similar devices.
Such processors may comprise, or may be in communication with, media, for example tangible computer-readable media, that may store instructions that, when executed by the processor, can cause the processor to perform the steps described herein as carried out, or assisted, by a processor. Embodiments of computer-readable media may comprise, but are not limited to, all electronic, optical, magnetic, or other storage devices capable of providing a processor, such as the processor in a web server, with computer-readable instructions. Other examples of media comprise, but are not limited to, a floppy disk, CD-ROM, magnetic disk, memory chip, ROM, RAM, ASIC, configured processor, all optical media, all magnetic tape or other magnetic media, or any other medium from which a computer processor can read. Also, various other devices may include computer-readable media, such as a router, private or public network, or other transmission device. The processor, and the processing, described may be in one or more structures, and may be dispersed through one or more structures. The processor may comprise code for carrying out one or more of the methods (or parts of methods) described herein.
While the present subject matter has been described in detail with respect to specific embodiments thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing may readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, it should be understood that the present disclosure has been presented for purposes of example rather than limitation, and does not preclude inclusion of such modifications, variations and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art.
This patent application claims priority to U.S. Provisional Application No. 61/507,934, titled “Dynamic Subsumption Inference,” filed on Jul. 14, 2011, the entirety of which is hereby incorporated by reference.