Modern mobile devices (e.g., smartphones) may contain many applications. Certain applications may be designed to enable a user to interact or communicate with other users. For instance, in addition to providing the capabilities of placing phone calls and sending SMS text messages, modern mobile devices may contain communication applications for composing email messages, instant messages, and for initiating video calls and video conferences. As modern mobile devices become more integrated with modern day life, the number of communication applications and contacts stored on the mobile devices increases. It is not uncommon for modern mobile phones to have multiple applications that can be used to interact with other users. Having numerous applications for interacting with people may make the mobile device particularly useful to the user; however, it may be difficult and time consuming for the user to find and select a desired recipient among all of the available contacts.
Embodiments suggest recipients for communications and interactions that are most likely to be relevant to a user of a computing device based on a current context of the device. Examples of a computing device are a phone, a tablet, a laptop, or a desktop computer. An example system gathers knowledge of previous interactions and suggests predicted recipients based on this knowledge. The knowledge can be stored in a historical interactions database with information indicating when, where, and how the user has interacted with other users. The system can recommend recipients (e.g., people) along with a mechanism to interact with them given a specific context. The context can be described in terms of state variables indicating time, location, and an account identifier (e.g., an email account). The context can also be based on keywords (e.g., keywords from an email subject or calendar event title) and other factors, such as, for example, sets of recipients the user has interacted with in the past. Additional constraints can be imposed to help narrow suggestions to particular users, accounts, applications (e.g., communications applications), or mechanisms of interaction.
Embodiments can provide systems, methods, and apparatuses for suggesting one or more recipients to contact with a computing device based on an event and a context. Example events include receiving an input to initiate a search, receiving an input to access an email application, composition of an email, receiving an input to access a text messaging application, composition of a text message, receiving an input to access a calendar application, creation of a calendar entry, editing a calendar entry, initiation of a phone call, initiation of a video call, and initiation of a video conference. Example contexts include location and time. Embodiments can predict recipients of a communication based on the context of the device a user is using to initiate or compose the communication (e.g., at home, commuting to work, at work, etc.). For instance, based on information known about the communication (e.g., whether the communication is an email, instant message, text message, video conference, or calendar invitation), recipients for the communication are predicted. Recipients for communications are also predicted based on previous communications. For instance, users or contacts that a user has interacted with in the past via previous emails, messages, or calls can be suggested as recipients for a communication.
Embodiments can provide methods for suggesting recipients to contact by using contextual information to predict people a user may want to interact with at a certain time and place. Some embodiments determine a current context representing a current state as a user of a device (e.g., a mobile device) is composing or initiating a communication in an application. In embodiments, the current context can include contextual information such as a time, a location, a next calendar entry, a title or subject of the communication (e.g., email subject or calendar entry title), a previous recipient of a similar communication, and account information (e.g., a personal email account or a work email account). Some embodiments use the current context to predict who is the most likely recipient the user will add as a recipient of the communication.
Other embodiments are directed to systems, portable consumer devices, and computer readable media associated with methods described herein.
A better understanding of the nature and advantages of embodiments of the present invention may be gained with reference to the following detailed description and the accompanying drawings.
A “user interface” corresponds to any interface for a user to interact with a device. A user interface for an application allows a user to interact with the application. The user interface could be an interface of the application when the application is running. As another example, the user interface can be a system interface that provides a reduced set of applications for users to select from, thereby making it easier for a user to use the application.
A “home screen” is a screen of a device that appears when a device is first powered on. For a mobile device, a home screen often shows an array of icons corresponding to various applications that can be run on the device. Additional screens may be accessed to browse other applications not appearing on the home screen.
“Contextual information” refers collectively to any data that can be used to define the context of a device. The contextual information for a given context can include one or more contextual data, each corresponding to a different property of the device. The potential properties can belong to different categories, such as a time category, a location category, or an application category. The application category can include properties of a device application used to initiate a communication. For example, the contextual information can include a time, an account identifier, and/or a location corresponding to a user device being used to compose or initiate a communication such as an email message, a calendar event, an instant message, a text message, a voice call, a video call, or a video conference. The contextual information can be used as a feature of a model (or sub-model), and the data used to train the model can include different properties of the same category. A particular context can correspond to a particular combination of properties of the device, or just one property.
A “confidence level” corresponds to a probability that a model can make a correct prediction (i.e., at least one of the predicted recipient(s) was chosen after the event) based on the historical interactions data. An example of a confidence level is the percentage of events where a correct prediction was made (i.e., a predicted recipient was suggested and chosen as a recipient for a communication). Another example uses a cumulative distribution function (CDF) of a probability distribution (e.g., a beta distribution) generated from the number of correct and incorrect predictions. The CDF can be computed by integrating the probability distribution. In various implementations, the confidence level can be the amount of increase in the CDF past an input value (e.g., between 0 and 1, with 1 corresponding to a correct recipient prediction), or the input value past which a specified amount of the CDF lies. The probability of a recipient being selected can be required to be above a threshold probability, which is the corollary of the model having a confidence level above a confidence threshold. The confidence level can be inversely proportional to a measure of entropy, and thus an increase in confidence level from a parent model to a sub-model can correspond to a decrease in entropy.
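As a rough illustration of the beta-distribution approach described above, the sketch below computes a confidence level as the probability mass of a beta posterior lying past an input value, given counts of correct and incorrect predictions. The function names, the midpoint-rule integration, and the 0.5 threshold are assumptions made for illustration, not details of any particular implementation.

```python
from math import gamma

def beta_cdf(x, a, b, steps=10_000):
    """Numerically integrate the Beta(a, b) pdf from 0 to x (midpoint rule)."""
    norm = gamma(a + b) / (gamma(a) * gamma(b))
    dt = x / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        total += norm * t ** (a - 1) * (1 - t) ** (b - 1) * dt
    return total

def confidence_level(correct, incorrect, threshold=0.5):
    """Probability mass of the Beta(correct + 1, incorrect + 1) posterior
    lying past `threshold`, i.e., the increase in the CDF past the input value."""
    return 1.0 - beta_cdf(threshold, correct + 1, incorrect + 1)

high = confidence_level(18, 2)  # 18 correct vs. 2 incorrect predictions
low = confidence_level(5, 5)    # evenly split history
```

With a mostly correct history, nearly all of the posterior lies above 0.5, so the confidence level is high; an evenly split history yields a confidence level near 0.5, consistent with the entropy relationship noted above.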
Embodiments can provide a customized and personalized experience for suggesting recipients to a user of a computing device, thereby making use of the device for interacting and communicating with other users easier. Embodiments can provide methods for suggesting recipients to contact using people centric prediction. People centric prediction uses contextual information to predict people a user may want to interact with at a certain time and place. A user of a computing device can interact and communicate with a set of other users (e.g., contacts). Examples of a computing device are a phone, a tablet, a laptop, or a desktop computer. Interactions and communications with other users may occur after specific events. Example events include initiating a search, accessing a communication application, and composing or initiating a communication. Example communication applications include an email application, a calendar application, a video call application, an instant message application, a text message application, a video conference application, a web conferencing application, and a voice call application. Example communications include voice and data communications, such as, for example, an email message, a calendar invitation, a text message, an instant message, a video call, a voice call, and a video conference. When a communication application is used on a device, recipients of communications can be suggested based on comparing a current context of the device to historical information.
In embodiments, data from past, historical interactions is stored in tables of a database and used to suggest recipients of communications. The database can include contextual information for the past interactions such as, for example, timestamps, applications used for the interactions, account information (e.g., an account identifier for an email account), and location. The past interactions can be compared to a device's context to suggest recipients for a communication being initiated on the device. For example, the device's current context can be compared to historical interactions data to match the current context to similar past interactions with previous recipients.
Each data point (e.g., record) in the historical data can correspond to a particular context (e.g., corresponding to one or more properties of the device), with more and more data for a particular context being obtained over time. This historical data for a particular event can be used to suggest recipients to a user. As different users will have different historical data, embodiments can provide a personalized experience.
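The per-context data points described above can be sketched with a toy in-memory store; a real implementation would use the historical interactions database, and the contexts, addresses, and function names here are hypothetical:

```python
from collections import defaultdict

# Toy stand-in for the historical interactions database: each data point
# maps a context (a combination of device properties) to a count of how
# often a recipient was chosen in that context.
history = defaultdict(lambda: defaultdict(int))

def record_interaction(context, recipient):
    """Save one data point for a recipient chosen under the given context."""
    history[context][recipient] += 1

def suggest(context, top_n=3):
    """Rank recipients by how often they were chosen in a matching context."""
    counts = history.get(context, {})
    return sorted(counts, key=counts.get, reverse=True)[:top_n]

# More data accumulates for a context over time, personalizing suggestions.
for _ in range(4):
    record_interaction(("weekday_morning", "work_account"), "alice@example.com")
record_interaction(("weekday_morning", "work_account"), "bob@example.com")

ranked = suggest(("weekday_morning", "work_account"))
```

Because each user accumulates different counts per context, the same query yields different, personalized suggestions for different users.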
In some embodiments, recipients for prior, similar communications are used to suggest recipients for a communication being composed or initiated. For example, if a user selects a first recipient for a current communication, other recipients added to past communications with the selected first recipient can be used to predict additional recipients for the current communication. In an embodiment, recipients can be suggested based on contextual data indicating periodicity of interactions (e.g., communications repeatedly sent at a similar time of day or a same day of week). Recipients can also be suggested based on location information indicating that a user's current location is similar to a location the user was at when past communications were sent to certain contacts.
In embodiments, user-supplied information can be used to predict recipients. The user-supplied information can include an email subject, content of an email, a calendar entry title, an event time, and/or a user-selected recipient. Such user-supplied information can be compared to historical contextual information to predict recipients. For example, recipients of past communications having characteristics similar to the user-supplied information can be presented to the user as suggested recipients of a current communication. Some embodiments may use information the user has entered into the communication (e.g., if the user has included a subject or attachment) to determine that such information is relevant to the identification of potential recipients. For example, embodiments can parse a subject of an email message or calendar entry to identify one or more keywords that may be relevant to suggesting potential recipients if such information is available.
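The keyword comparison described above might be sketched as follows, scoring past recipients by keyword overlap between the current subject and past subjects; the past communications and scoring scheme are assumptions for illustration:

```python
import re

# Hypothetical records of past communications: subject text plus recipients.
past = [
    ("quarterly budget review", ["cfo@example.com", "alice@example.com"]),
    ("budget draft feedback",   ["alice@example.com"]),
    ("soccer practice carpool", ["pat@example.com"]),
]

def keywords(text):
    """Parse a subject or title into a set of lowercase keywords."""
    return set(re.findall(r"[a-z]+", text.lower()))

def suggest_by_subject(subject, top_n=3):
    """Score past recipients by keyword overlap with the current subject."""
    scores = {}
    subject_kw = keywords(subject)
    for past_subject, recipients in past:
        overlap = len(subject_kw & keywords(past_subject))
        for r in recipients:
            scores[r] = scores.get(r, 0) + overlap
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [r for r in ranked if scores[r] > 0][:top_n]

suggested = suggest_by_subject("2025 budget planning")
```

Here the keyword "budget" links the new subject to two past emails, so their recipients are surfaced while unrelated contacts are not.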
To provide an accurate personalized experience, various embodiments can start with a broad model that is simply trained without providing recipient suggestions or that suggests a same set of recipient(s) for a variety of contexts. With sufficient historical data, the broad model can be segmented into sub-models, e.g., as a group of people or interactions, with each sub-model corresponding to a different subset of the historical interactions data. Then, when an event does occur, a particular sub-model can be selected for providing one or more suggested recipients corresponding to a current context of the device. Various criteria can be used to determine when to generate a sub-model, e.g., a confidence level in the sub-model providing a correct prediction in the subset of historical data and an information gain (entropy decrease) in the distribution of the historical data relative to a parent model.
Accordingly, some embodiments can decide when and how to segment the historical data in the context of recipient recommendations. For example, after collecting a period of user interaction activity, embodiments can accumulate a list of possible segmentation candidates (e.g., location, day of week, time of day, etc.). Embodiments can also train a model on the entire dataset and compute a metric of the confidence in the joint distribution of the dataset and the model. A set of models can be trained, one for each of the segmented datasets (i.e., subsets), and the confidence of each of the data model distributions can then be measured. If the confidence of all data model distributions is admissible, embodiments can perform the segmentation (split) and then recursively examine the segmented spaces for additional segmentations.
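One way to read the split-when-admissible procedure above is the following sketch. It uses the fraction of interactions matching the most common recipient as a stand-in confidence metric, and splits on a candidate feature only when every resulting subset is more confident than the parent; the actual metric, gain threshold, and minimum subset size are unspecified in the text and assumed here for illustration.

```python
from collections import Counter

def confidence(records):
    """Fraction of interactions whose recipient matches the modal recipient —
    a stand-in for the confidence of a model trained on `records`."""
    if not records:
        return 0.0
    counts = Counter(r["recipient"] for r in records)
    return counts.most_common(1)[0][1] / len(records)

def segment(records, candidates, min_gain=0.05, min_size=5):
    """Recursively split the dataset on candidate features (location, day of
    week, ...) whenever every subset is more confident than the parent."""
    base = confidence(records)
    for feature in candidates:
        groups = {}
        for rec in records:
            groups.setdefault(rec[feature], []).append(rec)
        if len(groups) < 2 or any(len(g) < min_size for g in groups.values()):
            continue
        if all(confidence(g) >= base + min_gain for g in groups.values()):
            rest = [c for c in candidates if c != feature]
            return {key: segment(g, rest, min_gain, min_size)
                    for key, g in groups.items()}
    return {"model": "leaf", "confidence": round(base, 2)}

# A user whose pattern splits cleanly by location yields a deeper tree.
records = ([{"location": "work", "recipient": "alice"}] * 6 +
           [{"location": "home", "recipient": "mom"}] * 6)
tree = segment(records, ["location"])
```

A user with a noisier pattern would fail the admissibility check at the root and keep a single general model, matching the segmentation/generalization tradeoff described above.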
In this way, some embodiments can use inference to explore the tradeoff between segmentation and generalization, creating more complex models for users who have more distinct, complex patterns, and simple, general models for users who have noisier, simpler patterns. And, some embodiments can generate a tree of probabilistic models based on finding divergence distributions among potential candidate models.
Embodiments can suggest one or more recipients based upon an event, which may be limited to certain predetermined events (also called triggering events). Example triggering events can include initiating a search, composing an email message, creating a calendar entry, etc. For instance, a contact that a user has previously sent email to using a certain email account can be suggested when a user begins composing an email using the email account. In some embodiments, contextual information may be used in conjunction with the event to identify a recipient to suggest to a user. As an example, when a calendar entry (e.g., a calendar event, meeting, or appointment) is being created or modified, contextual information relating to location may be used. If the device is at an office location, for instance, recipient A having an office at that location may be suggested as an invitee for the calendar event. Alternatively, if the device is at home, recipient B associated with the home location (i.e., a family member or roommate) can be suggested as an invitee for the calendar entry. Accordingly, recipients that are predicted to be relevant under certain contexts may be suggested at an opportune time, thus enhancing user experience. As another example, when a calendar entry is open for creation or modification, contextual information relating to time may be used. If the scheduled start time for the calendar entry corresponds to a user's typical work hours, recipient A who is a coworker may be suggested as an invitee for the calendar event. Alternatively, if the calendar entry has a start time corresponding to an evening or weekend, recipient B who is a friend or family member can be suggested as an invitee for the calendar event.
At block 102, user input at a user device is detected. In some embodiments, it can be determined whether the input corresponds to a triggering event for suggesting recipients. In some implementations, a determination of one or more suggested recipient(s) is only made for certain predetermined events (e.g., triggering events). In other implementations, a determination of the one or more suggested recipient(s) can be made for a dynamic list of events, which can be updated based on historical user interactions made using the user device.
In some embodiments, a triggering event can be identified as sufficiently likely to correlate to an operation of a communications application of the device. A list of events that are triggering events can be stored on the device. The list can be a default list maintained as part of an operating system, and may or may not be configurable by a user.
A triggering event can be an event induced by a user and/or an external device. For instance, the triggering event can be when an input is received at the mobile device. Examples include receiving input to initiate a search, receiving input to access a communications application, and the like. In this example, each of these events can be classified as a different triggering event. As other examples, a triggering event can be a specific interaction of the user with the device. For example, the user can initiate a search on the device, access a communication application on the device, or begin composing a communication message on the device. Also, for example, the user can move the mobile device to a work location, where a location state of the device is a triggering event. Such a location state (or other states) can be determined based on sensors of the device.
At block 104, contextual information representing a current state of the device is determined. In an example, the contextual information can indicate an application executing on the device. For instance, the contextual information can indicate the state of a communication application being used to initiate a communication. The contextual information can also indicate the state of a search application being used to initiate a search. As an example, block 104 can include determining a time, account information (e.g., an email account identifier), and/or a location corresponding to a communication application being used on the device. Block 104 can also include determining a sub-state of the device, the sub-state being an application state of an executing application. For example, the application state can indicate the state of an email application being used to compose an email message, the state of a calendar application being used to create a calendar event, the state of an instant messaging client being used to initiate an instant message, the state of an application being used to compose a text message, or the state of an application being used to initiate a phone call, a video call, or a video conference.
Contextual information may specify one or more properties of the device for a certain context. The context may be the surrounding environment (type of context) of the device when the triggering event is received. For instance, contextual information may be the time of day that the event is detected. In another example, contextual information may be a certain location of the device when the event is detected. In yet another example, contextual information may be a certain day of year at the time the triggering event is detected. Such contextual information may provide more meaningful information about the context of the device such that a suggestion engine may accurately suggest a recipient that is likely to be selected by the user in that context. Accordingly, a prediction engine utilizing contextual information may more accurately suggest a recipient to a user than if no contextual information were utilized.
At block 106, historical data representing past interactions between the user and other users is retrieved. The retrieval is based on the contextual information. For example, block 106 can include retrieving data corresponding to past emails, messages, phone calls, calendar entries, video calls, and video conferences. The historical data can be retrieved from tables corresponding to previous communications made using the user device, where each of the tables corresponds to a different device sub-state of the user device and includes a plurality of contact measures of previous communications for different recipients. As an example, block 106 can include using one or more state variables to identify a first set of the tables that correspond to the one or more state variables, and then obtaining, from the first set of tables, contact measures for one or more potential recipients.
At block 108, the contextual information is compared to the historical data. Block 108 can include querying a first set of tables identified at block 106 to determine correlations between historical data in the set of tables and the contextual information.
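A minimal sketch of blocks 106 and 108, assuming hypothetical per-sub-state tables of contact measures keyed by state variables, might look like the following; the table keys and counts are invented for illustration:

```python
# Hypothetical per-sub-state tables: one table per device sub-state
# (application and account in use), each holding contact measures
# (counts of previous communications) per recipient.
tables = {
    ("email", "work_account"):     {"alice@example.com": 12, "bob@example.com": 3},
    ("email", "personal_account"): {"mom@example.com": 9},
    ("calendar", "work_account"):  {"team@example.com": 7},
}

def query(state_variables):
    """Identify the tables matching the current state variables and merge
    their contact measures into a single ranking of potential recipients."""
    merged = {}
    for key, measures in tables.items():
        if all(v in key for v in state_variables):
            for recipient, count in measures.items():
                merged[recipient] = merged.get(recipient, 0) + count
    return sorted(merged, key=merged.get, reverse=True)

ranked = query(["work_account"])
```

Adding more state variables (e.g., both the application and the account) narrows the query to fewer tables, correlating the suggestions more tightly with the current context.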
At block 110, one or more recipients for the communication are predicted. As shown in
Block 110 can use a prediction engine or prediction model to identify predicted recipients. For instance, a prediction model may be selected for a specific triggering event. The prediction model may use contextual information to identify the recipient(s), e.g., interactions or communications with different recipients may be more likely in different contexts. Some embodiments can suggest recipients only when there is a sufficient probability of the suggested recipients being selected by a user, e.g., as determined from historical interactions of the user with the recipients while using the device. Examples of historical interactions can include at least portions of communications that the user exchanged with the recipients using an email application, text messaging (e.g., SMS-based messaging), an instant messaging application, and a video conferencing application.
In some embodiments, a social element based on past communications and interactions can be used to predict recipients. For example, the historical data obtained at block 106 can be used to weigh recipients of previously sent emails. The social element reflects historical interactions data between a user of the user device and groups of past recipients of the user's communications (e.g., contacts and groups of contacts). Co-occurrences (i.e., communications sent to the same group of recipients) can be used to predict email recipients. For instance, a social element can weigh each recipient the user has sent email to, with higher weights being assigned to recipients who have been repeatedly included in a group of recipients (e.g., a CC list or a defined group of contacts). The recipients can be uniquely identified within the historical data by their respective email addresses. The social element weight can be higher for sent email messages as compared to received emails. The social element can also be weighted based on the email account (e.g., a personal account or a work account) that the user has used to send email messages. When the contextual information indicates an email is being composed, the social element can be used to identify co-occurrences of recipients for past email messages. These co-occurrences can be used in turn to predict recipients of the email being composed, particularly when the user selects a recipient that has been included in a group of recipients in the past email messages.
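The co-occurrence idea above can be sketched as follows: once the user picks a first recipient, other recipients are weighted by how often they appeared alongside that recipient in past sent messages. The sent-mail history and function names are hypothetical, and the weighting omits the account and sent-versus-received factors mentioned above.

```python
from itertools import combinations
from collections import Counter

# Hypothetical sent-mail history: the recipient lists of past emails,
# with recipients uniquely identified by email address.
sent = [
    ["alice@x.com", "bob@x.com", "carol@x.com"],
    ["alice@x.com", "bob@x.com"],
    ["alice@x.com", "dave@x.com"],
]

# Count how often each pair of recipients co-occurred on the same email.
cooccur = Counter()
for recipients in sent:
    for pair in combinations(sorted(recipients), 2):
        cooccur[pair] += 1

def complete_group(selected, top_n=2):
    """Weigh other recipients by co-occurrence with the selected recipient,
    so repeatedly grouped contacts receive higher weights."""
    weights = Counter()
    for (a, b), n in cooccur.items():
        if a == selected:
            weights[b] += n
        elif b == selected:
            weights[a] += n
    return [r for r, _ in weights.most_common(top_n)]

group = complete_group("alice@x.com")
```

Because "bob" appears with "alice" twice, he is suggested first when "alice" is selected, mirroring the higher weight assigned to repeatedly grouped recipients.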
At block 112, an indication of the one or more predicted recipients is provided to the user. Block 112 can include presenting a list of the one or more predicted recipients in a user interface of the user device, or within a communication application executing on the user device. In some embodiments, an action can be performed in association with an executing application at block 112. In an embodiment, the action may be the displaying of a user interface for a user to select one or more of the predicted recipients. The user interface may be provided in various ways, such as by displaying on a screen of the device, projecting onto a surface, or providing an audio interface.
In other embodiments, an application may run, and a user interface specific to the application may be provided to a user. Either of the user interfaces may be provided in response to identifying a recipient, e.g., a potential recipient of a communication. In other implementations, a user interface to interact with the application may be provided after a user is authenticated (e.g., by password or biometric), but such a user interface would be more specific than just a home screen, such as an interface with a list of suggested recipients.
Triggering events may be a predetermined set of events that trigger the identification of one or more recipients to provide to a user. The events may be detected using signals generated by device components. Details of how triggering events are detected are discussed in further detail below with reference to
A. Detecting Events
In embodiments, detection system 200 includes hardware and software components for detecting triggering events. As an example, detection system 200 may include a plurality of input devices, such as input devices 202. Input devices 202 may be any suitable device capable of generating a signal in response to an event. For instance, input devices 202 may include user interaction input devices 204 and locational input devices 206 that can detect events such as device connection events, user interaction events, and locational events. When an event is detected at an input device, the input device can send a signal indicating a particular event for further analysis.
In some embodiments, a collection of components can contribute to a single event. For example, a person can be detected to be commuting to or from work based on motion sensors, a GPS location device, and a timestamp.
1. User Interaction Events
User interaction input devices 204 may be utilized to detect user interaction events. User interaction events can occur when a user interacts with the device. In some embodiments, a user can provide inputs to a displayed user interface of an application via one of user interaction input devices 204. In other embodiments, the user interface may not be displayed, but still is accessible to a user, e.g., via a user shaking a device or providing some other type of gesture. Further, the interaction may not include a user interface, e.g., when a state engine uses values from sensors of the device.
Any suitable device component of a user interface can be used as a user interaction input device 204. Examples of suitable user interaction input devices are a button 208 (e.g., a home or power button), a touch screen 210, a camera 212, an accelerometer 214, a microphone 216, and a mouse 218. For instance, button 208 of a mobile device, such as a home button, a power button, volume button, and the like, may be a user interaction input device 204. In addition, a switch such as a silent mode switch may be a user interaction input device 204. Also, for example, microphone 216 of a mobile device, such as an integrated microphone configured to detect voice commands, may be a user interaction input device 204. Further for example, a mouse 218 or a pointing device such as a stylus may be a user interaction input device 204 used to provide user inputs to a communication application.
When the user interacts with the device, it may be determined that a user has provided user input to an application, and a corresponding triggering event may be generated. Such an event may depend on a current state of the device, e.g., where the device is located or when the event occurs. That is, a triggering event can be generated based in part on input from a user interaction input device 204 in conjunction with a location state of the device (e.g., at a work location) and a time context (e.g., a weekday morning). Such information can also be used when determining whether an event is a triggering event.
Touch screen 210 may allow a user to provide user input via a display screen. For instance, the user may swipe his or her finger across the display to generate a user input signal. When the user performs the action, a corresponding triggering event 228 may be detected.
Accelerometer 214 or other motion sensors may be passive components that detect movement of the mobile device, such as shaking and tilting (e.g., using a gyrometer or compass). Such movement of a mobile device may be detected by an event manager 230, which can determine the movement to be of a particular type. The event manager 230 can generate an event signal 232 corresponding to the particular type of a user interaction event in a given state of the device. The state of the device may be determined by a state engine, further details of which can be found in U.S. Patent Publication No. 2012/0310587 entitled “Activity Detection” and U.S. Patent Publication No. 2015/0050923 entitled “Determining Exit From A Vehicle,” the disclosures of which are incorporated by reference in their entireties.
One example is when a user is running, the accelerometer may sense the shaking and generate a signal to be provided to the event manager 230. The event manager 230 can analyze the accelerometer signal to determine a type of event. Once the type of event is determined, the event manager 230 can generate an event signal 232 corresponding to the type of event. The mobile device can move in such a manner as to indicate that the user is running. Thus, this particular user interaction can be identified as a running event. The event manager 230 can then generate and send the event signal 232 indicating that a running event has been detected.
2. Locational Events
Locational input devices 206 may be used to generate locational events. Locational events can be used in combination with user interaction events to trigger suggestion of a recipient. Any suitable positioning system may be used to generate locational events. For instance, a global positioning system (GPS) may be used to generate locational events. Locational events may be events corresponding to a specific geographic location. As an example, if the mobile device arrives at a specific location, the GPS component may generate an input signal corresponding to a locational event.
B. Determining Triggering Events
As further illustrated in
Detected event 222 may be received by an event manager 230. Event manager 230 can receive signals from input devices 202 and determine what type of event is detected. Depending on the type of event, event manager 230 may output signals (e.g., event signal 232) to different engines. The different engines may have a subscription with the event manager 230 to receive specific event signals 232 that are important for their functions. For instance, triggering event engine 224 may be subscribed to receive event signals 232 generated in response to detected events 222 from input devices 202. Event signals 232 may correspond to the type of event determined from the detected events 222.
Triggering event engine 224 may be configured to determine whether the detected event 222 is a triggering event. To make this determination, triggering event engine 224 may reference a designated triggering events database 226, which may be coupled to the triggering event engine 224. The designated triggering events database 226 may include a list of predetermined events that are designated as triggering events.
Triggering event engine 224 may compare the received detected event 222 with the list of predetermined events and output a triggering event 228 if the detected event 222 matches a predetermined event listed in the designated triggering events database 226. An example list of predetermined events may include any one or more of: (1) accessing a communications application, (2) initiating a search, (3) composing a communication, (4) sensing a certain type of movement of the device, and (5) arriving at a certain location. For (5), designated triggering events database 226 can include specifications of the certain location. For each of the predetermined events (1)-(5), a time or time range of the occurrence of the events can be included in designated triggering events database 226. For example, designated triggering events database 226 can store a designated triggering event corresponding to sensing arrival at a work location between 8-10 am.
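The matching performed by triggering event engine 224 against designated triggering events database 226, including the location and time-range constraints described above, might be sketched as follows; the entry format and field names are assumptions for illustration:

```python
# Hypothetical designated triggering events "database": each entry names an
# event type plus optional location and time-of-day (hour range) constraints.
DESIGNATED = [
    {"event": "access_communications_app"},
    {"event": "compose_communication"},
    {"event": "arrive_at_location", "location": "work", "hours": (8, 10)},
]

def is_triggering(detected):
    """Return True if a detected event matches a designated triggering event,
    honoring any location and time-range constraints in the entry."""
    for spec in DESIGNATED:
        if spec["event"] != detected["event"]:
            continue
        if "location" in spec and spec["location"] != detected.get("location"):
            continue
        if "hours" in spec:
            start, end = spec["hours"]
            if not (start <= detected.get("hour", -1) < end):
                continue
        return True
    return False

arriving_morning = is_triggering(
    {"event": "arrive_at_location", "location": "work", "hour": 9})
arriving_afternoon = is_triggering(
    {"event": "arrive_at_location", "location": "work", "hour": 14})
```

Arriving at the work location at 9 am matches the stored 8-10 am entry and triggers recipient suggestion; the same arrival at 2 pm does not.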
Once a triggering event is detected, one or more potential recipients may be identified based on the triggering event. In some embodiments, identification of the recipient(s) is not a pre-programmed action. Rather, identification of the recipient(s) can be a dynamic action that may change depending on additional information. For instance, identification of the suggested recipient(s) can be determined based on contextual information and/or people-centric historical interaction information, as well as based on other information.
Each time a particular triggering event occurs (e.g., accessing an email client, calendar application, instant messaging application, or video conferencing application on the device), the device can track which recipient(s) are selected as recipients of a communication in association with the event. In response to each occurrence of the particular event, the device can save a data point corresponding to a selected recipient, interaction with the recipient performed using the application, and the event. In various embodiments, the data points can be saved individually or aggregated, with a count being determined for the number of times a particular recipient is selected, which may include a count for a specific action. For example, counts indicating the number of emails sent to a recipient can be saved with information indicating which email account was used to send the emails, the times when the emails were sent, and the location of the device when the emails were sent. In this example, the data points can also indicate the number of times the recipient was the first addressee for an email, was included as part of an email group or distribution list, or was copied (e.g., carbon copied/CC or blind carbon copied/BCC). Thus, different counts are determined for different actions for the same selected recipient.
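Aggregated per-recipient, per-action counting of these data points could be sketched as below; the action labels are illustrative assumptions, not defined terms from the disclosure.

```python
from collections import defaultdict

# Aggregated counts keyed by (recipient, action); each occurrence of a
# tracked event adds one data point for the recipient the user selected.
interaction_counts = defaultdict(int)

def record_data_point(recipient, action):
    """Record one selection of `recipient` via `action`
    (hypothetical labels, e.g., 'email_to', 'email_cc')."""
    interaction_counts[(recipient, action)] += 1

def count_for(recipient, action):
    """Return how many times `recipient` was selected for `action`."""
    return interaction_counts[(recipient, action)]
```

Keeping separate counts per action preserves the distinction between, say, a recipient who is usually the first addressee and one who is usually only CC'd.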
Historical data that indicates previous user interactions and communications with recipients can be used as an input to a prediction model that predicts whether a given recipient should be suggested as a recipient of a future communication. For instance, historical data used for predicting/suggesting recipients can include records of past interactions (i.e., historical interactions) with other users. Examples of such historical interactions include voice calls, emails, calendar entries/events, instant messages, text messages (e.g., SMS-based messages), video conferences, and video calls. For example, historical interactions can include a call history indicating times, durations, and recipients (identified by phone numbers) corresponding to past voice calls. Also, for example, historical interactions can include an email history indicating times, periodicity (e.g., daily, weekly), and recipients (identified by email addresses) corresponding to past email messages.
Once a particular event is detected, a prediction model corresponding to the particular event can be selected. The prediction model would be determined using the historical interactions data corresponding to the particular event as input to a training procedure. However, the historical data might occur in many different contexts (i.e., different combinations of contextual information), with different recipients being selected in different contexts. Thus, in aggregate, the historical interactions data might not suggest a recipient that will clearly be selected by a user when a particular event occurs.
A prediction model can correspond to a particular event. Suggested recipients to contact can be determined using one or more properties of the computing device. For example, a particular sub-model can be generated from a subset of historical data corresponding to user interactions with other users after occurrences of the event. The subset of historical interactions data can be gathered when the device has the one or more properties (e.g., user interactions with selected recipients after an event of accessing an email application, with a property of a particular location and/or time of day). The prediction model can be composed of sub-models, each for different combinations of contextual data. The different combinations can have differing amounts of contextual data. The sub-models can be generated in a hierarchical tree, with the sub-models of more specific combinations being lower in the hierarchical tree. In some embodiments, a sub-model can be generated only if the sub-model can predict a recipient with greater accuracy than a model higher in the tree. In this manner, a more accurate prediction can be made for which recipient(s) the user will select. In some embodiments, the prediction model and sub-models may identify the top N recipients (e.g., a fixed number or a percentage) that are chosen by the user after the event when there is a particular combination of contextual data.
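One way such a hierarchical tree of sub-models might be navigated is sketched below, with deeper nodes requiring more specific combinations of contextual properties. The node structure and property names are assumptions made for illustration.

```python
class ModelNode:
    """One (sub-)model in the hierarchy; deeper nodes require more context."""
    def __init__(self, required_context, top_recipients):
        self.required_context = required_context   # property -> required value
        self.top_recipients = top_recipients       # top-N recipients for this node
        self.children = []

    def matches(self, context):
        """A node matches if every required property has the required value."""
        return all(context.get(k) == v for k, v in self.required_context.items())

def select_model(root, context):
    """Descend to the most specific sub-model whose required context matches."""
    node = root
    while True:
        child = next((c for c in node.children if c.matches(context)), None)
        if child is None:
            return node
        node = child
```

When no lower sub-model matches the current context, the model higher in the tree (ultimately the top-level event model) is used.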
A model, such as a neural network or regression model, can be trained to identify a particular recipient for a particular context, but this may be difficult when all of the corresponding historical data is used. Using all of the historical interactions data can result in overfitting the prediction model and lower accuracy. Embodiments of the present invention can segment the historical data into different input sets, each corresponding to a different context. Different sub-models can be trained on the different input sets of the historical data.
A. Different Models Based on Different Contextual Data
When a particular event occurs, the device could be in various contexts, e.g., in different locations (such as at work, at home, or at school), at different times, on different days of the week (such as business days or weekends), at different motion states of the device (such as running, walking, driving in a car, or stationary), or at different states of communication application usage (such as being used to compose an email or create a calendar entry). The contextual information can be retrieved in association with the detected event, e.g., retrieved after the event is detected. The contextual information can be used to help predict which predicted recipient might be selected as a recipient for a communication in connection with the detected event. Different locations can be determined using a GPS sensor and times can be determined based on when prior communications were transmitted. Different motion states can be determined using motion sensors, such as an accelerometer, a gyrometer, or a GPS sensor.
Embodiments can use the contextual information in various ways. In one example, a piece of the contextual data (e.g., corresponding to one property of the device) can be used to predict which recipient(s) are most likely to be selected. For example, a particular location of the device can be provided as an input to a prediction model.
In another example, some or all of the contextual data of the contextual information can be used in a segmentation process. A certain piece of contextual data can be used to segment the input historical data, such that a particular sub-model is determined only using historical data corresponding to the corresponding property of that piece of contextual data. For example, a particular location of the device would not be used as an input to the sub-model, but would be used to select which sub-model to use, and correspondingly which input data to use to generate the particular sub-model.
Thus, in some embodiments, certain contextual data can be used to identify which sub-model to use, and other contextual data can be used as input to the sub-model for predicting which recipient(s) the user might interact with. If a particular property (e.g., a particular location) does not correspond to a particular sub-model, that particular property can be used as a feature (input) to the sub-model that is used. If the particular property does correspond to a particular sub-model, the use of that property can become richer as the entire model is dedicated to the particular property.
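The segmentation step described above might look like the following sketch, where one contextual property partitions the historical records and a trivial frequency "sub-model" is trained on each partition. The record fields are illustrative.

```python
from collections import Counter, defaultdict

def segment_history(records, prop):
    """Partition historical interaction records by one contextual property,
    e.g., prop='location' yields one bucket of records per location."""
    buckets = defaultdict(list)
    for rec in records:
        buckets[rec[prop]].append(rec)
    return buckets

def train_count_model(records):
    """A trivial sub-model: frequency of each recipient the user selected."""
    return Counter(rec["recipient"] for rec in records)
```

The partitioning property (here, location) is not fed to the sub-model as an input; it only selects which sub-model, and therefore which training subset, is used.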
One drawback of dedicating a sub-model to a particular property (or combination of properties) is that there may not be a large amount of the historical data corresponding to that particular property. For example, the user may have only performed a particular event (e.g., composing an email) at a particular location a few times. This limited amount of data is also referred to as the data being sparse. Data can become even more sparse when combinations of properties are used, e.g., a particular location at a particular time. To address this drawback, embodiments can selectively determine when to generate a new sub-model as part of a segmentation process.
1. Default Model
When a device is first obtained (e.g., bought) by a user, a default model can be used. The default model could apply to a group of events (e.g., all events designated as triggering events). The default model can be seeded with aggregate data from other devices associated with the user. In some embodiments, the default model can simply pick the most popular recipient, regardless of the context, e.g., as not enough data is available for any one context. Once more data is collected, the default model can be discarded.
In some embodiments, the default model can have hardcoded logic that specifies predetermined recipient(s) to be suggested and actions to be performed. In this manner, a user can be probed for how the user responds (e.g., a negative response if a user does not select a suggested recipient), which can provide additional data beyond what simply tracking affirmative responses would provide. In parallel with such a default model, a prediction model can be running to compare its prediction against the actual result. The prediction model can then be refined in response to the actual result. When the prediction model has sufficient confidence, the switch can be made from the default model to the prediction model. Similarly, the performance of a sub-model can be tracked. When the sub-model has sufficient confidence, the sub-model can be used for the given context. In some embodiments, there are different sub-models for different events. For example, an email sub-model can be used for email contexts to predict email recipients, and a separate calendar sub-model can be used to predict invitees for calendar events. These different sub-models can use data from corresponding tables in a historical interactions database to identify recipients of previous emails and calendar invitations. In this example, an email table can have records for past email messages indicating recipients that a user previously added to the messages. Similarly, a calendar table in the historical interactions database can have records for past calendar events that indicate users that were invited to the calendar events.
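The handoff from a hardcoded default model to a trained prediction model might be sketched as follows; the confidence threshold value is an arbitrary assumption for illustration.

```python
class DefaultModel:
    """Hardcoded default logic: suggests predetermined recipients
    regardless of the current context."""
    def __init__(self, predetermined):
        self.predetermined = list(predetermined)

    def suggest(self, context):
        return self.predetermined

def active_model(default_model, prediction_model, confidence, threshold=0.7):
    """The prediction model runs in parallel against actual results;
    switch over once its measured confidence reaches the threshold."""
    return prediction_model if confidence >= threshold else default_model
```

Before the switch, every suggestion still yields training signal: a non-selection of a predetermined recipient is recorded as a negative data point for the prediction model.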
2. Initial Training
A prediction model (e.g., an event model) can undergo initial training using historical data collected so far, where the model does not provide recipient suggestions to a user. This training can be called initial training. The prediction model can be updated periodically (e.g., every day) as part of a background process, which may occur when the device is charging and not in use. The training may involve optimizing coefficients of the model so as to maximize the number of correct predictions as compared to the actual results in the historical interactions data. In another example, the training may include identifying the top N (e.g., a predetermined number or a predetermined percentage) recipients actually selected. After the training, the accuracy of the model can be measured to determine whether the model should be used to provide a suggested recipient (and potential corresponding type of interaction) to the user.
Once a model obtains sufficient accuracy (e.g., the top suggested recipient is being selected with sufficiently high accuracy), the model can be implemented. Such an occurrence may not happen for a top-level model (e.g., a first event model), but may occur when sub-models are tested for specific contexts. Accordingly, such an initial training can be performed similarly for a sub-model.
B. Segmenting as More Data is Obtained
When a user first begins using a device, there would be no historical interaction data for making predictions about the recipients the user might select to interact with after a particular event (e.g., after accessing an email application, a calendar application, or a video conference application). In an initial mode, historical interactions data can be obtained while no predicted recipients are suggested. As more historical data is obtained, determinations can be made about whether to segment the prediction model into sub-models. With even more historical interaction data, sub-models can be segmented into further sub-models. When limited historical data is available for user interactions with recipients, no recipients may be suggested or a more general model can be used.
A segmentation process can be performed by a user device (e.g., a mobile device, such as a smartphone), which can maintain data privacy. In other embodiments, a segmentation process can be performed by a server in communication with the user device. The segmentation process can be performed in parts over a period of time (e.g., over days or months), or all of the segmentation process can be performed together, and potentially redone periodically. The segmentation process can execute as a routine of a recipient prediction engine.
As more data is collected, a prediction model can be segmented into sub-models. At different points of collecting data, a segmentation may occur. As even more data is obtained, another segmentation may occur. Each segmentation can involve completely redoing the segmentation, which may or may not result in the same sub-models being created as in a previous segmentation.
As an example, a first event model can correspond to a particular event (e.g., sending an email to a particular contact, such as a co-worker). The event model can correspond to a top level of a prediction engine for the particular event. Initially, there can be just one model for the particular event, as minimal historical interaction data is available. At this point, the event model may just track the historical data for training purposes. The event model can make recipient predictions and compare those predictions to the actual results (e.g., whether the user selects a suggested recipient to interact with within a specified time after the event is detected). If no recipients have a probability greater than a threshold, no recipients may be suggested when the particular event occurs.
In some embodiments, the event model only uses data collected for the particular device. In other embodiments, the event model can be seeded with historical interactions data aggregated from other devices associated with the user. Such historical interactions data may allow the event model to provide some recipient recommendations, which can then allow additional data points to be obtained. For example, it can be tracked whether a user interacts with a suggested recipient via a particular application (e.g., email, audio call, video conference, instant message, or text message), which can provide more data points than just whether a user does select a recipient.
As more data is collected, a determination can be made periodically as to whether a segmentation should occur. Such a determination can be based on whether greater accuracy can be achieved via the segmentation. The accuracy can be measured as a level of probability that a prediction can be made, which is described in more detail below. For example, if a recipient can be predicted with a higher level of probability for a sub-model than with the event model, then a segmentation may be performed. One or more other criteria can also be used to determine whether a sub-model should be created as part of segmentation process. For example, a criterion can be that a sub-model must have a statistically significant amount of input historical data before the sub-model is implemented. The requirement of the amount of data can provide greater stability to the sub-model, and ultimately greater accuracy as a model trained on a small amount of data can be inaccurate.
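The segmentation decision described above might be captured by a predicate like the following; the minimum sample count is an illustrative stand-in for a statistical-significance test.

```python
def should_segment(parent_accuracy, sub_accuracy, sub_sample_count, min_samples=30):
    """Create a sub-model only when it is more accurate than its parent model
    and is backed by enough input historical data to be stable."""
    if sub_sample_count < min_samples:
        return False  # data too sparse; a model trained on it may be inaccurate
    return sub_accuracy > parent_accuracy
```

Both criteria guard against the sparsity drawback: a sub-model that merely ties the parent, or that is trained on a handful of records, is not created.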
C. System for Suggesting Recipients Based on Triggering Event
Prediction system 300 includes a prediction engine 302 for identifying the suggested recipient(s). Prediction engine 302 can receive a triggering event. The prediction engine 302 may use information gathered from the triggering event 328 to identify a suggested recipient 304. As shown, the prediction engine 302 may receive contextual data 306 in addition to the triggering event 328. The prediction engine 302 may use information gathered from both the triggering event 328 and the contextual data 306 to identify a suggested recipient 304. In embodiments, based on received contextual data 306, prediction engine 302 uses different models to identify suggested recipients for different types of communications. For example, prediction engine 302 can use an email sub-model when contextual data 306 indicates an email application is being accessed or an email is being composed. The email sub-model can use such contextual data 306 in conjunction with historical email data from a historical events database 316 to predict email recipients. The email sub-model can be used to predict recipients of an email, and a separate calendar sub-model can be used to predict invitees for calendar events. Prediction engine 302 may also determine an action to be performed, e.g., how and when a user interface may be provided for a user to interact with a suggested recipient.
1. Contextual Information
Contextual information may be gathered from contextual data 306. In embodiments, contextual information may be received at any time. For instance, contextual information may be received before and/or after the triggering event 328 is detected. Additionally, contextual information may be received during detection of the triggering event 328. Contextual information may specify one or more properties of the device for a certain context. The context may be the surrounding environment (type of context) of the device when the triggering event 328 is detected. For instance, contextual information may be the time of day the triggering event 328 is detected. In another example, contextual information may be a certain location of the device when the triggering event 328 is detected. In yet another example, contextual information may be a certain day of year at the time the triggering event 328 is detected. Such contextual information may provide more meaningful information about the context of the device such that the prediction engine 302 may accurately suggest a recipient that is likely to be selected as a recipient by the user in that context. Accordingly, prediction engine 302 utilizing contextual information may more accurately suggest a recipient to a user than if no contextual information were utilized.
Contextual data 306 may be generated by contextual sources 308. Contextual sources 308 may be components of a mobile device that provide data relating to the current situation of the mobile device. For instance, contextual sources 308 may be hardware devices and/or software code that operate as an internal digital clock 310, GPS device 312, and a calendar 314 for providing information related to time of day, location of the device, and day of year, respectively. Other contextual sources may be used.
Gathering the contextual data 306 for the prediction engine 302 may be performed in a power efficient manner. For example, continuously polling the GPS 312 to determine the location of the device may be excessively power intensive, which may decrease battery life. To avoid decreasing battery life, prediction engine 302 may determine the location of the device by requesting the device's location from sources other than the GPS 312. Another source for locational information may be an application that has recently polled the GPS 312 for the device's location. For instance, if application A is the most recent application that has polled the GPS 312 for the device's location, the prediction engine 302 may request and receive locational data from application A rather than separately polling the GPS 312.
2. Historical Information
In addition to the contextual sources 308, a historical events database 316 may also be utilized by the prediction engine 302 in certain embodiments. The historical events database 316 may include historical information of prior interactions between the user and the mobile device after a triggering event is detected.
The historical events database 316 may keep a record of the number of times a user interacted with a recipient following a certain triggering event. For instance, the database 316 may keep a record indicating that a user includes recipient B on an email or calendar invitation eight out of ten times when including recipient A. Accordingly, the prediction engine 302 may receive this information as historical data 318 to determine whether recipient B should be identified for the user when recipient A is selected for an email or calendar communication.
The historical events database 316 may also keep a record of the number of times a recipient was interacted with under different contexts when the triggering event is detected. For example, the database 316 may keep a record indicating that a user interacts with recipient A nine out of ten times after the user accesses a personal email account when the user is at home, and one out of ten times when the user is at a work location and using a work email account. Accordingly, the prediction engine 302 may receive this information as historical data 318 and determine that recipient A should be suggested when the user accesses the personal email account at home, but not at work when accessing a work email account. It is to be appreciated that although examples discussed herein refer to locations as "home" or "work," contextual data 306 representing "home" or "work" may be in the form of numerical coordinates such as, for example, geographic coordinates. One skilled in the art understands that time information relating to time of day, day of week, and day of year may be used instead of location in a similar manner to identify recipients.
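The context-conditioned records above (including the negative outcomes discussed next) could be tracked as selection rates keyed by context, as in this sketch; the key fields are illustrative.

```python
from collections import defaultdict

# (recipient, account, location) -> [times selected, total opportunities]
stats = defaultdict(lambda: [0, 0])

def record_outcome(recipient, account, location, selected):
    """Record one opportunity to interact with `recipient` in a given context,
    and whether the user actually selected that recipient."""
    entry = stats[(recipient, account, location)]
    entry[1] += 1
    if selected:
        entry[0] += 1

def selection_rate(recipient, account, location):
    """Fraction of opportunities in this context where the recipient was selected."""
    selected, total = stats[(recipient, account, location)]
    return selected / total if total else 0.0
```

A high rate in one context (personal account at home) and a low rate in another (work account at work) is exactly the signal that lets the prediction engine condition its suggestion on context.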
Historical events database 316 may also keep a record of how often, and under what circumstances, the user decides not to select the identified recipient as a recipient for a communication. For instance, the database 316 may keep a record indicating that the user did not select recipient B as a recipient for a phone call two out of ten times that person was suggested to the user when the user inserted a headset into the device at home. Accordingly, the prediction engine 302 may receive this information as historical data 318 to adjust the probability of suggesting recipient B when the user inserts the headset into the device at home.
As described above, one aspect of the present technology is the gathering and use of data available from various sources to improve prediction of users that a user may be interested in communicating with. The present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies or can be used to contact a specific person. Such personal information data can include location-based data, telephone numbers, email addresses, work addresses, home addresses, past interaction records, or any other identifying information.
The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to predict users that a user may want to communicate with at a certain time and place. Accordingly, use of such personal information data included in contextual information enables people centric prediction of people a user may want to interact with at a certain time and place.
The present disclosure further contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. For example, personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection should occur only after receiving the informed consent of the users. Additionally, such entities would take any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices.
Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of people centric prediction services, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services. In another example, users can select not to provide location information for recipient suggestion services. In yet another example, users can select to not provide precise location information, but permit the transfer of location zone information.
D. User Interfaces
Portions of the user interface 400 may be hidden in some situations. For instance, if a suggestion center, such as the suggestion center 320 in
In some embodiments, search results window 510 can be more than the example list of contacts shown in
E. Method
At block 602, the device detects an event at an input device. As shown, block 602 can include detecting a user input at a user device associated with the user. Examples of an input device are a touch screen, a microphone for providing voice commands, a camera, buttons, a mouse, a stylus, a keyboard, and the like. The event may be any action where the mobile device interacts with an external entity such as an external device or a user. The event can be of a type that recurs for the device. Thus, historical, statistical data can be obtained for different occurrences of the event. Models and sub-models can be trained using such historical data.
Block 602 can include receiving one or more properties of the user device. The one or more properties may be received by a recipient suggestion engine executing on the device. As mentioned herein, the properties can correspond to time, location, a motion state, calendar events, and the like. Such one or more properties can correspond to contextual data that defines a particular context of the device. The one or more properties can be measured at a time around the detection of the event, e.g., within some time period. The time period can include a time before and after the detection of the event, a time period just before the detection of the event, or just a time after the detection of the event.
At block 604, it is determined that the user input corresponds to a trigger for providing a suggested recipient via a suggestion engine. For instance, if a user input is received for composing an email in an email application, block 604 can determine that a suggested recipient for the email should be provided. Also, for example, if a user input for initiating a search in a search application is received, block 604 can include determining that predicted contacts are to be included in search results.
At block 606, one or more tables corresponding to previous communications made using the user device are populated. In the example of
At block 608, the one or more state variables are used to identify a first set of the one or more tables that corresponds to the one or more state variables. For example, if a location state variable indicates that the user device is at the user's home, block 608 can include identifying tables corresponding to previous communications associated with the user's home. That is, the tables corresponding to previous communications made using the user device can be filtered down to just tables corresponding to previous communications initiated or performed while the user was home. In this example, a set of tables for past emails composed, read, or edited while the user was home can be identified when the location state variable indicates the user device is at the user's home. Also, for example, if an account state variable indicates that the user is using a work email account, block 608 can include identifying a set of tables corresponding to past communications made using that work email account. Embodiments can use multiple state variables (e.g., a location state and an account state) to identify the first set of tables.
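The state-variable filtering of block 608 might be sketched as below. The table layout (a metadata dictionary per table) and field names are hypothetical.

```python
def select_tables(tables, state):
    """Return the subset of tables whose metadata matches every
    provided state variable (e.g., location and account)."""
    return [t for t in tables
            if all(t["meta"].get(k) == v for k, v in state.items())]
```

Passing multiple state variables (e.g., both a location state and an account state) simply tightens the filter, so the first set of tables reflects the full current context.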
At block 610, the first set of tables are queried to obtain the contact measures for one or more potential recipients. The contact measures can include, for example, contact measures for recipients of calendar invitations for previous calendar events made using the user device, times when previous email messages were made (i.e., composed or sent), email account identifiers associated with the previous email messages, other recipients copied on the email messages, and a number of email messages sent to each recipient. In one example, querying the first set of tables can be done to compute a total number of previous communications sent to each of one or more potential recipients. For instance, querying the first set of tables can include querying email tables to determine a cumulative number of email messages sent to and received from each of the potential recipients. Querying the first set of tables can also include querying calendar event tables to determine a total number of calendar invitations sent to and received from each of the potential recipients.
Block 610 can query tables based on individual interactions with a potential recipient as well as group interactions with groups of recipients. For instance, block 610 can predict a next email recipient where the context data from an email table indicates previous email interactions (e.g., a sent or received email message) between a user and a recipient. Block 610 can include ranking historical interactions that correspond to the current context. For example, a weight for an interaction can include a social element indicating a co-occurrence of multiple recipients. In this example, ranks for a user's historical interactions with a group of recipients can be increased based on whether the user previously interacted with the group in other past interactions. That is, a rankings boost can be given to members of a set of recipients based on that set of recipients having been included in common, past interactions (e.g., recipients repeatedly copied as a group on emails sent by the user). In this way, if a user previously selected two recipients for past emails and both recipients were copied on emails sent to a third recipient, that third recipient would get a ranking boost based on previously being included with the two recipients. But, if the user had another interaction where only one of these three recipients was included, that interaction would not get the same ranking boost.
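The co-occurrence ranking boost described above could be implemented along these lines; the weight value and score scale are illustrative assumptions.

```python
def boost_by_cooccurrence(scores, already_selected, cooccurrence_counts, weight=0.1):
    """Boost candidate scores for recipients that co-occurred with
    already-selected recipients in past group interactions
    (e.g., repeatedly copied together on the user's emails)."""
    boosted = dict(scores)
    for candidate in scores:
        for selected in already_selected:
            pair = frozenset((candidate, selected))
            boosted[candidate] += weight * cooccurrence_counts.get(pair, 0)
    return boosted
```

A candidate who was historically included alongside both already-selected recipients gets boosted twice, while an interaction involving only one of them contributes correspondingly less.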
At block 612, a total contact measure of previous communications and interactions is computed for each of the one or more potential recipients using the obtained contact measures. In one example, the total contact measure of previous communications is a cumulative total number of previous communications sent to each of the one or more potential recipients. In this example, a total number of emails, messages, calls, and calendar invitations sent to each of the potential recipients can be calculated by querying the one or more tables.
At block 614, the prediction engine is used to identify one or more predicted recipients to suggest to the user based on the total contact measures of the one or more potential recipients and using one or more criteria. In some embodiments, the criteria can include a minimum number of predicted recipients to suggest (e.g., the top N recipients), a percentage of predicted recipients to suggest (e.g., the top 25 percent), and/or a threshold confidence level for suggesting a predicted recipient. Block 614 can include using a hard cutoff as a criterion. For example, only recipients that had a minimum number of prior interactions with the user may be considered. In some embodiments, a social criterion is used to suggest recipients. For instance, predicted recipients may be suggested when they have co-occurrences with another suggested recipient that the user has previously interacted with. In some embodiments, recipients having similar characteristics to other predicted recipients can be suggested. For instance, recipients with the same email address domain and who are associated with the same location as a predicted recipient may be suggested as additional recipients for a communication.
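A sketch of how several of these criteria might be combined; the parameter values and the choice to take the union of the criteria are illustrative assumptions, not taken from the disclosure:

```python
def suggest(ranked, min_n=3, top_fraction=0.25, min_confidence=0.5, min_interactions=2):
    """Filter (recipient, confidence, interaction_count) tuples, sorted by
    descending confidence, using block 614-style criteria."""
    # Hard cutoff: only consider recipients with enough prior interactions.
    eligible = [r for r in ranked if r[2] >= min_interactions]
    # Keep the top N, the top fraction, or anyone above the confidence threshold.
    by_count = {r[0] for r in eligible[:min_n]}
    by_fraction = {r[0] for r in eligible[: max(1, int(len(eligible) * top_fraction))]}
    by_confidence = {r[0] for r in eligible if r[1] >= min_confidence}
    keep = by_count | by_fraction | by_confidence
    return [r[0] for r in eligible if r[0] in keep]

ranked = [("a", 0.9, 5), ("b", 0.6, 3), ("c", 0.4, 1), ("d", 0.3, 4)]
```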
Block 614 can include using a particular sub-model to identify one or more recipients to suggest to the user. The one or more recipients can have at least a threshold probability of at least one of the one or more recipients being interacted with by the user in association with the triggering event. A prediction matching one of the one or more recipients in the historical data can be identified as a correct prediction. The threshold probability can be measured in a variety of ways, and can use a probability distribution determined from the historical data, as is described in more detail below. For example, an average (mean) probability, a median probability, or a peak value of a probability distribution can be required to be above the threshold probability (e.g., above 0.5, equivalent to 50%). Thus, a confidence level can be an average value, median value, or a peak value of the probability distribution. Another example is to require that the area of the probability distribution above a specific value be greater than the threshold probability.
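The mean, median, and peak readings of a confidence level could be sketched like so, with the distribution represented as a simple list of observed probabilities (an assumed representation):

```python
def confidence(probabilities, threshold=0.5, mode="mean"):
    """Summarize a probability distribution from the historical data and
    test the summary value against the threshold probability."""
    if mode == "mean":
        level = sum(probabilities) / len(probabilities)
    elif mode == "median":
        s = sorted(probabilities)
        mid = len(s) // 2
        level = s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2
    else:  # "peak": the largest observed value
        level = max(probabilities)
    return level, level > threshold
```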
At block 616, the one or more predicted recipients are provided to the user. Block 616 can include providing a user interface to the user for communicating with the one or more recipients. For example, the device may display the identified recipients to the user via a list interface with which the user may interact to indicate whether the user would like to access the identified recipients. For instance, the user interface may include a touch-sensitive display that shows the user one or more of the identified recipients, and allows the user to communicate with one or more of the recipients identified by the device by interacting with the touch-sensitive display. The user interface can allow interactions on a display screen with fewer recipients than provided in a list of all of the user's recipients.
As an example, one or more suggested recipients can be provided in a recipients list on a search screen. The user can select a recipient and then select how the selected recipient is to be communicated with from the search screen, thereby making it easier for the user to interact with the selected recipient. For example, a user interface specific to a communication application (e.g., an email application) can appear after authenticating the user (e.g., via password or biometric).
In an email context, block 616 can provide the suggested recipients as potential recipients of an email message. In this context, the example email application interface of
F. Example Models
In some embodiments, a model can select the top N recipients for a given set (or subset) of data. Since the N recipients have been picked most often in the past, it can be predicted that future behavior will mirror past behavior. N can be a predetermined number (e.g., 1, 2, or 3) or a percentage of recipients, which may be the number of recipients that were actual past recipients associated with the event. Such a model can select the top N recipients for providing to the user. Further analysis can be performed, e.g., to determine a probability (confidence) level for each of the N recipients to determine whether to provide them to the user, and how to provide them to the user (e.g., an action), which may depend on the confidence level.
In an example where N equals three, the model would return the top three most selected recipients when the event occurs with contextual information corresponding to the particular sub-model.
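A minimal top-N sub-model along these lines, assuming the per-context history is available as a simple list of past recipient picks:

```python
from collections import Counter

def top_n_recipients(history, n=3):
    """Return the N recipients picked most often in the past for this context,
    on the premise that future behavior mirrors past behavior."""
    return [recipient for recipient, _ in Counter(history).most_common(n)]

history = ["alice", "bob", "alice", "carol", "alice", "bob", "dave"]
```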
In other embodiments, a sub-model can use a composite signal, where some contextual information is used in determining the predicted recipient(s), as opposed to just using the contextual information to select the sub-model. For example, a neural network or a logistic regression model can use a location (or other features) and build a linear weighted combination of those features to predict the recipient(s). Such more complex models may be more suitable when the amount of data for a sub-model is significantly large. Some embodiments could switch the type of sub-model used at a particular node (i.e., a particular combination of contextual data) once more data is obtained for that node.
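A logistic-regression-style composite signal could look like the following sketch, where contextual features are combined in a linear weighted sum before a sigmoid; the feature encoding and weights are purely illustrative:

```python
from math import exp

def recipient_probability(features, weights, bias=0.0):
    """Score a candidate recipient from contextual features (e.g., a location
    indicator and an hour-of-day feature) via a linear weighted combination,
    squashed to a probability with the logistic function."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + exp(-score))
```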
The accuracy of a model can be tested against the historical interactions data. For a given event, the historical interactions data can identify which recipient(s) the user interacted with in association with the event (e.g., just before or just after, such as within a minute of the event). For each event, the contextual data can be used to determine the particular model. Further, contextual data can be used as input features to the model.
In an example where the model (or sub-model) selects the top recipient, a number of historical data points where the top recipient actually was selected (i.e., sent a communication) can be determined as a correct count, and a number of historical data points where the top recipient was not selected can be determined as an incorrect count. In an embodiment where N is greater than one for a model that selects the top N recipients, the correct count can correspond to any historical data point where one of the top N recipients was chosen as a recipient of a communication.
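The correct/incorrect counting could be sketched as below, with each historical data point reduced to the set of recipients actually chosen (an assumed representation):

```python
def accuracy_counts(model_top_n, historical_points):
    """Count historical data points where one of the model's top-N recipients
    actually received a communication (correct) versus not (incorrect)."""
    correct = incorrect = 0
    for actual_recipients in historical_points:  # recipients chosen at each past event
        if any(r in actual_recipients for r in model_top_n):
            correct += 1
        else:
            incorrect += 1
    return correct, incorrect

points = [{"alice"}, {"bob", "carol"}, {"dave"}]
```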
Based on the first subset of historical interactions, the first sub-model can predict, with a first confidence level, at least one recipient of a first group of one or more recipients that the user will interact with in association with the event. The first sub-model can be created at least based on the first confidence level being greater than the initial confidence level by at least a threshold amount, which may be 0 or more. This threshold amount can correspond to a difference threshold. In some implementations, the first sub-model may not always be created when this criterion is satisfied, as further criteria may be used. If the confidence level is not greater than the initial confidence level, another property can be selected for testing. This comparison of the confidence levels can correspond to testing for information gain. The same process can be repeated for determining a second confidence level of a second sub-model (for a second property) of the first sub-model for predicting a second group of one or more recipients. A second subset of the historical interactions can be used for the second sub-model. A third property or more properties can be tested in a similar manner.
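The confidence-level comparison that gates sub-model creation might be expressed as follows; as noted above, real implementations may apply further criteria beyond this one test:

```python
def should_create_submodel(initial_confidence, sub_confidence, difference_threshold=0.0):
    """Create a sub-model only when its confidence exceeds the parent model's
    confidence by at least the difference threshold (a test for information gain)."""
    return sub_confidence > initial_confidence + difference_threshold
```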
G. Regeneration of Decision Tree
Embodiments can generate a decision tree of the models periodically, e.g., daily. The generation can use the historical interactions data available at that time. Thus, the decision tree can change from one generation to another. In some embodiments, the decision tree is built without knowledge of previous decision trees. In other embodiments, a new decision tree can be built from such previous knowledge, e.g., knowing what sub-models are likely or by starting from the previous decision tree.
In some embodiments, all contexts are attempted (or a predetermined list of contexts) to determine which sub-models provide the largest information gain. For example, if location provides the largest information gain for segmenting into sub-models, then sub-models for at least one specific location can be created. At each level of segmentation, contexts can be tested in such a greedy fashion to determine which contexts provide the highest increase in information gain.
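One standard way to score such segmentations is Shannon entropy; the sketch below greedily picks the context key with the largest information gain, a conventional decision-tree heuristic offered here as an assumed concrete realization:

```python
from collections import Counter
from math import log2

def entropy(recipients):
    """Shannon entropy of the recipient distribution (lower = more predictable)."""
    counts = Counter(recipients)
    total = len(recipients)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def best_context(interactions, context_keys):
    """Greedily pick the context key (e.g., 'location') whose segmentation
    yields the largest information gain over the unsegmented distribution."""
    base = entropy([i["recipient"] for i in interactions])
    gains = {}
    for key in context_keys:
        segments = {}
        for i in interactions:
            segments.setdefault(i[key], []).append(i["recipient"])
        weighted = sum(len(s) / len(interactions) * entropy(s) for s in segments.values())
        gains[key] = base - weighted
    return max(gains, key=gains.get)

interactions = [
    {"location": "work", "hour": 9, "recipient": "boss"},
    {"location": "work", "hour": 17, "recipient": "boss"},
    {"location": "home", "hour": 9, "recipient": "friend"},
    {"location": "home", "hour": 17, "recipient": "friend"},
]
```

In this toy history, segmenting by location perfectly separates the recipients, while segmenting by hour does not, so location wins the greedy selection.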
The prediction model can test not only for the selected recipient(s) but a specific action (e.g., copying the recipient(s) on an email based on previously added recipients). In some embodiments, once the probability of selecting a recipient is sufficiently accurate, a more aggressive action can be provided than just providing a suggested recipient. For example, when the recipient is provided, an email application can automatically launch with the recipient included as a recipient in a new email message.
When selecting a recipient is predicted with sufficient probability (e.g., confidence level is above a high threshold), then the prediction can begin testing actions. Thus, the testing is not just for prediction of a recipient, but testing whether a particular action can be predicted with sufficient accuracy. The different possible actions (including launching email, text messaging, calendar, or video conference applications) can be obtained from the historical interactions data.
Accordingly, embodiments can be more aggressive with the actions to be performed when there is greater confidence. The prediction model may provide a particular user interface for a communication application if a particular means of communication (e.g., email, text message, voice call, video call, and video conference) has a high probability of being used to communicate with a recipient. For example, an interface of an email application can be provided by the prediction model if there is a high probability that a user will send an email to a suggested recipient. Thus, in some embodiments, the higher the probability of use, the more aggressive the action that can be taken, such as automatically providing an interface for interacting with a recipient using a corresponding communication application (e.g., email, calendar, instant message, text message, voice call, or video conference), as opposed to just providing a suggested recipient.
For example, a base model can have a certain level of statistical significance (accuracy and confidence) such that the action is to suggest the recipient(s) on a search screen. As other examples, a higher level of statistical significance can cause the screen to light up (thereby bringing attention to the recipients), can cause just one recipient to be selected, or can cause a user interface (UI) of a particular application to be provided (e.g., a UI of an email application).
The action can depend on whether the model predicts just one recipient or a group of recipients. For example, if there is an opportunity to make three recipient recommendations instead of one, then that also would change the probability distribution, as a selection of any one of the three recipients would provide a correct prediction. A model that was not confident for a recommendation of one recipient might be sufficiently confident for three. Embodiments can add another recipient to the group of recipients being predicted by the model (e.g., a next most likely contact not already in the group), thereby making the model more confident. If the model is based on a prediction of more than one contact, the user interface provided would then provide for an interaction with more than one contact, which can affect the form of the UI. For example, all of the contacts can be provided in a list, and one contact would not automatically be selected. In an embodiment, a prediction can include a top contact, and if that contact is selected, other contacts can be copied on the message (i.e., due to co-occurrences in the historical interactions data). In the example of
There can also be multiple actions, and a suggestion for different actions. For example, there can be two playlists suggested at the gym as part of a sub-model (e.g., one application is identified, but two actions are identified in the model when the two actions have a similar likelihood of being selected). Together the two actions can have statistical significance, whereas separately they did not.
As an example, when the model for an event (e.g., composing an email) is first being trained, the model may not be confident enough to perform any actions. At an initial level of confidence, a recipient name, icon or other recipient identifier could be displayed. At a next higher level of confidence, a means of contacting the recipient may be displayed (e.g., an email address or phone number). At a further level of confidence, a user interface specific to a particular communication application can be displayed (e.g., controls for adding the predicted recipient as a recipient of a new email, instant message, phone call, or video call). These different levels could be for various values used to define a confidence level.
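The escalating tiers described above could be mapped out as in the sketch below; the specific numeric thresholds are illustrative assumptions, not values from the disclosure:

```python
def action_for_confidence(confidence):
    """Map an increasing confidence level to an increasingly aggressive
    action, mirroring the tiers described for a newly trained model."""
    if confidence < 0.3:
        return "none"                       # not confident enough to act
    if confidence < 0.6:
        return "show_recipient_identifier"  # name, icon, or other identifier
    if confidence < 0.9:
        return "show_contact_method"        # e.g., email address or phone number
    return "show_application_ui"            # controls for a new email, call, etc.
```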
Other example actions can include changing a song now playing or providing a notification (which may be front and center on the screen). The action can occur after unlocking the device, e.g., a UI specific to the application can display after unlocking. The actions can be defined using deep links to start specific functionality of an application.
Data flow diagram 700 shows that recipient suggestions 702 can be based on data from a variety of data sources 714. The data sources 714 can include information for past communications. The data sources can include events 716, searches 718, contacts found 720, recent activity 722, collection daemon 724, communication history 726, and contacts 728. Data sources 714 can be populated with data from the communications applications and interaction mechanisms 701. For example, calendar 704 can provide calendar event data to events 716. Similarly, phone 710 and video calling 712 can provide a call history for voice and video calls, respectively, to communication history 726. In the example of
Interaction module 810 also includes an XPC service 813 for communicating with an application 800. Application 800 can be one of the communications applications or interaction mechanisms shown in
At the top are UI elements. As shown, there is a search screen 910, a search screen 920, and a voice interface 925. These are ways that a user interface can be provided to a user. Other UI elements can also be used.
At the bottom are data sources for an application suggestion engine 940 and a recipient suggestion engine 950. An event manager 942 can detect events and provide information about the event to application suggestion engine 940. In some embodiments, event manager 942 can determine whether an event triggers a suggestion of an application. A list of predetermined events can be specified for triggering an application suggestion. Location unit 944 can provide a location of the user device. As examples, location unit 944 can include a GPS sensor and motion sensors. Location unit 944 can also include other applications that can store a last location of the user, which can be sent to application suggestion engine 940. Other contextual data can be provided from other context unit 946.
Application suggestion engine 940 can identify one or more applications, and a corresponding action. At a same level as application suggestion engine 940, a recipient suggestion engine 950 can provide suggested recipients for presenting to a user. An event manager 952 can detect events related to recipients and provide information about the event to recipient suggestion engine 950. In some embodiments, event manager 952 can determine whether an event triggers a suggestion of recipients. A list of predetermined events can be specified for triggering a recipient suggestion. Interactions history 954 can provide data for prior interactions and communications with other users. For example, interactions history 954 can be a data source for information recorded from previous emails exchanged between a user of the device and other users. Location unit 956 can provide a location of the user device. For example, location unit 956 can include a GPS sensor and motion sensors. Location unit 956 can also include other applications that can store a last location of the user device, which can be sent to recipient suggestion engine 950. Other contextual data can be provided from other context unit 958.
The suggested recipient(s) can be provided to a suggestion center 930, which can determine what to provide to a user. For example, suggestion center 930 can determine whether to provide a suggested application or a recipient. In other examples, both the application(s) and recipient(s) can be provided. Suggestion center 930 can determine a best manner for providing suggestions to a user. The different suggestions to a user may use different UI elements. In this manner, suggestion center 930 can control the suggestions to a user, so that different engines do not interrupt suggestions provided by other engines. In various embodiments, engines can push suggestions (recommendations) to suggestion center 930 or receive a request for suggestions from suggestion center 930. Suggestion center 930 can store a suggestion for a certain amount of time, and then determine to delete that suggestion if the suggestion has not been provided to a user, or the user has not interacted with the user interface.
Suggestion center 930 can also identify what other actions are happening with the user device, so as to inform the device when to send the suggestion. For example, if the user is using an application, suggested recipients may be provided, but a suggestion for an application may not be provided. Suggestion center 930 can determine when to send suggestions based on a variety of factors, e.g., a motion state of the device, whether a lock screen is on, whether authorized access has been provided, or whether the user is using the device at work, at home, etc.
It should be apparent that the architecture shown in
Wireless circuitry 1008 is used to send and receive information over a wireless link or network to one or more other devices and includes conventional circuitry such as an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, memory, etc. Wireless circuitry 1008 can use various protocols, e.g., as described herein.
Wireless circuitry 1008 is coupled to processing system 1004 via peripherals interface 1016. Interface 1016 can include conventional components for establishing and maintaining communication between peripherals and processing system 1004. Voice and data information received by wireless circuitry 1008 (e.g., in speech recognition or voice command applications) is sent to one or more processors 1018 via peripherals interface 1016. One or more processors 1018 are configurable to process various data formats for one or more application programs 1034 stored on medium 1002.
Peripherals interface 1016 couples the input and output peripherals of the device to processor 1018 and computer-readable medium 1002. One or more processors 1018 communicate with computer-readable medium 1002 via a controller 1020. Computer-readable medium 1002 can be any device or medium that can store code and/or data for use by one or more processors 1018. Medium 1002 can include a memory hierarchy, including cache, main memory and secondary memory.
Device 1000 also includes a power system 1042 for powering the various hardware components. Power system 1042 can include a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light emitting diode (LED)) and any other components typically associated with the generation, management and distribution of power in mobile devices.
In some embodiments, device 1000 includes a camera 1044. In some embodiments, device 1000 includes sensors 1046. Sensors can include accelerometers, compass, gyrometer, pressure sensors, audio sensors, light sensors, barometers, and the like. Sensors 1046 can be used to sense location aspects, such as auditory or light signatures of a location.
In some embodiments, device 1000 can include a GPS receiver, sometimes referred to as a GPS unit 1048. A mobile device can use a satellite navigation system, such as the Global Positioning System (GPS), to obtain position information, timing information, altitude, or other navigation information. During operation, the GPS unit can receive signals from GPS satellites orbiting the Earth. The GPS unit analyzes the signals to make a transit time and distance estimation. The GPS unit can determine the current position (current location) of the mobile device. Based on these estimations, the mobile device can determine a location fix, altitude, and/or current speed. A location fix can be geographical coordinates such as latitudinal and longitudinal information.
One or more processors 1018 run various software components stored in medium 1002 to perform various functions for device 1000. In some embodiments, the software components include an operating system 1022, a communication module (or set of instructions) 1024, a location module (or set of instructions) 1026, a recipient suggestion module (or set of instructions) 1028, and other applications (or set of instructions) 1034, such as a car locator app and a navigation app.
Operating system 1022 can be any suitable operating system, including iOS, Mac OS, Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks. The operating system can include various procedures, sets of instructions, software components, and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and can facilitate communication between various hardware and software components.
Communication module 1024 facilitates communication with other devices over one or more external ports 1036 or via wireless circuitry 1008 and includes various software components for handling data received from wireless circuitry 1008 and/or external port 1036. External port 1036 (e.g., USB, FireWire, Lightning connector, 60-pin connector, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.).
Location/motion module 1026 can assist in determining the current position (e.g., coordinates or other geographic location identifier) and motion of device 1000. Modern positioning systems include satellite-based positioning systems, such as the Global Positioning System (GPS), cellular network positioning based on “cell IDs,” and Wi-Fi positioning technology based on Wi-Fi networks. GPS relies on the visibility of multiple satellites, which may not be visible (or may have weak signals) indoors or in “urban canyons,” to determine a position estimate. In some embodiments, location/motion module 1026 receives data from GPS unit 1048 and analyzes the signals to determine the current position of the mobile device. In some embodiments, location/motion module 1026 can determine a current location using Wi-Fi or cellular location technology. For example, the location of the mobile device can be estimated using knowledge of nearby cell sites and/or Wi-Fi access points, along with knowledge of their locations. Information identifying the Wi-Fi or cellular transmitter is received at wireless circuitry 1008 and is passed to location/motion module 1026. In some embodiments, the location module receives the one or more transmitter IDs. In some embodiments, a sequence of transmitter IDs can be compared with a reference database (e.g., a Cell ID database or a Wi-Fi reference database) that maps or correlates the transmitter IDs to position coordinates of corresponding transmitters, and estimated position coordinates for device 1000 can be computed based on the position coordinates of the corresponding transmitters. Regardless of the specific location technology used, location/motion module 1026 receives information from which a location fix can be derived, interprets that information, and returns location information, such as geographic coordinates, latitude/longitude, or other location fix data.
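A highly simplified version of the reference-database lookup, which here just averages the known positions of the observed transmitters; real systems weight observations (e.g., by signal strength), and all IDs and coordinates below are made up:

```python
def estimate_position(transmitter_ids, reference_db):
    """Estimate device coordinates by averaging the known positions of
    observed Wi-Fi/cell transmitters found in the reference database."""
    known = [reference_db[t] for t in transmitter_ids if t in reference_db]
    if not known:
        return None  # no observed transmitter is in the database
    lat = sum(p[0] for p in known) / len(known)
    lon = sum(p[1] for p in known) / len(known)
    return (lat, lon)

# Hypothetical reference database mapping transmitter IDs to (lat, lon).
reference_db = {"cell:1234": (37.0, -122.0), "wifi:ab:cd": (37.2, -122.2)}
```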
Recipient suggestion module 1028 can include various sub-modules or systems, e.g., as described above with reference to
The one or more applications 1034 on the mobile device can include any applications installed on the device 1000, including without limitation, a browser, an address book, a contact list, email, instant messaging, video conferencing, video calling, word processing, keyboard emulation, widgets, JAVA-enabled applications, encryption, digital rights management, voice recognition, voice replication, a music player (which plays back recorded music stored in one or more files, such as MP3 or AAC files), etc.
There may be other modules or sets of instructions (not shown), such as a graphics module, a timer module, etc. For example, the graphics module can include various conventional software components for rendering, animating, and displaying graphical objects (including without limitation text, web pages, icons, digital images, animations, and the like) on a display surface. In another example, a timer module can be a software timer. The timer module can also be implemented in hardware. The timer module can maintain various timers for any number of events.
The I/O subsystem 1006 can be coupled to a display system (not shown), which can be a touch-sensitive display. The display displays visual output to the user in a GUI. The visual output can include text, graphics, video, and any combination thereof. Some or all of the visual output can correspond to user-interface objects. A display can use LED (light emitting diode), LCD (liquid crystal display) technology, or LPD (light emitting polymer display) technology, although other display technologies can be used in other embodiments.
In some embodiments, I/O subsystem 1006 can include a display and user input devices such as a keyboard, mouse, and/or track pad. In some embodiments, I/O subsystem 1006 can include a touch-sensitive display. A touch-sensitive display can also accept input from the user based on haptic and/or tactile contact. In some embodiments, a touch-sensitive display forms a touch-sensitive surface that accepts user input. The touch-sensitive display/surface (along with any associated modules and/or sets of instructions in medium 1002) detects contact (and any movement or release of the contact) on the touch-sensitive display and converts the detected contact into interaction with user-interface objects, such as one or more soft keys, that are displayed on the touch screen when the contact occurs. In some embodiments, a point of contact between the touch-sensitive display and the user corresponds to one or more digits of the user. The user can make contact with the touch-sensitive display using any suitable object or appendage, such as a stylus, pen, finger, and so forth. A touch-sensitive display surface can detect contact and any movement or release thereof using any suitable touch sensitivity technologies, including capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with the touch-sensitive display.
Further, the I/O subsystem can be coupled to one or more other physical control devices (not shown), such as pushbuttons, keys, switches, rocker buttons, dials, slider switches, sticks, LEDs, etc., for controlling or performing various functions, such as power control, speaker volume control, ring tone loudness, keyboard input, scrolling, hold, menu, screen lock, clearing and ending communications and the like. In some embodiments, in addition to the touch screen, device 1000 can include a touchpad (not shown) for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad can be a touch-sensitive surface that is separate from the touch-sensitive display or an extension of the touch-sensitive surface formed by the touch-sensitive display.
In some embodiments, some or all of the operations described herein can be performed using an application executing on the user's device. Circuits, logic modules, processors, and/or other components may be configured to perform various operations described herein. Those skilled in the art will appreciate that, depending on implementation, such configuration can be accomplished through design, setup, interconnection, and/or programming of the particular components and that, again depending on implementation, a configured component might or might not be reconfigurable for a different operation. For example, a programmable processor can be configured by providing suitable executable code; a dedicated logic circuit can be configured by suitably connecting logic gates and other circuit elements; and so on.
Any of the software components or functions described in this application may be implemented as software code to be executed by a processor using any suitable computer language such as, for example, Java, C, C++, C#, Objective-C, Swift, or scripting language such as Perl or Python using, for example, conventional or object-oriented techniques. The software code may be stored as a plurality of instructions or commands on a computer readable medium for storage and/or transmission. A suitable non-transitory computer readable medium can include random access memory (RAM), a read only memory (ROM), a magnetic medium such as a hard-drive or a floppy disk, or an optical medium such as a compact disk (CD) or DVD (digital versatile disk), flash memory, and the like. The computer readable medium may be any combination of such storage or transmission devices.
Computer programs incorporating various features of the present invention may be encoded on various computer readable storage media; suitable media include magnetic disk or tape, optical storage media such as compact disk (CD) or DVD (digital versatile disk), flash memory, and the like. Computer readable storage media encoded with the program code may be packaged with a compatible device or provided separately from other devices. In addition, program code may be encoded and transmitted via wired, optical, and/or wireless networks conforming to a variety of protocols, including the Internet, thereby allowing distribution, e.g., via Internet download. Any such computer readable medium may reside on or within a single computer product (e.g., a hard drive, a CD, or an entire computer system), and may be present on or within different computer products within a system or network. A computer system may include a monitor, printer, or other suitable display for providing any of the results mentioned herein to a user.
Although the invention has been described with respect to specific embodiments, it will be appreciated that the invention is intended to cover all modifications and equivalents within the scope of the following claims.
The present application claims the benefit of and priority to U.S. Provisional Application No. 62/171,859, filed Jun. 5, 2015; and is related to commonly owned U.S. patent application Ser. No. 14/732,359, filed Jun. 5, 2015, the disclosures of which are incorporated by reference in their entireties for all purposes.