This disclosure relates generally to systems, methods, and computer readable media for message threading across multiple communications formats and protocols.
The proliferation of personal computing devices in recent years, especially mobile personal computing devices, combined with a growth in the number of widely-used communications formats (e.g., text, voice, video, image) and protocols (e.g., SMTP, IMAP/POP, SMS/MMS, XMPP, YMSG, etc.), has led to a communications experience that many users find fragmented and difficult to search for relevant information. Users desire a system that provides for ease of message threading by “stitching” together related communications across multiple formats and protocols—all seamlessly from the user's perspective. Such stitching together of communications across multiple formats and protocols may occur, e.g., by: 1) direct user action in a centralized communications application (e.g., by a user clicking ‘Reply’ on a particular message); 2) semantic matching (or other search-style message association techniques); 3) element-matching (e.g., matching on subject lines, senders/recipients, similar quoted text, etc.); and 4) “state-matching” (e.g., associating messages if they are specifically tagged as being related to another message, sender, etc. by a third-party service, e.g., a webmail provider or Instant Messaging (IM) service).
With current communications technologies, conversations remain “siloed” within particular communication formats or protocols, leaving users unable to search across multiple communications in multiple formats or protocols and across multiple applications on their computing devices to find relevant communications (or even communications that a messaging system may predict to be relevant), often resulting in inefficient communication workflows—and even lost business or personal opportunities. For example, a conversation between two people may begin over text messages (e.g., SMS) and then transition to email. When such a transition happens, the entire conversation can no longer be tracked, reviewed, searched, or archived by a single source, since the conversation has ‘crossed over’ protocols. For example, if the user ran a search in their email search system for a particular topic that had come up only in the user's SMS conversations, such a search may not turn up optimally relevant results.
Further, a multi-format, multi-protocol, communication threading system, such as is disclosed herein, may also provide for the semantic analysis of conversations. For example, for a given set of communications between two users, there may be only a dozen or so keywords that are relevant and related to the subject matter of the communications. These dozen or so keywords may be used to generate an “initial tag cloud” to associate with the communication(s) being indexed. The initial tag cloud can be created based on multiple factors, such as the uniqueness of the word, the number of times a word is repeated, phrase detection, etc. These initial tag clouds may then themselves be used to generate an expanded “predictive tag cloud,” based on the use of Markov chains or other predictive analytics based on established language theory techniques and data derived from existing communications data in a centralized communications server. These initial tag clouds and predictive tag clouds may be used to improve message indexing and provide enhanced relevancy in search results. In doing so, the centralized communications server may establish connections between individual messages that were sent/received using one or multiple communication formats or protocols and that may contain information relevant to the user's initial search query.
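The factors described above for building an initial tag cloud (word uniqueness, repetition count) can be illustrated with a simple TF-IDF-style scorer. This is only a sketch: the function name, the logarithmic weighting, and the inputs (a per-word document-frequency table for the wider corpus) are illustrative assumptions rather than the disclosed implementation.

```python
import re
from collections import Counter
from math import log

def initial_tag_cloud(messages, corpus_doc_freq, corpus_size, top_n=12):
    """Sketch of an 'initial tag cloud' for one conversation.

    Term frequency within the conversation is weighted by inverse
    document frequency across the wider corpus -- a stand-in for the
    'uniqueness of the word' and repetition factors described above.
    """
    words = []
    for msg in messages:
        words.extend(re.findall(r"[a-z']+", msg.lower()))
    tf = Counter(words)
    scores = {}
    for word, count in tf.items():
        if len(word) < 3:
            continue  # skip very short tokens ("in", "is", ...)
        df = corpus_doc_freq.get(word, 1)
        scores[word] = count * log(corpus_size / df)  # TF-IDF-style weight
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]]
```

A phrase-detection pass (the third factor mentioned above) could be layered on by counting n-grams in the same way as single words.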
The subject matter of the present disclosure is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above. To address these and other issues, techniques that enable seamless, multi-format, multi-protocol communication threading are described herein.
Disclosed are systems, methods, and computer readable media for threading communications for computing devices across multiple formats and multiple protocols. More particularly, but not by way of limitation, this disclosure relates to systems, methods, and computer readable media to permit computing devices, e.g., smartphones, tablets, laptops, wearables, and the like, to present users with a seamless, multi-format, multi-protocol, communication threading system that may also perform semantic and predictive analysis based on the content of the multi-format, multi-protocol communications that are stored by a centralized communications server.
Use of a multi-format, multi-protocol, communication threading system allows users to view/preview all their messages, conversations, documents, etc., which are related (or potentially related) to a particular query in a single unified results feed. Such a multi-format, multi-protocol, communication threading system may also provide the ability to “stitch” together communications across one or more of a variety of communication protocols, including SMTP, IMAP/POP, SMS/MMS, XMPP, YMSG, and/or social media protocols. Further, the use of semantic and predictive analysis on the content of a user's communications may help the user discover potentially valuable and relevant messages, conversations, documents, etc., that would not be returned by current string-based or single-format/single-protocol, index-based searching techniques.
Referring now to
Server 106 in the server-entry point network architecture infrastructure 100 of
Referring now to
Referring now to
System unit 205 may be programmed to perform methods in accordance with this disclosure. System unit 205 comprises one or more processing units, input-output (I/O) bus 225, and memory 215. Access to memory 215 can be accomplished using the communication bus 225. Processing unit 210 may include any programmable controller device including, for example, a mainframe processor, a mobile phone processor, or one or more members of the INTEL® ATOM™, INTEL® XEON™, and INTEL® CORE™ processor families from Intel Corporation and the Cortex and ARM processor families from ARM. (INTEL, INTEL ATOM, XEON, and CORE are trademarks of the Intel Corporation. CORTEX is a registered trademark of the ARM Limited Corporation. ARM is a registered trademark of the ARM Limited Company.) Memory 215 may include one or more memory modules and comprise random access memory (RAM), read only memory (ROM), programmable read only memory (PROM), programmable read-write memory, and solid-state memory. As also shown in
Referring now to
The processing unit core 210 is shown including execution logic 280 having a set of execution units 285-1 through 285-N. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function. The execution logic 280 performs the operations specified by code instructions.
After completion of execution of the operations specified by the code instructions, backend logic 290 retires the instructions of the code 250. In one embodiment, the processing unit core 210 allows out of order execution but requires in order retirement of instructions. Retirement logic 295 may take a variety of forms as known to those of skill in the art (e.g., re-order buffers or the like). In this manner, the processing unit core 210 is transformed during execution of the code 250, at least in terms of the output generated by the decoder, the hardware registers and tables utilized by the register renaming logic 262, and any registers (not shown) modified by the execution logic 280.
Although not illustrated in
As is shown across the top row of the interface 302, the multi-format, multi-protocol messages received by a user of the system may be grouped by protocol (e.g., Email, IM/SMS, Video, Voice, etc.), or all messages may be combined together into a single, unified inbox feed, as is shown in
Moving down to row 308 of inbox feed 300, messages from a second user, Peter Ehrmanntraut, have also been aggregated into a single row of the feed. Displayed on the right-hand side of row 308 is reveal arrow 310. Selection of reveal arrow 310 may provide additional options to the user, such as to reply, delay reply/delay send, forward, return a call, favorite, archive, or delete certain messages from a particular sender. Further, the reveal action may conveniently keep the user on the same screen and allows for quick visual filtering of messages. Gestures and icon features may help the user with the decision-making process regarding the choice to reply, delay replying (including the time delaying of a response across multiple protocols), delete, mark as spam, see a full message, translate, read, or flag a message as being unread. With respect to the “delay reply/delay send” option, the multi-protocol, multi-format communication system may determine, based on the determined outgoing message format and protocol, that a particular communication in a particular format (or that is being sent via a particular protocol) should be delayed before being sent to the recipient. For example, a video or voice message may not be appropriate to send at midnight, and so the system may delay sending the message until such time as the recipient is more likely to be awake, e.g., 9:00 am. On the other hand, if the outgoing message is in text format and is being delivered via the SMS protocol, sending the message at midnight may be more socially appropriate. Delay reply/delay send may also take into account the time zone of the recipient and choose a more socially appropriate delivery time for a message based on the recipient's local time.
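The delay-send behavior described above can be sketched as a small scheduling rule. The waking-hours window, the set of formats treated as intrusive, and the function signature are all hypothetical choices for illustration:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: rich-media formats (voice, video) are held until a
# socially appropriate window in the recipient's local time; plain text
# (e.g., SMS) is sent immediately.
QUIET_FORMATS = {"voice", "video"}
WAKING_START, WAKING_END = 9, 21  # 9:00 am to 9:00 pm recipient-local

def scheduled_send_time(now_utc, recipient_utc_offset_hours, message_format):
    local = now_utc + timedelta(hours=recipient_utc_offset_hours)
    if message_format not in QUIET_FORMATS:
        return now_utc  # text/SMS: deliver right away
    if WAKING_START <= local.hour < WAKING_END:
        return now_utc  # recipient likely awake
    # Otherwise delay until 9:00 am recipient-local, converted back to UTC
    next_morning = local.replace(hour=WAKING_START, minute=0,
                                 second=0, microsecond=0)
    if local.hour >= WAKING_END:
        next_morning += timedelta(days=1)
    return next_morning - timedelta(hours=recipient_utc_offset_hours)
```

A production system would presumably resolve the recipient's time zone from profile or device data rather than taking a fixed UTC offset.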
Finally, moving down to row 312, the ‘grayed-out’ characteristic of the row may be used to indicate that there are no remaining unread/unopened messages of any format or protocol type remaining from a particular sender. Alternately, each message type may be individually grayed out, indicating that there are no new messages of a particular type. It is to be understood that the use of a grayed out row is merely exemplary, and that any number of visual indicators may be used to inform the user of the device that no unread messages remain.
As may now be appreciated, the multi-protocol, person-centric, multi-format inbox feed 300 of
In other embodiments, users may also select their preferred delivery method for incoming messages of all types. For example, they can choose to receive their email messages in voice format, or their voice messages in text format, etc.
Referring now to
Referring now to
As mentioned above, there are multiple ways by which the centralized communication system may associate or “stitch” together multiple messages across disparate messaging formats and protocols, creating a “relationship” between each associated message. Such relationships, which may be created uniquely for a variety of messages in a variety of formats and protocols through the system, may be used to create a “relationship map,” i.e., a cluster of relationships connecting each message to other messages with varying degrees of separation. The relationship map may be analyzed to determine communication patterns (e.g., system-wide or on a per-user basis), provide greater search relevancy with messages across format/protocols, and provide other such insights and benefits.
According to a first embodiment, direct user actions taken in a centralized communications application may be used to associate messages as part of the same thread of conversation. For example, if a user has ‘Message 1’ open and clicks a ‘Reply’ button in the multi-format, multi-protocol communication application, thus opening a ‘Message 2,’ then the system may know to associate ‘Message 1’ and ‘Message 2’ together as being part of the same “thread,” even if, for instance, ‘Message 1’ was received via an SMS protocol and ‘Message 2’ is eventually sent via an email protocol using the multi-format, multi-protocol communication application. Direct user actions taken from within the multi-format, multi-protocol communication application may be logged by the application, synced with the centralized communication server and any other properly authenticated client(s), and stored for future recall when a user requests to see a “message thread” view.
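A minimal sketch of such logged associations is an undirected graph keyed by message identifiers. The class and method names below are hypothetical; a production system would persist these edges and sync them with the centralized communication server and authenticated clients, as described above:

```python
from collections import defaultdict

class RelationshipMap:
    """Sketch of the 'relationship map': an undirected graph linking
    message IDs that have been associated, regardless of the format or
    protocol of each message."""

    def __init__(self):
        self.edges = defaultdict(set)

    def associate(self, msg_a, msg_b, reason="reply"):
        # e.g., the user clicked 'Reply' on msg_a, producing msg_b
        self.edges[msg_a].add((msg_b, reason))
        self.edges[msg_b].add((msg_a, reason))

    def thread(self, msg_id):
        """All messages reachable from msg_id, i.e., one stitched thread."""
        seen, stack = {msg_id}, [msg_id]
        while stack:
            for neighbor, _ in self.edges[stack.pop()]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    stack.append(neighbor)
        return seen
```

In this sketch, an SMS replied to by email simply becomes two connected nodes, so a later “message thread” view can recover the whole chain from any member.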
According to a second embodiment, the system may use semantic matching (or other search-based/keyword message association techniques) to associate messages. A variety of semantic and search-based/keyword techniques for associating related messages will be discussed in further detail below in reference to
According to a third embodiment, element-matching techniques may be employed to associate messages. For example, messages that match each other based on subject lines or senders/recipient lists, or which have similar quoted text within them, etc., may be intelligently associated together—even if the centralized system has not been provided with data that otherwise affirmatively associates the messages together as being a part of the same messaging thread or chain. This embodiment will be discussed in further detail below in reference to
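One possible form of element matching is sketched below. The subject-prefix normalization rule, the participant-set comparison, and the 50% quoted-line overlap threshold are illustrative assumptions, not disclosed parameters:

```python
import re

def normalize_subject(subject):
    # Strip common reply/forward prefixes ("Re:", "Fwd:") before comparing
    return re.sub(r"^(?:(re|fwd?)\s*:\s*)+", "", subject.strip(),
                  flags=re.I).lower()

def elements_match(msg_a, msg_b, quote_overlap_threshold=0.5):
    """Hypothetical element-matching rule: associate two messages if their
    normalized subjects match, their participant sets match, or a large
    fraction of one message's lines appear quoted ('> ...') in the other."""
    if normalize_subject(msg_a["subject"]) == normalize_subject(msg_b["subject"]):
        return True
    if msg_a["participants"] == msg_b["participants"]:
        return True
    lines_a = {l.strip() for l in msg_a["body"].splitlines() if l.strip()}
    quoted_b = {l.lstrip("> ").strip() for l in msg_b["body"].splitlines()
                if l.startswith(">")}
    if lines_a and len(lines_a & quoted_b) / len(lines_a) >= quote_overlap_threshold:
        return True
    return False
```

In practice these signals would more likely be weighted and combined than applied as independent pass/fail rules.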
According to a fourth embodiment, “state-matching” techniques may be employed to associate messages. For example, certain third-party services which can integrate with the centralized communication system (hereinafter, a “Valid Third-Party Service”) may specifically tag a message as a “Reply” to another message, and, thus, the centralized system may associate such messages as a part of the same thread or chain, even if the action to send the initial Reply message took place outside of the centralized communication system, i.e., was made directly via the Valid Third-Party Service's system.
One or more of the four techniques outlined above may be used in combination with each other in order for the system to most effectively thread together disparate messages across multiple formats and/or multiple protocols in a way that is most beneficial for the individual user of the centralized communication system.
Referring now to
Referring now to
Referring now to
Referring now to
Assuming the client device has access, in one embodiment, the query will be sent to a central server(s) of the multi-format, multi-protocol, contextualized communication search system, and, based on the nature of the query, a semantic analysis and/or predictive analysis of the query terms may be performed (Step 430). In such a “server-centric” approach, the central server(s) run search logic through a centralized content database, and the central server(s) may perform real-time relevancy ranking. The results (along with the rankings) may then be sent to the client, so that the client may display the results to a user. This “server-centric” approach may allow for enhanced speed and consistency across clients and services, and may also allow for greater richness in index data modeling. Other query implementations may utilize a more “client-centric” approach. In such a “client-centric” approach, a user inputs a query on a client device, and then the client device may run search logic through a client database, allowing the client device to perform real-time relevancy ranking, and display the results on the client device. This option allows for enhanced user privacy, but may sacrifice speed. Still other query implementations may utilize a “hybrid” search architecture, which may comprise a combination of the “server-centric” and “client-centric” approaches outlined above. A “hybrid” architecture may be of particular value when the client device is either not connected to the Internet or when the two databases (i.e., the client database and server database) are not in perfect sync.
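The “hybrid” dispatch described above can be sketched as a routing function that prefers the server index when it is reachable and in sync, and otherwise falls back to a purely client-side search. The index structure and the substring-count scoring below are deliberately simplified assumptions:

```python
from dataclasses import dataclass

@dataclass
class Result:
    doc_id: str
    score: float

class LocalIndex:
    """Toy stand-in for the client-side database."""
    def __init__(self, docs):
        self.docs = docs  # {doc_id: text}

    def search(self, query):
        q = query.lower()
        return [Result(d, t.lower().count(q))
                for d, t in self.docs.items() if q in t.lower()]

def hybrid_search(query, local_index, server_search=None, in_sync=False):
    """Prefer the server-centric path when available and consistent with
    the local index; otherwise run the client-centric fallback."""
    if server_search is not None and in_sync:
        results = server_search(query)       # server-centric path
    else:
        results = local_index.search(query)  # client-centric fallback
    return sorted(results, key=lambda r: r.score, reverse=True)
```

Real relevancy ranking would of course use the tag-cloud indexes described above rather than raw substring counts.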
As discussed above, a semantic analysis may be performed on extant content on client devices, the system servers, and/or third-party content host servers in order to determine the particular keywords that are relevant and related to the subject matter of a given query(ies), document(s), or communication(s), etc. These keywords may be used to generate a “tag cloud” associated with the given query(ies), document(s), or communication(s), etc. These tag clouds may then themselves be used to generate further “predictive tag clouds,” based on the particular content of the words in the generated tag cloud, as will be described in further detail below. The tag clouds and predictive tag clouds may then be used to “stitch” together, i.e., associate, related query(ies), document(s), or communication(s), etc. into “clusters” (Step 435).
Once the related query(ies), document(s), or communication(s), etc. have been connected together via the above-described searching process, the user's query may be executed. For example, if the user's query is asking for all content related to a particular second user, the system may search all ‘person-centric’ content across multiple data formats and/or protocols related to the particular second user (Step 440). For example, if the user clicked on row 308 shown in
If the user's query is asking for all content related to a particular topic(s) that the user has discussed with user ‘Peter Ehrmanntraut,’ the system may search all ‘tag-centric’ content across multiple data formats related to the particular topic(s) (Step 445). For example, if the user typed the term ‘book’ into search box 326 shown in
Once all the query-relevant, contextualized multi-format, multi-protocol data has been located by the server, packaged, and then sent to the client device issuing the query, the client device retrieves the information, reformats it (if applicable), ranks or sorts it (if applicable), and displays the information on a display screen of the client device (Step 450).
Various conversations in
Moving on to Conversation #2 502, it is further clustered with Conversation #6 506 based on the fact that each conversation mentions a country (‘India,’ in the case of Conversation #2 502, and ‘Italy’ in the case of Conversation #6 506), and these tags have been predictively semantically linked with one another in the example shown in
Moving on to Conversation #3 503, it is further clustered with Conversation #4 504 based on the fact that each conversation mentions a movie (‘Jackie Robinson,’ in the case of Conversation #3 503, and ‘Batman’ in the case of Conversation #4 504), and these tags have been predictively semantically linked with one another in the example shown in
Moving on to Conversation #5 505, it is further clustered with Conversation #6 506 based on the fact that each conversation mentions a topic that has been semantically linked to the concept of ‘Italy’ (‘pizza,’ in the case of Conversation #5 505, and the word ‘Italy’ itself in the case of Conversation #6 506).
Finally, Conversation #6 506, is further clustered with Conversation #7 507 based on the fact that each conversation is in a video messaging format.
Based on each word in tag cloud 510, additional predictive analysis may be performed, resulting in predictive tag cloud 520. In the example of
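One way a Markov-chain-style expansion could work is sketched below: bigram transition counts are collected from prior communications, and each initial tag is expanded with the words that most often follow it in that corpus. The bigram model and the per-tag expansion count are illustrative simplifications of the predictive analytics described above:

```python
from collections import defaultdict, Counter

def build_transitions(corpus_messages):
    """Bigram transition counts over prior communications -- the kind of
    language statistic a Markov-chain expansion could be built on."""
    transitions = defaultdict(Counter)
    for msg in corpus_messages:
        words = msg.lower().split()
        for a, b in zip(words, words[1:]):
            transitions[a][b] += 1
    return transitions

def predictive_tag_cloud(initial_tags, transitions, per_tag=2):
    """Expand an initial tag cloud with the words most likely to follow
    each tag in the historical corpus."""
    expanded = set()
    for tag in initial_tags:
        for word, _ in transitions[tag.lower()].most_common(per_tag):
            expanded.add(word)
    return expanded - set(initial_tags)
```

Higher-order chains or embedding-based similarity could substitute for the bigram counts without changing the overall flow.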
As the centralized messaging database grows, it will become possible for the system to rely more and more on its own data to drive the initial tag cloud and predictive tag cloud algorithms. For example, if a particular user always begins emails with, “Hope you're doing well,” the system could determine that it was not necessary to repeatedly index that phrase, and instead simply keep a note of a reference to the original phrase. This process of contextual learning may be employed for an individual user's content, as well as across global content stored in the centralized messaging database (e.g., users worldwide may use the phrase “Congratulations on the new baby!” quite often). This process may allow for less duplication, smaller index sizes, etc.
Further, contextual learning may be used to determine that a particular user has recently started to use one phrase in place of another, e.g., if the user just spent a year living in London, he or she may begin to use the phrase “to let” instead of “for rent.” In such a situation, a machine learning system using contextual cues could determine that, for that particular user only, the phrases “to let” and “for rent” are considered like terms and, therefore, would share word mapping. This way, when the user searches for “rent,” the system can include references to “let” as potentially relevant matches. Other machine learning techniques that may be employed include techniques to influence index term weight assignment. For example, a particular user's searches may indicate that “time” is not a significant search parameter for the user. In other words, the particular user may only really search for content within a one-week timeframe of the present date. The centralized system could monitor such behaviors and adjust the index weights at regular or semi-regular intervals accordingly to assign greater weight to the timestamp on recent content and reduce the weight when timestamps are “old” for that particular user, thus allowing the system to provide a more customized and relevant search experience. By employing these customized contextual learning techniques, the end result is that the same content, e.g., an email sent from User A to User B, could have two different index mappings in the centralized system so that both User A and User B can have an optimized search/threading experience. The system could also perform machine learning techniques based on historic patterns of communication to influence predictive threading. For example, in protocols where data is limited, e.g., SMS, the system could employ a historic look-back on the User's communication in order to determine the likelihood of a conversation to/from the User spanning multiple protocols.
That assigned weight pertaining to the likelihood of a conversation ‘jumping’ protocol could then impact the stitching results for that User. In this way, the system is able to apply machine learning techniques on an individual level in order to provide the most relevant search results to the user across formats and protocols.
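The per-user shared word mapping described above (e.g., treating “to let” and “for rent” as like terms for one user only) can be sketched as a small synonym-group structure. The class shape below is a hypothetical illustration, one instance of which would be kept per user:

```python
class UserWordMap:
    """Per-user word mapping: terms learned to be interchangeable for
    this user expand each other's queries."""

    def __init__(self):
        self.synonyms = {}  # term -> set of like terms for this user

    def link(self, term_a, term_b):
        # Merge the two terms' groups so every member maps to the union
        group = (self.synonyms.get(term_a, {term_a})
                 | self.synonyms.get(term_b, {term_b}))
        for term in group:
            self.synonyms[term] = group

    def expand_query(self, term):
        """Return the term plus any user-specific like terms."""
        return self.synonyms.get(term, {term})
```

A search for “rent” by this user would then also match content containing “let,” while other users' indexes remain unaffected.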
In automated data visualization (ADV), content within data files and/or documents is identified, segmented, and/or broken apart from the corresponding files based on different data protocols and formats. For example, a Word document may include text and pictures, or other media content including video, audio, etc. Further, an Excel or .xml spreadsheet may include multiple tables with different sections having text, graphs, pictures, and the like. Different data types and tables may therefore have different data assets, formats, protocols, and the like. However, traditional data extraction processes for individual files may be bound by either extracting regions of like properties (e.g., image portions of a larger image, or text from a document having an identified name, address, title, etc.) or by semantic labeling of identified content, which is schema driven (e.g., by identifying a portion of a video or picture in a data set, such as one that may include a cat, and/or by generating another form of data, such as text, audio, or video, that references a “cat”). These conventional operations may be limited by failing to address and/or operate when faced with multi-format data files, such as a spreadsheet with embedded images.
Thus, a machine learning model and/or engine may ingest a data file and break up the data file into one or more data portions and/or child data files that may be processed, searched, indexed, and the like. This may be done using data parsing and content identification logic based on one or more AI engines, rules, and/or machine learning models. In this way, AI and/or machine learning may identify and isolate independent data regions contained within a single data file or object. Once isolated, the AI and/or machine learning models and engines may extract the child data assets into native formats that may or may not match the original data file type and/or format. The child data assets may correspond to content that may or may not conform to a specific predefined schema for the corresponding data file, such as data file 602.
In this regard, data file 602 includes text data 608, an image 610, and a spreadsheet 612. Each of these corresponds to a different data portion within data file 602, and may therefore be a child data asset and/or content that may be parceled, extracted, and/or segmented from data file 602. Using one or more machine learning models and/or operations, different data protocols and/or formats may be identified for the different data portions in data file 602 corresponding to text data 608, image 610, and spreadsheet 612, as well as any other child data assets or contents that may be contained within the corresponding data file, document, or the like. The machine learning or other AI models and/or systems may be trained and/or configured to identify such child data assets based on their corresponding object similarity, formats, and/or individual properties, including tags, metadata, and/or data content.
Data file 602 may then be segmented and/or parceled into isolated sections 604, where isolated sections 604 include isolated text data 614, an isolated image 616, and an isolated spreadsheet 618 corresponding, respectively, to text data 608, image 610, and spreadsheet 612. In this regard, the machine learning model(s) and/or other AI system may be used to identify distinct objects within data file 602, which may be based on correlations to other data files, formats, portions, and/or protocols. Further, analytics may be performed on the data, metadata, and/or file identifiers or tags in order to identify isolated sections 604. Once identified and/or segmented, predictive associations 606 may be generated to associate each of isolated sections 604 with a corresponding content and/or segmented data portion in order for searching, indexing, and/or generating of those child assets and/or contents.
Predictive associations 606 therefore include text associations 620, image associations 622, and spreadsheet associations 624. Each of these associations may be used to correlate the segmented and/or identified child data assets with other content and/or data assets, including data files and/or content within such data files. As such, text associations 620 may include isolated text data 614 associated with similar text patterns 626, image associations 622 may include isolated image 616 associated with similar image patterns 628, and spreadsheet associations 624 may include isolated spreadsheet 618 associated with similar spreadsheet patterns 630. Further, if predictive associations 606 are not correct and/or do not properly associate child content and/or assets in data file 602 with corresponding contents, protocols, and/or data, reprocess requests 632 may be executed. This may include utilizing one or more machine learning or other AI systems in order to re-associate portions of data file 602 with other corresponding data files, contents, and/or portions.
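The segmentation of data file 602 into isolated sections 604 can be sketched as follows. A trained model would perform the classification in practice; the dispatch on a `kind` field below is a stand-in assumption so the grouping step can be shown end to end:

```python
def classify_portion(portion):
    """Stand-in for the trained classifier: label each extracted region
    of a file by content type so it can be isolated as a child asset."""
    kind = portion.get("kind")
    if kind in ("png", "jpeg", "gif"):
        return "image"
    if kind in ("csv", "xlsx", "table"):
        return "spreadsheet"
    return "text"

def segment_file(data_file):
    """Split a multi-format data file into isolated child assets,
    grouped by identified content type (cf. isolated sections 604)."""
    isolated = {"text": [], "image": [], "spreadsheet": []}
    for portion in data_file["portions"]:
        isolated[classify_portion(portion)].append(portion["payload"])
    return isolated
```

Each grouped payload could then be stored in its native format and indexed independently, as described above.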
First, at Step 702, a first data file having one or more first data portions and one or more second data files having one or more second data portions are obtained. The data files may correspond to a document, image, video, spreadsheet, or other computing file that may include different data portions. Each data portion may have a corresponding data format and/or protocol, such as .doc or .docx text data, .jpeg or another image data format, a video data format or protocol, .xml or another spreadsheet-type data format, and the like. Accordingly, each data file may include multiple different data portions having different types, formats, and/or protocols that may be separately segmented and/or searched.
At Step 704, an analysis of the different data portions in the obtained data files is performed. The analysis may correspond to executing one or more machine learning models, engines, and/or systems that are trained to analyze, determine, and identify different data and/or file formats within data files, which allows for breaking down and parceling of data files based on different data portions and their corresponding formats and/or data protocols. For example, in a data file, such as a document, a user may view text, an image, a video, a spreadsheet, and/or other data portions. Each may correspond to a distinct data object in the file or document; however, conventional computing systems may not identify each data portion. Thus, one or more machine learning models may be used to identify such data portions, types, and/or protocols within the data file. The machine learning models may be trained using annotated and/or unannotated training data, as well as supervised or unsupervised training, based on other data files and/or documents.
At Step 706, data types and/or protocols for each of the different data portions in the obtained data files are identified. The analysis may therefore identify the data portions using the machine learning model(s) based on corresponding data formats and/or protocols. This may be performed to identify child data assets in each data file, and may be based on a comparison with other data files and/or trained machine learning operations that correlate such data portions and/or protocols within similar data files. Once identified, each of these data portions may be segmented and parceled from the data files based on object similarity in the data chunks or portions for the data files. This allows for data portions to be flattened and separated in data files as child assets.
At Step 708, the different data portions are associated with different contents based on the identified data types and/or protocols. The different data portions may therefore be correlated and associated with particular child assets as text, image, video, audio, spreadsheet, or other data formats, protocols, and/or contents. One or more machine learning models and/or operations may be used to correlate and associate data portions with particular contents, such as text, image, video, audio, etc., content, which allows for indexing of the data portions and/or adding or providing identifiers or metadata to the data portions and/or data files. This may provide improved searching of the data files and identifying particular content in data files. At Step 710, associations are created with the different data portions and the different contents, and the obtained data files are segmented by the different data portions. The associations may be used to identify and/or label each data portion as particular content within the data files. Further, segmenting of the data files may be performed to segment content into particular data containers and/or child contents or assets, which may be independently searched, identified, and/or retrieved.
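Steps 702 through 710 can be summarized in a single orchestration sketch. The dictionary file representation and the injected `classifier` callable are hypothetical stand-ins for the trained machine learning models described above:

```python
def process_files(data_files, classifier):
    """End-to-end sketch of Steps 702-710: obtain files, analyze their
    portions, identify each portion's content type, then segment and
    associate portions of the same identified type across files."""
    associations = {}  # content type -> list of (file_id, payload)
    for f in data_files:                   # Step 702: obtain data files
        for portion in f["portions"]:      # Step 704: analyze portions
            ctype = classifier(portion)    # Step 706: identify type/protocol
            # Steps 708-710: associate and segment by content type
            associations.setdefault(ctype, []).append((f["id"], portion["payload"]))
    return associations
```

The resulting per-type buckets correspond to the independently searchable child contents or assets described above.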
Example 1 is a non-transitory computer readable medium that comprises computer executable instructions stored thereon to cause one or more processing units to: obtain a first plurality of messages for a first user, wherein the first plurality of messages comprises: one or more messages in each of a first plurality of formats; and one or more messages sent or received via each of a first plurality of protocols; and create one or more associations between one or more of the first plurality of messages, wherein at least one of the one or more associations is between messages sent or received via two or more different protocols from among the first plurality of protocols, and wherein at least one of the one or more associations is between messages in two or more different formats from among the first plurality of formats.
Example 2 includes the subject matter of example 1, wherein the instructions further comprise instructions to cause the one or more processing units to receive a query requesting at least one message from the first plurality of messages.
Example 3 includes the subject matter of example 2, wherein the instructions further comprise instructions to cause the one or more processing units to generate a result set to the query.
Example 4 includes the subject matter of example 3, wherein the result set comprises the at least one requested message and one or more messages from the first plurality of messages for which associations have been created to the requested message.
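The query flow of Examples 2-4 (a query for a message whose result set also includes associated messages) can be sketched as follows. The association graph, message identifiers, and message contents are hypothetical illustrations, not data from the disclosure.

```python
# Hypothetical association graph: message id -> ids of associated messages,
# spanning protocols (an SMS associated with emails, per Example 1).
associations = {
    "sms:42": {"email:17"},
    "email:17": {"sms:42", "email:18"},
    "email:18": {"email:17"},
}

messages = {
    "sms:42": "Lunch Friday?",
    "email:17": "Re: lunch, noon works",
    "email:18": "Booked the table",
}

def query(requested_id: str) -> list[str]:
    """Return the requested message plus the messages for which
    associations have been created to it (Examples 2-4)."""
    result_ids = [requested_id] + sorted(associations.get(requested_id, ()))
    return [messages[mid] for mid in result_ids]
```

A query for the email `email:17` thus surfaces the related SMS message as well, even though the two were sent via different protocols.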
Example 5 includes the subject matter of example 1, wherein the instructions to create one or more associations between one or more of the first plurality of messages further comprise instructions to: perform a semantic analysis on the first plurality of messages; and create one or more clusters of messages from the first plurality of messages, wherein a cluster of messages comprises two or more messages that are associated together, and wherein the instructions to create the one or more clusters of messages further comprise instructions to create the one or more clusters of messages based, at least in part, on the semantic analysis performed on the first plurality of messages.
Example 6 includes the subject matter of example 5, wherein the instructions to perform a semantic analysis on a first plurality of messages further comprise instructions to identify one or more keywords in one or more of the first plurality of messages.
Example 7 includes the subject matter of example 5, wherein the instructions to perform a semantic analysis on a first plurality of messages further comprise instructions to perform a predictive semantic analysis on one or more of the first plurality of messages.
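The semantic analysis and clustering of Examples 5-7 can be sketched with a simple keyword-overlap heuristic. This stand-in (keyword extraction plus greedy clustering) is an assumption for illustration; the disclosure's semantic analysis may use other search-style association techniques.

```python
def keywords(text: str, stopwords=frozenset({"the", "a", "on", "for"})) -> set[str]:
    """Sketch of keyword identification (Example 6): lowercase,
    split on whitespace, drop a few stopwords."""
    return {w for w in text.lower().split() if w not in stopwords}

def cluster(msgs: list[str], min_overlap: int = 2) -> list[set[int]]:
    """Greedy clustering (Example 5): messages sharing at least
    `min_overlap` keywords with any member of a cluster are
    associated into that cluster."""
    clusters: list[set[int]] = []
    kw = [keywords(m) for m in msgs]
    for i in range(len(msgs)):
        for c in clusters:
            if any(len(kw[i] & kw[j]) >= min_overlap for j in c):
                c.add(i)
                break
        else:
            clusters.append({i})
    return clusters
```

Messages about the same topic land in one cluster regardless of the protocol each was sent over, while unrelated messages form their own clusters.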
Example 8 includes the subject matter of example 1, wherein the instructions to create one or more associations between one or more of the first plurality of messages further comprise instructions to: perform element matching on the first plurality of messages.
Example 9 includes the subject matter of example 8, wherein the instructions to perform element matching on the first plurality of messages further comprise instructions to: perform element matching on at least one of the following: sender, recipient list, subject, quoted text, and timestamp.
Example 10 includes the subject matter of example 1, wherein the instructions to create one or more associations between one or more of the first plurality of messages further comprise instructions to: perform state matching on the first plurality of messages.
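The element matching of Examples 8-9 and the state matching of Example 10 can be sketched as two predicates over a minimal message record. The `Message` fields, the subject normalization, and the tag-based state check are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    subject: str
    protocol: str
    tags: set = field(default_factory=set)  # third-party "state" tags

def element_match(a: Message, b: Message) -> bool:
    """Examples 8-9 (sketch): associate messages whose elements
    match, e.g., identical sender or normalized subject line."""
    norm = lambda s: s.lower().removeprefix("re: ").strip()
    return a.sender == b.sender or norm(a.subject) == norm(b.subject)

def state_match(a: Message, b: Message) -> bool:
    """Example 10 (sketch): associate messages that a third-party
    service (e.g., a webmail or IM provider) has tagged as related."""
    return bool(a.tags & b.tags)
```

Either predicate can create an association across protocols: an email and an SMS with matching subjects, or two messages a provider has tagged with the same conversation identifier.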
Example 11 is a system that comprises: a memory; and one or more processing units, communicatively coupled to the memory, wherein the memory stores instructions to configure the one or more processing units to: obtain a first plurality of messages for a first user, wherein the first plurality of messages comprises: one or more messages in each of a first plurality of formats; and one or more messages sent or received via each of a first plurality of protocols; and create one or more associations between one or more of the first plurality of messages, wherein at least one of the one or more associations is between messages sent or received via two or more different protocols from among the first plurality of protocols, and wherein at least one of the one or more associations is between messages in two or more different formats from among the first plurality of formats.
Example 12 includes the subject matter of example 11, wherein the instructions further comprise instructions to cause the one or more processing units to receive a query requesting at least one message from the first plurality of messages.
Example 13 includes the subject matter of example 12, wherein the instructions further comprise instructions to cause the one or more processing units to generate a result set to the query.
Example 14 includes the subject matter of example 13, wherein the result set comprises the at least one requested message and one or more messages from the first plurality of messages for which associations have been created to the requested message.
Example 15 includes the subject matter of example 11, wherein the instructions to create one or more associations between one or more of the first plurality of messages further comprise instructions to: perform a semantic analysis on the first plurality of messages; and create one or more clusters of messages from the first plurality of messages, wherein a cluster of messages comprises two or more messages that are associated together, and wherein the instructions to create the one or more clusters of messages further comprise instructions to create the one or more clusters of messages based, at least in part, on the semantic analysis performed on the first plurality of messages.
Example 16 includes the subject matter of example 15, wherein the instructions to perform a semantic analysis on a first plurality of messages further comprise instructions to identify one or more keywords in one or more of the first plurality of messages.
Example 17 includes the subject matter of example 15, wherein the instructions to perform a semantic analysis on a first plurality of messages further comprise instructions to perform a predictive semantic analysis on one or more of the first plurality of messages.
Example 18 includes the subject matter of example 11, wherein the instructions to create one or more associations between one or more of the first plurality of messages further comprise instructions to: perform element matching on the first plurality of messages.
Example 19 includes the subject matter of example 18, wherein the instructions to perform element matching on the first plurality of messages further comprise instructions to: perform element matching on at least one of the following: sender, recipient list, subject, quoted text, and timestamp.
Example 20 includes the subject matter of example 11, wherein the instructions to create one or more associations between one or more of the first plurality of messages further comprise instructions to: perform state matching on the first plurality of messages.
Example 21 is a computer-implemented method, comprising: obtaining a first plurality of messages for a first user, wherein the first plurality of messages comprises: one or more messages in each of a first plurality of formats; and one or more messages sent or received via each of a first plurality of protocols; and creating one or more associations between one or more of the first plurality of messages, wherein at least one of the one or more associations is between messages sent or received via two or more different protocols from among the first plurality of protocols, and wherein at least one of the one or more associations is between messages in two or more different formats from among the first plurality of formats.
Example 22 includes the subject matter of example 21, further comprising receiving a query requesting at least one message from the first plurality of messages.
Example 23 includes the subject matter of example 22, further comprising generating a result set to the query.
Example 24 includes the subject matter of example 23, wherein the result set comprises the at least one requested message and one or more messages from the first plurality of messages for which associations have been created to the requested message.
Example 25 includes the subject matter of example 21, wherein the act of creating one or more associations between one or more of the first plurality of messages further comprises: performing a semantic analysis on the first plurality of messages; and creating one or more clusters of messages from the first plurality of messages, wherein a cluster of messages comprises two or more messages that are associated together, and wherein the act of creating the one or more clusters of messages further comprises creating the one or more clusters of messages based, at least in part, on the semantic analysis performed on the first plurality of messages.
In the foregoing description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosed embodiments. It will be apparent, however, to one skilled in the art that the disclosed embodiments may be practiced without these specific details. In other instances, structure and devices are shown in block diagram form in order to avoid obscuring the disclosed embodiments. References to numbers without subscripts or suffixes are understood to reference all instances of subscripts and suffixes corresponding to the referenced number. Moreover, the language used in this disclosure has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter. Reference in the specification to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one disclosed embodiment, and multiple references to “one embodiment” or “an embodiment” should not be understood as necessarily all referring to the same embodiment.
It is also to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-described embodiments may be used in combination with each other, and illustrative process steps may be performed in an order different than shown. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention therefore should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms “including” and “in which” are used as plain-English equivalents of the respective terms “comprising” and “wherein.”
This application claims priority to, and is a continuation-in-part of, U.S. patent application Ser. No. 16/836,691, filed Mar. 31, 2020, issued as U.S. Pat. No. 11,366,838, which is a continuation of U.S. patent application Ser. No. 16/220,943, filed Dec. 14, 2018, issued as U.S. Pat. No. 10,606,871, which is a continuation of U.S. patent application Ser. No. 14/187,699, filed Feb. 24, 2014, issued as U.S. Pat. No. 10,169,447, each of which is hereby incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5481597 | Given | Jan 1996 | A |
5951638 | Hoss | Sep 1999 | A |
6101320 | Schuetze | Aug 2000 | A |
6950502 | Jenkins | Sep 2005 | B1 |
7317929 | El-Fishawy | Jan 2008 | B1 |
7450937 | Claudatos | Nov 2008 | B1 |
7673327 | Polis | Mar 2010 | B1 |
7680752 | Clune, III | Mar 2010 | B1 |
7734705 | Wheeler, Jr. | Jun 2010 | B1 |
7886000 | Polis | Feb 2011 | B1 |
7908647 | Polis | Mar 2011 | B1 |
8090787 | Polis | Jan 2012 | B2 |
8095523 | Brave | Jan 2012 | B2 |
8095592 | Polis | Jan 2012 | B2 |
8108460 | Polis | Jan 2012 | B2 |
8112476 | Polis | Feb 2012 | B2 |
8122080 | Polis | Feb 2012 | B2 |
8156183 | Polis | Apr 2012 | B2 |
8281125 | Briceno | Oct 2012 | B1 |
8296360 | Polis | Oct 2012 | B2 |
8433705 | Dredze | Apr 2013 | B1 |
8438223 | Polis | May 2013 | B2 |
8458256 | Polis | Jun 2013 | B2 |
8458292 | Polis | Jun 2013 | B2 |
8458347 | Polis | Jun 2013 | B2 |
8468202 | Polis | Jun 2013 | B2 |
8468445 | Gupta | Jun 2013 | B2 |
8521526 | Lloyd | Aug 2013 | B1 |
8527525 | Fong | Sep 2013 | B2 |
8959156 | Polis | Feb 2015 | B2 |
9088533 | Zeng | Jul 2015 | B1 |
9529522 | Barros | Dec 2016 | B1 |
9875740 | Kumar | Jan 2018 | B1 |
20020133509 | Johnston | Sep 2002 | A1 |
20020152091 | Nagaoka | Oct 2002 | A1 |
20020160757 | Shavit | Oct 2002 | A1 |
20020178000 | Aktas | Nov 2002 | A1 |
20020194322 | Nagata | Dec 2002 | A1 |
20030096599 | Takatsuki | May 2003 | A1 |
20040117507 | Torma | Jun 2004 | A1 |
20040137884 | Engstrom | Jul 2004 | A1 |
20040177048 | Klug | Sep 2004 | A1 |
20040243719 | Roselinsky | Dec 2004 | A1 |
20040266411 | Galicia | Dec 2004 | A1 |
20050015443 | Levine | Jan 2005 | A1 |
20050080857 | Kirsch | Apr 2005 | A1 |
20050101337 | Wilson | May 2005 | A1 |
20050198159 | Kirsch | Sep 2005 | A1 |
20060193450 | Flynt | Aug 2006 | A1 |
20060212757 | Ross | Sep 2006 | A1 |
20070054676 | Duan | Mar 2007 | A1 |
20070073816 | Kumar | Mar 2007 | A1 |
20070100680 | Kumar | May 2007 | A1 |
20070116195 | Thompson | May 2007 | A1 |
20070130273 | Huynh | Jun 2007 | A1 |
20070180130 | Arnold | Aug 2007 | A1 |
20070237135 | Trevallyn-Jones | Oct 2007 | A1 |
20070299796 | MacBeth | Dec 2007 | A1 |
20080062133 | Wolf | Mar 2008 | A1 |
20080088428 | Pitre | Apr 2008 | A1 |
20080112546 | Fletcher | May 2008 | A1 |
20080236103 | Lowder | Oct 2008 | A1 |
20080261569 | Britt | Oct 2008 | A1 |
20080263103 | McGregor | Oct 2008 | A1 |
20090016504 | Mantell | Jan 2009 | A1 |
20090119370 | Stem | May 2009 | A1 |
20090177477 | Nenov | Jul 2009 | A1 |
20090177484 | Davis | Jul 2009 | A1 |
20090177744 | Marlow | Jul 2009 | A1 |
20090181702 | Vargas | Jul 2009 | A1 |
20090187846 | Paasovaara | Jul 2009 | A1 |
20090271486 | Ligh | Oct 2009 | A1 |
20090292814 | Ting | Nov 2009 | A1 |
20090299996 | Yu | Dec 2009 | A1 |
20100057872 | Koons | Mar 2010 | A1 |
20100198880 | Petersen | Aug 2010 | A1 |
20100210291 | Lauer | Aug 2010 | A1 |
20100220585 | Poulson | Sep 2010 | A1 |
20100223341 | Manolescu | Sep 2010 | A1 |
20100229107 | Turner | Sep 2010 | A1 |
20100250578 | Athsani | Sep 2010 | A1 |
20100312644 | Borgs | Dec 2010 | A1 |
20100323728 | Gould | Dec 2010 | A1 |
20100325227 | Novy | Dec 2010 | A1 |
20110010182 | Turski | Jan 2011 | A1 |
20110051913 | Kesler | Mar 2011 | A1 |
20110078247 | Jackson | Mar 2011 | A1 |
20110078256 | Wang | Mar 2011 | A1 |
20110078267 | Lee | Mar 2011 | A1 |
20110130168 | Vendrow | Jun 2011 | A1 |
20110194629 | Bekanich | Aug 2011 | A1 |
20110219008 | Been | Sep 2011 | A1 |
20110265010 | Ferguson | Oct 2011 | A1 |
20110276640 | Jesse | Nov 2011 | A1 |
20110279458 | Gnanasambandam | Nov 2011 | A1 |
20110295851 | El-Saban | Dec 2011 | A1 |
20120016858 | Rathod | Jan 2012 | A1 |
20120209847 | Rangan | Aug 2012 | A1 |
20120210253 | Luna | Aug 2012 | A1 |
20120221962 | Lew | Aug 2012 | A1 |
20130018945 | Vendrow | Jan 2013 | A1 |
20130024521 | Pocklington | Jan 2013 | A1 |
20130067345 | Das | Mar 2013 | A1 |
20130097279 | Polis | Apr 2013 | A1 |
20130111487 | Cheyer | May 2013 | A1 |
20130127864 | Nevin, III | May 2013 | A1 |
20130151508 | Kurabayashi | Jun 2013 | A1 |
20130197915 | Burke | Aug 2013 | A1 |
20130238979 | Sayers, III | Sep 2013 | A1 |
20130262385 | Kumarasamy | Oct 2013 | A1 |
20130262852 | Roeder | Oct 2013 | A1 |
20130267264 | Abuelsaad | Oct 2013 | A1 |
20130268516 | Chaudhri | Oct 2013 | A1 |
20130304830 | Olsen | Nov 2013 | A1 |
20130325343 | Blumenberg | Dec 2013 | A1 |
20130325603 | Shamir | Dec 2013 | A1 |
20130332308 | Linden | Dec 2013 | A1 |
20140006525 | Freund | Jan 2014 | A1 |
20140020047 | Liebmann | Jan 2014 | A1 |
20140032538 | Arngren | Jan 2014 | A1 |
20140149399 | Kurzion | May 2014 | A1 |
20140270131 | Hand | Sep 2014 | A1 |
20140280460 | Nemer | Sep 2014 | A1 |
20140297807 | Dasgupta | Oct 2014 | A1 |
20140355907 | Pesavento | Dec 2014 | A1 |
20150019406 | Lawrence | Jan 2015 | A1 |
20150039887 | Kahol | Feb 2015 | A1 |
20150095127 | Patel | Apr 2015 | A1 |
20150149484 | Kelley | May 2015 | A1 |
20150186455 | Horling | Jul 2015 | A1 |
20150261496 | Faaborg | Sep 2015 | A1 |
20150278370 | Stratvert | Oct 2015 | A1 |
20150281184 | Cooley | Oct 2015 | A1 |
20150286747 | Anastasakos | Oct 2015 | A1 |
20150286943 | Wang | Oct 2015 | A1 |
20150339405 | Vora | Nov 2015 | A1 |
20160048548 | Thomas | Feb 2016 | A1 |
20160078030 | Brackett | Mar 2016 | A1 |
20160087944 | Downey | Mar 2016 | A1 |
20160092959 | Gross | Mar 2016 | A1 |
20160173578 | Sharma | Jun 2016 | A1 |
20170039246 | Bastide | Feb 2017 | A1 |
20170039296 | Bastide | Feb 2017 | A1 |
20170116578 | Hadatsuki | Apr 2017 | A1 |
20170206276 | Gill | Jul 2017 | A1 |
20170364587 | Krishnamurthy | Dec 2017 | A1 |
20180048661 | Bird | Feb 2018 | A1 |
20180101506 | Hodaei | Apr 2018 | A1 |
20180121603 | Devarakonda | May 2018 | A1 |
Number | Date | Country |
---|---|---|
9931575 | Jun 1999 | WO |
2013112570 | Aug 2013 | WO |
Entry |
---|
Guangyi Xiao et al., “User Interoperability With Heterogeneous IoT Devices Through Transformation,” pp. 1486-1496, 2014. |
Marr, Bernard, Key Business Analytics, Feb. 2016, FT Publishing International, Ch. 17 “Neural Network Analysis” (Year: 2016). |
Number | Date | Country | |
---|---|---|---|
Parent | 16220943 | Dec 2018 | US |
Child | 16836691 | US | |
Parent | 14187699 | Feb 2014 | US |
Child | 16220943 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16836691 | Mar 2020 | US |
Child | 17843909 | US |