Systems and methods of de-duplicating similar news feed items

Information

  • Patent Grant
  • Patent Number
    9,984,166
  • Date Filed
    Friday, October 10, 2014
  • Date Issued
    Tuesday, May 29, 2018
  • Field of Search
    • US
    • 707/738
    • 707/727
    • 707/748
    • 707/749
    • 707/750
    • 707/999.101
    • CPC
    • G06F17/30867
    • G06F7/20
    • G06F7/14
    • G06F17/30648
  • International Classifications
    • G06F7/00
    • G06F17/30
    • Term Extension
      493
Abstract
The technology disclosed relates to de-duplicating contextually similar news feed items. In particular, it relates to assembling a set of news feed items from a plurality of electronic sources and preprocessing the set to generate normalized news feed items that share common company-name mentions and token occurrences. The normalized news feed items are used to calculate one or more resemblance measures based on a sequence alignment score and/or a hyperlink score. The sequence alignment score determines contextual similarity between news feed item pairs, arranged as sequences, based on a number of matching elements in the news feed item sequences and a number of edit operations, such as insertion, deletion, and substitution, required to match the news feed item sequences. The hyperlink score determines contextual similarity between news feed item pairs by comparing the respective search results retrieved in response to supplying the news feed item pairs to a search engine.
Description
RELATED APPLICATION

This application is related to U.S. patent application Ser. No. 14/512,222 entitled “Automatic Clustering By Topic And Prioritizing Online Feed Items,” filed contemporaneously. The related application is hereby incorporated by reference for all purposes.


BACKGROUND

The subject matter discussed in the background section should not be assumed to be prior art merely as a result of its mention in the background section. Similarly, a problem mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches, which in and of themselves may also correspond to implementations of the claimed technology.


As the volume of information flowing on the web continues to increase, the need for automated tools that can assist users in receiving information valuable to them also increases. The information overload created by a multitude of information sources, such as websites and social media sites, makes it difficult for users to know which piece of information is most suitable, relevant, or appropriate to their needs and desires. Also, a substantial portion of users' web surfing time is spent on separating information from noise.


In particular, service providers are continually challenged to deliver value and convenience to users by, for example, providing an efficient search engine with high precision and recall. One area of interest has been the development of tools for finding and accessing desired content or search results. Currently, users locate content by foraging through lengthy and exhausting search results, many of which include similar information. Such methods can be time consuming and troublesome, especially if users are not exactly sure what they are looking for. Although these issues exist with respect to non-mobile devices, they are amplified when it comes to finding desired content or search results using mobile devices, which have much more limited screen space and can only display a few search results per screen.


An opportunity arises to shift the burden of information filtering from users to automated systems and methods that determine contextual similarity between news feed items and present a single news feed item that represents a group of contextually similar news feed items. Improved user experience and engagement and higher user satisfaction and retention may result.





BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings, like reference characters generally refer to like parts throughout the different views. Also, the drawings are not necessarily to scale, with an emphasis instead generally being placed upon illustrating the principles of the technology disclosed. In the following description, various implementations of the technology disclosed are described with reference to the following drawings, in which:



FIG. 1 shows an example environment of de-duplicating similar news feed items.



FIG. 2 shows a set of news feed items assembled from a plurality of electronic sources.



FIG. 3 is one implementation of a set of normalized news feed items.



FIG. 4 illustrates one implementation of determining a resemblance measure for normalized news feed items based on sequence alignment between news feed item pairs.



FIG. 5 depicts one implementation of determining a resemblance measure for normalized news feed items based on results returned in response to supplying the normalized news feed item pairs as search criteria.



FIG. 6 shows one implementation of constructing a resemblance graph of news feed item pairs with a resemblance measure above a threshold and representing the resemblance measure as edges between nodes representing the news feed item pairs.



FIG. 7 depicts one implementation of a plurality of objects that can be used to de-duplicate similar news feed items.



FIG. 8 is a representative method of de-duplicating similar news feed items.



FIG. 9 is a block diagram of an example computer system used to de-duplicate similar news feed items.





DETAILED DESCRIPTION

Introduction


Online news feed items, also referred to as “insights,” often spread through several channels such as websites, RSS feeds, and Twitter feeds. Oftentimes, the same insight is repeated over multiple news sources and thus creates duplication. Such duplicate insights can show up as identical items, items with little textual difference, or even items with significant textual difference among the multiple news sources. Contextually, however, they carry the same news item.


The technology disclosed can be used to solve the technical problem of de-duplicating contextually similar news feed items such as the following four news feed items, which include similar content and thus should be presented to a user as a single news feed item.

    • BlueSpring Owns a Satellite Now
    • BlueSpring Corp. to acquire satellite company Skybox in $500M deal
    • BlueSpring buys satellite imaging firm for $500 mn
    • BlueSpring Invests Billions on Satellites to Expand Internet Access


The technology disclosed relates to assembling a set of news feed items from a plurality of electronic sources and preprocessing the set to generate normalized news feed items that share common company-name mentions and common token occurrences. The normalized news feed items are then used to calculate one or more resemblance measures based on a sequence alignment score and/or a hyperlink score. The sequence alignment score determines contextual similarity between news feed item pairs, arranged as sequences, based on the number of matching tokens and their proximity in the news feed item sequences, and on the number of edit operations, such as insertion, deletion, and substitution, required to match the news feed item sequences. The hyperlink score determines contextual similarity between news feed item pairs by comparing the respective search results retrieved in response to supplying the news feed item pairs to a search engine.


Further, the technology disclosed determines contextual similarity between large amounts of data representing the news feed items by constructing a resemblance graph of normalized news feed items with the resemblance measure above a threshold. In the resemblance graph, the resemblance measure is represented as edges between nodes representing the news feed item pairs, forming connected node pairs. Contextually similar news feed items are then determined by clustering the connected node pairs into strongly connected components and cliques. Finally, representative news feed items for the contextually similar news feed items are derived by identifying the cluster heads of the respective strongly connected components, i.e., the nodes with the highest degree of connectivity in their strongly connected components.


Examples of systems, apparatus, and methods according to the disclosed implementations are described in a “news feed items” context. The examples of news feed items are provided solely to add context and aid in the understanding of the disclosed implementations. In other instances, examples of different textual entities like contacts, documents, and social profiles may be used. Other applications are possible, such that the following examples should not be taken as definitive or limiting either in scope, context, or setting. It will thus be apparent to one skilled in the art that implementations may be practiced in or outside the “news feed items” context.


The described subject matter is implemented by a computer-implemented system, such as a software-based system, a database system, a multi-tenant environment, or the like. Moreover, the described subject matter can be implemented in connection with two or more separate and distinct computer-implemented systems that cooperate and communicate with one another. One or more implementations can be implemented in numerous ways, including as a process, an apparatus, a system, a device, a method, a computer readable medium such as a computer readable storage medium containing computer readable instructions or computer program code, or as a computer program product comprising a computer usable medium having computer readable program code embodied therein.


As used herein, the “specification” of an item of information does not necessarily require the direct specification of that item of information. Information can be “specified” in a field by simply referring to the actual information through one or more layers of indirection, or by identifying one or more items of different information which are together sufficient to determine the actual item of information. In addition, the term “identify” is used herein to mean the same as “specify.”


De-Duplication Environment



FIG. 1 shows an example environment 100 of de-duplicating similar news feed items. FIG. 1 includes a lexical data database 102, news feed items database 105, and search results database 108. FIG. 1 also shows resemblance measuring engine 112, network(s) 115, clustering engine 118, user computing device 122, application 124, search engine 125, and preprocessing engine 128. In other implementations, environment 100 may not have the same elements or components as those listed above and/or may have other/different elements or components instead of, or in addition to, those listed above, such as a source database, social data database, sequence alignment engine, strongly connected components engine, and cluster head engine. The different elements or components can be combined into single software modules and multiple software modules can run on the same hardware.


Network(s) 115 is any network or combination of networks of devices that communicate with one another. For example, network(s) 115 can be any one or any combination of a LAN (local area network), WAN (wide area network), telephone network (Public Switched Telephone Network (PSTN), Session Initiation Protocol (SIP), 3G, 4G LTE), wireless network, point-to-point network, star network, token ring network, hub network, WiMAX, WiFi, peer-to-peer connections like Bluetooth, Near Field Communication (NFC), Z-Wave, ZigBee, or other appropriate configuration of data networks, including the Internet. In other implementations, other networks can be used such as an intranet, an extranet, a virtual private network (VPN), a non-TCP/IP based network, any LAN or WAN or the like.


In some implementations, the engines can be of varying types including workstations, servers, computing clusters, blade servers, server farms, or any other data processing systems or computing devices. The engines can be communicably coupled to the databases via different network connections. For example, resemblance measuring engine 112 and clustering engine 118 can be coupled via the network 115 (e.g., the Internet), search engine 125 can be coupled via a direct network link, and preprocessing engine 128 can be coupled by yet a different network connection.


In some implementations, databases can store information from one or more tenants into tables of a common database image to form an on-demand database service (ODDS), which can be implemented in many ways, such as a multi-tenant database system (MTDS). A database image can include one or more database objects. In other implementations, the databases can be relational database management systems (RDBMSs), object oriented database management systems (OODBMSs), distributed file systems (DFS), no-schema database, or any other data storing systems or computing devices. In some implementations, user computing device 122 can be a personal computer, laptop computer, tablet computer, smartphone, personal digital assistant (PDA), digital image capture devices, and the like.


Application 124 can take one of a number of forms, including user interfaces, dashboard interfaces, engagement consoles, and other interfaces, such as mobile interfaces, tablet interfaces, summary interfaces, or wearable interfaces. In some implementations, it can be hosted on a web-based or cloud-based privacy management application running on a computing device such as a personal computer, laptop computer, mobile device, and/or any other hand-held computing device. It can also be hosted on a non-social local application running in an on-premise environment. In one implementation, application 124 can be accessed from a browser running on a computing device. The browser can be Chrome, Internet Explorer, Firefox, Safari, and the like. In other implementations, application 124 can run as an engagement console on a computer desktop application.


Lexical data database 102 stores entries associated with terms in news feed items. In one implementation, it can include a glossary of words and company names such that each entry identifies the multiple mention forms the corresponding word or company name can take. Examples of multiple mention forms include synonyms (acquire vs. purchase vs. buy), abbreviations (Salesforce.com vs. SFDC), shortened forms (Salesforce vs. Sf), alternative spellings (Salesforce vs. Salesforce.com), and stock aliases (Salesforce vs. CRM). When news feed item pairs are matched, such multiple forms are taken into account to determine contextual resemblance between the pairs. In another implementation, it identifies common prefixes and postfixes used with company names, such as “LLP” and “Incorporated,” that can be used to extract company names from the news feed items.


In some implementations, lexical data 102 serves as a dictionary that identifies the various root and affix references and verb and noun forms associated with a word. In yet another implementation, lexical data 102 can include a list of stop words, the most common words in a language (e.g., “and,” “the,” “but” for English). These stop words are omitted from matching of the news feed item pairs. Eliminating stop words from matching ensures that resemblance measuring between news feed item pairs is faster, more efficient, and more accurate.
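To make these normalization steps concrete, the Java sketch below removes stop words and collapses known single-token company-name variants to a constant token. It is a minimal illustration under stated assumptions, not the patented implementation; the Lexicon class and its word lists are hypothetical stand-ins for lexical data 102.

import java.util.*;

class Lexicon {
    // Hypothetical stand-ins for entries in lexical data 102.
    private final Set<String> stopWords = new HashSet<>(
        Arrays.asList("a", "an", "and", "the", "but", "its", "in", "on", "to", "of"));
    private final Map<String, String> canonical = new HashMap<>();

    Lexicon() {
        // Single-token variants (abbreviations, shortened forms, stock aliases)
        // all map to the constant company token.
        for (String v : Arrays.asList("bluespring", "bluspr", "blsp"))
            canonical.put(v, "_comp_");
    }

    // Drop stop words and substitute company-name variants.
    String normalize(String newsFeedItem) {
        StringBuilder out = new StringBuilder();
        for (String token : newsFeedItem.toLowerCase().split("\\s+")) {
            if (stopWords.contains(token)) continue;
            out.append(canonical.getOrDefault(token, token)).append(' ');
        }
        return out.toString().trim();
    }
}

For example, normalize(“BlSp announces upgrade in its servers”) yields “_comp_ announces upgrade servers,” consistent with the substitution illustrated in FIG. 3.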


News feed items 105 include online news articles or insights assembled from different types of data sources. News feed items 105 can be web pages, extracts of web pages, or programs or files such as documents, images, video files, audio files, text files, or parts or combinations of any of these, stored as a system of interlinked hypertext documents that can be accessed via the network(s) 115 (e.g., the Internet) using a web crawler. Regarding the different types of data sources, access-controlled application programming interfaces (APIs) like Yahoo Boss, Facebook Open Graph, or Twitter Firehose can provide real-time search data aggregated from numerous social media sources such as LinkedIn, Yahoo, Facebook, and Twitter. APIs can initialize sorting, processing, and normalization of data. The public Internet can provide data from public sources such as first-hand websites, blogs, web search aggregators, and social media aggregators. Social networking sites can provide data from social media sources such as Twitter, Facebook, LinkedIn, and Klout.


Preprocessing engine 128 generates a normalized version of the assembled news feed items to determine contextual resemblance between the news feed items. According to one implementation, this is achieved by identifying the name of at least one company to which a particular news feed item relates and finding other news feed items about the same company. In one implementation, preprocessing engine 128 matches a textual mention in a news feed item to an entry in the lexical data 102, such as a company name, that is a canonical entry for the textual mention. This implementation also includes looking up variants of the company name to identify mentions of any known abbreviations, shortened forms, alternative spellings, or stock aliases of the company name.


According to some implementations, preprocessing engine 128 identifies news feed items with common text mentions, including exact matches of company names and equivalent matches of company-name variants. In another implementation, preprocessing engine 128 removes any stop words from the news feed items, preferably before identifying common company-name mentions, to facilitate efficient comparison of the news feed items.


According to some implementations, contextual resemblance between news feed items is further determined based on common token occurrences in the news feed items that are identified as belonging to a same company. A “token” refers to any of a variety of possible language units, such as a word, a phrase, a number, a symbol, or the like, that represents a smallest unit of language that conveys meaning. In one implementation, a news feed item can be decomposed into one or more tokens using a tokenizer, which represents a set of language-specific rules that define the boundaries of tokens.
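As one possible set of boundary rules, the Java sketch below treats a maximal run of word characters (with an optional leading “$,” so that amounts like “$500M” survive as single tokens) or a lone symbol as a token. This rule set is an assumption; any language-specific tokenizer can be substituted.

import java.util.*;
import java.util.regex.*;

class SimpleTokenizer {
    // One token per match: an optionally $-prefixed word, or a single symbol.
    private static final Pattern TOKEN = Pattern.compile("\\$?\\w+|[^\\s\\w]");

    static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        Matcher m = TOKEN.matcher(text.toLowerCase());
        while (m.find()) tokens.add(m.group());
        return tokens;
    }
}

For instance, tokenize(“BlueSpring buys satellite imaging firm for $500 mn”) produces the tokens [bluespring, buys, satellite, imaging, firm, for, $500, mn].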


Based on noun and verb variants of the tokens (stored in lexical data 102), preprocessing engine 128 identifies not only exact token occurrences but also equivalent token occurrences between news feed items belonging to the same company. For example, consider a first news feed item that includes “BlSp announce a new ceo” and a second news feed item that includes “BlSp announces upgrade in its servers.” In this example, preprocessing engine 128, after determining that the first and second news feed items belong to the same company named “BlSp,” further identifies that they include different inflected forms of the same verb “announce” and hence have greater contextual resemblance with each other relative to other news feed items about the same company that lack such a common word.


Search engine 125 provides a search service for searching news feed items accessible online. In one implementation, search engine 125 includes a query server to receive a search query, find news feed items relevant to the search query, and return search results 108 indicating at least some of the found news feed items ranked according to mentions of the respective found news feed items. In some implementations, search engine 125 can include a crawler that downloads and indexes content from the web, including from one or more social networking sites.


Search results database 108 stores the search results returned by the search engine 125 in response to providing news feed items as search criteria to the search engine 125. In one implementation, the search results 108 include metadata associated with the web pages, including unified resource locators (URLs), titles, concise descriptions, content, publication data, and authorship data.


Resemblance measuring engine 112 determines a degree of contextual resemblance between news feed item pairs. In one implementation, this is achieved by applying a sequence alignment algorithm to normalized news feed item pairs arranged as sequences and calculating a resemblance measure for the news feed item pairs based on the number, length, and proximity of exact matches between the news feed item sequences and the number of edit operations required to match the respective sequences with each other. In another implementation, resemblance measuring engine 112 supplies normalized news feed item pairs to the search engine 125 as search criteria and, based on the retrieved results of the respective news feed items, including web pages and their metadata, determines a resemblance measure for the news feed item pairs.


In some implementations, resemblance measuring engine 112 can measure the closeness between news feed item pairs by employing a plurality of resemblance functions, including “edit distance,” also known as Levenshtein distance. Given two news feed items n1 and n2, the edit distance between them, denoted ed(n1, n2), is the number of “edit” operations required to transform n1 into n2 (or vice versa). The edit distance is defined by the set of edit operations allowed for the transformation, including insertion, deletion, and replacement of one character at any position in the news feed items. Further, each edit operation incurs a positive or negative cost, and the cost of a sequence of operations is given by the sum of the costs of each operation in the sequence. The edit distance between two news feed items is then the cost of the cost-minimizing sequence of edit operations that translates one news feed item into the other.
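A minimal dynamic-programming sketch of this measure follows, assuming the common unit-cost case in which every insertion, deletion, and replacement costs one; the operation-specific costs contemplated above would replace the constant 1 terms.

class EditDistance {
    // Classic Levenshtein distance ed(n1, n2) between two strings.
    static int ed(String n1, String n2) {
        int[][] d = new int[n1.length() + 1][n2.length() + 1];
        for (int i = 0; i <= n1.length(); i++) d[i][0] = i;   // i deletions
        for (int j = 0; j <= n2.length(); j++) d[0][j] = j;   // j insertions
        for (int i = 1; i <= n1.length(); i++)
            for (int j = 1; j <= n2.length(); j++) {
                int sub = n1.charAt(i - 1) == n2.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(d[i - 1][j - 1] + sub,               // replace or match
                          Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1));  // delete or insert
            }
        return d[n1.length()][n2.length()];
    }
}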


In other implementations, resemblance measuring engine 112 uses “Jaccard set resemblance” to identify contextually similar news feed items. The Jaccard resemblance is the ratio of the size of the intersection over the size of the union; hence, news feed item pairs that have many elements in common are closer to each other. To apply the Jaccard set resemblance to two news feed items, the two input news feed items n1 and n2 are transformed into sets. This is achieved by obtaining the set of all n-grams of each input news feed item, where an n-gram is a contiguous sequence of n characters in the input. Given the two input news feed items n1 and n2, the n-grams of each news feed item are obtained to derive sets Q(n1) and Q(n2). The resemblance between n1 and n2 is given by the Jaccard resemblance J(Q(n1), Q(n2)) between the two sets of n-grams. In some implementations, the various sets can be replaced with weighted sets.
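The computation can be sketched as follows, with character trigrams (n=3) as an assumed choice of n:

import java.util.*;

class JaccardResemblance {
    // J(Q(n1), Q(n2)) = |intersection| / |union| over character n-gram sets.
    static double jaccard(String n1, String n2, int n) {
        Set<String> q1 = ngrams(n1, n), q2 = ngrams(n2, n);
        Set<String> union = new HashSet<>(q1);
        union.addAll(q2);
        Set<String> inter = new HashSet<>(q1);
        inter.retainAll(q2);
        return union.isEmpty() ? 1.0 : (double) inter.size() / union.size();
    }

    // All contiguous length-n character substrings of s.
    static Set<String> ngrams(String s, int n) {
        Set<String> grams = new HashSet<>();
        for (int i = 0; i + n <= s.length(); i++) grams.add(s.substring(i, i + n));
        return grams;
    }
}

Calling jaccard(n1, n2, 3) returns 1.0 for identical items and values near 0 for items that share few trigrams.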


In yet other implementations, resemblance measuring engine 112 employs a “cosine resemblance” function, a vector-based resemblance measure in which the input news feed items n1 and n2 are translated to vectors in a high-dimensional space. In one implementation, the transformation of the input news feed items to vectors is done based on the tokens that appear in each news feed item, with each token corresponding to a dimension and the frequency of the token in the input being the weight of the vector in that dimension. The contextual resemblance is then given by the cosine resemblance of the two vectors, i.e., the cosine of the angle between them.
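A sketch of the computation over token-frequency vectors follows; using the raw term frequency as the dimension weight is one assumption among several possible weightings.

import java.util.*;

class CosineResemblance {
    // cos(v1, v2) = (v1 · v2) / (|v1| |v2|) over token-frequency vectors.
    static double cosine(List<String> tokens1, List<String> tokens2) {
        Map<String, Integer> v1 = freq(tokens1), v2 = freq(tokens2);
        double dot = 0;
        for (Map.Entry<String, Integer> e : v1.entrySet())
            dot += e.getValue() * v2.getOrDefault(e.getKey(), 0);
        return dot == 0 ? 0 : dot / (norm(v1) * norm(v2));
    }

    // Token frequency: one dimension per distinct token.
    static Map<String, Integer> freq(List<String> tokens) {
        Map<String, Integer> f = new HashMap<>();
        for (String t : tokens) f.merge(t, 1, Integer::sum);
        return f;
    }

    static double norm(Map<String, Integer> v) {
        double s = 0;
        for (int w : v.values()) s += (double) w * w;
        return Math.sqrt(s);
    }
}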


Given a collection of similar news feed item pairs to be de-duplicated, clustering engine 118 applies a resemblance function to all pairs of news feed items to obtain a weighted resemblance graph whose nodes are the news feed items in the collection, with a weighted edge connecting each pair of nodes, the weight representing the amount of resemblance. The resemblance function returns a resemblance measure, which can be a value between 0 and 1, according to one implementation; a higher value indicates greater resemblance, with 1 denoting equality. In some implementations, clustering engine 118 decomposes or partitions the resemblance graph into its strongly connected components, where nodes connected by large edge weights have a greater likelihood of being in the same group of contextually similar insights. In one implementation, only those edges whose weight is above a given threshold are used for determining the strongly connected components. As a result, when the set of news feed items is very large, clustering performs a blocking that brings similar “components” of news feed items together, and finer-grained pairwise comparison is performed only within each component.
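Because only edges above the threshold survive, the grouping step reduces to computing connected components over the retained edges. The union-find sketch below illustrates this; the edge-array representation and the threshold handling are assumptions, not the claimed implementation.

class ComponentClustering {
    static int[] parent;

    // Find with path compression.
    static int find(int x) { return parent[x] == x ? x : (parent[x] = find(parent[x])); }

    // edges[k] = {i, j} with resemblance weights[k]; edges below threshold t are ignored.
    static int[] cluster(int n, int[][] edges, double[] weights, double t) {
        parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
        for (int k = 0; k < edges.length; k++)
            if (weights[k] >= t)
                parent[find(edges[k][0])] = find(edges[k][1]);   // union the endpoints
        int[] componentOf = new int[n];
        for (int i = 0; i < n; i++) componentOf[i] = find(i);    // component label per node
        return componentOf;
    }
}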


News Feed Items



FIG. 2 shows a set 200 of news feed items assembled from a plurality of electronic sources. In FIG. 2, six news feed items 205-255 are collected from the different sources described above and include at least one of webpages, RSS feeds, social media feeds such as Twitter feeds, and documents. In some implementations, news feed items 205-255 are published within a time window prior to a current time, such that other news feed items outside the time window are not included in the set 200, irrespective of their contextual similarity. In one implementation, news feed items 205-255 are grouped together because they relate to a same company and are used to evaluate a newly received news feed item that shares the same company-name reference as the news feed item group 205-255. FIG. 3 is one implementation of a set 300 of normalized news feed items 305-355 resulting from the elimination of stop words from the news feed items 205-255 and the substitution of common company references (“BlueSpring,” “bluspr,” “BlSp,” “CRM,” “BlueSpring corp.”) with a constant token “_comp_.”


Sequence Alignment



FIG. 4 illustrates one implementation of determining a resemblance measure for normalized news feed items based on sequence alignment 400 between news feed item pairs. In FIG. 4, two news feed items 415 and 425 are compared as sequences to calculate raw scores and boosted scores of the sequence alignment. In one implementation, a term penalty matrix is used, giving a lower penalty for replacing contextually similar tokens, such as “acquire,” “purchase,” and “buy.” In another implementation, the term penalty matrix assigns a negative penalty for each edit operation such as insertion, deletion, and substitution. In yet another implementation, n-grams such as bigrams (two contiguous matching tokens) and trigrams (three contiguous matching tokens) are rewarded by augmenting the raw score to produce a boosted score. In other implementations, the resemblance measure is responsive to other factors such as original distance, normalized distance, maximum string length, minimum string length, and longest consecutive matches.


The sequence alignment algorithm can be applied based on predetermined rules. For instance, each insertion, deletion, or substitution results in a count of one being deducted from the resemblance measure; each exact match adds a count of one to the resemblance measure; a bigram adds three positive counts to the raw score; and a trigram adds seven positive counts to the raw score.
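Read literally, these rules describe a global token-level alignment with a +1 reward per exact match and a -1 penalty per edit operation, followed by a boost for shared bigrams and trigrams. The Java sketch below is one such reading; counting shared n-gram windows across the two token sequences stands in for counting them along the alignment path, and the flat -1 substitution cost ignores the term penalty matrix described above.

import java.util.*;

class SequenceAlignmentScore {
    // Raw score: global alignment with +1 per match, -1 per insertion/deletion/substitution.
    static int rawScore(String[] a, String[] b) {
        int[][] s = new int[a.length + 1][b.length + 1];
        for (int i = 0; i <= a.length; i++) s[i][0] = -i;
        for (int j = 0; j <= b.length; j++) s[0][j] = -j;
        for (int i = 1; i <= a.length; i++)
            for (int j = 1; j <= b.length; j++) {
                int diag = s[i - 1][j - 1] + (a[i - 1].equals(b[j - 1]) ? 1 : -1);
                s[i][j] = Math.max(diag, Math.max(s[i - 1][j] - 1, s[i][j - 1] - 1));
            }
        return s[a.length][b.length];
    }

    // Boosted score: raw score plus 3 per shared bigram and 7 per shared trigram.
    static int boostedScore(String[] a, String[] b) {
        return rawScore(a, b) + 3 * shared(a, b, 2) + 7 * shared(a, b, 3);
    }

    // Number of length-n token windows of a that also occur in b.
    static int shared(String[] a, String[] b, int n) {
        Set<String> windows = new HashSet<>();
        for (int i = 0; i + n <= b.length; i++)
            windows.add(String.join(" ", Arrays.copyOfRange(b, i, i + n)));
        int count = 0;
        for (int i = 0; i + n <= a.length; i++)
            if (windows.contains(String.join(" ", Arrays.copyOfRange(a, i, i + n))))
                count++;
        return count;
    }
}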


As shown in FIG. 4, the first two tokens of the news feed item pairs 415 and 425 match exactly, causing the initially zero resemblance measure to become positive two. At the third token position, the word “faster” in sequence 415 is substituted by the text “$300m,” depreciating the resemblance measure from two to one. Further, the next two mismatches, of the words “venture” and “lay” in sequence 425, produce a resemblance measure of minus one. Advancing to the sixth token position in sequence 425, the word “undersea” exists in both sequences 415 and 425, resulting in an increment of one in the resemblance measure. After the entire sequences 415 and 425 are evaluated, the raw score is calculated to be minus three, and the presence of a bigram adds three positive counts to the raw score, resulting in a boosted score of zero.



FIG. 5 depicts one implementation of determining a resemblance measure for normalized news feed items based on results returned 500 in response to supplying the normalized news feed item pairs as search criteria 505 and 508. In the example shown in FIG. 5, news feed item pairs 245 and 255 are supplied as search criteria to search engine 125. The returned results 515, 525, 535, 545, 555, and 565 for the news feed item 245 are then compared with the returned results 518, 528, 538, 548, 558, and 568 for the news feed item 255. For each match, such as feed item 525 and feed item 568, feed item 535 and feed item 528, feed item 555 and feed item 558, a positive count is allocated to the resemblance measure. In some implementations, the resemblance measure is boosted when the news feed item pairs appear in either's returned results, such as news feed item 505 appearing in the returned results of news feed item 508 as news feed item 548. In other implementations, different features of the returned results, such as URLs, content, description, and metadata are compared to determine the resemblance measure.
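The search-results comparison can be sketched as an overlap count over the two returned result sets, boosted when either item's own URL appears in the other's results. The boost value of two and the comparison by URL alone are assumptions; as noted above, content, descriptions, and other metadata of the returned results can be compared instead or in addition.

import java.util.*;

class HyperlinkScore {
    // Raw overlap count between two result sets, plus a cross-appearance boost.
    static double score(Set<String> resultsX, Set<String> resultsY,
                        String urlX, String urlY) {
        double s = 0;
        for (String url : resultsX)
            if (resultsY.contains(url)) s += 1;   // one positive count per matching result
        if (resultsX.contains(urlY) || resultsY.contains(urlX))
            s += 2;                               // assumed boost for cross-appearance
        return s;
    }
}

A downstream threshold (Threshold_2 in the comparison pseudo code later in this description) would be calibrated against this raw scale, or the count can be normalized by the result-set sizes.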


Clustering



FIG. 6 shows one implementation of constructing a resemblance graph 600 of news feed item pairs with a resemblance measure above a threshold and representing the resemblance measure as edges between nodes representing the news feed item pairs. The set of items S={I1, . . . , In} forms the nodes of the resemblance graph G, and there is a weighted edge between nodes Ii and Ij with weight given by the pairwise resemblance rsm(Ii, Ij). In one implementation, only edges whose weight is above some threshold t are retained. The threshold t can be designated by a human and/or calculated by a machine based on training examples. For a given implementation, a higher t results in higher precision at the cost of lower recall, while a lower t increases recall at the cost of lower precision.


The resultant graph can be denoted G(V, E), where V corresponds to the items in S, and E is the set of unweighted edges such that (Ii, Ij) ∈ E if and only if rsm(Ii, Ij) ≥ t. The set S is then clustered using standard graph clustering techniques such as strongly connected components and cliques. In one implementation, the strongly connected components technique computes all connected components of G, with each connected component forming a disjoint cluster. In other implementations, the cliques technique calculates all maximum cliques of G, and each maximum clique forms a cluster; the clusters can be non-disjoint in the case of maximal cliques of graph G. In yet other implementations, a representative node or cluster head with the highest degree of connectivity or betweenness in a particular cluster can be identified based on the number of edges attached to the node. In scenarios where the degree of connectivity of more than one node in a cluster is the same, a cluster head can be identified based on the collective edge weights of the respective nodes. As illustrated in FIG. 6, nodes I1, I3, and I5 are connected by high edge weights (0.7, 0.8, 0.9) and hence are identified as a cohort in cluster 1. Further, I3 is determined to be the cluster head of cluster 1 because it has the most edges attached to it. Similarly, I6, I9, and In form cluster 2 with I6 as its cluster head.
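Cluster-head selection as just described reduces to a scan for the member with the most incident edges, breaking ties by total incident edge weight. A sketch follows; the adjacency-matrix representation is an assumption.

import java.util.*;

class ClusterHeadSelection {
    // adjacency[i][j] holds the retained edge weight rsm(Ii, Ij), or 0 if no edge.
    static int head(List<Integer> members, double[][] adjacency) {
        int best = members.get(0), bestDegree = -1;
        double bestWeight = -1;
        for (int i : members) {
            int degree = 0;
            double weight = 0;
            for (int j : members)
                if (i != j && adjacency[i][j] > 0) { degree++; weight += adjacency[i][j]; }
            // Highest degree wins; collective edge weight breaks ties.
            if (degree > bestDegree || (degree == bestDegree && weight > bestWeight)) {
                best = i; bestDegree = degree; bestWeight = weight;
            }
        }
        return best;
    }
}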



FIG. 7 depicts one implementation of a plurality of objects 700 that can be used to de-duplicate similar news feed items. As described above, this and other data structure descriptions that are expressed in terms of objects can also be implemented as tables that store multiple records or object types. Reference to objects is for convenience of explanation and not as a limitation on the data structure implementation. FIG. 7 shows prefix objects 702, postfix objects 712, stop words objects 722, synonym objects 732, and company name objects 742. In other implementations, objects 700 may not have the same objects, tables, entries or fields as those listed above and/or may have other/different objects, tables, entries or fields instead of, or in addition to, those listed above.


Prefix objects 702 uniquely identify common prefixes (e.g. Dr., Mr., Sir) associated with company names using “PrefixID.” In contrast, postfix objects 712 store a list of common postfixes associated with company names using “PostfixID.” Examples of such postfixes include “LLP,” “Company,” “LLC,” and “Incorporated.” Stop word objects 722 specify the various commonly occurring words that can be eliminated from matching of the news feed item pairs. Each such word can be given a unique ID such as “STW01.”


Synonym objects 732 list the plurality of synonyms associated with a word. In the example shown in FIG. 7, the word “acquire” is assigned a unique ID “WD01” and is linked to the word “purchase” with a unique ID “WD02” as its synonym. Similarly, company name objects 742 can identify the different name forms associated with a particular company. For instance, a company named “BlueSpring” can have an alternative name “bluSpr,” an abbreviation “BlsP,” and a stock ticker “CRM.” Such variant name forms can be assigned unique name IDs that are linked to the unique name ID of the most commonly used name or legal name of the company.


In other implementations, objects 700 can have one or more of the following variables with certain attributes: FEED_ID being CHAR (15 BYTE), SOURCE_ID being CHAR (15 BYTE), PUBLICATION_DATE_DATE being CHAR (15 BYTE), PUBLICATION_TIME_TIME being CHAR (15 BYTE), URL_LINK being CHAR (15 BYTE), CREATED_BY being CHAR (15 BYTE), CREATED_DATE being DATE, and DELETED being CHAR (1 BYTE).


Flowchart of De-Duplicating Similar News Feed Items



FIG. 8 is a representative method 800 of de-duplicating similar news feed items. Flowchart 800 can be implemented at least partially with a database system, e.g., by one or more processors configured to receive or retrieve information, process the information, store results, and transmit the results. Other implementations may perform the actions in different orders and/or with different, varying, alternative, modified, fewer or additional actions than those illustrated in FIG. 8. Multiple actions can be combined in some implementations. For convenience, this flowchart is described with reference to the system that carries out a method. The system is not necessarily part of the method.


At action 802, a set of news feed items is assembled from a plurality of electronic sources. The electronic sources include access-controlled APIs, the public Internet, and social networking sites. In one implementation, the news feed items are published within a predetermined time window prior to a current time.


At action 812, the set is preprocessed to qualify some of the news feed items to return based on common company-name mentions and common token occurrences. In one implementation, preprocessing the set further includes removing stop word tokens from the news feed items. The pseudo code below illustrates one example of preprocessing the news feed items to extract company names and replace common company name references with a constant token “_COMP_.”

















//Pre-Comparison

for (Insight X: new set of previously unseen insights){
 X = normalize(X)
 String companies[ ] = extract_Company_Names(X)
 for (String company: companies){
  X' = X.replace(company, "_COMP_")
  company' = normalize_Company_Name(company)
  Set company_Insights_Set = company_Insights_Map.get(company')
  company_Insights_Set.add(X')
 }
}










At action 822, a resemblance measure is pairwise determined for the qualified news feed items based on sequence alignment between news feed item pairs. In some implementations, the resemblance measure is determined by a plurality of resemblance functions, including edit distance, Jaccard set resemblance, and cosine resemblance.


At action 832, a resemblance measure is pairwise determined for the qualified news feed items based on results returned in response to supplying the news feed item pairs as search criteria. In one implementation, the results returned include at least one of unified resource locators (URLs) of web pages, content of the web pages, and metadata about the web pages. In some implementations, when the results returned in response to supplying a first news feed item as search criteria include a second news feed item, the resemblance measure for the first and second news feed item pair is augmented.


The pseudo code below shows one example of determining the resemblance measure by first sequentially aligning the news feed items to calculate a sequential alignment (SA) score and then comparing their search results to derive a hyperlink score.














// Comparison

for (Insight X: set of previously unseen and normalized insights){
 Insight insights_compare[ ] = Insights with same company name
  && at least one common term
 for (Insight Y: insights_compare){
  //Sequential Alignment Comparison
  int seq_alg = SequentialAlignment(X, Y)
  double sa_score = normalize(seq_alg) ∈ [0,1]
  if (sa_score > Threshold_1){
   mark X and Y as similar with weight = sa_score
   continue
  }
  //Search Results Comparison
  links_X = links from Search Engine given insight X as query
  links_Y = get links for comp_Ins (the compared insight Y) from the DB
  if (links_Y is empty)
   links_Y = links from Search Engine given comp_Ins as query
  double hyper_score = compare(links_X, links_Y)
  if (hyper_score > Threshold_2){
   mark X and Y as similar with weight = hyper_score
  }
 }
}









At action 842, a graph of news feed item pairs with the resemblance measure above a threshold is constructed and the resemblance measure is represented as edges between nodes representing the news feed item pairs, thereby forming connected node pairs. The threshold can be designated by a human and/or calculated by a machine based on training examples. A higher threshold results in higher precision at the cost of lower recall, while a lower threshold increases recall at the cost of lower precision.


At action 852, similar news feed items are determined by clustering the connected node pairs into strongly connected components. In some implementations, the clusters can be created using standard graph clustering techniques such as strongly connected components and cliques.


At action 862, representative news feed items are determined for the similar news feed items by identifying the cluster heads of the respective strongly connected components, which have the highest degree of connectivity in their strongly connected components.


At action 818, the determination of a resemblance measure based on sequence alignment at action 822 is skipped, and the qualified news feed items are used to determine a sole resemblance measure based on results returned in response to supplying the news feed item pairs as search criteria.


In contrast, at action 828, the determination of a resemblance measure based on results returned in response to supplying the news feed item pairs as search criteria at action 832 is skipped, and the qualified news feed items are used to determine a sole resemblance measure based on sequence alignment between news feed item pairs.


This method and other implementations of the technology disclosed can include one or more of the following features and/or features described in connection with additional methods disclosed. In the interest of conciseness, the combinations of features disclosed in this application are not individually enumerated and are not repeated with each base set of features. The reader will understand how features identified in this section can readily be combined with sets of base features identified as implementations in sections of this application such as the de-duplication environment, news feed items, sequence alignment, clustering, etc.


Other implementations can include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another implementation can include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.


Computer System



FIG. 9 is a block diagram of an example computer system 900 used to de-duplicate similar news feed items. Computer system 910 typically includes at least one processor 914 that communicates with a number of peripheral devices via bus subsystem 912. These peripheral devices can include a storage subsystem 924 including, for example, memory devices and a file storage subsystem, user interface input devices 922, user interface output devices 918, and a network interface subsystem 916. The input and output devices allow user interaction with computer system 910. Network interface subsystem 916 provides an interface to outside networks, including an interface to corresponding interface devices in other computer systems.


User interface input devices 922 can include a keyboard; pointing devices such as a mouse, trackball, touchpad, or graphics tablet; a scanner; a touch screen incorporated into the display; audio input devices such as voice recognition systems and microphones; and other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computer system 910.


User interface output devices 918 can include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem can include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem can also provide a non-visual display such as audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computer system 910 to the user or to another machine or computer system.


Storage subsystem 924 stores programming and data constructs that provide the functionality of some or all of the modules and methods described herein. These software modules are generally executed by processor 914 alone or in combination with other processors.


Memory 926 used in the storage subsystem can include a number of memories including a main random access memory (RAM) 934 for storage of instructions and data during program execution and a read only memory (ROM) 932 in which fixed instructions are stored. A file storage subsystem 928 can provide persistent storage for program and data files, and can include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations can be stored by file storage subsystem 928 in the storage subsystem 924, or in other machines accessible by the processor.


Bus subsystem 912 provides a mechanism for letting the various components and subsystems of computer system 910 communicate with each other as intended. Although bus subsystem 912 is shown schematically as a single bus, alternative implementations of the bus subsystem can use multiple busses. Application server 920 can be a framework that allows the applications of computer system 900 to run, such as the hardware and/or software, e.g., the operating system.


Computer system 910 can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computer system 910 depicted in FIG. 9 is intended only as one example. Many other configurations of computer system 910 are possible having more or fewer components than the computer system depicted in FIG. 9.


The terms and expressions employed herein are used as terms and expressions of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described or portions thereof. In addition, having described certain implementations of the technology disclosed, it will be apparent to those of ordinary skill in the art that other implementations incorporating the concepts disclosed herein can be used without departing from the spirit and scope of the technology disclosed. Accordingly, the described implementations are to be considered in all respects as only illustrative and not restrictive.

Claims
  • 1. A method of efficient de-duplicating similar news feed items, the method including: assembling a set of news feed items from a plurality of electronic sources; preprocessing the set to qualify some news feed items to return based on common company-name mentions and common token occurrences; pairwise determining a resemblance measure for the qualified news feed items based on sequence alignment between news feed item pairs to calculate raw scores and boosted scores, including: matching tokens from the news feed item pairs and whenever the tokens match, causing a raw score for the resemblance measure to reflect a positive match; whenever two tokens mismatch, causing the raw score for the resemblance measure to reflect the mismatch using a term penalty matrix that assigns a negative penalty for an edit operation including insertion, deletion, and substitution; augmenting the raw score for the resemblance measure to produce a boosted score for the resemblance measure by rewarding an n-gram including bigrams of two contiguous matching tokens and trigrams of three contiguous matching tokens, and responsive to existence of one or more factors including original distance, normalized distance, maximum string length, minimum string length, and longest consecutive matches; and advancing to subsequent token positions in sequence; after evaluating entire sequences of tokens in the qualified news feed items, constructing a graph of news feed item pairs with the resemblance measure above a threshold and representing the resemblance measure as edges between nodes representing the news feed item pairs, thereby forming connected node pairs; and determining similar news feed items by clustering the connected node pairs into strongly connected components; and wherein using the resemblance measure results in non-duplication of data entities holding news item data obtained from multiple sources.
  • 2. The method of claim 1, wherein the news feed items are published within a predetermined time window prior to a current time.
  • 3. The method of claim 1, further including determining representative news feed items for the similar news feed items by identifying cluster heads of respective strongly connected components, wherein the cluster heads have highest degree of connectivity in the respective strongly connected components.
  • 4. The method of claim 1, further including pairwise determining the resemblance measure for the news feed items based on results returned in response to supplying the news feed item pairs as search criteria.
  • 5. The method of claim 4, wherein the results returned include at least one of: unified resource locators (URLs) of web pages; content of the web pages; or metadata about the web pages.
  • 6. The method of claim 1, wherein preprocessing the set further includes removing stop tokens from the news feed items.
  • 7. A method of efficient de-duplicating similar news feed items, the method including: assembling a set of news feed items from a plurality of electronic sources; preprocessing the set to qualify some news feed items to return based on common company-name mentions and common token occurrences; pairwise determining a resemblance measure for the qualified news feed items based on results returned in response to supplying news feed item pairs as search criteria, including: matching tokens from the news feed item pairs and whenever the tokens match, a count is allocated to the resemblance measure or the resemblance measure is boosted when news feed item pairs appear in either's returned results, a bigram of two contiguous matching tokens, or a trigram of three contiguous matching tokens is detected when matching the news feed item pairs; whenever two tokens mismatch, causing the resemblance measure to reflect the mismatch by reducing the resemblance measure by a count for an edit operation including insertion, deletion, and substitution; and advancing to subsequent token positions in sequence; after evaluating entire sequences of tokens in the qualified news feed items, constructing a graph of news feed item pairs with the resemblance measure above a threshold and representing the resemblance measure as edges between nodes representing the news feed item pairs, thereby forming connected node pairs; and determining similar news feed items by clustering the connected node pairs into strongly connected components; and wherein using the resemblance measure results in non-duplication of data entities holding news item data obtained from multiple sources.
  • 8. The method of claim 7, wherein the news feed items are published within a predetermined time window prior to a current time.
  • 9. The method of claim 7, further including determining representative news feed items for the similar news feed items by identifying cluster heads of respective strongly connected components, wherein the cluster heads have highest degree of connectivity in the respective strongly connected components.
  • 10. The method of claim 7, wherein the results returned include at least one of: unified resource locators (URLs) of web pages; content of the web pages; or metadata about the web pages.
  • 11. The method of claim 7, wherein the results returned in response to supplying a first news feed item as the search criteria include a second news feed item, further including augmenting the resemblance measure for the first news feed item and the second news feed item as paired.
  • 12. The method of claim 7, further including pairwise determining the resemblance measure for the news feed items based on sequence alignment between the news feed item pairs.
  • 13. The method of claim 7, wherein preprocessing the set further includes removing stop word tokens from the news feed items.
  • 14. A system of de-duplicating similar news feed items, the system including: a processor and a computer readable storage medium storing computer instructions configured to cause the processor to: assemble a set of news feed items from a plurality of electronic sources; preprocess the set to qualify some news feed items to return based on common company-name mentions and common token occurrences; pairwise determine a resemblance measure for the qualified news feed items based on sequence alignment to calculate raw scores and boosted scores, including: matching tokens from news feed item pairs and whenever the tokens match, causing a raw score for the resemblance measure to reflect a match; whenever two tokens mismatch, causing the raw score for the resemblance measure to reflect the mismatch using a term penalty matrix that assigns a negative penalty for an edit operation including insertion, deletion, and substitution; augmenting the raw score for the resemblance measure to produce a boosted score for the resemblance measure by rewarding an n-gram including bigrams of two contiguous matching tokens and trigrams of three contiguous matching tokens, and responsive to existence of one or more factors including original distance, normalized distance, maximum string length, minimum string length, and longest consecutive matches; and advancing to subsequent token positions in sequence; after evaluating entire sequences of tokens in the qualified news feed items, construct a graph of news feed item pairs with the resemblance measure above a threshold and representing the resemblance measure as edges between nodes representing the news feed item pairs, thereby forming connected node pairs; and determine similar news feed items by clustering the connected node pairs into strongly connected components; and wherein using the resemblance measure results in non-duplication of data entities holding news item data obtained from multiple sources.
  • 15. The system of claim 14, wherein the news feed items are published within a predetermined time window prior to a current time.
  • 16. The system of claim 14, further configured to determine representative news feed items for the similar news feed items by identifying cluster heads of respective strongly connected components, wherein the cluster heads have highest degree of connectivity in the respective strongly connected components.
  • 17. The system of claim 14, further configured to pairwise determine the resemblance measure for the news feed items based on results returned in response to supplying the news feed item pairs as search criteria.
  • 18. The system of claim 17, wherein the results returned include at least one of: unified resource locators of web pages; content of the web pages; or metadata about the web pages.
  • 19. The system of claim 17, wherein the results returned in response to supplying a first news feed item as the search criteria include a second news feed item, further including augmenting the resemblance measure for the first news feed item and the second news feed item as paired.
  • 20. The system of claim 14, wherein preprocessing the set further includes removing stop word tokens from the news feed items.
US Referenced Citations (204)
Number Name Date Kind
5577188 Zhu Nov 1996 A
5608872 Schwartz et al. Mar 1997 A
5649104 Carleton et al. Jul 1997 A
5715450 Ambrose et al. Feb 1998 A
5761419 Schwartz et al. Jun 1998 A
5819038 Carleton et al. Oct 1998 A
5821937 Tonelli et al. Oct 1998 A
5831610 Tonelli et al. Nov 1998 A
5873096 Lim et al. Feb 1999 A
5918159 Fomukong et al. Jun 1999 A
5963953 Cram et al. Oct 1999 A
6092083 Brodersen et al. Jul 2000 A
6161149 Achacoso et al. Dec 2000 A
6169534 Raffel et al. Jan 2001 B1
6178425 Brodersen et al. Jan 2001 B1
6189011 Lim et al. Feb 2001 B1
6216135 Brodersen et al. Apr 2001 B1
6233617 Rothwein et al. May 2001 B1
6266669 Brodersen et al. Jul 2001 B1
6295530 Ritchie et al. Sep 2001 B1
6324568 Diec Nov 2001 B1
6324693 Brodersen et al. Nov 2001 B1
6336137 Lee et al. Jan 2002 B1
D454139 Feldcamp Mar 2002 S
6367077 Brodersen et al. Apr 2002 B1
6393605 Loomans May 2002 B1
6405220 Brodersen et al. Jun 2002 B1
6434550 Warner et al. Aug 2002 B1
6446089 Brodersen et al. Sep 2002 B1
6457028 Pitkow Sep 2002 B1
6535909 Rust Mar 2003 B1
6549908 Loomans Apr 2003 B1
6553563 Ambrose et al. Apr 2003 B2
6560461 Fomukong et al. May 2003 B1
6574635 Stauber et al. Jun 2003 B2
6577726 Huang et al. Jun 2003 B1
6601087 Zhu et al. Jul 2003 B1
6604117 Lim et al. Aug 2003 B2
6604128 Diec Aug 2003 B2
6609150 Lee et al. Aug 2003 B2
6621834 Scherpbier et al. Sep 2003 B1
6654032 Zhu et al. Nov 2003 B1
6658423 Pugh Dec 2003 B1
6665648 Brodersen et al. Dec 2003 B2
6665655 Warner et al. Dec 2003 B1
6684438 Brodersen et al. Feb 2004 B2
6711565 Subramaniam et al. Mar 2004 B1
6724399 Katchour et al. Apr 2004 B1
6728702 Subramaniam et al. Apr 2004 B1
6728960 Loomans Apr 2004 B1
6732095 Warshavsky et al. May 2004 B1
6732100 Brodersen et al. May 2004 B1
6732111 Brodersen et al. May 2004 B2
6754681 Brodersen et al. Jun 2004 B2
6763351 Subramaniam et al. Jul 2004 B1
6763501 Zhu et al. Jul 2004 B1
6768904 Kim Jul 2004 B2
6772229 Achacoso et al. Aug 2004 B1
6782383 Subramaniam et al. Aug 2004 B2
6804330 Jones et al. Oct 2004 B1
6826565 Ritchie et al. Nov 2004 B2
6826582 Chatterjee et al. Nov 2004 B1
6826745 Coker et al. Nov 2004 B2
6829655 Huang et al. Dec 2004 B1
6842748 Warner et al. Jan 2005 B1
6850895 Brodersen et al. Feb 2005 B2
6850949 Warner et al. Feb 2005 B2
7062502 Kesler Jun 2006 B1
7069231 Cinarkaya et al. Jun 2006 B1
7069497 Desai Jun 2006 B1
7181758 Chan Feb 2007 B1
7289976 Kihneman et al. Oct 2007 B2
7293019 Dumais Nov 2007 B2
7340411 Cook Mar 2008 B2
7356482 Frankland et al. Apr 2008 B2
7401094 Kesler Jul 2008 B1
7412455 Dillon Aug 2008 B2
7508789 Chan Mar 2009 B2
7603483 Psounis et al. Oct 2009 B2
7620655 Larsson et al. Nov 2009 B2
7660819 Frieder Feb 2010 B1
7698160 Beaven et al. Apr 2010 B2
7707157 Shen Apr 2010 B1
7734627 Tong Jun 2010 B1
7779475 Jakobson et al. Aug 2010 B2
7814107 Thirumalai Oct 2010 B1
7851004 Hirao et al. Dec 2010 B2
7865830 Kim Jan 2011 B2
8014943 Jakobson Sep 2011 B2
8015495 Achacoso et al. Sep 2011 B2
8032297 Jakobson Oct 2011 B2
8073850 Hubbard et al. Dec 2011 B1
8082301 Ahlgren et al. Dec 2011 B2
8095413 Beaven Jan 2012 B1
8095594 Beaven et al. Jan 2012 B2
8209308 Rueben et al. Jun 2012 B2
8209333 Hubbard et al. Jun 2012 B2
8275836 Beaven et al. Sep 2012 B2
8290962 Chu Oct 2012 B1
8316292 Verstak Nov 2012 B1
8370390 Permakoff Feb 2013 B1
8457545 Chan Jun 2013 B2
8458194 Procopio Jun 2013 B1
8484111 Frankland et al. Jul 2013 B2
8490025 Jakobson et al. Jul 2013 B2
8504945 Jakobson et al. Aug 2013 B2
8510045 Rueben et al. Aug 2013 B2
8510664 Rueben et al. Aug 2013 B2
8566301 Rueben et al. Oct 2013 B2
8646103 Jakobson et al. Feb 2014 B2
8706739 Song Apr 2014 B1
8756275 Jakobson Jun 2014 B2
8769004 Jakobson Jul 2014 B2
8769017 Jakobson Jul 2014 B2
8793238 Carver Jul 2014 B1
8805840 Joshi Aug 2014 B1
9165061 Fertik Oct 2015 B2
20010044791 Richter et al. Nov 2001 A1
20020028021 Foote Mar 2002 A1
20020072951 Lee et al. Jun 2002 A1
20020082892 Raffel et al. Jun 2002 A1
20020129352 Brodersen et al. Sep 2002 A1
20020140731 Subramaniam et al. Oct 2002 A1
20020143997 Huang et al. Oct 2002 A1
20020162090 Parnell et al. Oct 2002 A1
20020165742 Robins Nov 2002 A1
20030004971 Gong et al. Jan 2003 A1
20030018705 Chen et al. Jan 2003 A1
20030018830 Chen et al. Jan 2003 A1
20030066031 Laane Apr 2003 A1
20030066032 Ramachandran et al. Apr 2003 A1
20030069936 Warner et al. Apr 2003 A1
20030070000 Coker et al. Apr 2003 A1
20030070004 Mukundan et al. Apr 2003 A1
20030070005 Mukundan et al. Apr 2003 A1
20030074418 Coker Apr 2003 A1
20030120675 Stauber et al. Jun 2003 A1
20030135445 Herz et al. Jul 2003 A1
20030151633 George et al. Aug 2003 A1
20030159136 Huang et al. Aug 2003 A1
20030187921 Diec Oct 2003 A1
20030189600 Gune et al. Oct 2003 A1
20030204427 Gune et al. Oct 2003 A1
20030206192 Chen et al. Nov 2003 A1
20030225730 Warner et al. Dec 2003 A1
20040001092 Rothwein et al. Jan 2004 A1
20040010489 Rio Jan 2004 A1
20040015981 Coker et al. Jan 2004 A1
20040027388 Berg et al. Feb 2004 A1
20040128001 Levin et al. Jul 2004 A1
20040186860 Lee et al. Sep 2004 A1
20040193510 Catahan et al. Sep 2004 A1
20040199489 Barnes-Leon et al. Oct 2004 A1
20040199536 Barnes Leon et al. Oct 2004 A1
20040199543 Braud et al. Oct 2004 A1
20040249789 Kapoor Dec 2004 A1
20040249854 Barnes-Leon et al. Dec 2004 A1
20040260534 Pak et al. Dec 2004 A1
20040260659 Chan et al. Dec 2004 A1
20040268299 Lei et al. Dec 2004 A1
20050021490 Chen Jan 2005 A1
20050027717 Koudas Feb 2005 A1
20050033657 Herrington et al. Feb 2005 A1
20050050555 Exley et al. Mar 2005 A1
20050060643 Glass Mar 2005 A1
20050091098 Brodersen et al. Apr 2005 A1
20060021019 Hinton et al. Jan 2006 A1
20060101069 Bell May 2006 A1
20060167942 Lucas et al. Jul 2006 A1
20060271534 Hamaguchi Nov 2006 A1
20070143322 Kothari Jun 2007 A1
20070214097 Parsons Sep 2007 A1
20080243837 Davis Oct 2008 A1
20080249966 Luege Mateos Oct 2008 A1
20080249972 Dillon Oct 2008 A1
20090043797 Dorie Feb 2009 A1
20090063415 Chatfield et al. Mar 2009 A1
20090099996 Stefik Apr 2009 A1
20090100342 Jakobson Apr 2009 A1
20090164408 Grigorik Jun 2009 A1
20090164411 Dasdan Jun 2009 A1
20090177744 Marlow et al. Jul 2009 A1
20090182712 Kamal Jul 2009 A1
20090271359 Bayliss Oct 2009 A1
20090313236 Hernacki Dec 2009 A1
20100153324 Downs Jun 2010 A1
20100198864 Ravid Aug 2010 A1
20100228731 Gollapudi Sep 2010 A1
20100254615 Kantor Oct 2010 A1
20110087668 Thomas Apr 2011 A1
20110218958 Warshavsky et al. Sep 2011 A1
20110247051 Bulumulla et al. Oct 2011 A1
20120042218 Cinarkaya et al. Feb 2012 A1
20120233137 Jakobson et al. Sep 2012 A1
20120290407 Hubbard et al. Nov 2012 A1
20130091229 Dunn Apr 2013 A1
20130138577 Sisk May 2013 A1
20130212497 Zelenko et al. Aug 2013 A1
20130247216 Cinarkaya et al. Sep 2013 A1
20140082006 Knight Mar 2014 A1
20140101134 Bohrer et al. Apr 2014 A1
20140108006 Vogel et al. Apr 2014 A1
20140280121 Sharp et al. Sep 2014 A1
20160048764 Viola et al. Feb 2016 A1
Non-Patent Literature Citations (3)
Entry
U.S. Appl. No. 14/512,222—“Automatic Clustering by Topic and Prioritizing Online Feed Items”, inventors Alon Talmor and T. Brown, filed Oct. 10, 2014, 29 pages.
Andrei Z. Broder, “Identifying and Filtering Near-Duplicate Documents”, CPM '00, Proceedings of the 11th Annual Symposium on Combinatorial Pattern Matching, Springer-Verlag London, UK, 2000, pp. 1-10.
U.S. Appl. No. 14/512,222—Office Action dated Jul. 13, 2017, 56 pages.
Related Publications (1)
Number Date Country
20160103916 A1 Apr 2016 US