This application is related by subject matter to the invention described in U.S. application Ser. No. 12/701,330, filed 5 Feb. 2010 and entitled “Semantic Advertising Selection From Lateral Concepts and Topics,” which is assigned or under obligation of assignment to the same entity as this application and is incorporated in this application by reference.
Conventionally, a user receives query formulation assistance from a local application or a remote server that provides cached terms based on queries previously submitted to conventional search engines by the user or by other users.
Conventional search engines receive queries from users to locate web pages having terms that match the terms included in the received queries. Conventional search engines assist a user with query formulation by caching, on servers remote from the users, terms sent to the search engines by all of their users and displaying one or more of the cached terms to a user who is entering a query. The user selects any one of the cached terms to complete the query and receives a listing of web pages having terms that match the terms included in the user query.
Embodiments of the invention relate to systems, methods, and computer-readable media for presenting and generating lateral concepts in response to a query from a user. The lateral concepts are presented in addition to search results that match the user query. A search engine receives a query from a client device. In turn, storage is searched to locate a match to the query. If a match exists, content corresponding to the query is retrieved by a lateral concept generator from the storage. In turn, categories associated with the content are identified by the lateral concept generator. The lateral concept generator also obtains additional content associated with each category. A comparison between the retrieved content and the additional content is performed by the lateral concept generator to assign scores to each identified category. The lateral concept generator selects several categories based on scores assigned to content corresponding to each category and returns the retrieved content and several categories as lateral concepts. If a match does not exist, the lateral concept generator compares content stored in the storage to the query to create a content collection that is used to identify categories and calculate scores based on similarity between the query and content in the content collection.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in isolation as an aid in determining the scope of the claimed subject matter.
Illustrative embodiments of the invention are described in detail below with reference to the attached drawing figures, which are incorporated by reference herein, wherein:
This patent describes the subject matter for patenting with specificity to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this patent, in conjunction with other present or future technologies. Moreover, although the terms “step” and “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described.
As used herein, the term “lateral concept” refers to words or phrases that represent topics orthogonal to a query.
As used herein, the term “component” refers to any combination of hardware, firmware, and software.
Embodiments of the invention provide lateral concepts that allow a user to navigate a large collection of content having structured data, semistructured data, and unstructured data. The computer system generates lateral concepts by processing the collection of content matching a query provided by the user and selecting categories for the content. The lateral concepts comprise a subset of the selected categories. The lateral concepts are presented to the user along with search results that match the query. The lateral concepts allow the search engine to provide concepts that are orthogonal to a query or to content corresponding to the query. In turn, the user may select one of the lateral concepts to search the combination of structured, unstructured, and semistructured data for content corresponding to the lateral concepts. In an embodiment, the lateral concepts may be stored in an index with a pointer to one or more queries received from a user. Accordingly, the lateral concepts may be returned in response to subsequent queries that are similar to previous queries received at a search engine included in the computer system, without reprocessing the content.
For instance, a search engine may receive a query for “Seattle Space Needle” from a user. The search engine processes the query to identify lateral concepts and search results. The lateral concepts may be selected from the structure of metadata stored with content for the Seattle Space Needle, or from feature vectors generated by parsing search results associated with the user query.
The storage structure may include metadata, e.g., content attributes for the Seattle Space Needle. The Seattle Space Needle content attributes may include a tower attribute, a Seattle attraction attribute, and an architecture attribute. The tower attribute may include data that specifies the name and height of the Seattle Space Needle and other towers, such as Taipei 101, Empire State Building, Burj, and Shanghai World Financial Center. The Seattle attraction attribute may include data for the name and location of other attractions in Seattle, such as Seattle Space Needle, Pike Place Market, Seattle Art Museum, and Capitol Hill. The architecture attribute may include data for the architecture type, modern, ancient, etc., for each tower included in the tower attribute. Any of the Seattle Space Needle content attributes may be returned as a lateral concept by the search engine.
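As a concrete illustration of the storage structure described above, the following is a minimal sketch, assuming a simple Python dictionary; the attribute names mirror the example in the text, and the nested values (heights, locations, styles) are illustrative only.

```python
# Hypothetical attribute record for the "Seattle Space Needle" content item.
# Each top-level key is a content attribute that could be returned as a lateral concept.
space_needle_attributes = {
    "tower": [
        {"name": "Seattle Space Needle", "height_m": 184},   # approximate, illustrative values
        {"name": "Taipei 101", "height_m": 508},
        {"name": "Empire State Building", "height_m": 443},
    ],
    "seattle attraction": [
        {"name": "Pike Place Market", "location": "Seattle, WA"},
        {"name": "Seattle Art Museum", "location": "Seattle, WA"},
        {"name": "Capitol Hill", "location": "Seattle, WA"},
    ],
    "architecture": [
        {"name": "Seattle Space Needle", "style": "modern"},
        {"name": "Taipei 101", "style": "modern"},
    ],
}

# Any of the attribute names may be offered as lateral concepts for the query.
lateral_concepts = list(space_needle_attributes.keys())
```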
Alternatively, the search results may be processed by a computer system to generate lateral concepts that are returned with the search results. The content associated with the search results is parsed to identify feature vectors. The feature vectors include a category element that is associated with the content. The feature vectors are used to compare the search results and calculate a similarity score between the search results or between the search results and the query. The categories in the feature vectors are selected by the computer system based on the similarity score and returned as lateral concepts in response to the user query.
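The parsing step mentioned above can be pictured with a short sketch. This is a minimal reading, assuming each search result's text is reduced to term counts that serve as the word portion of a feature vector; the tokenization and stop-word list are illustrative choices, not the claimed method.

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "for", "and", "in", "is"}

def terms(text: str) -> Counter:
    """Reduce a search result's text to lower-cased term counts, dropping stop words."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return Counter(t for t in tokens if t not in STOP_WORDS)

result_text = "The Space Needle is an observation tower built for the 1962 World Fair."
feature_words = terms(result_text)
# e.g. Counter({'space': 1, 'needle': 1, 'observation': 1, 'tower': 1, ...})
```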
The computer system that generates the lateral concepts may include storage devices, a search engine, and additional computing devices. The search engine receives queries from the user and returns results that include content and lateral concepts. The storage is configured to store the content and the lateral concepts. In some embodiments, the content includes a collection of structured, unstructured, and semi-structured data.
The computing device 100 typically includes a variety of computer-readable media. By way of example, and not limitation, computer-readable media may comprise Random Access Memory (RAM); Read Only Memory (ROM); Electronically Erasable Programmable Read Only Memory (EEPROM); flash memory or other memory technologies; CDROM, digital versatile disks (DVD) or other optical or holographic media; magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to encode desired information and be accessed by the computing device 100. Embodiments of the invention may be implemented using computer code or machine-useable instructions, including computer-executable instructions such as program modules, being executed by a computing device 100, such as a personal data assistant or other handheld device. Generally, program modules including routines, programs, objects, modules, data structures, and the like, refer to code that performs particular tasks or implements particular abstract data types. Embodiments of the invention may be practiced in a variety of system configurations, including distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network.
The computing device 100 includes a bus 110 that directly or indirectly couples the following components: a memory 112, one or more processors 114, one or more presentation modules 116, input/output (I/O) ports 118, I/O components 120, and an illustrative power supply 122. The bus 110 represents what may be one or more busses (such as an address bus, data bus, or combination thereof). Although the various components of
The memory 112 includes computer-readable media and computer-storage media in the form of volatile and/or nonvolatile memory. The memory may be removable, nonremovable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard drives, optical-disc drives, etc. The computing device 100 includes one or more processors 114 that read data from various entities such as the memory 112 or I/O components 120. The presentation components 116 present data indications to a user or other device. Exemplary presentation components 116 include a display device, speaker, printer, vibrating module, and the like. The I/O ports 118 allow the computing device 100 to be physically and logically coupled to other devices including the I/O components 120, some of which may be built in. Illustrative I/O components 120 include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, and the like.
A computer system that generates lateral concepts includes a search engine, storage, and a lateral concept generator. The lateral concepts may be stored in storage along with content and queries that are related to the content. The search engine receives the query and transmits lateral concepts and results that include content corresponding to the query to a client device. The client device displays the results along with a list of at least some of the lateral concepts.
The client device 210 is connected to the search engine 230 via network 220. The client device 210 allows a user to enter queries. The client device 210 transmits the queries to the search engine 230. In turn, the client device 210 receives results that include lateral concepts and displays the results and lateral concepts to the user. In some embodiments, the client device 210 may be any computing device that is capable of web accessibility. As such, the client device 210 might take on a variety of forms, such as a personal computer (PC), a laptop computer, a mobile phone, a personal digital assistant (PDA), a server, a CD player, an MP3 player, a video player, a handheld communications device, a workstation, any combination of these delineated devices, or any other device that is capable of web accessibility.
The network 220 connects the client device 210, search engine 230, lateral concept generator 240, and storage 250. The network 220 may be wired, wireless, or both. The network 220 may include multiple networks, or a network of networks. For example, the network 220 may include one or more wide area networks (WANs), one or more local area networks (LANs), one or more public networks, such as the Internet, or one or more private networks. In a wireless network, components such as a base station, a communications tower, or even access points (as well as other components) may provide wireless connectivity in some embodiments. Although single components are illustrated for the sake of clarity, one skilled in the art will appreciate that the network 220 may enable communication between any number of client devices 210.
The search engine 230 is a server computer that provides results for queries received from client devices 210. In some embodiments, the search engine 230 provides lateral concepts in response to the queries. The search engine 230 may return some number of lateral concepts, e.g., the top three, for each query received from the client devices 210. The search engine 230 may receive the lateral concepts from the lateral concept generator 240 or storage 250.
The lateral concept generator 240 generates lateral concepts in response to a query. In one embodiment, the lateral concept generator 240 includes an initial processing component 242, a similarity engine 244, and an indexing engine 246. The lateral concept generator 240 receives categories and content from storage 250. In turn, the content and categories are processed by one or more components 242, 244, and 246 of the lateral concept generator 240.
The initial processing component 242 is configured to locate content that matches the query received by the search engine 230, to analyze the content, and to extract information using one or more data processing methods. In this regard, the initial processing component 242 may be used to analyze content and extract information from the three types of data: unstructured data, structured data, and semistructured data. Unstructured data may comprise documents with a series of text lines. Documents that are included in the category of unstructured data may have little or no metadata. Structured data, on the other hand, may comprise a traditional database where information is structured and referenced. Semistructured data may comprise a document, such as a research paper or a Securities and Exchange Commission filing, where part of the document comprises lines of text and part of the document comprises tables and graphs used for illustration. In the case of semistructured data, the structured components of a document may be analyzed as structured data and the unstructured components of the document may be analyzed as unstructured data.
Feature vectors are used to compare content matching the query. The feature vectors may include the following elements: a group of words, a concept, and a score. The group of words represents a summary or sampling of the content. The concept categorizes the content. The score contains a similarity measure for the content and additional content matching the query. For instance, a feature vector for Space Needle content may include a group of words, “monument built for world fair,” a concept, “tower,” and a score, “null.” The concept element of the feature vectors may be selected as the lateral concept based on the score assigned to the feature vector.
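A minimal sketch of that three-element feature vector, assuming a plain Python representation; the field names follow the elements named above, and the Space Needle values repeat the example given in the text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeatureVector:
    """One content item's feature vector: a word summary, a category concept, and a score."""
    words: list          # summary or sampling of the content
    concept: str         # category that classifies the content
    score: Optional[float]  # similarity measure; None until the similarity engine fills it in

# The Space Needle example from the text; the score starts out unset ("null").
space_needle_vector = FeatureVector(
    words=["monument", "built", "for", "world", "fair"],
    concept="tower",
    score=None,
)
```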
The values for the elements of the feature vector may be generated manually or automatically. A subject matter expert may manually populate the elements of the feature vector. Alternatively, the elements of the feature vector may be populated automatically by the lateral concept generator 240.
The initial processing component 242 may perform a lexical analysis, a linguistic analysis, an entity extraction analysis, and an attribute extraction analysis. In an embodiment, the initial processing component 242 creates feature vectors for the content in storage 250. The initial processing component 242 automatically populates the words and concepts for the feature vectors. In certain embodiments, the initial processing component 242 selects the concepts from the ontologies 252 in storage 250, or from the words extracted from the content.
The similarity engine 244 is a component of the lateral concept generator 240 that calculates a similarity score, which populates the score element of the feature vector for content retrieved from storage 250. The score may represent similarity to other content, in storage 250, matching the query or similarity to the query received by the search engine 230. In turn, the similarity score is used to select several categories from concepts identified in the feature vectors associated with the content matching the query. The selected categories are returned to the search engine 230 as lateral concepts.
In one embodiment, the similarity engine 244 may calculate similarity between content matching the query using the feature vectors. The similarity score may be calculated based on the distance between the feature vectors, using the Pythagorean theorem for multidimensional vectors (i.e., Euclidean distance). For instance, when the storage 250 includes content matching the query, the lateral concept generator 240 may return several categories based on scores assigned to content within each of the several categories. The lateral concept generator 240 obtains the matching content and corresponding categories from storage 250. In turn, the lateral concept generator 240 generates the feature vector for the matching content. Also, the lateral concept generator 240 generates a content collection using the categories associated with the matching content. Each item of content in the content collection is processed by the lateral concept generator 240 to create feature vectors. In turn, each feature vector for the content collection is compared to the feature vector for the matching content to generate a similarity score, and the feature vectors for the content collection are updated with the similarity scores calculated by the similarity engine 244. The similarity engine 244 may select a number of feature vectors with high similarity scores in each category, average the scores, and assign the category the averaged score. In an embodiment, the similarity engine 244 selects the three highest-scoring feature vectors within each category to calculate the average score that is assigned to the category. Thus, as an example, the top five categories with the highest scores may be returned to the search engine 230 as lateral concepts.
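The scoring flow in this paragraph can be sketched as follows. It is a simplified reading rather than the claimed implementation: similarity is taken as an inverse Euclidean distance between sparse term-count vectors, the top three scores within each category are averaged, and the five best categories are returned, as the text describes; the helper names and the distance-to-similarity mapping are assumptions.

```python
import math
from collections import defaultdict

def euclidean_distance(a: dict, b: dict) -> float:
    """Distance between two sparse term-count vectors (Pythagorean theorem in n dimensions)."""
    keys = set(a) | set(b)
    return math.sqrt(sum((a.get(k, 0) - b.get(k, 0)) ** 2 for k in keys))

def similarity(a: dict, b: dict) -> float:
    """Map distance to a similarity score: closer vectors score higher."""
    return 1.0 / (1.0 + euclidean_distance(a, b))

def top_lateral_concepts(matching_vector, collection, per_category=3, num_concepts=5):
    """collection: (term_counts, category) pairs for the content collection.
    Scores each item against the matching content, averages the top scores in
    each category, and returns the highest-scoring categories as lateral concepts."""
    by_category = defaultdict(list)
    for term_counts, category in collection:
        by_category[category].append(similarity(matching_vector, term_counts))
    category_scores = {
        category: sum(sorted(scores, reverse=True)[:per_category]) / min(len(scores), per_category)
        for category, scores in by_category.items()
    }
    return sorted(category_scores, key=category_scores.get, reverse=True)[:num_concepts]

# Example: content matching "Seattle Space Needle" scored against a small collection.
matching = {"seattle": 1, "space": 2, "needle": 2, "tower": 1}
collection = [
    ({"tower": 2, "taipei": 1, "101": 1}, "tower"),
    ({"market": 2, "pike": 1, "place": 1}, "seattle attraction"),
    ({"modern": 1, "architecture": 2}, "architecture"),
]
print(top_lateral_concepts(matching, collection))
```

The same routine covers the second path described below: when no stored content matches the query, a feature vector built from the query terms can be passed in place of the matching content's vector.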
In another embodiment, the similarity engine 244 may calculate similarity between content and the query. The similarity score may be calculated based on the distance between the feature vectors, using the Pythagorean theorem for multidimensional vectors. For instance, when the storage 250 does not include content matching the query, the lateral concept generator 240 may return several categories based on scores assigned to content within each of the several categories. The lateral concept generator 240 obtains a predetermined number of content items related to the query, and the corresponding categories, from storage 250. In one embodiment, the lateral concept generator 240 obtains fifty items of content from storage 250 having a high query similarity score. In turn, the lateral concept generator 240 generates a feature vector for the query. Also, the lateral concept generator 240 retrieves a collection of content using the categories associated with the obtained content. Content in the collection of content is processed by the lateral concept generator 240 to create feature vectors. In turn, the feature vectors for content in the collection of content are compared to the feature vector for the query to generate similarity scores, and the feature vectors for the content collection are updated with the similarity scores calculated by the similarity engine 244. The similarity engine 244 may select a number of feature vectors with high similarity scores in each category, average the scores, and assign the category the averaged score. In an embodiment, the similarity engine 244 selects the three highest-scoring feature vectors within each category to calculate the average score that is assigned to the category. In turn, the top five categories with the highest scores are returned to the search engine as lateral concepts.
The similarity engine 244 may use word frequency to calculate a query similarity score for the content in storage 250. The query similarity score (Sq) is calculated by the similarity engine when a match to the query is not stored in the storage 250: Sq=√(freq(w)×log(docfreq(w))), where freq(w) is the frequency of the query (w) in the storage and docfreq(w) is the frequency of the query within the content that is selected for comparison. The content items assigned the largest Sq are collected by the similarity engine 244, and the top fifty documents are used to generate the lateral concepts.
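A small sketch of the word-frequency score under the definitions just given; the function and variable names are illustrative, and the logarithm base is assumed to be natural since the text does not specify one.

```python
import math

def query_similarity_score(freq_w: float, docfreq_w: float) -> float:
    """Sq = sqrt(freq(w) * log(docfreq(w))), where freq(w) is the frequency of the
    query (w) in storage and docfreq(w) is its frequency within the content
    selected for comparison, per the definitions in the text."""
    return math.sqrt(freq_w * math.log(docfreq_w))

# Content items with the largest Sq are kept; the text retains the top fifty.
scores = {"doc-1": query_similarity_score(12, 40),
          "doc-2": query_similarity_score(3, 200)}
top_documents = sorted(scores, key=scores.get, reverse=True)[:50]
```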
The indexing engine 246 is an optional component of the lateral concept generator 240. The indexing engine 246 receives the lateral concepts from the similarity engine 244 and stores the lateral concepts in index 254 along with the query that generates the lateral concept. In turn, a subsequent query similar to a previously processed query may bypass the lateral concept generator 240 and obtain the lateral concepts stored in the index 254.
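The index lookup can be pictured as a simple query-keyed cache. This is a minimal sketch with assumed names and an exact-match (normalized) notion of a "similar" query, not the indexing engine's actual structure.

```python
# Hypothetical query-keyed index of previously generated lateral concepts.
lateral_concept_index = {}

def store_lateral_concepts(query: str, concepts: list) -> None:
    lateral_concept_index[query.lower().strip()] = concepts

def lookup_lateral_concepts(query: str):
    """Return cached lateral concepts for a previously processed query, letting the
    search engine bypass the lateral concept generator."""
    return lateral_concept_index.get(query.lower().strip())

store_lateral_concepts("Seattle Space Needle", ["tower", "seattle attraction", "architecture"])
cached = lookup_lateral_concepts("seattle space needle")  # hit: reuse without reprocessing
```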
The storage 250 provides content and previously generated lateral concepts to the search engine 230. The storage 250 stores content, ontologies 252, and an index 254. In certain embodiments, the storage 250 also includes one or more data stores, such as relational and/or flat file databases and the like, that store a subject, object, and predicate for each content. The index 254 references content along with previously generated lateral concepts. The content may include structured, semistructured, and unstructured data. In some embodiments, the content may include video, audio, documents, tables, and images having attributes that are stored in the flat file databases. The computer system 200 may algorithmically generate the lateral concepts, or content attributes may be used as lateral concepts.
For instance, content attributes for the Seattle Space Needle or of a particular stock may be stored in storage 250. The content attributes may be provided as lateral concepts in response to a search query for the Seattle Space Needle or the particular stock, respectively. The Seattle Space Needle content attributes may include a tower attribute, a Seattle attraction attribute, and an architecture attribute. The tower attribute may include data that specifies the name and height of the Seattle Space Needle and other towers, such as Taipei 101, Empire State Building, Burj, and Shanghai World Financial Center. The Seattle attraction attribute may include data for the name and location of other attractions in Seattle, such as Seattle Space Needle, Pike Place Market, Seattle Art Museum, and Capitol Hill. The architecture attribute may include data for the architecture type, modern, ancient, etc., for each tower included in the tower attribute. Any of the Seattle Space Needle content attributes may be returned as a lateral concept by the computer system 200.
The particular stock may also include stock content attributes. For instance, MSFT content attributes may include a type attribute, an industry attribute, and a profit to earnings (PE) attribute. The type attribute includes data for business type, e.g., corporation, company, incorporated, etc. The industry attribute may specify the industry, e.g., food, entertainment, software, etc., and the PE attribute includes the value of the PE. Any of the stock content attributes may be returned as a lateral concept by the computer system 200.
The lateral concepts that are generated algorithmically by the computer system 200 may be stored in the index 254. In turn, subsequent queries received by the search engine 230 that match feature vectors in storage 250 may be responded to, in certain embodiments, with the lateral concepts stored in the index 254. For a given query, the index 254 may store several lateral concepts. Accordingly, the search engine 230 may access the index 254 to obtain a list of lateral concepts. The lateral concepts enable a user to navigate content in the storage 250.
The ontologies 252 include words or phrases that correspond to content in storage 250. The categories associated with content in storage 250 may be selected from multiple ontologies 252. Each ontology 252 includes a taxonomy for a domain; the taxonomy specifies the relationships between the words or phrases in the domain. The domains may include medicine, art, computers, etc. In turn, the categories associated with the content may be assigned a score by the lateral concept generator 240 based on similarity. In one embodiment, the lateral concept generator 240 calculates the score based on similarity to content obtained in response to the query. In another embodiment, the lateral concept generator 240 calculates the score based on similarity to the query. The lateral concept generator 240 selects several categories as lateral concepts based on the score.
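A minimal sketch of how a set of domain ontologies might be represented, assuming a simple parent-to-narrower-terms taxonomy per domain; the domains, terms, and lookup helper are illustrative only.

```python
# Hypothetical ontologies: one taxonomy per domain, mapping a term to narrower terms.
ontologies = {
    "architecture": {
        "structure": ["tower", "bridge", "stadium"],
        "tower": ["observation tower", "skyscraper"],
    },
    "travel": {
        "attraction": ["museum", "market", "observation tower"],
    },
}

def candidate_categories(term: str) -> list:
    """Collect categories from every domain taxonomy that mentions the term."""
    categories = []
    for domain, taxonomy in ontologies.items():
        for parent, children in taxonomy.items():
            if term == parent or term in children:
                categories.append(f"{domain}:{parent}")
    return categories

print(candidate_categories("observation tower"))  # ['architecture:tower', 'travel:attraction']
```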
In some embodiments, one or more lateral concepts stored in an index are transmitted to a client device for presentation to a user in response to a query from the user. Alternatively, the lateral concepts may be dynamically generated based on the query received from the user. The computer system may execute at least two computer-implemented methods for dynamically generating lateral concepts. In a first embodiment, the lateral concepts are selected based on scores between feature vectors of content matching the query and other content in storage.
In step 320, the computer system receives a user query. In turn, the computer system obtains content that corresponds to the user query from storage, in step 330. In step 340, the computer system identifies categories associated with the obtained content corresponding to the user query. In one embodiment, the categories include phrases in one or more ontologies. In another embodiment, the categories comprise attributes of the obtained content corresponding to the user query. In turn, the computer system retrieves, from storage, a collection of content that corresponds to each identified category, in step 350.
In step 360, the computer system selects several identified categories as lateral concepts based on scores assigned to content in the collection of content. In one embodiment, the lateral concepts may include orthogonal concepts. The lateral concepts may be stored in the storage of the computer system.
In certain embodiments, the content is represented as feature vectors, and the score is assigned to the content based on similarity between feature vectors. The computer system displays the lateral concepts to the user who provided the user query. Also, content displayed with the lateral concepts may be filtered by the computer system based on the similarity score assigned to the content. In an embodiment, the computer system displays the top three lateral concepts.
The computer system may select, in some embodiments, orthogonal concepts by identifying the normal to a plane corresponding to the feature vector of the obtained content. In turn, feature vectors for the collection of content that create planes, which are parallel to a plane created by the normal, are processed by the computer system to obtain categories of the content associated with those feature vectors. In step 370, several of these categories may be returned as lateral concepts based on a score assigned to the content within the categories. The method terminates in step 380.
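One way to read the geometric description above is that a category is kept when its content vectors are nearly orthogonal to the obtained content's vector, i.e., their dot product (cosine similarity) with it is close to zero. The sketch below uses that test; the threshold, the cosine formulation, and the helper names are assumptions rather than the claimed procedure.

```python
import math

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in set(a) | set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def orthogonal_categories(content_vector: dict, collection, threshold: float = 0.1):
    """collection: (term_counts, category) pairs. Keep categories whose content is
    nearly orthogonal to the obtained content's vector (cosine close to zero)."""
    return sorted({category for term_counts, category in collection
                   if abs(cosine(content_vector, term_counts)) <= threshold})
```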
As mentioned above, the computer system may execute at least two computer-implemented methods for dynamically generating lateral concepts. In a second embodiment, the lateral concepts are selected based on scores between feature vectors for the query and content in storage. The computer system may execute this method when the storage does not contain a match to the query. In some embodiments, a match is determined without using stems for the terms included in the query. Thus, the storage of the computer system may include other matches that are based on the stems of the terms included in the query. These other matches may be used to generate the lateral concepts.
In step 420, the computer system receives a user query. In step 430, the computer system calculates similarity between content in storage and the user query. In step 440, the computer system creates a collection of content having a predetermined number of content items similar to the user query. In turn, the computer system identifies each category that corresponds to content in the collection of content, in step 450. In step 460, the computer system selects several identified categories as lateral concepts based on scores assigned to content in the collection of content.
In certain embodiments, the query and content are represented as feature vectors, and the score is assigned to the content based on similarity between the feature vectors for the query and the content. The computer system displays the lateral concepts to the user who provided the user query. Also, content displayed with the lateral concepts may be filtered by the computer system based on the similarity score assigned to the content. In an embodiment, the computer system displays the top three lateral concepts. In one embodiment, orthogonal concepts may be included in the lateral concepts. The orthogonal concepts are selected by identifying the normal to a plane corresponding to the feature vector of the query. In turn, feature vectors for the collection of content that create planes, which are parallel to a plane created by the normal, are processed by the computer system to obtain categories of the content associated with those feature vectors. In step 470, several of these categories may be returned as lateral concepts based on a score assigned to the content within the categories. The method terminates in step 480.
In certain embodiments, the selected lateral concepts are displayed in a graphical user interface provided by a search engine. The lateral concepts are provided along with the search results that match the user query received by the search engine. The user may select the lateral concepts to issue queries to the search engine and retrieve additional content corresponding to the selected lateral concepts.
The graphical user interface 500 is displayed in response to a user query entered in the search text box 510. The user query is transmitted to the search engine after the user initiates the search. The search engine responds with a listing of results and the results are displayed in the search results region 520. The search engine also responds with lateral concepts. The lateral concepts are displayed in the lateral concepts regions 530. If a user selects a lateral concept from the lateral concepts regions 530, search results relevant to the selected lateral concept are displayed in the search results region 520.
In summary, lateral concepts allow a user to traverse unstructured, structured, and semistructured content using information derived from the content or from the storage structure of the computer system storing that content. A user may send a query to a search engine, which returns a number of results. In addition, the search engine may provide lateral concepts. The lateral concepts may correspond to one or more categories associated with content included in the search results. When the user clicks on the lateral concepts, the results are updated to include additional content associated with the lateral concepts.
Many different arrangements of the various components depicted, as well as components not shown, are possible without departing from the spirit and scope of the present invention. Embodiments of the present invention have been described with the intent to be illustrative rather than restrictive. It is understood that certain features and subcombinations are of utility and may be employed without reference to other features and subcombinations and are contemplated within the scope of the claims. Not all steps listed in the various figures need be carried out in the specific order described.
Number | Name | Date | Kind |
---|---|---|---|
5748974 | Johnson | May 1998 | A |
5835087 | Herz et al. | Nov 1998 | A |
6038560 | Wical | Mar 2000 | A |
6154213 | Rennison et al. | Nov 2000 | A |
6256031 | Meijer et al. | Jul 2001 | B1 |
6363378 | Conklin et al. | Mar 2002 | B1 |
6460034 | Wical | Oct 2002 | B1 |
6510406 | Marchisio | Jan 2003 | B1 |
6675159 | Lin et al. | Jan 2004 | B1 |
6859800 | Roche | Feb 2005 | B1 |
6868525 | Szabo | Mar 2005 | B1 |
6968332 | Milic-Frayling et al. | Nov 2005 | B1 |
7089226 | Dumais et al. | Aug 2006 | B1 |
7152031 | Jenson | Dec 2006 | B1 |
7153137 | Altenhofen | Dec 2006 | B2 |
7167866 | Farnham | Jan 2007 | B2 |
7171424 | Barsness et al. | Jan 2007 | B2 |
7213205 | Miwa et al. | May 2007 | B1 |
7225407 | Sommerer | May 2007 | B2 |
7275061 | Kon et al. | Sep 2007 | B1 |
7292243 | Burke | Nov 2007 | B1 |
7319998 | Marum Campos et al. | Jan 2008 | B2 |
7350138 | Swaminathan et al. | Mar 2008 | B1 |
7421450 | Mazzarella et al. | Sep 2008 | B1 |
7448047 | Poole | Nov 2008 | B2 |
7496830 | Rubin | Feb 2009 | B2 |
7505985 | Kilroy | Mar 2009 | B2 |
7565627 | Brill | Jul 2009 | B2 |
7577646 | Chien | Aug 2009 | B2 |
7657518 | Budzik et al. | Feb 2010 | B2 |
7707201 | Kapur et al. | Apr 2010 | B2 |
7809705 | Dom et al. | Oct 2010 | B2 |
7809717 | Hoeber et al. | Oct 2010 | B1 |
7818315 | Cucerzan et al. | Oct 2010 | B2 |
7849080 | Chang et al. | Dec 2010 | B2 |
7860853 | Ren et al. | Dec 2010 | B2 |
7870117 | Rennison | Jan 2011 | B1 |
7921107 | Chang et al. | Apr 2011 | B2 |
7921108 | Wang et al. | Apr 2011 | B2 |
7921109 | Parikh et al. | Apr 2011 | B2 |
7934161 | Denise | Apr 2011 | B1 |
7937340 | Hurst-Hiller et al. | May 2011 | B2 |
7958115 | Kraft | Jun 2011 | B2 |
7966305 | Olsen et al. | Jun 2011 | B2 |
7970721 | Leskovec et al. | Jun 2011 | B2 |
8015006 | Kennewick et al. | Sep 2011 | B2 |
8024329 | Rennison | Sep 2011 | B1 |
8051104 | Weissman et al. | Nov 2011 | B2 |
8086600 | Bailey et al. | Dec 2011 | B2 |
8090713 | Tong et al. | Jan 2012 | B2 |
8090724 | Welch et al. | Jan 2012 | B1 |
8108385 | Kraft et al. | Jan 2012 | B2 |
8122016 | Lamba et al. | Feb 2012 | B1 |
8122017 | Sung et al. | Feb 2012 | B1 |
8126880 | Dexter et al. | Feb 2012 | B2 |
8150859 | Vadlamani et al. | Apr 2012 | B2 |
8176041 | Harinarayan et al. | May 2012 | B1 |
8229900 | Houle | Jul 2012 | B2 |
8260664 | Vadlamani et al. | Sep 2012 | B2 |
8326842 | Vadlamani et al. | Dec 2012 | B2 |
8386509 | Scofield et al. | Feb 2013 | B1 |
20020049738 | Epstein | Apr 2002 | A1 |
20020062368 | Holtzman et al. | May 2002 | A1 |
20030078913 | McGreevy | Apr 2003 | A1 |
20030177112 | Gardner | Sep 2003 | A1 |
20040003351 | Sommerer | Jan 2004 | A1 |
20040015483 | Hogan | Jan 2004 | A1 |
20040030741 | Wolton | Feb 2004 | A1 |
20040169688 | Burdick | Sep 2004 | A1 |
20050022114 | Shanahan et al. | Jan 2005 | A1 |
20050055341 | Haahr | Mar 2005 | A1 |
20050080775 | Colledge | Apr 2005 | A1 |
20050120015 | Marum Campos et al. | Jun 2005 | A1 |
20050125219 | Dymetman et al. | Jun 2005 | A1 |
20050132297 | Milic-Frayling | Jun 2005 | A1 |
20050149510 | Shafrir | Jul 2005 | A1 |
20050198011 | Barsness et al. | Sep 2005 | A1 |
20050203924 | Rosenberg | Sep 2005 | A1 |
20050257894 | Biagiotti | Nov 2005 | A1 |
20050268341 | Ross | Dec 2005 | A1 |
20060004732 | Odom | Jan 2006 | A1 |
20060005156 | Korpipaa | Jan 2006 | A1 |
20060036408 | Templier et al. | Feb 2006 | A1 |
20060047691 | Humphreys | Mar 2006 | A1 |
20060069589 | Nigam et al. | Mar 2006 | A1 |
20060069617 | Milener | Mar 2006 | A1 |
20060074870 | Brill et al. | Apr 2006 | A1 |
20060106793 | Liang | May 2006 | A1 |
20060116994 | Jonker | Jun 2006 | A1 |
20060117002 | Swen | Jun 2006 | A1 |
20060122979 | Kapur et al. | Jun 2006 | A1 |
20060242147 | Gehrking | Oct 2006 | A1 |
20060248078 | Gross et al. | Nov 2006 | A1 |
20060287919 | Rubens | Dec 2006 | A1 |
20060287983 | Krauss et al. | Dec 2006 | A1 |
20070011155 | Sarkar | Jan 2007 | A1 |
20070150515 | Brave et al. | Jun 2007 | A1 |
20070174255 | Sravanapudi | Jul 2007 | A1 |
20070226198 | Kapur | Sep 2007 | A1 |
20070294200 | Au | Dec 2007 | A1 |
20080010311 | Kon et al. | Jan 2008 | A1 |
20080033932 | DeLong | Feb 2008 | A1 |
20080033982 | Parikh et al. | Feb 2008 | A1 |
20080059508 | Lu et al. | Mar 2008 | A1 |
20080082477 | Dominowska et al. | Apr 2008 | A1 |
20080104061 | Rezaei | May 2008 | A1 |
20080104071 | Pragada et al. | May 2008 | A1 |
20080133488 | Bandaru et al. | Jun 2008 | A1 |
20080133585 | Vogel | Jun 2008 | A1 |
20080235203 | Case et al. | Sep 2008 | A1 |
20080243799 | Rozich | Oct 2008 | A1 |
20080256061 | Chang | Oct 2008 | A1 |
20080270384 | Tak | Oct 2008 | A1 |
20080288456 | Omoigui | Nov 2008 | A1 |
20080313119 | Leskovec et al. | Dec 2008 | A1 |
20090006358 | Morris | Jan 2009 | A1 |
20090006974 | Harinarayan | Jan 2009 | A1 |
20090024962 | Gotz | Jan 2009 | A1 |
20090055394 | Schilit et al. | Feb 2009 | A1 |
20090083261 | Nagano | Mar 2009 | A1 |
20090089312 | Chi et al. | Apr 2009 | A1 |
20090100037 | Scheibe | Apr 2009 | A1 |
20090119261 | Ismalon | May 2009 | A1 |
20090119289 | Gibbs et al. | May 2009 | A1 |
20090125505 | Bhalotia | May 2009 | A1 |
20090157419 | Bursey | Jun 2009 | A1 |
20090157676 | Shanbhag | Jun 2009 | A1 |
20090164441 | Cheyer | Jun 2009 | A1 |
20090234814 | Boerries | Sep 2009 | A1 |
20090240672 | Costello | Sep 2009 | A1 |
20090241065 | Costello | Sep 2009 | A1 |
20090254574 | De et al. | Oct 2009 | A1 |
20090299853 | Jones et al. | Dec 2009 | A1 |
20100005092 | Matson | Jan 2010 | A1 |
20100010913 | Pinckney et al. | Jan 2010 | A1 |
20100023508 | Zeng | Jan 2010 | A1 |
20100042619 | Jones et al. | Feb 2010 | A1 |
20100070484 | Kraft et al. | Mar 2010 | A1 |
20100106485 | Lu | Apr 2010 | A1 |
20100131085 | Steelberg | May 2010 | A1 |
20100138402 | Burroughs et al. | Jun 2010 | A1 |
20100223261 | Sarkar | Sep 2010 | A1 |
20100332500 | Pan et al. | Dec 2010 | A1 |
20110040749 | Ceri et al. | Feb 2011 | A1 |
20110047148 | Omoigui | Feb 2011 | A1 |
20110047149 | Vaananen | Feb 2011 | A1 |
20110055189 | Effrat et al. | Mar 2011 | A1 |
20110055207 | Schorzman et al. | Mar 2011 | A1 |
20110125734 | Duboue et al. | May 2011 | A1 |
20110131157 | Iyer et al. | Jun 2011 | A1 |
20110131205 | Iyer et al. | Jun 2011 | A1 |
20110179024 | Stiver et al. | Jul 2011 | A1 |
20110196737 | Vadlamani et al. | Aug 2011 | A1 |
20110196851 | Vadlamani et al. | Aug 2011 | A1 |
20110196852 | Srikanth et al. | Aug 2011 | A1 |
20110231395 | Vadlamani et al. | Sep 2011 | A1 |
20110264655 | Xiao et al. | Oct 2011 | A1 |
20110264656 | Dumais et al. | Oct 2011 | A1 |
20110307460 | Vadlamani et al. | Dec 2011 | A1 |
20120130999 | Jin et al. | May 2012 | A1 |
Number | Date | Country |
---|---|---|
1535433 | Oct 2004 | CN |
101124609 | Feb 2008 | CN |
101137957 | Mar 2008 | CN |
101356525 | Jan 2009 | CN |
101364239 | Feb 2009 | CN |
2003032292 | Feb 1991 | JP |
2005165958 | Feb 1993 | JP |
2008235185 | Sep 1996 | JP |
2009252145 | Oct 2009 | JP |
100837751 | Jun 2008 | KR |
0150330 | Dec 2001 | WO |
2006083684 | Aug 2006 | WO |
2007143109 | Jun 2007 | WO |
2007113546 | Oct 2007 | WO |
2008027503 | Mar 2008 | WO |
2009117273 | Sep 2009 | WO |
2010148419 | Dec 2010 | WO |
Entry |
---|
International Search Report and Written Opinion in PCT application PCT/US2011/020908 mailed Sep. 28, 2011. |
“On Scalability of the Similarity Search in the World of Peers”—Published Date: 2006, http://www.nmis.isti.cnr.it/falchi/publications/Falchi-2006-Infoscale.pdf. |
“Curse of Dimensionality in the Application of Pivot-based Indexes to the Similarity Search Problem”—Published Date: May 2009, http://arxiv.org/PS—cache/arxiv/pdf/0905/0905.2141v1.pdf. |
“Google Wonder Wheel, Google Wonder Wheel Explained”, Google Inc., Published Date: 2009, http://www.googlewonderwheel.com/. |
“Cuil—Features”, Cuil, Inc., Published Date: 2010, http://www.cuil.com/info/features/. |
“Kosmix: Your Guide to the Web”, Kosmix Corporation, Published Date: 2010, http://www.kosmix.com/corp/about. |
Broder, Andrei, et al., A Semantic Approach to Contextual Advertising—Published Date: Jul. 23-27, 2007 http://fontoura.org/papers/semsyn.pdf. |
Osinski, Stanislaw, An Algorithm for Clustering of Web Search Results—Published Date: Jun. 2003 http://project.carrot2.org/publications/osinski-2003-lingo.pdf. |
Rajaraman, Anand, Kosmix: Exploring the Deep Web using Taxonomies and Categorization—Published Date: 2009. ftp://ftp.research.microsoft.com/pub/debull/A09June/anand—deepweb1.pdf. |
Wang, Xuerui, et al., A Search-based Method for Forecasting Ad Impression in Contextual Advertising—Published Date: Apr. 20-24, 2009 http://www.cs.umass.edu/˜xuerui/papers/forecasting—www2009.pdf. |
Wartena, Christian, et al., Topic Detection by Clustering Keywords—Published Date: Sep. 5, 2008 http://www.uni-weimar.de/medien/webis/research/workshopseries/tir-08/proceedings/18—paper—655.pdf. |
Chirita, Paul-Alexandru, et al., Personalized Query Expansion for the Web—Published Date: Jul. 27, 2007 http://delivery.acm.org/10.1145/1280000/1277746/p7-chirita.pdf?key1=1277746&key2=8684409521&coll=GUIDE&dl=GUIDE&CFID=63203797&CFTOKEN=28379565. |
Kules, Bill, et al., Categorizing Web Search Results into Meaningful and Stable Categories Using Fast-Feature Techniques—Published Date: Jun. 15, 2006 http://hcil.cs.umd.edu/trs/2006-15/2006-15.pdf. |
Bade, Korinna, et al., CARSA—An Architecture for the Development of Context Adaptive Retrieval Systems—Published Date: Feb. 14, 2006 http://www.springerlink.com/content/jk3wj13251rh6581/fulltext.pdf. |
Budanitsky, et al., “Semantics Distance in Wordnet: an experimental, application-oriented evaluation of five measures” workshop of wordnet and other lexical resources, in the north american chapter of the association for computation linguistics, Jun. 2001, Pittsburgh, PA http://citeseer.ist.psu.edu/budanitskyo1semantic.html. |
Fisher, Brian, et al., "CZWeb: Fish-Eye Views for Visualizing the World-Wide Web", Published 1997, 5 pages, http://scholar.google.co.in/scholar?cluster=3988401955906218135&hl=en&as—sdt=2000. |
Gonen, Bilal, “Semantic Browser”, Aug. 2006, 42 pages, University of Georgia, Athens, Georgia, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.83.9132&rep=rep1&type=pdf. |
Hao Liang, et al., “Translating Query for Deep Web Using Ontology”, 2008 International Conference on Computer Science and Software Engineering, IEEE Computer Society, Published Date: 2008, http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04722650. |
Havre, Susan, et al., “Interactive Visualization of Multiple Query Results,” 2001, 8 pages, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.2850&rep=rep1&type=pdf. |
Hearst, Marti A., “Ch. 10, Information Visualization for Search Interfaces”, 2009, 69 pages, Search User Interfaces, http://searchuserinterfaces.com/book/sui—ch10—visualization.html. |
International Search Report and Written Opinion PCT/US2011/021596, mailed Aug. 29, 2011. |
International Search Report and Written Opinion PCT/US2011/021597, mailed Sep. 23, 2011. |
Jonker, David, et al., “Information Triage with TRIST”, May 2005, 6 pages, 2005 Intelligence Analysis Conference, Washington DC, Oculus Info, Inc., http://www.oculusinfo.com/papers/Oculus—TRIST—Final—Distrib.pdf. |
Kiryakov, et al., “Semantic Annotation, Indexing, and Retrieval” Web Semantics: Science, Services and Agents on the World Wide Web, Elsevier, vol. 2, No. 1, Dec. 1, 2004, pp. 49-79. |
Kosara, Robert, et al., “An Interaction View on Information Visualization”, 2003, 15 pages, The Eurographics Association, http://www.cs.uta.fi/˜jt68641/infoviz/An—Interaction—View—on—Information—Visualization.pdf. |
Leopold, Jennifer, et al., “A Generic, Functionally Comprehensive Approach to Maintaining an Ontology as a Relational Database”, 2009, pp. 369-379, World Academy of Science, vol. 52, http://www.akademik.unsri.ac.id/ download/journal/files/waset/v52-58-oaj-unsri.pdf. |
Mateevitsi, Victor, et al., “Sparklers: An Interactive Visualization and Implementation of the Netflix recommendation algorithm”, retrieved Apr. 7, 2010, 5 pages, http://www.vmateevitsi.com/bloptop/. |
Nguyen, Tien N., “A Novel Visualization Model for Web Search Results,” Sep./Oct. 2006, pp. 981-988, IEEE Transactions on Visualization and Computer Graphics, vol. 12, No. 5, http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4015455. |
Roberts, Jonathan C., et al.,“Visual Bracketing for Web Search Result Visualization”, 2003, 6 pages, Seventh International Conference on Information Visualization, IEEE Computer Society,http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1217989. |
Smith, Kate A., et al., "Web Page Clustering using a Self-Organizing Map of User Navigation Patterns", Published 2003, pp. 245-256, Decision Support Systems, vol. 35, Elsevier Science, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.4.5185&rep=rep1&type=pdf. |
Smith, Michael P., et al., "Providing a User Customisable Tool for Software Visualisation at Runtime", Published 2004, 6 pages, University of Durham, United Kingdom, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.4013&rep=rep1&type=pdf. |
Thomas Strang, Claudia Linnhoff-Popien, and Korbinian Frank, "CoOL: A Context Ontology Language to enable Contextual Interoperability", IFIP International Federation for Information Processing, Published Date: 2003, http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=BC65BEE5025FB518404AF16988E46489?doi=10.1.1.5.9505&rep=rep1&type=pdf. |
Tony Veale and Yanfen Hao, “A context-sensitive framework for lexical ontologies”, The Knowledge Engineering Review, vol. 23:1, 101-115, Cambridge University Press, Published Date: 2007, United Kingdom, http://afflatus.ucd.ie/Papers/ContextAndLexicalOntologies.pdf. |
Tvarozek, Michal, et al., "Improving Semantic Search via Integrated Personalized Faceted and Visual Graph Navigation", Published Date: 2008, 12 pages, http://www2.fiit.stuba.sk/˜bielik/publ/abstracts/2008/sofsem2008navigation.pdf. |
Yngve, Gary, "Visualization for Biological Models, Simulation, and Ontologies", Published Aug. 2007, 152 pages, University of Washington, http://sigpubs.biostr.washington.edu/archive/00000232/01/gary-thesis-final.pdf. |
Search Report Cited in PCT/US2011/0212 mailed Aug. 19, 2011. |
Non Final Office Action in U.S. Appl. No. 12/700,985 mailed Dec. 12, 2011. |
Non Final Office Action in U.S. Appl. No. 12/701,330 mailed Dec. 21, 2011. |
Non Final Office Action in U.S. Appl. No. 12/727,836, mailed Jan. 6, 2012. |
Final Office Action, U.S. Appl. No. 12/795,238—mailed Dec. 11, 2012. |
Non Final Office Action, mailed Mar. 26, 2012, in U.S. Appl. No. 12/796,753. |
Final Office Action, mailed Aug. 6, 2012, in U.S. Appl. No. 12/796,753. |
Notice of Allowance, mailed Aug. 24, 2012, in U.S. Appl. No. 13/406,941. |
Notice of Allowance in U.S. Appl. No. 13/406,941, mailed Jul. 30, 2012. |
Non Final Office Action in U.S. Appl. No. 12/797,375, mailed Sep. 13, 2012. |
Non-Final Office Action mailed Mar. 27, 2013 in U.S. Appl. No. 13/569,460, 25 pages. |
Final Office Action mailed Jun. 5, 2013 in U.S. Appl. No. 12/797,375 13 pages. |
China 1st Office Action dated Jun. 7, 2013 in CN Application No. 201180008397.3, 5 pages. |
China State Intellectual Property Office Search Report dated May 30, 2013 in CN Application 201180008397.3, 2 pages. |
Final Office Action in U.S. Appl. No. 12/727,836 mailed Apr. 16, 2012, 14 pages. |
Final Office Action in U.S. Appl. No. 12/700,985, mailed Apr. 6, 2012, 24 pages. |
Notice of Allowance in U.S. Appl. No. 12/701,330, mailed Jun. 21, 2012. |
NonFinal Office Action in U.S. Appl. No. 12/795,238 mailed Jul. 5, 2012, pp. 1-16. |
Chris Halaschek, Boanerges Aleman-Meza, I. Budak Arpinar, and Amit P. Sheth, 2004, Discovering and ranking Semantic Associations Over a Large RDF Metabase. In Proceedings of the Thirtieth International conference on Very large data bases—vol. 30 (VLDB '04), vol. 30. VLDB Endowment, pp. 1317-1320. |
Non Final OA mailed Jan. 10, 2014 in U.S. Appl. No. 12/727,836. |
Non Final OA mailed Jan. 27, 2014 in U.S. Appl. No. 12/796,753. |
Non Final OA mailed Dec. 30, 2013 in U.S. Appl. No. 12/797,375. |
Non Final Office Action, mailed Nov. 1, 2013, in U.S. Appl. No. 12/795,238. |
Final Office Action mailed Sep. 9, 2013 in U.S. Appl. No. 13/569,460. |
First OA mailed Oct. 29, 2013 in CN Application No. 201180008423.2. |
Chinese Office Action mailed Feb. 24, 2014 in CN Application No. 201180008397.3. |
Chinese Office Action mailed Feb. 18, 2014 in CN Application No. 201180008411.X. |
Australian Office Action mailed Feb. 21, 2014 in AU Application No. 2011213263. |
Non-Final Office Action mailed Jun. 10, 2014 in U.S. Appl. No. 12/700,985, 29 pages. |
Final Office Action dated May 30, 2014 re U.S. Appl. No. 12/796,753, 29 pages. |
Japanese Office Action dated Jun. 10, 2014 in Application No. 2012-551987, 7 pages. |
Final Office Action mailed Apr. 28, 2014 in U.S. Appl. No. 12/727,836, 22 pages. |
Final Office Action mailed Apr. 23, 2014 in U.S. Appl. No. 12/795,238, 37 pages. |
Australian Office Action dated Apr. 15, 2014 with Search Information Statement (SIS) dated Apr. 10, 2014 in Application No. 2011213263, 5 pages. |
Chinese Office Action mailed Apr. 3, 2014 in Application No. 201180008423.2, 4 pages. |
Chinese Office Action mailed May 19, 2014 in Application No. 201180008427.0, 9 pages. |
Chinese Search Report dated Apr. 8, 2014 in Application No. 201180008427.0, 2 pages. |
Number | Date | Country | |
---|---|---|---|
20110196851 A1 | Aug 2011 | US |