The present application relates generally to the technical field of search algorithms and, in one specific example, to the use of a search algorithm to generate feedback.
Customer feedback for online transactions allows potential purchasers of goods or services to evaluate a seller of a good or service prior to engaging in a transaction with the seller. In some cases, this feedback takes the form of statements regarding a particular seller, a good or service, or a category of good or service being sold. These statements may range from being very general to being very specific in terms of the information that they convey.
Some embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings in which:
Example methods and systems to facilitate feedback ratings are described. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of example embodiments. It will be evident, however, to one skilled in the art that the present invention may be practiced without these specific details.
In some embodiments, a system and method for providing feedback to a potential purchaser of a good or service is illustrated. This system and method may provide specific and unique feedback (e.g., feedback ratings), which, in some cases, is devoid of redundant, cumulative, or other types of information that do not contribute to educating a potential purchaser of a good or service about either the good or service or about the party selling the good or service. Further, this system and method may provide a potential purchaser with one or more visual clues as to the quality of the purchasing experience that actual purchasers have had with the particular party selling the good or service, and/or the good or service actually sold. In one embodiment, these visual clues may take the form of a graphical illustration (e.g., an emoticon), and/or various textual highlights displayed on a feedback page in the form of, for example, a Hyper Text Markup Language (HTML) based webpage.
Generalized feedback ratings for marketplace participants often lack the specificity needed to be informative for a user. Example marketplace feedback scores may give a quantitative measure of user trustworthiness, but they may at the same time lack the requisite detail to be informative. This may be true even where the feedback is categorized to include a plurality of positive, negative, and neutral comments. More to the point, pure feedback scores, even if categorized by positives, negatives, and neutrals, may not differentiate between users.
While feedback may give a sense of how good or bad users are (e.g., in terms of the quality of service or goods they provide), it does not describe why the users may be good or bad. In some cases, it may be difficult to tell what qualities differentiate one buyer from another and one seller from another. For example, it may be important to know whether a particular market participant is good, for example, at communication, packaging, pricing or service. Further, in some cases, users (e.g., those who have actually purchased a good or service from a seller) may leave neutral or positive feedback to avoid confrontation or may put in information that relates to the quality of their business conduct. In some cases, without a proper tool to differentiate the feedback, potential customers may have to go through pages of text trying to glean particulars.
Some example embodiments may include providing potential purchasers with an ability to automatically extract representative textual phrases or tags from a marketplace feedback text. Once the feedback tags are extracted, graphical representations in the form of, for example, emoticons, may be attached to the feedback tags. Multiple feedback tags may have the same emoticons attached to them, revealing the sentiments of users who have actually purchased a good or service from the seller.
In some embodiments, technology may be implemented that includes the visualization of reputation ratings by parsing the feedback and analyzing its text for specific pattern frequencies. Text size may be used, for example, to show popularity of feedback by displaying more frequently-used phrases in a bigger font. In other example embodiments, other visual differentiation techniques may be used to highlight or distinguish more frequently-used (or otherwise identified) phrases included in the feedback data. The uniqueness of the particular feedback is also considered. A graphical icon (e.g., an emoticon) may be generated based on the feedback and may be displayed.
Further, in some embodiments, a tool may provide users with a legend of emoticons used to visualize the feedback. The emoticons may be specific to the feedback provided by a particular community and may be displayed automatically. Moreover, users may be given an option of using cached information, and may select whether the feedback information should come from the item category or from user feedback. Additionally, in some embodiments, users may be provided with hints to further explain the meaning of the icons and terms.
Example Technology
In some embodiments, the example technology may include allowing searching by a user (e.g., a seller of goods or services), an identifier (e.g., a screen name, handle, or numeric identifier), or allowing a user to pick a user name from cached user names. The potential purchaser may select and search for visualized feedback that may, for example, be presented in positive, negative, or neutral categories. Additionally, the potential purchaser may use emoticon legends that correspond to the feedback terms. Further, the more frequently-occurring feedback terms may be displayed in correspondingly larger font. Example embodiments may include using displayed terms or phrases that are clickable to expose further details about the feedback associated with these terms or phrases. Moreover, the technology may include demonstrating the frequency of use of the terms or phrases by displaying the percentage of occurrence in the feedback.
Some example embodiments may include extracting representative textual phrases from a user's feedback text as “tags”. These tags may be extracted to differentiate feedback for one user from that of another. For example, if all users have the text “AAAAA++++” in their feedback, this information does not serve to distinguish one user from another. In some embodiments, the example technology may extract other distinguishing phrases that summarize a user's (e.g., a seller's) feedback text. This information may be extracted at a global level, a category level, a domain level (e.g., static or dynamic), or at any other suitable level. Also, this information may be extracted for all the transactions that have occurred for all users in a given category to describe the most representative tags for that category feedback. For example, “very cute” may be a typical phrase in a positive feedback text in a bag category, “does not fit” may be a common phrase in an apparel category, and “wrong size” may be a common phrase in a jewelry-ring category.
Once the text is extracted, the information may be presented at a user level, across all categories, at a category level, domain level, or at some other suitable level. Example representations of the feedback may be shown with the text differentiated by its size (e.g., highlighted with larger text for more frequent phrases and smaller text for less frequent ones).
Further, in some embodiments, an example technology may represent emotions or sentiments in the feedback text by attaching emoticons to them. This may be done by keeping a dictionary of phrases and mapping the phrases to a static set of emoticons. For example, “speedy shipping” and “fast delivery” may be associated with a common emoticon. Linguistic analysis and natural language processing may help to identify such similar phrases and map them to the same emoticon. Techniques of sentiment mining from small text may be used to attach sentiments to the tag phrases.
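To make the phrase-to-emoticon mapping concrete, the following is a minimal Python sketch, assuming a hand-built synonym dictionary in place of the fuller linguistic analysis described above; the phrase groups and emoticon labels are illustrative assumptions only.

```python
# Minimal sketch of mapping feedback phrases to a static set of emoticons.
# The phrase groups and emoticon labels below are illustrative assumptions,
# not the actual dictionary of the described system.

SENTIMENT_GROUPS = {
    "pleased": {"speedy shipping", "fast delivery", "exactly described"},
    "displeased": {"not a good experience", "wrong size"},
    "neutral": {"service was about average"},
}

# Invert the groups into a phrase -> emoticon lookup table.
PHRASE_TO_EMOTICON = {
    phrase: emoticon
    for emoticon, phrases in SENTIMENT_GROUPS.items()
    for phrase in phrases
}

def emoticon_for(phrase: str) -> str:
    """Return the emoticon label for a phrase, or a default if unmapped."""
    return PHRASE_TO_EMOTICON.get(phrase.lower().strip(), "neutral")

print(emoticon_for("Speedy Shipping"))  # pleased
print(emoticon_for("Fast Delivery"))    # pleased (same emoticon as above)
```

In this arrangement, adding a newly identified synonym only requires extending the appropriate phrase group, which is one way the linguistic analysis mentioned above could feed the mapping.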
Example System
Example Screen Shots of Interfaces
In some embodiments, a feedback page 203 is displayed to a potential purchaser. Feedback page 203 may contain feedback regarding a particular seller of a good or service, or regarding a particular category of goods or services for sale. In one embodiment, when a potential purchaser clicks on a tag underlying the text displayed on the feedback page 203, an asynchronous request is sent to the server to obtain details for that tag. This asynchronous request may be generated using technology including, for example, AJAX, or DHTML. Once the asynchronous request is received, then feedback information related to that tag is extracted from a pool of feedback for that user. Next, in some cases, a percentage is computed to determine how many comments out of the total pool of comments (e.g., for that particular seller) actually relate to that specific tag. Then, in some embodiments, a feedback servlet again constructs the HTML to be displayed to the client (e.g., displayed as part of the feedback page 203). In some example embodiments, synchronous transmissions of web page queries are utilized in lieu of or in combination with the asynchronous queries. Further, technologies such as ASP may be utilized in lieu of servlets.
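The server-side handling of such a tag request might proceed along the lines of the following Python sketch; the in-memory feedback pool and helper names are assumptions made for illustration, standing in for the feedback servlet and feedback data store described above.

```python
# Sketch of handling an asynchronous tag-detail request: pull the feedback
# pool for a seller, find the comments that mention the clicked tag, compute
# the percentage, and build an HTML fragment for the client. The data and
# helper names are illustrative assumptions.

from html import escape

FEEDBACK_POOL = {
    "Happy_Shopper": [
        "Speedy shipping, exactly described",
        "Great seller, speedy shipping",
        "Service was about average",
    ],
}

def tag_details_html(seller: str, tag: str) -> str:
    comments = FEEDBACK_POOL.get(seller, [])
    matching = [c for c in comments if tag.lower() in c.lower()]
    percentage = 100.0 * len(matching) / len(comments) if comments else 0.0
    items = "".join(f"<li>{escape(c)}</li>" for c in matching)
    return (
        f"<div class='tag-details'>"
        f"<p>{escape(tag)}: {percentage:.0f}% of comments</p>"
        f"<ul>{items}</ul></div>"
    )

print(tag_details_html("Happy_Shopper", "speedy shipping"))
```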
In some cases, a potential purchaser may generate a feedback request relating to comments regarding a seller, comments regarding another potential purchaser, a feedback score, a category of items for sale, a particular item, or some other type of suitable information pertinent to the sale of a good or service. This information may be supplied by purchasers or by others having access to this type of information.
Example Method
With regard to the first stream, an operation 801 may be executed that generates feedback in the form of feedback data 126 that is then transmitted to, or otherwise received through, the execution of an operation 802. Once operation 802 is executed, an operation 803 may be executed that parses and stores feedback data 126 into feedback data store 132. This process of generating feedback data and subsequently parsing and storing it into a data store may serve to, for example, seed or otherwise populate a data store with user feedback. In some cases, this user feedback may be subsequently utilized for the generation of a feedback page, its associated graphical illustrations (e.g., emoticons), and phrases describing a particular user in terms of feedback regarding that user. Once feedback data store 132 is populated, a reviewer such as reviewer 201 may execute an operation 804 to generate a feedback request, such as feedback request 202. This feedback may be generated with respect to a particular user, a category of goods or services, a static or dynamic domain context, or even for all transactions (e.g., globally). Further, these types of feedback may be combined. Next, through the execution of an operation 805, feedback request 202 may be received and processed. Once feedback request 202 has been received, an operation 806 may be executed that may retrieve feedback entries from, for example, the feedback data store 132. This operation 806 may use various Application Programming Interface (API) calls, or even calls generated using a Structured Query Language (SQL), to retrieve feedback entries. Then, an operation 807 may be executed that generates a list of positive, neutral, or negative feedback entries. Put another way, in some embodiments, a list containing only positive feedback may be generated, a list containing only neutral feedback may be generated, and a list containing only negative feedback may be generated. In lieu of a list, some other suitable data structure (e.g., a Binary Search Tree (BST), a stack, a queue, a doubly linked list) may be utilized.
Once these lists are generated, an operation 808 is executed that filters certain noise words, wherein these noise words may be contained in some type of predefined dictionary (e.g., a stop word dictionary). This dictionary may be based upon certain words that may be deemed to be uninformative or otherwise unhelpful in terms of facilitating a reviewer's, such as reviewer 201's, understanding of feedback regarding a particular user. The execution of operation 808 may be optional in some embodiments. Once operation 808 is executed, or if operation 808 is optionally not executed, the method 800 continues to an operation 809 that assigns any remaining words to an array, or some other suitable data structure, wherein a unique integer value is associated with each one of the words. Then, an operation 810 may be executed that assigns each one of these words to some type of searchable data structure, such as, for example, a trie, a BST, a heap, or a list. In some embodiments, a data structure may be generated for each feedback type (e.g., positive, neutral, and/or negative feedback), resulting in a plurality of data structures. An operation 811 may then be executed that extracts certain phrases from the searchable data structure (or plurality of searchable data structures) and passes these phrases through a frequency engine. In some embodiments, the frequency engine counts the number of times these phrases appear in all of the searchable data structures, or in some cases, only one or more of the searchable data structures.
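A compact Python sketch of operations 808 through 811 might proceed as below; the stop-word list, the use of a plain dictionary in place of a trie or other searchable structure, and the two-to-four word phrase window are simplifying assumptions for illustration.

```python
# Sketch of filtering noise words, assigning integer ids to the remaining
# words, and counting phrase frequencies across a list of feedback comments.
# The stop words, the phrase window, and the use of a Counter in place of a
# trie/BST/heap are illustrative assumptions.

from collections import Counter

STOP_WORDS = {"a", "an", "the", "was", "is"}

def tokenize(comment: str) -> list[str]:
    words = [w.strip(".,!?") for w in comment.lower().split()]
    return [w for w in words if w and w not in STOP_WORDS]

def phrase_frequencies(comments: list[str], max_len: int = 4) -> Counter:
    counts: Counter = Counter()
    for comment in comments:
        words = tokenize(comment)
        for size in range(2, max_len + 1):
            for i in range(len(words) - size + 1):
                counts[" ".join(words[i:i + size])] += 1
    return counts

positive = ["Speedy shipping, great seller", "Speedy shipping as described"]
word_ids = {w: i for i, w in enumerate(sorted({w for c in positive for w in tokenize(c)}))}
print(word_ids)                                   # unique integer per remaining word
print(phrase_frequencies(positive).most_common(3))  # e.g. ('speedy shipping', 2)
```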
An operation 812 is then executed that builds a scoring model using the frequency count. In some cases, this scoring model may, for example, be a hash table that contains a phrase and its frequency count based upon the aggregation of the frequency values for the words associated with the phrase. In some cases, this hash table may be implemented using bucket hashing or cluster hashing as may be suitable. In other cases, some other suitable data structure may be used such as, for example, a BST, heap, linked list, or doubly linked list. Next, an operation 813 is executed that maps or compares the frequency count for each of the phrases to certain graphic standards relating to particular graphical illustrations (e.g., emoticons). In effect, this comparison takes the phrase contained in a hash table entry and compares it to phrases associated with a particular emoticon. In some cases, a dictionary of sentiments, or emoticon mining system (collectively referenced herein as an emotions database), is implemented such that the phrases are compared to possible synonyms, and where a match is found, the corresponding emoticon is used. If, for example, the phrase “Best Purchase Ever” is considered to be synonymous with the phrase “Exactly Described”, then the emoticon 301 denoting that one is “pleased” with a transaction might be appropriate. Once the phrases are mapped to an emoticon, an operation 814 is executed that generates a feedback page, such as feedback page 203. The generation of feedback page 203 may be more fully illustrated below, but includes, for example, the generation of an HTML-based page that contains, for example, the phrases and their respective emoticons. In some embodiments, the phrases themselves may be highlighted in some manner (e.g., font size, bolded font, italicized font, underlined font, color font, or some other suitable method of highlighting) where a particular phrase may need to stand out relative to other phrases contained in a particular field (e.g., a positive feedback field 401, negative feedback field 405, neutral feedback field 408). Some embodiments may include using a method of gradation based upon import to highlight certain phrases such that the more unique the phrase is relative to some universe of phrases, the greater the degree of highlighting it may receive. The method of gradation based upon import may be more fully discussed below in the section relating to the generation of the Inverse Document Frequency (e.g., referenced as idf) value.
Feedback page 203 may then be received through the execution of an operation 815 that may receive feedback page 203 and display it. In some embodiments, the execution of operation 815 may be carried out through some type of application capable of interpreting an HTML-based page, such as, for example, a web browser or other suitable application that may interpret, for example, HTML or XML. Further, through the execution of operation 815, details relating to a particular piece of feedback may be displayed (see e.g., phrase 501 selected with mouse pointer 502 so as to display the specifics of the selected phrase in screen object or widget 503).
The second sub-trie 1430 provides an illustration of a suffix ordering relating to neutral feedback in the form of the phrase “Service Was About Average”. Shown is a root node 1407. Connected to root node 1407 are a number of child nodes including, for example, nodes 1408, 1409, 1410, and 1411. Some of these child nodes themselves have children, such that, for example, node 1408 has a child node 1412, node 1409 has a child node 1413, and node 1410 has a child node 1414. Again, some of these child nodes also have children, such that, for example, node 1412 has a child node 1415, and node 1413 has a child node 1416. Also illustrated is a leaf node 1417 that is a child of node 1415. Again traversing from the root node to the leaf nodes, there are a number of phrases that may be generated, such that, for example, traversing from the root node 1407 to the leaf node 1417, the phrase “Service Was About Average” may be generated. Likewise, traversing from the root node 1407 to the node 1416, the phrase “Was About Average” may be generated. Similarly, traversing the path from root node 1407 to the node 1414, the phrase “About Average” may be generated, and traversing from the root node 1407 to the node 1411, the phrase, or in this case word, “Average” may be generated.
Further illustrated is a third sub-trie 1431 that relates to negative feedback that a particular user has received, for example, in the form of the phrase “Not A Good Experience”. Shown as a part of sub-trie 1431 is a root node 1418 having a number of child nodes 1419, 1420, 1421, and 1422. These child nodes themselves may have one or more children. For example, node 1423 is a child of the node 1419, node 1424 is a child of the node 1420, and node 1425 is a child of the node 1421. Other children illustrated herein include, for example, node 1426 as a child of node 1423, and node 1427 as a child of node 1424. Additionally, a leaf node 1428 is illustrated as a child of the node 1426. Sub-trie 1431 may be traversed by following any one of a number of paths from the root node to various child or leaf nodes. The phrase “Not A Good Experience” may be generated through traversing the path from the root node 1418 to the leaf node 1428. Another path may be traversed from the root node 1418 to the child or leaf node 1427, wherein the phrase “A Good Experience” may be generated. Another phrase, “Good Experience,” may be generated by traversing the path from the root node 1418 to the leaf node 1425, and yet another phrase, or in this case word, “Experience” may be generated by traversing the path between the node 1418 and the leaf node 1422. As illustrated elsewhere, some other suitable type of data structure may be used in lieu of tries to organize and traverse strings and substrings associated with positive, neutral, or negative feedback.
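The suffix orderings illustrated above can be sketched with a small Python trie; representing nodes as nested dictionaries is an assumption made for brevity, not the node structure of the described embodiments.

```python
# Sketch of a suffix-ordered trie like the sub-tries described above: every
# suffix of a feedback phrase is inserted word by word, so traversing from
# the root to any marked node recovers a sub-phrase such as "good experience".
# Nested dictionaries stand in for the node/child structure for brevity.

END = "$"  # marker indicating a complete sub-phrase ends at this node

def build_suffix_trie(phrase: str) -> dict:
    root: dict = {}
    words = phrase.lower().split()
    for start in range(len(words)):          # one suffix per starting word
        node = root
        for word in words[start:]:
            node = node.setdefault(word, {})
        node[END] = True
    return root

def phrases(node, prefix=None):
    prefix = prefix or []
    if END in node:
        yield " ".join(prefix)
    for word, child in node.items():
        if word != END:
            yield from phrases(child, prefix + [word])

trie = build_suffix_trie("Not A Good Experience")
print(sorted(phrases(trie)))
# ['a good experience', 'experience', 'good experience', 'not a good experience']
```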
In some embodiments, another scoring model (e.g., a combined scoring model value) may be utilized in lieu of or in conjunction with the frequency count. This combined scoring model value may be generated based upon, for example, the product of a term's frequency (e.g., its term frequency (tf), or frequency count), and the number of documents that contain that term for a particular user (e.g., a buyer or seller) as compared to a given universe of documents (e.g., the universe of all user feedback) (e.g., idf value). In one embodiment, once the hash table is built for all phrases extracted, the combined scoring model value is then computed for each phrase using a tf value for global data, and an idf value for global data. In one embodiment, this tf*idf score may be stated as:
Combined scoring model value = tf*idf;
where tf = (number of times phrase “X” occurs for seller A)/(max term frequency for seller A); and
idf = log2((number of documents [e.g., feedback comments] in the dataset)/(number of times phrase “X” occurred in the entire dataset)).
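A direct transcription of this combined scoring model into Python might read as follows; the counts in the usage example are invented solely to exercise the formula and are not drawn from any actual dataset.

```python
# Sketch of the combined scoring model value (tf * idf) defined above.
# The counts below are invented purely to exercise the formula.

import math

def combined_score(phrase_count_for_seller: int,
                   max_term_frequency_for_seller: int,
                   documents_in_dataset: int,
                   phrase_count_in_dataset: int) -> float:
    tf = phrase_count_for_seller / max_term_frequency_for_seller
    idf = math.log2(documents_in_dataset / phrase_count_in_dataset)
    return tf * idf

# "An unparalleled Seller" appears 3 times for seller A, whose most frequent
# phrase appears 10 times; the dataset holds 100,000 feedback comments and
# the phrase occurs only 3 times in the entire dataset.
print(combined_score(3, 10, 100_000, 3))  # roughly 4.5
```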
In some embodiments, the combined scoring model value is stored into the previously-referenced data structure along with the term or phrase. Further, in some embodiments, the combined scoring model value is computed separately and is not stored into the data structure. In some embodiments, the combined scoring model value may be based upon some other suitable expression used to determine the frequency of a phrase and the terms contained therein.
In some embodiments, the idf value may be more significant where the phrases used to describe a buyer or seller more closely approximate, for example, the universe of all words or phrases to describe sellers in general. If “Happy_Shopper” is described as “An unparalleled Seller” three times in feedback related to them, and the phrase “An unparalleled Seller” is only used three times in the universe of all seller feedback, then the idf value will take on greater significance. In some cases, the use of an idf value may ensure the uniqueness of a phrase relative to the universe of phrases within which the phrase may be found. This idf value may also be used to determine the gradation based upon import of a phrase such that the phrase may be highlighted to a greater or lesser degree. For example, if the idf value is close to 1, then the phrase may appear bigger (e.g., in a larger font), be represented in a unique color, or have some other way of representing it as distinct from other phrases that may appear on a feedback page such as feedback page 203.
In some embodiments, the modification of the font size or other types of highlighting relating to a particular phrase may occur where the combined scoring model results in the generation of some value (e.g., the combined scoring model value). In cases where this value falls within one particular area of the gradation based upon import, the font size, color, or some other way of distinguishing the phrase from other phrases contained in the feedback will be applied. In some cases, the larger the combined scoring model value, the more uniquely highlighted the phrase will be. For example, under the threshold instructions, phrases with a combined scoring model value of >100 are entitled to receive 16 point font, while those <=100 are entitled to only 12 point font. And again, pursuant to the threshold instructions, phrases with a value of >200 may be entitled to being highlighted in the color red, but those with a value of <=200 are entitled to no special coloring other than black.
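The threshold instructions described here might be sketched in Python as below; the cut-offs mirror the example values above, while the inline HTML/CSS output format is an illustrative assumption rather than the markup of the described feedback page.

```python
# Sketch of applying gradation-based highlighting from the combined scoring
# model value, using the example thresholds above (>100 -> 16 pt, otherwise
# 12 pt; >200 -> red). The HTML/CSS output format is an illustrative assumption.

from html import escape

def highlight(phrase: str, score: float) -> str:
    font_size = 16 if score > 100 else 12
    color = "red" if score > 200 else "black"
    return (f"<span style='font-size:{font_size}pt; color:{color}'>"
            f"{escape(phrase)}</span>")

print(highlight("speedy shipping", 250))   # large red text
print(highlight("nice item", 80))          # normal black text
```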
Example Storage
Some embodiments may include the various databases (e.g., 132, 1104, or 1503) being relational databases, or in some cases On-Line Analytical Processing (OLAP) based databases. In the case of relational databases, various tables of data are created, and data is inserted into and/or selected from these tables using SQL or some other database-query language known in the art. In the case of OLAP databases, one or more multi-dimensional cubes or hypercubes containing multidimensional data, which data is selected from or inserted into using MDX, may be implemented. In the case of a database using tables and SQL, a database application such as, for example, MYSQL™, SQLSERVER™, Oracle 8I™, 10G™, or some other suitable database application may be used to manage the data. In the case of a database using cubes and MDX, a database using Multidimensional On-Line Analytical Processing (MOLAP), Relational On-Line Analytical Processing (ROLAP), Hybrid On-Line Analytical Processing (HOLAP), or some other suitable database application may be used to manage the data. These tables or cubes made up of tables, in the case of, for example, ROLAP, are organized into a Relational Data Schema (RDS) or Object Relational Data Schema (ORDS), as is known in the art. These schemas may be normalized using certain normalization algorithms so as to avoid abnormalities such as non-additive joins and other problems. Additionally, these normalization algorithms may include Boyce-Codd Normal Form or some other normalization or optimization algorithm known in the art.
Also shown is an extraction rules for phrases table 2005 containing various extraction rules for certain phrases. In some embodiments, while the tables 2001 through 2003 may reside on, for example, the feedback data store 132, the extraction rules for phrases table 2005 may reside as a part of, for example, the phrase extraction rules database 1503. Extraction rules may include rules regarding the number of words that may make up a phrase that may be able to be extracted, the frequency with which these words may appear across a plurality of phrases, such that these words may be used in a phrase that is extracted, or some other suitable set of rules. Also shown is a synonyms table 2007 that may reside as a part of the emotions database 1802. These synonyms may be in the form of strings or character data types wherein, for example, if a negative phrase is determined to be synonymous with a string in the emotions database 1802, then that phrase may be associated with an emoticon. Also shown is an illustrations table 2008 that may reside on, for example, the emotions database 1802. Illustrations table 2008 may contain graphic illustrations in the form of, for example, emoticons. The illustrations table 2008 may contain, for example, a Binary Large Object (BLOB) that contains a binary representation of the actual graphic illustration (e.g., the emoticon).
Further, a stop word table 2006 is shown that may reside as a part of, for example, the noise word database 1104. Stop word table 2006 may be optional, but when used or otherwise implemented, it may contain a plurality of stop words such that if one of these stop words is encountered in a phrase, that word is then removed from the phrase or, more generally, from a particular piece of feedback data (e.g., 126, 127 and/or 128). These stop words may be generated by, for example, a system administrator, and may be in the form of, for example, a string data type. Further illustrated is a unique key table 2004 that provides a unique key value for one or more of the tables illustrated herein (e.g., 2001-2003, 2005-2008). These unique key values may be in the form of, for example, an integer or some other uniquely identifying numeric value or plurality of unique identifying numeric values.
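A minimal relational sketch of the tables described above, using Python's built-in sqlite3 module, might look like the following; the column names, types, and the choice of SQLite are assumptions made for illustration, not the schema of the described data stores.

```python
# Sketch of the feedback, stop word, synonym, and illustration tables
# described above, using SQLite for brevity. Column names, types, and the
# choice of SQLite are illustrative assumptions.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE feedback (
    feedback_id INTEGER PRIMARY KEY,      -- unique key value
    seller      TEXT,
    type        TEXT,                     -- 'positive', 'neutral', 'negative'
    comment     TEXT
);
CREATE TABLE stop_words (word TEXT PRIMARY KEY);
CREATE TABLE synonyms (
    phrase   TEXT,
    emoticon TEXT                         -- label keyed to the illustrations table
);
CREATE TABLE illustrations (
    emoticon TEXT PRIMARY KEY,
    image    BLOB                         -- binary representation of the graphic
);
""")

conn.execute("INSERT INTO feedback VALUES (1, 'Happy_Shopper', 'positive', 'Speedy shipping')")
conn.execute("INSERT INTO stop_words VALUES ('the')")
print(conn.execute("SELECT comment FROM feedback WHERE type = 'positive'").fetchall())
```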
A Three-Tier Architecture
In some embodiments, a method is described as implemented in a distributed or non-distributed software application designed under a three-tier architecture paradigm, whereby the various components of computer code that implement this method may be categorized as belonging to one or more of these three tiers. Some embodiments may include a first tier as an interface (e.g., an interface tier) that is relatively free of application processing. Further, a second tier may be a logic tier that performs application processing in the form of logical/mathematical manipulations of data inputted through the interface level, and communicates the results of these logical/mathematical manipulations to the interface tier and/or to a backend, or storage tier. These logical/mathematical manipulations may relate to certain business rules, or processes that govern the software application as a whole. A third, storage tier, may be a persistent or non-persistent storage medium. In some cases, one or more of these tiers may be collapsed into another, resulting in a two-tier or even a one-tier architecture. For example, the interface and logic tiers may be consolidated, or the logic and storage tiers may be consolidated, as in the case of a software application with an embedded database. This three-tier architecture may be implemented using one technology, or as will be discussed below, a variety of technologies. This three-tier architecture, and the technologies through which it is implemented, may be executed on two or more computer systems organized in a server-client, peer to peer, or some other suitable configuration. Further, these three tiers may be distributed between more than one computer system as various software components.
Component Design
Some example embodiments may include the above described tiers, and the processes or operations that make them up, as being written as one or more software components. Common to many of these components is the ability to generate, use, and manipulate data. These components, and the functionality associated with each, may be used by client, server, or peer computer systems. These various components may be implemented by a computer system on an as-needed basis. These components may be written in an object-oriented computer language such that a component-oriented or object-oriented programming technique can be implemented using a Visual Component Library (VCL), Component Library for Cross Platform (CLX), Java Beans (JB), Enterprise Java Beans (EJB), Component Object Model (COM), Distributed Component Object Model (DCOM), or other suitable technique. These components may be linked to other components via various Application Programming Interfaces (APIs), and then compiled into one complete server, client, and/or peer software application. Further, these APIs may be able to communicate through various distributed programming protocols as distributed computing components.
Distributed Computing Components and Protocols
Some example embodiments may include remote procedure calls being used to implement one or more of the above described components across a distributed programming environment as distributed computing components. For example, an interface component (e.g., an interface tier) may reside on a first computer system that is located remotely from a second computer system containing a logic component (e.g., a logic tier). These first and second computer systems may be configured in a server-client, peer-to-peer, or some other suitable configuration. These various components may be written using the above-described object-oriented programming techniques, and can be written in the same programming language or in different programming languages. Various protocols may be implemented to enable these various components to communicate regardless of the programming language(s) used to write them. For example, a component written in C++ may be able to communicate with another component written in the Java programming language through use of a distributed computing protocol such as a Common Object Request Broker Architecture (CORBA), a Simple Object Access Protocol (SOAP), or some other suitable protocol. Some embodiments may include the use of one or more of these protocols with the various protocols outlined in the Open Systems Interconnection (OSI) model, or the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol stack model for defining the protocols used by a network to transmit data.
A System of Transmission Between a Server and Client
Some embodiments may utilize the Open Systems Interconnection (OSI) basic reference model or Transmission Control Protocol/Internet Protocol (TCP/IP) protocol stack model for defining the protocols used by a network to transmit data. In applying these models, a system of data transmission between a server and client, or between peer computer systems is described as a series of roughly five layers comprising: an application layer, a transport layer, a network layer, a data link layer, and a physical layer. In the case of software having a three tier architecture, the various tiers (e.g., the interface, logic, and storage tiers) reside on the application layer of the TCP/IP protocol stack. In an example implementation using the TCP/IP protocol stack model, data from an application residing at the application layer is loaded into the data load field of a TCP segment residing at the transport layer. The TCP segment also contains port information for a recipient software application residing remotely. The TCP segment is loaded into the data load field of an IP datagram residing at the network layer. Next, the IP datagram is loaded into a frame residing at the data link layer. This frame is then encoded at the physical layer, and the data is transmitted over a network such as an internet, Local Area Network (LAN), Wide Area Network (WAN), or some other suitable network. In some cases, the word ‘internet’ refers to a network of networks. These networks may use a variety of protocols for the exchange of data, including the aforementioned TCP/IP as well as ATM, SNA, SDI, or some other suitable protocol. These networks may be organized within a variety of topologies (e.g., a star topology) or structures.
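As a minimal illustration of application-layer data being handed to the transport layer described above, the short Python sketch below sends a payload over a local TCP connection and lets the operating system build the TCP segment, IP datagram, and link-layer frame; the loopback address and payload are assumptions for demonstration only.

```python
# Minimal sketch of application-layer data being handed to the TCP transport
# layer described above; the operating system performs the segment, datagram,
# and frame encapsulation. The loopback address and payload are illustrative
# assumptions.

import socket
import threading

def serve_once(server: socket.socket) -> None:
    conn, _ = server.accept()
    with conn:
        print("received:", conn.recv(1024).decode())

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # port chosen by the OS
server.listen(1)
threading.Thread(target=serve_once, args=(server,)).start()

with socket.create_connection(server.getsockname()) as client:
    client.sendall(b"feedback page request")  # application-layer payload
server.close()
```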
A Computer System
The example computer system 2100 includes a processor 2102 (e.g., a Central Processing Unit (CPU), a Graphics Processing Unit (GPU) or both), a main memory 2101 and a static memory 2106, which communicate with each other via a bus 2108. The computer system 2100 may further include a video display unit 2110 (e.g., a Liquid Crystal Display (LCD) or a Cathode Ray Tube (CRT)). The computer system 2100 also includes an alphanumeric input device 2117 (e.g., a keyboard), a User Interface (UI) cursor controller 2111 (e.g., a mouse), a disk drive unit 2116, a signal generation device 2153 (e.g., a speaker) and a network interface device (e.g., a transmitter) 2120.
The disk drive unit 2116 includes a machine-readable medium 2122 on which is stored one or more sets of instructions 2121 and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. The software may also reside, completely or at least partially, within the main memory 2101 and/or within the processor 2102 during execution thereof by the computer system 2100, the main memory 2101 and the processor 2102 also constituting machine-readable media.
The instructions 2121 may further be transmitted or received over a network 2126 via the network interface device 2120 utilizing any one of a number of well-known transfer protocols (e.g., HTTP, SIP).
In some embodiments, a removable physical storage medium is shown to be a single medium, and the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies described herein. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals.
Marketplace Applications
In some embodiments, a system and method is illustrated that facilitates the generation of feedback that is useful and non-cumulative by removing redundant information or only providing useful feedback information. Feedback regarding a particular seller of a good or service may only be useful insofar as it instructs a potential purchaser regarding specific information about a seller and/or the good or services that are being sold. In cases where this feedback is merely cumulative, the feedback is not informative. Feedback that is informative will attract more potential purchasers, since these purchasers may be more able to research the sellers and the goods or services being sold on, for example, a website.
The Abstract of the Disclosure is provided to comply with 37 C.F.R. § 1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
This application is a continuation of U.S. patent application Ser. No. 16/167,289 filed on Oct. 22, 2018, which claims priority to U.S. patent application Ser. No. 14/611,087 filed on Jan. 30, 2015, which claims priority to U.S. patent application Ser. No. 11/834,817 filed on Aug. 7, 2007, which claims priority to U.S. Provisional Patent Application Ser. No. 60/912,389 filed on Apr. 17, 2007, and U.S. Provisional Patent Application Ser. No. 60/912,077 filed on Apr. 16, 2007. The benefit of priority of each of these applications is claimed hereby. Further, the disclosures of these applications are incorporated by reference herein in their entirety.