1. Technical Field of the Invention
The present invention relates generally to electronic services that allow users to rate services and the like and to receive rating information about such services and the like and, more particularly, to a computerized system and method for collecting, authenticating and/or validating bonafide reviews of ratable objects.
2. Discussion of Related Art
The Internet and the World-Wide-Web are increasingly becoming a major source of information for many people. New information (good or bad) appears on the Internet constantly. In order to help people better determine the usefulness of this information, rating services exist to provide both rating and commenting information that may help people make better determinations about the quality or usefulness of brick and mortar organizations, products, services, Internet organizations, Internet Web Sites, and/or specific content within a web page.
The majority of these systems solicit the user's rating and opinions on a specific ratable object, such as a company, a product, a web site, an article or a web page. When a user is looking for rating information regarding a ratable object, either for online shopping or any other purpose, the system presents a rating for the object to the user that was created by a previous user or users. Most systems and services that exist today provide these ratings and reviews in an anonymous and/or semi-anonymous way, with minimal or no authentication to help determine whether the rating and review are from a legitimate user. At the very least, these systems do not ask for or require collection and verification of any identifiable information to determine whether a review for a ratable object is real or whether it might be fraudulent.
Some systems ask the user to submit and verify their email address. Some systems may even check to see if the email address is unique to ensure that no user submits more than one (1) review per ratable object. Still, these techniques do very little to stop potential fraudulent activity or to determine whether the rater has the authority to rate the object. Once this information is collected for the ratable object, these ratings and reviews are presented as reviews that other users can use in order to make future transaction decisions about the object being rated, which could be misleading if the data source has a vested interest in presenting false data about the object.
When these systems present a rating to a user, there is no consideration of the way the rating information was collected, who the rater might be, or whether the rater has a vested interest in rating a certain way. Since people rely on this information, greater prevention techniques are needed to ensure that these ratings and reviews can be trusted as reliable reviews from actual users with experience with the ratable object.
Some systems use reviewer vetting techniques, such as verifying that the user has access to an email address. For example, Yelp and Yahoo ask reviewers of a business to verify their email address once, after which a user name and password are provided to the reviewer to log into the account for future reviews. Once this email verification is completed, the reviewer's ratings and reviews will be posted as trusted reviews. The assumption is that the reviewer has actually had a transactional experience and/or is not a fraudulent reviewer of the site. Still, other systems, such as Bazaarvoice, Inc., do not require authentication of the rater when a rater rates or reviews a product or service. Furthermore, none of these services today tries to detect that these transactions might be fraudulent. There is a need for a reliable system to authenticate and verify raters and the ratings they submit for a ratable object, in order to detect those transactions that may be fraudulent.
The present invention provides a system and method for generating bonafide ratings of ratable objects by identifying fraudulent activity and evaluating transactional relationships of raters/reviewers to ratable objects. The system and method provide trustworthy rating and review information to users relying on this information to determine if they should conduct future transactions with the ratable object in question. In a multi-stage vetting process, the system automatically evaluates a rater or reviewer's profile information, the rating submitted and data concerning the ratable object and produces a bonafide rating. Bonafide ratings may then be incorporated into a rating database, accessed by users interested in obtaining a trustworthy rating of a ratable object such as a company, person, website, product, service, virtual ratable object etc., or utilized for any variety of purposes.
Under one embodiment of the invention, a method, performed on a computer system, provides a computer-based service to automatically evaluate and determine authenticity of a rating. The method includes receiving input at the computer system with rating information, the rating information including a rating for a specified ratable object and identification data for the ratable object. The method includes receiving input at the computer system with rater profile information, the rater profile information including at least one of identification information and usage information associated with an active user of the computer based service. The method includes performing at least one evaluation step, the at least one evaluation step evaluating the received input at the computer system. Evaluating includes determining a risk level associated with the rating information, the rater profile information, and a time frame associated with receiving input. The method includes determining, based on the risk level, an evaluation outcome message. The system communicates to the active user the evaluation outcome message, the evaluation outcome message including at least one of an acceptance message, an information request message, and a rejection message. Upon communication of the acceptance message, the computer-based service accepts the rating for the specified ratable object for storage in a rating information database. Upon communication of the information request message, the computer-based service implements a verification process. Upon communication of the rejection message, the computer-based service rejects the rating for the specified ratable object for storage in the rating information database.
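As an illustrative sketch only, the evaluation flow described in this embodiment, from received inputs through a risk level to one of the three outcome messages, might be expressed as follows. The function names, the specific risk rules, and the 20-second threshold are assumptions for illustration, not taken from the specification.

```python
from enum import Enum

class Outcome(Enum):
    """The three evaluation outcome messages described above."""
    ACCEPTANCE = "acceptance"             # rating stored in the rating database
    INFORMATION_REQUEST = "info_request"  # verification process implemented
    REJECTION = "rejection"               # rating rejected for storage

def determine_risk(rating: dict, profile: dict, seconds_to_submit: float) -> str:
    """Combine rating information, rater profile information, and the time
    frame associated with receiving the input into a risk level.
    The specific rules here are illustrative assumptions."""
    risk = "low"
    if seconds_to_submit < 20:            # implausibly fast submission
        risk = "medium"
    if rating.get("stars") in (1, 2):     # negative ratings draw extra scrutiny
        risk = "medium"
    if profile.get("matches_owner"):      # rater appears tied to the ratable object
        risk = "high"
    return risk

def evaluation_outcome(risk: str) -> Outcome:
    """Map the determined risk level to an evaluation outcome message."""
    return {"low": Outcome.ACCEPTANCE,
            "medium": Outcome.INFORMATION_REQUEST,
            "high": Outcome.REJECTION}[risk]
```

Under these assumed rules, a five-star review submitted in an ordinary time frame by an unrelated rater yields an acceptance message, while a submission whose profile matches the ratable object's owner yields a rejection message.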
According to one aspect, the ratable object includes one of a business, a person, a product, a URI, a website, web page content, a virtual object, a virtual product, or a virtual service.
According to another aspect, receiving input at the computer system includes receiving electronic information by way of one of a URI, an Internet capable application, Javascript, SMS texting, Telephone, Flash object, and application program interface (API).
According to another aspect, communicating to the active user an evaluation outcome message includes transmitting electronic information by way of one of a URI, an Internet capable application, Javascript, SMS texting, Telephone, Flash object, and application program interface (API).
According to another aspect, the evaluation step includes classifying the rating as one of positive and negative.
According to another aspect, the evaluation step includes evaluating the rater profile information to determine whether the active user is an ad hoc user.
According to another aspect, the evaluation step includes evaluating the rater profile information to determine whether the active user is a recruited user.
According to another aspect, the evaluation step includes evaluating usage information to determine a usage history via at least one of tracking an IP address, applying a cookie and requesting usage information from the active user.
According to another aspect, evaluating a time frame associated with receiving input includes determining whether an upper or lower time limit for receiving input at the computer system with rating information is exceeded.
According to another aspect, evaluating the rating information includes determining whether an upper or lower text limit for rating information is exceeded.
According to another aspect, determining a risk level includes identifying a combination of rating information, the rater profile information, and time frame associated with receiving input as high risk.
According to another aspect, determining a risk level includes identifying a combination of rating information, the rater profile information, and time frame associated with receiving input as medium risk.
According to another aspect, determining a risk level includes identifying a combination of rating information, the rater profile information, and time frame associated with receiving input as low risk.
According to another aspect, the verification process includes automatically communicating to the active user via at least one of an SMS message, an e-mail message, a telephone call, a facsimile and a postal message, a request for additional information.
According to another aspect, the request for additional information includes one of active user confirmation, additional identification information and additional usage information associated with the active user.
According to another aspect, upon communication of the acceptance message, the method further includes assigning a transaction identity to the rating information, the transaction identity comprising the risk level, the evaluation outcome message, the rater profile information, and the time frame associated with receiving input.
According to another aspect, upon communication of the rejection message, the method further comprises assigning a transaction identity to the rejected rating information, the transaction identity comprising the risk level, the evaluation outcome message, the rater profile information, and the time frame associated with receiving the input.
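The transaction identity described in the two preceding aspects, comprising the risk level, the evaluation outcome message, the rater profile information, and the time frame associated with receiving the input, could be sketched as a simple record. The field names and the example values are assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TransactionIdentity:
    """Record assigned to an accepted or rejected rating: the risk level,
    the evaluation outcome message, the rater profile information, and the
    time frame associated with receiving the input."""
    risk_level: str
    outcome_message: str
    rater_profile: dict
    received_at: datetime

# hypothetical example: identity assigned to a rejected, high-risk submission
identity = TransactionIdentity(
    risk_level="high",
    outcome_message="rejection",
    rater_profile={"email": "rater@example.com"},
    received_at=datetime.now(timezone.utc),
)
```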
A mechanism is provided to automatically identify bonafide raters and reviewers of ratable objects, such as a company, a product, a person, a URI, a web site, web page content, a virtual object, virtual products, or virtual services, so that rating information may be trusted from and shared with other users of the system. Data is relayed via multiple protocols such as HTTP (web browser), SMS (texting), telephone (phone lines) and other standard and proprietary methods and protocols like Skype and public and/or private instant messaging systems. A computerized system generates bonafide ratings by executing various computer implemented algorithms to evaluate the relationship between the rater/reviewer and the ratable object, using various characteristics of the rating itself to trigger each evaluation process.
The computerized system provides fraud prevention for computerized reviews/ratings and generates legitimate, trustworthy, and/or bonafide ratings/reviews by identifying biased ratings/reviews. The method seeks to isolate fraudulent reviews by a series of mechanisms, one of which includes the identification of vested interests. The method is based, at least in part, on the fundamental idea that vested interests may encourage users of a rating system to produce biased reviews. Thus in certain circumstances, a rater/reviewer may submit an inaccurately positive rating/review of a ratable object when that rater/reviewer seeks to benefit from a positive rating/review. By way of an introductory example, an owner of a business or service might be inclined to submit a positive review of his or her business or service to help generate an inflated, good reputation. Conversely, in certain circumstances, a rater/reviewer may submit an inaccurately negative rating/review of a ratable object when that rater/reviewer seeks to benefit from a negative rating/review. By way of example, an owner of a business or service might be inclined to submit a negative review of his or her competitor's business or service to help generate a deflated, bad reputation for that competitor, thereby improving the relative appeal of his or her own business or service.
The computerized system for generating bonafide ratings goes about identifying potentially biased reviews by executing a series of authentication and verification processes. The processes are structured to identify the fraud risk level associated with the rating/review. Those processes aimed at identifying at least some likelihood of vested interest include the execution of algorithms that compare data for the ratable object to data for the rater/reviewer. Those processes aimed at identifying different manifestations of fraud may examine time frames associated with generating and submitting ratings, origins of the rater/reviewer's use of the rating system, and a variety of other parameters. The computerized system may combine any variety of these processes and employ communication mechanisms to request confirmation steps, additional information from raters/reviewers, etc. In sum, a multi-step, multi-dimensional process is implemented to identify and minimize fraudulent ratings, while creating a legitimacy measure for those ratings that successfully pass the authentication and verification process. The multi-step, multi-dimensional rating/review process is described in detail below.
A reviewer typically undergoes various authentication levels of vetting in order to submit a review. The authentication process may request only a minimal amount of data from the rater/reviewer. Alternately, multiple types of data may be requested from the rater/reviewer and a more extensive authentication process executed. In each case, the rater/reviewer provides data which is then verified, based on pre-determined system triggers. Certain data inputs may initiate a process which requests additional information about the rater/reviewer that may be provided and verified. Each such variation is discussed more fully in the sections that follow.
Under certain embodiments, a reviewer's activity is monitored and analyzed via a series of detection algorithms. The detection algorithms are constructed to meet a variety of application parameters. The detection algorithms are used to determine if a rater/reviewer might have a vested interest to provide either a positive rating/review or a negative rating/review.
The system and method for determining bonafide ratings and reviews rely on the system's applied levels of authentication for the rater/reviewer and on when to apply each authentication level, based on the various fraudulent threats of misrepresenting the rating and review content. The system relies on the user's rating/review submission behavior to identify how and when the system applies the authentication methods a rater/reviewer must pass in order to successfully submit a rating or review for a ratable object. This determination is key, insofar as vested interests are understood to bias rating outcomes either towards inaccurately negative or inaccurately positive outcomes.
The service authenticates or validates bonafide reviewers of ratable objects (organizations, products, services, websites, and other objects). In order to authenticate a reviewer, the service collects different elements of information for a particular reviewer. Where most rating services would simply collect basic information from a reviewer, such as an email address, the described embodiment goes further and continues to monitor the reviewer information. At the outset, the service collects the standard information, such as the reviewer's email address. But in certain cases where the review/rating warrants more checks, the service performs additional checks, such as placing an automated telephone call to the reviewer and recording the information received, to provide an additional contact point beyond what is already on file. In addition, at any point, and this could be randomly selected, the user could be taken through an extended authentication process where the reviewing/rating service performs additional authentication steps to validate the authenticity of the review information. For example, an automated telephone call could be placed to a new or predetermined phone number of the reviewer. Or, an additional email message, SMS, or other mechanism could be used and may be accepted and confirmed by the reviewer.
In order to authenticate and validate a review, a risk evaluation system is employed. The risk evaluation system is designed to differentiate ratings that likely represent potential fraudulent activity from those which are not likely to comprise fraudulent activity. Under one embodiment, fraudulent ratings/reviews are measured in a six-category framework, in which four categories represent potential fraudulent activity. The six categories are defined as either positive or negative ratings/reviews coming from a user such as a customer, a party with a vested interest in the ratable object's success such as an owner, or a party with a vested interest in the ratable object's failure such as a competitor. This six-category system helps to show that four of the six categories likely represent potential fraudulent activity. The first category is one in which a ratable object owner could be submitting a review for his or her own ratable object. Because the ratable object owner likely has a vested interest in submitting a positive rating or review for the company, product, etc., and because future users of these ratings and reviews may depend on this information to determine whether a transaction should occur with the ratable object, the system will flag this positive rating/review. The system will then stop the rating/review, or have the rating/review undergo a more intense vetting process. The second, third, and fourth categories are those in which a competitor or agent of the ratable object could be submitting a review of the ratable object. Because the competitor of the ratable object has a vested interest in submitting a negative review for the company, product, etc., and future users of these ratings/reviews may depend on this information being objective to determine whether a transaction should occur with the ratable object, RatePoint will flag this transaction to stop the rating/review, or have the rating/review undergo more intense vetting.
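One reading of the six-category framework is a matrix of rater role (customer, owner, competitor) against rating polarity (positive, negative), with the four owner/competitor cells flagged as potentially fraudulent and the two customer cells presumed legitimate. The sketch below encodes this reading; the role names and the exact flagging rule are assumptions for illustration.

```python
from itertools import product

ROLES = ("customer", "owner", "competitor")  # rater's relationship to the object
POLARITIES = ("positive", "negative")        # classification of the rating

def potentially_fraudulent(role: str, polarity: str) -> bool:
    """Customer reviews of either polarity are presumed legitimate; the
    four cells involving an owner or a competitor carry a vested interest
    and are flagged for vetting or blocking."""
    return role in ("owner", "competitor")

# enumerate the flagged cells of the 3x2 matrix
flagged = [(role, polarity) for role, polarity in product(ROLES, POLARITIES)
           if potentially_fraudulent(role, polarity)]
```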
The rating system may differentiate ratings that likely represent potential fraudulent activity from those which are not likely to comprise fraudulent activity by classifying the rating/review under a risk standard. The system will look for fraudulent activity and classify each rating/review transaction as having a low risk, a medium risk, or a high risk of fraudulent activity. If the transaction has a low risk of being fraudulent, the system will vet the reviewer with a minimum set of standards. If the transaction has a medium risk of being fraudulent, then the system will vet the reviewer with the minimum set of standards, plus an additional set of standards that includes out-of-band verification checks, creating a two-factor authentication check. If the transaction has a high risk of being fraudulent, then the system will simply block the transaction from entering the system and notify the reviewer of the situation.
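The low/medium/high dispatch described above might be sketched as follows. The concrete vetting steps (email verification as the minimum standard, a telephone call back as the out-of-band factor) are assumptions drawn from the surrounding description, not a definitive implementation.

```python
def vetting_plan(risk):
    """Return the list of vetting steps for a rating/review transaction,
    or None when the transaction is blocked outright."""
    minimal = ["verify email address"]  # the minimum set of standards
    if risk == "low":
        return minimal
    if risk == "medium":
        # the out-of-band check creates a second authentication factor
        return minimal + ["out-of-band telephone call back"]
    if risk == "high":
        return None  # block the transaction and notify the reviewer
    raise ValueError(f"unknown risk level: {risk!r}")
```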
The collection of reviews and ratings is accomplished via multiple processes. The processes may be performed via the web and a browser using a standard web form, by sending an SMS message to the service, via a telephone call placed either by the reviewer or automatically by the reviewing/rating service to the user, or via email, fax message, postal mail or other means. All the collected reviews/ratings are stored and made available to the participating businesses via a centralized ASP environment. In addition, the reviewing/rating system collects reviews and ratings relating to the participating businesses from other available resources and brings those into the ASP service, thereby making the ASP service a central location for all review, rating, and reputation information for a member company.
Various parameters are used to determine whether a review/rating should be further scrutinized to determine its validity. If, for example, the reviewer submits a negative review, there is a higher chance that the reviewer might be a competitor or a competitor's agent. In such a case, the reviewer shall be placed in a process that warrants additional vetting. If the reviewer submits a review with the same email address as an existing rating for a ratable object that is stored by the system, then the reviewer shall be allowed to replace the previous rating/review. The user will be blocked from adding an additional review. This analysis is adapted to limit an individual reviewer from independently biasing a collective rating of a ratable object. A similar process may be enacted via telephone or SMS. Under another aspect, if the reviewer submits a review with the same telephone or SMS number and proves access to this telephone or SMS number, then the reviewer shall be allowed to replace the previous rating/review.
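The replace-rather-than-duplicate rule for a verified contact point, whether an email address or a proven telephone/SMS number, can be sketched as keyed storage. The storage shape and normalization are assumptions for illustration.

```python
def submit_review(store: dict, object_id: str, contact: str, review: str) -> dict:
    """Keep at most one review per (ratable object, verified contact point).
    A resubmission from the same email address or the same proven
    telephone/SMS number replaces the prior review rather than adding a
    second one, limiting one reviewer's ability to bias a collective rating."""
    store[(object_id, contact.strip().lower())] = review
    return store

# hypothetical usage: the second submission replaces the first
reviews = {}
submit_review(reviews, "acme-co", "rater@example.com", "Great service")
submit_review(reviews, "acme-co", "Rater@Example.com", "Revised: good service")
```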
Yet other criteria are used to refine the rating system. In one embodiment, if the reviewer creates and submits a review in under a pre-determined amount of time, and/or the writing speed implied by the review's length and the time taken to write it exceeds a pre-defined words-per-minute rate, then the review shall be placed in a process that warrants additional vetting. Under another embodiment, if the reviewer submits a review that is too long or too short as defined by the system, then the reviewer shall be placed in a process that warrants additional vetting. Under another aspect, if the organization receives more reviews per visitor to the site than the system allows, then the review shall be placed in a process that warrants additional vetting. In certain instances, a ratable object's first set of reviews within a pre-determined timeframe is flagged for additional vetting.
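The timing and length criteria above can be sketched as a single check. All thresholds here (30 seconds, 120 words per minute, 5-500 words) are assumptions for illustration; the specification leaves the limits to the system's configuration.

```python
def needs_additional_vetting(text: str, seconds_to_write: float,
                             min_seconds: float = 30.0,
                             max_wpm: float = 120.0,
                             min_words: int = 5,
                             max_words: int = 500) -> bool:
    """Flag a review for additional vetting when it was submitted
    implausibly fast, implies an excessive words-per-minute rate (e.g. a
    pasted review), or falls outside the system-defined length limits."""
    words = len(text.split())
    if seconds_to_write < min_seconds:
        return True
    if words / (seconds_to_write / 60.0) > max_wpm:
        return True
    return words < min_words or words > max_words
```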
The system employs various methods to track the manner in which a rating or review is collected. Various steps are taken to ensure a quality collection process. If, for example, the reviews are collected in an ad hoc and free-form manner, as opposed to through some of the automated tools that the system provides, then the system will flag the reviews as suspect. Examples of automated tools provided by the system include email requests for reviews sent out to the organization's customer base. In certain embodiments, the system tracks the IP address the organization used when it signed up for an account with the system. By comparing the IP addresses of all future ratings and reviews with the organization's signup IP address, the system can try to determine whether an organization is trying to review itself or its products, services, etc. In that case, the system can stop the submission and inclusion of the rater's rating or review because it is likely not objective, as the source of the rating is highly likely to be the organization, which is likely to have a vested interest in submitting a biased positive review for the rated object.
Unique identifiers may also be used to improve the robustness of the vetting process. In certain embodiments, the system applies a cookie (a unique code applied by the system to determine identity, setting, and preference data for future return visits to the system) to the web browser of the organization when it previously signed up for an account with the system and another cookie when it actually administers their account on the system. By analyzing all the future ratings and reviews while looking for this same cookie on the rater's web browser, the system can selectively stop the submission and inclusion of the rater's rating or review. This feature is employed when the reviewer is evaluated to be likely to have a vested interest in a particular rating—e.g. the source of the rater is highly likely to be the organization which is likely to have a vested interest in submitting a biased positive review for the rated object. The system may then transfer the reviewer to undergo further authentication because the review is deemed likely not objective.
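The IP-address and cookie comparisons described in the two preceding paragraphs might be sketched together as follows. The field names (`signup_ip`, `account_cookies`) and the example values are hypothetical.

```python
def vested_interest_suspected(submission: dict, org_record: dict) -> bool:
    """Compare a rating submission against data captured when the rated
    organization signed up for and administered its account. A match on
    the signup IP address or on an account cookie suggests the organization
    may be reviewing itself, so the submission is routed to further
    authentication rather than accepted as objective."""
    if submission.get("ip") and submission["ip"] == org_record.get("signup_ip"):
        return True
    cookies = org_record.get("account_cookies", set())
    return submission.get("cookie") in cookies

# hypothetical organization record captured at signup/administration time
org = {"signup_ip": "203.0.113.7", "account_cookies": {"adm-7f3a"}}
```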
Each feature may be selected and employed to better provide a mechanism for automatically identifying bonafide raters and reviewers of ratable objects, towards the eventual goal of delivering trustworthy ratings and reviews. Ratable objects, as used herein, include but are not limited to a company, a product, a person, a URI, a web site, web page content, a virtual object, virtual products, and/or virtual services. In the exemplary system, company ratings/reviews are used. For purposes of illustration, the sections that follow discuss a system and authentication method for generating bonafide user ratings on businesses. The ensuing discussion should not be considered limiting, as the system and methods will also apply to any other ratable objects and entities.
A user can submit a particular rating on an entity through an application or an element and/or entity within or accessible via a URI and/or application, either through an Internet capable application like a web browser (via a toolbar, web page or web portal), Javascript, SMS texting, telephone, Flash object, application programming interface (API) or any other protocol or method that can call and display content over a network and/or the Internet. A user can be a registered user of the system or be an anonymous user, i.e., the system does not have the user's identification information. Rating information may be trusted from and shared with other users of the system by way of multiple protocols including but not limited to HTTP (web browser), SMS (texting), telephone (phone lines) and other standard and proprietary methods and protocols like Skype and public and/or private instant messaging systems.
A variety of levels of authentication and verification of the reviewer/rater may be used. In the aforementioned embodiments, a reviewer is requested to undergo various authentication levels, or levels of vetting, in order to submit a review. The authentication process may request only a minimal amount of data from the rater/reviewer, which is then provided and verified based on system triggers. Or, the process may request additional information about the rater/reviewer that should be provided and verified, as discussed more fully in the sections that follow.
Under certain embodiments, a reviewer's activity is monitored and analyzed via a series of detection algorithms used to determine if a rater/reviewer might have a vested interest to provide either a positive rating/review or a negative rating/review.
The system and method for determining bonafide ratings and reviews rely on the system's applied levels of authentication for the rater/reviewer and on when to apply each authentication level, based on the various fraudulent threats of misrepresenting the rating and review content. The system relies on the user's rating/review submission behavior to identify how and when the system applies the authentication methods a rater/reviewer must pass in order to successfully submit a rating or review for a ratable object.
In various embodiments, the disclosed system and method determine how and when each authentication method is used to render bonafide user ratings on businesses. These authentication methods are discussed more fully in the sections that follow. The system can be applied to an entity such as a company, a person, a product, a URI, a web site, web page content, a virtual object, virtual products, or virtual services. In one exemplary system, company ratings/reviews are used. In various embodiments, a user can submit a particular rating on an entity through an application or an element and/or entity within or accessible via a URI and/or application, either through an Internet capable application like a web browser (via a toolbar, web page or web portal), Javascript, SMS texting, telephone, Flash object, application programming interface (API) or any other method that can call and display content over a network and/or the Internet. A user can be a registered user of the system or be an anonymous user, i.e., the system does not have the user's identification information. A detailed description of the above-mentioned system methods and features is now provided with reference to the figures.
The host computer shown in
The Rater/Reviewer Accepted Submission Protocol part 1001 consists of four logical methods by which a user can submit a rating/review: a web browser based submission 101, a telephone based submission 102, an SMS (Short Message Service) based submission 103, and any other standard and proprietary protocols 104. The Ratings/Review Module 105 collects and processes the rating/review data. A database 108 is used to store all the information.
A user 100 using an HTTP web browser or email client 101 to rate/review a ratable object will first submit a rating/review to the system. A user normally initializes the process by clicking on a hyperlinked image or textual hyperlink, or may go directly to the appropriate URL to activate the rating/review process. The Ratings/Review Module 105 collects and processes the rating/review data. A database 108 is used to store all the information.
A user 100 may use a voice telephone based device 102 to rate/review a ratable object. A user normally initializes the rating/review process with a telephone network enabled device by dialing a predetermined telephone number and inserting a unique numeric code for the ratable object. The system then instructs the user to submit the rating/review by using both the telephone keypad and the rater/reviewer's voice to collect the rating/review. The Ratings/Review Module 105 collects and processes the rating/review data. A database 108 is used to store all the information.
A user 100 may use an SMS (short message service) based device 103 to rate/review a ratable object. A user normally initializes the rating/review process with a mobile phone enabled SMS device by inserting a unique numeric code for the ratable object ID and sending it to a predefined telephone number or Short Code (a 5 or 6 digit number that is used in the United States to collect and send SMS messages). The Ratings/Review Module 105 collects and processes the rating/review data. A database 108 is used to store all the information.
A user 100 may also use a standard or proprietary protocol 104 to rate/review a ratable object. A developer may use the system API to create a new rating/review process for a protocol that is either standard or proprietary. The Ratings/Review Module 105 collects and processes the rating/review data. A database 108 is used to store all the information.
The Rating/Reviews Processing Application part 1002 collects, verifies and analyzes all user input and stores it in a database 108. It consists of three modules: a Rating and Review Module 105 that collects the user 100 rating/review data, an Authentication Module 106 that verifies and determines which user data to collect, and a Fraud Detection Module 107 that analyzes user data to determine whether potential fraudulent activity could exist. The database 108 is shared by the Rating/Review Processing Application 1002.
The Rating/Reviews Module 105 collects the rating/review data of a user 100 and stores it in a database 108. The module dynamically determines which data to collect based on the analysis of the Fraud Detection Module 107.
The Authentication Module 106 verifies the user's rating/review data to ensure the data is real. The Authentication Module 106 also dynamically instructs the Rating/Review Module 105 to collect more or fewer data elements from the rater/reviewer 100, depending on the analysis of the user data by the Fraud Detection Module 107.
The Fraud Detection Module 107 analyzes the user's rating/review data to determine if potential fraudulent activity is occurring. The Fraud Detection Module has many algorithms that can potentially determine fraudulent activity. If one or more of these algorithms indicates that potential fraudulent activity is occurring, the module notifies the Authentication Module 106, which may take appropriate steps to ask for and verify additional data from the rater/reviewer user 100 to reduce the fraudulent activity. Methods for determining potential fraudulent activity are described below.
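The notification path between the Fraud Detection Module 107 and the Authentication Module 106 might be sketched as follows. The class interfaces and the example detection algorithm are assumptions for illustration, not the specification's implementation.

```python
class AuthenticationModule:
    """Receives notifications and requests additional data from the rater."""
    def __init__(self):
        self.verification_requests = []

    def request_additional_data(self, reason: str) -> None:
        self.verification_requests.append(reason)

class FraudDetectionModule:
    """Runs each detection algorithm over a submission; any hit notifies
    the Authentication Module so it can ask for and verify more data."""
    def __init__(self, auth: AuthenticationModule, algorithms):
        self.auth = auth
        self.algorithms = algorithms  # callables returning a reason string or None

    def analyze(self, submission: dict) -> bool:
        suspicious = False
        for algorithm in self.algorithms:
            reason = algorithm(submission)
            if reason:
                self.auth.request_additional_data(reason)
                suspicious = True
        return suspicious

# hypothetical detection algorithm: negative ratings draw extra scrutiny
def negative_rating_check(submission):
    return "negative rating" if submission.get("stars", 3) <= 2 else None

auth = AuthenticationModule()
fraud = FraudDetectionModule(auth, [negative_rating_check])
```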
This third source may have a vested interest in submitting a negative rating/review. The submission of a biased review may result in a rating that misrepresents the ratable object and may also lead a future user of the rating or review to make a misinformed decision about the ratable object. The second group identifies whether the rating/review is a Positive Review part 3002. The system identifies positive reviews as a 3, 4 or 5 in the Rating selection 201. The third group identifies whether the rating/review is a Negative Review part 3003. The system identifies a negative review as a 1 or 2 in the Rating selection 201. The system focuses on the two primary rating/review fraud threats. The first is a positive review where the evaluation indicates a medium to high risk that the Ratable Object Owner 301 may be submitting a positive rating/review 304 to the system. Under this situation, the information being submitted will undergo additional authentication and potentially be stopped.
The system will then determine whether the rating/review is positive or negative part 804. A positive rating/review is marked with a 3, 4 or 5 rating 201, while a negative rating/review is marked with a 1 or 2 rating 201. If the rating is negative, the system determines that the fraud risk is medium and allows the rater/reviewer to continue to submit the rating/review, but subjects it to an additional amount of vetting, including a real time telephone call back to the rater/reviewer part 805. The system requests the user verify their email address by sending an email with a verification code or hyperlink as seen in
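The classification and routing logic described above can be sketched as follows. The function names and the specific vetting step labels are illustrative assumptions; the rating thresholds (3-5 positive, 1-2 negative) come directly from the specification.

```python
def classify_rating(rating: int) -> str:
    """Per the scheme above: 3, 4 or 5 is positive (3002); 1 or 2 is negative (3003)."""
    if rating in (3, 4, 5):
        return "positive"
    if rating in (1, 2):
        return "negative"
    raise ValueError("rating must be 1-5")

def vetting_steps(rating: int, is_owner: bool) -> list:
    # Negative reviews carry medium risk: require a telephone call-back in
    # addition to email verification. Owner-submitted positive reviews (the
    # first threat) get extra authentication and may be stopped entirely.
    steps = ["email_verification"]
    if classify_rating(rating) == "negative":
        steps.append("phone_callback")
    elif is_owner:
        steps.append("block_or_escalate")
    return steps
```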
The system will then ask the user if they are the owner of the ratable object part 1704 as exemplified in
In the first path, when a user submits a positive review part 2101 via a web browser 101, a web page
A second alternative path is executed when a user submits a negative review part 2101 via a web browser 101, a web page
If the rating is negative, the system determines that the fraud risk is medium and allows the rater/reviewer to continue to submit the rating/review, but implements an additional amount of vetting. The additional vetting can include a real time telephone call back to the rater/reviewer part 2107. The system requests the user verify their email address by sending an email with a verification code or hyperlink as seen in
A third alternative path is enacted when a user submits a positive user rating/review 3002 from a Ratable Object Owner 301 via the Rating/Review submission portal & protocols 1001 of the system. When a user submits a review part 2101 via a web browser 101, a web page
The system will then ask the user if they are the owner of the ratable object, 2109, as exemplified in
The additional vetting process includes a real time telephone call back to the rater/reviewer part 2107. The system requests the user verify their email address by sending an email with a verification code or hyperlink as seen in
A fourth alternative path is implemented when a user submits a positive user rating/review 3002 from a Ratable Object Owner 301 via the Rating/Review submission portal & protocols 1001 of the system. When a user submits a review part 2101 via a web browser 101, a web page
The first algorithm 2201 works as follows: if a Rater/Reviewer's IP address is determined to be the same as the IP address of the ratable object owner recorded during the signup stage, or the IP address recorded from the ratable object owner during a login event to manage their ratable object account, then the system implements algorithm 2201. The system determines whether the time elapsed between when these two IP addresses were last recorded is less than or equal to X hours (as defined in the system
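Algorithm 2201 can be sketched as a simple IP-plus-time-window check. The 2-hour default for "X hours" and the function name are assumptions for illustration; timestamps are treated as Unix epoch seconds.

```python
# Sketch of algorithm 2201: flag a review when the reviewer's IP matches
# the owner's recorded IP and the two observations fall within a
# configurable window of X hours.

MAX_GAP_HOURS = 2.0  # "X hours" as configured in the system (assumed value)

def ip_match_fraud(reviewer_ip: str, reviewer_seen: float,
                   owner_ip: str, owner_seen: float,
                   max_gap_hours: float = MAX_GAP_HOURS) -> bool:
    """Return True when the IP addresses match within the time window."""
    if reviewer_ip != owner_ip:
        return False
    gap_hours = abs(reviewer_seen - owner_seen) / 3600.0
    return gap_hours <= max_gap_hours
```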
The second algorithm 2202 works as follows: if a Rater/Reviewer cookie (a unique code that the system applies to the user's web browser) is determined to be the same as a cookie that the system applied to the ratable object's owner during signup or during a login event to manage their ratable object account, then the system blocks the reviewer 1903. When the system blocks the reviewer 1903 it provides a notification, as exemplified in
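Algorithm 2202 reduces to a set-membership test against the cookies the system has issued to the owner. The names below are illustrative assumptions.

```python
# Sketch of algorithm 2202: if the browser cookie on the incoming review
# matches a cookie the system issued to the ratable object's owner
# (at signup or a later login), block the reviewer (step 1903).

def check_cookie(review_cookie: str, owner_cookies: set) -> str:
    """Return 'blocked' when the reviewer presents an owner cookie."""
    return "blocked" if review_cookie in owner_cookies else "allowed"
```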
For example, algorithm 2301 determines whether a rater/reviewer submits a negative review. The first algorithm 2301 works as follows: if the rater/reviewer submits a negative review, which is a rating 201 of 1 or 2, the rater/reviewer will go through the additional vetting process 805 which is exemplified in
Algorithm 2302 determines whether to flag reviews as suspect on the basis of whether they were collected in an ad hoc, free-form manner rather than with one of the automated tools that the system provides—e.g., email requests for reviews sent out to the organization's customer base. The second algorithm 2302 works as follows: if the ratable object receives a rating/review without any of the system's proactive tools having been used to solicit reviews, then the system will redirect the rater/reviewer to go through the additional vetting process 805 which is exemplified in
The proactive tools used in 2302 are the system's email solicitation, images, web pages and/or pop-ups. Email solicitation allows the owner of the ratable object to request, via email, that the rater/reviewer actually rate/review the ratable object. The Site Seal or embedded web page or pop-up is an image, page or pop-up placed next to the ratable object to create a call to action for the user to rate/review the ratable object. The system counts the number of times the image, pages and/or pop-ups are delivered, and if the image has not been delivered a sufficient number of times, as predefined in the system, before a review is placed, then the system will redirect the rater/reviewer to go through the additional vetting process 805 which is exemplified in
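The impression-counting portion of 2302 can be sketched with a simple per-object counter. The minimum-impression threshold is an assumed value standing in for the system's predefined setting.

```python
from collections import Counter

MIN_IMPRESSIONS = 100  # "sufficient number of times" predefined in the system (assumed)

impressions = Counter()  # ratable_object_id -> site seal / pop-up delivery count

def record_impression(object_id: str) -> None:
    """Count one delivery of the image, page and/or pop-up for this object."""
    impressions[object_id] += 1

def needs_additional_vetting(object_id: str,
                             minimum: int = MIN_IMPRESSIONS) -> bool:
    """Route the reviewer to additional vetting (805) when impressions are low."""
    return impressions[object_id] < minimum
```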
The algorithms provide a variety of triggers to alert the system to medium fraud risk. Algorithm 2303 determines whether the rater/reviewer creates and submits a review in under a pre-determined amount of time and/or whether the implied writing speed exceeds a pre-defined words-per-minute rate. The third algorithm 2303 works as follows: if the rater/reviewer submits a rating/review in less than X milliseconds, where X is a variable that is defined and configured in the system, the rater/reviewer will go through the additional vetting process 805 which is exemplified in
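Algorithm 2303's two triggers, minimum elapsed time and maximum words per minute, can be sketched as follows. Both threshold values are assumptions; the specification leaves them as configurable variables.

```python
MIN_ELAPSED_MS = 5000        # "X milliseconds" from the system config (assumed)
MAX_WORDS_PER_MINUTE = 200   # pre-defined words-per-minute ceiling (assumed)

def too_fast(text: str, elapsed_ms: float) -> bool:
    """Flag a review written implausibly fast for additional vetting (805)."""
    if elapsed_ms < MIN_ELAPSED_MS:
        return True
    words = len(text.split())
    wpm = words / (elapsed_ms / 60000.0)  # convert ms to minutes
    return wpm > MAX_WORDS_PER_MINUTE
```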
Algorithm 2304 determines whether the rater/reviewer has submitted a review that is too long or too short. The fourth algorithm 2304 works as follows: if the rater/reviewer's submission is either too long or too short, as pre-defined in the system, then the rater/reviewer will go through the additional vetting process 805 which is exemplified in
The fifth algorithm 2305 works as follows: if the rater/reviewer submits a review where the read (reviews read)/write (reviews written) ratio of the ratable object is less than X, as pre-defined in the system, then the rater/reviewer will go through the additional vetting process 805 which is exemplified in
The sixth algorithm 2306 works as follows: if the ratable object has fewer than X reviews, as pre-defined by the system, the rater/reviewer will go through the additional vetting process 805 which is exemplified in
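Algorithms 2304 through 2306 are all simple threshold checks that can be combined into one routine returning the triggered conditions. All the threshold values below are illustrative assumptions; the specification defines each as a configurable variable.

```python
MIN_WORDS, MAX_WORDS = 5, 1000   # algorithm 2304: review length bounds (assumed)
MIN_READ_WRITE_RATIO = 10.0      # algorithm 2305: reads per review written (assumed)
MIN_REVIEW_COUNT = 3             # algorithm 2306: minimum existing reviews (assumed)

def medium_risk_triggers(text: str, reads: int, writes: int,
                         review_count: int) -> list:
    """Return the list of triggered checks; any hit routes to vetting 805."""
    triggered = []
    words = len(text.split())
    if not MIN_WORDS <= words <= MAX_WORDS:
        triggered.append("length")                 # 2304
    if writes and reads / writes < MIN_READ_WRITE_RATIO:
        triggered.append("read_write_ratio")       # 2305
    if review_count < MIN_REVIEW_COUNT:
        triggered.append("review_count")           # 2306
    return triggered
```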
The second configurable variable part 2402 allows the system to detect potential fraud using the configurable variable X (the number of initial positive reviews that warrant additional authentication) 2402. If the rater/reviewer is submitting a positive review and the number of positive reviews, including this new positive review, is less than the configurable variable part 2402, then the rater/reviewer will go through the additional vetting process 805 which is exemplified in
The third configurable variable part 2403 works as follows: if the ratable object's site seal has not been rendered more than X times before a reviewer submits a positive rating/review, and the system's email solicitation service has not been used to request a review for the ratable object, then the rater/reviewer will go through the additional vetting process 805 which is exemplified in
The systems and methods described above may be applied to other services and protocols 104 not mentioned in this document. The interface to the system's rating and review services is accessible via the system API, which allows a developer to create a custom interface into the system environment over other standard or proprietary protocols to collect, process and store ratings and reviews for ratable objects.
For example, a developer could enable a live public or private chat service to solicit reviews for a ratable object at the end of a chat session. The service could verify the user's Chat ID with the chat provider and/or perform an out of band authentication process with a phone, SMS, and/or email verification based on the information requested from and provided by the Chat Session users. Thus a plurality of protocols to collect, process, and store ratings and reviews for ratable objects are envisioned.
As noted above, the system can be expanded to rate many different types of objects. Specific examples of ratable objects on which the rating system can be used include products (electronics, books, music, movies), services (web services, utility services), people, virtual people, organizations, websites, web pages, and any other object that can be associated with a unique ID. Furthermore, once a unique ID is assigned to a ratable object, the ID can be accepted via the multiple protocols mentioned above: http web browser, email, voice phone and SMS. The protocols can be expanded to include instant messaging services like AOL, Yahoo, MSN, etc., or proprietary services like corporate Live Chat products and services such as LivePerson, Boldchat, ActivaLive and others. The system can even blend the protocols by accepting the review via one protocol and delivering the confirmation results via another. The ratings and reviews may be quantitative (e.g. 5 stars) or qualitative (e.g. free-form textual, video, voice, or other types of media comments). Additionally, the rating UI or scale can be modified; for example, the system could accept a UI other than a star rating and a scale other than a 5-unit scale. For example, the system can easily accept a 2-unit, 10-unit, 100-unit, or any other quantitative or qualitative scale.
In each case the system fraud algorithms could be applied and expanded to ensure that the ratings and reviews received from the rater/reviewer are bonafide, such that other users of the ratings/reviews can trust that the information presented in the review does not conceal or misrepresent information about the ratable object. Furthermore, the system's fraud algorithms could be modified and optimized for a particular type of ratable object. For example, a business rating/review might require the authentication/vetting methods described above, but a product review might require a modified set of authentication/vetting methods to ensure a rating/review is bonafide. In a business review the system asks the user to present their email address and potentially telephone number. In a product review, the system might require proof of purchase via: 1. a serial number, 2. an invoice number that can be matched to the product vendor's transaction database, 3. a verification with the issuing bank for a credit card, check or other payment method that would match the payment details to the issuing bank or like organization, and/or 4. a match with the shipping identification number/tracking ID.
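The per-object-type proof-of-purchase vetting just described can be sketched as an any-of check over the listed identifiers. The lookup sets below are placeholder assumptions standing in for real vendor, bank, and carrier integrations.

```python
# Placeholder lookup tables standing in for external integrations:
KNOWN_SERIALS = {"SN-1001"}    # product vendor's serial number registry
KNOWN_INVOICES = {"INV-42"}    # vendor's transaction database
KNOWN_TRACKING = {"1Z999"}     # shipping carrier's tracking IDs

def proof_of_purchase(serial=None, invoice=None, tracking=None) -> bool:
    """Accept the product review if any one provided identifier verifies."""
    checks = [
        serial in KNOWN_SERIALS if serial else False,
        invoice in KNOWN_INVOICES if invoice else False,
        tracking in KNOWN_TRACKING if tracking else False,
    ]
    return any(checks)
```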
The system's current algorithms can optionally be enhanced by applying Cookies to all users and tracking behaviors over time to determine potential fraud activity. This includes instances in which a rater/reviewer may be: 1. rating certain organizations negatively, 2. flooding the system with reviews inappropriately, and/or 3. exhibiting a relationship with competitive ratable objects as determined by category or textual analysis.
Furthermore, the system can apply a Cookie via some scripting to a user's browser on the first web page displayed for a successfully completed transaction. This Cookie will identify that the user did, in fact, conduct a transaction with the organization. This information can be used to proactively solicit a user to review the organization upon the user's return to the site. Soliciting a return user for a review can be implemented via, for example, a pop-up review request. As noted above, the information can be used to prove that the user had a transaction with the organization. In yet other embodiments, the system applies a Cookie to a transaction confirmation page to rate a product and collect the unique product ID that was purchased via an API. Later, the system can solicit a pop-up review request if that visitor returns to the organization's website. A Transaction ID and/or Product Unique ID can be used to prove that the rater had a transaction relationship with the ratable object.
While the above description refers to specific embodiments of a system and method for determining bonafide reviews of ratable objects, other embodiments are possible. For example, the system's fraud detection measures may be used to stop other types of fraudulent activity. For example, the system's fraud detection measures are flexible and could be used to vet an organization or person before they sign up with another system, service or organization to receive certain products, services or access to read, add or modify information
Additionally, other methods of fraud detection can be identified as more patterns of fraudulent transactions appear. This could include the system automatically monitoring the usage activity of the system's raters/reviewers and analyzing and comparing that information to produce a profile that describes, in computerized form, the usage of the rater/reviewer. Those profiles are subsequently analyzed to compare usage among other raters/reviewers. The usage analysis profile of the user includes web-visiting records, rating records, etc., and may be categorized as the Review Source of the Ratable Object 3001 to determine fraud activity. While the above discussion has explicitly identified target objects such as a company, a product, a URI, a web site, web page content, a virtual object, virtual products, or virtual services (e.g. virtual objects, products and services found inside a gaming environment and other virtual worlds), any range of ratable objects could be rated with the system.
The system can adjust the application of the vetting and authentication procedures for various ratable objects. For example, the system can ask for an invoice number for a review corresponding to the rater/reviewer's transaction with a business. Or, the system can ask for a transaction ID that might be used to prove that a reviewer purchased a certain product before they review that product.
Another process flow that may be implemented includes one reflecting a more detailed understanding of the relationship of the rater/reviewer to the system. In this embodiment, the computerized system may evaluate whether the rater/reviewer is known or unknown to the system, how long the rater/reviewer has been a registered (or unregistered) rater on the system, where the rater/reviewer is geographically located according to their rating profile as compared to the current geographic location of their IP address, phone number or SMS number, etc. By creating a computerized model of the known fraudulent activity behaviors of the system's raters/reviewers and locating the most correlative data variables that the system stores for these users, the system can develop a regression model to better determine future fraud activity from raters/reviewers. Additionally, the system could use a measure of relationship and/or closeness to detect otherwise difficult-to-find fraud. For various methods and systems for determining relationship and closeness measures, see U.S. patent application Ser. No. 11/639,678. For example, the aforementioned computer implemented algorithms could detect someone negatively reviewing hair salons, which may indicate competitive fraud activity. An alternate indication is that a group of businesses are rating each other to drive up positive reviews on their partner businesses artificially, without the businesses otherwise being identified as fraudulent.
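The regression-model idea above can be sketched as a logistic score over a few behavioral features. The feature names and weights below are illustrative assumptions, not fitted coefficients from any actual data.

```python
import math

# Assumed binary behavioral features drawn from the profile discussion above:
WEIGHTS = {
    "is_unknown_user": 1.2,        # rater is not registered with the system
    "geo_mismatch": 1.5,           # profile location vs. current IP/phone location
    "negative_review_streak": 0.8  # repeated negatives in one category
}
BIAS = -2.0  # baseline log-odds (assumed)

def fraud_probability(features: dict) -> float:
    """Logistic score over binary features; higher means riskier."""
    z = BIAS + sum(WEIGHTS[k] * float(v) for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def is_suspect(features: dict, threshold: float = 0.5) -> bool:
    return fraud_probability(features) >= threshold
```

In a real deployment the weights would be fit by regression against labeled historical fraud cases, as the passage above suggests.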
The present invention, in its various embodiments, utilizes a range of computer and computer system technologies widely known in the art, including memory storage technologies, central processing units, input/output methods, bus control circuitry and other portions of computer architecture. Additionally, a broad range of Internet technologies well-known in the art are used.
The system described above is an open system in which bonafide ratings are generated from rating sources across a wide variety of platforms. Instead of applying a vetting process to ratings submitted through a single user platform, transaction service, or website, the present system and method are flexible enough to evaluate ratings submitted through a plurality of platforms. For example, when the method is used to legitimate a rating submitted by a rater who is rating a ratable object on a first platform (e.g. a seller on Amazon.com who is selling category A of products), the system will check whether the user has an activity history on a second platform (e.g. the rater is selling category A of products on eBay). (In this example, if the rater submits a negative rating, that rating may be flagged as medium or high risk of being a biased or fraudulent rating.) Thus the vetting process is not limited to transactions and activity history on a single platform and instead reaches across multiple platforms to enact a broad vetting process for an arbitrary ratable object in a wide area electronic network.
Moreover, the system described above generates bonafide ratings from a multi-dimensional evaluation process. Whereas authentication and verification systems may perform a single-dimensional check, the present system and method legitimate ratings by contextualizing a particular rating with respect to other variables. The system contextualizes the rating by: (1) analyzing information about the ratable object, (2) analyzing information about the rater/reviewer who is submitting the rating and (3) analyzing details about the content and submission process of the rating itself, etc. For the purpose of illustration, a rating for a business could be vetted by examining, for example: (1) the sort of business being rated—what does it sell? what is its geographic location? (2) who is rating the business—does he/she sell similar products? is he/she located in a similar geographic region? does he/she have a history of submitting negative ratings? did he/she sign up for a rating profile? (3) is the rating negative/positive? is the rating submitted within X hours of the alleged transaction with the business? Moreover, the system may evaluate a rater's connectedness to a transaction based on a range of inferences, enacted through the computer implemented algorithms. As illustrated by the aforementioned example, the bonafide ratings are generated through a multi-dimensional vetting process that incorporates a wide variety of variables about the rating/review, the ratable object and the rater/reviewer. It is through this multi-dimensional vetting process that the method and system ensure, with various clear, quantified measures, that the ratings are legitimate and trustworthy. In other words, the multi-dimensional process is designed to identify the multiple ways bias could manifest.
The system described above generates bonafide ratings from a multi-step vetting process. Instead of only identifying a fraud risk and allowing or rejecting the rating, the present method involves an iterative process. An initial evaluation of risk level (see threat matrix detailed in
Thus the flexibility of the present system and method relies on the cross-platform nature, the multi-dimensional analysis, and the iterative vetting process. The system overcomes the need for a pre-authenticated user by implementing a variety of techniques to observe usage history and make plausible inferences about the user's biases or vested interests. Because the system is not limited to using fixed criteria, it can generate trustworthy ratings for arbitrary ratable objects in a wide area electronic network.
It will be further appreciated that the scope of the present invention is not limited to the above-described embodiments but rather is defined by the appended claims, and that these claims will encompass modifications and improvements to what has been described.
This application claims priority under § 119 to U.S. Provisional Patent Application No. 60/980,687 filed Oct. 17, 2007, the entire contents of which are herein incorporated by reference. This application is also related to the following co-pending applications, the entire contents of which are herein incorporated by reference: U.S. patent application Ser. No. 11/639,679, filed on Dec. 15, 2006, entitled SYSTEM AND METHOD FOR PARTICIPATION IN A CROSS PLATFORM AND CROSS COMPUTERIZED-ECO-SYSTEM RATING SERVICE; U.S. patent application Ser. No. 11/639,678, filed on Dec. 15, 2006, entitled SYSTEM AND METHOD FOR DETERMINING BEHAVIORAL SIMILARITY BETWEEN USERS AND USER DATA TO IDENTIFY GROUPS TO SHARE USER IMPRESSIONS OF RATABLE OBJECTS; U.S. patent application Ser. No. 11/711,223, filed on Feb. 27, 2007, entitled SYSTEM AND METHOD FOR PARTICIPATION IN A CROSS PLATFORM AND CROSS COMPUTERIZED-ECO-SYSTEM RATING SERVICE; and U.S. patent application Ser. No. 11/711,248, filed on Feb. 27, 2007, entitled SYSTEM AND METHOD FOR MULTIPLAYER COMPUTERIZED GAME ENVIRONMENT WITH NON-INTRUSIVE, CO-PRESENTED COMPUTERIZED RATINGS.