The present disclosure generally relates to data processing systems and techniques. More specifically, the present disclosure relates to methods and systems for exposing data that is used by a search engine to order search results for display in a search results page.
A search engine is a software-based tool designed to aid in the search for information. For example, a web search engine is designed to search for web pages relevant to a user's search query. Generally, a search engine is evaluated based on the quality of the search results it provides in response to a user-provided search query. To ensure that the best (most relevant) search results are positioned first in a list of search results, a search engine may utilize a complex algorithm and rely on a variety of different types of data when ordering the search results for presentation on a search results page. For those interested in having a particular search result displayed prominently at the top of the search results page, understanding the underlying algorithms and data upon which a search engine relies in ranking search results is critical.
Search engines are frequently utilized by e-commerce sites to enable users (e.g., potential buyers) to find items (e.g., products and/or services) of interest. For sellers on e-commerce sites that bring together sellers and buyers, understanding how a search engine works and being familiar with the underlying data used to order search results can help to improve the chances of having items appear in the search results page, and ultimately increase the chances that those items will be purchased. For instance, if a seller is familiar with the underlying algorithms and data upon which search engines rely in ordering search results, the seller can tailor his or her item listings to increase the likelihood that an item listing will appear in a prominent position within the search results pages, and ultimately increase the odds that an item will be purchased. However, with conventional e-commerce marketplaces, a seller does not have access to any of the underlying data utilized by the search engine in ranking and ordering search results on a search results page.
Some embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which:
Methods and systems for exposing data that is used by a search engine to order search results for display in a search results page are described. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the various aspects of different embodiments of the present invention. It will be evident, however, to one skilled in the art, that the present invention may be practiced without these specific details.
Consistent with some embodiments of the invention, an e-commerce or online trading application includes an application programming interface (API) that provides access to data used by a ranking algorithm of a search engine when ordering search results for display in a search results page. For instance, in some embodiments, the e-commerce or online trading application includes a search engine that allows users to search for item listings. The search engine utilizes a ranking algorithm (referred to herein as a listing performance algorithm) to assign a ranking score (referred to herein as a listing performance score) to each item listing that satisfies a query. For example, after selecting the search results (e.g., item listings) that satisfy a user-initiated search query, a listing performance score is assigned to each search result in the result set. When the item listings are presented to a user in a search results page or pages, the item listings are ordered based on the listing performance scores assigned to the individual item listings in the result set. For example, in some embodiments, those item listings with the highest listing performance scores will be displayed at the top of a list of search results on the first page of what might be several pages of search results.
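The ordering step described above can be sketched as follows. This is a minimal illustrative sketch: the function names are hypothetical, and the scoring function is a placeholder standing in for the listing performance algorithm, not the disclosed algorithm itself.

```python
# Illustrative sketch: assign a listing performance score to each item
# listing that satisfies a query, then order the result set so that the
# highest-scoring listings appear first in the search results page.
# The scoring function is a placeholder, not the actual algorithm.

def order_search_results(item_listings, score_fn):
    """Score each listing in the result set and order highest-score-first."""
    scored = [(score_fn(listing), listing) for listing in item_listings]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [listing for _, listing in scored]

# Example usage with a placeholder per-listing score.
listings = [
    {"id": 1, "score": 0.4},
    {"id": 2, "score": 0.9},
    {"id": 3, "score": 0.7},
]
ordered = order_search_results(listings, lambda listing: listing["score"])
# The listing with the highest listing performance score is displayed first.
```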
To enable sellers who have listed items for sale via the online trading application to access and analyze the data that affects an individual item listing's ranking score, the online trading application includes a listing performance data engine module that is made accessible via an open API. Accordingly, a software application leveraging the API can make data requests to the listing performance data engine module. As such, sellers are provided access to data that is used in generating an individual item listing's listing performance score, thereby providing insight into why a particular item listing is presented in the search results pages at a particular page and/or position. For instance, if a seller desires to understand why a particular item listing is not listed on the first page of the search results pages, the seller might utilize a listing performance analysis application to query the listing performance data engine module for listing performance data that affects the listing performance score of one or more item listings. With an understanding of how the underlying algorithm uses the data, a seller can determine whether it may be possible to modify one or more attributes of his or her item listing to increase the item listing's listing performance score, thereby improving the page and/or position on which the item is shown in the search results page for a specific query, and ultimately increasing the likelihood that the item listing will result in a transaction (e.g., a sale).
In addition, the open API provides third party software developers with the required interface for developing third party software tools for performing analysis of item listings and the performance data associated with those item listings. In general, the data that is accessible via the data engine and API is one of two types—attribute data that is input or selected by the seller, for example, when listing an item for sale, and performance data that is derived as a result of various activities and actions that are detected by the online trading application. For example, the attribute data for an item listing may include such data items as: the title of the item listing, the item number assigned to the item listing, the price of the item being offered, the shipping method and/or cost, the total quantity of items listed or available, the listing start date, the pricing format (e.g., fixed price or auction), and the category in which the item has been listed. The performance data may include such data items as: the number of impressions the item listing has received (e.g., the number of times the item listing has been presented in a search result set to a user), the number of times a unique user has selected (e.g., clicked on) the item listing when presented in a search result set (referred to as the click-through count), the ratio of click-through count to impressions (referred to as the click-through rate), the number of unique users monitoring or tracking the item listing (referred to as the watch count), the number of sales generated by the item listing, the quantity of items sold, the number of sales per the number of impressions, the number of sales per unique click-through count, and so forth.
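The two data types described above might be modeled as follows. This is a hedged sketch: the class and field names are illustrative assumptions, not the actual API schema, and only a subset of the data items listed above is shown.

```python
# Illustrative models (hypothetical names) of the two data types exposed
# via the data engine and API: seller-supplied attribute data, and
# performance data derived from activity detected by the application.

from dataclasses import dataclass

@dataclass
class AttributeData:
    title: str
    price: float
    shipping_cost: float
    quantity: int
    pricing_format: str  # e.g., "fixed price" or "auction"
    category: str

@dataclass
class PerformanceData:
    impressions: int      # times presented in a search result set
    click_throughs: int   # unique selections of the listing (click-through count)
    watch_count: int      # unique users tracking the listing
    sales: int

    @property
    def click_through_rate(self) -> float:
        # Ratio of click-through count to impressions.
        return self.click_throughs / self.impressions if self.impressions else 0.0

perf = PerformanceData(impressions=200, click_throughs=10, watch_count=5, sales=2)
# click_through_rate is 10 / 200 = 0.05
```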
By combining the API described herein with one or more other APIs made available by the operator of the online trading application, a third party software developer may develop a fully- or semi-automated performance analyzer or monitor for item listings. Collectively, the two types of data (e.g., attribute data and performance data) are referred to herein as listing performance data. Other aspects and advantages of the inventive subject matter will become readily apparent from the description of the figures that follow.
In some embodiments, the on-line trading engine module 20 may consist of a variety of sub-components or modules, which provide some of the functions of an on-line trading application 18. As described more completely below, each module may be comprised of software instructions, computer hardware components, or a combination of both. To avoid obscuring the invention in unnecessary detail, only a few of the on-line trading engine functions are described herein. For example, the on-line trading engine module 20 may include an item listing management module (not shown) that facilitates the receiving and storing of data representing item attributes, which collectively form an item listing. When a user desires to list a single item, or multiple items, for sale, the user will provide information about the item(s) (e.g., item attributes). Such information may be submitted via one or more forms of one or more web pages, or via drop down lists, or similar user interface elements. The item listing management module receives the item attributes and stores the item attributes together within a database 28 as an item listing. In some instances, the item listings may be stored in an item listing database table 30. As described in greater detail below, the item attributes of each item listing are analyzed and used as inputs to one or more algorithms used to assign a ranking score to item listings, which in turn is used in determining the position of item listings when the item listings are being presented in a set of search results pages.
The on-line trading engine module 20 may also include one or more modules for receiving and storing historical data, generally representing user-initiated activities and/or events detected at the online trading application, that is used to measure the likelihood that an item listing will, if presented in a search results page, result in a transaction being concluded. For instance, in some embodiments, data associated with user-initiated activities is analyzed and captured for the purpose of predicting future user activities. If a user submits a search request including certain search terms, and then proceeds to conclude a transaction for a particular item (e.g., purchase the item), information from the user's interaction with the online trading application will be captured and stored for the purpose of predicting future actions by other users. Some of the data used in this capacity is generally referred to as relevance data 32 because it is used to determine a measure of relevance between search terms used in a search query, and individual item listings. For instance, if a potential buyer submits a search request with the search terms, “mobile phone”, item listings that have certain item attributes are more likely to result in the conclusion of a transaction if presented in a search results page in response to the search request. For instance, continuing with the example search terms, “mobile phone”, given the specific search terms used in the search query, item listings that have been designated as being in a certain category of items, such as “Electronics”, or even more specifically, “Mobile Phones”, are more likely to result in a transaction if presented in a search results page than item listings in other categories, for example, such as “Automobiles” or “Jewelry”.
Similarly, given the search terms, “mobile phone”, item listings with titles that include the search terms may prove more likely to result in a transaction than item listings without the search terms in the title. Accordingly, in some embodiments, the on-line trading engine module 20 includes one or more modules for receiving and analyzing historical data to generate what is referred to herein as relevance data. The relevance data is used to derive a measure of the likelihood that item listings with certain item attributes will result in a transaction if displayed in response to certain search terms being submitted in a search request. The derived measure of relevance may take the form of a number representing a score, which can be exposed to a seller via a third-party application that utilizes an API consistent with an embodiment of the invention.
The on-line trading engine module 20 may also include one or more modules for receiving and storing data representing, among other things, a measure of a seller's performance of obligations associated with transactions in which the seller has participated. For instance, in some embodiments, when a transaction is concluded, a buyer may be prompted to provide feedback information concerning the performance of a seller. The buyer may, for example, rate the accuracy of the seller's description of an item provided in the item listing. For instance, if the item received by the buyer is in poor condition, but was described in the item listing as being in good condition, the buyer may provide feedback information to reflect that the seller's description of the item in the item listing was inaccurate. As described more fully below, this seller performance information may be used in a variety of ways to derive a ranking score for an item listing. For instance, in some cases, the seller feedback information may be used to determine a ranking score for another item listing of the same seller. Such information may be stored in a database 28, as indicated in
As illustrated in
Referring again to
For instance, in some embodiments, a potential buyer operates a web browser application 38 on a client system 12 to interact with the on-line trading application residing and executing on the server system 16. As illustrated by the example user interface with reference number 40, a potential buyer may be presented with a search interface 40, with which the user can specify one or more search terms to be used in a search request submitted to the on-line trading application 18. In some embodiments, in addition to specifying search terms, users may be able to select certain item attributes, such as the desired color of an item, the item categories that are to be searched, and so on. After receiving and processing the search request, the on-line trading application 18 communicates a response to the web browser application 38 on the client system 12. For instance, the response may be an Internet document or web page that, when rendered by the browser application 38, displays a search results page 42 showing several item listings that satisfy the user's search request. As illustrated in the example search results page 42 of
In general, the item listings are presented in the search results page in an order based on a ranking score that is assigned to each item listing that satisfies the query. In some embodiments, the item listings will be arranged in a simple list, with the item listing having the highest ranking score appearing at the top of the list, followed by the item listing with the next highest ranking score, and so on. In some embodiments, several search results pages may be required to present all item listings that satisfy the query. Accordingly, only a subset of the set of item listings that satisfy the query may be presented in the first page of the search results pages. In some embodiments, the item listings may be ordered or arranged in some other manner, based on their ranking scores. For instance, instead of using a simple list, in some embodiments the item listings may be presented one item listing per page, or, arranged in a grid, or in some manner other than a top-down list.
As described in greater detail below, the ranking score may be based on several component scores or sub-scores including, but by no means limited to: a relevance score, representing a measure of the relevance of an item listing with respect to search terms provided in the search request; a listing quality score, representing a measure of the likelihood that an item listing will result in a transaction based at least in part on historical data associated with similar item listings; and, a business rules score, representing a promotion or demotion factor determined based on the evaluation of one or more business rules. As used herein, a component score or sub-score is a score that is used in deriving the overall ranking score for an item listing. However, a component score in one embodiment may be a ranking score in another embodiment. For instance, in some embodiments, the ranking score may be equivalent to a single component score, such as the listing quality score. Similarly, in some embodiments, the ranking score may be equivalent to the business rules score.
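The composition of the ranking score from its component scores can be sketched as follows. The linear combination and the weights shown are assumptions made for illustration only; the disclosure does not specify a particular formula, and, as noted above, in some embodiments the ranking score may equal a single component score.

```python
# A minimal sketch of combining the component scores named above into an
# overall ranking score. The weighted-sum form and the default weights are
# illustrative assumptions, not the disclosed formula.

def ranking_score(relevance, listing_quality, business_rules,
                  w_rel=0.5, w_lq=0.4, w_br=0.1):
    """Combine a relevance score, a listing quality score, and a
    business rules score into a single ranking score."""
    return w_rel * relevance + w_lq * listing_quality + w_br * business_rules

# In some embodiments, the ranking score may instead be equivalent to a
# single component score, e.g., the listing quality score alone:
def ranking_score_lq_only(relevance, listing_quality, business_rules):
    return listing_quality
```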
As illustrated in
In some embodiments, when processing a query resulting from a potential buyer's search request, the item listings that satisfy the search request are ordered or otherwise positioned or arranged in a search results page (e.g., an Internet document or web page) based on a listing performance score calculated for, and assigned to, each item listing. For instance, in response to receiving and processing a search request, one or more algorithms are used to assign listing performance scores to each item listing that satisfies the search request. The listing performance scores assigned to the item listings that satisfy the search request are then used to determine where each individual item listing is to appear when presented to a user in a search results page. Accordingly, in some embodiments, the item listings that are assigned the highest ranking scores are placed in the positions deemed to be most prominent, and therefore most likely to be seen and selected by a user. For example, the item listings with the highest ranking scores may be positioned at the top of a first page of several search results pages that together comprise a long list of item listings that satisfy a potential buyer's search request.
In some embodiments, the ranking score is itself comprised of several sub-scores or component scores. For instance, as illustrated in
As illustrated in
As illustrated in
The part of the listing quality score representing the predicted score is based on an analysis of item attributes of the item listing, in comparison with item attributes of item listings determined to be similar. Although many item attributes may be considered in various embodiments, in some embodiments the price of the item and the shipping cost are the primary predictors of quality. For instance, the price of an item listing relative to the prices of similar item listings that have previously resulted in transactions is used as a metric to indicate the likelihood that an item listing will result in a transaction. If the price for the item listing is below the median price for similar item listings, the likelihood that a transaction will conclude if the item listing is presented increases. Similarly, if the price for the item listing is greater than the median price for similar item listings, the likelihood of a transaction concluding decreases. The same general analysis can be undertaken for shipping cost as well. In some embodiments, the shipping cost is analyzed separately from the price of the item, and in some cases, the shipping cost and price are summed to derive a total price that is used in the analysis.
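The price-relative-to-median analysis described above can be sketched as follows. The specific mapping from price to score is a hypothetical monotone decreasing function chosen for illustration; the disclosure states only that a below-median total price increases the likelihood of a transaction and an above-median total price decreases it.

```python
# Hedged sketch of a predicted score: the listing's total price (item price
# plus shipping cost) compared against the median total price of similar
# item listings that previously resulted in transactions. The med/total
# mapping is an illustrative assumption, not the disclosed formula.

from statistics import median

def predicted_score(price, shipping_cost, similar_total_prices):
    total = price + shipping_cost
    med = median(similar_total_prices)
    # Below the median => score above 1.0 (transaction more likely);
    # above the median => score below 1.0 (transaction less likely).
    return med / total if total > 0 else 0.0

# A $25 total against similar listings with a $50 median scores 2.0;
# a $100 total against the same median scores 0.5.
```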
The listing quality score is also based in part on an observed score representing a demand metric or combination of demand metrics. A demand metric represents a measure of the demand for an item based at least in part on historical data. For instance, in some embodiments, a demand metric used for calculating a listing quality score is calculated as the ratio of the number of transactions concluded per search impression for an item listing, or for item listings determined to be similar. For example, in the case of a multi-quantity item listing—that is, an item listing offering multiple items (e.g., one-hundred mobile phones)—the observed demand metric may be derived as the ratio of the number of transactions concluded per the number of search impressions for the item listing. Again referring to the example item listing for a mobile phone, if five out of ten times the item listing is presented in a search results page a buyer concludes a transaction by purchasing a mobile phone, then the transactions per search impressions ratio is fifty percent (50%). This demand metric may be normalized, such that the final observed score takes into consideration the performance of the item listing in relation to the performance of other item listings for similar items. For instance, for certain categories of items (e.g., Automobiles, Coins, Stamps, Mobile Phones, and so on), the same observed score may be interpreted differently. For example, a transactions-per-search-impressions ratio of fifty percent (50%) may be viewed as a “good” ratio, indicating strong item listing performance, for one category (e.g., Automobiles), but as a “bad” ratio, indicating weak item listing performance, for another category (e.g., Mobile Phones).
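The per-category normalization of the demand metric can be sketched as follows. The baseline values and the ratio-to-baseline normalization are hypothetical assumptions for illustration; the disclosure says only that the metric may be normalized so that the same raw ratio can indicate strong performance in one category and weak performance in another.

```python
# Sketch of an observed demand metric: transactions per search impression,
# normalized against a per-category baseline. Baselines are hypothetical.

CATEGORY_BASELINE = {"Automobiles": 0.10, "Mobile Phones": 0.60}

def observed_score(transactions, impressions, category):
    if impressions == 0:
        return 0.0
    ratio = transactions / impressions
    # Normalize: a ratio equal to the category baseline scores 1.0, so the
    # same raw ratio is interpreted differently across categories.
    return ratio / CATEGORY_BASELINE[category]

# Under these assumed baselines, a 50% raw ratio reads as strong
# performance for Automobiles but weak performance for Mobile Phones.
```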
In general, if the ratio of the number of transactions per search impressions for an item listing is high, the likelihood that the item listing will result in a transaction is also high. However, if the total number of search impressions for a given item listing is low, the confidence in the demand metric may be low. For instance, if the item listing has only one search impression, and that search impression ultimately resulted in a transaction, it may be difficult to predict whether the item listing is a “good” item listing. Accordingly, and as described more completely below, the weighting factor for the demand metric may be a function of the number of search impressions for the item listing, or a metric referred to as time on site (TOS), representing the length or duration of time the item listing has been active.
In some embodiments, the weighting factor is a function of a time-based metric, such that, when the item listing is first listed, the emphasis is on the predicted score, but over time, the emphasis is shifted to the observed score. For example, in some embodiments, the weighting factor is a function of the number of search impressions that an item listing has received. For instance, when the search impression count (i.e., the number of times an item listing has been presented in a search results page) reaches some minimum threshold, the weighting factor applied to the predicted score is decreased, resulting in less emphasis on the predicted score, and the weighting factor applied to the observed metric is increased, resulting in greater emphasis on the observed metric component of the listing quality score.
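The shift in emphasis described above can be sketched as a weighting factor that is a function of the search impression count. The threshold value and the complementary linear blend are illustrative assumptions; the disclosure states only that emphasis moves from the predicted score toward the observed score as impressions accumulate.

```python
# Sketch: blend a predicted score and an observed score, with the weight on
# the observed score growing as the search impression count approaches a
# minimum threshold. Threshold and blend are illustrative assumptions.

def listing_quality(predicted, observed, impressions, threshold=100):
    w_observed = min(impressions / threshold, 1.0)
    w_predicted = 1.0 - w_observed
    return w_predicted * predicted + w_observed * observed

# A newly listed item (0 impressions) relies entirely on the predicted
# score; at or beyond the threshold, the observed score dominates.
```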
In some embodiments, the listing quality score, or any of the inputs used to derive the listing quality score, may be made available to a seller who has an active item listing. Specifically, via an API call, a seller may obtain such listing performance data as: an impression count for an item listing, a click-through rate (or, number of views) for the item listing, a number of transactions concluded for the item listing, and/or a number of items sold via the item listing. In addition, various attributes of the item listing may be made available via an API call, including the price of the item offered via the item listing, the shipping method and cost, the condition of the item, and the duration or length of time the listing has been active. These attributes might be compared with aggregate listing performance data for other item listings. For instance, in some embodiments, listing performance data for all item listings that would be displayed in the first page of the search results pages for a particular keyword or category might be presented to a seller. Accordingly, a seller can compare the listing performance data of his or her item listing with the listing performance data of those item listings that would appear on the first page of the search results pages to assess whether there is a particular item attribute that is causing the seller's item listing to not be displayed in the first page of the search results pages.
In addition to the relevance score and listing quality score, a listing performance score may be affected by the evaluation of one or more business rules, resulting in a business rules score 66. For example, if a seller has a power seller status, or if an item is offered with free shipping, or if a seller has a seller quality score that exceeds a predetermined threshold, a business rules score may positively affect the listing performance score. Similarly, if a seller has a low seller quality score, or an item has a high cost of shipping, a business rules score may result in a demotion of the particular item listing.
In some embodiments, a single algorithm is used to assign a score to each item listing. For example, after determining which item listings satisfy a search request, a single algorithm is used to assign a score to each item listing determined to satisfy the search request. This score (e.g., the Listing Performance Score) is then used to determine the position of each item listing when the item listings are presented to a user, for example, in a search results page. Alternatively, in some embodiments, several algorithms are utilized to assign scores to item listings. For instance, in some embodiments, a separate algorithm is used to assign scores to item listings based on a characteristic of the item listings, such that the algorithm used for any particular item listing is selected based on a particular characteristic or attribute value of the item listing. For instance, when the total population of item listings includes both auction listings and fixed-price listings, one algorithm may be used to assign a score to all item listings that include a fixed price, while a separate algorithm may be used for item listings that are offered at auction. Similarly, with multiple-quantity item listings offered at a fixed price—that is, item listings that offer multiple units of the same item at a fixed price—separate algorithms may be used to assign scores to those item listings that are new—and thus do not have available historical data for assessing the quality of the item listing—and those item listings that are older—and thus have available historical data for assessing the quality of the item listing. Similarly, different algorithms may be used to assign listing quality scores to item listings in different categories. 
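The selection of an algorithm based on a characteristic of the item listing can be sketched as a simple dispatch. The function names and selection criteria below are hypothetical placeholders illustrating the structure, not the disclosed algorithms themselves.

```python
# Hypothetical dispatch sketch: choose a scoring algorithm based on an
# attribute of the item listing, e.g., pricing format, and for fixed-price
# listings, whether historical data is available. Placeholder algorithms.

def score_auction(listing):
    return 0.0  # placeholder for the auction-listing algorithm

def score_fixed_new(listing):
    return 0.0  # placeholder for new fixed-price listings (no history)

def score_fixed_seasoned(listing):
    return 0.0  # placeholder for older fixed-price listings (with history)

def select_algorithm(listing):
    if listing["format"] == "auction":
        return score_auction
    # Fixed-price: new listings lack historical data for assessing quality;
    # older listings have it, so a different algorithm may apply.
    if listing["impressions"] == 0:
        return score_fixed_new
    return score_fixed_seasoned
```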
For instance, some categories (e.g., Mobile Phones) may use transactions per search impressions as the observed demand metric for the observed listing quality score, whereas item listings in another category (e.g., Automobiles) may use the ratio of views per search impression as the demand metric for the observed listing quality score.
Any of the individual data items that may affect a listing performance score as described in connection with the description of
As illustrated in
In addition to tools and applications for providing sellers with access to listing performance data, an internal services module 78 may serve as an interface to the listing performance data engine module 24 for a number of internally accessible tools 80, for use by administrators of the online trading application 18. These internal tools 80 may be web-based (e.g., accessible via a web browser), or native desktop applications. In some embodiments, the tools for internal use may have a superset of the functionality offered to sellers. For instance, internal users may have access to certain data and functionality that sellers are restricted from accessing. In particular, the internal tools will generally provide access to data for all item listings in a category, or by keyword, whereas the seller tools will generally utilize an authentication method to provide listing performance data for only those item listings of a particular seller.
In some embodiments, the API supports requests and responses formatted as extensible markup language (XML) or as SOAP (simple object access protocol) messages. For instance, an HTTP POST call method supports requests formatted in XML. The response format for an XML formatted request is XML. An example might look like the following:
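A hypothetical XML request and response of this kind might take the following form; the element and field names are illustrative assumptions, not the actual API schema:

```xml
<!-- Hypothetical request (element names are assumptions for illustration) -->
<getListingPerformanceRequest>
  <itemId>123456789</itemId>
  <entriesPerPage>10</entriesPerPage>
</getListingPerformanceRequest>

<!-- Hypothetical XML response -->
<getListingPerformanceResponse>
  <itemId>123456789</itemId>
  <impressions>200</impressions>
  <clickThroughCount>10</clickThroughCount>
</getListingPerformanceResponse>
```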
Similarly, the HTTP POST call method supports requests formatted in accordance with the SOAP protocol. The response format for SOAP requests is SOAP. An example request might look as follows:
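A hypothetical SOAP request might take the following form; the envelope structure follows the SOAP convention, while the body element names are illustrative assumptions, not the actual API schema:

```xml
<!-- Hypothetical SOAP request (body element names are assumptions) -->
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getListingPerformanceRequest>
      <itemId>123456789</itemId>
    </getListingPerformanceRequest>
  </soap:Body>
</soap:Envelope>
```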
In some embodiments, each API call consists of the following elements: a Service Endpoint, identifying an API Gateway; an HTTP Header, for example, specifying optional and required parameters, such as an authentication token or an API call name; Standard Input Fields, for example, defining parameters such as entriesPerPage, which can be used to specify how many items are returned in a response, or a field to specify which items to include in the search, and which items to return from the search; and, Call-specific Input Fields, such as keywords or category identifiers that may be specific to certain API calls.
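Assembling the call elements above can be sketched as follows. The endpoint URL, header names, and field names are hypothetical assumptions for illustration; the sketch builds the request structure but does not send it.

```python
# Sketch assembling the elements of an API call: a service endpoint, HTTP
# headers carrying an authentication token and call name, standard input
# fields, and call-specific input fields. All names here are hypothetical.

def build_api_call(call_name, token, standard_fields, call_fields):
    endpoint = "https://api.example.com/listing-performance"  # hypothetical gateway
    headers = {
        "X-Auth-Token": token,          # authentication token (assumed header name)
        "X-API-Call-Name": call_name,   # API call name (assumed header name)
    }
    # The request body combines standard and call-specific input fields.
    body = {**standard_fields, **call_fields}
    return {"endpoint": endpoint, "headers": headers, "body": body}

call = build_api_call(
    "getListingPerformance",
    token="secret",
    standard_fields={"entriesPerPage": 10},
    call_fields={"keywords": "mobile phone"},
)
```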
At method operation 92, the request for listing performance data is processed by the listing performance data engine module in order to identify the relevant listing performance data. In particular, the relevant data includes only listing performance data that is associated with the requesting seller's item listings and that is responsive to the particular API-based request. Finally, at method operation 94, the identified relevant listing performance data is communicated to the requesting application at the client computing system.
As illustrated in
In the example shown in
In the example user interface of
In the example user interface of
In addition to the exact position and page being displayed, the user interface shows aggregated listing performance data for the item listings that would appear in the first page of the search results pages, if a potential buyer were to perform a keyword query containing the identified keywords. For instance, in this example, the price range for fixed price format listings appearing in the first page of the search results pages is between $20.00 and $99.00. Given the seller's price of $9.00 for the item listing, the seller can determine that his price is on the low end of the price range, and therefore not likely to be hurting the listing performance score. Similarly, the aggregated listing performance data for the item listings appearing on the first page for a given query includes information about the shipping cost range, the percentage of items with free shipping, the percentage of sellers ranked in the top sellers group, and other information.
The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules.
Similarly, the methods described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations.
The one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., application programming interfaces (APIs)).
The example computer system 1500 includes a processor 1502 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both), a main memory 1501 and a static memory 1506, which communicate with each other via a bus 1508. The computer system 1500 may further include a display unit 1510, an alphanumeric input device 1517 (e.g., a keyboard), and a user interface (UI) navigation device 1511 (e.g., a mouse). In one embodiment, the display, input device and cursor control device are a touch screen display. The computer system 1500 may additionally include a storage device (e.g., drive unit 1516), a signal generation device 1518 (e.g., a speaker), a network interface device 1520, and one or more sensors 1521, such as a global positioning system sensor, compass, accelerometer, or other sensor.
The drive unit 1516 includes a machine-readable medium 1522 on which is stored one or more sets of instructions and data structures (e.g., software 1523) embodying or utilized by any one or more of the methodologies or functions described herein. The software 1523 may also reside, completely or at least partially, within the main memory 1501 and/or within the processor 1502 during execution thereof by the computer system 1500, the main memory 1501 and the processor 1502 also constituting machine-readable media.
While the machine-readable medium 1522 is illustrated in an example embodiment to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions. The term “machine-readable medium” shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention, or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media include non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
The software 1523 may further be transmitted or received over a communications network 1526 using a transmission medium via the network interface device 1520 utilizing any one of a number of well-known transfer protocols (e.g., HTTP). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), the Internet, mobile telephone networks, Plain Old Telephone Service (POTS) networks, and wireless data networks (e.g., Wi-Fi® and WiMax® networks). The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
Although an embodiment has been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof, show by way of illustration, and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
This patent application is a continuation of U.S. patent application Ser. No. 15/809,589, filed Nov. 10, 2017, which is a continuation of U.S. patent application Ser. No. 12/571,214, filed Sep. 30, 2009 (issued as U.S. Pat. No. 9,846,898), each of which is incorporated herein by reference in its entirety. The present application is also related by subject matter to U.S. patent application Ser. No. 12/476,046, filed Jun. 1, 2009 (issued as U.S. Pat. No. 8,903,816), which claims priority to U.S. Provisional Application No. 61/167,796, filed on Apr. 9, 2009, which are also incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
7330829 | Tenorio | Feb 2008 | B1 |
7836050 | Jing et al. | Nov 2010 | B2 |
7962462 | Lamping et al. | Jun 2011 | B1 |
8423541 | Baluja et al. | Apr 2013 | B1 |
8577755 | Wiesinger | Nov 2013 | B2 |
8903816 | Dumon et al. | Dec 2014 | B2 |
9672554 | Dumon et al. | Jun 2017 | B2 |
9846898 | Rehman et al. | Dec 2017 | B2 |
10181141 | Rehman et al. | Jan 2019 | B2 |
20020198875 | Masters | Dec 2002 | A1 |
20030195877 | Ford et al. | Oct 2003 | A1 |
20050004889 | Bailey et al. | Jan 2005 | A1 |
20050038717 | Mcqueen et al. | Feb 2005 | A1 |
20050154718 | Payne et al. | Jul 2005 | A1 |
20060101102 | Su et al. | May 2006 | A1 |
20060143197 | Kaul et al. | Jun 2006 | A1 |
20060224593 | Benton et al. | Oct 2006 | A1 |
20070033531 | Marsh | Feb 2007 | A1 |
20070150470 | Brave et al. | Jun 2007 | A1 |
20070192314 | Heggem | Aug 2007 | A1 |
20080072180 | Chevalier et al. | Mar 2008 | A1 |
20080168052 | Ott et al. | Jul 2008 | A1 |
20080275864 | Kim et al. | Nov 2008 | A1 |
20080288348 | Zeng et al. | Nov 2008 | A1 |
20090265243 | Karassner et al. | Oct 2009 | A1 |
20100057717 | Kulkarni | Mar 2010 | A1 |
20100223125 | Spitkovsky | Sep 2010 | A1 |
20100262602 | Dumon et al. | Oct 2010 | A1 |
20110078049 | Rehman et al. | Mar 2011 | A1 |
20110275393 | Ramer et al. | Nov 2011 | A1 |
20150058174 | Dumon et al. | Feb 2015 | A1 |
20180068360 | Rehman et al. | Mar 2018 | A1 |
Number | Date | Country |
---|---|---|
2010118167 | Oct 2010 | WO |
Entry |
---|
PR Newswire: “Content Analysis Tool for Search Engine Optimization Now Available: New predictive content analysis application from Ecordia provides writers with SEO verification capabilities prior to publishing on search engines.” Sep. 11, 2009; ProQuest Dialog #453563710, 4 pgs. (Year: 2009). |
Non-Final Office Action Received for U.S. Appl. No. 15/809,589 dated May 16, 2018, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 15/809,589, dated Sep. 13, 2018, 6 pages. |
Response to Non-Final Office Action filed on Aug. 2, 2018, for U.S. Appl. No. 15/809,589, dated May 16, 2018, 8 pages. |
Advisory Action received for U.S. Appl. No. 12/476,046 dated Mar. 3, 2014, 3 pages. |
Applicant Initiated Interview Summary received for U.S. Appl. No. 12/476,046, dated Jun. 26, 2014, 3 pages. |
Final Office Action received for U.S. Appl. No. 12/476,046, dated Oct. 2, 2013, 16 pages. |
Notice of Allowance received for U.S. Appl. No. 12/476,046, dated Aug. 1, 2014, 9 pages. |
Response to Final Office Action filed on Feb. 3, 2014 for U.S. Appl. No. 12/476,046, dated Oct. 2, 2013, 12 pages. |
Response to Final Office Action filed on Jun. 30, 2014 for U.S. Appl. No. 12/476,046, dated Oct. 2, 2013, 11 pages. |
Applicant Initiated Interview Summary received for U.S. Appl. No. 14/530,482, dated Jan. 26, 2017, 3 pages. |
Final Office Action received for U.S. Appl. No. 14/530,482, dated Sep. 26, 2016, 11 pages. |
Non-Final Office Action received for U.S. Appl. No. 14/530,482, dated Mar. 28, 2016, 13 pages. |
Notice of Allowance received for U.S. Appl. No. 14/530,482, dated Feb. 9, 2017, 9 pages. |
Preliminary Amendment for U.S. Appl. No. 14/530,482, filed on Feb. 25, 2015, 6 pages. |
Response to Final Office Action filed on Jan. 26, 2017 for U.S. Appl. No. 14/530,482, dated Sep. 26, 2016, 8 pages. |
Response to Non-Final Office Action filed on Jul. 28, 2016 for U.S. Appl. No. 14/530,482, dated Mar. 28, 2016, 10 pages. |
Preliminary Amendment for U.S. Appl. No. 15/613,946, filed Jun. 6, 2017, 6 pages. |
“2014 Interim Guidance on Patent Subject Matter Eligibility”, vol. 79, No. 241, Retrieved from the Internet URL: <https://www.federalregister.gov/documents/2014/12/16/2014-29414/2014-interim-guidance-on-patent-subject-matter-eligibility>, Dec. 16, 2014, pp. 74618-74633. |
“Jul. 2015 Update: Interim Eligibility Guidance”, Retrieved from the Internet, Jul. 2015, 2 pages. |
Applicant Initiated Interview Summary received for U.S. Appl. No. 12/571,214, dated Jun. 28, 2017, 3 pages. |
Final Office Action received for U.S. Appl. No. 12/571,214, dated Apr. 2, 2015, 23 pages. |
Final Office Action received for U.S. Appl. No. 12/571,214, dated Aug. 11, 2016, 10 pages. |
Final Office Action received for U.S. Appl. No. 12/571,214, dated May 10, 2017, 18 pages. |
Final Office Action received for U.S. Appl. No. 12/571,214, dated May 20, 2013, 18 pages. |
Non-Final Office Action received for U.S. Appl. No. 12/571,214, dated Feb. 23, 2016, 26 pages. |
Non-Final Office Action received for U.S. Appl. No. 12/571,214, dated Feb. 29, 2012, 9 pages. |
Non-Final Office Action received for U.S. Appl. No. 12/571,214, dated Jul. 18, 2014, 17 pages. |
Non-Final Office Action received for U.S. Appl. No. 12/571,214, dated Nov. 14, 2012, 10 pages. |
Non-Final Office Action received for U.S. Appl. No. 12/571,214, dated Oct. 28, 2016, 17 pages. |
Notice of Allowance received for U.S. Appl. No. 12/571,214, dated Aug. 10, 2017, 10 pages. |
Response to Final Office Action filed on Aug. 20, 2013, for U.S. Appl. No. 12/571,214, dated May 20, 2013, 10 pages. |
Response to Final Office Action filed on Aug. 25, 2015, for U.S. Appl. No. 12/571,214, dated Apr. 2, 2015, 11 pages. |
Response to Final Office Action filed on Jul. 14, 2017, for U.S. Appl. No. 12/571,214, dated May 10, 2017, 10 pages. |
Response to Final Office Action filed on Oct. 11, 2016, for U.S. Appl. No. 12/571,214, dated Aug. 11, 2016, 10 pages. |
Response to Non-Final Office Action filed on Aug. 29, 2012, for U.S. Appl. No. 12/571,214, dated Feb. 29, 2012, 11 pages. |
Response to Non-Final Office Action filed on Dec. 18, 2014, for U.S. Appl. No. 12/571,214, dated Jul. 18, 2014, 13 pages. |
Response to Non-Final Office Action filed on Feb. 14, 2013, for U.S. Appl. No. 12/571,214, dated Nov. 14, 2012, 11 pages. |
Response to Non-Final Office Action filed on Jan. 24, 2017, for U.S. Appl. No. 12/571,214, dated Oct. 28, 2016, 10 pages. |
Response to Non-Final Office Action filed on May 23, 2016, for U.S. Appl. No. 12/571,214, dated Feb. 23, 2016, 16 pages. |
Response to Restriction Requirement filed on Feb. 6, 2012, for U.S. Appl. No. 12/571,214, dated Jan. 5, 2012, 8 pages. |
Restriction Requirement received for U.S. Appl. No. 12/571,214, dated Jan. 5, 2012, 6 pages. |
Blankenbaker, et al., “Paid search for online travel agencies: Exploring strategies for search keywords”, Journal of Revenue and Pricing Management, Mar. 2009, 13 pages. |
eBay, “Internet Archive Wayback Machine”, www.archive.org; www.ebay.com, 2007, 18 pages. |
eBay, “The World's Online Marketplace”, Internet Archive Wayback Machine, Feb.-Mar. 2007, 27 pages. |
PR Newswire, “Content Analysis Tool for Search Engine Optimization Now Available: New predictive content analysis application from Ecordia provides writers with SEO verification capabilities prior to publishing on search engines”, Sep. 11, 2009, 4 pages. |
Advisory Action received for U.S. Appl. No. 12/476,046, dated Feb. 23, 2012, 3 pages. |
Final Office Action received for U.S. Appl. No. 12/476,046, dated Oct. 31, 2011, 18 pages. |
Non-Final Office Action received for U.S. Appl. No. 12/476,046, dated Apr. 28, 2011, 15 pages. |
Non-Final Office Action received for U.S. Appl. No. 12/476,046, dated Apr. 30, 2012, 15 pages. |
Non-Final Office Action received for U.S. Appl. No. 12/476,046, dated Mar. 20, 2013, 13 pages. |
Response to Final Office Action filed on Jan. 31, 2012, for U.S. Appl. No. 12/476,046, dated Oct. 31, 2011, 10 pages. |
Response to Non-Final Office Action filed on Aug. 29, 2012, for U.S. Appl. No. 12/476,046, dated Apr. 30, 2012, 13 pages. |
Response to Non-Final Office Action filed on Jul. 22, 2013, for U.S. Appl. No. 12/476,046, dated Mar. 20, 2013, 12 pages. |
Response to Non-Final Office Action filed on Jul. 28, 2011, for U.S. Appl. No. 12/476,046, dated Apr. 28, 2011, 10 pages. |
International Preliminary Report on Patentability received for PCT Application No. PCT/US2010/030287, dated Oct. 20, 2011, 6 pages. |
International Search Report received for PCT Application No. PCT/US2010/030287, dated Jun. 9, 2010, 4 pages. |
Written Opinion received for PCT Application No. PCT/US2010/030287, dated Jun. 9, 2010, 6 pages. |
Non Final Office Action Received for U.S. Appl. No. 15/613,946, dated Dec. 12, 2019, 11 pages. |
Number | Date | Country | |
---|---|---|---|
20190139109 A1 | May 2019 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15809589 | Nov 2017 | US |
Child | 16237238 | US | |
Parent | 12571214 | Sep 2009 | US |
Child | 15809589 | US |