The invention disclosed broadly relates to the field of display advertising on web pages and more particularly to classifying display ads.
Advertising is a critical economic driver of the internet ecosystem, with internet advertising revenues estimated at around US $5.9 billion in the first quarter of 2010 alone. This online revenue stream supports the explosive growth in the number of web sites and helps offset the associated infrastructure costs. There are two main types of advertising, depending on the nature of the ad creative: textual advertising, in which the ads contain text snippets similar to the content of a web page, and display advertising, in which the ads are graphical ad creatives in various formats and sizes (static images, interactive ads powered by Flash that change shape and size depending on user interaction, etc.). Text ads are typically displayed in response to a search query on the search results page, while display ads are shown on other content pages. Advertisers book display advertising campaigns by specifying the attributes of the site where their ads should be displayed and/or the attributes of the users to whom the ads can be shown. For example, a display advertising campaign can specify that the ads should be shown only on pages related to Sports, and only to users who visit those pages from, say, the state of California, USA. In addition, the advertiser (or an advertising agency working on behalf of the advertiser) also specifies the ad creative (the physical ad image) that should be displayed in the user's browser, and the time period over which the ad should run.
Ad serving systems select the ads to show based on the relevance of the ad to the content of the page, the user, or both. This serving typically involves two steps: (i) a matching step, which first selects a list of ads eligible to be displayed in an ad-serving opportunity, depending on the requirements of the advertiser, the attributes of the page, the user, and so on; and (ii) a ranking step, which then rank-orders the list of eligible ads based on some objective function (relevance, expected revenue, etc.). The algorithms in these matching and ranking steps leverage data about the available ads, the content of the pages on which the ads are to be shown, the interests of the user, and so on. Typical display ad campaigns do not require the advertiser to provide much information about the ads themselves, other than that they meet certain quality requirements, for example, that the image should not contain any offensive content and should render correctly in the browser.
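For illustration only, the following is a minimal sketch, in Python, of the two-step serving flow described above (matching followed by ranking). The Ad fields, the eligibility rule, and the scoring function are hypothetical placeholders and not the specifics of any particular ad serving system.

```python
# Illustrative sketch of select-then-rank ad serving (hypothetical data model).
from dataclasses import dataclass

@dataclass
class Ad:
    ad_id: str
    target_categories: set      # categories the advertiser booked
    expected_revenue: float     # stand-in objective value for ranking

def serve(ads, page_categories):
    # (i) Matching: keep only ads whose targeting overlaps the page categories.
    eligible = [ad for ad in ads if ad.target_categories & page_categories]
    # (ii) Ranking: order the eligible ads by the objective function.
    return sorted(eligible, key=lambda ad: ad.expected_revenue, reverse=True)

ads = [Ad("a1", {"Sports"}, 0.8), Ad("a2", {"Finance"}, 1.2), Ad("a3", {"Sports"}, 1.5)]
print([ad.ad_id for ad in serve(ads, {"Sports"})])   # ['a3', 'a1']
```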
One common piece of information used in these matching and ranking steps is the category of the component entities (pages, queries, ads), drawn from a set of relevant user-interest categories (e.g., Travel, Finance, Sports). These categories are assigned either manually by editors or by machine-learned categorization tools trained on a historically labeled set of entities. It is typically easier to train machine-learned categorization tools to categorize content pages, queries, and text ads, using standard feature-construction techniques from information retrieval, for example, a bag of words, term-frequency-inverse-document-frequency (tf-idf) feature weights, and so on. Display ads, on the other hand, do not lend themselves to easy feature representations. Categorization of display ads therefore typically involves large-scale manual labeling by a large team of human editorial experts.
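As a non-limiting illustration of the standard bag-of-words/tf-idf feature construction referred to above, the following sketch uses the scikit-learn library in Python; the example documents are hypothetical and the sketch is not part of the claimed method.

```python
# Minimal sketch of bag-of-words / tf-idf feature construction for text entities.
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "cheap flights to california book your travel today",   # e.g., text of a travel ad
    "latest sports scores and live game coverage",          # e.g., text of a sports page
]

vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
features = vectorizer.fit_transform(documents)   # sparse matrix: documents x vocabulary terms

print(vectorizer.get_feature_names_out())
print(features.toarray())
```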
Briefly, according to an embodiment of the invention, a method includes steps or acts of: extracting text features from ad images using OCR (optical character recognition) techniques; identifying objects of interest in ad images using object detection and recognition techniques from computer vision; extracting text features from the web page of the advertiser to which the user is redirected when clicking the ad (also called the landing page of the ad); training statistical models using the extracted features as well as advertiser attributes from a historical dataset of ads labeled by human editors; and determining the relevant categories of unlabeled ads using the trained models.
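The following is an illustrative sketch of the OCR and landing-page text-extraction steps listed above, assuming the pytesseract, Pillow, requests, and BeautifulSoup Python libraries are available; the file name and URL are hypothetical placeholders, and the sketch does not limit the disclosed method.

```python
# Illustrative feature extraction from an ad image (OCR) and its landing page.
import pytesseract
from PIL import Image
import requests
from bs4 import BeautifulSoup

def ocr_text_from_ad_image(image_path: str) -> str:
    """Extract the visible text from an ad creative using OCR."""
    return pytesseract.image_to_string(Image.open(image_path))

def text_from_landing_page(url: str) -> dict:
    """Extract title, keywords, and body text from an ad's landing page."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    keywords_tag = soup.find("meta", attrs={"name": "keywords"})
    return {
        "title": soup.title.get_text(strip=True) if soup.title else "",
        "keywords": keywords_tag["content"]
                    if keywords_tag and keywords_tag.has_attr("content") else "",
        "body": soup.get_text(separator=" ", strip=True),
    }

# Hypothetical usage:
# ad_text = ocr_text_from_ad_image("ad_creative.png")
# page_text = text_from_landing_page("https://www.example-advertiser.com/landing")
```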
To describe the foregoing and other exemplary purposes, aspects, and advantages, we use the following detailed description of an exemplary embodiment of the invention with reference to the drawings.
While the invention as claimed can be modified into alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the scope of the present invention.
Before describing in detail embodiments that are in accordance with the present invention, it should be observed that the embodiments reside primarily in combinations of method steps and system components related to systems and methods for classifying display ads. Accordingly, the system components and method steps have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein. Thus, it will be appreciated that, for simplicity and clarity of illustration, common and well-understood elements that are useful or necessary in a commercially feasible embodiment may not be depicted, in order to facilitate a less obstructed view of these various embodiments.
We developed a new method to classify display ads into a taxonomy of categories. The method leverages information from ad images and ad landing pages. We extract text from ad images using OCR techniques. We identify objects of interest from ad images using object detection and recognition techniques in computer vision. We extract text in the title, keywords, and body of ad landing pages. We generate bag-of-words features using the extracted features mentioned above as well as the attributes of advertisers. We train one one-vs-all SVM (support vector machine) classifier for each category in the taxonomy on a historical dataset of ads labeled by human editors. The categories of ads are rolled up according to the taxonomy, e.g., if an ad belongs to automotive/sedan, it also belongs to automotive. To classify a new ad, we compute its score for each category using the corresponding SVM classifier and add the category to its label list if its score is above a certain threshold.
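By way of illustration, the following is a minimal sketch of the per-category one-vs-all SVM training, the threshold-based labeling, and the taxonomy roll-up described above, using scikit-learn in Python. The category names, threshold, and training examples are hypothetical stand-ins; the actual system trains on a historical dataset of editorially labeled ads and on the full set of extracted features.

```python
# Illustrative one-vs-all SVM classification with taxonomy roll-up.
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import TfidfVectorizer

PARENT = {"automotive/sedan": "automotive"}   # fragment of a hypothetical taxonomy

train_texts = ["new sedan models low financing", "book a beach vacation now"]
train_labels = {"automotive/sedan": [1, 0], "travel": [0, 1]}   # editorial labels

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_texts)

# One one-vs-all linear SVM per category in the taxonomy.
classifiers = {c: LinearSVC().fit(X, y) for c, y in train_labels.items()}

def classify_ad(ad_text: str, threshold: float = 0.0) -> set:
    """Score the ad against every category, keep categories scoring above the
    threshold, then roll labels up the taxonomy (sedan implies automotive)."""
    x = vectorizer.transform([ad_text])
    labels = {c for c, clf in classifiers.items()
              if clf.decision_function(x)[0] > threshold}
    for c in list(labels):                      # roll-up step
        if c in PARENT:
            labels.add(PARENT[c])
    return labels

print(classify_ad("zero percent financing on a new sedan"))
```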
The invention has two major advantages. First, the method leverages information from multiple channels, including ad images and landing pages, and signals from multiple channels can reinforce one another. Second, the method extracts text features from ad images, which are often more informative than other standard image features (e.g., color, texture).
The system 200 also includes a communication interface 218 connected to a local area network 226 via a communication link 222. The system 200 performs a method that includes the steps of: reading the ad image and the landing page (the web page of the advertiser to which the user is redirected when clicking the ad) from a storage device; using a processor device to execute optical character recognition (OCR) to extract text features from the ad image; using a processor device to execute object detection and recognition to identify objects of interest in the ad image; using a processor device to parse the landing page to extract text features; storing the extracted features from the ad image and landing page in a storage device; training statistical models using the extracted features as well as advertiser attributes from a historical dataset of ads labeled by human editors; and determining the relevant categories of unlabeled ads using the trained models. The system further comprises an input/output device 214.
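For completeness, the following is an illustrative sketch of the object detection and recognition step referred to above, using an OpenCV Haar cascade in Python as a stand-in for the computer-vision component. The particular cascade (frontal faces) and the image path are assumptions for illustration only, not the specific detectors of the disclosed system.

```python
# Illustrative object detection on an ad image using an OpenCV Haar cascade.
import cv2

def detect_objects_of_interest(image_path: str):
    """Return bounding boxes for detected objects in the ad image."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Each detection can be converted into a feature, e.g. "contains_face".
    return boxes

# Hypothetical usage:
# boxes = detect_objects_of_interest("ad_creative.png")
# features = {"contains_face": len(boxes) > 0}
```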
The invention has multiple uses in display advertising, including increasing ad categorization coverage, scaling up ad categorization capacity to handle large volumes of ads by reducing the amount of human editorial effort, and better utilizing human editorial experts by focusing their effort on categorizing difficult ads. In addition, the ad image and landing page features extracted by this ad categorization system can be used to improve the matching and ranking steps of ad selection algorithms in display ad serving systems.
Therefore, while there has been described what is presently considered to be the preferred embodiment, it will be understood by those skilled in the art that other modifications can be made within the spirit of the invention. The above description of embodiments is not intended to be exhaustive or limiting in scope. The embodiments, as described, were chosen in order to explain the principles of the invention, show its practical application, and enable those with ordinary skill in the art to understand how to make and use the invention. It should be understood that the invention is not limited to the embodiments described above, but rather should be interpreted within the full meaning and scope of the appended claims.