This disclosure generally relates to online content distribution, and more specifically to dynamically creating content items from a set of individual content components for a target audience.
Content providers produce content targeted to certain audiences within online systems, and users interact with the content they receive from those systems. With the advent of online systems such as social networking systems, content providers increasingly rely on those systems to create effective sponsored content that increases engagement among their users. For example, after presenting sponsored content provided by a content provider to users of an online system, the online system tracks how often the users interact with the presented content and calculates statistics for the content. These statistics may be accrued over numerous content campaigns and serve to measure the effectiveness of each content item in a campaign. Based on these statistics, the content provider can edit content items that perform poorly or, alternatively, choose to show content items that have performed well.
Currently, content providers face challenges in running content campaigns at scale on an online system, such as setting up a campaign so that the best possible content item is created and delivered to each user of the online system. For example, current solutions only enable a content provider to present pre-assembled content items to users of an online system. Online systems can track the performance of pre-assembled content items but provide little or no feedback to content providers about the performance of the particular components (e.g., text, images, and videos) of a content item. Content providers cannot “see inside a content item” to understand which components of the content item did not perform well for their objectives or target audience.
An online system, such as a social networking system, presents dynamically optimized content to users of the online system. Each sponsored content item (also referred to as “content” or a “content item”) has a number of different types of component creatives (also referred to as “creatives”). Examples of different types of creatives include images, videos, bodies of text, call_to_action_types (e.g., install application, play application), titles, descriptions, uniform resource locators (URLs), and captions. A dynamic creative optimization (DCO) module of the online system receives a number of component creatives from a user of the online system, such as a content provider, and assembles the creatives into a sponsored content item. The DCO module can also receive, from the content provider, constraints or rules describing how the component creatives should be included in the sponsored content item. For each opportunity to present a sponsored content item to a user (or a target audience that includes the user), the DCO module selects an optimal creative for each type of creative. For example, the optimal image creative is selected from multiple image creative candidates. The selection is based on a component model trained to dynamically optimize component creatives of that type. The DCO module assembles the selected creatives into a sponsored content item, which represents the optimal assembly of component creatives for the user (or for the audience that includes the user). Each user of the online system is presented with a sponsored content item having a number of component creatives, which are dynamically selected based on the user's information and information describing the component creatives. Different users of the online system are thus provided with different sponsored content items composed of different component creatives, each optimally selected for that audience or that user.
The figures depict various embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
Overview of System Environment
A client device 110 is a computing device capable of receiving user input through a user interface 112, as well as transmitting and/or receiving data via the network 120. Examples of client devices 110 include desktop computers, laptop computers, tablet computers (pads), mobile phones, personal digital assistants (PDAs), gaming devices, or any other electronic device including computing functionality and data communication capabilities. A user of the client device 110 accesses the online system 130 and interacts with content provided by the online system 130 or by the content provider system 140. For example, the user may retrieve the content for viewing and indicate an affinity towards the content by posting comments about the content or recommending the content to other users. Alternatively, a user may indicate a dislike towards the content by flagging the content or closing or hiding the content window, thereby indicating that the user is not interested in the content.
The network 120 facilitates communications among one or more client devices 110, the online system 130, and/or one or more content provider systems 140. The network 120 may be any wired or wireless local area network (LAN) and/or wide area network (WAN), such as an intranet, an extranet, or the Internet. In various embodiments, the network 120 uses standard communication technologies and/or protocols. Examples of technologies used by the network 120 include Ethernet, 802.11, 3G, 4G, 802.16, or any other suitable communication technology. The network 120 may use wireless, wired, or a combination of wireless and wired communication technologies. Examples of protocols used by the network 120 include transmission control protocol/Internet protocol (TCP/IP), hypertext transport protocol (HTTP), simple mail transfer protocol (SMTP), file transfer protocol (FTP), or any other suitable communication protocol.
The content provider system 140 is used by content providers for interacting with the online system 130. Examples of interactions include providing content, providing components of the content, and providing information related to the content and the components. In the embodiment shown in
The content provider system 140 provides one or more content items 144 and/or component creatives to be included in a content item 144 to the online system 130. A content item 144 may be sponsored content such as advertisements sponsored by advertisers. A content item 144 is a combination of a number of component creatives (also called “creatives”); each component creative is a part of the content item 144 to be presented to a target user, and each component creative is of a type. Examples of types of creatives include an image, a video, a body representing the primary message of the content item, a call_to_action_type (e.g., shop_now, learn_more, etc.), a title representing a short headline in the content item, a description representing a secondary message of the content item, a URL, and a caption representing the text corresponding to a URL. In one embodiment, a content provider system 140 provides a content item 144 having a set of predetermined creatives to the online system 130 for presentation to a target user, e.g., {Image A, Title A, Body B}. In another embodiment, a content provider system 140 provides a set of creatives to the online system 130, which dynamically decides which creative to use in the content item 144 to be delivered to a target user. For example, a content provider system 140 is able to provide a content item 144 in a set having the following different types of creatives:
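Such a creative set might be represented as in the following minimal sketch. The creative names (ImageA, TitleB, and so on) and the dictionary layout are assumptions for illustration, consistent with the example identifiers used elsewhere in this description, not the system's actual data model:

```python
# Hypothetical creative set submitted by a content provider: several
# candidate creatives per type, from which the online system later
# selects one of each type.
creative_set = {
    "image": ["ImageA", "ImageB", "ImageC"],
    "title": ["TitleA", "TitleB"],
    "body": ["BodyA", "BodyB", "BodyC"],
    "call_to_action_type": ["shop_now", "learn_more"],
}

def possible_assemblies(creative_set):
    """Count how many distinct content items this set could yield
    (the product of the candidate counts per creative type)."""
    count = 1
    for candidates in creative_set.values():
        count *= len(candidates)
    return count
```

Even this small set yields 3 × 2 × 3 × 2 = 36 distinct possible content items, which illustrates why per-component optimization scales better than pre-assembling every combination.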
The online system 130 communicates via the network 120 with the content provider system 140, and/or with one or more client devices 110. In one embodiment, the online system 130 receives a content item 144 having a set of predetermined creatives. In another embodiment, the online system 130 receives a set of creatives from which the content item 144 is dynamically created upon receiving a request for presentation of a content item 144. The online system 130 then delivers the content item 144 to its target audience. For simplicity, the content item 144 having a set of predetermined creatives is referred to as “pre-assembled content item” and the content item 144 to be dynamically assembled is referred to as “DCO content item.”
To provide a DCO content item for a target user, the online system 130 applies trained component models, each component model associated with a particular type of creative in the content item 144; e.g., an image model is applied to image creatives in the content item 144. Each creative is scored by its corresponding trained component model, which generates a prediction score that also takes into consideration the target user's information. The online system 130 selects the creative with the highest prediction score from each category of creatives and combines the selected creatives of each type to create the DCO content item for the target user. Therefore, two different audiences are provided with different DCO content items composed of different combinations of creatives. Using the same example described above, the online system 130 dynamically decides which creatives to use in the content item 144 to be delivered to a target user, e.g., for user 1, the content item 144 including {ImageB, TextB and BodyB}; for a different user, e.g., for user 2, the content item 144 including {ImageC, TextA, BodyC}. Dynamically assembling content items is further described with reference to
Turning now to
Turning back to
In one embodiment, the online system 130 may use edges to generate stories describing actions performed by users, which are communicated to one or more additional users connected to the users through the online system 130. For example, the online system 130 may present a story to an additional user about a first user (e.g. a friend) that has liked a new game or application advertised by a sponsored content item presented to the first user. The additional user may choose to interact with the presented story thereby creating an edge in the social graph maintained by the online system 130 between the additional user and the subject matter of the story. The online system 130 may store this edge. This edge may be retrieved at a future time point when the online system 130 seeks to identify components that may align well with the additional user's preferences.
In various embodiments, in addition to receiving one or more content items 144 from the content provider system 140, the online system 130 may also receive one or more advertisement requests. In various embodiments, an advertisement request includes a landing page specifying a network address to which a user is directed when the advertisement is accessed. An advertisement request from an advertiser also includes a bid amount associated with an advertisement. The bid amount is used to determine an expected value, such as monetary compensation, provided by an advertiser to the online system 130 if the advertisement is presented to a user, if the advertisement receives a user interaction, or based on any other suitable condition. For example, the bid amount specifies a monetary amount that the online system 130 receives from the advertiser if the advertisement is displayed and the expected value is determined by multiplying the bid amount by a probability of the advertisement being accessed.
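The expected-value computation described above can be sketched directly; the function name and example numbers are hypothetical, but the formula (bid amount multiplied by access probability) is the one stated in this paragraph:

```python
def expected_value(bid_amount, access_probability):
    """Expected compensation to the online system: the advertiser's bid
    amount multiplied by the probability the advertisement is accessed."""
    return bid_amount * access_probability
```

For example, a $2.00 bid with a 5% predicted access probability yields an expected value of $0.10 for that presentation opportunity.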
Dynamic Creative Optimization (DCO)
In one embodiment, the online system 130 has a dynamic creative optimization module 200 to dynamically select creatives to be included in a content item for a target user.
In the embodiment shown in
The creative feature extraction module 205 receives creatives of a content item from a content provider through the content provider system 140 and the network 120. The creative feature extraction module 205 extracts features of each creative and stores the extracted creative features in a creative feature vector in the creative feature store 240. In one embodiment, the creative feature extraction module 205 extracts specific features associated with each type of creative. For example, for textual creatives such as the description, call_to_action_type, caption, and body text, the creative feature extraction module 205 uses textual analysis methods known to those of ordinary skill in the art to extract individual words and text strings from the creatives. Taking the title text 310 of the content item 300 as illustrated in
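A toy version of this textual extraction step is sketched below. Real implementations would use full tokenization and normalization pipelines; the function here, and the sample title passed to it, are simplified stand-ins for illustration only:

```python
def extract_text_features(text):
    """Split a textual creative into lowercase word features,
    stripping trailing punctuation -- a minimal stand-in for the
    textual analysis methods described above."""
    words = [word.strip(".,!?").lower() for word in text.split()]
    return [word for word in words if word]
```

Applied to a hypothetical title creative such as "Fly to Hawaii now!", this yields the individual word features that can then be stored in the creative feature vector.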
The creative feature extraction module 205 extracts various image features associated with an image creative such as the dominant color of the image, the background color of the image, the size of the image (e.g., width and length of the image), and a total number of image skin blobs. In one embodiment, the creative feature extraction module 205 uses image processing algorithms such as edge detection, blob extraction, histogram analysis, pixel intensity filtering, gradient filtering, or scale-invariant feature transform to extract visual features of an image. Alternatively, the creative feature extraction module 205 applies an image feature extraction model to extract visual features of an image, where the extraction model is trained using an asynchronous stochastic gradient descent procedure and a variety of distributed batch optimization procedures on computing clusters with a large corpus of training images.
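As one concrete illustration of the histogram-analysis approach mentioned above, a dominant-color feature can be computed by counting pixel values and taking the most frequent one. This sketch operates on a flat list of (R, G, B) tuples rather than a real image decoder, purely for illustration:

```python
def dominant_color(pixels):
    """Return the most frequent (R, G, B) pixel value -- a toy
    histogram analysis standing in for the image processing
    algorithms described above."""
    counts = {}
    for pixel in pixels:
        counts[pixel] = counts.get(pixel, 0) + 1
    return max(counts, key=counts.get)
```

In a production system this would run over decoded image data and feed the resulting feature value into the creative feature vector alongside width, height, and the other image features.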
In addition to visual features associated with an image creative, the creative feature extraction module 205 may also extract text associated with the image, e.g., a textual caption of the image, and other related information, e.g., the location of the feature in the image creative. For example, in
The creative model training module 210 continuously trains a creative model for each creative type using the training data stored in the training data store 245. For example, the creative model training module 210 trains an image model for image creatives, a video model for video creatives, a title model for title creatives, a body model for body creatives, a call_to_action_type model for call_to_action_type creatives, and a caption model for caption creatives. Each trained component model is configured to generate a prediction score for each creative candidate to be included in a content item for a target user; in other words, each trained model takes target user information and creative information as input and generates a score that reflects how likely the target user is to click on the content item having the creative. The target user information is represented by multiple user features (e.g., a few thousand features from the user profile and other information associated with the user) such as age, gender, demographic group, socioeconomic status, personal interests, and social connections. The creative information is represented by multiple creative features (e.g., a few hundred features from the creatives) such as image width, image height, and the image's most frequent pixel value for its green component.
For example, a trained image model, ImageModel, is configured to predict how likely a user, User, is to click a content item having an image, ImageA, as follows: ImageModel (ImageA, User)→0.50, where 0.50 is the prediction score. Similarly, the trained image model can be applied to another image, ImageB, to predict how likely User is to click the content item having an ImageB as: ImageModel (ImageB, User)→0.55, where 0.55 is the prediction score.
In one embodiment, the creative model training module 210 trains the creative models using one or more machine learning algorithms such as neural networks, naïve Bayes, and support vector machines with the training data stored in the training data store 245. The training data store 245 stores various data for the creative model training module 210 to train the creative models. Examples of the training data include statistics of past advertisement campaigns, such as the click-through rate (CTR) or impression rate, of previously presented creatives or content items of assembled creatives. The training data store 245 also stores training data describing user information of various types of target audiences, e.g., age, gender, demographic group, socioeconomic status.
In one embodiment, the different creative models are trained based on the user information. For example, responding to training samples showing that males in the 18-25 year old group interact more favorably with image creatives involving sports cars than with other types of cars, the creative model training module 210 trains an image creative model that generates a higher prediction score for an image creative showing a sports car than for an image creative of a minivan, for a male user of that age group.
In one embodiment, the different creative models are trained for different target audiences based on user actions performed by users of the online system 130. For example, a user may have numerous positive posts about victories by the Golden State Warriors on his/her user profile, and the online system 130 stores edges between the user and the Golden State Warriors. The creative model training module 210 trains various creative models to generate higher prediction scores for creatives related to the Golden State Warriors. At run time, the online system 130 receives an image creative from the content provider system 140 that depicts an image of the Warriors logo. The creative feature extraction module 205 may extract an image feature that is related to the Warriors. The image creative of the Warriors logo is scored highly for the user by an image model trained by the creative model training module 210.
The creative analysis module 220 retrieves the extracted features of the individual creatives of a content item from the creative feature store 240 and user features of a target user of the content item from a user feature store or the user profile and maps each feature to a feature value. In one embodiment, the creative analysis module 220 organizes the creative features and the user features as an array (also called “feature vector”). Each feature has an identification and a feature name, e.g., {Feature1: age}. An example feature vector for an image creative described by its width and height and a targeting audience defined by its age and gender is as follows:
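The example feature vector referenced here might look like the following, reconstructed from the four features (user age and gender, image width and height) described in the surrounding text; the identifier names are assumptions, not the system's actual schema:

```python
# Reconstructed example feature vector: 2 user features plus
# 2 image-creative features, each entry pairing an identification
# with a feature name (identifiers are hypothetical).
feature_vector = [
    {"Feature1": "age"},
    {"Feature2": "gender"},
    {"Feature3": "image_width"},
    {"Feature4": "image_height"},
]
```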
Although this example feature vector depicts 4 different features (2 from the user, 2 from the image creative), one skilled in the art can appreciate that in other examples, there may be thousands of additional features associated with the user and the image creative that may be included.
The creative analysis module 220 maps each feature in the feature vector associated with a content item to a feature value based on the target user information and creative information of the content item. Each feature value has a predefined value range. For example, gender can be represented by either 1 (for male) or 0 (for female). For a color image in RGB (red-green-blue) color space with each color pixel represented by 8 bits, a feature representing the color image's most frequent pixel value for its red, green, or blue component has a feature value between 0 and 255. Taking the feature vector of an image creative described by its width (640 pixels) and height (480 pixels) and a target user defined by age (29 years old) and gender (male, represented by 1), the creative analysis module 220 transforms the feature vector into an array of feature values such as [29, 1, 640, 480]. The creative analysis module 220 stores the feature vector and its corresponding array of feature values, each of which corresponds to a feature in the feature vector, in the creative feature store 240.
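The mapping from named features to the value array [29, 1, 640, 480] described above can be sketched as follows; the feature identifiers and the helper function are assumptions for illustration:

```python
def to_value_array(feature_vector, feature_values):
    """Map each named feature to its numeric value, preserving the
    order of the feature vector, as in the transformation to
    [29, 1, 640, 480] described above."""
    return [feature_values[name]
            for feature in feature_vector
            for name in feature.values()]

# Hypothetical inputs matching the example in the text.
feature_vector = [{"Feature1": "age"}, {"Feature2": "gender"},
                  {"Feature3": "image_width"}, {"Feature4": "image_height"}]
feature_values = {"age": 29, "gender": 1,
                  "image_width": 640, "image_height": 480}
```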
The creative ranking module 225 ranks each creative candidate to be included in a content item. In one embodiment, the creative ranking module 225 applies a trained creative model to each creative candidate of the corresponding type. For example, the creative ranking module 225 applies a trained image creative model to each image creative candidate and generates a prediction score for the image creative candidate for a given target user. Similarly, the creative ranking module 225 applies a trained title creative model to each title creative candidate and generates a prediction score for the title creative candidate for a given target user. Using the prediction scores, the creative ranking module 225 ranks the creative candidates for each creative type and selects an optimal creative having the highest prediction score among all creative candidates of the same type. For example, assume that the content provider system 140 provides to the online system 130 two images, ImageA and ImageB, and three titles, TitleA, TitleB, and TitleC, to be considered for a content item for a target user, User. The creative ranking module 225 applies an image creative model, e.g., ImageModel, to each of the two images and generates a prediction score for each image. Similarly, the creative ranking module 225 applies a title creative model, e.g., TitleModel, to each of the three titles and generates a prediction score for each title. Based on the prediction scores, the creative ranking module 225 selects an image having the highest prediction score from the two image creative candidates and a title having the highest prediction score among the three title creative candidates. An example pseudocode for the operations of the creative ranking module 225 using the above example is as follows:
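A minimal sketch of these ranking operations is shown below. The stub models return fixed scores (the ImageA/ImageB values match the ImageModel example given earlier; the title scores are assumed), so this illustrates the per-type score-and-select logic rather than any actual trained model:

```python
def rank_and_select(candidates_by_type, models, user):
    """For each creative type, score every candidate with that type's
    trained model and keep the highest-scoring candidate."""
    selected = {}
    for ctype, candidates in candidates_by_type.items():
        model = models[ctype]
        selected[ctype] = max(candidates, key=lambda c: model(c, user))
    return selected

# Stub component models with assumed prediction scores.
image_scores = {"ImageA": 0.50, "ImageB": 0.55}
title_scores = {"TitleA": 0.40, "TitleB": 0.70, "TitleC": 0.30}

def image_model(creative, user):
    return image_scores[creative]

def title_model(creative, user):
    return title_scores[creative]
```

With these stub scores, ImageB and TitleB would be selected for the content item presented to the target user.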
The creative assembly module 230 retrieves the selected optimal creatives of the content item, where each selected creative has the highest prediction score among multiple creatives of the same creative type for a target user, and fully assembles the creatives into a DCO content item to be shown to the target user. The creative assembly module 230 assembles a DCO content item composed of a different combination of the creatives associated with a content item for each different target user. The creative assembly module 230 provides the DCO content item for a target user to other modules (not shown), such as a content bidding module of the online system 130, for further processing. In response to a request for content items for the target user, the content bidding module of the online system 130 evaluates all the content item candidates, including the DCO content item for the target user, based on a variety of evaluation factors (e.g., the age of each content item, whether the content item has previously been shown) and selects the best content item for the target user at that particular moment.
In some embodiments, the creative assembly module 230 further calculates a creative score that reflects the effectiveness of the fully assembled DCO content item. In one embodiment, the creative score may simply be an average of the prediction scores of the individual creatives included in the DCO content item. In some embodiments, the creative score of the DCO content item is a weighted average of the prediction scores of the individual creatives in the DCO content item, where each creative's prediction score may be weighted differently depending on the type of the creative. In one embodiment, the weighting may be determined based on the population group that the DCO content item is targeted for. In some embodiments, the creative score of the DCO content item is calculated based on the past number of clicks on the DCO content item by its target user over a period of time.
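The simple-average and weighted-average variants of the creative score described above can be captured in one small function; the per-type weights here are placeholders, since the text leaves the actual weighting scheme unspecified:

```python
def creative_score(prediction_scores, weights=None):
    """Weighted average of component prediction scores; with no
    weights supplied this reduces to the simple average described
    above."""
    if weights is None:
        weights = {ctype: 1.0 for ctype in prediction_scores}
    total_weight = sum(weights[ctype] for ctype in prediction_scores)
    weighted_sum = sum(score * weights[ctype]
                       for ctype, score in prediction_scores.items())
    return weighted_sum / total_weight
```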
In some embodiments, the creative assembly module 230 assembles the creatives of a content item while taking into consideration applicable rules or constraints associated with the content item. The rules are provided by the content provider, where the rules describe how the creatives of the content item should be assembled into a DCO content item. In one embodiment, the creative rule module 235 receives the rules associated with the content item from the content providers and stores the received rules in the creative rule store 250. The creative rule module 235 selects applicable rules for assembling the creatives of the content item and provides the selected rules to the creative assembly module 230.
In one embodiment, each rule for creating a DCO content item includes a condition, an operator, and a type of action to be performed on one or more creatives. The condition parameter defines when, and what type of, an action should be applied to one or more creatives to be included in the DCO content item. Examples of condition value types include Boolean, string, and int (representing an integer value); examples of operators include not equal, equal, bigger than, smaller than, logical “AND,” and logical “OR.” Example actions that can be applied to creatives include group, mutex, promote, demote, and template.
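One way such a condition/operator/action rule could be represented and evaluated is sketched below. The field names and rule record are hypothetical, not the system's actual rule schema; only the operator and action vocabulary comes from the description above:

```python
def rule_applies(rule, creative):
    """Evaluate a single rule's condition (field, operator, value)
    against a creative, returning whether its action should fire."""
    operators = {
        "equal": lambda a, b: a == b,
        "not_equal": lambda a, b: a != b,
        "bigger_than": lambda a, b: a > b,
        "smaller_than": lambda a, b: a < b,
    }
    return operators[rule["operator"]](creative.get(rule["field"]),
                                       rule["value"])

# Hypothetical rule: promote any creative whose type equals "image".
rule = {"field": "type", "operator": "equal", "value": "image",
        "action": "promote"}
```

When a rule applies, the creative assembly module 230 would carry out the named action (here, promote) on the matching creatives during assembly.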
Assembling Creatives Using Dynamic Creative Optimization
The DCO module 200 analyzes 420 the creative features, e.g., adding an extracted feature into a feature vector and mapping each feature in the feature vector to a feature value. The DCO module 200 trains one or more creative models using various training data retrieved from the training data store 245. Each creative type has a corresponding creative model, e.g., image creatives having an image model and title creatives having a title model. The DCO module 200 applies 425 a trained creative model to each received creative according to the type of each creative and generates 430 a prediction score, which represents a likelihood that the target user interacts with a DCO content item having the creative being scored. The DCO module 200 ranks 435 the creatives of the same type, e.g., all image creatives received from the content provider, based on the prediction scores of the creatives. The DCO module 200 selects 440 a creative for each creative type, where each selected creative has the highest prediction score among all the creatives of the same type. The DCO module 200 generates 445 a DCO content item composed of the selected creatives for the target user. A different target user of the content item may receive a DCO content item composed of different creatives selected from the same set of creatives provided by the content provider system 140.
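The scoring, ranking, selection, and assembly steps (425 through 445) above can be condensed into one end-to-end sketch. The scoring function below returns fixed assumed scores in place of a trained model, so the example shows only the control flow, not real prediction:

```python
def dco_pipeline(creative_set, score, user):
    """Steps 425-445 in miniature: score each creative with its
    type's model, keep the best per type, and assemble the selected
    creatives into a DCO content item for the given user."""
    item = {}
    for ctype, candidates in creative_set.items():
        item[ctype] = max(candidates, key=lambda c: score(ctype, c, user))
    return item

# Toy scoring table with assumed prediction scores.
toy_scores = {("image", "ImageB"): 0.55, ("image", "ImageC"): 0.62,
              ("title", "TitleA"): 0.48, ("title", "TitleB"): 0.33}

def toy_score(ctype, creative, user):
    return toy_scores.get((ctype, creative), 0.0)
```

Running the pipeline with a different user would, in a real system, change the scores and hence the assembled combination, which is how different target users receive different DCO content items from the same creative set.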
The foregoing description of the embodiments of the invention has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.
Some portions of this description describe the embodiments of the invention in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof.
Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.
Embodiments of the invention may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
Embodiments of the invention may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
20160307237 | Glover et al. | Oct 2016 | A1 |
20160328789 | Grosz et al. | Nov 2016 | A1 |
20160334240 | Arokiaraj et al. | Nov 2016 | A1 |
20160345076 | Makhlouf | Nov 2016 | A1 |
20160357717 | Metz et al. | Dec 2016 | A1 |
20160357725 | Homans et al. | Dec 2016 | A1 |
20160364770 | Denton et al. | Dec 2016 | A1 |
20160371230 | Kirillov et al. | Dec 2016 | A1 |
20160371231 | Kirillov et al. | Dec 2016 | A1 |
20170068996 | Qin | Mar 2017 | A1 |
20170161794 | Zhu et al. | Jun 2017 | A1 |
20170178187 | Santi et al. | Jun 2017 | A1 |
20170220694 | Vaish et al. | Aug 2017 | A1 |
20170270083 | Pruitt et al. | Sep 2017 | A1 |
20180025470 | Wang | Jan 2018 | A1 |
20180040029 | Zeng | Feb 2018 | A1 |
20180060921 | Mengle et al. | Mar 2018 | A1 |
20180158094 | Chitilian et al. | Jun 2018 | A1 |
20180189074 | Kulkarni et al. | Jul 2018 | A1 |
20180189822 | Kulkarni et al. | Jul 2018 | A1 |
20180189843 | Kulkarni | Jul 2018 | A1 |
20180300745 | Aubespin et al. | Oct 2018 | A1 |
20180365707 | Jha et al. | Dec 2018 | A1 |
Number | Date | Country |
---|---|---|
2014-215685 | Nov 2014 | JP |
WO 2005/045607 | May 2005 | WO
WO 2005/125201 | Dec 2005 | WO
WO 2011/009101 | Jan 2011 | WO
Entry |
---|
International Search Report and Written Opinion, PCT Application No. PCT/US2017/037776, dated Sep. 25, 2017, 12 pages. |
United States Office Action, U.S. Appl. No. 15/397,549, dated Mar. 29, 2019, 15 pages. |
United States Office Action, U.S. Appl. No. 15/397,556, dated Jan. 24, 2019, 18 pages. |
United States Office Action, U.S. Appl. No. 15/397,556, dated Aug. 13, 2018, 21 pages. |
United States Office Action, U.S. Appl. No. 15/397,537, dated Apr. 17, 2019, 14 pages. |
Schrier, E. et al., “Adaptive Layout for Dynamically Aggregated Documents,” IUI '08, Jan. 13-16, 2008, Maspalomas, Gran Canaria, 14 pages. |
United States Office Action, U.S. Appl. No. 15/397,537, dated Dec. 30, 2019, 13 pages. |
Number | Date | Country |
---|---|---|
20180004847 A1 | Jan 2018 | US |