DYNAMIC TABLE FRAMEWORK FOR MANAGING DATA IN A HIGH PERFORMANCE WEB SERVICE

Information

  • Patent Application
  • Publication Number
    20130268508
  • Date Filed
    November 29, 2012
  • Date Published
    October 10, 2013
Abstract
A system and method provides a dynamic table framework for managing data in a high performance web service. An example embodiment includes: receiving a request at a web service; creating a dynamic record from the request; obtaining a runtime corresponding to the dynamic record, the runtime including an associated plurality of symbol values corresponding to the request; choosing a model corresponding to the runtime, the model including a plurality of symbol managers, each of the plurality of symbol managers being associated with the plurality of symbol values, each of the plurality of symbol managers for processing a specific task of the model; executing the model, by use of a data processor, to process the request, the model using at least one of the plurality of symbol managers; and returning results generated by execution of the model.
Description
TECHNICAL FIELD

This application relates to a method and system for use with an electronic commerce system, according to one embodiment, and more specifically, for managing data in a high performance web service.


BACKGROUND

In online publication systems, advertisements or related content may be displayed in a particular area of the user interface to promote sales of related products/services. The resulting sales of related products/services can be increased if the displayed advertisements or related content are particularly suited to the user viewing the ads. For example, advertisements may be displayed that relate to content previously searched by a particular user. However, there may be millions of advertisements or related content from which to choose and millions of users to whom the advertisements or related content must be served. It is important to efficiently and quickly determine which advertisements or related content are served to a particular user. But, it is also important to efficiently and quickly determine the appropriate users to whom appropriate advertisements should be shown. It is also important to provide a highly efficient web service for handling these dynamic and high volume service requests.





BRIEF DESCRIPTION OF THE DRAWINGS

Some embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings in which:



FIG. 1 is a diagrammatic representation of a portion of a user interface according to an example embodiment.



FIG. 2 is a diagrammatic representation of a portion of a user interface according to another example embodiment.



FIG. 3 is a block diagram of a placement system according to an example embodiment.



FIG. 4 is a flowchart of a process according to an example embodiment.



FIGS. 5A and 5B illustrate state diagrams correlating the displaying of secondary content with purchase actions in an example embodiment.



FIG. 6 is a flowchart of a process according to an example embodiment.



FIG. 7 is a network diagram depicting a client-server system, within which one example embodiment may be deployed.



FIG. 8 is a diagrammatic representation of machine in the example form of a computer system within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.



FIG. 9 illustrates an example embodiment of a general flow in a Dynamic Table Server (e.g., a DynServer).



FIG. 10 illustrates a relationship between the SymbolMgr and SymbolValues in an example embodiment.



FIG. 11 illustrates the structure of the DynTables database in an example embodiment.



FIG. 12 illustrates the node hierarchy in an example embodiment.



FIG. 13 illustrates the data handler hierarchy in an example embodiment.



FIG. 14 illustrates a DynTable Schema example in an example embodiment.



FIG. 15 illustrates a Keyword Extractor (KWE) Model example in an example embodiment.



FIG. 16 is a processing flow diagram illustrating an example embodiment of a system and method providing a dynamic table framework for managing data in a high performance web service as described herein.





DETAILED DESCRIPTION

Example systems and methods provide a dynamic table framework for managing data in a high performance web service. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of example embodiments. It will be evident, however, to one of ordinary skill in the art that the various embodiments may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques have not been shown in detail.


As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Additionally, although various example embodiments discussed below focus on a network-based publication environment, the embodiments are given merely for clarity in disclosure. Thus, any type of electronic publication, electronic commerce, or electronic business system and method, including various system architectures, may employ various embodiments of the system and method described herein and is considered as being within a scope of example embodiments. Each of a variety of example embodiments is discussed in detail below.


In a publication system, a graphical user interface (GUI) may be divided into one or more portions for different types of content. For example, a GUI may include a portion for receiving an input from a user such as a form or a search query box. The GUI may also include one or more content fields. Some GUIs may include a primary content field that includes content of particular interest to the user, for example, an article, a description of an item for sale, a map, or the like. Other GUIs, such as a search results page or a landing page, may not have a primary content field. Regardless of whether a particular GUI has a primary content field, the GUI may comprise slots placed at designated positions within the GUI populated with secondary content such as advertisements (ads), recommended content, related content, or the like. A slot is a predefined area of a GUI at a predetermined position. For example, the size of a particular slot may be defined based on a percentage of the area displayed to a user or be set in pixels. The GUI may be partitioned to include slots using a technology such as frames in hypertext mark-up language (HTML), HTML Tables, Java Script, HTML <div> tags, and the like.


It may be desirable to place certain secondary content based on a predicted revenue yield generated by a particular item of secondary content. The “revenue yield” of an item of secondary content displayed in a particular slot is defined as an anticipated revenue to be derived from the user's interaction with the item of secondary content when it is placed in the particular slot. Examples of interactions include the user following a link in the secondary content (e.g., redirects) and sales resulting from merchandising items described by the primary content (e.g., conversions). For example, in a magazine site, it may be desirable to place a popular article in a slot at a top of the GUI. In an online marketplace, it may be desirable to place popular accessories related to an item for sale near an option to purchase the item. In some instances, a revenue yield may be predicted for each available item of secondary content. The revenue yield may be used to calculate an improved way to populate the slots in an interface with the secondary content. In some instances, the population of the slots is “optimized” using a matrix of the revenue yields for each available item of secondary content. The selection of the best collection of secondary content for a particular user based on user level incremental revenue and conversion prediction is described in more detail below.



FIG. 1 is a diagrammatic representation of a portion of a GUI 100 according to an example embodiment. The GUI 100 includes a search box 102, and a set of slots 104. The twenty-four slots of the GUI 100 are respectively labeled A1-A3 to H1-H3. Each slot may be available or unavailable for placing secondary content and the GUI 100 may contain a mixture of available and unavailable slots. An unavailable slot is a slot that is populated with secondary content separately from the available slots which can be populated based on revenue yield. For example, a certain slot may be unavailable if it is designated as a “paid” slot that is to be populated with a paid advertisement. The unavailable slots may be independently populated according to a revenue yield. For example, the slots A1, A2, and A3 in the top row of the set of slots 104 may be sold to advertisers.


In an online marketplace, the search box 102 may be used to receive a query from a user for descriptions of items for sale. To illustrate, a user may enter, “music player.” The search results may include listings describing items for sale such as an IPOD music player, a ZUNE music player, and a WALKMAN portable cassette player. To present the search results to the user, the listings (or links to the listings) may be used to populate at least a portion of the slots A1 to H3.



FIG. 2 is a diagrammatic representation of a portion of a GUI 200 according to another example embodiment. The GUI 200 may include a search box 102 and a primary content field 202. The GUI 200 also includes two sets of slots, set A 204 and set B 208. Set A 204 includes a row of slots, A1-A3, along a bottom of the GUI 200. Set A, as depicted, includes three slots that can be populated with secondary content. In some embodiments, a user may be able to cause the secondary content in the slots A1, A2, and A3 to scroll by selecting scroll buttons 206 on either side of the set A 204. The GUI 200 further includes set B 208, which includes four slots, B1, B2, B3, and B4, positioned vertically along the right side of the GUI 200. In some instances, the secondary content to be displayed in GUI 200 may be independently determined for set A 204 and set B 208. For example, set A 204 may be designated for secondary content related to the primary content in the field 202 and set B 208 may be designated for paid content.


In an online marketplace, the primary content field 202 may include a product description or item listing that has been selected by the user from the GUI 100. Set A 204 may be populated with links to descriptions of related products (e.g., music player cases, earphones, batteries, and chargers). Set B 208 may be populated with links to descriptions of other items for sale based, for example, on user search history.


While example GUIs 100 and 200 are depicted in FIGS. 1 and 2, respectively, it is understood that alternative embodiments may comprise any combination of one or more primary content fields, secondary content fields, and user input fields (e.g., forms and search query boxes).



FIG. 3 is a block diagram of a placement system 300 according to an example embodiment. In one embodiment, the placement system 300 may be implemented by way of one or more software modules that include non-transitory instructions embodied on a computer-readable storage medium. In alternative embodiments, the placement system 300 may comprise hardware-based or processor-implemented modules. The placement system 300 is configured to place secondary content in slots within a GUI. In some embodiments, the placement system 300 places secondary content in available slots, but not in unavailable slots.


In response to a request for secondary content, a relevancy module 302 is configured to identify a set of secondary content to be used to populate the slots. The request for secondary content may be in the form of, for example, a search query received from a client device of a user, a server call for primary content, a selection received from a client device of a user to provide certain primary content, or the like. The request may include a request for a certain number of items of secondary content that, in turn, may or may not be included in a GUI.


In some instances, the request includes the number of items of secondary content to be placed and positions of the slots in the GUI. The request may include a GUI identifier that indicates a format of the GUI to be generated. The GUI identifier may be received from the system providing primary content, a search engine, or the like. Examples of formats are depicted in FIGS. 1 and 2.


In some embodiments, particularly in online marketplace environments, the relevancy module 302 may include, or have access to, search capabilities to refine the available secondary content to those deemed most relevant to the user or to users who request to view a certain item of primary content. To illustrate, a user may submit a search query for primary content. The secondary content may be content that is determined to be related to the results of the primary content. The relevancy module 302 therefore determines similar or corresponding categories of content that are related to the primary content. In some embodiments, the relationship of the secondary content to the primary content may be based on user preferences, past histories, user searches, user purchases, etc. However, the relevancy module 302 should also take into consideration the actions and behaviors of other users. For example, if a majority (or high percentage) of users who enter the same search terms for the primary content eventually purchase an accessory related to the primary content, that accessory (and an item of secondary content related thereto) can be identified by the relevancy module 302 and weighted as more relevant. In some instances, a collection of secondary content may be selected from a much larger set of secondary content based on user search or purchase history, preferences, or social network data about the user, using algorithms such as collaborative filtering and machine learning.


Upon identification of the selected set of secondary content, the yield module 304 is configured to calculate a predicted revenue value associated with the respective items of secondary content. In some instances, the number of items of secondary content may be limited to a pre-defined number. In some instances, the revenue yield is calculated as a time-series estimation or a moving average of a number of factors associated with the item of secondary content. The revenue yield may be a value between 0 and 1. The factors may include a relevancy weight used to determine the relevancy of the item of secondary content by the relevancy module 302, revenue generated by the website based on traffic to the secondary content (e.g., for paid advertisements), click-through probability, popularity (e.g., most e-mailed, most blogged, most watched), etc.


Specifically, in an online marketplace, the revenue yield may be calculated based on factors such as user search history; revenue generated by the online marketplace upon sale of a particular item; click-through history of the item description; whether the user has previously purchased, bid on, or watched particular items; time remaining to purchase or bid on an item described in a listing; and a number of items remaining for sale. The revenue yield may be calculated using a weighted average, a normalization factor, or the like. For example, a sample embodiment uses a formula such as:





revenue yield = 0.20*(clickthrough probability) + 0.40*(price of item) + 0.10*(quality of item) + 0.5*(relevancy of item)


to calculate the revenue yield of a particular item of secondary content. The effect of placing an item of secondary content at a first slot versus at a second slot may be calculated using a second formula or be incorporated into a variable in the above equation, such as “clickthrough probability.”
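
As a small numerical sketch of the weighted sum above, the hypothetical helper below simply applies the example weights to one item of secondary content; the function and parameter names (revenueYield, clickThroughProbability, and so on) are illustrative and not part of the disclosed system.

    #include <iostream>

    // Hypothetical illustration of the sample formula above; the weights
    // (0.20, 0.40, 0.10, 0.5) are taken verbatim from the example.
    double revenueYield(double clickThroughProbability, double price,
                        double quality, double relevancy) {
        return 0.20 * clickThroughProbability + 0.40 * price
             + 0.10 * quality + 0.5 * relevancy;
    }

    int main() {
        // One candidate item with factors normalized to [0, 1].
        std::cout << revenueYield(0.3, 0.8, 0.9, 0.6) << std::endl;  // prints 0.77
        return 0;
    }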


In some embodiments, the revenue yield is calculated for each item of secondary content based on each particular slot. For example, using moving averages, it may be determined that secondary content A may have a revenue yield of 0.95 if placed in slot A1 but a revenue yield of 0.25 if placed in slot B1.


When each item of secondary content is associated with a corresponding revenue yield value, the evaluation module 306 calculates where each item of secondary content should be placed in the slots in the GUI to be presented to the user. In some instances, this may be performed separately for the content to be placed in available slots and in unavailable slots. The evaluation module 306 may first discard items of secondary content associated with a revenue yield that does not meet or exceed a predetermined threshold. The threshold may be determined empirically.


In various embodiments, the respective revenue yields are used to populate a matrix where each row is assigned a particular item of secondary content and each column is assigned to a particular slot. The values within the matrix represent an anticipated revenue yield if that particular item of secondary content is used to populate that particular slot.


In one embodiment, to calculate the matrix values, the revenue yields associated with the items of secondary content may be multiplied by a multiplier associated with that particular slot. The multiplier may be a positive value between zero and one. For example, a left-most slot (being most likely to be selected by a user based on its location) may be associated with a multiplier of 1.0 while a right-most slot may be associated with a multiplier closer to zero, such as 0.1.


The evaluation module 306 may then perform a combinatorial optimization algorithm, such as the Hungarian algorithm, on the matrix and/or revenue yields calculated. Other optimization calculations may, additionally or alternatively, be structured as dynamic programming problems.
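
As a rough sketch of the matrix construction described above, the hypothetical code below multiplies each item's revenue yield by a per-slot multiplier and then assigns items to slots greedily. The greedy pass is only a stand-in for the Hungarian algorithm named in the text, and all values and names here are illustrative assumptions.

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main() {
        // Hypothetical inputs: per-item revenue yields and per-slot multipliers
        // (left-most slot most valuable, as in the example above).
        std::vector<double> yields = {0.95, 0.60, 0.25};
        std::vector<double> slotMultipliers = {1.0, 0.5, 0.1};

        // Matrix: rows = items of secondary content, columns = slots.
        std::vector<std::vector<double>> matrix(yields.size(),
            std::vector<double>(slotMultipliers.size()));
        for (std::size_t i = 0; i < yields.size(); ++i)
            for (std::size_t j = 0; j < slotMultipliers.size(); ++j)
                matrix[i][j] = yields[i] * slotMultipliers[j];

        // Greedy assignment (a simple stand-in for the Hungarian algorithm):
        // repeatedly take the best remaining (item, slot) pair.
        std::vector<bool> itemUsed(yields.size()), slotUsed(slotMultipliers.size());
        for (std::size_t n = 0; n < std::min(yields.size(), slotMultipliers.size()); ++n) {
            std::size_t bestI = 0, bestJ = 0;
            double best = -1.0;
            for (std::size_t i = 0; i < yields.size(); ++i)
                for (std::size_t j = 0; j < slotMultipliers.size(); ++j)
                    if (!itemUsed[i] && !slotUsed[j] && matrix[i][j] > best) {
                        best = matrix[i][j]; bestI = i; bestJ = j;
                    }
            itemUsed[bestI] = slotUsed[bestJ] = true;
            std::cout << "item " << bestI << " -> slot " << bestJ
                      << " (yield " << best << ")\n";
        }
        return 0;
    }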


Based on the results calculated by the evaluation module 306, a presentation module 308 generates a GUI having available slots populated with the secondary content. The presentation module 308 may generate HTML instructions to send to a user device for displaying the secondary content in the respective slots. It is noted that, depending upon the secondary content identified by the relevancy module 302 and the revenue yields calculated by the evaluation module 306, two separate users may not have access to the same secondary content even if they are viewing the same primary content.



FIG. 4 is a flowchart of a process 400 to place listings according to an example embodiment. The process 400 may be performed by the placement system 300.


In an operation 402, the relevant items of secondary content are identified. The relevant items of secondary content may include, for example, advertisements, content related to primary content to be displayed to the user, related search results, listings describing items for sale, and user reviews or comments related to the primary content.


In an operation 404, the revenue yield for each item of secondary content is determined. In some instances, the revenue yield is calculated independent of an anticipated placement. In other instances, the revenue yield is determined as a function of its anticipated placement.


In an operation 406, the revenue yields are analyzed using a combinatorial optimization technique to determine how to collectively place the secondary content for a potential maximum yield.


In an operation 408, instructions for generating a GUI are generated. The instructions are generated by the presentation module 308 and transmitted to a client device of the requesting user. The instructions indicate placement of the respective items of secondary content in the available slots included in the GUI based on the analysis of operation 406.


User Level Incremental Revenue and Conversion Prediction

For display advertising, especially for real time bidding, if we can predict how much incremental revenue a user is going to bring into a particular e-commerce site, we can decide how much we would like to pay for each impression shown to the user. Note that high revenue may not necessarily imply high incremental revenue, as some active users will visit a particular e-commerce site anyway, whether they see secondary content on the site or not, while some other inactive users do not visit the particular e-commerce site, even if they see a lot of secondary content. The past purchasing or transaction (conversion) history of a particular user can be used to determine a likelihood that the particular user will or will not be affected by viewing secondary content. The past history of presenting secondary content to the user can also be used. The prediction model of an example embodiment provides support for solving this issue.


As mentioned above, for many e-commerce systems, some active users will visit a particular website to buy goods or services whether or not they are shown secondary content. Other inactive users will visit the particular website and not make a purchase no matter how many times they are shown secondary content. It is a waste of funds to show secondary content to a user who will not make a purchase (e.g., convert). It is also a waste of funds to show secondary content to a user who will make a purchase regardless. As described in more detail below, the user level incremental revenue and conversion prediction model of an example embodiment provides support to identify which users are likely to be affected by viewing secondary content and convert on the site, thereby bringing in incremental revenue.


The user level incremental revenue and conversion prediction model of an example embodiment provides a system and method to predict: 1) if a user is not originally likely to convert on the e-commerce site after viewing secondary content, how likely is it that the user can be affected into becoming a purchaser, and 2) if a user is originally likely to convert on the e-commerce site, how likely is it that the user can be affected into purchasing more than the user would have purchased without viewing the secondary content.


Referring now to FIG. 5A, four prediction models are provided in an example embodiment to predict any of the following conditions:

    • a. If a user is not shown secondary content, how likely is it that the user will convert—denoted as P(control).
    • b. If a user is not shown secondary content and the user is likely to convert, how much is the user likely to buy on the e-commerce site—denoted as G(control).
    • c. If a user is shown secondary content, how likely is it that the user will convert—denoted as P(test).
    • d. If a user is shown secondary content and the user is likely to convert, how much is the user likely to buy on the e-commerce site—denoted as G(test).


Having defined the conditions of interest and the mechanisms for metering the conditions, we can predict the incremental revenue as follows:






P(test)*G(test) − P(control)*G(control), up to the take rate.


Once the incremental revenue is determined for each user by use of the prediction models described above, we can decide how much we are willing to pay for each impression shown to the user. If the user's predicted incremental revenue is more than the cost of the impression to be shown to the user, we could pay a pre-determined amount, in the real time bidding, to maximize the incremental revenue for the e-commerce site.
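
The expected incremental revenue above, P(test)*G(test) − P(control)*G(control), can be sketched as a simple bid decision. The function name, threshold, and values below are hypothetical illustrations, not part of the disclosed system.

    #include <iostream>

    // Hypothetical sketch: expected incremental revenue for one user, up to the
    // take rate, following P(test)*G(test) - P(control)*G(control).
    double incrementalRevenue(double pTest, double gTest,
                              double pControl, double gControl, double takeRate) {
        return takeRate * (pTest * gTest - pControl * gControl);
    }

    int main() {
        double expected = incrementalRevenue(0.04, 50.0, 0.03, 40.0, 0.10);  // 0.08
        double impressionCost = 0.05;
        // Bid only when the predicted incremental revenue exceeds the cost of
        // showing the impression to this user.
        std::cout << (expected > impressionCost ? "bid" : "skip")
                  << " (expected " << expected << ")\n";
        return 0;
    }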



FIG. 5B illustrates a state diagram correlating the displaying of secondary content with purchase actions in an example embodiment. As shown in FIG. 5B, one purpose of the embodiments described herein is to separate States A and D from States B and C. This can be achieved by a classification or prediction model as described above. Once we can separate States A and D from States B and C, we can use a classification model to rank the conversion probability for a particular user. In short, we can use two classification models; one is to differentiate the diagonal and non-diagonal conditions as shown in FIG. 5B. The other classification model is to differentiate the horizontal conditions as shown in FIG. 5B. The final score for each user will be the multiplication of the results of the two classification models. This final score can be used to adjust the amount of funds bid for impressions to be shown to the particular user. As a result, the user's predicted incremental revenue can be correlated to the secondary content shown to the user.



FIG. 6 is a processing flow diagram illustrating an example embodiment of a system and method for user level incremental revenue and conversion prediction for internet marketing display advertising as described herein. The method of an example embodiment includes: identifying a plurality of items of secondary content for display to a particular user on an e-commerce site (processing block 1010); calculating, using one or more processors, a predicted incremental revenue value for the particular user, the predicted incremental revenue value being based in part on a likelihood that the particular user will convert if the particular user is not shown secondary content, a likelihood that the particular user will convert if the particular user is shown secondary content, how much the particular user is likely to buy on the e-commerce site if the particular user is not shown secondary content, and how much the particular user is likely to buy on the e-commerce site if the particular user is shown secondary content (processing block 1020); using the predicted incremental revenue value for the particular user to rank a conversion probability for the particular user (processing block 1030); and generating instructions to place one or more of the plurality of items of secondary content in slots of a graphical user interface (GUI) based on the predicted incremental revenue value and conversion probability for the particular user (processing block 1040).



FIG. 7 is a network diagram depicting a client-server system 500, within which one example embodiment may be deployed. A networked system 502, in the example forms of a network-based marketplace or publication system, provides server-side functionality, via a network 504 (e.g., the Internet or Wide Area Network (WAN)) to one or more clients. FIG. 7 illustrates, for example, a web client 506 (e.g., a browser, such as the Internet Explorer browser developed by Microsoft Corporation of Redmond, Wash. State), and a programmatic client 508 executing on respective client machines 510 and 512. The client machine 510 may be a client device of a user submitting the primary content request. In response, a browser of the client machine 510 may generate the GUI shown in FIGS. 1 and 2 based on the instructions received from the presentation module 308.


An Application Program Interface (API) server 514 and a web server 516 are coupled to, and provide programmatic and web interfaces respectively to, one or more application servers 518. The application servers 518 host one or more publication applications 520 and payment applications 522. The application servers 518 are, in turn, shown to be coupled to one or more database servers 524 that facilitate access to one or more databases 526.


The publication applications 520 may provide a number of publication functions and services to users that access the networked system 502. In example embodiments, the publication applications 520 encompass the placement system 300. The payment applications 522 may likewise provide a number of payment services and functions to users. The payment applications 522 may allow users to accumulate value (e.g., in a commercial currency, such as the U.S. dollar, or a proprietary currency, such as “points”) in accounts, and then later to redeem the accumulated value for products (e.g., goods or services) that are made available via the publication applications 520. While the publication and payment applications 520 and 522 are shown in FIG. 7 to both form part of the networked system 502, it will be appreciated that, in alternative embodiments, the payment applications 522 may form part of a payment service that is separate and distinct from the networked system 502. The placement system 300 may be included in the publication applications 520.


Further, while the system 500 shown in FIG. 7 employs a client-server architecture, the various embodiments are of course not limited to such an architecture, and could equally well find application in a distributed, or peer-to-peer, architecture system, for example. The various publication and payment applications 520 and 522 could also be implemented as standalone software programs, which do not necessarily have networking capabilities.


The web client 506 accesses the various publication and payment applications 520 and 522 via the web interface supported by the web server 516. Similarly, the programmatic client 508 accesses the various services and functions provided by the publication and payment applications 520 and 522 via the programmatic interface provided by the API server 514. The programmatic client 508 may, for example, be a seller application (e.g., the TurboLister application developed by eBay of San Jose, Calif.) to enable sellers to author and manage listings on the networked system 502 in an off-line manner, and to perform batch-mode communications between the programmatic client 508 and the networked system 502.



FIG. 7 also illustrates a third party application 528, executing on a third party server machine 530, as having programmatic access to the networked system 502 via the programmatic interface provided by the API server 514. For example, the third party application 528 may, utilizing information retrieved from the networked system 502, support one or more features or functions on a website hosted by the third party. The third party website may, for example, provide one or more promotional, marketplace, or payment functions that are supported by the relevant applications of the networked system 502. In one embodiment, the third party server machine 530 may provide the paid advertisement that is used to populate the unavailable slots.


Additionally, certain embodiments described herein may be implemented as logic or a number of modules, engines, components, or mechanisms. A module, engine, logic, component, or mechanism (collectively referred to as a “module”) may be a tangible unit capable of performing certain operations and configured or arranged in a certain manner. In certain example embodiments, one or more computer systems (e.g., a standalone, client, or server computer system) or one or more components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) or firmware (note that software and firmware can generally be used interchangeably herein as is known by a skilled artisan) as a module that operates to perform certain operations described herein.


In various embodiments, a module may be implemented mechanically or electronically. For example, a module may comprise dedicated circuitry or logic that is permanently configured (e.g., within a special-purpose processor, application specific integrated circuit (ASIC), or array) to perform certain operations. A module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software or firmware to perform certain operations. It will be appreciated that a decision to implement a module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by, for example, cost, time, energy-usage, and package size considerations.


Accordingly, the term “module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which modules or components are temporarily configured (e.g., programmed), each of the modules or components need not be configured or instantiated at any one instance in time. For example, where the modules or components comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different modules at different times. Software may accordingly configure the processor to constitute a particular module at one instance of time and to constitute a different module at a different instance of time.


Modules can provide information to, and receive information from, other modules. Accordingly, the described modules may be regarded as being communicatively coupled. Where multiples of such modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the modules. In embodiments in which multiple modules are configured or instantiated at different times, communications between such modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple modules have access. For example, one module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further module may then, at a later time, access the memory device to retrieve and process the stored output. Modules may also initiate communications with input or output devices and can operate on a resource (e.g., a collection of information).



FIG. 8 shows a diagrammatic representation of a machine in the example form of a computer system 600 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a server computer, a client computer, a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.


The example computer system 600 includes a processor 602 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 604, and a static memory 606, which communicate with each other via a bus 608. The computer system 600 may further include a video display unit 610 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 600 also includes an alphanumeric input device 612 (e.g., a keyboard), a cursor control device 614 (e.g., a mouse), a disk drive unit 616, a signal generation device 618 (e.g., a speaker), and a network interface device 620. Some embodiments may include a touchscreen (not shown).


The disk drive unit 616 includes a machine-readable medium 622 on which is stored one or more sets of instructions (e.g., software 624) embodying any one or more of the methodologies or functions described herein. The software 624 may also reside, completely or at least partially, within the main memory 604 and/or within the processor 602 during execution thereof by the computer system 600, the main memory 604 and the processor 602 also constituting machine-readable media. The software 624 may further be transmitted or received over a network 626 via the network interface device 620.


While the machine-readable medium 622 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the various embodiments. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals. Specific examples of machine-readable storage media include non-volatile memory, including by way of example semiconductor memory devices (e.g., Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and flash memory devices); magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. In one embodiment, the machine-readable medium is a non-transitory machine-readable storage medium.


The instructions 624 may further be transmitted or received over a communications network 626 using a transmission medium via the network interface device 620 and utilizing any one of a number of well-known transfer protocols (e.g., HTTP). Examples of communication networks include a local area network (LAN), a wide area network (WAN), the Internet, mobile telephone networks, POTS networks, and wireless data networks (e.g., WiFi and WiMax networks). The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.


Dynamic Table Framework

In an example embodiment, the system and method described herein includes a solution for managing data in a high performance web service. This system manages tables loaded into random access memory (RAM) with more or less eventual consistency. Eventual consistency is one of the consistency models used in the domain of parallel programming, for example in distributed shared memory, distributed transactions, and optimistic replication. It means that given a sufficiently long period of time over which no changes are sent, all updates can be expected to propagate eventually through the system and all the replicas will be consistent. The persistent state of the tables is managed in files read by the service at start up. The tables are not simple key-value pairs. Instead, the system manages multiple fields for both key and value and offers foreign keys into other tables for joins. Operations on the table records and input data are done via a structured language built using Extensible Markup Language (XML) for structure, with internal expressions that follow a C++-type format on the fields by their names and types. Our use of the system/framework is to power a number of diverse web services, which evaluate user input using models and return some kind of response, which could be a listing of items and properties or binary data such as an image.


In one embodiment in a high-performance, real-time Advertising or Listing service, we have a need to find items by keyword that are relevant for an ad. Initially, all of our tables were hand-coded and we could only make models that used the data at a coarse, hard-wired level, say by passing in the name of a function and hard-coding it to operate on fixed fields in the hand-coded tables. When our schema changed or when users making models wanted to use new functions on different fields, it took more hand-coding to update the tables and add new specific functions.


We improved the efficiency of the system by implementing a system and method providing a dynamic table framework for managing data in a high performance web service. The dynamic table framework provides the following features:

    • a. external specification of field names and types for multiple tables.
    • b. systems to upload records for specific tables as well as to dump (output) them.
    • c. consistent values for table records used in modeling.
    • d. flexible modeling using the record(s) involved with the calculation, with the ability to write complex functions and code that use the named fields with type restrictions enforced (e.g., a string cannot be added to a number).
    • e. high performance in an MT (multi-threaded) environment, which essentially means no allocating.
    • f. a variety of record types, including those which manage their allocation through a variety of means, such as custom C++ allocators, as well as record types which hold references to cached joined records.
    • g. reusable components to parse/verify/update/view models. In one embodiment, the models are built out of symbol managers (SymbolMgrs), each one for a specific task of the model. A model might have a symbol manager for adding records to consideration, a symbol manager for filtering results, or a symbol manager for a program with nested-if clauses or loops.
    • h. reusable components to manage model selection for any given user input, including parsing, execution, and verification.
    • i. reusable components to parse/view/update items.


The dynamic table framework system provides high performance. Our system returned in 2 ms on average as opposed to 40-50 ms or more with the old system, largely due to new efficiencies in managing allocation for the MT process. When we compare the performance of our modeling language against embedded perl or embedded python, which are often suggested when discussing the system, we can get 8 times or more the performance, even when comparing to perl or python running natively. For flexibility of modeling, once we added this framework, the modelers no longer needed hard-coded adjustments. They could code what they wanted in the modeling language provided. If new fields were needed, we could just update the schema and restart. Flexibility-wise, we have succeeded given our re-use of the framework for a wide range of services, including a display ad service, landing page optimization service, keyword extraction service, item update listening service, keyword id service, and a straightforward image caching service.


Referring to FIG. 9, an example embodiment illustrates a general flow in a Dynamic Table Server (e.g., a DynServer). It will be apparent to those of ordinary skill in the art that the general flow can vary from server to server.


As shown in FIG. 9, SymbolValues are a collection of symbol-value pairs the SymbolMgr can use (a minimal sketch in code follows the list below). Types of SymbolValues can include:

    • a. Named doubles, referred to as val(x)
    • b. Named strings, referred to as str(country)
    • c. Named numeric functions, such as exp(val(x)) or min(val(x), val(y))
    • d. Named string functions, such as $to_lower(“Hi ThErE”)
    • e. Named numeric DynTable fields, such as my_table.one
    • f. Named string DynTable fields, such as $my_table.some_string
    • g. Named numeric array DynTable fields, such as @my_table.evens
    • h. Named string array DynTable fields, such as $@my_table.string_vals
    • i. SymbolValues don't include constants or params, which are used directly by the SymbolMgrs, for example:
      • “hi there” in the expression: “hi there”==$to_lower(“Hi ThErE”)
      • the numbers in the expression: 6<3.14*2
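
As referenced above, a minimal sketch of how SymbolValues might hold per-request scratch values by slot; the class layout here is an assumption for illustration, not the actual framework code.

    #include <cstddef>
    #include <string>
    #include <vector>

    // Hypothetical per-runtime scratch space: named doubles and strings are
    // resolved to slot indices once (when the SymbolMgr parses the model), then
    // read and written by offset with no allocation on the query path.
    struct SymbolValues {
        std::vector<double> doubles;       // val(x), val(y), ...
        std::vector<std::string> strings;  // str(country), ...

        double& val(std::size_t slot) { return doubles[slot]; }
        std::string& str(std::size_t slot) { return strings[slot]; }
    };

    int main() {
        SymbolValues sv;
        sv.doubles.resize(4);
        sv.strings.resize(2);
        sv.val(0) = 3.14;   // e.g. slot 0 was assigned to val(x)
        sv.str(0) = "US";   // e.g. slot 0 was assigned to str(country)
        return 0;
    }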


Referring to FIG. 10, a relationship between the SymbolMgr and SymbolValues in an example embodiment is illustrated.


In an example embodiment, SymbolMgr and SymbolValue creation can be described as follows:

    • a. SymbolMgrs = text expressions + a sample SymbolValues.
    • b. Boost Spirit Parser:
      • i. object-oriented recursive-descent parser generator framework.
      • ii. approximates the syntax of Extended Backus-Naur Form (EBNF).
      • iii. We are using what is now Spirit classic. In particular, we use a classic parse tree method.
    • c. Generate a parse tree for each expression
      • i. validates each expression, including types and function signatures.
      • ii. parsing done once for each expression, used many times. Not time critical.
      • iii. convert to our Node class hierarchy to use for repeated evaluation.
    • d. Xerces SAX2 parser
      • i. scaffolding around the expressions. Builds nested-if/assignment structures.
      • ii. Validates overall model structure.


Referring to FIG. 11, a diagram shows the structure of the DynTables database in an example embodiment.


In an example embodiment, Expression Types can be described as follows:

    • a. DbleEval: evaluate returns double
      • i. exp(val(x)+4)*fun1(val(y))
    • b. LogicEval: evaluate returns bool
      • i. val(x)==3 and not val(y)<7
    • c. Assignment: evaluate returns void
      • i. val(x)=3
      • ii. sort_order(petr()*qs())


Referring to FIG. 12, a diagram shows the node hierarchy in an example embodiment.


In an example embodiment, a sample grammar can be described as follows:

    • a. direct numeric value, e.g. val(x):
      • symbol = inner_node_d["val(" >> token_node_d[+(alnum_p | ch_p('_'))] >> ')'];
    • b. For a numeric table field, e.g. my_table.num_field1:
      • table_num_field = token_node_d[+(alnum_p | ch_p('_')) >> ch_p('.') >> +(alnum_p | ch_p('_'))];
    • c. a function, e.g. min(val(x), val(y)):









      function = token_node_d[+(alnum_p | ch_p('_'))]
          >> no_node_d[ch_p('(')]
          >> *(fun_arg)
          >> *(no_node_d[ch_p(',')] >> fun_arg)
          >> no_node_d[ch_p(')')];

      fun_arg = expression
          | str_val
          | table_num_array_field
          | table_str_array_field;







Referring to FIG. 13, a diagram shows the data handler hierarchy in an example embodiment.


Referring to FIG. 14, a diagram shows a DynTable Schema example in an example embodiment.


Referring to FIG. 15, a diagram shows a Keyword Extractor (KWE) Model example in an example embodiment.


In an example embodiment, a general flow of the DynServer running a web service can be described as follows:


1) Referring again to FIG. 9, a general flow of a DynServer running in a web service is shown. We take our query from the URL and pull the parameters into a dynrecord (the fields can be detected automatically by the param names). We pull it into a dynrecord because we can then use the values in our expressions as well as find any related fields, looked up by foreign keys into other tables.


Then we get a runtime. This holds all materials which we would otherwise have to allocate for this query, in particular the SymbolValues for the various SymbolMgrs. Note that this is not dynamic. Each standard service knows which sort of SymbolMgrs it will use, so we can put the needed SymbolValues there in our code. The runtime also has the CDynRecordSet, which will hold the shared pointers to all records related to this query. Note that these shared pointers are almost always read only in the query. If not, then the specific DynRecord has to manage its own locking. Models can have as many SymbolMgrs as they need, say one for applying a filter and another for calculating a result.


Then we choose a model using a model selection. It is a two-layer deep if-else check. The outer layer has an expression based on the input (say $query.site=="US", where the URL had a param "&site=US"). That picks a group of rules. We use the first group of rules which has an if-clause which evaluates to true. We then have an inner set of rules with their own if-clauses. We take the first of these which is true. That has a model group, which is a set of model ids, each with a "part". The part is essentially the chance of taking the model: in effect, we put a marble in the bag for each part for that model id and then pull a model id from the bag.
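
The "part" weighting described above can be sketched as weighted random selection over a model group. The structures and names below are hypothetical; they only illustrate the marble-in-the-bag idea of drawing a model id in proportion to its part.

    #include <random>
    #include <string>
    #include <vector>

    // Hypothetical model group entry: a model id with a "part" (its weight).
    struct ModelPart {
        std::string modelId;
        int part;
    };

    // Draw one model id with probability proportional to its part, i.e. put
    // "part" marbles in the bag for each model and pull one out.
    std::string pickModel(const std::vector<ModelPart>& group, std::mt19937& rng) {
        int total = 0;
        for (const auto& m : group) total += m.part;
        std::uniform_int_distribution<int> dist(0, total - 1);
        int marble = dist(rng);
        for (const auto& m : group) {
            if (marble < m.part) return m.modelId;
            marble -= m.part;
        }
        return group.back().modelId;  // not reached when all parts are positive
    }

    int main() {
        std::mt19937 rng(std::random_device{}());
        std::vector<ModelPart> group = {{"model_a", 3}, {"model_b", 1}};
        pickModel(group, rng);  // "model_a" roughly three times out of four
        return 0;
    }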


The two layers are generally enough, but we can also use a NestedCalculationStep to go as deep as we need with nested if-clauses if we need more depth in picking the model.


Once we have the model, we evaluate each SymbolMgr in it as needed. Each one generally has a specific task, such as filtering results or calculating results. Once done with that we return the results.


2) Referring again to FIG. 10, SymbolValues are shown. This is the scratch space for evaluating expressions in the models. Each symbol manager potentially has different symbols, in particular because they have different tables included in them. The runtime operates the shared global SymbolMgr to change and read the values in the SymbolValues. Essentially, the runtime has the offsets into its tables and other values to find and set each value.


3) Referring still to FIG. 10, a diagram of symbol values shows how the offsets work. Say x is the first double and y is the third double in the SymbolValues. The global SymbolMgr can then access them, in particular to set the x in this runtime without interfering with other threads. Note the table is there as well. For example, my_table would be known by its table index, so we would know which slot it is in for the CDynRecordSet, which is just a vector of related records by table index. Only the tables pulled into this SymbolMgr would be allowed in expressions for this SymbolMgr.


4) Referring still to FIG. 10, the relationship between the SymbolMgr and SymbolValues in an example embodiment is illustrated.


5) Referring to FIG. 11, a diagram shows the structure of the DynTables database in an example embodiment. DynTables: A TableSchema essentially keeps track of which fields are in a table as well as their type. It awards each field a slot so we know whether it is the first, second, third, or whatever double or float. DynRecord is the abstract record class for this table. It knows how to access or set its various doubles, floats, ints, strings, and arrays by this slot. DynRecordImp actually has a buffer of memory that holds the real values. This comes with memory offsets to inform the DynRecord slot where to put or get the value. Various classes derived from DynRecordImp can use different methods for offsetting or for allocation.
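
As a rough illustration of the slot and offset idea described above (the actual DynRecord classes are more elaborate), the hypothetical sketch below lets a schema hand out slots for double fields and lets a record read and write values in a flat buffer by those slots.

    #include <cassert>
    #include <cstddef>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Hypothetical sketch of a schema that assigns each double field a slot,
    // and a record that stores its values in one buffer indexed by slot.
    class TableSchema {
        std::unordered_map<std::string, std::size_t> doubleSlots_;
    public:
        std::size_t addDoubleField(const std::string& name) {
            std::size_t slot = doubleSlots_.size();
            doubleSlots_[name] = slot;
            return slot;
        }
        std::size_t doubleSlot(const std::string& name) const { return doubleSlots_.at(name); }
        std::size_t numDoubles() const { return doubleSlots_.size(); }
    };

    class DynRecordImp {
        std::vector<double> doubles_;  // flat buffer of real values
    public:
        explicit DynRecordImp(const TableSchema& schema)
            : doubles_(schema.numDoubles()) {}
        void setDouble(std::size_t slot, double v) { doubles_[slot] = v; }
        double getDouble(std::size_t slot) const { return doubles_[slot]; }
    };

    int main() {
        TableSchema schema;
        schema.addDoubleField("value2");
        DynRecordImp rec(schema);
        rec.setDouble(schema.doubleSlot("value2"), 2500.0);
        assert(rec.getDouble(schema.doubleSlot("value2")) == 2500.0);
        return 0;
    }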


A DBTable is a table schema combined with a LockedHash of boost shared pointers to DynRecords of the appropriate type. The LockedHash is a specific implementation of a hash for shared pointers with locking down to the bucket level, which allows a high degree of concurrent processing. It has some features for merging partial updates of records as well as options for removal which can invalidate cached records held elsewhere.
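
A minimal sketch of bucket-level locking along the lines described for the LockedHash; the real implementation is a chained hash of boost shared pointers with merge and removal features, which this hypothetical fragment does not attempt to reproduce.

    #include <cstddef>
    #include <functional>
    #include <memory>
    #include <mutex>
    #include <string>
    #include <vector>

    struct DynRecord { std::string key; double value; };

    // Hypothetical bucket-locked map: each lock guards a group of buckets, so
    // concurrent queries touching different buckets do not block each other.
    class LockedHash {
        std::size_t numBuckets_;
        std::vector<std::vector<std::shared_ptr<DynRecord>>> buckets_;
        std::vector<std::mutex> locks_;
    public:
        LockedHash(std::size_t numBuckets, std::size_t numLocks)
            : numBuckets_(numBuckets), buckets_(numBuckets), locks_(numLocks) {}

        void insert(std::shared_ptr<DynRecord> rec) {
            std::size_t bucket = std::hash<std::string>{}(rec->key) % numBuckets_;
            std::lock_guard<std::mutex> guard(locks_[bucket % locks_.size()]);
            buckets_[bucket].push_back(std::move(rec));
        }

        std::shared_ptr<DynRecord> find(const std::string& key) {
            std::size_t bucket = std::hash<std::string>{}(key) % numBuckets_;
            std::lock_guard<std::mutex> guard(locks_[bucket % locks_.size()]);
            for (auto& rec : buckets_[bucket])
                if (rec->key == key) return rec;
            return nullptr;
        }
    };

    int main() {
        LockedHash table(100, 31);  // 100 buckets, 31 locks, as in the FIG. 14 example
        table.insert(std::make_shared<DynRecord>(DynRecord{"USD", 1.0}));
        return 0;
    }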


The DynTable database is the combination of DBTables in the global singleton SchemaDB.


6) Expression types: We offer only three types of expressions for specific purposes: one to evaluate a numeric expression (DbleEval), logical comparisons mostly using C++ comparisons (LogicEval), and assignments. All loops and nested ifs are built out of these expression types, usually just from the logical comparisons and the assignments.


Referring to FIG. 12, a diagram shows the node hierarchy in an example embodiment. Note that except for the top-level logical, DbleEval, and assignment nodes, this is all hidden from the user. These expressions are compiled upon parsing the input expressions into Nodes. After that they are just applied on the data in the SymbolValues.
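
To make the compiled-once, evaluated-many-times structure concrete, the hypothetical sketch below mirrors the three expression kinds (DbleEval, LogicEval, Assignment) as nodes that evaluate against a SymbolValues-like scratch space; the actual node classes in the framework differ.

    #include <cstddef>
    #include <memory>
    #include <vector>

    // Hypothetical scratch space standing in for SymbolValues.
    struct Scratch { std::vector<double> doubles; };

    // Numeric expression node: evaluate returns double (DbleEval).
    struct DbleNode {
        virtual ~DbleNode() = default;
        virtual double eval(Scratch& s) const = 0;
    };
    struct ValNode : DbleNode {               // val(x), resolved to a slot
        std::size_t slot;
        explicit ValNode(std::size_t sl) : slot(sl) {}
        double eval(Scratch& s) const override { return s.doubles[slot]; }
    };
    struct MulNode : DbleNode {               // a * b
        std::unique_ptr<DbleNode> a, b;
        MulNode(std::unique_ptr<DbleNode> x, std::unique_ptr<DbleNode> y)
            : a(std::move(x)), b(std::move(y)) {}
        double eval(Scratch& s) const override { return a->eval(s) * b->eval(s); }
    };

    // Logical node: evaluate returns bool (LogicEval).
    struct LessNode {
        std::unique_ptr<DbleNode> a, b;
        bool eval(Scratch& s) const { return a->eval(s) < b->eval(s); }
    };

    // Assignment node: evaluate returns void, e.g. val(y) = expression.
    struct AssignNode {
        std::size_t slot;
        std::unique_ptr<DbleNode> expr;
        void eval(Scratch& s) const { s.doubles[slot] = expr->eval(s); }
    };

    int main() {
        Scratch s{{2.0, 0.0}};
        // Compile once: val(y) = val(x) * val(x); then evaluate against the scratch.
        AssignNode assign{1, std::make_unique<MulNode>(
            std::make_unique<ValNode>(0),
            std::make_unique<ValNode>(0))};
        assign.eval(s);   // s.doubles[1] is now 4.0
        return 0;
    }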


Grammar can be used in parsing input expressions into Nodes. In the case of “val(x)”, it indicates that we look for “val(” followed by some string followed by “)”. Note that the parsing is just syntactic, but to make the Node, in this case a symbol node, you have to have a symbol with the name “x” or the model which had this expression would not validate. We also have checks on assigning table fields which are not assignable. We also have type checks on, say, adding a number to a string, which is also invalid.


The XML xsd for the NestedCalculationSteps can be used in many models. This allows infinitely nested if-clauses as well as looping, though we don't usually use the looping. If the expressions are all valid, they are built into a CalcStep which holds the nodes appropriate for execution at each if-clause or assignment step. The recursion goes NestedCalculationSteps->NestedCalculationStep->NestedCalculationAction->NestedCalculationSteps as deeply as required. The if-clause is always a LogicEval and is checked to see if the block should be executed. AssignmentSteps are used for all of the assignments. Of course, each Node has to be valid in its parsing before we can have a valid NestedCalculationSteps.


ModelSelection can be used in many services. It is a two-layer-deep if-clause selection with a random chance of picking some model in the model group, or you can just have one model in the model group. It is generally sufficient, but you can build any other XML-level selection method out of LogicEval nodes and assignment nodes if you need to. In some cases we have used the full CalcStep for this selection.


Referring to FIG. 13, a diagram shows the data handler hierarchy in an example embodiment. FIG. 13 shows how we parse the XML models and model selection using the Xerces XML parser for C++ from Apache. We actually have a layer we use on top of Xerces, but this gives the gist of it.


Referring to FIG. 14, a diagram shows a DynTable Schema example in an example embodiment. The example shows a DynTable schema for a currency table. Table size is used in the LockedHash to set the base size of the vector. LockedHash is a chained hash, so it will not resize while the server runs, since we cannot do that in our real-time system. HashLocks indicates the number of locks to use. In this case we have 100 buckets and 31 locks, so each lock will manage about 3 buckets. Adding more locks can increase concurrent execution. We don't usually need so many locks, since the number of concurrent queries is usually limited to 16-32, but some tables have 10k locks to avoid blocking as much as possible.


Referring to FIG. 15, a diagram shows a Keyword Extractor (KWE) Model example in an example embodiment. The example shows a simple model for a keyword extractor. This model considers records which match terms in the input. We use the filter step to remove candidate records which matched the input terms but don't follow the clause: custom.value2>2000 for the related record from the custom table. For records which pass this filter, we evaluate each and score with the given assignment, which is simple here but could be more complex. This service returns the ten records with the highest scored value as read from the field query.primary.
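
The filter-then-score flow of the FIG. 15 example can be sketched in plain C++ over hypothetical joined records; the field names custom.value2 and query.primary follow the example, and everything else here is illustrative.

    #include <algorithm>
    #include <iostream>
    #include <vector>

    // Hypothetical candidate: an input-matched record joined to its custom table row.
    struct Candidate {
        int id;
        double customValue2;   // custom.value2, used by the model's filter clause
        double queryPrimary;   // query.primary, used as the score here
    };

    int main() {
        std::vector<Candidate> candidates = {
            {1, 2500.0, 0.9}, {2, 1500.0, 0.8}, {3, 3000.0, 0.4}};

        // Filter step from the example model: keep records with custom.value2 > 2000.
        std::vector<Candidate> kept;
        for (const auto& c : candidates)
            if (c.customValue2 > 2000.0) kept.push_back(c);

        // Score step (trivial here) and return the ten highest-scoring records.
        std::sort(kept.begin(), kept.end(),
                  [](const Candidate& a, const Candidate& b) {
                      return a.queryPrimary > b.queryPrimary;
                  });
        if (kept.size() > 10) kept.resize(10);
        for (const auto& c : kept) std::cout << c.id << "\n";  // prints 1, then 3
        return 0;
    }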



FIG. 16 is a processing flow diagram illustrating an example embodiment of a system and method providing a dynamic table framework for managing data in a high performance web service as described herein. The method 1600 of an example embodiment includes: receiving a request at a web service (processing block 1610); creating a dynamic record from the request (processing block 1620); obtaining a runtime corresponding to the dynamic record, the runtime including an associated plurality of symbol values corresponding to the request (processing block 1630); choosing a model corresponding to the runtime, the model including a plurality of symbol managers, each of the plurality of symbol managers being associated with the plurality of symbol values, each of the plurality of symbol managers for processing a specific task of the model (processing block 1640); executing the model, by use of a data processor, to process the request, the model using at least one of the plurality of symbol managers (processing block 1650); and returning results generated by execution of the model (processing block 1660).


Although an overview of the inventive subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of embodiments of the present invention. Such embodiments of the inventive subject matter may be referred to herein, individually or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is, in fact, disclosed.


The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.


Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present invention. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present invention as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.


Thus, a system and method described herein provide a solution for managing data in a high performance web service. Although the various embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.


The Abstract of the Disclosure is provided to comply with 37 C.F.R. §1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.

Claims
  • 1. A method comprising: receiving a request at a web service; creating a dynamic record from the request; obtaining a runtime corresponding to the dynamic record, the runtime including an associated plurality of symbol values corresponding to the request; choosing a model corresponding to the runtime, the model including a plurality of symbol managers, each of the plurality of symbol managers being associated with the plurality of symbol values, each of the plurality of symbol managers for processing a specific task of the model; executing the model, by use of a data processor, to process the request, the model using at least one of the plurality of symbol managers; and returning results generated by execution of the model.
  • 2. The method as claimed in claim 1 wherein parameters from the request are included in the dynamic record.
  • 3. The method as claimed in claim 2 including automatically detecting fields based on names of the parameters.
  • 4. The method as claimed in claim 1 wherein choosing a model includes evaluating an expression based on information in the request.
  • 5. The method as claimed in claim 1 wherein executing the model includes evaluating the plurality of symbol managers included with the model.
  • 6. The method as claimed in claim 1 wherein the runtime is configured to operate a shared global symbol manager to change and read values in corresponding symbol values.
  • 7. The method as claimed in claim 1 wherein the results returned include a listing of items.
  • 8. A system comprising: a data processor; a dynamic table framework, executable by the data processor, configured to: receive a request at a web service; create a dynamic record from the request; obtain a runtime corresponding to the dynamic record, the runtime including an associated plurality of symbol values corresponding to the request; choose a model corresponding to the runtime, the model including a plurality of symbol managers, each of the plurality of symbol managers being associated with the plurality of symbol values, each of the plurality of symbol managers for processing a specific task of the model; execute the model, by use of a data processor, to process the request, the model using at least one of the plurality of symbol managers; and return results generated by execution of the model.
  • 9. The system as claimed in claim 8 wherein parameters from the request are included in the dynamic record.
  • 10. The system as claimed in claim 9 being further configured to automatically detect fields based on names of the parameters.
  • 11. The system as claimed in claim 8 being further configured to choose a model by evaluating an expression based on information in the request.
  • 12. The system as claimed in claim 8 being further configured to execute the model by evaluating the plurality of symbol managers included with the model.
  • 13. The system as claimed in claim 8 wherein the runtime is configured to operate a shared global symbol manager to change and read values in corresponding symbol values.
  • 14. The system as claimed in claim 8 wherein the results returned include a listing of items.
  • 15. A non-transitory machine-useable storage medium embodying instructions which, when executed by a machine, cause the machine to: receive a request at a web service; create a dynamic record from the request; obtain a runtime corresponding to the dynamic record, the runtime including an associated plurality of symbol values corresponding to the request; choose a model corresponding to the runtime, the model including a plurality of symbol managers, each of the plurality of symbol managers being associated with the plurality of symbol values, each of the plurality of symbol managers for processing a specific task of the model; execute the model, by use of a data processor, to process the request, the model using at least one of the plurality of symbol managers; and return results generated by execution of the model.
  • 16. The machine-useable storage medium as claimed in claim 15 wherein parameters from the request are included in the dynamic record.
  • 17. The machine-useable storage medium as claimed in claim 16 being further configured to automatically detect fields based on names of the parameters.
  • 18. The machine-useable storage medium as claimed in claim 15 being further configured to choose a model by evaluating an expression based on information in the request.
  • 19. The machine-useable storage medium as claimed in claim 15 being further configured to execute the model by evaluating the plurality of symbol managers included with the model.
  • 20. The machine-useable storage medium as claimed in claim 15 wherein the runtime is configured to operate a shared global symbol manager to change and read values in corresponding symbol values.
PRIORITY PATENT APPLICATION

This non-provisional U.S. patent application claims priority to U.S. provisional patent application Ser. No. 61/622,178, filed on Apr. 10, 2012 by the same applicant as the present patent application. The present patent application draws priority from the referenced patent application. The entire disclosure of the referenced patent application is considered part of the disclosure of the present application and is hereby incorporated by reference herein in its entirety.

Provisional Applications (1)
Number       Date       Country
61/622,178   Apr. 2012  US