The present invention is in the field of serving offers to individuals, for example via the internet to users of web browsers. In particular, some embodiments of the invention are in the specific field of serving targeted offers, for example offers that are aimed at a particular group of respondents. For example, a decision to serve an offer may be made automatically in real time and may utilize machine learning techniques to build and continuously improve a mathematical model used to predict which of a number of available offers an individual is most likely to respond to.
The following are definitions of terms used in this description and in the field to which the invention relates:
The term “offer” is used herein to denote one of a number of alternatives available for presentation to a potential respondent. An offer may include a presentation of information or a notice to a potential respondent. Examples of offers include but are not limited to offers for sale of a product or service, or offers ancillary to an offer for sale such as a “buy one get one free” promotion or special price.
“Respondent”, or “potential respondent”, usually refers to a person or individual who is expected to respond to an offer. An example of a respondent is a potential customer for a product or service that is being promoted via an offer.
“Responses” can be in various forms and at various levels. Thus examples of responses include “clicks” on a link on a web page (A click may be for example the use of a mouse or other pointing device to choose or indicate an area or icon on a screen or monitor; clicks may be performed using other devices such as touchscreens.), purchase of a product or other acquisition, e.g., within a predetermined time period, and a yes (or no) answer to a question posed or sentence read by a call center operator. These are not limiting examples and others will be apparent to those skilled in the art. Sometimes the term “response” is used to denote a positive response, for example in situations where a negative response to an offer is possible. It should also be noted that responses can be Boolean (e.g., for a betting website, whether or not a bet was made), integer (e.g., number of bets made) or real (e.g., total value of bets made).
An offer is said to be “served” to a potential respondent. The serving of an offer may take the form of, for example, presentation of a web page, in which case it is commonly referred to as an “impression”. The serving of an offer may take the form of display in a part of a web page, for example a part designed to improve the sales of products or services being promoted via the web page. Other examples of serving of an offer include but are not limited to reading a piece of text (script) to a caller, playing a piece of music such as an advertising jingle and mailing a flyer or advertising material, e.g., in paper form. A party serving an offer, or on whose behalf the offer is served, for example the party whose products or services are being promoted, may have a number of different offers available to be served to a respondent, the selection of which may be according to one or more characteristics of the respondent.
“Response rate” is usually measured as the ratio of responses to serves of a particular offer, but can also be measured as the number of responses in a unit time period, for example if the rate of serving is relatively stable. Number of serves and time period can be considered equivalent, or proportional, for a constant rate of serves. Response rate can also be determined as a ratio of positive responses to serves, where negative responses are possible, or as a ratio of positive responses to the total of non-responses plus negative responses.
In a computing system serving offers to respondents, responses are detected and may be reported e.g. in order to determine response rate. For this purpose response “events” may be defined, such as but not limited to a click on a web page, a text or voice answer “yes”, the expiry of a predetermined time period.
“Standard error”, StdErr, is a well-known statistical parameter and may be used for example as a measure of confidence in a calculation. Where several calculations are performed a standard deviation may be determined, with the standard error being related to the standard deviation StdDev by the equation: StdErr = StdDev/sqrt(n), where n represents the number of calculations used to determine the standard deviation. Thus the standard error decreases as sample size increases.
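By way of non-limiting illustration only, the relation StdErr = StdDev/sqrt(n) may be computed as in the following sketch; the function name std_error is illustrative and not part of any claimed embodiment:

```python
import math

def std_error(values):
    """Standard error of the mean for a list of observed values."""
    n = len(values)
    mean = sum(values) / n
    # Sample standard deviation (n - 1 in the denominator)
    std_dev = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    # StdErr = StdDev / sqrt(n): shrinks as the sample grows
    return std_dev / math.sqrt(n)
```

As the sketch makes explicit, doubling the sample size divides the standard error by sqrt(2), which is why confidence in an observed distribution grows with the number of observed responses.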
A “reward” is the hoped-for response to an offer. It may be as simple as a click on an item on a web-page, or it may be measured in monetary terms such as the profit derived from a customer making a purchase in response to an offer.
Rewards achieved in response to an offer may be distributed over a time period following the serving of an offer. A number of time-dependent functions may be used to represent the reward distribution and any of these is included in the term “reward distribution” unless otherwise stated. For example the reward distribution may be represented by an exponential decay function with the decay constant determined by the probability of having achieved a reward at a particular point in time. In another example the reward distribution may be a cumulative function for example the fraction of total expected reward received at any point in time. Alternatively the function may take on any other shape.
Some embodiments of the invention provide methods and systems, using one or more processors in a computing system, for selecting an offer from a set of offers to be served to one or more respondents. An embodiment of the method may include for example:
Thus in methods according to some embodiments of the invention, an estimate of the distribution is used instead of waiting for a set of real, or observed, data on which to base future reward predictions. For example the obtaining of each expected reward distribution may take place before the first serving of the corresponding offer. Thus reward distributions for each offer can be used before observational data has been gathered to determine the distribution.
According to some embodiments of the invention, the expected reward distributions are updated in repeated or iterative update operations after the initial serving of each offer. The updating may be based on an observed distribution of reward received in response to the servings of the offer. The updated expected reward distribution may then be used in the next selection of an offer.
The observed distribution does not need to span the whole of the period during which responses are expected. An update operation may be performed, according to some embodiments of the invention, at any time after the first serving of an offer. The observed distribution on which an update is based does not even need to include any positive responses.
Methods according to some embodiments of the invention may include compiling the observed reward distribution, for example for each offer. This may be performed for example by one or more processors in a computing system operating serve decision logic which may be said to be “observing” the distribution.
According to some embodiments of the invention confidence bounds may be maintained in association with the observed distribution so that any update is based on the observed distribution only to the limit of the confidence bounds. Thus the greater the set of observed responses, the greater will be the confidence in the observed distribution. This may help to mitigate the effect of random errors on the learning process.
The estimate of the reward distribution may be based on an estimate of the elapsed time, following the serving of an offer, by which most of the reward, e.g. 95%, will have been collected. For example if the time is 14 days, it is assumed that any respondent will have responded or otherwise generated a reward, by the end of 14 days after having been served that offer. According to embodiments of the invention, an update operation may take place before the expiry of this time following the first serving of an offer. For example, if the time period is 14 days, the expected reward distribution may be updated sooner than 14 days after the first serving of the offer. It may be considered that at this point in time a complete set of response data is not available. According to some embodiments of the invention updating may take place based on what may be termed “incomplete” response data. Nevertheless such updating may be beneficial and improve efficiency of offer selection. Other percentages and parameters may be used.
Some embodiments of the invention may take the form of a non-transitory computer readable medium storing or bearing instructions which, when executed or implemented in one or more processors in a computing system, cause the system to carry out any of the methods described herein.
The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanied drawings. Embodiments of the invention are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like reference numerals indicate corresponding, analogous or similar elements, and in which:
In the following description, various aspects of the present invention will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the present invention. However, it will also be apparent to one skilled in the art that the present invention may be practiced without the specific details presented herein. Furthermore, well known features may be omitted or simplified in order not to obscure the present invention.
Although embodiments of the invention are not limited in this regard, discussions utilizing terms such as, for example, “processing,” “computing,” “calculating,” “determining,” “establishing”, “analyzing”, “checking”, or the like, may refer to operation(s) and/or process(es) of a computer, a computing platform, a computing system, or other electronic computing device, that manipulates and/or transforms data represented as physical (e.g., electronic) quantities within the computer's registers and/or memories into other data similarly represented as physical quantities within the computer's registers and/or memories or other information storage medium, such as a non-transitory processor-readable storage medium, that may store instructions which, when executed by the processor, cause the processor to perform operations and/or processes. Although embodiments of the invention are not limited in this regard, the terms “plurality” and “a plurality” as used herein may include, for example, “multiple” or “two or more”. The terms “plurality” or “a plurality” may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like. The term “set” when used herein may include one or more items. Unless explicitly stated, the method embodiments described herein are not constrained to a particular order or sequence. Additionally, some of the described method embodiments or elements thereof can occur or be performed simultaneously, at the same point in time, or concurrently.
An offer may be chosen on the basis of a prediction of the likely reward to result from the offer. A model of respondent behavior may be used to make the prediction. The model may be improved to better reflect actual respondent behavior based on actual response events. In other words the model may learn or be trained based on actual respondent behavior. The model may allow a computer system or a set of connected computer systems to provide better, or more accurate serving or providing of offers as compared to existing systems.
When a respondent is shown targeted content (offers, advertisements, or other information) there is often a delay before the respondent responds. This response may be needed for the system to evaluate the accuracy of the prediction made and hence improve its performance over time, for example by updating the model. This evaluation and improvement can only be made after the respondent has been given adequate time to respond. After that time, if no response has been generated or no reward received, this lack of response or reward or both, sometimes referred to as a negative response, may also be used in the evaluation of the system and/or model.
This situation may be referred to as “Delayed Rewards”. The need to wait for the respondent to respond brings an unavoidable delay to the initial creation of a model and to subsequent learning in order to improve the model.
The modeling of delayed rewards may involve, for example:
The behavior of respondents of different types in response to different offers may be modeled. The result of such modeling is commonly described as a single model. The singular term “model” is used herein to denote a model of the behavior of a single respondent in response to a single offer as well as a collection of such models.
The model may be used on a subsequent occasion when a similar offer is created and may be updated to improve its accuracy by comparing a prediction with what actually happened. It will be appreciated that using this process the creation of a model for delayed rewards may be very slow. The consequent delay to learning may have tangible effects such as:
Some embodiments of the invention provide methods and systems that allow learning and hence updating of a model to start immediately after an offer has been displayed to a respondent, thereby minimizing the delay to learning, and improving the operation and accuracy of the overall system. According to some embodiments of the invention this may be done by learning the reward distribution, e.g. over time.
A hypothetical cumulative reward distribution which may be used in some embodiments of the invention may look like the graph of
Thus, according to some embodiments of the invention, it may be assumed that the probability of a reward arriving by any particular time after the offer is shown follows a cumulative exponential decay function, which may be represented, for example, by the equation:
p = 1 − e^(−αt),  (1)
where p represents the probability that a reward has arrived by time t, α is a decay constant, and t is the time elapsed since the offer was served to the respondent.
It may be further assumed that most rewards arrive quickly after which there is a long tail. In the example reward distribution shown in
If this distribution is known, or approximated, e.g. for a group of respondents, or for a particular offer or set of offers, the model may be updated at any arbitrary time after the offer is shown to a respondent. For example, the distribution shown in
It will be appreciated from the foregoing that it is not necessary to have received a reward in order to update the model, and therefore updates can be performed at any time and optionally but not necessarily in response to receiving a reward. An update can even be performed before a single reward has been received.
Some embodiments of the invention provide an efficient way to learn this distribution, or a more efficient way for a computer system to use such distribution information.
It should be noted that the time t in equation (1) is the time between the serving of an offer to a respondent and that respondent generating a reward. This may not correspond to the time elapsed from the initial serving of the offer since the same offer may be served to different respondents at different times.
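Purely by way of illustration, equation (1) may be evaluated as in the following sketch, where t is the per-respondent elapsed time just described; the particular value of α passed in is an arbitrary example and is not prescribed by the invention:

```python
import math

def reward_probability(alpha, t):
    """Equation (1): probability that a reward has arrived by time t
    after the offer was served to this particular respondent."""
    return 1 - math.exp(-alpha * t)
```

Consistent with the assumption that most rewards arrive quickly followed by a long tail, reward_probability rises steeply for small t and flattens as t grows.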
According to some embodiments of the invention a different expected reward distribution is prepared for each offer. The expected reward distribution may be used in modeling respondent behavior. Having a different expected reward distribution for each offer is useful since the expected time within which most of the reward can be assumed to have been collected, sometimes known as the “drop-off” time, may vary markedly between one offer and another. For example, some offers may be more expensive than others and require more thought on the part of the respondent before responding. The more delayed the collection of the reward, the slower is the process of learning the reward distribution. This is particularly notable in learning methods which rely on waiting for reward to have been received before any updating of a model is carried out.
It may be desirable to minimize random errors which may occur between the different expected reward distributions, since small errors in the distributions can lead to large errors in the targeting of offers to respondents.
Simply creating, for each offer, a histogram of the delay between display of the offer and response, although possible according to embodiments of the invention, may lead to distributions containing random errors that will impede learning. It is desirable for each expected reward distribution to change slowly and only in the direction of greater accuracy.
The flow of
Operations 303 and 305 may be, but are not always required to be, carried out before, in other words prior to, the first serving of the offer and may also be referred to as the initial approximation. According to embodiments of the invention one or more processors may retrieve the expected reward distribution from elsewhere, such as one or more external computing devices where the expected reward distribution is determined. Alternatively the expected distribution may be determined in one or more processors operating according to embodiments of this invention. The overall aim of the operations shown in
The determination of the expected reward distribution according to the embodiment shown in
From equation (1), knowing the drop-off time, α can be determined and then the initial guess may be used to determine an expected reward distribution. This may be done at operation 305 for example by fitting a cumulative exponential decay function to the time estimate received in operation 303, such as the function represented by equation (1). The reward distribution determined at operation 305 may also be regarded as a default reward distribution since it may be used by default in the determination of an expected reward even if there is no observed reward distribution on which to base a determination.
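As one hedged sketch of operation 305, assuming the cumulative exponential form of equation (1), α can be obtained from the drop-off estimate by inverting p = 1 − e^(−αt) at the drop-off point; the function names and the 95% figure are illustrative only:

```python
import math

def fit_alpha(drop_off_time, fraction=0.95):
    """Solve 1 - exp(-alpha * t) = fraction for alpha at t = drop_off_time."""
    return -math.log(1 - fraction) / drop_off_time

def default_distribution(drop_off_time, fraction=0.95):
    """Return the default cumulative reward distribution p(t) of equation (1),
    fitted so that `fraction` of the reward has arrived by drop_off_time."""
    alpha = fit_alpha(drop_off_time, fraction)
    return lambda t: 1 - math.exp(-alpha * t)
```

For example, with a 14-day drop-off estimate, the returned function evaluates to 0.95 at t = 14, matching the assumption that 95% of the reward has been collected by then.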
In a separate series of operations 307-311 that need not necessarily follow operations 303-305, an observed reward distribution is compiled. At 307 a processor implementing a method according to an embodiment of the invention is in a waiting state awaiting a response event. At operation 309 a notification of a response event is received and used to compile an observed reward distribution. The response event will have occurred in response to the serving of an offer. The notification may identify the offer and an amount of reward, e.g. income from a respondent. In some methods and systems according to embodiments of the invention, negative response events may be notified as well as positive response events. A negative response event could include but is not limited to a respondent positively declining an offer or no response having been received after a predetermined period of time such as the time period received in operation 303.
At operation 311 confidence bounds are maintained for data points, possibly but not necessarily all data points, included in the observed distribution. This will include the determination of the confidence bounds, e.g. upper and lower limits, for data points in a manner known in the art. For example the confidence bounds may be the observed amount plus or minus one standard error. As each new data point is added to the observed distribution the confidence bounds previously determined for existing data points will need to be determined anew, e.g. recalculated.
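Operations 309 and 311 might be sketched as follows, with the observed distribution kept as per-delay-bucket samples and bounds of plus or minus one standard error; the one-day bucket granularity and the class name are assumptions made purely for illustration:

```python
import math
from collections import defaultdict

class ObservedDistribution:
    """Per-offer observed rewards, bucketed by delay since serving (in days)."""

    def __init__(self):
        self.samples = defaultdict(list)  # delay bucket -> observed rewards

    def add_event(self, delay_days, reward):
        """Operation 309: fold a (possibly zero-reward) response event in."""
        self.samples[int(delay_days)].append(reward)

    def bounds(self, bucket):
        """Operation 311: mean +/- one standard error for one bucket,
        recomputed on demand so new data points update the bounds."""
        values = self.samples[bucket]
        n = len(values)
        mean = sum(values) / n
        if n < 2:
            return (mean, mean)  # no spread estimate from a single sample
        var = sum((v - mean) ** 2 for v in values) / (n - 1)
        err = math.sqrt(var) / math.sqrt(n)
        return (mean - err, mean + err)
```

Because bounds() recomputes from all samples in a bucket, each added data point automatically narrows the interval as confidence grows.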
Operations 313 and 315 relate to the updating or adjusting of the expected reward distribution. According to some embodiments of the invention, these operations may be carried out in response to each response event. According to other embodiments of the invention the performance of these operations may be asynchronous with the receiving of response event notifications.
At operation 313 it is determined whether the expected reward distribution lies outside the confidence bounds and if so, at operation 315 the expected reward distribution is adjusted or updated so that the expected reward distribution is within the confidence bounds. According to some embodiments of the invention the amount of the adjustment at operation 315 is just sufficient, for example the minimum needed, to bring the expected reward distribution within the confidence bounds.
If it is determined at operation 313 that the expected reward distribution is not outside the confidence bounds, operation 315 does not occur. The flow may return to operation 307 so that decision 313 only occurs after a response event. Alternatively, if the updating of the expected reward distribution is asynchronous, operation 313 may simply be repeated at intervals, for example periodically.
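Operations 313 and 315 amount to clamping each point of the expected distribution into the confidence interval observed for it, using the minimum adjustment needed; the sketch below is illustrative only and assumes both distributions are sampled at the same points:

```python
def clamp_to_bounds(expected, bounds):
    """Operations 313/315: move each expected value the minimum distance
    needed to lie within its [lower, upper] confidence interval.
    expected: list of expected values; bounds: list of (lower, upper)."""
    adjusted = []
    for value, (lower, upper) in zip(expected, bounds):
        if value < lower:        # outside below: raise to the lower bound
            adjusted.append(lower)
        elif value > upper:      # outside above: lower to the upper bound
            adjusted.append(upper)
        else:                    # already inside the bounds: leave unchanged
            adjusted.append(value)
    return adjusted
```

Points already within their bounds are returned unchanged, which is the no-adjustment branch in which operation 315 does not occur.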
In a practical implementation of the operation flow shown in
Some embodiments of the invention may be used in making a decision between one or more offers to be served to a respondent. This may take the form of optimization of a web page for a particular respondent. Thus some embodiments of the invention may take the form of a system for web page optimization. Such a system may determine which is the most appropriate offer of a set of offers to be served to a respondent. It may be configured to implement the operations described with reference to
Each of the servers 501 and 502 and the respondent and user devices 503 and 505 may comprise computing devices comprising one or more processors. An example of a suitable computing device is illustrated in
The decision server 501 may comprise one or more processors implementing one or more computing platforms or modules, two of which are indicated in
A success criterion or goal may also be defined by a user so that the system can measure the success of each offer. The success criterion or goal may define the reward. This could be click through, product acquisition, revenue spent, or any other metric of interest. Whatever it is, whenever a respondent performs the behavior leading to the success criterion or goal, the decision server should receive this information to improve the accuracy of its predictions. The goal is determined by a user configuring the system.
For some embodiments of the invention, it is desirable to configure the estimated time period, or estimated drop-off time, being the time period for which a system needs to wait to receive a predetermined majority fraction, for example an estimated 95% (or another suitable percentage), of any response generated by the offer's display or other serving of an offer. In the examples of
In the example systems shown in
In the example of
Referring to
In the foregoing example, the content of each offer is stored at the website host server 502 and the decision server 501 simply uses identifiers for each offer. It is also possible, according to some embodiments of the invention, for the content to be stored at the decision server 501.
Referring to
The response data is used to determine an actual reward. The determination of the actual reward resulting from the serving of an offer (or, for example if the reward is response rate, multiple serves of the offer) may be performed at the website host server 502, the decision server 501 or elsewhere. According to some embodiments of the invention, the reward is reported as a notification to a decisioning platform within the decision server 501.
It should be noted that, from the point of view of the respondent or other operator of respondent device 503, such as a call center agent, the communication between the website host server 502 and the decision server 501 is invisible, and the respondent or call center agent, for example, may not be aware that this is happening.
In brief, respondents have various offers displayed to them as they interact with some third party application. Whether or not they interact with this content (e.g., click through, go on to acquire something, or whatever the optimization goal is) this is recorded and notified to the decision server 501. A system according to some embodiments of the invention may learn that one offer is more successful than the rest in terms of reward for a particular group of respondents, and this offer will then be preferentially served to this group in future. Therefore, future respondents, or the same respondents coming back, should generate improved rewards. This invention aims to provide greater rewards from respondents more quickly and in a more reliable way.
A suitable architecture for a system according to some embodiments of the invention will now be described in more detail.
The decisioning platform 701 listens for requests for a selection of an offer from a set of offers, or Decision Requests. Decision Requests 705 may be received for example from a website host server such as server 502 or a respondent device such as device 503 as shown in
According to some embodiments of the invention, the decision request 705 may include a value for one or more variables characterizing the respondent. These may be used to calculate a predicted reward for each offer and the prediction may be used in the selection of an offer to be served to the respondent. According to other embodiments of the invention, the decision request 705 may simply identify the respondent and values for variables characterizing the respondent may be retrieved from a database, for example at decision server 501.
A targeting strategy may then be applied to the set of filtered offers, for example using one or more targeting applications. This may be applied in a targeting strategy module 708 which is comprised in the decisioning platform 701. There are several available strategies and corresponding modules illustrated in
In response to the request, an offer is selected in decision module 807. The selection is based at least partially on the expected, e.g. learned, reward distribution. The selection may be carried out in various ways and may use the learned reward distribution in various ways. According to some embodiments of the invention, as part of the decision process, a predicted reward is calculated for each offer in the set of offers. The expected reward distribution may be used in the determination of predicted reward. For example, the predicted reward which is calculated may not be specific to the customer to whom the offer is served but may be a prediction of total reward, e.g. for all customers in a certain category over a particular period of time. This calculation may require knowledge of the drop-off time, so the sooner this can be learned based on observations, the earlier will improvements in calculation of total reward be achieved.
The decision, or selection of an offer, may be based solely on the predicted reward or may take other factors into account. Thus a score may be determined for each offer in response to the request and the score may simply be equivalent to the predicted reward or may take other factors into account. For example, according to some embodiments of the invention, all of the offers comprised in the request 805 may be scored against values for one or more variables characterizing the respondent, for example variables contained in a respondent profile, where each score is a prediction of the expected future reward from that particular offer being shown to the current respondent. The expected reward distribution may affect some of the variables, or the extent to which those variables are taken into account, or weighted, in the score determination. The scores may be generated using a mathematical model that is being continually updated, part of which may be the expected reward distribution. The expected reward distribution may be updated for example using the process described with reference to
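In the simple case described above, where the score is equivalent to the predicted reward, the decision reduces to scoring each offer and choosing the highest; the sketch below is illustrative only, and predict_reward stands in for whatever model (including the learned reward distribution) an embodiment uses:

```python
def select_offer(offers, predict_reward, respondent):
    """Pick the offer with the highest predicted reward for this respondent.
    predict_reward(offer, respondent) is assumed to embody the continually
    updated model, including the expected reward distribution."""
    return max(offers, key=lambda offer: predict_reward(offer, respondent))
```

Embodiments that take other factors into account would replace the score with a weighted combination rather than the raw prediction.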
Following the decision, an “impression”, e.g., an instance of serving the offer, is logged in a serve logger 809 for the chosen offer. A signal identifying the selected offer is output to cause the selected offer to be served to a respondent. This is shown in
Independently, positive response events, such as response event 820, may be received by a model repository 813 in which the model used to predict rewards is stored. These response events may come from many places and may depend on the goal and the configuration. Some examples:
If the goal is click-through on a web page, the click-through page could be tagged to automatically send an event to a decision server 501, in much the same way as packages like Google Analytics record page-views in real-time.
If the goal is revenue, a client of the decision server 501 (e.g., the company running the website) may send a batch of events every night created from their purchase records.
In the example embodiment of the invention shown in
The updating can be considered to take place, in one example, in several stages:
The updated reward distribution generated in module 815 according to operations 313 and 315 may be provided to the update model module 817 and here it may be used, for example along with other information supplied from other sources, to update the model, for example to update a part of an overall model to which the updated reward distribution relates.
The reward distribution is not necessarily updated in response to each response event. According to some embodiments of the invention, update operations may be performed in module 815 after a batch of response notifications has been received. Batches of notifications may be compiled on a time basis, for example periodically such as daily or hourly, or on a numerical basis so that each batch contains the same number of responses. The batches may be compiled in the model repository 813, in which case the reward distribution may be updated in module 815 in response to the receipt of a batch of responses or response notifications from model repository 813. Alternatively module 815 may include memory or storage and the batches may be compiled at module 815. Either way, the module 815 may be responsible for receiving notifications, for example in batches, of response events occurring in response to servings of offers and compiling the observed reward distribution for each offer using said notifications.
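Batch compilation on a numerical basis, as just described, might be sketched as follows; the class name, batch size and callback are illustrative assumptions, with on_batch standing in for the update performed in module 815:

```python
class ResponseBatcher:
    """Collect response notifications and flush them in fixed-size batches,
    e.g. to trigger one reward-distribution update per batch."""

    def __init__(self, batch_size, on_batch):
        self.batch_size = batch_size
        self.on_batch = on_batch   # called once with each complete batch
        self.pending = []

    def notify(self, event):
        """Queue one response notification; flush when the batch is full."""
        self.pending.append(event)
        if len(self.pending) >= self.batch_size:
            self.on_batch(list(self.pending))
            self.pending.clear()
```

A time-based variant would flush on a timer, for example hourly or daily, instead of on a count.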
The graph of
The y-axis shows efficiency of learning. This is defined so that if offers are chosen at random, efficiency equals 0%. If the best possible choice is made for every customer then efficiency equals 100%.
The lower line shows modeling according to a method known in the art. Here no learning happens for 14 days as no responses are processed during that time. The flat section shows that without learning the system can function no better than random. Once responses start to be processed, learning is rapid, reaching a plateau of about 80% after 26 days.
The upper line shows modeling according to some embodiments of the invention. Learning starts immediately. Even though the efficiency is low at the beginning, the system will be giving a positive ROI from an earlier stage.
This is a very simple simulation where the drop-off time was guessed correctly and the default distribution was correct. It shows the benefit of using a reward distribution to perform updates. Some embodiments of the invention enable use of a reward distribution in a reliable way.
Reference is made to
Operating system 1015 may be or may include any code segment designed and/or configured to perform tasks involving coordination, scheduling, arbitration, supervising, controlling or otherwise managing operation of computing system 1000, for example, scheduling execution of programs. Operating system 1015 may be a commercial operating system. Memory 1020 may be or may include, for example, a Random Access Memory (RAM), a read only memory (ROM), a Dynamic RAM (DRAM), a Synchronous DRAM (SD-RAM), a double data rate (DDR) memory chip, a Flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory units or storage units. In one embodiment, memory 1020 is a non-transitory processor-readable storage medium that stores instructions and the instructions are executed by controller 1005. Memory 1020 may be or may include a plurality of, possibly different memory units.
Executable code 1025 may be any executable code, e.g., an application, a program, a process, task or script. Executable code 1025 may be executed by controller 1005 possibly under control of operating system 1015. Executable code 1025 may comprise code for selecting an offer to be served and calculating reward predictions according to some embodiments of the invention.
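One possible form such executable code could take is sketched below. This is a hedged illustration only, using Thompson sampling over per-offer Beta reward distributions; the specification does not prescribe this particular algorithm, and the class and method names are assumptions:

```python
import random

# Illustrative sketch of offer selection via per-offer reward
# distributions (Thompson sampling over Beta priors). This is one
# possible embodiment of "selecting an offer to be served and
# calculating reward predictions"; it is not mandated by the text.

class OfferSelector:
    def __init__(self, offer_ids):
        # Beta(1, 1) prior (uniform) over each offer's response rate.
        self.params = {oid: [1.0, 1.0] for oid in offer_ids}

    def select(self):
        # Sample a predicted reward from each offer's distribution,
        # then serve the offer with the highest sampled value.
        samples = {oid: random.betavariate(a, b)
                   for oid, (a, b) in self.params.items()}
        return max(samples, key=samples.get)

    def update(self, offer_id, responded):
        # Bayesian update of the served offer's distribution from
        # the observed Boolean response.
        a, b = self.params[offer_id]
        self.params[offer_id] = [a + 1, b] if responded else [a, b + 1]

selector = OfferSelector(["offer_a", "offer_b"])
selector.update("offer_a", True)   # a positive response to offer_a
print(selector.select())           # one of "offer_a" / "offer_b"
```

Because each selection draws from the current reward distributions, updates to those distributions begin influencing offer choice immediately, consistent with the early-learning behavior described above.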
In some embodiments, more than one computing system 1000 may be used. For example, a plurality of computing devices that include components similar to those included in computing system 1000 may be connected to a network and used as a system.
Storage 1030 may be or may include one or more storage components, for example, a hard disk drive, a Compact Disk (CD) drive, a CD-Recordable (CD-R) drive, a universal serial bus (USB) device or other suitable removable and/or fixed storage unit. For example, memory 1020 may be a non-volatile memory having the storage capacity of storage 1030. Accordingly, although shown as a separate component, storage 1030 may be embedded or included in memory 1020. Storage 1030 or memory 1020 may store identifiers of or content of offers, and may thus serve the function of offer repository 703 shown in
Input to and output from a computing system according to some embodiments of the invention may be via an API, such as API 1012 shown in
The decision server 501 may include user input devices. Input devices 1035 may be or may include a mouse, a keyboard, a touch screen or pad or any suitable input device. It will be recognized that any suitable number of input devices may be operatively connected to computing system 1000 as shown by block 1035.
The decision server may include one or more output devices. Output devices 1040 may include one or more displays, speakers and/or any other suitable output devices. It will be recognized that any suitable number of output devices may be operatively connected to computing system 1000 as shown by block 1040. Any applicable input/output (I/O) devices may be connected to computing system 1000 as shown by blocks 1035 and 1040. For example, a wired or wireless network interface card (NIC), a modem, printer or a universal serial bus (USB) device or external hard drive may be included in input devices 1035 and/or output devices 1040.
Input devices 1035 and output devices 1040 are shown as providing input to system 1000 via API 1012 for the purposes of some embodiments of the invention. For the performance of other functions carried out by system 1000, input devices 1035 and output devices 1040 may provide input to, or receive output from, other parts of system 1000.
Alternatively, all output from the decision server 501 may be to a remote device such as user device 505 in which case the output devices may be replaced by a data port.
Some embodiments of the invention may include computer readable medium or an article such as a computer or processor non-transitory readable medium, or a computer or processor non-transitory storage medium, such as for example a memory, a disk drive, or a USB flash memory, encoding, including or storing instructions, e.g., computer-executable instructions, which, when executed by a processor or controller, carry out methods disclosed herein. For example, some embodiments of the invention may comprise a storage medium such as memory 1020, computer-executable instructions such as executable code 1025 and a controller such as controller 1005.
A system according to some embodiments of the invention may include components such as, but not limited to, a plurality of central processing units (CPU), e.g., similar to controller 1005, or any other suitable multi-purpose or specific processors or controllers, a plurality of input units, a plurality of output units, a plurality of memory units, and a plurality of storage units. An embodiment of a system may additionally include other suitable hardware components and/or software components. In some embodiments, a system may include or may be, for example, a personal computer, a desktop computer, a mobile computer, a laptop computer, a notebook computer, a terminal, a workstation, a server computer, a Personal Digital Assistant (PDA) device, a tablet computer, a network device, or any other suitable computing device.
Unless explicitly stated, the method embodiments described herein are not constrained to a particular order or sequence. Additionally, some of the described method embodiments or elements thereof can occur or be performed at the same point in time.
While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents may occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
Various embodiments have been presented. Each of these embodiments may of course include features from other embodiments presented, and embodiments not specifically described may include various features described herein.
This application claims benefit from U.S. provisional patent application No. 62/141,273 filed Apr. 1, 2015, which is incorporated by reference herein in its entirety.
Number | Date | Country
---|---|---
62141273 | Apr 2015 | US