The disclosure relates generally to the field of measuring similarities, and more specifically to systems, methods, and devices for measuring similarities of and generating sorted recommendations for unique items.
Collaborative filtering systems can be used to recommend items to a user based on a user's previously expressed preferences. In general, a collaborative filter collects information about the preferences of many users and uses that information to predict the preferences of individual users. For example, if a user streams videos from a streaming video service, the service may utilize a collaborative filter to generate recommendations of alternate videos to stream based on an estimated likelihood that the user will be interested in the alternate videos. In another example, a user may purchase books from a bookseller, and the bookseller may utilize a collaborative filter to make recommendations to the user of alternate books based on an estimated likelihood that the user will be interested in the alternate books.
Collaborative filtering has limitations in its effectiveness, particularly when the relevant products or services are unique. Typically, a collaborative filter will assume that all similar items are identical. For example, when a user streams a particular movie, the collaborative filter assumes that all users who stream that movie view the same content, which is typically a valid assumption for a video streaming service. In another example, a collaborative filter that makes recommendations of books will typically assume that all customers that purchase a particular book are buying identical content. Accordingly, it can be advantageous to have systems, devices, and/or methods for measuring similarity of and generating recommendations for unique items.
The disclosure herein provides methods, systems, and devices for measuring similarities of and generating recommendations for unique items, customizable items, and/or items having varying conditions, such as used vehicles and homes.
In some embodiments, a recommendation system for generating recommendations of alternative unique items comprises: an items information database configured to store data relating to unique items; a penalty computation engine configured to calculate a dissimilarity penalty, the dissimilarity penalty at least partially generated based on a magnitude of dissimilarity between a selected item and an alternative item, the penalty computation engine comprising: a customizations filter configured to calculate a customization score, the customization score representing an estimated preference impact of a difference between at least one customization attribute of the selected item and at least one customization attribute of the alternative item; a condition filter configured to calculate a condition score, the condition score representing an estimated preference impact of a difference between at least one condition attribute of the selected item and at least one condition attribute of the alternative item; wherein data representing the at least one customization attribute and the at least one condition attribute of the alternative item is configured to be stored in the items information database; and a dissimilarity penalty calculator configured to generate the dissimilarity penalty by combining at least the customization score and the condition score; a recommendation compilation engine configured to generate a recommendation of alternative unique items, wherein the recommendation compilation engine is configured to electronically communicate with the penalty computation engine to calculate dissimilarity penalties for each of a plurality of alternative unique items, the recommendation of alternative unique items comprising a ranking of at least a portion of the plurality of alternative unique items, the ranking based at least partially on the calculated dissimilarity penalties; and one or more computers configured to operate the recommendation compilation engine, wherein the one or more computers comprises a computer processor and an electronic storage medium.
In certain embodiments, a computer-implemented method for generating recommendations of alternative unique items comprises: receiving electronic data indicating a selection of a selected item; calculating, using a computer system, a customization score for each of a plurality of alternative unique items, the customization score representing an estimated preference impact of a difference between at least one customization attribute of the selected item and at least one customization attribute of the alternative unique item; calculating, using the computer system, a condition score for each of the plurality of alternative unique items, the condition score representing an estimated preference impact of a difference between at least one condition attribute of the selected item and at least one condition attribute of the alternative unique item; generating, using the computer system, a dissimilarity penalty for each of the plurality of alternative unique items by combining at least the customization score and the condition score; and generating, using the computer system, a recommendation of alternative unique items, the recommendation comprising a ranking of at least a portion of the plurality of alternative unique items, the ranking based at least partially on the generated dissimilarity penalties; wherein the computer system comprises a computer processor and electronic memory.
In some embodiments, a computer-readable, non-transitory storage medium having a computer program stored thereon is provided, the computer program causing a suitably programmed computer system to perform, by one or more processors executing computer-program code, a method for generating recommendations of alternative unique items when the computer program is executed on the suitably programmed computer system, the method comprising: receiving electronic data indicating a selection of a selected item; calculating, using a computer system, a customization score for each of a plurality of alternative unique items, the customization score representing an estimated preference impact of a difference between at least one customization attribute of the selected item and at least one customization attribute of the alternative unique item; calculating, using the computer system, a condition score for each of the plurality of alternative unique items, the condition score representing an estimated preference impact of a difference between at least one condition attribute of the selected item and at least one condition attribute of the alternative unique item; generating, using the computer system, a dissimilarity penalty for each of the plurality of alternative unique items by combining at least the customization score and the condition score; and generating, using the computer system, a recommendation of alternative unique items, the recommendation comprising a ranking of at least a portion of the plurality of alternative unique items, the ranking based at least partially on the generated dissimilarity penalties; wherein the computer system comprises a computer processor and electronic memory.
For purposes of this summary, certain aspects, advantages, and novel features of the invention are described herein. It is to be understood that not necessarily all such advantages may be achieved in accordance with any particular embodiment of the invention. Thus, for example, those skilled in the art will recognize that the invention may be embodied or carried out in a manner that achieves one advantage or group of advantages as taught herein without necessarily achieving other advantages as may be taught or suggested herein.
The foregoing and other features, aspects, and advantages of the present invention are described in detail below with reference to the drawings of various embodiments, which are intended to illustrate and not to limit the invention. The drawings comprise the following figures in which:
Although several embodiments, examples, and illustrations are disclosed below, it will be understood by those of ordinary skill in the art that the invention described herein extends beyond the specifically disclosed embodiments, examples, and illustrations and includes other uses of the invention and obvious modifications and equivalents thereof. Embodiments of the invention are described with reference to the accompanying figures, wherein like numerals refer to like elements throughout. The terminology used in the description presented herein is not intended to be interpreted in any limited or restrictive manner simply because it is being used in conjunction with a detailed description of certain specific embodiments of the invention. In addition, embodiments of the invention can comprise several novel features, and no single feature is solely responsible for its desirable attributes or is essential to practicing the inventions herein described.
The disclosure herein provides methods, systems, and devices for measuring similarities of and generating recommendations for unique items, customizable items, and/or items having varying conditions, such as used vehicles, homes, commercial real estate, household goods, collectibles, automotive components, and the like. In an embodiment, a system receives information indicating an item a user is interested in. The system is configured to compare that item to various alternate items and to return a list of alternate items, the list being sorted by the estimated likelihood that the user will also be interested in each of the alternate items. For example, a system as described herein can be configured to generate a sorted list of used vehicle listings that a user may be interested in, based on the user's expressed interest in a different used vehicle listing.
Collaborative filtering systems can be used to recommend items to a user based on a user's previously expressed preferences. For example, a bookseller may use a collaborative filtering system that makes recommendations of alternative books to users; however, the system will typically assume that all customers that purchase a particular book are buying identical content. The bookseller may make distinctions between media format, for example, hardcover, softcover, electronic, audio, etc., but the collaborative filter typically does not take that information into account in generating recommendations.
When unique items are being sold, a collaborative filter alone may not be sufficient to make a useful recommendation to a user of alternative items based on the user's previously expressed interests. For example, used vehicles are unique. No used vehicle has the exact same condition or customization as any other used vehicle, limiting the effectiveness of a collaborative filter that assumes all similar items are identical. Therefore, it is desirable to have a system for making useful recommendations of customized and precisely conditioned items, such as used vehicles and homes.
In an embodiment, a recommendation system decomposes information describing unique items to determine the unique items' prototypes, customizations, conditions, and/or statuses in the marketplace. The system can be configured to take into account each of these factors in generating estimated likelihoods that a user will be interested in alternate items based on the user's expressed interest in a base item or multiple base items. For example, a recommendation system can be configured to perform collaborative filtering with respect to prototypes, but also to weigh the differences in value of various customizations, evaluate differences in condition and marketplace status, and then combine some or all of this information to estimate more accurately which alternate items a user may be interested in based on the user's expressed interest in the base item or items.
In some embodiments, a recommendation system decomposes a selected item and various alternative items into their prototypes, customizations, conditions, and/or statuses in the marketplace. The system can be configured to calculate a score for each of the prototypes, customizations, conditions, and/or statuses of each alternative item as compared to the selected item. In some embodiments, the recommendation system is configured to normalize the various scores and to combine them together to produce a single dissimilarity penalty or score. In some embodiments, the lower the value of the dissimilarity penalty, the more similar the items are. For example, a dissimilarity penalty of zero would indicate identical items. The dissimilarity penalty for each alternative item can then be used to generate a recommendation comprising a sorted list of alternative items, with items having lower dissimilarity penalties being at the top of the list.
With respect to the various scores, in some embodiments a collaborative filter may be used to compare the prototypes of an alternative item and the selected item. The collaborative filter can generate a score indicating how similar the prototypes of the two items are. However, when dealing with unique items, a better recommendation can be generated by also or alternatively calculating one or more scores that indicate at least partially a similarity of an alternative item to the selected item with respect to the items' customizations, conditions, and/or statuses. Therefore, in some embodiments, a customization score is calculated to indicate at least partially a similarity between the customizations of the alternative item and selected item. Condition and/or status scores may also be calculated.
Embodiments of recommendation systems as described herein address deficiencies in merely using collaborative filters to recommend alternative items, especially when dealing with items that are unique, customized, and/or that have varying conditions or marketplace statuses. By considering differences in customizations, condition, and status, alone or in combination with a collaborative filter, recommendation systems described herein can be configured to generate more accurate and/or more useful recommendations of alternative items than merely using a collaborative filter.
In some embodiments, a prototype is a definition of a category in which, disregarding customization, condition, or marketplace status, all items within that category are considered to be sufficiently interchangeable. For example, a used vehicle prototype may comprise a year, make, and model, such as “2001 Honda Accords.” In some embodiments, a prototype may be defined more narrowly and comprise, for example, a year, make, model, trim, and body style (for example, “2001 Honda Accord LX Sedans”). In some embodiments, a prototype may be defined more broadly and consist of, for example, several years of a similar car make or several years of a car make and body style. In some embodiments, a prototype may be at least partially defined by, for example, a style of vehicle, such as crew cab trucks or extended bed trucks. The precise scope of a prototype can be configured based on a desired level of interchangeability. In some embodiments, a training system or engine analyzes data containing indications of user preferences to estimate the interchangeability of the items included in various prototype definitions and to then enable setting of the scope of the prototype definitions based on the estimated interchangeability of the various items.
A customization of a used vehicle may comprise, for example, the engine size, the type of material used for the interior of a car (for example, leather versus cloth), the color of the car, and the like. A condition of a used vehicle may comprise, for example, the number of miles on the odometer, whether the title is clean, whether the vehicle has been in an accident, and the like. The status of a used vehicle in the marketplace may comprise, for example, the listing price of the car, where the car is listed for sale geographically, whether the car is offered for sale by a dealer or a private seller, and the like.
In various embodiments described herein, recommendation systems are configured to provide useful recommendations by taking into account collaborative filtering on prototypes, customizations of various items, and the conditions and marketplace statuses of various items, and by combining all or some of these considerations to produce a single recommendation. If, on the other hand, a recommendation system for unique items was based solely on collaborative filtering that assumes similar items are identical, the system would likely produce less useful recommendations. For example, a collaborative filter may learn that Toyota Camrys are desirable to customers who have expressed interest in Honda Accords. However, a Camry that differs wildly in terms of its luxury and performance features, its condition, and/or its market status from an Accord a user has expressed interest in will not be a useful recommendation. Therefore, when a recommendation system takes into account various customizations, conditions, and/or market statuses, the recommendation system will likely be able to make more useful recommendations of cars that would be of interest to the user.
Although the embodiments described herein are generally described with respect to recommending used vehicles to users, the embodiments described herein are applicable in other markets where items and/or services are unique, customizable, and/or have varying conditions. For example, in real estate, a system can be configured to define a prototype in terms of a neighborhood description, type of residence, size of residence, and the like. Customizations can include flooring choices, the presence or absence of garages and pools, and the like. Condition and market status can include the year built, the price per square foot, the listing price, the geographic area, and the like. The neighborhood description for a real estate prototype may include, for example, the zip code, school attendance zone, etc. The type of residence in a real estate prototype may include, for example, single family residence versus multi-family residence. The size may include the number of bathrooms, the number of bedrooms, the total square feet, etc. The embodiments described herein may also apply to various other types of items, such as existing homes, commercial real estate, household goods, customized electronics, customized goods, clothing, automotive components, collectibles, sporting goods, toys, hobby products, gift items, and/or various other types of unique or customizable products or items or products or items having various conditions offered for sale.
As an example of the embodiments described herein being applied to services offered for sale, a person looking to hire a window washing service may be interested in recommendations of alternative window washing services. A system could be configured to define a prototype in terms of, for example, whether a window washing service is a residential or commercial window washing service. Customizations could include the types of tools each window washing service uses, the specific capabilities of each window washing service, and/or the like. Condition and market status can include the price of each service, the geographic location of each service, and/or the like. These techniques may also be applied to various other services, such as dog walking services, computer repair services, car repair services, medical services, insurance services, and various other types of services.
In an embodiment, a base listing or group of listings is provided to a recommendation system to indicate a used vehicle listing or listings that a user has expressed interest in. The recommendation system is configured to compute a dissimilarity penalty between the base listing or listings and various alternative vehicle listings. A separate dissimilarity penalty is calculated for each alternative vehicle listing. The recommendation system can be configured to then produce a sorted list of alternatives or candidates by sorting the candidates based on their dissimilarity penalties to provide only the most relevant alternate listings to a user.
Each calculated dissimilarity penalty is a quantitative indication of how similar each alternative vehicle is to the base vehicle or vehicles, or of how likely it is that the user would be interested in the alternative vehicle. For example, if an alternative vehicle listing is identical to or perfectly substitutable for the base vehicle, the dissimilarity penalty may be zero. If an alternative vehicle listing is only slightly different from the base vehicle, the dissimilarity penalty may be, for example, two. If an alternative vehicle listing has more significant differences from the base vehicle, the dissimilarity penalty may be, for example, ten or even much larger. In some embodiments, the recommendation system can be configured to disregard dissimilarity penalties higher than a certain threshold, such as 1000, and/or to consider any dissimilarity penalty greater than a certain threshold as indicating that two items are completely dissimilar and not interchangeable at all. It should be noted that the dissimilarity penalty, although generally described as indicating the amount of “difference” or “similarity” between two items, may be configured to take into account more than just the raw “differences” or “similarities” between two items. For example, the calculation of a dissimilarity penalty can be configured to take into account the relative values of various customizations, how likely a user is to be interested in various customizations or conditions, how far a user will likely want to travel to purchase a used vehicle, etc.
In some embodiments, a recommendation system can be configured to calculate dissimilarity penalties by combining multiple Mahalanobis distances. A Mahalanobis distance is a measurement that uses correlations between various variables to determine the similarity of one set of data to a base set of data. The Mahalanobis distance between vectors v1 and v2 is sqrt((v1 − v2)^T · S^−1 · (v1 − v2)), where ^T denotes a transpose and S^−1 is an inverse covariance matrix. In some embodiments, other types of calculations are used to calculate dissimilarity penalties, in addition to or in lieu of Mahalanobis distance measurements, for example, linear regressions, nonlinear regressions, nearest neighbor analysis, and the like.
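Purely as an illustrative sketch, and not the claimed implementation, the Mahalanobis distance described above can be computed as follows; the attribute vectors, the assumed covariance values, and the use of Python with numpy are choices made only for this example.

```python
import numpy as np

def mahalanobis_distance(v1, v2, inv_cov):
    """Return sqrt((v1 - v2)^T * S^-1 * (v1 - v2)), where inv_cov is S^-1."""
    diff = np.asarray(v1, dtype=float) - np.asarray(v2, dtype=float)
    return float(np.sqrt(diff @ inv_cov @ diff))

# Illustrative condition vectors (year, mileage, price) and an assumed
# diagonal covariance; a real system would use a trained covariance matrix.
base = [2010, 46006, 24995.0]
candidate = [2012, 38000, 24995.0]
S = np.diag([1.5 ** 2, 12000.0 ** 2, 3000.0 ** 2])
penalty = mahalanobis_distance(base, candidate, np.linalg.inv(S))
```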
In some embodiments, four dissimilarity penalties are calculated: one using a collaborative filter, one using a customization penalty calculation, one using a condition penalty calculation, and one using a status penalty calculation. In some embodiments, the condition and status penalty calculations are combined into one calculation. The multiple dissimilarity penalties can then be combined to generate an overall dissimilarity penalty for each alternative item. In some embodiments, one or more of the dissimilarity penalties are normalized before being combined, for example, by using a Mahalanobis distance calculation. In some embodiments, one or more of the dissimilarity penalties are normalized by being converted to a probability or log(probability) before being combined. In other embodiments, various other methods of normalization may be used. In some embodiments, different weights are applied to each dissimilarity penalty, for example, by multiplying one or more raw dissimilarity penalties or normalized dissimilarity penalties by a predetermined factor, to result in some penalties being weighted higher than others in the combination process.
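A minimal sketch of the combination step described above, assuming the individual penalties have already been normalized; the equal weights and the conversion of the collaborative-filter probability to −log(probability) are illustrative assumptions, not requirements of the system.

```python
import math

def combine_penalties(prototype_probability, customization_penalty,
                      condition_penalty, status_penalty,
                      weights=(1.0, 1.0, 1.0, 1.0)):
    """Combine individual penalties into a single dissimilarity penalty.

    The collaborative-filter output is a probability (high value = relevant),
    so it is normalized here by converting it to -log(probability) so that,
    like the other penalties, a low value implies high relevance.
    """
    prototype_penalty = -math.log(max(prototype_probability, 1e-9))
    parts = (prototype_penalty, customization_penalty,
             condition_penalty, status_penalty)
    return sum(w * p for w, p in zip(weights, parts))
```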
A collaborative filter of a recommendation system can be configured to estimate a probability that a user will be interested in a candidate or alternative listing given the user's preferences and the prototypes of the base and alternative items. The user's preferences may include, for example, a history of other prototypes the user has been exposed to and labels of which prototypes are relevant. In some embodiments, the probability is assumed to be the parameter of a binomial distribution whose variance is then used with the probability to compute a Mahalanobis distance for the candidate. This Mahalanobis distance provides a nonlinear mapping that converts a probability (i.e., a high value implies high relevance) to a penalty (i.e., a low value implies high relevance) commensurate with the other Mahalanobis distance calculations.
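One plausible reading of that mapping, offered only as an assumption-laden sketch: treat the estimated probability as the parameter of a binomial distribution with variance p(1 − p), and measure how far the probability falls from the ideal value of one in units of the resulting standard deviation.

```python
import math

def prototype_penalty(probability, eps=1e-9):
    """Convert a collaborative-filter probability (high = relevant) into a
    penalty (low = relevant) using the binomial variance p * (1 - p).

    This is a one-dimensional Mahalanobis-style distance of p from the ideal
    probability of 1.0: p near 1 yields a penalty near 0, while p near 0
    yields a large penalty.
    """
    p = min(max(probability, eps), 1.0 - eps)
    variance = p * (1.0 - p)
    return abs(1.0 - p) / math.sqrt(variance)
```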
A customization filter of a recommendation system can be configured to calculate a penalty derived from a model that predicts the preference impact of different customization options. The predicted preference impact may comprise a predicted impact on price and/or various other criteria or attributes that may affect a user's preference for one item as compared to another item. For example, for a pair of vehicles, their customization penalty can be computed as the Mahalanobis distance from the origin for a single vector with nonzero elements that contains the price impact of options present on one vehicle but not the other. A condition filter of a recommendation system can be configured to calculate a condition penalty for a pair of vehicles by computing the Mahalanobis distance on vectors describing various condition attributes, such as year, mileage, and price. A status filter of a recommendation system can be configured to similarly calculate a status penalty that takes into account geographic location, whether the vehicle is being offered for sale by a dealer or a private seller, and other marketplace status attributes. In some embodiments, the marketplace status attributes are combined with the condition attributes to calculate a single Mahalanobis distance incorporating both condition and status attributes.
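A compact sketch of the customization and condition penalty calculations described in this paragraph, assuming numpy vectors and inverse covariance matrices supplied by the factor databases; the function names and vector layouts are illustrative only.

```python
import numpy as np

def customization_penalty(option_price_impacts, inv_cov):
    """Mahalanobis distance from the origin for a vector whose nonzero
    elements are the price impacts of options present on one vehicle but
    not on the other."""
    v = np.asarray(option_price_impacts, dtype=float)
    return float(np.sqrt(v @ inv_cov @ v))

def condition_penalty(base_condition, candidate_condition, inv_cov):
    """Mahalanobis distance between vectors of condition (and, optionally,
    status) attributes such as year, mileage, and price."""
    diff = (np.asarray(base_condition, dtype=float)
            - np.asarray(candidate_condition, dtype=float))
    return float(np.sqrt(diff @ inv_cov @ diff))
```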
The multiple Mahalanobis distance calculations used in various systems as described herein can be configured to utilize inverse covariance matrices that can be created using training data and a training system. The training data may be, for example, collected from logs of user behavior on various websites that display cars for sale to consumers. In some embodiments, when a consumer expresses interest in a pair of vehicles, that interest is interpreted as a signal of the relevance of the two vehicle prototypes and can be utilized to generate training vectors describing the differences in customization, condition, and/or status for the two vehicles.
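A sketch of how such an inverse covariance matrix might be estimated from the training vectors described above; the pooling of difference vectors from co-viewed pairs and the small regularization term are illustrative assumptions.

```python
import numpy as np

def train_inverse_covariance(pair_difference_vectors):
    """Estimate an inverse covariance matrix from training vectors.

    Each row of pair_difference_vectors is assumed to describe the differences
    in customization, condition, or status attributes between a pair of
    listings that the same user expressed interest in (a relevance signal).
    """
    X = np.asarray(pair_difference_vectors, dtype=float)
    S = np.cov(X, rowvar=False)
    # Small regularization keeps the matrix invertible with sparse training data.
    S += 1e-6 * np.eye(S.shape[0])
    return np.linalg.inv(S)
```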
The user access point system 100 comprises filters 108 to allow the user to further filter the displayed alternate items 104. For example, a user may indicate that the user is only interested in items that are a certain make and model, within a certain price range, within a certain mileage range, etc. The user access point system 100 can also be configured to enable a user to search for a specific vehicle or listing using the search box 106. For example, a user can insert a VIN number or stock number in the search box 106 and search for that specific item. In some embodiments, the user access point system 100 is configured to allow a user to select an alternate item 104. When the user selects the alternate item 104, the user access point system 100 can be configured to display additional information about that item to the user. In addition, the user access point system can be configured to send an indication to a recommendation system that the user is interested in that particular alternate item 104. The recommendation system can be configured to then develop a sorted listing of recommended alternates to the selected alternate item 104 and send the sorted list to the user access point system 100 for display to the user.
The recommendation system 202 comprises a recommendation compilation engine 210, a penalty computation engine 212, a training engine 214, and multiple databases. The databases of the recommendation system 202 comprise a training data database 220, an inventory or items information database 222, a prototype factor database 224, a customization factor database 225, a condition factor database 226, and a status factor database 227. The inventory or items information database 222 can be configured to hold or store data related to or describing various items currently on the market for sale. In various embodiments, the items information database 222 can be configured to store data related to a relatively large number of items, such as 1,000, 10,000, 100,000, 1,000,000, or more. For example, the items information database 222 can contain information describing various used vehicles for sale including all relevant information of those items, such as make, model, year, price, condition, marketplace status, etc. In some embodiments, the items information database 222 can be configured to store data related to 18,000,000 or more items. The training data database 220 can be configured to contain training data used by the training engine 214 to generate various factors and/or inverse covariance matrices stored in the various factor databases for use by the penalty computation engine 212. The penalty computation engine 212 can be configured to communicate with the various factor databases to compute various penalties and a final dissimilarity penalty for each comparison of a base item or items to a potential alternate item.
The recommendation compilation engine 210 can be configured to instruct the penalty computation engine 212 to perform comparisons of a base item or items to a potential alternate item. The recommendation compilation engine 210 can be configured to keep track of the dissimilarity penalty computed by the penalty computation engine 212 for each potential alternate item versus the base item or items. The recommendation compilation engine 210 can instruct the penalty computation engine 212 to perform penalty computations repeatedly with respect to a single base item or group of base items and various alternate items. Once the penalty computation engine 212 has calculated dissimilarity penalties for a sufficient number of potential alternate items, the recommendation compilation engine 210 can be configured to sort the various potential alternate items based on their respective dissimilarity penalties calculated by the penalty computation engine 212. The recommendation compilation engine 210 can be configured to then send the sorted list of potential alternate items through the network 204 to one or more user access point systems 100 for display to a user or users.
The penalty computation engine 212 comprises various modules for use in computing a dissimilarity penalty. The various modules comprise a decomposition filter 230, a collaborative filter 231, a customization filter 232, a condition filter 233, a status filter 234, a normalization filter 235, and a dissimilarity penalty calculator 236. Some or all of the modules of the penalty computation engine 212 can be configured to perform discrete portions of the dissimilarity penalty computation process, as further shown and described in
In operation, the penalty computation engine 212 is configured to receive information describing a base item or group of base items and an alternative item from the recommendation compilation engine 210 and/or items information database 222. The penalty computation engine 212 is configured to then use its various modules to calculate a dissimilarity penalty and send the calculated dissimilarity penalty back to the recommendation compilation engine 210. Some or all of the various modules of the penalty computation engine 212 can be configured to communicate with the various factor databases of the recommendation system 202 to receive factors or inverse covariance matrices used in penalty calculations.
In the embodiment shown in
The training engine 214 comprises several modules used to generate factors and/or inverse covariance matrices to include in the various factor databases. The training engine 214 comprises a decomposition training generator 240, a collaborative filter training generator 241, a customization training generator 242, a condition training generator 243, and a status training generator 244. The various modules of the training engine 214 can be configured to calculate various factors based on data from the training data database 220 as shown and described in
In some embodiments, the recommendation system 202 can be incorporated into one or more user access point systems 100. In those embodiments, the user makes selections using a user access point system 100, and the user access point system 100 is configured to generate recommendations without having to contact a remote recommendation system over a network. In some embodiments, the recommendation system 202 can be incorporated into the one or more user access point systems 100, but the user access point systems 100 can additionally be configured to access a remote system to update one or more of the recommendation system's various databases or configuration parameters. In some embodiments, modules of the recommendation system 202 are separated into separate systems rather than all being part of the same recommendation system 202. For example, one system can be configured to generate training data and generate data for the various factor databases, while another system can be configured to generate recommendations. In various embodiments, additional systems can be included, for example, to generate data to fill the items information database 222 and/or the training data database 220. In some embodiments, the recommendation system 202 comprises an administration module configured to allow a system administrator to administer various settings of the system, such as to adjust relative weights applied in the normalization filter 235 and to adjust various other configuration settings.
In some embodiments, the recommendation system 202 comprises a training data collection engine 250 configured to collect training data by monitoring user interactions with various listings of unique items. The training data collection engine 250 may, in some embodiments, comprise one or more item listing systems, such as automotive websites. These item listing systems may, for example, list a plurality of used vehicles for sale and allow users of the systems to interact with those listings. Users may, for example, interact with the listings by clicking on certain listings, comparing listings to each other, purchasing an item associated with a listing, expressing interest in one or more listings, and/or the like. The training data collection engine 250 can be configured to collect training data and store the training data in the training data database 220 for use by the training engine 214 to generate the various factors utilized by the penalty computation engine 212. The training data collection engine 250 can be configured to operate, in some embodiments, as shown and described below with reference to
In some embodiments, the training data collection engine 250 operates substantially in real time by logging user interactions with various unique items as the users are interacting with the listings of these unique items. One or more computer systems are necessary for the training data collection process due at least in part to the volume of information required to be collected to enable the training engine 214 to generate useful factors for use by the penalty computation engine 212. A human would not realistically be able to monitor one or more or a multitude of item listing systems substantially in real time, as numerous users are simultaneously interacting with listings on these systems. In some embodiments, the training data collection engine 250 may comprise 5, 10, 50, 100 or more item listing services or systems that all need to be monitored substantially in real time and substantially simultaneously. In some embodiments, each of the item listing systems may have 5, 10, 50, 100, 1000 or more users using the listing system substantially simultaneously, adding to the need for at least one computer system to monitor the interactions of users with listings.
In some embodiments, other portions of the recommendation system 202 also operate substantially in real time. For example, when a user of the recommendation system 202 selects an item the user is interested in, such as by using the user access point system 100, the user access point system 100 is configured to send data relating to the selected item to the recommendation system 202 through the network 204. The user of the user access point system 100 will expect a response from the recommendation system 202 in a relatively short amount of time. The user may, for example, expect a recommendation of alternative items from the recommendation system in merely the length of time a webpage takes to load. In some instances, the time available to generate a recommendation based on a selected item may comprise a few seconds or even less time, such as less than one second. Therefore, a recommendation system configured to generate a recommendation based on a selected item requires at least one computer system configured to generate the recommendation substantially in real time. A human would not be able to decompose the selected item and alternative items into their various attributes, calculate various scores for each alternative item, calculate a dissimilarity penalty for each alternative item, sort the alternative items by their respective dissimilarity penalties, and present a recommendation comprising at least some of the alternative items to a user all in a matter of seconds or even less time. Rather, if a human were even able to perform these tasks, the human would spend several orders of magnitude more time on the process, which would be unacceptable to most users of such a system.
Not only are one or more computer systems and/or computer hardware required to operate the training data collection engine 250 and/or other portions of the recommendation system 202 to allow the system to operate at an acceptable speed, but a human would not even be able to perform at least some of the operations performed by the recommendation system 202. For example, the training data collection engine 250 in some embodiments requires simultaneous monitoring of multiple item listing services generating websites for display to a multitude of users. A human being would not be able to realistically monitor all of these interactions without the assistance of a computer system. With respect to other portions of the recommendation system 202, various calculations take place that would be extremely complex for a human to do without the assistance of a computer system. Some examples are the Mahalanobis distance calculations, covariance matrix calculations, regression calculations, and various other complex calculations required in some embodiments by the recommendation system 202.
Additionally, when generating a recommendation, a multitude of variables must be tracked for each alternative item, and in some embodiments a relatively large number of alternative items is considered. For example, the recommendation compilation engine 210 may take into account 10, 50, 100, 1000, 10,000, or more alternative items in the calculation of one recommendation to present to a user. In addition to the amount of time it would take a human to perform such calculations, it would be difficult, if not impossible, for a human to keep track of all of the variables and values required to be calculated for each of the alternative items when generating a single recommendation. Additionally, even if only a few alternative items were being considered, the various factors used to calculate dissimilarity penalties, such as prototype, customization, condition, and status factors, must also be managed. A human would not realistically be able to manage all of these factors in addition to calculating the various scores and dissimilarity penalties. Therefore, it can be seen that the operation of a recommendation system as described herein necessitates the use of computer hardware and/or at least one computer system.
At block 306 a dissimilarity penalty is calculated for each candidate of the candidate vehicle listings as compared to the base vehicle listing. For example, the recommendation compilation engine 210 of the recommendation system 202 can be configured to instruct the penalty computation engine 212 to calculate a dissimilarity penalty of the base vehicle listing versus an individual candidate vehicle listing. The recommendation compilation engine 210 can be configured to instruct the penalty computation engine 212 to repeat this process for each candidate vehicle in the candidate vehicle listings. In some embodiments, the candidate vehicle listings comprise 10, 100, 1000, 10,000 or more candidate vehicle listings. In some embodiments, the process performed at block 306 is performed substantially in real time.
At block 308 the candidate vehicle listings are sorted based on their respective dissimilarity penalties. For example, the recommendation compilation engine 210 of the recommendation system 202 can be configured to sort the candidate vehicles in the candidate vehicle listings based on the dissimilarity penalties calculated by the penalty computation engine 212. In some embodiments, the system can also be configured to eliminate certain candidate vehicle listings from the overall set of candidate vehicle listings at block 308. For example, if a candidate vehicle listing has a dissimilarity penalty exceeding a certain value or relative value, that listing may be eliminated. In another example, the system is configured to only include a predetermined number of candidate vehicle listings, such as five or ten, and only that number of listings having the lowest dissimilarity penalties are retained, with the remaining listings being discarded. At block 310 the sorted listing of the candidate or alternate vehicle listings is provided to a user. For example, the sorted list can be sent from the recommendation system 202 through the network 204 to one or more user access point systems 100 for viewing by the user.
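A short sketch of the sorting and truncation performed at blocks 308 and 310, assuming the dissimilarity penalties have already been computed; the threshold and result count are illustrative defaults rather than values required by the system.

```python
def rank_candidates(candidate_listings, penalties, max_results=10, max_penalty=1000.0):
    """Sort candidate listings by ascending dissimilarity penalty, discard
    listings whose penalty exceeds a threshold, and keep only the top results."""
    kept = [(p, c) for c, p in zip(candidate_listings, penalties) if p <= max_penalty]
    kept.sort(key=lambda pair: pair[0])
    return [c for _, c in kept[:max_results]]
```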
At block 404 a collaborative filter receives a user's preferences and compares the prototypes of items 1 and 2 to generate a score indicating the probability that a user will be interested in item 2. This process may be performed by, for example, the collaborative filter 231 of the recommendation system 202 shown in
At block 406 a customization penalty or score is calculated based on a model that predicts the preference impact of different customization options of the two items or vehicles. In some embodiments, the predicted preference impact is a predicted price impact. In other embodiments, the predicted preference impact may be a predicted impact on additional and/or other criteria or attributes that may affect a user's preference of one item over another. As described above, the penalty can be calculated, for example, as the Mahalanobis distance from the origin for a single vector with nonzero elements that contains the price impacts of options present on one vehicle but not the other vehicle. This penalty or score is output as a score at block 416. The process performed at block 406 can be performed by, for example, the customization filter 232 of the recommendation system 202 shown in
At block 408 a condition and status penalty for the pair of vehicles is computed, as described above, as the Mahalanobis distance between vectors describing condition and/or market attributes such as year, mileage, price, geographic location, etc., for items 1 and 2. This condition penalty is output as a score at block 418. The process performed at block 408 can be performed by, for example, the condition filter 233 and/or the status filter 234 of the recommendation system 202 shown in
At blocks 420, 422, and 424, one or more of the various penalties or scores output from the prototype, customization, and condition and/or status calculations are normalized prior to being combined. The normalization can be performed by, for example, the normalization filter 235 of the recommendation system 202 shown in
At block 430 the three scores or penalties are combined producing a final dissimilarity penalty for item 2 as compared to item 1. The dissimilarity penalty may be calculated by, for example, the dissimilarity penalty calculator 236 of the recommendation system 202 shown in
Section 604 illustrates an example of calculating customization penalties between a base vehicle and a first candidate and a second candidate. This is an example of the calculations performed at blocks 406, 416, and 422 of
When performing the customization penalty calculation a second time for the base vehicle versus candidate 2, the penalty comes out differently. In this case, both vehicles have an 8 cylinder engine, so there is no penalty for that option. However, the base vehicle has keyless entry, which candidate 2 does not. The price difference for the keyless entry option is estimated to be $295. Neither of these vehicles has the flexible fuel option, so there is no penalty for that option. Candidate 2 does have the four wheel drive option that the base vehicle does not have. Therefore, an estimated $3,154 price difference is indicated. In this case, the total estimated price difference is $3,449, leading to a normalized customization penalty of 11.90.
Although the customization penalty in this embodiment is proportional to the estimated value of each customization option, various other embodiments may calculate the penalty in various other manners. In some embodiments, the training engine 214 of the recommendation system 202 may additionally include one or more modules configured to analyze data from the training data database 220 to determine the estimated values of various customization options. These estimated values may then be used during the customization penalty calculation process by, for example, the customization filter 232.
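To make the arithmetic of section 604 concrete, the candidate 2 comparison can be reproduced as below; the option values come from the example above, while the final scaling constant is a stand-in assumption (the actual system uses a Mahalanobis distance with a trained covariance) chosen only so the result lands near the quoted 11.90.

```python
# Estimated price impacts of options present on one vehicle but not the other,
# taken from the example above (base vehicle versus candidate 2).
option_price_differences = {
    "keyless entry (base vehicle only)": 295.0,
    "four wheel drive (candidate 2 only)": 3154.0,
}
total_price_difference = sum(option_price_differences.values())  # 3449.0

# Stand-in normalization: a purely illustrative scale factor so the penalty is
# comparable to the other normalized penalties in the example.
ASSUMED_SCALE = 290.0
customization_penalty = total_price_difference / ASSUMED_SCALE   # about 11.9
```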
Section 606 illustrates an example of calculating a condition penalty comparing a base vehicle to a first and a second candidate. In this example, the base vehicle is a 2010 model year with 46,006 miles and is listed for a price of $24,995.00. Candidate 1 is a 2012 model year, two years newer than the base vehicle, which generates a normalized penalty of 1.83. Candidate 1 has less mileage than the base vehicle, which generates a normalized penalty of 0.77. Candidate 1 is listed for the same price as the base vehicle; therefore, there is no penalty. The total penalty is calculated by adding the various individual penalties. In this case, the total normalized condition penalty is 2.60. Although in this example the condition penalty calculation takes into account three criteria or attributes that may affect a user's preference for one item as compared to another item, namely year, mileage, and price, in other embodiments, the condition penalty calculation process may take into account more, fewer, or different criteria or attributes.
For candidate 2 versus the base vehicle, candidate 2 is a model year 2007, creating a normalized penalty of 4.11. The mileage and price differences generate normalized penalties of 4.77 and 5.41, respectively. Therefore, the total normalized condition penalty for candidate 2 is 14.29. The various penalty values calculated in section 606 can be calculated, for example, as squared Mahalanobis distances, as described above.
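The condition totals in section 606 are simply the sums of the per-attribute normalized penalties; a brief check of the quoted figures (the per-attribute values are those given above, not outputs of this snippet):

```python
candidate_1_penalties = {"year": 1.83, "mileage": 0.77, "price": 0.00}
candidate_2_penalties = {"year": 4.11, "mileage": 4.77, "price": 5.41}

total_candidate_1 = sum(candidate_1_penalties.values())  # 2.60
total_candidate_2 = sum(candidate_2_penalties.values())  # 14.29
```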
At section 608 the final dissimilarity penalty is calculated for each candidate item. For candidate 1, the dissimilarity penalty is calculated as 11.53, which is the sum of the prototype, customization, and condition penalties. Similarly, the dissimilarity penalty for candidate 2 is calculated as 42.19. Therefore, in this case, if both candidates 1 and 2 were to be displayed to a user as potential alternates to the base vehicle, candidate 1, having the lower dissimilarity penalty, would be displayed prior to candidate 2 in a list sorted on the dissimilarity penalties. Although not shown in
At block 704 the various prototypes viewed by a user are provided. At block 710 a collaborative filter is trained. For example, the process reviews the various pairs of prototypes observed by a user and generates training vectors or factors to be stored at block 712 in, for example, the prototype factor database 224 shown in
At block 706 the various customizations of the vehicle listings observed by the user are provided. In embodiments that utilize an estimated price impact as the criterion or as one of multiple criteria for estimating a preference impact of various customization attributes, the process may additionally provide estimated values of all model-specific price driver attributes for each listing (i.e., the value of each individual customization option, as discussed above with reference to
At block 708 various condition factors of the listings observed by the users are provided. For example, the year, mileage, price, etc., of the listings observed by the users are provided. At block 718 the process computes a relative change from a user-specific mean for the various condition factors. At block 720 the data from multiple users is combined and a covariance matrix is produced. The process at blocks 718 and 720 can be performed by, for example, the condition training generator 243 shown in
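A sketch of the computation at blocks 718 and 720, assuming each user's viewed listings are available as rows of condition attributes; the pooling of per-user deviations and the numpy usage are illustrative assumptions.

```python
import numpy as np

def condition_covariance(per_user_condition_vectors):
    """Combine condition data from multiple users into a covariance matrix.

    per_user_condition_vectors is assumed to be a list in which each element
    holds the (year, mileage, price, ...) rows for the listings one user
    viewed.  Each user's rows are first expressed as relative changes from
    that user's own mean, and the deviations from all users are then pooled.
    """
    pooled = []
    for rows in per_user_condition_vectors:
        X = np.asarray(rows, dtype=float)
        pooled.append(X - X.mean(axis=0))  # relative change from the user-specific mean
    combined = np.vstack(pooled)
    return np.cov(combined, rowvar=False)
```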
At block 806 the recommendation system receives details of the selected item. For example, the recommendation system 202 may receive details of various attributes of the item selected at block 804. At block 808 the recommendation system receives alternative item details. For example, the recommendation system may receive details of various attributes of a plurality of alternative items. The attributes received at blocks 806 and 808 may comprise, for example, attributes defining items' prototype, customization, condition, and/or status attributes.
At block 810 a dissimilarity penalty calculator generates a dissimilarity penalty for each alternative item. In some embodiments, the dissimilarity penalty calculator generates the dissimilarity penalties as shown and described with reference to
At block 814, the recommendation compilation engine presents the recommendation. For example, the recommendation compilation engine may transmit data representing the recommendation through a network. At block 816, the user and/or requesting system displays the presentation and/or forwards the presentation to another system. For example, the display interface 208 of the user access point system 100 may display the recommendation to a user using an electronic display. In another example, a requesting system transfers the presentation to another system through a network to allow the other system to present or otherwise utilize the recommendation.
At block 908 the recommendation system receives the data indicating the selected item. The recommendation system may, for example, receive the data indicating the selected item through a network. At block 910, the recommendation system determines whether attributes of the selected item need to be retrieved. For example, the selected item may have prototype, customization, condition, and/or status attributes. In some embodiments, the user and/or requesting system may include these various attributes in the data sent to the recommendation system. In that case, additional attributes may not need to be retrieved by the recommendation system. In other embodiments, the user and/or requesting system may send a unique identifier to the recommendation system, wherein the unique identifier identifies the selected item. In that case, the process moves to block 912 and retrieves the attributes from an items information database shown at block 914. The recommendation system can, for example, utilize the unique identifier to retrieve attribute data related to the selected item, the attribute data being stored in the items information database and linked to the unique identifier.
If attributes do not need to be retrieved, or once the attributes have been retrieved, the process moves to block 916. At block 916 a decomposition filter decomposes the selected item. This may be performed by, for example, the decomposition filter 230 shown in
At block 918 the recommendation compilation engine determines a number of alternative items to analyze. The number of alternative items may be defined by, for example, a variable set by an administrator. In other embodiments, the number of alternative items to analyze may depend on the number of alternative items available for analysis in the items information database or on data received from the user and/or requesting system. In some embodiments, the recommendation system determines to analyze every alternative item in the items information database. In other embodiments, the system determines to analyze only a subset of items in the items information database.
At block 920 a penalty computation engine retrieves data related to one alternative item. This block may be performed by, for example, the penalty computation engine 212 shown in
At blocks 924 through 942, the recommendation system calculates a dissimilarity penalty for the one alternative item. At block 924, a collaborative filter calculates a probability score. The probability score may be calculated as described above with reference to
At block 932, a condition filter calculates a condition score for the alternative item. The condition score may be calculated as shown and described above with reference to
At block 940 a normalization filter normalizes the various scores. Although, in this embodiment, the normalization of the four scores takes place after all the scores have been calculated, in some embodiments, normalization of one or more of the scores is a part of the process of calculating that score. In those embodiments, one or more scores do not need to be normalized at block 940, because they were already normalized during their calculation. At block 942, a dissimilarity penalty calculator generates a dissimilarity penalty. The dissimilarity penalty may be calculated based on the various scores as shown and described with reference to
At block 944, the recommendation system determines whether there are more alternative items to analyze. If there are more alternative items to analyze, the process moves back to block 920 and proceeds as previously described. Once all of the alternative items have been analyzed, the process flow moves from block 944 to block 946. At block 946, the recommendation compilation engine sorts the alternative items based on their dissimilarity penalties. In some embodiments, the recommendation compilation engine sorts the alternative items in ascending order based on the dissimilarity penalties. At block 948, the recommendation compilation engine determines a number of alternative items to present. In some embodiments, an administrator of the recommendation system may pre-determine the number of alternative items to present in the recommendation. In other embodiments, the number of alternative items to present may be determined by information transmitted to the recommendation system from the user and/or requesting system.
At block 950, the recommendation compilation engine generates a recommendation. The recommendation may comprise, for example, the first ten alternative items in the sorted alternative item list, if the determined number of alternative items to present was ten. At block 952, the recommendation compilation engine presents the recommendation. For example, the recommendation compilation engine may transmit data representing the recommendation through a network to the user and/or requesting system. At block 954 the user and/or requesting system receives the presentation from the recommendation system. At block 956, the user and/or requesting system displays the presentation and/or forwards the presentation to another system. For example, the user access point system 100 may utilize the display interface 208 to display the presented recommendation to a user on an electronic display. In another example, the system may forward the presentation on to another system for that system's use.
At block 1008, a training engine extracts all pairs of listings viewed by a single user. For example, the training engine 214 shown in
At block 1012, the training engine determines whether there is data for additional users. If there is additional user data, the process flow moves back to block 1008 and extracts all pairs of listings viewed by the next user. This process continues until there are no additional users to extract data relating to. Once there is no additional user data, the process moves from block 1012 to blocks 1014, 1022, 1032, and/or 1044.
Beginning at blocks 1014, 1022, 1032, and 1044, the training engine generates factors for use in computing dissimilarity penalties as further described above with reference to
At block 1022, the training engine provides status attributes to a status training generator. The status attributes may be, for example, status attributes determined during the decomposition at block 1010. At block 1024, the status training generator combines status data from multiple users. At block 1026, the status training generator computes a covariance matrix based on the combined status data from multiple users, as further described above with reference to
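By way of illustration, the following sketch shows one way blocks 1024 and 1026 might pool the status data and compute the covariance matrix, assuming each observation is an equal-length numeric vector of status attributes; the observation unit, the input format, and the names are assumptions.

```python
import numpy as np

def status_covariance(status_vectors):
    """Pool status-attribute vectors gathered from many users (block 1024)
    and compute their covariance matrix (block 1026).

    `status_vectors` is an iterable of equal-length numeric vectors, with one
    vector per observation and one entry per status attribute.
    """
    data = np.asarray(list(status_vectors), dtype=float)
    # rowvar=False treats each column as one status attribute.
    return np.cov(data, rowvar=False)
```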
At block 1032, the training engine provides customization attributes to a customization training generator. The customization attributes may be, for example, the customization attributes determined during the decomposition at block 1010. At block 1034, the customization training generator determines values for all price driver attributes. These values may be determined, for example, as described above with reference to
At block 1044, the training engine provides condition attributes to a condition training generator. These condition attributes may be, for example, the condition attributes determined during the decomposition at block 1010. At block 1046, the condition training generator determines user specific mean values of the various condition attributes. The user specific mean values may be populated in real time and/or retrieved from a database of user specific mean values. At block 1048, the condition training generator computes relative changes from the user specific mean values. For example, the condition training generator computes variations of the condition attributes provided at block 1044 relative to the user specific mean values determined at block 1046. At block 1050, the condition training generator combines condition data from multiple users. At block 1052, the condition training generator computes a covariance matrix based on the condition data from multiple users. At block 1054, the condition training generator stores the covariance values from the covariance matrix in, for example, a condition factor database shown at block 1056.
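For illustration only, a Python sketch of blocks 1046 through 1052, assuming each user's condition data is available as a matrix with one row per observation and one column per condition attribute; the input format and names are assumptions, and the database write of block 1054 is omitted.

```python
import numpy as np

def condition_covariance(condition_by_user):
    """Center each user's condition attributes on that user's own mean values
    (blocks 1046 and 1048), pool the centered data across users (block 1050),
    and compute the covariance matrix of the pooled relative changes
    (block 1052).

    `condition_by_user` maps a user identifier to a 2-D array-like of shape
    (observations, condition attributes).
    """
    centered = []
    for user_id, rows in condition_by_user.items():
        rows = np.asarray(rows, dtype=float)
        user_means = rows.mean(axis=0)       # user specific mean values
        centered.append(rows - user_means)   # relative changes from the means
    pooled = np.vstack(centered)             # combined data from multiple users
    return np.cov(pooled, rowvar=False)      # covariance matrix
```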
Computing System
In some embodiments, the computer clients and/or servers described above take the form of a computing system 1100 illustrated in
Recommendation System Module
In one embodiment, the computing system 1100 comprises a recommendation system module 1106 that carries out the functions described herein with reference to generating recommendations of unique items, including any one of the recommendation techniques described above. In some embodiments, the computing system 1100 additionally comprises a training engine, decomposition training generator, collaborative filter training generator, customization training generator, condition training generator, status training generator, recommendation compilation engine, penalty computation engine, decomposition filter, collaborative filter, customization filter, condition filter, status filter, normalization filter, dissimilarity penalty calculator, user access point system module, item selection receiver, and/or display interface that carries out the functions described herein with reference to generating recommendations of unique items. The recommendation system module 1106 and/or other modules may be executed on the computing system 1100 by a central processing unit 1102 discussed further below.
In general, the word “module,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, COBOL, CICS, Java, Lua, C, or C++. A software module may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software modules may be callable from other modules or from themselves, and/or may be invoked in response to detected events or interrupts. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors. The modules described herein are preferably implemented as software modules, but may be represented in hardware or firmware. Generally, the modules described herein refer to logical modules that may be combined with other modules or divided into sub-modules despite their physical organization or storage.
Computing System Components
In one embodiment, the computing system 1100 also comprises a mainframe computer suitable for controlling and/or communicating with large databases, performing high volume transaction processing, and generating reports from large databases. The computing system 1100 also comprises a central processing unit (“CPU”) 1102, which may comprise a conventional microprocessor. The computing system 1100 further comprises a memory 1104, such as random access memory (“RAM”) for temporary storage of information and/or a read only memory (“ROM”) for permanent storage of information, and a mass storage device 1108, such as a hard drive, diskette, or optical media storage device. Typically, the modules of the computing system 1100 are connected to the computer using a standards-based bus system. In different embodiments, the standards-based bus system could be Peripheral Component Interconnect (PCI), Microchannel, SCSI, Industry Standard Architecture (ISA) and Extended ISA (EISA) architectures, for example.
The computing system 1100 comprises one or more commonly available input/output (I/O) devices and interfaces 1112, such as a keyboard, mouse, touchpad, and printer. In one embodiment, the I/O devices and interfaces 1112 comprise one or more display devices, such as a monitor, that allow the visual presentation of data to a user. More particularly, a display device provides for the presentation of GUIs, application software data, and multimedia presentations, for example. In one or more embodiments, the I/O devices and interfaces 1112 comprise a microphone and/or motion sensor that allow a user to generate input to the computing system 1100 using sounds, voice, motion, gestures, or the like. In the embodiment of
Computing System Device/Operating System
The computing system 1100 may run on a variety of computing devices, such as, for example, a server, a Windows server, a Structured Query Language server, a Unix server, a personal computer, a mainframe computer, a laptop computer, a tablet computer, a cell phone, a smartphone, a personal digital assistant, a kiosk, an audio player, an e-reader device, and so forth. The computing system 1100 is generally controlled and coordinated by operating system software, such as z/OS, Windows 95, Windows 98, Windows NT, Windows 2000, Windows XP, Windows Vista, Windows 7, Windows 8, Linux, BSD, SunOS, Solaris, Android, iOS, BlackBerry OS, or other compatible operating systems. On Macintosh systems, the operating system may be any available operating system, such as MAC OS X. In other embodiments, the computing system 1100 may be controlled by a proprietary operating system. Conventional operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide a user interface, such as a graphical user interface (“GUI”), among other things.
Network
In the embodiment of
Access to the recommendation system module 1106 of the computer system 1100 by computing systems 1117 and/or by data sources 1119 may be through a web-enabled user access point such as the computing systems' 1117 or data source's 1119 personal computer, cellular phone, smartphone, laptop, tablet computer, e-reader device, audio player, or other device capable of connecting to the network 1116. Such a device may have a browser module that uses text, graphics, audio, video, and other media to present data and to allow interaction with data via the network 1116.
The browser module may be implemented as a combination of an all points addressable display such as a cathode-ray tube (CRT), a liquid crystal display (LCD), a plasma display, or other types and/or combinations of displays. In addition, the browser module may be implemented to communicate with input devices 1112 and may also comprise software with the appropriate interfaces which allow a user to access data through the use of stylized screen elements such as, for example, menus, windows, dialog boxes, toolbars, and controls (for example, radio buttons, check boxes, sliding scales, and so forth). Furthermore, the browser module may communicate with a set of input and output devices to receive signals from the user.
The input device(s) may comprise a keyboard, roller ball, pen and stylus, mouse, trackball, voice recognition system, or pre-designated switches or buttons. The output device(s) may comprise a speaker, a display screen, a printer, or a voice synthesizer. In addition, a touch screen may act as a hybrid input/output device. In another embodiment, a user may interact with the system more directly, such as through a system terminal connected to the score generator without communications over the Internet, a WAN or LAN, or a similar network.
In some embodiments, the system 1100 may comprise a physical or logical connection established between a remote microprocessor and a mainframe host computer for the express purpose of uploading, downloading, or viewing interactive data and databases on-line in real time. The remote microprocessor may be operated by an entity operating the computer system 1100, including the client server systems or the main server system, and/or may be operated by one or more of the data sources 1119 and/or one or more of the computing systems 1117. In some embodiments, terminal emulation software may be used on the microprocessor for participating in the micro-mainframe link.
In some embodiments, computing systems 1117 that are internal to an entity operating the computer system 1100 may access the recommendation system module 1106 internally as an application or process run by the CPU 1102.
User Access Point
In an embodiment, a user access point or user interface comprises a personal computer, a laptop computer, a tablet computer, an e-reader device, a cellular phone, a smartphone, a GPS system, a Blackberry® device, a portable computing device, a server, a computer workstation, a local area network of individual computers, an interactive kiosk, a personal digital assistant, an interactive wireless communications device, a handheld computer, an embedded computing device, an audio player, or the like.
Other Systems
In addition to the systems that are illustrated in
Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The headings used herein are for the convenience of the reader only and are not meant to limit the scope of the inventions or claims.
Although this invention has been disclosed in the context of certain preferred embodiments and examples, it will be understood by those skilled in the art that the present invention extends beyond the specifically disclosed embodiments to other alternative embodiments and/or uses of the invention and obvious modifications and equivalents thereof. Additionally, the skilled artisan will recognize that any of the above-described methods can be carried out using any appropriate apparatus. Further, the disclosure herein of any particular feature, aspect, method, property, characteristic, quality, attribute, element, or the like in connection with an embodiment can be used in all other embodiments set forth herein. For all of the embodiments described herein, the steps of the methods need not be performed sequentially. Thus, it is intended that the scope of the present invention herein disclosed should not be limited by the particular disclosed embodiments described above.
This application is a continuation of U.S. patent application Ser. No. 15/617,927, titled SYSTEMS, METHODS, AND DEVICES FOR MEASURING SIMILARITY OF AND GENERATING RECOMMENDATIONS FOR UNIQUE ITEMS, filed on Jun. 8, 2017, issued as U.S. Pat. No. 10,643,265, which is a continuation of U.S. patent application Ser. No. 15/076,468, titled SYSTEMS, METHODS, AND DEVICES FOR MEASURING SIMILARITY OF AND GENERATING RECOMMENDATIONS FOR UNIQUE ITEMS, filed on Mar. 21, 2016, issued as U.S. Pat. No. 9,710,843, which is a continuation of U.S. patent application Ser. No. 14/790,552, titled SYSTEMS, METHODS, AND DEVICES FOR MEASURING SIMILARITY OF AND GENERATING RECOMMENDATIONS FOR UNIQUE ITEMS, filed on Jul. 2, 2015, issued as U.S. Pat. No. 9,324,104, which is a continuation of U.S. patent application Ser. No. 13/927,513, titled SYSTEMS, METHODS, AND DEVICES FOR MEASURING SIMILARITY OF AND GENERATING RECOMMENDATIONS FOR UNIQUE ITEMS, filed on Jun. 26, 2013, issued as U.S. Pat. No. 9,104,718, which claims the benefit of U.S. Provisional Application No. 61/774,325, titled SYSTEMS, METHODS, AND DEVICES FOR MEASURING SIMILARITY OF AND GENERATING RECOMMENDATIONS FOR UNIQUE ITEMS, filed on Mar. 7, 2013. Each of the foregoing applications is hereby incorporated by reference herein in its entirety.
Number | Date | Country
--- | --- | ---
61/774,325 | Mar. 2013 | US

Relation | Number | Date | Country
--- | --- | --- | ---
Parent | 15/617,927 | Jun. 2017 | US
Child | 16/836,608 | | US
Parent | 15/076,468 | Mar. 2016 | US
Child | 15/617,927 | | US
Parent | 14/790,552 | Jul. 2015 | US
Child | 15/076,468 | | US
Parent | 13/927,513 | Jun. 2013 | US
Child | 14/790,552 | | US