This invention is in the field of incorporating machine learning into large-scale systems through the logical separation of machine learning algorithms and models.
Automating customer care through self-service solutions (e.g., Interactive Voice Response (IVR), web-based self-care, etc.) results in substantial cost savings and operational efficiencies. However, due to several factors, such automated systems are often unable to provide customers with a quality experience. The present invention addresses some of the deficiencies experienced with presently existing automated care systems.
Machine learning is a field where various algorithms have been developed that can automatically learn from experience. The foundation of these algorithms is built on mathematics and statistics which can be employed to predict events, classify entities, diagnose problems and model function approximations, just to name a few examples. While there are various products available for incorporating machine learning into computerized systems, those products currently suffer from a variety of limitations. For example, they generally lack distributed processing capabilities, and rely heavily on batch and non-transactional data processing. The teachings and techniques of this application can be used to address one or more of the limitations of the prior art to improve the scalability of machine learning solutions.
As discussed herein, portions of this application could be implemented in a method of incorporating a machine learning solution, comprising a machine learning model and a machine learning algorithm, into a transaction processing system. Such a method might comprise the steps of: configuring an application interface component to access the functionality of the machine learning solution; configuring an algorithm management component to create, store, and provide access to instances of the machine learning algorithm according to requests received from the application interface component; and, configuring a model management component to create, store, version, synchronize, and provide access to instances of the machine learning model according to requests received from the application interface component. In some such implementations, the act of configuring the model management component comprises the steps of: implementing a synchronization policy for an implementation of the machine learning model according to a synchronization policy interface; implementing a persistence policy for the implementation of the machine learning model according to a persistence policy interface; and, implementing a versioning policy for the implementation of the machine learning model according to a versioning policy interface.
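To make the configuring steps above concrete, the following sketch shows one way the three policy interfaces and a model management component might be realized. All class, method, and variable names below are hypothetical illustrations, not identifiers drawn from the application itself:

```python
from abc import ABC, abstractmethod

# Hypothetical policy interfaces corresponding to the synchronization,
# persistence, and versioning policy interfaces described above.
class SynchronizationPolicy(ABC):
    @abstractmethod
    def synchronize(self, instances):
        """Bring the given model instances into agreement."""

class PersistencePolicy(ABC):
    @abstractmethod
    def persist(self, model):
        """Encode how, when, and where the model is stored."""

class VersioningPolicy(ABC):
    @abstractmethod
    def assign_version(self, model):
        """Assign a unique identifier to the model's current state."""

class ModelManagementComponent:
    """Creates, stores, versions, and provides access to model instances
    according to the three configured policies."""
    def __init__(self, sync_policy, persistence_policy, versioning_policy):
        self.sync_policy = sync_policy
        self.persistence_policy = persistence_policy
        self.versioning_policy = versioning_policy
        self._models = {}

    def create(self, name, model):
        self._models[name] = model
        self.versioning_policy.assign_version(model)
        self.persistence_policy.persist(model)
        return model

    def access(self, name):
        return self._models[name]

# Trivial concrete policies, for demonstration only.
class CountingVersioning(VersioningPolicy):
    def __init__(self):
        self.counter = 0
    def assign_version(self, model):
        self.counter += 1
        model["version"] = self.counter

class InMemoryPersistence(PersistencePolicy):
    def __init__(self):
        self.store = {}
    def persist(self, model):
        self.store[model["version"]] = dict(model)

class NoOpSynchronization(SynchronizationPolicy):
    def synchronize(self, instances):
        pass

manager = ModelManagementComponent(
    NoOpSynchronization(), InMemoryPersistence(), CountingVersioning())
model = manager.create("recommender", {"weights": [0.1, 0.2]})
```

In a deployed system, each policy implementation would of course encode real storage locations, version identifiers, and synchronization behavior rather than these in-memory stand-ins.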
Further, some embodiments might include a mechanism for recording all activities of one or more of the agent, the caller, and/or the automated care system. In some embodiments, such recorded information might be used to improve the quality of self-care applications. Some embodiments might capture transaction information and use it to learn, which might be accomplished with the help of machine learning software agents. This might allow the automated self-care system to improve its performance in the areas of user interface, speech, language and classification models, application logic, and/or other areas relevant to customer and/or agent interactions. In some embodiments, agent, IVR and/or caller actions (e.g., what was spoken, what control buttons were pressed, what text input was proffered) might be logged with relevant contextual information for further analysis and as input to an adaptive learning phase, where this data might be used to improve the self-care automation application.
Some embodiments of the invention of this application comprise a system and/or methods deployed comprising/using a computer memory containing one or more models utilized to process a customer interaction. In such embodiments, the system/method might further comprise/use one or more sets of computer executable instructions. For the sake of clarity, certain terms in the above paragraph should be understood to have particular meanings within the context of this application. For example, the term “computer executable instructions” should be understood to refer to data which can be used to specify physical or logical operations which can be performed by a computer. Similarly, a “computer” or a “computer system” should be understood to refer to any device or group of devices which is capable of performing one or more logical and/or physical operations on data to produce a result. “Data” should be understood to refer to information which is represented in a form which is capable of being processed, stored and/or transmitted. A “computer readable medium” should be understood to refer to any object, substance, or combination of objects or substances, capable of storing data or instructions in a form in which they can be retrieved and/or processed by a device. A “computer readable medium” should not be limited to any particular type or organization, and should be understood to include distributed and decentralized systems however they are physically or logically disposed, as well as storage objects of systems which are located in a defined and/or circumscribed physical and/or logical space. An “interface” should be understood to refer to a set of commands, formats, specifications and tools which are used by an entity presenting the interface to send and receive information.
For the purpose of clarity, certain terms used in the above description should be understood as having particular meanings within the technological context of this application. In that vein, the term “step” should be understood to refer to an action, measure, or process which might be taken to achieve a goal. It should further be understood that, unless an order is explicitly set forth as “necessary” through the use of that word, steps are not limited to being performed in the order in which they are presented, and can be performed in any order, or in parallel.
The verb “configure,” when used in the context of “configuring an application interface component,” or “configuring a model management component,” should be understood to refer to the act of designing, adapting or modifying the thing being configured for a specific purpose. For example, “configuring an algorithm management component to create, store, and provide access to instances of a machine learning algorithm” might comprise modifying an application interface component to contain references to specific storage devices or locations on identified computer readable media on which instances of machine learning algorithm will be stored for a particular implementation.
A “transaction processing system” should be understood to refer to a system which is operable to perform unitary tasks or interactions. For example, a transaction processing system which receives requests for recommendations, and is expected to provide the requested recommendations, would be able to service the requests in real time, without requiring suspension of the requesting process to accommodate off-line batch processing (e.g., overnight incorporation of learning events into a recommendation model).
Additionally, a “component” should be understood to refer to a constituent part of a larger entity (e.g., a system or an apparatus). One specific type of component which is often used in software is a module (i.e., a set of one or more instructions which, when executed by a computer, result in that computer performing a specified task). Continuing, the verb “incorporate,” in the context of “incorporating” a machine learning solution into a system, should be understood to refer to the act of making the machine learning solution, or its functionality, a part of, or available to, the system into which the solution is “incorporated.” Further, the verb “access” (and various forms thereof) in the context of this application should be understood to refer to the act of locating, retrieving, utilizing, or making available, the thing being “accessed.”
“Algorithms” comprise the logic necessary to update and/or create associated models. The term “functionality” should be understood to refer to the capabilities of a thing to be used for one or more purposes, or to fulfill one or more requirements.
The verb phrase “provide access to” (and the various forms thereof) should be understood to refer to the act of allowing some other entity to access that which the access is provided to. The verb “create” (and the various forms thereof) should be understood to refer to the act of bringing something into existence. The verb “store” (and the various forms thereof) should be understood to include any act of preserving or maintaining, however brief in duration that act might be. The verb “version” (and the various forms thereof) should be understood to refer to the act of assigning a unique identifier to an identifiable state of a set of data, such as a machine learning model. The verb “synchronize” (and the various forms thereof) should be understood to refer to bringing different instances of some construct, such as a machine learning model, into agreement with one another so that the same calculations performed using different (though synchronized) instances of the construct will yield the same result. The term “request” should be understood to refer to a message sent between components. The verb “receive” (and various forms thereof) should be understood to refer to the act of obtaining access to something. It should be understood that the word “receiving” does not imply obtaining exclusive access to something. Thus, a message could be “received” through the well known prior art methods of reading a notice posted on a bulletin board, or overhearing an announcement which was intended for someone else. Of course, that example is not intended to exclude situations where a device or entity gains exclusive access from the scope of the verb “receive.” For example, if a letter is handed to its addressee, it could be said that the addressee has “received” the letter. The verb “implement” (and the various forms thereof) should be understood to refer to the act of taking one or more steps necessary for the thing being “implemented” to be brought into existence, or realized.
When a first thing is “implemented” “according to” a second thing, it should be understood to mean that the first thing is being put into practice in a manner consistent with, or controlled by, the second thing.
As a second example of how the teachings of this application might be implemented, some portions of this application might be implemented in a method comprising: receiving a learning event; sending the learning event to a model management component, where the model management component is configured to coordinate the learning event with a model; using a method exposed by a synchronization policy interface associated with the model, sending the learning event to a prototypical model; updating the prototypical model according to the learning event; and, propagating the updated prototypical model to a plurality of models. In such a method, the plurality of models to which the updated prototypical model is propagated might comprise the model and at least one model hosted remotely from the updated prototypical model. Further, in some such methods, the propagating step might take place during on-line processing.
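The flow of such a method might be sketched as follows, with a learning event passed through a model management component to a prototypical model and then propagated to the other model instances. The names, and the simple frequency-count "model," are illustrative assumptions only:

```python
class PrototypicalModel:
    """Designated standard instance of a model implementation."""
    def __init__(self):
        self.counts = {}

    def update(self, learning_event):
        # Incorporate the learning event; here, a simple frequency count.
        item = learning_event["item"]
        self.counts[item] = self.counts.get(item, 0) + 1

class SynchronizationPolicy:
    """Exposes the method through which a learning event reaches the
    prototypical model and is then propagated to the other instances."""
    def __init__(self, prototypical, replicas):
        self.prototypical = prototypical
        self.replicas = replicas  # may include remotely hosted instances

    def apply(self, learning_event):
        self.prototypical.update(learning_event)
        self.propagate()

    def propagate(self):
        # In a deployed system, remotely hosted instances would be reached
        # over a network; here each replica simply copies the state.
        for replica in self.replicas:
            replica.counts = dict(self.prototypical.counts)

class ModelManagementComponent:
    """Coordinates an incoming learning event with its model's policy."""
    def __init__(self, policy):
        self.policy = policy

    def receive(self, learning_event):
        self.policy.apply(learning_event)

prototypical = PrototypicalModel()
local, remote = PrototypicalModel(), PrototypicalModel()  # replica instances
component = ModelManagementComponent(
    SynchronizationPolicy(prototypical, [local, remote]))
component.receive({"item": "widget"})
```

After the event is received, all instances agree, which is the sense of "synchronize" defined above: the same calculation performed on any instance yields the same result.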
For the purpose of clarity, certain terms as used above should be understood to have specific meanings in the context of this application. In that vein, the term “learning event” should be understood to refer to a situation or occurrence which takes place during the operation of a computer system which should be used to influence the operation of the computer system in the future.
The verb “send” (and various forms thereof) should be understood to refer to an entity or device making a thing available to one or more other entities or devices. It should be understood that the word “sending” does not imply that the entity or device “sending” a thing has a particular destination selected for that thing; thus, as an analogy, a message could be sent using the well known prior art method of writing the message on a piece of paper, placing the paper in a bottle, and throwing the bottle into the ocean. Of course, the above example is not intended to imply that the word “sending” is restricted to situations in which a destination is not known. Thus, sending a thing refers to making that thing available to one or more other devices or entities, regardless of whether those devices or entities are known or selected by the sender. Thus, the “sending” of a “learning event” should be understood to refer to making the learning event available to the thing to which it is sent (e.g., by transmitting data representing the learning event over a network connection to the thing to which the learning event is sent).
In the context of coordinating a learning event with a model, the verb “coordinate” should be understood to refer to the act of establishing a relationship or association between the learning event and the model. Of course, it should be understood that the verb “coordinate” can also take the meaning of managing, directing, ordering, or causing one or more objects to function in a concerted manner. The specific meaning which should be ascribed to any instance of the verb “coordinate” (or a form thereof) should therefore be determined not only by the explicit definitions set forth herein, but also by context.
Additionally, “on-line processing” should be understood to refer to processing in which the product of the processing (e.g., a data output) is provided immediately or substantially immediately (e.g., within the scope of a single interaction such as a conversation or an on-line shopping session) after the inception of the processing. The term “hosted remotely” should be understood to refer to something which is stored on a machine which is physically separate from some other machine.
A “model” should be understood to refer to a pattern, plan, representation, or description designed to show the structure or workings of an object, system, or concept. Models can be understood as representations of data which can be created, used and updated in the context of machine learning. A “prototypical model” should be understood to refer to a model which is designated as a standard, or official, instance of a particular model implementation.
The verb “propagate” should be understood to refer to the act of being distributed throughout some domain (e.g., a prototypical model being propagated throughout the domain of like model implementations).
The term “method” should be understood to refer to a sequence of one or more instructions which is defined as a part of an object.
The verb “expose” (and the various forms thereof) in the context of a “method” “exposed” by an “interface” should be understood to refer to making a method available to one or more outside entities which are able to access the interface. When an interface is described as being “associated with” a model, it should be understood to mean that the interface has a definite relationship with the model.
Additionally, the verb “update” (and the various forms thereof) should be understood to refer to the act of modifying the thing being “updated” with newer, more accurate, more specialized, or otherwise different, information.
Finally, a statement that a first thing “takes place during” a second thing should be understood to indicate that the first thing happens contemporaneously with the second thing.
A “machine learning engine” should be understood as an abstraction for the functional capabilities of runtime execution of machine learning algorithms and models. A “machine learning context” is the runtime state of the model and the appropriate algorithm. A “machine learning controller” controls access to models and algorithms.
In an embodiment, there is a method of incorporating a computerized machine learning solution, comprising a machine learning model and a machine learning algorithm, into a transaction processing system, the method comprising configuring an application interface component to access a set of functionality associated with said machine learning solution; configuring an algorithm management component to access at least one instance of said machine learning algorithm according to a request received from the application interface component; configuring a model management component to access at least one instance of said machine learning model according to said request received from the application interface component, wherein configuring said model management component comprises the steps of: configuring a synchronization policy associated with said model management component; configuring a persistence policy associated with said model management component; and configuring a versioning policy associated with said model management component; and further configuring said machine learning solution to apply said at least one instance of said machine learning algorithm and said at least one instance of said machine learning model according to said request received from the application interface component.
In another embodiment, there is a computerized method of incorporating a machine learning solution, comprising a machine learning model, into a transaction processing system, the method comprising configuring an application interface component to access a set of functionality associated with the machine learning solution; configuring a model management component to access at least one instance of said machine learning model according to a request received from the application interface component, wherein configuring said model management component comprises the steps of: configuring a synchronization policy associated with said model management component; configuring a persistence policy associated with said model management component; and configuring a versioning policy associated with said model management component.
In another embodiment, there is a method of incorporating said machine learning solution wherein said persistence policy comprises a set of computer-executable instructions encoding how, when, and where said machine learning model should be persisted.
In another embodiment, there is a method of incorporating said machine learning solution wherein said versioning policy is implemented according to a versioning policy interface which encodes how a machine learning model version in memory and a machine learning model version in persistent storage should be synchronized.
In another embodiment, there is a method of incorporating said machine learning solution further comprising accessing said synchronization policy through a synchronization policy interface.
In another embodiment, there is a method of incorporating said machine learning solution wherein said synchronization policy interface comprises computer-executable instructions for invoking said synchronization policy for said machine learning model.
In another embodiment, there is a computerized method comprising receiving a learning event; sending said learning event to a model management component, wherein said model management component is configured to determine a machine learning model associated with said learning event; using said machine learning model, accessing a synchronization policy associated with said model; sending said learning event to a prototypical machine learning model determined from said synchronization policy; updating said prototypical machine learning model according to said learning event to produce an updated prototypical machine learning model; and propagating said updated prototypical machine learning model to a plurality of machine learning models comprising said machine learning model and at least one other machine learning model hosted remotely from said updated prototypical machine learning model and said machine learning model. The propagating step may occur in real-time or near real-time.
In another embodiment, there is a computerized machine learning system comprising an application interface component further comprising a machine learning engine; a machine learning context; and a machine learning controller. It also includes a model management component further comprising a model pool; a model pool manager; and a synchronization manager. It also includes an algorithm management component comprising an algorithm manager; an algorithm instance map; an algorithm factory; an algorithm implementation; and an algorithm interface. The machine learning engine, in combination with the machine learning context, provides said application interface component, accessible by an application program, and communicates a request to said algorithm management component and/or said model management component. The algorithm interface defines how an algorithm instance of an algorithm implementation is accessed. The algorithm manager is configured to create, retrieve, update and/or delete an algorithm instance, through said algorithm factory, and enter said algorithm instance into said algorithm instance map based on said request received through said application interface component. A model is stored within said model pool, and a synchronization policy is associated with said model. The model pool manager is configured to create, retrieve, update and/or delete said model within the model pool based on said request received through said application interface component. The synchronization manager executes said synchronization policy associated with said model. The machine learning controller binds an algorithm instance with said model.
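One way the algorithm factory, instance map, model pool, and controller of such a system might fit together is sketched below. The toy frequency-counting algorithm and all identifiers are hypothetical stand-ins, not elements of the claimed system:

```python
class FrequencyAlgorithm:
    """Toy algorithm implementation conforming to an assumed interface."""
    def learn(self, model, event):
        model[event] = model.get(event, 0) + 1
    def classify(self, model):
        return max(model, key=model.get) if model else None

class AlgorithmFactory:
    """Creates algorithm instances from registered implementations."""
    def __init__(self, implementations):
        self._implementations = implementations  # name -> implementation class
    def create(self, name):
        return self._implementations[name]()

class AlgorithmManager:
    """Creates algorithm instances through the factory and records them
    in the algorithm instance map."""
    def __init__(self, factory):
        self._factory = factory
        self._instance_map = {}
    def get(self, name):
        if name not in self._instance_map:
            self._instance_map[name] = self._factory.create(name)
        return self._instance_map[name]

class ModelPoolManager:
    """Creates and retrieves models held in the model pool."""
    def __init__(self):
        self._pool = {}
    def get(self, name):
        return self._pool.setdefault(name, {})

class MachineLearningController:
    """Binds an algorithm instance with a model to service a request."""
    def __init__(self, algorithm_manager, model_pool_manager):
        self.algorithms = algorithm_manager
        self.models = model_pool_manager
    def bind(self, algorithm_name, model_name):
        return self.algorithms.get(algorithm_name), self.models.get(model_name)

controller = MachineLearningController(
    AlgorithmManager(AlgorithmFactory({"frequency": FrequencyAlgorithm})),
    ModelPoolManager())
algorithm, model = controller.bind("frequency", "recommendations")
algorithm.learn(model, "widget")
algorithm.learn(model, "widget")
algorithm.learn(model, "gadget")
```

Note how the application program touches only the controller; the factory and the instance map are hidden behind the algorithm manager, which is the separation of algorithms and models that the architecture described above is meant to preserve.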
In another embodiment, the application interface component, provided by the machine learning context, is configured to expose a particular or proprietary machine learning algorithm. Some algorithms may not conform to the standard abstracted interface (learn, classify, regression); the “proprietary” interface provides access to algorithm interfaces that do not conform to the standard interface due to customizations or new functions.
In another embodiment, the application interface component, provided by the machine learning context, is configured to expose a general purpose interface.
In another embodiment, the application interface component, provided by the machine learning context, is configured to expose a focused interface to be applied to a specific task.
In another embodiment, the synchronization manager is configured to update a prototypical model only if said request contains an appropriate learning event.
In another embodiment, the appropriate learning event comprises that said model, associated with said request, is identical to the prototypical model except for a set of predetermined parameters which are updatable.
In another embodiment, the appropriate learning event comprises a situation wherein said model, associated with said request, and said prototypical model comprise an identical vendor, an identical algorithm structure, and a set of identical model parameter attribute types.
In another embodiment, the synchronization manager is configured to propagate an updated prototypical model to a plurality of models comprising said model and at least one model hosted remotely from said updated prototypical model.
In another embodiment, the propagation of said updated prototypical model to said plurality of models is executed in accordance with a synchronization policy associated with each of said plurality of models.
In another embodiment, the model pool manager is configured to retrieve a plurality of models within the model pool based on said request received through said application interface component.
In another embodiment, there is a method of utilizing a computerized machine learning solution, comprising a machine learning model, in a transaction processing system, the method comprising using a machine learning engine of an application interface component to connect said machine learning solution to an on-line retailer's website; coding a classification method, in said transaction processing system, to be invoked when a customer visits the website; coding said classification method to send a message to a machine learning controller; requesting, through said machine learning controller, an instance of an associated algorithm from an algorithm manager and an instance of an associated model from a model manager; determining a recommendation from a set of available products based on processing said instance of said associated algorithm, said instance of said associated model, and an output from said classification method; coding a learn() method to be called when a purchase takes place; passing a purchase message, regarding said purchase, to said machine learning controller as a learning event; passing said learning event to an update method exposed by a synchronization policy interface of said instance of said associated model via said model manager; sending said learning event to a prototypical model; creating an updated version of said prototypical model; and propagating said updated prototypical model to a plurality of distributed servers according to a propagation method exposed by said synchronization policy interface. The propagation method may be a master-slave synchronization policy.
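The retailer scenario might look as follows in code, under the assumption of a simple purchase-frequency model and a master-slave synchronization policy; the function and class names are hypothetical illustrations:

```python
class MasterSlaveSynchronizationPolicy:
    """Learning events update the master (prototypical) model, whose state
    is then pushed to the slave models on the distributed servers."""
    def __init__(self, master, slaves):
        self.master = master
        self.slaves = slaves

    def update(self, learning_event):
        product = learning_event["product"]
        self.master[product] = self.master.get(product, 0) + 1
        self.propagate()

    def propagate(self):
        # Overwrite each slave with the master's state; in a deployed
        # system the slaves would live on remote servers.
        for slave in self.slaves:
            slave.clear()
            slave.update(self.master)

def classify(model, available_products):
    """Recommend the most frequently purchased of the available products."""
    scored = [p for p in available_products if p in model]
    return max(scored, key=model.get) if scored else None

def learn(purchase, policy):
    """Called when a purchase takes place; forwards it as a learning event."""
    policy.update({"product": purchase})

master = {}
server_a, server_b = {}, {}  # model instances on distributed servers
policy = MasterSlaveSynchronizationPolicy(master, [server_a, server_b])

for purchase in ["camera", "camera", "tripod"]:
    learn(purchase, policy)
```

Because each purchase is propagated within the learn() call itself, a recommendation computed against any server's model immediately reflects the new learning event, which is the on-line (rather than batch) behavior described earlier in this application.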
Additionally, it should be understood that this application is not limited to being implemented as described above. The inventors contemplate that the teachings of this application could be implemented in a variety of methods, data structures, interfaces, computer readable media, and other forms which might be appropriate to a given situation. Additionally, the discussion above is not intended to be an exhaustive recitation of all potential implementations of this disclosure.
a sets forth additional detail regarding the application interface component shown in
b sets forth additional detail regarding the algorithm management component shown in
c sets forth additional detail regarding the model management component shown in
Other examples, features, aspects, embodiments, and advantages of the invention will become apparent to those skilled in the art from the following description, which includes, by way of illustration, the best mode contemplated by the inventor(s) for carrying out the invention. As will be realized, the invention is capable of other, different, and obvious aspects, all without departing from the invention. Accordingly, the drawings and descriptions should be regarded as illustrative in nature and not restrictive. It should therefore be understood that the inventors contemplate a variety of embodiments that are not explicitly disclosed herein. For purposes of clarity, when a method/system/interface is set down in terms of acts, steps, or components configured to perform certain functions, it should be understood that the described embodiment is not necessarily intended to be limited to a particular order.
This application discusses certain computerized methods, computer systems and computer readable media which can be used to support computerized machine learning in contexts, such as large scale distributed transaction processing systems, which are not addressed by presently available market offerings. For the purpose of clarity, a machine learning solution should be understood to refer to sets of computer-executable instructions, computer-readable data and computerized components which can allow the automatic modification of the behavior of a computerized system based on experience. In this application, machine learning solutions are characterized as including models and algorithms. Models can be understood as representations of data which can be created, used and updated in the context of machine learning. Algorithms contain the logic necessary to update and/or create associated models. To clarify this concept, consider the example of a Bayesian network. A model for a Bayesian network would likely contain nodes, edges and conditional probability tables. Such a model could be stored in memory in a format such as a joint tree for efficiency purposes, and might be stored in files in a variety of formats, such as vendor specific binary or text formats, or standard formats such as Predictive Modeling Markup Language (PMML). A Bayesian network algorithm then could perform inferences using the Bayesian network model, adjust the model based on experience and/or learning events, or both. As a second example, a decision tree model would likely be represented in memory by a tree structure containing nodes representing decisions and/or conditionals. The decision tree model could be stored in files in vendor specific or general formats (or both), as set forth above for Bayesian network models. 
The logic which updates the model and/or makes classifications using the model and particular data, and which would preferably be implemented in software, is the decision tree algorithm. The theoretical details underpinning Bayesian networks and decision trees in machine learning are well known to those of ordinary skill in the art and are not set forth herein.
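As an illustration of the Bayesian network example above, the sketch below represents a minimal two-node network as nodes, edges, and conditional probability tables, and performs a simple inference over it. The network structure and the probabilities are illustrative only, and the dictionary layout is an assumed in-memory format rather than any vendor's or standard (e.g., PMML) representation:

```python
# A minimal two-node Bayesian network model: Rain -> WetGrass.
model = {
    "nodes": ["Rain", "WetGrass"],
    "edges": [("Rain", "WetGrass")],
    "cpt": {
        "Rain": {(): 0.2},                          # P(Rain = true)
        "WetGrass": {(True,): 0.9, (False,): 0.1},  # P(WetGrass = true | Rain)
    },
}

def probability_wet_grass(model):
    """Algorithm logic: infer P(WetGrass) by marginalizing over Rain."""
    p_rain = model["cpt"]["Rain"][()]
    cpt = model["cpt"]["WetGrass"]
    return p_rain * cpt[(True,)] + (1 - p_rain) * cpt[(False,)]
```

Here P(WetGrass) = 0.2 × 0.9 + 0.8 × 0.1 = 0.26. The separation visible even in this toy example, between the model (the data structure) and the algorithm (the inference function), is the separation that the architecture of this application maintains at system scale.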
As is discussed herein, the techniques and teachings of this application are broadly applicable to a variety of machine learning algorithms, models, and implementations. To clarify, an implementation should be understood as a particular way of realizing a concept. Differing model implementations might vary based on their vendor, the algorithm which creates and/or updates the model, the data format of the model (which format may be different between an instance of a model in memory and one which is persisted to secondary storage), and/or attributes which are included in the model. Such an implementation should not be confused with an instance, which should be understood as a particular object corresponding to a definition (e.g., a class definition specifying a model implementation).
To illustrate a distinction between these terms, there might be many instances of a particular implementation (e.g., there might be many individual instances of a particular type of model implementation, or many instances of a particular algorithm implementation). Of course, it should also be understood that, while the techniques and teachings of this application are broadly applicable to a variety of implementations of both models and algorithms, the systems and methods which can be implemented according to this application will not necessarily be so generically applicable. Indeed, it is possible that the teachings of this application could be implemented in a manner which is specific to a particular model or algorithm or their respective implementations.
Turning to the drawings,
Addressing the particular components of
Returning specifically to
Algorithm Management
b sets forth a diagram which depicts, in additional detail, components which might be included in the algorithm management component [105] of
Moving on from the algorithm interface [112], the algorithm management component [105] depicted in
The teachings of this application could be implemented in a manner which is flexible enough to accommodate a plurality of alternative algorithms, such as decision trees, reinforcement learning, Bayesian networks, K-nearest neighbor, neural networks, support vector machines, and other machine learning algorithms. Returning to
Model Management
c sets forth a diagram which depicts, in additional detail, components which might be included in the model management component [104] of
Additionally, while it should be understood that the general architecture set forth in
As set forth previously, the architecture of
Synchronization of Models
Turning to the functionality of synchronization of models, the architecture of
As a concrete example of a synchronization policy which could be implemented in a system implemented according to the architecture of
It should be understood that, while a master-slave periodic synchronization policy is described above, the architecture set forth in
As an example of how the architecture of
For the sake of clarity, it should be emphasized that, while the discussion above focuses on how a synchronization policy could be implemented in a system according to the architecture of
It should be understood that the segregation of model implementations described above is also intended to be illustrative of a particular approach to synchronization in the context of an implementation supporting multiple model implementations, and should not be treated as limiting on claims included in future applications which claim the benefit of this application. To demonstrate an alternate approach which could be used for synchronizing an implementation which supports multiple model implementations, consider the scenario in which different learning events are classified based on their relevance to particular model implementations. For example, when a purchase is made through the web site of the on-line retailer, that purchase might be used as a learning event which is relevant both to a model used for making product recommendations, and a model used for making targeted promotional offers. When such a learning event takes place, instead of sending it to a single master model which is then synchronized with models which are identical to the master model except for the potentially updatable parameters, the learning event's relevance classification would be obtained, and the learning event could be sent to all models to which it was relevant. Those models might be single model implementations used for different purposes, or they might be different model implementations entirely (e.g., different formats, parameters, and associated algorithms). Once the learning events had been associated with the relevant models, those models could apply their own synchronization policies to the learning events, which might be master-slave policies, concurrent batch synchronization policies, combined synchronization policies, or other types of policies which are appropriate for the particular implementation as it is used within the constraints of the underlying system.
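The relevance-classification approach described above can be sketched as follows. This is an illustrative sketch under assumed names (the `Model` class, the `dispatch` function, and the relevance tags are hypothetical, not taken from any particular embodiment): a learning event is classified for relevance, sent to every relevant model, and each model then queues it for processing under its own synchronization policy.

```python
# Illustrative sketch: classify a learning event's relevance, then dispatch
# it to every model to which it is relevant. Each model would then apply
# its own synchronization policy to its queued events.

class Model:
    def __init__(self, name):
        self.name = name
        self.pending = []          # learning events awaiting synchronization

    def receive(self, event):
        self.pending.append(event)  # a real model would later process these
                                    # under its own synchronization policy

def dispatch(event, relevance, models):
    """Send a learning event to every model its classification deems relevant."""
    for tag in relevance(event):
        models[tag].receive(event)

models = {"recommendation": Model("recommendation"),
          "promotion": Model("promotion"),
          "fraud": Model("fraud")}

# A web purchase is classified as relevant to recommendations and to
# targeted promotional offers, but not to fraud detection:
relevance = lambda e: (["recommendation", "promotion"]
                       if e["type"] == "purchase" else ["fraud"])
dispatch({"type": "purchase", "sku": 42}, relevance, models)
```

In this sketch the single master model is replaced by per-model routing: the purchase event lands in the queues of the recommendation and promotion models only, and each of those models remains free to use a master-slave, concurrent batch, or combined policy internally.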
Similarly, while
Persistence of Models
Moving on from the discussion of synchronization of models set forth above, a system implemented according to the architecture of
As a specific example of functionality which could be included within a model persistence manager [125], consider the mechanics of actually retrieving a model. When retrieving a model, it is necessary to identify the model (or have stored information or defaults specifying it), for example by name and location. One method of making that identification is to include a registry or name service within the model persistence manager [125] from which the appropriate identification information could be retrieved. As another example of functionality which could be incorporated into a model persistence manager [125], consider the task of model format conversion. In some implementations, for reasons such as increasing the space or search efficiency of persistent storage, models in persistent storage might all be represented using a common format, which could be translated into a specific in-memory format for a particular model implementation when such a model is retrieved from storage into memory. The task of translating models from their individual formats into the common format for storage, and back into their individual formats upon retrieval, could be handled as part of the processing performed by the model persistence manager [125]. Of course, it should be understood that not all implementations will include such format translation, and that some implementations might store models in native format, or store some models in native format while translating others to a common format.
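The two pieces of persistence functionality just discussed, a name/location registry and format translation, can be combined in a brief sketch. All names here (`ModelPersistenceManager`, `register`, `store`, `retrieve`, the location strings) are illustrative assumptions rather than a definitive implementation; the common format is modeled as a plain dictionary and the in-memory format as a list of pairs.

```python
# Illustrative sketch: a persistence manager with an internal name registry
# and pluggable translation between an implementation-specific in-memory
# format and a common storage format.

class ModelPersistenceManager:
    def __init__(self):
        self.registry = {}   # model name -> storage location
        self.storage = {}    # storage location -> model in common format

    def register(self, name, location):
        self.registry[name] = location

    def store(self, name, in_memory_model, to_common):
        # Translate from the implementation's format to the common format.
        self.storage[self.registry[name]] = to_common(in_memory_model)

    def retrieve(self, name, from_common):
        # The registry resolves the name to a location, and the stored
        # common-format model is translated back to the in-memory format.
        location = self.registry[name]
        return from_common(self.storage[location])

mgr = ModelPersistenceManager()
mgr.register("recommender", "host-a:/models/rec")
mgr.store("recommender", [("sku42", 0.9)], to_common=dict)
model = mgr.retrieve("recommender", from_common=lambda d: sorted(d.items()))
```

An implementation storing models in native format would simply supply identity functions for the two translations, consistent with the observation above that not all implementations will include format translation.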
While
Versioning of Models
The architecture of
As an example of how a model versioning manager [119] and a policy implemented according to a versioning policy interface [117] could interact with other components (e.g., a model [115], a persistence manager [125] and a domain actor utilizing the functionality of a machine learning solution via the application interface component [107]) of a system implemented according to the teachings of this application, consider the sequence diagram of
The actual mechanics of tracking a model's versions could vary between implementations of the versioning policy interface [117]. For example, in some implementations, there might be a database set up to keep track of the models, their locations, and their version identifications. Such a database could be implemented as a separate component (or collection of distributed components, as appropriate) which would be accessed by one or more of the components illustrated in
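As a minimal sketch of the version-tracking database described above, the following fragment keeps, for each model name, a list of (version identification, location) pairs. The class and method names are hypothetical, and an in-memory dictionary stands in for what could equally be a separate database component or a collection of distributed components.

```python
# Illustrative sketch: a registry tracking models, their locations, and
# their version identifications, as a versioning policy might consult.

import itertools

class ModelVersionRegistry:
    def __init__(self):
        self._versions = {}                 # name -> [(version, location), ...]
        self._counter = itertools.count(1)  # monotonically increasing version IDs

    def record(self, name, location):
        version = next(self._counter)
        self._versions.setdefault(name, []).append((version, location))
        return version

    def latest(self, name):
        # The most recently recorded version of the named model.
        return self._versions[name][-1]

reg = ModelVersionRegistry()
reg.record("recommender", "host-a:/models/rec.v1")
reg.record("recommender", "host-a:/models/rec.v2")
```

A policy implemented according to the versioning policy interface [117] could query such a registry to decide, for example, whether a locally cached model is stale relative to the latest recorded version.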
As set forth above, the architecture of
In general, the present application sets forth infrastructure to support distributed machine learning in a transactional system. The infrastructure described herein provides a flexible framework which can accommodate disparate machine learning models, algorithms, and system configurations. The infrastructure would allow distribution of models and algorithms across hosts, and would allow requests for models and algorithms to be made by clients which are agnostic of the physical location of any particular algorithm or model. Specific customizations which might be necessary or desirable for individual implementations of models or algorithms could be made by tailoring particular policies accessed through generic interfaces in the infrastructure (e.g., synchronization policy interface [116]). Additionally, the architecture can be used to provide fail-over capabilities and load balancing for individual components used in a machine learning solution. Thus, a system implemented according to this application could be used to extend machine learning models and algorithms to large scale environments where they cannot currently be utilized.
As a concrete example of how the teachings of this application could be used to incorporate a machine learning solution into a real world transaction processing system, consider how an on-line retailer could incorporate a machine learning solution into a system as set forth in
Following the teachings of this application, to implement the desired recommendation and learning functionality, a developer working for the on-line retailer could use the machine learning engine [101] of the application interface component [107] to connect the desired functionality with the on-line retailer's web site. For example, using the methods as shown in
Moving on from the retrieval of models and algorithms, consider now the events which might take place for a learning event to be received, processed, and propagated in a system such as that depicted in
Continuing with the sequence of events above, assume that the synchronization policy is to asynchronously queue learning events at the remote server [306] and, every 100 events, process those events and propagate the resulting changes to the appropriate models residing on the physical servers [303][304][305]. Assume further that the learning event discussed in the previous paragraph is the 100th learning event added to the queue for the prototypical model. In such a scenario, when the network synchronizer [122] sends the learning event to the remote server [306], the remote server would process all of the queued learning events to create an updated version of the prototypical model. That updated version would then be provided to the network synchronizer [122] to update the model [115] stored locally on the physical server which presented the learning event. Additionally, the remote server [306] would propagate the updated prototypical model to the remaining physical servers, so that all of the physical servers [303][304][305] would have the most current copy of the model, including all modifications based on all learning events, regardless of which physical server [303][304][305] actually detected those events.
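The queue-and-propagate policy in the scenario above can be sketched as follows. This is an illustrative sketch only: the `MasterSynchronizer` class, its update rule (a simple counter standing in for model training), and the dictionary-based replicas are assumptions for demonstration, not a definitive implementation of the remote server [306].

```python
# Illustrative sketch: queue learning events, and every batch_size-th event
# process the whole queue into an updated master (prototypical) model, then
# propagate that model to every replica.

class MasterSynchronizer:
    def __init__(self, replicas, batch_size=100):
        self.queue = []
        self.replicas = replicas              # models on the physical servers
        self.batch_size = batch_size
        self.master = {"updates_applied": 0}  # stand-in for the prototypical model

    def submit(self, event):
        self.queue.append(event)
        if len(self.queue) >= self.batch_size:
            self._process_and_propagate()

    def _process_and_propagate(self):
        # Process all queued events into an updated master model...
        self.master["updates_applied"] += len(self.queue)
        self.queue.clear()
        # ...then push the updated model out to every replica.
        for replica in self.replicas:
            replica.update(self.master)

replicas = [dict() for _ in range(3)]   # one per physical server
sync = MasterSynchronizer(replicas, batch_size=100)
for i in range(100):                    # the 100th event triggers propagation
    sync.submit({"event": i})
```

After the 100th submission, every replica holds the updated model and the queue is empty, mirroring the propagation to all physical servers [303][304][305] described above.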
It should be understood that the above example of how a machine learning solution could be integrated into a transaction processing system is intended to be illustrative only, and not limiting on the scope of claims included in applications which claim the benefit of this application. The following are selected examples of variations on the above discussion of the situation of the on-line retailer which could be implemented by one of ordinary skill in the art without undue experimentation in light of this application.
As one exemplary variation, it should be noted that, while the above discussion of machine learning in the context of an on-line retailer was focused on making recommendations of products, the teachings of this application are not limited to making recommendations, and could be used to incorporate machine learning solutions into systems which perform a broad array of tasks, including making decisions about call routing in a network, determining a response for a computer in an interactive voice response system, and determining a likely source of a problem a customer is experiencing with a network service. Additionally, it should be noted that, while the discussion of machine learning in the context of an on-line retailer focused on a single model, it is possible that the teachings of this application could be used to support many models simultaneously in a single deployment. For example, in the case of the on-line retailer, there might be, in addition to the models used for recommendations, different models and algorithms used to perform functions such as fraud detection, determination of eligibility for special promotions, or any other function which could appropriately be incorporated into the operations of the retailer. All such functions, perhaps using different models, algorithms, synchronization policies, etc. could be supported by a single implementation of this disclosure deployed on the system as described in
In a similar vein, while the discussion of machine learning in the context of an on-line retailer was written in a manner which is agnostic regarding particular algorithms and models (i.e., the system was designed to work equally well with a variety of models and algorithms such as Bayesian network, decision tree, or neural network algorithms and models), it is also possible that the supporting infrastructure for a machine learning solution could be implemented in a manner which is customized for a particular machine learning solution from a particular vendor. Further, while the example of learning in the context of an on-line retailer included a synchronization policy which processed learning events every time 100 learning events were queued, different policies might be appropriate for different implementations. For example, based on the teachings of this application, a system could be implemented in which a multiprocessor Red Hat Linux system operating on a high speed network infrastructure is used for making recommendations and processing learning events. As an implementation of the teachings of this application on such a system is estimated to be capable of performing at least 2000 classifications/second (7250 classifications/second at peak time) and processing at least 660 learning events/second (an estimated maximum of 2400 learning events/second), a policy might be designed requiring that updated models would be propagated at least every 660 learning events, so that the models used by the system would not be more than 1 second out of date at any given time.
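The staleness bound in the example above follows from simple arithmetic: if the system processes at least 660 learning events per second, then propagating updated models at least every 660 queued events bounds how far out of date the models can be.

```python
# Arithmetic behind the policy above: propagation threshold divided by the
# minimum event-processing rate gives the worst-case model staleness.
events_per_second = 660        # minimum sustained learning-event rate
propagation_threshold = 660    # propagate at least every this many events
max_staleness_seconds = propagation_threshold / events_per_second  # 1.0
```

A different implementation could tighten or relax the bound simply by adjusting the threshold relative to its own measured event rate.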
Some of the principles regarding the synchronization, versioning, and persistence of models may also be applied to, or integrated into, the algorithm management component. In an embodiment, if an algorithm that needs to be synchronized does not affect the structure of the model, then the algorithm could be synchronized within the running system. If the algorithm changes the structure, then every model affected by the algorithm change would have to be converted to the new structure and, possibly, retrained, and then synchronized into the system. Methods would need to be employed to ensure that algorithms which make changes to the model structure are not inadvertently synchronized.
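The safeguard just described can be sketched as a simple guard condition. The function and field names below are hypothetical, assumed for illustration: a structure-preserving algorithm update is applied to the running system, while a structure-changing one is rejected until the affected models have been converted (and possibly retrained).

```python
# Illustrative sketch: only synchronize an algorithm into the running system
# if it does not change the model structure; otherwise reject it pending
# model conversion and retraining.

def synchronize_algorithm(new_algorithm, current_structure, live_update):
    if new_algorithm["model_structure"] == current_structure:
        live_update(new_algorithm)   # safe to swap into the running system
        return "synchronized"
    # Structure changed: every affected model must be converted first.
    return "rejected: model conversion required"

applied = []
result_ok = synchronize_algorithm(
    {"name": "boosted-v2", "model_structure": "tree"},
    current_structure="tree",
    live_update=applied.append)
result_bad = synchronize_algorithm(
    {"name": "net-v1", "model_structure": "neural-net"},
    current_structure="tree",
    live_update=applied.append)
```

In this sketch only the structure-preserving update reaches the live system; the structure-changing one is held back, which is the inadvertent-synchronization check the paragraph above calls for.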
The list of variations set forth above is not meant to be exhaustive or limiting on the scope of potential implementations which could fall within the scope of claims included in applications which claim the benefit of this application. Instead, all such claims should be read as limited only by their own words, and to insubstantially different variations therefrom.
This is a non-provisional patent application which claims priority from U.S. Provisional Patent Application 60/892,299, filed Mar. 1, 2007, “A System and Method for Supporting the Utilization of Machine Learning”, and from U.S. Provisional Patent Application 60/747,896, filed May 22, 2006, “System and Method for Assisted Automation”. Both applications are incorporated by reference herein.
Number | Name | Date | Kind |
---|---|---|---|
5206903 | Kohler et al. | Apr 1993 | A |
5214715 | Carpenter et al. | May 1993 | A |
5345380 | Babson, III et al. | Sep 1994 | A |
5411947 | Hostetler et al. | May 1995 | A |
5581664 | Allen et al. | Dec 1996 | A |
5586218 | Allen | Dec 1996 | A |
5615296 | Stanford et al. | Mar 1997 | A |
5625748 | McDonough et al. | Apr 1997 | A |
5652897 | Linebarger et al. | Jul 1997 | A |
5678002 | Fawcett et al. | Oct 1997 | A |
5701399 | Lee et al. | Dec 1997 | A |
5748711 | Scherer | May 1998 | A |
5757904 | Anderson | May 1998 | A |
5802526 | Fawcett et al. | Sep 1998 | A |
5802536 | Yoshii et al. | Sep 1998 | A |
5825869 | Brooks et al. | Oct 1998 | A |
5852814 | Allen | Dec 1998 | A |
5867562 | Scherer | Feb 1999 | A |
5872833 | Scherer | Feb 1999 | A |
5895466 | Goldberg et al. | Apr 1999 | A |
5944839 | Isenberg | Aug 1999 | A |
5956683 | Jacobs et al. | Sep 1999 | A |
5960399 | Barclay et al. | Sep 1999 | A |
5963940 | Liddy et al. | Oct 1999 | A |
5966429 | Scherer | Oct 1999 | A |
5987415 | Breese et al. | Nov 1999 | A |
5991394 | Dezonno et al. | Nov 1999 | A |
6021403 | Horvitz et al. | Feb 2000 | A |
6029099 | Brown | Feb 2000 | A |
6038544 | Machin et al. | Mar 2000 | A |
6044142 | Hammarstrom et al. | Mar 2000 | A |
6044146 | Gisby et al. | Mar 2000 | A |
6070142 | McDonough et al. | May 2000 | A |
6094673 | Dilip et al. | Jul 2000 | A |
6122632 | Botts et al. | Sep 2000 | A |
6137870 | Scherer | Oct 2000 | A |
6173266 | Marx et al. | Jan 2001 | B1 |
6173279 | Levin et al. | Jan 2001 | B1 |
6177932 | Galdes et al. | Jan 2001 | B1 |
6182059 | Angotti et al. | Jan 2001 | B1 |
6188751 | Scherer | Feb 2001 | B1 |
6192110 | Abella et al. | Feb 2001 | B1 |
6205207 | Scherer | Mar 2001 | B1 |
6212502 | Ball et al. | Apr 2001 | B1 |
6233547 | Denber | May 2001 | B1 |
6233570 | Horvitz et al. | May 2001 | B1 |
6243680 | Gupta et al. | Jun 2001 | B1 |
6243684 | Stuart et al. | Jun 2001 | B1 |
6249807 | Shaw et al. | Jun 2001 | B1 |
6249809 | Bro | Jun 2001 | B1 |
6253173 | Ma | Jun 2001 | B1 |
6256620 | Jawahar et al. | Jul 2001 | B1 |
6260035 | Horvitz et al. | Jul 2001 | B1 |
6262730 | Horvitz et al. | Jul 2001 | B1 |
6263066 | Shtivelman et al. | Jul 2001 | B1 |
6263325 | Yoshida et al. | Jul 2001 | B1 |
6275806 | Pertrushin | Aug 2001 | B1 |
6278996 | Richardson et al. | Aug 2001 | B1 |
6282527 | Gounares et al. | Aug 2001 | B1 |
6282565 | Shaw et al. | Aug 2001 | B1 |
6304864 | Liddy et al. | Oct 2001 | B1 |
6307922 | Scherer | Oct 2001 | B1 |
6330554 | Altschuler et al. | Dec 2001 | B1 |
6337906 | Bugash et al. | Jan 2002 | B1 |
6343116 | Quinton et al. | Jan 2002 | B1 |
6356633 | Armstrong | Mar 2002 | B1 |
6356869 | Chapados et al. | Mar 2002 | B1 |
6366127 | Friedman et al. | Apr 2002 | B1 |
6370526 | Agrawal et al. | Apr 2002 | B1 |
6377944 | Busey et al. | Apr 2002 | B1 |
6381640 | Beck et al. | Apr 2002 | B1 |
6389124 | Schnarel et al. | May 2002 | B1 |
6393428 | Miller et al. | May 2002 | B1 |
6401061 | Zieman | Jun 2002 | B1 |
6405185 | Pechanek et al. | Jun 2002 | B1 |
6411692 | Scherer | Jun 2002 | B1 |
6411926 | Chang | Jun 2002 | B1 |
6415290 | Botts et al. | Jul 2002 | B1 |
6434230 | Gabriel | Aug 2002 | B1 |
6434550 | Warner et al. | Aug 2002 | B1 |
6442519 | Kanevsky et al. | Aug 2002 | B1 |
6449356 | Dezonno | Sep 2002 | B1 |
6449588 | Bowman-Amuah | Sep 2002 | B1 |
6449646 | Sikora et al. | Sep 2002 | B1 |
6451187 | Suzuki et al. | Sep 2002 | B1 |
6480599 | Ainslie et al. | Nov 2002 | B1 |
6493686 | Francone et al. | Dec 2002 | B1 |
6498921 | Ho et al. | Dec 2002 | B1 |
6519571 | Guheen et al. | Feb 2003 | B1 |
6519580 | Johnson et al. | Feb 2003 | B1 |
6519628 | Locascio | Feb 2003 | B1 |
6523021 | Monberg et al. | Feb 2003 | B1 |
6539419 | Beck et al. | Mar 2003 | B2 |
6546087 | Shaffer et al. | Apr 2003 | B2 |
6560590 | Shwe et al. | May 2003 | B1 |
6563921 | Williams et al. | May 2003 | B1 |
6567805 | Johnson et al. | May 2003 | B1 |
6571225 | Oles et al. | May 2003 | B1 |
6574599 | Lim et al. | Jun 2003 | B1 |
6581048 | Werbos | Jun 2003 | B1 |
6587558 | Lo | Jul 2003 | B2 |
6594684 | Hodjat et al. | Jul 2003 | B1 |
6598039 | Livowsky | Jul 2003 | B1 |
6604141 | Ventura | Aug 2003 | B1 |
6606598 | Holthouse et al. | Aug 2003 | B1 |
6614885 | Polcyn | Sep 2003 | B2 |
6615172 | Bennett et al. | Sep 2003 | B1 |
6618725 | Fukuda et al. | Sep 2003 | B1 |
6632249 | Pollock | Oct 2003 | B2 |
6633846 | Bennett et al. | Oct 2003 | B1 |
6643622 | Stuart et al. | Nov 2003 | B2 |
6650748 | Edwards et al. | Nov 2003 | B1 |
6652283 | Van Schaack et al. | Nov 2003 | B1 |
6658598 | Sullivan | Dec 2003 | B1 |
6665395 | Busey et al. | Dec 2003 | B1 |
6665640 | Bennett et al. | Dec 2003 | B1 |
6665644 | Kanevsky et al. | Dec 2003 | B1 |
6665655 | Ponte et al. | Dec 2003 | B1 |
6829603 | Chai et al. | Dec 2003 | B1 |
6745172 | Mancisidor et al. | Jan 2004 | B1 |
6694314 | Sullivan et al. | Feb 2004 | B1 |
6694482 | Arellano et al. | Feb 2004 | B1 |
6701311 | Biebesheimer et al. | Mar 2004 | B2 |
6704410 | McFarlane et al. | Mar 2004 | B1 |
6707906 | Ben-Chanoch | Mar 2004 | B1 |
6718313 | Lent et al. | Apr 2004 | B1 |
6721416 | Farrell | Apr 2004 | B1 |
6724887 | Eilbacher et al. | Apr 2004 | B1 |
6725209 | Iliff | Apr 2004 | B1 |
6732188 | Flockhart et al. | May 2004 | B1 |
6735572 | Landesmann | May 2004 | B2 |
6741698 | Jensen | May 2004 | B1 |
6741699 | Flockhart et al. | May 2004 | B1 |
6741959 | Kaiser | May 2004 | B1 |
6741974 | Harrison et al. | May 2004 | B1 |
6584185 | Nixon | Jun 2004 | B1 |
6754334 | Williams et al. | Jun 2004 | B2 |
6760727 | Schroeder et al. | Jul 2004 | B1 |
6766011 | Fromm | Jul 2004 | B1 |
6766320 | Wang et al. | Jul 2004 | B1 |
6771746 | Shambaugh et al. | Aug 2004 | B2 |
6771765 | Crowther et al. | Aug 2004 | B1 |
6772190 | Hodjat et al. | Aug 2004 | B2 |
6775378 | Villena et al. | Aug 2004 | B1 |
6778660 | Fromm | Aug 2004 | B2 |
6778951 | Contractor | Aug 2004 | B1 |
6795530 | Gilbert et al. | Sep 2004 | B1 |
6798876 | Bala | Sep 2004 | B1 |
6807544 | Morimoto et al. | Oct 2004 | B1 |
6819748 | Weiss et al. | Nov 2004 | B2 |
6819759 | Khuc et al. | Nov 2004 | B1 |
6823054 | Suhm et al. | Nov 2004 | B1 |
6829348 | Schroeder et al. | Dec 2004 | B1 |
6829585 | Grewal et al. | Dec 2004 | B1 |
6832263 | Polizzi et al. | Dec 2004 | B2 |
6836540 | Falcone et al. | Dec 2004 | B2 |
6839671 | Attwater et al. | Jan 2005 | B2 |
6842737 | Stiles | Jan 2005 | B1 |
6842748 | Warner et al. | Jan 2005 | B1 |
6842877 | Robarts et al. | Jan 2005 | B2 |
6845154 | Cave et al. | Jan 2005 | B1 |
6845155 | Elsey | Jan 2005 | B2 |
6845374 | Oliver et al. | Jan 2005 | B1 |
6847715 | Swartz | Jan 2005 | B1 |
6850612 | Johnson et al. | Feb 2005 | B2 |
6850923 | Nakisa et al. | Feb 2005 | B1 |
6859529 | Duncan et al. | Feb 2005 | B2 |
6871174 | Dolan et al. | Mar 2005 | B1 |
6871213 | Graham et al. | Mar 2005 | B1 |
6873990 | Oblinger | Mar 2005 | B2 |
6879685 | Peterson et al. | Apr 2005 | B1 |
6879967 | Stork | Apr 2005 | B1 |
6882723 | Peterson et al. | Apr 2005 | B1 |
6885734 | Eberle et al. | Apr 2005 | B1 |
6895558 | Loveland et al. | May 2005 | B1 |
6898277 | Meteer et al. | May 2005 | B1 |
6901397 | Moldenhauer et al. | May 2005 | B1 |
6904143 | Peterson et al. | Jun 2005 | B1 |
6907119 | Case et al. | Jun 2005 | B2 |
6910003 | Arnold et al. | Jun 2005 | B1 |
6910072 | Macleod Beck et al. | Jun 2005 | B2 |
6915246 | Gusler et al. | Jul 2005 | B2 |
6915270 | Young et al. | Jul 2005 | B1 |
6922466 | Peterson et al. | Jul 2005 | B1 |
6922689 | Shtivelman | Jul 2005 | B2 |
6924828 | Hirsch | Aug 2005 | B1 |
6925452 | Hellerstein et al. | Aug 2005 | B1 |
6928156 | Book et al. | Aug 2005 | B2 |
6931119 | Michelson et al. | Aug 2005 | B2 |
6931434 | Donoho et al. | Aug 2005 | B1 |
6934381 | Klein et al. | Aug 2005 | B1 |
6937705 | Godfrey et al. | Aug 2005 | B1 |
6938000 | Joseph et al. | Aug 2005 | B2 |
6941266 | Gorin et al. | Sep 2005 | B1 |
6941304 | Gainey et al. | Sep 2005 | B2 |
6944592 | Pickering | Sep 2005 | B1 |
6950505 | Longman et al. | Sep 2005 | B2 |
6950827 | Jung | Sep 2005 | B2 |
6952470 | Tioe et al. | Oct 2005 | B1 |
6956941 | Duncan et al. | Oct 2005 | B1 |
6959081 | Brown et al. | Oct 2005 | B2 |
6961720 | Nelken | Nov 2005 | B1 |
6965865 | Pletz et al. | Nov 2005 | B2 |
6970554 | Peterson et al. | Nov 2005 | B1 |
6970821 | Shambaugh et al. | Nov 2005 | B1 |
6975708 | Scherer | Dec 2005 | B1 |
6983239 | Epstein | Jan 2006 | B1 |
6987846 | James | Jan 2006 | B1 |
6993475 | McConnell et al. | Jan 2006 | B1 |
6999990 | Sullivan et al. | Feb 2006 | B1 |
7003079 | McCarthy et al. | Feb 2006 | B1 |
7003459 | Gorin et al. | Feb 2006 | B1 |
7007067 | Azvine et al. | Feb 2006 | B1 |
7035384 | Scherer | Apr 2006 | B1 |
7039165 | Saylor et al. | May 2006 | B1 |
7039166 | Peterson et al. | May 2006 | B1 |
7045181 | Yoshizawa et al. | May 2006 | B2 |
7047498 | Lui et al. | May 2006 | B2 |
7050976 | Packingham | May 2006 | B1 |
7050977 | Bennett | May 2006 | B1 |
7065188 | Mei et al. | Jun 2006 | B1 |
7068774 | Judkins et al. | Jun 2006 | B1 |
7076032 | Pirasteh et al. | Jul 2006 | B1 |
7085367 | Lang | Aug 2006 | B1 |
7092509 | Mears et al. | Aug 2006 | B1 |
7092888 | McCarthy et al. | Aug 2006 | B1 |
7096219 | Karch | Aug 2006 | B1 |
7099855 | Nelken et al. | Aug 2006 | B1 |
7107254 | Dumais et al. | Sep 2006 | B1 |
7110525 | Heller et al. | Sep 2006 | B1 |
7155158 | Iuppa et al. | Dec 2006 | B1 |
7158935 | Gorin et al. | Jan 2007 | B1 |
7213742 | Birch et al. | May 2007 | B1 |
20010010714 | Nemoto | Aug 2001 | A1 |
20010024497 | Campbell et al. | Sep 2001 | A1 |
20010041562 | Elsey et al. | Nov 2001 | A1 |
20010044800 | Han | Nov 2001 | A1 |
20010047261 | Kassan | Nov 2001 | A1 |
20010047270 | Gusick et al. | Nov 2001 | A1 |
20010049688 | Fratkina | Dec 2001 | A1 |
20010053977 | Schaefer | Dec 2001 | A1 |
20010054064 | Kannan | Dec 2001 | A1 |
20010056346 | Ueyama | Dec 2001 | A1 |
20020007356 | Rice et al. | Jan 2002 | A1 |
20020013692 | Chandhok et al. | Jan 2002 | A1 |
20020023144 | Linyard et al. | Feb 2002 | A1 |
20020032591 | Mahaffy et al. | Mar 2002 | A1 |
20020044296 | Skaanning | Apr 2002 | A1 |
20020046096 | Srinivasan et al. | Apr 2002 | A1 |
20020051522 | Merrow et al. | May 2002 | A1 |
20020055975 | Petrovykh | May 2002 | A1 |
20020062245 | Niu et al. | May 2002 | A1 |
20020062315 | Weiss et al. | May 2002 | A1 |
20020072921 | Boland et al. | Jun 2002 | A1 |
20020087325 | Lee et al. | Jul 2002 | A1 |
20020026435 | Wyss et al. | Aug 2002 | A1 |
20020104026 | Barra et al. | Aug 2002 | A1 |
20020111811 | Bares et al. | Aug 2002 | A1 |
20020116243 | Mancisidor et al. | Aug 2002 | A1 |
20020116698 | Lurie et al. | Aug 2002 | A1 |
20020118220 | Lui et al. | Aug 2002 | A1 |
20020123957 | Notarius et al. | Sep 2002 | A1 |
20020143548 | Korall et al. | Oct 2002 | A1 |
20020146668 | Burgin et al. | Oct 2002 | A1 |
20020156776 | Davallou | Oct 2002 | A1 |
20020161626 | Plante et al. | Oct 2002 | A1 |
20020161896 | Wen et al. | Oct 2002 | A1 |
20020168621 | Cook et al. | Nov 2002 | A1 |
20020169834 | Miloslavsky et al. | Nov 2002 | A1 |
20020174199 | Horvitz | Nov 2002 | A1 |
20020178022 | Anderson et al. | Nov 2002 | A1 |
20020184069 | Kosiba et al. | Dec 2002 | A1 |
20030004717 | Strom et al. | Jan 2003 | A1 |
20030007612 | Garcia | Jan 2003 | A1 |
20030028451 | Ananian | Feb 2003 | A1 |
20030031309 | Rupe et al. | Feb 2003 | A1 |
20030035531 | Brown et al. | Feb 2003 | A1 |
20030037177 | Sutton et al. | Feb 2003 | A1 |
20030046297 | Mason | Mar 2003 | A1 |
20030046311 | Baidya et al. | Mar 2003 | A1 |
20030084009 | Bigus et al. | May 2003 | A1 |
20030084066 | Waterman et al. | May 2003 | A1 |
20030095652 | Mengshoel et al. | May 2003 | A1 |
20030100998 | Brunner et al. | May 2003 | A2 |
20030108162 | Brown et al. | Jun 2003 | A1 |
20030108184 | Brown et al. | Jun 2003 | A1 |
20030115056 | Gusler et al. | Jun 2003 | A1 |
20030117434 | Hugh | Jun 2003 | A1 |
20030120502 | Robb et al. | Jun 2003 | A1 |
20030120653 | Brady et al. | Jun 2003 | A1 |
20030144055 | Guo et al. | Jul 2003 | A1 |
20030154120 | Freishtat et al. | Aug 2003 | A1 |
20030169870 | Stanford | Sep 2003 | A1 |
20030169942 | Pines et al. | Sep 2003 | A1 |
20030172043 | Guyon et al. | Sep 2003 | A1 |
20030177009 | Odinak et al. | Sep 2003 | A1 |
20030177017 | Boyer et al. | Sep 2003 | A1 |
20030187639 | Mills | Oct 2003 | A1 |
20030198321 | Polcyn | Oct 2003 | A1 |
20030204404 | Weldon et al. | Oct 2003 | A1 |
20030212654 | Harper et al. | Nov 2003 | A1 |
20030222897 | Moore et al. | Dec 2003 | A1 |
20030225730 | Warner et al. | Dec 2003 | A1 |
20030228007 | Kurosaki | Dec 2003 | A1 |
20030233392 | Forin et al. | Dec 2003 | A1 |
20030236662 | Goodman | Dec 2003 | A1 |
20040002502 | Banholzer et al. | Jan 2004 | A1 |
20040002838 | Oliver et al. | Jan 2004 | A1 |
20040005047 | Joseph et al. | Jan 2004 | A1 |
20040006478 | Alpdemir et al. | Jan 2004 | A1 |
20040010429 | Vedula et al. | Jan 2004 | A1 |
20040062364 | Dezonno et al. | Apr 2004 | A1 |
20040066416 | Knott et al. | Apr 2004 | A1 |
20040068497 | Rishel et al. | Apr 2004 | A1 |
20040081183 | Monza et al. | Apr 2004 | A1 |
20040093323 | Bluhm et al. | May 2004 | A1 |
20040117185 | Scarano et al. | Jun 2004 | A1 |
20040140630 | Beishline et al. | Jul 2004 | A1 |
20040141508 | Schoeneberger et al. | Jul 2004 | A1 |
20040148154 | Acero et al. | Jul 2004 | A1 |
20040162724 | Hill et al. | Aug 2004 | A1 |
20040162812 | Lane et al. | Aug 2004 | A1 |
20040176968 | Syed et al. | Sep 2004 | A1 |
20040181471 | Rogers | Sep 2004 | A1 |
20040181588 | Wang et al. | Sep 2004 | A1 |
20040193401 | Ringger et al. | Sep 2004 | A1 |
20040204940 | Alshavi et al. | Oct 2004 | A1 |
20040205112 | Margolus | Oct 2004 | A1 |
20040210637 | Loveland | Oct 2004 | A1 |
20040220772 | Cobble et al. | Nov 2004 | A1 |
20040226001 | Teegan et al. | Nov 2004 | A1 |
20040228470 | Williams et al. | Nov 2004 | A1 |
20040230689 | Loveland | Nov 2004 | A1 |
20040234051 | Quinton | Nov 2004 | A1 |
20040240629 | Quinton | Dec 2004 | A1 |
20040240636 | Quinton | Dec 2004 | A1 |
20040240639 | Colson et al. | Dec 2004 | A1 |
20040240659 | Gagle et al. | Dec 2004 | A1 |
20040243417 | Pitts et al. | Dec 2004 | A9 |
20040249636 | Applebaum et al. | Dec 2004 | A1 |
20040249650 | Freedman et al. | Dec 2004 | A1 |
20040252822 | Statham et al. | Dec 2004 | A1 |
20040260546 | Seo et al. | Dec 2004 | A1 |
20040260564 | Horvitz | Dec 2004 | A1 |
20040268229 | Paoli et al. | Dec 2004 | A1 |
20050002516 | Shtivelman | Jan 2005 | A1 |
20050021485 | Nodelman et al. | Jan 2005 | A1 |
20050021599 | Peters | Jan 2005 | A1 |
20050027495 | Matichuk | Feb 2005 | A1 |
20050027827 | Owhadi et al. | Feb 2005 | A1 |
20050047583 | Sumner et al. | Mar 2005 | A1 |
20050049852 | Chao | Mar 2005 | A1 |
20050050527 | McCrady et al. | Mar 2005 | A1 |
20050065789 | Yacoub et al. | Mar 2005 | A1 |
20050065899 | Li et al. | Mar 2005 | A1 |
20050066236 | Goeller et al. | Mar 2005 | A1 |
20050068913 | Tan et al. | Mar 2005 | A1 |
20050071178 | Beckstrom et al. | Mar 2005 | A1 |
20050083846 | Bahl | Apr 2005 | A1 |
20050084082 | Horvitz et al. | Apr 2005 | A1 |
20050091123 | Freishtat et al. | Apr 2005 | A1 |
20050091147 | Ingargiola et al. | Apr 2005 | A1 |
20050091219 | Karachale et al. | Apr 2005 | A1 |
20050097028 | Watanabe et al. | May 2005 | A1 |
20050097197 | Vincent | May 2005 | A1 |
20050105712 | Williams et al. | May 2005 | A1 |
20050114376 | Lane et al. | May 2005 | A1 |
20050125229 | Kurzweil | Jun 2005 | A1 |
20050125369 | Buck et al. | Jun 2005 | A1 |
20050125370 | Brennan et al. | Jun 2005 | A1 |
20050125371 | Bhide et al. | Jun 2005 | A1 |
20050125463 | Joshi et al. | Jun 2005 | A1 |
20050132094 | Wu | Jun 2005 | A1 |
20050135595 | Bushey et al. | Jun 2005 | A1 |
20050143628 | Dai et al. | Jun 2005 | A1 |
20050149520 | De Vries | Jul 2005 | A1 |
20050152531 | Hamilton, III et al. | Jul 2005 | A1 |
20050154591 | Lecoeuche | Jul 2005 | A1 |
20050160060 | Swartz et al. | Jul 2005 | A1 |
20050163302 | Mock et al. | Jul 2005 | A1 |
20050165803 | Chopra et al. | Jul 2005 | A1 |
20050171932 | Nandhra | Aug 2005 | A1 |
20050176167 | Lee | Aug 2005 | A1 |
20050177368 | Odinak | Aug 2005 | A1 |
20050177414 | Priogin et al. | Aug 2005 | A1 |
20050177601 | Chopra et al. | Aug 2005 | A1 |
20050187944 | Acheson et al. | Aug 2005 | A1 |
20050193102 | Horvitz | Sep 2005 | A1 |
20050195966 | Adar et al. | Sep 2005 | A1 |
20050198110 | Garms et al. | Sep 2005 | A1 |
20050203747 | Lecoeuche | Sep 2005 | A1 |
20050203760 | Gottumukkala | Sep 2005 | A1 |
20050203949 | Cabrera et al. | Sep 2005 | A1 |
20050204051 | Box et al. | Sep 2005 | A1 |
20050212807 | Premchandran | Sep 2005 | A1 |
20050228707 | Hendrickson | Oct 2005 | A1 |
20050228796 | Jung | Oct 2005 | A1 |
20050228803 | Farmer et al. | Oct 2005 | A1 |
20050232409 | Fain et al. | Oct 2005 | A1 |
20050246241 | Irizarry et al. | Nov 2005 | A1 |
20050251382 | Chang et al. | Nov 2005 | A1 |
20050256819 | Tibbs et al. | Nov 2005 | A1 |
20050256850 | Ma et al. | Nov 2005 | A1 |
20050256865 | Ma et al. | Nov 2005 | A1 |
20050267772 | Nielsen et al. | Dec 2005 | A1 |
20050270293 | Guo et al. | Dec 2005 | A1 |
20050273336 | Chang et al. | Dec 2005 | A1 |
20050273384 | Fraser | Dec 2005 | A1 |
20050273771 | Chang et al. | Dec 2005 | A1 |
20050278124 | Duffy et al. | Dec 2005 | A1 |
20050278177 | Gottesman | Dec 2005 | A1 |
20050278213 | Faihe | Dec 2005 | A1 |
20050288006 | Apfel | Dec 2005 | A1 |
20050288871 | Duffy et al. | Dec 2005 | A1 |
20050288981 | Elias et al. | Dec 2005 | A1 |
20060004845 | Kristiansen et al. | Jan 2006 | A1 |
20060010164 | Netz et al. | Jan 2006 | A1 |
20060010206 | Apacible et al. | Jan 2006 | A1 |
20060015390 | Rijsinghani et al. | Jan 2006 | A1 |
20060020692 | Jaffray et al. | Jan 2006 | A1 |
20060036445 | Horvitz | Feb 2006 | A1 |
20060036642 | Horvitz et al. | Feb 2006 | A1 |
20060041423 | Kline et al. | Feb 2006 | A1 |
20060041648 | Horvitz | Feb 2006 | A1 |
20060053204 | Sundararajan et al. | Mar 2006 | A1 |
20060059431 | Pahud | Mar 2006 | A1 |
20060069564 | Allison et al. | Mar 2006 | A1 |
20060069570 | Allison et al. | Mar 2006 | A1 |
20060069684 | Vadlamani et al. | Mar 2006 | A1 |
20060069863 | Palmer | Mar 2006 | A1 |
20060070081 | Wang | Mar 2006 | A1 |
20060070086 | Wang | Mar 2006 | A1 |
20060074732 | Shukla et al. | Apr 2006 | A1 |
20060074831 | Hyder et al. | Apr 2006 | A1 |
20060075024 | Zircher et al. | Apr 2006 | A1 |
20060080107 | Hill et al. | Apr 2006 | A1 |
20060080468 | Vadlamani et al. | Apr 2006 | A1 |
20060080670 | Lomet | Apr 2006 | A1 |
20060101077 | Warner et al. | May 2006 | A1 |
20060106743 | Horvitz | May 2006 | A1 |
20060109974 | Paden et al. | May 2006 | A1 |
20060122834 | Bennett | Jun 2006 | A1 |
20060122917 | Lokuge et al. | Jun 2006 | A1 |
20060149555 | Fabbrizio et al. | Jul 2006 | A1 |
20060161407 | Lanza et al. | Jul 2006 | A1 |
20060167696 | Chaar et al. | Jul 2006 | A1 |
20060167837 | Ramaswamy et al. | Jul 2006 | A1 |
20060178883 | Acero et al. | Aug 2006 | A1 |
20060182234 | Scherer | Aug 2006 | A1 |
20060190226 | Jojic et al. | Aug 2006 | A1 |
20060190253 | Hakkani-Tur et al. | Aug 2006 | A1 |
20060195321 | Deligne et al. | Aug 2006 | A1 |
20060195440 | Burges et al. | Aug 2006 | A1 |
20060198504 | Shemisa et al. | Sep 2006 | A1 |
20060200353 | Bennett | Sep 2006 | A1 |
20060206330 | Attwater et al. | Sep 2006 | A1 |
20060206336 | Gurram et al. | Sep 2006 | A1 |
20060206481 | Ohkuma et al. | Sep 2006 | A1 |
20060212286 | Pearson et al. | Sep 2006 | A1 |
20060212446 | Hammond et al. | Sep 2006 | A1 |
20060235861 | Yamashita et al. | Oct 2006 | A1 |
20070005531 | George et al. | Jan 2007 | A1 |
20070033189 | Levy et al. | Feb 2007 | A1 |
20070041565 | Williams et al. | Feb 2007 | A1 |
20070043571 | Michelini et al. | Feb 2007 | A1 |
20070043696 | Haas et al. | Feb 2007 | A1 |
20070063854 | Zhang et al. | Mar 2007 | A1 |
20070208579 | Peterson | Sep 2007 | A1 |
20090234784 | Buriano et al. | Sep 2009 | A1 |
Number | Date | Country
---|---|---
2248897 | Sep 1997 | CA |
2301664 | Jan 1999 | CA |
2485238 | Jan 1999 | CA |
0077175 | Apr 1983 | EP |
0700563 | Mar 1996 | EP |
0977175 | Feb 2000 | EP |
1191772 | Mar 2002 | EP |
1324534 | Jul 2003 | EP |
1424844 | Jun 2004 | EP |
1494499 | Jan 2005 | EP |
2343772 | May 2000 | GB |
10133847 | May 1998 | JP |
2002055695 | Feb 2002 | JP |
2002189483 | Jul 2002 | JP |
2002366552 | Dec 2002 | JP |
2002374356 | Dec 2002 | JP |
2004030503 | Jan 2004 | JP |
2004104353 | Apr 2004 | JP |
2004118457 | Apr 2004 | JP |
2004220219 | Aug 2004 | JP |
2004241963 | Aug 2004 | JP |
2004304278 | Oct 2004 | JP |
2005258825 | Sep 2005 | JP |
WO9215951 | Sep 1992 | WO |
WO9321587 | Oct 1993 | WO |
WO9502221 | Jan 1995 | WO |
WO9527360 | Oct 1995 | WO |
WO9904347 | Jan 1999 | WO |
WO9953676 | Oct 1999 | WO |
WO0018100 | Mar 2000 | WO |
WO0070481 | Nov 2000 | WO |
WO0073955 | Dec 2000 | WO |
WO0075851 | Dec 2000 | WO |
WO0104814 | Jan 2001 | WO |
WO0133414 | May 2001 | WO |
WO0135617 | May 2001 | WO |
WO0137136 | May 2001 | WO |
WO0139028 | May 2001 | WO |
WO0139082 | May 2001 | WO |
WO0139086 | May 2001 | WO |
WO0182123 | Nov 2001 | WO |
WO0209399 | Jan 2002 | WO
WO0219603 | Mar 2002 | WO |
WO0227426 | Apr 2002 | WO |
WO0261730 | Aug 2002 | WO |
WO0273331 | Sep 2002 | WO |
WO03009175 | Jan 2003 | WO |
WO03021377 | Mar 2003 | WO |
WO03069874 | Aug 2003 | WO |
WO2004059805 | Jul 2004 | WO |
WO2004081720 | Sep 2004 | WO |
WO2004091184 | Oct 2004 | WO |
WO2004107094 | Dec 2004 | WO |
WO2005006116 | Jan 2005 | WO |
WO2005011240 | Mar 2005 | WO |
WO2005069595 | Jul 2005 | WO |
WO2005013094 | Oct 2005 | WO |
WO2006050503 | May 2006 | WO |
WO2006062854 | Jun 2006 | WO |
WO2007033300 | Mar 2007 | WO |
Number | Date | Country
---|---|---
60892299 | Mar 2007 | US
60747896 | May 2006 | US