The present disclosure generally relates to predictive analytics that may utilize statistical techniques such as modeling, machine learning, data mining and other techniques for analyzing data to make predictions about future events. For example, predictive analytics may be used in a variety of disciplines such as actuarial science, marketing, financial services, insurance, telecommunications, retail, travel, healthcare, pharmaceuticals and other fields.
The subject technology provides for a computer-implemented method, the method including: specifying a business problem to determine a probability of an event occurring in which the business problem includes a constraint; selecting a data source for a predictive model associated with a predictive algorithm in which the predictive model includes one or more queries and parameters; determining a set of transformations based on the queries and parameters for at least a subset of data from the data source to be processed by the predictive algorithm; identifying a set of patterns based on the set of transformations for at least the subset of data from the data source; and providing a trained predictive model including the determined set of patterns, the set of transformations, and the associated predictive algorithm for solving the specified business problem.
The subject technology provides for a computer-implemented method, the method including: selecting a data source for a trained predictive model in which the trained predictive model includes a set of patterns, a set of transformations, and is associated with a predictive algorithm for solving a business problem; applying the set of patterns according to the predictive algorithm to return a set of data from the data source; performing the set of transformations on the set of data; and providing a score indicating a probability of an event specified by the business problem based on the predictive algorithm on the set of data.
The subject technology provides for a computer-implemented method, the method including: receiving a score corresponding to a predictive model for solving a business problem; converting the score into a semantically meaningful format for an end-user; and providing the converted score to the end-user.
Yet another aspect of the subject technology provides a system. The system includes one or more processors, and a memory including instructions stored therein, which when executed by the one or more processors, cause the processors to perform operations including: specifying a business problem to determine a probability of an event occurring in which the business problem includes a constraint; selecting a data source for a predictive model associated with a predictive algorithm in which the predictive model includes one or more queries and parameters; determining a set of transformations based on the queries and parameters for at least a subset of data from the data source to be processed by the predictive algorithm; identifying a set of patterns based on the set of transformations for at least the subset of data from the data source; and providing a trained predictive model including the determined set of patterns, the set of transformations, and the associated predictive algorithm for solving the specified business problem.
The subject technology further provides for a non-transitory machine-readable medium comprising instructions stored therein, which when executed by a machine, cause the machine to perform operations including: specifying a business problem to determine a probability of an event occurring in which the business problem includes a constraint; selecting a data source for a predictive model associated with a predictive algorithm in which the predictive model includes one or more queries and parameters; determining a set of transformations based on the queries and parameters for at least a subset of data from the data source to be processed by the predictive algorithm; identifying a set of patterns based on the set of transformations for at least the subset of data from the data source; and providing a trained predictive model including the determined set of patterns, the set of transformations, and the associated predictive algorithm for solving the specified business problem.
It is understood that other configurations of the subject technology will become readily apparent from the following detailed description, where various configurations of the subject technology are shown and described by way of illustration. As will be realized, the subject technology is capable of other and different configurations and its several details are capable of modification in various other respects, all without departing from the scope of the subject technology. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.
The features of the subject technology are set forth in the appended claims. However, for purpose of explanation, several configurations of the subject technology are set forth in the following figures.
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology may be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a thorough understanding of the subject technology. However, the subject technology is not limited to the specific details set forth herein and may be practiced without these specific details. In some instances, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
Predictive analytics may utilize statistical techniques such as modeling, machine learning, data mining and other techniques for analyzing data to make predictions about future events. For instance, predictive models may be utilized to identify patterns found in historical data, transactional data and other types of data to predict trends, behavior patterns, future events, etc. However, existing implementations of applying predictive modeling may not provide results in a manner that is easily interpreted and meaningful to an end-user. As a result, the end-user may have difficulty understanding the results provided in such implementations. Existing predictive modeling implementations may also neglect to consider restrictions specified in a given business problem, resulting in a solution that is not relevant to the business problem. Further, predictive models that are provided for a particular enterprise may not support reutilization of predictive models for other data sets associated with other enterprises or end-users.
As described herein, a machine learning semantic model (MLSM) functions as a bridge from raw data to a predictive analytics solution and has the following properties:
In one example, a data set contains an input data schema and transformation operations to create a case set from a data source. A data set will typically be mapped to a data source, and can be remapped to an arbitrary data source. The data set will continue to contain all models that have been created on the case set (a “case”). A predictive model is contained by a data set. Thus, a predictive model is an instantiation of an MLSM against a particular predictive algorithm, and operates on cases created by the MLSM.
In some configurations, the computing system 110 includes a machine learning semantic model (MLSM) server for applying predictive models on one or more data sources. One or more client devices or systems may access the computing system 110 in order to generate predictive models for solving business problems on a data source(s).
As further illustrated in
As illustrated in the example of
Although the example shown in
The process begins at 205 by specifying a business problem to determine a probability of an event occurring in which the business problem includes a constraint. The business problem may attempt to determine a likelihood of an occurrence of an event and the constraint may specify a set of conditions that have to be met for the event. For instance, the conditions may include a specified budget, a cost scenario, a ratio of a number of false positives and/or false negatives that occur in a given model, etc. By way of example, the business problem can determine potential customers given a budget constraint, determine potential patients that are likely to suffer a medical illness given a length of stay, determine an incident of a fraudulent transaction given a number of transactions over a period of time, etc. Other types of business problems may be considered and still be within the scope of the subject technology.
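As an illustration only (not part of the disclosed method), a business problem and its constraint might be captured in a simple data structure before any data source is selected; the field names and values below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class BusinessProblem:
    """Hypothetical container for a business problem and its constraint."""
    description: str           # what the model should predict
    target_event: str          # the event whose probability is estimated
    constraints: dict = field(default_factory=dict)  # e.g., budget, error-rate limits

# Hypothetical example: find likely purchasers under a marketing budget constraint.
problem = BusinessProblem(
    description="Identify customers likely to purchase a product",
    target_event="purchase",
    constraints={"budget_usd": 50_000, "max_false_positive_rate": 0.1},
)
print(problem)
```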
The process at 210 selects a data source for a predictive model associated with a predictive algorithm. In one example, the predictive model includes one or more queries and parameters associated with the queries for processing data from the data source. Some examples of a data source may include a client data source such as data tables that are pushed to the MLSM server, or a server data source that is pulled from an external source such as a database. A parameter may specify a value, values, a range of values, etc., that a query attempts to match from querying data from the data source.
The process at 215 determines a set of transformations based on the queries and parameters for at least a subset of data from the data source to be processed by the predictive algorithm. The subset of data may be related to a case of data for the predictive algorithm. The set of transformations may include 1) a physical transformation, 2) a data space or distribution modification transformation, or 3) a business problem transformation. An example of a physical transformation includes binarization or encoding of data, such as categorical variables, into a format that is accessible by the predictive algorithm. An example of a data space transformation may be a mathematical operation(s) such as a logarithm performed on numerical data (e.g., price of a product, etc.) that reshapes the data for the predictive algorithm. In some configurations, the data space transformation is automatically performed based on the requirements of the actual predictive algorithm. For example, one algorithm identified by the MLSM may not accept numerical values, in which case the system will automatically convert such values to binned (e.g., categorical) values while not transforming the same values for algorithms that do accept numerical values. Similarly, an MLSM may define an algorithm that does not accept categorical values. In one example, the system will convert a categorical value into a series of numerical values where, for each category, a value of 0 means that the category was not observed and a value of 1 means that the category was observed. Algorithms that accept categorical values will not have the input transformed in this manner in some implementations.
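A minimal Python sketch of the data space transformations described above, assuming a pandas-style tabular case set; the function name and the algorithm-capability flags are illustrative rather than part of the MLSM itself. Numeric values are binned only when the algorithm cannot accept numbers, and categorical values are converted to 0/1 indicator columns only when the algorithm cannot accept categories.

```python
import pandas as pd

def prepare_column(values: pd.Series, accepts_numeric: bool, accepts_categorical: bool) -> pd.DataFrame:
    """Apply only the transformations a given algorithm requires (illustrative)."""
    if pd.api.types.is_numeric_dtype(values):
        if accepts_numeric:
            return values.to_frame()                    # pass numeric data through unchanged
        # Algorithm cannot consume numbers: discretize into categorical bins.
        return pd.cut(values, bins=4).astype(str).to_frame()
    if accepts_categorical:
        return values.to_frame()                        # pass categories through unchanged
    # Algorithm cannot consume categories: one-hot encode (0 = not observed, 1 = observed).
    return pd.get_dummies(values, prefix=values.name).astype(int)

prices = pd.Series([9.99, 150.0, 42.5, 3.2], name="price")
colors = pd.Series(["red", "blue", "red", "green"], name="color")
print(prepare_column(prices, accepts_numeric=False, accepts_categorical=True))
print(prepare_column(colors, accepts_numeric=True, accepts_categorical=False))
```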
An example of a business problem transformation may include grouping data that is more relevant to the objectives of the business problem. For example, customers that fall within zip codes or geographical areas close to a business of interest may be grouped together in a corresponding bucket, while remaining customers in other zip codes or geographical areas may be grouped into another bucket for the predictive algorithm. The aforementioned types of transformations may include unary, binary or n-ary operators that are performed on data from the data source. In this manner, the process may provide data that is meaningful in a machine learning space associated with the predictive algorithm.
Another type of transformation that can be included involves one or more operations performed on an entirety of data available from a given data source. For instance, a row of data may include a column of data that is considered invalid (e.g., an age of a person that is out of range such as ‘999’). Thus, an example transformation may be provided that deletes or ignores the row in the data from the data source. Additionally, in an example in which a predictive model is predicting a rare event, a transformation may be provided that rebalances, amplifies, or makes more statistically prominent a portion(s) of the data according to the requirements of the predictive algorithm. For instance, a predictive algorithm that predicts instances of fraudulent transactions, which may be statistically insignificant among a set of data of an arbitrary size, may perform a rebalancing technique that amplifies the statistical significance of fraudulent data and reduces the statistical significance of instances of non-fraudulent data.
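The following sketch illustrates the two whole-data-set transformations mentioned above: dropping rows with an out-of-range age and rebalancing a rare event by oversampling it. The column names, age threshold, and oversampling approach are assumptions made for illustration only.

```python
import pandas as pd

def clean_and_rebalance(df: pd.DataFrame, label_col: str, max_age: int = 120) -> pd.DataFrame:
    """Drop rows with invalid ages, then oversample the rare class (illustrative)."""
    # Delete/ignore rows whose 'age' column holds an out-of-range value such as 999.
    valid = df[(df["age"] >= 0) & (df["age"] <= max_age)]
    # Rebalance: duplicate minority-class rows so the rare event (e.g., fraud)
    # becomes more statistically prominent for algorithms sensitive to class skew.
    counts = valid[label_col].value_counts()
    minority = counts.idxmin()
    extra = int(counts.max() - counts.min())
    boosted = valid[valid[label_col] == minority].sample(n=extra, replace=True, random_state=0)
    return pd.concat([valid, boosted], ignore_index=True)

transactions = pd.DataFrame({
    "age":      [34, 999, 52, 41, 29],
    "is_fraud": [0,  0,   0,  1,  0],
})
print(clean_and_rebalance(transactions, "is_fraud"))
```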
The process at 220 identifies a set of patterns based on the set of transformations for at least the subset of data from the data source. For instance, the process may determine patterns and correlations by scanning through the data according to the queries and parameters included in the predictive model. In this regard, one or more machine learning techniques may be utilized to identify patterns in the subset of data. By way of example, a neural network, logistic regression, linear regression, decision tree, naive Bayes classifier, Bayesian network, etc., may be utilized to determine patterns. Other examples may include rule systems, support vector machines, genetic algorithms, k-means clustering, expectation-maximization clustering, forecasting, and association rules. Other types of techniques or combinations of techniques may be utilized to determine patterns in at least the subset of data and still be within the scope of the subject technology. By way of example, for a predictive model that attempts to predict which patients will have a high probability of failure in surgery, the process may determine that a patient has a high probability of failure in surgery if the following set of characteristics is identified in the data: 1) beyond a certain age, 2) overweight, 3) on anti-depressants, and 4) diabetic. An identified pattern may comprise a set of rules, a tree structure, or other type of data structure.
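As one hedged example of how such a pattern might be identified, the sketch below trains a decision tree (one of the techniques listed above) on a tiny, fabricated patient case set and prints the learned rules; the data, column names, and thresholds are invented solely for illustration.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical patient case set; column names are illustrative only.
cases = pd.DataFrame({
    "age":             [82, 45, 77, 30, 68, 85],
    "overweight":      [1,  0,  1,  0,  1,  1],
    "antidepressants": [1,  0,  1,  0,  0,  1],
    "diabetic":        [1,  0,  1,  0,  1,  1],
    "surgery_failed":  [1,  0,  1,  0,  0,  1],
})

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(cases.drop(columns="surgery_failed"), cases["surgery_failed"])

# The learned tree is one possible representation of an identified pattern,
# e.g., older, overweight, diabetic patients on anti-depressants fail more often.
print(export_text(tree, feature_names=list(cases.columns[:-1])))
```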
The process at 225 provides a trained predictive model including the determined set of patterns, the set of transformations, and the associated predictive algorithm for solving the specified business problem. The predictive algorithm may utilize queries, parameters for the queries and one or more machine learning techniques for solving the business problem. The process then ends. Although the above discussion applies to a single predictive model, the process in
The process at 305 selects a data source for a trained predictive model in which the trained predictive model includes a set of patterns, a set of transformations, and is associated with a predictive algorithm for solving a business problem. As mentioned before, the predictive algorithm may utilize queries, parameters for the queries, and machine learning techniques for identifying patterns in the data from the data source. In one example, the trained predictive model corresponds with a trained predictive model described at 225 in
The process at 310 applies the set of patterns according to the predictive algorithm to return a set of data from the data source. The process at 315 performs the set of transformations on the set of data. For example, in a case in which the predictive algorithm requires an attribute for a length of stay, a transformation may be performed that computes the length of stay from attributes corresponding to a release date and an admission date. As described before in
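A brief sketch of the length-of-stay transformation mentioned above, assuming the admission and release dates are available as columns in a tabular data set (the column names are illustrative):

```python
import pandas as pd

# Illustrative derived attribute: compute 'length_of_stay' (in days) from an
# admission date and a release date when the algorithm expects that attribute.
visits = pd.DataFrame({
    "admission_date": pd.to_datetime(["2012-01-03", "2012-02-10"]),
    "release_date":   pd.to_datetime(["2012-01-09", "2012-02-11"]),
})
visits["length_of_stay"] = (visits["release_date"] - visits["admission_date"]).dt.days
print(visits)
```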
As mentioned before, one type of transformation among the set of transformations may ignore or delete a row of data that is considered invalid (e.g., an invalid age). However, in the context of scoring the predictive model, such a type of transformation may not be performed in some instances. For example, if the predictive model is asking whether a particular person may be considered a likely purchaser of a product, the process may not perform the transformation that deletes the data corresponding to the new customer even if the customer's data is invalid (e.g., the customer's age is ‘999’). In this manner, the subject technology provides different ways of processing data during the training and scoring processes because these processes are handled separately by the MLSM server.
After performing the set of transformations, the process at 320 provides a score indicating a probability of an event specified by the business problem based on the predictive algorithm on the set of data. By way of example, the process performs the predictive algorithm on the set of data to provide a probability indicating a likelihood of a patient having a heart attack, a likelihood that a transaction is fraudulent, a likelihood that a customer may purchase a product, etc., for a corresponding predictive model and business problem. Although the discussion of
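The scoring step itself might look like the following sketch, in which a logistic regression stands in for whatever predictive algorithm the trained model is bound to; the features, data, and model choice are assumptions made for illustration.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Illustrative scoring: a previously trained model returns a probability for the
# event named by the business problem (here, a hypothetical heart-attack event).
train_x = pd.DataFrame({"age": [82, 45, 77, 30], "diabetic": [1, 0, 1, 0]})
train_y = pd.Series([1, 0, 1, 0], name="heart_attack")
model = LogisticRegression().fit(train_x, train_y)

new_case = pd.DataFrame({"age": [79], "diabetic": [1]})
score = model.predict_proba(new_case)[0, 1]   # probability of the event occurring
print(f"probability of event: {score:.2f}")
```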
In some instances, interpreting the score provided at 320 in
The process begins at 405 by receiving a score corresponding to a predictive model for solving a business problem. The received score may correspond with a score provided at 320 in the process of
The process at 410 converts the score into a semantically meaningful format for an end-user. In this regard, the process at 410 may perform a set of operations including assigning a label or labels to the score based on a set of conditions. In one example, based on a cost function or a constraint specified by the business problem, the score may be labeled accordingly. By way of example, for a predictive model that predicts a patient's probability of having a heart attack, the process may 1) label a given score with a value greater than 0.9 as “very high,” 2) label the score with a value between 0.6 and 0.9 as “high”, 3) label the score with a value between 0.4 and 0.6 as “medium”, or 4) label the score with a value lower than 0.4 as “low.” In this fashion, the process may assign a label to the score that is meaningful to the end-user. Other types of labels and descriptions may be assigned to the score and still be within the scope of the subject technology.
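Using the thresholds from the heart-attack example above, a conversion from a raw score to a semantically meaningful label could be sketched as follows:

```python
def label_score(score: float) -> str:
    """Map a raw probability to a label meaningful to an end-user (thresholds from the example above)."""
    if score > 0.9:
        return "very high"
    if score >= 0.6:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"

for s in (0.95, 0.72, 0.45, 0.10):
    print(s, "->", label_score(s))
```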
The process at 415 provides the converted score to an end-user. For instance, the converted score may be provided for display with its assigned label. The process then ends. In this manner, the process in
In addition to the above described processes for training, scoring and post-processing a predictive model, the MLSM server may publish the predictive model so that another end-user or enterprise may utilize the published predictive model for their own data, modify the predictive model, and then generate a new predictive model tailored to the data and particular needs of the other end-user or enterprise. The published predictive model may be in a data format such as XML or a compressed form of XML in one example.
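A minimal sketch of publishing a predictive model as XML and as compressed XML using Python's standard library; the element names, the file name, and the model contents below are illustrative, not a schema defined by the subject technology.

```python
import gzip
import xml.etree.ElementTree as ET

# Build an illustrative XML description of a trained predictive model.
model = ET.Element("predictive_model", name="readmission_risk")
ET.SubElement(model, "algorithm").text = "decision_tree"
ET.SubElement(model, "transformation").text = "length_of_stay = release_date - admission_date"
ET.SubElement(model, "pattern").text = "age > 80 AND diabetic AND obese"

xml_bytes = ET.tostring(model, encoding="utf-8")
with gzip.open("published_model.xml.gz", "wb") as f:   # compressed form of the XML
    f.write(xml_bytes)
print(xml_bytes.decode("utf-8"))
```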
Some configurations of the subject technology allow a set of data that is semantically similar to the data utilized in training to be projected over the patterns identified during training. As described before (e.g., in
By way of example, if a pattern is identified during training that indicates people with blue eyes and dark hair tend to drive red cars, then the MLSM server may project all customers in the data set over the identified pattern to determine which customers fall within the identified pattern. These customers may then be grouped in a category associated with the detected pattern. Further, for the customers that are grouped according to the identified pattern, other attributes may be determined such as an average income or average age, other demographic information, etc., within that group of customers that were not originally included during the training process.
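The projection described above could be sketched as a simple filter-and-aggregate over the full customer data set; the pattern, the customer records, and the derived attributes are fabricated for illustration.

```python
import pandas as pd

# Illustrative projection: apply a pattern found during training ("blue eyes and
# dark hair") to the full customer set, then derive attributes of that group
# (average income, average age) that were not part of the training process.
customers = pd.DataFrame({
    "eye_color":  ["blue", "brown", "blue", "green", "blue"],
    "hair_color": ["dark", "dark", "light", "dark", "dark"],
    "income":     [55_000, 72_000, 48_000, 61_000, 83_000],
    "age":        [34, 51, 29, 44, 38],
})

in_pattern = (customers["eye_color"] == "blue") & (customers["hair_color"] == "dark")
group = customers[in_pattern]
print("customers matching the pattern:", len(group))
print("average income:", group["income"].mean())
print("average age:", group["age"].mean())
```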
In another example, the data may be scanned to detect different single attributes such as state, country, or age during the training process of the predictive model. After patterns are identified during training, it may be determined that groups of people share a detected pattern. Thus, each group is an entity that is discovered during the training of a predictive model.
In yet another example, a tree may be built for a predictive model in which one or more branches of the tree indicate different attributes for an event. For instance, in an example predictive model that predicts a likelihood of a person having a heart attack, a branch of this tree may indicate that if a person is over eighty (80) years old, diabetic, and obese then that person is likely to have a heart attack. The subject technology may then analyze branches of the tree to determine which instances resulted in a fatality. Subsequently, it may be determined that the group of people with the most instances of heart attacks did not always experience a fatality. Consequently, an enterprise (e.g., hospital) may be able to divert resources that were previously assigned to people with the most instances of heart attacks over to a new group of people in which a likelihood of a fatality is much greater, which may result in a more efficient usage of the enterprise's resources.
In view of the above, the patterns identified during training enable the MLSM server to project new data for the data set that was not discerned by identifying patterns alone during the training process of the predictive model.
Some configurations are implemented as software processes that include one or more application programming interfaces (APIs) in an environment with calling program code interacting with other program code being called through the one or more interfaces. Various function calls, messages or other types of invocations, which can include various kinds of parameters, can be transferred via the APIs between the calling program and the code being called. In addition, an API can provide the calling program code the ability to use data types or classes defined in the API and implemented in the called program code.
As illustrated in
In another example, the end-user may utilize the spreadsheet application for processing data located in a different location, such as an SQL server provided in a data system 540 that provides data 545. In this example, the end-user may utilize the spreadsheet application to point to the data located on the SQL server and then send one or more API calls provided by an API 520 in order to instruct the MLSM server to apply a trained predictive model on the data 545 on the SQL server. The results of applying the predictive model are then stored on the SQL server. In further detail, once the MLSM server receives the API calls from the end-user, the MLSM server pushes the predictive model and some custom code into the SQL server so that the SQL server may execute the desired commands or functions. In one example, the SQL server may allow for custom code to be executed via support of extended procedures that provide functions for applying the predictive model. As a result, the data is not required to leave the domain of the SQL server when applying the predictive model.
Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a machine readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of machine readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The machine readable media does not include carrier waves and electronic signals passing wirelessly or over wired connections.
In this specification, the term “software” is meant to include firmware residing in read-only memory and/or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some implementations, multiple software components can be implemented as sub-parts of a larger program while remaining distinct software components. In some implementations, multiple software components can also be implemented as separate programs. Finally, a combination of separate programs that together implement a software component(s) described here is within the scope of the subject technology. In some implementations, the software programs, when installed to operate on one or more systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
A computer program (also known as a program, software, software application, script, or code) can be written in a form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in some form, including as a standalone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The following discussion relates to examples of user interfaces in which configurations of the subject technology may be implemented. More specifically, the following examples relate to creating a predictive model that predicts whether a patient will be readmitted to a hospital based on a length of stay. In some configurations, the predictive model, after being created, may be applied on a new set of data.
In some configurations, the quality of hospital care may be measured, in part, by a number of patients that are readmitted to a hospital after a previous hospital stay. Hospital readmissions may be considered wasteful spending for the hospital. Thus, identifying which patients are likely to be readmitted may be beneficial to improving the quality of care at the hospital. A readmission can be defined as any admission to the same hospital occurring within a predetermined number of days (e.g., 3, 7, 15, 30 days, etc.) after discharge from the initial visit in some examples. In this regard, the subject technology may be utilized to create a predictive model to determine potential patients that are likely to suffer a medical illness given a length of stay and/or when a set of characteristics are identified in data pertaining to a patient(s).
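A hedged sketch of flagging readmissions under this definition, assuming each row of a tabular data set records one hospital visit with admission and discharge dates (the column names and window are illustrative):

```python
import pandas as pd

# An admission counts as a readmission when it occurs within `window_days`
# of the same patient's previous discharge.
window_days = 30
admissions = pd.DataFrame({
    "patient_id":     [1, 1, 2, 2],
    "admit_date":     pd.to_datetime(["2012-01-02", "2012-01-20", "2012-03-01", "2012-06-01"]),
    "discharge_date": pd.to_datetime(["2012-01-05", "2012-01-25", "2012-03-04", "2012-06-07"]),
})

admissions = admissions.sort_values(["patient_id", "admit_date"])
prev_discharge = admissions.groupby("patient_id")["discharge_date"].shift(1)
gap_days = (admissions["admit_date"] - prev_discharge).dt.days
admissions["is_readmission"] = gap_days.le(window_days)   # NaN gaps (first visits) become False
print(admissions)
```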
In some configurations, the subject technology may provide a tool(s) (e.g., plugin, extension, software component, etc.) that extends the functionality of a given spreadsheet application. As illustrated in
The following description describes an example system in which aspects of the subject technology can be implemented.
The bus 2005 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the system 2000. For instance, the bus 2005 communicatively connects the processing unit(s) 2010 with the read-only memory 2020, the system memory 2015, and the storage device 2025.
From these various memory units, the processing unit(s) 2010 retrieves instructions to execute and data to process in order to execute the processes of the subject technology. The processing unit(s) can be a single processor or a multi-core processor in different implementations.
The read-only-memory (ROM) 2020 stores static data and instructions that are needed by the processing unit(s) 2010 and other modules of the system 2000. The storage device 2025, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the system 2000 is off. Some implementations of the subject technology use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the storage device 2025.
Other implementations use a removable storage device (such as a flash drive, a floppy disk, and its corresponding disk drive) as the storage device 2025. Like the storage device 2025, the system memory 2015 is a read-and-write memory device. However, unlike the storage device 2025, the system memory 2015 is a volatile read-and-write memory, such as random access memory. The system memory 2015 stores some of the instructions and data that the processor needs at runtime. In some implementations, the subject technology's processes are stored in the system memory 2015, the storage device 2025, and/or the read-only memory 2020. For example, the various memory units include instructions for processing multimedia items in accordance with some implementations. From these various memory units, the processing unit(s) 2010 retrieves instructions to execute and data to process in order to execute the processes of some implementations.
The bus 2005 also connects to the optional input and output interfaces 2030 and 2035. The optional input interface 2030 enables the user to communicate information and select commands to the system. The optional input interface 2030 can interface with alphanumeric keyboards and pointing devices (also called “cursor control devices”). The optional output interface 2035 can provide display images generated by the system 2000. The optional output interface 2035 can interface with printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some implementations can interface with devices such as a touchscreen that functions as both input and output devices.
Finally, as shown in
The functions described above can be implemented in digital electronic circuitry, in computer software, firmware or hardware. The techniques can be implemented using one or more computer program products. Programmable processors and computers can be included in or packaged as mobile devices. The processes and logic flows can be performed by one or more programmable processors and by one or more programmable logic circuits. General and special purpose computing devices and storage devices can be interconnected through communication networks.
Some implementations include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, optical or magnetic media, and floppy disks. The computer-readable media can store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessors or multi-core processors that execute software, some implementations are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some implementations, such integrated circuits execute instructions that are stored on the circuit itself.
As used in this specification and the claims of this application, the terms "computer", "server", "processor", and "memory" all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms "display" or "displaying" mean displaying on an electronic device. As used in this specification and the claims of this application, the terms "computer readable medium" and "computer readable media" are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude wireless signals, wired download signals, and other ephemeral signals.
To provide for interaction with a user, implementations of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be a form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in a form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.
Configurations of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or a combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by a form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some configurations, a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device). Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server.
It is understood that a specific order or hierarchy of steps in the processes disclosed is an illustration of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes can be rearranged, or that all illustrated steps be performed. Some of the steps can be performed simultaneously. For example, in certain circumstances, multitasking and parallel processing can be advantageous. Moreover, the separation of various system components in the configurations described above should not be understood as requiring such separation in all configurations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
The previous description is provided to enable a person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein can be applied to other aspects. Reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. Pronouns in the masculine (e.g., his) include the feminine and neuter gender (e.g., her and its) and vice versa. Headings and subheadings, if any, are used for convenience only and do not limit the subject technology.
A phrase such as an “aspect” does not imply that such aspect is essential to the subject technology or that such aspect applies to all configurations of the subject technology. A disclosure relating to an aspect can apply to all configurations, or one or more configurations. A phrase such as an aspect can refer to one or more aspects and vice versa. A phrase such as a “configuration” does not imply that such configuration is essential to the subject technology or that such configuration applies to all configurations of the subject technology. A disclosure relating to a configuration can apply to all configurations, or one or more configurations. A phrase such as a configuration can refer to one or more configurations and vice versa.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims.
The present application claims the benefit of priority under 35 U.S.C. §119 from U.S. Provisional Patent Application Ser. No. 61/682,716 entitled “MACHINE LEARNING SEMANTICS MODEL,” filed on Aug. 13, 2012, the disclosure of which is hereby incorporated by reference in its entirety for all purposes.