Embodiments disclosed herein relate generally to managing inference models. More particularly, embodiments disclosed herein relate to systems and methods to manage latent bias in tree based inference models.
Computing devices may provide computer implemented services. The computer implemented services may be used by users of the computing devices and/or devices operably connected to the computing devices. The computer implemented services may be performed with hardware components such as processors, memory modules, storage devices, and communication devices. The operation of these components may impact the performance of the computer implemented services.
Embodiments disclosed herein are illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements.
Various embodiments will be described with reference to details discussed below, and the accompanying drawings will illustrate the various embodiments. The following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of various embodiments. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments disclosed herein.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in conjunction with the embodiment can be included in at least one embodiment. The appearances of the phrases “in one embodiment” and “in an embodiment” in various places in the specification do not necessarily all refer to the same embodiment.
In general, embodiments disclosed herein relate to methods and systems for providing computer implemented services. The computer implemented services may be provided using inferences obtained from inference models.
The quality of the computer implemented services may depend on the quality of the inferences provided by the inference models. The quality of the inferences provided by the inference models may depend on the source of, type of, and/or quantity of training data used to obtain the inference models, the manner in which inference models are configured using the training data, and/or other factors.
Latent bias may be introduced into inference models from training data used to train the inference models. The latent bias may cause the inference models to exhibit latent bias in the inferences provided by the inference models. These inferences may cause undesirable impacts on the computer implemented services performed using such inferences.
To reduce latent bias exhibited by inference models, a training procedure may be implemented that takes into account the potential for trained inference models to exhibit latent bias. The training process may proactively attempt to reduce the likelihood of trained models exhibiting latent bias.
The inference models may be tree based models, and the training procedure may utilize a splitting rule that incentivizes predictive power for desired labels and disincentivizes predictive power for bias features. By training the tree based models using a splitting rule with these incentives, the resulting trained inference models may be less likely to exhibit latent bias with respect to the bias features.
Once obtained, the inference models may be used to generate inferences. The inferences may be used to provide the computer implemented services. Accordingly, by providing inference models that are less likely to exhibit latent bias in generated inferences, the computer implemented services may be more likely to be provided in a desirable manner. Thus, embodiments disclosed herein may address, among others, the technical problem of latent bias exhibited by inference models. By training inference models as disclosed herein, resulting inference models may be less likely to exhibit latent bias.
In an embodiment, a method for providing computer implemented services using inference models is provided. The method may include identifying an occurrence of a condition that indicates an inference is necessary to provide the computer implemented services; based on the occurrence: obtaining an inference model of the inference models, the inference model being a tree based inference model based on a splitting rule that partitions training data used to obtain the inference model for information gain: for labels of the training data, and adversely for bias features of the training data; obtaining the inference using the inference model; and providing computer implemented services using the inference.
Obtaining the inference model may include reading the inference model from storage.
Obtaining the inference model may include, prior to identifying the occurrence: training an instance of the tree based inference model using the training data.
The training data may include records, and each of the records may include at least one feature value; at least one label value associated with the at least one feature value; and at least one bias feature value associated with the at least one feature value.
Training the instance of the tree based inference model may include obtaining, based on the training data and the splitting rule, a root node and a question; obtaining two answers to the question that partition the records into two groups; obtaining a second node and a third node based on the two groups; establishing a first edge between the root node and the second node based on a first of the two answers; and establishing a second edge between the root node and the third node based on a second of the two answers.
The splitting rule may partition the records into the two groups using a function that rewards predictability of the labels by the instance of the tree based model and discourages predictability of the bias features by the instance of the tree based model.
The function may assign a numerical value based on the division of the records among the two groups, and the records may be partitioned through optimization of the function.
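For purposes of illustration only, such a function may be sketched as follows; the symbols below are assumptions made for this sketch rather than a prescribed formulation:

    S(G1, G2) = IG_labels(G1, G2) - λ · IG_bias(G1, G2)

where G1 and G2 are the two candidate groups of records, IG_labels and IG_bias denote the information gain of the partition with respect to the labels and the bias features respectively, and λ ≥ 0 is a weight controlling how strongly predictability of the bias features is penalized. The partition that maximizes S may then be selected.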
In an embodiment, a non-transitory media is provided that may include instructions that when executed by a processor cause the computer implemented method to be performed.
In an embodiment, a data processing system is provided that may include the non-transitory media and a processor and may perform the computer implemented method when the computer instructions are executed by the processor.
Turning to
Any of the computer implemented services may be provided using inferences. For example, the inferences may indicate content to be displayed as part of the computer implemented services, how to perform certain actions, and/or may include other types of information used to provide the computer implemented services.
To obtain the inferences, one or more inference models (e.g., hosted by data processing systems and/or other devices operably connected to the data processing systems) may be used. The inference models may, for example, ingest input and output inferences based on the ingested input. The content of the ingested input and the output may depend on the goal of the respective inference model, the architecture of the inference model, and/or other factors.
However, if the inferences generated by the inference models do not meet expectations of the consumers (e.g., the computer implemented services) of the inferences, then the computer implemented services may be provided in an undesired manner. For example, the computer implemented services may presume that the inferences generated by the inference models exhibit certain characteristics such as accuracy with respect to predicting certain quantities or trends. If the inferences fail to meet these expectations, then the computer implemented services may be negatively impacted.
The inferences generated by an inference model may be undesirable if the inference model does not make inferences based on input as expected by the manager of the inference model. As noted above, to obtain inferences, the inference model may ingest input and provide output. The relationship between ingested input and output used by the inference model may be established based on training data. The training data may include known relationships between input and output. The inference model may attempt to generalize the known relationships between the input and the output.
However, the process of generalization (e.g., training processes) may result in unforeseen outcomes. For example, the generalization process may result in latent bias being introduced into the generalized relationship used by the inference model to provide inferences based on ingested input data. Latent bias may be an undesired property of a trained inference model that results in the inference model generating undesirable inferences (e.g., inferences not made as expected by the manager of the inference model). For example, training data may include a correlation that is not obvious but that may result in latent bias being introduced into an inference model trained using the training data. If consumed by computer implemented services, these inaccurate or otherwise undesirable inferences may negatively impact the computer implemented services.
Latent bias may be introduced into inference models based on training data limits and/or other factors. These limits and/or other factors may be based on non-obvious correlations existing in the training data. For example, data processing system 100 may have access to a biased source of data (e.g., a biased person) from which the training data is obtained. The biased person may be a loan officer working at a financial institution, and the loan officer may have authority to view personal information of clients of the financial institution to determine loan amounts for each of the clients. Assume the loan officer carries discriminatory views against those of a particular ethnicity. The loan officer may make offers of low loan amounts to clients that are of the particular ethnicity, in comparison to clients that are not of the particular ethnicity. When training data is obtained from a biased source, such as the loan officer, the training data may include correlations that exist due to the discriminatory views of the loan officer. This training data may be used when placing an inference model of data processing system 100 in a trained state in order to provide inferences used in the computer implemented services.
Due to these limits and/or other factors, such as biased sources, the training data used to train the inference model may include information that correlates with a bias feature, such as sex (e.g., male and/or female), that is undesired from the perspective of consumers of inferences generated by the inference model. This correlation may be due to the features (input data) used as training data (e.g., income, favorite shopping locations, number of dependents, etc.).
For example, a trained inference model that includes latent bias, when trained to provide inferences used in computer implemented services provided by a financial institution (e.g., to determine the risk that an individual will default on a loan), may consistently generate inferences indicating that female persons have a high risk of defaulting on loans. This inadvertent bias (i.e., latent bias) may cause undesired discrimination against female persons and/or other undesired outcomes through consumption of the inferences by the financial institution.
In general, embodiments disclosed herein may provide methods, systems, and/or devices for providing computer implemented services. To provide the computer implemented services, inference models may be used to provide inferences used to provide the computer implemented services.
The inference models may include, for example, tree based models (e.g., decision trees). The tree based models may be obtained by (i) obtaining training data, and (ii) training a new instance of a tree based model using the training data.
When training new instances of the tree based models, a training procedure may be used that may improve the likelihood of the resulting trained models providing desired inferences. The training procedure may reduce the likelihood of the trained models exhibiting latent bias.
Latent bias may be exhibited by a trained inference model, for example, when predictions by the model appear to be based on a feature that is not included in a training data set. The resulting inferences of a trained inference model that exhibits latent bias may be undesirable.
For example, consider a scenario where a bank wishes to use an inference model to decide on whether and to what extent financial offers are to be made to its clients. To obtain an inference model, a training data set may be established based on past financial offers made to the clients. From the bank's records, the financial offers may appear to only be based on financial and location characteristics (e.g., credit score, income, domicile location, etc.) of its clients. However, the bank's employees that made the decisions may actually have taken into account other characteristics of the clients, such as race, sex, etc., even if unintentionally due to their own personal biases. Consequently, the resulting decisions may, in fact, be based in part on these other characteristics of the clients.
If the training data only takes into account the relationships between the financial and location characteristics of the clients and the resulting decisions regarding the financial offers, then inference models trained using only this training data may exhibit latent bias with respect to these other characteristics (e.g., race, sex, etc.).
To reduce the likelihood of trained inference models exhibiting latent bias, the training procedure used by the system of
When performing its functionality, client device 102 and/or data processing system 100 may perform all, or a portion, of the methods and/or actions described in
Data processing system 100 and/or client device 102 may be implemented using a computing device such as a host or a server, a personal computer (e.g., desktops, laptops, and tablets), a “thin” client, a personal digital assistant (PDA), a Web enabled appliance, a mobile phone (e.g., Smartphone), an embedded system, local controllers, an edge node, and/or any other type of data processing device or system. For additional details regarding computing devices, refer to
Any of the components illustrated in
While illustrated in
To further clarify embodiments disclosed herein, a data structure diagram is shown in
Turning to
Training data 200 may include any number of records 202-210. Each of records 202-210 may include feature values 204, label values 206, and bias feature values 208. Each of these portions of the records is discussed below.
To establish training data 200, a set of features may be selected. The features may include any number and type of features. Feature values 204 of each record may reflect the values of the selected features for a particular data point in training data 200.
Label values 206 may be the values corresponding to the feature values. Returning to the financial offer decision example, feature values 204 may reflect the characteristics (e.g., credit score, income, etc.) of a single client, and label values 206 may reflect the decisions (e.g., whether to extend an offer and/or the terms of the offer) made based on feature values 204 in a past transaction.
Bias feature values 208 may be the values for the bias features corresponding to the feature values. Returning to the financial offer decision example, feature values 204 may reflect the characteristics (e.g., credit score, income, etc.) of a single client, and bias feature values 208 reflect the other characteristics (e.g., sex, race, etc.) of the client.
Training data 200 may include any number of records, and may be implemented using any number of data structures. For example, training data 200 may be implemented using a database, a linked list, a table, and/or other types of data structures.
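For purposes of illustration only, records of training data 200 might be represented as in the following minimal Python sketch; the field names and example values are hypothetical assumptions rather than a required layout.

    # Minimal sketch of training data records; field names are illustrative.
    from dataclasses import dataclass
    from typing import Any, Dict

    @dataclass
    class Record:
        feature_values: Dict[str, Any]       # e.g., {"credit_score": 712, "income": 54000}
        label_values: Dict[str, Any]         # e.g., {"offer_extended": True}
        bias_feature_values: Dict[str, Any]  # e.g., {"sex": "F"}; never used as a split feature

    training_data = [
        Record({"credit_score": 712, "income": 54000}, {"offer_extended": True}, {"sex": "F"}),
        Record({"credit_score": 640, "income": 38000}, {"offer_extended": False}, {"sex": "M"}),
    ]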
Turning to
The tree based model may include nodes (e.g., 220, 230, 232, 240, 242, 244, 246) interconnected by edges (e.g., 250, 252). In
Some of the nodes (e.g., root nodes and decision nodes) may be associated with a question regarding a feature, and the edges extending from these nodes may be associated with different answers to the questions. Other nodes (e.g., terminal nodes) may be associated with inferences.
To obtain an inference, the tree based model may be traversed by starting at root node 220. To traverse the tree based model, the newly ingested data may be used to answer the question associated with root node 220. Returning to the financial offer example, the question may be whether a new client's credit score is above or below a threshold amount.
Once the answer to this question is determined, then the edge corresponding to the answer may be followed. Returning to the financial offer example, if the threshold amount is 700, a new client has a credit score of 725, edge 250 is associated with credit scores that exceed the threshold, and edge 252 is associated with credit scores that do not exceed the threshold, then the tree may be traversed along edge 250 to decision node 230. In contrast, if the new client has a credit score of 675, then the tree may be traversed along edge 252 to terminal node 240.
Once a terminal node is reached, an inference associated with the terminal node (e.g., terminal node 240) may be used as the output of the inference model.
However, if the traversal leads to a decision node rather than a terminal node, a similar process may be performed as that described with respect to root node 220. For example, decision node 230 may be associated with another question, and the edges extending from decision node 230 to terminal node 242 and decision node 232 may be associated with different answers to the other question.
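For purposes of illustration only, the traversal described above may be sketched as follows; the Node layout, the example questions, and the example outputs are assumptions made for this sketch.

    # Minimal sketch of a tree based model and its traversal.
    class Node:
        def __init__(self, question=None, edges=None, inference=None):
            self.question = question    # root/decision nodes: maps feature values to an answer
            self.edges = edges or {}    # maps each answer to a child node
            self.inference = inference  # terminal nodes: the model's output

    def traverse(node, feature_values):
        # Answer each node's question and follow the matching edge until a
        # terminal node is reached.
        while node.inference is None:
            node = node.edges[node.question(feature_values)]
        return node.inference

    # Example mirroring the credit score discussion above (threshold of 700).
    tree = Node(
        question=lambda fv: fv["credit_score"] > 700,
        edges={
            True: Node(question=lambda fv: fv["income"] > 40000,
                       edges={True: Node(inference="low risk"),
                              False: Node(inference="moderate risk")}),
            False: Node(inference="high risk"),
        },
    )
    print(traverse(tree, {"credit_score": 725, "income": 52000}))  # -> low risk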
The structure of the tree based model may be obtained through training using training data. The training process may include (i) selecting a feature of the features of the training data, (ii) establishing a question based on the feature and answers to the question using a splitting rule (which may also be used to select a feature for each node), and (iii) repeating the above process for additional features to establish nodes. The number of nodes may be limited, for example, during the training process and/or after training through subsuming groups of nodes.
The splitting rule may be a rule used to partition records of the training data into two groups. In an embodiment, the splitting rule attempts to split the records into two groups that have (i) stronger predictive power for labels of the training data and (ii) weaker predictive power for the bias features of the training data. For example, the splitting rule may be implemented using a function that provides a numerical score reflecting the predictive power for the labels and the lack of the predictive power for the bias features for a given group. The function may then be optimized for a best value by modifying the membership of each of the two groups using an optimization technique (e.g., gradient descent, evolution based algorithms, etc.).
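For purposes of illustration only, such a scoring function may be sketched as below using entropy-based information gain; the function names and the weight lam are assumptions for this sketch, and other measures of predictive power could be substituted.

    # Minimal sketch of a splitting-rule score that rewards information gain
    # for the labels and penalizes information gain for a bias feature.
    import math
    from collections import Counter

    def entropy(values):
        # Shannon entropy of a list of categorical values.
        total = len(values)
        if total == 0:
            return 0.0
        return -sum((c / total) * math.log2(c / total)
                    for c in Counter(values).values())

    def information_gain(parent_vals, left_vals, right_vals):
        # Reduction in entropy achieved by partitioning the parent's values.
        n = len(parent_vals)
        return entropy(parent_vals) - (len(left_vals) / n * entropy(left_vals)
                                       + len(right_vals) / n * entropy(right_vals))

    def split_score(left, right, label_of, bias_of, lam=1.0):
        # label_of/bias_of extract a record's label value and bias feature value.
        parent = left + right
        gain_labels = information_gain([label_of(r) for r in parent],
                                       [label_of(r) for r in left],
                                       [label_of(r) for r in right])
        gain_bias = information_gain([bias_of(r) for r in parent],
                                     [bias_of(r) for r in left],
                                     [bias_of(r) for r in right])
        return gain_labels - lam * gain_bias  # higher is better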
As discussed above, the components and/or data structures of
Turning to
At operation 300, an occurrence of a condition that indicates that an inference is necessary to provide the computer implemented services is identified. The occurrence may be, for example, a request for a new inference.
At operation 302, an inference model is obtained. The inference model may be a tree based inference model. The tree based inference model may be based on a splitting rule that partitions training data, used to obtain the inference model, (i) for predictive power for labels of the training data and (ii) for adverse predictive power for bias features of the training data.
The inference model may be obtained by (i) reading it (e.g., if it already exists) from storage (into memory, or it already may be in memory), or (ii) generating the inference model (if it does not exist).
The splitting rule may be used to partition records of the training data into groups. The splitting rule may incentivize predictive power for the labels and disincentivize predictive power for the bias features. For example, the splitting rule may be implemented using an objective function that is optimized when neither of the groups has predictive power for the bias features (e.g., equally distributes records having feature values associated with particular bias feature values across the two groups) and each group has predictive power for the labels (e.g., segregates records having feature values associated with different label values in different groups).
In an embodiment, the inference model is generated using the method illustrated in
At operation 304, an inference is obtained using the inference model. The inference may be obtained by ingesting data into the inference model. The data may correspond to a set of feature values for features of the training data. The inference model may generate the inference as output.
As discussed with respect to
The value or other information may be based, for example, on the records from the training data that are associated with the terminal node (the associations may be established by the splitting rule). The labels of the records associated with the terminal node may be used to obtain the value or other information. For example, the values of the labels may be averaged or otherwise used to obtain a value or other information for the terminal node.
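For purposes of illustration only, such an aggregation may be sketched as follows; averaging is one assumed choice, and a majority vote or other aggregation could be used for categorical labels.

    # Minimal sketch: derive a terminal node's output from the label values of
    # the training records associated with that node.
    def terminal_value(records, label_of):
        labels = [label_of(r) for r in records]
        return sum(labels) / len(labels)  # e.g., an estimated risk of default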
At operation 306, the computer implemented services are provided using the inference. The computer implemented services may be provided using the inference, for example, by performing one or more actions based on the inference.
The method may end following operation 306.
Turning to
At operation 310, training data that associates features with labels and bias features is obtained. The training data may be similar to that described with respect to
At operation 312, a feature of the features is selected. The feature may be selected using a splitting rule. The splitting rule may be used to evaluate the predictive power of each feature with respect to the labels and the predictive power of each feature with respect to the bias features. The feature that scores best under the splitting rule (e.g., high predictive power for the labels and low predictive power for the bias features) may be selected.
At operation 314, a node for a decision tree is obtained based on the feature. The node may be obtained by adding information regarding the node to a data structure. The information may associate the node with the feature.
At operation 316, a question and answers for the node are obtained using the splitting rule. The question may be based on the selected feature, and the answers to the question may be obtained by optimizing the partitioning of records of the training data into two groups based on information gain (i) for the labels and (ii) adversely for the bias features.
During the optimization process, different memberships of the two groups may be selected using an optimization technique (e.g., gradient descent). The resulting distribution may then be used to calculate the information gain for the labels and the adverse information gain for the bias features. In other words, the resulting distribution of records among the two groups may be tested to ascertain whether the distribution is predictive of the labels of the groups and anti-predictive of the bias feature values of the groups, and to ascertain the extent of the predictiveness/anti-predictiveness of the groupings. The distribution of the records may then be adjusted during the optimization process.
The answers to the question may then be established based on the feature values of the members of each group. For example, the answers to the question may be established such that the answers properly classify each of the members of each group into the respective group.
The answers may then be ascribed to edges descending from the node.
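For purposes of illustration only, obtaining a question and its answers for a numeric feature may be sketched as below, reusing split_score from the earlier sketch; the exhaustive threshold search stands in for the optimization technique, and the names are assumptions.

    # Minimal sketch: search candidate thresholds for the partition that
    # optimizes the splitting rule's score.
    def best_threshold(records, feature_of, label_of, bias_of, lam=1.0):
        best_t, best_score = None, float("-inf")
        for t in sorted({feature_of(r) for r in records}):
            left = [r for r in records if feature_of(r) <= t]
            right = [r for r in records if feature_of(r) > t]
            if not left or not right:
                continue  # both groups must be non-empty
            score = split_score(left, right, label_of, bias_of, lam)
            if score > best_score:
                best_t, best_score = t, score
        # The two answers, one per edge: "feature <= best_t" and "feature > best_t".
        return best_t, best_score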
At operation 318, a determination is made regarding whether the decision tree is complete. The determination may be made, for example, based on the predictive power level for the labels and the anti-predictive power level for the bias features for the respective groups of records. The predictive power levels may be compared to thresholds or other criteria. The outcome of the comparison may indicate whether the decision tree is complete.
For example, if the comparison for either group indicates that the predictive power level for the group is too low, then the method may return to operation 312, and a feature for a new decision node for the group may be selected.
If the comparison for either group indicates that the predictive power level for the group is sufficiently high, then a terminal node may be established. The labels for the group members may be used to establish the inference for the terminal node. For example, the labels of the records corresponding to the group may be averaged or otherwise used to obtain an output that will be provided by the inference model if the terminal node is traversed to during inference generation.
The method may end following operation 318.
Following operation 318, various groups of nodes may be combined or otherwise modified based on various metrics to obtain a final trained inference model.
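For purposes of illustration only, the overall training flow of this method may be sketched as follows, reusing Node, terminal_value, best_threshold, and the Record layout from the earlier sketches; the stopping threshold min_gain is a hypothetical stand-in for the completeness criteria of operation 318.

    # Minimal sketch: recursively select features, split the records, and emit
    # terminal nodes once further splitting yields too little score.
    def grow_tree(records, features, label_of, bias_of, lam=1.0, min_gain=0.01):
        best = (float("-inf"), None, None)  # (score, feature, threshold)
        for f in features:
            t, score = best_threshold(records, lambda r: r.feature_values[f],
                                      label_of, bias_of, lam)
            if t is not None and score > best[0]:
                best = (score, f, t)
        score, f, t = best
        if f is None or score < min_gain:
            return Node(inference=terminal_value(records, label_of))
        left = [r for r in records if r.feature_values[f] <= t]
        right = [r for r in records if r.feature_values[f] > t]
        return Node(question=lambda fv, f=f, t=t: fv[f] <= t,
                    edges={True: grow_tree(left, features, label_of, bias_of, lam, min_gain),
                           False: grow_tree(right, features, label_of, bias_of, lam, min_gain)})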
While illustrated in
Any of the components illustrated in
In one embodiment, system 400 includes processor 401, memory 403, and devices 405-407 connected via a bus or an interconnect 410. Processor 401 may represent a single processor or multiple processors with a single processor core or multiple processor cores included therein. Processor 401 may represent one or more general-purpose processors such as a microprocessor, a central processing unit (CPU), or the like. More particularly, processor 401 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processor 401 may also be one or more special-purpose processors such as an application specific integrated circuit (ASIC), a cellular or baseband processor, a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, a graphics processor, a communications processor, a cryptographic processor, a co-processor, an embedded processor, or any other type of logic capable of processing instructions.
Processor 401, which may be a low power multi-core processor socket such as an ultra-low voltage processor, may act as a main processing unit and central hub for communication with the various components of the system. Such processor can be implemented as a system on chip (SoC). Processor 401 is configured to execute instructions for performing the operations discussed herein. System 400 may further include a graphics interface that communicates with optional graphics subsystem 404, which may include a display controller, a graphics processor, and/or a display device.
Processor 401 may communicate with memory 403, which in one embodiment can be implemented via multiple memory devices to provide for a given amount of system memory. Memory 403 may include one or more volatile storage (or memory) devices such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), or other types of storage devices. Memory 403 may store information including sequences of instructions that are executed by processor 401, or any other device. For example, executable code and/or data of a variety of operating systems, device drivers, firmware (e.g., basic input/output system or BIOS), and/or applications can be loaded in memory 403 and executed by processor 401. An operating system can be any kind of operating system, such as, for example, Windows operating system from Microsoft®, Mac OS®/iOS® from Apple, Android® from Google®, Linux, Unix®, or other real-time or embedded operating systems such as VxWorks.
System 400 may further include IO devices such as devices (e.g., 405, 406, 407, 408) including network interface device(s) 405, optional input device(s) 406, and other optional IO device(s) 407. Network interface device(s) 405 may include a wireless transceiver and/or a network interface card (NIC). The wireless transceiver may be a WiFi transceiver, an infrared transceiver, a Bluetooth transceiver, a WiMax transceiver, a wireless cellular telephony transceiver, a satellite transceiver (e.g., a global positioning system (GPS) transceiver), or other radio frequency (RF) transceivers, or a combination thereof. The NIC may be an Ethernet card.
Input device(s) 406 may include a mouse, a touch pad, a touch sensitive screen (which may be integrated with a display device of optional graphics subsystem 404), a pointer device such as a stylus, and/or a keyboard (e.g., physical keyboard or a virtual keyboard displayed as part of a touch sensitive screen). For example, input device(s) 406 may include a touch screen controller coupled to a touch screen. The touch screen and touch screen controller can, for example, detect contact and movement or break thereof using any of a plurality of touch sensitivity technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with the touch screen.
IO devices 407 may include an audio device. An audio device may include a speaker and/or a microphone to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and/or telephony functions. Other IO devices 407 may further include universal serial bus (USB) port(s), parallel port(s), serial port(s), a printer, a network interface, a bus bridge (e.g., a PCI-PCI bridge), sensor(s) (e.g., a motion sensor such as an accelerometer, gyroscope, a magnetometer, a light sensor, compass, a proximity sensor, etc.), or a combination thereof. IO device(s) 407 may further include an image processing subsystem (e.g., a camera), which may include an optical sensor, such as a charged coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, utilized to facilitate camera functions, such as recording photographs and video clips. Certain sensors may be coupled to interconnect 410 via a sensor hub (not shown), while other devices such as a keyboard or thermal sensor may be controlled by an embedded controller (not shown), dependent upon the specific configuration or design of system 400.
To provide for persistent storage of information such as data, applications, one or more operating systems and so forth, a mass storage (not shown) may also couple to processor 401. In various embodiments, to enable a thinner and lighter system design as well as to improve system responsiveness, this mass storage may be implemented via a solid state device (SSD). However, in other embodiments, the mass storage may primarily be implemented using a hard disk drive (HDD) with a smaller amount of SSD storage to act as a SSD cache to enable non-volatile storage of context state and other such information during power down events so that a fast power up can occur on re-initiation of system activities. Also, a flash device may be coupled to processor 401, e.g., via a serial peripheral interface (SPI). This flash device may provide for non-volatile storage of system software, including a basic input/output system (BIOS) as well as other firmware of the system.
Storage device 408 may include computer-readable storage medium 409 (also known as a machine-readable storage medium or a computer-readable medium) on which is stored one or more sets of instructions or software (e.g., processing module, unit, and/or processing module/unit/logic 428) embodying any one or more of the methodologies or functions described herein. Processing module/unit/logic 428 may represent any of the components described above. Processing module/unit/logic 428 may also reside, completely or at least partially, within memory 403 and/or within processor 401 during execution thereof by system 400, memory 403 and processor 401 also constituting machine-accessible storage media. Processing module/unit/logic 428 may further be transmitted or received over a network via network interface device(s) 405.
Computer-readable storage medium 409 may also be used to store some software functionalities described above persistently. While computer-readable storage medium 409 is shown in an exemplary embodiment to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of embodiments disclosed herein. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, or any other non-transitory machine-readable medium.
Processing module/unit/logic 428, components and other features described herein can be implemented as discrete hardware components or integrated in the functionality of hardware components such as ASICs, FPGAs, DSPs or similar devices. In addition, processing module/unit/logic 428 can be implemented as firmware or functional circuitry within hardware devices. Further, processing module/unit/logic 428 can be implemented in any combination of hardware devices and software components.
Note that while system 400 is illustrated with various components of a data processing system, it is not intended to represent any particular architecture or manner of interconnecting the components, as such details are not germane to embodiments disclosed herein. It will also be appreciated that network computers, handheld computers, mobile phones, servers, and/or other data processing systems which have fewer components or perhaps more components may also be used with embodiments disclosed herein.
Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as those set forth in the claims below, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Embodiments disclosed herein also relate to an apparatus for performing the operations herein. Such an apparatus may be specially constructed for the required purposes, or it may comprise a device selectively activated or reconfigured by a computer program stored in the device. Such a computer program is stored in a non-transitory computer readable medium. A non-transitory machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium (e.g., read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices).
The processes or methods depicted in the preceding figures may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, etc.), software (e.g., embodied on a non-transitory computer readable medium), or a combination of both. Although the processes or methods are described above in terms of some sequential operations, it should be appreciated that some of the operations described may be performed in a different order. Moreover, some operations may be performed in parallel rather than sequentially.
Embodiments disclosed herein are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of embodiments disclosed herein.
In the foregoing specification, embodiments have been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of the embodiments disclosed herein as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.