The present description generally relates to developing machine learning applications.
Software engineers and scientists have been using computer hardware for machine learning to make improvements across different industry applications including image classification, video analytics, speech recognition and natural language processing, etc.
Certain features of the subject technology are set forth in the appended claims. However, for purpose of explanation, several embodiments of the subject technology are set forth in the following figures.
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology can be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a thorough understanding of the subject technology. However, the subject technology is not limited to the specific details set forth herein and can be practiced using one or more other implementations. In one or more implementations, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
Machine learning has seen a significant rise in popularity in recent years due to the availability of massive amounts of training data, and advances in more powerful and efficient computing hardware. Machine learning may utilize models that are executed to provide predictions in particular applications (e.g., analyzing images and videos, fraud detection, etc.) among many other types of applications.
The subject technology provides techniques for an adaptive loss function that works within the context of metric learning and classification. Specifically, a technique called adaptive loss alignment (ALA) is provided which automatically adjusts loss function parameters to directly optimize an evaluation metric on a validation set. Further, this technique can also result in a reduction of the generalization gap, which refers to the difference between a training error and a generalization error; this gap may widen when capacity is increased and the training error decreases but the generalization error does not decrease correspondingly. A machine learning model's "capacity" refers to its ability to fit a wide variety of functions. Models with low capacity may struggle to fit a training set of data. Models with high capacity can overfit data by memorizing properties of the training set that do not serve them well on the test set.
In an example, a loss function provides a measure of a difference between a predicted value and an actual value, which can be implemented using a set of parameters where the type of parameters that are utilized can impact different error measurements. One challenge in machine learning is that a given machine learning model algorithm should, in order to provide a good model, perform well on new, previously unseen inputs, and not solely on those inputs on which the model was trained. The ability to perform well on previously unobserved inputs is called generalization. When training a machine learning model, an error measure on a given training set of data can be determined, called the training error, with a goal of reducing this training error during training. However, in developing the machine learning algorithm, it is also a goal to lower the generalization error, also called the test error. In an example, the generalization error is defined as the expected value of the error on a new input, where the expectation can be taken across different possible inputs that are taken from the distribution of inputs that the system is expected to encounter in practice. As described further herein, the ALA technique provides an adaptive loss function that can update parameters of the loss function in order to advantageously improve the aforementioned error measurements.
Machine learning models, including deep neural networks, are difficult to optimize, particularly for real world performance. One reason is that default loss functions are not always good approximations to evaluation metrics, a phenomenon called a loss-metric mismatch. In at least one implementation, the ALA technique learns to adjust the loss function using reinforcement learning (RL) at the same time as the model weights are being learned using a gradient descent technique. A gradient descent technique can refer to a technique for minimizing a function based at least in part on a derivative of the function, which can be used during training of a given machine learning model (e.g., a neural network). This approach helps align the loss function to the evaluation metric cumulatively over successive training iterations. ALA as described herein differs from other techniques by optimizing the evaluation metric directly via a sample-efficient RL policy that iteratively adjusts the loss function.
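As a non-limiting illustration of the gradient descent technique referred to above, the following Python sketch performs simple gradient descent steps on a toy linear model; the data, variable names, and learning rate are hypothetical and are not taken from the subject disclosure.

```python
import numpy as np

# Illustrative only: one gradient descent step for a linear model with a
# mean-squared-error loss; the data and names (X, y, w, lr) are hypothetical.
def gradient_descent_step(w, X, y, lr=0.01):
    pred = X @ w                            # model predictions f_w(x)
    grad = 2.0 * X.T @ (pred - y) / len(y)  # derivative of the loss with respect to w
    return w - lr * grad                    # step against the gradient to reduce the loss

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
w = np.zeros(3)
for _ in range(500):
    w = gradient_descent_step(w, X, y)      # training error decreases over iterations
```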
Implementations of the subject technology improve the computing functionality of a given electronic device by providing a sample efficient RL approach to address the loss-metric mismatch directly by continuously adapting the loss thereby reducing processing requirements for a machine learning model. Prior approaches waited until training the model was completed in order to update the loss function. The subject technology therefore avoids this by advantageously continuously adapting the loss function. These benefits therefore are understood as improving the computing functionality of a given electronic device, such as an end user device which may generally have less computational and/or power resources available than, e.g., one or more cloud-based servers.
The network environment 100 includes an electronic device 110, and a server 120. The network 106 may communicatively (directly or indirectly) couple the electronic device 110 and/or the server 120. In one or more implementations, the network 106 may be an interconnected network of devices that may include, or may be communicatively coupled to, the Internet. For explanatory purposes, the network environment 100 is illustrated in
The electronic device 110 may be, for example, a desktop computer, a portable computing device such as a laptop computer, a smartphone, a peripheral device (e.g., a digital camera, headphones), a tablet device, a wearable device such as a watch, a band, and the like. In
In one or more implementations, the electronic device 110 may provide a system for training a machine learning model using training data, where the trained machine learning model is subsequently deployed to the electronic device 110. Further, the electronic device 110 may provide one or more machine learning frameworks for training machine learning models and/or developing applications using such machine learning models. In an example, such machine learning frameworks can provide various machine learning algorithms and models for different problem domains in machine learning. In an example, the electronic device 110 may include a deployed machine learning model that provides an output of data corresponding to a prediction or some other type of machine learning output.
The server 120 may provide a system for training a machine learning model using training data, where the trained machine learning model is subsequently deployed to the server 120. In an implementation, the server 120 may train a given machine learning model for deployment to a client electronic device (e.g., the electronic device 110). The machine learning model deployed on the server 120 can then perform one or more machine learning algorithms. In an implementation, the server 120 provides a cloud service that utilizes the trained machine learning model and continually learns over time.
As illustrated, the electronic device 110 includes training data 210 for training a machine learning model. In an example, the electronic device 110 may utilize one or more machine learning algorithms that use training data 210 for training a machine learning (ML) model 220. The electronic device 110 further includes an adaptive loss alignment (ALA) controller 230 which performs operations for providing an adaptive loss alignment function which automatically adjusts loss function parameters to directly optimize an evaluation metric on a validation set, which is discussed in more detail below in
In an implementation, a machine learning problem addressed by the ML model 220 can be defined as improving an evaluation metric M(fw, Dval) on a validation set Dval, for a parametric model fw: X→Y. In an example, a parametric model learns a function described by a parameter vector that has a finite size that is fixed before any data is observed. The evaluation metric M between the ground-truth y and model prediction fw(x) given an input x, can be either decomposable over samples (e.g., classification error) or non-decomposable, such as the area under the precision-recall curve (AUCPR) and Recall@k. The machine learning model 220 learns to optimize for the validation metric and expects the validation metric to be a good indicator of the model performance M(fw, Dtest) on a test set Dtest. In an example, the ML model 220 can be a neural network as shown in
In an example, the “evaluation metric” and “validation metric” refer to the same metric. The term “evaluation” refers to applying the metric on the test set, whereas “validation” refers to applying the metric on the validation set. In an example, the evaluation/validation metric does not change during training. Instead, the loss function (which is the differentiable function being optimized by stochastic gradient descent) changes in different training stages.
In an example, the term “ground truth” refers to a particular label used to assess whether a machine learning model makes a correct prediction or not. In a classification task, the ground truth can be the category that corresponds to the image, e.g., “dog” when shown a picture of a dog. In a metric learning task, the ground truth can be whether two images are of the “same” or “different” category (e.g., in a facial recognition application, it can be whether two images are of the same person or not).
In an example, the term “parametric model” refers to the learnable parameters or weights (e.g., the “weights” in a neural network that define the mapping from input to output). In an example, AUCPR and Recall@k are two evaluation metrics for indexing the performance of a machine learning system. AUCPR refers to the area under the precision-recall curve, where a larger value means more area and therefore better performance. Recall@k can often be used to index how good an information retrieval system is. For example, when searching for a particular face identity in a data set of faces of different people, Recall@k may be calculated as (# of that person's face images among the top-k retrieved images)/(total # of that person's face images).
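As a hedged illustration of the Recall@k calculation described above, the following Python sketch computes the fraction of a queried identity's images that appear among the top-k retrieved results; the gallery indices and ranking are hypothetical.

```python
def recall_at_k(retrieved_ids, relevant_ids, k):
    """Fraction of the relevant items that appear among the top-k retrieved results."""
    top_k = set(retrieved_ids[:k])
    hits = sum(1 for r in relevant_ids if r in top_k)
    return hits / len(relevant_ids)

# Hypothetical example: the queried person has 3 face images in the gallery.
retrieved = [7, 2, 9, 4, 1, 3]   # gallery indices ranked by similarity to the query
relevant = {2, 4, 8}             # indices of the queried person's face images
print(recall_at_k(retrieved, relevant, k=4))  # 2 of the 3 relevant images are in the top-4
```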
Optimizing directly for evaluation metrics for the ML model 220, however, can be a challenging task. In an example, this is because model weights w are obtained by optimizing a loss function l on training set Dtrain, i.e., by solving minw Σ(x,y)∈Dtrain l(fw(x), y). In some cases, however, the loss l is only a surrogate of the evaluation metric M, which can be non-differentiable with regard to the model weights w. Moreover, the loss l is optimized on the training set Dtrain instead of Dval or Dtest.
To address the above loss-metric mismatch issue (e.g., where a given loss function does not provide a good approximation with respect to an evaluation metric), the ML model 220 learns an adaptive loss function lΦ(fw(x), y) with loss parameters Φ∈Ru. In this example, the goal is to align the adaptive loss with the evaluation metric M on held-out validation data Dval. This leads to an alternate direction optimization problem: (1) finding metric-minimizing loss parameters Φ and (2) updating the model weights wΦ under the resultant loss lΦ by, e.g., Stochastic Gradient Descent (SGD), which is illustrated in the following notation:
In an implementation, the outer loop and the inner loop are approximated by a few steps of iterative optimization (e.g., SGD updates). Hence, Φt is denoted as the loss function parameters at time step t and wΦt as the corresponding model parameters. An aspect here is to bridge the gap between evaluation metric M and loss function lΦt over time, conditioned on the found local optima wΦt.
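The following Python sketch is a minimal, hedged rendering of this alternating scheme on a toy logistic-regression task: an inner loop of a few SGD updates of the model weights under the current loss parameters, and an outer step that adjusts the loss parameters based on the validation metric. The simple accept/reject outer step is only a stand-in for the RL controller discussed below, and all data and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data; the train/validation split mirrors D_train / D_val.
X = rng.normal(size=(600, 5))
y = (X @ rng.normal(size=5) + 0.3 * rng.normal(size=600) > 0).astype(float)
Xtr, ytr, Xval, yval = X[:400], y[:400], X[400:], y[400:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad(w, Xb, yb, phi):
    """Gradient of a class-weighted logistic loss l_phi; phi = [weight_neg, weight_pos]."""
    p = sigmoid(Xb @ w)
    sample_w = np.where(yb == 1, phi[1], phi[0])
    return Xb.T @ (sample_w * (p - yb)) / len(yb)

def val_metric(w):
    """Evaluation metric M on D_val: classification error (lower is better)."""
    return np.mean((sigmoid(Xval @ w) > 0.5) != yval)

w, phi = np.zeros(5), np.array([1.0, 1.0])
lr, beta, K = 0.5, 0.1, 10
best_m = val_metric(w)
for t in range(20):                               # outer loop: adjust loss parameters phi
    trial_phi = np.clip(phi + beta * rng.choice([-1.0, 0.0, 1.0], size=2), 0.1, 2.0)
    for _ in range(K):                            # inner loop: K SGD updates of w under l_phi
        idx = rng.choice(len(Xtr), size=64, replace=False)
        w -= lr * loss_grad(w, Xtr[idx], ytr[idx], trial_phi)
    m = val_metric(w)                             # M(f_w_phi, D_val)
    if m <= best_m:                               # keep the loss change only if M improved
        phi, best_m = trial_phi, m
```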
As discussed below, the ALA controller 230 performs a task where it proposes one or more updates to the aforementioned loss function that the main network (e.g., the ML model 220) is utilizing. The ALA controller 230 is enabled to see one or more past states in order to propose such updates of the loss function to the main network. After the main network trains using the updated loss function for a number of iterations, the ALA controller 230 can evaluate the main network using a validation data set to determine an improvement from before an update to after the update and/or between two respective updates.
As shown in
In an example, a developer of a machine learning model often cares about an evaluation metric such as a classification error of a model being trained. Since a classification error is a numeric value, and a lower value of the classification error is considered better, an improvement in this metric is measured by its decrease during training. Similarly, when using AUCPR, where a higher value is considered better, an improvement in this metric is measured by its increase during training.
To capture the conditional relations between loss and evaluation metric, a reinforcement learning problem is formulated. In an example, to address such a problem, the task is to predict the best change ΔΦt to loss parameters Φt such that optimizing the adjusted loss aligns better with the evaluation metric M(fwΦt, Dval). Stated another way, taking an action that adjusts the loss function should produce a reward that reflects how much the metric M will improve on validation data Dval. This is analogous to teaching the ML model 220 how to better optimize on seen training data Dtrain and to better generalize (in terms of M) on unseen validation data Dval.
In an example, the underlying model behind reinforcement learning (RL) is a Markov Decision Process (MDP) defined by states s ∈S and actions a ∈A at discrete time steps t within an episode. In the example shown in
The loss-controlling policy is optimized with a policy gradient approach where the objective is to maximize the expected total return as defined by the following notation:
J(θ)=Eτ[R(τ)], (2)
where R(τ) is the total return R(τ)=Σt=0Trt of an episode τ={st, at|t∈[0, T]}.
The updates to the policy parameters θ are given by the gradient denoted by the following:
where bk is a variance-reducing baseline reward implemented as the exponential moving average of previous rewards.
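A minimal sketch of such a policy gradient update with an exponential-moving-average baseline is shown below. The toy environment, reward, and state features are placeholder assumptions; only the REINFORCE-style update rule mirrors the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

theta = np.zeros((4, 3))             # linear-softmax policy pi_theta over 3 discrete actions
baseline, lr, ema = 0.0, 0.05, 0.9   # baseline kept as an exponential moving average of returns

for episode in range(200):
    grads, rewards = [], []
    for t in range(10):
        s = rng.normal(size=4)                            # state s_t (toy placeholder)
        probs = softmax(s @ theta)
        a = rng.choice(3, p=probs)                        # sample a_t ~ pi_theta(.|s_t)
        grads.append(np.outer(s, np.eye(3)[a] - probs))   # grad of log pi_theta(a_t|s_t)
        rewards.append(1.0 if a == np.argmax(s[:3]) else -1.0)  # toy +/-1 reward
    R = sum(rewards)                                      # total return R(tau) of the episode
    theta += lr * (R - baseline) * sum(grads)             # policy gradient step with baseline
    baseline = ema * baseline + (1.0 - ema) * R           # update the moving-average baseline
```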
The following discusses local policy learning. As shown in
Although the example of
The following discussion describes the concrete RL algorithm for simultaneous learning of the ALA policy parameters θ and model weights w.
The reward function rt measures the relative reduction in validation metric M(fwΦt, Dval), after K number of iterations of performing gradient descent with an updated loss function lΦt+1. The following notation represents the cumulative performance between model updates:
where γ is a discount factor that places more weight on the recent metric.
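The following hedged sketch illustrates the reward computation around this point of the description: a γ-weighted aggregation of the validation metric between model updates, together with the ±1 quantization and the terminal comparison against the threshold ϑ discussed in the next paragraph. The exact aggregation form and the numeric values are assumptions rather than the disclosed notation.

```python
import numpy as np

def discounted_metric(metric_history, gamma=0.9):
    """Aggregate the validation metric observed between model updates, weighing
    recent values more heavily; this exact weighting is an illustrative assumption."""
    n = len(metric_history)
    weights = np.array([gamma ** (n - 1 - i) for i in range(n)])
    return float(weights @ np.array(metric_history) / weights.sum())

def step_reward(m_t, m_t_plus_1):
    """r_t quantized to +/-1 by sign(M_t - M_{t+1}); the metric is lower-is-better."""
    return float(np.sign(m_t - m_t_plus_1))

def terminal_reward(m_final, threshold):
    """Terminal reward at the maximum training iteration, compared against threshold."""
    return 1.0 if m_final < threshold else -1.0

errors_during_episode = [0.31, 0.29, 0.27]   # hypothetical validation errors over K iterations
M_t, M_t1 = 0.30, discounted_metric(errors_during_episode)
print(step_reward(M_t, M_t1), terminal_reward(M_t1, threshold=0.28))
```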
The main model weights w are updated for K iterations. The reward rt is then quantized to ±1 by sign(Mt−Mt+1), which advantageously encourages consistent error metric decreases regardless of magnitude. Further, a terminal reward is defined for the case of arriving at the maximum training iteration. In this case, Mt is compared with a pre-defined threshold ϑ∈R, which can be set as the converged evaluation metric from regular training without a loss controller (e.g., the ALA controller 230), which may be represented by the following notation:
For every element Φt(i) of the loss parameters Φt, action at(i) is sampled from the discretized space A={−β, 0, β}, with β being a predefined step-size. Actions are used to update the loss parameters Φt(i)=Φt−1(i)+at(i) at each time step. The ALA policy network πθ has |A| output neurons for each loss parameter, and at is sampled from a softmax distribution over these neurons.
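A minimal sketch of this per-parameter action sampling is shown below, assuming hypothetical policy-network logits; the ε-greedy exploration mirrors the strategy mentioned elsewhere in this description.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 0.1                                   # predefined step size for loss-parameter changes
action_space = np.array([-beta, 0.0, beta])  # discretized action set A

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sample_actions(logits, epsilon=0.1):
    """One action per loss parameter, sampled from a softmax over |A| output neurons,
    with epsilon-greedy exploration; `logits` stands in for policy-network outputs."""
    probs = softmax(logits)                                 # shape: (num_params, |A|)
    idx = np.array([rng.choice(len(action_space), p=p) for p in probs])
    explore = rng.random(len(idx)) < epsilon
    idx[explore] = rng.integers(0, len(action_space), size=int(explore.sum()))
    return action_space[idx]                                # delta applied to each phi_t(i)

phi = np.zeros(5)                            # hypothetical vector of loss parameters
logits = rng.normal(size=(5, 3))             # stand-in for the ALA policy network output
phi = phi + sample_actions(logits)           # phi_t(i) = phi_{t-1}(i) + a_t(i)
```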
In an implementation, the policy network state st∈S includes four components:
For the policy network state, validation statistics are utilized, among others, to capture model training states. One objective is to find rewarding loss-updating actions at=ΔΦt to improve the evaluation metric on a validation set. A successful loss control policy should be able to model the implicit relation between the validation statistics, which is the state of the RL problem, and the validation metric, which is the reward. Stated another way, the ALA technique learns to mimic the loss optimization process for decreasing the validation metric cumulatively. Validation statistics are chosen instead of training statistics in the policy network state st because the former is a natural proxy of the validation metric. In an example, the validation statistics are normalized in the state representation. This allows for generic policy learning which is independent of the actual model predictions from different tasks or loss formulations.
As shown in
It is worth noting that the initial loss parameters Φ0 can be important for efficient policy learning. In an example, proper initialization of Φ0 must ensure default loss function properties that depend on the particular form of loss parameterization for a given task, e.g., an identity class correlation matrix in classification. Stochastic action sampling from πθ and a standard ε-greedy strategy can be utilized to encourage exploration in learning.
The following discussion relates to instantiation in example learning problems, including classification and metric learning.
With respect to classification, the subject technology learns to adapt the following parametric classification loss function:
lΦ(fw(x), y) = −σ(yTΦt log fw(y|x)), (6)
where σ(⋅) is the sigmoid function, and Φt ∈ R|y|×|y| denotes the loss function parameters with |y| being the number of classes. y ∈ {0, 1}|y| denotes the one-hot representation of class labels, and fw(y|x) ∈ R|y| denotes the output multinomial distribution of the model.
The matrix Φt encodes time-varying class correlations. A positive value of Φt(i, j) encourages the model to increase the prediction probability of class j given ground-truth class i. A negative value of Φt(i, j), on the other hand, penalizes the confusion between class i and class j. Thus, when Φt changes as learning progresses, it is possible to implement a hierarchical curriculum for classification, where similar classes are grouped as a super class earlier in training, and discriminated later as training goes further along. To learn the curriculum automatically, Φ0 is initialized as an identity matrix (e.g., reducing to the standard cross-entropy loss in this case), and Φt is updated over time by the ALA policy πθ.
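As a hedged, non-limiting rendering of the parametric classification loss of Equation (6), the following Python sketch evaluates the loss for a single sample with Φ0 initialized as an identity matrix; the predicted distribution and the small numerical-stabilization term are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adaptive_classification_loss(probs, label, phi):
    """Per-sample loss of the form -sigma(y^T Phi_t log f_w(y|x)) from Equation (6);
    probs is the predicted class distribution, label the ground-truth class index,
    and phi the |y| x |y| class-correlation matrix."""
    y = np.eye(len(probs))[label]                   # one-hot ground-truth vector
    return float(-sigmoid(y @ phi @ np.log(probs + 1e-12)))

num_classes = 3
phi0 = np.eye(num_classes)                          # Phi_0 initialized as the identity matrix
probs = np.array([0.7, 0.2, 0.1])                   # hypothetical model output f_w(y|x)
print(adaptive_classification_loss(probs, label=0, phi=phi0))
```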
To learn to update Φt, a confusion matrix C(fwΦt, Dval) ∈R|Y|×|Y| of model prediction on validation set Dval is constructed, which can be represented by the following example notation:
where I(yd, i) is an indicator function outputting 1 when yd equals class i and 0 otherwise.
In an example, a parameter efficient approach is implemented to update each loss parameter Φt(i, j) based on the observed class confusions [Ci,j, Cj,i]. In other words, the ALA controller 230 collects the validation statistics S(fwΦt, Dval) only for class pairs at each time step t, in order to construct state st for updating the corresponding loss parameter Φt(i, j). Different class pairs share the same loss controller (e.g., the ALA controller 230), and the loss parameters Φt(i, j) and Φt(j, i) are updated to the same value (normalized between [−1, 1]) to ensure class symmetry. This implementation is more parameter efficient than learning to update the whole matrix Φt based on the matrix C. Furthermore, it does not depend on the number of classes for a given task, thus enabling transferring the learned policy to another classification task with an arbitrary number of classes.
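The following sketch illustrates, under stated assumptions, how a validation confusion matrix and the per-class-pair statistics [Ci,j, Cj,i] might be assembled; the row normalization and the synthetic predictions are assumptions rather than the disclosed formulation.

```python
import numpy as np

def confusion_matrix(val_probs, val_labels, num_classes):
    """Row-normalized confusion matrix C(f_w, D_val): entry (i, j) is the fraction of
    validation samples with ground-truth class i that the model predicts as class j.
    The normalization is an assumption; the disclosure's indicator-based form may differ."""
    C = np.zeros((num_classes, num_classes))
    for y, p in zip(val_labels, val_probs.argmax(axis=1)):
        C[y, p] += 1.0
    return C / np.maximum(C.sum(axis=1, keepdims=True), 1.0)

def pair_state(C, i, j):
    """Validation statistics for class pair (i, j): the observed confusions [C_ij, C_ji]
    used by the shared loss controller to update Phi_t(i, j) and Phi_t(j, i)."""
    return np.array([C[i, j], C[j, i]])

rng = np.random.default_rng(0)
val_probs = rng.dirichlet(np.ones(3), size=50)   # hypothetical validation predictions
val_labels = rng.integers(0, 3, size=50)
C = confusion_matrix(val_probs, val_labels, num_classes=3)
print(pair_state(C, 0, 1))
```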
With respect to a metric learning problem for machine learning, the metric learning problem learns a distance metric to encode semantic similarity. A distance metric refers to a measure of semantic similarity. In an example, the resulting distance metric cannot directly cater to different, and sometimes contradictory, performance metrics of interest (e.g., verification vs. identification rate, or precision vs. recall). This gap is more pronounced in the presence of common techniques such as hard mining that only have indirect effects on final performance. Therefore, metric learning can serve as a strong testbed for learning methods that directly optimize evaluation metrics. A standard triplet loss for metric learning can be formulated as the following notation:
ltri(fw(xi), fw(xi+), fw(xi−)) = max(0, F(d+) − F(d−) + η), (8)
where F(d)=d2 is the squared distance function over both d+ (the distance between the anchor instance fw(xi) and the positive instance fw(xi+)) and d− (the distance between fw(xi) and the negative instance fw(xi−)), while η is a margin parameter.
In an example, the shape of the distance function matters. The concave shape of −d2 for negative distance d− will lead to diminishing gradients when d− approaches zero. In this instance, the distance function is reshaped adaptively with two types of loss parameterizations.
For the first parametric loss, called distance mixture, five (5) different-shaped distance functions are adopted for both d+ and d−, and a linear combination of them is learned via Φt, which can be represented by the following:
where Fi+(⋅) and Fj−(⋅) correspond to the increasing and decreasing distance functions that penalize large d+ and small d−, respectively.
In this case Φt=[{Φt(i)}, {Φt(j)}]∈[0, 1]10, and is initialized as a binary vector such that Φ0 selects the default distance functions d2 and 0.5 d−1 for d+ and d−. For RL, the validation statistics in state st are simply represented by the computed distance Fi+(⋅) or Fj−(⋅), and the ALA controller 230 updates Φt(i) or Φt(j) accordingly. In an example, performance is not sensitive to the design choices of Fi+(⋅) and Fj−(⋅). Rather, it can be more important to learn the dynamic weightings of Fi+(⋅) and Fj−(⋅).
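The following Python sketch illustrates a distance-mixture triplet loss of this general form. The five basis shapes used for Fi+(⋅) and Fj−(⋅) below are illustrative assumptions only (consistent with the observation above that performance is not sensitive to these design choices), with Φ0 selecting the default d2 and 0.5d−1 shapes.

```python
import numpy as np

# Illustrative basis shapes only: these are assumptions rather than the disclosed set.
F_POS = [lambda d: d**2, lambda d: d, lambda d: np.log1p(d),
         lambda d: np.sqrt(d), lambda d: np.expm1(d)]             # increasing in d+
F_NEG = [lambda d: 0.5 / (d + 1e-6), lambda d: 1.0 / (d**2 + 1e-6),
         lambda d: np.exp(-d), lambda d: 1.0 - np.tanh(d),
         lambda d: np.maximum(0.0, 1.0 - d)]                      # decreasing in d-

def mixture_triplet_loss(d_pos, d_neg, phi, margin=0.2):
    """Triplet loss in which F(d+) and F(d-) are Phi_t-weighted mixtures of basis shapes."""
    f_pos = sum(w * f(d_pos) for w, f in zip(phi[:5], F_POS))
    f_neg = sum(w * f(d_neg) for w, f in zip(phi[5:], F_NEG))
    return max(0.0, f_pos - f_neg + margin)

# Phi_0 is a binary vector selecting the default shapes d^2 (for d+) and 0.5/d (for d-).
phi0 = np.array([1, 0, 0, 0, 0, 1, 0, 0, 0, 0], dtype=float)
print(mixture_triplet_loss(d_pos=0.8, d_neg=0.9, phi=phi0))
```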
A Focal weighting-based loss formulation can be represented by the following:
where Φt∈R2, and di+ and di− denote the distances between the anchor instance and the positive i+ and negative i− instances in the batch.
In the above example, these distances are utilized as validation statistics for the ALA controller to update Φt, while α is the distance offset.
In an implementation, the main model architecture as illustrated in
In an example, the ALA controller 230 is learned using a REINFORCE policy gradient method with a learning rate of 0.001. Experience is collected from all child networks every K=200 SGD time steps. In the ε-greedy strategy, linear annealing of ϵ is performed from 1 to 0.1 during policy learning. The following are set: discount factor γ=0.9, loss update quantity β=0.1, and distance offset α=1. Performance is robust to variations of these hyper-parameters.
As shown, chart 520, chart 530, and chart 540 provide an examination of both optimization and generalization when ALA uses the validation loss as a reward, where training and test (generalization) errors for ALA and a fixed cross-entropy loss are monitored. In these charts, the generalization error on the test set is improved, and the optimization in training error also provides small improvements. The optimization and generalization gains are larger when the validation metric is used as a reward, which further addresses the loss-metric mismatch. By comparison, a training metric-based reward yields a faster training error decrease, but smaller gains in test error are seen, potentially due to the diminishing error/reward in the training data.
ALA improves optimization by dynamically smoothing the loss landscape. To verify this, measurements of the loss surface convexity are taken, and the Gaussian curvature of the loss surface around each model checkpoint is calculated. In
As discussed before, default loss functions may not provide good approximations to evaluation metrics, which is referred to as a loss-metric mismatch. This is illustrated in charts 710 and 720. In chart 710, a loss-metric mismatch on a CIFAR-10 data set is shown comparing cross-entropy loss on the training set vs. classification error on the test set. In chart 720, a comparison is shown corresponding to a cross-entropy loss on the training set vs. area under the precision-recall curve (AUCPR) on the test set. In these examples, all curves are averaged over 10 runs initialized with different random seeds, with shaded areas showing the standard deviations. Training loss and evaluation metric curves differ in shape and noise characteristics in charts 710 and 720.
As further illustrated in
ALA improves performance by addressing the loss-metric mismatch in charts 750 and 760. In
In the example charts of
In
It is appreciated that the ALA techniques described herein can be applied to other applications. In an example, ALA can be applied to multiple simultaneous objectives, where the ALA controller 230 weighs between these objectives dynamically. In another example, the ALA techniques can be applied to cases where the output of a given model is an input into a more complex pipeline, which may be common in production systems (e.g., detection—alignment—recognition pipelines). In these systems, further machinery can be developed for making reward evaluation efficient enough to learn the policy jointly with training the different modules.
Another area where ALA can be applied is to make ALA less dependent on specific task types and loss/metric formulations. In an example, the ALA controller 230 can be trained through continual learning and/or crowd learning to handle different scenarios flexibly. This enables the use of ALA in distributed crowd learning settings where model training gets better and better over time.
Further, ALA may be applied in dynamically changing environments where available training data can change over time (e.g., life-long learning, online learning, meta-learning).
The process 1100 describes different sets of iterations that are a subset of a total number of iterations for a full training run of a given machine learning model. The electronic device 110 trains, for a first set of iterations, a first machine learning model using a loss function with a first set of parameters (1110). In an example, the first set of iterations can refer to an initial episode, at a first time step, that includes K number of training iterations, and the first machine learning model refers to the main model or “child model” as discussed before. The electronic device 110 determines, by a second machine learning model, a state of the first machine learning model corresponding to the first set of iterations of training the first machine learning model (1112). In an example, the second machine learning model refers to the loss controller (e.g., ALA controller 230) which is implemented as a neural network (e.g., the policy network) as discussed before. The electronic device 110 determines, by the second machine learning model, an action for updating the loss function based at least in part on the state of the first machine learning model, where the action corresponds to a change in values of the first set of parameters (1114). In an example, the action refers to action at(i) as discussed before. The electronic device 110 updates, by the second machine learning model, the loss function based at least in part on the action, where the updated loss function includes a second set of parameters corresponding to the change in values of the first set of parameters (1116). In an example, the updated loss function refers to the updated loss function lΦt+1 as discussed before. The electronic device 110 trains, for a second set of iterations, the first machine learning model using the updated loss function with the second set of parameters, where the first set of iterations and the second set of iterations are a subset of a total number of iterations for a full training run of the first machine learning model (1118). In an example, the second set of iterations refers to a subsequent episode, at a second time step, that includes another K number of training iterations, and the full training run includes a T number of episodes.
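As a structural, non-limiting sketch of the process 1100, the following Python outline mirrors the episode loop described above; every callable is a hypothetical stand-in for the first machine learning model, the second (controller) machine learning model, and the training routine.

```python
# A hedged structural sketch of process 1100; the callables are hypothetical stand-ins.
def process_1100(train_k_iterations, get_state, choose_action, apply_action,
                 loss_params, num_episodes):
    for episode in range(num_episodes):
        train_k_iterations(loss_params)                  # steps 1110 / 1118: K iterations
        state = get_state()                              # step 1112: state of first model
        action = choose_action(state)                    # step 1114: change to parameters
        loss_params = apply_action(loss_params, action)  # step 1116: updated loss function
    return loss_params

# Trivial no-op usage, only to show the control flow of the episodes.
final_params = process_1100(
    train_k_iterations=lambda phi: None,
    get_state=lambda: [0.0],
    choose_action=lambda s: 0.0,
    apply_action=lambda phi, a: [p + a for p in phi],
    loss_params=[1.0, 0.0],
    num_episodes=3,
)
```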
The bus 1208 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 1200. In one or more implementations, the bus 1208 communicatively connects the one or more processing unit(s) 1212 with the ROM 1210, the system memory 1204, and the permanent storage device 1202. From these various memory units, the one or more processing unit(s) 1212 retrieves instructions to execute and data to process in order to execute the processes of the subject disclosure. The one or more processing unit(s) 1212 can be a single processor or a multi-core processor in different implementations.
The ROM 1210 stores static data and instructions that are needed by the one or more processing unit(s) 1212 and other modules of the electronic system 1200. The permanent storage device 1202, on the other hand, may be a read-and-write memory device. The permanent storage device 1202 may be a non-volatile memory unit that stores instructions and data even when the electronic system 1200 is off. In one or more implementations, a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) may be used as the permanent storage device 1202.
In one or more implementations, a removable storage device (such as a floppy disk, flash drive, and its corresponding disk drive) may be used as the permanent storage device 1202. Like the permanent storage device 1202, the system memory 1204 may be a read-and-write memory device. However, unlike the permanent storage device 1202, the system memory 1204 may be a volatile read-and-write memory, such as random access memory. The system memory 1204 may store any of the instructions and data that one or more processing unit(s) 1212 may need at runtime. In one or more implementations, the processes of the subject disclosure are stored in the system memory 1204, the permanent storage device 1202, and/or the ROM 1210. From these various memory units, the one or more processing unit(s) 1212 retrieves instructions to execute and data to process in order to execute the processes of one or more implementations.
The bus 1208 also connects to the input and output device interfaces 1214 and 1206. The input device interface 1214 enables a user to communicate information and select commands to the electronic system 1200. Input devices that may be used with the input device interface 1214 may include, for example, alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output device interface 1206 may enable, for example, the display of images generated by electronic system 1200. Output devices that may be used with the output device interface 1206 may include, for example, printers and display devices, such as a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a flexible display, a flat panel display, a solid state display, a projector, or any other device for outputting information. One or more implementations may include devices that function as both input and output devices, such as a touchscreen. In these implementations, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
Finally, as shown in
Implementations within the scope of the present disclosure can be partially or entirely realized using a tangible computer-readable storage medium (or multiple tangible computer-readable storage media of one or more types) encoding one or more instructions. The tangible computer-readable storage medium also can be non-transitory in nature.
The computer-readable storage medium can be any storage medium that can be read, written, or otherwise accessed by a general purpose or special purpose computing device, including any processing electronics and/or processing circuitry capable of executing instructions. For example, without limitation, the computer-readable medium can include any volatile semiconductor memory, such as RAM, DRAM, SRAM, T-RAM, Z-RAM, and TTRAM. The computer-readable medium also can include any non-volatile semiconductor memory, such as ROM, PROM, EPROM, EEPROM, NVRAM, flash, nvSRAM, FeRAM, FeTRAM, MRAM, PRAM, CBRAM, SONOS, RRAM, NRAM, racetrack memory, FJG, and Millipede memory.
Further, the computer-readable storage medium can include any non-semiconductor memory, such as optical disk storage, magnetic disk storage, magnetic tape, other magnetic storage devices, or any other medium capable of storing one or more instructions. In one or more implementations, the tangible computer-readable storage medium can be directly coupled to a computing device, while in other implementations, the tangible computer-readable storage medium can be indirectly coupled to a computing device, e.g., via one or more wired connections, one or more wireless connections, or any combination thereof.
Instructions can be directly executable or can be used to develop executable instructions. For example, instructions can be realized as executable or non-executable machine code or as instructions in a high-level language that can be compiled to produce executable or non-executable machine code. Further, instructions also can be realized as or can include data. Computer-executable instructions also can be organized in any format, including routines, subroutines, programs, data structures, objects, modules, applications, applets, functions, etc. As recognized by those of skill in the art, details including, but not limited to, the number, structure, sequence, and organization of instructions can vary significantly without varying the underlying logic, function, processing, and output.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, one or more implementations are performed by one or more integrated circuits, such as ASICs or FPGAs. In one or more implementations, such integrated circuits execute instructions that are stored on the circuit itself.
Those of skill in the art would appreciate that the various illustrative blocks, modules, elements, components, methods, and algorithms described herein may be implemented as electronic hardware, computer software, or combinations of both. To illustrate this interchangeability of hardware and software, various illustrative blocks, modules, elements, components, methods, and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. Various components and blocks may be arranged differently (e.g., arranged in a different order, or partitioned in a different way) all without departing from the scope of the subject technology.
It is understood that any specific order or hierarchy of blocks in the processes disclosed is an illustration of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes may be rearranged, or that all illustrated blocks be performed. Any of the blocks may be performed simultaneously. In one or more implementations, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
As used in this specification and any claims of this application, the terms “base station”, “receiver”, “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” means displaying on an electronic device.
As used herein, the phrase “at least one of” preceding a series of items, with the term “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list (i.e., each item). The phrase “at least one of” does not require selection of at least one of each item listed; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases “at least one of A, B, and C” or “at least one of A, B, or C” each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C.
The predicate words “configured to”, “operable to”, and “programmed to” do not imply any particular tangible or intangible modification of a subject, but, rather, are intended to be used interchangeably. In one or more implementations, a processor configured to monitor and control an operation or a component may also mean the processor being programmed to monitor and control the operation or the processor being operable to monitor and control the operation. Likewise, a processor configured to execute code can be construed as a processor programmed to execute code or operable to execute code.
Phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, the subject technology, the disclosure, the present disclosure, other variations thereof and the like are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology or that such disclosure applies to all configurations of the subject technology. A disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations. A disclosure relating to such phrase(s) may provide one or more examples. A phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other foregoing phrases.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration”. Any embodiment described herein as “exemplary” or as an “example” is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, to the extent that the term “include”, “have”, or the like is used in the description or the claims, such term is intended to be inclusive in a manner similar to the term “comprise” as “comprise” is interpreted when employed as a transitional word in a claim.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for”.
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more”. Unless specifically stated otherwise, the term “some” refers to one or more. Pronouns in the masculine (e.g., his) include the feminine and neuter gender (e.g., her and its) and vice versa. Headings and subheadings, if any, are used for convenience only and do not limit the subject disclosure.