Example embodiments of the present disclosure relate generally to artificial intelligence and machine learning, and more particularly to artificial intelligence and machine learning on resource constrained devices or systems utilizing classifier learning from a stream of a limited amount of data as the data are acquired by a sensor in temporal order.
Artificial intelligence and machine learning utilizing neural networks require the training of the neural networks. Training is the process by which a neural network learns its parameters from available samples of input data, including how to determine one or more outputs based on one or more inputs. Such inputs generally include large amounts of data that require large amounts of storage space and processing power for the training or learning of the neural network. Apparatuses and systems performing the training or learning are commonly large systems, such as cloud-based multi-CPU, multi-GPU, or multi-TPU systems, that utilize the large amount of data over long periods of time to train the neural network. These systems utilize computational resources capable of several teraflops to petaflops and have very high energy consumption. Thus, the training process may happen on an apparatus or system, such as a cloud-based system, that is different from a system or computing machine that may be acquiring data. This allows the apparatus or system performing the training process to utilize large computational resources and large memory storage. Therefore, the training takes place well ahead of time on a system distinct from a device that may later use the trained neural network. Once trained, that neural network would then be loaded to different devices for operation. For example, sensor data may be collected over time and stored in large data sets, such as over 1,000,000 samples of sensor data for any one class, that are used at later times to train neural networks. The storage of large amounts of data may be in data sets. Each data sample of a data set may be labeled to identify an associated class or classification of the data sample so that a neural network may be trained with the sample data. The labeling allows for a determination of whether the training is being performed correctly. The training of or learning done by the neural networks includes the setting of coefficients between a fixed number of neurons that make up a neural network, which may use back-propagation and stochastic gradient descent techniques and recipes (such as learning rate schedules, knowledge distillation, and k-fold procedures) to drive this training or learning process. Once the training or learning is finished, the coefficients are stabilized to set values for use in operation of the neural network.
However, the use of large data sets requires large amounts of device or system resources. For example, the large amount of data requires a large amount of storage space to store the data samples. Additionally, large computing resources are required to perform the computationally complex training or learning procedures. For example, over time, as cloud-based systems have offered greater amounts of computing resources (e.g., petaflops of compute) and large amounts of storage (e.g., tens of gigabytes of dedicated storage), the training of neural networks has utilized the increases in computational resources and memory to grow more complicated. Indeed, after the training, the neural networks optimized for the large data sets may include large numbers of neurons (fixed and not changeable in number during training) that take a large amount of space to store and substantial processing power to utilize in a trained neural network.
New methods, apparatuses, systems, and computer programming products in artificial intelligence and machine learning for resource constrained devices and systems are needed. The inventors have identified numerous areas of improvement in the existing technologies and processes, which are the subjects of embodiments described herein. Through applied effort, ingenuity, and innovation, many of these deficiencies, challenges, and problems have been solved by developing solutions that are included in embodiments of the present disclosure, some examples of which are described in detail herein.
Various embodiments described herein relate to methods, apparatuses, systems, and computer programming products for artificial intelligence and machine learning for resource constrained devices and systems.
In accordance with some embodiments of the present disclosure, an example method for classifier learning is provided. In some embodiments, the example method for classifier learning may comprise: sampling a sensor data stream to generate a plurality of sensor data samples; extracting, via a feature extractor, a plurality of extracted features from the sensor data samples; determining, via a classifier and based on the extracted features, a detection of a new feature for one or more of the extracted features, wherein the classifier comprises at least an input layer of input neurons, a hidden layer of hidden neurons, and an output layer of output neurons, wherein each hidden neuron is associated with one output neuron, wherein each output neuron is associated with one class; training the classifier based on the plurality of extracted features comprising: adding a new hidden neuron, wherein the new hidden neuron is associated with the new feature; determining an age for each of the hidden neurons; and removing one or more hidden neurons based on the age of each of the hidden neurons.
In some embodiments, each sensor data sample of the sensor data samples includes a first plurality of dimensions, extracting a plurality of extracted features from the sensor data samples includes extracting a plurality of extracted features that includes a second plurality of dimensions; and the second plurality of dimensions is reduced from the first plurality of dimensions.
In some embodiments, removing one or more hidden neurons based on the age of each of the hidden neurons includes removing at least one hidden neuron for each class.
In some embodiments, determining an age for each of the hidden neurons is based on the activation of each of the hidden neurons, wherein an activation is based on a distance associated with one or more extracted features from one or more of the plurality of hidden neurons being less than a radius of the one or more of the plurality of hidden neurons.
In some embodiments, determining an age for each of the hidden neurons may comprise: determining, for each feature extracted, one or more activated hidden neurons; decrementing the age of each of the hidden neurons activated that are associated with an output neuron of an incorrect class; and incrementing the age of each of the hidden neurons activated that are associated with an output neuron of a correct class.
In some embodiments, the method may further comprise: requesting, prior to removing one or more hidden neurons, a threshold from a user via a user interface; receiving, via a user interface, the threshold; and removing one or more hidden neurons is further based on a total number of hidden neurons exceeding the threshold.
In some embodiments, sampling the sensor data stream for a plurality of sensor data samples may comprise: generating training data samples from the sensor data stream, wherein the training data samples are a first portion of the sensor data stream for a first period of time, and wherein each of the training data samples are associated with a classification label; generating testing data samples from the sensor data stream, wherein the testing data samples are a second portion of the sensor data stream for the first period of time; and the plurality of sensor data samples is comprised of sensor data from the training data samples.
In some embodiments, the feature extractor comprises a convolutional neural network.
In some embodiments, a plurality of coefficients of the convolutional neural network of the feature extractor is randomly initialized.
In some embodiments, the method may further comprise operating, after training the classifier, the classifier on one or more features extracted from a second sensor data stream.
In accordance with some embodiments of the present disclosure, an example apparatus is provided. In some embodiments, the example apparatus may comprise at least one processor and at least one memory coupled to the processor, wherein the processor is configured to: sample a sensor data stream to generate a plurality of sensor data samples; extract, via a feature extractor, a plurality of extracted features from the sensor data samples; determine, via a classifier and based on the extracted features, a detection of a new feature for one or more of the extracted features, wherein the classifier comprises at least an input layer of input neurons, a hidden layer of hidden neurons, and an output layer of output neurons, wherein each hidden neuron is associated with one output neuron, wherein each output neuron is associated with one class; train the classifier based on the plurality of extracted features comprising: add a new hidden neuron, wherein the new hidden neuron is associated with the new feature; determine an age for each of the hidden neurons; and remove one or more hidden neurons based on the age of each of the hidden neurons.
In some embodiments, each sensor data sample of the sensor data samples includes a first plurality of dimensions; to extract a plurality of extracted features from the sensor data samples includes to extract a plurality of extracted features that includes a second plurality of dimensions; and the second plurality of dimensions is reduced from the first plurality of dimensions.
In some embodiments, to remove one or more hidden neurons based on the age of each of the hidden neurons the processor is further configured to remove at least one hidden neuron for each class.
In some embodiments, to determine an age for each of the hidden neurons is based on the activation of each of the hidden neurons, wherein an activation is based on a distance associated with one or more extracted features from one or more of the plurality of hidden neurons being less than a radius of the one or more of the plurality of hidden neurons.
In some embodiments, to determine an age for each of the hidden neurons the processor is further configured to: determine, for each feature extracted, one or more activated hidden neurons; decrement the age of each of the hidden neurons activated that are associated with an output neuron of an incorrect class; and increment the age of each of the hidden neurons activated that are associated with an output neuron of a correct class.
In some embodiments, the processor is further configured to: request, prior to removing one or more hidden neurons, a threshold from a user via a user interface; receive, via a user interface, the threshold; and wherein to remove one or more hidden neurons is further based on a total number of hidden neurons exceeding the threshold.
In some embodiments, to sample the sensor data stream for a plurality of sensor data samples the processor is further configured to: generate training data samples from the sensor data stream, wherein the training data samples are a first portion of the sensor data stream for a first period of time, wherein each of the training data samples are associated with a classification label; generate testing data samples from the sensor data stream, wherein the testing data samples are a second portion of the sensor data stream for the first period of time; and wherein the plurality of sensor data samples is comprised of sensor data from the training data samples.
In some embodiments, the feature extractor comprises a convolutional neural network.
In some embodiments, a plurality of coefficients of the convolutional neural network of the feature extractor is randomly initialized.
In some embodiments, the processor is further configured to operate, after training the classifier, the classifier on one or more features extracted from a second sensor data stream.
The above summary is provided merely for purposes of summarizing some example embodiments to provide a basic understanding of some aspects of the disclosure. Accordingly, it will be appreciated that the above-described embodiments are merely examples and should not be construed to narrow the scope or spirit of the disclosure in any way. It will also be appreciated that the scope of the disclosure encompasses many potential embodiments in addition to those here summarized, some of which will be further described below.
Having thus described certain example embodiments of the present disclosure in general terms, reference will now be made to the accompanying drawings, which are not necessarily drawn to scale, and wherein:
Some embodiments of the present disclosure will now be described more fully herein with reference to the accompanying drawings, in which some, but not all, embodiments of the disclosure are shown. Indeed, various embodiments of the disclosure may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like reference numerals refer to like elements throughout.
As used herein, the term “comprising” means including but not limited to and should be interpreted in the manner it is typically used in the patent context. Use of broader terms such as comprises, includes, and having should be understood to provide support for narrower terms such as consisting of, consisting essentially of, and comprised substantially of.
The phrases “in one embodiment,” “according to one embodiment,” “in some embodiments,” and the like generally mean that the particular feature, structure, or characteristic following the phrase may be included in at least one embodiment of the present disclosure and may be included in more than one embodiment of the present disclosure (importantly, such phrases do not necessarily refer to the same embodiment).
The word “example” or “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any implementation described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other implementations.
If the specification states a component or feature “may,” “can,” “could,” “should,” “would,” “preferably,” “possibly,” “typically,” “optionally,” “for example,” “often,” or “might” (or other such language) be included or have a characteristic, then that specific component or feature is not required to be included or to have the characteristic. Such a component or feature may be optionally included in some embodiments or it may be excluded.
The use of the term “circuitry” as used herein with respect to components of a system or an apparatus should be understood to include particular hardware configured to perform the functions associated with the particular circuitry as described herein. The term “circuitry” should be understood broadly to include hardware and, in some embodiments, software for configuring the hardware. For example, in some embodiments, “circuitry” may include processing circuitry, communication circuitry, input/output circuitry, and the like. In some embodiments, other elements may provide or supplement the functionality of particular circuitry.
Various embodiments of the present disclosure are directed to improved methods, apparatuses, systems, and computer programming products for artificial intelligence and/or machine learning, and more particularly for classifier learning for resource constrained devices.
Artificial intelligence is increasingly incorporated into apparatuses and/or systems that do not have large computational resources and/or large storage capabilities. Moreover, these apparatuses and/or systems have limited resources to utilize for artificial intelligence and machine learning. Despite their limited computational resources and/or storage capabilities, these apparatuses and/or systems are being utilized to train neural networks and to utilize the trained neural networks on the apparatus and/or system. In various embodiments these may be tiny devices and/or sensors that lack computational resources or large amounts of memory. These resource-constrained apparatuses and/or systems are also increasingly being used in areas where access to the internet and/or cloud computing is not available and/or to achieve scalability in comparison to a centralized approach. Additionally or alternatively, apparatuses and/or systems are desired where the time needed to communicate with cloud-based or network-based resources is not required, because such communication introduces lag into the processing of data. There is also a desire to utilize sensor data as it is generated to train the artificial intelligence and machine learning models. The real-time and streaming nature of utilizing sensor data also means that training or learning needs to occur on an apparatus and/or system without the use of remote network resources or cloud resources. This may be referred to as online training or online learning because the apparatus and/or system is trained on the apparatus or system itself and the large resources of, for example, cloud-based or more powerful systems are not used. Further, apparatuses and/or systems utilizing these models continue to shrink in size and have fewer computing and memory resources available.
Apparatuses and/or systems in accordance with the present disclosure use machine learning models of neural networks that may include an input layer, an output layer, and at least one hidden layer connecting the input layer to the output layer. Each of these layers may be comprised of multiple neurons or nodes. The neural networks may be used in a classifier. A classifier may receive an input and determine a class or classification of the input based on the connections between the neurons of each layer. For example, a smartwatch may generate sensor data from accelerometers and/or gyroscopes that is used to determine a class of the type of activity that a user of the smartwatch may be engaged in (e.g., walking, running, standing, etc.).
The classic manner of training neural networks, including classifiers, does not consider the limitations of resource-constrained devices. Prior approaches entail training or learning that requires storing large amounts of data and running complex procedures to determine and/or set the coefficients of a neural network. In addition to the size of the data used, such training is computationally very costly because it runs through large numbers of iterations of training or learning and uses backpropagation and/or stochastic gradient descent during the training or learning. In training neural networks this may cause the neural network to grow in size while also resulting in many portions of these neural networks (e.g., hidden neurons) being unutilized or underutilized. For example, such training may use a fixed amount of, among other things, hidden neurons that are associated with features and classes in a training set of data samples but that are otherwise not all used or not used enough during operation.
The present disclosure provides improvements that address the limitations of resource-constrained devices and systems. The present disclosure includes multiple improvements. For example, various embodiments in accordance with the present disclosure do not require network resources or a connection to the internet. Additionally or alternatively, by not requiring data to be transmitted over the internet or to another device or system, privacy is improved as no data is required to be transmitted off of a device. Additionally or alternatively, energy consumption of devices and systems in accordance with the present disclosure may be reduced due to the more efficient, lower-memory neural networks utilized. Additionally or alternatively, speeds for classifier learning may be improved for an apparatus and/or system due to, among other things, not needing to transmit data off the device nor needing to iterate certain operations through large amounts of hidden neurons. Additionally or alternatively, speeds may be improved for classifier learning as various embodiments do not include or require backpropagation or stochastic gradient descent.
In various embodiments in accordance with the present disclosure, classifier learning may include identifying, adding, and removing hidden neurons and/or classes. These additions or removals may increase or reduce the size of a classifier with respect to the maximum amount of storage allowed. The removal of hidden neurons may be based on an age of the hidden neuron related to how often the hidden neuron is activated by sensor data samples input into the neural network. Additionally or alternatively, the classifier learning may include determining only one hidden neuron for an input in embodiments where a plurality of hidden neurons are activated by one input. Also, various embodiments may utilize one or more sampling operations for generating sensor data samples to allow a classifier to learn in real-time as sensor data is generated.
In various embodiments, one or more sensors 110 may generate sensor data. The sensor data may be generated as time progresses, which may generate a stream of time-varying sensor data. For example, a mobile device such as a smartwatch may include a plurality of accelerometers and/or a plurality of gyroscopes that take measurements and generate sensor data over time. In another example, a sensor may be an environmental sensor, such as a temperature sensor that may take temperature measurements and generate temperature data over time. As the sensor data is generated, it may be transmitted to and received by one or more other components of an apparatus and/or system, such as by the processor or by artificial intelligence circuitry (a.k.a., AI circuitry), which is described further herein. The sensor data may be received as a stream, which may be referred to as a stream of sensor data. The stream of sensor data may be received in real-time and/or stored in a limited memory, such as in a buffer to use once a sampleable amount of sensor data is stored. The sensor data may be sampled continuously and/or selectively for various time periods to generate sensor data samples.
During classifier learning, the sensor data samples may be comprised of a plurality of dimensions. For example, a mobile device having a plurality of accelerometers and/or a plurality of gyroscopes may generate a stream of sensor data that has a plurality of dimensions associated with the number of types of sensors generating sensor data and the frequency of the sensor data. In an embodiment with three-axis accelerometers and three-axis gyroscopes sampled at a 100 Hz frequency, there may be 600 dimensions of data representing the combination of axes and magnitudes of measurements for each of the sensors for a time period of one second. Other embodiments may contain more or fewer dimensions.
The sensor data samples may be transmitted to a feature extractor 120. The feature extractor 120 may extract or determine a plurality of features from the sensor data samples. In various embodiments, the feature extractor 120 may extract one or more features and also reduce the dimensions of the sampled sensor data. The dimensions of the extracted features (e.g., x-y coordinates) may be different from the dimensions of the sensor data samples (e.g., magnitude and direction of acceleration of an accelerometer and/or rotation by a gyroscope). Thus the feature extractor 120 may generate and/or convert the sensor data samples into a plurality of features represented by a plurality of dimensions. For example, a mobile device generating sensor data of 600 dimensions for one second may have the dimensions reduced by the feature extractor 120 to have 24 dimensions. Additionally or alternatively, the feature extractor 120 may also flatten the sensor data samples into extracted features. Such operations of reduction of dimensions and/or flattening of sampled sensor data may improve the performance for classifier learning as described herein, including by requiring fewer resources to perform the classifier learning. In various embodiments, a feature extractor 120 may be configured to extract as many features as there are input neurons 142 in the input layer 140. The plurality of features extracted by the feature extractor 120 may be provided to the classifier 130.
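As a non-limiting illustration, the following sketch shows one way the windowing and feature extraction described above could be arranged. The window sizes, the 600-to-24 reduction, and the use of a randomly initialized linear projection as a stand-in for the feature extractor 120 are assumptions drawn from the example figures above, not a required implementation.

import numpy as np

# Hypothetical sizes drawn from the example above: six sensor axes sampled at
# 100 Hz give 600 raw dimensions per one-second window; features are reduced
# to 24 dimensions.
AXES, RATE_HZ, WINDOW_S = 6, 100, 1.0
RAW_DIM = int(AXES * RATE_HZ * WINDOW_S)   # 600
FEATURE_DIM = 24

rng = np.random.default_rng(0)

# A randomly initialized linear projection stands in for the feature
# extractor 120 (a convolutional or reservoir network in the embodiments).
projection = rng.normal(size=(RAW_DIM, FEATURE_DIM))


def sample_stream(stream, window=RAW_DIM):
    """Cut a flattened sensor data stream into fixed-size sensor data samples."""
    n = len(stream) // window
    return stream[: n * window].reshape(n, window)


def extract_features(samples):
    """Flatten and reduce each 600-dimension sample to a 24-dimension feature."""
    return samples @ projection


if __name__ == "__main__":
    stream = rng.normal(size=RAW_DIM * 5)    # five seconds of synthetic data
    samples = sample_stream(stream)          # shape (5, 600)
    features = extract_features(samples)     # shape (5, 24)
    print(samples.shape, features.shape)

In this sketch, each row of the returned feature array corresponds to one sensor data sample and could be provided to the classifier 130.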
In various embodiments, a feature extractor 120 may be comprised of a convolutional neural network. Alternatively, in various embodiments the feature extractor 120 may be comprised of a reservoir neural network. In various embodiments, the reservoir neural network may be an Echo State Network. The convolutional neural network and/or the reservoir neural network of the feature extractor 120 may be randomly initialized. Random initialization may include the coefficients of the neural network being randomly generated, such as with a uniform probability distribution, a binomial probability distribution, or another probability distribution or combination of probability distributions. Alternatively, the neural network of the feature extractor 120 may be trained offline. For example, the neural network of the feature extractor 120 may be trained by an artificial intelligence system (e.g., using backpropagation) not associated with the device and then loaded to the device. Large and more computationally complex systems may generate and optimize the convolutional neural network of the feature extractor 120 on the available training dataset for the feature extractor 120, and this convolutional neural network may be transmitted and loaded to the device for operation as a feature extractor 120. The feature extractor 120, once loaded, would process sensor data consistently and not need to be retrained.
In accordance with various embodiments, a classifier 130 may include a plurality of layers. In various embodiments, three layers may be used that include an input layer 140, a hidden layer 150, and an output layer 160. Each of the plurality of layers may include a plurality of neurons (e.g., 140 contains 142, 150 contains 152, and 160 contains 162, etc.). Various embodiments may use hyperspherical classifiers. For example, embodiments may utilize a Restricted Coulomb Energy (RCE) classifier 130 having at least three layers of neurons. In various embodiments, a classifier may be referred to as a TinyRCE classifier.
The input layer 140 may include a plurality of input neurons 142, such as a first input neuron 142A, a second input neuron 142B, a third input neuron 142C, a fourth input neuron 142D, etc. until the last input neuron 142N of the input layer 140. In various embodiments, the total number of input neurons 142 may equal the number of dimensions of the features extracted by the feature extractor 120. Each of the input neurons 142 may be associated with each of the hidden neurons 152 of the hidden layer 150. The association may be with a coefficient, which is represented in
The hidden layer 150 may include a plurality of hidden neurons 152. The hidden layer 150 may not be visible outside of the neural network of the classifier 130 in that the hidden layer 150 only interacts with other layers of the neural network of the classifier 130. Each of the hidden neurons 152 of the hidden layer 150 may be associated with each of the input neurons 142 of the input layer 140. In various embodiments, each of the hidden neurons 152 is associated with only one of the output neurons 162 of the output layer 160. For example, hidden neuron 152B is associated with only output neuron 162A. In various embodiments, more than one hidden neuron (e.g., 152A and 152D) may be associated with a single output neuron (e.g., 162B). The number of hidden neurons 152 in the hidden layer 150 may vary over time, such as through the addition or removal of hidden neurons 152 as described herein.
In various embodiments not illustrated, there may be more than one hidden layer 150 in the convolutional neural network of the feature extractor 120 and/or the classifier 130. For example, there may be a first hidden layer and a second hidden layer. Each of the hidden neurons of the first hidden layer may be associated with each of the hidden neurons of the second hidden layer. Additionally or alternatively, various embodiments might include more than one association between the hidden neurons of the hidden layer associated with the output layer and the output neurons of the output layer.
The output layer 160 may include a plurality of output neurons 162. The output neurons 162 may be associated with one or more of the hidden neurons 152 such that each hidden neuron 152 is associated with only one of the output neurons 162. In various embodiments, the output neurons 162 may be OR output neurons 162. For OR output neurons 162, only one of the OR output neurons 162 may be activated at a time and, thus, only one may be used by the classifier 130 to determine a single class associated with a feature of an input neuron 142. The output neurons 162 may be associated with the output 170 such that the activated output neuron 162 may provide its associated class to the output 170. Thus, when a hidden neuron 152 is activated by the input to an input neuron 142 and activates the output neuron 162, the activated output neuron 162 will provide its class to the output 170. In various embodiments, a feature extracted may activate two or more hidden neurons 152 which may be associated with two or more output neurons. In such embodiments, and as described herein, a determination of a closest hidden neuron 152 may be used to determine the correct hidden neuron 152 to be activated.
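A minimal sketch of the three-layer structure described above follows. The Python data structures, field names, and use of string class labels are illustrative assumptions; the input layer is represented implicitly by the dimensionality of the feature vectors, and the output layer by the set of distinct classes carried by the hidden neurons.

from dataclasses import dataclass, field
from typing import List
import numpy as np


@dataclass
class HiddenNeuron:
    center: np.ndarray      # prototype location in extracted-feature space
    radius: float           # radius of the region of influence
    cls: str                # class of the single output neuron it maps to
    age: float = 0.0        # age or age score used when pruning


@dataclass
class TinyRCEClassifier:
    input_dim: int                                   # one input neuron per feature dimension
    hidden: List[HiddenNeuron] = field(default_factory=list)

    def classes(self):
        # The output layer: one OR-style output neuron per distinct class.
        return sorted({h.cls for h in self.hidden})

    def add_hidden(self, center, radius, cls):
        self.hidden.append(HiddenNeuron(np.asarray(center, float), float(radius), cls))


if __name__ == "__main__":
    clf = TinyRCEClassifier(input_dim=24)
    clf.add_hidden([0.1, 0.2] + [0.0] * 22, radius=1.0, cls="walking")
    clf.add_hidden([3.0, 3.1] + [0.0] * 22, radius=1.0, cls="running")
    print(clf.classes())    # ['running', 'walking']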
In various devices and/or systems, the classifier 130 may be trained by classifier learning on the apparatus and/or system. This on device and/or on system training may be referred to as online training or online learning because the classifier 130 is trained on the device and/or system. In contrast, offline training or offline learning may refer to a neural network trained off the device and/or system, such as by servers with large storage and computing capabilities.
In various embodiments, the neural network of the feature extractor 120 may be combined with the neural network of the classifier 130. In such embodiments the neural network may include extracting the features from the sampled sensor data and determining a class with the same neural network.
For example, a classifier 130, through its neural network and based on the features input to the input neurons 142 of the input layer 140, may determine an output of a class. Each of the coefficients of the neural network determined during classifier learning may operate to determine how the extracted features, particularly the dimensions of the extracted features, are associated with a class via the neural network of the classifier 130. In various embodiments, the hidden neurons 152 of the hidden layer 150 may each comprise or be associated with a region of influence or feature space. The region of influence may be based on a value for each dimension of the feature represented by the hidden neuron 152, which may determine a center 212A, and a radius 214A that extends from the center 212A. For example, a two-dimensional feature space may be illustrated as a circle (e.g., 210) having a radius 214A extending from a center point 212A. The center point is associated with the value of each dimension (e.g., x coordinate and y coordinate) of the feature of a hidden neuron 152. This feature space is the region of influence identifying the dimensional coordinates of other features that may be in sensor data samples that may be identified as activating the hidden neuron 152. For example, a feature having the dimensions represented by 250A would not activate any of the feature spaces illustrated in
In various embodiments, a mobile device (e.g., a smartwatch) may be used to determine an activity of a user. The activities a user may engage in may be associated with classes, such as a class for walking, a class for running, a class for lying, and a class for standing. Sensors 110 in the mobile device generate sensor data. The sensor data is sampled and a feature extractor 120 extracts features from the sensor data samples. The feature extractor 120 may also reduce the features extracted from the sensor data to a two-dimensional feature set. In this manner, a time-series of sensor data samples may be converted into two-dimensional features (e.g., an x value and a y value in a two-dimensional x-y coordinate system). The features may be provided to a classifier 130. Each input neuron 142 of the input layer 140 of the classifier 130 may be associated with a different feature of the extracted features input into the classifier 130. The classifier 130 iterates through each input neuron 142 to determine the hidden neuron 152 activated. Based on the features and the coefficients of the neural network of the classifier 130, different hidden neurons 152 of the hidden layer 150 of the classifier 130 may be activated and a class may be determined. Thus, the mobile device may be able to determine the activity of the user based on the sensor data generated by the sensors.
The activation of a hidden neuron 152 may be determined based on a distance of the dimensions of a feature from the feature space of a hidden neuron 152. This may include determining a multi-dimensional distance between the feature and every hidden neuron 152. In various embodiments with binary values, a Hamming distance may be used. In feature spaces with real values, Euclidean distances may be used.
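The activation rule described above may be sketched as follows. The tuple representation of hidden neurons and the strict less-than comparison of distance to radius are assumptions made for illustration.

import numpy as np


def euclidean(a, b):
    """Euclidean distance for real-valued feature spaces."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))


def hamming(a, b):
    """Hamming distance for binary-valued feature spaces."""
    return int(np.sum(np.asarray(a) != np.asarray(b)))


def activated(feature, neurons, distance=euclidean):
    """Return the hidden neurons whose region of influence contains the feature.

    `neurons` is assumed to be a list of (center, radius, cls) tuples; a neuron
    is activated when the distance from the feature to its center is less than
    its radius.
    """
    return [(c, r, k) for (c, r, k) in neurons if distance(feature, c) < r]


if __name__ == "__main__":
    neurons = [((0.0, 0.0), 1.0, "walking"), ((3.0, 3.0), 0.5, "running")]
    print(activated((0.2, 0.1), neurons))   # activates only the "walking" neuron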
The data stream may be sampled to generate a plurality of data samples, such as sample 310, sample 320, sample 330, sample 340, etc. In various embodiments, the plurality of data samples may be time stamped. One or more of these data samples utilized for classifier learning may be further sampled to generate a training sample set of training data samples and a testing sample set of testing data samples. For example, a sample 320 may be used to generate a cluster of sensor data as a training sample 320A of training data samples and a cluster of sensor data as a testing sample 320B of testing data samples. For example, the cluster of sensor data in training sample 320A may be 20% of sample 320 and the cluster of testing sample 320B may be 80% of sample 320. The classifier learning may be based on the sensor data in the training sample 320A. After classifier learning trains a classifier based on a training sample 320A, the testing sample 320B may be used in an operating mode to test the classifier. The operating mode that may utilize the testing sample 320B may test the trained classifier to determine if the training was successful and the classifier is able to determine the classes associated with the sensor data in the testing sample 320B. The testing and training may be performed iteratively for subsequent samples of the data stream. In various embodiments, each of the training data samples may be associated with one or more classes and/or classification labels. In various embodiments, each of the testing data samples may not be associated with a class and/or classification label. In various embodiments, classification labels for data samples may be determined as described herein. In various embodiments, the training data samples may be used during a learning phase or a training phase. In various embodiments, the testing data samples may be used during a testing phase.
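One possible arrangement of this sampling into training and testing clusters is sketched below. The 20%/80% split follows the example above, and the list-based representation with labels only on the training portion is an illustrative assumption.

def split_sample(sample, labels, train_fraction=0.2):
    """Split one stream sample into a training cluster and a testing cluster.

    Following the example above, the first 20% of the sample (with its
    classification labels) is used for classifier learning and the remaining
    80% is held out to test the trained classifier.  The 20/80 ratio is the
    example ratio from the text, not a required value.
    """
    cut = int(len(sample) * train_fraction)
    train = list(zip(sample[:cut], labels[:cut]))    # labeled training data
    test = sample[cut:]                              # unlabeled testing data
    return train, test


if __name__ == "__main__":
    sample = [[0.1, 0.2], [0.3, 0.1], [2.9, 3.1], [3.0, 2.8], [0.2, 0.2]]
    labels = ["walking", "walking", "running", "running", "walking"]
    train, test = split_sample(sample, labels)
    print(len(train), "training samples,", len(test), "testing samples")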
In various embodiments, the one or more sensors 110 may generate an amount of data that may allow for classifier learning in real time. The data stream may be a temporal stream of data from the one or more sensors such that data is acquired in temporal order. Real-time processing may occur when the processing of the sensor data samples in a training sample for classifier learning takes less time than the time until the next chunk or sample of sensor data arrives. Alternatively, the data stream may be sampled and sensor data may be stored for classifier learning at a later time subsequent to the sampling of the data stream. In various embodiments, the amount of sensor data to store for later classifier learning may be based on a sensor data storage threshold. In an exemplary embodiment, the sensor data storage threshold may allow for the storage of 50 to 100 seconds of sensor data that may be used for classifier learning, including being sampled for training data samples and testing data samples. Storing sensor data for later classifier learning may allow for a user to provide classification identifiers or labels at a later time. For example, if a user is actively involved in an activity (e.g., running), the user may not be available to provide a classification identifier or label while engaged in the activity but could provide it at a later time.
It should be readily appreciated that the embodiments of the methods, apparatuses, systems, and computer programming products described herein may be configured in various additional and alternative manners in addition to those expressly described herein.
Embodiments of the present disclosure herein include methods, apparatuses, systems, and computer program products for performing one or more operations for classifier learning and/or training a classifier. The operations described herein may also be applicable to regressor learning as well as to let a neural network predict measurements when such a measurement otherwise exceeds the capabilities of a sensor device. In various embodiments, one or more of the operations described herein may be performed during a learning phase. Additionally or alternatively, one or more operations described herein may be performed during a testing phase. In an exemplary embodiment, one or more operations described in association with
In accordance with the present disclosure, a classifier 130's neural network may be optimized by reducing the size of the hidden layer 150 of the neural network as the classifier 130 is trained, including as the neural network incorporates new classes. The hidden layer 150 may be reduced by removing hidden neurons 152 that are not utilized or comparatively underutilized. Additionally or alternatively, hidden neurons 152 that are redundant of other hidden neurons 152 may be removed. Additionally or alternatively, the feature space associated with hidden neurons 152 may be reduced.
At operation 402, sensor data is generated. Sensor data may be generated by one or more sensors 110 in or associated with an apparatus and/or system. For example, a mobile device, such as a wearable watch, may include a plurality of accelerometers and gyroscopes that may be used to generate data. In another example, a temperature sensor may generate temperature data. In another example, a mobile device may generate GPS data from a GPS sensor. Sensor data may be generated over time to create a data stream.
In various embodiments, a data stream may be analyzed in real-time, which may include being analyzed without loss of any sample(s) during the performance of further operations described herein. To the extent there may be unknown features in the sensor data, such unknown features may be stored for later. For example, the storing of sensor data may be based on waiting to receive input, such as from a user via a user interface, regarding a class to be associated with one or more features. In various embodiments, the sensor data may be stored in memory. In various embodiments, sensor data may be transmitted and received by one or more components of the device from the one or more sensors. The received sensor data may then be sampled, such as during classifier learning.
At operation 404, the sensor data is sampled. The sampling of sensor data may include receiving sensor data generated by a sensor 110 as a data stream or for a time period and creating a train sample and a test sample as described herein. Such train samples and test samples may be utilized during classifier learning to train a classifier and/or test a trained classifier. Each train sample and test sample may be comprised of a plurality of sensor data samples.
At operation 406, features may be extracted from sensor data samples. In various embodiments, a feature extractor 120 may extract the features from the sensor data samples. In extracting the features the feature extractor 120 may also reduce the dimensionality of the sensor data samples. In an exemplary embodiment of a mobile device with six sensors generating sensor data, the dimensionality may be reduced to 24 dimensions that may be associated with and/or represented in an imaginary two-dimensional coordinate space.
At operation 408, it is determined if classes for the features are in the classifier. The classifier 130 is utilized to determine a class for each of the extracted features or, alternatively, that a class does not exist for one or more features. If the extracted features are associated with existing classes that the classifier has already learned or been trained to determine, then the classifier learning may end or new sensor data may be used for further classifier learning (e.g., proceed to operation 402). Alternatively, if it is determined that there are one or more classes that do not exist in the classifier 130 or that a class cannot be determined for a feature, then the classifier learning may proceed to operation 410.
At operation 410, it is determined that new feature(s) have been detected. The detection of new features may occur when the classifier is not able to determine a class for one or more extracted features. The feature(s) at an input neuron 142 associated with a new class are determined so that a class identifier may be requested to be associated with these feature(s).
At operation 412, a class identifier for the new feature(s) is requested. In various embodiments, the request for a new class may be a request for a user to provide a new class. In various embodiments, such as a wearable mobile device (e.g., a smartwatch), a notification for a new class may be generated and the notification for the new class may be rendered on a user interface. The rendering of the notification for the new class may include, among other things, an identification of one or more features associated with the new class. In various embodiments with a command line interface, a request for a class identifier may be made with a message and a command line prompt. In various embodiments with serial communication, the request may be sent serially.
At operation 414, a class identifier may be received. The class identifier may specify a name for a new class and/or include a selection of an existing class. In various embodiments of a wearable mobile device, the identifier may be a name for the new class (e.g., running, walking, jogging, lying, standing, etc.). Alternatively or additionally, the identifier may be a selection of an image from amongst a set of images, where each of the images may be associated with a predetermined class, for example, an image of a runner, a walker, a person lying down, or a person sitting down.
At operation 416, a class may be added to the classifier based on the class identifier. This may include adding a new class at a new output neuron 162. Alternatively or additionally, this may include adding a new hidden neuron 152 associated with the feature(s) and adding a connection to an existing output neuron 162 associated with an existing class. Thus the feature(s) may be associated with the identifier received, and the previously unassociated feature(s) extracted and applied to the classifier will have an associated class.
At operation 418, the age of hidden neurons 152 may be determined, which is described further herein, including in association with
At operation 420, one or more hidden neurons 152 may be removed, which is described further herein, including in association with
To determine which hidden neurons 152 to remove, the hidden neurons 152 may be assigned an age or age score. During classifier learning, every time a hidden neuron 152 is activated it may be aged, including either decrementing the age or incrementing the age. Then the hidden neurons 152 with the highest age or highest age score are determined to be activated more often and, thus, should be kept. In contrast, the hidden neurons 152 with the lower age or age score are determined not to be used and, thus, may be removed. An age or an age score may be based on decrements and/or increments, including an age or age score being negative (i.e., below zero) if sufficiently decremented. The hidden neurons 152 removed provide the opportunity to add new hidden neurons 152 that may have an increased age, such as in testing sessions.
At operation 502, the hidden neuron(s) activated by the extracted features are determined. New sensor data samples have their features extracted, which are input into the classifier 130. During classifier learning, the classifier 130 then determines each of the hidden neurons activated for each of the extracted features. In other words, for each input neuron 142, it is determined which hidden neurons 152 are activated.
In various embodiments, the hidden neurons activated may be associated or may not be associated with the class of the extracted features. For example, the extracted features may activate a first hidden neuron of a classifier and a second hidden neuron of the classifier, and each of the first hidden neuron and the second hidden neuron may be associated with separate classes.
In various embodiments, such as an embodiment with a two dimensional feature space, each hidden neuron may have a region of influence represented by a circle, such as illustrated in
At operation 504, the class associated with the hidden neuron(s) activated is determined. The class for each of the hidden neurons 152 activated is determined. This class may be the same as the class associated with the feature(s) in the input neurons 142 or it may be different.
At operation 506, it is determined if the class determined for the hidden neuron is the same as the class for the extracted features. If the class is different, which would be a misclassification, then the classifier learning may proceed to operation 508. If the class is the same, which would be a correct classification, then the classifier learning may proceed to operation 512. Thus, for a misclassification of an incorrect class, which is where the output neuron is different from a ground truth label, a radius may be reduced and an age decremented; and for a correct classification of a correct class, which is where the output neuron is the same as the ground truth, an age may be incremented.
At operation 508, and if the class determined is not the same as the class for the extracted features, the feature space of the activated hidden neuron 152 is reduced by reducing its radius. As there is a difference between the class of this hidden neuron 152 activated and the class of the extracted features, the radius of the feature space of this hidden neuron 152 is too large to correctly identify the correct class. Reducing the radius of the feature space may improve subsequent iterations of determining classes with the classifier 130 by preventing incorrect determinations of classes.
At operation 510, and if the class determined is not the same as the class for the extracted features, the age of the hidden neuron is decremented. As there is a difference between the class of the hidden neuron 152 activated and the class of the extracted features, the hidden neuron 152 activated was not associated with the correct class referred to as a ground truth. Thus the age of the hidden neuron 152 is decremented.
At operation 512, and if the class determined is the same as the class for the extracted features, the age of the hidden neuron is incremented. As there is no difference between the class of the hidden neuron activated and the class of the extracted features—in other words, the classes match—the hidden neuron activated was associated with the correct class. Thus the age of the hidden neuron is incremented. In various embodiments, the reducing of radius and the decrementing and/or incrementing of ages are performed during classifier learning.
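A sketch of operations 502 through 512 over a single extracted feature follows. The dictionary representation of hidden neurons, the multiplicative radius reduction factor, and the flat aging step of one are illustrative assumptions, as the disclosure leaves these values open.

import numpy as np


def age_and_adjust(feature, true_cls, neurons, radius_shrink=0.9, step=1.0):
    """One classifier-learning step over the hidden layer (operations 502-512).

    `neurons` is assumed to be a list of dicts with keys "center", "radius",
    "cls", and "age".  For every neuron activated by the feature: if its class
    differs from the ground-truth class, its radius is reduced and its age is
    decremented; if the class matches, its age is incremented.  The shrink
    factor and flat step of one are illustrative choices.
    """
    for n in neurons:
        dist = float(np.linalg.norm(np.asarray(feature, float) - n["center"]))
        if dist >= n["radius"]:
            continue                      # not activated: age is unchanged
        if n["cls"] != true_cls:          # misclassification
            n["radius"] *= radius_shrink  # shrink the region of influence
            n["age"] -= step              # decrement the age
        else:                             # correct classification
            n["age"] += step              # increment the age


if __name__ == "__main__":
    neurons = [
        {"center": np.array([0.0, 0.0]), "radius": 1.0, "cls": "walking", "age": 0.0},
        {"center": np.array([0.3, 0.3]), "radius": 1.0, "cls": "running", "age": 0.0},
    ]
    age_and_adjust([0.1, 0.1], "walking", neurons)
    print([(n["cls"], round(n["age"], 2), round(n["radius"], 2)) for n in neurons])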
In various embodiments, the decrementing and/or incrementing of the age of a hidden neuron may be a flat value or may be a ratio. For example, a flat value of one may be used for the decrementing or incrementing. Alternatively, a ratio may be used for the decrementing or incrementing. In various embodiments, the ratio may be based on the distance of the extracted features from the center (e.g., 212A) and the radius (e.g., 214A) of the feature space of a hidden neuron 152.
The values of decrementing the age may be expressed as:
The value of incrementing the age may be expressed as:
For example, in an embodiment with a two dimensional feature space, each hidden neuron may have a feature space represented by a circle, such as illustrated in
The smaller this ratio for a hidden neuron, the closer the extracted feature is to the hidden neuron. How close an extracted feature is may determine how to increment or decrement a plurality of hidden neurons.
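The ratio-based variant may be sketched as follows. Because the exact expressions are not reproduced in this text, the specific update values (decrementing by the distance-to-radius ratio and incrementing by one minus that ratio) are only one plausible reading consistent with the description that a smaller ratio indicates a closer feature; they are assumptions, not the disclosed formulas.

def closeness_ratio(distance, radius):
    """Ratio of the feature's distance from the neuron's center to its radius.

    The smaller the ratio, the closer the extracted feature is to the hidden
    neuron.
    """
    return distance / radius


def ratio_decrement(age, distance, radius):
    # Misclassification: decrement the age by the closeness ratio (assumption).
    return age - closeness_ratio(distance, radius)


def ratio_increment(age, distance, radius):
    # Correct classification: closer features contribute a larger increment
    # (assumption), bounded by one for a feature exactly at the center.
    return age + (1.0 - closeness_ratio(distance, radius))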
In various embodiments where a feature may be located inside two or more feature spaces of hidden neurons 152 that are all associated with the same class, only one hidden neuron that is closest may be incremented and the others will be decremented.
The decrementing and incrementing of the age of hidden neurons may be utilized to associate each hidden neuron 152 with an age. If a hidden neuron 152 was not activated, then that hidden neuron may keep the same age without a decrement or an increment. In various embodiments, the age of a hidden neuron 152 may, prior to the operations of
As the determination of an age of hidden neurons 152 requires there to be a hidden neuron 152 to be aged, if there are new classes added where a hidden neuron 152 has not yet been added to the classifier 130, such as in accord with the operations of
At operation 514, a new hidden neuron for a new class is added. The addition of a new hidden neuron may include providing the new hidden neuron 152 an age. In various embodiments it may be desired not to remove a newly added hidden neuron 152 even if it has a low age, which may or may not be a lowest age among the hidden neurons 152. For example, a hidden neuron 152 may have an age set to a value that would be the same as other, already existing, hidden neurons 152 were set with, such as 0. Alternatively or additionally, a newly added hidden neuron 152 may be associated with a new hidden neuron identifier. The new hidden neuron identifier may be used during classifier learning to identify the new hidden neuron 152 and prevent it from being removed during one or more operations described herein even though the new hidden neuron 152 may have a low or the lowest age. Thus the new hidden neuron 152 may not be considered for removal.
At operation 602, a threshold is determined. This threshold may be a threshold for the number of hidden neurons 152 for the classifier to utilize. In various embodiments, the threshold may be a total number of hidden neurons 152 the classifier 130 may be able to include for a device, which may include any newly added hidden neurons 152. The threshold may be set by a user and/or based on the available resources of a device and/or system. In various embodiments, a user may provide the threshold via a user interface. Such a threshold may be received in response to a request to the user to provide input such as described herein. The threshold may be provided by a user previously and stored in the system. Alternatively or additionally, the threshold may be determined based on the available resources of the device and/or system. For example, it may be based on a value of storage or memory available and/or dedicated to the classifier. This may be a flat value or a percentage value. This may also be a static value and/or a dynamic value that may change as resources may change over time. In various embodiments, the threshold may be determined based on a length or count associated with the number of hidden neurons.
At operation 604, it is determined if the number of hidden neurons is greater than the threshold. If the number of hidden neurons 152 is not greater than the threshold then it may be determined that no hidden neurons 152 need to be removed before the classifier learning may proceed to the next sensor data samples (e.g., operation 402). In various embodiments, this may end the classifier learning. If the number of hidden neurons is greater than the threshold then the classifier learning may proceed to operation 606.
In various embodiments, the threshold may be associated with a number of hidden neurons 152 per class. For example, there may be a total of 300 hidden neurons 152 for 5 classes and a threshold of 200 may be determined. This threshold of 200 may be utilized to determine a difference of 100 between the total of 300 hidden neurons and the threshold of 200, which per class is a difference of 20 hidden neurons 152, which may also be referred to as a per class difference. The per class difference of 20 may be used to determine that 20 hidden neurons 152 are to be removed from each of the 5 classes to meet the threshold of 200 hidden neurons 152. In this embodiment, which hidden neurons 152 to remove per class may be determined by the age of each of the hidden neurons 152 for each class and then removing the per class difference for each class. In various embodiments, the age of each class may be processed separately and a per class threshold to use for processing each class separately may be set to the per class difference.
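The per class difference arithmetic in the example above may be sketched as follows, assuming the even per-class split described; classes of uneven size would require a different policy.

def per_class_removals(total_neurons, num_classes, threshold):
    """Compute how many hidden neurons to remove from each class.

    With the example figures above (300 hidden neurons, 5 classes, threshold
    of 200), the difference of 100 is spread evenly as 20 removals per class.
    """
    difference = max(0, total_neurons - threshold)
    return difference // num_classes


if __name__ == "__main__":
    print(per_class_removals(300, 5, 200))   # -> 20 hidden neurons per class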
At operation 606, the hidden neurons may be sorted by age. The sorting of the hidden neurons 152 by age may generate a ranking of or may rank the hidden neurons 152 by age, which may be done for each class separately. This may, based on the sorting, allow for quick determination of the hidden neurons 152 with low ages and the hidden neurons 152 with high ages, which may be done for each class separately. In various embodiments, the high ages will be the hidden neurons 152 with greater numbers of activations and the lower ages will be the hidden neurons 152 with lower number of activations. It will also be appreciated that various embodiments could use an age such that high ages would be lower activations and low ages would be higher activations by modifying operations described herein.
At operation 608, classes with only one hidden neuron are determined. As the removal of a hidden neuron 152 for a class with only one hidden neuron 152 would result in the removal of the class, the identification of such hidden neurons 152 may be used to protect or keep classes when the removal of other hidden neurons 152 may not result in a class being removed. In various embodiments, the determination may be based on a hidden neuron 152 being associated with a new hidden neuron identifier or not.
At operation 610, hidden neurons from classes with more than one hidden neuron are removed based on age. For the classes identified as having more than one hidden neuron 152, the removal of a hidden neuron 152 will not remove the class. In various embodiments, operation 610 will remove a hidden neuron 152 with the lowest age.
In various embodiments, multiple hidden neurons 152 may be removed. The removal of multiple hidden neurons 152 may be performed by iterating one or more operations of
In various embodiments, after removing one or more hidden neurons 152 from classes with more than one hidden neuron 152 the number of hidden neurons may still be greater than the threshold. Then the classifier learning may determine one or more classes to remove by removing one or more hidden neurons 152 associated with the classes with only one hidden neuron.
At operation 612, and after removing one or more hidden neurons from classes with more than one hidden neuron, it is determined if the number of hidden neurons is greater than the threshold. If the number of hidden neurons 152 is not greater than the threshold then it may be determined that no further hidden neurons 152 need to be removed and the classifier learning may proceed to the next sampled sensor data. This may include further classifier learning. In various embodiments, this may end the classifier learning. If the number of hidden neurons 152 is greater than the threshold then the classifier learning may proceed to operation 614.
At operation 614, the age of classes may be determined. The determination of the age of a class may be based on the ages of the hidden neurons 152 associated with that class. Determining the age of a class may indicate the activation of the class and, thus, which classes should be kept and/or which classes should be removed. In various embodiments, the determining of an age of a class may be omitted.
At operation 616, hidden neurons and/or classes are removed based on age. This may utilize the sorting of hidden neurons 152 based on age. Alternatively or additionally, this may utilize a similar sorting of classes based on age (an operation not illustrated in
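As a non-limiting sketch of operations 612 through 616, the following code repeatedly removes the class with the lowest age, together with its hidden neurons 152, until the number of hidden neurons 152 no longer exceeds the threshold. The choice of the maximum member age as the class age is an assumption made here for concreteness; as stated above, the class age may be based on the ages of the associated hidden neurons 152 in other ways.

```python
# Illustrative sketch of operations 612-616; the class-age rule (maximum member age)
# is an assumption, not the only disclosed option.
def remove_classes_until_threshold(neurons, threshold):
    while len(neurons) > threshold:
        # Operation 614: determine an age for each remaining class.
        class_ages = {}
        for neuron in neurons:
            previous = class_ages.get(neuron["class"], float("-inf"))
            class_ages[neuron["class"]] = max(previous, neuron["age"])
        # Operation 616: remove the class (and its hidden neurons) with the lowest age.
        lowest_age_class = min(class_ages, key=class_ages.get)
        neurons = [n for n in neurons if n["class"] != lowest_age_class]
    return neurons
```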
In various embodiments where one hidden neuron 152 is removed at a time, the operations of
For example, the dimensions of an extracted feature may be determined to be in the region of influence of more than one hidden neuron 152. This may occur, for example, if there are features associated with too many classes that overlap, collide, or collapse with each other. In the two-dimensional examples illustrated in
At operation 702, the hidden neurons activated by an extracted feature are determined. The hidden neurons activated by an extracted feature may be determined based on the distance of the dimensions of the extracted feature to the centers of the feature spaces as described herein. At operation 704, the hidden neuron closest to the extracted feature is determined. Of the plurality of hidden neurons activated, one will be the closest. In various embodiments, this may be determined by identifying which activated hidden neuron 152 has the smallest distance from the extracted feature to the center of its region of influence. Alternatively or additionally, the closest hidden neuron may be determined by a ratio of the radius (e.g., 214A) of each activated hidden neuron 152's feature space to the respective distance from the feature to the center of that feature space. The hidden neuron associated with the highest ratio may be determined to be the closest hidden neuron.
In various embodiments, this may be done by creating a ratio of the radius of the hidden neuron 152 to the distance. This ratio may be a score determined for each hidden neuron 152, and this score may be tracked. After the score for each of the activated hidden neurons 152 has been determined, the hidden neuron 152 associated with the highest score is determined. In various embodiments, the hidden neuron 152 with the highest score may be considered the most reliable because it may be the closest hidden neuron 152. By dividing the radius of a hidden neuron 152 by the distance, the larger radii of hidden neurons 152 are accounted for, reflecting how large a feature space is encompassed by each hidden neuron 152. Use of such a score may also address overlap of feature spaces. Thus, even though the distances to two hidden neurons (e.g., 210A, 210B) may be the same, the radii will help determine which hidden neuron 152 is the closest hidden neuron 152.
At operation 706, the closest hidden neuron is set as the hidden neuron with the same class as the extracted feature. The closest hidden neuron 152 of the plurality of hidden neurons 152 may be set as the activated hidden neuron and as the hidden neuron 152 with the same class as the extracted feature. The remainder of the hidden neurons 152 that otherwise would have been activated may be determined to not be activated and/or to have a different class than the extracted feature. Thus, only one hidden neuron 152 would be determined to be activated and/or to have the same class as the extracted feature.
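As a non-limiting sketch of operations 702 through 706, the following code determines the activated hidden neurons 152, scores each one as the ratio of its radius to its distance from the extracted feature, and assigns the class of the highest-scoring (closest) hidden neuron 152 to the feature. The assumption that each hidden neuron exposes a "center" vector and a "radius" value, and the function name, are illustrative only.

```python
import math

# Illustrative sketch of operations 702-706; field names ("center", "radius", "class") are assumptions.
def resolve_overlapping_activations(feature, hidden_neurons):
    # Operation 702: find the hidden neurons whose region of influence contains the feature.
    activated = []
    for neuron in hidden_neurons:
        distance = math.dist(feature, neuron["center"])
        if distance <= neuron["radius"]:
            activated.append((neuron, distance))
    if not activated:
        return None

    # Operation 704: score each activated neuron as radius / distance; the highest
    # score (largest radius relative to distance) is treated as the closest neuron.
    def score(entry):
        neuron, distance = entry
        return neuron["radius"] / distance if distance > 0 else float("inf")

    closest, _ = max(activated, key=score)
    # Operation 706: only the closest hidden neuron keeps the class of the extracted feature.
    return closest["class"]
```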
Embodiments of the present disclosure herein include systems and apparatuses configured for and to perform one or more operations described herein.
The processor 802, although illustrated as a single block, may be comprised of a plurality of components and/or processor circuitry. The processor 802 may be implemented as, for example, various components comprising one or a plurality of microprocessors with accompanying digital signal processors; one or a plurality of processors without accompanying digital signal processors; one or a plurality of coprocessors; one or a plurality of multi-core processors; processing circuits; and various other processing elements. The processor may include integrated circuits, such as ASICs, FPGAs, systems-on-a-chip (SoC), or combinations thereof. In various embodiments, the processor 802 may be configured to execute applications, instructions, and/or programs stored in the processor 802, memory 804, or otherwise accessible to the processor 802. When executed by the processor 802, these applications, instructions, and/or programs may enable the execution of one or a plurality of the operations and/or functions described herein. Regardless of whether it is configured by hardware, firmware/software methods, or a combination thereof, the processor 802 may comprise entities capable of executing operations and/or functions according to the embodiments of the present disclosure when correspondingly configured.
The memory 804 may comprise, for example, a volatile memory, a non-volatile memory, or a certain combination thereof. Although illustrated as a single block, the memory 804 may comprise a plurality of memory components. In various embodiments, the memory 804 may comprise, for example, a random access memory, a cache memory, a flash memory, a hard disk, a circuit configured to store information, or a combination thereof. The memory 804 may be configured to write or store data, information, application programs, instructions, etc. so that the processor 802 may execute various operations and/or functions according to the embodiments of the present disclosure. For example, in at least some embodiments, a memory 804 may be configured to buffer or cache data for processing by the processor 802. Additionally or alternatively, in at least some embodiments, the memory 804 may be configured to store program instructions for execution by the processor 802. The memory 804 may store information in the form of static and/or dynamic information. When the operations and/or functions are executed, the stored information may be stored and/or used by the processor 802.
In various embodiments, the processing capabilities of the processor 802 and the size of the memory 804 of the device 800 may be limited. An example embodiment of a device 800 with such limitations may be a microcontroller. One or more operations described herein may be performed within these limitations, which may allow such embodiments to overcome what would otherwise be limitations on implementing such operations.
The communication circuitry 806 may be implemented as a circuit, hardware, computer program product, or a combination thereof, which is configured to receive and/or transmit data from/to another component or apparatus. The computer program product may comprise computer-readable program instructions stored on a computer-readable medium (e.g., memory 804) and executed by a processor 802. In various embodiments, the communication circuitry 806 (as with other components discussed herein) may be at least partially implemented as part of the processor 802 or otherwise controlled by the processor 802. The communication circuitry 806 may communicate with the processor 802, for example, through a bus 810. Such a bus 810 may connect to the processor 802, and it may also connect to one or more other components of the processor 802. The communication circuitry 806 may be comprised of, for example, transmitters, receivers, transceivers, network interface cards and/or supporting hardware and/or firmware/software, and may be used for establishing communication with another component(s), apparatus(es), and/or system(s). The communication circuitry 806 may be configured to receive and/or transmit data that may be stored by, for example, the memory 804 by using one or more protocols that can be used for communication between components, apparatuses, and/or systems.
In various embodiments, the communication circuitry 806 may convert, transform, and/or package data into data packets and/or data objects to be transmitted and/or convert, transform, and/or unpackage data received, such as from a first protocol to a second protocol, from a first data type to a second data type, from an analog signal to a digital signal, from a digital signal to an analog signal, or the like. The communication circuitry 806 may additionally, or alternatively, communicate with the processor 802, the memory 804, the input/output circuitry 808, the AI circuitry 810 and/or the sensors 110, such as through a bus 210. Additionally, or alternatively, in some embodiments the communication circuitry may be at least partially implemented as a part of a sensor 110.
The input/output circuitry 808 may communicate with the processor 802 to receive instructions input by an operator and/or to provide audible, visual, mechanical, or other outputs to an operator. The input/output circuitry 808 may comprise supporting devices, such as a keyboard, a mouse, a user interface, a display, a touch screen display, lights (e.g., warning lights), indicators, speakers, and/or other input/output mechanisms. The input/output circuitry 808 may comprise one or more interfaces to which supporting devices may be connected. In various embodiments, aspects of the input/output circuitry 808 may be implemented on a device used by the operator to communicate with the processor 802. The input/output circuitry 808 may communicate with the memory 804, the communication circuitry 806, the AI circuitry 810, and/or any other component, for example, through a bus 812.
The AI circuitry 810 may be implemented as any apparatus included in a circuit, hardware, computer program product, or a combination thereof, which is configured to perform one or more AI operations and/or functions, such as those described herein. The AI circuitry 810 may include computer-readable program instructions for AI operations and/or functions stored on a computer-readable medium (e.g., memory 804) and executed by a processor 802. In various embodiments, the AI circuitry 810 may be at least partially implemented as part of the processor 802 or otherwise controlled by the processor 802. The AI circuitry 810 may communicate with the processor 802, for example, through a bus 810.
In various embodiments, the device 800 of
Operations and/or functions of the present disclosure have been described herein, such as in flowcharts. As will be appreciated, computer program instructions may be loaded onto a computer or other programmable apparatus (e.g., hardware) to produce a machine, such that the resulting computer or other programmable apparatus implements the operations and/or functions described in the flowchart blocks herein. These computer program instructions may also be stored in a computer-readable memory that may direct a computer, processor, or other programmable apparatus to operate and/or function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture, the execution of which implements the operations and/or functions described in the flowchart blocks. The computer program instructions may also be loaded onto a computer, processor, or other programmable apparatus to cause a series of operations to be performed on the computer, processor, or other programmable apparatus to produce a computer-implemented process such that the instructions executed on the computer, processor, or other programmable apparatus provide operations for implementing the functions and/or operations specified in the flowchart blocks. The flowchart blocks support combinations of means for performing the specified operations and/or functions and combinations of operations and/or functions for performing the specified operations and/or functions. It will be understood that one or more blocks of the flowcharts, and combinations of blocks in the flowcharts, can be implemented by special purpose hardware-based computer systems which perform the specified operations and/or functions, or combinations of special purpose hardware with computer instructions.
While this specification contains many specific embodiments and implementation details, these should not be construed as limitations on the scope of any disclosures or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular disclosures. Certain features that are described herein in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
While operations and/or functions are illustrated in the drawings in a particular order, this should not be understood as requiring that such operations and/or functions be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, operations and/or functions in alternative ordering may be advantageous. In some cases, the actions recited in the claims may be performed in a different order and still achieve desirable results. Thus, while particular embodiments of the subject matter have been described, other embodiments are within the scope of the following claims.
While this detailed description has set forth some embodiments of the present disclosure, the appended claims may cover other embodiments of the present disclosure which differ from the described embodiments according to various modifications and improvements.
Within the appended claims, unless the specific term “means for” or “step for” is used within a given claim, it is not intended that the claim be interpreted under 35 U.S.C. § 112, paragraph (f).