The present disclosure relates generally to generating query results from databases, and particularly to generating query results based on a neural network.
It is becoming increasingly resource intensive to produce useful results from the growing amount of data generated by individuals and organizations. Business organizations in particular can generate petabytes of data, automatically gathered and stored in the course of usual business operations, and could benefit greatly from mining that data to extract useful insights.
A typical approach in attempting to gain insight from data includes querying a database storing the data to get a specific result. For example, a user may generate a query (e.g., an SQL query) and send it to a database management system (DBMS), which executes the query on one or more tables stored in the database. This is a relatively simple case; however, with organizations relying on a multitude of vendors for managing their data, each with its own technology for storing data, retrieving useful insights from data is becoming increasingly complex. It is also not uncommon for queries to take several minutes, or even hours, to complete when applied to vast amounts of stored data.
The advantages of speeding up the process are clear, and some solutions attempt to accelerate access to the databases. For example, one solution includes indexing data stored in databases. Another solution includes caching results of frequent queries. Yet another solution includes selectively retrieving results from the database so that a query can be served immediately.
However, while these database optimization and acceleration solutions are useful in analyzing databases of a certain size or known data sets, they can fall short of providing useful information when applied to large and unknown data sets, which may include data that an indexing or caching algorithm has not been programmed to process.
It would therefore be advantageous to provide a solution that would overcome the challenges noted above.
A summary of several example embodiments of the disclosure follows. This summary is provided for the convenience of the reader to provide a basic understanding of such embodiments and does not wholly define the breadth of the disclosure. This summary is not an extensive overview of all contemplated embodiments, and is intended to neither identify key or critical elements of all embodiments nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later. For convenience, the term “some embodiments” may be used herein to refer to a single embodiment or multiple embodiments of the disclosure.
Certain embodiments disclosed herein include a method for providing local approximations of query results. The method includes querying a primary neural network with at least one test query, wherein the at least one test query includes a real test result derived from executing the at least one test query on a data set; receiving from the primary neural network a predicted test result in response to the at least one test query; sending, based on the predicted test result, a model of the primary neural network to a local machine; and storing the model as a local neural network of the local machine, wherein the local neural network is configured to generate a prediction in response to a user query received by the local machine.
Certain embodiments disclosed herein also include a non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process. The process includes querying a primary neural network with at least one test query, wherein the at least one test query includes a real test result derived from executing the at least one test query on a data set; receiving from the primary neural network a predicted test result in response to the at least one test query; sending, based on the predicted test result, a model of the primary neural network to a local machine; and storing the model as a local neural network of the local machine, wherein the local neural network is configured to generate a prediction in response to a user query received by the local machine.
Certain embodiments disclosed herein also include a system for providing local approximations of query results. The system includes a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: query a primary neural network with at least one test query, wherein the at least one test query includes a real test result derived from executing the at least one test query on a data set; receive from the primary neural network a predicted test result in response to the at least one test query; send, based on the predicted test result, a model of the primary neural network to a local machine; and store the model as a local neural network of the local machine, wherein the local neural network is configured to generate a prediction in response to a user query received by the local machine.
A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
In one general aspect, a method may include receiving a plurality of query pairs, each query pair of the plurality of query pairs including a database query and a response, the response generated by executing the database query on a database. The method may also include detecting a variable in each query of each query pair; determining a variance of the variable; generating a first subset of potential values for the detected variable based on the determined variance, where each potential value is different from the response of each query pair; generating a plurality of training queries, each training query based on a database query of a query pair of the plurality of query pairs and a corresponding potential value from the first subset; executing each training query to generate a training response; and training a recurrent neural network (RNN) based on the plurality of training queries and a corresponding training response. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
Implementations may include one or more of the following features. The method may include: generating a response from the RNN based on providing the RNN with a training query of the plurality of training queries; and adjusting a weight value of a neuron of the RNN based on the generated response from the RNN and the training response. The method may include: generating an error function result based on the generated response from the RNN and the training response; and adjusting the weight value to minimize the error function result. The method may include: providing a database query to the trained RNN; and configuring the trained RNN to process the database query to generate a predicted result. The method may include: executing the provided database query on the database to generate a real result; and generating an output based on the predicted result and the real result. The method may include: continuously generating training queries; and continuously training the RNN based on the generated training queries. The method may include: continuously generating training queries until a predetermined number of training queries is generated. The method may include: continuously training the RNN until an error function result is below a predetermined threshold. The method may include: continuously generating training queries until the error function result is below the predetermined threshold. Implementations of the described techniques may include hardware, a method or process, or a tangible computer-readable medium.
In one general aspect, a non-transitory computer-readable medium may include one or more instructions that, when executed by one or more processors of a device, cause the device to: receive a plurality of query pairs, each query pair of the plurality of query pairs including a database query and a response, the response generated by executing the database query on a database; detect a variable in each query of each query pair; determine a variance of the variable; generate a first subset of potential values for the detected variable based on the determined variance, where each potential value is different from the response of each query pair; generate a plurality of training queries, each training query based on a database query of a query pair of the plurality of query pairs and a corresponding potential value from the first subset; execute each training query to generate a training response; and train a recurrent neural network (RNN) based on the plurality of training queries and a corresponding training response. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
In one general aspect, a system may include a processing circuitry. The system may also include a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: receive a plurality of query pairs, each query pair of the plurality of query pairs including a database query and a response, the response generated by executing the database query on a database. The system may in addition detect a variable in each query of each query pair. The system may moreover determine a variance of the variable. The system may also generate a first subset of potential values for the detected variable based on the determined variance, where each potential value is different from the response of each query pair. The system may furthermore generate a plurality of training queries, each training query based on a database query of a query pair of the plurality of query pairs and a corresponding potential value from the first subset. The system may in addition execute each training query to generate a training response. The system may moreover train a recurrent neural network (RNN) based on the plurality of training queries and a corresponding training response. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
Implementations may include one or more of the following features. The memory may contain further instructions that, when executed by the processing circuitry, further configure the system to: generate a response from the RNN based on providing the RNN with a training query of the plurality of training queries; and adjust a weight value of a neuron of the RNN based on the generated response from the RNN and the training response. The memory may contain further instructions that, when executed by the processing circuitry, further configure the system to: generate an error function result based on the generated response from the RNN and the training response; and adjust the weight value to minimize the error function result. The memory may contain further instructions that, when executed by the processing circuitry, further configure the system to: provide a database query to the trained RNN; and configure the trained RNN to process the database query to generate a predicted result. The memory may contain further instructions that, when executed by the processing circuitry, further configure the system to: execute the provided database query on the database to generate a real result; and generate an output based on the predicted result and the real result. The memory may contain further instructions that, when executed by the processing circuitry, further configure the system to: continuously generate training queries; and continuously train the RNN based on the generated training queries. The memory may contain further instructions that, when executed by the processing circuitry, further configure the system to: continuously generate training queries until a predetermined number of training queries is generated. The memory may contain further instructions that, when executed by the processing circuitry, further configure the system to: continuously train the RNN until an error function result is below a predetermined threshold. The memory may contain further instructions that, when executed by the processing circuitry, further configure the system to: continuously generate training queries until the error function result is below the predetermined threshold. Implementations of the described techniques may include hardware, a method or process, or a tangible computer-readable medium.
The subject matter disclosed herein is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the disclosed embodiments will be apparent from the following detailed description taken in conjunction with the accompanying drawings.
It is important to note that the embodiments disclosed herein are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts throughout the several views.
The one or more databases 120-1 through 120-N (hereinafter referred to as database 120 or databases 120, merely for simplicity) may store one or more structured data sets. In some embodiments, a database 120 may be implemented as any one of: a distributed database, data warehouse, federated database, graph database, columnar database, and the like. A database 120 may include a database management system (DBMS), not shown, which manages access to the database. In certain embodiments, a database 120 may include one or more tables of data.
The network 110 is connected to the neural network (NN) 200 and, in some embodiments, the training set generator 130. The NN 200 may be implemented as a recurrent NN (RNN). In an embodiment, a plurality of NNs may be implemented. For example, a second NN may have more layers than a first NN, as described herein below. The second NN may generate predictions with a higher degree of certainty (i.e., have a higher confidence level) than the first NN, while requiring more memory to store its NN model than the first NN.
The network 110 is further connected to a plurality of user nodes 140-1 through 140-M (hereinafter referred to as user node 140 or user nodes 140, merely for simplicity). A user node 140 may be a mobile device, a smartphone, a desktop computer, a laptop computer, a tablet computer, a wearable device, an Internet of Things (IoT) device, and the like. The user node 140 is configured to send a query to be executed on one or more of the databases 120. In an embodiment, a user node 140 may send the query directly to a database 120, to be handled, for example, by the DBMS. In a further embodiment, the query is sent to an approximation server 150.
The training set generator 130 is configured to receive, for example from a DBMS of a database 120, a plurality of training queries, from which to generate a training set for the neural network 200. An embodiment of the training set generator 130 is discussed in more detail below.
In an embodiment, the approximation server 150 is configured to receive queries from the user nodes 140, and send the received queries to be executed on the appropriate databases 120. The approximation server 150 may also be configured to provide a user node 140 with an approximate result, generated by the NN 200. This is discussed in more detail below.
The approximation server 150 may include a processing circuitry (not shown) that may be realized as one or more hardware logic components and circuits. For example, and without limitation, illustrative types of hardware logic components that can be used include field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), and the like, or any other hardware logic components that can perform calculations or other manipulations of information. In a further embodiment, the processing circuitry of the approximation server 150 is configured to include the training set generator and the neural network.
In an embodiment, the approximation server 150 may further include memory (not shown) configured to store software. Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the one or more processors, cause the processing circuitry to perform the various processes described herein.
In one embodiment, the central link 160 includes an approximation server 150, a training set generator (TSG) 130, and a neural network 200. In a further embodiment, the approximation server 150 includes the training set generator 130 and the neural network 200. In other deployments, additional networks 110-i are further connected to the central link 160. Specifically, each additional network 110-i is connected to one or more user nodes 140-L through 140-J and a local neural network machine 200-K. In the exemplary embodiment, ‘M’, ‘N’, ‘i’, ‘J’, ‘K’ and ‘L’ are integers greater than or equal to 1.
The second network 110-2 and each additional network 110-i may include local networks, such as, but not limited to, virtual private networks (VPNs), local area networks (LANs), and the like. Each local network includes a local NN machine 200-1 through 200-K for storing a NN model, which is generated by the approximation server 150. In an example, a NN model may be stored on one or more of the user nodes 140-1 through 140-M, which are communicatively connected to the local network 110-1. The user node 140-1 is configured to send a query to be executed on one or more of the databases 120, either directly (not shown) or via the approximation server 150 of the central link 160. The approximation server 150 may be configured to provide the user node 140-1 with an approximate result generated by the NN 200.
In some embodiments, a first NN and second NN are trained on a data set of one or more databases 120. For example, the first NN may include fewer layers and neurons than the second NN. The first NN may be stored in one or more local NN machines 200-1 through 200-K, such as local NN machine 200-1, and the second NN may be stored on the approximation server 150, e.g. the neural network 200. When a user node 140-1 sends a query for execution, the first NN stored on local NN machine 200-1 may provide an initial predicted result to the user node 140-1. The approximation server 150 will then provide a second predicted result having a greater accuracy than the initial predicted result. In some embodiments, the approximation server 150 may send the query for execution on the data set from the database 120, and provide the real result to the user node 140-1.
In an embodiment, the first NN is executed on a user node only if the user node has the computational resources, e.g., sufficient processing power and memory, to efficiently execute the query on the first neural network. If not, the user node may be configured either to access a local machine (e.g., a dedicated machine, or another user node on the local network) to generate predictions from a local neural network, or to be directed to the approximation server 150 of the central link 160.
It should be appreciated that the arrangement discussed above is provided as an example, and that other arrangements may be used without departing from the scope of the disclosed embodiments.
The input numerical translator matrix 205 is configured to determine what elements, such as predicates and expressions, are present in the received query. In an embodiment, each element is mapped by an injective function to a unique numerical representation. For example, the input numerical translator matrix 205 may receive a query and generate, for each unique query, a unique vector. The unique vectors may be fed as input to one or more of input neurons 215, which together form an input layer 210 of the neural network 200.
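As a non-limiting sketch, an injective mapping of this kind might be implemented along the following lines; the class name, token-level vocabulary scheme, and tokenization are illustrative assumptions, not details taken from this disclosure:

```python
# Illustrative sketch of an input numerical translator (element 205).
# Assumes a simple token-level vocabulary; all names are hypothetical.
class InputNumericalTranslator:
    def __init__(self):
        self.vocabulary = {}  # maps each unique query element to a unique integer

    def translate(self, query: str) -> list[float]:
        """Map each element (keyword, predicate, value) of the query to a
        unique numerical representation, producing one vector per query."""
        vector = []
        for element in query.replace(",", " ").split():
            # Injective mapping: a previously unseen element receives the
            # next unused integer, so distinct elements never collide.
            if element not in self.vocabulary:
                self.vocabulary[element] = len(self.vocabulary) + 1
            vector.append(float(self.vocabulary[element]))
        return vector

translator = InputNumericalTranslator()
print(translator.translate("SELECT SUM(income) FROM sales WHERE sales BETWEEN 18 AND 79"))
```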
Each neuron (also referred to as a node) of the neural network 200 is configured to apply a function to its input and to send the output of the function forward (e.g., to another neuron), and may include a weight function. A weight function of a neuron determines the amount of contribution a single neuron has on the eventual output of the neural network. The higher a weight value is, the more effect the neuron's computation carries on the output of the neural network.
The neural network 200 further includes a plurality of hidden neurons 225 in a hidden layer 220. In this exemplary embodiment, a single hidden layer 220 is shown; however, a plurality of hidden layers may be implemented without departing from the scope of the disclosed embodiments.
In an embodiment, the neural network 200 is configured such that each output of an input neuron 215 of the input layer 210 is used as an input of one or more hidden neurons 225 in the hidden layer 220. Typically, all outputs of the input neurons 215 are used as inputs to all the hidden neurons 225 of the hidden layer 220. In embodiments where a plurality of hidden layers is implemented, the output of the input layer 210 is used as the input for the hidden neurons of a first hidden layer.
The neural network 200 further includes an output layer 230, which includes one or more output neurons 235. The output of the hidden layer 220 is the input of the output layer 230. In an embodiment where a plurality of hidden layers is implemented, the output of the final hidden layer is the input of the output neurons 235 of the output layer 230. In some embodiments, the output neurons 235 of the output layer 230 may provide a result to an output numerical translator matrix 206, which is configured to translate the output of the output layer 230 from a numerical representation to a query result. The result may then be sent to the user node which sent the query.
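The layer structure described above can be illustrated with a minimal feed-forward sketch, continuing the translator sketch above; the layer sizes, tanh activation, and random weights are assumptions for illustration only:

```python
import numpy as np

# Illustrative forward pass through the input (210), hidden (220), and
# output (230) layers described above. Layer sizes and the activation
# function are assumptions, not taken from this disclosure.
rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 16, 32, 1

W_hidden = rng.normal(size=(n_in, n_hidden))   # weights: input -> hidden
W_out = rng.normal(size=(n_hidden, n_out))     # weights: hidden -> output

def forward(x: np.ndarray) -> np.ndarray:
    # Every input neuron's output feeds every hidden neuron, as described.
    hidden = np.tanh(x @ W_hidden)
    # The hidden layer's output is the input of the output layer.
    return hidden @ W_out

x = rng.normal(size=(n_in,))  # numerical representation of a query
predicted = forward(x)        # would be mapped back to a query result
                              # by the output numerical translator (206)
```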
In some embodiments, the neural network 200 may be stored on one or more user nodes (e.g., the user nodes 140 described above).
A neural network 200 may be trained by executing a number of training queries and comparing the results from the neural network 200 to real results determined from querying a database directly. The training of a neural network 200 is discussed in further detail below.
In an embodiment, a user node may periodically poll the approximation server to check if there is an updated version of the neural network. In another embodiment, the approximation server may push a notification to one or more user nodes to indicate that a new version of the neural network is available, and downloadable over a network connection. In some embodiments, the approximation server may have stored therein a plurality of trained neural networks, wherein each neural network is trained on a different data set. While a plurality of neural networks may be trained on different data sets, it is understood that some overlap may occur between data sets.
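As an illustration of the polling variant, a user node might check for a newer model version along these lines; the server interface, version scheme, and interval are hypothetical stand-ins, not an API defined by this disclosure:

```python
import time

# Illustrative polling loop for neural network model updates. The
# latest_model_version/download_model calls are hypothetical.
def poll_for_model(server, local_version, interval_seconds=3600):
    while True:
        latest = server.latest_model_version()      # ask for newest version
        if latest > local_version:
            model = server.download_model(latest)   # fetch the new NN model
            return model, latest                    # caller stores it locally
        time.sleep(interval_seconds)                # check again later
```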
It should be noted that the neural network discussed above is described for exemplary purposes, and that other neural network architectures may be used without departing from the scope of the disclosed embodiments.
At S320, the batch of training queries is fed to a neural network to generate a predicted result for each query. The neural network is configured to receive a batch of training queries, where a plurality of batches is called an epoch. The queries may be fed through one or more layers within the neural network. For example, a query may be fed through an input layer, a hidden layer, and an output layer. In an embodiment, each query is first fed to an input numerical translator matrix to determine elements present within the query. Each element is mapped, e.g., by an injective function, to a numerical representation, such as a vector. The vectors may be fed to one or more neurons, where each neuron is configured to apply a function to the vector, where the function includes at least a weight function. In an example embodiment, the weight function determines the contribution of each neuron's function toward the final predicted query result.
At S330, a comparison is made between the predicted result of a query and the real result of that query. The comparison includes determining the differences between the predicted result and the real result. For example, if the real result is a number value, the comparison includes calculating the difference between a number output value from the predicted result and the number value of the real result.
At S340, a determination is made if a weight of one or more of the neurons of the neural network should be adjusted. The determination may be made, for example, if the difference between the first predicted result and the first real result exceeds a first threshold. For example, if the difference between the number values exceeds 15%, the first threshold may be determined to be exceeded. If it is determined that the weight should be adjusted, execution continues at S350; otherwise execution continues at S360.
At S350, the weight of a neuron is adjusted via a weight function. The weight of a neuron determines the amount of contribution a single neuron has on the eventual output of the neural network. The higher a weight value is, the more effect the neuron's computation carries on the output of the neural network. Adjusting weights may be performed, for example, by methods of back propagation. One example of such a method is “backward propagation of errors,” which is an algorithm for supervised learning of neural networks using gradient descent. Given an artificial neural network and an error function, the method calculates the gradient of the error function with respect to the neural network's weights.
At S360, it is determined if the training for the neural network should continue. In an embodiment, training will end if an epoch has been fully processed, i.e., if the entire plurality of batches has been processed via the neural network. If the epoch has not ended, execution continues at S310, where a new batch is fed to the neural network; otherwise execution terminates. In some embodiments, a check is performed to determine the number of epochs the system has processed. The system may generate a target number of epochs to train the neural network with, based on the number of training queries generated, the variance of the data set, the size of the data set, and the like.
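Putting S310 through S360 together, a deliberately tiny sketch of such a training loop follows; the linear stand-in model, toy data, 15% threshold, and learning rate are all illustrative assumptions, with the weight update being the usual gradient-descent step w ← w − η·∂E/∂w applied to a squared-error function:

```python
import numpy as np

# Illustrative training loop following S310-S360. The model here is a
# deliberately tiny linear stand-in for the neural network described above.
rng = np.random.default_rng(1)
n_features = 8
weights = rng.normal(size=n_features)

LEARNING_RATE = 0.01
RELATIVE_THRESHOLD = 0.15  # S340: adjust weights if the error exceeds 15%

# Toy epoch: a few batches of (query vector, real result) pairs.
true_w = rng.normal(size=n_features)
batches = []
for _ in range(3):                            # three batches in this toy epoch
    batch = []
    for _ in range(4):
        x = rng.normal(size=n_features)       # numerical query representation
        batch.append((x, float(x @ true_w)))  # paired with its "real" result
    batches.append(batch)

for batch in batches:                         # S310: feed the next batch
    for query_vec, real_result in batch:
        predicted = float(query_vec @ weights)  # S320: predicted result
        error = predicted - real_result         # S330: compare to real result
        # S340: adjust only when the relative difference exceeds the threshold.
        if abs(error) > RELATIVE_THRESHOLD * max(abs(real_result), 1e-9):
            # S350: gradient of the squared error with respect to the weights
            weights -= LEARNING_RATE * 2 * error * query_vec
# S360: repeat over further epochs until the target number is reached.
```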
Typically, a large training set is required to achieve accurate results. However, training sets having both a sufficient depth of data (e.g., queries which require different areas of data for their results, take variance into account, and the like), and a sufficiently large quantity of query examples are not always available. Therefore, it may be advantageous to generate a qualified training set. An exemplary method is discussed herein.
At S510, a first set of queries is received, e.g., by a training set generator. The first set of queries may be queries that have been generated by one or more users, for example through user nodes. Typically, this first set of queries does not include enough queries to train a neural network to a point where the predictions are sufficiently accurate.
At S520, a variable element of a first query of the first plurality of queries is determined. For example, a query may be the following:
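SELECT SUM(income) FROM sales_data WHERE sales BETWEEN 18 AND 79;

(The query above is reconstructed for illustration, with sales_data as a placeholder table name; consistent with the discussion that follows, it requests the sum of income over rows whose 'sales' value lies between 18 and 79, and 'sales' would be detected as the variable element.)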
At S530, a variance of the data set is determined, where the variance includes a subset of values for the determined variable. Following the above example, where the variable 'sales' has values between 18 and 79, the full data set may have values ranging between 0 and 1,000. Thus, querying for the sum of income where sales is between 18 and 79 may not be representative of the sum of income for the entire data set, which would bias the NN model. In order to avoid this, the variance of the training queries is determined to take this potential bias into account.
At S540, a training query is generated based on the determined variable and the variance thereof. In the above example, the following query will be generated:
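SELECT SUM(income) FROM sales_data WHERE sales BETWEEN 0 AND 1000;

(This generated query is likewise a reconstruction, assuming the generator widens the detected 18-79 range toward the full 0-1,000 span of the data set so that the training queries reflect the variance of the entire data set rather than only the range seen in user queries.)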
At S550, a determination is made whether to generate another training query. If so, execution continues at S520; otherwise execution continues at S560. The determination may be based on, for example, whether a total number of queries (real and generated) has exceeded a predetermined threshold, whether the total number of generated queries is above a threshold, and the like. For example, it may be determined whether the training queries constitute a representative sample of the data set (i.e., queries that are directed to all portions of the data, or to a number of portions of the data above a predetermined threshold). In another example, it may be determined whether additional variance is required for certain predicates.
At S560, the training queries are provided to the input layer of the neural networks for training. Typically, the training queries are executed on the data set, to generate a query pair which includes the query and a real result thereof. The training queries and real results are then vectorized to a matrix representation that is fed to the neural network (as described in more detail above).
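A compact sketch of the overall generation flow S510 through S560 might look as follows; the range-widening strategy, query template, and all names are illustrative assumptions rather than details taken from this disclosure:

```python
import random

# Illustrative sketch of training-set generation (S510-S560). The
# disclosure requires only that generated values reflect the variance
# of the full data set rather than the narrow range seen in user queries.
QUERY_TEMPLATE = "SELECT SUM(income) FROM sales_data WHERE sales BETWEEN {lo} AND {hi};"

def generate_training_queries(data_min, data_max, n_queries, seed=0):
    rng = random.Random(seed)
    queries = []
    for _ in range(n_queries):                # S550: generate until enough queries
        lo = rng.randint(data_min, data_max)  # S520/S530: vary the detected variable
        hi = rng.randint(lo, data_max)        # across the full range of the data set
        queries.append(QUERY_TEMPLATE.format(lo=lo, hi=hi))  # S540
    return queries

# S560: each query would then be executed on the data set to obtain its real
# result, and the resulting (query, result) pairs fed to the neural network.
for q in generate_training_queries(0, 1000, 3):
    print(q)
```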
At S620, the query is sent to a trained neural network. In an embodiment, it is determined if the neural network is trained to provide a sufficiently accurate response to the received query. This may be based on, for example, a version number of the neural network indicating the training level thereof.
At S630, a determination is performed to ascertain if the query should be executed on the data set. In some embodiments, it may be advantageous to first supply an approximate answer immediately as the query is received, while additionally computing the real result of the query on the data set. This determination may be based on, for example, the version number of the neural network, the resources available to run the query through the neural network, the time required to execute the query, and so on. If it is determined that a real result should be provided, execution continues at S640; otherwise execution continues at S635.
At S635, a first result, or a predicted result, is provided, e.g., sent to the user node from which the query was received.
At S640, the predicted result is provided, e.g., to the user node from which the query was received, while the query is executed on one or more relevant data sets to determine the real results thereof. Execution may include sending all or part of the query to a DBMS of a database for execution thereon.
At S650, a second result, or an updated result, is provided, e.g., to the user node, where the update is based on the calculated real results. In an embodiment, a notification may be provided to indicate that the result has been updated from an approximate, predicted result to a real result. The notification may be a textual notification, a visual notification (such as the text or background of the notification changing colors), and the like.
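One way to realize this "approximate now, exact later" flow is sketched below; the neural_net, database, and on_update interfaces are hypothetical stand-ins for the components described above:

```python
import threading

# Illustrative sketch of the serving flow S620-S650: return the neural
# network's approximate result immediately, then update it once the real
# result arrives from the database. All interfaces here are hypothetical.
def serve_query(query, neural_net, database, on_update):
    predicted = neural_net.predict(query)      # S620/S635: fast approximation

    def compute_real():
        real = database.execute(query)         # S640: full execution (slow)
        on_update(real)                        # S650: push the updated, real
                                               # result back to the user node
    threading.Thread(target=compute_real, daemon=True).start()
    return predicted                           # returned without waiting
```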
At S660, it is determined whether to use the real result to further train the neural network. For example, if the difference between the real result and the predicted result is below a second threshold, it may be determined not to train the neural network, as the results are sufficiently accurate. Alternatively, it may be determined that the same result should be used for training, even if the difference is below the second threshold, in order to reinforce the quality of the prediction. If a real result is to be used for training, execution continues at S670; otherwise execution terminates.
At S670 the query and real result are sent to the neural network as an input to the input layer of the neural network. The neural network may be trained based on its latest state, i.e., its version number. The version number may be updated every time the neural network is trained based on the real result and the predicted result.
In an example embodiment, an approximation server of the neural network receives a plurality of queries and their real results, e.g., from S640, and stores them for periodically training the neural network. In another example embodiment, the query and result may be used by a training set generator to generate another set of training queries. In certain embodiments where the neural network further includes a version number, the version number may be updated each time the neural network is retrained. A copy of the neural network, with its version number, may be stored on any of the devices discussed above.
In an embodiment, the received query may be provided to a plurality of neural networks to be executed on each of their models, e.g., at S620, where at least two NNs of the plurality of NNs differ from each other in the number of layers, neurons, or both. For example, a first neural network will receive the query and generate a first predicted result. The first predicted result may be sent to a user node, a dashboard, a report, and the like. In parallel, or subsequently, the query is sent to a second neural network that has more layers, neurons, or both, than the first neural network.
Upon receiving a second predicted result from the second neural network, the result available to the user node may be updated, e.g., at S650. In certain embodiments, a loss function may be determined and a result thereof generated, for example by the approximation server. A loss function may be, for example, a root mean squared error. The loss function may be used to determine a confidence level of the prediction of a neural network. In an embodiment, it may be desirable to provide the query to the “leanest” neural network (i.e., the NN with the fewest layers, neurons, or both), which would require less computational resources.
A confidence level may be determined for the prediction, and if it falls below a threshold (i.e., the confidence level is too low) then the query may be provided to the next neural network, which would require more computational resources than the first NN, but may require less computational resources than a third NN or than executing the query on the data set itself to generate real results.
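The escalation logic described above might be sketched as follows; the RMSE helper reflects the example loss function mentioned earlier, while the predict_with_confidence interface and the fall-through behavior are illustrative assumptions:

```python
import math

# Illustrative cascade over progressively larger neural networks: try the
# "leanest" model first and escalate only when its confidence is too low.
def rmse(predictions, targets):
    """Root mean squared error, the example loss function mentioned above;
    a confidence level might be derived from it on held-out test queries."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets))

def cascaded_predict(query, models, confidence_threshold):
    """models is ordered from fewest to most layers/neurons."""
    for model in models:
        predicted, confidence = model.predict_with_confidence(query)
        if confidence >= confidence_threshold:
            return predicted    # a lean model was confident enough
    # Fall back to executing the query on the data set itself for a real result.
    return None
```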
The various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.
As used herein, the phrase “at least one of” followed by a listing of items means that any of the listed items can be utilized individually, or any combination of two or more of the listed items can be utilized. For example, if a system is described as including “at least one of A, B, and C,” the system can include A alone; B alone; C alone; A and B in combination; B and C in combination; A and C in combination; or A, B, and C in combination.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the disclosed embodiment and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosed embodiments, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
This application is a continuation of U.S. Non-Provisional patent application Ser. No. 15/858,943, now allowed, which itself claims the benefit of the following applications: U.S. Provisional Application No. 62/545,046 filed on Aug. 14, 2017; U.S. Provisional Application No. 62/545,050 filed on Aug. 14, 2017; U.S. Provisional Application No. 62/545,053 filed on Aug. 14, 2017; and U.S. Provisional Application No. 62/545,058 filed on Aug. 14, 2017. All of the applications referenced above are herein incorporated by reference.
Provisional Applications:

Number | Date | Country
---|---|---
62545058 | Aug 2017 | US
62545053 | Aug 2017 | US
62545050 | Aug 2017 | US
62545046 | Aug 2017 | US
Parent/Child Continuity Data:

Relationship | Number | Date | Country
---|---|---|---
Parent | 15858943 | Dec 2017 | US
Child | 18772825 | | US