The present disclosure relates to the field of environment control systems. More specifically, the present disclosure relates to a computing device and method using a neural network to infer a predicted state of a communication channel.
Systems for controlling environmental conditions, for example in buildings, are becoming increasingly sophisticated. A control system may simultaneously control heating and cooling, monitor air quality, and detect hazardous conditions such as fire, carbon monoxide release, intrusion, and the like. Such control systems generally include at least one environment controller, which receives measured environmental values, generally from external sensors, and in turn determines set-points or command parameters to be sent to controlled appliances.
Communications between an environment controller and the devices under its control (sensors, controlled appliances, etc.) were traditionally based on wires. The wires are deployed in the building where the environment control system is operating, for instance in the walls, ceilings, and floors of multiple rooms in the building. Deploying wires in a building is usually costly and disruptive to the daily operations in the building. Thus, recently deployed environment controllers and devices under their control (sensors, controlled appliances, etc.) are using one or more wireless communication protocols (e.g. Wi-Fi, mesh, etc.) to exchange environmental data.
The environment controller and the devices under its control (sensors, controlled appliances, etc.) are generally referred to as Environment Control Devices (ECDs). An ECD comprises processing capabilities for processing data received via one or more communication interfaces and/or generating data transmitted via the one or more communication interfaces. Each communication interface may be of the wired or wireless type.
A communication channel is associated to the communication interface of an ECD. The communication channel represents a physical and/or logical media allowing an exchange of data between the ECD and at least one remote computing device. For example, the communication channel consists of a cable plugged into the communication interface. Alternatively, the communication channel consists of one or more radio channels established by the communication interface.
The communication channel requires high availability and resistance to transmission errors. However, when using certain protocols such as the Internet Protocol (IP), nothing is actually occurring on the communication channel unless an actual communication over the communication channel is taking place. This makes it hard to detect whether the communication channel is in an operational state when nothing is transmitted over the communication channel for a certain amount of time. For example, if a computing device is in a listening only state on the communication interface, it is hard to tell whether receiving no data through the communication channel is normal or whether the communication channel is not in an operational state.
However, recent advances in artificial intelligence, and more specifically in neural networks, can be leveraged to define a model that takes into consideration sample data representative of past operating conditions of the communication channel to predict a current state of the communication channel.
Therefore, there is a need for a new computing device and method using a neural network to infer a predicted state of a communication channel.
According to a first aspect, the present disclosure relates to a computing device. The computing device comprises a communication interface. The communication interface allows an exchange of data between the computing device and at least one remote computing device over a communication channel associated to the communication interface. The computing device comprises memory for storing a predictive model generated by a neural network training engine. The computing device comprises a processing unit for collecting a plurality of data samples representative of operating conditions of the communication channel. Each data sample comprises a measure of the amount of data transmitted by the communication interface over the communication channel during a period of time, a measure of the amount of data received by the communication interface over the communication channel during the period of time, and a connection status of the communication channel during the period of time. The processing unit further executes a neural network inference engine using the predictive model for inferring a predicted state of the communication channel based on the plurality of data samples.
According to a second aspect, the present disclosure relates to a method using a neural network to infer a predicted state of a communication channel. The method comprises storing a predictive model generated by a neural network training engine in a memory of a computing device. The method comprises collecting, by a processing unit of the computing device, a plurality of data samples representative of operating conditions of the communication channel. The communication channel is associated to a communication interface of the computing device. The communication interface allows an exchange of data between the computing device and at least one remote computing device over the communication channel. Each data sample comprises a measure of the amount of data transmitted by the communication interface over the communication channel during a period of time, a measure of the amount of data received by the communication interface over the communication channel during the period of time, and a connection status of the communication channel during the period of time. The method further comprises executing, by the processing unit of the computing device, a neural network inference engine using the predictive model for inferring the predicted state of the communication channel based on the plurality of data samples.
According to a third aspect, the present disclosure relates to a non-transitory computer program product comprising instructions executable by a processing unit of a computing device. The execution of the instructions by the processing unit of the computing device provides for using a neural network to infer a predicted state of a communication channel, by implementing the aforementioned method.
Embodiments of the disclosure will be described by way of example only with reference to the accompanying drawings, in which:
The foregoing and other features will become more apparent upon reading of the following non-restrictive description of illustrative embodiments thereof, given by way of example only with reference to the accompanying drawings.
Various aspects of the present disclosure generally address one or more of the problems related to predicting a state of a communication channel associated to a communication interface of a computing device. More specifically, the present disclosure addresses computing devices consisting of environment control devices (ECDs), which exchange environmental data with other components of an environment control system via a communication channel associated to a communication interface of the ECDs.
The following terminology is used throughout the present disclosure:
Referring now concurrently to
The ECD 100 comprises a processing unit 110, memory 120, and a communication interface 130. The ECD 100 may comprise additional components (not represented in
The processing unit 110 comprises one or more processors (not represented in
The memory 120 stores instructions of computer program(s) executed by the processing unit 110, data generated by the execution of the computer program(s), data received via the communication interface 130 (or another communication interface), etc. Only a single memory 120 is represented in
The communication interface 130 allows the ECD 100 to exchange data with one or more remote device(s) 200 over a communication network 10. For example, the communication network 10 is a wired communication network, such as an Ethernet network; and the communication interface 130 is adapted to support communication protocols used to exchange data over the Ethernet network 10. Other types of wired communication networks 10 may also be supported by the communication interface 130. In another example, the communication network 10 is a wireless communication network, such as a Wi-Fi network; and the communication interface 130 is adapted to support communication protocols used to exchange data over the Wi-Fi network 10. Other types of wireless communication network 10 may also be supported by the communication interface 130, such as a wireless mesh network.
Referring more specifically to
The communication channels 12 and 12′ represented in
Reference is now made more specifically to
A dedicated computer program has instructions for implementing at least some of the steps of the method 400. The instructions are comprised in a non-transitory computer program product (e.g. the memory 120) of the ECD 100. The instructions provide for using a neural network to infer a predicted state of the communication channel 12, when executed by the processing unit 110 of the ECD 100. The instructions are deliverable to the ECD 100 via an electronically-readable media such as a storage media (e.g. CD-ROM, USB key, etc.), or via communication links (e.g. via the communication network 10 through the communication interface 130).
The dedicated computer program product executed by the processing unit 110 comprises a neural network inference engine 112 and a control module 114.
Also represented in
The execution of the neural network training engine 312 generates a predictive model, which is transmitted to the ECD 100 via the communication interface of the training server 300. For example, the predictive model is transmitted over the communication network 10 and received via the communication interface 130 of the ECD 100. Alternatively, the predictive model is transmitted over another communication network not represented in
The method 400 comprises the step 405 of executing the neural network training engine 312 to generate the predictive model. Step 405 is performed by the processing unit of the training server 300.
The method 400 comprises the step 410 of transmitting the predictive model to the ECD 100, via the communication interface of the training server 300. Step 410 is performed by the processing unit of the training server 300.
The method 400 comprises the step 415 of receiving the predictive model by the ECD 100, via the communication interface 130 of the ECD 100. Step 415 further comprises storing the predictive model in the memory 120 of the ECD 100. Step 415 is performed by the processing unit 110 of the ECD 100.
The method 400 comprises the step 420 of collecting a plurality of data samples representative of operating conditions of the communication channel 12 associated to the communication interface 130 of the ECD 100. Each data sample comprises: a measure of the amount of data transmitted by the communication interface 130 over the communication channel 12 during a period of time, a measure of the amount of data received by the communication interface 130 over the communication channel 12 during the period of time, and a connection status of the communication channel 12 during the period of time. Each data sample may include one or more additional type of data representative of the operating conditions of the communication channel 12. Step 420 is performed by the control module 114 executed by the processing unit 110.
The method 400 comprises the step 425 of executing the neural network inference engine 112 by the processing unit 110. The neural network inference engine 112 uses the predictive model (stored in memory 120 at step 415) for inferring a predicted state of the communication channel 12 based on the plurality of data samples (collected at step 420).
The number of data samples used as inputs of the neural network inference engine 112 may vary (e.g. 2, 3, 5, etc.). The period of time during which each data sample is collected may also vary (e.g. thirty seconds, one minute, five minutes, etc.). The period of time has the same duration for each data sample. Alternatively, the period of time is not the same for each data sample. Furthermore, the data samples are collected over consecutive periods of time. Alternatively, the data samples are collected over periods of time separated by a time interval.
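The collection of consecutive data samples described above can be sketched as follows. This is only an illustrative sketch: the `read_counters` callback, the `DataSample` structure, and the default period duration are assumptions standing in for the monitoring software of the communication interface, which the disclosure does not specify.

```python
import time
from dataclasses import dataclass

@dataclass
class DataSample:
    """Operating conditions of the communication channel over one period of time."""
    tx: int      # amount of data transmitted during the period
    rx: int      # amount of data received during the period
    status: int  # connection status (here: number of disconnections)

def collect_samples(read_counters, nb_samples=3, period=60.0):
    """Collect nb_samples data samples over consecutive periods of time.

    read_counters is a caller-supplied function returning the current
    cumulative (tx, rx, disconnections) counters of the communication
    interface; each data sample is the difference between two readings.
    """
    samples = []
    prev = read_counters()
    for _ in range(nb_samples):
        time.sleep(period)  # wait for one period of time
        cur = read_counters()
        samples.append(DataSample(cur[0] - prev[0], cur[1] - prev[1], cur[2] - prev[2]))
        prev = cur
    return samples
```

Collecting over periods separated by a time interval, as mentioned in the alternative above, would simply insert an additional wait between readings.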
The predicted state of the communication channel 12 comprises two states: operational and non-operational. Generally speaking, the operational state means that data can be exchanged over the communication channel 12 with a satisfying level of reliability, while the non-operational state means that data cannot be exchanged over the communication channel 12 with a satisfying level of reliability. For instance, below a given error rate (e.g. one IP packet lost per second), the communication channel 12 is considered to be operational; while above the given error rate, the communication channel 12 is considered to be non-operational. The error rate is defined for a bi-directional exchange of data over the communication channel 12. Alternatively, a first error rate is defined for transmission of data by the ECD 100 over the communication channel 12; and a second error rate is defined for reception of data by the ECD 100 over the communication channel 12. Thus, the predicted state of the communication channel 12 being operational means that the predicted error rate of the communication channel 12 is below a given threshold (e.g. one IP packet lost per second); and the predicted state of the communication channel 12 being non-operational means that the predicted error rate of the communication channel 12 is above the given threshold. Other measures may be used in place of (or in combination with) the error rate to quantify the operational and non-operational states of the communication channel 12. For example, a retransmission rate (e.g. one IP packet retransmitted per second) can also be used. The error rate, retransmission rate, etc. can be used during a training phase to generate the predictive model, as will be detailed later in the description.
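The mapping from an error rate to the operational / non-operational state described above reduces to a threshold comparison. A minimal sketch, using the exemplary threshold of one lost IP packet per second; the treatment of an error rate exactly equal to the threshold is an assumption, since the disclosure leaves that boundary case open:

```python
def channel_state(error_rate, threshold=1.0):
    """Classify the communication channel based on its error rate.

    error_rate: lost IP packets per second; threshold: the given error
    rate (e.g. one IP packet lost per second) below which the channel
    is considered operational. At exactly the threshold, the channel is
    treated here as non-operational (an assumption).
    """
    return "operational" if error_rate < threshold else "non-operational"
```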
The predicted state of the communication channel 12 may comprise more than two states (e.g. non-operational, degraded and fully-operational) to provide a better granularity for predicting the operation conditions of the communication channel 12.
With respect to the measure of the amount of data transmitted by the communication interface 130 over the communication channel 12 during the period of time (collected by the control module 114 for each data sample), it may consist of several metrics. For example, the measure consists of the number of bytes transmitted by the communication interface 130 over the communication channel 12 during the period of time, the number of Internet Protocol (IP) packets transmitted by the communication interface 130 over the communication channel 12 during the period of time, etc.
With respect to the measure of the amount of data received by the communication interface 130 over the communication channel 12 during the period of time (collected by the control module 114 for each data sample), it may also consist of several metrics. For example, the measure consists of the number of bytes received by the communication interface 130 over the communication channel 12 during the period of time, the number of Internet Protocol (IP) packets received by the communication interface 130 over the communication channel 12 during the period of time, etc.
With respect to the connection status of the communication channel 12 during the period of time, it may also consist of several metrics. For example, the connection status consists of a Boolean indicating whether the communication channel 12 is connected or disconnected during the period of time, the number of times the communication channel 12 has been disconnected during the period of time, etc. In the case of the Boolean, if the communication channel 12 has not been disconnected during the period of time, the Boolean is set to false. If the communication channel 12 has been disconnected at least once during the period of time, the Boolean is set to true.
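The two connection-status metrics described above (the Boolean and the number of disconnections) can be derived from a log of disconnection events. The following sketch assumes the monitoring software exposes such a log of timestamps, which is an illustrative assumption:

```python
def connection_status(disconnect_times, period_start, period_end):
    """Compute connection-status metrics for one period of time.

    disconnect_times: timestamps at which the communication channel 12
    was disconnected. Returns (was_disconnected, nb_of_disconnections)
    for the period [period_start, period_end): the Boolean is true if
    the channel has been disconnected at least once during the period.
    """
    nb = sum(1 for t in disconnect_times if period_start <= t < period_end)
    return (nb > 0, nb)
```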
In the case of a wired (e.g. Ethernet) interface 130, being connected means that a cable is plugged into the wired interface 130; and being disconnected means that a cable is not plugged into the wired interface 130. In the case of a wireless technology, the status of being connected or disconnected may vary from one wireless technology to another. For example, for a communication interface 130 compliant with one of the 802.11 standards, being connected means that the communication interface 130 is associated with an 802.11 (Wi-Fi) access point; and being disconnected means that the communication interface 130 is not associated with an 802.11 (Wi-Fi) access point.
The communication interface 130 (solely or in combination with the processing unit 110) executes monitoring software(s) capable of measuring/determining the data used in the data samples (measure of the amount of data transmitted and received, and connection status). These monitoring software(s) are well known in the art of communication technologies and protocols and will therefore not be detailed in the present disclosure.
Steps 420 and 425 of the method 400 are repeated to constantly evaluate the state of the communication channel 12.
In the case where the predicted state inferred at step 425 is representative of non-satisfying operating conditions of the communication channel 12 (e.g. non-operational as mentioned previously), further actions may be taken by the control module 114 executed by the processing unit 110 of the ECD 100. For example, a testing software is launched to further evaluate the operational state of the communication channel 12. The testing software generally relies on active probing (injection of test traffic to evaluate the operational state of the communication channel 12). Alternatively or complementarily, a warning or error message is displayed on a display of the ECD 100, to inform a user of the ECD 100 that the communication channel 12 is not operational. The warning or error message can also be logged in the memory 120 of the ECD 100, and/or transmitted to another computing device.
During the training phase, the neural network training engine 312 is trained with a plurality of inputs and a corresponding plurality of outputs.
Each input consists of a set of data samples and the corresponding output consists of the state of the communication channel 12. The same number of data samples is used as inputs of the neural network training engine 312 during the training phase and as inputs of the neural network inference engine 112 during the operational phase. As is well known in the art of neural networks, during the training phase, the neural network implemented by the neural network training engine 312 adjusts its weights. Furthermore, during the training phase, the number of layers of the neural network and the number of nodes per layer can be adjusted to improve the accuracy of the model. At the end of the training phase, the predictive model generated by the neural network training engine 312 includes the number of layers, the number of nodes per layer, and the weights.
The inputs and outputs for the training phase of the neural network can be collected through an experimental process. For example, a test ECD 100 is placed in various operating conditions and a plurality of tuples of data samples/corresponding state of the communication channel 12 are collected. The parameters of the data samples are varied dynamically by a user controlling the ECD 100. A first exemplary input comprises the 2 following data samples: [amount of data transmitted=500 packets, amount of data received=300 packets, nb_of_disconnections=1] in the time interval [T, T+1] and [amount of data transmitted=100 packets, amount of data received=60 packets, nb_of_disconnections=3] in the time interval [T+1, T+2]. The corresponding output is: communication channel non-operational in the time interval [T+2, T+3]. A second exemplary input comprises the 2 following data samples: [amount of data transmitted=500 packets, amount of data received=300 packets, nb_of_disconnections=1] in the time interval [T, T+1] and [amount of data transmitted=600 packets, amount of data received=350 packets, nb_of_disconnections=0] in the time interval [T+1, T+2]. The corresponding output is: communication channel operational in the time interval [T+2, T+3]. As mentioned previously, the state of the communication channel 12 can be evaluated by measuring parameter(s) such as error rate(s), retransmission rate(s), etc.; and comparing the measured parameter(s) to pre-defined threshold(s) to determine whether the state of the communication channel 12 is operational or non-operational.
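The two exemplary training tuples above can be encoded as flat input vectors and binary labels, which is the form typically fed to a training engine. The encoding below (1 = operational, 0 = non-operational, values flattened in sample order) is an illustrative assumption, not the format mandated by the disclosure:

```python
# Each input flattens 2 data samples collected in [T, T+1] and [T+1, T+2]:
# [tx_1, rx_1, nb_of_disconnections_1, tx_2, rx_2, nb_of_disconnections_2]
training_inputs = [
    [500, 300, 1, 100, 60, 3],   # first exemplary input
    [500, 300, 1, 600, 350, 0],  # second exemplary input
]
# Corresponding states of the communication channel 12 in [T+2, T+3]:
training_outputs = [
    0,  # non-operational
    1,  # operational
]
```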
Alternatively, the inputs and outputs for the training phase of the neural network can be collected through a mechanism for collecting data while the ECD 100 is operating in real conditions. For example, a collecting software is executed by the processing unit 110 of the ECD 100. The collecting software records the data samples over a plurality of time intervals. The collecting software further evaluates the state of the communication channel 12 during the plurality of time intervals. As mentioned previously, the state of the communication channel 12 can be evaluated by measuring parameter(s) such as error rate(s), retransmission rate(s), etc.; and comparing the measured parameter(s) to pre-defined threshold(s) to determine whether the state of the communication channel 12 is operational or non-operational.
Various techniques well known in the art of neural networks are used for performing (and improving) the generation of the predictive model, such as forward and backward propagation, usage of bias in addition to the weights (bias and weights are generally collectively referred to as weights in the neural network terminology), reinforcement learning, etc.
During the operational phase, the neural network inference engine 112 uses the predictive model (e.g. the values of the weights) determined during the training phase to infer an output (predicted state of the communication channel 12) based on inputs (a plurality of data samples), as is well known in the art.
Reference is now made concurrently to
The environment control system represented in
The sensor 200 detects an environmental characteristic and transmits corresponding environmental data (e.g. an environmental characteristic value) to the environment controller 100 via the Wi-Fi communication channel 12. The environment controller 100 receives the environmental characteristic value from the sensor 200, and determines an environmental state based on the received environmental characteristic value. Then, the environment controller 100 generates a command based on the environmental state; and transmits the command to the controlled appliance 200′ via the Wi-Fi communication channel 12.
Although a single sensor 200 is represented in
The method 400 is performed by the environment controller 100 to infer a predicted state of the Wi-Fi communication channel 12 associated to the Wi-Fi interface 130 of the environment controller 100. Similarly, the sensor 200 and the controlled appliance 200′ may perform the method 400 to infer a predicted state of a Wi-Fi communication channel (not represented in
Furthermore, as mentioned previously, the communication channel 12 is not limited to the Wi-Fi standard. Any of an environment controller, sensor and controlled appliance may apply the method 400 for inferring a predicted state of a communication channel associated to a communication interface of respectively the environment controller, sensor and controlled appliance. The communication interface can be wired or wireless, and support various types of standards, such as Wi-Fi, Ethernet, etc.
Additionally, the method 400 is not limited to ECDs, but can be applied to any computing device having a processing unit capable of executing a neural network inference engine and a communication interface having an associated communication channel, as illustrated in
Reference is now made concurrently to
A first plurality of ECDs 100 implementing the method 400 are deployed at a first location. Only two ECDs 100 are represented for illustration purposes, but any number of ECDs 100 may be deployed.
A second plurality of ECDs 100 implementing the method 400 are deployed at a second location. Only one ECD 100 is represented for illustration purposes, but any number of ECDs 100 may be deployed.
The first and second locations may consist of different buildings, different floors of the same building, etc. Only two locations are represented for illustration purposes, but any number of locations may be considered.
The ECDs 100 represented in
Reference is now made to
Reference is now made to
Instead of having the plurality of ECDs individually executing the corresponding plurality of neural network inference engines 112 (as illustrated in
Each ECD 100 collects the data samples according to step 420 of the method 400; and transmits them to the inference server 500. The inference server 500 performs the inference of the predicted state of the communication channel of the ECD 100 based on the data samples transmitted by the ECD 100, using the predictive model transmitted by the training server 300. The inference server 500 transmits the predicted state of the communication channel to the ECD 100.
The centralized inference server 500 may be used in the case where some of the ECDs 100 do not have sufficient processing power and/or memory capacity for executing the neural network inference engine 112. The centralized inference server 500 is a powerful server with high processing power and memory capacity capable of executing the neural network inference engine 512 using a complex predictive model (e.g. multiple layers with a large number of neurons for some of the layers). The centralized inference server 500 is further capable of executing several neural network inference engines 512 in parallel for serving a plurality of ECDs in parallel. As mentioned previously, the one or more neural network inference engine 512 executed by the inference server 500 may use the same or different predictive models for all of the ECDs 100 served by the inference server 500.
Reference is now made concurrently to
For illustration purposes, the neural network 600 comprises an input layer with 9 neurons for receiving 3 data samples. The number of neurons of the input layer depends on the number of data samples. For instance, if 4 data samples are used, then the input layer comprises 12 neurons for receiving the 4 data samples.
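The relation between the number of data samples and the size of the input layer stated above (three values per data sample: amount transmitted, amount received, and connection status) can be expressed as:

```python
def input_layer_size(nb_samples, values_per_sample=3):
    """Number of neurons of the input layer: each data sample contributes
    its tx, rx and connection-status values as separate input neurons."""
    return nb_samples * values_per_sample
```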
The first data sample represented in
The second data sample represented in
The third data sample represented in
For illustration purposes, the neural network 600 comprises one intermediate hidden layer with 4 neurons N1, N2, N3 and N4. The number of neurons of the hidden layer is determined during the training phase of the neural network 600 and may be different than 4 (e.g. 3, 5 or 6 instead of 4).
Although a single hidden layer is represented in
The neural network 600 comprises an output layer with one neuron for outputting an output value, based on which the predicted state of the communication channel 12 is determined.
The one or more hidden layer and the output layer are fully connected. A layer L being fully connected means that each neuron of layer L receives inputs from every neuron of layer L−1, and that each neuron of layer L applies respective weights to the received inputs. By default, the output layer is fully connected to the last hidden layer.
As illustrated in
Each neuron of a fully connected layer has a weight associated to each one of its connections with the previous layer. These weights correspond to the aforementioned weights of the predictive model, which are calculated during the training phase, and used during the operational phase.
The calculation of the output value of the neural network 600 based on the inputs (the data samples) using the weights allocated to the neurons of the neural network 600 is well known in the art for a neural network using only fully connected hidden layer(s).
For illustration purposes, the neuron N3 has 9 weights (w1_1, w1_2, w1_3, w2_1, w2_2, w2_3, w3_1, w3_2 and w3_3) respectively associated to the 9 connections with the 9 neurons of the input layer. The output value O3 of neuron N3 is calculated as follows: O3=w1_1*T_1+w1_2*R_1+w1_3*S_1+w2_1*T_2+w2_2*R_2+w2_3*S_2+w3_1*T_3+w3_2*R_3+w3_3*S_3. The same mechanism is applied for calculating the respective output values O1, O2 and O4 of the neurons N1, N2 and N4.
For illustration purposes, the neuron of the output layer has 4 weights (w1, w2, w3 and w4) respectively associated to the 4 connections with the 4 neurons N1, N2, N3 and N4 of the hidden layer. The output value O of the neuron of the output layer is calculated as follows: O=w1*O1+w2*O2+w3*O3+w4*O4.
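The weighted sums above for neuron N3 and for the neuron of the output layer are dot products, as sketched below. The input values and the uniform weights are illustrative placeholders; in practice the weights come from the predictive model generated during the training phase:

```python
def neuron_output(weights, inputs):
    """Weighted sum of a fully connected neuron (no bias, no activation)."""
    return sum(w * x for w, x in zip(weights, inputs))

# 9 input values: (T_1, R_1, S_1, T_2, R_2, S_2, T_3, R_3, S_3)
inputs = [1.0, 0.5, 0.0, 0.8, 0.4, 0.0, 0.9, 0.5, 1.0]
w_n3 = [0.1] * 9                   # illustrative weights of neuron N3
o3 = neuron_output(w_n3, inputs)   # O3 = w1_1*T_1 + ... + w3_3*S_3
```

The same `neuron_output` function applies to the output layer, with the 4 hidden-layer outputs O1..O4 as inputs and the weights w1..w4.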
In the case where the neural network 600 has more than one hidden layer, the mechanism for calculating the output values of the neurons (N1, N2, N3 and N4) of the first hidden layer based on the inputs of the neurons of the input layer can be easily adapted, for example for calculating the output values of the neurons of a second hidden layer based on the output values of the neurons of the first hidden layer.
The mechanism for the calculation of the output value of a neuron may involve additional steps, as is well known in the art of neural networks. For example, the result of the calculation of the output value of a neuron is corrected by a bias associated to the neuron. In another example, the result of the calculation of the output value of a neuron is corrected by an activation function. Different types of activation functions may be used for the hidden layer(s) and the output layer. Examples of activation functions typically used for the hidden layer(s) include sigmoid functions, hyperbolic tangent functions, Rectified Linear Unit activation functions, etc. Examples of activation functions typically used for the output layer include linear functions, sigmoid functions, softmax functions, etc.
The predicted state of the communication channel 12 is determined by the processing unit 110 based on the output value O of the neuron of the output layer.
We now consider the case where the predicted state of the communication channel 12 takes two discrete values, consisting of operational and non-operational.
In a first exemplary implementation, the activation function of the output layer is a sigmoid function, and the output value O of the neuron of the output layer takes a value between 0 and 1 representing a probability of the communication channel 12 being operational. If the output value O is greater than or equal to 0.5 then the communication channel 12 is operational. If the output value O is lower than 0.5 then the communication channel 12 is non-operational.
In a second exemplary implementation, the activation function of the output layer is a linear function, and the output value O of the neuron of the output layer takes a near-one-value (a value substantially equal to 1) or a near-zero-value (a value substantially equal to 0). If the output value O is the near-one-value then the communication channel 12 is operational. If the output value O is the near-zero-value then the communication channel 12 is non-operational.
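The two exemplary implementations above amount to simple decision rules on the output value O, as sketched below. The 0.5 cut-off in the sigmoid case is the one given above; rounding to the nearest of 0 and 1 in the linear case is an illustrative way to handle near-zero and near-one values:

```python
def state_from_sigmoid(o):
    """O in [0, 1] is a probability of the communication channel being
    operational; 0.5 is the decision threshold."""
    return "operational" if o >= 0.5 else "non-operational"

def state_from_linear(o):
    """O is substantially equal to 0 or to 1; snap to the nearest value."""
    return "operational" if round(o) == 1 else "non-operational"
```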
We now consider the case where the predicted state of the communication channel 12 takes three discrete values, consisting of (fully) operational, degraded and non-operational.
The first exemplary implementation using the sigmoid activation function for the output layer is not adapted to discriminate between more than two discrete values. In this case, the neural network 600 represented in
The second exemplary implementation can be adapted to 3 different discrete values. The output value O of the neuron of the output layer takes a near-one-value (a value substantially equal to 1), a near-zero-value (a value substantially equal to 0) or a near-median-value (a value substantially equal to 0.5). If the output value O is the near-one-value then the communication channel 12 is (fully) operational. If the output value O is the near-zero-value then the communication channel 12 is non-operational. If the output value O is the near-median-value then the communication channel 12 is degraded.
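The adaptation to three discrete values described above can be sketched by snapping the output value O to the nearest of the three reference values 0, 0.5 and 1. Mapping to the nearest reference value is an illustrative reading of "substantially equal to":

```python
def three_state(o):
    """Map the output value O to the channel state whose reference value
    (0, 0.5 or 1) is nearest to O."""
    states = {0.0: "non-operational", 0.5: "degraded", 1.0: "fully-operational"}
    nearest = min(states, key=lambda ref: abs(o - ref))
    return states[nearest]
```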
Reference is now made concurrently to
The neural network 600 illustrated in
We consider the case where the predicted state of the communication channel 12 takes discrete values. Each discrete value is represented by one neuron of the output layer.
Each neuron of the output layer is fully connected to all the neurons of the (last) hidden layer. The previously described calculation of the output value of a neuron of the hidden layer(s) and the output value of a neuron of the output layer in relation to the neural network 600 represented in
In an exemplary implementation, the activation function of the output layer is a softmax function, the output value O of each neuron of the output layer takes a value between 0 and 1, and the sum of the output values of the neurons of the output layer is equal to 1 (each output value represents a predicted probability). The state of the communication channel 12 is determined by the neuron of the output layer having the highest output value.
For example, referring to
For example, referring to
The design of the neural network 600 can be adapted to a communication channel 12 having a state consisting of N discrete values, N being any integer greater than 1. The neural network 600 has an output layer with N neurons respectively representative of one of the states of the communication channel.
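The decision rule described above for N discrete states reduces to an arg-max over the N output probabilities of the softmax output layer; a sketch, with the three-state case as default:

```python
def predicted_state(probabilities,
                    states=("non-operational", "degraded", "fully-operational")):
    """Pick the state whose output neuron has the highest output value.

    probabilities: softmax outputs of the N neurons of the output layer
    (values between 0 and 1 summing to 1); states: the corresponding N
    discrete states of the communication channel.
    """
    best = max(range(len(states)), key=lambda i: probabilities[i])
    return states[best]
```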
As mentioned previously, the values of the weights of the neural network 600 are calculated during a training phase, for example by applying a backward propagation algorithm to training data sets, each training data set comprising a set of data samples and corresponding expected output value(s) of the output layer of the neural network 600 representing a corresponding expected state of the communication channel 12. The choice of the activation functions (or absence of activation function for a given layer) is also performed during the training phase. An experimental process for implementing the training phase (in particular for collecting training data sets) has been described previously.
Listening Only State without Receiving Data
Reference is now made concurrently to
A modification to the implementation of the method 400 consists in performing step 425 (the execution of the neural network inference engine 112) only when a determination is made by the processing unit 110 that the environment control device 100 has been in a listening only state on the communication interface 130 for a given amount of time without receiving any data through the communication channel 12.
Although the present disclosure has been described hereinabove by way of non-restrictive, illustrative embodiments thereof, these embodiments may be modified at will within the scope of the appended claims without departing from the spirit and nature of the present disclosure.
This is a Continuation-in-Part application of U.S. patent application Ser. No. 16/003,430, filed Jun. 8, 2018, the disclosure of which is incorporated herein by reference in its entirety for all purposes.
Parent application: Ser. No. 16/003,430, filed Jun. 2018 (US). Child application: Ser. No. 17/490,621 (US).