EFFICIENT EXTENDED KALMAN FILTER (EKF) UNDER FEED-FORWARD APPROXIMATION OF A DYNAMICAL SYSTEM

Information

  • Patent Application
  • Publication Number
    20230289574
  • Date Filed
    July 27, 2022
  • Date Published
    September 14, 2023
Abstract
An Extended Kalman filter (EKF) is a general nonlinear version of the Kalman filter and an approximate inference solution which uses a linearized approximation performed dynamically at each step, followed by linear KF application. The Extended Kalman Filter involves dynamic computation of the partial derivatives of the non-linear system maps with respect to the input or current state. Existing approaches have failed to perform such recursive computations efficiently and exactly. Embodiments of the present disclosure provide efficient forward and backward recursion-based approaches wherein a forward pass is executed through a feed-forward network (FFN) to compute a value that serves as an input to the jth node at a layer l from a plurality of network layers of the FFN, and partial derivatives are estimated for each node associated with the various network layers in the FFN. The feed-forward network is used as the state and/or observation equation in the EKF.
Description
PRIORITY CLAIM

This U.S. patent application claims priority under 35 U.S.C. § 119 to: Indian Patent Application No. 202221013456, filed on Mar. 11, 2022. The entire contents of the aforementioned application are incorporated herein by reference.


TECHNICAL FIELD

The disclosure herein generally relates to feed-forward networks, and, more particularly, to efficient Extended Kalman filter (EKF) under feedforward approximation of a dynamical system.


BACKGROUND

The inference problem in a non-linear dynamical system (NLDS) is to predict an unobserved state variable X(k), given the observations Y(k), initial state and inputs u(k). In particular, it is posed as an optimal filtering problem where X(k0) is estimated in a recursive fashion, given observations and inputs up to some k0. To perform inference or predictions optimally (in a minimum mean-square error sense (MMSE)) under such NLDS models, exact methods are unavailable. An Extended Kalman filter (EKF) is one popular approximate inference solution which is very efficient run-time wise. It uses a linearized approximation (based on a Taylor series expansion) performed dynamically at each step and followed by linear KF application. The extended Kalman filter (EKF) is a general nonlinear version of the Kalman filter which linearizes about an estimate of the current mean and covariance. In the extended Kalman filter, the state transition and observation models do not need to be linear functions of the state but may instead be nonlinear differentiable functions.


EKF is employed across domains like signal processing (tracking applications), robotics (estimating the position of a robot), transportation, etc. Typically, the non-linear maps governing the state and observation equations are based on the physics of the domain. In situations where the physics-based models are not good enough, universal function approximators such as neural networks can alternatively be used if data is available to learn from. The Extended Kalman Filter involves dynamic computation of the partial derivatives of the non-linear system maps with respect to the input or current state. Existing approaches have failed to perform such recursive computations efficiently and exactly.


SUMMARY

Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.


For example, in one aspect, there is provided a processor implemented method comprising: obtaining, a feed-forward network (FFN) via one or more hardware processors, wherein the FFN comprises a plurality of network layers, wherein each of the plurality of network layers comprises one or more nodes; and recursively computing in a first direction, via the one or more hardware processors, a partial derivative at each node of the plurality of network layers comprised in the FFN by: identifying, via the one or more hardware processors, a recursive relationship between outputs associated with two successive network layers from the plurality of network layers, wherein the recursive relationship is identified using the equation: yjl+1(x)=S(Σi=1nl wijl yil(x)+ω0jl), wherein yjl+1 is an output of jth node at layer l+1 from the plurality of network layers of the FFN, S is a continuously differentiable activation function, nl is number of nodes associated with a layer l from the plurality of network layers of the FFN, i is an iteration number, wijl is a weight between ith node of the layer l and jth node of the layer l+1, yil is an output of the ith node of the layer l, and ω0jl is a bias for jth node from the layer l; and applying a partial derivative function on the equation associated with the identified recursive relationship, with reference to a specific input xq, to obtain a set of partial derivatives, wherein the partial derivative function is applied on the equation associated with the identified recursive relationship, using the equation:











$$\frac{\partial y_j^{l+1}}{\partial x_q}=S'\!\left(\eta_j^{l+1}\right)\left(\sum_{i=1}^{n_l} w_{ij}^{l}\,\frac{\partial y_i^{l}}{\partial x_q}\right),$$




wherein ηjl+1i=1nl wijl yil(x)+ω0jl serves as an input to the jth node at the layer l+1, and S′ is a derivative of the continuously differentiable activation function.


In an embodiment, the method further comprises recursively computing, in a second direction via the one or more hardware processors, a partial derivative at each node among the one or more nodes of the plurality of network layers comprised in the FFN by using the equation:








$$\delta_j^{l}=\sum_{s=1}^{n_{l+1}}\frac{\partial y_r^{L}}{\partial \eta_s^{l+1}}\,\frac{\partial \eta_s^{l+1}}{\partial \eta_j^{l}}=\sum_{s=1}^{n_{l+1}}\delta_s^{l+1}\,w_{js}^{l}\,S'\!\left(\eta_j^{l}\right),$$

wherein yrL is one or more rth outputs of the FFN, δjl is the partial derivative of the one or more rth outputs (yrL) of the FFN with reference to ηjl, indicative of $\frac{\partial y_r^{L}}{\partial \eta_j^{l}}$, and ηsl+1 is the input to the sth node at the layer l+1.


In an embodiment, the steps of recursively computing, in the first direction and recursively computing, in the second direction, are preceded by executing a forward pass through the FFN to compute ηjl, and wherein ηjl serves as an input to the jth node at the layer l from the plurality of network layers of the FFN.


In an embodiment, the first direction and the second direction are different from each other.


In an embodiment, ∂ηsl+1 and ∂ηjl are varied from one or more inputs at a final layer L (ηiL) to one or more corresponding inputs at a first layer M (ηiM) of the FFN in the second direction.


In an embodiment, an activation function at an input layer of the plurality of network layers comprised in the FFN is linear or non-linear.


In an embodiment, the feed-forward network (FFN) is used as at least one of a state equation and an observation equation in an Extended Kalman Filter (EKF).


In another aspect, there is provided a processor implemented system comprising: a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: obtain, a feed-forward network (FFN), the FFN comprising a plurality of network layers, wherein each of the plurality of network layers comprises one or more nodes; and recursively compute in a first direction, a partial derivative at each node of the plurality of network layers comprised in the FFN by: identifying, a recursive relationship between outputs associated with two successive network layers from the plurality of network layers, wherein the recursive relationship is identified using the equation: yjl+1(x)=S(Σi=1nl wijl yil(x)+ω0jl), wherein yjl+1 is an output of jth node at layer l+1 from the plurality of network layers of the FFN, S is a continuously differentiable activation function, nl is number of nodes associated with a layer l from the plurality of network layers of the FFN, i is an iteration number, wijl is a weight between ith node of the layer l and jth node of the layer l+1, yil is an output of the ith node of the layer l, and ω0jl is a bias for jth node from the layer l; and applying a partial derivative function on the equation associated with the identified recursive relationship, with reference to a specific input xq, to obtain a set of partial derivatives, wherein the partial derivative function is applied on the equation associated with the identified recursive relationship, using the equation:











$$\frac{\partial y_j^{l+1}}{\partial x_q}=S'\!\left(\eta_j^{l+1}\right)\left(\sum_{i=1}^{n_l} w_{ij}^{l}\,\frac{\partial y_i^{l}}{\partial x_q}\right),$$




wherein ηjl+1=Σi=1nl wijl yil(x)+ω0jl serves as an input to the jth node at the layer l+1, and S′ is a derivative of the continuously differentiable activation function.


In an embodiment, the one or more hardware processors are further configured by the instructions to recursively compute, in a second direction, a partial derivative at each node among the one or more nodes of the plurality of network layers comprised in the FFN by using the equation:








$$\delta_j^{l}=\sum_{s=1}^{n_{l+1}}\frac{\partial y_r^{L}}{\partial \eta_s^{l+1}}\,\frac{\partial \eta_s^{l+1}}{\partial \eta_j^{l}}=\sum_{s=1}^{n_{l+1}}\delta_s^{l+1}\,w_{js}^{l}\,S'\!\left(\eta_j^{l}\right),$$

wherein yrL is one or more rth outputs of the FFN, δjl is the partial derivative of the one or more rth outputs (yrL) of the FFN with reference to ηjl, indicative of $\frac{\partial y_r^{L}}{\partial \eta_j^{l}}$, and ηsl+1 is the input to the sth node at the layer l+1.


In an embodiment, prior to recursively computing in the first direction and recursively computing, in the second direction, the one or more hardware processors are configured to execute a forward pass through the FFN to compute ηjl, and wherein ηjl serves as an input to the jth node at the layer l from the plurality of network layers of the FFN.


In an embodiment, the first direction and the second direction are different from each other.


In an embodiment, ∂ηsl+1 and ∂ηjl are varied from one or more inputs at a final layer L (ηiL) to one or more corresponding inputs at a first layer M (ηiM) of the FFN in the second direction.


In an embodiment, an activation function at an input layer of the plurality of network layers comprised in the FFN is linear or non-linear.


In an embodiment, the feed-forward network (FFN) is used as at least one of a state equation and an observation equation in an Extended Kalman Filter (EKF).


In yet another aspect, there are provided one or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause obtaining, a feed-forward network (FFN), wherein the FFN comprises a plurality of network layers, wherein each of the plurality of network layers comprises one or more nodes; and recursively computing in a first direction, a partial derivative at each node of the plurality of network layers comprised in the FFN by: identifying, via the one or more hardware processors, a recursive relationship between outputs associated with two successive network layers from the plurality of network layers, wherein the recursive relationship is identified using the equation: yjl+1(x)=S(Σi=1nl wijl yil(x)+ω0jl), wherein yjl+1 is an output of jth node at layer l+1 from the plurality of network layers of the FFN, S is a continuously differentiable activation function, nl is number of nodes associated with a layer l from the plurality of network layers of the FFN, i is an iteration number, wijl is a weight between ith node of the layer l and jth node of the layer l+1, yil is an output of the ith node of the layer l, and ω0jl is a bias for jth node from the layer l; and applying a partial derivative function on the equation associated with the identified recursive relationship, with reference to a specific input xq, to obtain a set of partial derivatives, wherein the partial derivative function is applied on the equation associated with the identified recursive relationship, using the equation:











$$\frac{\partial y_j^{l+1}}{\partial x_q}=S'\!\left(\eta_j^{l+1}\right)\left(\sum_{i=1}^{n_l} w_{ij}^{l}\,\frac{\partial y_i^{l}}{\partial x_q}\right),$$




wherein ηjl+1=Σi=1nl wijl yil(x)+ω0jl serves as an input to the jth node at the layer l+1, and S′ is a derivative of the continuously differentiable activation function.


In an embodiment, the one or more instructions which when executed by the one or more hardware processors further cause recursively computing, in a second direction via the one or more hardware processors, a partial derivative at each node among the one or more nodes of the plurality of network layers comprised in the FFN by using the equation:








$$\delta_j^{l}=\sum_{s=1}^{n_{l+1}}\frac{\partial y_r^{L}}{\partial \eta_s^{l+1}}\,\frac{\partial \eta_s^{l+1}}{\partial \eta_j^{l}}=\sum_{s=1}^{n_{l+1}}\delta_s^{l+1}\,w_{js}^{l}\,S'\!\left(\eta_j^{l}\right),$$

wherein yrL is one or more rth outputs of the FFN, δjl is the partial derivative of the one or more rth outputs (yrL) of the FFN with reference to ηjl, indicative of $\frac{\partial y_r^{L}}{\partial \eta_j^{l}}$, and ηsl+1 is the input to the sth node at the layer l+1.


In an embodiment, the steps of recursively computing, in the first direction and recursively computing, in the second direction, are preceded by executing a forward pass through the FFN to compute ηjl, and wherein ηjl serves as an input to the jth node at the layer l from the plurality of network layers of the FFN.


In an embodiment, the first direction and the second direction are different from each other.


In an embodiment, ∂ηsl+1 and ∂ηjl are varied from one or more inputs at a final layer L (ηiL) to one or more corresponding inputs at a first layer M (ηiM) of the FFN in the second direction.


In an embodiment, an activation function at an input layer of the plurality of network layers comprised in the FFN is linear or non-linear.


In an embodiment, the feed-forward network (FFN) is used as at least one of a state equation and an observation equation in an Extended Kalman Filter (EKF).


It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:



FIG. 1 depicts an exemplary system for efficient Extended Kalman Filter (EKF) under feedforward approximation of a dynamical system, in accordance with an embodiment of the present disclosure.



FIG. 2 depicts an exemplary flow chart illustrating a method for efficient Extended Kalman Filter (EKF) under feedforward approximation of a dynamical system, using the system of FIG. 1, in accordance with an embodiment of the present disclosure.



FIG. 3 depicts an exemplary diagram of a Feed-Forward Network as implemented by the system of FIG. 1, in accordance with an embodiment of the present disclosure.





DETAILED DESCRIPTION

Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.


As mentioned earlier, the inference problem in non-linear dynamical systems (NLDS) is to predict an unobserved state variable X (k), given the observations Y(k), initial state and inputs u(k). More specifically, it is posed as an optimal filtering problem where X (k0) is estimated in a recursive fashion, given observations and inputs up to some k0. To perform inference or predictions optimally (in a minimum mean-square error sense (MMSE)) under such NLDS models, exact methods are unavailable. The Extended Kalman filter (EKF) is one popular approximate inference solution which is very efficient run-time wise. It uses a linearized approximation (based on a Taylor series expansion) performed dynamically at each step and followed by linear KF application. The extended Kalman filter (EKF) is a general nonlinear version of the Kalman filter which linearizes about an estimate of the current mean and covariance. In the extended Kalman filter, the state transition and observation models do not need to be linear functions of the state but may instead be nonlinear differentiable functions.


The EKF is employed across domains like signal processing (tracking applications), robotics (estimating the position of a robot), transportation, etc. Typically, the non-linear maps governing the state and observation equations are based on the physics of the domain. In situations where the physics-based models are not good enough, universal function approximators such as neural networks can alternatively be used if data is available to learn from. The Extended Kalman Filter involves dynamic computation of the partial derivatives of the non-linear system maps with respect to the input or current state. Existing approaches have failed to perform such recursive computations efficiently and exactly. Embodiments of the present disclosure provide a system and method with exact and efficient forward and backward recursion-based algorithms for this computation.


Referring now to the drawings, and more particularly to FIGS. 1 through 3, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.



FIG. 1 depicts an exemplary system for efficient Extended Kalman Filter (EKF) under feedforward approximation of a dynamical system, in accordance with an embodiment of the present disclosure. In the present disclosure, the expression ‘dynamical system’ refers to a mathematical model capturing time-varying input-output entities with memory capacity, in one example embodiment. In an embodiment, the system 100 includes one or more hardware processors 104, communication interface device(s) or input/output (I/O) interface(s) 106 (also referred as interface(s)), and one or more data storage devices or memory 102 operatively coupled to the one or more hardware processors 104. The one or more processors 104 may be one or more software processing components and/or hardware processors. In an embodiment, the hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is/are configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices (e.g., smartphones, tablet phones, mobile communication devices, and the like), workstations, mainframe computers, servers, a network cloud, and the like.


The I/O interface device(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server.


The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, a database 108 is comprised in the memory 102, wherein the database 108 comprises information related to the Feed-Forward Network (FFN), its associated network layers and their nodes. The database 108 further comprises partial derivatives estimated at each node of the various network layers of the FFN, and the like. The memory 102 further comprises (or may further comprise) information pertaining to input(s)/output(s) of each step performed by the systems and methods of the present disclosure. In other words, input(s) fed at each step and output(s) generated at each step are comprised in the memory 102 and can be utilized in further processing and analysis.



FIG. 2, with reference to FIG. 1, depicts an exemplary flow chart illustrating a method for efficient Extended Kalman Filter (EKF) under feedforward approximation of a dynamical system, using the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure. FIG. 3, with reference to FIGS. 1-2, depicts an exemplary diagram of a Feed-Forward Network as implemented by the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure. In an embodiment, the system(s) 100 comprises one or more data storage devices or the memory 102 operatively coupled to the one or more hardware processors 104 and is configured to store instructions for execution of steps of the method by the one or more processors 104. The steps of the method of the present disclosure will now be explained with reference to components of the system 100 of FIG. 1, the flow diagram as depicted in FIG. 2, and the block diagram depicted in FIG. 3.


In an embodiment, at step 202 of the present disclosure, the one or more hardware processors 104 obtain, a feed-forward network (FFN). The FFN comprises a plurality of network layers wherein each of the plurality of network layers comprises one or more nodes. The architecture of the FFN is depicted in FIG. 3 which also illustrates various nodes and network layers comprised therein. The feed-forward network (FFN) is used as at least one of a state equation and an observation equation in an Extended Kalman Filter (EKF) of a dynamical system (or a Non-Linear Dynamic System (NLDS)), in one embodiment of the present disclosure. The state and observation equations for a Non-Linear Dynamic System (NLDS) can be written as below:






X(k)=Fk(X(k−1))+w(k)  State equation






Y(k)=Gk(X(k),u(k))+v(k)  Observation equation
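As a rough illustration only (the function names, noise covariances, and placeholder maps below are assumptions for the sketch, not part of the disclosure), one simulation step of such an NLDS can be written as:

```python
import numpy as np

def nlds_step(x_prev, u, F, G, Q, R, rng):
    """One step of X(k) = F(X(k-1)) + w(k) and Y(k) = G(X(k), u(k)) + v(k)."""
    w = rng.multivariate_normal(np.zeros(Q.shape[0]), Q)   # process noise w(k)
    v = rng.multivariate_normal(np.zeros(R.shape[0]), R)   # observation noise v(k)
    x = F(x_prev) + w                                      # state equation
    y = G(x, u) + v                                        # observation equation
    return x, y

rng = np.random.default_rng(0)
F = lambda x: np.tanh(x)              # placeholder non-linear state map F_k
G = lambda x, u: x + 0.1 * u          # placeholder non-linear observation map G_k
x1, y1 = nlds_step(np.array([0.5]), 0.0, F, G, Q=1e-3 * np.eye(1), R=1e-2 * np.eye(1), rng=rng)
```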


In the present disclosure, the system 100 (or the one or more hardware processors 104) considers an L-layer feed-forward network (FFN) with a general activation function (S(·)) at each node. S(·) could be a sigmoid or tanh function, for instance. The input vector is denoted by x (of dimension Nin), whose qth component is denoted as xq. Note that the input vector, from a general NLDS perspective (e.g., the above equations), is the state vector X(k). The lth hidden layer has nl nodes. The weight connecting the ith node of layer l to the jth node of the next layer, namely l+1, is denoted by wijl. ω0jl is the bias term used to compute the input to the jth node of the (l+1)th layer. If an FFN approximation of vector-valued maps like Fk(·) and Gk(·) is used for a general NLDS, then the output layer has multiple outputs with a general activation. Therefore,









$\frac{\partial y_j^{L}}{\partial x_q}$ is computed by the system 100, ∀q=1, . . . , Nin (or n1), j=1, . . . , Nou (or nL). The closed form of computing the partial derivatives is not easily scalable with the number of hidden layers. Hence, the system and method of the present disclosure implement approaches that can be employed on feed-forward networks with an arbitrary number of hidden layers. The approaches include a forward recursion and a backward recursion, which are described and illustrated by way of examples below.
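A minimal sketch (assumed names and data layout, not the disclosure's implementation) of the forward pass through such an L-layer FFN, computing the net inputs ηjl and node outputs yjl used by the recursions that follow:

```python
import numpy as np

def ffn_forward(x, weights, biases, activations):
    """Forward pass through an L-layer FFN in the notation used above.

    weights[l-1] has shape (n_l, n_{l+1}) so that weights[l-1][i-1, j-1] = w_ij^l,
    biases[l-1][j-1] = omega_0j^l, and activations[l-1] is S(.) for layer l+1.
    Returns the per-layer net inputs eta_j^l and outputs y_j^l.
    """
    y = [np.asarray(x, dtype=float)]       # y_i^1(x) = x_i (linear input layer)
    eta = [np.asarray(x, dtype=float)]
    for W, b, S in zip(weights, biases, activations):
        net = y[-1] @ W + b                # eta_j^{l+1} = sum_i w_ij^l y_i^l + omega_0j^l
        eta.append(net)
        y.append(S(net))                   # y_j^{l+1} = S(eta_j^{l+1})
    return eta, y

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
d_sigmoid = lambda z: sigmoid(z) * (1.0 - sigmoid(z))
```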


In an embodiment, at step 204 of the present disclosure, the one or more hardware processors 104 recursively compute in a first direction, a partial derivative at each node of the plurality of network layers comprised in the FFN. The first direction is a forward direction, in one example embodiment of the present disclosure. More specifically, at step 204, the partial derivative at each node of the plurality of network layers comprised in the FFN is computed by performing a forward recursion. The forward recursion to compute the partial derivative at each node of the plurality of network layers comprised in the FFN comprises identifying a recursive relationship between outputs associated with two successive network layers from the plurality of network layers, wherein the recursive relationship is identified using the equation: yjl+1(x)=S(Σi=1nl wijl yil(x)+ω0jl), wherein yjl+1 is an output of jth node at layer l+1 from the plurality of network layers of the FFN, S is a continuously differentiable activation function, nl is number of nodes associated with a layer l from the plurality of network layers of the FFN, i is an iteration number, wijl is a weight between ith node of the layer l and jth node of the layer l+1, yil is an output of the ith node of the layer l, and ω0jl is a bias for jth node from the layer l; and applying a partial derivative function on the equation associated with the identified recursive relationship, with reference to a specific input xq, to obtain a set of partial derivatives, wherein the partial derivative function is applied on the equation associated with the identified recursive relationship, using the equation:











$$\frac{\partial y_j^{l+1}}{\partial x_q}=S'\!\left(\eta_j^{l+1}\right)\left(\sum_{i=1}^{n_l} w_{ij}^{l}\,\frac{\partial y_i^{l}}{\partial x_q}\right),$$




wherein ηjl+1=Σi=1nl wijl yil(x)+ω0jl serves as an input to the jth node at the layer l+1, and S′ is a derivative of the continuously differentiable activation function. In other words, the partial derivative function is applied on the equation associated with the identified recursive relationship, with reference to a specific input xq, to obtain a set of partial derivatives. In an embodiment, the continuously differentiable activation function is an activation function (the terms are used interchangeably herein) such as a sigmoid activation function, a tanh activation function, an exponential activation function, and the like. Such examples of continuously differentiable activation functions shall not be construed as limiting the scope of the present disclosure.


Algorithm: One forward pass through the FFN needs to be carried out first to compute ηjl, ∀l,j. The algorithm then implements the recursive computation of the above equation, which runs forward in l, the layer index. It not only needs the partial derivatives at the previous layer but also ηjl+1, the net input to the jth node of the (l+1)th layer. The above recursion starts at l=1. It is to be noted that n1=Nin and yi1(x)=xi. Hence ∇yi1(x)=ei, where ei is a vector with a 1 in the ith coordinate and 0's elsewhere. The recursion ends with the computation of all relevant partial derivatives of yjL (for j=1, . . . , nL).
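A minimal sketch of this forward recursion (reusing the hypothetical ffn_forward helper sketched earlier; names are assumptions, not the disclosure's implementation). It propagates all Nin columns of the Jacobian together, which is also the parallel form alluded to in the complexity discussion below:

```python
import numpy as np

def forward_recursion_jacobian(x, weights, biases, activations, d_activations):
    """Forward recursion for d y_j^L / d x_q over all outputs j and inputs q.

    J is an (n_l x N_in) matrix of gradients of the layer-l node outputs with
    respect to the network input x.  The recursion starts from J = I (since
    y_i^1(x) = x_i, so grad y_i^1 = e_i) and applies, layer by layer,
        d y^{l+1} / dx = diag(S'(eta^{l+1})) . (W^l)^T . (d y^l / dx).
    """
    eta, _ = ffn_forward(x, weights, biases, activations)   # one initial forward pass
    J = np.eye(len(x))                                      # grad y_i^1 = e_i
    for l, (W, dS) in enumerate(zip(weights, d_activations), start=1):
        J = dS(eta[l])[:, None] * (W.T @ J)  # S'(eta_j^{l+1}) * sum_i w_ij^l dy_i^l/dx_q
    return J                                 # shape (n_L, N_in)
```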


Complexity for general NLDS: The initial forward pass through the FFN is 𝒪(Nw). Each partial derivative computation at a node j of layer (l+1) needs nl multiplications, where nl is the number of nodes in layer l (the previous layer), which is also the number of weights incident on node j of layer (l+1). Hence, 𝒪(Nw) multiplications are needed to compute the partial derivative of all outputs with respect to one component of the input. Since this is carried out for each input component, the overall complexity of the above algorithm is 𝒪(NinNw). However, from the above equation it is clear that each of the Nin partial derivatives of all output variables can be computed in parallel. When Nin<<Nw, the complexity of a parallel implementation can be 𝒪(Nw). The forward recursion first computes the partial derivatives (p.d.) of the node outputs at layer 1 (or M) with respect to the input and then recursively computes the p.d. of the node outputs at layer l+1 with respect to the inputs in terms of the corresponding entities at layer l. This culminates in what is desired: the p.d. of the network outputs with respect to the network inputs. Using the above approach, a backward recursion computation is also defined and described herein.


Referring to FIG. 3, L=1, 2, and 3, wherein L and l refer to a layer of the FFN and may be interchangeably used herein. Let lin1 and lin2 be the linear nodes at layer 1 (L1). Similarly, let σ1 and σ2 be the sigmoid nodes at layer 2 (L2). For layer 3 (L3), the output node is σ1. Let η11 be the input to node 1 of layer 1 (L1) and η21 be the input to node 2 of layer 1 (L1). Similarly, η12 and η22 are the inputs to nodes 1 and 2 of layer 2 (L2). Let η13 be the input to node 1 of layer 3 (L3). In the same manner, y11 and y21 are the outputs at nodes 1 and 2 of layer 1 (L1), y12 and y22 are the outputs at nodes 1 and 2 of layer 2 (L2), and y13 is the output at node 1 of layer 3 (L3). w111 is the weight from node 1 of layer 1 (L1) to node 1 of layer 2 (L2), w121 is the weight from node 1 of layer 1 (L1) to node 2 of layer 2 (L2), w211 is the weight from node 2 of layer 1 (L1) to node 1 of layer 2 (L2), and w221 is the weight from node 2 of layer 1 (L1) to node 2 of layer 2 (L2). w112 is the weight from node 1 of layer 2 (L2) to node 1 of layer 3 (L3), and w212 is the weight from node 2 of layer 2 (L2) to node 1 of layer 3 (L3). It is to be noted from FIG. 3 that layer 1 (L1) has linear activation, and layer 2 (L2) and layer 3 (L3) have sigmoid activation. Let the weights assigned be as follows: w111=2, w211=1, w112=2, w121=3, w221=2, and w212=3.


Therefore,





η11=5 and y11=5; η21=6 and y21=6.

η12=y11*w111+y21*w211

η12=5*2+6*1=16

η22=y11*w121+y21*w221

η22=5*3+6*2=27

y12=σ(η12)=0.99

y22=σ(η22)=0.99

η13=y12*w112+y22*w212

η13=0.99*2+0.99*3=4.95

y13=σ(η13)=0.993


To perform forward recursion, it is known by the system 100 that









$$y_1^3=\sigma\!\left(\eta_1^3\right)=\sigma\!\left(\sum_{i=1}^{n_2} w_{i1}^{2}\, y_i^{2}\right).$$

Therefore,

$$\frac{\partial y_1^3}{\partial \eta_1^1}=\sigma'\!\left(\eta_1^3\right)\sum_{i=1}^{n_2} w_{i1}^{2}\,\frac{\partial y_i^2}{\partial \eta_1^1}=\sigma'\!\left(\eta_1^3\right)\left[w_{11}^{2}\,\frac{\partial y_1^2}{\partial \eta_1^1}+w_{21}^{2}\,\frac{\partial y_2^2}{\partial \eta_1^1}\right]$$

$$\frac{\partial y_1^2}{\partial \eta_1^1}=\sigma'\!\left(\eta_1^2\right)\left[w_{11}^{1}\,\frac{\partial y_1^1}{\partial \eta_1^1}+w_{21}^{1}\,\frac{\partial y_2^1}{\partial \eta_1^1}\right]$$

Since y11 is the linear activation of η11, ∂y11/∂η11=1, and since y21 is independent of η11, ∂y21/∂η11=0. Therefore,

$$\frac{\partial y_1^2}{\partial \eta_1^1}=0.99*0.01*2=0.0198$$

Similarly,

$$\frac{\partial y_2^2}{\partial \eta_1^1}=\sigma'\!\left(\eta_2^2\right)\left[w_{12}^{1}\,\frac{\partial y_1^1}{\partial \eta_1^1}+w_{22}^{1}\,\frac{\partial y_2^1}{\partial \eta_1^1}\right]=0.99*0.01*3=0.0297$$

Therefore, ∂y13/∂η11 is obtained as follows:

$$\frac{\partial y_1^3}{\partial \eta_1^1}=0.993*0.007*[2*0.0198+3*0.0297]=0.993*0.007*0.1287=0.00089$$

Similarly,

$$\frac{\partial y_1^3}{\partial \eta_2^1}=0.993*0.007*\left[w_{11}^{2}\,\frac{\partial y_1^2}{\partial \eta_2^1}+w_{21}^{2}\,\frac{\partial y_2^2}{\partial \eta_2^1}\right]$$

$$\frac{\partial y_1^2}{\partial \eta_2^1}=\sigma'\!\left(\eta_1^2\right)\left[w_{11}^{1}\,\frac{\partial y_1^1}{\partial \eta_2^1}+w_{21}^{1}\,\frac{\partial y_2^1}{\partial \eta_2^1}\right]$$

Since y21 is the linear activation of η21, ∂y21/∂η21=1, and since y11 is independent of η21, ∂y11/∂η21=0. Therefore,

$$\frac{\partial y_1^2}{\partial \eta_2^1}=0.99*0.01*1=0.0099$$

Similarly,

$$\frac{\partial y_2^2}{\partial \eta_2^1}=\sigma'\!\left(\eta_2^2\right)\left[w_{12}^{1}\,\frac{\partial y_1^1}{\partial \eta_2^1}+w_{22}^{1}\,\frac{\partial y_2^1}{\partial \eta_2^1}\right]=0.99*0.01*2=0.0198$$

Therefore, ∂y13/∂η21 is obtained as follows:

$$\frac{\partial y_1^3}{\partial \eta_2^1}=0.993*0.007*[2*0.0099+3*0.0198]=0.00054$$
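A hypothetical step-by-step check of the above hand computation (using the same rounded constants as the text, e.g., σ(16)≈σ(27)≈0.99 and σ′≈0.99*0.01 for the hidden nodes) can be written as:

```python
# Mirror of the hand computation above with the text's rounded constants.
s_p_eta12 = 0.99 * 0.01      # sigma'(eta_1^2), rounded as in the text
s_p_eta22 = 0.99 * 0.01      # sigma'(eta_2^2)
s_p_eta13 = 0.993 * 0.007    # sigma'(eta_1^3)

dy12_d11 = s_p_eta12 * (2 * 1 + 1 * 0)   # w_11^1*1 + w_21^1*0 -> 0.0198
dy22_d11 = s_p_eta22 * (3 * 1 + 2 * 0)   # w_12^1*1 + w_22^1*0 -> 0.0297
dy13_d11 = s_p_eta13 * (2 * dy12_d11 + 3 * dy22_d11)   # ~0.00089

dy12_d21 = s_p_eta12 * (2 * 0 + 1 * 1)   # -> 0.0099
dy22_d21 = s_p_eta22 * (3 * 0 + 2 * 1)   # -> 0.0198
dy13_d21 = s_p_eta13 * (2 * dy12_d21 + 3 * dy22_d21)   # ~0.00055
```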





Similarly, the one or more hardware processors 104 recursively compute, in a second direction via the one or more hardware processors, a partial derivative at each node among the one or more nodes of the plurality of network layers comprised in the FFN by using the equation:








$$\delta_j^{l}=\sum_{s=1}^{n_{l+1}}\frac{\partial y_r^{L}}{\partial \eta_s^{l+1}}\,\frac{\partial \eta_s^{l+1}}{\partial \eta_j^{l}}=\sum_{s=1}^{n_{l+1}}\delta_s^{l+1}\,w_{js}^{l}\,S'\!\left(\eta_j^{l}\right),$$

wherein yrL is one or more rth outputs of the FFN, δjl is the partial derivative of the one or more rth outputs (yrL) of the FFN with reference to ηjl, indicative of $\frac{\partial y_r^{L}}{\partial \eta_j^{l}}$, and ηsl+1 is the input to the sth node at the layer l+1. ∂ηsl+1 and ∂ηjl are varied from one or more inputs at a final layer L (ηiL) to one or more corresponding inputs at a first layer M (ηiM) of the FFN in the second direction. In an embodiment, an activation function at an input layer of the plurality of network layers comprised in the FFN is linear or non-linear in nature.


Algorithm: One forward pass through the FFN needs to be carried out first to compute ηjl, ∀l,j. The algorithm then implements the recursive computation of the above equation, which runs backward in l, the layer index. It starts at the last layer L. By definition,








$$\delta_j^{L}=\frac{\partial y_r^{L}}{\partial \eta_j^{L}}.$$

Since yrL=S(ηrL), δjL=S′(ηrL)
is obtained for j=r. While for j≠r, δjL=0. Starting with this δjL, δjl is recursively computed for all network layers. The derivative of the output yrL is needed with respect to the inputs xj, which is nothing but







$$\delta_j^{1}=\frac{\partial y_r^{L}}{\partial x_j}=\frac{\partial y_r^{L}}{\partial \eta_j^{1}}.$$






It is to be noted that the activation function at the input layer is linear, which means S′(xi)=S′(ηi1)=1 during the last step of the above recursive computation specified in the above equation, specifically for l=1.
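A minimal sketch of this backward recursion (reusing the hypothetical ffn_forward helper from the earlier sketch; names, data layout, and the linear input layer are assumptions consistent with the text, not the disclosure's implementation):

```python
import numpy as np

def backward_recursion_jacobian(x, weights, biases, activations, d_activations):
    """Backward recursion for d y_r^L / d x_j, one output r at a time.

    delta holds d y_r^L / d eta_j^l for the current layer l.  The recursion
    starts at the output layer with delta_r^L = S'(eta_r^L) (and 0 for j != r),
    applies delta_j^l = sum_s delta_s^{l+1} w_js^l S'(eta_j^l) down the layers,
    and ends at the linear input layer where S'(eta_j^1) = 1, giving
    delta_j^1 = d y_r^L / d x_j.
    """
    eta, _ = ffn_forward(x, weights, biases, activations)   # one initial forward pass
    n_out = weights[-1].shape[1]
    rows = []
    for r in range(n_out):
        delta = np.zeros(n_out)
        delta[r] = d_activations[-1](eta[-1])[r]        # delta_r^L = S'(eta_r^L)
        for l in range(len(weights) - 1, 0, -1):        # hidden layers, backward
            delta = (weights[l] @ delta) * d_activations[l - 1](eta[l])
        rows.append(weights[0] @ delta)                 # linear input layer: S' = 1
    return np.vstack(rows)                              # shape (N_out, N_in)
```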


Complexity: The partial derivative computation of one of the network outputs involves one forward pass and one backward pass implementing the recursion of the above-mentioned equation. Both the forward and backward passes are an 𝒪(Nw) computation, where Nw is the number of weights across all network layers. But, since nL=Nou output variables are available with the system 100, Nou backward passes are necessary. Hence, the overall complexity of a serial implementation is 𝒪(NouNw). But from the above equation, each of these backward passes can be carried out in parallel. A parallel implementation where Nou<<Nw would incur 𝒪(Nw) time. If Nin=Nou, then it is observed that both the forward and backward recursions have the same complexity in both a serial and a parallel implementation.


In the present disclosure, the second direction is a backward direction. Therefore, the one or more hardware processors 104 perform backward recursion on the FFN to obtain a set of partial derivatives using the above equation (refer to the second direction above). In backward recursion, the system 100 needs to find δ11 and δ21. It is also known by the system 100 that







$$\delta_1^3=\frac{\partial y_1^3}{\partial \eta_1^3}\qquad\text{and}\qquad y_1^3=\sigma\!\left(\eta_1^3\right).$$






Therefore, δ13=σ(η13)*(1−σ(η13))=0.993*0.007=0.00693.







$$\delta_1^2=\sum_{s=1}^{n_{l+1}}\delta_s^{3}\,w_{1s}^{2}\,\sigma'\!\left(\eta_1^2\right)$$

Since nl+1=n3=1,

δ12=δ13*w112*σ′(η12)

δ12=0.00693*2*σ(η12)*(1−σ(η12))

δ12=0.00693*2*0.99*0.01

δ12=0.0001372





Similarly,





δ22=δ13*w212*σ′(η22)

δ22=0.00693*3*σ(η22)*(1−σ(η22))

δ22=0.00693*3*0.99*0.01

δ22=0.0002058





So,





$$\delta_1^1=\sum_{s=1}^{n_{2}}\delta_s^{2}\,w_{1s}^{1}\,S'\!\left(\eta_1^1\right)$$


Since the activation function at layer 1 is linear (for layer 1 all nodes have linear activation), it is known that S′(η11)=1, where S(η11)=y11 is the linear activation function.


Therefore, δ11=δ12*w111*S′(η11)+δ22*w121*S′(η11). This is because nl+1=2, since there are 2 nodes at layer 2 (L2).


So,





δ11=0.0001372*2*1+0.0002058*3*1

δ11=0.0002744+0.0006174

δ11=0.0008918





Similarly,





δ21=δ12*w211*S′(η21)+δ22*w221*S′(η21)





δ21=0.0001372*1*1+0.0002058*2*1





δ21=0.0001372+0.0004116





δ21=0.0005488
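A hypothetical mirror of this backward pass with the same rounded constants confirms that it matches the forward-recursion values computed earlier (δ11≈0.00089 and δ21≈0.00055):

```python
# Backward recursion over the FIG. 3 example with the text's rounded constants.
delta_13 = 0.993 * 0.007                  # ~0.00693 in the text
delta_12 = delta_13 * 2 * (0.99 * 0.01)   # delta_1^3 * w_11^2 * sigma'(eta_1^2)
delta_22 = delta_13 * 3 * (0.99 * 0.01)   # delta_1^3 * w_21^2 * sigma'(eta_2^2)
delta_11 = delta_12 * 2 + delta_22 * 3    # ~0.00089 (w_11^1, w_12^1; S'(eta_1^1)=1)
delta_21 = delta_12 * 1 + delta_22 * 2    # ~0.00055 (w_21^1, w_22^1; S'(eta_2^1)=1)
```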


As mentioned earlier, the EKF is employed across domains like signal processing (tracking applications), robotics (estimating the position of a robot), transportation, etc. For instance, the system and method of the present disclosure may be implemented in signal processing (tracking applications), robotics (estimating the position of a robot), transportation such as vehicle arrival time prediction, and the like. Such applications or examples as described above shall not be construed as limiting the scope of the present disclosure. The Extended Kalman Filter involves obtaining the partial derivatives of the non-linear function with respect to the input. When the system maps are approximated using feed-forward approximators, the EKF implementation can be carried out exactly, elegantly, and efficiently; the present disclosure implements forward and backward recursion-based algorithms to achieve this, wherein the system maps for the EKF are approximated using feed-forward approximators. This allows the partial derivatives with respect to the input to be calculated for the EKF with ease using two methods based on forward and backward recursion.
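As a rough illustration of how these Jacobians plug into the EKF (a sketch under the assumption that both the state map Fk and the observation map Gk are replaced by FFNs and that the helpers sketched earlier are available; it is not the disclosure's implementation, and the known input u(k) is omitted for brevity):

```python
import numpy as np

def ekf_step(x_est, P, y_obs, f_net, g_net, Q, R):
    """One EKF predict/update step with FFN-approximated system maps.

    f_net and g_net are (weights, biases, activations, d_activations) tuples
    for the state-equation and observation-equation FFNs; their Jacobians are
    obtained from the forward- (or backward-) recursion sketches above.
    """
    # Predict: x_pred = F(x_est); F_jac is the FFN Jacobian at x_est.
    _, yf = ffn_forward(x_est, *f_net[:3])
    F_jac = forward_recursion_jacobian(x_est, *f_net)
    x_pred = yf[-1]
    P_pred = F_jac @ P @ F_jac.T + Q

    # Update: linearize G about x_pred and apply the linear KF correction.
    _, yg = ffn_forward(x_pred, *g_net[:3])
    G_jac = forward_recursion_jacobian(x_pred, *g_net)
    S_innov = G_jac @ P_pred @ G_jac.T + R
    K = P_pred @ G_jac.T @ np.linalg.inv(S_innov)        # Kalman gain
    x_new = x_pred + K @ (y_obs - yg[-1])
    P_new = (np.eye(len(x_new)) - K @ G_jac) @ P_pred
    return x_new, P_new
```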


The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.


It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.


The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.


The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.


Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.


It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.

Claims
  • 1. A processor implemented method, comprising: obtaining, a feed-forward network (FFN) via one or more hardware processors, wherein the FFN comprises a plurality of network layers, wherein each of the plurality of network layers comprises one or more nodes; andrecursively computing in a first direction, via the one or more hardware processors, a partial derivative at each node of the plurality of network layers comprised in the FFN by: identifying, via the one or more hardware processors, a recursive relationship between outputs associated with two successive network layers from the plurality of layers, wherein the recursive relationship is identified using the equation: yjl+1(x)=S(Σi=1nlwijlyil(x)+ω0jl),wherein yjl+1 is an output of jth node at layer l+1 from the plurality of network layers of the FFN, S is a continuously differentiable activation function, nl is number of nodes associated with a layer l from the plurality of network layers of the FFN, i is an iteration number, wijl is a weight between ith node of the layer l and jth node of the layer l+1, yil is an output of the ith node of the layer l, and ω0jl is a bias for jth node from the layer l; and applying a partial derivative function on the equation identifying the recursive relationship, with reference to a specific input xq, to obtain a set of partial derivatives, wherein the partial derivative function is applied on the equation identifying the recursive relationship, using the equation:
  • 2. The processor implemented method of claim 1, further comprising recursively computing, in a second direction via the one or more hardware processors, a partial derivative at each node among the one or more nodes of the plurality of network layers comprised in the FFN by using the equation:
  • 3. The processor implemented method of claim 2, wherein the steps of recursively computing, in the first direction and recursively computing, in the second direction, are preceded by executing a forward pass through the FFN to compute ηjl, and wherein ηjl serves as an input to the jth node at the layer l from the plurality of network layers of the FFN.
  • 4. The processor implemented method of claim 2, wherein the first direction and the second direction are different from each other.
  • 5. The processor implemented method of claim 2, wherein ∂ηsl+1 and ∂ηjl are varied from one or more inputs at a final layer L (ηiL) to one or more corresponding inputs at a first layer M (ηiM) of the FFN in the second direction.
  • 6. The processor implemented method of claim 2, wherein an activation function at an input layer of the plurality of network layers comprised in the FFN is linear or non-linear.
  • 7. The processor implemented method of claim 1, wherein the feed-forward network (FFN) is used as at least one of a state equation and an observation equation in an Extended Kalman Filter (EKF).
  • 8. A system, comprising: a memory storing instructions;one or more communication interfaces; andone or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to:obtain, a feed-forward network (FFN), wherein the FFN comprises a plurality of network layers, wherein each of the plurality of network layers comprises one or more nodes; andrecursively compute in a first direction, via the one or more hardware processors, a partial derivative at each node of the plurality of network layers comprised in the FFN by: identifying a recursive relationship between outputs associated with two successive network layers from the plurality of network layers, wherein the recursive relationship is identified using the equation: yjl+1(x)=S(Σi=1nlwijlyil(x)+ω0jl),wherein yjl+1 is an output of jth node at layer l+1 from the plurality of network layers of the FFN, S is a continuously differentiable activation function, nl is number of nodes associated with a layer l from the plurality of network layers of the FFN, i is an iteration number, wijl is a weight between ith node of the layer l and jth node of the layer l+1, yil is an output of the ith node of the layer l, and ω0jl is a bias for jth node from the layer l; andapplying a partial derivative function on the equation identifying the recursive relationship, with reference to a specific input xq, to obtain a set of partial derivatives, wherein the partial derivative function is applied on the equation identifying the recursive relationship, using the equation:
  • 9. The system of claim 8, wherein the one or more hardware processors are further configured by the instructions to recursively compute, in a second direction via the one or more hardware processors, a partial derivative at each node among the one or more nodes of the plurality of network layers comprised in the FFN by using the equation:
  • 10. The system of claim 9, wherein prior to recursively computing, in the first direction and recursively computing, in the second direction, the one or more hardware processors are further configured by the instructions to execute a forward pass through the FFN to compute ηjl, and wherein ηjl serves as an input to the jth node at the layer l from the plurality of network layers of the FFN.
  • 11. The system of claim 9, wherein the first direction and the second direction are different from each other.
  • 12. The system of claim 9, wherein ∂ηsl+1 and ∂ηjl are varied from one or more inputs at a final layer L (ηiL) to one or more corresponding inputs at a first layer M (ηiM) of the FFN in the second direction.
  • 13. The system of claim 9, wherein an activation function at an input layer of the plurality of network layers comprised in the FFN is linear or non-linear.
  • 14. The system of claim 8, wherein the feed-forward network (FFN) is used as at least one of a state equation and an observation equation in an Extended Kalman Filter (EKF).
  • 15. One or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause: obtaining, a feed-forward network (FFN) wherein the FFN comprises a plurality of network layers, wherein each of the plurality of network layers comprises one or more nodes; andrecursively computing in a first direction, via the one or more hardware processors, a partial derivative at each node of the plurality of network layers comprised in the FFN by: identifying, via the one or more hardware processors, a recursive relationship between outputs associated with two successive network layers from the plurality of layers, wherein the recursive relationship is identified using the equation: yjl+1(x)=S(Σi=1nlwijlyil(x)+ω0jl),wherein yjl+1 is an output of jth node at layer l+1 from the plurality of network layers of the FFN, S is a continuously differentiable activation function, nl is number of nodes associated with a layer l from the plurality of network layers of the FFN, i is an iteration number, wijl is a weight between ith node of the layer l and jth node of the layer l+1, yil is an output of the ith node of the layer l, and ω0jl is a bias for jth node from the layer l; and applying a partial derivative function on the equation identifying the recursive relationship, with reference to a specific input xq, to obtain a set of partial derivatives, wherein the partial derivative function is applied on the equation identifying the recursive relationship, using the equation:
  • 16. The one or more non-transitory machine-readable information storage mediums of claim 15, wherein the one or more instructions which when executed by the one or more hardware processors further cause recursively computing, in a second direction via the one or more hardware processors, a partial derivative at each node among the one or more nodes of the plurality of network layers comprised in the FFN by using the equation:
  • 17. The one or more non-transitory machine-readable information storage mediums of claim 16, wherein the steps of recursively computing, in the first direction and recursively computing, in the second direction, are preceded by executing a forward pass through the FFN to compute ηjl, and wherein ηjl serves as an input to the jth node at the layer l from the plurality of network layers of the FFN.
  • 18. The one or more non-transitory machine-readable information storage mediums of claim 16, wherein the first direction and the second direction are different from each other.
  • 19. The one or more non-transitory machine-readable information storage mediums of claim 16, wherein ∂ηsl+1 and ∂ηjl are varied from one or more inputs at a final layer L (ηiL) to one or more corresponding inputs at a first layer M (ηiM) of the FFN in the second direction.
  • 20. The one or more non-transitory machine-readable information storage mediums of claim 16, wherein an activation function at an input layer of the plurality of network layers comprised in the FFN is linear or non-linear, and wherein the feed-forward network (FFN) is used as at least one of a state equation and an observation equation in an Extended Kalman Filter (EKF).
Priority Claims (1)
Number Date Country Kind
202221013456 Mar 2022 IN national