System identification method and program, storage medium, and system identification device

Information

  • Patent Grant
  • 8380466
  • Patent Number
    8,380,466
  • Date Filed
    Thursday, April 12, 2007
  • Date Issued
    Tuesday, February 19, 2013
Abstract
A large-scale sound system or communication system is numerically and stably identified. When an input signal is represented by the M(≦N)-th order AR model, high-speed H∞ filtering can be performed with a computational complexity of 3N+O(M). A processing section determines the initial state of a recursive equation (S201), sets CUk according to an input uk (S205), determines a variable recursively (S207), updates a matrix GkN, calculates an auxiliary gain matrix KUkN (S209), divides it (S211), calculates a variable DkM and a backward prediction error ηM, k (S213), calculates a gain matrix Kk (S215), and updates a filter equation of a high-speed H∞ filter (S217). To reduce the computational complexity, Kk(:, 1)/(1+γf−2 Hk Kk(:, 1)) is directly used as the filter gain Ks, k.
Description
TECHNICAL FIELD

The present invention relates to a system identification method and program, a storage medium, and a system identification device, and particularly to a numerically stabilized fast identification method. Besides, the present invention relates to various system identification methods such as fast identification of a large-scale sound system or communication system using characteristics of a sound as an input signal.


BACKGROUND ART

System identification means estimating a mathematical model (transfer function, impulse response, etc.) of the input/output relation of a system based on input/output data, and typical application examples include an echo canceller in international communication, an automatic equalizer in data communication, an echo canceller and sound field reproduction in a sound system, active noise control in a vehicle, and the like. Hitherto, as an adaptive algorithm in system identification, the LMS (Least Mean Square), RLS (Recursive Least Squares), or Kalman filter has been widely used. In general, an observed value of the output of a system is expressed as follows:









[Mathematical Expression 1]
y_k = \sum_{i=0}^{N-1} h_i u_{k-i} + v_k   (1)

Where, uk denotes an input, hi denotes an impulse response of the system, and vk is assumed to be a white noise.


The details are described in non-patent document 1 or the like.


1. LMS


In the LMS, an impulse response xk=[h0, . . . , hN−1]T of a system is estimated from an input uk and an output yk as follows:

[Mathematical Expression 2]
{circumflex over (x)}k={circumflex over (x)}k−1+μHkT(yk−Hk{circumflex over (x)}k−1)  (2)

Where, Hk=[uk, . . . , uk−N+1]T, μ>0.
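Incidentally, a minimal NumPy sketch of the LMS recursion of expression (2) is shown below; the step size μ, the zero padding uk=0 for k<0, and the function name are illustrative assumptions and not part of the above description.

```python
import numpy as np

def lms_identify(u, y, N, mu=0.01):
    """Estimate an N-tap impulse response x = [h0, ..., h_{N-1}] from input u
    and observed output y with the LMS recursion of expression (2)."""
    x_hat = np.zeros(N)
    u_pad = np.concatenate([np.zeros(N - 1), np.asarray(u, float)])  # u_k = 0 for k < 0
    for k in range(len(y)):
        H_k = u_pad[k:k + N][::-1]                 # H_k = [u_k, ..., u_{k-N+1}]
        x_hat = x_hat + mu * H_k * (y[k] - H_k @ x_hat)
    return x_hat
```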


2. RLS


In the RLS, an impulse response xk=[h0, . . . , hN−1]T of a system is estimated from an input uk and an output yk as follows:









[Mathematical Expression 3]
\hat{x}_k = \hat{x}_{k-1} + K_k (y_k - H_k \hat{x}_{k-1})   (3)
K_k = \frac{P_{k-1} H_k^T}{\rho + H_k P_{k-1} H_k^T}   (4)
P_k = (P_{k-1} - K_k H_k P_{k-1}) / \rho   (5)


Where, x^0=0, P0=ε0I, ε0>0, 0 denotes a zero vector, I denotes a unit matrix, Kk denotes a filter gain, and ρ denotes a forgetting factor. (Incidentally, “^”, “v” mean an estimated value and should be placed directly above a character as represented by the mathematical expressions. However, they are placed at the upper right of the character for input convenience. The same applies hereinafter.)
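A corresponding NumPy sketch of the RLS recursions (3) to (5) is shown below; the forgetting factor ρ and the initial value P0=ε0I are free parameters, and the concrete numbers used here are illustrative assumptions.

```python
import numpy as np

def rls_identify(u, y, N, rho=0.99, eps0=10.0):
    """Estimate an N-tap impulse response by the RLS recursions of expressions (3)-(5).
    rho is the forgetting factor and eps0 sets the initial covariance P_0 = eps0 * I."""
    x_hat = np.zeros(N)
    P = eps0 * np.eye(N)
    u_pad = np.concatenate([np.zeros(N - 1), np.asarray(u, float)])
    for k in range(len(y)):
        H = u_pad[k:k + N][::-1]                    # H_k = [u_k, ..., u_{k-N+1}]
        K = P @ H / (rho + H @ P @ H)               # expression (4)
        x_hat = x_hat + K * (y[k] - H @ x_hat)      # expression (3)
        P = (P - np.outer(K, H) @ P) / rho          # expression (5)
    return x_hat
```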


3. Kalman Filter


A minimum variance estimate x^k|k of a state xk of a linear system expressed in a state space model as indicated by









[Mathematical Expression 4]
x_{k+1} = \rho^{-1/2} x_k, \quad y_k = H_k x_k + v_k   (6)

is obtained by a following Kalman filter.









[Mathematical Expression 5]
\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k (y_k - H_k \hat{x}_{k|k-1}), \quad \hat{x}_{k+1|k} = \rho^{-1/2} \hat{x}_{k|k}   (7)
K_k = \hat{\Sigma}_{k|k-1} H_k^T (\rho + H_k \hat{\Sigma}_{k|k-1} H_k^T)^{-1}, \quad \hat{\Sigma}_{k|k} = \hat{\Sigma}_{k|k-1} - K_k H_k \hat{\Sigma}_{k|k-1}   (8)
\hat{\Sigma}_{k+1|k} = \hat{\Sigma}_{k|k} / \rho   (9)

Where,

[Mathematical Expression 6]
{circumflex over (x)}1|0=0, {circumflex over (Σ)}1|0=ε0I, ε0>0  (10)

xk: a state vector or simply a state; unknown, and this is an object of estimation.


yk: an observation signal; an input of the filter and known.


Hk: an observation matrix; known.


vk: an observation noise; unknown.


ρ: a forgetting factor; generally determined by trial and error.


Kk: a filter gain; obtained from a matrix Σ^k|k−1.


Σ^k|k: corresponding to a covariance matrix of an error of x^k|k; obtained by a Riccati equation.


Σ^k+1|k: corresponding to a covariance matrix of an error of


x^k+1|k; obtained by a Riccati equation.


Σ^1|0: corresponding to a covariance matrix of an initial state; although unknown, ε0I is used for convenience.
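A compact NumPy sketch of the Kalman filter of expressions (7) to (9) for the state space model (6) is shown below; the values of ρ and ε0 are illustrative assumptions.

```python
import numpy as np

def kalman_identify(u, y, N, rho=0.99, eps0=10.0):
    """Kalman filter of expressions (7)-(9) for model (6):
    x_{k+1} = rho**(-1/2) x_k,  y_k = H_k x_k + v_k.
    eps0 gives the initial covariance Sigma_{1|0} = eps0 * I (expression (10))."""
    x_pred = np.zeros(N)                   # x^_{k|k-1}
    Sigma_pred = eps0 * np.eye(N)          # Sigma^_{k|k-1}
    x_filt = x_pred
    u_pad = np.concatenate([np.zeros(N - 1), np.asarray(u, float)])
    for k in range(len(y)):
        H = u_pad[k:k + N][::-1]
        K = Sigma_pred @ H / (rho + H @ Sigma_pred @ H)        # gain, expression (8)
        x_filt = x_pred + K * (y[k] - H @ x_pred)              # filtering, expression (7)
        Sigma_filt = Sigma_pred - np.outer(K, H) @ Sigma_pred  # expression (8)
        x_pred = rho ** -0.5 * x_filt                          # prediction, expression (7)
        Sigma_pred = Sigma_filt / rho                          # expression (9)
    return x_filt
```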


In addition, hitherto, there are techniques described in patent documents 1 and 2 and non-patent documents 2 to 5.

  • Patent document 1: WO 02/035727, JP-A-2002-135171
  • Patent document 2: WO 2005/015737
  • Non-patent document 1: S. Haykin, Adaptive Filter Theory, 3rd Edition, Prentice-Hall, 1996
  • Non-patent document 2: K. Nishiyama, Derivation of a Fast Algorithm of Modified H∞ Filters, Proceedings of IEEE International Conference on Industrial Electronics, Control and Instrumentation, RBC-II, pp. 462-467, 2000
  • Non-patent document 3: K. Nishiyama, An H∞ Optimization and Its Algorithm for Time-Variant System Identification, IEEE Transactions on Signal Processing, 52, 5, pp. 1335-1342, 2004
  • Non-patent document 4: B. Hassibi, A. H. Sayed, and T. Kailath, Indefinite-Quadratic Estimation and Control, 1st Editions, SIAM, 1999
  • Non-patent document 5: G. Glentis, K. Berberidis, and S. Theodoridis, Efficient least squares adaptive algorithms for FIR transversal filtering, IEEE Signal Processing Magazine, 16, 4, pp. 13-41, 1999


DISCLOSURE OF THE INVENTION
Problems that the Invention is to Solve

At present, an adaptive algorithm most widely used in the system identification is the LMS. The LMS has a problem that although the amount of calculation is small, convergence speed is very low. On the other hand, in the RLS or the Kalman filter, the value of the forgetting factor ρ which dominates the tracking performance must be determined by trial and error, and this is generally a very difficult work. Further, there is no means for determining whether the determined value of the forgetting factor is really an optimum value.


Besides, although Σ^k|k−1 or Pk is originally a positive-definite matrix, in the case where calculation is performed at single precision (example: 32 bit), it often becomes negative definite, and this is one of factors to make the Kalman filter or the RLS numerically unstable. Besides, since the amount of calculation is O(N2), when the dimension (tap number) of a state vector xk is large, the number of times of arithmetic operation per step is rapidly increased, and there has been a case where it is not suitable for a real-time processing.


In view of the above, the present invention has an object to provide an identification method for identifying a large-scale sound system or communication system at high speed and numerically stably. Besides, the invention has an object to derive an algorithm that greatly reduces the amount of calculation of a previously proposed fast H∞ filter by using characteristics of a sound as an input signal. Further, the invention has an object to provide a method of numerically stabilizing a fast H∞ filter by using a backward prediction error.


Means for Solving the Problems

According to one aspect, there is provided a system identification device, for a communication system or a sound system, for performing real-time identification of a time invariable or time variable system, comprising:


a filter, including a processing section, that is robust against a disturbance by determining that a maximum energy gain to a filter error from the disturbance, as an evaluation criterion, is restricted to be smaller than a predetermined upper limit γf2,


wherein,


the filter satisfies an H∞ evaluation criterion as indicated by following expression (14) with respect to a state space model as indicated by following expressions (11) to (13),


when an input signal is expressed by an M(≦N)-th order autoregressive model (AR model), the filter is given by following expressions (38) to (44), and


the filter satisfies a scalar existence condition of following expressions (45) and (46):











x_{k+1} = x_k + G_k w_k, \quad w_k, x_k \in \mathbb{R}^N   (11)
y_k = H_k x_k + v_k, \quad y_k, v_k \in \mathbb{R}   (12)
z_k = H_k x_k, \quad z_k \in \mathbb{R}, \; H_k \in \mathbb{R}^{1 \times N}   (13)
\sup_{x_0, \{w_i\}, \{v_i\}} \frac{\sum_{i=0}^{k} |e_{f,i}|^2 / \rho}{\|x_0 - \check{x}_0\|^2_{\Sigma_0^{-1}} + \sum_{i=0}^{k} \|w_i\|^2 + \sum_{i=0}^{k} |v_i|^2 / \rho} < \gamma_f^2   (14)
\hat{x}_{k|k} = \hat{x}_{k-1|k-1} + K_{s,k} (y_k - H_k \hat{x}_{k-1|k-1}), \quad K_{s,k} = \frac{K_k(:,1)}{1 + \gamma_f^{-2} H_k K_k(:,1)} \in \mathbb{R}^{N \times 1}   (38)
K_k = m_k - D_k \mu_k, \quad D_k = \begin{bmatrix} 0_{N-M} \\ D_k^M \end{bmatrix}, \quad D_k^M = \frac{D_{k-1}^M - m_k(N-M+1:N, :) W \eta_{M,k}}{1 - \mu_k W \eta_{M,k}}   (39)
\eta_{M,k} = c_{k-N} + \bar{C}_k^M D_{k-1}^M   (40)
\begin{bmatrix} m_k \\ \mu_k \end{bmatrix} = \check{K}_k^N, \quad m_k \in \mathbb{R}^{N \times 2}, \; \mu_k \in \mathbb{R}^{1 \times 2}   (41)
\check{K}_k^N = \begin{bmatrix} 0_{(N-M+1) \times 2} \\ \check{K}_{k-N+M-1}^{M-1} \end{bmatrix} + G_k^N, \quad \check{K}_{k-N+M-1}^{M-1} = K_{k-N+M-1}(1:M, :)   (42)
S_{M,k} = \rho S_{M,k-1} + e_{M,k}^T W \tilde{e}_{M,k}, \quad e_{M,k} = c_k + C_{k-1}^M A_k^M
A_k^M = A_{k-1}^M - K_{k-1}(1:M, :) W \tilde{e}_{M,k}, \quad \tilde{e}_{M,k} = c_k + C_{k-1}^M A_{k-1}^M   (43)

GkN is updatable as follows:










\begin{bmatrix} G_k^N \\ 0_{1 \times 2} \end{bmatrix} = \begin{bmatrix} 0_{1 \times 2} \\ G_{k-1}^N \end{bmatrix} + \begin{bmatrix} e_{M,k}^T / S_{M,k} \\ A_k^M e_{M,k}^T / S_{M,k} \\ 0_{(N-M+1) \times 2} \end{bmatrix} - \begin{bmatrix} 0_{(N-M+1) \times 2} \\ e_{M,k-N+M-1}^T / S_{M,k-N+M-1} \\ A_{k-N+M-1}^M e_{M,k-N+M-1}^T / S_{M,k-N+M-1} \end{bmatrix}
where \bar{C}_k^M = C_k(:, N-M+1:N), \; C_{k-1}^M = C_{k-1}(:, 1:M), \; C_k = \begin{bmatrix} H_k \\ H_k \end{bmatrix}, \; W = \begin{bmatrix} 1 & 0 \\ 0 & -\gamma_f^{-2} \end{bmatrix}   (44)

ef, i=zvi|i−Hixi, zvi|i=Hix^i|i

(ckεR2×1 is a first column vector of Ck=[ck, . . . , ck−N+1], ck−i=02×1 for k−i<0 is assumed, and initial values are set to be K0=0N×2, G0N=0(N+1)×2, A0M=0M×1, SM, 0=1/ε0, D0M=0M×1, x^0|0=xv0=0N×1; 0m×n denotes an m×n zero matrix)













-\varrho \hat{\Xi}_i + \rho \gamma_f^2 > 0, \quad i = 0, \ldots, k   (45)

ϱ, {circumflex over (Ξ)}i are respectively defined by the following expressions:










\varrho = 1 - \gamma_f^2, \quad \hat{\Xi}_i = \frac{\rho H_i K_{s,i}}{1 - H_i K_{s,i}} = \frac{\rho H_i K_i(:,1)}{1 - (1 - \gamma_f^{-2}) H_i K_i(:,1)}   (46)

where, for the communication system or the sound system,


xk is defined as an unknown estimated state vector or a state


x0 is defined as an unknown initial state


wk is defined as an unknown system noise,


vk is defined as an unknown observation noise,


yk is defined as an observation signal and a known input of the filter,


zk is defined as an unknown output signal,


Gk is defined as a drive matrix and becomes known at execution,


Hk is defined as a known observation matrix,


x^k|k is defined as an estimated value of the state xk at a time k using observation signals y0−yk and is given by a filter equation,


Ks, k is defined as a filter gain and is obtained from a gain matrix Kk,

ρ is defined as a forgetting factor and when γf is determined, is automatically determined by ρ=1−χ(γf),


Σ0−1 is defined as a known inverse matrix of a weighting matrix expressing uncertainty of a state,


N is defined as a predetermined dimension (tap number) of a state vector,


M is defined as a predetermined order of an AR model,


μk is defined as N+1th row vector of KUkN; obtained from KUkN,


mk is defined as N×2 matrix including a first to an N-th row of KUkN; obtained from KUkN,


γf is defined as an attenuation level and is given at design time,


DkM is defined as a backward prediction coefficient vector and is obtained from mk, ηM, k and μk,


W is defined as a weighting matrix and is determined from γf,


AkM is defined as a forward prediction coefficient vector and is obtained from Kk−1 and e{tilde over ( )}M, k,


Ck−1M is defined as a 2×M matrix including a 1st to an M-th column vector of Ck−1 and is determined from the observation matrix Hk−1,


C̄kM is defined as a 2×M matrix including an N−M+1-th to an N-th column vector of Ck and is determined from the observation matrix Hk,


KUkN is defined as the auxiliary gain matrix, and


ηM, k is defined as a backward prediction error.


According to another aspect, there is provided a method of system identification using a system identification device for a communication system or a sound system, for performing real-time identification of a time invariable or time variable system, the method comprising:


providing characteristics of sound as an input signal for a system identification device;


providing the system identification device comprising a filter, including a processing section, that is robust against a disturbance by determining that a maximum energy gain to a filter error from the disturbance, as an evaluation criterion, is restricted to be smaller than a predetermined upper limit γf2,


wherein,


the filter satisfies an H∞ evaluation criterion as indicated by following expression (14) with respect to a state space model as indicated by following expressions (11) to (13),


when an input signal is expressed by an M(≦N)-th order autoregressive model (AR model), the filter is given by following expressions (38) to (44), and


the filter satisfies a scalar existence condition of following expressions (45) and (46):











x_{k+1} = x_k + G_k w_k, \quad w_k, x_k \in \mathbb{R}^N   (11)
y_k = H_k x_k + v_k, \quad y_k, v_k \in \mathbb{R}   (12)
z_k = H_k x_k, \quad z_k \in \mathbb{R}, \; H_k \in \mathbb{R}^{1 \times N}   (13)
\sup_{x_0, \{w_i\}, \{v_i\}} \frac{\sum_{i=0}^{k} |e_{f,i}|^2 / \rho}{\|x_0 - \check{x}_0\|^2_{\Sigma_0^{-1}} + \sum_{i=0}^{k} \|w_i\|^2 + \sum_{i=0}^{k} |v_i|^2 / \rho} < \gamma_f^2   (14)
\hat{x}_{k|k} = \hat{x}_{k-1|k-1} + K_{s,k} (y_k - H_k \hat{x}_{k-1|k-1}), \quad K_{s,k} = \frac{K_k(:,1)}{1 + \gamma_f^{-2} H_k K_k(:,1)} \in \mathbb{R}^{N \times 1}   (38)
K_k = m_k - D_k \mu_k, \quad D_k = \begin{bmatrix} 0_{N-M} \\ D_k^M \end{bmatrix}, \quad D_k^M = \frac{D_{k-1}^M - m_k(N-M+1:N, :) W \eta_{M,k}}{1 - \mu_k W \eta_{M,k}}   (39)
\eta_{M,k} = c_{k-N} + \bar{C}_k^M D_{k-1}^M   (40)
\begin{bmatrix} m_k \\ \mu_k \end{bmatrix} = \check{K}_k^N, \quad m_k \in \mathbb{R}^{N \times 2}, \; \mu_k \in \mathbb{R}^{1 \times 2}   (41)
\check{K}_k^N = \begin{bmatrix} 0_{(N-M+1) \times 2} \\ \check{K}_{k-N+M-1}^{M-1} \end{bmatrix} + G_k^N, \quad \check{K}_{k-N+M-1}^{M-1} = K_{k-N+M-1}(1:M, :)   (42)
S_{M,k} = \rho S_{M,k-1} + e_{M,k}^T W \tilde{e}_{M,k}, \quad e_{M,k} = c_k + C_{k-1}^M A_k^M
A_k^M = A_{k-1}^M - K_{k-1}(1:M, :) W \tilde{e}_{M,k}, \quad \tilde{e}_{M,k} = c_k + C_{k-1}^M A_{k-1}^M   (43)

wherein GkN is updated as follows:










\begin{bmatrix} G_k^N \\ 0_{1 \times 2} \end{bmatrix} = \begin{bmatrix} 0_{1 \times 2} \\ G_{k-1}^N \end{bmatrix} + \begin{bmatrix} e_{M,k}^T / S_{M,k} \\ A_k^M e_{M,k}^T / S_{M,k} \\ 0_{(N-M+1) \times 2} \end{bmatrix} - \begin{bmatrix} 0_{(N-M+1) \times 2} \\ e_{M,k-N+M-1}^T / S_{M,k-N+M-1} \\ A_{k-N+M-1}^M e_{M,k-N+M-1}^T / S_{M,k-N+M-1} \end{bmatrix}
where \bar{C}_k^M = C_k(:, N-M+1:N), \; C_{k-1}^M = C_{k-1}(:, 1:M), \; C_k = \begin{bmatrix} H_k \\ H_k \end{bmatrix}, \; W = \begin{bmatrix} 1 & 0 \\ 0 & -\gamma_f^{-2} \end{bmatrix}   (44)

ef, i=zvi|i−Hixi, zvi|i=Hix^i|i

(ckεR2×1 is a first column vector of Ck=[ck, . . . , ck−N+1], ck−i=02×1 for k−i<0 is assumed, and initial values are set to be K0=0N×2, G0N=0(N+1)×2, A0M=0M×1, SM, 0=1/ε0, D0M=0M×1, x^0|0=xv0=0N×1; here, 0m×n denotes an m×n zero matrix)









\bar{C}_k^M = C_k(:, N-M+1:N), \quad C_{k-1}^M = C_{k-1}(:, 1:M), \quad C_k = \begin{bmatrix} H_k \\ H_k \end{bmatrix}, \quad W = \begin{bmatrix} 1 & 0 \\ 0 & -\gamma_f^{-2} \end{bmatrix}
and the scalar existence condition
-\varrho \hat{\Xi}_i + \rho \gamma_f^2 > 0, \quad i = 0, \ldots, k   (45)

here, ϱ, {circumflex over (Ξ)}i are respectively defined by the following expressions:










\varrho = 1 - \gamma_f^2, \quad \hat{\Xi}_i = \frac{\rho H_i K_{s,i}}{1 - H_i K_{s,i}} = \frac{\rho H_i K_i(:,1)}{1 - (1 - \gamma_f^{-2}) H_i K_i(:,1)}   (46)

where, for the communication system or the sound system,


xk is defined as an unknown estimated state vector or a state,


x0 is defined as an unknown initial state,


wk is defined as an unknown system noise,


vk is defined as an unknown observation noise,


yk is defined as an observation signal and a known input of the filter,


zk is defined as an unknown output signal,


Gk is defined as a drive matrix and becomes known at execution,


Hk is defined as a known observation matrix,


x^k|k is defined as an estimated value of the state xk at a time k using observation signals y0−yk and is given by a filter equation,


Ks, k is defined as a filter gain and is obtained from a gain matrix Kk,


ρ is defined as a forgetting factor and, when γf is determined, is automatically determined by ρ=1−χ(γf),


Σ0−1 is defined as a known inverse matrix of a weighting matrix expressing uncertainty of a state,


N is defined as a predetermined dimension (tap number) of a state vector,


M is defined as a predetermined order of an AR model


μk is defined as N+1th row vector of KUkN; obtained from KUkN,


mk is defined as N×2 matrix including a first to an N-th row of KUkN; obtained from KUkN,


γf is defined as an attenuation level and is given at design time,


DkM is defined as a backward prediction coefficient vector and is obtained from mk, ηM, k and μk,


W is defined as a weighting matrix and is determined from γf,


AkM is defined as a forward prediction coefficient vector and is obtained from Kk−1 and e{tilde over ( )}M, k,


Ck−1M is defined as a 2×M matrix including a 1st to an M-th column vector of Ck−1 and is determined from the observation matrix Hk−1,


C̄kM is defined as a 2×M matrix including an N−M+1-th to an N-th column vector of Ck and is determined from the observation matrix Hk,


KUkN is defined as the auxiliary gain matrix, and


ηM, k is defined as a backward prediction error;


the method further comprising


by using a processing section of the filter of the system identification device, performing, for the communication system or the sound system, real-time identification of the time invariable or time variable system based on the input signal.


According to one or more aspects, it is possible to provide an identification method for identifying a large-scale sound system or communication system at high speed and numerically stably. Besides, according to one or more aspects of the present invention, it is possible to derive an algorithm that greatly reduces the amount of calculation of a previously proposed fast H∞ filter by using characteristics of a sound as an input signal. Further, according to one or more aspects of the present invention, it is possible to provide a method of numerically stabilizing a fast H∞ filter by using a backward prediction error.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 An explanatory view showing update of a gain matrix of an order-recursive fast H∞ filter.

FIG. 2 A flowchart of a processing of the order-recursive fast H∞ filter.

FIG. 3 A flowchart of a check of a scalar existence condition.

FIG. 4 A flowchart of a calculation processing of a value of a prediction variable at a time tk−N+M−1.

FIG. 5 A flowchart of a numerically stabilized fast H∞ filter processing.

FIG. 6 An explanatory view of a communication system and an echo.

FIG. 7 A principle view of an echo canceller.

FIG. 8 A view showing an estimation result (performance evaluation) of an impulse response by the order-recursive fast H∞ filter.

FIG. 9 A convergence characteristic view of the order-recursive fast H∞ filter with respect to a sound input.

FIG. 10 A view of comparison of an existence condition.

FIG. 11 A structural view of hardware of an embodiment.

FIG. 12 A view of a result of comparison between the amount of calculation of a fast H∞ filter and that of the order-recursive fast H∞ filter.

FIG. 13 A view showing an impulse response of an echo path.





BEST MODE FOR CARRYING OUT THE INVENTION
1. Explanation of Symbols

First, main symbols used in the embodiment will be described.


xk: a state vector or simply a state; unknown, this is an object of estimation,


x0: an initial state; unknown,


wk: a system noise; unknown,


vk: an observation noise; unknown,


yk: an observation signal; input of the filter and known,


zk: an output signal, unknown,


Gk: a drive matrix; becomes known at execution,


Hk: an observation matrix; known,


x^k|k: an estimated value of the state xk at a time k using observation signals y0−yk; given by a filter equation,


Ks, k: a filter gain; obtained from a gain matrix Kk,


ρ: a forgetting factor; in a case of Theorem 1-Theorem 3, when γf is determined, it is automatically determined by ρ=1−χ(γf),


Σ0−1: an inverse matrix of a weighting matrix expressing uncertainty of a state; Σ0 is known,


N: a dimension (tap number) of a state vector; previously given,


M: an order of an AR model; previously given,


μk: N+1th row vector of KUkN; obtained from KUkN,


mk: N×2 matrix including a first to an N-th row of KUkN; obtained from KUkN,


γf: attenuation level; given at design time


DkM: a backward prediction coefficient vector; obtained from mk, ηM, k and μk,


W: a weighting matrix; determined from γf,


AkM: a forward prediction coefficient vector; obtained from Kk−1 and e{tilde over ( )}M, k,

Ck−1M: a 2×M matrix including a 1st to an M-th column vector of Ck−1; determined from the observation matrix Hk−1, and

C̄kM: a 2×M matrix including an N−M+1-th to an N-th column vector of Ck; determined from the observation matrix Hk.


Incidentally, “^”, “v” placed above the symbol mean estimated values. Besides, “˜”, “−”, “U” and the like are symbols added for convenience. Although these symbols are placed at the upper right of characters for input convenience, as indicated in mathematical expressions, they are the same as those placed directly above the characters. Besides, x, w, H, G, K, R, Σ and the like are matrixes and should be expressed by thick letters as indicated in the mathematical expressions, however, they are expressed in normal letters for input convenience.


2. Hardware of System Estimation and Program

A system estimation method or a system estimation device/system can be provided by a system estimation program to cause a computer to execute respective procedures, a computer readable recording medium recording the system estimation program, a program product including the system estimation program and capable of being loaded in an internal memory of a computer, a computer, such as a server, including the program, and the like.



FIG. 11 is a structural view of hardware of this embodiment.


The hardware includes a processing section 101 as a central processing unit (CPU), an input section 102, an output section 103, a display section 104, and a storage section 105. Besides, the processing section 101, the input section 102, the output section 103, the display section 104, and the storage section 105 are connected through suitable connection means such as a star or a bus. The known data described in “1. Explanation of Symbols” and subjected to the system estimation are stored in the storage section 105 as the need arises. Besides, the unknown and known data, calculated data relating to the hyper H filter, and the other data are written and/or read by the processing section 101 as the need arises.


3. Means of an Identification Method Relevant to the Embodiment

In order to differentiate it from a normal H∞ filter, an H∞ filter in which the forgetting factor can be optimally (quasi-optimally) determined in the H∞ sense is especially called a hyper H∞ filter (non-patent document 2, non-patent document 3). The H∞ filter has a feature that it can be applied even when the dynamics of a state is unknown.


Theorem 1 (Hyper H∞ Filter)


With respect to a state space model

[Mathematical Expression 13]
xk+1=xk+Gkwk, wk, xkεRN  (11)
yk=Hkxk+vk, yk, vkεR  (12)
zk=Hkxk, zkεR, HkεR1×N  (13)

a state estimated value x^k|k (or an output estimated value zvk|k) satisfying











\sup_{x_0, \{w_i\}, \{v_i\}} \frac{\sum_{i=0}^{k} |e_{f,i}|^2 / \rho}{\|x_0 - \check{x}_0\|^2_{\Sigma_0^{-1}} + \sum_{i=0}^{k} \|w_i\|^2 + \sum_{i=0}^{k} |v_i|^2 / \rho} < \gamma_f^2   (14)
is given by the following hyper H∞ filter of level γf.

[Mathematical Expression 14]
{circumflex over (z)}k|k=Hk{circumflex over (x)}k|k  (15)
{circumflex over (x)}k|k={circumflex over (x)}k−1|k−1+Ks, k(yk−Hk{circumflex over (x)}k−1|k−1)  (16)
Ks, k={circumflex over (Σ)}k|k−1HkT(Hk{circumflex over (Σ)}k|k−1HkT+ρ)−1  (17)
{circumflex over (Σ)}k|k={circumflex over (Σ)}k|k−1−{circumflex over (Σ)}k|k−1CkTRe, k−1Ck{circumflex over (Σ)}k|k−1, {circumflex over (Σ)}k+1|k={circumflex over (Σ)}k|k/ρ  (18)

where,












e_{f,i} = \check{z}_{i|i} - H_i x_i, \quad \hat{x}_{0|0} = \check{x}_0, \quad \hat{\Sigma}_{1|0} = \Sigma_0
R_{e,k} = R + C_k \hat{\Sigma}_{k|k-1} C_k^T, \quad R = \begin{bmatrix} \rho & 0 \\ 0 & -\rho \gamma_f^2 \end{bmatrix}, \quad C_k = \begin{bmatrix} H_k \\ H_k \end{bmatrix}   (19)
0 < \rho = 1 - \chi(\gamma_f) \leq 1, \quad \gamma_f > 1   (20)


χ (γf) is a monotonically decreasing function satisfying χ(1)=1 and χ(∞)=0. Besides, a drive matrix Gk is generated as follows:









[Mathematical Expression 15]
G_k G_k^T = \chi(\gamma_f) \hat{\Sigma}_{k+1|k} = \frac{\chi(\gamma_f)}{\rho} \hat{\Sigma}_{k|k}   (21)

Where, a following existence condition must be satisfied.









[Mathematical Expression 16]
\hat{\Sigma}_{i|i}^{-1} = \hat{\Sigma}_{i|i-1}^{-1} + \frac{1 - \gamma_f^{-2}}{\rho} H_i^T H_i > 0, \quad i = 0, \ldots, k   (22)

The feature of the hyper H∞ filter is that the generation of robustness in the state estimation and the optimization of the forgetting factor ρ are performed simultaneously.

When the hyper H∞ filter satisfies the existence condition, the inequality of expression (14) is always satisfied. Thus, in the case where the disturbance energy of the denominator of expression (14) is limited, the total sum of the squared estimation error in the numerator is bounded, and the estimation error after a certain time has to become 0. This means that when γf can be made small, the estimated value x^k|k can follow the change of the state xk more quickly.

Here, attention should be given to the fact that the algorithm of the hyper H∞ filter of Theorem 1 is different from that of the normal H∞ filter (non-patent document 4). Besides, when γf→∞, then ρ=1 and Gk=0, and the algorithm of the hyper H∞ filter of Theorem 1 formally coincides with the algorithm of the Kalman filter.

The amount of calculation of the hyper H∞ filter is O(N2), and this is not suitable for real-time processing. However, when the observation matrix Hk has the shift characteristic Hk+1=[uk+1, Hk(1), Hk(2), . . . , Hk(N−1)], a fast algorithm with the amount of calculation O(N) has been developed (non-patent document 2, non-patent document 3).
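For reference, a minimal O(N2) NumPy sketch of the hyper H∞ filter of Theorem 1 (expressions (15) to (20)) is shown below. It assumes χ(γf)=γf−2, the choice used in the numerical examples of this description, so that ρ=1−γf−2; the parameter defaults are illustrative assumptions, and the existence condition (22) is not checked here.

```python
import numpy as np

def hyper_hinf_filter(u, y, N, gamma_f=10.0, eps0=10.0):
    """O(N^2) hyper H-infinity filter of Theorem 1, expressions (15)-(20),
    with chi(gamma_f) = gamma_f**-2 so that rho = 1 - gamma_f**-2."""
    rho = 1.0 - gamma_f ** -2                       # expression (20)
    R = np.diag([rho, -rho * gamma_f ** 2])         # expression (19)
    x_hat = np.zeros(N)
    Sigma = eps0 * np.eye(N)                        # Sigma^_{k|k-1}, initial Sigma_0 = eps0*I
    u_pad = np.concatenate([np.zeros(N - 1), np.asarray(u, float)])
    for k in range(len(y)):
        H = u_pad[k:k + N][::-1]
        C = np.vstack([H, H])                                   # C_k = [H_k; H_k]
        Ks = Sigma @ H / (H @ Sigma @ H + rho)                  # expression (17)
        x_hat = x_hat + Ks * (y[k] - H @ x_hat)                 # expression (16)
        Re = R + C @ Sigma @ C.T                                # expression (19)
        Sigma_filt = Sigma - Sigma @ C.T @ np.linalg.inv(Re) @ C @ Sigma   # expression (18)
        Sigma = Sigma_filt / rho                                # Sigma^_{k+1|k}, expression (18)
    return x_hat
```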


Theorem 2 (Fast H∞ Filter)


When the observation matrix Hk satisfies the shift characteristic, the hyper H∞ filter having Σ0=ε0I>0 can be executed by the following expressions with the amount of calculation O(N).









[Mathematical Expression 17]
\hat{x}_{k|k} = \hat{x}_{k-1|k-1} + K_{s,k} (y_k - H_k \hat{x}_{k-1|k-1})   (23)
K_{s,k} = \frac{K_k(:,1)}{1 + \gamma_f^{-2} H_k K_k(:,1)} \in \mathbb{R}^{N \times 1}   (24)
K_k = m_k - D_k \mu_k \in \mathbb{R}^{N \times 2}, \quad D_k = [D_{k-1} - m_k W \eta_k][1 - \mu_k W \eta_k]^{-1}   (25)
\eta_k = c_{k-N} + C_k D_{k-1}   (26)
\begin{bmatrix} m_k \\ \mu_k \end{bmatrix} = \check{K}_k, \quad m_k \in \mathbb{R}^{N \times 2}, \; \mu_k \in \mathbb{R}^{1 \times 2}   (27)
\check{K}_k = \begin{bmatrix} S_k^{-1} e_k^T \\ K_{k-1} + A_k S_k^{-1} e_k^T \end{bmatrix}, \quad S_k = \rho S_{k-1} + e_k^T W \tilde{e}_k, \quad e_k = c_k + C_{k-1} A_k   (28)
A_k = A_{k-1} - K_{k-1} W \tilde{e}_k, \quad \tilde{e}_k = c_k + C_{k-1} A_{k-1}
where \; C_k = \begin{bmatrix} H_k \\ H_k \end{bmatrix}, \quad W = \begin{bmatrix} 1 & 0 \\ 0 & -\gamma_f^{-2} \end{bmatrix}   (29)

ckεR2×1 is a first column vector of Ck=[ck, . . . , ck−N+1], ck−i=02×1 for k−i<0 is assumed, and initial values are set to be K0=0N×2, A0=0N×1, S0=1/ε0, D0=0N×1, and x^0|0=0N×1, where 0m×n denotes an m×n zero matrix.


At this time, a following scalar existence condition must be satisfied.

[Mathematical Expression 18]













-\varrho \hat{\Xi}_i + \rho \gamma_f^2 > 0, \quad i = 0, \ldots, k   (30)

Where, ϱ and {circumflex over (Ξ)}i are respectively defined by the following expressions.










\varrho = 1 - \gamma_f^2, \quad \hat{\Xi}_i = \frac{\rho H_i K_{s,i}}{1 - H_i K_{s,i}} = \frac{\rho H_i K_i(:,1)}{1 - (1 - \gamma_f^{-2}) H_i K_i(:,1)}   (31)


4. Means of the Identification Method of the Embodiment of the Invention

4.1 Preparation


It is assumed that an input signal is expressed by an M(≦N)-th order AR model (autoregressive model), and a method of optimally generating an (N+1)×2 auxiliary gain matrix KUkN=QUk−1CUkT from an M×2 gain matrix is derived.


Where, QUk is expressed by

[Mathematical Expression 19]
{hacek over (Q)}k=ρ{hacek over (Q)}k−1+{hacek over (C)}kTW{hacek over (C)}k  (32)

and CUk=[Ck ck−N]=[ck Ck−1] is established (non-patent document 2).


[Lemma 1]


When an input signal is expressed by the M(≦N)-th order AR model, the auxiliary gain matrix KUkN is given by a following expression.









[Mathematical Expression 20]
\check{K}_k^N = \begin{bmatrix} 0_{(N-M+1) \times 2} \\ \check{K}_{k-N+M-1}^{M-1} \end{bmatrix} + \sum_{i=0}^{N-M} \frac{1}{S_{M,k-i}} \begin{bmatrix} 0_{i \times 2} \\ e_{M,k-i}^T \\ A_{k-i}^M e_{M,k-i}^T \\ 0_{(N-M-i) \times 2} \end{bmatrix}   (33)

Where, eM, k=ck+Ck−1MAkM, Ck−1M=Ck−1(:, 1:M), KUk−N+M−1M−1=Kk−N+M−1(1:M,:). Here, Ck−1(:, 1:M) denotes a matrix including the first column to the M-th column of Ck−1.


(Proof) See “7. Proof of lemma” (described later).


[Lemma 2]


When an input signal is expressed by the M(≦N)-th order AR model, the auxiliary gain matrix KUkN is equivalent to a following expression.














[Mathematical Expression 21]
\check{K}_k^N = \begin{bmatrix} 0_{(N-M+1) \times 2} \\ \check{K}_{k-N+M-1}^{M-1} \end{bmatrix} + G_k^N   (34)
Where,
\begin{bmatrix} G_k^N \\ 0_{1 \times 2} \end{bmatrix} = \begin{bmatrix} 0_{1 \times 2} \\ G_{k-1}^N \end{bmatrix} + \frac{1}{S_{M,k}} \begin{bmatrix} e_{M,k}^T \\ A_k^M e_{M,k}^T \\ 0_{(N-M+1) \times 2} \end{bmatrix} - \frac{1}{S_{M,k-N+M-1}} \begin{bmatrix} 0_{(N-M+1) \times 2} \\ e_{M,k-N+M-1}^T \\ A_{k-N+M-1}^M e_{M,k-N+M-1}^T \end{bmatrix}   (35)
G0=O(N+1)×2, KUk−N+M−1=O(M+1)×2, k−N+M−1≦0.


(Proof) See “7. Proof of lemma” (described later)


[Lemma 3]


When an input signal is expressed by the M(≦N)-th order AR model, a gain matrix Kk is expressed by a following expression.









[Mathematical Expression 22]
K_k = m_k - D_k \mu_k
Where,
D_k = \begin{bmatrix} 0_{N-M} \\ D_k^M \end{bmatrix}, \quad D_k^M = \frac{D_{k-1}^M - m_k(N-M+1:N, :) W \eta_{M,k}}{1 - \mu_k W \eta_{M,k}}, \quad \eta_{M,k} = c_{k-N} + \bar{C}_k^M D_{k-1}^M
\begin{bmatrix} m_k \\ \mu_k \end{bmatrix} = \check{K}_k^N, \quad m_k \in \mathbb{R}^{N \times 2}, \; \mu_k \in \mathbb{R}^{1 \times 2}   (36)
\bar{C}_k^M = C_k(:, N-M+1:N)   (37)

Here, mk(M:N,:)=[mk(M,:)T, . . . mk (N,:)T]T, and mk(i,:) is the i-th row vector of mk.


(Proof) See “7. Proof of lemma” (described later)



FIG. 1 is an explanatory view showing update of a gain matrix of an order-recursive fast H filter.


This figure shows the summary of the above. By this, it is understood that

Kk−N+M−1(1:M,:)=Kk−1(N−M+1:N,:)

is theoretically established.


This point will be explained below.


Kk−N+M−1M including the first to the M-th row of the N×2 gain matrix Kk−N+M−1 is extrapolated by using Ak−iM and SM, k−i, and after the (N+1)×2 auxiliary gain matrix KUkN is generated, the gain matrix Kk at a next time is obtained using DkM, and this is repeated hereafter.


4.2 Derivation of the Order-Recursive Fast H∞ Filter


Next, a fast H∞ filter having an optimum expanded filter gain will be described.


Theorem 3 (Order-Recursive Fast H∞ Filter)


When an input signal is expressed by the M(≦N)-th order AR model, the fast H∞ filter is executed by the following expressions with the amount of calculation 3N+O(M).









[Mathematical Expression 23]
\hat{x}_{k|k} = \hat{x}_{k-1|k-1} + K_{s,k} (y_k - H_k \hat{x}_{k-1|k-1}), \quad K_{s,k} = \frac{K_k(:,1)}{1 + \gamma_f^{-2} H_k K_k(:,1)} \in \mathbb{R}^{N \times 1}   (38)
K_k = m_k - D_k \mu_k, \quad D_k = \begin{bmatrix} 0_{N-M} \\ D_k^M \end{bmatrix}, \quad D_k^M = \frac{D_{k-1}^M - m_k(N-M+1:N, :) W \eta_{M,k}}{1 - \mu_k W \eta_{M,k}}   (39)
\eta_{M,k} = c_{k-N} + \bar{C}_k^M D_{k-1}^M   (40)
\begin{bmatrix} m_k \\ \mu_k \end{bmatrix} = \check{K}_k^N, \quad m_k \in \mathbb{R}^{N \times 2}, \; \mu_k \in \mathbb{R}^{1 \times 2}   (41)
\check{K}_k^N = \begin{bmatrix} 0_{(N-M+1) \times 2} \\ \check{K}_{k-N+M-1}^{M-1} \end{bmatrix} + G_k^N, \quad \check{K}_{k-N+M-1}^{M-1} = K_{k-N+M-1}(1:M, :)   (42)
S_{M,k} = \rho S_{M,k-1} + e_{M,k}^T W \tilde{e}_{M,k}, \quad e_{M,k} = c_k + C_{k-1}^M A_k^M
A_k^M = A_{k-1}^M - K_{k-1}(1:M, :) W \tilde{e}_{M,k}, \quad \tilde{e}_{M,k} = c_k + C_{k-1}^M A_{k-1}^M   (43)

Where, G_k^N is updated as follows:
\begin{bmatrix} G_k^N \\ 0_{1 \times 2} \end{bmatrix} = \begin{bmatrix} 0_{1 \times 2} \\ G_{k-1}^N \end{bmatrix} + \begin{bmatrix} e_{M,k}^T / S_{M,k} \\ A_k^M e_{M,k}^T / S_{M,k} \\ 0_{(N-M+1) \times 2} \end{bmatrix} - \begin{bmatrix} 0_{(N-M+1) \times 2} \\ e_{M,k-N+M-1}^T / S_{M,k-N+M-1} \\ A_{k-N+M-1}^M e_{M,k-N+M-1}^T / S_{M,k-N+M-1} \end{bmatrix}
where \bar{C}_k^M = C_k(:, N-M+1:N), \; C_{k-1}^M = C_{k-1}(:, 1:M), \; C_k = \begin{bmatrix} H_k \\ H_k \end{bmatrix}, \; W = \begin{bmatrix} 1 & 0 \\ 0 & -\gamma_f^{-2} \end{bmatrix}   (44)

ef, i=zvi|i−Hixi, zvi|i=Hix^i|i,


ckεR2×1 is the first column vector of Ck=[ck, . . . , ck−N+1],


ck−i=02×1 for k−i<0, and initial values are set to be K0=0N×2, G0N=0(N+1)×2, A0M=0M×1, SM,0=1/ε0, D0M=0M×1, x^0|0=xv0=0N×1.


Besides, a following scalar condition must be satisfied.

[Mathematical Expression 24]













-\varrho \hat{\Xi}_i + \rho \gamma_f^2 > 0, \quad i = 0, \ldots, k   (45)

Here, ϱ and {circumflex over (Ξ)}i are respectively defined by the following expressions.










\varrho = 1 - \gamma_f^2, \quad \hat{\Xi}_i = \frac{\rho H_i K_{s,i}}{1 - H_i K_{s,i}} = \frac{\rho H_i K_i(:,1)}{1 - (1 - \gamma_f^{-2}) H_i K_i(:,1)}   (46)

(Proof) It is obtained when the lemmas 1, 2 and 3 are applied to Theorem 2.


4.3 Implementation of the Order-Recursive Fast H∞ Filter



FIG. 2 shows a flowchart of the order-recursive fast H∞ filter.


In order to reduce the amount of calculation, Kk(:, 1)/(1+γf−2HkKk(:, 1)) is directly used instead of Ks, k.


Hereinafter, the processing of the order-recursive fast H∞ filter will be described with reference to the flowchart.


[Step S201, Initialization] The processing section 101 reads an initial state of a recursive equation from the storage section 105, or inputs the initial state from the input section 102, and determines it as shown in the figure.


[Step S203] The processing section 101 compares a time k with a maximum data number L. Incidentally, L denotes the previously determined maximum data number. When the time k is larger than the maximum data number, the processing section 101 ends the processing, and when not larger than that, advance is made to a next step. (If unnecessary, the conditional sentence can be removed. Alternatively, restart may be made as the need arises.)


[Step S205, Input] The processing section 101 inputs an input uk from the input section 102, and sets CUk as shown in the figure. Incidentally, the input uk may be read from the storage section 105.


[Step S207, Forward prediction] The processing section 101 recursively determines variables eM, k, AkM, SM, k, Kk(1: M,:) and the like by expression (43).


[Step S209, Extrapolation] The processing section 101 updates a matrix GkN by expression (44), and calculates an auxiliary gain matrix KUkN by expression (42).


[Step S211, Partition] The processing section 101 partitions the auxiliary gain matrix KUkN by expression (41).


[Step S213, Backward prediction] The processing section 101 calculates a variable DkM and a backward prediction error ηM, k by expression (40).


[Step S215, Gain matrix] The processing section 101 calculates a gain matrix Kk by expression (39).


[Step S217, Filtering] The processing section 101 updates a filter equation of the fast H∞ filter by expression (38). Here, in order to reduce the amount of calculation, Kk(:, 1)/(1+γf−2HkKk(:, 1)) is directly used as the filter gain Ks, k.


[Step S219] The processing section 101 advances the time k (k=k+1), returns to step S203, and continues as long as data exists or a specified number of times.


Incidentally, the processing section 101 may store a suitable intermediate value and a final value obtained at the respective steps of the calculation steps S205 to S217 of the H filter, a value of an existence condition and the like into the storage section 105 as the need arises, and may read them from the storage section 105.


Besides, it should be noted that the order-recursive fast H∞ filter coincides completely with the fast H∞ filter when M=N.
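The following NumPy sketch traces steps S201 to S217 with expressions (38) to (44). It is a minimal illustration, assuming χ(γf)=γf−2 so that ρ=1−γf−2, holding the delayed quantities at time k−N+M−1 in a ring buffer (method 1 of section 4.4), and treating uk and the prediction variables as zero or their initial values for times k≦0; it is one possible realization, not the only one. For M=N the same loop corresponds to the fast H∞ filter of Theorem 2.

```python
import numpy as np
from collections import deque

def or_fast_hinf_filter(u, y, N, M, gamma_f, eps0=1.0):
    """Sketch of the order-recursive fast H-infinity filter of Theorem 3 / FIG. 2
    (steps S201-S217, expressions (38)-(44))."""
    rho = 1.0 - gamma_f ** -2                 # forgetting factor with chi(gamma_f)=gamma_f**-2
    W = np.array([[1.0, 0.0], [0.0, -gamma_f ** -2]])
    upad = np.concatenate([np.zeros(N), np.asarray(u, float)])   # u_j = 0 for j < 0

    x_hat = np.zeros(N)                       # x^_{k|k}
    K = np.zeros((N, 2))                      # K_{k-1}
    A = np.zeros(M)                           # A^M_{k-1}
    S = 1.0 / eps0                            # S_{M,k-1}
    D = np.zeros(M)                           # D^M_{k-1}
    G = np.zeros((N + 1, 2))                  # G^N_{k-1}
    # per-step (e_{M,j}, A^M_j, S_{M,j}, K_j(1:M,:)) kept for N-M+1 steps (step S209 needs j=k-N+M-1)
    buf = deque(maxlen=N - M + 1)

    for k in range(len(y)):                                       # S203
        uk = upad[k + N]
        c_k = np.array([uk, uk])                                  # first column of C_k
        c_kN = np.full(2, upad[k])                                # c_{k-N}
        Cm_prev = np.tile(upad[k - M + N:k + N][::-1], (2, 1))    # C^M_{k-1}, set at S205
        Cbar = np.tile(upad[k + 1:k + M + 1][::-1], (2, 1))       # C-bar^M_k

        # S207, forward prediction, expression (43)
        e_t = c_k + Cm_prev @ A
        A = A - K[:M, :] @ (W @ e_t)
        e = c_k + Cm_prev @ A
        S = rho * S + e @ (W @ e_t)

        # S209, extrapolation, expressions (44) and (42)
        if len(buf) == N - M + 1:
            e_d, A_d, S_d, Km_d = buf[0]                          # values at time k-N+M-1
        else:                                                     # time k-N+M-1 <= 0
            e_d, A_d, S_d, Km_d = np.zeros(2), np.zeros(M), 1.0 / eps0, np.zeros((M, 2))
        up = np.vstack([e[None, :] / S, np.outer(A, e) / S, np.zeros((N - M + 1, 2))])
        dn = np.vstack([np.zeros((N - M + 1, 2)), e_d[None, :] / S_d, np.outer(A_d, e_d) / S_d])
        GG = np.vstack([np.zeros((1, 2)), G]) + up - dn
        G = GG[:N + 1, :]
        K_chk = np.vstack([np.zeros((N - M + 1, 2)), Km_d]) + G   # auxiliary gain matrix

        m, mu = K_chk[:N, :], K_chk[N, :]                         # S211, expression (41)

        # S213, backward prediction, expression (40); S215, gain matrix, expression (39)
        eta = c_kN + Cbar @ D
        D = (D - m[N - M:, :] @ (W @ eta)) / (1.0 - mu @ (W @ eta))
        K = m - np.outer(np.concatenate([np.zeros(N - M), D]), mu)

        # S217, filtering, expression (38): K_k(:,1)/(1+gamma_f**-2 H_k K_k(:,1)) used directly
        H = upad[k + 1:k + N + 1][::-1]
        Ks = K[:, 0] / (1.0 + gamma_f ** -2 * (H @ K[:, 0]))
        x_hat = x_hat + Ks * (y[k] - H @ x_hat)

        buf.append((e.copy(), A.copy(), S, K[:M, :].copy()))      # hold values for N-M steps
    return x_hat
```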



FIG. 3 is a flowchart of check of a scalar existence condition.


In general, a check function of the existence condition of FIG. 3 is added to the flowchart of FIG. 2. This step can be executed by the processing section 101 at a suitable timing before or after a suitable one step or plural steps of the flowchart of FIG. 2 or FIG. 4 (described later) or the step (S219) of advancing the time k. EXC(k) corresponds to the left side of the foregoing expression (45).
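A small sketch of the quantity EXC(k) checked in FIG. 3 is shown below, using expression (46) with ϱ=1−γf2 as given above; Hk and Kk are the current observation row and gain matrix, and the argument names are illustrative.

```python
import numpy as np

def exc(H_k, K_k, rho, gamma_f):
    """Left side EXC(k) of the scalar existence condition (45); must remain positive."""
    Ks = K_k[:, 0] / (1.0 + gamma_f ** -2 * (H_k @ K_k[:, 0]))   # filter gain of expression (38)
    xi = rho * (H_k @ Ks) / (1.0 - H_k @ Ks)                      # Xi^_k, expression (46)
    return -(1.0 - gamma_f ** 2) * xi + rho * gamma_f ** 2
```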


4.4 Calculation of Prediction Variable


As a method of obtaining variables eM, k−N+M−1, Ak−N+M−1M, SM, k−N+M−1 and KUk−N+M−1M−1 (=Kk−N+M−1M=Kk−N+M−1 (1:M,:) at the Extrapolation (step S209) of FIG. 2, there are following two methods.


1) Values of the variables obtained at the respective steps are held on a memory during N−M steps.


2) Values of the variables at a time tk−N+M−1 are also again calculated at the respective steps (see FIG. 4 described later).


In general, the method of 1) is suitable for a DSP (Digital Signal Processor) having a large storage capacity, and the method of 2) is often suitable for a normal DSP, and either of the methods may be used.



FIG. 4 is a flowchart of a calculation method of values of prediction variables at the time tk−N+M−1.


Hereinafter, a calculation processing of the values of the prediction variables at the time tk−N+M−1 will be described with reference to the flowchart. The processing section 101 calculates the respective variables based on the flowchart, and uses the respective variables when step S209 or the like of the flowchart of FIG. 2 is calculated.


[Step S401, Initialization] The processing section 101 reads an initial state of a recursive equation from the storage section 105, or inputs the initial state from the input section 102, and determines it as shown in the figure.


[Step S403] The processing section 101 compares a time k with a maximum data number L. Incidentally, L denotes the previously determined maximum data number. When the time k is larger than the maximum data number, the processing section 101 ends the processing, and when not larger than that, advance is made to a next step. (If unnecessary, the conditional sentence can be removed. Alternatively, restart may be made as the need arises.)


[Step S405, Input] An input uk is inputted from the input section 102, and CUk is set as shown in the figure. Incidentally, the input uk may be read from the storage section 105.


[Step S407, Forward prediction] The processing section 101 recursively determines variables eM, k, AkM, SM, k, Kk−1M and the like as shown in the figure (see expression (43)).


[Step S409, Extrapolation] The processing section 101 calculates an auxiliary gain matrix KUkM as shown in the figure (see expression (42)).


[Step S411, Partition] The processing section 101 partitions the auxiliary gain matrix KUkM as shown in the figure (see expression (41)).


[Step S413, Backward prediction] The processing section 101 calculates a variable DkM and a backward prediction error ηM, k as shown in the figure (see expression (40)).


[Step S415, Gain matrix] The processing section 101 calculates a gain matrix KkM as shown in the figure (see expression (39)).


[Step S417] The processing section 101 advances the time k (k=k+1), returns to step S403, and continues as long as data exists or a specified number of times.


Incidentally, the processing section 101 may store a suitable intermediate value and a final value obtained at the respective steps of the calculation steps S405 to S415 of the H filter, a value of an existence condition and the like into the storage section 105 as the need arises, or may read them from the storage section 105.


4.5 Backward Prediction Error


For numerical stabilization, instead of the backward prediction error ηk,

[Mathematical Expression 25]
{tilde over (η)}k=ηk+β(ηk−ρ−NSkμkT)  (47)

can be adopted.



FIG. 5 is a flowchart of the numerically stabilized fast H∞ filter (the order-recursive fast H∞ filter at M=N).


In general, a check function of an existence condition of FIG. 3 is added to this flowchart. Besides, β denotes a control parameter, and β is generally made β=1.0. This stabilizing method can be applied also to a case of M<N.
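In code, the substitution of expression (47) at the stabilized backward prediction step may be sketched as follows; β=1.0 by default and the argument names are illustrative.

```python
import numpy as np

def stabilized_eta(eta_k, S_k, mu_k, rho, N, beta=1.0):
    """Numerically stabilized backward prediction error of expression (47):
    eta~_k = eta_k + beta * (eta_k - rho**(-N) * S_k * mu_k^T)."""
    return np.asarray(eta_k) + beta * (np.asarray(eta_k) - rho ** (-N) * S_k * np.asarray(mu_k))
```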


Hereinafter, the processing of the numerically stabilized fast H filter will be described with reference to the flowchart.


[Step S501, Initialization] The processing section 101 reads an initial state of a recursive equation from the storage section 105, or inputs the initial state from the input section 102, and determines it as shown in the figure.


[Step S503] The processing section 101 compares a time k with a maximum data number L. Incidentally, L denotes the previously determined maximum data number. When the time k is larger than the maximum data number, the processing section 101 ends the processing, and when not larger than that, advance is made to a next step. (If unnecessary, the conditional sentence can be removed. Alternatively, restart may be made as the need arises.)


[Step S505, Input] An input uk is inputted from the input section 102, and CUk is set as shown in the figure. Incidentally, the input uk may be read from the storage section 105.


[Step S507, Forward prediction] The processing section 101 recursively determines variables eM, k, Ak, Sk, Kk−1 and the like as shown in the figure (see expression (43)).


[Step S509, Extrapolation] The processing section 101 calculates an auxiliary gain matrix KUkN as shown in the figure (see expression (42)).


[Step S511, Partition] The processing section 101 partitions the auxiliary gain matrix KUkN as shown in the figure (see expression (41)).


[Step S513, Stabilized backward prediction] The processing section 101 calculates a variable DkM and a backward prediction error η{tilde over ( )},k as shown in the figure (see expression (40)). Here, for numerical stabilization, η{tilde over ( )}k (expression (47)) is adopted instead of the backward prediction error ηk.


[Step S515, Gain matrix] The processing section 101 calculates a gain matrix KkM as shown in the figure (see expression (39)).


[Step S517, Filtering] The processing section 101 updates a filter equation of the fast H filter by expression (38). Here, in order to reduce the amount of calculation, Kk(:, 1)/(1+γf−2HkKk(:, 1) is directly used as the filter gain Ks, k.


[Step S519] The processing section 101 advances the time k (k=k+1), returns to step S503, and continues as long as data exists or a specified number of times.


Incidentally, the processing section 101 may store a suitable intermediate value and a final value obtained at the respective steps of the calculation steps S505 to S517 of the H filter, a value of an existence condition and the like into the storage section 105 as the need arises, or may read them from the storage section 105.


5. Comparison Result


FIG. 12 is a view of a result of comparison between the amount of calculation of the fast H∞ filter (FHF) and that of the order-recursive fast H∞ filter (OR-FHF).


However, this result is obtained in the case where calculation is performed in accordance with the expressions, and the amount of calculation can be further reduced by contrivance. Besides, division of a vector by a scalar can be counted as multiplication by taking the reciprocal of the scalar.


By this, when M<<N is established, the number of times of multiplication required for the order-recursive fast H filter becomes about 3N, and it is understood that the amount of calculation can be greatly reduced as compared with the fast H filter.


6. Echo Canceller

6.1 Preparation


Hereinafter, an example of an echo canceller is adopted as the embodiment, and the operation of the identification algorithm is confirmed. Before that, the echo canceller will be described briefly.



FIG. 6 is an explanatory view of a communication system and an echo.


In a long distance telephone circuit such as an international telephone, a four-wire circuit is used from the reason of signal amplification and the like. On the other hand, since a subscriber's circuit has a relatively short distance, a two-wire circuit is used. A hybrid transformer as shown in the figure is introduced at a connection part between the two-wire circuit and the four-wire circuit, and impedance matching is performed. When the impedance matching is complete, a signal (sound) from a speaker B reaches only a speaker A. However, in general, it is difficult to realize the complete matching, and there occurs a phenomenon in which part of the received signal leaks to the four-wire circuit, and returns to the receiver (speaker A) after being amplified. This is an echo. As a transmission distance becomes long (as a delay time becomes long), the influence of the echo becomes large, and the quality of a telephone call is remarkably deteriorated (in the pulse transmission, even in the case of short distance, the deterioration of a telephone call due to the echo can be a problem).



FIG. 7 is a principle view of an echo canceller.


As shown in the figure, an echo canceller is introduced, the impulse response of the echo path is successively estimated by using the received signal, which can be directly observed, and the echo, and a pseudo-echo generated from this estimate is subtracted from the actual echo to cancel and remove it.


The estimation of the impulse response of the echo path is performed so that the mean square error of a residual echo ek becomes minimum. At this time, elements to interfere with the estimation of the echo path are circuit noise and a signal (sound) from the speaker A. In general, when two speakers simultaneously start to speak (double talk), the estimation of the impulse response is suspended. Besides, since the impulse response length of the hybrid transformer is, for example, about 50 [ms], when the sampling period is made 125 [μs], the order of the impulse response of the echo path becomes actually about 400.


Next, a mathematical model of this echo canceling problem is created. First, when consideration is given to the fact that the received signal {uk} becomes the input signal to the echo path, by the impulse response {hi[k]} of the echo path, an observed value {yk} of an echo {dk} is expressed by a following expression.









[Mathematical Expression 26]
y_k = d_k + v_k = \sum_{i=0}^{N-1} h_i[k] u_{k-i} + v_k, \quad k = 0, 1, 2, \ldots   (48)

Here, uk and yk respectively denote the received signal and the observed value of the echo at a time tk (=kT; T is a sampling period), vk denotes circuit noise of a mean value 0 at the time tk, and it is assumed that a tap number N is known. At this time, when an estimated value {h^i[k]} of the impulse response is obtained every moment, the pseudo-echo can be generated by using that as described below.









[Mathematical Expression 27]
\hat{d}_k = \sum_{i=0}^{N-1} \hat{h}_i[k] u_{k-i}, \quad k = 0, 1, 2, \ldots   (49)
When this is subtracted from the echo (yk−d^k≈0), the echo can be cancelled. Here, it is assumed that uk−i=0 if k−i<0.


From the above, the canceling problem can be reduced to the problem of successively estimating the impulse response {hi[k]} of the echo path from the received signal {uk} which can be directly observed and the observed value {yk} of the echo.


In general, in order to apply the H∞ filter to the echo canceller, first, expression (48) must be expressed by a state space model including a state equation and an observation equation. Then, when {hi[k]} is taken as a state variable xk and a fluctuation of about wk is allowed, the following state space model can be established for the echo path.

[Mathematical Expression 28]
xk+1=xk+Gkwk, xk, wkεRN  (50)
yk=Hkxk+vk, yk, vkεR  (51)
zk=Hkxk, zkεR, HkεR1×N  (52)

Where,


xk=[h0[k], . . . , hN−1[k]]T, wk=[wk(1), . . . , wk(N)]T, Hk=[uk, . . . , uk−N+1]
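A sketch of the resulting cancellation loop is shown below; `estimate_step` is a hypothetical placeholder standing for one update of any of the estimators above (LMS, RLS, Kalman, or the H∞ filters) and is not part of this description.

```python
import numpy as np

def echo_canceller(u, y, N, estimate_step):
    """Cancellation per expressions (48)-(52): the echo-path impulse response is
    estimated from the received signal u and the observed echo y, the pseudo-echo
    d^_k = H_k x^_{k|k} (expression (49)) is generated, and the residual y_k - d^_k
    is returned."""
    u_pad = np.concatenate([np.zeros(N - 1), np.asarray(u, float)])
    residual = np.zeros(len(y))
    for k in range(len(y)):
        H_k = u_pad[k:k + N][::-1]            # H_k = [u_k, ..., u_{k-N+1}]
        x_hat = estimate_step(H_k, y[k])      # successive estimate of the echo path
        d_hat = H_k @ x_hat                   # pseudo-echo
        residual[k] = y[k] - d_hat            # cancelled echo, ideally close to v_k
    return residual
```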

6.2 Confirmation of Operation



FIG. 13 is a view showing an impulse response of an echo path.


With respect to the case where the impulse response of the echo path is temporally invariable (hi[k]=hi), and for example, the tap number N is 48, the operation of the fast algorithm is confirmed by using a simulation. At this time, the observed value yk of the echo is expressed by a following expression.









[Mathematical Expression 29]
y_k = \sum_{i=0}^{63} h_i u_{k-i} + v_k   (53)
Where, the values of FIG. 13 are used for the impulse response {hi}i=0, . . . , 23, and the remaining {hi}i=24, . . . , 63 are made 0. Besides, it is assumed that vk is stationary Gaussian white noise having a mean value of 0 and a variance σv2=1.0×10−4, and the sampling period T is made 1.0 for convenience.


Besides, the received signal {uk} is approximated by a secondary AR model as follows:

[Mathematical Expression 30]
uk1uk−1+α2uk−2+w′k  (54)

Where, α1=0.7, α2=0.1, and w′k is stationary Gaussian white noise having a mean value of 0 and a variance σw′2=0.04.
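The received signal of expression (54) used in the simulation may be generated, for example, as follows (the random seed is an illustrative assumption).

```python
import numpy as np

def ar2_received_signal(L, a1=0.7, a2=0.1, var_w=0.04, seed=0):
    """Generate u_k = a1*u_{k-1} + a2*u_{k-2} + w'_k of expression (54), with w'_k
    stationary Gaussian white noise of mean 0 and variance var_w (= sigma_w'^2)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, np.sqrt(var_w), L)
    u = np.zeros(L)
    for k in range(L):
        u[k] = a1 * (u[k - 1] if k >= 1 else 0.0) + a2 * (u[k - 2] if k >= 2 else 0.0) + w[k]
    return u
```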



FIG. 8 is a view showing an estimation result (performance evaluation) of the impulse response by the order-recursive fast H∞ filter at M=2.


Here, FIG. 8A shows the estimated value (x^256|256) of the impulse response at k=256 (the broken line indicates the true value), and FIG. 8B shows its convergence characteristic, where the vertical axis of FIG. 8B indicates ∥xk−x^k|k∥2=Σi=0, . . . , 63 (hi−x^k(i+1))2. By this, it is understood that the impulse response of the system can be excellently estimated by the order-recursive fast H∞ filter. Here, ρ=1−χ(γf), χ(γf)=γf−2, x^0|0=0, γf=10.0, Σ^1|0=ε0I, and ε0=10.0 are assumed, and the calculation is performed in MATLAB (double precision).



FIG. 9 is a convergence characteristic view of the order-recursive fast H filter with respect to sound input (in this example, N=512, M=20, γf=54.0).


Here, it is assumed that the order of the impulse response of an unknown system is 512, and the sound is a 20th order AR model. Besides, γf=54.0 and ε0=1.0 are assumed, and the calculation is performed at MATLAB (double precision). By this, it is understood that even when the input is sound, the order-recursive fast H filter has excellent performance.



FIG. 10 is a view of comparison of an existence condition. This figure shows temporal changes of values of the left side (EXC(k)) of the scalar existence condition with respect to the fast H filter (order-recursive fast H filter at M=N) (FIG. 10A) and a numerically stabilized fast H filter (FIG. 10B) (in this example, N=256, γf=42.0).


By this, in the case of the fast H filter, the value of EXC (k) becomes negative at about 45000 steps, and the filter is stopped. On the other hand, in the numerically stabilized fast H filter, EXC(k) always keeps a positive value in this section, and the existence condition is satisfied. However, the calculation is performed at single precision, and the same sound signal is used for each input.


7. Proof of Lemma

7.1 Proof of Lemma 1


When the dimension of a state vector is M, a following expression is obtained from expression (28).









[Mathematical Expression 31]
\check{K}_k^M = \begin{bmatrix} 0_{1 \times 2} \\ \check{K}_{k-1}^{M-1} \end{bmatrix} + \frac{1}{S_{M,k}} \begin{bmatrix} e_{M,k}^T \\ A_k^M e_{M,k}^T \end{bmatrix}   (55)

When this is applied to the case of the dimension of M+1, an auxiliary gain matrix KUkM+1 is obtained.









[Mathematical Expression 32]
\check{K}_k^{M+1} = \begin{bmatrix} 0_{1 \times 2} \\ \check{K}_{k-1}^M \end{bmatrix} + \frac{1}{S_{M+1,k}} \begin{bmatrix} e_{M+1,k}^T \\ A_k^{M+1} e_{M+1,k}^T \end{bmatrix}   (56)
On the other hand, when an input signal can be expressed by an M-th order AR model, eM, k=min{em, k} and e{tilde over ( )}M, k=min{e{tilde over ( )}m, k} are established for m≧M. By this,









[Mathematical Expression 33]
e_{m,k} = c_k + C_{k-1}^m A_k^m = c_k + C_{k-1}^M A_k^M = e_{M,k}
\tilde{e}_{m,k} = c_k + C_{k-1}^M A_{k-1}^M = \tilde{e}_{M,k}
A_k^m e_{m,k}^T = \begin{bmatrix} A_k^M e_{M,k}^T \\ 0_{(m-M) \times 1} \end{bmatrix}
S_{m,k} = S_{M,k} = \rho S_{M,k-1} + e_{M,k}^T W \tilde{e}_{M,k}   (57)

At this time,












\breve{Q}_k^{M+1} \begin{bmatrix} 0 \\ 1 \\ A_{k-1}^M \end{bmatrix} = \begin{bmatrix} 0 \\ S_{M,k-1} \\ 0_M \end{bmatrix}














is established. Thus, the following expression is obtained.









[Mathematical Expression 34]
\begin{aligned}
\breve{K}_k^{M+1} &= \begin{bmatrix} 0_{1\times 2} \\ \breve{K}_{k-1}^{M} \end{bmatrix} + \frac{1}{S_{M,k}} \begin{bmatrix} e_{M,k}^T \\ A_k^M e_{M,k}^T \\ 0_{1\times 2} \end{bmatrix} \\
&= \begin{bmatrix} 0_{1\times 2} \\ 0_{1\times 2} \\ \breve{K}_{k-2}^{M-1} \end{bmatrix} + \frac{1}{S_{M,k-1}} \begin{bmatrix} 0_{1\times 2} \\ e_{M,k-1}^T \\ A_{k-1}^M e_{M,k-1}^T \end{bmatrix} + \frac{1}{S_{M,k}} \begin{bmatrix} e_{M,k}^T \\ A_k^M e_{M,k}^T \\ 0_{1\times 2} \end{bmatrix}
\end{aligned}


















Similarly, when the repetition is continued up to N, finally, by using Ak−iM and SM, k−i satisfying









\breve{Q}_{k-i}^M \begin{bmatrix} 1 \\ A_{k-i}^M \end{bmatrix} = \begin{bmatrix} S_{M,k-i} \\ 0_M \end{bmatrix}






that is,









[Mathematical Expression 35]
\breve{Q}_k^N \begin{bmatrix} 0_i \\ 1 \\ A_{k-i}^M \\ 0_{N-M-i} \end{bmatrix} = \begin{bmatrix} 0_i \\ S_{M,k-i} \\ 0_M \\ 0_{N-M-i} \end{bmatrix}














the auxiliary gain matrix KUkN can be expressed as follows:









[Mathematical Expression 36]
\breve{K}_k^N = \begin{bmatrix} 0_{(N-M+1)\times 2} \\ \breve{K}_{k-N+M-1}^{M-1} \end{bmatrix} + \sum_{i=0}^{N-M} \frac{1}{S_{M,k-i}} \begin{bmatrix} 0_{i\times 2} \\ e_{M,k-i}^T \\ A_{k-i}^M e_{M,k-i}^T \\ 0_{(N-M-i)\times 2} \end{bmatrix}


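Purely as an illustration of the structure of this sum (not part of the patent text), the following Python sketch assembles the (N+1)×2 auxiliary gain matrix from hypothetical histories of SM,k−i, eM,k−i and Ak−iM; numpy is assumed and all argument names are placeholders.

```python
import numpy as np

def assemble_auxiliary_gain(K_prev, S_hist, e_hist, A_hist, N, M):
    """Assemble the (N+1) x 2 auxiliary gain matrix as the sum of shifted
    blocks, following the sum form derived above.

    K_prev : (M, 2)      auxiliary gain block at time k-N+M-1 (order M-1)
    S_hist : (N-M+1,)    S_{M,k-i} for i = 0, ..., N-M
    e_hist : (N-M+1, 2)  e_{M,k-i} stored as row vectors, i = 0, ..., N-M
    A_hist : (N-M+1, M)  A_{k-i}^M, i = 0, ..., N-M
    """
    K = np.zeros((N + 1, 2))
    K[N - M + 1:, :] = K_prev                       # [0_{(N-M+1)x2}; K_prev]
    for i in range(N - M + 1):
        block = np.vstack([e_hist[i][None, :],                 # e_{M,k-i}^T (1 x 2)
                           np.outer(A_hist[i], e_hist[i])])    # A_{k-i}^M e_{M,k-i}^T (M x 2)
        K[i:i + M + 1, :] += block / S_hist[i]      # block shifted down by i rows
    return K
```

Evaluating the sum directly in this way costs on the order of N·M operations per step; maintaining it recursively through the auxiliary variable GkN of Lemma 2 below avoids that recomputation.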















7.2 Proof of Lemma 2


An auxiliary variable GkN is defined as









[Mathematical Expression 37]
G_k^N = \sum_{i=0}^{N-M} \frac{1}{S_{M,k-i}} \begin{bmatrix} 0_{i\times 2} \\ e_{M,k-i}^T \\ A_{k-i}^M e_{M,k-i}^T \\ 0_{(N-M-i)\times 2} \end{bmatrix}
















At this time, when attention is paid to GkN(i−1,:)=Gk−1N(i,:), it becomes









[Mathematical Expression 38]
\begin{bmatrix} G_k^N \\ 0_{1\times 2} \end{bmatrix} - \begin{bmatrix} 0_{1\times 2} \\ G_{k-1}^N \end{bmatrix} = \frac{1}{S_{M,k}} \begin{bmatrix} e_{M,k}^T \\ A_k^M e_{M,k}^T \\ 0_{(N-M+1)\times 2} \end{bmatrix} - \frac{1}{S_{M,k-N+M-1}} \begin{bmatrix} 0_{(N-M+1)\times 2} \\ e_{M,k-N+M-1}^T \\ A_{k-N+M-1}^M e_{M,k-N+M-1}^T \end{bmatrix}

















By this, expression (35) is established. Thus, the auxiliary gain matrix KUkN is equivalent to expression (34) (expression (42)).
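As an illustration only (not part of the patent text), the sliding update implied by the relation above can be sketched in Python as follows; numpy is assumed, the argument names are placeholders, and the quantities at time k−N+M−1 are those leaving the summation window.

```python
import numpy as np

def slide_update_G(G_prev, S_new, e_new, A_new, S_old, e_old, A_old, N, M):
    """One sliding update of the (N+1) x 2 auxiliary variable G: shift the
    previous G down by one row, add the newest block at the top, and subtract
    the block that has dropped out of the summation window at the bottom.

    G_prev : (N+1, 2)                  G at the previous time step
    S_new, e_new (2,), A_new (M,)      quantities at time k
    S_old, e_old (2,), A_old (M,)      quantities at time k-N+M-1
    """
    G_pad = np.zeros((N + 2, 2))
    G_pad[1:, :] = G_prev                                            # [0_{1x2}; G_{k-1}]
    G_pad[:M + 1, :] += np.vstack([e_new[None, :],
                                   np.outer(A_new, e_new)]) / S_new  # newest block (top)
    G_pad[N - M + 1:, :] -= np.vstack([e_old[None, :],
                                       np.outer(A_old, e_old)]) / S_old  # stale block (bottom)
    return G_pad[:N + 1, :]                                          # drop the final zero row
```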


7.3 Proof of Lemma 3


When an input signal is expressed by an M(≦N)-th order AR model, it becomes









[Mathematical Expression 39]
\begin{aligned}
\eta_k &= c_{k-N} + C_k D_{k-1} = c_{k-N} + \bar{C}_k^M D_{k-1}^M \\
\breve{Q}_{k-N+M}^M \begin{bmatrix} D_k^M \\ 1 \end{bmatrix} &= \begin{bmatrix} 0_M \\ (F_k^M)^{-1} \end{bmatrix}
\qquad \left( \breve{Q}_k^N \begin{bmatrix} D_k \\ 1 \end{bmatrix} = \breve{Q}_k^N \begin{bmatrix} 0_{N-M} \\ D_k^M \\ 1 \end{bmatrix} \right)
\end{aligned}  (58)








When attention is paid to this, expression (36) and expression (37) are obtained by a method similar to that of non-patent document 2.


INDUSTRIAL APPLICABILITY

The fast identification method of the invention can be applied to the identification of a large-scale sound system or communication system, and is effective for an echo canceller in a loudspeaker system or a video conference system, active noise control, sound field reproduction, and the like. Besides, by the development of the numerically stabilizing method, the operation can be performed more stably even at single precision, and higher performance can be realized at low cost.


In ordinary consumer communication equipment and the like, calculation is generally performed at single precision from the viewpoint of cost and speed. Thus, the invention, as a practical fast identification method, is expected to be effective in various industrial fields.

Claims
  • 1. A system identification device, for a communication system or a sound system, for performing real-time identification of a time-invariant or time-variant system in which real-time characteristics of signal or sound are provided as an input to the system device; the system identification device comprising: a processor; and,a filter, being included in said processor, that is robust against a disturbance by being configured to determine that a maximum energy gain to a filter error from the disturbance, as an evaluation criterion, is restricted to be smaller than a predetermined upper limit γf2;wherein,the filter satisfies an H∞ evaluation criterion as indicated by following expression (14) with respect to a state space model as indicated by following expressions (11) to (13),when an input signal is expressed by an M(≦N)-th order autoregressive model (AR model), the filter is given by following expressions (38) to (44), andthe filter satisfies a scalar existence condition of following expressions (45) and (46):
  • 2. The system identification device according to claim 1, wherein it is assumed that the input signal is expressed by the M(≦N)-th order autoregressive model (AR model), and wherein the processor of the filter is configured to optimally generate a (N+1)×2 auxiliary gain matrix KUkN=QUk−1CUkT from an M×2 gain matrix, wherein QUk is expressed by {hacek over (Q)}k=ρ{hacek over (Q)}k−1+{hacek over (C)}kTW{hacek over (C)}k  (32)
  • 3. The system identification device according to claim 1, wherein the processor of the filter is configured to adopt a following expression η{tilde over ( )}k instead of a backward prediction error ηk for numerical stabilization: {tilde over (η)}k=ηk+β(ηk−ρ−NSkμkT)  (47)
  • 4. The system identification device according to claim 1, wherein the processor of the filter is configured to read an initial state of a recursive equation from a storage section or input the initial state from an input section and determining it,the processor is further configured to input an input uk from the input section or read it from the storage section and set CUk, KUk(1:M,:),the processor is further configured to recursively determine variables eM, k, AkM, SM, k and Kk(1:M,:) by expression (43),the processor is further configured to update the matrix GkN by expression (44) and calculate an auxiliary gain matrix KUkN as in expression (42),the processor is further configured to partition the auxiliary gain matrix KUkN by expression (41),the processor is further configured to calculate a variable DkM and a backward prediction error ηM, k by expression (40),the processor is further configured to calculate the gain matrix Kk by expression (39),the processor is further configured to update the filter equation of a H∞ filter by expression (38), andthe processor is further configured to advance the time k and repeat the respective steps.
  • 5. The system identification device according to claim 1, wherein the processor of the filter is configured to obtain, at the step of calculation by expression (44), variables eM, k−N+m−1, Ak−N+M−1M, SM, k−N+M−1, KUk−N+M−1M−1(=Kk−n+M−1M=Kk−N+M−1(1: M,:)) by one of following substeps of: 1) holding values of the variables obtained at the respective steps on a memory during N−M steps, and2) re-calculating values of the variables at a time tk−N+M−1 at the respective steps.
  • 6. The system identification device according to claim 1, wherein the filter is applied to obtain a state estimated value x^k|k,a pseudo-echo d^k is estimated as in a following expression, and this is subtracted from an echo to cancel the echo,
  • 7. A method of system identification using a system identification device for a communication system or a sound system, for performing real-time identification of a time invariable or time variable system, the method comprising: providing real-time characteristics of signal or sound as an input signal for a system identification device;providing the system identification device comprising a filter that is robust against a disturbance by determining that a maximum energy gain to a filter error from the disturbance, as an evaluation criterion, is restricted to be smaller than a predetermined upper limit γf2,wherein,the filter satisfies an H∞ evaluation criterion as indicated by following expression (14) with respect to a state space model as indicated by following expressions (11) to (13),when an input signal is expressed by an M(≦N)-th order autoregressive model (AR model), the filter is given by following expressions (38) to (44), andthe filter satisfies a scalar existence condition of following expressions (45) and (46):
  • 8. The method of claim 7, wherein it is assumed that the input signal is expressed by the M(≦N)-th order autoregressive model (AR model), and the method further comprising optimally generating, by using the processor of the filter, a (N+1)×2 auxiliary gain matrix KUkN=QUk−1CUkT from an M×2 gain matrix, wherein QUk is expressed by {hacek over (Q)}k=ρ{hacek over (Q)}k−1+{hacek over (C)}kTW{hacek over (C)}k  (32)
  • 9. The method according to claim 7, the method further comprising by using the processor of the filter, adopting a following expression η{tilde over ( )}k instead of a backward prediction error ηk for numerical stabilization: {tilde over (η)}k=ηk+β(ηk−ρ−NSkμkT)  (47)
  • 10. The method according to claim 7, wherein the method further comprises by using the processor of the filter, reading an initial state of a recursive equation from a storage section or inputting the initial state from an input section and determining it,by using the processor, inputting an input uk from the input section or reading it from the storage section and setting CUk, KUk(1:M,:),by using the processor, recursively determining variables eM, k, AkM, SM, k and Kk(1:M,:) by expression (43),by using the processor, updating the matrix GkN by expression (44) and calculating an auxiliary gain matrix KUkN as in expression (42),by using the processor, partitioning the auxiliary gain matrix KUkN by expression (41),by using the processor, calculating a variable DkM and a backward prediction error ηM, k by expression (40),by using the processor, calculating the gain matrix Kk by expression (39),by using the processor, updating the filter equation of a H∞ filter by expression (38), andby using the processing section, advancing the time k and repeating the respective steps.
  • 11. The method according to claim 7, the method further comprising: by using the processor of the filter, obtaining, at the step of calculation by expression (44), variables eM, k−N+m−1, Ak−N+M−1M, SM, k−N+M−1, KUk−N+M−1M−1(=Kk−n+M−1M=Kk−N+M−1(1: M,:)) by one of following substeps of:1) holding values of the variables obtained at the respective steps on a memory during N−M steps, and2) re-calculating values of the variables at a time tk−N+M−1 at the respective steps.
  • 12. The method according to claim 7, further comprising applying the filter to obtain a state estimated value x^k|k, a pseudo-echo d^k is estimated as in a following expression, and this is subtracted from an echo to cancel the echo,
  • 13. A non-transitory computer readable recording medium, for a communication system or a sound system, which, when executed, causes a computer to perform the following instructions: providing real-time characteristics of signal or sound as an input signal for a filter;providing a filter for real-time identification of a time invariable or time variable system, the filter being robust against a disturbance by determining that a maximum energy gain to a filter error from the disturbance, as an evaluation criterion, is restricted to be smaller than a predetermined upper limit γf2,whereinthe filter satisfies an H∞ evaluation criterion as indicated by following expression (14) with respect to a state space model as indicated by following expressions (11) to (13),when an input signal is expressed by an M(≦N)-th order autoregressive model (AR model), the filter is given by following expressions (38) to (44), andthe filter satisfies a scalar existence condition of following expressions (45) and (46):
  • 14. A system identification program product for a communication system or a sound system, the product comprising a computer usable medium having a computer readable program code embodied therein, said computer readable program code adapted to be executed to implement a method comprising providing real-time characteristics of signal or sound as an input signal for a filter;providing a filter for performing real-time identification of a time invariable or time variable system, the filter being robust against a disturbance by determining that a maximum energy gain to a filter error from the disturbance, as an evaluation criterion, is restricted to be smaller than a upper limit γf2,whereinthe filter satisfies an H∞ evaluation criterion as indicated by following expression (14) with respect to a state space model as indicated by following expressions (11) to (13),when an input signal is expressed by an M(≦N)-th order autoregressive model (AR model), the filter is given by following expressions (38) to (44), andthe filter satisfies a scalar existence condition of following expressions (45) and (46):
Priority Claims (1)
Number Date Country Kind
2006-111607 Apr 2006 JP national
PCT Information
Filing Document Filing Date Country Kind 371c Date
PCT/JP2007/058033 4/12/2007 WO 00 9/28/2009
Publishing Document Publishing Date Country Kind
WO2007/119766 10/25/2007 WO A
US Referenced Citations (5)
Number Name Date Kind
5394322 Hansen Feb 1995 A
5987444 Lo Nov 1999 A
5995620 Wigren Nov 1999 A
7039567 Nishiyama May 2006 B2
20070185693 Nishiyama Aug 2007 A1
Foreign Referenced Citations (4)
Number Date Country
61-200713 Sep 1986 JP
07-110693 Apr 1995 JP
07-185625 Jul 1995 JP
2002-135171 May 2002 JP
Non-Patent Literature Citations (6)
Entry
“Fuzzy H∞ Filter Design for a Class of Nonlinear Discrete-Time Systems With Multiple Time Delays”, Zhang, et al., IEEE Transactions on Fuzzy Systems, vol. 15, No. 3, Jun. 2007.
“An H∞ Optimization and Its Fast Algorithm for Time-Variant System Identification”, K. Nishiyama, IEEE Transactions on Signal Processing, vol. 52, No. 5, May 2004.
“A Nonlinear Filter for Estimating a Sinusoidal Signal and Its Parameters in White Noise: On the Case of a Single Sinusoid”, K. Nishiyama, IEEE Transactions on Signal Processing, vol. 45, No. 4, Apr. 1997.
“A State-Space Approach to Adaptive RLS Filtering”, Ali H. Sayed and Thomas Kailath, IEEE Signal Processing Magazine, Jul. 1994, pp. 18-60.
“Robust Estimation of a Single Complex Sinusoid in White Noise—H∞ Filtering Approach”, Kiyoshi Nishiyama, IEEE Transactions on Signal Processing, vol. 47, No. 10, Oct. 1999.
“H∞-Learning of Layered Neural Networks”, Nishiyama et al., IEEE Transactions on Neural Networks, vol. 12, No. 6, Nov. 2001.
Related Publications (1)
Number Date Country
20100030529 A1 Feb 2010 US